Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology
NASA Astrophysics Data System (ADS)
Kumar, Amit; Soota, Tarun; Kumar, Jitendra
2018-03-01
Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relational analysis method has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current, and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal combination of machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters in optimising the Grey relational grade.
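The Grey relational grade computation described above can be sketched as follows. The trial data and the distinguishing coefficient value (0.5) are illustrative assumptions, not the paper's measurements:

```python
import numpy as np

# Hypothetical response data for 4 WEDM trials:
# columns = [MRR, surface roughness, kerf width] (illustrative values only).
responses = np.array([
    [2.1, 3.2, 0.30],
    [2.8, 2.9, 0.28],
    [1.9, 3.5, 0.33],
    [3.0, 2.7, 0.27],
])
# MRR is larger-the-better; roughness and kerf width are smaller-the-better.
larger_better = [True, False, False]

# Step 1: normalise each response to [0, 1] according to its direction.
norm = np.empty_like(responses)
for j, lb in enumerate(larger_better):
    col = responses[:, j]
    if lb:
        norm[:, j] = (col - col.min()) / (col.max() - col.min())
    else:
        norm[:, j] = (col.max() - col) / (col.max() - col.min())

# Step 2: grey relational coefficient with distinguishing coefficient zeta = 0.5.
zeta = 0.5
delta = 1.0 - norm                      # deviation from the ideal sequence
coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient across responses.
grade = coef.mean(axis=1)
best_trial = int(np.argmax(grade))      # trial closest to the multi-response optimum
```

The trial with the highest grade is the one the single-objective grade ranks best across all three responses simultaneously.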
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. Three VTS scenarios are considered: a motion reaching a normal operating velocity, and motions both reaching and not reaching the transitional phase. These variants were applied to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance measures of the proposed methods when compared to other optimisation algorithms for an actual deep-cut design.
A support vector machine approach for classification of welding defects from ultrasonic signals
NASA Astrophysics Data System (ADS)
Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming
2014-07-01
Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
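The energy-per-subband feature construction can be illustrated with a hand-rolled Haar wavelet packet. The paper's wavelet basis, decomposition depth, and echo signal are not given in the abstract, so all three are assumptions here:

```python
import numpy as np

def haar_packet_energies(x, levels=3):
    """Energy of each terminal node of a Haar wavelet packet tree
    (a minimal sketch; the paper's exact wavelet and depth are assumed)."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for s in nodes:
            a = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
            d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # detail (high-pass)
            nxt.extend([a, d])
        nodes = nxt
    return np.array([np.sum(s**2) for s in nodes])

# Synthetic "defect echo": a decaying tone plus noise (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(1024) / 1024.0
echo = np.sin(2*np.pi*50*t) * np.exp(-5*t) + 0.1*rng.standard_normal(t.size)
features = haar_packet_energies(echo, levels=3)   # 8 subband energies
```

Because the Haar transform is orthonormal, the subband energies sum exactly to the signal energy, which is a useful sanity check on the feature extractor.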
NASA Astrophysics Data System (ADS)
Asyirah, B. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
Plastic injection moulding is widely used to manufacture a variety of parts. The injection moulding process parameters play an important role in the product's quality and productivity. Many approaches to minimising warpage and shrinkage have been addressed, such as artificial neural networks, genetic algorithms, glowworm swarm optimisation and hybrid approaches. In this paper, a systematic methodology for determining warpage and shrinkage in the injection moulding process, especially for thin-shell plastic parts, is presented. To identify the effects of the machining parameters on the warpage and shrinkage values, response surface methodology is applied. In this study, an electronic night lamp part is chosen as the model. Firstly, experimental design was used to determine the injection parameters on warpage for different thickness values. The software used to analyse the warpage is Autodesk Moldflow Insight (AMI) 2012.
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John B. O.
2008-10-29
We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024-1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581-590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6 degrees C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, epsilon of 0.21) and an RMSE of 45.1 degrees C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3 degrees C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5 degrees C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.
Modelling soil water retention using support vector machines with genetic algorithm optimisation.
Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L
2014-01-01
This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow estimation of the soil water content at specified soil water potentials (-0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa) from the following soil characteristics: granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development, and a new methodology for elaborating retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development and the results were compared with the formerly used C-SVM method. Genetic algorithms were used as the optimisation framework for the model parameter search. A new form of the aim function used for the parameter search is proposed, which allowed development of models with better prediction capabilities; it avoids the overestimation typically encountered when root mean squared error is used as an aim function. The elaborated models showed good agreement with measured soil water retention data, with coefficients of determination in the range 0.67-0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.
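The combination of ν-SVM regression with a genetic-algorithm hyper-parameter search can be sketched as below. The dataset is synthetic, the GA is reduced to tournament selection plus Gaussian mutation, and the fitness is plain cross-validated R² rather than the paper's custom aim function, so all of these are simplifying assumptions:

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for a retention dataset: water content as a smooth
# function of three soil characteristics (illustrative only).
X = rng.uniform(0, 1, size=(120, 3))
y = 4.0 - 2.0*X[:, 0] + np.sin(3*X[:, 1]) + 0.5*X[:, 2]

def fitness(genome):
    """Mean CV R^2 of a nu-SVR whose hyper-parameters are encoded in the genome."""
    c, gamma, nu = np.exp(genome[0]), np.exp(genome[1]), genome[2]
    model = NuSVR(C=c, gamma=gamma, nu=min(max(nu, 0.05), 0.95))
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Minimal generational GA: binary tournament selection + Gaussian mutation.
pop = np.column_stack([rng.uniform(-2, 4, 12),      # log C
                       rng.uniform(-4, 2, 12),      # log gamma
                       rng.uniform(0.1, 0.9, 12)])  # nu
for _ in range(5):
    scores = np.array([fitness(g) for g in pop])
    winners = [max(rng.choice(12, 2, replace=False), key=lambda i: scores[i])
               for _ in range(12)]
    pop = pop[winners] + rng.normal(0, 0.2, (12, 3))  # select, then mutate
scores = np.array([fitness(g) for g in pop])
best = pop[np.argmax(scores)]
```

Encoding C and gamma on a log scale keeps mutation steps meaningful across orders of magnitude, which is a common choice for SVM hyper-parameter searches.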
A support vector machine for predicting defibrillation outcomes from waveform metrics.
Howe, Andrew; Escalona, Omar J; Di Maio, Rebecca; Massot, Bertrand; Cromie, Nick A; Darragh, Karen M; Adgey, Jennifer; McEneaney, David J
2014-03-01
Algorithms to predict shock success based on VF waveform metrics could significantly enhance resuscitation by optimising the timing of defibrillation. To investigate robust methods of predicting defibrillation success in VF cardiac arrest patients, a support vector machine (SVM) optimisation approach was used. Frequency-domain (AMSA, dominant frequency and median frequency) and time-domain (slope and RMS amplitude) VF waveform metrics were calculated in a 4.1 s window prior to defibrillation. Conventional prediction test validity of each waveform parameter was assessed, using AUC > 0.6 as the criterion for inclusion as a corroborative attribute processed by the SVM classification model. The latter used a Gaussian radial-basis-function (RBF) kernel, and the error penalty factor C was fixed to 1. A two-fold cross-validation resampling technique was employed. A total of 41 patients had 115 defibrillation instances. The AMSA, slope and RMS waveform metrics passed test validation with AUC > 0.6 for predicting termination of VF and return to organised rhythm. The predictive accuracy of the optimised SVM design for termination of VF was 81.9% (± 1.24 SD); positive and negative predictivity were 84.3% (± 1.98 SD) and 77.4% (± 1.24 SD), respectively; sensitivity and specificity were 87.6% (± 2.69 SD) and 71.6% (± 9.38 SD), respectively. AMSA, slope and RMS were the best VF waveform frequency-time predictors of termination of VF according to the test validity assessment. This a priori knowledge can be used for a simplified SVM optimised design that combines the predictive attributes of these VF waveform metrics for improved prediction accuracy and generalisation performance without requiring the definition of any threshold value on the waveform metrics. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
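One of the frequency-domain metrics named above, AMSA (amplitude spectrum area), is conventionally the sum of spectral amplitude weighted by frequency over the VF band. A sketch follows; the 4-48 Hz band, the sampling rate, and the synthetic waveforms are assumptions, since the abstract does not restate them:

```python
import numpy as np

def amsa(vf, fs, fmin=4.0, fmax=48.0):
    """Amplitude spectrum area: sum of |A(f)| * f over the VF band.
    Band edges are the commonly used 4-48 Hz, assumed here."""
    spec = np.abs(np.fft.rfft(vf * np.hanning(vf.size)))
    freqs = np.fft.rfftfreq(vf.size, d=1.0/fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return float(np.sum(spec[band] * freqs[band]))

fs = 250.0                               # assumed ECG sampling rate
t = np.arange(int(4.1 * fs)) / fs        # 4.1 s analysis window
coarse = np.sin(2*np.pi*5*t)             # coarse VF proxy: high amplitude
fine = 0.1*np.sin(2*np.pi*9*t)           # fine VF proxy: low amplitude
```

Coarse, high-amplitude VF yields a larger AMSA than fine VF, which is the behaviour that makes the metric predictive of shock success.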
Prediction of multi performance characteristics of wire EDM process using grey ANFIS
NASA Astrophysics Data System (ADS)
Kumanan, Somasundaram; Nair, Anish
2017-09-01
Super alloys are used to fabricate components in ultra-supercritical power plants. These hard-to-machine materials are processed using non-traditional machining methods such as wire-cut electrical discharge machining (WEDM) and merit attention. This paper details multi-performance optimisation of the wire EDM process using Grey ANFIS. Experiments are designed to establish the performance characteristics of wire EDM, namely surface roughness, material removal rate, wire wear rate and geometric tolerances. The control parameters are pulse-on time, pulse-off time, current, voltage, flushing pressure, wire tension, table feed and wire speed. Grey relational analysis is employed to optimise the multiple objectives, and analysis of variance of the grey grades is used to identify the critical parameters. A regression model is developed and used to generate datasets for training the proposed adaptive neuro-fuzzy inference system. The developed prediction model is tested for its prediction ability.
Energy landscapes for a machine learning application to series data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, Andrew J.; Stevenson, Jacob D.; Das, Ritankar
2016-03-28
Methods developed to explore and characterise potential energy landscapes are applied to the corresponding landscapes obtained from optimisation of a cost function in machine learning. We consider neural network predictions for the outcome of local geometry optimisation in a triatomic cluster, where four distinct local minima exist. The accuracy of the predictions is compared for fits using data from single and multiple points in the series of atomic configurations resulting from local geometry optimisation and for alternative neural networks. The machine learning solution landscapes are visualised using disconnectivity graphs, and signatures in the effective heat capacity are analysed in terms of distributions of local minima and their properties.
Bisele, Maria; Bencsik, Martin; Lewis, Martin G C; Barnett, Cleveland T
2017-01-01
Assessment methods in human locomotion often involve the description of normalised graphical profiles and/or the extraction of discrete variables. Whilst useful, these approaches may not represent the full complexity of gait data. Multivariate statistical methods, such as Principal Component Analysis (PCA) and Discriminant Function Analysis (DFA), have been adopted since they have the potential to overcome these data handling issues. The aim of the current study was to develop and optimise a specific machine learning algorithm for processing human locomotion data. Twenty participants ran at a self-selected speed across a 15m runway in barefoot and shod conditions. Ground reaction forces (BW) and kinematics were measured at 1000 Hz and 100 Hz, respectively from which joint angles (°), joint moments (N.m.kg-1) and joint powers (W.kg-1) for the hip, knee and ankle joints were calculated in all three anatomical planes. Using PCA and DFA, power spectra of the kinematic and kinetic variables were used as a training database for the development of a machine learning algorithm. All possible combinations of 10 out of 20 participants were explored to find the iteration of individuals that would optimise the machine learning algorithm. The results showed that the algorithm was able to successfully predict whether a participant ran shod or barefoot in 93.5% of cases. To the authors' knowledge, this is the first study to optimise the development of a machine learning algorithm.
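The PCA-then-DFA pipeline on power-spectrum features can be sketched with scikit-learn, where `LinearDiscriminantAnalysis` plays the role of the DFA step. The spectra are synthetic stand-ins for the gait measurements, and the class-dependent band shift is an invented separability cue:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

def spectra(n, shift):
    """Synthetic power spectra with a class-dependent band-power shift."""
    sig = rng.standard_normal((n, 256))
    sig[:, 10:20] += shift        # invented class cue in one frequency band
    return sig ** 2

# 20 "barefoot" (label 0) and 20 "shod" (label 1) trials, as in the study.
X = np.vstack([spectra(20, 0.0), spectra(20, 1.5)])
y = np.array([0] * 20 + [1] * 20)

# PCA reduces the 256-bin spectra before the discriminant step.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X, y)
accuracy = clf.score(X, y)        # training accuracy on this toy problem
```

In practice the study's leave-participants-out splits, rather than training accuracy, would be the honest performance estimate; the toy fit only illustrates the pipeline's shape.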
Support vector machines and generalisation in HEP
NASA Astrophysics Data System (ADS)
Bevan, Adrian; Gamboa Goñi, Rodrigo; Hays, Jon; Stevenson, Tom
2017-10-01
We review the concept of Support Vector Machines (SVMs) and discuss examples of their use in a number of scenarios. Several SVM implementations have been used in HEP and we exemplify this algorithm using the Toolkit for Multivariate Analysis (TMVA) implementation. We discuss examples relevant to HEP including background suppression for H → τ⁺τ⁻ at the LHC with several different kernel functions. Performance benchmarking leads to the issue of generalisation of hyper-parameter selection. The avoidance of fine tuning (over training or over fitting) in MVA hyper-parameter optimisation, i.e. the ability to ensure generalised performance of an MVA that is independent of the training, validation and test samples, is of utmost importance. We discuss this issue and compare and contrast performance of hold-out and k-fold cross-validation. We have extended the SVM functionality and introduced tools to facilitate cross validation in TMVA and present results based on these improvements.
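The hold-out versus k-fold contrast above can be sketched with scikit-learn rather than TMVA: hyper-parameters are chosen by 5-fold cross-validation on a development sample, and generalisation is then estimated on a held-out test sample. The dataset is synthetic and the hyper-parameter grid is an illustrative assumption:

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.datasets import make_classification

# Synthetic two-class "signal vs background" sample (stand-in for HEP data).
X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.25,
                                                random_state=0)

# k-fold CV inside the search guards against tuning to a single split.
search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
                      cv=5)
search.fit(X_dev, y_dev)
test_score = search.score(X_test, y_test)   # generalisation estimate
```

Because the test sample never enters the CV loop, `test_score` is an unbiased estimate of the tuned classifier's performance, which is exactly the independence from training/validation samples the abstract emphasises.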
Fast femtosecond laser ablation for efficient cutting of sintered alumina substrates
NASA Astrophysics Data System (ADS)
Oosterbeek, Reece N.; Ward, Thomas; Ashforth, Simon; Bodley, Owen; Rodda, Andrew E.; Simpson, M. Cather
2016-09-01
Fast, accurate cutting of technical ceramics is a significant technological challenge because of these materials' typical high mechanical strength and thermal resistance. Femtosecond pulsed lasers offer significant promise for meeting this challenge. Femtosecond pulses can machine nearly any material with small kerf and little to no collateral damage to the surrounding material. The main drawback to femtosecond laser machining of ceramics is slow processing speed. In this work we report on the improvement of femtosecond laser cutting of sintered alumina substrates through optimisation of laser processing parameters. The femtosecond laser ablation thresholds for sintered alumina were measured using the diagonal scan method. Incubation effects were found to fit a defect accumulation model, with Fth,1=6.0 J/cm2 (±0.3) and Fth,∞=2.5 J/cm2 (±0.2). The focal length and depth, laser power, number of passes, and material translation speed were optimised for ablation speed and high quality. Optimal conditions of 500 mW power, 100 mm focal length, 2000 μm/s material translation speed, with 14 passes, produced complete cutting of the alumina substrate at an overall processing speed of 143 μm/s - more than 4 times faster than the maximum reported overall processing speed previously achieved by Wang et al. [1]. This process significantly increases processing speeds of alumina substrates, thereby reducing costs, making femtosecond laser machining a more viable option for industrial users.
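The incubation behaviour above can be illustrated by fitting a defect-accumulation curve to threshold-fluence data. The exponential functional form, the rate constant, and the synthetic data are assumptions; only the endpoint values (6.0 and 2.5 J/cm²) come from the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def fth(N, f1, finf, k):
    """Defect-accumulation incubation model: threshold fluence vs pulse number.
    This exponential form is one common choice, assumed here."""
    return finf + (f1 - finf) * np.exp(-k * (N - 1))

# Synthetic thresholds built from the reported single-pulse (6.0 J/cm^2)
# and many-pulse (2.5 J/cm^2) values, with small measurement noise.
N = np.arange(1, 101)
rng = np.random.default_rng(3)
data = fth(N, 6.0, 2.5, 0.15) + 0.02 * rng.standard_normal(N.size)

popt, _ = curve_fit(fth, N, data, p0=(5.0, 2.0, 0.1))
f1_fit, finf_fit, k_fit = popt
```

Recovering the endpoints from noisy data is the practical use of such a fit: the asymptote sets the fluence budget for multi-pass cutting.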
Intelligent inversion method for pre-stack seismic big data based on MapReduce
NASA Astrophysics Data System (ADS)
Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua
2018-01-01
Seismic exploration is an oil exploration method that uses seismic information: by inverting the seismic data, useful reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by a large volume and abundant information, and their inversion yields rich information on the reservoir parameters. Owing to this volume, existing single-machine environments cannot meet the computational needs, so a fast and efficient method for the pre-stack inversion problem is urgently needed. Optimisation of the elastic parameters using a genetic algorithm easily falls into a local optimum, which weakens the inversion, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula and improves the genetic operations; the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to address the seismic big data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
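The Gardner-based population initialisation mentioned above can be sketched as follows: density is tied to P-wave velocity through Gardner's relation instead of being drawn independently. The coefficients used are the standard Gardner values, not the paper's calibrated ones, and the velocity ranges and Vp/Vs bounds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def gardner_density(vp, a=0.31, b=0.25):
    """Gardner's relation rho = a * Vp**b (Vp in m/s, rho in g/cm^3),
    with the standard textbook coefficients."""
    return a * vp ** b

# Initialise a GA population of layered elastic models (Vp, Vs, rho) so that
# density starts physically consistent with Vp -- a sketch of the
# initialisation strategy described above.
pop_size, n_layers = 30, 10
vp = rng.uniform(2000.0, 4000.0, size=(pop_size, n_layers))
vs = vp / rng.uniform(1.6, 2.0, size=(pop_size, n_layers))      # assumed Vp/Vs
rho = gardner_density(vp) * rng.uniform(0.95, 1.05, size=vp.shape)  # jitter
population = np.stack([vp, vs, rho], axis=-1)
```

Seeding density near the Gardner trend narrows the search space, which is one way such an initialisation can improve the notoriously weak density inversion.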
Evolving optimised decision rules for intrusion detection using particle swarm paradigm
NASA Astrophysics Data System (ADS)
Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.
2012-12-01
The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective is to show that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. A rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree models, is introduced to detect anomalous network patterns. In particular, the proposed swarm optimisation-based approach selects the instances that compose the training set, and the optimised decision tree operating over this set produces classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data mining (KDD) dataset, which contains traffic patterns recorded during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.
2017-09-01
This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was used to analyse the warpage. A design of experiments (DOE) for Response Surface Methodology (RSM) was constructed, and using the regression equation from RSM, Particle Swarm Optimisation (PSO) was applied. The optimisation yields processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters; the selection was based on the most significant factors affecting warpage reported by previous researchers. The results show that warpage was improved by 28.16% with RSM and 28.17% with PSO, a difference of only 0.01%. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and the optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
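Running PSO over an RSM regression surface, as described above, can be sketched as follows. The quadratic surrogate, its coefficients, and the two normalised parameters are invented for illustration; only the RSM-surface-then-PSO workflow comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(5)

def warpage(x):
    """Hypothetical quadratic RSM surrogate for warpage as a function of two
    normalised parameters (e.g. packing pressure p and melt temperature t).
    Coefficients are illustrative, not the study's fitted model."""
    p, t = x[..., 0], x[..., 1]
    return 0.5 + 0.3*(p - 0.7)**2 + 0.2*(t - 0.4)**2 + 0.05*p*t

# Minimal global-best PSO over the unit square.
n, iters = 20, 60
pos = rng.uniform(0, 1, (n, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), warpage(pos)
gbest = pbest[np.argmin(pbest_val)]
for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n, 2))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    val = warpage(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]
best_warpage = float(warpage(gbest))
```

On a smooth quadratic surface PSO and the analytical RSM optimum coincide closely, which matches the study's finding that PSO improved on RSM by only 0.01%.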
Sugeno-Fuzzy Expert System Modeling for Quality Prediction of Non-Contact Machining Process
NASA Astrophysics Data System (ADS)
Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.
2018-03-01
Modelling can be categorised into four main domains: prediction, optimisation, estimation and calibration. In this paper, the Takagi-Sugeno-Kang (TSK) fuzzy logic method is examined as a prediction modelling method to investigate the taper quality of laser lathing, which seeks to replace traditional lathe machines with 3D laser lathing in order to achieve the desired cylindrical shape of stock materials. Three design parameters were selected: feed rate, cutting speed and depth of cut. A total of twenty-four experiments were conducted in eight sequential runs, replicated three times. The TSK fuzzy predictive model achieved an accuracy of 99%, which suggests that it is a suitable and practical method for the non-linear laser lathing process.
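A zero-order Sugeno (TSK) inference step can be sketched in a few lines. The membership functions, rule consequents, and single input are invented for illustration; the paper's actual rule base over feed rate, cutting speed and depth of cut is not restated in the abstract:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function centred at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_predict(feed):
    """Zero-order Sugeno inference with two rules on one normalised input
    (illustrative memberships and consequents, not the paper's model)."""
    w_low = gauss(feed, 0.2, 0.15)    # rule 1: feed is LOW  -> taper = 0.1
    w_high = gauss(feed, 0.8, 0.15)   # rule 2: feed is HIGH -> taper = 0.6
    # Weighted average of rule consequents (defuzzification).
    return (w_low * 0.1 + w_high * 0.6) / (w_low + w_high)

taper_low = tsk_predict(0.2)    # near rule 1's consequent
taper_high = tsk_predict(0.8)   # near rule 2's consequent
```

The first-order TSK variant replaces the constant consequents with linear functions of the inputs; the weighted-average defuzzification is identical.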
Devos, Olivier; Downey, Gerard; Duponchel, Ludovic
2014-04-01
Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved by pre-processing that removes unwanted variance from the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for discriminating the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) were tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gave models with higher accuracy than those obtained with PLS-DA on pre-processed data. For the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) over PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Othman, M. H.; Rosli, M. S.; Hasan, S.; Amin, A. M.; Hashim, M. Y.; Marwah, O. M. F.; Amin, S. Y. M.
2018-03-01
Fundamental knowledge of flow behaviour is essential in producing plastic parts by the injection moulding process. Moreover, the adoption of advanced polymer nanocomposites such as polypropylene-nanoclay with natural fibres, for instance Gigantochloa scortechinii, may boost the mechanical properties of the parts. This project therefore aimed to optimise the processing conditions of injection-moulded polypropylene-nanoclay-Gigantochloa scortechinii fibre composites based on flow behaviour, namely the melt flow index. First, Gigantochloa scortechinii fibres were preheated at 120 °C and then mixed with polypropylene, maleic anhydride modified polypropylene oligomers (PPgMA) and nanoclay using a Brabender Plastograph machine. Next, the samples were pelletised using a granulator for use in the injection moulding process. The design of experiments used for the injection moulding process was a Taguchi L9 (3^4) orthogonal array, with the melt flow index (MFI) selected as the response. The MFI increased from 17.78 g/10 min to 22.07 g/10 min as the fibre content rose from 0% to 3%, and decreased to 20.05 g/10 min at 6%; 3% fibre content therefore gave the highest MFI. Based on the signal-to-noise ratio analysis, the most influential parameter affecting the MFI was the melt temperature. The optimum parameters at 3% fibre content were 170 °C melt temperature, 35% packing pressure, 30% screw speed and 3 s filling time.
Machine learning prediction for classification of outcomes in local minimisation
NASA Astrophysics Data System (ADS)
Das, Ritankar; Wales, David J.
2017-01-01
Machine learning schemes are employed to predict which local minimum will result from local energy minimisation of random starting configurations for a triatomic cluster. The input data consists of structural information at one or more of the configurations in optimisation sequences that converge to one of four distinct local minima. The ability to make reliable predictions, in terms of the energy or other properties of interest, could save significant computational resources in sampling procedures that involve systematic geometry optimisation. Results are compared for two energy minimisation schemes, and for neural network and quadratic functions of the inputs.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and datasets. Typically, data must be processed to extract useful features to perform LID. Based on the literature, feature extraction for LID is a mature process, with standard features already developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) features, the Gaussian Mixture Model (GMM) and, ultimately, the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) owing to the random selection of the input-to-hidden layer weights. In this study, the ELM is selected as the learning model for LID based on standard feature extraction. One of the optimisation approaches for the ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated for LID with datasets created from eight different languages and show the clear superiority of ESA-ELM LID over SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25% compared to 95.00% for SA-ELM LID.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics, which motivates a design that is robust against such parameter uncertainties. A specific parametrisation is established that combines deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, in which the lateral steady-state behaviour in particular is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
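Latin hypercube sampling, as used above, stratifies each parameter range so that every stratum is sampled exactly once per dimension. A minimal (plain, not optimal) LHS sketch on the unit square; the sample count and unit-cube domain are assumptions:

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n points in [0,1)^dims with exactly one point per stratum in each dimension."""
    rng = random.Random(seed)
    sample = []
    for _ in range(dims):
        # one point inside each of the n equal-width strata, then shuffled
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        sample.append(col)
    return [tuple(sample[d][i] for d in range(dims)) for i in range(n)]

pts = latin_hypercube(10, 2)
```

"Optimal" LHS additionally maximises a space-filling criterion (e.g. minimum inter-point distance) over many such designs; this sketch shows only the stratification itself.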
An improved PSO-SVM model for online recognition of defects in eddy current testing
NASA Astrophysics Data System (ADS)
Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin
2013-12-01
Accurate and rapid recognition of defects is essential for the structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that comprises three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that can contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is used for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence of basic PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of a black-hole model and a simulated annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm achieves higher recognition accuracy and efficiency than other well-known classifiers, and its superiority is more obvious with smaller training sets, which contributes to online application.
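One ingredient of the IPSO above, nonlinear processing of the inertia weight, can be illustrated with a bare-bones PSO; the decay law, acceleration coefficients, velocity clamp and sphere test function are all assumptions, and the black-hole and annealing components are omitted:

```python
import random

def pso(f, dim, iters=100, n=20, seed=0):
    """Minimal PSO with a nonlinearly decaying inertia weight."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [f(p) for p in pos]
    g = pbest[pcost.index(min(pcost))][:]
    for t in range(iters):
        w = 0.4 + 0.5 * (1 - t / iters) ** 2   # nonlinear inertia decay, 0.9 -> 0.4
        for i in range(n):
            for d in range(dim):
                v = (w * vel[i][d]
                     + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                     + 1.5 * rng.random() * (g[d] - pos[i][d]))
                vel[i][d] = max(-2.0, min(2.0, v))  # velocity clamp
                pos[i][d] += vel[i][d]
            c = f(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < f(g):
                    g = pos[i][:]
    return g, f(g)

best, cost = pso(lambda x: sum(v * v for v in x), dim=3)
```

The quadratic decay keeps exploration high early on and tightens exploitation late, which is the usual motivation for nonlinear inertia schedules.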
Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method
NASA Astrophysics Data System (ADS)
Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak
2018-03-01
The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resultant surface roughness and kerf width characteristics. The research conducted is based on Design of Experiment (DOE) analysis. Response Surface Methodology (RSM), one of the most practical and effective techniques for developing a process model, is used in this work. Although RSM has been used for the optimisation of the laser process, this research investigates laser cutting of materials such as composite wood (veneer) to find the best laser cutting conditions using RSM. The input parameters evaluated are focal length, power supply and cutting speed, with the output responses being kerf width, surface roughness and temperature. To efficiently optimise and customise the kerf width and surface roughness characteristics, a laser cutting process model using a Taguchi L9 orthogonal methodology is proposed.
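A Taguchi L9 orthogonal array accommodates four three-level factors in nine runs, and main effects are read off as the mean response at each factor level. A sketch with hypothetical roughness measurements (the response values are invented for illustration):

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at levels 0, 1, 2
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def factor_effects(array, responses):
    """Mean response at each level of each factor (Taguchi main effects)."""
    effects = []
    for f in range(len(array[0])):
        means = []
        for level in range(3):
            vals = [r for row, r in zip(array, responses) if row[f] == level]
            means.append(sum(vals) / len(vals))
        effects.append(means)
    return effects

# hypothetical surface-roughness values for the 9 runs (illustrative only)
responses = [2.1, 2.4, 2.9, 1.8, 2.6, 2.2, 2.0, 1.9, 2.5]
effects = factor_effects(L9, responses)
best_levels = [m.index(min(m)) for m in effects]  # level of each factor minimising roughness
```

Because the array is orthogonal, each factor level appears exactly three times and each pair of levels of any two factors appears exactly once, so the per-level means are balanced comparisons.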
New machine-learning algorithms for prediction of Parkinson's disease
NASA Astrophysics Data System (ADS)
Mandal, Indrajit; Sairam, N.
2014-03-01
This article presents enhanced prediction accuracy for the diagnosis of Parkinson's disease (PD), to prevent delay and misdiagnosis of patients, using the proposed robust inference system. New machine-learning methods are proposed, and performance comparisons are based on specificity, sensitivity, accuracy and other measurable parameters. The robust methods for treating PD include sparse multinomial logistic regression, a rotation forest ensemble with support vector machines and principal components analysis, artificial neural networks, and boosting methods. A new ensemble method, comprising a Bayesian network optimised by a Tabu search algorithm as the classifier and Haar wavelets as the projection filter, is used for relevant feature selection and ranking. The highest accuracy, obtained by linear logistic regression and sparse multinomial logistic regression, is 100%, with sensitivity and specificity of 0.983 and 0.996, respectively. All experiments are conducted at the 95% and 99% confidence levels and the results are established with corrected t-tests. This work shows a high degree of advancement in the software reliability and quality of the computer-aided diagnosis system, and experimentally shows the best results with supportive statistical inference.
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm, and the alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM in a master-slave configuration where the master node and slave nodes are connected, so that results can be broadcast. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, an over-relaxation technique is used to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM has linear convergence, the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), variant ABC and particle swarm optimisation (PSO), to extract the parameters of metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied for the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that ABC algorithm optimises the parameter values based on intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method that is based on bird flocking activities. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithm for the parameter extraction of the MOSFET model; also the implementation of the ABC algorithm is shown to be simpler than that of the PSO algorithm.
Robustness analysis of bogie suspension components Pareto optimised values
NASA Astrophysics Data System (ADS)
Mousavi Bideleh, Seyed Milad
2017-08-01
Bogie suspension system of high speed trains can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find the Pareto optimised values of the suspension components and improve cost efficiency in railway operations from different perspectives. Uncertainties in the design parameters of suspension system can negatively influence the dynamics behaviour of railway vehicles. In this regard, robustness analysis of a bogie dynamics response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto optimised values of bogie suspension components is chosen for the analysis. Longitudinal and lateral primary stiffnesses, longitudinal and vertical secondary stiffnesses, as well as yaw damping are considered as five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto optimised values according to a lognormal distribution with different coefficient of variations (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve the computational efficiency. The results showed that the dynamics response of the vehicle with wear/comfort Pareto optimised values of bogie suspension is robust against uncertainties in the design parameters and the probability of failure is small for parameter uncertainties with COV up to 0.1.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
Optimisation of process parameters on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.
2017-09-01
This study is carried out to focus on optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value which is the output in this study. There are some significant parameters that have been used which are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of Polypropylene (PP) has been selected as the study part. Optimisation of process parameters is applied in Design Expert software with the aim to minimise the obtained warpage value. Response Surface Methodology (RSM) has been applied in this study together with Analysis of Variance (ANOVA) in order to investigate the interactions between parameters that are significant to the warpage value. Thus, the optimised warpage value can be obtained using the model designed using RSM due to its minimum error value. This study comes out with the warpage value improved by using RSM.
Multiobjective optimisation of bogie suspension to boost speed on curves
NASA Astrophysics Data System (ADS)
Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor
2016-01-01
To improve safety and maximum admissible speed on different operational scenarios, multiobjective optimisation of bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in multibody dynamics software SIMPACK. Track shift force, running stability, and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on the track plane accelerations up to 1.5 m/s2. To attenuate the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is accomplished using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The bogie conventional secondary and primary suspension components are chosen as the design parameters in the first two steps, respectively. In the last step semi-active suspension is in focus. The input electrical current to magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and the respective effects on bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of design parameters give the possibility to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.
Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry
2016-01-01
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
Optimisation study of a vehicle bumper subsystem with fuzzy parameters
NASA Astrophysics Data System (ADS)
Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.
2012-10-01
This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, which is a key scenario for vehicle component design. The automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and regarding the environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, this way facilitating the automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system level of failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).
NASA Astrophysics Data System (ADS)
Fouladi, Ehsan; Mojallali, Hamed
2018-01-01
In this paper, an adaptive backstepping controller has been tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller.
NASA Astrophysics Data System (ADS)
Devillez, Arnaud; Dudzinski, Daniel
2007-01-01
Today the knowledge of a process is very important for engineers to find optimal combination of control parameters warranting productivity, quality and functioning without defects and failures. In our laboratory, we carry out research in the field of high speed machining with modelling, simulation and experimental approaches. The aim of our investigation is to develop a software allowing the cutting conditions optimisation to limit the number of predictive tests, and the process monitoring to prevent any trouble during machining operations. This software is based on models and experimental data sets which constitute the knowledge of the process. In this paper, we deal with the problem of vibrations occurring during a machining operation. These vibrations may cause some failures and defects to the process, like workpiece surface alteration and rapid tool wear. To measure on line the tool micro-movements, we equipped a lathe with a specific instrumentation using eddy current sensors. Obtained signals were correlated with surface finish and a signal processing algorithm was used to determine if a test is stable or unstable. Then, a fuzzy classification method was proposed to classify the tests in a space defined by the width of cut and the cutting speed. Finally, it was shown that the fuzzy classification takes into account of the measurements incertitude to compute the stability limit or stability lobes of the process.
An illustration of new methods in machine condition monitoring, Part I: stochastic resonance
NASA Astrophysics Data System (ADS)
Worden, K.; Antoniadou, I.; Marchesiello, S.; Mba, C.; Garibaldi, L.
2017-05-01
There have been many recent developments in the application of data-based methods to machine condition monitoring. A powerful methodology based on machine learning has emerged, where diagnostics are based on a two-step procedure: extraction of damage-sensitive features, followed by unsupervised learning (novelty detection) or supervised learning (classification). The objective of the current pair of papers is simply to illustrate one state-of-the-art procedure for each step, using synthetic data representative of reality in terms of size and complexity. The first paper in the pair will deal with feature extraction. Although some papers have appeared in the recent past considering stochastic resonance as a means of amplifying damage information in signals, they have largely relied on ad hoc specifications of the resonator used. In contrast, the current paper will adopt a principled optimisation-based approach to the resonator design. The paper will also show that a discrete dynamical system can provide all the benefits of a continuous system, but also provide a considerable speed-up in terms of simulation time in order to facilitate the optimisation approach.
2018-01-01
Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site. PMID:29370230
Illias, Hazlee Azil; Zhao Liang, Wee
2018-01-01
Early detection of power transformer fault is important because it can reduce the maintenance cost of the transformer and it can ensure continuous electricity supply in power systems. Dissolved Gas Analysis (DGA) technique is commonly used to identify oil-filled power transformer fault type but utilisation of artificial intelligence method with optimisation methods has shown convincing results. In this work, a hybrid support vector machine (SVM) with modified evolutionary particle swarm optimisation (EPSO) algorithm was proposed to determine the transformer fault type. The superiority of the modified PSO technique with SVM was evaluated by comparing the results with the actual fault diagnosis, unoptimised SVM and previous reported works. Data reduction was also applied using stepwise regression prior to the training process of SVM to reduce the training time. It was found that the proposed hybrid SVM-Modified EPSO (MEPSO)-Time Varying Acceleration Coefficient (TVAC) technique results in the highest correct identification percentage of faults in a power transformer compared to other PSO algorithms. Thus, the proposed technique can be one of the potential solutions to identify the transformer fault type based on DGA data on site.
NASA Astrophysics Data System (ADS)
Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian
2017-08-01
With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges with various factors and different stages having become inevitably coupled during the design process. Management of massive information or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increased sophisticated situations when coupled optimisation is also engaged. Aiming at overcoming these difficulties involved in conducting the design of the spindle box system of ultra-precision optical grinding machine, this paper proposed a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating mechanical structure, control system and driving system of the motor is established, mainly concerning the stiffness matrix of hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by the simulation results of the natural frequency and deformation of the spindle box when applying an impact force to the grinding wheel.
Optimisation of cavity parameters for lasers based on AlGaInAsP/InP solid solutions (λ = 1470 nm)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veselov, D A; Ayusheva, K R; Shashkin, I S
2015-10-31
We have studied the effect of laser cavity parameters on the light–current characteristics of lasers based on the AlGaInAs/GaInAsP/InP solid solution system that emit in the spectral range 1400 – 1600 nm. It has been shown that optimisation of cavity parameters (chip length and front facet reflectivity) allows one to improve heat removal from the laser, without changing other laser characteristics. An increase in the maximum output optical power of the laser by 0.5 W has been demonstrated due to cavity design optimisation. (lasers)
Comparaison de méthodes d'identification des paramètres d'une machine asynchrone
NASA Astrophysics Data System (ADS)
Bellaaj-Mrabet, N.; Jelassi, K.
1998-07-01
Interests, in Genetic Algorithms (G.A.) expands rapidly. This paper consists initially to apply G.A. for identifying induction motor parameters. Next, we compare the performances with classical methods like Maximum Likelihood and classical electrotechnical methods. These methods are applied on three induction motors of different powers to compare results following a set of criteria. Les algorithmes génétiques sont des méthodes adaptatives de plus en plus utilisée pour la résolution de certains problèmes d'optimisation. Le présent travail consiste d'une part, à mettre en œuvre un A.G sur des problèmes d'identification des machines électriques, et d'autre part à comparer ses performances avec les méthodes classiques tels que la méthode du maximum de vraisemblance et la méthode électrotechnique basée sur des essais à vides et en court-circuit. Ces méthodes sont appliquées sur des machines asynchrones de différentes puissances. Les résultats obtenus sont comparés selon certains critères, permettant de conclure sur la validité et la performance de chaque méthode.
Haering, Diane; Huchez, Aurore; Barbier, Franck; Holvoët, Patrice; Begon, Mickaël
2017-01-01
Introduction Teaching acrobatic skills with a minimal amount of repetition is a major challenge for coaches. Biomechanical, statistical or computer simulation tools can help them identify the most determinant factors of performance. Release parameters, change in moment of inertia and segmental momentum transfers were identified in the prediction of acrobatics success. The purpose of the present study was to evaluate the relative contribution of these parameters in performance throughout expertise or optimisation based improvements. The counter movement forward in flight (CMFIF) was chosen for its intrinsic dichotomy between the accessibility of its attempt and complexity of its mastery. Methods Three repetitions of the CMFIF performed by eight novice and eight advanced female gymnasts were recorded using a motion capture system. Optimal aerial techniques that maximise rotation potential at regrasp were also computed. A 14-segment-multibody-model defined through the Rigid Body Dynamics Library was used to compute recorded and optimal kinematics, and biomechanical parameters. A stepwise multiple linear regression was used to determine the relative contribution of these parameters in novice recorded, novice optimised, advanced recorded and advanced optimised trials. Finally, fixed effects of expertise and optimisation were tested through a mixed-effects analysis. Results and discussion Variation in release state only contributed to performances in novice recorded trials. Moment of inertia contribution to performance increased from novice recorded, to novice optimised, advanced recorded, and advanced optimised trials. Contribution to performance of momentum transfer to the trunk during the flight prevailed in all recorded trials. Although optimisation decreased transfer contribution, momentum transfer to the arms appeared. Conclusion Findings suggest that novices should be coached on both contact and aerial technique. 
Conversely, advanced gymnasts increased their performance mainly through improved aerial technique. For both groups, coaching should focus on reducing the moment of inertia. The method proposed in this article could be generalised to the investigation of any aerial skill learning. PMID:28422954
NASA Astrophysics Data System (ADS)
Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
This study simulates the optimisation of injection moulding process parameters using Autodesk Moldflow Insight (AMI) software. Four process parameters are varied, namely melt temperature, mould temperature, packing pressure, and cooling time, in order to analyse the warpage of the part, which is made of polypropylene (PP). The combinations of process parameters are analysed using Analysis of Variance (ANOVA), and the optimised values are obtained using Response Surface Methodology (RSM). RSM and a Genetic Algorithm (GA) are applied in the Design Expert software in order to minimise the warpage value. The outcome of this study shows that the warpage value is improved by using RSM and GA.
Acoustic Resonator Optimisation for Airborne Particle Manipulation
NASA Astrophysics Data System (ADS)
Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian
Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel plate acoustic resonator system has been investigated for the manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. An optimised resonator design requires careful consideration of the effects of layer thickness and material properties. Furthermore, the frequency-dependent acoustic attenuation is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties and sizes, down to 14.8 μm.
Warpage analysis in injection moulding process
NASA Astrophysics Data System (ADS)
Hidayah, M. H. N.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
This study concentrates on the effects of process parameters on the warpage problem in the plastic injection moulding process, using Autodesk Moldflow Insight (AMI) software for the simulation. A dental floss dispenser moulded in polypropylene (PP) was analysed, with the detailed properties of an 80-tonne Nessei NEX 1000 injection moulding machine used in the simulation. The variable process parameters are packing pressure, packing time, melt temperature and cooling time. Warpage was minimised using the optimisation and analysis data from the Design Expert software. The method integrates Response Surface Methodology (RSM) and Central Composite Design (CCD) with polynomial models obtained from Design of Experiments (DOE). The results show that packing pressure is the main factor contributing to warpage in the x-axis and y-axis, while melt temperature is the main factor in the z-axis; packing time is the least significant of the four parameters in the x-, y- and z-axes. With the optimal processing parameters, the warpage in the x-, y- and z-axes was reduced by 21.60%, 26.45% and 24.53%, respectively.
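The response-surface idea these moulding studies rely on, fitting a low-order polynomial to measured responses and locating its stationary point, can be sketched with a single factor. The parameter name, data values and optimum below are invented for illustration and are not taken from the paper.

```python
# Minimal single-factor RSM sketch: fit y = b0 + b1*x + b2*x^2 to warpage
# measurements taken at several settings of one parameter (here labelled
# "packing pressure" purely as an example), then take the stationary
# point of the fitted quadratic as the optimum setting.

def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via normal equations."""
    s = [sum(x**k for x in xs) for k in range(5)]        # sums of x^0..x^4
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[i + j] for j in range(3)] for i in range(3)]
    b = t[:]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef  # b0, b1, b2

# Synthetic warpage curve with its minimum near x = 30 (units arbitrary).
xs = [10, 20, 30, 40, 50]
ys = [0.9, 0.5, 0.38, 0.52, 0.88]      # warpage (mm), smallest near 30
b0, b1, b2 = fit_quadratic(xs, ys)
x_opt = -b1 / (2 * b2)                 # stationary point of the fitted surface
print(round(x_opt, 1))
```

In a real study the same fit is done in several factors at once (the CCD supplies the design points) and ANOVA decides which terms to keep.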
Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)
NASA Astrophysics Data System (ADS)
Gorman, Richard M.; Oliver, Hilary J.
2018-06-01
Most geophysical models include many parameters that are not fully determined by theory, and can be tuned
to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
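The calibration pattern described here, wrapping a model run as a scalar cost function and handing it to a derivative-free optimiser, can be sketched with the standard library alone. In the real suite the algorithms come from NLopt and the cost function is a full wave-model hindcast; here a simple shrinking-step coordinate search stands in for the algorithm, and `run_model` is an invented stand-in with two made-up tunable parameters.

```python
# Hedged sketch of automated model calibration: the "model run" is a
# black-box cost function; a derivative-free search tunes its parameters.

def run_model(params):
    """Stand-in for a model-suite run returning an RMSE-like cost."""
    betamax, swellf = params            # two hypothetical physics parameters
    return (betamax - 1.2) ** 2 + 0.5 * (swellf - 0.6) ** 2

def coordinate_search(cost, x0, step=0.5, tol=1e-4, max_iter=200):
    """Shrinking-step coordinate descent; needs no gradients."""
    x = list(x0)
    best = cost(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                trial[i] += d
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step *= 0.5                 # no move helped: refine the step
        it += 1
    return x, best

params, rmse = coordinate_search(run_model, [2.0, 1.0])
print([round(p, 2) for p in params], round(rmse, 6))
```

The point of the suite is that each `run_model` call is itself a Cylc workflow (pre-processing, model run, error metric), so the optimiser only ever sees the scalar cost.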
An introduction to quantum machine learning
NASA Astrophysics Data System (ADS)
Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco
2015-04-01
Machine learning algorithms learn a desired input-output relation from examples in order to interpret new inputs. This is important for tasks such as image and speech recognition or strategy optimisation, with growing applications in the IT industry. In the last couple of years, researchers investigated if quantum computing can help to improve classical machine learning algorithms. Ideas range from running computationally costly algorithms or their subroutines efficiently on a quantum computer to the translation of stochastic methods into the language of quantum theory. This contribution gives a systematic overview of the emerging field of quantum machine learning. It presents the approaches as well as technical details in an accessible way, and discusses the potential of a future theory of quantum learning.
Advanced treatment planning using direct 4D optimisation for pencil-beam scanned particle therapy
NASA Astrophysics Data System (ADS)
Bernatowicz, Kinga; Zhang, Ye; Perrin, Rosalind; Weber, Damien C.; Lomax, Antony J.
2017-08-01
We report on development of a new four-dimensional (4D) optimisation approach for scanned proton beams, which incorporates both irregular motion patterns and the delivery dynamics of the treatment machine into the plan optimiser. Furthermore, we assess the effectiveness of this technique to reduce dose to critical structures in proximity to moving targets, while maintaining effective target dose homogeneity and coverage. The proposed approach has been tested using both a simulated phantom and a clinical liver cancer case, and allows for realistic 4D calculations and optimisation using irregular breathing patterns extracted from e.g. 4DCT-MRI (4D computed tomography-magnetic resonance imaging). 4D dose distributions resulting from our 4D optimisation can achieve almost the same quality as static plans, independent of the studied geometry/anatomy or selected motion (regular and irregular). Additionally, current implementation of the 4D optimisation approach requires less than 3 min to find the solution for a single field planned on 4DCT of a liver cancer patient. Although 4D optimisation allows for realistic calculations using irregular breathing patterns, it is very sensitive to variations from the planned motion. Based on a sensitivity analysis, target dose homogeneity comparable to static plans (D5-D95 <5%) has been found only for differences in amplitude of up to 1 mm, for changes in respiratory phase <200 ms and for changes in the breathing period of <20 ms in comparison to the motions used during optimisation. As such, methods to robustly deliver 4D optimised plans employing 4D intensity-modulated delivery are discussed.
Path integrals with higher order actions: Application to realistic chemical systems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.
2018-02-01
Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3 , corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was ˜0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. 
We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
NASA Astrophysics Data System (ADS)
Kuang, Yang; Daniels, Alice; Zhu, Meiling
2017-08-01
This paper presents a sandwiched piezoelectric transducer (SPT) for energy harvesting in large force environments with increased load capacity and electric power output. The SPT uses (1) flex end-caps to amplify the applied load force so as to increase its power output and (2) a sandwiched piezoelectric-substrate structure to reduce the stress concentration in the piezoelectric material so as to increase the load capacity. A coupled piezoelectric-circuit finite element model (CPC-FEM) was developed, which is able to directly predict the electric power output of the SPT connected to a load resistor. The CPC-FEM was used to study the effects of various parameters of the SPT on the performance to obtain an optimal design. These parameters included the substrate thickness, the end-cap material and thickness, the electrode length, the joint length, the end-cap internal angle and the PZT thickness. A prototype with optimised parameters was tested on a loading machine, and the experimental results were compared with simulation. A good agreement was observed between simulation and experiment. When subjected to a 1 kN 2 Hz sinusoidal force applied by the loading machine, the SPT produced an average power of 4.68 mW. The application of the SPT as a footwear energy harvester was demonstrated by fitting the SPT into a boot and performing the tests on a treadmill, and the SPT generated an average power of 2.5 mW at a walking speed of 4.8 km h-1.
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for processing natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. By accounting for the influence of sawing speed on the tangential force distribution, the modified PFD (MPFD) achieved high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. Case studies proved the practicability of predicting sawing power with the MPFD from few initial experimental samples: provided sample measurement accuracy is high, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated; in one case study, energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
Machine learning for outcome prediction of acute ischemic stroke post intra-arterial therapy.
Asadi, Hamed; Dowling, Richard; Yan, Bernard; Mitchell, Peter
2014-01-01
Stroke is a major cause of death and disability. Accurately predicting stroke outcome from a set of predictive variables may identify high-risk patients and guide treatment approaches, leading to decreased morbidity. Logistic regression models allow for the identification and validation of predictive variables. However, advanced machine learning algorithms offer an alternative, in particular for large-scale multi-institutional data, with the advantage of easily incorporating newly available data to improve prediction performance. Our aim was to design and compare different machine learning methods capable of predicting the outcome of endovascular intervention in acute anterior circulation ischaemic stroke. We conducted a retrospective study of a prospectively collected database of acute ischaemic stroke treated by endovascular intervention. Using SPSS®, MATLAB®, and Rapidminer®, classical statistics as well as artificial neural network and support vector algorithms were applied to design a supervised machine capable of classifying these predictors into potential good and poor outcomes. These algorithms were trained, validated and tested using randomly divided data. We included 107 consecutive acute anterior circulation ischaemic stroke patients treated by endovascular technique. Sixty-six were male, and the mean age was 65.3 years. All the available demographic, procedural and clinical factors were included in the models. The final confusion matrix of the neural network demonstrated an overall congruency of ∼80% between the target and output classes, with favourable receiver operating characteristics. However, after optimisation, the support vector machine had a relatively better performance, with a root mean squared error of 2.064 (SD: ±0.408). We showed promising accuracy of outcome prediction using supervised machine learning algorithms, with potential for incorporation of larger multicenter datasets, likely further improving prediction.
Finally, we propose that a robust machine learning system can potentially optimise the selection process for endovascular versus medical treatment in the management of acute stroke.
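The supervised-learning workflow the study describes, training a classifier on randomly split data and reporting test accuracy, can be sketched in a few lines. The paper used neural networks and SVMs in commercial tools; here a tiny logistic regression trained by gradient descent stands in, and the two-feature synthetic "patients" are entirely invented for the demo.

```python
# Sketch of train/validate/test classification on randomly divided data.

import math
import random

random.seed(42)

def make_patient():
    """Synthetic record: two features (e.g. age, baseline score) -> outcome."""
    x1, x2 = random.uniform(0, 1), random.uniform(0, 1)
    label = 1 if x1 + x2 > 1.0 else 0      # simple separable rule, for clarity
    return [x1, x2], label

data = [make_patient() for _ in range(200)]
random.shuffle(data)
train, test = data[:150], data[50:][:50] if False else (data[:150], data[150:])[1]
train, test = data[:150], data[150:]

# Logistic regression via plain full-batch gradient descent.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(2000):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in train:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        for i in range(2):
            gw[i] += (p - y) * x[i]
        gb += p - y
    for i in range(2):
        w[i] -= lr * gw[i] / len(train)
    b -= lr * gb / len(train)

correct = sum(
    ((w[0] * x[0] + w[1] * x[1] + b) > 0) == (y == 1)
    for x, y in test
)
accuracy = correct / len(test)
print(f"test accuracy: {accuracy:.2f}")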
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when an efficient use is targeted. One of the most important issue of its design process is the optimisation of the individual laminae and of the laminated structure as a whole. In order to do that, a parametric model of the material has been defined, emphasising the many geometric variables needed to be correlated in the complex process of optimisation. The input parameters involved in this work, include: widths or heights of the tows and the laminate stacking sequence, which are discrete variables, while the gaps between adjacent tows and the height of the neat matrix are continuous variables. This work is one of the first attempts of using a Genetic Algorithm ( GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material which is able to withstand to a given set of external, in-plane, loads. The optimisation process has been performed using a fitness function which can analyse and compare mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrate the efficiency of the composite structure.
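The distinctive difficulty SOMGA addresses is that the design variables are mixed: some genes are discrete (e.g. stacking-sequence angles) and some continuous (e.g. tow gaps). A minimal stdlib genetic algorithm over such mixed variables can be sketched as below; the fitness function is a made-up smooth surrogate, not a laminate mechanics model, and the angle set and "gap" ranges are illustrative.

```python
# Minimal GA over mixed discrete/continuous genes.

import random

random.seed(1)

ANGLES = [0, 30, 45, 60, 90]           # discrete gene: ply-angle choices

def fitness(ind):
    """Toy objective: prefers a 45-degree ply and a gap near 0.2 (maximise)."""
    angle, gap = ind
    return -abs(angle - 45) / 45.0 - (gap - 0.2) ** 2

def random_ind():
    return [random.choice(ANGLES), random.uniform(0.0, 1.0)]

def crossover(a, b):
    # Discrete gene: pick from either parent; continuous gene: blend.
    return [random.choice([a[0], b[0]]), 0.5 * (a[1] + b[1])]

def mutate(ind, rate=0.2):
    if random.random() < rate:
        ind[0] = random.choice(ANGLES)           # jump to another angle
    if random.random() < rate:
        ind[1] = min(1.0, max(0.0, ind[1] + random.gauss(0, 0.1)))
    return ind

pop = [random_ind() for _ in range(30)]
for _ in range(60):                    # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                   # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = max(pop, key=fitness)
print(best)
```

Handling the discrete and continuous genes with different crossover/mutation operators, rather than encoding everything as bits, is the essential point of a "modified" GA for this problem class.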
Optimisation of chemical heat pump operation: process synchronisation and control
NASA Astrophysics Data System (ADS)
Cassou, T.; Amouroux, M.; Labat, P.
1995-04-01
We present the mathematical model of a chemical heat pump pilot plant and the corresponding numerical simulator. This simulator is able to determine the influence of various parameters (whether related to heat exchange or to chemical kinetics), and also to simulate the main operating modes. The objective is optimal management of the process: optimised control of the system, through management of the different phases, allows a continuous and stable production of the power delivered by the machine.
SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres
NASA Astrophysics Data System (ADS)
Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei
2015-10-01
Dynamic virtualised resource allocation is the key to quality-of-service assurance for multi-tier web application services in cloud data centres. In this paper, we develop a self-management architecture of cloud data centres with a virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the number of virtual machines for each tier of the virtualised application service environment. We then formulate a non-linear constrained optimisation problem with restrictions defined in the service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers while meeting the performance requirements of different clients. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving overall performance and reducing resource energy cost.
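One simple way a queueing model can size a tier, in the spirit of the hybrid model mentioned above, is the classic M/M/m formula: compute the expected waiting time for m virtual machines and pick the smallest m that meets an SLA target. The arrival rate, per-VM service rate and SLA threshold below are invented numbers, and the paper's actual model is more elaborate than this.

```python
# M/M/m sizing sketch: smallest number of VMs meeting a mean-wait SLA.

import math

def erlang_c(m, rho):
    """Probability an arriving request must queue (M/M/m); rho = lam/mu."""
    if rho >= m:
        return 1.0                      # unstable: always queueing
    inv = sum(rho**k / math.factorial(k) for k in range(m))
    last = rho**m / (math.factorial(m) * (1 - rho / m))
    return last / (inv + last)

def mean_wait(m, lam, mu):
    """Expected time in queue for M/M/m."""
    return erlang_c(m, lam / mu) / (m * mu - lam)

lam, mu, sla = 80.0, 10.0, 0.05         # req/s, per-VM req/s, max wait (s)
m = 1
while lam >= m * mu or mean_wait(m, lam, mu) > sla:
    m += 1                              # add VMs until stable and within SLA
print(m)
```

A profit-maximising allocator would wrap this in an optimisation over all tiers, trading the cost of extra VMs against SLA penalties.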
NASA Astrophysics Data System (ADS)
Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.
2017-04-01
This paper concerns the problem of dynamic modelling and parameter estimation for a seven degree of freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) in order to find a proper solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared to the single-objective output error problem formulation. This is achieved by integrating the validation phase inside the algorithm implicitly and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.
Optimisation of a parallel ocean general circulation model
NASA Astrophysics Data System (ADS)
Beare, M. I.; Stevens, D. P.
1997-10-01
This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.
Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.
Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip
2015-11-01
Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such a release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain dispersion parameters that enable a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles for various atmospheric stability classes (ASC), 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over flat horizontal terrain with both low- and high-roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
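The Gaussian plume model referred to above has a standard closed form: the concentration downwind of a continuous point source, with an image-source term for ground reflection. The power-law coefficients for the dispersion parameters below are illustrative placeholders (the paper's contribution is precisely to optimise such parameters per stability class), and all numbers in the example are invented.

```python
# Gaussian plume with ground reflection:
# C = Q / (2*pi*u*sy*sz) * exp(-y^2/2sy^2)
#     * [exp(-(z-H)^2/2sz^2) + exp(-(z+H)^2/2sz^2)]

import math

def plume_concentration(x, y, z, Q, u, H, sy, sz):
    """Concentration (mass/volume) at (x, y, z) downwind of the source.

    Q: emission rate, u: wind speed, H: effective source height,
    sy/sz: lateral/vertical dispersion parameters at downwind distance x.
    """
    lateral = math.exp(-y**2 / (2 * sy**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sz**2))
                + math.exp(-(z + H)**2 / (2 * sz**2)))  # image-source term
    return Q / (2 * math.pi * u * sy * sz) * lateral * vertical

# Dispersion parameters from an assumed power law sigma = a * x**b.
x = 500.0                                # downwind distance (m)
sy = 0.22 * x ** 0.89
sz = 0.20 * x ** 0.92
c = plume_concentration(x, y=0.0, z=1.5, Q=1.0, u=4.0, H=10.0, sy=sy, sz=sz)
print(f"{c:.2e}")
```

The optimisation in the paper amounts to tuning the (a, b)-style coefficients so that this cheap formula reproduces the CFD reference profiles.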
NASA Astrophysics Data System (ADS)
Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.
2017-09-01
This paper presents a systematic methodology to analyse the warpage of a side arm part using Autodesk Moldflow Insight software. Response Surface Methodology (RSM) was proposed to optimise the processing parameters, efficiently minimising the warpage of the side arm part. The variable parameters considered in this study were those identified by previous researchers as most significantly affecting warpage, namely melt temperature, mould temperature and packing pressure, with packing time and cooling time added as parameters commonly used by researchers. The results show that warpage was improved by 10.15% and that the most significant parameter affecting warpage is packing pressure.
Optimisation of confinement in a fusion reactor using a nonlinear turbulence model
NASA Astrophysics Data System (ADS)
Highcock, E. G.; Mandell, N. R.; Barnes, M.
2018-04-01
The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrere, M.; Kaeppelin, V.; Torregrosa, F.
2006-11-13
To meet the requirements for P+/N junctions at sub-45 nm ITRS nodes, new doping techniques are being studied. Among them, Plasma Immersion Ion Implantation (PIII) has been widely studied. IBS has designed and developed its own PIII machine, named PULSION®, which uses a pulsed plasma. As with other modern technological applications of low-pressure plasma, PULSION® needs precise control over plasma parameters in order to optimise process characteristics. To improve the pulsed plasma discharge used for PIII, a nitrogen pulsed plasma has been studied in the inductively coupled plasma (ICP) of PULSION® and an argon pulsed plasma has been studied in the helicon discharge of the LPIIM laboratory reactor (PHYSIS). Measurements of the Ion Energy Distribution Function (IEDF) with an EQP300 (Hiden) have been performed in both pulsed plasmas. The study was carried out at different energies, allowing the time-resolved IEDF to be reconstructed (TREMS). By comparing these results, we found that the beginning of the plasma pulse, termed ignition, exhibits at least three phases. All these results allowed us to explain the plasma dynamics during the pulse while observing transitions between capacitive and inductive coupling. This study leads to a better understanding of changes in discharge parameters such as plasma potential, electron temperature and ion density.
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
NASA Astrophysics Data System (ADS)
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by a year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters.
Correlation coefficients between optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, water holding capacity, and temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.
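The calibration metrics named in this abstract can be written compactly. NSE compares the model error to the variance of the observations, so values near 1 indicate a good fit and values at or below 0 mean the model is no better than the observed mean. The flow series below are invented for the demonstration.

```python
# Nash-Sutcliffe efficiency and volumetric error, as used in composite
# calibration objectives (here applied to flows and log-flows).

import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def volumetric_error(obs, sim):
    """Relative bias in total runoff volume."""
    return (sum(sim) - sum(obs)) / sum(obs)

obs = [5.0, 7.0, 12.0, 30.0, 18.0, 9.0, 6.0]   # observed flows (m^3/s)
sim = [5.5, 6.5, 11.0, 27.0, 20.0, 9.5, 6.0]   # simulated flows
print(round(nse(obs, sim), 3))
print(round(nse([math.log(o) for o in obs], [math.log(s) for s in sim]), 3))
print(round(volumetric_error(obs, sim), 3))
```

Using NSE on log-flows alongside NSE on flows, as the study does, balances the fit of low flows (emphasised by the logarithm) against the fit of flood peaks.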
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process: genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin (EWB) partitioning. The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to an accuracy of 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.
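A minimal sketch of one of the three optimisers, simulated annealing over discretisation cut points, is shown below. The accuracy function and data are toy stand-ins, not the HIV survey data or the paper's implementation.

```python
import math
import random

# Illustrative sketch: simulated annealing over the cut points that
# discretise a continuous attribute, maximising a user-supplied
# classification-accuracy function.
def anneal_partitions(accuracy, cuts, steps=500, t0=1.0, seed=1):
    rng = random.Random(seed)
    best = cur = list(cuts)
    best_acc = cur_acc = accuracy(cur)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9                     # linear cooling
        cand = sorted(c + rng.gauss(0, 0.05) for c in cur)  # perturb cuts
        acc = accuracy(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if acc > cur_acc or rng.random() < math.exp((acc - cur_acc) / t):
            cur, cur_acc = cand, acc
            if acc > best_acc:
                best, best_acc = cand, acc
    return best, best_acc

# Toy accuracy: reward cut points near assumed class boundaries 0.3 and 0.7.
toy = lambda cuts: (-min((c - 0.3) ** 2 for c in cuts)
                    - min((c - 0.7) ** 2 for c in cuts))
cuts, acc = anneal_partitions(toy, [0.5, 0.5])
```

Hill climbing is the special case where only improving moves are accepted; a GA would instead evolve a population of cut-point vectors.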
Warpage analysis on thin shell part using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
Moulding parameters are optimised to reduce warpage defects using Autodesk Moldflow Insight (AMI) 2012 software. The product is injection moulded using Acrylonitrile-Butadiene-Styrene (ABS) material. The analysis varies the processing parameters melting temperature, mould temperature, packing pressure and packing time. Design of Experiments (DOE) is integrated to obtain a polynomial model using Response Surface Methodology (RSM). The Glowworm Swarm Optimisation (GSO) method is then used to predict the best combination of parameters to minimise the warpage defect and produce high-quality parts.
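The RSM step can be illustrated as follows: a hedged sketch that fits a second-order polynomial response surface to DOE results by least squares. The two-factor setup and synthetic data are assumptions for illustration, not the paper's four-factor model.

```python
import numpy as np

# Fit a second-order (RSM-style) surface in two coded factors:
#   warpage ≈ b0 + b1*x1 + b2*x2 + b11*x1² + b22*x2² + b12*x1*x2
def fit_rsm(X, y):
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Synthetic DOE: noiseless response with a known minimum at (0, 0).
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(30, 2))                  # coded factor levels
y = 1.0 + 2 * X[:, 0]**2 + 3 * X[:, 1]**2             # assumed warpage surface
coef = fit_rsm(X, y)
```

An optimiser such as GSO would then search this fitted polynomial, rather than the expensive simulation, for the minimum-warpage parameter combination.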
rPM6 parameters for phosphorus- and sulphur-containing open-shell molecules
NASA Astrophysics Data System (ADS)
Saito, Toru; Takano, Yu
2018-03-01
In this article, we introduce a reparameterisation of PM6 (rPM6) for phosphorus and sulphur to achieve a better description of open-shell species containing these two elements. The two sets of parameters have been optimised separately using our training sets. The performance of the spin-unrestricted rPM6 (UrPM6) method with the optimised parameters is evaluated on 14 radical species, each containing either a phosphorus or a sulphur atom, in comparison with the original UPM6 and spin-unrestricted density functional theory (UDFT) methods. The standard UPM6 calculations fail to describe the adiabatic singlet-triplet energy gaps correctly and may cause significant structural mismatches with UDFT-optimised geometries. Leaving aside three difficult cases, tests on 11 open-shell molecules strongly indicate the superior performance of UrPM6, which provides much better agreement with the results of UDFT methods for geometric and electronic properties.
Van Dyk, Jacob; Zubizarreta, Eduardo; Lievens, Yolande
2017-11-01
With increasing recognition of growing cancer incidence globally, efficient means of expanding radiotherapy capacity is imperative, and understanding the factors impacting human and financial needs is valuable. A time-driven activity-based costing analysis was performed, using a base case of 2-machine departments, with defined cost inputs and operating parameters. Four income groups were analysed, ranging from low to high income. Scenario analyses included department size, operating hours, fractionation, treatment complexity, efficiency, and centralised versus decentralised care. The base case cost/course is US$5,368 in HICs, US$2,028 in LICs; the annual operating cost is US$4,595,000 and US$1,736,000, respectively. Economies of scale show cost/course decreasing with increasing department size, mainly related to the equipment cost and most prominent up to 3 linacs. The cost in HICs is two or three times as high as in U-MICs or LICs, respectively. Decreasing operating hours below 8h/day has a dramatic impact on the cost/course. IMRT increases the cost/course by 22%. Centralising preparatory activities has a moderate impact on the costs. The results indicate trends that are useful for optimising local and regional circumstances. This methodology can provide input into a uniform and accepted approach to evaluating the cost of radiotherapy. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Model-Free Machine Learning in Biomedicine: Feasibility Study in Type 1 Diabetes
Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G.
2016-01-01
Although reinforcement learning (RL) is suitable for highly uncertain systems, the applicability of this class of algorithms to medical treatment may be limited by the patient variability which dictates individualised tuning for their usually multiple algorithmic parameters. This study explores the feasibility of RL in the framework of artificial pancreas development for type 1 diabetes (T1D). In this approach, an Actor-Critic (AC) learning algorithm is designed and developed for the optimisation of insulin infusion for personalised glucose regulation. AC optimises the daily basal insulin rate and insulin:carbohydrate ratio for each patient, on the basis of his/her measured glucose profile. Automatic, personalised tuning of AC is based on the estimation of information transfer (IT) from insulin to glucose signals. Insulin-to-glucose IT is linked to patient-specific characteristics related to total daily insulin needs and insulin sensitivity (SI). The AC algorithm is evaluated using an FDA-accepted T1D simulator on a large patient database under a complex meal protocol, meal uncertainty and diurnal SI variation. The results showed that 95.66% of time was spent in normoglycaemia in the presence of meal uncertainty and 93.02% when meal uncertainty and SI variation were simultaneously considered. The time spent in hypoglycaemia was 0.27% in both cases. The novel tuning method reduced the risk of severe hypoglycaemia, especially in patients with low SI. PMID:27441367
NASA Astrophysics Data System (ADS)
Silversides, Katherine L.; Melkumyan, Arman
2017-03-01
Machine learning techniques such as Gaussian Processes can be used to identify stratigraphically important features in geophysical logs. The marker shales in the banded iron formation hosted iron ore deposits of the Hamersley Ranges, Western Australia, form distinctive signatures in the natural gamma logs. The identification of these marker shales is important for the stratigraphic identification of unit boundaries in the geological modelling of the deposit. Each machine learning technique has unique properties that affect the results. For Gaussian Processes (GPs), the output values are inclined towards the mean value, particularly when there is insufficient information in the library. The impact that this inclination has on the classification can vary depending on the parameter values selected by the user. Therefore, when applying machine learning techniques, care must be taken to fit the technique to the problem correctly. This study focuses on optimising the settings and choices for training a GP system to identify a specific marker shale. We show that the final results converge even when different but equally valid starting libraries are used for the training. To analyse the impact on feature identification, GP models were trained so that the output was inclined towards a positive, neutral or negative output. For this type of classification, the best results were obtained when the pull was towards a negative output. We also show that the GP output can be adjusted by using a standard deviation coefficient that changes the balance between certainty and accuracy in the results.
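The "pull toward the mean" can be demonstrated with a minimal GP regression sketch (RBF kernel, one-dimensional synthetic data, not the deposit's gamma logs): far from the training library the predictive mean reverts to the prior mean, which here plays the role of the positive, neutral or negative default output.

```python
import numpy as np

# Minimal GP regression with an RBF kernel. Far from training data the
# prediction collapses to prior_mean; near the data it follows the targets.
def gp_predict(Xtr, ytr, Xte, prior_mean=0.0, ell=1.0, sigma2=1e-6):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    K = k(Xtr, Xtr) + sigma2 * np.eye(len(Xtr))       # jitter for stability
    alpha = np.linalg.solve(K, ytr - prior_mean)
    return prior_mean + k(Xte, Xtr) @ alpha

Xtr = np.array([0.0, 1.0, 2.0])
ytr = np.array([1.0, 0.5, 1.0])
near = gp_predict(Xtr, ytr, np.array([1.0]), prior_mean=-1.0)   # follows data
far = gp_predict(Xtr, ytr, np.array([50.0]), prior_mean=-1.0)   # reverts
```

Choosing a negative prior mean, as the study found best, means that log segments unlike anything in the library default to a "not the marker shale" output.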
NASA Astrophysics Data System (ADS)
Vasquez Padilla, Ricardo; Soo Too, Yen Chean; Benito, Regano; McNaughton, Robbie; Stein, Wes
2018-01-01
In this paper, optimisation of supercritical CO2 (S-CO2) Brayton cycles integrated with a solar receiver, which provides heat input to the cycle, was performed. Four S-CO2 Brayton cycle configurations were analysed and optimum operating conditions were obtained using a multi-objective thermodynamic optimisation. Four different sets, each including two objective parameters, were considered individually. The individual multi-objective optimisations were performed using the Non-dominated Sorting Genetic Algorithm. The effect of reheating, solar receiver pressure drop and cycle parameters on the overall exergy and cycle thermal efficiency was analysed. The results showed that, for all configurations, the overall exergy efficiency of the solarised systems reached a maximum value between 700°C and 750°C, and that the optimum value is adversely affected by the solar receiver pressure drop. In addition, the optimum cycle high pressure was in the range of 24.2-25.9 MPa, depending on the configuration and reheat condition.
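At the core of a Non-dominated Sorting Genetic Algorithm is the Pareto-dominance test. A minimal sketch follows, with both objectives to be maximised (standing in for, e.g., exergy efficiency and cycle thermal efficiency; the numbers are illustrative, not the paper's results).

```python
# a dominates b if it is at least as good in every objective and strictly
# better in at least one (maximisation convention).
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# The first non-dominated front: points no other point dominates.
def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

designs = [(0.60, 0.45), (0.62, 0.40), (0.58, 0.50), (0.55, 0.44)]
front = pareto_front(designs)
```

NSGA-style algorithms repeat this sorting over successive fronts and use crowding distance within a front to preserve diversity along the trade-off curve.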
Mikaeli, S; Thorsén, G; Karlberg, B
2001-01-12
A novel approach to multivariate evaluation of separation electrolytes for micellar electrokinetic chromatography is presented. An initial screening of the experimental parameters is performed using a Plackett-Burman design. Significant parameters are further evaluated using full factorial designs. The total resolution of the separation is calculated and used as response. The proposed scheme has been applied to the optimisation of the separation of phenols and the chiral separation of (+)-1-(9-anthryl)-2-propyl chloroformate-derivatized amino acids. A total of eight experimental parameters were evaluated and optimal conditions found in less than 48 experiments.
Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning
Rouet-Leduc, Bertrand; Barros, Kipton Marcos; Lookman, Turab; ...
2016-04-26
A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.
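The selection rule of such an active-learning loop can be sketched as an upper-confidence-bound acquisition: pick the next candidate structure by trading predicted efficiency off against model uncertainty. The trade-off parameter and toy surrogate below are assumptions for illustration, not the paper's model.

```python
# Select the candidate maximising (predicted efficiency + kappa * uncertainty):
# high-mean candidates are exploited, high-uncertainty candidates explored.
def select_next(candidates, predict, kappa=2.0):
    # predict(c) returns (mean_efficiency, std_uncertainty) for candidate c
    return max(candidates, key=lambda c: predict(c)[0] + kappa * predict(c)[1])

# Toy surrogate: candidate "b" has a lower mean but much higher uncertainty,
# so with kappa=2 it is the most informative structure to simulate next.
surrogate = {"a": (0.80, 0.01), "b": (0.75, 0.10), "c": (0.70, 0.02)}
nxt = select_next(list(surrogate), lambda c: surrogate[c])
```

After the chosen structure is simulated, its result is added to the training set and the surrogate is refit, shrinking the uncertainty where data now exist.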
NASA Astrophysics Data System (ADS)
Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor
2012-08-01
This paper presents a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are largely similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect of product development, distributing the optimisation process to make effective use of idle computing capacity can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.
The seasonal behaviour of carbon fluxes in the Amazon: fusion of FLUXNET data and the ORCHIDEE model
NASA Astrophysics Data System (ADS)
Verbeeck, H.; Peylin, P.; Bacour, C.; Ciais, P.
2009-04-01
Eddy covariance measurements at the Santarém (km 67) site revealed an unexpected seasonal pattern in carbon fluxes which could not be simulated by existing state-of-the-art global ecosystem models (Saleska et al., Science 2003). An unexpectedly high carbon uptake was measured during the dry season, whereas carbon release was observed in the wet season. There are several possible (combined) underlying mechanisms for this phenomenon: (1) increased soil respiration due to soil moisture in the wet season, (2) increased photosynthesis during the dry season due to deep rooting, hydraulic lift, increased radiation and/or a leaf flush. The objective of this study is to optimise the ORCHIDEE model using eddy covariance data in order to mimic the seasonal response of carbon fluxes to dry/wet conditions in tropical forest ecosystems and, by doing so, to identify the underlying mechanisms of this seasonal response. The ORCHIDEE model is a state-of-the-art mechanistic global vegetation model that can be run at local or global scale. It calculates the carbon and water cycles in the different soil and vegetation pools and resolves the diurnal cycle of fluxes. ORCHIDEE is built on the concept of plant functional types (PFTs) to describe vegetation. To bring the different carbon pool sizes to realistic values, spin-up runs are used. ORCHIDEE uses climate variables as drivers together with a number of ecosystem parameters that have been assessed from laboratory and in situ experiments. These parameters are still associated with large uncertainty and may vary between and within PFTs in a way that is currently not informed or captured by the model. Recent developments in assimilation techniques allow the objective use of eddy covariance data to improve our knowledge of these parameters in a statistically coherent approach. We use a Bayesian optimisation approach.
This approach is based on the minimisation of a cost function containing the mismatch between simulated model output and observations, as well as the mismatch between a priori and optimised parameters. The parameters can be optimised on different time scales (annually, monthly, daily). For this study the model is optimised at local scale for five eddy flux sites: four sites in Brazil and one in French Guiana. The seasonal behaviour of carbon fluxes in response to wet and dry conditions differs among these sites. Key processes that are optimised include the effect of soil water on heterotrophic soil respiration, the effect of soil water availability on stomatal conductance and photosynthesis, and phenology. By optimising several key parameters we could significantly improve the simulation of the seasonal pattern of NEE. Nevertheless, posterior parameters should be interpreted with care, because the resulting parameter values might compensate for uncertainties in the model structure or in other parameters. Moreover, several critical issues appeared during this study, e.g. how to assimilate latent and sensible heat data when the energy balance is not closed in the data. Optimisation of the Q10 parameter showed that at some sites respiration was not sensitive at all to temperature, which shows only small variations in this region. Considering this, one could question the reliability of the partitioned fluxes (GPP/Reco) at these sites. This study also tests whether there is coherence between optimised parameter values of different sites within the tropical forest PFT and whether the forward model response to climate variations is similar between sites.
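The Bayesian cost function described above can be sketched directly: an observation-misfit term plus a prior-departure term, each weighted by the inverse of its error (co)variance, here taken diagonal for simplicity. A toy linear model stands in for ORCHIDEE; all values are illustrative.

```python
import numpy as np

# J(p) = (m(p) - y)ᵀ R⁻¹ (m(p) - y) + (p - p0)ᵀ B⁻¹ (p - p0)
# with diagonal R (observation error) and B (prior error), passed as variances.
def bayes_cost(p, obs, model, p_prior, obs_var, prior_var):
    r_obs = model(p) - obs
    r_par = p - p_prior
    return float(r_obs @ (r_obs / obs_var) + r_par @ (r_par / prior_var))

# Toy linear "model" standing in for ORCHIDEE's flux simulation.
model = lambda p: np.array([p[0] + p[1], p[0] - p[1]])
p_prior = np.array([1.0, 0.5])
obs = model(np.array([1.2, 0.4]))                     # synthetic, noise-free
j_prior = bayes_cost(p_prior, obs, model, p_prior, 0.1, 1.0)
j_true = bayes_cost(np.array([1.2, 0.4]), obs, model, p_prior, 0.1, 1.0)
```

The prior term is what regularises the inversion: parameters only move away from their a priori values when the data misfit reduction outweighs the penalty.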
Two-machine flow shop scheduling integrated with preventive maintenance planning
NASA Astrophysics Data System (ADS)
Wang, Shijin; Liu, Ming
2016-02-01
This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop with time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results with several large problem sizes and different configurations indicate the potential benefits of integrated scheduling solution and the results also show that proposed GA-based heuristics are efficient for the integrated problem.
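The deterministic building block of this problem, the makespan of a job sequence on a two-machine flow shop, can be sketched together with Johnson's rule, which is optimal when maintenance and failures are ignored. The job data are illustrative, not from the paper's experiments.

```python
# Makespan of a fixed sequence on a two-machine flow shop: machine 2 can
# start a job only after machine 1 finishes it and machine 2 is free.
def makespan(seq, p1, p2):
    t1 = t2 = 0
    for j in seq:
        t1 += p1[j]                     # machine 1 finishes job j
        t2 = max(t2, t1) + p2[j]        # machine 2 starts after both are free
    return t2

# Johnson's rule: jobs with p1 <= p2 first (ascending p1),
# then the rest in descending p2.
def johnson(p1, p2):
    n = len(p1)
    front = sorted((j for j in range(n) if p1[j] <= p2[j]), key=lambda j: p1[j])
    back = sorted((j for j in range(n) if p1[j] > p2[j]), key=lambda j: -p2[j])
    return front + back

p1, p2 = [3, 5, 1, 6], [2, 4, 7, 3]
seq = johnson(p1, p2)
best = makespan(seq, p1, p2)
```

The integrated problem in the paper evaluates an expected version of this makespan, with Weibull-distributed failures and PM decisions inserted before jobs, which is why GA-based heuristics are used instead of a closed-form rule.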
Using Machine-Learning and Visualisation to Facilitate Learner Interpretation of Source Material
ERIC Educational Resources Information Center
Wolff, Annika; Mulholland, Paul; Zdrahal, Zdenek
2014-01-01
This paper describes an approach for supporting inquiry learning from source materials, realised and tested through a tool-kit. The approach is optimised for tasks that require a student to make interpretations across sets of resources, where opinions and justifications may be hard to articulate. We adopt a dialogue-based approach to learning…
New Trends in Forging Technologies
NASA Astrophysics Data System (ADS)
Behrens, B.-A.; Hagen, T.; Knigge, J.; Elgaly, I.; Hadifi, T.; Bouguecha, A.
2011-05-01
Limited natural resources increase the demand on highly efficient machinery and transportation means. New energy-saving mobility concepts call for design optimisation through downsizing of components and the choice of corrosion-resistant materials possessing high strength-to-density ratios. Component downsizing can be performed either by constructive structural optimisation or by substituting heavy materials with lighter high-strength ones. In this context, forging plays an important role in manufacturing load-optimised structural components. At the Institute of Metal Forming and Metal-Forming Machines (IFUM) various innovative forging technologies have been developed. With regard to structural optimisation, different strategies for localised reinforcement of components were investigated. Locally induced strain hardening by means of cold forging under a superimposed hydrostatic pressure could be realised. In addition, controlled martensitic zones could be created through forming-induced phase conversion in metastable austenitic steels. Other research focused on the replacement of heavy steel parts with high-strength nonferrous alloys or hybrid material compounds. Several forging processes of magnesium, aluminium and titanium alloys for different aeronautical and automotive applications were developed. The whole process chain from material characterisation via simulation-based process design to the production of the parts has been considered. The feasibility of forging complex shaped geometries using these alloys was confirmed. In spite of the difficulties caused by machine noise and high temperature, the acoustic emission (AE) technique has been successfully applied for online monitoring of forging defects. A new AE analysis algorithm has been developed, so that different signal patterns due to various events such as product/die cracking or die wear can be detected and classified.
Further, the feasibility of the mentioned forging technologies was proven by means of finite element analysis (FEA). For example, the integrity of forging dies with respect to crack initiation due to thermo-mechanical fatigue, as well as the ductile damage of forgings, was investigated with the help of cumulative damage models. In this paper some of the mentioned approaches are described.
Coil optimisation for transcranial magnetic stimulation in realistic head geometry.
Koponen, Lari M; Nieminen, Jaakko O; Mutanen, Tuomas P; Stenroos, Matti; Ilmoniemi, Risto J
Transcranial magnetic stimulation (TMS) allows focal, non-invasive stimulation of the cortex. A TMS pulse is inherently weakly coupled to the cortex; thus, magnetic stimulation requires both high current and high voltage to reach sufficient intensity. These requirements limit, for example, the maximum repetition rate and the maximum number of consecutive pulses with the same coil due to the rise of its temperature. To develop methods to optimise, design, and manufacture energy-efficient TMS coils in realistic head geometry with an arbitrary overall coil shape. We derive a semi-analytical integration scheme for computing the magnetic field energy of an arbitrary surface current distribution, compute the electric field induced by this distribution with a boundary element method, and optimise a TMS coil for focal stimulation. Additionally, we introduce a method for manufacturing such a coil by using Litz wire and a coil former machined from polyvinyl chloride. We designed, manufactured, and validated an optimised TMS coil and applied it to brain stimulation. Our simulations indicate that this coil requires less than half the power of a commercial figure-of-eight coil, with a 41% reduction due to the optimised winding geometry and a partial contribution due to our thinner coil former and reduced conductor height. With the optimised coil, the resting motor threshold of abductor pollicis brevis was reached with the capacitor voltage below 600 V and peak current below 3000 A. The described method allows designing practical TMS coils that have considerably higher efficiency than conventional figure-of-eight coils. Copyright © 2017 Elsevier Inc. All rights reserved.
Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0
NASA Astrophysics Data System (ADS)
Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; Luke, Catherine M.
2016-08-01
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model-data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation that controls both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, users' applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
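The bin packing heuristic used as a comparison baseline can be sketched as first-fit decreasing placement of VMs onto hosts by resource demand. The single-resource model, capacities and demands below are illustrative assumptions, not the paper's benchmark.

```python
# First-fit decreasing: place VMs in order of decreasing CPU demand onto the
# first host with enough remaining capacity, opening a new host if none fits.
def first_fit_decreasing(vm_cpu, host_capacity):
    hosts = []                                  # remaining capacity per host
    placement = {}
    for vm, need in sorted(vm_cpu.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if free >= need:
                hosts[i] -= need
                placement[vm] = i
                break
        else:                                   # open a new physical machine
            hosts.append(host_capacity - need)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

placement, n_hosts = first_fit_decreasing(
    {"vm1": 6, "vm2": 5, "vm3": 4, "vm4": 3}, host_capacity=10)
```

Metaheuristics such as ICA improve on this greedy baseline by searching placements that jointly minimise the host count, power consumption and wastage across several resource dimensions.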
Gray, John
2017-01-01
Machine-to-machine (M2M) communication is a key enabling technology for industrial internet of things (IIoT)-empowered industrial networks, where machines communicate with one another for collaborative automation and intelligent optimisation. This new industrial computing paradigm features high-quality connectivity, ubiquitous messaging, and interoperable interactions between machines. However, manufacturing IIoT applications have specificities that distinguish them from many other internet of things (IoT) scenarios in machine communications. By highlighting the key requirements and the major technical gaps of M2M in industrial applications, this article describes a collaboration-oriented M2M (CoM2M) messaging mechanism focusing on flexible connectivity and discovery, ubiquitous messaging, and semantic interoperability that are well suited for the production line-scale interoperability of manufacturing applications. The designs toward machine collaboration and data interoperability at both the communication and semantic level are presented. Then, the application scenarios of the presented methods are illustrated with a proof-of-concept implementation in the PicknPack food packaging line. Eventually, the advantages and some potential issues are discussed based on the PicknPack practice. PMID:29165347
Optimised analytical models of the dielectric properties of biological tissue.
Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O' Halloran, Martin
2017-05-01
The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole for the computational investigation of electromagnetic wave propagation within the body. The model parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimates or by large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissue. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently outperforms all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissue, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
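A hedged sketch of fitting a single-pole Debye model is shown below. It replaces the paper's two-stage genetic algorithm with a much simpler scheme (a scan over the relaxation time plus linear least squares for the remaining parameters) and uses synthetic, water-like data; it illustrates the fitting problem, not the paper's method.

```python
import numpy as np

# Single-pole Debye model: eps(w) = eps_inf + d_eps / (1 + j*w*tau).
# The model is linear in (eps_inf, d_eps) once tau is fixed, so we scan
# candidate tau values and solve a complex least-squares problem for each.
def fit_debye(w, eps_meas, taus):
    best = None
    for tau in taus:
        basis = 1.0 / (1 + 1j * w * tau)
        A = np.column_stack([np.ones_like(w), basis])
        coef, *_ = np.linalg.lstsq(A, eps_meas, rcond=None)
        err = np.sum(np.abs(A @ coef - eps_meas) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0].real, coef[1].real, tau)
    return best[1:]                                   # eps_inf, d_eps, tau

w = 2 * np.pi * np.logspace(8.7, 10.3, 40)            # roughly 0.5-20 GHz
truth = 4.0 + 36.0 / (1 + 1j * w * 8e-12)             # assumed water-like pole
taus = np.linspace(1e-12, 2e-11, 191)
eps_inf, d_eps, tau = fit_debye(w, truth, taus)
```

A multi-pole fit with a conductivity term makes the search space multimodal, which is why evolutionary approaches like the paper's two-stage GA are attractive there.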
Optimising Microbial Growth with a Bench-Top Bioreactor
ERIC Educational Resources Information Center
Baker, A. M. R.; Borin, S. L.; Chooi, K. P.; Huang, S. S.; Newgas, A. J. S.; Sodagar, D.; Ziegler, C. A.; Chan, G. H. T.; Walsh, K. A. P.
2006-01-01
The effects of impeller size, agitation and aeration on the rate of yeast growth were investigated using bench-top bioreactors. This exercise, carried out over a six-month period, served as an effective demonstration of the importance of different operating parameters on cell growth and provided a means of determining the optimisation conditions…
Mutual information-based LPI optimisation for radar network
NASA Astrophysics Data System (ADS)
Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun
2015-07-01
Radar network can offer significant performance improvement for target detection and information extraction employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve LPI performance for radar network. Based on radar network system model, we first provide Schleher intercept factor for radar network as an optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented, where for a predefined MI threshold, Schleher intercept factor for radar network is minimised by optimising the transmission power allocation among radars in the network such that the enhanced LPI performance for radar network can be achieved. The genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Some simulations demonstrate that the proposed algorithm is valuable and effective to improve the LPI performance for radar network.
Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0
Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; ...
2016-08-25
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate–carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model–data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. Furthermore, the new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
Mixing formula for tissue-mimicking silicone phantoms in the near infrared
NASA Astrophysics Data System (ADS)
Böcklin, C.; Baumann, D.; Stuker, F.; Fröhlich, Jürg
2015-03-01
The knowledge of accurate optical parameters of materials is paramount in biomedical optics applications and numerical simulations of such systems. Phantom materials with variable but predefined parameters are needed to optimise these systems. An optimised integrating sphere measurement setup and reconstruction algorithm are presented in this work to determine the optical properties of silicone rubber based phantoms whose absorption and scattering properties are altered with TiO2 and carbon black particles. A mixing formula for all constituents is derived and allows the creation of phantoms with predefined optical properties.
NASA Astrophysics Data System (ADS)
Vass, J.; Šmíd, R.; Randall, R. B.; Sovka, P.; Cristalli, C.; Torcianti, B.
2008-04-01
This paper presents a statistical technique to enhance vibration signals measured by laser Doppler vibrometry (LDV). The method has been optimised for LDV signals measured on bearings of universal electric motors and applied to quality control of washing machines. Inherent problems of LDV are addressed, particularly the speckle noise occurring when rough surfaces are measured. The presence of speckle noise is detected using a new scalar indicator, the kurtosis ratio (KR), specifically designed to quantify the amount of random impulses generated by this noise. The KR is a ratio of the standard kurtosis and a robust estimate of kurtosis, thus indicating outliers in the data. Since it is inefficient to reject the signals affected by the speckle noise, an algorithm for selecting an undistorted portion of a signal is proposed. The algorithm operates in the time domain and is thus fast and simple. The algorithm includes band-pass filtering and segmentation of the signal, as well as thresholding of the KR computed for each filtered signal segment. Algorithm parameters are discussed in detail and instructions for optimisation are provided. Experimental results demonstrate that speckle noise is effectively avoided in severely distorted signals, thus improving the signal-to-noise ratio (SNR) significantly. Typical faults are finally detected using squared envelope analysis. It is also shown that the KR of the band-pass filtered signal is related to the spectral kurtosis (SK).
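The kurtosis-ratio indicator can be sketched as follows; the quantile-based robust estimator is an assumed stand-in, since the abstract does not specify the exact robust kurtosis estimate used:

```python
import numpy as np

def kurtosis(x):
    """Standard moment-based kurtosis: very sensitive to rare impulses."""
    x = np.asarray(x, dtype=float)
    m = x - x.mean()
    return np.mean(m**4) / np.mean(m**2)**2

def robust_kurtosis(x):
    """Quantile-based tail/core spread ratio, a robust kurtosis proxy
    (assumed stand-in for the paper's robust estimate).
    Roughly 2.91 for Gaussian data."""
    q025, q25, q75, q975 = np.quantile(x, [0.025, 0.25, 0.75, 0.975])
    return (q975 - q025) / (q75 - q25)

def kurtosis_ratio(segment):
    """KR indicator: close to kurtosis/2.91 for clean Gaussian-like noise,
    large when random impulses (speckle noise) are present."""
    return kurtosis(segment) / robust_kurtosis(segment)
```

Thresholding the KR of each band-pass-filtered segment, as the paper describes, then keeps only the segments whose KR stays near the clean-signal baseline.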
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine
2016-04-01
Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to the reductions found with site-specific optimisations. Finally, we show that calculation of the second derivative of JULES allows us to produce posterior probability density functions of the parameters, showing how knowledge of parameter values is constrained by observations.
NASA Astrophysics Data System (ADS)
Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.
2017-12-01
In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job-rejected weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into one single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on the ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.
A review on simple assembly line balancing type-e problem
NASA Astrophysics Data System (ADS)
Jusop, M.; Rashid, M. F. F. Ab
2015-12-01
Simple assembly line balancing (SALB) is an attempt to assign tasks to the various workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the simple assembly line balancing Type-E problem (SALB-E), since it is a general and complex problem. The SALB-E problem is the variant of SALB that considers the number of workstations and the cycle time simultaneously for the purpose of maximising line efficiency. This paper reviews previous work on optimising the SALB-E problem. It also reviews the Genetic Algorithm approaches that have been used to optimise SALB-E. From the review, it was found that none of the existing works consider resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to the improvement of productivity in real industrial applications.
Optimisation and evaluation of hyperspectral imaging system using machine learning algorithm
NASA Astrophysics Data System (ADS)
Suthar, Gajendra; Huang, Jung Y.; Chidangil, Santhosh
2017-10-01
Hyperspectral imaging (HSI), also called imaging spectrometry, originated from remote sensing. It is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. The spatially resolved spectral information obtained by HSI provides diagnostic information about an object's physiology, morphology, and composition. The present work involves testing and evaluating the performance of a hyperspectral imaging system. The methodology involved manually acquiring reflectance images or scans of the objects, for which cabbage and tomato were used. The data were then converted to the required format and analysed using machine learning algorithms. The machine learning algorithms applied were able to distinguish between the objects present in the hypercube obtained from the scan. It was concluded from these results that the system was working as expected, as observed from the distinct spectra obtained using the machine learning algorithms.
NASA Astrophysics Data System (ADS)
Vivek, Tiwary; Arunkumar, P.; Deshpande, A. S.; Vinayak, Malik; Kulkarni, R. M.; Asif, Angadi
2018-04-01
Conventional investment casting is one of the oldest and most economical manufacturing techniques for producing intricate and complex part geometries. However, investment casting is considered economical only if the volume of production is large: design iterations and design optimisations prove very costly due to the time and tooling cost of making dies for producing wax patterns. With the advent of additive manufacturing technology, however, plastic patterns promise very good potential to replace wax patterns. This approach can be very useful for low-volume production and laboratory requirements, since the cost and time required to incorporate design changes are very low. This paper discusses the steps involved in developing polymer nanocomposite filaments and checking their suitability for investment casting. The process parameters of the 3D printing machine are also optimised using the DOE technique to obtain mechanically stronger plastic patterns. The study develops a framework for rapid investment casting for laboratory as well as industrial requirements.
Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike
2015-07-15
The extensive use of radium during the 20th century for industrial, military and pharmaceutical purposes has led to a large number of contaminated legacy sites across Europe and North America. Sites that pose a high risk to the general public can present expensive and long-term remediation projects. Often the most pragmatic remediation approach is routine monitoring with gamma-ray detectors to identify, in real time, the signal from the most hazardous heterogeneous contamination (hot particles), thus facilitating their removal and safe disposal. However, current detection systems do not fully utilise all spectral information, resulting in low detection rates and ultimately an increased risk to human health. The aim of this study was to establish an optimised detector-algorithm combination. To achieve this, field data were collected using two handheld detectors (sodium iodide and lanthanum bromide), and a number of Monte Carlo simulated hot particles were randomly injected into the field data. This allowed the detection rates of conventional deterministic (gross counts) and machine learning (neural networks and support vector machines) algorithms to be assessed. The results demonstrated that a neural network operating on a sodium iodide detector provided the best detection capability. Compared to deterministic approaches, this optimised detection system could detect a hot particle on average 10 cm deeper into the soil column, or with half the activity at the same depth. It was also found that noise introduced by internal contamination restricted lanthanum bromide for this application.
NASA Astrophysics Data System (ADS)
Brown, Nicholas W. A.
Composite parts can be manufactured to near-net shape with minimum wastage of material; however, there is almost always a need for further machining. The most common post-manufacture machining operations for composite materials are to create holes for assembly. This thesis presents and discusses a thermally-assisted piercing process that can be used as a technique for introducing holes into thermoplastic composites. The thermally-assisted piercing process heats up, and locally melts, thermoplastic composites to allow material to be displaced around a hole, rather than cut out from the structure. This investigation was concerned with how the variation of piercing process parameters (such as the size of the heated area, the temperature of the laminate prior to piercing and the geometry of the piercing spike) changed the material microstructure within carbon fibre/polyetheretherketone (PEEK) laminates. The variation of process parameters was found to significantly affect the formation of resin-rich regions, voids and the fibre volume fraction in the material surrounding the hole. Mechanical testing (using open-hole tension, open-hole compression, plain-pin bearing and bolted bearing tests) showed that the microstructural features created during piercing had a significant influence on the resulting mechanical performance of specimens. By optimising the process parameters, strength improvements of up to 11% and 21% were found for pierced specimens when compared with drilled specimens for open-hole tension and compression loading, respectively. For plain-pin and bolted bearing tests, maximum strengths of 77% and 85%, respectively, were achieved when compared with drilled holes. Improvements in first failure force (by 10%) and the stress at 4% hole elongation (by 18%), however, were measured for the bolted bearing tests when compared to drilled specimens.
The overall performance of pierced specimens in an industrially relevant application ultimately depends on the properties required for that specific scenario. The results within this thesis show that the piercing technique could be used as a direct replacement to drilling depending on this application.
FSW of Aluminum Tailor Welded Blanks across Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hovanski, Yuri; Upadhyay, Piyush; Carlson, Blair
2015-02-16
Development and characterization of friction stir welded aluminum tailor welded blanks was successfully carried out on three separate machine platforms. Each was a commercially available, gantry-style, multi-axis machine designed specifically for friction stir welding. Weld parameters were developed to support high-volume production of dissimilar-thickness aluminum tailor welded blanks at speeds of 3 m/min and greater. Parameters originally developed on an ultra-high stiffness servo-driven machine were first transferred to a high stiffness servo-hydraulic friction stir welding machine, and subsequently transferred to a purpose-built machine designed to accommodate thin sheet aluminum welding. The inherent beam stiffness, bearing compliance, and control system for each machine were distinctly unique, which posed specific challenges in transferring welding parameters across machine platforms. This work documents the challenges imposed by successfully transferring weld parameters from machine to machine, produced by different manufacturers and with unique control systems and interfaces.
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption is determined using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
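A second-order response-surface model of this kind can be fitted by ordinary least squares; the three machining factors and the synthetic response below are illustrative assumptions, not the paper's measured data:

```python
import numpy as np

def rsm_design_matrix(X):
    """Second-order RSM terms for three machining factors
    (e.g. cutting speed, feed and depth of cut -- illustrative names):
    intercept, linear, two-factor interaction and squared terms."""
    x1, x2, x3 = X.T
    one = np.ones(len(X))
    return np.column_stack([one, x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

def fit_rsm(X, y):
    """Least-squares estimate of the ten model coefficients."""
    beta, *_ = np.linalg.lstsq(rsm_design_matrix(X), y, rcond=None)
    return beta
```

The fitted polynomial can then be handed to a desirability-function or other optimiser to locate the parameter setting with minimum predicted power.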
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered to evaluate the performance of the black hole algorithm (BHA). BHA is based on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the process using a single objective at a time. The results obtained using BHA are found to be better than those of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
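The core loop of BHA can be sketched as follows; the paper's ECM regression models for MRR and overcut are not reproduced here, so a simple sphere function stands in for the objective in the usage below:

```python
import numpy as np

def black_hole_optimise(f, bounds, n_stars=20, iters=200, seed=0):
    """Black hole algorithm sketch: the best candidate (the 'black hole')
    attracts the rest of the population, and candidates crossing the
    event horizon are replaced by fresh random ones."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    stars = rng.uniform(lo, hi, size=(n_stars, len(lo)))
    fit = np.array([f(s) for s in stars])
    i_best = fit.argmin()
    bh, bh_fit = stars[i_best].copy(), fit[i_best]
    for _ in range(iters):
        stars += rng.random(stars.shape) * (bh - stars)   # drift towards the black hole
        fit = np.array([f(s) for s in stars])
        if fit.min() < bh_fit:                            # a better star swaps roles
            i_best = fit.argmin()
            bh, bh_fit = stars[i_best].copy(), fit[i_best]
        radius = bh_fit / (bh_fit + fit.sum())            # event-horizon radius
        crossed = np.linalg.norm(stars - bh, axis=1) < radius
        stars[crossed] = rng.uniform(lo, hi, size=(crossed.sum(), len(lo)))
    return bh, bh_fit
```

With the ECM regression model of MRR (to be maximised, e.g. by minimising its negative) or OC substituted for `f`, the same loop searches the machining parameter ranges.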
Mohamed Johar, S; Embong, Z
2015-11-01
The optimisation of electrokinetic remediation of an alluvial soil, locally named Holyrood-Lunas, from the Sri Gading Industrial Area, Batu Pahat, Johor, Malaysia, was conducted in this research. This particular soil was chosen due to its relatively high level of background radiation, in a range between 139.2 and 539.4 nGy h(-1). As the background radiation is correlated with the amount of the parent nuclides (238)U and (232)Th, a remediation technique such as electrokinetics is very useful in reducing the concentrations of heavy metals and radionuclides in soils. Several series of electrokinetic experiments were performed at laboratory scale in order to study the influence of certain electrokinetic parameters in soil. The concentrations before (pre-electrokinetic) and after the experiment (post-electrokinetic) were determined via the X-ray fluorescence (XRF) analysis technique. The best electrokinetic parameter contributing to the highest achievable concentration removal of heavy metals and radionuclides in each experimental series was incorporated into a final electrokinetic experiment. Here, a high-purity germanium (HPGe) detector was used for radioactivity elemental analysis. The XRF results suggested that the most optimised electrokinetic parameters for Cr, Ni, Zn, As, Pb, Th and U were 3.0 h, 90 volts, 22.0 cm, a plate-shaped electrode of 8 × 8 cm and a 1-D configuration order, whereas the selected optimised electrokinetic parameters gave very low reductions of (238)U and (232)Th, at 0.23 ± 2.64 and 2.74 ± 23.78 ppm, respectively.
Pandey, Sonia; Swamy, S M Vijayendra; Gupta, Arti; Koli, Akshay; Patel, Swagat; Maulvi, Furqan; Vyas, Bhavin
2018-04-29
To optimise Eudragit/Surelease®-coated pH-sensitive pellets for controlled and targeted drug delivery to the colon tissue and to avoid the frequent high dosing and associated side effects which restrict its use in colorectal-cancer therapy. The pellets were prepared using the extrusion-spheronisation technique. Box-Behnken and 3² full factorial designs were applied to optimise the process parameters [extruder sieve size, spheroniser speed, and spheroniser time] and the coating levels [%w/v of Eudragit S100/Eudragit L100 and Surelease®], respectively, to achieve smooth pellets of optimised size with sustained drug delivery without prior drug release in the upper gastrointestinal tract (GIT). The design proposed the optimised batch by selecting the independent variables at extruder sieve size (X1 = 1 mm), spheroniser speed (X2 = 900 revolutions per minute, rpm), and spheroniser time (X3 = 15 min) to achieve a pellet size of 0.96 mm, an aspect ratio of 0.98, and a roundness of 97.42%. The 16%w/v coating strength of Surelease® and 13%w/v coating strength of Eudragit showed pH-dependent sustained release up to 22.35 h (t99%). The organ distribution study showed the absence of the drug in the upper GIT tissue and the presence of a high level of capecitabine in the caecum and colon tissue. Thus, the Eudragit coat prevents the release of drug in the stomach and the inner Surelease® coat provides sustained drug release in the colon tissue. The study demonstrates the potential of optimised Eudragit/Surelease®-coated capecitabine pellets as an effective colon-targeted delivery system to avoid frequent high dosing and the associated systemic side effects of the drug.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
An Optimisation Procedure for the Conceptual Analysis of Different Aerodynamic Configurations
2000-06-01
G. Lombardi, G. Mengali (Department of Aerospace Engineering, University of Pisa, Via Diotisalvi 2, 56126 Pisa, Italy); F. Beux (Scuola Normale Superiore)...obtain engines, gears and various systems; their weights and centre of gravity positions...configurations with improved performances with respect to a...design parameters have been arranged for cruise: payload, velocity, range, cruise height, engine...The optimisation process includes the following steps:...
Quantum chemical calculations of Cr2O3/SnO2 using density functional theory method
NASA Astrophysics Data System (ADS)
Jawaher, K. Rackesh; Indirajith, R.; Krishnan, S.; Robert, R.; Das, S. Jerome
2018-03-01
Quantum chemical calculations have been employed to study the molecular effects produced by the Cr2O3/SnO2 optimised structure. The theoretical parameters of the transparent conducting metal oxides were calculated using the DFT/B3LYP/LANL2DZ method. The optimised bond parameters, such as bond lengths, bond angles and dihedral angles, were calculated at the same level of theory. The non-linear optical property of the title compound was calculated using a first-order hyperpolarisability calculation. The calculated HOMO-LUMO analysis explains the charge transfer interaction within the molecule. In addition, MEP and Mulliken atomic charges were also calculated and analysed.
Keogh, Pauraic; Ray, Noel J; Lynch, Christopher D; Burke, Francis M; Hannigan, Ailish
2004-12-01
This investigation determined the minimum exposure times consistent with optimised surface microhardness parameters for a commercial resin composite cured using a "first-generation" light-emitting diode activation lamp. Disk specimens were exposed and surface microhardness numbers measured at the top and bottom surfaces for elapsed times of 1 hour and 24 hours. Bottom/top microhardness number ratios were also calculated. Most microhardness data increased significantly over the elapsed time interval, but microhardness ratios (bottom/top) were dependent on exposure time only. A minimum exposure of 40 s is appropriate to optimise microhardness parameters for the combination of resin composite and lamp investigated.
Optimisation of a propagation-based x-ray phase-contrast micro-CT system
NASA Astrophysics Data System (ADS)
Nesterets, Yakov I.; Gureyev, Timur E.; Dimmock, Matthew R.
2018-03-01
Micro-CT scanners find applications in many areas ranging from biomedical research to material sciences. In order to provide spatial resolution on a micron scale, these scanners are usually equipped with micro-focus, low-power x-ray sources and hence require long scanning times to produce high-resolution 3D images of the object with acceptable contrast-to-noise ratio. Propagation-based phase-contrast tomography (PB-PCT) has the potential to significantly improve the contrast-to-noise ratio (CNR) or, alternatively, reduce the image acquisition time while preserving the CNR and the spatial resolution. We propose a general approach for the optimisation of the PB-PCT imaging system. When applied to an imaging system with fixed parameters of the source and detector, this approach requires optimisation of only two independent geometrical parameters of the imaging system, i.e. the source-to-object distance R1 and the geometrical magnification M, in order to produce the best spatial resolution and CNR. If, in addition to R1 and M, the system parameter space also includes the source size and the anode potential, this approach allows one to find a unique configuration of the imaging system that produces the required spatial resolution and the best CNR.
Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S
2015-01-01
Background: Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods: Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results: There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions: Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
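The cohort-level objective in question, the AUROC, can be computed directly with the rank-based (Mann-Whitney) formula:

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC: the probability that a randomly chosen positive
    case is scored above a randomly chosen negative (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks over tied scores
        tied = scores == s
        ranks[tied] = ranks[tied].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

For example, `auroc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])` gives 0.75. Optimising an SVM for this cohort-level quantity, rather than for per-example classification loss, is the distinction the study draws against traditional SVM training.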
Computer-aided diagnosis of melanoma using border and wavelet-based texture analysis.
Garnavi, Rahil; Aldeen, Mohammad; Bailey, James
2012-11-01
This paper presents a novel computer-aided diagnosis system for melanoma. The novelty lies in the optimised selection and integration of features derived from textural, border-based and geometrical properties of the melanoma lesion. The texture features are derived using wavelet decomposition, the border features are derived by constructing a boundary-series model of the lesion border and analysing it in the spatial and frequency domains, and the geometry features are derived from shape indexes. The optimised selection of features is achieved using the Gain-Ratio method, which is shown to be computationally efficient for the melanoma diagnosis application. Classification is done through the use of four classifiers, namely Support Vector Machine, Random Forest, Logistic Model Tree and Hidden Naive Bayes. The proposed diagnostic system is applied to a set of 289 dermoscopy images (114 malignant, 175 benign) partitioned into train, validation and test image sets. The system achieves an accuracy of 91.26% and an AUC value of 0.937 when 23 features are used. Other important findings include (i) the clear advantage gained in complementing texture with border and geometry features, compared to using texture information only, and (ii) the higher contribution of texture features than border-based features in the optimised feature set.
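The Gain-Ratio criterion used for feature selection can be sketched for discrete-valued features (continuous features such as wavelet energies would first need discretisation, a step assumed away here):

```python
import numpy as np
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of discrete values."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gain_ratio(feature, labels):
    """C4.5-style gain ratio: information gain of `feature` about `labels`,
    normalised by the split information (the feature's own entropy).
    Both arguments are lists of discrete values of equal length."""
    n = len(labels)
    cond = sum(
        (feature.count(v) / n)
        * entropy([l for f, l in zip(feature, labels) if f == v])
        for v in set(feature)
    )
    gain = entropy(labels) - cond
    split_info = entropy(feature)
    return gain / split_info if split_info > 0 else 0.0
```

Ranking features by this score and keeping the top-scoring ones yields a reduced set such as the 23 features the system was evaluated with.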
Reliability of the quench protection system for the LHC superconducting elements
NASA Astrophysics Data System (ADS)
Vergara Fernández, A.; Rodríguez-Mateos, F.
2004-06-01
The Quench Protection System (QPS) is the sole system in the Large Hadron Collider machine monitoring the signals from the superconducting elements (bus bars, current leads, magnets) which form the cold part of the electrical circuits. The basic functions to be accomplished by the QPS during the machine operation will be briefly presented. With more than 4000 internal trigger channels (quench detectors and others), the final QPS design is the result of an optimised balance between on-demand availability and false quench reliability. The built-in redundancy for the different equipment will be presented, focusing on the calculated, expected number of missed quenches and false quenches. Maintenance strategies in order to improve the performance over the years of operation will be addressed.
NASA Astrophysics Data System (ADS)
Vanhuyse, Johan; Deckers, Elke; Jonckheere, Stijn; Pluymers, Bert; Desmet, Wim
2016-02-01
The Biot theory is commonly used for the simulation of the vibro-acoustic behaviour of poroelastic materials. However, it relies on a number of material parameters. These can be hard to characterise and require dedicated measurement setups, yielding a time-consuming and costly characterisation. This paper presents a characterisation method which is able to identify all material parameters using only an impedance tube. The method relies on the assumptions that the sample is clamped within the tube, that the shear wave is excited and that the acoustic field is no longer one-dimensional. This paper numerically shows the potential of the developed method. To this end, it performs a sensitivity analysis of the quantification parameters, i.e. reflection coefficients and relative pressures, and a parameter estimation using global optimisation methods. A 3-step procedure is developed and validated. It is shown that even in the presence of numerically simulated noise this procedure leads to a robust parameter estimation.
NASA Astrophysics Data System (ADS)
Katata, Lebogang; Tshweu, Lesego; Naidoo, Saloshnee; Kalombo, Lonji; Swai, Hulda
2012-11-01
Efavirenz (EFV) is one of the first-line antiretroviral drugs recommended by the World Health Organisation for treating HIV. It is a hydrophobic drug that suffers from low aqueous solubility (4 μg/mL), which leads to limited oral absorption and low bioavailability. In order to improve its oral bioavailability, nano-sized polymeric delivery systems are suggested. Spray-dried polycaprolactone-efavirenz (PCL-EFV) nanoparticles were prepared by the double emulsion method. The Taguchi method, a statistical design with an L8 orthogonal array, was implemented to optimise the formulation parameters of the PCL-EFV nanoparticles. The type of sugar (lactose or trehalose), surfactant concentration and solvent (dichloromethane or ethyl acetate) were chosen as significant parameters affecting the particle size and polydispersity index (PDI). Small nanoparticles with an average particle size of less than 254 ± 0.95 nm were obtained with ethyl acetate as the organic solvent, compared to more than 360 ± 19.96 nm for dichloromethane. In this study, the type of solvent and sugar were the parameters with the greatest influence on particle size and PDI. The Taguchi method proved to be a quick, valuable tool for optimising the particle size and PDI of PCL-EFV nanoparticles. The optimised experimental values for the nanoparticle size and PDI were 217 ± 2.48 nm and 0.093 ± 0.02, respectively.
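The L8 orthogonal array underlying such a screening design, together with a Taguchi "smaller-is-better" signal-to-noise ratio for responses such as particle size or PDI, can be sketched as follows (the factor-to-column assignment is illustrative):

```python
import numpy as np

# L8 orthogonal array: 8 runs for up to 7 two-level factors (levels coded 0/1).
# Every column is balanced and every pair of columns contains each level
# combination exactly twice.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for responses to be minimised
    (e.g. particle size or PDI): -10 log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def main_effects(responses, factor):
    """Mean S/N at each level of one factor column; the level with the
    higher mean S/N is preferred."""
    sn = np.array([sn_smaller_is_better([r]) for r in responses])
    col = L8[:, factor]
    return sn[col == 0].mean(), sn[col == 1].mean()
```

Comparing the per-level mean S/N for each assigned factor is how a Taguchi analysis identifies, for instance, solvent and sugar type as the dominant influences on size and PDI.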
O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.
2012-01-01
Comparatively few studies have directly addressed the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), or the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era, when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a compromise solution preferred by the decision maker becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using traditional methods. Thus, a hybrid approach that combines a multi-objective evolutionary algorithm, physical programming, and the differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. The multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preferences are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision makers' preferences.
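The convert-then-optimise pipeline described above can be sketched in a few lines: a hypothetical two-objective problem is scalarised with fixed preference weights (a crude stand-in for physical programming, which uses nonlinear class functions) and the result is minimised with a basic DE/rand/1/bin loop. All objective functions, weights and DE settings here are illustrative assumptions, not values from the paper.

```python
import random

random.seed(1)

def objectives(x):
    """Toy stand-ins for two logistics objectives (illustrative only)."""
    cost = (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    risk = (x[0] + x[1] - 2.0) ** 2
    return cost, risk

def scalarise(x, w=(0.7, 0.3)):
    """Crude stand-in for physical programming: weighted sum of objectives."""
    c, r = objectives(x)
    return w[0] * c + w[1] * r

def differential_evolution(f, bounds, pop_size=20, f_weight=0.6, cr=0.9, gens=200):
    """Basic DE/rand/1/bin minimiser with bound clipping."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + f_weight * (b[d] - c[d]) if random.random() < cr
                     else pop[i][d] for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) < f(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(scalarise, [(-5.0, 5.0), (-5.0, 5.0)])
print(best, scalarise(best))
```

In the paper's approach the scalarisation comes from physical programming preference functions over four objectives; the DE loop itself is unchanged by that substitution.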
A shrinking hypersphere PSO for engineering optimisation problems
NASA Astrophysics Data System (ADS)
Yadav, Anupam; Deep, Kusum
2016-03-01
Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach for solving COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed so that the movement of each particle is governed by shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of SHPSO is compared against state-of-the-art algorithms on a set of 24 benchmark problems. An exhaustive comparison of the results is provided statistically as well as graphically. Moreover, three engineering design problems, namely the welded beam, compression spring and pressure vessel design problems, are solved using SHPSO and the results are compared with those of the state-of-the-art algorithms.
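A minimal sketch of the two ingredients named in the abstract, assuming standard formulations: a global-best PSO whose step size is bounded by a radius that shrinks over the iterations (loosely mimicking the shrinking hyperspheres), and a parameter-free feasibility rule for constraint handling. The toy objective, constraint and coefficients are invented for illustration and are not the benchmark problems of the paper.

```python
import random

random.seed(2)

def f(x):  # objective to minimise (illustrative)
    return sum(v * v for v in x)

def violation(x):  # constraint g(x) = 1 - (x0 + x1) <= 0, i.e. x0 + x1 >= 1
    return max(0.0, 1.0 - (x[0] + x[1]))

def better(a, b):
    """Parameter-free feasibility rule: a feasible point beats an infeasible
    one; otherwise compare violations, otherwise objective values."""
    va, vb = violation(a), violation(b)
    if va == 0 and vb == 0:
        return f(a) < f(b)
    if va == 0 or vb == 0:
        return va == 0
    return va < vb

def shpso(dim=2, n=30, iters=300, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: (violation(p), f(p)))
    for t in range(iters):
        radius = (hi - lo) * (1.0 - t / iters)  # shrinking step bound
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = min(max(vel[i][d], -radius), radius)
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if better(pos[i], pbest[i]):
                pbest[i] = pos[i][:]
                if better(pbest[i], gbest):
                    gbest = pbest[i][:]
    return gbest

best = shpso()
print(best, f(best), violation(best))
```

On this toy problem the constrained optimum is at (0.5, 0.5) with objective 0.5; the shrinking bound forces coarse exploration early and fine search late.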
Achieving optimal SERS through enhanced experimental design
Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.; Goodacre, Royston
2016-01-01
One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905
Analysis of the car body stability performance after coupler jack-knifing during braking
NASA Astrophysics Data System (ADS)
Guo, Lirong; Wang, Kaiyun; Chen, Zaigang; Shi, Zhiyong; Lv, Kaikai; Ji, Tiancheng
2018-06-01
This paper aims to improve car body stability by optimising locomotive parameters when coupler jack-knifing occurs during braking. In order to prevent car body instability caused by coupler jack-knifing, a multi-locomotive simulation model and a series of field braking tests were developed to analyse the influence of the secondary suspension and the secondary lateral stopper on car body stability during braking. According to the simulation and test results, increasing the secondary lateral stiffness helps limit the car body yaw angle during braking. However, it seriously affects the dynamic performance of the locomotive. For the secondary lateral stopper, its lateral stiffness and free clearance have a significant influence on improving the car body stability, with less effect on the dynamic performance of the locomotive. An optimised measure was proposed and adopted on the test locomotive: the lateral stiffness of the secondary lateral stopper was increased to 7875 kN/m, while its free clearance was decreased to 10 mm. The optimised locomotive has excellent dynamic and safety performance. Compared with the original locomotive, the maximum car body yaw angle and coupler rotation angle of the optimised locomotive were reduced by 59.25% and 53.19%, respectively, in practical application. The maximum derailment coefficient was 0.32, and the maximum wheelset lateral force was 39.5 kN. Hence, reasonable secondary lateral stopper parameters can improve the car body stability and the running safety of a heavy haul locomotive.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
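A toy simulation of the core idea, under the assumption that DSP behaves like a bounded-staleness scheme: workers of different speeds push updates to a parameter server, and a fast worker is held back once it runs too far ahead of the slowest one. The class, worker names and numbers are all illustrative; the actual DSP adjusts the bound dynamically from its performance monitoring model.

```python
import random

random.seed(8)

class ParameterServer:
    """Minimal PS holding one scalar parameter and per-worker clocks."""
    def __init__(self, max_staleness=4):
        self.param = 0.0
        self.clock = {w: 0 for w in ("w1", "w2", "w3")}
        self.max_staleness = max_staleness

    def allowed(self, worker):
        """A worker may push only if it is not too far ahead of the slowest."""
        return self.clock[worker] - min(self.clock.values()) < self.max_staleness

    def push(self, worker, grad, lr=0.1):
        self.param -= lr * grad
        self.clock[worker] += 1

def gradient(param, target=3.0):
    """Noisy gradient of (param - target)^2, standing in for a training step."""
    return 2 * (param - target) + random.gauss(0, 0.05)

ps = ParameterServer()
speeds = {"w1": 1.0, "w2": 0.6, "w3": 0.3}   # relative compute capability
for step in range(600):
    for w, speed in speeds.items():
        # a worker finishes a step with probability ~ its speed
        if random.random() < speed and ps.allowed(w):
            ps.push(w, gradient(ps.param))
print(round(ps.param, 2))
```

The staleness bound keeps the fast worker's clock within `max_staleness` of the slowest worker's, so the shared parameter still converges near the target despite the heterogeneous speeds.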
Experimental Investigation – Magnetic Assisted Electro Discharge Machining
NASA Astrophysics Data System (ADS)
Kesava Reddy, Chirra; Manzoor Hussain, M.; Satyanarayana, S.; Krishna, M. V. S. Murali
2018-04-01
Emerging technologies need advanced machined parts with high strength, temperature resistance and fatigue life, at low production cost and with good surface quality, to fit various industrial applications. The electro discharge machine is one of the most extensively used machines for manufacturing advanced parts that cannot be machined with high precision and accuracy by traditional machines. The machining of DIN 17350-1.2080 (high carbon, high chromium steel) using electro discharge machining is discussed in this paper. In the present investigation an effort is made to place a permanent magnet at various positions near the spark zone to improve the quality of the machined surface. Taguchi methodology is used to obtain the optimal choice for each machining parameter, such as peak current, pulse duration, gap voltage and servo reference voltage. The process parameters have a significant influence on the machining characteristics and surface finish. An improvement in surface finish is observed when the process parameters are set at the optimum condition under the influence of a magnetic field at various positions.
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational modal analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating the modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. Moreover, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding the issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting the parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
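The optimisation step can be sketched with an ACO_R-style continuous ant colony, which is one standard way to apply ant colonies to real-valued parameters: new candidate vectors are sampled from Gaussians centred on an archive of good solutions, inside deterministically fixed parameter intervals. Here it fits the natural frequency and damping ratio of a single synthetic mode; DAC-OMA itself is considerably more elaborate, and everything below is an illustrative sketch rather than the paper's algorithm.

```python
import math, random

random.seed(3)

# Synthetic magnitude of a single-mode frequency response function
# (true natural frequency wn = 12.0 rad/s, damping ratio zeta = 0.04).
ws = [0.5 * i for i in range(1, 61)]          # 0.5 ... 30 rad/s
def mag(wn, zeta, w):
    return 1.0 / math.sqrt((wn**2 - w**2) ** 2 + (2.0 * zeta * wn * w) ** 2)
data = [mag(12.0, 0.04, w) for w in ws]

def cost(p):  # squared error between the model and the "measured" curve
    return sum((mag(p[0], p[1], w) - y) ** 2 for w, y in zip(ws, data))

def aco_continuous(cost, bounds, ants=20, archive=10, iters=80, q=0.2):
    """ACO_R-style loop: sample new points from Gaussians centred on an
    archive of good solutions (bounds = deterministic parameter intervals)."""
    sols = sorted(([random.uniform(lo, hi) for lo, hi in bounds]
                   for _ in range(archive)), key=cost)
    for _ in range(iters):
        new = []
        for _ in range(ants):
            # rank-biased choice of a guiding solution (best ranks favoured)
            guide = sols[min(int(abs(random.gauss(0, q * archive))), archive - 1)]
            point = []
            for d, (lo, hi) in enumerate(bounds):
                sigma = sum(abs(s[d] - guide[d]) for s in sols) / (archive - 1)
                point.append(min(max(random.gauss(guide[d], sigma + 1e-9), lo), hi))
            new.append(point)
        sols = sorted(sols + new, key=cost)[:archive]
    return sols[0]

best = aco_continuous(cost, [(5.0, 20.0), (0.01, 0.2)])
print(best, cost(best))
```

The archive spread plays the role of pheromone: it narrows automatically as the colony converges on the resonance, while the fixed bounds keep the search inside the deterministically derived intervals.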
NASA Astrophysics Data System (ADS)
Deris, A. M.; Zain, A. M.; Sallehuddin, R.; Sharif, S.
2017-09-01
The electric discharge machine (EDM) is one of the widely used nonconventional machining processes for hard and difficult-to-machine materials. Due to the large number of machining parameters in EDM and its complicated structure, the selection of the optimal machining parameters remains a challenging task for researchers. This paper presents an experimental investigation and optimisation of machining parameters for the EDM process on a stainless steel 316L workpiece using the Harmony Search (HS) algorithm. A mathematical model was developed based on a regression approach with four input parameters, pulse-on time, peak current, servo voltage and servo speed, and one output response, dimensional accuracy (DA). The optimal result of the HS approach was compared with the regression analysis, and it was found that HS gave a better result by yielding the minimum DA value compared with the regression approach.
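The canonical Harmony Search loop can be sketched as follows, with an invented quadratic stand-in for the fitted regression model; the coefficients, parameter bounds and HS settings below are illustrative assumptions, not the paper's values.

```python
import random

random.seed(4)

def da_model(x):
    """Illustrative quadratic stand-in for the fitted regression model
    of dimensional accuracy (coefficients invented, not from the paper)."""
    ton, ip, sv, ss = x
    return (0.02 * (ton - 6) ** 2 + 0.05 * (ip - 4) ** 2
            + 0.01 * (sv - 50) ** 2 + 0.03 * (ss - 10) ** 2 + 0.1)

BOUNDS = [(2, 12), (1, 8), (30, 70), (2, 20)]  # ton, ip, sv, ss (illustrative)

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Canonical HS: improvise a new harmony, replace the worst if better."""
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:               # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:            # pitch adjustment
                    v += random.uniform(-1, 1) * bw * (hi - lo)
            else:                                    # random selection
                v = random.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new
    return min(memory, key=f)

best = harmony_search(da_model, BOUNDS)
print(best, da_model(best))
```

The three moves (memory consideration, pitch adjustment, random selection) are the whole algorithm; swapping in the real fitted regression model would reproduce the paper's optimisation step.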
Sun, Yu; Reynolds, Hayley M; Wraith, Darren; Williams, Scott; Finnegan, Mary E; Mitchell, Catherine; Murphy, Declan; Haworth, Annette
2018-04-26
There are currently no methods to estimate cell density in the prostate. This study aimed to develop predictive models to estimate prostate cell density from multiparametric magnetic resonance imaging (mpMRI) data at a voxel level using machine learning techniques. In vivo mpMRI data were collected from 30 patients before radical prostatectomy. Sequences included T2-weighted imaging, diffusion-weighted imaging and dynamic contrast-enhanced imaging. Ground truth cell density maps were computed from histology and co-registered with mpMRI. Feature extraction and selection were performed on the mpMRI data. Final models were fitted using three regression algorithms: multivariate adaptive regression splines (MARS), polynomial regression (PR) and the generalised additive model (GAM). Model parameters were optimised using leave-one-out cross-validation on the training data and model performance was evaluated on test data using root mean square error (RMSE) measurements. Predictive models to estimate voxel-wise prostate cell density were successfully trained and tested using the three algorithms. The best model (GAM) achieved an RMSE of 1.06 (± 0.06) × 10³ cells/mm² and a relative deviation of 13.3 ± 0.8%. Prostate cell density can be quantitatively estimated non-invasively from mpMRI data using high-quality co-registered data at a voxel level. These cell density predictions could be used for tissue classification, treatment response evaluation and personalised radiotherapy.
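The leave-one-out cross-validation loop used for model selection can be sketched on a toy 1-D polynomial regression: synthetic data stand in for the mpMRI features, and plain polynomial fits stand in for GAM/MARS/PR. Nothing below uses the paper's data or models.

```python
import math, random

random.seed(5)

# Synthetic (feature, response) pairs; the quadratic ground truth and noise
# level are invented for demonstration.
xs = [i / 10 for i in range(30)]
ys = [2.0 + 1.5 * x - 0.3 * x * x + random.gauss(0, 0.1) for x in xs]

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (naive Gaussian elimination, fine for tiny systems)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

def predict(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

def loocv_rmse(xs, ys, degree):
    """Leave-one-out CV: fit on n-1 points, score on the held-out one."""
    errs = []
    for k in range(len(xs)):
        coef = fit_poly(xs[:k] + xs[k + 1:], ys[:k] + ys[k + 1:], degree)
        errs.append((predict(coef, xs[k]) - ys[k]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

scores = {d: loocv_rmse(xs, ys, d) for d in (1, 2, 3)}
print(scores)
```

As in the paper, the model variant with the lowest held-out RMSE would be selected; here the quadratic fit wins because the synthetic ground truth is quadratic.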
NASA Astrophysics Data System (ADS)
Dasgupta, S.; Mukherjee, S.
2016-09-01
One of the most significant factors in metal cutting is tool life. In this research work, the effects of machining parameters on tool life under a wet machining environment were studied. The tool life characteristics of a brazed carbide cutting tool machining mild steel were examined, and the machining parameters were optimised based on a Taguchi design of experiments. The experiments were conducted using three factors, spindle speed, feed rate and depth of cut, each at three levels. Nine experiments were performed on a high speed semi-automatic precision central lathe. ANOVA was used to determine the level of importance of the machining parameters on tool life. The optimum combination of machining parameters was obtained by the analysis of the S/N ratio. A mathematical model based on multiple regression analysis was developed to predict the tool life. Taguchi's orthogonal array analysis revealed the optimal combination of parameters at the lower levels of spindle speed, feed rate and depth of cut, namely 550 rpm, 0.2 mm/rev and 0.5 mm, respectively. The main effects plot reiterated the same. The variation of tool life with the different process parameters has been plotted. Feed rate has the most significant effect on tool life, followed by spindle speed and depth of cut.
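The S/N analysis behind such rankings can be sketched with the larger-the-better formulation appropriate for tool life: for each factor, the mean S/N ratio per level is computed and the level with the highest S/N is chosen. The L9-style response values below are invented for demonstration, not taken from the paper.

```python
import math

# Illustrative L9-style results: (speed_level, feed_level, doc_level, tool_life_min)
# The numbers are invented for demonstration, not the paper's measurements.
runs = [
    (1, 1, 1, 42.0), (1, 2, 2, 38.5), (1, 3, 3, 35.0),
    (2, 1, 2, 33.0), (2, 2, 3, 30.0), (2, 3, 1, 31.5),
    (3, 1, 3, 27.0), (3, 2, 1, 29.0), (3, 3, 2, 25.5),
]

def sn_larger_is_better(values):
    """Taguchi larger-the-better S/N ratio: -10 log10(mean(1/y^2))."""
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

def mean_sn_per_level(runs, factor_index):
    """Mean S/N ratio of the response at each level of one factor."""
    out = {}
    for level in (1, 2, 3):
        ys = [r[3] for r in runs if r[factor_index] == level]
        out[level] = sn_larger_is_better(ys)
    return out

for name, idx in (("speed", 0), ("feed", 1), ("depth_of_cut", 2)):
    sn = mean_sn_per_level(runs, idx)
    best = max(sn, key=sn.get)
    print(name, {k: round(v, 2) for k, v in sn.items()}, "best level:", best)
```

With real measurements in `runs`, the per-factor level picked this way is exactly the "optimum combination" the abstract reports, and the spread of S/N across levels indicates each factor's relative influence.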
Optimising μCT imaging of the middle and inner cat ear.
Seifert, H; Röher, U; Staszyk, C; Angrisani, N; Dziuba, D; Meyer-Lindenberg, A
2012-04-01
This study's aim was to determine the optimal scan parameters for imaging the middle and inner ear of the cat with micro-computed tomography (μCT). In addition, the study set out to assess whether adequate image quality can be obtained to use μCT in diagnostics and research on cat ears. For optimisation, μCT imaging of two cat skull preparations was performed using 36 different scanning protocols. The μCT scans were evaluated by four experienced experts with regard to image quality and detail detectability. By compiling a ranking of the results, the best possible scan parameters could be determined. From a third cat's skull, a μCT scan using these optimised scan parameters and a comparative clinical CT scan were acquired. Afterwards, histological specimens of the ears were produced and compared to the μCT images. The comparison shows that the osseous structures are depicted in detail. Although soft tissues cannot be differentiated, the osseous structures serve as a valuable spatial orientation for the relevant nerves and muscles. Clinical CT can depict many anatomical structures which can also be seen on μCT images, but these appear far less sharp and detailed than with μCT. © 2011 Blackwell Verlag GmbH.
Identification of eggs from different production systems based on hyperspectra and CS-SVM.
Sun, J; Cong, S L; Mao, H P; Zhou, X; Wu, X H; Zhang, X D
2017-06-01
1. To identify the origin of table eggs more accurately, a method based on hyperspectral imaging technology was studied. 2. Hyperspectral data of 200 samples of eggs from intensive and extensive production systems were collected. Standard normal variate transformation combined with Savitzky-Golay filtering was used to eliminate noise, then stepwise regression (SWR) was used for feature selection. The grid search algorithm (GS), genetic algorithm (GA), particle swarm optimisation algorithm (PSO) and cuckoo search algorithm (CS) were applied with support vector machine (SVM) methods to establish SVM identification models with optimal parameters. The full spectrum data and the data after feature selection were the input of the model, while egg category was the output. 3. The SWR-CS-SVM model performed better than the other models, including SWR-GS-SVM, SWR-GA-SVM, SWR-PSO-SVM and those based on the full spectral data. The training and test classification accuracies of the SWR-CS-SVM model were 99.3% and 96%, respectively. 4. SWR-CS-SVM proved effective for identifying egg varieties and could also be useful for the non-destructive identification of other types of egg.
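The grid-search style of hyper-parameter selection (the GS variant above) can be sketched with a tiny stdlib k-nearest-neighbour classifier standing in for the SVM, and synthetic two-class "spectral" features; nothing below is from the paper's data or its SVM implementation.

```python
import math, random

random.seed(6)

# Two synthetic feature classes (an illustrative stand-in for egg spectra)
def sample(cls, n=40):
    cx, cy = (0.0, 0.0) if cls == 0 else (1.5, 1.5)
    return [((random.gauss(cx, 0.8), random.gauss(cy, 0.8)), cls) for _ in range(n)]

data = sample(0) + sample(1)
random.shuffle(data)
train, test = data[:60], data[60:]

def knn_predict(train, x, k):
    """Majority vote among the k nearest training points (k odd, labels 0/1)."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    votes = sum(c for _, c in nearest)
    return 1 if votes * 2 > k else 0

def accuracy(train, eval_set, k):
    return sum(knn_predict(train, x, k) == c for x, c in eval_set) / len(eval_set)

# Grid search over the hyper-parameter k using a hold-out validation split
grid = [1, 3, 5, 7, 9]
fold_tr, fold_val = train[:45], train[45:]
best_k = max(grid, key=lambda k: accuracy(fold_tr, fold_val, k))
print("best k:", best_k, "test accuracy:", accuracy(train, test, best_k))
```

For an SVM the grid would range over the penalty and kernel parameters instead of k, and GA/PSO/CS replace the exhaustive grid with a guided search over the same space.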
NASA Astrophysics Data System (ADS)
Tahir, Abdul Fattah Mohd; Aqida, Syarifah Nur
2017-07-01
In hot press forming, changes in the mechanical properties of boron steel blanks have been a setback in trimming the final-shape components. This paper presents an investigation of the kerf width and heat affected zone (HAZ) in the cutting of ultra-high strength 22MnB5 steel. Sample cutting was conducted using a 4 kW carbon dioxide (CO2) laser machine with a 10.6 μm wavelength and a laser spot size of 0.2 mm. A response surface methodology (RSM) using a three-level Box-Behnken design of experiments was developed with three factors: peak power, cutting speed and duty cycle. The parameters were optimised for minimum kerf width and HAZ formation. Optical evaluation using a MITUTOYO TM 505 microscope was conducted to measure the kerf width and HAZ region. From the findings, the laser duty cycle was the most significant factor in determining the cutting quality of the ultra-high strength steel, followed by cutting speed and laser power. Meanwhile, low power intensity with a continuous wave produced the narrowest kerf width and the smallest HAZ region.
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modelling of the machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase, and which must be accounted for in order to build machines with high accuracy. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and their position parameters in the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity error parameters. The mathematical modelling approach relates the calculated alignment and angular errors to the components supporting the machine motion, namely the linear guideways and linear motion elements. The purpose of this mathematical modelling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modelling the geometric errors of CNC machine tools can illustrate the relationship between the alignment error, position and angle on a linear guideway of a three-axis vertical milling machine.
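Under the usual small-angle assumption, one axis's six error parameters (three linear, three angular) form a homogeneous error matrix, and the matrices of the three axes compose into a total error transform that maps a nominal tool position to its actual position. The per-axis error values below are invented for demonstration; a full 21-parameter model would also include the three perpendicularity errors.

```python
def error_transform(dx, dy, dz, ea, eb, ec):
    """Small-angle 4x4 homogeneous error matrix for one axis:
    three linear errors (dx, dy, dz) and three angular errors (ea, eb, ec)."""
    return [
        [1.0, -ec,  eb, dx],
        [ ec, 1.0, -ea, dy],
        [-eb,  ea, 1.0, dz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    """Apply a homogeneous transform to a 3-D point."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

# Illustrative per-axis errors (mm and radians), invented for demonstration
Tx = error_transform(0.004, 0.001, 0.000, 1e-5, 2e-5, 0.0)   # X carriage
Ty = error_transform(0.000, 0.003, 0.001, 0.0, 1e-5, 2e-5)   # Y carriage
Tz = error_transform(0.001, 0.000, 0.005, 2e-5, 0.0, 1e-5)   # Z ram

total = matmul(matmul(Tx, Ty), Tz)       # composed volumetric error
nominal = (100.0, 50.0, 20.0)
actual = apply(total, nominal)
deviation = [a - n for a, n in zip(actual, nominal)]
print(deviation)
```

The composed matrix shows how angular errors couple into position: an angular error on one axis multiplies the coordinates of the commanded point, which is why the deviation grows with the working position even when the linear errors are fixed.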
NASA Astrophysics Data System (ADS)
Lingadurai, K.; Nagasivamuni, B.; Muthu Kamatchi, M.; Palavesam, J.
2012-06-01
Wire electrical discharge machining (WEDM) is a specialised thermal machining process capable of accurately machining parts of hard materials with complex shapes. Parts having sharp edges that pose difficulties for mainstream machining processes can be easily machined by the WEDM process. A Design of Experiments (DOE) approach is reported in this work for stainless steel AISI grade 304, which is used in cryogenic vessels, evaporators, hospital surgical equipment, marine equipment, fasteners, nuclear vessels, feed water tubing, valves, refrigeration equipment, etc., machined by WEDM with a brass wire electrode. The DOE method is used to formulate the experimental layout, to analyse the effect of each parameter on the machining characteristics, and to predict the optimal choice for each WEDM parameter, namely voltage, pulse ON time, pulse OFF time and wire feed. It is found that these parameters have a significant influence on machining characteristics such as metal removal rate (MRR), kerf width and surface roughness (SR). The analysis of the DOE reveals that, in general, the pulse ON time significantly affects the kerf width and the wire feed rate affects the SR, while the input voltage mainly affects the MRR.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that, compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
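The progressive-sampling half of the method can be sketched as a successive-halving loop: every candidate configuration is scored on a small data sample, the worse half is dropped, and the sample size doubles for the survivors. The Bayesian-optimization half, which proposes new candidates, is omitted, and the configurations and noise model below are invented for illustration.

```python
import random

random.seed(7)

# Hypothetical configurations with unknown "true" error rates; evaluating a
# configuration on n samples is simulated as a noisy estimate that tightens
# as n grows (a stand-in for actually training and validating a model).
true_error = {"cfgA": 0.22, "cfgB": 0.15, "cfgC": 0.31, "cfgD": 0.18}

def estimate_error(cfg, n):
    return true_error[cfg] + random.gauss(0, 0.5 / n ** 0.5)

def progressive_selection(configs, start=50, rounds=4):
    """Score all candidates on a small sample, keep the better half,
    double the sample, and repeat until one candidate remains."""
    n = start
    alive = list(configs)
    while len(alive) > 1 and rounds > 0:
        scored = sorted(alive, key=lambda c: estimate_error(c, n))
        alive = scored[:max(1, len(scored) // 2)]
        n *= 2
        rounds -= 1
    return alive[0]

best = progressive_selection(true_error)
print(best)
```

The efficiency gain is that cheap small-sample evaluations eliminate most candidates, so expensive full-sample training is spent only on the few configurations that survive the early rounds.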
MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias
2018-01-31
Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity, GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise the GPP parameters of the ORCHIDEE TBM. The optimisation reduces the GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr⁻¹ to 166 ± 10 PgC yr⁻¹, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr⁻¹. The strongest reductions in GPP are seen in boreal forests: the result is a shift in the global GPP distribution, with a ~50% increase in the tropical-to-boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO₂ fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.
The Impact Of Surface Shape Of Chip-Breaker On Machined Surface
NASA Astrophysics Data System (ADS)
Šajgalík, Michal; Czán, Andrej; Martinček, Juraj; Varga, Daniel; Hemžský, Pavel; Pitela, David
2015-12-01
The machined surface is one of the most used indicators of workpiece quality. However, the machined surface is influenced by several factors, such as the cutting parameters, cutting material, shape of the cutting tool or cutting insert, and microstructure of the machined material, collectively known as technological parameters. By improving these parameters, we can improve the machined surface. In machining, it is important to identify the characteristics not only of the main product of the process, the workpiece, but also of the by-product, the chip. The size and shape of the chip affect the lifetime of the cutting tools, and an inappropriate chip form can also impair the functionality and lifetime of the machine. This article deals with the elimination of the long chips created when machining a shaft in the automotive industry, and with the impact of the chip-breaker shape on the chip shape under various cutting conditions based on production requirements.
Structure zone diagram and particle incorporation of nickel brush plated composite coatings
Isern, L.; Impey, S.; Almond, H.; Clouser, S. J.; Endrino, J. L.
2017-01-01
This work studies the deposition of aluminium-incorporated nickel coatings by brush electroplating, focusing on the electroplating setup and processing parameters. The setup was optimised in order to increase the volume of particle incorporation. The optimised design focused on increasing the plating solution flow to avoid sedimentation, and as a result the particle transport experienced a three-fold increase when compared with the traditional setup. The influence of bath load, current density and the brush material used was investigated. Both current density and brush material have a significant impact on the morphology and composition of the coatings. Higher current densities and non-abrasive brushes produce rough, particle-rich samples. Different combinations of these two parameters influence the surface characteristics differently, as illustrated in a Structure Zone Diagram. Finally, surfaces featuring crevices and peaks incorporate between 3.5 and 20 times more particles than smoother coatings. The presence of such features has been quantified using average surface roughness Ra and Abbott-Firestone curves. The combination of optimised setup and rough surface increased the particle content of the composite to 28 at.%. PMID:28300159
Variability estimation of urban wastewater biodegradable fractions by respirometry.
Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie
2005-11-01
This paper presents a methodology for assessing the variability of the biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from the combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in the biodegradable fractions and the related kinetic parameters. Through an overall optimisation over all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but alleviated the identifiability problem compared to the usual sample-to-sample optimisation. The results showed that the fractionation of the samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than that of the separate sewer samples, and the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here for the combined sewer biodegradable fractions can be used as a first estimate of the variability of this type of sewer system.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-01-01
To exploit the distributed nature of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capability of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. This paper describes a parameter communication optimization strategy that balances the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
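The bounded-staleness idea behind such a dynamically adjusted synchronisation strategy can be sketched in a few lines. This is a minimal illustration under assumed semantics, not the paper's DSP implementation; `base_bound` and `perf_penalty` are hypothetical parameters.

```python
def allowed_to_proceed(worker_iter, all_worker_iters, base_bound, perf_penalty):
    """Decide whether a worker may keep computing on a stale model copy.

    The worker proceeds only while its lead over the slowest worker stays
    within a staleness bound that shrinks (perf_penalty grows) when the
    monitoring model reports degraded accuracy or convergence.
    """
    slowest = min(all_worker_iters)
    bound = max(0, base_bound - perf_penalty)  # dynamically tightened bound
    return worker_iter - slowest <= bound

iters = [10, 12, 9, 11]                      # current iteration of each worker
print(allowed_to_proceed(12, iters, base_bound=4, perf_penalty=0))  # True
print(allowed_to_proceed(12, iters, base_bound=4, perf_penalty=3))  # False
```

When the bound reaches zero the rule degenerates to bulk-synchronous execution; with a large bound it approaches fully asynchronous updates.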
Cha, Thye San; Yee, Willy; Aziz, Ahmad
2012-04-01
The successful establishment of an Agrobacterium-mediated transformation method and optimisation of six critical parameters known to influence the efficacy of Agrobacterium T-DNA transfer in the unicellular microalga Chlorella vulgaris (UMT-M1) are reported. Agrobacterium tumefaciens strain LBA4404 harbouring the binary vector pCAMBIA1304 containing the gfp:gusA fusion reporter and a hygromycin phosphotransferase (hpt) selectable marker driven by the CaMV35S promoter were used for transformation. Transformation frequency was assessed by monitoring transient β-glucuronidase (GUS) expression 2 days post-infection. It was found that co-cultivation temperature at 24°C, co-cultivation medium at pH 5.5, 3 days of co-cultivation, 150 μM acetosyringone, Agrobacterium density of 1.0 units (OD(600)) and 2 days of pre-culture were optimum variables which produced the highest number of GUS-positive cells (8.8-20.1%) when each of these parameters was optimised individually. Transformation conducted with the combination of all optimal parameters above produced 25.0% of GUS-positive cells, which was almost a threefold increase from 8.9% obtained from un-optimised parameters. Evidence of transformation was further confirmed in 30% of 30 randomly-selected hygromycin B (20 mg L(-1)) resistant colonies by polymerase chain reaction (PCR) using gfp:gusA and hpt-specific primers. The developed transformation method is expected to facilitate the genetic improvement of this commercially-important microalga.
Lança, L; Silva, A; Alves, E; Serranheira, F; Correia, M
2008-01-01
Typical distribution of exposure parameters in plain radiography is unknown in Portugal. This study aims to identify exposure parameters that are being used in plain radiography in the Lisbon area and to compare the collected data with European references [Commission of European Communities (CEC) guidelines]. The results show that in four examinations (skull, chest, lumbar spine and pelvis), there is a strong tendency of using exposure times above the European recommendation. The X-ray tube potential values (in kV) are below the recommended values from CEC guidelines. This study shows that at a local level (Lisbon region), radiographic practice does not comply with CEC guidelines concerning exposure techniques. Further national/local studies are recommended with the objective to improve exposure optimisation and technical procedures in plain radiography. This study also suggests the need to establish national/local diagnostic reference levels and to proceed to effective measurements for exposure optimisation.
Machining Parameters Optimization using Hybrid Firefly Algorithm and Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Farahlina Johari, Nur; Zain, Azlan Mohd; Haszlinna Mustaffa, Noorfa; Udin, Amirmudin
2017-09-01
The Firefly Algorithm (FA) is a metaheuristic inspired by the flashing behaviour of fireflies and the phenomenon of bioluminescent communication; in this research it is used to optimize the machining parameters (feed rate, depth of cut, and spindle speed). The algorithm is hybridized with Particle Swarm Optimization (PSO) to discover better solutions when exploring the search space. The objective function from previous research is used to optimize the machining parameters in the turning operation. The optimal machining parameters estimated by FA, which lead to a minimum surface roughness, are validated using an ANOVA test.
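A minimal sketch of such an FA/PSO hybrid might look as follows, with a sphere function as a stand-in for the surface-roughness objective. All coefficient values and the hybridisation rule are illustrative assumptions, not those of the cited work.

```python
import math
import random

def hybrid_fa_pso(objective, dim, n=15, iters=100, beta0=1.0, gamma=1.0,
                  alpha=0.2, w=0.7, c1=1.5, seed=0):
    """Fireflies move toward brighter ones (FA attraction term) while keeping
    an inertia + personal-best velocity component (PSO term). Illustrative."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if objective(pos[j]) < objective(pos[i]):    # j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j]))
                    beta = beta0 * math.exp(-gamma * r2)     # FA attraction
                    for d in range(dim):
                        vel[i][d] = (w * vel[i][d]
                                     + beta * (pos[j][d] - pos[i][d])
                                     + c1 * rng.random() * (pbest[i][d] - pos[i][d]))
                        pos[i][d] += vel[i][d] + alpha * (rng.random() - 0.5)
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
    return min(pbest, key=objective)

sphere = lambda x: sum(v * v for v in x)   # stand-in for the roughness model
best = hybrid_fa_pso(sphere, dim=3)
print(sphere(best))
```

In the paper's setting the three decision variables would be the coded feed rate, depth of cut and spindle speed, and the objective would be the surface-roughness model from the earlier study.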
Use of cone beam CT in children and young people in three United Kingdom dental hospitals.
Hidalgo-Rivas, Jose Alejandro; Theodorakou, Chrysoula; Carmichael, Fiona; Murray, Brenda; Payne, Martin; Horner, Keith
2014-09-01
There is limited evidence about the use of cone-beam computed tomography (CBCT) in paediatric dentistry. Appropriate use of CBCT is particularly important because of greater radiation risks in this age group. To survey the use of CBCT in children and young people in three Dental Hospitals in the United Kingdom (UK), with special attention paid to aspects of justification and optimisation. Retrospective analysis of patient records over a 24-month period, looking at CBCT examinations performed on subjects under 18 years of age. Clinical indications, region of interest, scan field of view (FoV), incidental findings and exposure factors used were recorded. There were 294 CBCT examinations performed in this age group, representing 13.7% of all scanned patients. CBCT was used more frequently in the >13 year age group. The most common use was for localisation of unerupted teeth in the anterior maxilla and the detection of root resorption. Optimisation of X-ray exposures did not appear to be consistent. When planning a CBCT service for children and young people, a limited FoV machine would be the appropriate choice for the majority of clinical requirements. It would facilitate clinical evaluation of scans, would limit the number of incidental findings and contribute to optimisation of radiation doses.
NASA Astrophysics Data System (ADS)
Miza, A. T. N. A.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In this study, computer-aided engineering was used to simulate the injection moulding process. A design of experiments (DOE) based on a Latin square orthogonal array was employed, and the relationships between the injection moulding parameters and warpage were identified from the experimental data. Response surface methodology (RSM) was used to validate the model accuracy. The RSM and genetic algorithm (GA) methods were then combined to determine the optimum injection moulding process parameters. The combined approach substantially improves the optimisation of injection moulding, and the results show increased accuracy and reliability as well as reduced warpage.
Wiener-Hammerstein system identification - an evolutionary approach
NASA Astrophysics Data System (ADS)
Naitali, Abdessamad; Giri, Fouad
2016-01-01
The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous as no interaction is needed with the user during the optimum search process. The performances of the proposed method will be illustrated and compared to alternative methods using a well-established WH benchmark.
Energy efficiency in membrane bioreactors.
Barillon, B; Martin Ruel, S; Langlais, C; Lazarova, V
2013-01-01
Energy consumption remains the key factor for the optimisation of the performance of membrane bioreactors (MBRs). This paper presents the results of the detailed energy audits of six full-scale MBRs operated by Suez Environnement in France, Spain and the USA based on on-site energy measurement and analysis of plant operation parameters and treatment performance. Specific energy consumption is compared for two different MBR configurations (flat sheet and hollow fibre membranes) and for plants with different design, loads and operation parameters. The aim of this project was to understand how the energy is consumed in MBR facilities and under which operating conditions, in order to finally provide guidelines and recommended practices for optimisation of MBR operation and design to reduce energy consumption and environmental impacts.
Optimisation of the supercritical extraction of toxic elements in fish oil.
Hajeb, P; Jinap, S; Shakibazadeh, Sh; Afsah-Hejri, L; Mohebbi, G H; Zaidul, I S M
2014-01-01
This study aims to optimise the operating conditions for the supercritical fluid extraction (SFE) of toxic elements from fish oil. The SFE operating parameters of pressure, temperature, CO₂ flow rate and extraction time were optimised using a central composite design (CCD) of response surface methodology (RSM). High coefficients of determination (R²) (0.897-0.988) for the predicted response surface models confirmed a satisfactory adjustment of the polynomial regression models with the operating conditions. The results showed that the linear and quadratic terms of pressure and temperature were the most significant (p < 0.05) variables affecting the overall responses. The optimum conditions for the simultaneous elimination of toxic elements comprised a pressure of 61 MPa, a temperature of 39.8 °C, a CO₂ flow rate of 3.7 ml min⁻¹ and an extraction time of 4 h. These optimised SFE conditions were able to produce fish oil with the contents of lead, cadmium, arsenic and mercury reduced by up to 98.3%, 96.1%, 94.9% and 93.7%, respectively. The fish oil extracted under the optimised SFE operating conditions was of good quality in terms of its fatty acid constituents.
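The RSM fitting step behind such a design can be illustrated with a one-factor second-order model solved via the normal equations. The data below are synthetic and noise-free, not the fish-oil measurements; a real CCD fit would include all factors and their interactions.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the 3x3 normal equations."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    c = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], c[i], c[p] = A[p], A[i], c[p], c[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for k in range(i, 3):
                A[r][k] -= f * A[i][k]
            c[r] -= f * c[i]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        b[i] = (c[i] - sum(A[i][k] * b[k] for k in range(i + 1, 3))) / A[i][i]
    return b

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]                 # coded factor levels
ys = [5.0 + 2.0 * x - 1.5 * x * x for x in xs]   # synthetic response
print([round(v, 6) for v in fit_quadratic(xs, ys)])  # ≈ [5.0, 2.0, -1.5]
```

The significance tests reported in the abstract (p < 0.05 for linear and quadratic terms) would then come from an ANOVA on the fitted coefficients.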
NASA Astrophysics Data System (ADS)
Kharbouch, Yassine; Mimet, Abdelaziz; El Ganaoui, Mohammed; Ouhsaine, Lahoucine
2018-07-01
This study investigates the thermal energy potential and economic feasibility of integrating phase change materials (PCMs) into an air-conditioned family household across different climate zones in Morocco. A simulation-based optimisation was carried out to define the optimal design of a PCM-enhanced household envelope for thermal energy effectiveness and cost-effectiveness among predefined candidate solutions. The optimisation methodology couples EnergyPlus® as a dynamic simulation tool with GenOpt® as an optimisation tool. Using the resulting optimum design strategies, a thermal energy and economic analysis was carried out to investigate the feasibility of integrating PCMs in Moroccan constructions. The results show that a PCM-integrated household envelope reduces the cooling/heating thermal energy demand relative to a reference household without PCM. For cost-effectiveness, however, the analysis indicates that economic feasibility remains insufficient under current PCM market conditions. The optimal design parameter results are also analysed.
Rani, K; Jahnen, A; Noel, A; Wolf, D
2015-07-01
In the last decade, several studies have emphasised the need to understand and optimise computed tomography (CT) procedures in order to reduce the radiation dose applied to paediatric patients. To evaluate the influence of the technical parameters on the radiation dose and the image quality, a statistical model has been developed using the design of experiments (DOE) method, which has been used successfully in various fields (industry, biology and finance), here applied to CT procedures for the abdomen of paediatric patients. A Box-Behnken DOE was used in this study. Three mathematical models (contrast-to-noise ratio, noise and CTDIvol) depending on three factors (tube current, tube voltage and level of iterative reconstruction) were developed and validated. They will serve as a basis for the development of a CT protocol optimisation model.
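The Box-Behnken construction itself is easy to sketch: each pair of factors is run at the four (±1, ±1) corners with all other factors held at their centre level, plus centre-point replicates. This is the generic textbook construction, not the authors' exact protocol.

```python
from itertools import combinations, product

def box_behnken(k, center_runs=3):
    """Coded Box-Behnken design for k factors (k >= 3)."""
    runs = []
    for i, j in combinations(range(k), 2):       # every pair of factors
        for a, b in product((-1, 1), repeat=2):  # four corners of that pair
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k for _ in range(center_runs)]  # centre replicates
    return runs

design = box_behnken(3)
print(len(design))  # 15 runs: 12 edge midpoints + 3 centre points
```

For the three factors here (tube current, tube voltage and iterative-reconstruction level) this yields the usual 15-run design, with the coded levels mapped back to the scanner's actual settings.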
NASA Astrophysics Data System (ADS)
Liu, Ming; Zhao, Lindu
2012-08-01
Demand for emergency resources is usually uncertain and varies quickly in an anti-bioterrorism system. Moreover, emergency resources allocated to epidemic areas early in the rescue cycle affect demand later. In this article, an integrated, dynamic optimisation model with time-varying demand based on the epidemic diffusion rule is constructed. A heuristic algorithm coupled with the MATLAB mathematical programming solver is adopted to solve the optimisation model. The application of the optimisation model and a short sensitivity analysis of the key parameters in the time-varying demand forecast model are then presented. The results show that both the model and the solution algorithm are useful in practice, and that both objectives, inventory level and emergency rescue cost, can be controlled effectively. The model can thus guide decision makers coping with emergency rescue problems under uncertain demand, and offers a useful reference for issues pertaining to bioterrorism.
Optimisation of sensing time and transmission time in cognitive radio-based smart grid networks
NASA Astrophysics Data System (ADS)
Yang, Chao; Fu, Yuli; Yang, Junjie
2016-07-01
Cognitive radio (CR)-based smart grid (SG) networks have been widely recognised as emerging communication paradigms in power grids. However, sufficient spectrum resources and reliability are two major challenges for real-time applications in CR-based SG networks. In this article, we study the traffic data collection problem. Based on the two-stage power pricing model, the power price is associated with the effectively received traffic data in a meter data management system (MDMS). In order to minimise the system power price, a wideband hybrid access strategy is proposed and analysed to share the spectrum between the SG nodes and CR networks. The sensing time and transmission time are jointly optimised, while both the interference to primary users and the spectrum opportunity loss of secondary users are considered. Two algorithms are proposed to solve the joint optimisation problem. Simulation results show that the proposed joint optimisation algorithms outperform fixed-parameter (sensing time and transmission time) algorithms, and the power cost is reduced efficiently.
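The underlying trade-off can be sketched with the standard energy-detection sensing-throughput formulation: a longer sensing slot lowers the false-alarm probability (fewer spectrum opportunities lost) but shortens the transmission slot in each frame. All numbers below are illustrative assumptions, not the paper's system model.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_inv(p, lo=-10.0, hi=10.0):
    """Invert the monotonically decreasing Q function by bisection."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def throughput(t_s, frame=0.1, fs=6e6, snr_db=-15.0, p_d=0.9, capacity=6.6):
    """Secondary throughput vs sensing time under a detection-probability target."""
    g = 10 ** (snr_db / 10)              # primary-user SNR at the sensor
    p_f = Q(math.sqrt(2 * g + 1) * Q_inv(p_d) + math.sqrt(t_s * fs) * g)
    return (frame - t_s) / frame * capacity * (1 - p_f)

# Grid search over the sensing time within one 100 ms frame.
candidates = [i * 1e-4 for i in range(1, 500)]   # 0.1 ms ... 49.9 ms
t_opt = max(candidates, key=throughput)
print(round(t_opt * 1e3, 1), "ms")
```

With these illustrative numbers the search settles on a sensing slot of a few milliseconds, leaving most of the frame for transmission; the paper optimises sensing and transmission times jointly under additional pricing constraints.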
NASA Astrophysics Data System (ADS)
Isbilir, Ozden
Owing to their desirable strength-to-weight characteristics, carbon fibre reinforced polymer (CFRP) composites have become favoured materials for structural applications in industries such as aerospace, transport, sports and energy. They reduce the weight of the whole structure and consequently decrease fuel consumption. The use of lightweight materials such as titanium and its alloys in modern aircraft has also increased significantly in the last couple of decades. Titanium and its alloys offer a high strength-to-weight ratio, high compressive and tensile strength at elevated temperatures, low density, excellent corrosion resistance, exceptional erosion resistance, superior fatigue resistance and a relatively low modulus of elasticity. Although composite/metal hybrid structures are increasingly used in airframes, studies on drilling composite/metal stacks remain scarce. When drilling multilayer materials, different problems may arise from the very different attributes of these materials, and the machining conditions play an important role in tool wear, hole quality and machining cost. The research in this thesis aims to investigate drilling of CFRP/Ti6Al4V hybrid structures and to optimise the process parameters and drill geometry. The work comprises a complete experimental study, including drilling tests, in-situ and post-process measurements with related analysis, and a finite element study based on fully 3-D models. The experimental investigations focused on drilling outputs such as thrust force, torque, delamination, burr formation, surface roughness and tool wear. An algorithm was developed to analyse drilling-induced delamination quantitatively from the images. In the numerical analysis, novel 3-D finite element models of drilling CFRP, Ti6Al4V and the CFRP/Ti6Al4V hybrid structure were developed using complex 3-D drill geometries.
A user-defined subroutine was developed to model the material and failure behaviour of CFRP. The effects of process parameters on drilling outputs were investigated and compared with the experimental results. The influence of drill bit geometries was also simulated.
Optimization of processing parameters of UAV integral structural components based on yield response
NASA Astrophysics Data System (ADS)
Chen, Yunsheng
2018-05-01
To improve the overall strength of an unmanned aerial vehicle (UAV), the machining parameters of its integral structural components must be optimized. Machining of these components is affected by initial residual stress, which makes machining errors likely, so an optimization model for the machining parameters of UAV integral structural components based on yield response is proposed. The finite element method is used to simulate the machining of the components, a prediction model of workpiece surface machining error is established, and the influence of the tool path on the residual stress of the integral structure is studied according to the stress state of the component. The yield response of the time-varying stiffness and the stress evolution mechanism of the UAV integral structure are analyzed. The simulation results show that the method optimizes the machining parameters of UAV integral structural components, improves milling precision, reduces machining error, and enables deformation prediction and error compensation for UAV integral structural parts, thereby improving machining quality.
Choosing the appropriate forecasting model for predictive parameter control.
Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars
2014-01-01
All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for projecting future parameter performance from previous data. All considered prediction methods make assumptions that the time series data must conform to for the method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results, by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adhere to the assumptions made by the prediction method. When a parameter's performance data do not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
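The linear-regression forecasting step can be sketched as a one-step-ahead extrapolation of a parameter value's performance history. The history values below are made up; in the paper's setting they would be measured performance of, say, one mutation-rate setting over recent iterations.

```python
def linear_forecast(history):
    """One-step-ahead forecast via ordinary least squares on the time index."""
    n = len(history)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(history) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, history))
    slope = sxy / sxx
    return my + slope * (n - mx)     # predicted performance at t = n

# A parameter value whose measured performance has been improving:
print(linear_forecast([0.1, 0.2, 0.3, 0.4]))  # ≈ 0.5
```

The forecast for each candidate parameter value would then be turned into a selection probability for the next iteration.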
Das, Anup Kumar; Mandal, Vivekananda; Mandal, Subhash C
2014-01-01
Extraction forms the very basic step in research on natural products for drug discovery. A poorly optimised and planned extraction methodology can jeopardise the entire mission. To provide a vivid picture of different chemometric tools and planning for process optimisation and method development in extraction of botanical material, with emphasis on microwave-assisted extraction (MAE) of botanical material. A review of studies involving the application of chemometric tools in combination with MAE of botanical materials was undertaken in order to discover what the significant extraction factors were. Optimising a response by fine-tuning those factors, experimental design or statistical design of experiment (DoE), which is a core area of study in chemometrics, was then used for statistical analysis and interpretations. In this review a brief explanation of the different aspects and methodologies related to MAE of botanical materials that were subjected to experimental design, along with some general chemometric tools and the steps involved in the practice of MAE, are presented. A detailed study on various factors and responses involved in the optimisation is also presented. This article will assist in obtaining a better insight into the chemometric strategies of process optimisation and method development, which will in turn improve the decision-making process in selecting influential extraction parameters.
Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun
2018-01-01
Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastics objects showed that the use of the proposed system resulted in an effective solution with a wide FOV, recognition of all objects and 0.32 mm and 0.4° maximal positional and angular errors when all the RGB (red, green and blue) for illumination and R channel image for recognition were used. Though all the RGB illumination and grey scale images also provided recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved by using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition.
NASA Astrophysics Data System (ADS)
Sam, Ashish Alex; Ghosh, Parthasarathi
2017-03-01
Turboexpanders constitute one of the vital components of Claude-cycle-based helium refrigerators and liquefiers, which are gaining increasing technological importance. These turboexpanders, radial-inflow in configuration, are generally high-speed micro turbines owing to the low molecular weight and density of helium. Any improvement in the efficiency of these machines requires a detailed understanding of the flow field. Computational fluid dynamics (CFD) analysis has emerged as a necessary tool for determining the flow fields in cryogenic turboexpanders, which is often not possible through experiments. In the present work, three-dimensional transient flow analyses of a cryogenic turboexpander for helium refrigeration and liquefaction cycles were performed using Ansys CFX®, to understand the flow field of a high-speed helium turboexpander, which in turn will help in taking appropriate decisions regarding modifications of the established design methodology for improved efficiency of these machines. The turboexpander is designed based on Balje's ns-ds diagram and the inverse-design blade profile generation formalism prescribed by Hasselgruber and Balje. The analyses include the study of several losses, their origins, the increase in entropy due to these losses, the quantification of losses and the effects of various geometrical parameters on these losses. The flow field analysis showed that, in the nozzle, flow separation at the nozzle blade suction side and trailing edge vortices resulted in loss generation, which calls for a better nozzle blade profile. The turbine wheel flow field analysis revealed that significant geometrical parameters of the turbine wheel blade, such as blade inlet angle, blade profile, tip clearance height and trailing edge thickness, need to be optimised for improved performance of the turboexpander.
The detailed flow field analysis in this paper can be used to improve the mean line design methodology for turboexpanders used in helium refrigeration and liquefaction cycles.
Investigations on high speed machining of EN-353 steel alloy under different machining environments
NASA Astrophysics Data System (ADS)
Venkata Vishnu, A.; Jamaleswara Kumar, P.
2018-03-01
The addition of nanoparticles to conventional cutting fluids enhances their cooling capabilities; in the present paper an attempt is made by adding nano-sized particles to conventional cutting fluids. Taguchi robust design methodology is employed to study the performance characteristics of different turning parameters, i.e. cutting speed, feed rate, depth of cut and type of tool, under different machining environments, i.e. dry machining, machining with lubricant SAE 40, and machining with a mixture of nano-sized boric acid particles and the base fluid SAE 40. A series of turning operations was performed using an L27 (3^13) orthogonal array, considering high cutting speeds and the other machining parameters, to measure hardness. The results are compared among the different machining environments, and it is concluded that there is considerable improvement in machining performance using the lubricant SAE 40 and the SAE 40 + boric acid mixture compared with dry machining. The ANOVA suggests that the selected parameters and their interactions are significant, and cutting speed has the most significant effect on hardness.
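The Taguchi analysis step for a larger-the-better response such as hardness uses the signal-to-noise ratio S/N = -10*log10(mean(1/y^2)); the level with the higher mean S/N is preferred. The replicate values below are invented for illustration, not the paper's measurements.

```python
import math

def sn_larger_is_better(values):
    """Taguchi S/N ratio for a larger-the-better response (e.g. hardness)."""
    return -10 * math.log10(sum(1 / v ** 2 for v in values) / len(values))

# Hypothetical hardness replicates for two machining environments:
dry = [52.0, 53.5, 51.8]
nano = [57.2, 58.0, 57.5]   # SAE 40 + boric acid nanoparticle mixture

print(sn_larger_is_better(nano) > sn_larger_is_better(dry))  # True
```

In a full Taguchi study the S/N ratio is averaged per factor level across the L27 array, and the ANOVA then apportions the variance among the factors and their interactions.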
High density plasmas and new diagnostics: An overview (invited).
Celona, L; Gammino, S; Mascali, D
2016-02-01
One of the limiting factors for the full understanding of Electron Cyclotron Resonance Ion Sources (ECRISs) fundamental mechanisms consists of few types of diagnostic tools so far available for such compact machines. Microwave-to-plasma coupling optimisation, new methods of density overboost provided by plasma wave generation, and magnetostatic field tailoring for generating a proper electron energy distribution function, suitable for optimal ion beams formation, require diagnostic tools spanning across the entire electromagnetic spectrum from microwave interferometry to X-ray spectroscopy; these methods are going to be implemented including high resolution and spatially resolved X-ray spectroscopy made by quasi-optical methods (pin-hole cameras). The ion confinement optimisation also requires a complete control of cold electrons displacement, which can be performed by optical emission spectroscopy. Several diagnostic tools have been recently developed at INFN-LNS, including "volume-integrated" X-ray spectroscopy in low energy domain (2-30 keV, by using silicon drift detectors) or high energy regime (>30 keV, by using high purity germanium detectors). For the direct detection of the spatially resolved spectral distribution of X-rays produced by the electronic motion, a "pin-hole camera" has been developed also taking profit from previous experiences in the ECRIS field. The paper will give an overview of INFN-LNS strategy in terms of new microwave-to-plasma coupling schemes and advanced diagnostics supporting the design of new ion sources and for optimizing the performances of the existing ones, with the goal of a microwave-absorption oriented design of future machines.
NASA Astrophysics Data System (ADS)
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. One way to apply modern technology is CNC machining, and turning is one machining process that can be performed on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to choose machining parameters that minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model for the CNC turning process that minimizes processing time and environmental impact, yielding optimal values of the decision variables cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
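A weighted-sum scalarisation of the two objectives over a small parameter grid can be sketched as below. Both cost models are illustrative stand-ins (a textbook turning-time formula and an assumed impact proportional to v/f), not the paper's Eco-indicator 99 model, and OptQuest is replaced by exhaustive search.

```python
import math

def processing_time(v, f, length=500.0, diameter=40.0):
    """Turning time ~ pi*D*L / (1000*v*f), with v in m/min and f in mm/rev."""
    return math.pi * diameter * length / (1000.0 * v * f)

def eco_impact(v, f, k=0.004):
    """Stand-in impact measure: grows with cutting energy, i.e. with v/f."""
    return k * v / f

def best_parameters(weight_time=0.5):
    """Weighted-sum scalarisation minimised over a coarse (v, f) grid."""
    grid = [(v, f) for v in range(60, 301, 10) for f in (0.1, 0.2, 0.3, 0.4)]
    score = lambda p: (weight_time * processing_time(*p)
                       + (1 - weight_time) * eco_impact(*p))
    return min(grid, key=score)

v_opt, f_opt = best_parameters()
print(v_opt, f_opt)
```

Sweeping `weight_time` from 0 to 1 traces out an approximate Pareto front between the two objectives, which is the usual way to present such a multi-objective result.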
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senthil, K.; Mitra, S.; Sandeep, S., E-mail: sentilk@barc.gov.in
In a multi-gigawatt pulsed power system like KALI-30 GW, insulation coordination is required to achieve high voltages ranging from 0.3 MV to 1 MV. At the same time, optimisation of the insulation parameters is required to minimize the inductance of the system so that nanosecond output can be achieved. The KALI-30 GW pulsed power system uses a combination of Perspex, Delrin, epoxy, transformer oil, nitrogen/SF₆ gas and vacuum insulation at its various stages in compressing a DC high voltage into a nanosecond pulse. This paper describes the operation and performance of the system from 400 kV to 1030 kV output voltage pulses and the insulation parameters used to obtain a peak 1 MV output.
GilPavas, E; Dobrosz-Gómez, I; Gómez-García, M Á
2011-01-01
The capacity of the electro-coagulation (EC) process for the treatment of wastewater containing Cr3+, resulting from a leather tannery located in Medellin (Colombia), was evaluated. In order to assess the effect of parameters such as the electrode type (Al and/or Fe), the distance between electrodes, the current density, the stirring velocity, and the initial Cr3+ concentration on the removal efficiency (%RCr3+), a multifactorial experimental design was used. The %RCr3+ was defined as the response variable for the statistical analysis. In order to optimise the operational values of the chosen parameters, the response surface method (RSM) was applied. Additionally, the Biological Oxygen Demand (BOD5), the Chemical Oxygen Demand (COD), and the Total Organic Carbon (TOC) were monitored during the EC process. Aluminium electrodes proved the most effective for chromium removal from the wastewater under study. At pH 4.52 and 28°C, the optimal conditions for Cr3+ removal using the EC process were found to be: initial Cr3+ concentration = 3,596 mg/L, electrode gap = 0.5 cm, stirring velocity = 382.3 rpm, and current density = 57.87 mA/cm2. Under those conditions, it was possible to reach 99.76% Cr3+ removal, and 64% and 61% of mineralisation (TOC) and COD removal, respectively. A kinetic analysis was performed in order to verify the response capacity of the EC process at the optimised parameter values.
Predictive Modeling and Optimization of Vibration-assisted AFM Tip-based Nanomachining
NASA Astrophysics Data System (ADS)
Kong, Xiangcheng
The tip-based vibration-assisted nanomachining process offers a low-cost, low-effort technique for fabricating nanometer-scale 2D/3D structures in the sub-100 nm regime. To understand its mechanism, as well as to provide guidelines for process planning and optimization, we have systematically studied this nanomachining technique in this work. To understand the mechanism, we first analyzed the interaction between the AFM tip and the workpiece surface during machining. A 3D voxel-based numerical algorithm has been developed to calculate the material removal rate as well as the contact area between the AFM tip and the workpiece surface. As a critical factor in understanding the mechanism of this nanomachining process, the cutting force has been analyzed and modeled. A semi-empirical model has been proposed that correlates the cutting force with the material removal rate, and it was validated using experimental data from different machining conditions. With this understanding of the mechanism, we have developed guidelines for process planning. To guide parameter selection, the effect of machining parameters on the feature dimensions (depth and width) has been analyzed. Based on ANOVA test results, the feature width is controlled only by the XY vibration amplitude, while the feature depth is affected by several machining parameters such as setpoint force and feed rate. A semi-empirical model was first proposed to predict the machined feature depth under a given machining condition. Then, to reduce the computational intensity, linear and nonlinear regression models were also proposed and validated using experimental data. Given the desired feature dimensions, feasible machining parameters can be provided using these predictive feature-dimension models. As tip wear is unavoidable during machining, the machining precision gradually decreases.
To maintain the machining quality, the guideline for when to change the tip should be provided. In this study, we have developed several metrics to detect tip wear, such as tip radius and the pull-off force. The effect of machining parameters on the tip wear rate has been studied using these metrics, and the machining distance before a tip must be changed has been modeled using these machining parameters. Finally, the optimization functions have been built for unit production time and unit production cost subject to realistic constraints, and the optimal machining parameters can be found by solving these functions.
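The voxel-based removal calculation mentioned above can be sketched with a boolean occupancy grid: workpiece voxels falling inside the swept tip volume are counted as removed. The spherical tip approximation, grid resolution, and all dimensions below are illustrative assumptions, not the dissertation's actual algorithm parameters:

```python
# Hedged sketch of a voxel-based material-removal count: the workpiece is a
# uniform voxel grid and the AFM tip apex is approximated as a sphere.
# Grid size, voxel pitch and tip radius are illustrative.

def removed_voxels(tip_center, tip_radius, grid_size=20, pitch=1.0):
    """Count voxels whose centers fall inside the spherical tip volume."""
    cx, cy, cz = tip_center
    removed = 0
    for i in range(grid_size):
        for j in range(grid_size):
            for k in range(grid_size):
                x = (i + 0.5) * pitch
                y = (j + 0.5) * pitch
                z = (k + 0.5) * pitch
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= tip_radius ** 2:
                    removed += 1
    return removed

# The voxel count approaches the analytic sphere volume (~113.1 for r=3)
# as the grid is refined; multiplying by pitch**3 gives removed volume.
n = removed_voxels((10.0, 10.0, 10.0), 3.0)
```

Material removal rate then follows by dividing the removed volume along the tip trajectory by the machining time.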
NASA Astrophysics Data System (ADS)
Mäkelä, Jarmo; Susiluoto, Jouni; Markkanen, Tiina; Aurela, Mika; Järvinen, Heikki; Mammarella, Ivan; Hagemann, Stefan; Aalto, Tuula
2016-12-01
We examined parameter optimisation in the JSBACH (Kaminski et al., 2013; Knorr and Kattge, 2005; Reick et al., 2013) ecosystem model, applied to two boreal forest sites (Hyytiälä and Sodankylä) in Finland. We identified and tested key parameters in soil hydrology and forest water and carbon-exchange-related formulations, and optimised them using the adaptive Metropolis (AM) algorithm for Hyytiälä with a 5-year calibration period (2000-2004) followed by a 4-year validation period (2005-2008). Sodankylä acted as an independent validation site, where optimisations were not made. The tuning provided estimates for full distribution of possible parameters, along with information about correlation, sensitivity and identifiability. Some parameters were correlated with each other due to a phenomenological connection between carbon uptake and water stress or other connections due to the set-up of the model formulations. The latter holds especially for vegetation phenology parameters. The least identifiable parameters include phenology parameters, parameters connecting relative humidity and soil dryness, and the field capacity of the skin reservoir. These soil parameters were masked by the large contribution from vegetation transpiration. In addition to leaf area index and the maximum carboxylation rate, the most effective parameters adjusting the gross primary production (GPP) and evapotranspiration (ET) fluxes in seasonal tuning were related to soil wilting point, drainage and moisture stress imposed on vegetation. For daily and half-hourly tunings the most important parameters were the ratio of leaf internal CO2 concentration to external CO2 and the parameter connecting relative humidity and soil dryness. Effectively the seasonal tuning transferred water from soil moisture into ET, and daily and half-hourly tunings reversed this process. 
The seasonal tuning improved the month-to-month development of GPP and ET, and produced the most stable estimates of water-use efficiency. Compared to the seasonal tuning, the daily tuning performs worse on the seasonal scale. However, the daily parametrisation reproduced the observed average diurnal cycle best, except for GPP during the Sodankylä validation period, where the half-hourly tuned parameters were better. In general, the daily tuning provided the largest reduction in model-data mismatch. The model's response to drought was unaffected by our parametrisations, and further studies are needed on enhancing the drought response in JSBACH.
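The adaptive Metropolis calibration used above can be illustrated with a plain (non-adaptive) random-walk Metropolis sampler recovering a single toy parameter; the adaptive variant additionally tunes the proposal covariance from the chain history. The Gaussian target below is synthetic, not a JSBACH quantity:

```python
import math
import random

def metropolis(log_post, x0, steps=6000, scale=0.5, seed=42):
    """Random-walk Metropolis: propose x' ~ N(x, scale), accept with
    probability min(1, post(x')/post(x)). An adaptive variant would
    update `scale` from the chain's sample variance as it runs."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy "parameter" with a Gaussian posterior: mean 2.0, sd 0.3
chain = metropolis(lambda t: -0.5 * ((t - 2.0) / 0.3) ** 2, x0=0.0)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

The retained chain gives the full posterior distribution, which is what allows the correlation, sensitivity and identifiability statements made in the abstract.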
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards optimum utilisation of the ferrochrome added during stainless steel making in an AOD converter. The objective of the optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm, trained on 100 datasets covering input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and the weight of ferrochrome added during refining. The optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of the Pareto fronts generates a set of feasible optimal solutions between the two conflicting objectives, providing an effective guideline for better ferrochrome utilisation. It is found that beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
NASA Astrophysics Data System (ADS)
Liou, Cheng-Dar
2015-09-01
This study investigates an infinite-capacity Markovian queue with a single unreliable service station, in which customers may balk (not enter) and renege (leave the queue after entering). The unreliable service station is subject to working breakdowns even if no customers are in the system. The matrix-analytic method is used to compute the steady-state probabilities for the number of customers, the rate matrix and the stability condition of the system. A single-objective model for cost and a bi-objective model for cost and expected waiting time are derived for the system to fit practical applications. The particle swarm optimisation algorithm is implemented to find the optimal combinations of parameters in pursuit of minimum cost. Two different approaches to identifying the Pareto optimal set are used and compared: the epsilon-constraint method and the non-dominated sorting genetic algorithm. The comparison supports using the traditional epsilon-constraint method, which is computationally faster and permits a direct sensitivity analysis of the solution under constraint or parameter perturbation. The Pareto front and the set of non-dominated solutions are obtained and illustrated. Decision makers can use these to improve their decision-making quality.
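The epsilon-constraint idea favoured above can be sketched on a toy single-server queue: minimise operating cost over the service rate while capping expected time in system at epsilon, then sweep epsilon to trace the Pareto front. The M/M/1 formula W = 1/(mu - lambda) is standard; the linear cost model is an illustrative assumption, far simpler than the paper's unreliable-server model:

```python
# Hedged sketch of the epsilon-constraint method on a toy M/M/1 queue.
# Cost is assumed linear in the service rate mu; since cost rises with mu
# and W = 1/(mu - lam) falls with mu, each constrained optimum sits on the
# boundary W = eps, so it can be solved in closed form here.

def epsilon_constraint_front(lam=2.0, cost_per_mu=1.0,
                             epsilons=(0.25, 0.5, 1.0, 2.0)):
    front = []
    for eps in epsilons:
        mu = lam + 1.0 / eps               # smallest mu with W <= eps
        front.append((cost_per_mu * mu,    # objective 1: cost
                      1.0 / (mu - lam)))   # objective 2: waiting time
    return front

front = epsilon_constraint_front()
```

Each sweep value of epsilon yields one non-dominated (cost, waiting-time) point, which is exactly how the method traces a Pareto front for the bi-objective model.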
Calibration of phoswich-based lung counting system using realistic chest phantom.
Manohari, M; Mathiyarasu, R; Rajagopal, V; Meenakshisundaram, V; Indira, R
2011-03-01
A phoswich detector, housed inside a low-background steel room and coupled with state-of-the-art pulse shape discrimination (PSD) electronics, was recently established at the Radiological Safety Division of IGCAR for in vivo monitoring of actinides. The various parameters of the PSD electronics were optimised to achieve efficient background reduction in the low-energy regions. The PSD with optimised parameters reduced the steel-room background from 9.5 to 0.28 cps in the 17 keV region and from 5.8 to 0.3 cps in the 60 keV region. The figure of merit for the timing spectrum of the system is 3.0. The true signal loss due to PSD was found to be less than 2%. The phoswich system was calibrated with a Lawrence Livermore National Laboratory realistic chest phantom loaded with a (241)Am-activity-tagged lung set. Calibration factors for varying chest wall composition and chest wall thickness, in terms of muscle-equivalent chest wall thickness, were established. The (241)Am activity in the JAERI phantom, received as part of an IAEA inter-comparison exercise, was estimated. This paper presents the optimisation of the PSD electronics and the salient results of the calibration.
Performance of Ti-multilayer coated tool during machining of MDN431 alloyed steel
NASA Astrophysics Data System (ADS)
Badiger, Pradeep V.; Desai, Vijay; Ramesh, M. R.
2018-04-01
Turbine forgings and other components are required to have high resistance to corrosion and oxidation, which is why they are highly alloyed with Ni and Cr. Midhani manufactures one such material, MDN431, a hard-to-machine steel with high hardness and strength. PVD-coated inserts offer an answer to this problem through state-of-the-art coating of the WC tool. Machinability studies were carried out on MDN431 steel using uncoated and Ti-multilayer-coated WC tool inserts with the Taguchi optimisation technique. In the present investigation, speed (398-625 rpm), feed (0.093-0.175 mm/rev) and depth of cut (0.2-0.4 mm) were varied according to a Taguchi L9 orthogonal array, and the cutting forces and surface roughness (Ra) were subsequently measured. The results were optimised using the Taguchi technique for cutting force and surface roughness, and a linear-fit regression model was developed for each combination of input variables. The experimental results were compared, and the developed model was found to be adequate, supported by proof trials. For the uncoated insert, cutting force and surface roughness depend linearly on speed, feed and depth of cut, whereas for the coated insert the dependence on speed and depth of cut is inverse for both responses. The machined surfaces produced by the coated and uncoated inserts during machining of MDN431 were studied using an optical profilometer.
SU-E-T-113: Dose Distribution Using Respiratory Signals and Machine Parameters During Treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imae, T; Haga, A; Saotome, N
Purpose: Volumetric modulated arc therapy (VMAT) is a rotational intensity-modulated radiotherapy (IMRT) technique capable of acquiring projection images during treatment. Treatment plans for lung tumors using stereotactic body radiotherapy (SBRT) are calculated with planning computed tomography (CT) images of the exhale phase only. The purpose of this study is to evaluate dose distributions reconstructed from only the data, such as respiratory signals and machine parameters, acquired during treatment. Methods: A phantom and three patients with lung tumors underwent CT scans for treatment planning. They were treated by VMAT while projection images were acquired to derive their respiratory signals and machine parameters, including positions of multi-leaf collimators, dose rates and integrated monitor units. The respiratory signals were divided into 4 and 10 phases, and the machine parameters were correlated with the divided respiratory signals based on the gantry angle. Dose distributions for each respiratory phase were calculated from plans reconstructed from the respiratory signals and machine parameters recorded during treatment. The doses at the isocenter, the maximum-dose point and the centroid of the target were evaluated. Results and Discussion: Dose distributions during treatment were calculated using the machine parameters and the respiratory signals detected from the projection images. The maximum dose difference between the planned and in-treatment distributions was −1.8±0.4% at the centroid of the target, and the dose differences at the evaluated points between the 4- and 10-phase reconstructions were not significant. Conclusion: The present method successfully evaluated dose distributions using respiratory signals and machine parameters acquired during treatment. This method is feasible for verifying the actual dose to a moving target.
NASA Astrophysics Data System (ADS)
Datta, Jinia; Chowdhuri, Sumana; Bera, Jitendranath
2016-12-01
This paper presents a novel scheme for remote condition monitoring of a multi-machine system, in which secured and coded induction machine data, with different parameters, is communicated between state-of-the-art dedicated hardware units (DHUs) installed at the machine terminals and centralized, PC-based machine data management (MDM) software. The DHUs are built for acquisition of different parameters from their respective machines, and hence are placed at nearby panels in order to acquire these parameters cost-effectively while the machines are running. The MDM software collects the data through a communication channel in which all the DHUs are networked using the RS485 protocol. Before transmission, the parameter data is compressed by adopting differential pulse code modulation (DPCM) and Huffman coding, and is further encrypted with a private key, with different keys used for different DHUs. In this way, a data security scheme is in place during passage through the communication channel to prevent third-party attacks on the channel. The hybrid combination of DPCM and Huffman coding is chosen to reduce the data packet length. A MATLAB-based simulation and a practical implementation using DHUs at three machine terminals (one healthy three-phase, one healthy single-phase and one faulty three-phase machine) prove its efficacy and usefulness for condition-based maintenance of a multi-machine system. The data at the central control room are decrypted and decoded using the MDM software. In this work, it is observed that channel efficiency with respect to the different parameter measurements is increased considerably.
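The DPCM-plus-Huffman compression stage can be sketched as follows: slowly varying machine readings produce small differences, which Huffman coding then encodes with short codewords. The sample readings below are synthetic; only the coding scheme itself follows the paper:

```python
import heapq
from collections import Counter

def dpcm(samples):
    """Differential pulse code modulation: send the first sample, then
    successive differences, which cluster near zero for slow signals."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_code_lengths(symbols):
    """Code length (bits) per symbol from a Huffman tree over frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (count, unique id, {symbol: depth-so-far})
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        n1, _, d1 = heapq.heappop(heap)
        n2, _, d2 = heapq.heappop(heap)
        merged = {s: bits + 1 for s, bits in {**d1, **d2}.items()}
        heapq.heappush(heap, (n1 + n2, uid, merged))
        uid += 1
    return heap[0][2]

# Synthetic, slowly varying readings (e.g. one temperature channel)
readings = [230, 231, 231, 232, 231, 230, 230, 231, 232, 232]
diffs = dpcm(readings)[1:]               # differences only
lengths = huffman_code_lengths(diffs)
avg_bits = sum(lengths[s] for s in diffs) / len(diffs)
```

With 8-bit raw samples, the differences here average under 2 bits each, which is the packet-length reduction the hybrid scheme targets before the per-DHU encryption step.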
Design of a prototype flow microreactor for synthetic biology in vitro.
Boehm, Christian R; Freemont, Paul S; Ces, Oscar
2013-09-07
As a reference platform for in vitro synthetic biology, we have developed a prototype flow microreactor for enzymatic biosynthesis. We report the design, implementation, and computer-aided optimisation of a three-step model pathway within a microfluidic reactor. After experimental evaluation of several approaches, a packed-bed format was shown to be optimal for enzyme compartmentalisation. The specific substrate conversion efficiency could be significantly improved with an optimised parameter set obtained by computational modelling. Our microreactor design provides a platform for exploring new in vitro synthetic biology solutions for industrial biosynthesis.
A robust optimisation approach to the problem of supplier selection and allocation in outsourcing
NASA Astrophysics Data System (ADS)
Fu, Yelin; Keung Lai, Kin; Liang, Liang
2016-03-01
We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.
Machine-Learning Approach for Design of Nanomagnetic-Based Antennas
NASA Astrophysics Data System (ADS)
Gianfagna, Carmine; Yu, Huan; Swaminathan, Madhavan; Pulugurtha, Raj; Tummala, Rao; Antonini, Giulio
2017-08-01
We propose a machine-learning approach for design of planar inverted-F antennas with a magneto-dielectric nanocomposite substrate. It is shown that machine-learning techniques can be efficiently used to characterize nanomagnetic-based antennas by accurately mapping the particle radius and volume fraction of the nanomagnetic material to antenna parameters such as gain, bandwidth, radiation efficiency, and resonant frequency. A modified mixing rule model is also presented. In addition, the inverse problem is addressed through machine learning as well, where given the antenna parameters, the corresponding design space of possible material parameters is identified.
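The forward mapping from material parameters to antenna parameters can be sketched with a toy surrogate model. The training points below are synthetic and the nearest-neighbour learner is only a stand-in; the abstract does not specify the data or the learner actually used:

```python
# Hedged sketch of a forward surrogate: map (particle radius, volume
# fraction) of the nanomagnetic substrate to a resonant frequency.
# All numbers are synthetic; 1-nearest-neighbour lookup stands in for
# whatever model is actually trained on EM-simulation data.

training = [  # ((radius_nm, volume_fraction), resonant_freq_GHz)
    ((5.0, 0.10), 2.45), ((5.0, 0.30), 2.10),
    ((15.0, 0.10), 2.35), ((15.0, 0.30), 1.95),
]

def predict_freq(radius_nm, vol_frac):
    """1-NN prediction with per-feature scaling so both inputs matter."""
    def dist(point):
        (r, v), _ = point
        return ((r - radius_nm) / 10.0) ** 2 + ((v - vol_frac) / 0.2) ** 2
    return min(training, key=dist)[1]

# The inverse problem can then be approached by scanning the design space
# for material parameters whose predicted frequency matches a target.
candidates = [(r, v) for r in (5.0, 15.0) for v in (0.10, 0.30)
              if abs(predict_freq(r, v) - 2.35) < 0.05]
```

The final scan illustrates the inverse design mentioned in the abstract: given target antenna parameters, enumerate the feasible material-parameter region.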
A comparative study of electrochemical machining process parameters by using GA and Taguchi method
NASA Astrophysics Data System (ADS)
Soni, S. K.; Thomas, B.
2017-11-01
In electrochemical machining, the quality of the machined surface depends strongly on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm in MATLAB to maximize the metal removal rate and minimize the surface roughness and overcut. A comparative study is presented for drilling of LM6 Al/B4C composites, assessing the impact of machining process parameters such as electrolyte concentration (g/l), machining voltage (V) and frequency (Hz) on the response parameters (surface roughness, material removal rate and overcut). A Taguchi L27 orthogonal array was chosen in Minitab 17 software for the investigation of the experimental results, and multi-objective optimization by a genetic algorithm was carried out using MATLAB. The optimized results obtained from the Taguchi method and the genetic algorithm are then compared.
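Taguchi analysis ranks parameter settings via signal-to-noise ratios; the two standard forms used for these responses can be sketched directly (material removal rate is larger-the-better, while roughness and overcut are smaller-the-better). The replicated measurements below are synthetic:

```python
import math

# Standard Taguchi signal-to-noise ratios (in dB) for replicated responses.

def sn_larger_is_better(ys):
    """For responses to maximise, e.g. material removal rate."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    """For responses to minimise, e.g. surface roughness or overcut."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))

# Illustrative replicated measurements for one orthogonal-array run
sn_mrr = sn_larger_is_better([10.0, 10.0, 10.0])   # 20 dB
sn_ra = sn_smaller_is_better([1.0, 1.0, 1.0])      # 0 dB
```

The factor level with the highest mean S/N across the L27 runs is taken as optimal for each response; the genetic algorithm then handles the multi-objective compromise between the responses.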
Higton, D M
2001-01-01
An improvement to the procedure for the rapid optimisation of mass spectrometry (PROMS), for the development of multiple reaction monitoring (MRM) methods for quantitative bioanalytical liquid chromatography/tandem mass spectrometry (LC/MS/MS), is presented. PROMS is an automated protocol that uses flow-injection analysis (FIA) and AppleScripts to create methods and acquire the data for optimisation. The protocol determines the optimum orifice potential and the MRM conditions for each compound, and finally creates the MRM methods needed for sample analysis. The sensitivities of the MRM methods created by PROMS approach those created manually. MRM method development using PROMS currently takes less than three minutes per compound, compared to at least fifteen minutes manually. To further enhance throughput, approaches to MRM optimisation using one injection per compound, two injections per pool of five compounds, and one injection per pool of five compounds have been investigated. No significant difference in the optimised instrumental parameters for the MRM methods was found between the original PROMS approach and these new methods, which are up to ten times faster. The time taken for an AppleScript to determine the optimum conditions and build the MRM methods is the same with all approaches. Copyright 2001 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Bruneel, David; Kearsley, Andrew; Karnakis, Dimitris
2015-07-01
In this work we present picosecond DPSS laser surface texturing optimisation of automotive-grade cast iron. This application attracts great interest, particularly in the automotive industry, for reducing friction between moving piston parts in car engines in order to decrease fuel consumption. This is accomplished by partially covering the inner surface of a piston liner with shallow microgrooves, and is currently a production process adopting much longer-pulse (microsecond) DPSS lasers. Lubricated interface conditions of moving parts require the laser process to produce a strictly controlled surface topography around the laser-formed grooves, whose edge burr height must be lower than 100 nm. To achieve such a strict tolerance, laser machining of cast iron was investigated using an infrared DPSS picosecond laser (10 ps pulse duration) with an output power of 16 W and a repetition rate of 200 kHz. The ultrashort pulses are believed to provide much better thermal management of the etching process. All studies presented here were performed on flat samples in ambient air, but the process is transferable to cylindrical-geometry engine liners. We show that reducing the edge burr significantly below an acceptable limit for lubricated engine production is possible with such lasers, and remarkably the process window lies at irradiated fluences much higher than the single-pulse ablation threshold. This detailed experimental work highlights the close relationship between the optimised laser irradiation conditions and process strategy and the final size of the undesirable edge burrs. The optimised process conditions are compatible with an industrial production process and show the potential for removing extra post-processing steps (honing, etc.) of cylinder liners on the manufacturing line, saving time and cost.
Modeling and simulation of five-axis virtual machine based on NX
NASA Astrophysics Data System (ADS)
Li, Xiaoda; Zhan, Xianghui
2018-04-01
Virtual technology plays a growing role in the machinery manufacturing industry. In this paper, Siemens NX software is used to model a virtual CNC machine tool, and the parameters of the virtual machine are defined according to the actual parameters of the machine tool so that virtual simulation can be carried out without loss of simulation accuracy. How to use the Machine Builder of the CAM module to define the kinematic chain and machine components is described. Simulation on the virtual machine can provide users with alarm information about tool collision and over-cutting during the process, and can evaluate and forecast the rationality of the technological process.
NASA Astrophysics Data System (ADS)
Lewis, N. J.; Anderson, P. I.; Gao, Y.; Robinson, F.
2018-04-01
This paper reports the development of a measurement probe which couples local flux density measurements obtained using the needle probe method with the local magnetising field obtained via a Hall effect sensor. This determines the variation in magnetic properties, including power loss and permeability, at increasing distances from the punched edge of 2.4% and 3.2% Si non-oriented electrical steel samples. Improved characterisation of the magnetic properties of electrical steels would aid in optimising the efficiency of electric machine designs.
Preliminary Review of Psychophysiological Technologies to Support Multimodal UAV Interface Design
2010-05-01
…information and reduce operator workload, and thus optimise the performance of the human-machine system. Monitoring technologies… © Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2010
IEEE 1982. Proceedings of the international conference on cybernetics and society
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-01-01
The following topics were dealt with: knowledge-based systems; risk analysis; man-machine interactions; human information processing; metaphor, analogy and problem-solving; manual control modelling; transportation systems; simulation; adaptive and learning systems; biocybernetics; cybernetics; mathematical programming; robotics; decision support systems; analysis, design and validation of models; computer vision; systems science; energy systems; environmental modelling and policy; pattern recognition; nuclear warfare; technological forecasting; artificial intelligence; the Turin shroud; optimisation; workloads. Abstracts of individual papers can be found under the relevant classification codes in this or future issues.
On the performance of energy detection-based CR with SC diversity over IG channel
NASA Astrophysics Data System (ADS)
Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka
2017-12-01
Cognitive radio (CR) is a viable 5G technology for addressing spectrum scarcity. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of energy detection-based spectrum sensing in CR networks over an inverse Gaussian channel with selection combining diversity is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, with a single channel (no diversity) and with diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of the analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
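The threshold-optimisation step can be sketched under the common Gaussian approximation of an N-sample energy detector with unit noise power; the parameter values are illustrative, and the inverse-Gaussian shadowing averaging and diversity combining of the paper are omitted:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p_fa(thresh, n):
    """False-alarm probability, Gaussian approximation, unit noise power."""
    return qfunc((thresh - n) / math.sqrt(2.0 * n))

def p_d(thresh, n, snr):
    """Detection probability under the same approximation."""
    return qfunc((thresh - n * (1.0 + snr))
                 / math.sqrt(2.0 * n * (1.0 + 2.0 * snr)))

def optimal_threshold(n=100, snr=0.5):
    """Grid-scan the threshold minimising total error Pe = Pfa + (1 - Pd),
    the criterion named in the abstract."""
    grid = [n + 0.5 * i for i in range(200)]
    return min(grid, key=lambda t: p_fa(t, n) + 1.0 - p_d(t, n, snr))

t_opt = optimal_threshold()
```

In the paper this minimisation is additionally averaged over the fading/shadowing statistics and repeated per number of diversity branches; the scan above shows only the core trade-off between the two error types.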
NASA Astrophysics Data System (ADS)
Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.
2017-03-01
Nowadays quality plays a vital role in all products. Hence, developments in manufacturing processes focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters was performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized using three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and analysis of variance is used to find the significance of the individual parameters. Simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
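The grey relational analysis step can be sketched as follows: normalise each response toward its ideal direction, convert the deviations into grey relational coefficients, and average them into a single grade per experiment. The response values below are synthetic:

```python
# Hedged sketch of grey relational analysis (GRA) with the customary
# distinguishing coefficient zeta = 0.5. Input data are synthetic.

def grey_relational_grades(runs, larger_better, zeta=0.5):
    """runs: one response vector per experiment, e.g. (MRR, Ra).
    larger_better: per-response flag (True for MRR, False for Ra)."""
    cols = list(zip(*runs))
    norm_cols = []
    for col, lb in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        norm_cols.append([(x - lo) / (hi - lo) if lb else (hi - x) / (hi - lo)
                          for x in col])
    grades = []
    for row in zip(*norm_cols):
        # deviation from the ideal (=1) is 1 - x; coefficient lies in (0, 1]
        coeffs = [zeta / ((1.0 - x) + zeta) for x in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Three synthetic drilling runs: (material removal rate, surface roughness)
grades = grey_relational_grades(
    [(10.0, 3.0), (20.0, 2.0), (15.0, 2.5)],
    larger_better=(True, False))
```

Run 2 dominates both responses, so it receives the top grade of 1.0; ranking runs by grade is how GRA collapses the multi-response problem into a single optimisation.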
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demerdash, N.A.; Nehl, T.W.; Nyamusa, T.A.
1985-08-01
Effects of high momentary overloads on the samarium-cobalt and strontium-ferrite permanent magnets and the magnetic field in electronically commutated brushless dc machines, as well as their impact on the associated machine parameters, were studied. The effect of overload on the machine parameters, and subsequently on machine system performance, was also investigated. This was accomplished through the combined use of finite element analysis of the magnetic field in such machines, perturbation of the magnetic energies to determine machine inductances, and dynamic simulation of the performance of brushless dc machines energized from voltage source inverters. These effects were investigated through application of the above methods to two equivalent 15 hp brushless dc motors, one built with samarium-cobalt magnets and the other with strontium-ferrite magnets. For momentary overloads as high as 4.5 p.u., magnet flux reductions of 29% and 42% of the no-load flux were obtained in the samarium-cobalt and strontium-ferrite machines, respectively. Corresponding reductions in the line-to-line armature inductances of 52% and 46% of the no-load values were found for the samarium-cobalt and strontium-ferrite cases, respectively. The overload affected the profiles and magnitudes of the armature induced back-emfs. The effects of overload on machine parameters were found to have a significant impact on the performance of the machine systems; the findings indicate that the samarium-cobalt unit is better suited to higher overload duties than the strontium-ferrite machine.
Analysis and optimization of machining parameters of laser cutting for polypropylene composite
NASA Astrophysics Data System (ADS)
Deepa, A.; Padmanabhan, K.; Kuppan, P.
2017-11-01
The present work describes machining of a self-reinforced polypropylene composite fabricated using the hot compaction method. The objective of the experiment is to find optimum machining parameters for polypropylene (PP). Laser power and machining speed were the parameters considered, with tensile and flexural test results as responses. The Taguchi method is used for experimentation, Grey Relational Analysis (GRA) for multiple process parameter optimization, and ANOVA (analysis of variance) to find the impact of each process parameter. Polypropylene has wide application in various fields: it is used as foam in model aircraft and other radio-controlled vehicles, as thin sheets (∼2-20 μm) used as a dielectric, in piping systems, and in meshes for hernia and pelvic organ repair or to prevent new hernias at the same location.
Analysis of the shrinkage at the thick plate part using response surface methodology
NASA Astrophysics Data System (ADS)
Hatta, N. M.; Azlan, M. Z.; Shayfull, Z.; Roselina, S.; Nasir, S. M.
2017-09-01
Injection moulding is a well-known manufacturing process, especially for producing plastic products. To ensure final product quality, many precautions must be taken, such as setting the parameters correctly at the initial stage of the process. If these parameters are set up wrongly, defects may occur; one of the best-known defects in the injection moulding process is shrinkage. To overcome this problem, the parameter settings must be optimally adjusted, and this paper focuses on analysing shrinkage by optimising the parameters for a thick plate part with the help of Response Surface Methodology (RSM) and ANOVA analysis. In previous studies, the most influential parameter identified by optimisation methods for minimising shrinkage of the moulded part was packing pressure. Therefore, with reference to the previous literature, packing pressure was selected as a parameter for this study along with three other parameters: melt temperature, cooling time and mould temperature. The analysis of the process was obtained from simulation in Autodesk Moldflow Insight (AMI) software, and the material used for the moulded part was Acrylonitrile Butadiene Styrene (ABS). The results show that shrinkage can be minimised, and the significant parameters were found to be packing pressure, mould temperature and melt temperature.
Machining of bone: Analysis of cutting force and surface roughness by turning process.
Noordin, M Y; Jiawkok, N; Ndaruhadi, P Y M W; Kurniawan, D
2015-11-01
There are millions of orthopedic surgeries and dental implantation procedures performed every year globally. Most of them involve machining of bones and cartilage. However, theoretical and analytical study of bone machining lags behind its practice and implementation. This study views bone machining as a machining process with bovine bone as the workpiece material. Turning, which forms the basis of the drilling process actually used in practice, was investigated experimentally. The focus is on evaluating the effects of three machining parameters, that is, cutting speed, feed, and depth of cut, on the machining responses, that is, the cutting forces and surface roughness resulting from the turning process. Response surface methodology was used to quantify the relation between the machining parameters and the machining responses. The turning process was performed at various cutting speeds (29-156 m/min), depths of cut (0.03-0.37 mm), and feeds (0.023-0.11 mm/rev). Empirical models of the resulting cutting force and surface roughness as functions of cutting speed, depth of cut, and feed were developed. Within the range of machining parameters evaluated, the developed empirical models show that the most influential machining parameter on the cutting force is depth of cut, followed by feed and cutting speed. The lowest cutting force was obtained at the lowest cutting speed, lowest depth of cut, and highest feed setting. For surface roughness, feed is the most significant machining condition, followed by cutting speed, while depth of cut showed no effect. The finest surface finish was obtained at the lowest cutting speed and feed setting. © IMechE 2015.
Santonastaso, Giovanni Francesco; Bortone, Immacolata; Chianese, Simeone; Di Nardo, Armando; Di Natale, Michele; Erto, Alessandro; Karatza, Despina; Musmarra, Dino
2017-09-19
This paper presents a method to optimise a discontinuous permeable adsorptive barrier (PAB-D). The method is based on the comparison of different PAB-D configurations obtained by changing some of the main PAB-D design parameters. In particular, the well diameters, the distance between two consecutive passive wells and the distance between two consecutive well lines were varied, and a cost analysis for each configuration was carried out in order to define the best performing and most cost-effective PAB-D configuration. As a case study, a benzene-contaminated aquifer located in an urban area in the north of Naples (Italy) was considered. The PAB-D configuration with a well diameter of 0.8 m proved to be the best layout in terms of performance and cost-effectiveness. Moreover, in order to identify the best configuration for the remediation of the aquifer studied, a comparison with a continuous permeable adsorptive barrier (PAB-C) was added. In particular, the optimised PAB-D showed a 40% reduction in total remediation costs compared with the PAB-C.
NASA Astrophysics Data System (ADS)
Rahman, Abdul Ghaffar Abdul; Noroozi, Siamak; Dupac, Mihai; Mahathir Syed Mohd Al-Attas, Syed; Vinney, John E.
2013-03-01
Complex rotating machinery requires regular condition monitoring inspections to assess running condition and structural integrity and so prevent catastrophic failures. Machine failures can be divided into two categories. The first is wear and tear during operation, ranging from bearing defects, gear damage, misalignment and imbalance to mechanical looseness, for which simple condition-based maintenance techniques can easily detect the root cause and trigger a remedial action process. The second is inherent design faults, which usually arise for reasons such as improper installation, poor servicing, bad workmanship and structural dynamics design deficiency. In fact, individual machine components are generally dynamically well designed and rigorously tested. However, when these machines are assembled on site and linked together, their dynamic characteristics change, causing unexpected system behaviour. Since non-destructive evaluation provides an excellent alternative to classical monitoring and has proved attractive because it permits reliable assessment of all types of machinery, the novel dynamic design verification procedure proposed here - based on the combination of in-service operating deflection shape measurement, experimental modal analysis and iterative inverse finite element analysis - allows quick identification of structural weaknesses and helps to provide and verify solutions.
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open up the use of big clinical data, increasing the ability to foster biomedical discovery and improve care.
NASA Astrophysics Data System (ADS)
Haikal Ahmad, M. A.; Zulafif Rahim, M.; Fauzi, M. F. Mohd; Abdullah, Aslam; Omar, Z.; Ding, Songlin; Ismail, A. E.; Rasidi Ibrahim, M.
2018-01-01
Polycrystalline diamond (PCD) is regarded as among the hardest materials in the world. Electrical discharge machining (EDM) is typically used to machine this material because of its non-contact nature. This investigation was carried out to compare the EDM performance on PCD of a conventional copper (Cu) electrode and a newly proposed graphitisation-catalyst electrode of copper-nickel (CuNi). A two-level full factorial design of experiments with 4 centre points was used to study the main and interaction effects of the machining parameters, namely pulse-on time, pulse-off time, sparking current, and electrode material (a categorical factor). The paper reports an interesting finding: the newly proposed electrode had a positive impact on machining performance. With the same finishing parameters, CuNi delivered more than 100% improvement in Ra and MRR over the ordinary Cu electrode.
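In a two-level factorial design, main and interaction effects are simple contrasts of run averages. A sketch with a hypothetical 2^2 slice of such a design (the factor subset and MRR values are invented, not the paper's measurements):

```python
import numpy as np

# Coded 2^2 slice of a design: columns = pulse-on, current (-1/+1),
# response = hypothetical MRR values.
design = np.array([
    [-1, -1],
    [ 1, -1],
    [-1,  1],
    [ 1,  1],
])
mrr = np.array([0.8, 1.1, 1.4, 2.0])

# Main effect of a factor = mean response at +1 minus mean response at -1.
effects = {
    name: mrr[design[:, j] == 1].mean() - mrr[design[:, j] == -1].mean()
    for j, name in enumerate(["pulse_on", "current"])
}

# Interaction effect uses the product of the two coded columns.
prod = design.prod(axis=1)
interaction = mrr[prod == 1].mean() - mrr[prod == -1].mean()
```

Centre points do not enter these contrasts; they are added to estimate pure error and test for curvature.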
ATLAS software configuration and build tool optimisation
NASA Astrophysics Data System (ADS)
Rybkin, Grigory; Atlas Collaboration
2014-06-01
The ATLAS software code base comprises over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and the project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build-time performance, which was optimised through several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at the package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., parallelising the build of independent packages. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can also generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS.
The use of parallelism, caching and code optimisation reduced software build time and environment setup time several-fold, increased the efficiency of multi-core computing resource utilisation, and considerably improved the software developer and user experience.
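Package-level build parallelism of the kind described amounts to building each dependency level concurrently. A toy sketch with an invented dependency graph (the package names and `build` stub are illustrative, not ATLAS packages or the real CMT command):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy dependency graph: package -> set of packages it depends on.
deps = {
    "Core": set(),
    "Event": {"Core"},
    "Tracking": {"Core"},
    "Analysis": {"Event", "Tracking"},
}

def build(pkg):
    # Stand-in for invoking the real build command on one package.
    return f"built {pkg}"

# Build independent packages in parallel, level by level:
# a package is ready once all of its dependencies have been built.
built, log = set(), []
while len(built) < len(deps):
    ready = [p for p in deps if p not in built and deps[p] <= built]
    with ThreadPoolExecutor() as pool:
        log.extend(pool.map(build, ready))
    built.update(ready)
```

Here "Event" and "Tracking" are built concurrently, while "Core" must finish first and "Analysis" last, mirroring the level-by-level scheduling described above.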
NASA Astrophysics Data System (ADS)
Prasanna, J.; Rajamanickam, S.; Amith Kumar, O.; Karthick Raj, G.; Sathya Narayanan, P. V. V.
2017-05-01
In this paper, Ti-6Al-4V is used as the workpiece material; it is widely used in a variety of fields including the medical, chemical, marine, automotive, aerospace, aviation and electronics industries, nuclear reactors, consumer products, etc. Conventional machining of Ti-6Al-4V is very difficult due to its distinctive properties, so electrical discharge machining (EDM) is the right choice for this material. A tungsten-copper composite is employed as the tool material. Gap voltage, peak current, pulse-on time and duty factor are considered as the machining parameters for analysing the machining characteristics material removal rate (MRR) and tool wear rate (TWR). The Taguchi method is used to find the significant EDM parameters. It is found that the significant parameters for MRR rank in the order gap voltage, pulse-on time, peak current and duty factor, while for TWR they rank in the order gap voltage, duty factor, peak current and pulse-on time.
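Taguchi analysis typically converts replicated responses into signal-to-noise (S/N) ratios before ranking parameters: larger-the-better for MRR, smaller-the-better for TWR. A minimal sketch (the replicate values are invented):

```python
import numpy as np

def sn_larger_the_better(y):
    # For responses like MRR that should be maximised.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    # For responses like TWR that should be minimised.
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical replicated measurements from one EDM run.
mrr_sn = sn_larger_the_better([4.2, 4.5, 4.1])   # mm^3/min replicates
twr_sn = sn_smaller_the_better([0.12, 0.10, 0.11])  # mm^3/min replicates
```

Averaging the S/N ratio per factor level and comparing the ranges of those averages is what produces the significance ordering reported for MRR and TWR.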
Burgansky-Eliash, Zvia; Wollstein, Gadi; Chu, Tianjiao; Ramsey, Joseph D.; Glymour, Clark; Noecker, Robert J.; Ishikawa, Hiroshi; Schuman, Joel S.
2007-01-01
Purpose Machine-learning classifiers are trained computerized systems with the ability to detect the relationship between multiple input parameters and a diagnosis. The present study investigated whether the use of machine-learning classifiers improves optical coherence tomography (OCT) glaucoma detection. Methods Forty-seven patients with glaucoma (47 eyes) and 42 healthy subjects (42 eyes) were included in this cross-sectional study. Of the glaucoma patients, 27 had early disease (visual field mean deviation [MD] ≥ −6 dB) and 20 had advanced glaucoma (MD < −6 dB). Machine-learning classifiers were trained to discriminate between glaucomatous and healthy eyes using parameters derived from OCT output. The classifiers were trained with all 38 parameters as well as with only 8 parameters that correlated best with the visual field MD. Five classifiers were tested: linear discriminant analysis, support vector machine, recursive partitioning and regression tree, generalized linear model, and generalized additive model. For the last two classifiers, a backward feature selection was used to find the minimal number of parameters that resulted in the best and most simple prediction. The cross-validated receiver operating characteristic (ROC) curve and accuracies were calculated. Results The largest area under the ROC curve (AROC) for glaucoma detection was achieved with the support vector machine using eight parameters (0.981). The sensitivity at 80% and 95% specificity was 97.9% and 92.5%, respectively. This classifier also performed best when judged by cross-validated accuracy (0.966). The best classification between early glaucoma and advanced glaucoma was obtained with the generalized additive model using only three parameters (AROC = 0.854). Conclusions Automated machine classifiers of OCT data might be useful for enhancing the utility of this technology for detecting glaucomatous abnormality. PMID:16249492
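The area under the ROC curve used to compare the classifiers can be computed directly from classifier scores via the Mann-Whitney statistic. A self-contained sketch with invented decision values (not the study's OCT data):

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a randomly
    chosen positive case outscores a randomly chosen negative case;
    ties count as half a win."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical decision values from a trained classifier:
glaucoma = [0.9, 0.8, 0.75, 0.6]   # positive class
healthy = [0.7, 0.4, 0.3, 0.2]     # negative class
auc = roc_auc(glaucoma, healthy)   # one mis-ranked pair out of 16
```

An AUC of 1.0 means perfect separation and 0.5 means chance-level ranking, which is the scale on which the reported 0.981 for the support vector machine should be read.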
NASA Astrophysics Data System (ADS)
Mallick, S.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2016-07-01
This paper proposes a novel hybrid optimisation algorithm which combines the recently proposed evolutionary algorithm Backtracking Search Algorithm (BSA) with another widely accepted evolutionary algorithm, Differential Evolution (DE). The proposed algorithm, called BSA-DE, is employed for the optimal design of two commonly used analogue circuits, namely a Complementary Metal Oxide Semiconductor (CMOS) differential amplifier circuit with current mirror load and a CMOS two-stage operational amplifier (op-amp) circuit. BSA has a simple structure that is effective, fast and capable of solving multimodal problems. DE is a stochastic, population-based heuristic approach capable of solving global optimisation problems. In this paper, the transistor sizes are optimised using the proposed BSA-DE to minimise the areas occupied by the circuits and to improve the circuits' performance. The simulation results demonstrate the superiority of BSA-DE in global convergence and fine-tuning ability, and show it to be a promising candidate for the optimal design of analogue CMOS amplifier circuits: for both amplifier circuits it outperforms DE, harmony search (HS), artificial bee colony (ABC) and particle swarm optimisation (PSO) in terms of convergence speed, design specifications and design parameters. The BSA-DE-based design technique yields the least MOS transistor area for each amplifier circuit, and each designed circuit is shown to have the best performance parameters, such as gain and power dissipation, compared with those of other recently reported work.
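The DE half of the hybrid is compact enough to sketch. The code below implements the classic DE/rand/1/bin scheme on a stand-in objective (the sphere function); the population size, control parameters and objective are illustrative, not the paper's transistor-sizing cost:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=1):
    # Classic DE/rand/1/bin on a box-constrained objective.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Three distinct donors, none equal to the target index i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # force at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))   # clip to bounds
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:            # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Minimise a 3-D sphere function as a stand-in for a circuit-area objective.
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

In the hybrid, BSA's historical-population mutation would replace or interleave with the rand/1 donor step; the selection and crossover machinery is shared.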
Identification of Synchronous Machine Stability Parameters: An On-Line Time-Domain Approach.
NASA Astrophysics Data System (ADS)
Le, Loc Xuan
1987-09-01
A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single -machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
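The SVD-based least-squares step and the condition-number check can be sketched as follows. The regression matrix and "true" parameters here are synthetic stand-ins for the terminal measurements, not the machine models of the thesis:

```python
import numpy as np

# Hypothetical regression: estimate model parameters theta from
# noisy measurements, A @ theta ≈ y.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])
A = rng.standard_normal((100, 3))
y = A @ theta_true + 0.01 * rng.standard_normal(100)

# Solve via the SVD pseudo-inverse for numerical stability.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
theta_hat = Vt.T @ ((U.T @ y) / s)

# Condition number = ratio of extreme singular values; a large value
# signals an over-parameterised model whose parameters cannot be
# identified reliably from the data.
cond = s[0] / s[-1]
```

This is exactly the diagnostic the thesis uses: a well-conditioned system justifies the simpler machine model, while a large condition number indicates that the advanced (subtransient) model is not identifiable from the available data.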
NASA Astrophysics Data System (ADS)
Schmidt, S.; Heyns, P. S.; de Villiers, J. P.
2018-02-01
In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox, are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and, based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models is statistically combined to generate a discrepancy signal, which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and for fault trending over time. The proposed methodology is validated on experimental data, and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
Viscoelastic property tuning for reducing noise radiated by switched-reluctance machines
NASA Astrophysics Data System (ADS)
Millithaler, Pierre; Dupont, Jean-Baptiste; Ouisse, Morvan; Sadoulet-Reboul, Émeline; Bouhaddi, Noureddine
2017-10-01
Switched-reluctance motors (SRM) present major acoustic drawbacks that hinder their use in electric vehicles in spite of widely acknowledged robustness and low manufacturing costs. Unlike other types of electric machines, an SRM stator is completely encapsulated/potted with a viscoelastic resin. By taking advantage of the high damping capacity that a viscoelastic material has in certain temperature and frequency ranges, this article proposes a tuning methodology for reducing the noise emitted by an SRM in operation. After introducing the aspects the tuning process focuses on, the article details a concrete application consisting of computing representative electromagnetic excitations and then the structural response of the stator, including equivalent radiated power levels. An optimised viscoelastic material is determined, with which the peak radiated levels are reduced by up to 10 dB in comparison to the initial state. This methodology is implementable for concrete industrial applications as it only relies on common commercial finite-element solvers.
The Effects of Operational Parameters on a Mono-wire Cutting System: Efficiency in Marble Processing
NASA Astrophysics Data System (ADS)
Yilmazkaya, Emre; Ozcelik, Yilmaz
2016-02-01
Mono-wire block cutting machines that cut with a diamond wire can be used for squaring natural stone blocks and for the slab-cutting process. The efficient use of these machines reduces operating costs by ensuring less diamond wire wear and longer wire life at high speeds. Because the investment costs of these machines are high, their efficient use is essential for reducing production costs and increasing plant efficiency. Therefore, there is a need to investigate the cutting performance parameters of mono-wire cutting machines in terms of rock properties and operating parameters. This study investigates the effects of the wire rotational speed (peripheral speed) and wire descending speed (cutting speed), which are the operating parameters of a mono-wire cutting machine, on unit wear and unit energy, which are the performance parameters in mono-wire cutting. Using the obtained results, cuttability charts for each natural stone were created on the basis of unit wear and unit energy values, cutting optimizations were performed, and the relationships between some physical and mechanical properties of the rocks and the optimum cutting parameters obtained from the optimization were investigated.
Research on intrusion detection based on Kohonen network and support vector machine
NASA Astrophysics Data System (ADS)
Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi
2018-05-01
Support vector machines applied directly to network intrusion detection systems suffer from low detection accuracy and long detection times. Optimising the SVM parameters can greatly improve detection accuracy, but the long optimisation time makes this impractical for high-speed networks. A method based on Kohonen neural network feature selection is therefore proposed to reduce the parameter optimisation time of the support vector machine. First, the weights of the KDD99 network intrusion data are calculated by a Kohonen network and features are selected by weight. Then, after feature selection is completed, a genetic algorithm (GA) and the grid search method are used for parameter optimisation to find appropriate parameters, and the data are classified by support vector machines. Comparative experiments show that feature selection reduces the parameter optimisation time with little influence on classification accuracy. The experiments suggest that the support vector machine can be used in network intrusion detection systems and reduces the miss rate.
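The grid-search step amounts to evaluating each hyper-parameter setting by cross-validation and keeping the best. A dependency-free sketch, with a k-nearest-neighbour classifier standing in for the SVM and a grid over the neighbourhood size standing in for the (C, gamma) grid; the two-cluster "intrusion vs normal" data are invented, not KDD99:

```python
import random

def knn_predict(x, train, k):
    # train: list of (features, label); majority vote over k nearest points.
    near = sorted(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(x, t[0])))[:k]
    labels = [lab for _, lab in near]
    return max(set(labels), key=labels.count)

def cv_accuracy(data, k, folds=5):
    # k-fold cross-validated accuracy for one hyper-parameter setting.
    idx = list(range(len(data)))
    random.Random(0).shuffle(idx)
    hit = 0
    for f in range(folds):
        test = idx[f::folds]
        train = [data[i] for i in idx if i not in test]
        hit += sum(knn_predict(data[i][0], train, k) == data[i][1] for i in test)
    return hit / len(data)

# Toy two-cluster data: label 0 = normal traffic, label 1 = attack.
normal = [((random.Random(i).gauss(0, 0.3),
            random.Random(i + 99).gauss(0, 0.3)), 0) for i in range(20)]
attack = [((random.Random(i).gauss(3, 0.3),
            random.Random(i + 99).gauss(3, 0.3)), 1) for i in range(20)]
data = normal + attack

# Grid search: pick the hyper-parameter with the best CV accuracy.
best_k = max([1, 3, 5, 7], key=lambda k: cv_accuracy(data, k))
```

Feature selection shrinks the dimensionality of `data` before this loop, which is why it shortens the grid search without changing its structure.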
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate–carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model–data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. Furthermore, the new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
A Concept for Optimizing Behavioural Effectiveness & Efficiency
NASA Astrophysics Data System (ADS)
Barca, Jan Carlo; Rumantir, Grace; Li, Raymond
Both humans and machines exhibit strengths and weaknesses that can be enhanced by merging the two entities. This research aims to provide a broader understanding of how closer interaction between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments more effectively than any system known today. To demonstrate this concept, we have developed a simulation in which a semi-interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through three different control systems that can be used to assist in optimising overall goal-directed performance. The first two control systems use an onscreen button interface and a touch sensor, respectively, to facilitate human navigation of the spider. The third is an autonomous navigation system based on machine intelligence embedded in the spider, which enables it to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra-sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, and a tighter symbiosis between man and machine can support an upgrade of the human form.
NASA Astrophysics Data System (ADS)
Selva Bhuvaneswari, K.; Geetha, P.
2017-05-01
Magnetic resonance imaging segmentation refers to the process of assigning labels to sets of pixels or multiple regions. It plays a major role in biomedical applications, as it is widely used by radiologists to segment input medical images into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The segmentation process of the proposed work comprises three phases: a threshold generation with dynamic modified region growing phase, a texture feature generation phase and a region merging phase. In the first phase, the input image undergoes dynamic modified region growing, in which the two thresholds of the modified region growing approach are changed dynamically; the firefly optimisation algorithm helps to optimise these two thresholds. After the region-grown segmented image is obtained, the edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results of the texture feature generation phase are combined with the results of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After the abnormal tissues are identified, classification is performed by a hybrid kernel-based SVM (Support Vector Machine). The performance analysis of the proposed method is carried out by k-fold cross-validation. The proposed method is implemented in MATLAB with various images.
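Thresholded region growing, the core of the first phase, can be sketched in a few lines. The sketch below uses fixed thresholds, 4-connectivity and a toy "image"; in the actual method the two thresholds are adapted dynamically via the firefly algorithm:

```python
from collections import deque

def region_grow(img, seed, lo, hi):
    # Grow a 4-connected region from `seed`, accepting pixels whose
    # intensity lies within the two thresholds [lo, hi].
    rows, cols = len(img), len(img[0])
    seen, region = {seed}, []
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if lo <= img[r][c] <= hi:
            region.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    q.append((nr, nc))
    return region

# Toy 4x4 "image": a bright blob in the upper-left corner.
img = [
    [9, 9, 1, 1],
    [9, 8, 1, 1],
    [1, 1, 1, 1],
    [1, 1, 1, 2],
]
blob = region_grow(img, (0, 0), 8, 9)   # grows over the four bright pixels
```

Choosing `lo` and `hi` well is exactly what decides whether the grown region matches the tumour boundary, which is why the method treats the pair as an optimisation variable.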
Optimisation of logistics processes of energy grass collection
NASA Astrophysics Data System (ADS)
Bányai, Tamás.
2010-05-01
The collection of energy grass is a logistics-intensive process [1]. The optimal design and control of transportation and collection subprocesses is a critical point of the supply chain. To avoid irresponsible decisions made by right of experience and intuition alone, the optimisation and analysis of collection processes based on mathematical models and methods is the scientifically defensible way. Within the frame of this work, the author focuses on the optimisation possibilities of the collection processes, especially from the point of view of transportation and related warehousing operations. The optimisation methods developed in the literature [2] take into account the harvesting processes, county-specific yields, transportation distances, erosion constraints, machinery specifications, and other key variables, but the possibility of multiple collection points and multi-level collection was not taken into consideration. The possible areas of using energy grass are very wide (energy use, biogas and bio-alcohol production, paper and textile industry, industrial fibre material, foddering purposes, biological soil protection [3], etc.), so not only a single-level but also a multi-level collection system with several collection and production facilities has to be taken into consideration. The input parameters of the optimisation problem are the following: total amount of energy grass to be harvested in each region; specific facility costs of collection, warehousing and production units; specific costs of transportation resources; pre-scheduling of the harvesting process; specific transportation and warehousing costs; pre-scheduling of the processing of energy grass at each facility (exclusive warehousing). The model takes into consideration the following assumptions: (1) cooperative relations among processing and production facilities; (2) capacity constraints are not ignored; (3) the cost function of transportation is non-linear; (4) the drivers' conditions are ignored.
The objective function of the optimisation is the maximisation of profit, i.e., the maximisation of the difference between revenue and cost. The objective function trades off the income of the assigned transportation demands against the logistic costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is no less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total costs of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequence is optimised with an ant colony algorithm: the optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. An important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc",
with support from the European Union and co-funding from the European Social Fund. References [1] P. R. Daniel: The Economics of Harvesting and Transporting Corn Stover for Conversion to Fuel Ethanol: A Case Study for Minnesota. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/14213.html [2] T. G. Douglas, J. Brendan, D. Erin & V.-D. Becca: Energy and Chemicals from Native Grasses: Production, Transportation and Processing Technologies Considered in the Northern Great Plains. University of Minnesota, Department of Applied Economics, 2006. http://ideas.repec.org/p/ags/umaesp/13838.html [3] Homepage of energy grass: www.energiafu.hu
Apparatus and method for fluid analysis
Wilson, Bary W.; Peters, Timothy J.; Shepard, Chester L.; Reeves, James H.
2004-11-02
The present invention is an apparatus and method for analyzing a fluid used in a machine or in an industrial process line. The apparatus has at least one meter placed proximate the machine or process line and in contact with the machine or process fluid for measuring at least one parameter related to the fluid. The at least one parameter is a standard laboratory analysis parameter. The at least one meter includes but is not limited to viscometer, element meter, optical meter, particulate meter, and combinations thereof.
NASA Astrophysics Data System (ADS)
Wang, Xu; Bi, Fengrong; Du, Haiping
2018-05-01
This paper aims to develop a 5-degree-of-freedom driver and seating system model for optimal vibration control. A new method for identifying the driver seating system parameters from experimental vibration measurements has been developed. A parameter sensitivity analysis has been conducted considering random excitation frequency and system parameter uncertainty, and the most and least sensitive system parameters for the transmissibility ratio have been identified. Optimised PID controllers have been developed to reduce the driver's body vibration.
Investigations on the machinability of Waspaloy under dry environment
NASA Astrophysics Data System (ADS)
Deepu, J.; Kuppan, P.; Balan, A. S. S.; Oyyaravelu, R.
2016-09-01
Nickel-based superalloy Waspaloy is extensively used in the gas turbine, aerospace and automobile industries because of its unique combination of properties, such as high strength at elevated temperatures, resistance to chemical degradation and excellent wear resistance in many hostile environments. It is considered one of the most difficult-to-machine superalloys due to excessive tool wear and poor surface finish. The present paper is an attempt to remove cutting fluids from the turning of Waspaloy and thereby make the process environmentally safe. For this purpose, the effects of machining parameters such as cutting speed and feed rate on cutting force, cutting temperature, surface finish and tool wear were investigated. Response Surface Methodology (RSM) was used to develop and analyse a mathematical model describing the relationship between the machining parameters and the output variables. Subsequently, ANOVA was used to check the adequacy of the regression model as well as the significance of each machining variable. The optimal cutting parameters were determined by multi-response optimisation using the composite desirability approach, minimising cutting force, average surface roughness and maximum flank wear. The experimental results show that, when Waspaloy is machined with a coated carbide tool within specific parameter ranges, cutting fluid can be completely removed from the machining process.
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloys have a unique capability to return to their original shape after physical deformation upon the application of heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kgf was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi’s signal-to-noise ratio. A confirmation test was conducted to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
Use of a genetic algorithm to improve the rail profile on Stockholm underground
NASA Astrophysics Data System (ADS)
Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon
2010-12-01
In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
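The roulette wheel method mentioned above is fitness-proportionate selection. A minimal sketch of that selection step (not the authors' Gensys-coupled implementation; the candidate names and toy fitness values are illustrative):

```python
import random

def roulette_select(population, fitnesses, rng):
    """Fitness-proportionate (roulette wheel) selection: an individual is
    chosen with probability fitness / total fitness."""
    total = sum(fitnesses)
    pick = rng.uniform(0.0, total)
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if pick <= cumulative:
            return individual
    return population[-1]  # guard against floating-point round-off

rng = random.Random(42)
profiles = ["A", "B", "C"]      # hypothetical candidate rail profiles
fitness = [1.0, 3.0, 6.0]       # e.g. inverted penalty indices
counts = {p: 0 for p in profiles}
for _ in range(10000):
    counts[roulette_select(profiles, fitness, rng)] += 1
# profile "C" (fitness 6/10) should be selected most often
```

In a GA such as the one described, the selected individuals would then undergo crossover and mutation before the next round of vehicle dynamics simulations.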
On the optimisation of the use of 3He in radiation portal monitors
NASA Astrophysics Data System (ADS)
Tomanin, Alice; Peerani, Paolo; Janssens-Maenhout, Greet
2013-02-01
Radiation Portal Monitors (RPMs) are used to detect illicit trafficking of nuclear or other radioactive material concealed in vehicles, cargo containers or people at strategic check points, such as borders, seaports and airports. Most of them include neutron detectors for the interception of potential plutonium smuggling. The most common technology used for neutron detection in RPMs is based on 3He proportional counters. The recent severe shortage of this rare and expensive gas has made it difficult for manufacturers to provide enough detectors to satisfy the market demand. In this paper we analyse the design of typical commercial RPMs and try to optimise the detector parameters in order either to maximise the efficiency with the same amount of 3He, or to minimise the amount of gas needed to reach the same detection performance by reducing the volume or gas pressure in an optimised design.
Sterckx, Femke L; Saison, Daan; Delvaux, Freddy R
2010-08-31
Monophenols are widespread compounds contributing to the flavour of many foods and beverages. They are most likely present in beer, but so far little is known about their influence on beer flavour. To quantify these monophenols in beer, we optimised a headspace solid-phase microextraction method coupled to gas chromatography-mass spectrometry. To improve their isolation from the beer matrix and their chromatographic properties, the monophenols were acetylated using acetic anhydride and KHCO3 as derivatising agent and base catalyst, respectively. Derivatisation conditions were optimised with attention to the pH of the reaction medium. Additionally, different parameters affecting extraction efficiency were optimised, including fibre coating, extraction time, extraction temperature and salt addition. Afterwards, we calibrated and validated the method successfully and applied it to the analysis of monophenols in beer samples.
Pardo, O; Yusà, V; Coscollà, C; León, N; Pastor, A
2007-07-01
A selective and sensitive procedure has been developed and validated for the determination of acrylamide in difficult matrices, such as coffee and chocolate. The proposed method includes pressurised fluid extraction (PFE) with acetonitrile, florisil clean-up purification inside the PFE extraction cell and detection by liquid chromatography (LC) coupled to positive-mode atmospheric pressure ionisation tandem mass spectrometry (APCI-MS-MS). A comparison of ionisation sources (atmospheric pressure chemical ionisation (APCI), atmospheric pressure photoionisation (APPI) and the combined APCI/APPI) and of clean-up procedures was carried out to improve the analytical signal. The main parameters affecting the performance of the different ionisation sources were first optimised using statistical design of experiments (DOE); the PFE parameters were also optimised by DOE. For quantitation, an isotope dilution approach was used. The limit of quantification (LOQ) of the method was 1 microg kg(-1) for coffee and 0.6 microg kg(-1) for chocolate. Recoveries ranged from 81 to 105% in coffee and from 87 to 102% in chocolate. The accuracy was evaluated using the coffee reference test material FAPAS T3008. Using the optimised method, 20 coffee and 15 chocolate samples collected from Valencian (Spain) supermarkets were investigated for acrylamide, yielding median levels of 146 microg kg(-1) in coffee and 102 microg kg(-1) in chocolate.
NASA Astrophysics Data System (ADS)
Suja Priyadharsini, S.; Edward Rajan, S.; Femilin Sheniha, S.
2016-03-01
Electroencephalogram (EEG) is the recording of the electrical activity of the brain. It is contaminated by other biological signals, such as the cardiac signal (electrocardiogram), signals generated by eye movements/eye blinks (electrooculogram) and muscular signals (electromyogram), called artefacts. Optimisation is an important tool for solving many real-world problems. In the proposed work, artefact removal based on the adaptive neuro-fuzzy inference system (ANFIS) is employed, with the parameters of ANFIS optimised by the Artificial Immune System (AIS) algorithm (ANFIS-AIS). Implementation results show that ANFIS-AIS is more effective than plain ANFIS in removing artefacts from EEG signals. Furthermore, an improved AIS (IAIS) is developed by including suitable selection processes in the AIS algorithm. The performance of the proposed IAIS method is compared with AIS and with a genetic algorithm (GA). Measures such as signal-to-noise ratio, mean square error (MSE), correlation coefficient, power spectral density plot and convergence time are used to analyse the performance of the proposed method. From the results, it is found that the IAIS algorithm converges faster than AIS and performs better than both AIS and GA. Hence, the IAIS-tuned ANFIS (ANFIS-IAIS) is effective in removing artefacts from EEG signals.
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a computing paradigm established by simulating natural evolutionary processes based on Darwinian theory, and it is a common research method; it has since grown to include concepts of animal foraging behaviour and group behaviour. The main contribution of this paper is to reinforce the search for the optimal solution using the fruit fly optimisation algorithm (FOA), in order to avoid being trapped in local extrema. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimisation algorithm (MFOA). It further investigated the algorithms' ability to compute the extreme values of three mathematical functions, their execution speed, and the forecasting ability of models built using the optimised general regression neural network (GRNN) parameters. The findings indicated no obvious difference between particle swarm optimisation and the MFOA in the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA executed faster than particle swarm optimisation, and the forecasting model built using the MFOA's GRNN parameters forecast better than the other three models.
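The basic FOA loop that the MFOA refines can be sketched as follows. This is a generic one-dimensional illustration under the usual FOA convention that a fly's candidate value is the reciprocal of its distance to the origin; it is not the paper's MFOA, and the objective function is a toy example:

```python
import math
import random

def foa_minimise(f, iters=200, flies=20, seed=0):
    """Minimal fruit fly optimisation (FOA) sketch for a 1-D objective f.
    Each fly searches randomly around the swarm location; its candidate
    value ('smell concentration') is the reciprocal of its distance to
    the origin, and the swarm relocates to the best spot found so far."""
    rng = random.Random(seed)
    x_axis, y_axis = rng.uniform(0, 1), rng.uniform(0, 1)  # initial swarm location
    best_val, best_s = float("inf"), None
    for _ in range(iters):
        for _ in range(flies):
            x = x_axis + rng.uniform(-1, 1)   # random search step
            y = y_axis + rng.uniform(-1, 1)
            s = 1.0 / math.hypot(x, y)        # smell concentration
            val = f(s)                        # smell (objective) evaluation
            if val < best_val:                # vision: keep the best fly
                best_val, best_s = val, s
                x_axis, y_axis = x, y         # swarm flies to that location
    return best_s, best_val

# toy objective minimised at s = 2
s, v = foa_minimise(lambda s: (s - 2.0) ** 2)
```

The MFOA of the paper modifies this search step to reduce the chance of stagnating in a local extremum.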
Simulation studies promote technological development of radiofrequency phased array hyperthermia.
Wust, P; Seebass, M; Nadobny, J; Deuflhard, P; Mönich, G; Felix, R
1996-01-01
A treatment planning program package for radiofrequency hyperthermia has been developed. It consists of software modules for processing three-dimensional computerized tomography (CT) data sets, manual segmentation, generation of tetrahedral grids, numerical calculation and optimisation of three-dimensional E-field distributions using a volume-surface integral equation algorithm as well as temperature distributions using an adaptive multilevel finite-element code, and graphical tools for simultaneous representation of CT data and simulation results. Heat treatments are limited by hot spots in healthy tissues caused by E-field maxima at electrical interfaces (bone/muscle). In order to reduce or avoid hot spots, suitable objective functions are derived from power deposition patterns and temperature distributions and are utilised to optimise antenna parameters (phases, amplitudes). The simulation and optimisation tools have been applied to estimate the improvements that could be reached by upgrades of the clinically used SIGMA-60 applicator (consisting of a single ring of four antenna pairs). The investigated upgrades are an increased number of antennas and channels (a triple ring of 3 x 8 antennas) and variation of antenna inclination. A significant improvement of index temperatures (1-2 degrees C) is achieved by upgrading the single ring to a triple ring with free phase selection for every antenna or antenna pair. Antenna amplitudes and inclinations proved to be less important parameters.
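The antenna parameters being optimised (amplitudes and phases) enter through the coherent superposition of the single-antenna fields. A standard formulation (the notation here is ours, not necessarily the paper's) is:

```latex
\mathbf{E}(\mathbf{r}) \;=\; \sum_{k=1}^{N} A_k\, e^{i\varphi_k}\, \mathbf{E}_k(\mathbf{r}),
\qquad
\mathrm{SAR}(\mathbf{r}) \;=\; \frac{\sigma(\mathbf{r})}{2\,\rho(\mathbf{r})}\,
  \bigl\lvert \mathbf{E}(\mathbf{r}) \bigr\rvert^{2}
```

where \(\mathbf{E}_k\) is the field of antenna \(k\) at unit drive, \(A_k\) and \(\varphi_k\) are its amplitude and phase, and \(\sigma\), \(\rho\) are the local tissue conductivity and density. Objective functions built on the power deposition (SAR) or resulting temperature field are then maximised over the \(A_k\), \(\varphi_k\).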
Optimisation of novel method for the extraction of steviosides from Stevia rebaudiana leaves.
Puri, Munish; Sharma, Deepika; Barrow, Colin J; Tiwary, A K
2012-06-01
Stevioside, a diterpene glycoside, is well known for its intense sweetness and is used as a non-caloric sweetener. Its potential widespread use requires an easy and effective extraction method. Enzymatic extraction of stevioside from Stevia rebaudiana leaves with cellulase, pectinase and hemicellulase, using various parameters such as enzyme concentration, incubation time and temperature, was optimised. Hemicellulase was observed to give the highest stevioside yield (369.23±0.11 μg) in 1 h, in comparison to cellulase (359±0.30 μg) and pectinase (333±0.55 μg). Extraction from leaves under optimised conditions showed a remarkable increase in yield (35 times) compared with a control experiment. The extraction conditions were further optimised using response surface methodology (RSM). A central composite design (CCD) was used for experimental design and analysis of the results to obtain optimal extraction conditions. Based on RSM analysis, a temperature of 51-54 °C, a time of 36-45 min and a cocktail of pectinase, cellulase and hemicellulase, set at 2% each, gave the best results. Under the optimised conditions, the experimental values were in close agreement with the prediction model and resulted in a three-fold yield enhancement of stevioside. The isolated stevioside was characterised by 1H-NMR spectroscopy, by comparison with a stevioside standard.
Model of head-neck joint fast movements in the frontal plane.
Pedrocchi, A; Ferrigno, G
2004-06-01
The objective of this work is to develop a model representing the physiological systems driving fast head movements in the frontal plane. All the contributions occurring mechanically in the head movement are considered: damping, stiffness, the physiological limit of the range of motion, the gravitational field, and muscular torques due to voluntary activation as well as to the stretch reflex depending on fusal afferences. Model parameters are partly derived from the literature, when possible, whereas undetermined block parameters are determined by optimising the model output to fit real kinematic data acquired by a motion capture system in specific experimental set-ups. The optimisation for parameter identification is performed by genetic algorithms. Results show that the model represents fast head movements very well over the whole range of inclination in the frontal plane. Such a model could be proposed as a tool for transforming kinematic data on head movements into 'neural equivalent data', especially for assessing head control disease and properly planning the rehabilitation process. In addition, the use of genetic algorithms fits the parameter identification problem well, allowing for the use of a very simple experimental set-up and granting model robustness.
Influence of Wire Electrical Discharge Machining (WEDM) process parameters on surface roughness
NASA Astrophysics Data System (ADS)
Yeakub Ali, Mohammad; Banu, Asfana; Abu Bakar, Mazilah
2018-01-01
In obtaining the best quality of engineering components, the surface quality of machined parts plays an important role: it improves the fatigue strength, wear resistance, and corrosion resistance of the workpiece. This paper investigates the effects of wire electrical discharge machining (WEDM) process parameters on the surface roughness of stainless steel, using distilled water as the dielectric fluid and brass wire as the tool electrode. The parameters selected are open voltage, wire speed, wire tension, voltage gap, and off time. An empirical model was developed for the estimation of surface roughness. The analysis revealed that off time has a major influence on surface roughness. The optimum machining parameters for minimum surface roughness were found to be a 10 V open voltage, 2.84 μs off time, 12 m/min wire speed, 6.3 N wire tension, and 54.91 V voltage gap.
Optimising sulfuric acid hard coat anodising for an Al-Mg-Si wrought aluminium alloy
NASA Astrophysics Data System (ADS)
Bartolo, N.; Sinagra, E.; Mallia, B.
2014-06-01
This research evaluates the effects of sulfuric acid hard coat anodising parameters, such as acid concentration, electrolyte temperature, current density and time, on the hardness and thickness of the resultant anodised layers. A small-scale anodising facility was designed and set up to enable experimental investigation of the anodising parameters. An experimental design using the Taguchi method was performed to optimise the parameters within an established operating window. Qualitative and quantitative characterisation of the resultant anodised layers was carried out. The anodised layer thickness and morphology were determined using a light optical microscope (LOM) and a field emission gun scanning electron microscope (FEG-SEM). Hardness measurements were carried out using a nano-hardness tester. Correlations between the various anodising parameters and their effects on the hardness and thickness of the anodised layers were established. Careful evaluation of these effects enabled optimum parameters to be determined using the Taguchi method, which were verified experimentally. Anodised layers with hardness between 2.4 and 5.2 GPa and thickness between 20 and 80 μm were produced. The Taguchi method was shown to be applicable to anodising. This finding could facilitate ongoing and future research and development of anodising, which is attracting remarkable academic and industrial interest.
NASA Astrophysics Data System (ADS)
Nadolny, K.; Kapłonek, W.
2014-08-01
The following work is an analysis of the flatness deviations of a workpiece made of X2CrNiMo17-12-2 austenitic stainless steel. The workpiece surface was shaped using efficient machining techniques (milling, grinding, and smoothing). After the machining was completed, all surfaces underwent stylus measurements in order to obtain surface flatness and roughness parameters. For this purpose the stylus profilometer Hommel-Tester T8000 by Hommelwerke with HommelMap software was used. The research results are presented in the form of 2D surface maps, 3D surface topographies with extracted single profiles, Abbott-Firestone curves, and graphical studies of the Sk parameters. The results of these experimental tests indicated a possible correlation between flatness and roughness parameters, and enabled an analysis of the changes in these parameters from shaping and rough grinding through to finish machining. The main novelty of this paper is the comprehensive analysis of measurement results obtained during a three-step machining process of austenitic stainless steel. Simultaneous analysis of the individual machining steps (milling, grinding, and smoothing) enabled a complementary assessment of the process of shaping the workpiece surface macro- and micro-geometry, giving special consideration to minimising the flatness deviations.
SASS Applied to Optimum Work Roll Profile Selection in the Hot Rolling of Wide Steel
NASA Astrophysics Data System (ADS)
Nolle, Lars
The quality of steel strip produced in a wide strip rolling mill depends heavily on the careful selection of initial ground work roll profiles for each of the mill stands in the finishing train. In the past, these profiles were determined by human experts, based on their knowledge and experience. In previous work, the profiles were successfully optimised using a self-organising migration algorithm (SOMA). In this research, SASS, a novel heuristic optimisation algorithm that has only one control parameter, has been used to find the optimum profiles for a simulated rolling mill. The resulting strip quality produced using the profiles found by SASS is compared with results from previous work and with the quality produced using the original profile specifications. The best set of profiles found by SASS clearly outperformed the original set and performed as well as SOMA, without the need to find a suitable set of control parameters.
Thermal Performance Analysis of Solar Collectors Installed for Combisystem in the Apartment Building
NASA Astrophysics Data System (ADS)
Žandeckis, A.; Timma, L.; Blumberga, D.; Rochas, C.; Rošā, M.
2012-01-01
The paper focuses on the application of a wood pellet and solar combisystem for space heating and hot water preparation in apartment buildings under the climate of Northern Europe. A pilot project has been implemented in the city of Sigulda (N 57° 09.410 E 024° 52.194), Latvia. The system was designed and optimised using TRNSYS, a dynamic simulation tool, and the pilot project was continuously monitored. The analysis covered the heat transfer fluid flow rate and the influence of the inlet temperature on the performance of the solar collectors. The thermal performance of the solar collector loop was studied using a direct method. A multiple regression analysis was carried out using STATGRAPHICS Centurion 16.1.15 with the aim of identifying the operational and weather parameters of the system that most strongly influence the collectors' performance. The parameters to be used for the system's optimisation have been evaluated.
Bringing metabolic networks to life: convenience rate law and thermodynamic constraints
Liebermeister, Wolfram; Klipp, Edda
2006-01-01
Background Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamical constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion Convenience kinetics can be used to translate a biochemical network – manually or automatically – into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
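For a reaction with substrates S_i and products P_j (all stoichiometric coefficients equal to one), the convenience rate law has, up to notation, the form:

```latex
v \;=\; E_{\mathrm{tot}}\,
  \frac{k_{\mathrm{cat}}^{+}\prod_i \tilde{s}_i \;-\; k_{\mathrm{cat}}^{-}\prod_j \tilde{p}_j}
       {\prod_i \bigl(1+\tilde{s}_i\bigr) \;+\; \prod_j \bigl(1+\tilde{p}_j\bigr) \;-\; 1},
\qquad
\tilde{s}_i = \frac{[S_i]}{K_{M,i}},\quad
\tilde{p}_j = \frac{[P_j]}{K_{M,j}}
```

The numerator is a mass-action-like term built from the forward and backward turnover rates, while the denominator implements enzyme saturation in both directions; higher stoichiometric coefficients are handled by replacing each factor with a sum of powers of the scaled concentration.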
A new effective operator for the hybrid algorithm for solving global optimisation problems
NASA Astrophysics Data System (ADS)
Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac
2018-04-01
Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution by using these algorithms. Based on the existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves the search ability on areas of the solution space that the operators of previous algorithms do not explore; specifically, the Mean-Search Operator helps find better solutions than the other algorithms. Moreover, the authors propose two parameters, one for balancing local and global search and one for balancing the various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions used in previous works show that our framework can find better optimal or close-to-optimal solutions with faster convergence for most of the benchmark functions, especially the high-dimensional ones. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
Effect of Width of Kerf on Machining Accuracy and Subsurface Layer After WEDM
NASA Astrophysics Data System (ADS)
Mouralova, K.; Kovar, J.; Klakurkova, L.; Prokes, T.
2018-02-01
Wire electrical discharge machining is an unconventional machining technology that applies physical principles to material removal. The material is removed by a series of recurring current discharges between the workpiece and the tool electrode, and a `kerf' is created between the wire and the material being machined. The width of the kerf is directly dependent not only on the diameter of the wire used, but also on the machine parameter settings and, in particular, on the set of mechanical and physical properties of the material being machined. To ensure precise machining, it is important to have the width of the kerf as small as possible. The present study deals with the evaluation of the width of the kerf for four different metallic materials (some of which were subsequently heat treated using several methods) with different machine parameter settings. The kerf is investigated on metallographic cross sections using light and electron microscopy.
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementation details. Two slightly different implementations of reconstruction-reprojection-based motion correction were optimised for effective, good-quality motion correction and then compared with each other. The first (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets and both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and the mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
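A histogram-based estimate of the mutual information cost function between a measured and a reprojected projection can be sketched as follows; the bin count and the random test images are illustrative, not taken from the paper:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized images, estimated
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))       # stand-in for a measured projection
noise = rng.random((64, 64))     # stand-in for an unrelated reprojection
mi_self = mutual_information(img, img)
mi_noise = mutual_information(img, noise)
# an image shares far more information with itself than with random noise
```

In a motion correction loop, the candidate shift maximising this value over the measured/reprojected pair would be accepted at each iteration.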
An efficient approach for improving virtual machine placement in cloud computing environment
NASA Astrophysics Data System (ADS)
Ghobaei-Arani, Mostafa; Shamsi, Mahboubeh; Rahmanian, Ali A.
2017-11-01
The ever-increasing demand for cloud services requires more data centres. Power consumption in data centres is a challenging problem for cloud computing which has not been considered properly by data centre developer companies; large data centres in particular struggle with power costs and greenhouse gas production. Hence, employing power-efficient mechanisms is necessary to mitigate these effects. Moreover, virtual machine (VM) placement can be used as an effective method to reduce the power consumption in data centres. In this paper, by grouping both virtual and physical machines, and taking into account the maximum absolute deviation during the VM placement, both the power consumption and the service level agreement (SLA) violation in data centres are reduced. To this end, the best-fit decreasing algorithm is utilised in the simulation, reducing the power consumption by about 5% compared to the modified best-fit decreasing algorithm while improving the SLA violation by 6%. Finally, learning automata are used to trade off power consumption reduction against the SLA violation percentage.
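A best-fit decreasing placement can be sketched as below. The single-resource capacities and demand values are illustrative, and the paper's grouping, power and SLA models are omitted:

```python
def best_fit_decreasing(vm_demands, host_capacity):
    """Best-fit decreasing placement sketch: sort VM demands (largest
    first) and put each VM on the active host whose remaining capacity
    leaves the least slack; open a new host when none fits."""
    hosts = []                                    # remaining capacity per host
    for demand in sorted(vm_demands, reverse=True):
        fits = [i for i, free in enumerate(hosts) if free >= demand]
        if fits:
            best = min(fits, key=lambda i: hosts[i] - demand)
            hosts[best] -= demand
        else:
            hosts.append(host_capacity - demand)  # power on a new host
    return len(hosts)

# five VMs with integer CPU-share demands, hosts of capacity 100
n = best_fit_decreasing([50, 70, 30, 20, 30], host_capacity=100)  # -> 2 hosts
```

Packing VMs onto as few hosts as possible lets idle hosts be switched off, which is where the power saving comes from.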
Effect of Machining Parameters on Oxidation Behavior of Mild Steel
NASA Astrophysics Data System (ADS)
Majumdar, P.; Shekhar, S.; Mondal, K.
2015-01-01
This study aims to find a correlation between machining parameters, the resultant microstructure, and the isothermal oxidation behavior of lathe-machined mild steel in the temperature range of 660-710 °C. The tool rake angles (α) used were +20°, 0°, and -20°, and the cutting speeds used were 41, 232, and 541 mm/s. Under isothermal conditions, non-machined and machined mild steel samples follow parabolic oxidation kinetics with activation energies of 181 and ~400 kJ/mol, respectively. Exaggerated grain growth of the machined surface was observed, whereas the center part of the machined sample showed minimal grain growth during oxidation at higher temperatures. Grain growth on the surface was attributed to the reduction, during high temperature oxidation, of the strain energy accumulated in the sub-surface region during machining. It was also observed that the characteristic surface oxide controlled the oxidation behavior of the machined samples. This study clearly demonstrates the effect of equivalent strain, roughness, and grain size due to machining, and of subsequent grain growth, on the oxidation behavior of mild steel.
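Parabolic oxidation kinetics with an Arrhenius-type temperature dependence are conventionally written as (mass-gain form; the symbol choices here are ours, and the equivalent thickness form x² = k_p t is also common):

```latex
\left(\frac{\Delta m}{A}\right)^{2} \;=\; k_p\, t,
\qquad
k_p \;=\; k_0 \exp\!\left(-\frac{Q}{RT}\right)
```

where Δm/A is the mass gain per unit area, k_p the parabolic rate constant, and Q the activation energy (181 kJ/mol for the non-machined and ~400 kJ/mol for the machined samples, per the study).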
NASA Astrophysics Data System (ADS)
Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten
2017-04-01
In recent years, considerable research in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation, and (3) the usage of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration can, with expert knowledge, judge the hydrographs simultaneously in detail and in a holistic view. This integrated eye-ball verification procedure is difficult to formulate in objective criteria, even with a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe Efficiency or the Kling-Gupta Efficiency as a benchmark during calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets evolved from a manual and an automatic calibration. A subset of the resulting hydrographs for observation and simulation, representing typical flow conditions and events, will be evaluated in this study.
In an interactive crowdsourcing approach, experts attending the session can vote for their preferred simulated hydrograph without knowing which calibration method produced it. The result of the poll can therefore be seen as an additional quality criterion for comparing the two approaches, and can help in the evaluation of the automatic calibration method.
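The two objective criteria named in this abstract, the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can be sketched for reference. The formulas below are the standard textbook definitions, not anything specific to the COSERO setup:

```python
import numpy as np

# Standard definitions of NSE and KGE; the toy observation series is an
# illustrative assumption, not data from the Mur catchment.

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - distance of (r, alpha, beta) from (1, 1, 1)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]       # linear correlation
    alpha = sim.std() / obs.std()         # variability ratio
    beta = sim.mean() / obs.mean()        # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = obs.copy()   # a perfect simulation scores 1.0 on both criteria
```

Both criteria equal 1 for a perfect simulation, which is exactly why they cannot distinguish between two calibrations that fit equally well by these measures.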
Generative Modeling for Machine Learning on the D-Wave
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thulasidasan, Sunil
These are slides on Generative Modeling for Machine Learning on the D-Wave. The following topics are detailed: generative models; Boltzmann machines: a generative model; restricted Boltzmann machines; learning parameters: RBM training; practical ways to train RBM; D-Wave as a Boltzmann sampler; mapping RBM onto the D-Wave; Chimera restricted RBM; mapping binary RBM to Ising model; experiments; data; D-Wave effective temperature, parameters noise, etc.; experiments: contrastive divergence (CD) 1 step; after 50 steps of CD; after 100 steps of CD; D-Wave (experiments 1, 2, 3); D-Wave observations.
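The RBM training steps listed on the slides (contrastive divergence) can be sketched in a minimal binary RBM. The dimensions, learning rate, and toy data below are illustrative assumptions, not values from the talk, and this runs classically rather than on the D-Wave sampler:

```python
import numpy as np

# Minimal binary RBM trained with one step of contrastive divergence (CD-1).
# Hyper-parameters and data are illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.a = np.zeros(n_visible)   # visible biases
        self.b = np.zeros(n_hidden)    # hidden biases

    def cd1_update(self, v0, lr=0.1):
        # positive phase: sample hiddens given the data
        ph0 = sigmoid(v0 @ self.W + self.b)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # negative phase: one Gibbs step back to visibles, then hiddens
        pv1 = sigmoid(h0 @ self.W.T + self.a)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = sigmoid(v1 @ self.W + self.b)
        # gradient approximation: <v h>_data - <v h>_model
        batch = v0.shape[0]
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
        self.a += lr * (v0 - v1).mean(axis=0)
        self.b += lr * (ph0 - ph1).mean(axis=0)

# toy data: two repeated binary patterns
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 10, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
for _ in range(200):
    rbm.cd1_update(data)
```

The slides' D-Wave variant replaces the Gibbs step in the negative phase with samples drawn from the annealer acting as a Boltzmann sampler.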
A method to identify the main mode of machine tool under operating conditions
NASA Astrophysics Data System (ADS)
Wang, Daming; Pan, Yabing
2017-04-01
The identification of modal parameters under experimental conditions is the most common procedure when solving machine tool structure vibration problems. However, the influence of each mode on machine tool vibration under real working conditions remains unknown. In fact, the contribution each mode makes to machine tool vibration during the machining process differs. In this article, active excitation modal analysis is applied to identify the modal parameters in operational conditions, and the Operating Deflection Shapes (ODS) at the frequencies of high-level vibration that affect machining quality in real working conditions are obtained. The ODS is then decomposed over the mode shapes identified in operational conditions, so that the contribution each mode makes to machine tool vibration during machining is obtained from the decomposition coefficients. From these steps, the modes that affect the machine tool most significantly in working conditions can be identified. The method was verified experimentally.
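The decomposition of an ODS over identified mode shapes can be sketched as a least-squares fit. The mode shape matrix `Phi` and the 3-mode example below are illustrative assumptions, not the authors' measured data:

```python
import numpy as np

# Sketch of decomposing an operating deflection shape (ODS) into a set of
# identified mode shapes by least squares; the decomposition coefficients
# play the role of the modal contributions described in the abstract.

def modal_contributions(ods, mode_shapes):
    """Solve ods ~= mode_shapes @ q for participation coefficients q.
    ods: (n_points,), mode_shapes: (n_points, n_modes)."""
    q, *_ = np.linalg.lstsq(mode_shapes, ods, rcond=None)
    return q

# synthetic example: 3 independent mode shapes at 5 measurement points
Phi = np.array([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 1],
                [1, 1, 0],
                [0, 1, 1]], float)
true_q = np.array([2.0, 0.5, 0.1])
ods = Phi @ true_q                    # ODS built from known contributions
q = modal_contributions(ods, Phi)     # recovered coefficients
```

The mode with the largest recovered coefficient is the dominant contributor to the measured deflection shape.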
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized simultaneously by QPSO before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
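The core KELM-with-composite-kernel idea can be sketched compactly. The fixed kernel weights below stand in for the coefficients that QPSO would optimise, and the two-kernel mix, regularisation value, and toy data are all illustrative assumptions:

```python
import numpy as np

# Minimal kernel extreme learning machine (KELM) with a weighted composite
# kernel (Gaussian + polynomial). Fixed weights w stand in for the
# QPSO-optimised combination coefficients of the paper.

def gaussian_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2):
    return (X @ Y.T + 1.0) ** degree

def composite_kernel(X, Y, w=(0.7, 0.3)):
    return w[0] * gaussian_kernel(X, Y) + w[1] * poly_kernel(X, Y)

def kelm_fit(X, T, C=100.0):
    """Closed-form KELM output weights: alpha = (K + I/C)^-1 T,
    with T the one-hot target matrix and C the regularisation parameter."""
    K = composite_kernel(X, X)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, alpha, X_new):
    return composite_kernel(X_new, X_train) @ alpha

# toy two-class problem (stand-in for e-nose feature vectors)
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
T = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)   # one-hot labels
alpha = kelm_fit(X, T)
pred = kelm_predict(X, alpha, X).argmax(axis=1)
```

In the paper's scheme, QPSO searches over the kernel weights, per-kernel parameters, and C jointly before this closed-form solve is performed.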
Automatic classification of protein structures using physicochemical parameters.
Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam
2014-09-01
Protein classification is the first step to functional annotation; SCOP and Pfam databases are currently the most relevant protein classification schemes. However, the disproportion between the number of three-dimensional (3D) protein structures generated and their classification into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting the function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family, using sequence-derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure, was used as a benchmark to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family. The effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained with the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores performed with a CA of around 85%. Feature selection improved classification accuracy for both physicochemical-parameter and spectrophore-based machine learning algorithms. Combining both attributes resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracies ranging from 90 to 96%. These results suggest the usefulness of this method in classifying proteins from amino acid sequences.
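The idea of deriving physicochemical features from a raw amino acid sequence can be illustrated as follows. Note the swaps: a nearest-centroid rule replaces the paper's Naive Bayes/tree/forest/SVM classifiers, and the two-feature set and toy "families" are assumptions made purely for demonstration:

```python
import numpy as np

# Illustrative sketch only: two simple physicochemical features from a
# sequence, classified by nearest centroid. The paper used much richer
# descriptors and standard ML classifiers against SCOP/Pfam labels.

KD = {  # Kyte-Doolittle hydropathy index
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2}

def features(seq):
    """Mean hydropathy and fraction of charged residues."""
    hydro = np.mean([KD[a] for a in seq])
    charged = sum(a in 'DEKR' for a in seq) / len(seq)
    return np.array([hydro, charged])

def nearest_centroid(train, labels, query):
    centroids = {c: np.mean([features(s) for s, l in zip(train, labels)
                             if l == c], axis=0) for c in set(labels)}
    return min(centroids,
               key=lambda c: np.linalg.norm(features(query) - centroids[c]))

train = ['ILVVAIL', 'LLIVAMF', 'DEKRDEK', 'KKDEERD']
labels = ['membrane', 'membrane', 'soluble', 'soluble']
pred = nearest_centroid(train, labels, 'IVLLAIV')
```

Even these two crude features separate hydrophobic-rich from charge-rich sequences, which is the intuition behind using physicochemical parameters as classifier input.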
Production of gluconic acid using Micrococcus sp.: optimisation of carbon and nitrogen sources.
Joshi, V D; Sreekantiah, K R; Manjrekar, S P
1996-01-01
A process for the production of gluconic acid from glucose by a Micrococcus sp. is described. More than 400 bacterial cultures isolated from local soil were tested for gluconic acid production. Three isolates were selected on the basis of their ability to produce gluconic acid and high titratable acidity. These were identified as Micrococcus sp. and were named M 27, M 54 and M 81. Nutritional and other parameters for maximum production of gluconic acid by the selected isolates were optimised. Micrococcus sp. isolate M 27 gave the highest yield, 8.19 g of gluconic acid from 9 g of glucose utilised, a 91% conversion efficiency.
NASA Astrophysics Data System (ADS)
Yingfei, Ge; de Escalona, Patricia Muñoz; Galloway, Alexander
2017-01-01
The efficiency of a machining process can be measured by evaluating the quality of the machined surface and the tool wear rate. The research reported herein is mainly focused on the effect of cutting parameters and tool wear on machined surface defects, surface roughness, the deformation layer and residual stresses when dry milling Stellite 6 deposited by overlay on a carbon steel surface. The results showed that under the selected cutting conditions, abrasion, diffusion, peeling, chipping and breakage were the main tool wear mechanisms present. The feed rate was the primary factor affecting tool wear, with an influence of 83%. With regard to the influence of cutting parameters on surface roughness, the primary factors were feed rate and cutting speed, at 57 and 38%, respectively. In general, as tool wear increased the surface roughness increased, and the deformation layer was found to be influenced more by the cutting parameters than by the tool wear. Compressive residual stresses were observed in the un-machined surface, and when machining for longer than 5 min the residual stress changed completely from compression to tension. Finally, results showed that micro-crack initiation was the main mechanism of chip formation.
Warpage analysis on thin shell part using glowworm swarm optimisation (GSO)
NASA Astrophysics Data System (ADS)
Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.
2017-09-01
In this study, Autodesk Moldflow Insight (AMI) software was used to analyse the plastic injection moulding process and relate the input parameters to the output parameters. Acrylonitrile Butadiene Styrene (ABS) was used as the moulded material to produce the plastic part. MATLAB was used to find the best parameter settings. The variables selected in this study were melt temperature, packing pressure, coolant temperature and cooling time.
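The glowworm swarm optimisation (GSO) named in the title can be sketched compactly. The quadratic "warpage" surrogate and all constants below are illustrative assumptions; in the study the optimiser would be driven by Moldflow simulation outputs, not a closed-form function:

```python
import numpy as np

# Compact glowworm swarm optimisation (GSO) sketch on a stand-in objective.
# Luciferin update + movement toward brighter neighbours; constants assumed.

rng = np.random.default_rng(1)

def warpage(x):                        # illustrative surrogate to minimise
    return np.sum((x - 0.3) ** 2)

def gso(n=30, dim=2, iters=100, rho=0.4, gamma=0.6, step=0.03, r=1.0):
    X = rng.uniform(-1, 1, (n, dim))   # glowworm positions (coded params)
    luciferin = np.full(n, 5.0)
    for _ in range(iters):
        # luciferin decays and accumulates fitness (brighter = better)
        luciferin = (1 - rho) * luciferin \
            + gamma * (-np.array([warpage(x) for x in X]))
        for i in range(n):
            d = np.linalg.norm(X - X[i], axis=1)
            nbrs = np.where((d < r) & (luciferin > luciferin[i]))[0]
            if nbrs.size:              # move toward a brighter neighbour
                j = rng.choice(nbrs)
                X[i] += step * (X[j] - X[i]) / (np.linalg.norm(X[j] - X[i]) + 1e-12)
    return X[np.argmax(luciferin)]

best = gso()
```

The brightest glowworm at the end approximates the best parameter setting found by the swarm.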
NASA Astrophysics Data System (ADS)
Anil, K. C.; Vikas, M. G.; Shanmukha Teja, B.; Sreenivas Rao, K. V.
2017-04-01
Many materials such as alloys and composites find applications on the basis of machinability, cost and availability. In the present work, graphite (Grp) reinforced Aluminium 8011 is synthesized by a conventional stir casting process, and the surface finish and machinability of the prepared composite are examined using a lathe tool dynamometer attached to a BANKA lathe, varying the machining parameters spindle speed, depth of cut and feed rate over three levels. The roughness average (Ra) of the machined surfaces is also measured using a surface roughness tester (Mitutoyo SJ201). The studies make clear that the mechanical properties of the composite increase with the addition of Grp, and that the cutting forces decrease with reinforcement percentage, which improves the machinability of the composite and results in an increased surface finish.
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Shukla, Anupam
2018-03-01
The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based meta-heuristic search that depends on the social influence of the bacteria co-agents in the problem's search space. The algorithm faces tremendous hindrance in its application to discrete and graph-based problems due to its biased mathematical modelling and dynamic structure. This motivated the introduction of a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which in real life outnumber the continuous-domain problems represented by mathematical and numerical equations. In this work, we mainly simulate a graph-based multi-objective road optimisation problem and discuss the prospect of applying DBFO to other similar optimisation and graph-based problems. The various solution representations that DBFO can handle are also discussed. The implications and dynamics of the various parameters used in the DBFO are illustrated from the point of view of the problems, combining both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes from previous experience and covered-path analysis. This makes the algorithm better at combination generation for graph-based and NP-hard problems.
Optimisation of shape kernel and threshold in image-processing motion analysers.
Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G
2001-09-01
The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.
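The cross-correlation marker-recognition technique described here can be sketched as sliding a kernel (the expected marker shape) over the image and taking the best-matching position. The 2-D Gaussian blob kernel and the toy frame are assumptions; tuning this kernel's parameters is precisely what the genetic algorithm in the abstract optimises:

```python
import numpy as np

# Sketch of cross-correlation marker recognition. Kernel shape/size and the
# synthetic image are illustrative assumptions, not ELITE-S2's actual data.

def make_kernel(size=5, sigma=1.2):
    """Expected marker shape: a normalised 2-D Gaussian blob."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def cross_correlate(image, kernel):
    """Dense cross-correlation score map (valid positions only)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.full((H - kh + 1, W - kw + 1), -np.inf)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

# toy frame: dark background with one bright marker blob centred at (12, 20)
img = np.zeros((32, 32))
img[10:15, 18:23] = make_kernel(5, 1.2) * 100
score = cross_correlate(img, make_kernel(5, 1.2))
row, col = np.unravel_index(np.argmax(score), score.shape)
marker_centre = (row + 2, col + 2)   # add kernel half-width
```

In the optimisation of the paper, kernel size and shape parameters are the genes, scored by sub-pixel accuracy on the calibration grids and by noise rejection.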
NASA Astrophysics Data System (ADS)
Khidhir, Basim A.; Mohamed, Bashir
2011-02-01
Machining parameters have an important effect on tool wear and surface finish, so manufacturers need to obtain optimal operating parameters with a minimum number of experiments, and with minimal simulation, in order to reduce machining set-up costs. Cutting speed is one of the most important cutting parameters to evaluate: on one hand it most clearly influences tool life, tool stability and cutting process quality, and on the other hand it controls production flow. Owing to more demanding manufacturing systems, the requirements for reliable technological information have increased. A reliable analysis of cutting must consider the cutting zone (the tip insert-workpiece-chip system), where the mechanics of cutting are very complicated: the chip is formed in the shear plane (entering the shear zone) and shaped in the sliding plane. The temperature contributions in the primary shear, chamfer, sticking and sliding zones are expressed as functions of the unknown shear angle on the rake face and of the temperature-modified flow stress in each zone. The experiments were carried out on a CNC lathe, with surface finish and tool tip wear measured in process. Reasonable agreement is observed for turning at high depth of cut. The results of this research help to guide the design of new cutting tool materials and studies on the evaluation of machining parameters, to further advance the productivity of machining the nickel-based alloy Hastelloy C-276.
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough in assisting other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) to reconstruct Hs. The results show that all the ML methods explored achieve a good Hs reconstruction at the two different locations studied (Caribbean Sea and West Atlantic).
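The extreme learning machine at the heart of the hybrid approach can be sketched in a few lines: a random hidden layer followed by a least-squares solve for the output weights. The layer size and the synthetic "neighbouring buoy" data below are assumptions for illustration:

```python
import numpy as np

# Minimal extreme learning machine (ELM) regressor of the kind used to
# reconstruct Hs. Hidden-layer size and synthetic data are assumptions.

rng = np.random.default_rng(0)

class ELMRegressor:
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        # random, untrained input weights define a fixed feature map
        self.W = rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # only the output weights are learned, by least squares
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# synthetic example: target Hs depends smoothly on two neighbour parameters
X = rng.uniform(0, 1, (200, 2))
y = 1.5 * X[:, 0] + 0.5 * np.sin(3 * X[:, 1])
model = ELMRegressor().fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
```

In the paper, a genetic algorithm wraps a model like this, scoring each candidate feature subset FnSP by the resulting reconstruction error.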
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL, processed by machine classifiers, can improve OCT-based glaucoma diagnosis.
Machinability of IPS Empress 2 framework ceramic.
Schmidt, C; Weigl, P
2000-01-01
Using ceramic materials for the automatic production of ceramic dentures by CAD/CAM is a challenge, because many technological, medical, and optical demands must be considered. The IPS Empress 2 framework ceramic meets most of them. This study shows the possibilities for machining this ceramic with economical parameters. The long life-time required of ceramic dentures demands a ductilely machined surface, to avoid the well-known subsurface damage of brittle materials caused by machining. Slow and rapid damage propagation begins at break-outs and cracks, and limits life-time significantly; a ductilely machined surface is therefore an important demand in machining dental ceramics. The machining tests were performed with various parameters such as tool grain size and feed speed. Denture ceramics were machined by jig grinding on a 5-axis CNC milling machine (Maho HGF 500) with a high-speed spindle of up to 120,000 rpm. The results of the wear test indicate low tool wear: one tool can machine eight occlusal surfaces, including roughing and finishing, and one occlusal surface takes about 60 min of machining time. Recommended parameters for roughing are a middle diamond grain size (D107), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 1000 mm/min, depth of cut a(e) = 0.06 mm and width of contact a(p) = 0.8 mm; for finishing, an ultra-fine diamond grain size (D46), cutting speed v(c) = 4.7 m/s, feed speed v(ft) = 100 mm/min, depth of cut a(e) = 0.02 mm and width of contact a(p) = 0.8 mm. The results of the machining tests give a reference for using IPS Empress 2 framework ceramic in CAD/CAM systems. Copyright 2000 John Wiley & Sons, Inc.
Cheong, Ai M; Tan, Chin P; Nyam, Kar L
2018-01-01
Kenaf (Hibiscus cannabinus L.) seed oil has proven multi-pharmacological benefits; however, its poor water solubility and stability have limited its industrial applications. This study aimed to further improve the stability of previously developed kenaf seed oil-in-water nanoemulsions by using food-grade ternary emulsifiers. The effects of emulsifier concentration (1, 5, 10, 15% w/w), homogenisation pressure (16,000, 22,000, 28,000 psi), and number of homogenisation cycles (three, four, five) on producing highly stable kenaf seed oil-in-water nanoemulsions with a high-pressure homogeniser were studied. Overall, the results showed that emulsifier concentration and homogenisation conditions had a significant effect (p < 0.05) on particle size, polydispersity index and hence the physical stability of the nanoemulsions. Homogenisation at 28,000 psi for three cycles produced the most stable homogeneous nanoemulsions, with particle size below 130 nm, polydispersity index below 0.16 and zeta potential beyond -40 mV. A field emission scanning electron microscopy micrograph showed that the optimised nanoemulsions had a good distribution within the nano range. The optimised nanoemulsions proved physically stable for up to six weeks of storage at room temperature. The results also provide valuable information for producing stable kenaf seed oil nanoemulsions for future applications in the food and nutraceutical industries.
Moss and peat hydraulic properties are optimized to maximise peatland water use efficiency
NASA Astrophysics Data System (ADS)
Kettridge, Nicholas; Tilak, Amey; Devito, Kevin; Petrone, Rich; Mendoza, Carl; Waddington, Mike
2016-04-01
Peatland ecosystems are globally important carbon and terrestrial surface water stores that have formed over millennia. These ecosystems have likely optimised their ecohydrological function over the long-term development of their soil hydraulic properties. Through a theoretical ecosystem approach, applying hydrological modelling integrated with known ecological thresholds and concepts, the optimisation of peat hydraulic properties is examined to determine which of the following conditions peatland ecosystems target during this development: i) maximise carbon accumulation, ii) maximise water storage, or iii) balance carbon profit across hydrological disturbances. Saturated hydraulic conductivity (Ks) and empirical van Genuchten water retention parameter α are shown to provide a first order control on simulated water tensions. Across parameter space, peat profiles with hypothetical combinations of Ks and α show a strong binary tendency towards targeting either water or carbon storage. Actual hydraulic properties from five northern peatlands fall at the interface between these goals, balancing the competing demands of carbon accumulation and water storage. We argue that peat hydraulic properties are thus optimized to maximise water use efficiency and that this optimisation occurs over a centennial to millennial timescale as the peatland develops. This provides a new conceptual framework to characterise peat hydraulic properties across climate zones and between a range of different disturbances, and which can be used to provide benchmarks for peatland design and reclamation.
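The van Genuchten water retention relation whose α parameter the abstract examines has a standard closed form. The parameter values below are generic illustrative numbers for a moss/peat profile, not the fitted values from the five study peatlands:

```python
import numpy as np

# Standard van Genuchten retention curve; parameter values are illustrative
# assumptions, not the study's fitted peat properties.

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Volumetric water content at tension psi (psi >= 0, units of 1/alpha).
    theta_r/theta_s: residual/saturated water content; m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

psi = np.linspace(0.0, 100.0, 101)                      # tension, cm
theta = van_genuchten(psi, theta_r=0.05, theta_s=0.85, alpha=0.1, n=1.8)
```

Larger α drains the profile at lower tensions, which is why α, together with Ks, exerts the first-order control on simulated water tensions reported in the abstract.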
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
NASA Astrophysics Data System (ADS)
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on the input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have worked in this area, but very few have taken the step-over ratio (radial depth of cut) as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 rpm; step-over ratio, depth of cut and feed rate are the other input variables considered. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness is measured using a portable Coordinate Measuring Machine (CMM). Linear regression models are developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. The selection of input process parameters to obtain the best machining outputs is the key contribution of this research work.
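The L9-design-plus-linear-regression workflow described above can be sketched directly. The response values below are made-up illustrative numbers, not the measured flatness/MRR/TWR data, and the least-squares fit stands in for what Minitab computes:

```python
import numpy as np

# Fitting a main-effects linear model to a Taguchi L9 design.
# The response vector y is an illustrative assumption.

# L9 orthogonal array: 4 factors at 3 levels, coded -1 / 0 / +1
L9 = np.array([
    [-1, -1, -1, -1], [-1,  0,  0,  0], [-1,  1,  1,  1],
    [ 0, -1,  0,  1], [ 0,  0,  1, -1], [ 0,  1, -1,  0],
    [ 1, -1,  1,  0], [ 1,  0, -1,  1], [ 1,  1,  0, -1]], float)

y = np.array([2.1, 2.4, 2.9, 2.0, 2.6, 2.3, 2.2, 2.1, 2.8])  # e.g. MRR

A = np.column_stack([np.ones(len(L9)), L9])   # intercept + 4 main effects
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
```

The orthogonality of the array (each pair of factor columns has zero dot product) is what lets nine runs estimate four main effects independently.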
Method and apparatus for monitoring machine performance
Smith, Stephen F.; Castleberry, Kimberly N.
1996-01-01
Machine operating conditions can be monitored by analyzing, in either the time or frequency domain, the spectral components of the motor current. Changes in the electric background noise, induced by mechanical variations in the machine, are correlated to changes in the operating parameters of the machine.
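The frequency-domain side of this idea can be sketched with a plain FFT of a sampled motor-current signal, where a mechanically induced component appears as an extra spectral line alongside the supply frequency. The signal parameters below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Sketch of spectral monitoring of motor current: a small fault-induced
# component shows up as a distinct line in the current spectrum.
# Sampling rate, line frequency and fault frequency are assumptions.

fs = 2000.0                                   # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
line = np.sin(2 * np.pi * 60 * t)             # supply component (60 Hz)
fault = 0.05 * np.sin(2 * np.pi * 35 * t)     # small mechanically induced line
current = line + fault

spectrum = np.abs(np.fft.rfft(current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

peak_hz = freqs[np.argmax(spectrum)]          # dominant component (supply)
```

Monitoring then amounts to tracking how the smaller lines around the supply peak grow or shift as mechanical conditions change.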
Bokhari, Awais; Chuah, Lai Fatt; Yusup, Suzana; Klemeš, Jiří Jaromír; Kamil, Ruzaimah Nik M
2016-01-01
Pretreatment of high free fatty acid rubber seed oil (RSO) via an esterification reaction has been investigated using a pilot-scale hydrodynamic cavitation (HC) reactor. Four newly designed orifice plate geometries are studied. Cavities are induced by an assisted double diaphragm pump in the range of 1-3.5 bar inlet pressure. An optimised plate with 21 holes of 1 mm diameter and an inlet pressure of 3 bar reduced the RSO acid value from 72.36 to 2.64 mg KOH/g within 30 min of reaction time. The reaction parameters were optimised using response surface methodology and found to be a methanol-to-oil ratio of 6:1, a catalyst concentration of 8 wt%, a reaction time of 30 min and a reaction temperature of 55°C. The reaction time of HC was three-fold shorter, and its esterification efficiency four-fold higher, than those of mechanical stirring. This makes the HC process more environmentally friendly. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.
2015-10-01
In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs an interest in developing appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model, including both quantitative and qualitative objectives, to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of the manufacturer and to improve the service aspects of retailing while setting different prices with arbitrage considerations. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.
Miller, Renee; Kolipaka, Arunark; Nash, Martyn P; Young, Alistair A
2018-03-12
Magnetic resonance elastography (MRE) has been used to estimate isotropic myocardial stiffness. However, anisotropic stiffness estimates may give insight into structural changes that occur in the myocardium as a result of pathologies such as diastolic heart failure. The virtual fields method (VFM) has been proposed for estimating material stiffness from image data. This study applied the optimised VFM to identify transversely isotropic material properties from both simulated harmonic displacements in a left ventricular (LV) model with a fibre field measured from histology as well as isotropic phantom MRE data. Two material model formulations were implemented, estimating either 3 or 5 material properties. The 3-parameter formulation writes the transversely isotropic constitutive relation in a way that dissociates the bulk modulus from other parameters. Accurate identification of transversely isotropic material properties in the LV model was shown to be dependent on the loading condition applied, amount of Gaussian noise in the signal, and frequency of excitation. Parameter sensitivity values showed that shear moduli are less sensitive to noise than the other parameters. This preliminary investigation showed the feasibility and limitations of using the VFM to identify transversely isotropic material properties from MRE images of a phantom as well as simulated harmonic displacements in an LV geometry. Copyright © 2018 John Wiley & Sons, Ltd.
Dynamic VMs placement for energy efficiency by PSO in cloud computing
NASA Astrophysics Data System (ADS)
Dashti, Seyed Ebrahim; Rahmani, Amir Masoud
2016-03-01
Recently, cloud computing has been growing fast and helping to realise other high technologies. In this paper, we propose a hierarchical architecture to satisfy both providers' and consumers' requirements in these technologies. We design a new service in the PaaS layer for scheduling consumer tasks. From the providers' perspective, incompatibility between the specifications of physical machines and user requests in the cloud leads to problems such as the energy-performance trade-off and large power consumption, decreasing profits. To guarantee the quality of service of users' tasks and improve energy efficiency, we propose modifying Particle Swarm Optimisation to reallocate migrated virtual machines on overloaded hosts. We also dynamically consolidate under-loaded hosts, which provides power savings. Simulation results in CloudSim demonstrate that, as the simulation conditions approach the real environment, our method saves as much as 14% more energy, while the number of migrations and the simulation time are significantly reduced compared with previous works.
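A generic particle swarm optimisation loop of the kind the paper modifies can be sketched as follows. The sphere objective and all constants are illustrative assumptions, not the paper's energy/SLA fitness model or its VM-reallocation encoding:

```python
import numpy as np

# Generic PSO sketch: velocity update pulled toward each particle's best
# and the global best. Objective and hyper-parameters are assumptions.

rng = np.random.default_rng(2)

def pso(f, dim=3, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    X = rng.uniform(-5, 5, (n, dim))          # particle positions
    V = np.zeros((n, dim))                    # particle velocities
    pbest = X.copy()
    pbest_f = np.array([f(x) for x in X])
    g = pbest[np.argmin(pbest_f)].copy()      # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = X + V
        fx = np.array([f(x) for x in X])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = X[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_x, best_f = pso(lambda x: np.sum(x ** 2))
```

For VM reallocation, a position would instead encode a candidate VM-to-host mapping and the fitness would combine power consumption with an SLA-violation penalty.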
Machine learning strategy for accelerated design of polymer dielectrics
Mannodi-Kanakkithodi, Arun; Pilania, Ghanshyam; Huan, Tran Doan; ...
2016-02-15
The ability to efficiently design new and advanced dielectric polymers is hampered by the lack of sufficient, reliable data on wide polymer chemical spaces, and the difficulty of generating such data given time and computational/experimental constraints. Here, we address the issue of accelerating polymer dielectrics design by extracting learning models from data generated by accurate state-of-the-art first principles computations for polymers occupying an important part of the chemical subspace. The polymers are ‘fingerprinted’ as simple, easily attainable numerical representations, which are mapped to the properties of interest using a machine learning algorithm to develop an on-demand property prediction model. Further, a genetic algorithm is utilised to optimise polymer constituent blocks in an evolutionary manner, thus directly leading to the design of polymers with given target properties. Furthermore, while this philosophy of learning to make instant predictions and design is demonstrated here for the example of polymer dielectrics, it is equally applicable to other classes of materials as well.
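The fingerprint-to-property mapping can be illustrated with kernel ridge regression, a common learning algorithm for such models; whether it matches the paper's exact choice is an assumption, and the fingerprints and "property" below are synthetic.

```python
import numpy as np

# Illustrative sketch: map numerical polymer "fingerprints" to a property
# with kernel ridge regression. Data are synthetic; the paper's actual
# descriptors and algorithm may differ.

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) similarity between fingerprint rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-3, gamma=0.5):
    # solve (K + lam*I) alpha = y for the dual coefficients
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict_krr(X_train, alpha, X_new, gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 3))      # toy 3-component fingerprints
y = 2.0 * X[:, 0] + X[:, 1] ** 2         # toy "dielectric" property
alpha = fit_krr(X, y)
pred = predict_krr(X, alpha, X[:5])
```

Once trained, predictions are near-instant, which is what makes the genetic-algorithm search over candidate blocks affordable.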
Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision
NASA Astrophysics Data System (ADS)
Hendrawan, Y.; Hawa, L. C.; Damayanti, R.
2018-03-01
This study attempted to apply a machine vision-based drying monitoring system able to optimise the drying process of cassava chips. The objective is to propose fish swarm intelligent (FSI) optimisation algorithms to find the most significant set of image features for predicting the water content of cassava chips during drying with an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximises the prediction accuracy of the ANN. Multi-Objective Optimisation (MOO) was used, consisting of prediction-accuracy maximisation and feature-subset size minimisation. The results showed that the best feature subset comprised grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast and grey homogeneity. This subset was tested successfully in the ANN model to describe the relationship between image features and the water content of cassava chips during drying, with an R2 between real and predicted data of 0.9.
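The multi-objective idea can be sketched as a subset score that rewards prediction accuracy and penalises subset size. This is a hedged toy: a 1-nearest-neighbour proxy stands in for the ANN, exhaustive search stands in for the fish-swarm optimiser, and the data and weighting are invented.

```python
import itertools, math

# Toy sketch of MOO feature selection: score = accuracy proxy minus a
# size penalty. Feature 0 and 1 track the toy "water content"; feature 2
# is noise. All values and the 0.05 weight are illustrative assumptions.

def predict_1nn(train, target, sample, subset):
    # nearest training sample in the chosen feature subspace
    def dist(a, b):
        return math.sqrt(sum((a[f] - b[f]) ** 2 for f in subset))
    nearest = min(range(len(train)), key=lambda i: dist(train[i], sample))
    return target[nearest]

def subset_score(train, target, subset, size_weight=0.05):
    # leave-one-out accuracy proxy minus a size penalty (maximise)
    err = 0.0
    for i, s in enumerate(train):
        rest, rest_t = train[:i] + train[i+1:], target[:i] + target[i+1:]
        err += abs(predict_1nn(rest, rest_t, s, subset) - target[i])
    return (1.0 - err / len(train)) - size_weight * len(subset)

# toy samples: (feature0, feature1, feature2) -> water content
train = [(0.1, 0.9, 0.5), (0.2, 0.8, 0.1), (0.8, 0.2, 0.7), (0.9, 0.1, 0.3)]
target = [0.9, 0.85, 0.2, 0.15]
best = max(
    (s for r in range(1, 4) for s in itertools.combinations(range(3), r)),
    key=lambda s: subset_score(train, target, list(s)),
)
```

The size penalty makes a single informative feature beat a larger subset of equal accuracy, which is exactly the trade-off the MOO formulation encodes.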
NASA Astrophysics Data System (ADS)
van Schaik, Joris W. J.; Kleja, Dan B.; Gustafsson, Jon Petter
2010-02-01
Vast amounts of knowledge about the proton- and metal-binding properties of dissolved organic matter (DOM) in natural waters have been obtained in studies on isolated humic and fulvic (hydrophobic) acids. Although macromolecular hydrophilic acids normally make up about one-third of DOM, their proton- and metal-binding properties are poorly known. Here, we investigated the acid-base and Cu-binding properties of the hydrophobic (fulvic) acid fraction and two hydrophilic fractions isolated from a soil solution. Proton titrations revealed a higher total charge for the hydrophilic acid fractions than for the hydrophobic acid fraction. The most hydrophilic fraction appeared to be dominated by weak acid sites, as evidenced by increased slope of the curve of surface charge versus pH at pH values above 6. The titration curves were poorly predicted by both Stockholm Humic Model (SHM) and NICA-Donnan model calculations using generic parameter values, but could be modelled accurately after optimisation of the proton-binding parameters (pH ⩽ 9). Cu-binding isotherms for the three fractions were determined at pH values of 4, 6 and 9. With the optimised proton-binding parameters, the SHM model predictions for Cu binding improved, whereas the NICA-Donnan predictions deteriorated. After optimisation of Cu-binding parameters, both models described the experimental data satisfactorily. Iron(III) and aluminium competed strongly with Cu for binding sites at both pH 4 and pH 6. The SHM model predicted this competition reasonably well, but the NICA-Donnan model underestimated the effects significantly at pH 6. Overall, the Cu-binding behaviour of the two hydrophilic acid fractions was very similar to that of the hydrophobic acid fraction, despite the differences observed in proton-binding characteristics. These results show that for modelling purposes, it is essential to include the hydrophilic acid fraction in the pool of 'active' humic substances.
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) are becoming increasingly important for load-bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters whilst regarding the geometry as invariable. In this work, a meta-model based approach at component level is proposed that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying database, additional samples are drawn via finite-element draping simulations according to a suitable design table for computer experiments. Time-saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian regression meta-model is built from the database. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in a short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: for each process step along the chain, a meta-model can be set up to predict the impact of design variations on manufacturability and part performance.
Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
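The sample-then-surrogate workflow above can be sketched as follows. This is a minimal stand-in, not the authors' pipeline: a Latin-hypercube-style design table replaces their design table for computer experiments, a cheap analytic function fakes the expensive FE draping run, and a Gaussian-process posterior mean serves as the meta-model.

```python
import numpy as np

# Hedged sketch: sample geometry parameters, "simulate" each sample, fit a
# GP regression whose mean predicts formability almost instantly for new
# geometries. The simulator and all parameters are invented stand-ins.

rng = np.random.default_rng(42)

def latin_hypercube(n, dims):
    # one stratified sample per interval in each dimension, shuffled
    cols = [(rng.permutation(n) + rng.uniform(0, 1, n)) / n for _ in range(dims)]
    return np.column_stack(cols)

def fake_draping_sim(x):
    # stand-in for the expensive FE draping run ("shear angle" response)
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]

def gp_fit_predict(X, y, Xs, length=0.3, noise=1e-5):
    def k(A, B):
        d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    Kinv_y = np.linalg.solve(k(X, X) + noise * np.eye(len(X)), y)
    return k(Xs, X) @ Kinv_y            # GP posterior mean

X = latin_hypercube(30, 2)             # 30 design-table samples, 2 params
y = fake_draping_sim(X)                # expensive step, done once
pred = gp_fit_predict(X, y, np.array([[0.5, 0.5]]))
```

After the one-off sampling cost, each meta-model evaluation is a small matrix-vector product, which is what makes robustness analysis or design optimisation cheap.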
NASA Astrophysics Data System (ADS)
Czán, Andrej; Kubala, Ondrej; Danis, Igor; Czánová, Tatiana; Holubják, Jozef; Mikloš, Matej
2017-12-01
The ever-increasing production and use of hard-to-machine progressive materials are the main drivers of the continual search for new ways and methods of machining. One of these is the ceramic milling tool, which combines the advantages of conventional ceramic cutting materials with those of conventional coated steel-based inserts. These properties allow improved cutting conditions and thus increased productivity, while preserving the quality known from the use of conventional tools. In this paper, the properties and capabilities of this tool are identified when machining hard-to-machine materials such as the nickel alloys used in aircraft engines. The article focuses on the analysis and evaluation of ordinary technological parameters and surface quality, mainly the roughness and quality of the machined surface, and on tool wear.
Experimental investigation of the tip based micro/nano machining
NASA Astrophysics Data System (ADS)
Guo, Z.; Tian, Y.; Liu, X.; Wang, F.; Zhou, C.; Zhang, D.
2017-12-01
Based on the self-developed three-dimensional micro/nano machining system, the effects of machining parameters and sample material on micro/nano machining are investigated. The micro/nano machining system is mainly composed of the probe system and a micro/nano positioning stage. The former is applied to control the normal load and the latter is utilized to realize high-precision motion in the xy plane. A sample examination method is first introduced to check whether the sample is placed horizontally. The machining parameters include scratching direction, speed, cycles, normal load and feed. According to the experimental results, the scratching depth is significantly affected by the normal load in all four defined scratching directions but is rarely influenced by the scratching speed. Increasing the number of scratching cycles increases the scratching depth as well as smoothing the groove wall. In addition, the scratching tests on silicon and copper indicate that the harder material is removed more easily. In scratching with different feed amounts, the machining results indicate that the machined depth increases as the feed is reduced. Further, a cubic polynomial is used to fit the experimental results to predict the scratching depth. With the selected machining parameters of scratching direction d3/d4, scratching speed 5 μm/s and feed 0.06 μm, several more microstructures, including a stair, a sinusoidal groove, the Chinese character '田', 'TJU' and a Chinese panda, have been fabricated on the silicon substrate.
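The cubic fit mentioned above can be sketched in a few lines. The (feed, depth) pairs below are made up to follow the reported trend (depth grows as feed shrinks); the real calibration data are in the paper, not here.

```python
import numpy as np

# Illustrative cubic least-squares fit of scratching depth vs. feed.
# Data values are invented; only the trend matches the text.

feed = np.array([0.06, 0.10, 0.14, 0.18, 0.22])    # um per pass (toy)
depth = np.array([0.90, 0.62, 0.45, 0.34, 0.27])   # um (toy)

coeffs = np.polyfit(feed, depth, deg=3)            # cubic least squares
predict_depth = np.poly1d(coeffs)

d = float(predict_depth(0.08))                     # interpolate a new feed
```

The fitted polynomial then predicts the depth for untested feeds inside the calibrated range; extrapolating outside that range with a cubic would be unreliable.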
NASA Astrophysics Data System (ADS)
Tolipov, A. A.; Elghawail, A.; Shushing, S.; Pham, D.; Essa, K.
2017-09-01
There is a growing demand for flexible manufacturing techniques that meet rapid changes in customer needs. A finite element analysis numerical optimisation technique was used to optimise the multi-point sheet forming process. Multi-point forming (MPF) is a flexible sheet metal forming technique in which the same tool can be readily changed to produce different parts. The process suffers from geometrical defects such as wrinkling and dimpling, which have been found to be the cause of the major surface quality problems. This study investigated the influence of parameters such as elastic cushion hardness, blank holder force, coefficient of friction, cushion thickness and radius of curvature on the quality of parts formed in a flexible multi-point stamping die. For these reasons, a multi-point stamping process using a blank holder was carried out in order to study wrinkling, dimpling, thickness variation and forming force, with the aim of determining the optimum values of these parameters. Finite element modelling (FEM) was employed to simulate the multi-point forming of hemispherical shapes. Using the response surface method, the effects of the process parameters on wrinkling, maximum deviation from the target shape and thickness variation were investigated. The results show that an elastic cushion of appropriate thickness, made of polyurethane with a hardness of Shore A90, gives the best part quality, and that the application of lubrication can improve the shape accuracy of the formed workpiece. These results were compared with numerical simulation results for the multi-point forming of hemispherical shapes using a blank holder, confirming that a suitable cushion hardness reduces wrinkling and the maximum deviation.
Optimization of microwave-assisted extraction of polyphenols from Myrtus communis L. leaves.
Dahmoune, Farid; Nayak, Balunkeswar; Moussi, Kamal; Remini, Hocine; Madani, Khodir
2015-01-01
Phytochemicals, such as phenolic compounds, are of great interest due to their health-benefitting antioxidant properties and possible protection against inflammation, cardiovascular diseases and certain types of cancer. Maximum retention of these phytochemicals during extraction requires optimised process parameter conditions. A microwave-assisted extraction (MAE) method was investigated for the extraction of total phenolics from Myrtus communis leaves. The total phenolic capacity (TPC) of leaf extracts at optimised MAE conditions was compared with ultrasound-assisted extraction (UAE) and conventional solvent extraction (CSE). The influence of extraction parameters, including ethanol concentration, microwave power, irradiation time and solvent-to-solid ratio, on the extraction of TPC was modelled using a second-order regression equation. The optimal MAE conditions were 42% ethanol concentration, 500 W microwave power, 62 s irradiation time and a 32 mL/g solvent-to-material ratio. Ethanol concentration and liquid-to-solid ratio were the significant parameters for the extraction process (p<0.01). Under the optimised MAE conditions, the recovery of TPC was 162.49 ± 16.95 mg gallic acid equivalent/g dry weight (DW), approximating the predicted content (166.13 mg GAE/g DW). When bioactive phytochemicals extracted from Myrtus leaves using MAE were compared with those obtained by UAE and CSE, it was also observed that tannins (32.65 ± 0.01 mg/g), total flavonoids (5.02 ± 0.05 mg QE/g) and antioxidant activities (38.20 ± 1.08 μg GAE/mL) were higher in the MAE extracts than in the other two. These findings further illustrate that extraction of bioactive phytochemicals from plant materials by MAE consumes less extraction solvent and saves time. Copyright © 2014 Elsevier Ltd. All rights reserved.
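A second-order regression of the kind described above can be sketched with ordinary least squares. This is a hedged toy with two coded factors and a synthetic response; the paper's actual model has four factors and measured TPC data.

```python
import numpy as np

# Sketch of a second-order (response-surface) model: fit a full quadratic
# in coded factors, then locate the optimum on a grid. Factors, response
# values and coefficients below are invented for illustration.

def quad_design(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1*x2])

# coded levels (think ethanol %, irradiation time) and a toy "TPC" response
X = np.array([[-1, -1], [-1, 0], [-1, 1], [0, -1], [0, 0],
              [0, 1], [1, -1], [1, 0], [1, 1]], dtype=float)
y = 160 + 2 * X[:, 0] - 8 * X[:, 0]**2 - 5 * X[:, 1]**2

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)

# evaluate the fitted surface on a grid and take the maximiser
g = np.linspace(-1, 1, 41)
grid = np.array([[a, b] for a in g for b in g])
best = grid[np.argmax(quad_design(grid) @ beta)]
```

With real data the fit would not be exact, and the stationary point would be checked (e.g. via the coefficient signs) to confirm it is a maximum rather than a saddle.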
Implications of a wavelength dependent PSF for weak lensing measurements.
NASA Astrophysics Data System (ADS)
Eriksen, Martin; Hoekstra, Henk
2018-05-01
The convolution of galaxy images by the point-spread function (PSF) is the dominant source of bias for weak gravitational lensing studies, and an accurate estimate of the PSF is required to obtain unbiased shape measurements. The PSF estimate for a galaxy depends on its spectral energy distribution (SED), because the instrumental PSF is generally a function of the wavelength. In this paper we explore various approaches to determine the resulting `effective' PSF using broad-band data. Considering the Euclid mission as a reference, we find that standard SED template fitting methods result in biases that depend on source redshift, although this may be remedied if the algorithms can be optimised for this purpose. Using a machine-learning algorithm we show that, at least in principle, the required accuracy can be achieved with the current survey parameters. It is also possible to account for the correlations between photometric redshift and PSF estimates that arise from the use of the same photometry. We explore the impact of errors in photometric calibration, errors in the assumed wavelength dependence of the PSF model and limitations of the adopted template libraries. Our results indicate that the required accuracy for Euclid can be achieved using the data that are planned to determine photometric redshifts.
Improving the performance of extreme learning machine for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong
2015-05-01
Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections result in only slightly suboptimal performance.
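The proposed shortcut amounts to fitting a line through two calibration points and extrapolating. The calibration pairs below are invented for illustration; in practice they would come from grid searches on two small training sets.

```python
# Minimal sketch of the paper's idea: estimate the linear relation between
# training-set size and a good hidden-neuron count from two small sets,
# then extrapolate to a large set. Calibration numbers are hypothetical.

def fit_line(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

# suppose grid search on two small sets found these good neuron counts
slope, intercept = fit_line((200, 150), (400, 260))

def hidden_neurons(n_train):
    return round(slope * n_train + intercept)

n = hidden_neurons(5000)   # large set: no expensive grid search needed
```

Two cheap grid searches thus replace an expensive search at full scale, which is where the claimed cost reduction comes from.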
De Tobel, J; Radesh, P; Vandermeulen, D; Thevissen, P W
2017-12-01
Automated methods to evaluate growth of hand and wrist bones on radiographs and magnetic resonance imaging have been developed. They can be applied to estimate age in children and subadults. Automated methods require the software to (1) recognise the region of interest in the image(s), (2) evaluate the degree of development and (3) correlate this to the age of the subject based on a reference population. For age estimation based on third molars, an automated method for step (1) has been presented for 3D magnetic resonance imaging and is currently being optimised (Unterpirker et al. 2015). The aim of this study was to develop an automated method for step (2) based on lower third molars on panoramic radiographs. A modified Demirjian staging technique including ten developmental stages was developed. Twenty panoramic radiographs per stage per gender were retrospectively selected for FDI element 38. Two observers decided in consensus about the stages. When necessary, a third observer acted as a referee to establish the reference stage for the considered third molar. This set of radiographs was used as training data for machine learning algorithms for automated staging. First, image contrast settings were optimised to evaluate the third molar of interest, and a rectangular bounding box was placed around it in a standardised way using Adobe Photoshop CC 2017 software. This bounding box indicated the region of interest for the next step. Second, several machine learning algorithms available in MATLAB R2017a software were applied for automated stage recognition. Third, the classification performance was evaluated in a 5-fold cross-validation scenario, using different validation metrics (accuracy, Rank-N recognition rate, mean absolute difference, linear kappa coefficient). Transfer learning, a type of deep learning convolutional neural network approach, outperformed all other tested approaches.
Mean accuracy equalled 0.51, mean absolute difference was 0.6 stages and mean linearly weighted kappa was 0.82. The overall performance of the presented automated pilot technique to stage lower third molar development on panoramic radiographs was similar to staging by human observers. It will be further optimised in future research, since it represents a necessary step to achieve a fully automated dental age estimation method, which to date is not available.
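Two of the validation metrics named above, mean absolute difference and the linearly weighted kappa, can be computed from scratch. The stage values below are invented; the formulas are the standard definitions, so this is a sketch rather than the authors' exact evaluation code.

```python
from collections import Counter

# Sketch of staging metrics for reference vs. predicted stages (0-9).
# Stage sequences are toy data.

def mean_absolute_difference(ref, pred):
    return sum(abs(r - p) for r, p in zip(ref, pred)) / len(ref)

def linear_weighted_kappa(ref, pred, n_stages=10):
    n = len(ref)
    w = lambda i, j: abs(i - j) / (n_stages - 1)   # linear disagreement weight
    obs = sum(w(r, p) for r, p in zip(ref, pred)) / n
    # expected weighted disagreement from the marginal stage frequencies
    fr, fp = Counter(ref), Counter(pred)
    exp = sum(w(i, j) * fr[i] * fp[j] for i in fr for j in fp) / n**2
    return 1 - obs / exp

ref  = [2, 3, 5, 5, 7, 8, 9, 4, 6, 1]
pred = [2, 4, 5, 6, 7, 8, 9, 4, 5, 1]
mad = mean_absolute_difference(ref, pred)
kappa = linear_weighted_kappa(ref, pred)
```

This pairing explains how a modest raw accuracy (0.51 here) can coexist with a high weighted kappa (0.82): most misclassifications land only one stage away, which the linear weights penalise lightly.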
Nondimensional parameter for conformal grinding: combining machine and process parameters
NASA Astrophysics Data System (ADS)
Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.
1999-11-01
Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise from the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require the use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since the grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper, a simple model that combines machine stiffness and process parameters into a single non-dimensional parameter is adapted to a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.
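The stiffness-deflection coupling the abstract describes can be illustrated with elementary mechanics. This is not the paper's non-dimensional parameter (which is not specified here), just the underlying Hooke's-law relationship, with invented numbers.

```python
# Illustration only: grinding load deflects a compliant tool by F/k, so
# the achieved depth falls short of the programmed depth. All values are
# hypothetical; the paper's actual parameter combines more quantities.

def removal_error(load_n, stiffness_n_per_um):
    """Tool deflection (um) by Hooke's law; equals the depth shortfall."""
    return load_n / stiffness_n_per_um

programmed_depth = 5.0                       # um
load = 12.0                                  # N, grows with removal rate
for stiffness in (2.0, 10.0, 50.0):          # N/um: compliant -> stiff
    err = removal_error(load, stiffness)
    achieved = programmed_depth - err
```

The compliant tool loses a large fraction of the programmed depth, while the stiff one barely deviates, which is why a threshold on a stiffness-load ratio can bound the figure error.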
Influence of cutting data on surface quality when machining 17-4 PH stainless steel
NASA Astrophysics Data System (ADS)
Popovici, T. D.; Dijmărescu, M. R.
2017-08-01
The aim of the research presented in this paper is to analyse the influence of cutting data on surface quality when milling 17-4 PH stainless steel. The cutting regime parameters considered for the experiments were established, within the recommended ranges, using cutting regimes from experimental research or from industrial practice as a basis. The structure of the experimental programme was determined by taking into account compatibility and orthogonality conditions and minimal use of material and labour. The machined surface roughness was determined by measuring the Ra roughness parameter, followed by registration of the surface profile as graphs saved on a computer with MarSurf PS1Explorer software. Maximum values of the Ra roughness parameter were extracted from these graphs, and charts of the influence of the cutting regime parameters on surface roughness were traced using Microsoft Excel. After a thorough analysis of the resulting data, relevant conclusions were drawn, presenting the interdependence between the surface roughness of the machined 17-4 PH samples and the variation of the cutting data.
NASA Astrophysics Data System (ADS)
Li, Dewei; Li, Jiwei; Xi, Yugeng; Gao, Furong
2017-12-01
In practical applications, systems are always influenced by parameter uncertainties and external disturbances. Both the H2 performance and the H∞ performance are important in real applications. For a constrained system, previous designs of mixed H2/H∞ robust model predictive control (RMPC) optimise one performance with the other performance requirement as a constraint, so the two performances cannot be optimised at the same time. In this paper, an improved design of mixed H2/H∞ RMPC for polytopic uncertain systems with external disturbances is proposed to optimise them simultaneously. In the proposed design, the original uncertain system is decomposed into two subsystems by the additive property of linear systems. Two different Lyapunov functions are used to separately formulate the two performance indices for the two subsystems. The proposed RMPC then optimises both performances by the weighting method while satisfying the H∞ performance requirement. Meanwhile, to make the design more practical, a simplified design is also developed. The recursive feasibility conditions of the proposed RMPC are discussed and closed-loop input-to-state practical stability is proven. Numerical examples illustrate the enlarged feasible region and the improved performance of the proposed design.
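A generic way to write the weighted mixed objective described above is the following; this is an illustrative form, not necessarily the authors' exact LMI-based formulation:

```latex
\min_{u}\;\; \lambda\, J_{H_2}(u) \;+\; (1-\lambda)\, J_{H_\infty}(u),
\qquad 0 \le \lambda \le 1,
\qquad \text{subject to } \lVert T_{zw} \rVert_\infty \le \gamma ,
```

where $J_{H_2}$ and $J_{H_\infty}$ are the two performance indices, $\lambda$ trades them off, and the constraint enforces the required disturbance-attenuation level $\gamma$ on the closed-loop transfer function $T_{zw}$ from disturbance to performance output.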
Ben Khedher, Saoussen; Jaoua, Samir; Zouari, Nabil
2013-01-01
In order to overproduce bioinsecticides with a sporeless Bacillus thuringiensis strain, an optimal composition of a cheap medium was defined using response surface methodology. In a first step, a Plackett-Burman design used to evaluate the effects of eight medium components on delta-endotoxin production showed that starch, soya bean and sodium chloride had significant effects on bioinsecticide production. In a second step, these parameters were selected for further optimisation by central composite design. The results revealed that the optimum culture medium for delta-endotoxin production consists of 30 g L(-1) starch, 30 g L(-1) soya bean and 9 g L(-1) sodium chloride. Compared to the basal production medium, an improvement in delta-endotoxin production of up to 50% was noted. Moreover, the relative toxin yield of sporeless Bacillus thuringiensis S22 was improved markedly by using the optimised cheap medium (148.5 mg delta-endotoxins per g starch) compared with the yield obtained in the basal medium (94.46 mg delta-endotoxins per g starch). Therefore, the use of the optimised cheap culture medium appears to be a good alternative for low-cost production of sporeless Bacillus thuringiensis bioinsecticides at industrial scale, which is of great practical importance.
Statistical optimisation of diclofenac sustained release pellets coated with polymethacrylic films.
Kramar, A; Turk, S; Vrecer, F
2003-04-30
The objective of the present study was to evaluate three formulation parameters for the application of polymethacrylic films from aqueous dispersions in order to obtain multiparticulate sustained release of diclofenac sodium. Film coating of pellet cores was performed in a laboratory fluid bed apparatus. The chosen independent variables, i.e. the concentration of plasticizer (triethyl citrate), methacrylate polymers ratio (Eudragit RS:Eudragit RL) and the quantity of coating dispersion were optimised with a three-factor, three-level Box-Behnken design. The chosen dependent variables were cumulative percentage values of diclofenac dissolved in 3, 4 and 6 h. Based on the experimental design, different diclofenac release profiles were obtained. Response surface plots were used to relate the dependent and the independent variables. The optimisation procedure generated an optimum of 40% release in 3 h. The levels of plasticizer concentration, quantity of coating dispersion and polymer to polymer ratio (Eudragit RS:Eudragit RL) were 25% w/w, 400 g and 3/1, respectively. The optimised formulation prepared according to computer-determined levels provided a release profile, which was close to the predicted values. We also studied thermal and surface characteristics of the polymethacrylic films to understand the influence of plasticizer concentration on the drug release from the pellets.
Machining of Molybdenum by EDM-EP and EDC Processes
NASA Astrophysics Data System (ADS)
Wu, K. L.; Chen, H. J.; Lee, H. M.; Lo, J. S.
2017-12-01
Molybdenum metal (Mo) can be machined with conventional tools and equipment; however, its refractory nature makes it prone to chipping when machined. In this study, the nonconventional processes of electrical discharge machining (EDM) and electro-polishing (EP) were conducted to investigate the machining of Mo metal and the fabrication of Mo grids. Satisfactory surface quality was obtained using appropriate EDM parameters of Ip ≤ 3 A and Ton < 80 μs at a constant pulse interval of 100 μs. Surface finishing of the Mo metal was accomplished by selecting appropriate EP parameters, such as an electrolyte flow rate of 0.42 m/s at an EP voltage of 50 V and a flush time of 20 s, to remove the recast layer and craters on the surface. The surface roughness of the machined Mo metal was improved from Ra of 0.93 μm (Rmax = 8.51 μm) to 0.23 μm (Rmax = 1.48 μm). A machined Mo metal surface, when used as a grid component in an electron gun, needs to be modified by coating it with materials of high work function, such as silicon carbide (SiC). The main purpose of this study is to explore the electrical discharge coating (EDC) process for coating the SiC layer on EDMed Mo metal. Experimental results showed that the appropriate parameters of Ip = 5 A and Ton = 50 μs at Toff = 10 μs produce a deposit about 60 μm thick. The major phase of the deposit on the machined Mo surface was SiC ceramic, while minor phases included MoSi2 and/or SiO2 with the presence of free Si, due to improper discharge parameters and the use of silicone oil as the dielectric fluid.
NASA Astrophysics Data System (ADS)
Boilard, Patrick
Even though powder metallurgy (P/M) is a near net shape process, a large number of parts still require one or more machining operations during the course of their elaboration and/or their finishing. The main objectives of the work presented in this thesis are centered on the elaboration of blends with enhanced machinability, as well as helping with the definition and in the characterization of the machinability of P/M parts. Enhancing machinability can be done in various ways, through the use of machinability additives and by decreasing the amount of porosity of the parts. These different ways of enhancing machinability have been investigated thoroughly, by systematically planning and preparing series of samples in order to obtain valid and repeatable results leading to meaningful conclusions relevant to the P/M domain. Results obtained during the course of the work are divided into three main chapters: (1) the effect of machining parameters on machinability, (2) the effect of additives on machinability, and (3) the development and the characterization of high density parts obtained by liquid phase sintering. Regarding the effect of machining parameters on machinability, studies were performed on parameters such as rotating speed, feed, tool position and diameter of the tool. Optimal cutting parameters are found for drilling operations performed on a standard FC-0208 blend, for different machinability criteria. Moreover, study of material removal rates shows the sensitivity of the machinability criteria for different machining parameters and indicates that thrust force is more regular than tool wear and slope of the drillability curve in the characterization of machinability. The chapter discussing the effect of various additives on machinability reveals many interesting results. First, work carried out on MoS2 additions reveals the dissociation of this additive and the creation of metallic sulphides (namely CuxS sulphides) when copper is present. 
Results also show that it is possible to reduce the amount of MoS2 in the blend so as to lower the dimensional change and the cost (blend Mo8A), while enhancing machinability and keeping hardness values within the same range (70 HRB). Second, adding enstatite (MgO·SiO2) permits the observation of the mechanisms occurring with the use of this additive. It is found that the stability of enstatite limits the diffusion of graphite during sintering, leading to the presence of free graphite in the pores, thus enhancing machinability. Furthermore, a lower amount of graphite in the matrix leads to a lower hardness, which is also beneficial to machinability. It is also found that the presence of copper enhances the diffusion of graphite, through the formation of a liquid phase during sintering. With the objective of improving machinability by reaching higher densities, blends were developed for densification through liquid phase sintering. High density samples are obtained (>7.5 g/cm3) for blends prepared with Fe-C-P constituents, namely with 0.5%P and 2.4%C. By systematically studying the effect of different parameters, the importance of the chemical composition (mainly the carbon content) and the importance of the sintering cycle (particularly the cooling rate) are demonstrated. Moreover, the various heat treatments studied illustrate the different microstructures achievable for this system, showing various amounts of cementite, pearlite and free graphite. Although machinability is limited for samples containing large amounts of cementite, it can be greatly improved with very slow cooling, leading to graphitization of the carbon in the presence of phosphorus. Adequate control of the sintering cycle on samples made from FGS1625 powder yields high-density (≥7.0 g/cm3) microstructures containing various amounts of pearlite, ferrite and free graphite.
Obtaining ferritic microstructures with free graphite designed for very high machinability (tool wear <1.0%) or fine pearlitic microstructures with excellent mechanical properties (transverse rupture strength >1600 MPa) is therefore possible. These results show that improvement of machinability through higher densities is limited by microstructure. Indeed, for the studied samples, microstructure dominates the determination of machinability, far more than density, as shown for example by the influence of cementite or of the volume fraction of free graphite on machinability. (Abstract shortened by UMI.)
Method of Individual Forecasting of Technical State of Logging Machines
NASA Astrophysics Data System (ADS)
Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.
2018-03-01
Developing a model that evaluates the possibility of failure requires knowledge of the regularities with which the technical-condition parameters of machines change in use. To study these regularities, stochastic models had to be developed that take into account the physical essence of the destruction processes of the machines' structural elements, the technology of their production, their degradation, the stochastic properties of the technical-state parameters, and the conditions and modes of operation.
CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms
NASA Astrophysics Data System (ADS)
Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert
2018-04-01
CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of Coronal Mass Ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters using 182 previously observed geo-effective partial-/full-halo CMEs, and uses Support Vector Machine (SVM) algorithms to make its predictions, which can be produced within minutes of providing the necessary input parameters of a CME.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Differing from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
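The core of the composite-kernel idea above is a weighted sum of base kernel matrices fed into a kernel ELM. The sketch below illustrates that combination on toy data, under stated assumptions: the QPSO tuning step is omitted, the kernel weights and parameters are fixed by hand, and all names and values are illustrative rather than the paper's.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=0.5):
    # Pairwise squared Euclidean distances -> RBF kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def polynomial_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def composite_kernel(X, Y, weights=(0.7, 0.3)):
    # Weighted sum of base kernels; in the paper the weights and each
    # kernel's own parameters would be tuned by QPSO, not fixed like here
    return weights[0] * gaussian_kernel(X, Y) + weights[1] * polynomial_kernel(X, Y)

def kelm_train(X, T, C=10.0):
    # Kernel ELM output weights: beta = (K + I/C)^-1 T
    K = composite_kernel(X, X)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, beta, X_new):
    return composite_kernel(X_new, X_train) @ beta

# Toy two-class problem with well-separated clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
T = np.array([[1, 0]] * 20 + [[0, 1]] * 20, dtype=float)
beta = kelm_train(X, T)
pred = kelm_predict(X, beta, X).argmax(1)
print((pred == T.argmax(1)).mean())  # training accuracy on separable toy data
```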
Applying machine learning to identify autistic adults using imitation: An exploratory study.
Li, Baihua; Sharma, Arjun; Meng, James; Purushwalkam, Senthil; Gowen, Emma
2017-01-01
Autism spectrum condition (ASC) is primarily diagnosed by behavioural symptoms including social, sensory and motor aspects. Although stereotyped, repetitive motor movements are considered during diagnosis, quantitative measures that identify kinematic characteristics in the movement patterns of autistic individuals are poorly studied, preventing advances in understanding the aetiology of motor impairment, or whether a wider range of motor characteristics could be used for diagnosis. The aim of this study was to investigate whether data-driven machine learning based methods could be used to address some fundamental problems with regard to identifying discriminative test conditions and kinematic parameters to classify between ASC and neurotypical controls. Data were taken from a previous task in which 16 ASC participants and 14 age- and IQ-matched controls observed and then imitated a series of hand movements. 40 kinematic parameters extracted from eight imitation conditions were analysed using machine learning based methods. Two optimal imitation conditions and the nine most significant kinematic parameters were identified and compared with some standard attribute evaluators. To our knowledge, this is the first attempt to apply machine learning to kinematic movement parameters measured during imitation of hand movements to investigate the identification of ASC. Although based on a small sample, the work demonstrates the feasibility of applying machine learning methods to analyse high-dimensional data and suggests the potential of machine learning for identifying kinematic biomarkers that could contribute to the diagnostic classification of autism.
The work studies the effect of magnetic circuit saturation on the synchronous inductive reactance of the armature. A practical method is given for...calculating synchronous parameters in saturated synchronous machines with additional clearances and machines with superconducting excitation windings.
NASA Astrophysics Data System (ADS)
Plastun, A. T.; Tikhonova, O. V.; Malygin, I. V.
2018-02-01
The paper presents methods of producing a periodically varying different-pole magnetic field in low-power electrical machines. The authors consider classical designs of electrical machines and machines with ring windings in the armature, their structural features, and the calculated parameters of the magnetic circuit for these machines.
NASA Astrophysics Data System (ADS)
Sheikholeslami, Ghazal; Griffiths, Jonathan; Dearden, Geoff; Edwardson, Stuart P.
Laser forming (LF) has been shown to be a viable alternative for forming automotive grade advanced high strength steels (AHSS). Owing to their high strength and heat sensitivity, these steels have low conventional formability, showing early fractures, large springback, batch-to-batch inconsistency and high tool wear. In this paper, the LF process parameters have been optimised to further understand the impact of a surface heat treatment on DP1000. An FE numerical simulation has been developed to analyse the dynamic thermo-mechanical effects and has been verified against empirical data. The goal of the optimisation has been to develop a usable process window for the LF of AHSS within strict metallurgical constraints. Results indicate that it is possible to LF this material; however, a complex relationship has been found between the generation and maintenance of hardness values in the heated zone. A laser surface hardening effect has been observed that could be beneficial to the efficiency of the process.
Optimisation of driver actions in RWD race car including tyre thermodynamics
NASA Astrophysics Data System (ADS)
Maniowski, Michal
2016-04-01
The paper presents an innovative method for lap time minimisation, using genetic algorithms for multi-objective optimisation of a race driver-vehicle model. The decision variables consist of 16 parameters responsible for the actions of a professional driver (e.g. time traces for brake, accelerator and steering wheel) on a race track section with an RH corner. A purpose-built, high fidelity, multibody vehicle model (called 'miMa') is described by 30 generalised coordinates and 440 parameters crucial in motorsport. Focus is put on modelling of the tyre tread thermodynamics and its influence on race vehicle dynamics. A numerical example considers a Rear Wheel Drive BMW E36 prepared for track day events. In order to improve the section lap time (by 5%) and corner exit velocity (by 4%), a few different driving strategies are found depending on the thermal conditions of the semi-slick tyres. The process of the race driver's adaptation to initially cold or hot tyres is explained.
Optimisation of 12 MeV electron beam simulation using variance reduction technique
NASA Astrophysics Data System (ADS)
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) was implemented in MC to shorten this duration. This work focused on optimisation of the VRT parameters, namely electron range rejection and the number of particle histories. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model without VRT parameters. The validated MC model simulation was repeated applying the VRT parameter (electron range rejection), controlled by a global electron cut-off energy of 1, 2 and 5 MeV, using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilised in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation duration while preserving accuracy.
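Range rejection simply stops tracking an electron once its energy falls below the cut-off, depositing the remaining energy locally instead of simulating further transport steps. The toy loop below is only a schematic illustration of why that saves time; it is not EGSnrc code, and the step model and numbers are invented.

```python
import random

def simulate(histories, e_cut=0.0):
    # Toy electron-transport loop: each step removes a random fraction of
    # the electron's energy; "range rejection" terminates a history early
    # once the energy falls below e_cut (the remainder is dumped locally)
    random.seed(42)
    steps = 0
    for _ in range(histories):
        energy = 12.0                            # MeV, as in the abstract's beam
        while energy > 0.02:
            energy *= random.uniform(0.7, 0.95)  # crude slowing-down model
            steps += 1
            if energy < e_cut:                   # VRT: stop tracking this electron
                break
    return steps

full = simulate(10000)             # no range rejection
vrt = simulate(10000, e_cut=5.0)   # 5 MeV cut-off, as in the paper
print(vrt / full)  # fraction of transport steps kept: the source of the speed-up
```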
Optimal control of Formula One car energy recovery systems
NASA Astrophysics Data System (ADS)
Limebeer, D. J. N.; Perantoni, G.; Rao, A. V.
2014-10-01
The utility of orthogonal collocation methods in the solution of optimal control problems relating to Formula One racing is demonstrated. These methods can be used to optimise driver controls such as the steering, braking and throttle usage, and to optimise vehicle parameters such as the aerodynamic downforce and mass distributions. Of particular interest is the optimal usage of energy recovery systems (ERSs). Contemporary kinetic energy recovery systems are studied and compared with future hybrid kinetic and thermal/heat ERSs, known as ERS-K and ERS-H, respectively. It is demonstrated that these systems, when properly controlled, can produce contemporary lap times using approximately two-thirds of the fuel required by earlier generation (2013 and prior) vehicles.
Multi-parameter monitoring of electrical machines using integrated fibre Bragg gratings
NASA Astrophysics Data System (ADS)
Fabian, Matthias; Hind, David; Gerada, Chris; Sun, Tong; Grattan, Kenneth T. V.
2017-04-01
In this paper a sensor system for multi-parameter electrical machine condition monitoring is reported. The proposed FBG-based system allows for the simultaneous monitoring of machine vibration, rotor speed and position, torque, spinning direction, temperature distribution along the stator windings and on the rotor surface, as well as the stator wave frequency. This all-optical sensing solution reduces the component count of conventional sensor systems: all 48 sensing elements are contained within the machine and operated by a single interrogation unit. In this work, the sensing system has been successfully integrated into and tested on a permanent magnet motor prototype.
NASA Astrophysics Data System (ADS)
Okokpujie, Imhade Princess; Ikumapayi, Omolayo M.; Okonkwo, Ugochukwu C.; Salawu, Enesi Y.; Afolalu, Sunday A.; Dirisu, Joseph O.; Nwoke, Obinna N.; Ajayi, Oluseyi O.
2017-12-01
In modern machining operations, tool life is one of the most demanding concerns in the production process, especially in the automotive industry. The aim of this paper is to study tool wear of HSS tools in end milling of aluminium 6061 alloy. Experiments were carried out to investigate tool wear as a function of the machining parameters and to develop a mathematical model using response surface methodology. The machining parameters selected for the experiment are spindle speed (N), feed rate (f), axial depth of cut (a) and radial depth of cut (r). The experiment was designed using a central composite design (CCD) in which 31 samples were run on a SIEG 3/10/0010 CNC end milling machine. After each experiment the cutting tool wear was measured using a scanning electron microscope (SEM). The optimum machining parameter combination of spindle speed 2500 rpm, feed rate 200 mm/min, axial depth of cut 20 mm and radial depth of cut 1.0 mm was found to achieve the minimum tool wear of 0.213 mm. The mathematical model developed predicted the tool wear with 99.7% accuracy, which is within the acceptable range for tool wear prediction.
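The response surface model referred to above is typically a full second-order polynomial fitted by least squares to the CCD runs. As a hedged illustration (synthetic data, not the paper's 31-run design or its coefficients), a quadratic surface can be fitted like this:

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    # Columns: intercept, linear terms, squared terms, two-factor interactions
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)] \
         + [X[:, i] ** 2 for i in range(k)] \
         + [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    # Ordinary least squares fit of the full quadratic response surface
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic two-factor example on coded levels [-1, +1] with a known surface
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (30, 2))
y = 3 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] ** 2 + 0.2 * X[:, 0] * X[:, 1]
beta = fit_rsm(X, y)
print(np.round(beta, 3))  # recovers [3, 2, -1, 0.5, 0, 0.2] on noise-free data
```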
NASA Astrophysics Data System (ADS)
Bhaumik, Munmun; Maity, Kalipada
Powder mixed electro discharge machining (PMEDM) is a further advancement of conventional electro discharge machining (EDM) in which powder particles are suspended in the dielectric medium to enhance the machining rate as well as the surface finish. Cryogenic treatment is introduced in this process to improve tool life and cutting tool properties. In the present investigation, the characterization of the cryotreated tempered electrode was performed. An attempt has been made to study the effect of a cryotreated double tempered electrode on the radial overcut (ROC) when SiC powder is mixed into the kerosene dielectric during electro discharge machining of AISI 304. The process performance has been evaluated by means of ROC, with peak current, pulse on time, gap voltage, duty cycle and powder concentration as process parameters and machining performed using tungsten carbide electrodes (untreated and double tempered). A regression analysis was performed to correlate the response with the process parameters. Microstructural analysis was carried out on the machined surfaces. The least radial overcut was observed for conventional EDM as compared to powder mixed EDM. The cryotreated double tempered electrode reduced the radial overcut significantly compared with the untreated electrode.
Ji, Renjie; Liu, Yonghong; Diao, Ruiqiang; Xu, Chenchen; Li, Xiaopeng; Cai, Baoping; Zhang, Yanzhen
2014-01-01
Engineering ceramics have been widely used in modern industry for their excellent physical and mechanical properties, yet they are difficult to machine owing to their high hardness and brittleness. Electrical discharge machining (EDM) is an appropriate process for machining engineering ceramics provided they are electrically conducting. However, the electrical resistivity of popular engineering ceramics is high, and there has been no research on the relationship between the EDM parameters and the electrical resistivity of engineering ceramics. This paper investigates the effects of the electrical resistivity and EDM parameters, such as tool polarity, pulse interval, and electrode material, on the ZnO/Al2O3 ceramic's EDM performance, in terms of the material removal rate (MRR), electrode wear ratio (EWR), and surface roughness (SR). The results show that the electrical resistivity and the EDM parameters have a great influence on the EDM performance. ZnO/Al2O3 ceramic with electrical resistivity up to 3410 Ω·cm can be effectively machined by EDM with a copper electrode, negative tool polarity, and a shorter pulse interval. Under most machining conditions, the MRR increases and the SR decreases with decreasing electrical resistivity. Moreover, the tool polarity and pulse interval each affect the EWR, and the electrical resistivity and electrode material have a combined effect on the EWR. Furthermore, the EDM performance of ZnO/Al2O3 ceramic with electrical resistivity higher than 687 Ω·cm differs markedly from that with electrical resistivity lower than 687 Ω·cm when the electrode material changes.
The microstructure character analysis of the machined ZnO/Al2O3 ceramic surface shows that the ZnO/Al2O3 ceramic is removed by melting, evaporation and thermal spalling, and the material from the working fluid and the graphite electrode can transfer to the workpiece surface during electrical discharge machining ZnO/Al2O3 ceramic.
Application of Fuzzy TOPSIS for evaluating machining techniques using sustainability metrics
NASA Astrophysics Data System (ADS)
Digalwar, Abhijeet K.
2018-04-01
Sustainable processes and techniques have received increasing attention over the last few decades due to rising concerns over the environment, an improved focus on productivity, and stringency in environmental as well as occupational health and safety norms. The present work analyzes the research on sustainable machining techniques and identifies the techniques and parameters on which the sustainability of a process is evaluated. Based on this analysis, these parameters are then adopted as criteria to evaluate different sustainable machining techniques, such as Cryogenic Machining, Dry Machining, Minimum Quantity Lubrication (MQL) and High Pressure Jet Assisted Machining (HPJAM), using a fuzzy TOPSIS framework. In order to facilitate easy arithmetic, the linguistic variables represented by fuzzy numbers are transformed into crisp numbers based on graded mean representation. Cryogenic machining was found to be the best alternative sustainable technique as per the adopted fuzzy TOPSIS framework. The paper provides a method for dealing with multi-criteria decision making problems in a complex and linguistic environment.
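Once the graded-mean step has turned fuzzy scores into crisp numbers, the remaining TOPSIS computation is mechanical: normalise, weight, and rank alternatives by closeness to the ideal solution. The sketch below shows that crisp stage only, with made-up scores and weights; the actual criteria and ratings come from the paper's expert judgements, not from here.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria, already defuzzified to crisp scores
    M = np.asarray(matrix, float)
    # Vector-normalise each criterion column, then apply the weights
    V = M / np.linalg.norm(M, axis=0) * weights
    # Ideal and anti-ideal points per criterion (flipped for cost criteria)
    best = np.where(benefit, V.max(0), V.min(0))
    worst = np.where(benefit, V.min(0), V.max(0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)  # closeness coefficient, higher is better

# Hypothetical scores for four techniques on three criteria
# (e.g. energy use [cost], tool life [benefit], surface quality [benefit])
scores = [[5, 9, 9],   # cryogenic
          [3, 5, 6],   # dry
          [5, 7, 7],   # MQL
          [6, 6, 8]]   # HPJAM
cc = topsis(scores, weights=np.array([0.4, 0.3, 0.3]),
            benefit=np.array([False, True, True]))
print(cc)  # with these illustrative scores, cryogenic (index 0) ranks first
```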
NASA Astrophysics Data System (ADS)
Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal
2013-07-01
The various process parameters affecting the quality characteristics of the shock absorber were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Though the defects are substantially reduced by the Taguchi method, a genetic algorithm technique is applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.
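For the smaller-the-better responses involved here (defect-related measurements), a Taguchi analysis ranks factor levels by the signal-to-noise ratio SN = -10·log10(mean(y²)). A minimal sketch with invented measurements, not the paper's data:

```python
import numpy as np

def sn_smaller_is_better(y):
    # Taguchi signal-to-noise ratio for "smaller is better" responses:
    # SN = -10 * log10(mean(y^2)); higher SN means a more robust level
    y = np.asarray(y, float)
    return -10 * np.log10(np.mean(y ** 2))

# Hypothetical defect measurements at three levels of one painting factor
levels = {"low": [4.0, 5.0], "mid": [2.0, 2.5], "high": [3.0, 3.5]}
sn = {k: sn_smaller_is_better(v) for k, v in levels.items()}
best = max(sn, key=sn.get)  # the level with the highest S/N ratio wins
print(best)
```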
Li, Jinyan; Fong, Simon; Wong, Raymond K; Millham, Richard; Wong, Kelvin K L
2017-06-28
Due to the high-dimensional characteristics of the datasets, we propose a new method based on the Wolf Search Algorithm (WSA) for optimising the feature selection problem. The proposed approach uses the natural strategy formulated by Charles Darwin: 'It is not the strongest of the species that survives, but the most adaptable.' This means that in the evolution of a swarm, the elitists are motivated to quickly obtain more and better resources. The memory function helps the proposed method avoid repeated searches of the worst positions in order to enhance the effectiveness of the search, while the binary strategy reduces the feature selection problem to an analogous function optimisation problem. Furthermore, the wrapper strategy couples these strengthened wolves with an extreme learning machine classifier to find a sub-dataset with a reasonable number of features that offers the maximum correctness of global classification models. The experimental results from the six public high-dimensional bioinformatics datasets tested demonstrate that the proposed method can outperform some conventional feature selection methods by up to 29% in classification accuracy, and previous WSAs by up to 99.81% in computational time.
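A wrapper feature selector evaluates candidate feature subsets by the accuracy of a classifier trained on them. The sketch below keeps that wrapper structure but swaps both specialised parts for simple stand-ins: greedy forward selection instead of the Wolf Search Algorithm, and a nearest-centroid rule instead of an extreme learning machine; the data are synthetic.

```python
import numpy as np

def centroid_accuracy(X, y, feats):
    # Nearest-centroid training accuracy using only the chosen features
    Xf = X[:, feats]
    c0, c1 = Xf[y == 0].mean(0), Xf[y == 1].mean(0)
    pred = (np.linalg.norm(Xf - c1, axis=1) < np.linalg.norm(Xf - c0, axis=1)).astype(int)
    return (pred == y).mean()

def greedy_wrapper(X, y):
    # Greedy forward selection: a deterministic stand-in for the
    # swarm-based (WSA) subset search used in the paper
    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining:
        scores = [centroid_accuracy(X, y, selected + [f]) for f in remaining]
        i = int(np.argmax(scores))
        if scores[i] <= best:
            break
        best = scores[i]
        selected.append(remaining.pop(i))
    return selected, best

# Synthetic data: features 0 and 1 carry the class signal, the rest are noise
rng = np.random.default_rng(2)
X = rng.normal(0, 1, (40, 10))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :2] += 6.0            # shift the informative features for class 1
feats, acc = greedy_wrapper(X, y)
print(feats, acc)  # expected to pick from the informative features first
```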
Cast iron cutting with nano TiN and multilayer TiN-CrN coated inserts
NASA Astrophysics Data System (ADS)
Perucca, M.; Durante, S.; Semmler, U.; Rüger, C.; Fuentes, G. G.; Almandoz, E.
2012-09-01
During the past decade great success has been achieved in the development of duplex and multilayer multi-functional surface systems. Among these surface systems, nanoscale multilayer coatings have outstanding properties. Within the framework of the M3-2S project, funded under the 7th European Framework Programme, several nanoscale multilayer coatings have been developed and investigated for experimental and industrial validation. This paper shows the performance of TiN and TiN/CrN nanoscale multilayer coatings on WC cutting inserts when machining GJL250 cast iron. The thin films have been deposited by cathodic arc evaporation in an industrial PVD system. The multilayer deposition characteristics and properties are shown. The inserts have been investigated in systematic cutting experiments on cast iron bars using a turning machine specifically equipped for force measurements, accompanied by wear determination. Furthermore, equivalent experiments have been carried out on an industrial turning unit, with industrial validation criteria applied to assess the comparative performance of the coatings. The choice of the material and the machined parts is driven by an interest in automotive applications. The industrial tests show the need to further optimise the multi-scale modelling approach in order to reduce the lead time of coating development as well as to improve simulation reliability.
Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Pascual, Jerónimo; Mora-Martínez, José
2016-07-22
The application of Information Technologies to Precision Agriculture methods has clear benefits. Precision Agriculture optimises production efficiency, increases quality, minimises environmental impact and reduces the use of resources (energy, water); however, several barriers have delayed its wide adoption. Among the main barriers are expensive equipment, difficulty of operation and maintenance, and sensor network standards that are still under development. Nowadays, new technological developments in embedded devices (hardware and communication protocols), the evolution of Internet technologies (Internet of Things) and ubiquitous computing (Ubiquitous Sensor Networks) allow the development of less expensive systems that are easier to control, install and maintain, using standard protocols with low power consumption. This work develops and tests a low-cost sensor/actuator network platform, based on the Internet of Things, integrating machine-to-machine and human-machine-interface protocols. Edge computing uses this multi-protocol approach to develop control processes in Precision Agriculture scenarios. A greenhouse with hydroponic crop production was developed and tested using Ubiquitous Sensor Network monitoring and edge control in the Internet of Things paradigm. The experimental results showed that Internet technologies and Smart Object Communication Patterns can be combined to encourage the development of Precision Agriculture, and demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched.
Scale effects and a method for similarity evaluation in micro electrical discharge machining
NASA Astrophysics Data System (ADS)
Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua
2016-08-01
Electrical discharge machining (EDM) is a promising non-traditional micro machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performance of micro EDM. A new concept of "scale effects" in micro EDM is proposed; these effects reveal the difference in machining performance between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single factor experiments were conducted and the experimental results are analyzed by discussing the similarity difference and similarity precision. The results show that the output results of scale effects in micro EDM do not change linearly with the discharge parameters. The values of similarity precision of machining time significantly increase when scaling down the capacitance or open-circuit voltage. This indicates that the lower the scale of the discharge parameter, the greater the deviation of the non-geometrical similarity degree from the geometrical similarity degree, meaning that a micro EDM system with lower discharge energy experiences stronger scale effects. The largest similarity difference is 5.34, while the largest similarity precision can be as high as 114.03. It is suggested that similarity precision is more effective than similarity difference in reflecting the scale effects and their fluctuation. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. The proposed research offers engineering value for optimizing the machining parameters and improving the machining performance of micro EDM.
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Bashir, Mahmood Al; Dhar, Nikhil Ranjan
2016-07-01
Hard turning is gradually replacing the time-consuming conventional turning process, typically followed by grinding, by producing surface quality comparable to grinding. The hard-turned surface roughness depends on the cutting parameters, machining environment and tool insert configuration. In this article the variation of the surface roughness of the produced surfaces with changes in tool insert configuration, use of coolant and different cutting parameters (cutting speed, feed rate) has been investigated. This investigation was performed in machining AISI 1060 steel, hardened to 56 HRC by heat treatment, using coated carbide inserts under two different machining environments. The depth of cut, fluid pressure and material hardness were kept constant. A Design of Experiments (DOE) was performed to determine the number and combinations of the different cutting parameters. A full factorial analysis was performed to examine the effect of the main factors as well as the interaction effects of factors on surface roughness. A statistical analysis of variance (ANOVA) was employed to determine the combined effect of cutting parameters, environment and tool configuration. The result of this analysis reveals that environment has the most significant impact on surface roughness, followed by feed rate and tool configuration respectively.
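A full factorial design crosses every level of every factor, so with two levels each of speed, feed, tool configuration and environment the run list is just a Cartesian product. A small sketch (the levels shown are illustrative, not the study's actual settings):

```python
from itertools import product

# Full factorial combinations of the kinds of factors varied in the study
# (two levels each; values here are made up for illustration)
speeds = [1000, 1500]        # cutting speed levels
feeds = [0.10, 0.14]         # feed rate, mm/rev
tools = ["coated", "wiper"]  # insert configuration
envs = ["dry", "coolant"]    # machining environment

runs = list(product(speeds, feeds, tools, envs))
print(len(runs))  # 2 * 2 * 2 * 2 = 16 treatment combinations
```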
Experimental Investigation and Optimization of Response Variables in WEDM of Inconel - 718
NASA Astrophysics Data System (ADS)
Karidkar, S. S.; Dabade, U. A.
2016-02-01
Effective utilisation of Wire Electrical Discharge Machining (WEDM) technology is a challenge for modern manufacturing industries. New materials with high strengths and capabilities are continually being developed to fulfil customers' needs. Inconel - 718 is one such material, extensively used in aerospace applications such as gas turbines, rocket motors and spacecraft, as well as in nuclear reactors, pumps etc. This paper deals with the experimental investigation of optimal machining parameters in WEDM for Surface Roughness, Kerf Width and Dimensional Deviation using DoE, specifically the Taguchi methodology with an L9 orthogonal array. Keeping peak current constant at 70 A, the effects of the other process parameters on the above response variables were analysed. The experimental results were statistically analysed using Minitab-16 software. Analysis of Variance (ANOVA) shows pulse on time to be the most influential parameter, followed by wire tension, whereas spark gap set voltage is observed to be non-influencing. The multi-objective optimization technique Grey Relational Analysis (GRA) gives optimal machining parameters of pulse on time 108 machine units, spark gap set voltage 50 V and wire tension 12 g for the response variables considered in the experimental analysis.
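Grey Relational Analysis condenses several responses into a single grade: normalise each response, measure each run's deviation from the ideal sequence, convert deviations to grey relational coefficients, and average. A minimal sketch with invented numbers (the paper's actual responses are surface roughness, kerf width and dimensional deviation):

```python
import numpy as np

def grey_relational_grade(responses, larger_better, zeta=0.5):
    # responses: runs x responses matrix; normalise each column to [0, 1]
    R = np.asarray(responses, float)
    lo, hi = R.min(0), R.max(0)
    norm = np.where(larger_better, (R - lo) / (hi - lo), (hi - R) / (hi - lo))
    delta = 1.0 - norm                       # deviation from the ideal sequence
    # Grey relational coefficient with distinguishing coefficient zeta
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(1)                     # equal-weight grey relational grade

# Illustrative data: three runs, responses = [roughness, kerf, deviation],
# all smaller-is-better here (MRR, if included, would be larger-is-better)
data = [[2.1, 0.30, 0.05],
        [1.8, 0.28, 0.04],
        [2.5, 0.33, 0.06]]
grade = grey_relational_grade(data, larger_better=np.array([False, False, False]))
print(grade.argmax())  # run index with the best overall grade
```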
National Synchrotron Light Source annual report 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulbert, S.L.; Lazarz, N.M.
1992-04-01
This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and the NSLS computer system.
Streese-Kleeberg, Jan; Rachor, Ingke; Gebert, Julia; Stegmann, Rainer
2011-05-01
In order to optimise methane oxidation in landfill cover soils, it is important to be able to accurately quantify the amount of methane oxidised. This research considers the gas push-pull test (GPPT) as a possible method to quantify oxidation rates in situ. During a GPPT, a gas mixture consisting of one or more reactive gases (e.g. CH₄, O₂) and one or more conservative tracers (e.g. argon) is injected into the soil. Following this, the mixture of injected gas and soil air is extracted from the same location and periodically sampled. The kinetic parameters for the biological oxidation taking place in the soil can be derived from the differences in the breakthrough curves. The original method of Urmann et al. (2005) was optimised for application in landfill cover soils and modified to reduce the analytical effort required. Optimised parameters included the flow rate during the injection phase and the duration of the experiment. 50 GPPTs have been conducted at different landfills in Germany during different seasons. Generally, methane oxidation rates ranged between 0 and 150 g m⁻³ (soil air) h⁻¹. At one location, rates up to 440 g m⁻³ (soil air) h⁻¹ were measured under particularly favourable conditions. The method is simple in operation and does not require expensive equipment besides standard laboratory gas chromatographs.
Lean energy analysis of CNC lathe
NASA Astrophysics Data System (ADS)
Liana, N. A.; Amsyar, N.; Hilmy, I.; Yusof, MD
2018-01-01
The industrial sector in Malaysia is one of the main sectors with a high percentage of energy demand compared to other sectors, a problem that may lead to future power shortages and increase companies' production costs. Suitable initiatives, such as improving the machining system, should be implemented by the industrial sector to address these issues. In the past, most energy analyses in industry focused on lighting, HVAC and office usage; the trend now is to include the manufacturing process itself in the energy analysis. A study on lean energy analysis of a machining process is presented. Improving the energy efficiency of a lathe by enhancing the cutting parameters of the turning process is discussed. The energy consumption of a lathe was analysed in order to identify the effect of cutting parameters on energy consumption. It was found that the combination of parameters for the third run (spindle speed: 1065 rpm, depth of cut: 1.5 mm, feed rate: 0.3 mm/rev) was the most preferable, as it consumed the least energy during the turning process.
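The trade-off reported here (faster, deeper cuts shorten cycle time and so reduce the time-dependent share of energy) can be illustrated with a back-of-the-envelope model. All numbers below (specific cutting energy, idle power, workpiece geometry) are hypothetical placeholders, not values from the study:

```python
import math

def turning_energy(spindle_rpm, feed_mm_rev, depth_mm,
                   diameter_mm=50.0, length_mm=100.0,
                   spec_energy_j_mm3=2.5, idle_power_w=300.0):
    """Rough energy estimate (J) for one turning pass: cutting energy
    (specific energy x removed volume) plus idle energy (idle power x time)."""
    revs = length_mm / feed_mm_rev                 # spindle revolutions needed
    time_s = revs / spindle_rpm * 60.0             # pass duration
    volume_mm3 = math.pi * diameter_mm * depth_mm * feed_mm_rev * revs
    return spec_energy_j_mm3 * volume_mm3 + idle_power_w * time_s

# The removed volume is fixed by the geometry, so a faster pass mainly
# saves the time-dependent (idle) share of the energy.
e_run3 = turning_energy(1065, 0.3, 1.5)   # the study's third-run parameters
e_slow = turning_energy(500, 0.1, 1.5)    # a slower, finer pass
```

With the removed volume fixed by geometry, the difference between the two passes comes almost entirely from idle energy, which is one plausible reason a high-speed, high-feed run comes out cheapest.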
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade, sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional systems biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It performs sensitivity analysis of the models under perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting: the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models, including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters.
PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
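For readers unfamiliar with the underlying idea, the core computation such a toolbox automates (the sensitivity of a model output to a parameter perturbation) can be sketched by finite differences on a toy one-parameter ODE. This is only a conceptual illustration, not PeTTSy's algorithm, which uses perturbation theory rather than brute-force differencing:

```python
def simulate(k, x0=1.0, dt=0.01, steps=200):
    """Forward-Euler solution of the toy model dx/dt = -k * x."""
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)
    return x  # state at t = steps * dt

def sensitivity(k, eps=1e-6):
    """Central finite-difference sensitivity d x(T) / d k."""
    return (simulate(k + eps) - simulate(k - eps)) / (2 * eps)

# For dx/dt = -k*x the analytic sensitivity is -T * exp(-k*T);
# at k = 1 and T = 2 that is about -0.2707.
s = sensitivity(1.0)
```

A real toolbox generalises this to many parameters, time-varying perturbations, and outputs such as period and phase, which is where naive differencing becomes too expensive.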
Effect of processing parameters on surface finish for fused deposition machinable wax patterns
NASA Technical Reports Server (NTRS)
Roberts, F. E., III
1995-01-01
This report presents a study on the effect of material processing parameters used in layer-by-layer material construction on the surface finish of a model to be used as an investment casting pattern. The data presented relate specifically to fused deposition modeling using a machinable wax.
ICRP publication 121: radiological protection in paediatric diagnostic and interventional radiology.
Khong, P-L; Ringertz, H; Donoghue, V; Frush, D; Rehani, M; Appelgate, K; Sanchez, R
2013-04-01
Paediatric patients have a higher average risk of developing cancer than adults receiving the same dose. The longer life expectancy of children allows more time for any harmful effects of radiation to manifest, and developing organs and tissues are more sensitive to the effects of radiation. This publication aims to provide guiding principles of radiological protection for referring clinicians and clinical staff performing diagnostic imaging and interventional procedures for paediatric patients. It begins with a brief description of the basic concepts of radiological protection, followed by the general aspects of radiological protection, including principles of justification and optimisation. Guidelines and suggestions for radiological protection in specific modalities - radiography and fluoroscopy, interventional radiology, and computed tomography - are subsequently covered in depth. The report concludes with a summary and recommendations. Rigorous justification is emphasised for every procedure involving ionising radiation, and the use of non-ionising imaging modalities should always be considered. The basic aim of optimisation of radiological protection is to adjust imaging parameters and institute protective measures such that the required image is obtained with the lowest possible dose of radiation while maintaining sufficient quality for diagnostic interpretation, so that net benefit is maximised. Special consideration should be given to the availability of dose reduction measures when purchasing new imaging equipment for paediatric use. One unique aspect of paediatric imaging is the wide range in patient size (and weight), which requires special attention to optimisation and modification of equipment, technique, and imaging parameters.
Examples of good radiographic and fluoroscopic technique include attention to patient positioning, field size and adequate collimation, use of protective shielding, optimisation of exposure factors, use of pulsed fluoroscopy, limiting fluoroscopy time, etc. Major paediatric interventional procedures should be performed by experienced paediatric interventional operators, and a second, specific level of training in radiological protection is desirable (in some countries, this is mandatory). For computed tomography, dose reduction should be optimised by the adjustment of scan parameters (such as mA, kVp, and pitch) according to patient weight or age, region scanned, and study indication (e.g. images with greater noise should be accepted if they are of sufficient diagnostic quality). Other strategies include restricting multiphase examination protocols, avoiding overlapping of scan regions, and only scanning the area in question. Up-to-date dose reduction technology such as tube current modulation, organ-based dose modulation, auto kV technology, and iterative reconstruction should be utilised when appropriate. It is anticipated that this publication will assist institutions in encouraging the standardisation of procedures, and that it may help increase awareness and ultimately improve practices for the benefit of patients. Copyright © 2012. Published by Elsevier Ltd.
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling
Cuperlovic-Culf, Miroslava
2018-01-01
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling.
Cuperlovic-Culf, Miroslava
2018-01-11
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies.
ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining
NASA Astrophysics Data System (ADS)
Chandrasekaran, Muthumari; Tamang, Santosh
2017-08-01
Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons, and surface roughness was the output neuron. An ANN architecture of 3-5-1 is found to be optimum, and the model predicts with an average percentage error of 7.72%. The Particle Swarm Optimization (PSO) technique is used for optimizing parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergent capability with a minimum number of iterations.
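A generic PSO loop of the kind used in such studies can be sketched as follows. The objective below is a stand-in quadratic over (spindle speed, feed rate), not the paper's trained ANN roughness model, and the bounds are illustrative:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.4, c2=1.4, seed=0):
    """Minimal particle swarm optimiser (minimisation) over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in objective: optimum at speed 1000 rpm, feed 0.2 mm/rev.
obj = lambda p: (p[0] - 1000) ** 2 / 1e6 + (p[1] - 0.2) ** 2
best, val = pso(obj, [(200, 1200), (0.05, 0.4)])
```

In the paper's setting the objective would instead be machining time (or ANN-predicted roughness) evaluated at the candidate cutting parameters.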
NASA Astrophysics Data System (ADS)
Bondarenko, J. A.; Fedorenko, M. A.; Pogonin, A. A.
2018-03-01
Large parts can be machined without disassembling the equipment by using extension ("Extra") machines, which poses technological and design challenges that differ from those of processing the same components on a stationary machine. Extension machines are used to restore large parts to a condition that allows their use in a production environment. Restoring a surface by rotary grinding to the desired accuracy and surface roughness parameters greatly increases the complexity of the task. In order to improve production efficiency and the productivity of the process, qualitative rotary processing of the machined surface is applied. The rotary cutting process involves a continuous change of the cutting edge surfaces. The kinematic parameters of rotary cutting define its main features and patterns, as well as the cutting action of the rotary cutting tool.
Machinability of Al 6061 Deposited with Cold Spray Additive Manufacturing
NASA Astrophysics Data System (ADS)
Aldwell, Barry; Kelly, Elaine; Wall, Ronan; Amaldi, Andrea; O'Donnell, Garret E.; Lupoi, Rocco
2017-10-01
Additive manufacturing techniques such as cold spray are translating from research laboratories into more mainstream high-end production systems. As with many additive processes, finishing still depends on removal processes. This research presents the results from investigations into aspects of the machinability of aluminum 6061 tubes manufactured with cold spray. Through the analysis of cutting forces and observations on chip formation and surface morphology, the effect of cutting speed, feed rate, and heat treatment was quantified for both cold-sprayed and bulk aluminum 6061. High-speed video of chip formation shows changes in chip form for varying material and heat treatment, which is supported by the force data and quantitative imaging of the machined surface. The results shown in this paper demonstrate that the parameters involved in cold spray directly affect machinability and therefore have implications for machining parameters and strategy.
Neural networks with fuzzy Petri nets for modeling a machining process
NASA Astrophysics Data System (ADS)
Hanna, Moheb M.
1998-03-01
The paper presents an intelligent architecture based on a feedforward neural network with fuzzy Petri nets for modeling product quality in a CNC machining center. It discusses how the proposed architecture can be used for modeling, monitoring and controlling a product quality specification such as surface roughness. The surface roughness represents the output quality specification of parts manufactured by a CNC machining center as a result of a milling process. The neural network approach employed the selected input parameters defined by the machine operator via the CNC code. The fuzzy Petri nets approach utilized the exact input milling parameters, such as spindle speed, feed rate, tool diameter and coolant (off/on), which can be obtained via the machine or sensor system. The aim of the proposed architecture is to model the demanded quality of surface roughness as high, medium or low.
Optimisation of thulium fibre laser parameters with generation of pulses by pump modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obronov, I V; Larin, S V; Sypin, V E
2015-07-31
The formation of relaxation pulses of a thulium fibre laser (λ = 1.9 μm) by modulating the power of a pump erbium fibre laser (λ = 1.55 μm) is studied. A theoretical model is developed to find the dependences of pulse duration and peak power on different cavity parameters. The optimal cavity parameters for achieving the minimal pulse duration are determined. The results are confirmed by experimental development of a laser emitting pulses with a duration shorter than 10 ns, a peak power of 1.8 kW and a repetition rate of 50 kHz. (control of radiation parameters)
NASA Astrophysics Data System (ADS)
Ferretti, S.; Amadori, K.; Boccalatte, A.; Alessandrini, M.; Freddi, A.; Persiani, F.; Poli, G.
2002-01-01
The UNIBO team, composed of students and professors of the University of Bologna along with technicians and engineers from Alenia Space Division and Siad Italargon Division, took part in the 3rd Student Parabolic Flight Campaign of the European Space Agency in 2000. It won the student competition and went on to take part in the Professional Parabolic Flight Campaign of May 2001. The experiment focused on "dendritic growth in aluminium alloy weldings", and investigated topics related to the welding process of aluminium in microgravity. The purpose of the research is to optimise the process and to define the areas of interest that could be improved by new conceptual designs. The team performed accurate tests in microgravity to determine which phenomena have the greatest impact on the quality of the weldings with respect to penetration, surface roughness and the microstructures that are formed during solidification. Various parameters were considered in the economic-technical optimisation, such as the type of electrode and its tip angle. Ground and space tests have determined the optimum chemical composition of the electrodes to offer the longest life while maintaining the shape of the point. Additionally, the power consumption has been optimised; this offers opportunities for promoting the product to the customer as well as being environmentally friendly. Tests performed on the Al-Li alloys showed a significant influence of physical phenomena such as the Marangoni effect and thermal diffusion; predictions have been made on the basis of observations of the thermal flux seen in the stereophotos. Space transportation today is a key element in the construction of space stations and future planetary bases, because the volumes available for launch to space are directly related to the payload capacity of rockets or the Space Shuttle.
The research performed gives engineers the opportunity to consider completely new concepts for designing structures for space applications. In fact, once the optimised parameters are defined for welding in space, it could be possible to weld different parts directly in orbit to obtain much larger sizes and volumes, for example for space tourism habitation modules. The second relevant aspect is technology transfer obtained by the optimisation of the TIG process on aluminium which is often used in the automotive industry as well as in mass production markets.
Sweetapple, Christine; Fu, Guangtao; Butler, David
2014-05-15
This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. Copyright © 2014 Elsevier Ltd. All rights reserved.
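The Pareto-optimality concept underlying NSGA-II can be illustrated with a short dominance filter (a toy sketch with made-up (emissions, cost) points, not the plant model or NSGA-II itself):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimisation): no worse on every objective
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy (emissions, cost) trade-off points
pts = [(3, 9), (4, 5), (5, 6), (6, 2), (7, 7)]
front = pareto_front(pts)   # -> [(3, 9), (4, 5), (6, 2)]
```

NSGA-II builds on exactly this dominance relation, layering the population into successive fronts and preserving diversity along each; the resulting front is what exposes the emissions-versus-effluent-quality trade-offs described above.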
Optimisation of phase ratio in the triple jump using computer simulation.
Allen, Sam J; King, Mark A; Yeadon, M R Fred
2016-04-01
The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of: 13.79m, 13.87m, 13.95m, 14.05m, and 14.02m; and 14.01m, 14.02m, 13.97m, 13.84m, and 13.67m respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.
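The penalty approach used to force particular phase ratios can be sketched with a toy distance model and random search. The model coefficients below are invented for illustration and only loosely echo the paper's numbers; the real study used a torque-driven simulation model and a genetic algorithm:

```python
import random

def distance(hop, step):
    """Toy jump-distance model (m) peaking near a hop-dominated ratio;
    the quadratic coefficients are invented, not fitted to the study."""
    jump = 1.0 - hop - step
    return (14.05 - 40 * (hop - 0.357) ** 2
                  - 40 * (step - 0.308) ** 2
                  - 40 * (jump - 0.335) ** 2)

def best_ratio(hop_target=None, weight=1e3, n=20000, seed=1):
    """Random search for (hop, step); an optional quadratic penalty
    forces the hop fraction toward hop_target."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(n):
        hop = rng.uniform(0.25, 0.45)
        step = rng.uniform(0.25, 0.40)
        val = distance(hop, step)
        if hop_target is not None:
            val -= weight * (hop - hop_target) ** 2   # penalty term
        if val > best_val:
            best, best_val = (hop, step), val
    return best

free = best_ratio()                    # unconstrained: hop near 0.357
forced = best_ratio(hop_target=0.33)   # penalised: hop pulled toward 0.33
```

As in the study, forcing the phase ratio away from the unconstrained optimum costs a little distance, which is how the plateau around balanced and hop-dominated techniques shows up.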
Baugreet, Sephora; Kerry, Joseph P; Brodkorb, André; Gomez, Carolina; Auty, Mark; Allen, Paul; Hamill, Ruth M
2018-08-01
With the goal of optimising a protein-enriched restructured beef steak targeted at the nutritional and chemosensory requirements of older adults, the technological performance of thirty formulations containing the plant-based ingredients pea protein isolate (PPI), rice protein (RP) and lentil flour (LF), with transglutaminase (TG) to enhance binding of the meat pieces, was analysed. A maximal protein content of 28% in the cooked product was achieved with PPI, RP and LF. Binding strength was primarily affected by TG, while textural parameters were improved by LF inclusion. The optimal formulation (F), yielding a protein-enriched steak with the lowest hardness values, was achieved with TG (2%), PPI (8%), RP (9.35%) and LF (4%). F, F1S (optimal formulation 1 with added seasoning) and control restructured products (containing neither plant proteins nor seasonings) were scored by 120 consumers aged over 65 years. The controls were most preferred (P < .05), while F1S was least liked by the older consumers. Consumer testing suggests that further refinement and optimisation of restructured products with plant proteins should be undertaken. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ali, Sikander; Nawaz, Wajeeha
2017-02-01
The optimisation of nutritional requirements for dopamine (DA) synthesis by a calcium alginate-entrapped mutant variant of Aspergillus oryzae EMS-6 using a submerged fermentation technique was investigated. A total of 13 strains were isolated from soil. Isolate I-2 was selected as the better producer of DA and improved by exposure to ethyl methanesulphonate (EMS). EMS-6 was selected as it exhibited 43 μg/mL DA activity. The mutant variant was further treated with low levels of l-cysteine HCl to make it resistant to reversion and environmental stress. The conidiospores of the mutant variant were entrapped in calcium alginate beads for stable product formation. EMS-6 gave maximum DA activity (124 μg/mL) when supplemented with 0.1% peptone and 0.2% sucrose under optimised parameters, viz. pH 3, a temperature of 55 °C and an incubation time of 70 min. The study achieved a high level of DA activity, which is of interest because DA can help control numerous neurogenic disorders.
On the analysis of using 3-coil wireless power transfer system in retinal prosthesis.
Bai, Shun; Skafidas, Stan
2014-01-01
The design of a wireless power transmission system (WPTS) using inductive coupling has been investigated extensively in the last decade. Depending on the configuration of the coupling system, various design methods have been used to optimise the power transmission efficiency, based on the tuning circuitry, quality factor optimisation and geometrical configuration. Recently, a 3-coil WPTS was introduced in retinal prosthesis to overcome the low power transfer efficiency caused by a low coupling coefficient. Here we present a method to analyse this 3-coil WPTS using the S-parameters to directly obtain the maximum achievable power transfer efficiency. Through electromagnetic simulation, we raise a question concerning the conditions under which a 3-coil WPTS improves the powering of a retinal prosthesis.
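One standard way to obtain a "maximum achievable efficiency" figure from measured two-port S-parameters is the maximum available gain; a sketch follows (the numerical S-parameters are illustrative, not the paper's coil measurements):

```python
import math

def max_available_gain(s11, s12, s21, s22):
    """Maximum available power gain of a 2-port from its S-parameters,
    valid when the Rollett stability factor K > 1. For a reciprocal,
    passive link (s12 == s21) this is bounded by 1 and can be read as
    the maximum achievable power transfer efficiency."""
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) / (2 * abs(s12 * s21))
    if k <= 1:
        raise ValueError("two-port not unconditionally stable; MAG undefined")
    return abs(s21 / s12) * (k - math.sqrt(k * k - 1))

# Illustrative reciprocal coil link (loosely coupled, mismatched ports)
mag = max_available_gain(0.6, 0.3, 0.3, 0.6)
```

Sweeping such a computation over simulated S-parameters is one way to compare 2-coil and 3-coil links without committing to a particular matching network.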
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
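The idea of using a cheap learned model to decide which parameter combinations deserve a real simulation run can be sketched as follows. This is a simplified surrogate-screening loop with a 1-nearest-neighbour predictor and an invented quadratic response surface, not the authors' neural-network active learner:

```python
def expensive_sim(p):
    """Stand-in for one costly simulation run (hypothetical response surface)."""
    return (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2

grid = [(i / 19, j / 19) for i in range(20) for j in range(20)]
target_tol = 0.01   # combos whose output falls below this "match observed data"

# Phase 1: evaluate a coarse training sample (every other level of each factor).
labelled = {(i / 19, j / 19): expensive_sim((i / 19, j / 19))
            for i in range(0, 20, 2) for j in range(0, 20, 2)}

# Phase 2: a 1-nearest-neighbour surrogate predicts outputs for the rest.
def surrogate(p):
    nearest = min(labelled, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
    return labelled[nearest]

rest = [p for p in grid if p not in labelled]
rest.sort(key=surrogate)              # most promising candidates first

# Phase 3: run the real simulator only on the top 40% of ranked candidates.
for p in rest[:int(0.4 * len(rest))]:
    labelled[p] = expensive_sim(p)

found = {p for p, v in labelled.items() if v < target_tol}
true_matches = {p for p in grid if expensive_sim(p) < target_tol}
```

On this toy surface, every matching combination is found while only 220 of the 400 candidates are ever simulated; the paper's active learner goes further by retraining and re-querying after each batch of evaluations.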
Using Active Learning for Speeding up Calibration in Simulation Models
Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2015-01-01
Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190
Surface roughness analysis after laser assisted machining of hard to cut materials
NASA Astrophysics Data System (ADS)
Przestacki, D.; Jankowiak, M.
2014-03-01
Metal matrix composites and Si3N4 ceramics are very attractive materials for various industrial applications due to their extremely high hardness and abrasive wear resistance. However, because of these features they are problematic for the conventional turning process. Machining on a classic lathe still requires special polycrystalline diamond (PCD) or cubic boron nitride (CBN) cutting inserts, which are very expensive. In this paper, an experimental surface roughness analysis of laser assisted machining (LAM) for two types of hard-to-cut materials is presented. In LAM, the surface of the workpiece is heated directly by a laser beam in order to facilitate the decohesion of the material. The surface analysis concentrates on the influence of laser assisted machining on the surface quality of the silicon nitride ceramic Si3N4 and a metal matrix composite (MMC). The effect of laser assisted machining was compared to that of conventional machining. The influence of the machining parameters on surface roughness parameters was also investigated. The 3D surface topographies were measured using an optical surface profiler, and the power spectral density (PSD) of the roughness profiles was analysed.
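A PSD estimate of a roughness profile of the kind mentioned above can be computed directly with an FFT. The sketch below uses a synthetic profile, and the normalisation is one simple convention among several:

```python
import numpy as np

def roughness_psd(profile, dx):
    """Simple one-sided power spectral density estimate of a roughness
    profile. profile: height samples; dx: sampling step along the surface."""
    z = np.asarray(profile, float)
    z = z - z.mean()                      # remove the mean line first
    spectrum = np.fft.rfft(z)
    freqs = np.fft.rfftfreq(len(z), d=dx)
    power = (np.abs(spectrum) ** 2) * dx / len(z)
    return freqs, power

# Synthetic profile: 2 um waviness at 0.1 mm^-1 plus fine 0.2 um roughness.
x_mm = np.arange(0, 100, 0.1)             # 100 mm trace, 0.1 mm sampling
z_um = (2.0 * np.sin(2 * np.pi * 0.1 * x_mm)
        + 0.2 * np.sin(2 * np.pi * 2.0 * x_mm))
freqs, power = roughness_psd(z_um, 0.1)
peak = freqs[np.argmax(power)]            # dominant spatial frequency (1/mm)
```

Separating the spectrum this way is what lets a PSD analysis distinguish long-wavelength waviness (here the 0.1 mm^-1 component) from fine roughness left by the cutting process.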
Tack, Denis; Jahnen, Andreas; Kohler, Sarah; Harpes, Nico; De Maertelaer, Viviane; Back, Carlo; Gevenois, Pierre Alain
2014-01-01
To report short- and long-term effects of an audit process intended to optimise the radiation dose from multidetector row computed tomography (MDCT). A survey of radiation dose from all eight MDCT departments in the state of Luxembourg performed in 2007 served as baseline, and involved the most frequently imaged regions (head, sinus, cervical spine, thorax, abdomen, and lumbar spine). CT dose index volume (CTDIvol), dose-length product per acquisition (DLP/acq), and DLP per examination (DLP/exa) were recorded, and their mean, median, 25th and 75th percentiles compared. In 2008, an audit conducted in each department helped to optimise doses. In 2009 and 2010, two further surveys evaluated the audit's impact on the dose delivered. Between 2007 and 2009, DLP/exa significantly decreased by 32-69 % for all regions (P < 0.001) except the lumbar spine (5 %, P = 0.455). Between 2009 and 2010, DLP/exa significantly decreased by 13-18 % for sinus, cervical and lumbar spine (P ranging from 0.016 to less than 0.001). Between 2007 and 2010, DLP/exa significantly decreased for all regions (18-75 %, P < 0.001). Collective dose decreased by 30 % and the 75th percentile (diagnostic reference level, DRL) by 20-78 %. The audit process resulted in long-lasting dose reduction, with DRLs reduced by 20-78 %, mean DLP/examination by 18-75 %, and collective dose by 30 %. • External support through clinical audit may optimise default parameters of routine CT. • Reduction of 75th percentiles used as reference diagnostic levels is 18-75 %. • The effect of this audit is sustainable over time. • Dose savings through optimisation can be added to those achievable through CT.
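The 75th-percentile diagnostic reference level used throughout this audit is straightforward to compute; the sketch below uses invented DLP values (note that quantile conventions differ slightly between tools):

```python
import statistics

def diagnostic_reference_level(dlp_values):
    """DRL estimated as the 75th percentile of observed DLP-per-examination
    values; statistics.quantiles with n=4 returns [Q1, Q2, Q3]."""
    return statistics.quantiles(dlp_values, n=4)[2]

# Hypothetical DLP/exa survey values (mGy*cm) for one examination type
dlps = [250, 300, 320, 350, 400, 420, 480, 510, 600, 750]
drl = diagnostic_reference_level(dlps)    # third quartile of the survey
```

Re-running this on each year's survey data is essentially how the audit tracked the falling DRLs reported above.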
Cutting Zone Temperature Identification During Machining of Nickel Alloy Inconel 718
NASA Astrophysics Data System (ADS)
Czán, Andrej; Daniš, Igor; Holubják, Jozef; Zaušková, Lucia; Czánová, Tatiana; Mikloš, Matej; Martikáň, Pavol
2017-12-01
The quality of a machined surface is affected by the quality of the cutting process, and many parameters influence the quality of the cutting process. The cutting temperature is one of the most important parameters influencing tool life and the quality of machined surfaces. Its identification and determination is a key objective in specialized machining processes such as dry machining of hard-to-machine materials. It is well known that the maximum temperature is obtained on the tool rake face in the vicinity of the cutting edge. A moderate cutting edge temperature and a low thermal shock reduce the tool wear phenomena, and a low temperature gradient in the machined sublayer reduces the risk of high tensile residual stresses. The thermocouple method was used to measure the temperature directly in the cutting zone. An original thermocouple was specially developed for measuring the temperature in the cutting zone and in the surface and subsurface layers of the machined surface. This paper deals with the identification of temperature and temperature gradients during dry peripheral milling of Inconel 718. The measurements were used to identify the temperature gradients and to reconstruct the thermal distribution in the cutting zone under various cutting conditions.
National Synchrotron Light Source annual report 1991. Volume 1, October 1, 1990--September 30, 1991
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulbert, S.L.; Lazarz, N.M.
1992-04-01
This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and the NSLS computer system.
Development of the FITS tools package for multiple software environments
NASA Technical Reports Server (NTRS)
Pence, W. D.; Blackburn, J. K.
1992-01-01
The HEASARC is developing a package of general purpose software for analyzing data files in FITS format. This paper describes the design philosophy which makes the software both machine-independent (it runs on VAXs, Suns, and DEC-stations) and software environment-independent. Currently the software can be compiled and linked to produce IRAF tasks, or alternatively, the same source code can be used to generate stand-alone tasks using one of two implementations of a user-parameter interface library. The machine independence of the software is achieved by writing the source code in ANSI standard Fortran or C, using the machine-independent FITSIO subroutine interface for all data file I/O, and using a standard user-parameter subroutine interface for all user I/O. The latter interface is based on the Fortran IRAF Parameter File interface developed at STScI. The IRAF tasks are built by linking to the IRAF implementation of this parameter interface library. Two other implementations of this parameter interface library, which have no IRAF dependencies, are now available which can be used to generate stand-alone executable tasks. These stand-alone tasks can simply be executed from the machine operating system prompt either by supplying all the task parameters on the command line or by entering the task name after which the user will be prompted for any required parameters. A first release of this FTOOLS package is now publicly available. The currently available tasks are described, along with instructions on how to obtain a copy of the software.
Initial planetary base construction techniques and machine implementation
NASA Technical Reports Server (NTRS)
Crockford, William W.
1987-01-01
Conceptual designs of (1) initial planetary base structures, and (2) an unmanned machine to perform the construction of these structures using materials local to the planet are presented. Rock melting is suggested as a possible technique to be used by the machine in fabricating roads, platforms, and interlocking bricks. Identification of problem areas in machine design and materials processing is accomplished. The feasibility of the designs is contingent upon favorable results of an analysis of the engineering behavior of the product materials. The analysis requires knowledge of several parameters for solution of the constitutive equations of the theory of elasticity. An initial collection of these parameters is presented which helps to define research needed to perform a realistic feasibility study. A qualitative approach to estimating power and mass lift requirements for the proposed machine is used which employs specifications of currently available equipment. An initial, unmanned mission scenario is discussed with emphasis on identifying uncompleted tasks and suggesting design considerations for vehicles and primitive structures which use the products of the machine processing.
Romero, G; Panzalis, R; Ruegg, P
2017-11-01
The aim of this paper was to study the relationship between milk flow emission variables recorded during milking of dairy goats and variables related to milking routine, goat physiology, milking parameters and milking machine characteristics, in order to determine the variables affecting milking performance and to help the goat industry pinpoint farm and milking practices that improve milking performance. In total, 19 farms were visited once during the evening milking. Milking parameters (vacuum level (VL), pulsation ratio and pulsation rate, vacuum drop), milk emission flow variables (milking time, milk yield, maximum milk flow (MMF), average milk flow (AVMF), time until a 500 g/min milk flow is established (TS500)), doe characteristics of 8 to 10 goats/farm (breed, days in milk and parity), milking practices (overmilking, overstripping, pre-lag time) and milking machine characteristics (line height, presence of claw) were recorded on every farm. The relationships between the recorded variables and farm were analysed by one-way ANOVA. The relationships of milk yield, MMF, milking time and TS500 with goat physiology, milking routine, milking parameters and milking machine design were analysed using a linear mixed model, with farm as the random effect. Farm was significant (P<0.05) for all the studied variables. Milk emission flow variables were similar to those recommended in scientific studies. Milking parameters were adequate on most of the farms, again similar to recommended values. Few milking parameters and milking machine characteristics affected the tested variables: average vacuum level showed only a tendency on MMF, and milk pipeline height on TS500. Milk yield (MY) was mainly affected by parity, as the interaction of days in milk with parity was also significant. Milking time was mainly affected by milk yield and breed.
Also significant were parity, the interaction of days in milk with parity and overstripping, whereas overmilking showed a slight tendency. We concluded that most of the studied variables were mainly related to goat physiology characteristics, as the effects of milking parameters and milking machine characteristics were scarce.
Healy, B J; van der Merwe, D; Christaki, K E; Meghzifene, A
2017-02-01
Medical linear accelerators (linacs) and cobalt-60 machines are both mature technologies for external beam radiotherapy. A comparison is made between these two technologies in terms of infrastructure and maintenance, dosimetry, shielding requirements, staffing, costs, security, patient throughput and clinical use. Infrastructure and maintenance are more demanding for linacs due to the complex electric componentry. In dosimetry, a higher beam energy, modulated dose rate and smaller focal spot size mean that it is easier to create an optimised treatment with a linac for conformal dose coverage of the tumour while sparing healthy organs at risk. In shielding, the requirements for a concrete bunker are similar for cobalt-60 machines and linacs but extra shielding and protection from neutrons are required for linacs. Staffing levels can be higher for linacs and more staff training is required for linacs. Life cycle costs are higher for linacs, especially multi-energy linacs. Security is more complex for cobalt-60 machines because of the high activity radioactive source. Patient throughput can be affected by source decay for cobalt-60 machines but poor maintenance and breakdowns can severely affect patient throughput for linacs. In clinical use, more complex treatment techniques are easier to achieve with linacs, and the availability of electron beams on high-energy linacs can be useful for certain treatments. In summary, there is no simple answer to the question of the choice of either cobalt-60 machines or linacs for radiotherapy in low- and middle-income countries. In fact a radiotherapy department with a combination of technologies, including orthovoltage X-ray units, may be an option. Local needs, conditions and resources will have to be factored into any decision on technology taking into account the characteristics of both forms of teletherapy, with the primary goal being the sustainability of the radiotherapy service over the useful lifetime of the equipment. 
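The throughput point about source decay can be quantified: cobalt-60 activity, and hence dose rate, halves every 5.27 years, so beam-on time per fraction grows correspondingly. A minimal sketch (the function names are ours, not from the paper):

```python
CO60_HALF_LIFE_Y = 5.27  # cobalt-60 half-life in years

def relative_dose_rate(years):
    """Fraction of the initial dose rate remaining after `years` of decay."""
    return 0.5 ** (years / CO60_HALF_LIFE_Y)

def beam_on_factor(years):
    """Factor by which beam-on time per fraction grows as the source decays."""
    return 1.0 / relative_dose_rate(years)
```

After one half-life the beam-on time per fraction doubles, which is the patient-throughput penalty mentioned above.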
Lin, Frank P Y; Pokorny, Adrian; Teng, Christina; Dear, Rachel; Epstein, Richard J
2016-12-01
Multidisciplinary team (MDT) meetings are used to optimise expert decision-making about treatment options, but such expertise is not digitally transferable between centres. To help standardise medical decision-making, we developed a machine learning model designed to predict MDT decisions about adjuvant breast cancer treatments. We analysed MDT decisions regarding adjuvant systemic therapy for 1065 breast cancer cases over eight years. Machine learning classifiers with and without bootstrap aggregation were correlated with MDT decisions (recommended, not recommended, or discussable) regarding adjuvant cytotoxic, endocrine and biologic/targeted therapies, then tested for predictability using stratified ten-fold cross-validations. The predictions so derived were duly compared with those based on published (ESMO and NCCN) cancer guidelines. Machine learning more accurately predicted adjuvant chemotherapy MDT decisions than did simple application of guidelines. No differences were found between MDT- vs. ESMO/NCCN- based decisions to prescribe either adjuvant endocrine (97%, p = 0.44/0.74) or biologic/targeted therapies (98%, p = 0.82/0.59). In contrast, significant discrepancies were evident between MDT- and guideline-based decisions to prescribe chemotherapy (87%, p < 0.01, representing 43% and 53% variations from ESMO/NCCN guidelines, respectively). Using ten-fold cross-validation, the best classifiers achieved areas under the receiver operating characteristic curve (AUC) of 0.940 for chemotherapy (95% C.I., 0.922-0.958), 0.899 for the endocrine therapy (95% C.I., 0.880-0.918), and 0.977 for trastuzumab therapy (95% C.I., 0.955-0.999) respectively. Overall, bootstrap aggregated classifiers performed better among all evaluated machine learning models. A machine learning approach based on clinicopathologic characteristics can predict MDT decisions about adjuvant breast cancer drug therapies. 
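The bootstrap-aggregation idea the study found most effective can be sketched in miniature: resample the training set with replacement, fit a weak learner on each resample, and combine them by majority vote. This toy version uses decision stumps on invented one-dimensional data; it is not the authors' model.

```python
import random

def stump_fit(X, y):
    """Fit the best single-feature threshold classifier (decision stump)."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            for sign in (1, -1):
                pred = [sign if row[j] > t else -sign for row in X]
                acc = sum(p == yi for p, yi in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, j, t, sign)
    return best[1:]  # (feature index, threshold, sign)

def stump_predict(model, row):
    j, t, sign = model
    return sign if row[j] > t else -sign

def bagged_fit(X, y, n_estimators=25, seed=0):
    """Bootstrap aggregation: one stump per bootstrap resample."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        models.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return models

def bagged_predict(models, row):
    """Majority vote over the bagged stumps (labels are +1 / -1)."""
    return 1 if sum(stump_predict(m, row) for m in models) >= 0 else -1
```

Averaging many high-variance learners trained on resamples is what reduced variance for the classifiers reported above.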
The discrepancy between MDT- and guideline-based decisions regarding adjuvant chemotherapy implies that certain non-clinicopathologic criteria, such as patient preference and resource availability, are factored into clinical decision-making by local experts but not captured by guidelines.
Amaral, Jorge L M; Lopes, Agnaldo J; Jansen, José M; Faria, Alvaro C D; Melo, Pedro L
2013-12-01
The purpose of this study was to develop an automatic classifier to increase the accuracy of the forced oscillation technique (FOT) for diagnosing early respiratory abnormalities in smoking patients. The data consisted of FOT parameters obtained from 56 volunteers, 28 healthy and 28 smokers with low tobacco consumption. Many supervised learning techniques were investigated, including logistic linear classifiers, k nearest neighbor (KNN), neural networks and support vector machines (SVM). To evaluate performance, the ROC curve of the most accurate parameter was established as baseline. To determine the best input features and classifier parameters, we used genetic algorithms and a 10-fold cross-validation using the average area under the ROC curve (AUC). In the first experiment, the original FOT parameters were used as input. We observed a significant improvement in accuracy (KNN=0.89 and SVM=0.87) compared with the baseline (0.77). The second experiment performed a feature selection on the original FOT parameters. This selection did not cause any significant improvement in accuracy, but it was useful in identifying more adequate FOT parameters. In the third experiment, we performed a feature selection on the cross products of the FOT parameters. This selection resulted in a further increase in AUC (KNN=SVM=0.91), which allows for high diagnostic accuracy. In conclusion, machine learning classifiers can help identify early smoking-induced respiratory alterations. The use of FOT cross products and the search for the best features and classifier parameters can markedly improve the performance of machine learning classifiers.
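The AUC figures quoted above have a simple probabilistic reading: the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counting one half. A direct sketch of that definition:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve = P(score_pos > score_neg), ties count 1/2."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why 0.77 vs 0.91 is a meaningful gap.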
Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production
NASA Astrophysics Data System (ADS)
Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne
2018-05-01
A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time using this fast simulation method is 10^4 times shorter than that of the full GEANT4 simulation.
Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin
2017-01-01
Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classifying performance were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by key parameters such as kernel type, C, gamma and K. SVM is a promising tool for developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization. PMID:28599282
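The leave-one-out cross validation (LOOCV) strategy used above can be sketched with a toy 1-nearest-neighbour classifier standing in for the IBk/SVM classifiers; the data and its scale are illustrative only.

```python
def one_nn_predict(train_X, train_y, x):
    """Predict with the 1-nearest neighbour by squared Euclidean distance."""
    best_i = min(range(len(train_X)),
                 key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return train_y[best_i]

def loocv_accuracy(X, y):
    """Leave each sample out in turn, train on the rest, score the prediction."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += one_nn_predict(train_X, train_y, X[i]) == y[i]
    return correct / len(X)
```

LOOCV is attractive for small cohorts (here 120 patients) because every sample is used for both training and testing exactly once.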
Grinding, Machining Morphological Studies on C/SiC Composites
NASA Astrophysics Data System (ADS)
Xiao, Chun-fang; Han, Bing
2018-05-01
C/SiC composite is a typical difficult-to-machine material: it is hard and brittle, cutting forces are high, the material removal rate is low, edges are prone to chipping, and tool wear is severe. In this paper, the grinding of C/SiC composite material along the direction of fiber distribution is studied. The surface microstructure and mechanical properties of C/SiC composites processed by ultrasonic-assisted machining were evaluated, and the variation of surface quality with the processing parameters was also studied. A comparison of conventional grinding and ultrasonic-assisted grinding shows that the surface roughness and functional characteristics of the material can be improved by optimizing the processing parameters.
Ji, Renjie; Liu, Yonghong; Diao, Ruiqiang; Xu, Chenchen; Li, Xiaopeng; Cai, Baoping; Zhang, Yanzhen
2014-01-01
Engineering ceramics have been widely used in modern industry for their excellent physical and mechanical properties, yet they are difficult to machine owing to their high hardness and brittleness. Electrical discharge machining (EDM) is an appropriate process for machining engineering ceramics provided they are electrically conducting. However, the electrical resistivity of popular engineering ceramics is comparatively high, and there has been no research on the relationship between the EDM parameters and the electrical resistivity of these ceramics. This paper investigates the effects of the electrical resistivity and EDM parameters such as tool polarity, pulse interval, and electrode material on the EDM performance of ZnO/Al2O3 ceramic, in terms of the material removal rate (MRR), electrode wear ratio (EWR), and surface roughness (SR). The results show that the electrical resistivity and the EDM parameters have a great influence on the EDM performance. ZnO/Al2O3 ceramic with electrical resistivity up to 3410 Ω·cm can be effectively machined by EDM with a copper electrode, negative tool polarity, and a shorter pulse interval. Under most machining conditions, the MRR increases and the SR decreases as the electrical resistivity decreases. Moreover, the tool polarity and pulse interval each affect the EWR, and the electrical resistivity and electrode material have a combined effect on the EWR. Furthermore, the EDM performance of ZnO/Al2O3 ceramic with electrical resistivity higher than 687 Ω·cm differs markedly from that with resistivity lower than 687 Ω·cm when the electrode material changes.
The microstructural analysis of the machined ZnO/Al2O3 ceramic surface shows that the ZnO/Al2O3 ceramic is removed by melting, evaporation and thermal spalling, and that material from the working fluid and the graphite electrode can transfer to the workpiece surface during electrical discharge machining of ZnO/Al2O3 ceramic. PMID:25364912
Machinability of nickel based alloys using electrical discharge machining process
NASA Astrophysics Data System (ADS)
Khan, M. Adam; Gokul, A. K.; Bharani Dharan, M. P.; Jeevakarthikeyan, R. V. S.; Uthayakumar, M.; Thirumalai Kumaran, S.; Duraiselvam, M.
2018-04-01
High temperature materials such as nickel based alloys and austenitic steels are frequently used for manufacturing critical aero engine turbine components. Literature on conventional and unconventional machining of steels has been abundant over the past three decades; however, machining studies on superalloys remain a challenging task owing to their inherent properties, and these materials are difficult to cut by conventional processes. This research therefore focuses on an unconventional machining process for nickel alloys. Inconel 718 and Monel 400 are the two candidate materials used for the electrical discharge machining (EDM) study. The investigation consists of machining a blind hole using a copper electrode of 6 mm diameter. Electrical parameters are varied to produce the plasma spark, and the machining time is held constant so that the experimental results for both materials can be compared. The influence of the process parameters on the tool wear mechanism and material removal is examined through the proposed experimental design. During machining, the production of high energy plasma sparks and eddy current effects caused greater material discharge from the tool. The surface morphology of the machined surfaces was observed with a high resolution FE-SEM; fused electrode material was found as spherical clumps over the machined surface. Surface roughness was also measured from the surface profile using a profilometer. It is confirmed that there is no deviation and that precise roundness of the drilled hole is maintained.
Improving Machining Accuracy of CNC Machines with Innovative Design Methods
NASA Astrophysics Data System (ADS)
Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.
2018-03-01
The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods, and it offers the visual clarity inherent in both topological models and structural matrices, as well as the resilience of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0...6000 min-1 and improve machining accuracy.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-01-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
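A miniature version of the decoding comparison: on a small Ising model we can enumerate all states, take the minimum-energy state as the maximum-likelihood answer, and threshold the Boltzmann-averaged magnetisations as the finite-temperature maximum-entropy answer. The couplings and fields below are invented for illustration.

```python
import itertools
import math

def energy(state, J, h):
    """Ising energy H(s) = -sum_ij J_ij s_i s_j - sum_i h_i s_i."""
    e = -sum(h[i] * s for i, s in enumerate(state))
    for (i, j), Jij in J.items():
        e -= Jij * state[i] * state[j]
    return e

def decode(n, J, h, T):
    """Return (maximum-likelihood bits, maximum-entropy bits at temperature T)."""
    states = list(itertools.product((-1, 1), repeat=n))
    weights = [math.exp(-energy(s, J, h) / T) for s in states]
    Z = sum(weights)
    # Boltzmann-averaged magnetisation of each spin
    magnet = [sum(w * s[i] for s, w in zip(states, weights)) / Z
              for i in range(n)]
    ml = min(states, key=lambda s: energy(s, J, h))      # ground state
    maxent = tuple(1 if m >= 0 else -1 for m in magnet)  # thresholded averages
    return ml, maxent
```

On noisy instances the thresholded Boltzmann averages can flip bits that the ground state gets wrong, which is the effect the abstract reports experimentally.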
ProQ3D: improved model quality assessments using deep learning.
Uziela, Karolis; Menéndez Hurtado, David; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne
2017-05-15
Protein quality assessment is a long-standing problem in bioinformatics. For more than a decade we have developed state-of-the-art predictors by carefully selecting and optimising inputs to a machine learning method. The correlation has increased from 0.60 in ProQ to 0.81 in ProQ2 and 0.85 in ProQ3, mainly by adding a large set of carefully tuned descriptions of a protein. Here, we show that a substantial improvement can be obtained using exactly the same inputs as in ProQ2 or ProQ3 but replacing the support vector machine by a deep neural network. This improves the Pearson correlation to 0.90 (0.85 using ProQ2 input features). ProQ3D is freely available both as a webserver and a stand-alone program at http://proq3.bioinfo.se/. arne@bioinfo.se. Supplementary data are available at Bioinformatics online.
A new range-free localisation in wireless sensor networks using support vector machine
NASA Astrophysics Data System (ADS)
Wang, Zengfeng; Zhang, Hao; Lu, Tingting; Sun, Yujuan; Liu, Xing
2018-02-01
Location information of sensor nodes is of vital importance for most applications in wireless sensor networks (WSNs). This paper proposes a new range-free localisation algorithm using support vector machine (SVM) and polar coordinate system (PCS), LSVM-PCS. In LSVM-PCS, two sets of classes are first constructed based on sensor nodes' polar coordinates. Using the boundaries of the defined classes, the operation region of WSN field is partitioned into a finite number of polar grids. Each sensor node can be localised into one of the polar grids by executing two localisation algorithms that are developed on the basis of SVM classification. The centre of the resident polar grid is then estimated as the location of the sensor node. In addition, a two-hop mass-spring optimisation (THMSO) is also proposed to further improve the localisation accuracy of LSVM-PCS. In THMSO, both neighbourhood information and non-neighbourhood information are used to refine the sensor node location. The results obtained verify that the proposed algorithm provides a significant improvement over existing localisation methods.
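The final localisation step described above, estimating a node's position as the centre of its predicted polar grid cell, reduces to simple geometry once the SVM has output ring and sector indices. A sketch under an assumed uniform ring spacing (the actual grid construction in LSVM-PCS may differ):

```python
import math

def polar_grid_centre(ring, sector, n_rings, n_sectors, radius):
    """Centre (x, y) of a polar grid cell; ring/sector are 0-based indices.

    Assumes n_rings equal-width rings over [0, radius] and n_sectors
    equal angular sectors over [0, 2*pi).
    """
    dr = radius / n_rings
    dtheta = 2 * math.pi / n_sectors
    r = (ring + 0.5) * dr          # radial midpoint of the ring
    theta = (sector + 0.5) * dtheta  # angular midpoint of the sector
    return r * math.cos(theta), r * math.sin(theta)
```

The localisation error of this step is bounded by the half-diagonal of the cell, so finer grids (more classes for the SVM) trade classification difficulty for geometric precision.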
NASA Astrophysics Data System (ADS)
Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj
2015-12-01
Electric discharge drill machine (EDDM) is a spark erosion process used to produce micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental and automobile industries. To evaluate the performance of the electric discharge drill machine, it is necessary to study the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generate output responses such as tool wear rate (TWR). Parameters such as pulse on-time, pulse off-time and water pressure were studied for the best machining characteristics. This investigation presents the use of the Taguchi approach for better TWR in drilling of Al-7075. A plan of experiments based on the L27 Taguchi design method was selected for drilling of the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al-7075 on the EDDM. The optimal combination levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.
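In a Taguchi analysis like the one above, responses such as TWR are usually compared through the smaller-the-better signal-to-noise ratio, and the factor level with the highest mean S/N is taken as optimal. A minimal sketch with illustrative values (not the paper's data):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi S/N ratio (dB) for a smaller-the-better response such as TWR."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

def best_level(levels, responses):
    """Pick the factor level whose replicated responses give the highest S/N."""
    sn = {lv: sn_smaller_is_better(rs) for lv, rs in zip(levels, responses)}
    return max(sn, key=sn.get)
```

Because the S/N ratio is logarithmic, large responses are penalised heavily, which is why minimising TWR corresponds to maximising S/N.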
Evaluating the electrical discharge machining (EDM) parameters with using carbon nanotubes
NASA Astrophysics Data System (ADS)
Sari, M. M.; Noordin, M. Y.; Brusa, E.
2012-09-01
Electrical discharge machining (EDM) is one of the most accurate non-traditional manufacturing processes available for creating tiny apertures and complex or simple shapes and geometries within parts and assemblies. Performance of the EDM process is usually evaluated in terms of surface roughness and the existence of cracks, voids and a recast layer on the surface of the product after machining. Unfortunately, the high heat generated on the electrically discharged material during the EDM process decreases the quality of products. Carbon nanotubes display unexpected strength and unique electrical and thermal properties. Multi-wall carbon nanotubes are therefore deliberately added to the dielectric used in the EDM process to improve its performance when machining AISI H13 tool steel by means of copper electrodes. EDM responses such as material removal rate, electrode wear rate, surface roughness and recast layer are first evaluated, then compared to the outcome of EDM performed without nanotubes mixed into the dielectric. The independent variables investigated are pulse on time, peak current and interval time. Experimental evidence shows that the EDM process operated by mixing multi-wall carbon nanotubes into the dielectric is more efficient, particularly when the machining parameters are set at low pulse energy.
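Response measures such as material removal rate and electrode wear rate are typically computed from weight loss; the sketch below uses invented numbers, with a density of roughly 0.0078 g/mm^3 for steel. Function names are ours, not from the paper.

```python
def removal_rate(mass_before_g, mass_after_g, density_g_mm3, time_min):
    """Volumetric removal rate (mm^3/min) from weight loss during machining."""
    return (mass_before_g - mass_after_g) / density_g_mm3 / time_min

def electrode_wear_ratio(electrode_mrr, workpiece_mrr):
    """EWR (%): electrode wear volume relative to workpiece removal volume."""
    return 100.0 * electrode_mrr / workpiece_mrr
```

Comparing these two rates is what allows EDM runs with and without nanotube-laden dielectric to be ranked for efficiency.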
Controlling interferometric properties of nanoporous anodic aluminium oxide
2012-01-01
A study of reflective interference spectroscopy [RIfS] properties of nanoporous anodic aluminium oxide [AAO] with the aim to develop a reliable substrate for label-free optical biosensing is presented. The influence of structural parameters of AAO including pore diameters, inter-pore distance, pore length, and surface modification by deposition of Au, Ag, Cr, Pt, Ni, and TiO2 on the RIfS signal (Fabry-Perot fringe) was explored. AAO with controlled pore dimensions was prepared by electrochemical anodization of aluminium using 0.3 M oxalic acid at different voltages (30 to 70 V) and anodization times (10 to 60 min). Results show the strong influence of pore structures and surface modifications on the interference signal and indicate the importance of optimisation of AAO pore structures for RIfS sensing. The pore length/pore diameter aspect ratio of AAO was identified as a suitable parameter to tune interferometric properties of AAO. Finally, the application of AAO with optimised pore structures for sensing of a surface binding reaction of alkanethiols (mercaptoundecanoic acid) on gold surface is demonstrated. PMID:22280884
Optimising rigid motion compensation for small animal brain PET imaging
NASA Astrophysics Data System (ADS)
Spangler-Bickell, Matthew G.; Zhou, Lin; Kyme, Andre Z.; De Laat, Bart; Fulton, Roger R.; Nuyts, Johan
2016-10-01
Motion compensation (MC) in PET brain imaging of awake small animals is attracting increased attention in preclinical studies since it avoids the confounding effects of anaesthesia and enables behavioural tests during the scan. A popular MC technique is to use multiple external cameras to track the motion of the animal’s head, which is assumed to be represented by the motion of a marker attached to its forehead. In this study we have explored several methods to improve the experimental setup and the reconstruction procedures of this method: optimising the camera-marker separation; improving the temporal synchronisation between the motion tracker measurements and the list-mode stream; post-acquisition smoothing and interpolation of the motion data; and list-mode reconstruction with appropriately selected subsets. These techniques have been tested and verified on measurements of a moving resolution phantom and brain scans of an awake rat. The proposed techniques improved the reconstructed spatial resolution of the phantom by 27% and of the rat brain by 14%. We suggest a set of optimal parameter values to use for awake animal PET studies and discuss the relative significance of each parameter choice.
Efficient photoassociation of ultracold cesium atoms with picosecond pulse laser
NASA Astrophysics Data System (ADS)
Hai, Yang; Hu, Xue-Jin; Li, Jing-Lun; Cong, Shu-Lin
2017-08-01
We investigate theoretically the formation of ultracold Cs2 molecules via photoassociation (PA) with three kinds of pulses: the Gaussian pulse, the asymmetric shaped laser pulse SL1 with a large rising time and a small falling time, and the asymmetric shaped laser pulse SL2 with a small rising time and a large falling time. For the three kinds of pulses, the final population on vibrational levels from v′ = 120 to 175 of the excited state displays a regular oscillation with pulse width and interaction strength, and a high PA efficiency can be achieved with optimised parameters. The PA efficiency in the excited state steered by an optimised SL1-pulse (SL2-pulse) train composed of four SL1 (SL2) pulses is 1.74 times that of the single SL1 (SL2) pulse, due to the population accumulation effect. Moreover, a dump laser is employed to transfer the excited molecules from the excited state to the vibrational level v″ = 12 of the ground state to obtain stable molecules.
Optimisation of wavelength modulated Raman spectroscopy: towards high throughput cell screening.
Praveen, Bavishna B; Mazilu, Michael; Marchington, Robert F; Herrington, C Simon; Riches, Andrew; Dholakia, Kishan
2013-01-01
In the field of biomedicine, Raman spectroscopy is a powerful technique for discriminating between normal and cancerous cells. However, the strong background signal from the sample and the instrumentation reduces the efficiency of this discrimination technique. Wavelength modulated Raman spectroscopy (WMRS) can suppress the background in Raman spectra. In this study we demonstrate a systematic approach to optimizing the various parameters of WMRS to reduce the acquisition time for potential applications such as higher-throughput cell screening. The signal-to-noise ratio (SNR) of the Raman bands depends on the modulation amplitude, time constant and total acquisition time. It was observed that the sampling rate does not influence the SNR of the Raman bands if three or more wavelengths are sampled. With these optimised WMRS parameters, we increased the throughput in the binary classification of normal human urothelial cells and bladder cancer cells by reducing the total acquisition time to 6 s, significantly lower than the acquisition times previously required for discrimination between similar cell types.
Silva, Fabrício R; Vidotti, Vanessa G; Cremasco, Fernanda; Dias, Marcelo; Gomi, Edson S; Costa, Vital P
2013-01-01
To evaluate the sensitivity and specificity of machine learning classifiers (MLCs) for glaucoma diagnosis using spectral domain OCT (SD-OCT) and standard automated perimetry (SAP). Observational cross-sectional study. Sixty-two glaucoma patients and 48 healthy individuals were included. All patients underwent a complete ophthalmologic examination, achromatic standard automated perimetry (SAP) and retinal nerve fiber layer (RNFL) imaging with SD-OCT (Cirrus HD-OCT; Carl Zeiss Meditec Inc., Dublin, California). Receiver operating characteristic (ROC) curves were obtained for all SD-OCT parameters and global indices of SAP. Subsequently, the following MLCs were tested using parameters from SD-OCT and SAP: Bagging (BAG), Naive Bayes (NB), Multilayer Perceptron (MLP), Radial Basis Function (RBF), Random Forest (RAN), Ensemble Selection (ENS), Classification Tree (CTREE), AdaBoost M1 (ADA), Support Vector Machine Linear (SVML) and Support Vector Machine Gaussian (SVMG). Areas under the receiver operating characteristic curves (aROC) obtained for isolated SAP and OCT parameters were compared with MLCs using OCT+SAP data. Combining OCT and SAP data, the MLCs' aROCs varied from 0.777 (CTREE) to 0.946 (RAN). The best OCT+SAP aROC, obtained with RAN (0.946), was significantly larger than that of the best single OCT parameter (p<0.05), but was not significantly different from the aROC obtained with the best single SAP parameter (p=0.19). Machine learning classifiers trained on OCT and SAP data can successfully discriminate between healthy and glaucomatous eyes. The combination of OCT and SAP measurements improved the diagnostic accuracy compared with OCT data alone.
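The aROC comparison reported above rests on computing areas under ROC curves; a minimal sketch using the rank-statistic identity, with hypothetical labels and classifier scores rather than the study's data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical glaucoma (1) vs healthy (0) test labels and two
# score vectors (e.g. an ensemble classifier vs a single OCT index)
y      = [1, 1, 1, 0, 0, 0]
rf     = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # well separated
single = [0.7, 0.4, 0.6, 0.5, 0.3, 0.8]   # weaker separation

assert auc(y, rf) > auc(y, single)
```

In practice a paired test (e.g. DeLong's) is needed to decide whether two aROCs differ significantly, as done in the study.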
Intensity limits of the PSI Injector II cyclotron
NASA Astrophysics Data System (ADS)
Kolano, A.; Adelmann, A.; Barlow, R.; Baumgarten, C.
2018-03-01
We investigate limits on the current of the PSI Injector II high intensity separate-sector isochronous cyclotron, in its present configuration and after a proposed upgrade. Accelerator Driven Subcritical Reactors, neutron and neutrino experiments, and medical isotope production all benefit from increases in current, even at the ∼ 10% level: the PSI cyclotrons provide relevant experience. As space charge dominates at low beam energy, the injector is critical. Understanding space charge effects and halo formation through detailed numerical modelling gives clues on how to maximise the extracted current. Simulation of a space-charge dominated low energy high intensity (9.5 mA DC) machine, with a complex collimator set up in the central region shaping the bunch, is not trivial. We use the OPAL code, a tool for charged-particle optics calculations in large accelerator structures and beam lines, including 3D space charge. We have a precise model of the present (production) Injector II, operating at 2.2 mA current. A simple model of the proposed future (upgraded) configuration of the cyclotron is also investigated. We estimate intensity limits based on the developed models, supported by fitted scaling laws and measurements. We have been able to perform more detailed analysis of the bunch parameters and halo development than any previous study. Optimisation techniques enable better matching of the simulation set-up with Injector II parameters and measurements. We show that in the production configuration the beam current scales to the power of three with the beam size. However, at higher intensities, 4th power scaling is a better fit, setting the limit of approximately 3 mA. Currents of over 5 mA, higher than have been achieved to date, can be produced if the collimation scheme is adjusted.
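The power-law scaling of beam current with beam size reported above (third power in the production configuration, fourth power at higher intensities) can be recovered from data by a log-log least-squares fit; a sketch with synthetic data, not the OPAL simulation results:

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**n in log-log space,
    returning the exponent n and prefactor a."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    n = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / \
        sum((a - mx) ** 2 for a in lx)
    a = math.exp(my - n * mx)
    return n, a

# Synthetic current-vs-beam-size data following an exact cubic law
sizes = [1.0, 1.2, 1.5, 2.0]
currents = [2.2 * s ** 3 for s in sizes]

n, a = fit_power_law(sizes, currents)
assert abs(n - 3.0) < 1e-9 and abs(a - 2.2) < 1e-9
```

Fitting the same model to higher-intensity data and comparing residuals for n = 3 versus n = 4 is the kind of check that distinguishes the two scaling regimes.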
A fast and efficient segmentation scheme for cell microscopic image.
Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H
2007-04-27
Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial since it accounts for most of the processing time needed to segment an image. The main contribution of this work concerns reducing the complexity of the decision functions produced by support vector machines (SVMs) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVMs is possible. Moreover, posterior class pixel probabilities are easy to estimate with Platt's method. A new segmentation scheme using probabilistic pixel classification has therefore been developed. This scheme has several free parameters whose selection must be automated, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be performed. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
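Vector quantization as used above to compress a redundant pixel database can be sketched with a plain k-means codebook; the pixel values are hypothetical and the clustering is a generic stand-in, not the authors' exact quantizer:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: returns k codebook vectors that summarise the
    point cloud, reducing redundancy before SVM training."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[i] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return centers

# Hypothetical RGB pixel samples from two expert-labelled regions
pixels = [(10, 10, 10), (12, 11, 9), (11, 9, 12),
          (200, 198, 202), (199, 201, 200), (202, 200, 199)]
codebook = kmeans(pixels, 2)
assert len(codebook) == 2  # 6 training pixels reduced to 2 prototypes
```

Training the SVM on the codebook vectors instead of every expert-labelled pixel is what shrinks the decision function and the segmentation time.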
Effect of cutting parameters on strain hardening of nickel–titanium shape memory alloy
NASA Astrophysics Data System (ADS)
Wang, Guijie; Liu, Zhanqiang; Ai, Xing; Huang, Weimin; Niu, Jintao
2018-07-01
Nickel–titanium shape memory alloy (SMA) has been widely used as an implant material due to its good biocompatibility, shape memory property and super-elasticity. However, severe strain hardening caused by the cutting forces and temperatures of machining is a major challenge. An orthogonal experiment on nickel–titanium SMA under different milling parameter conditions was conducted in this paper. On the one hand, the effect of cutting parameters on work hardening was obtained. The cutting speed was found to have the most important effect on work hardening. The depth of the machining-induced layer and the degree of hardening become smaller as the cutting speed increases up to 200 m min−1 and then grow larger with further increases in cutting speed. The relative intensity of the diffraction peak increases as the cutting speed increases. In addition, the depth of the machining-induced layer, the degree of hardening and the relative intensity of the diffraction peak all increase when the feed rate increases. On the other hand, the depth of the machining-induced layer was found to be closely related to the degree of hardening and the phase transition. The higher the content of austenite in the machined surface, the higher the degree of hardening. The depth of the machining-induced layer increases as the degree of hardening increases.
Korvigo, Ilia; Afanasyev, Andrey; Romashchenko, Nikolay; Skoblov, Mikhail
2018-01-01
Many automatic classifiers were introduced to aid inference of phenotypical effects of uncategorised nsSNVs (nonsynonymous Single Nucleotide Variations) in theoretical and medical applications. Lately, several meta-estimators have been proposed that combine different predictors, such as PolyPhen and SIFT, to integrate more information in a single score. Although many advances have been made in feature design and machine learning algorithms used, the shortage of high-quality reference data along with the bias towards intensively studied in vitro models call for improved generalisation ability in order to further increase classification accuracy and handle records with insufficient data. Since a meta-estimator basically combines different scoring systems with highly complicated nonlinear relationships, we investigated how deep learning (supervised and unsupervised), which is particularly efficient at discovering hierarchies of features, can improve classification performance. While it is believed that one should only use deep learning for high-dimensional input spaces and other models (logistic regression, support vector machines, Bayesian classifiers, etc.) for simpler inputs, we still believe that the ability of neural networks to discover intricate structure in highly heterogeneous datasets can aid a meta-estimator. We compare the performance with various popular predictors, many of which are recommended by the American College of Medical Genetics and Genomics (ACMG), as well as available deep learning-based predictors. Thanks to hardware acceleration we were able to use a computationally expensive genetic algorithm to stochastically optimise hyper-parameters over many generations. Overfitting was hindered by noise injection and dropout, limiting coadaptation of hidden units.
Although we stress that this work was not conceived as a tool comparison, but rather an exploration of the possibilities of deep learning application in ensemble scores, our results show that even relatively simple modern neural networks can significantly improve both prediction accuracy and coverage. We provide open-access to our finest model via the web-site: http://score.generesearch.ru/services/badmut/.
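A genetic algorithm for stochastic hyper-parameter optimisation, as mentioned above, can be sketched as follows; the fitness function is a toy stand-in for validation accuracy, and the operators (truncation selection, blend crossover, Gaussian mutation) are generic choices, not the authors' exact configuration:

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Minimal genetic algorithm over real-valued hyper-parameters:
    truncation selection, blend crossover and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.05) for x, y in zip(a, b)]
            child = [min(max(v, lo), hi) for v, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for validation accuracy as a function of
# (log10 learning rate, dropout probability); peak at (-3, 0.5)
def pseudo_accuracy(h):
    lr, drop = h
    return 1.0 - (lr + 3.0) ** 2 - (drop - 0.5) ** 2

best = evolve(pseudo_accuracy, bounds=[(-5.0, -1.0), (0.0, 0.9)])
assert abs(best[0] + 3.0) < 0.3 and abs(best[1] - 0.5) < 0.3
```

In the real setting the fitness call is a full train-and-validate cycle, which is why hardware acceleration matters for running many generations.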
Wei, Kang-Lin; Wen, Zhi-Yu; Guo, Jian; Chen, Song-Bo
2012-07-01
Aiming at the monitoring and protection of the water resource environment, a multi-parameter water quality monitoring microsystem based on a microspectrometer is put forward in the present paper. The microsystem is mainly composed of a MOEMS microspectrometer, a flow path system and an embedded measuring and controlling system. It has the functions of self-injection of samples and detection reagents, automatic constant temperature, self-stirring, self-cleaning and spectral detection of samples. A principle prototype of the microsystem was developed, and its structure and principle are introduced in the paper. Experimental research proved that the principle prototype can rapidly detect quite a few water quality parameters and can meet the demands of on-line water quality monitoring; moreover, it has strong functional expansibility.
Numerical Simulation of Earth Pressure on Head Chamber of Shield Machine with FEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Shouju; Kang Chengang; Sun, Wei
2010-05-21
Model parameters of conditioned soils in the head chamber of a shield machine are determined based on tri-axial compression tests in the laboratory. The loads acting on the tunneling face are estimated according to the static earth pressure principle. Based on the Duncan-Chang nonlinear elastic constitutive model, the earth pressures on the head chamber of the shield machine are simulated for different aperture ratios of the rotating cutterhead. A relationship between the pressure transportation factor and the aperture ratio of the shield machine is proposed by regression analysis.
Machine Learning and Inverse Problem in Geodynamics
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.
2017-12-01
During the past few decades numerical modeling and traditional HPC have been widely deployed in many diverse fields for problem solutions. However, in recent years the rapid emergence of machine learning (ML), a subfield of the artificial intelligence (AI), in many fields of sciences, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on the study of high pressure mineral physics, geochemistry, and petrology where the number of the mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of these parameters that are incorporated in the numerical models as input parameters are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties with the emphasis on SML techniques in solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of the stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by the numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes which can be considered as measures for the degree of stagnation are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. 
Employing support vector machine (SVM) algorithms we show that SML techniques can successfully predict the magnitude of the mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex problems in mantle dynamics by employing deep learning algorithms for estimation of mantle properties such as viscosity, elastic parameters, and thermal and chemical anomalies.
Machine characterization based on an abstract high-level language machine
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.; Smith, Alan Jay; Miya, Eugene
1989-01-01
Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.
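One simple realisation of a pershape-style similarity metric, comparing machines by the shape of their performance distributions rather than their absolute speed, could look like this; the normalisation and distance chosen here are illustrative assumptions, not the paper's exact definition:

```python
import math

def pershape_distance(perf_a, perf_b):
    """Compare two machines' performance distributions: normalise each
    benchmark-time vector to unit length, then take the Euclidean
    distance between the resulting 'shapes' (0 = identical shape,
    i.e. one machine is a uniformly scaled copy of the other)."""
    na = math.sqrt(sum(v * v for v in perf_a))
    nb = math.sqrt(sum(v * v for v in perf_b))
    return math.sqrt(sum((a / na - b / nb) ** 2
                         for a, b in zip(perf_a, perf_b)))

# Hypothetical run times on three benchmark kernels
workstation = [10.0, 40.0, 20.0]
super_fast  = [1.0, 4.0, 2.0]     # 10x faster but same relative profile
vector_box  = [10.0, 5.0, 30.0]   # different performance distribution

assert pershape_distance(workstation, super_fast) < 1e-12
assert pershape_distance(workstation, vector_box) > 0.1
```

A small distance means the two machines' relative performance levels barely vary across benchmarks, which is the property the metric is designed to quantify.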
Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.
Lima, Clodoaldo A M; Coelho, André L V
2011-10-01
We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitiveness to the type of feature and to the kernel function/parameter value. 
Overall, the results evidence that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, albeit the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
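The wavelet-derived features referred to above can be illustrated with relative sub-band energies from a Haar transform; this is a generic sketch (a plain DWT rather than the exact wavelet-packet features and the four bases used in the study):

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficient lists."""
    s2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Relative energy per sub-band, a common EEG feature vector."""
    energies = []
    current = signal
    for _ in range(levels):
        current, detail = haar_dwt(current)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in current))
    total = sum(energies) or 1.0
    return [e / total for e in energies]

# A slow oscillation concentrates energy in the coarse bands;
# an alternating signal concentrates it in the finest detail band
slow = [math.sin(2 * math.pi * i / 64) for i in range(64)]
spiky = [(-1) ** i for i in range(64)]

assert wavelet_energy_features(spiky)[0] > 0.99   # finest band dominates
assert wavelet_energy_features(slow)[0] < 0.1
```

Feature vectors like these, possibly combined with Lyapunov exponents, are what the kernel machines above consume as input.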
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belley, M; Schmidt, M; Knutson, N
Purpose: Physics second-checks for external beam radiation therapy are performed, in-part, to verify that the machine parameters in the Record-and-Verify (R&V) system that will ultimately be sent to the LINAC exactly match the values initially calculated by the Treatment Planning System (TPS). While performing the second-check, a large portion of the physicists' time is spent navigating and arranging display windows to locate and compare the relevant numerical values (MLC position, collimator rotation, field size, MU, etc.). Here, we describe the development of a software tool that guides the physicist by aggregating and succinctly displaying machine parameter data relevant to the physics second-check process. Methods: A data retrieval software tool was developed using Python to aggregate data and generate a list of machine parameters that are commonly verified during the physics second-check process. This software tool imported values from (i) the TPS RT Plan DICOM file and (ii) the MOSAIQ (R&V) Structured Query Language (SQL) database. The machine parameters aggregated for this study included: MLC positions, X&Y jaw positions, collimator rotation, gantry rotation, MU, dose rate, wedges and accessories, cumulative dose, energy, machine name, couch angle, and more. Results: A GUI interface was developed to generate a side-by-side display of the aggregated machine parameter values for each field, and presented to the physicist for direct visual comparison. This software tool was tested for 3D conformal, static IMRT, sliding window IMRT, and VMAT treatment plans. Conclusion: This software tool facilitated the data collection process needed in order for the physicist to conduct a second-check, thus yielding an optimized second-check workflow that was both more user friendly and time-efficient.
Utilizing this software tool, the physicist was able to spend less time searching through the TPS PDF plan document and the R&V system and focus the second-check efforts on assessing the patient-specific plan quality.
SU-E-T-473: A Patient-Specific QC Paradigm Based On Trajectory Log Files and DICOM Plan Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeMarco, J; McCloskey, S; Low, D
Purpose: To evaluate a remote QC tool for monitoring treatment machine parameters and treatment workflow. Methods: The Varian TrueBeam™ linear accelerator is a digital machine that records machine axis parameters and MLC leaf positions as a function of delivered monitor unit or control point. This information is saved to a binary trajectory log file for every treatment or imaging field in the patient treatment session. A MATLAB analysis routine was developed to parse the trajectory log files for a given patient, compare the expected versus actual machine and MLC positions, and perform a cross-comparison with the DICOM-RT plan file exported from the treatment planning system. The parsing routine sorts the trajectory log files based on the time and date stamp and generates a sequential report file listing treatment parameters and provides a match relative to the DICOM-RT plan file. Results: The trajectory log parsing routine was compared against a standard record and verify listing for patients undergoing initial IMRT dosimetry verification and weekly and final chart QC. The complete treatment course was independently verified for 10 patients of varying treatment site, and a total of 1267 treatment fields were evaluated, including pre-treatment imaging fields where applicable. In the context of IMRT plan verification, eight prostate SBRT plans with 4 arcs per plan were evaluated based on expected versus actual machine axis parameters. The average value for the maximum RMS MLC error was 0.067 ± 0.001 mm and 0.066 ± 0.002 mm for leaf banks A and B respectively. Conclusion: A real-time QC analysis program was tested using trajectory log files and DICOM-RT plan files. The parsing routine is efficient and able to evaluate all relevant machine axis parameters during a patient treatment course, including MLC leaf positions and table positions at the time of image acquisition and during treatment.
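The expected-versus-actual comparison described above can be sketched as follows; the parameter names, tolerance values and field data are hypothetical, not Varian's trajectory-log format:

```python
import math

def rms_error(expected, actual):
    """Root-mean-square deviation between expected (plan) and
    actual (trajectory log) positions, e.g. MLC leaves in mm."""
    return math.sqrt(sum((e - a) ** 2 for e, a in zip(expected, actual))
                     / len(expected))

def check_field(plan, log, tolerances):
    """Compare scalar machine parameters field-by-field and flag
    any that exceed tolerance (all names/values hypothetical)."""
    return {k: abs(plan[k] - log[k]) <= tolerances[k] for k in tolerances}

plan = {"gantry": 180.0, "collimator": 45.0, "mu": 250.0}
log  = {"gantry": 180.02, "collimator": 45.0, "mu": 250.0}
tol  = {"gantry": 0.1, "collimator": 0.1, "mu": 0.5}

assert all(check_field(plan, log, tol).values())
assert rms_error([10.0, 20.0, 30.0], [10.0, 20.0, 30.1]) < 0.1
```

Running such checks per control point over a whole course is what turns the trajectory logs into an independent, automated second check.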
Parameter monitoring compensation system and method
Barkman, William E.; Babelay, Edwin F.; DeMint, Paul D.; Hebble, Thomas L.; Igou, Richard E.; Williams, Richard R.; Klages, Edward J.; Rasnick, William H.
1995-01-01
A compensation system for a computer-controlled machining apparatus, having a controller and including a cutting tool and a workpiece holder movable relative to one another along a preprogrammed path during a machining operation, utilizes sensors to gather information relating to an actual condition at a preselected stage of a machining operation. The controller compares the actual condition to the condition the program presumes to exist at the preselected stage and alters the program in accordance with detected variations between the actual and assumed conditions. Such conditions may be related to process parameters, such as the position, dimension or shape of the cutting tool or workpiece, or an environmental temperature associated with the machining operation; the sensors may be contact or non-contact sensors or temperature transducers.
Reducing the uncertainty in robotic machining by modal analysis
NASA Astrophysics Data System (ADS)
Alberdi, Iñigo; Pelegay, Jose Angel; Arrazola, Pedro Jose; Ørskov, Klaus Bonde
2017-10-01
The use of industrial robots for machining could lead to high cost and energy savings for the manufacturing industry. Machining robots offer several advantages with respect to CNC machines, such as flexibility, a wide working space, adaptability and relatively low cost. However, some drawbacks are preventing widespread adoption of robotic solutions, namely lower stiffness, vibration/chatter problems, and lower accuracy and repeatability. Because of these issues, conservative cutting parameters are normally chosen, resulting in a low material removal rate (MRR). In this article, an example of a modal analysis of a robot is presented. For that purpose the tap-testing technique is introduced, which aims at maximizing productivity, reducing the uncertainty in the selection of cutting parameters and offering a stable process free from chatter vibrations.
NASA Technical Reports Server (NTRS)
Hippensteele, S. A.; Cochran, R. P.
1980-01-01
The effects of two design parameters, electrode diameter and hole angle, and two machine parameters, electrode current and current-on time, on air flow rates through small-diameter (0.257 to 0.462 mm) electric-discharge-machined holes were measured. The holes were machined individually in rows of 14 each through 1.6 mm thick IN-100 strips. The data showed a linear increase in air flow rate with increases in electrode cross-sectional area and current-on time, and little change with changes in hole angle and electrode current. The average flow-rate deviation (from the mean flow rate for a given row) decreased linearly with electrode diameter and increased with hole angle. Burn time and finished hole diameter were also measured.
NASA Astrophysics Data System (ADS)
Pervaiz, S.; Anwar, S.; Kannan, S.; Almarfadi, A.
2018-04-01
Ti6Al4V is known as a difficult-to-cut material due to inherent properties such as high hot hardness, low thermal conductivity and high chemical reactivity. Nevertheless, Ti6Al4V is used in industrial sectors such as aeronautics, energy generation, petrochemicals and bio-medical engineering. For the metal cutting community, competent and cost-effective machining of Ti6Al4V is a challenging task. To optimize cost and machining performance for the machining of Ti6Al4V, finite element based cutting simulation can be a very useful tool. The aim of this paper is to develop a finite element machining model for the simulation of the Ti6Al4V machining process. The study incorporates two material constitutive models, namely the Power Law (PL) and Johnson-Cook (JC) material models, to mimic the mechanical behaviour of Ti6Al4V. The study investigates cutting temperatures, cutting forces, stresses and plastic strains for different PL and JC material models with associated parameters. In addition, the numerical study integrates different cutting tool rake angles into the machining simulations. The simulated results will be useful for drawing conclusions to improve the overall machining performance of Ti6Al4V.
Optimisation and characterisation of tungsten thick coatings on copper based alloy substrates
NASA Astrophysics Data System (ADS)
Riccardi, B.; Montanari, R.; Casadei, M.; Costanza, G.; Filacchioni, G.; Moriani, A.
2006-06-01
Tungsten is a promising armour material for plasma facing components of nuclear fusion reactors because of its low sputter rate and favourable thermo-mechanical properties. Among the techniques able to realise W armours, plasma spraying looks particularly attractive owing to its simplicity and low cost. The present work concerns the optimisation of spraying parameters aimed at producing 4-5 mm thick W coatings on copper-chromium-zirconium (Cu,Cr,Zr) alloy substrates. Characterisation of the coatings was performed to assess microstructure, impurity content, density, tensile strength, adhesion strength, thermal conductivity and thermal expansion coefficient. The work has demonstrated the feasibility of thick W coatings on flat and curved geometries. These coatings appear to be a reliable armour for medium heat flux plasma facing components.
NASA Astrophysics Data System (ADS)
Mántaras, Daniel A.; Luque, Pablo
2012-10-01
A virtual test rig is presented using a three-dimensional model of the elasto-kinematic behaviour of a vehicle. A general approach is put forward to determine the three-dimensional position of the body and the main parameters which influence the handling of the vehicle. For the design process, the variable input data are the longitudinal and lateral acceleration and the curve radius, which are defined by the user as a design goal. For the optimisation process, once the vehicle has been built, the variable input data are the travel of the four struts and the steering wheel angle, which is obtained through monitoring the vehicle. The virtual test rig has been applied to a standard vehicle and the validity of the results has been proven.
NASA Astrophysics Data System (ADS)
Chaczykowski, Maciej
2016-06-01
A basic organic Rankine cycle (ORC) and two variants of regenerative ORC have been considered for the recovery of exhaust heat from a natural gas compressor station. The modelling framework for ORC systems is presented, and the optimisation of the systems was carried out with turbine power output as the objective to be maximized. The determination of ORC system design parameters was accomplished by means of a genetic algorithm. The study was aimed at estimating the thermodynamic potential of different ORC configurations with several working fluids employed. The first part of this paper describes the ORC equipment models, which are employed to build an NLP formulation to tackle design problems representative of waste heat recovery on gas turbines driving natural gas pipeline compressors.
Effect of magnetic polarity on surface roughness during magnetic field assisted EDM of tool steel
NASA Astrophysics Data System (ADS)
Efendee, A. M.; Saifuldin, M.; Gebremariam, MA; Azhari, A.
2018-04-01
Electrical discharge machining (EDM) is one of the non-traditional machining techniques, offering a wide range of parameter manipulation and machining applications. However, surface roughness, material removal rate, electrode wear and operating cost are among the main issues with this technique. Placing a magnetic device around the machining area offers intriguing output to be investigated, and the effects of magnetic polarity on EDM remain unexplored. The aim of this research is to investigate the effect of magnetic polarity on surface roughness during magnetic field assisted electrical discharge machining (MFAEDM) of tool steel (AISI 420 mod.) using a graphite electrode. A magnet with a flux density of 18 Tesla was applied to the EDM process at selected parameters. The sparks under magnetic field assisted EDM produced a better surface finish than the normal conventional EDM process. In the presence of the high magnetic field, the spark was squeezed, and the discharge craters generated on the machined surface were tiny and shallow. A correct magnetic polarity combination in the MFAEDM process is highly useful for attaining high-efficiency machining and an improved quality of surface finish to meet the demands of modern industrial applications.
Mathur, Neha; Glesk, Ivan; Buis, Arjan
2016-06-01
Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. The heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring of the interface temperature at skin level in lower-limb prosthesis is notoriously complicated. This is due to the flexible nature of the interface liners used which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner rather than skin and liner could be an important step in alleviating complaints on increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm - Gaussian processes is employed, which utilizes the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of thermal time constant of prosthetic materials in Gaussian processes technique which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable.
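The thermal time constant highlighted above describes a first-order lag: the material's temperature relaxes exponentially toward its surroundings, covering about 63% of the gap after one time constant. A minimal sketch, with hypothetical socket/liner values (the starting and ambient temperatures and tau below are not from the Letter):

```python
import math

def first_order_temp(t, t_env, t0, tau):
    """First-order thermal response: temperature relaxes toward t_env with time constant tau (s)."""
    return t_env + (t0 - t_env) * math.exp(-t / tau)

# Hypothetical setup: sensor starts at 26 C, in-socket ambient 33 C, tau = 1200 s
temps = [first_order_temp(t, t_env=33.0, t0=26.0, tau=1200.0) for t in (0, 1200, 3600)]
```

Time-constant values like these, measured per socket/liner material, are the kind of feature the Gaussian process model can condition on.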
NASA Astrophysics Data System (ADS)
Poikselkä, Katja; Leinonen, Mikko; Palosaari, Jaakko; Vallivaara, Ilari; Röning, Juha; Juuti, Jari
2017-09-01
This paper introduces a new type of piezoelectric actuator, Mikbal. The Mikbal was developed from a Cymbal by adding steel structures around the steel cap to increase displacement and reduce the amount of piezoelectric material used. Here the parameters of the steel cap of Mikbal and Cymbal actuators were optimised by using genetic algorithms in combination with Comsol Multiphysics FEM modelling software. The blocking force of the actuator was maximised for different values of displacement by optimising the height and the top diameter of the end cap profile so that their effect on displacement, blocking force and stresses could be analysed. The optimisation process was done for five Mikbal- and two Cymbal-type actuators with different diameters varying between 15 and 40 mm. A Mikbal with a Ø 25 mm piezoceramic disc and a Ø 40 mm steel end cap was produced, and the performances of the unclamped measured and modelled cases were found to correspond within 2.8% accuracy. With a piezoelectric disc of Ø 25 mm, the Mikbal created 72% greater displacement, while blocking force decreased by 57%, compared with a Cymbal with the same size disc. Even with a Ø 20 mm piezoelectric disc, the Mikbal was able to generate ∼10% higher displacement than a Ø 25 mm Cymbal. Thus, the introduced Mikbal structure presents a way to extend the displacement capabilities of a conventional Cymbal actuator for low-to-moderate force applications.
Statistical optimisation techniques in fatigue signal editing problem
NASA Astrophysics Data System (ADS)
Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.
2015-02-01
Success in fatigue signal editing is determined by the level of length reduction achieved without compromising statistical constraints. A great reduction rate can be achieved by removing small-amplitude cycles from the recorded signal. A long recorded signal sometimes renders the cycle-to-cycle editing process daunting, which has encouraged researchers to focus on the segment-based approach. This paper discusses the joint application of the Running Damage Extraction (RDE) technique and a single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments, whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelled segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
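The constrained segment selection described above can be sketched as a penalty-based GA over a binary keep/drop mask. This is a simplified illustration, not the paper's implementation: the segment lengths, damage values and 5% deviation threshold are made up, and only the cumulative-damage constraint (one of the three listed) is enforced.

```python
import random

def fitness(mask, lengths, damages, total_damage, max_dev=0.05):
    """Shorter edited signal is better, but kept segments must retain ~all fatigue damage."""
    kept_damage = sum(d for d, keep in zip(damages, mask) if keep)
    if abs(kept_damage - total_damage) / total_damage > max_dev:
        return float("inf")  # constraint violated: reject
    return sum(l for l, keep in zip(lengths, mask) if keep)

def ga_select(lengths, damages, pop_size=40, gens=200, p_mut=0.05, seed=1):
    """Elitist GA: one-point crossover plus bit-flip mutation over keep/drop masks."""
    random.seed(seed)
    n, total = len(lengths), sum(damages)
    pop = [[random.random() < 0.8 for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, lengths, damages, total))
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]
            children.append([(not g) if random.random() < p_mut else g for g in child])
        pop = elite + children
    return min(pop, key=lambda m: fitness(m, lengths, damages, total))

# Toy data: 10 segments of 10 s each; fatigue damage concentrated in four segments
lengths = [10.0] * 10
damages = [5.0, 0.0, 0.0, 5.0, 0.0, 0.0, 5.0, 0.0, 0.0, 5.0]
best_mask = ga_select(lengths, damages)
```

The GA should converge toward keeping only the damage-carrying segments, mirroring how the paper's edited signal retains fatigue events while shedding low-amplitude stretches.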
Optimisation of an idealised primitive equation ocean model using stochastic parameterization
NASA Astrophysics Data System (ADS)
Cooper, Fenwick C.
2017-05-01
Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10^12 m^4 s^-1, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10^10 m^4 s^-1, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5 day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.
NASA Astrophysics Data System (ADS)
Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.
2014-05-01
In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules - the "Component Description", the "Expert System" for the synthesis of several process chains and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model by a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling, and forming processes. The dependencies between the component and the applied manufacturing processes, as well as between the processes themselves, need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as the representative value. Finally, the process chain which is capable of manufacturing a functionally graded component in an optimal way with respect to the property distributions of the component description is presented by means of a dedicated specification technique.
Escher, Graziela Bragueto; Santos, Jânio Sousa; Rosso, Neiva Deliberali; Marques, Mariza Boscacci; Azevedo, Luciana; do Carmo, Mariana Araújo Vieira; Daguer, Heitor; Molognoni, Luciano; Prado-Silva, Leonardo do; Sant'Ana, Anderson S; da Silva, Marcia Cristina; Granato, Daniel
2018-05-19
This study aimed to optimise the experimental conditions of extraction of the phytochemical compounds and functional properties of Centaurea cyanus petals. The following parameters were determined: the chemical composition (LC-ESI-MS/MS), the effects of pH on the stability and antioxidant activity of anthocyanins, the inhibition of lipid peroxidation, antioxidant activity, anti-hemolytic activity, antimicrobial, anti-hypertensive, and cytotoxic/cytoprotective effect, and the measurements of intracellular reactive oxygen species. Results showed that the temperature and time influenced (p ≤ 0.05) the content of flavonoids, anthocyanins, and FRAP. Only the temperature influenced the total phenolic content, non-anthocyanin flavonoids, and antioxidant activity (DPPH). The statistical approach made it possible to obtain the optimised experimental extraction conditions to increase the level of bioactive compounds. Chlorogenic, caffeic, ferulic, and p-coumaric acids, isoquercitrin, and coumarin were identified as the major compounds in the optimised extract. The optimised extract presented anti-hemolytic and anti-hypertensive activity in vitro, in addition to showing stability and reversibility of anthocyanins and antioxidant activity with pH variation. The C. cyanus petals aqueous extract exhibited high IC50 and GI50 (>900 μg/mL) values for all cell lines, meaning low cytotoxicity. Based on the oxidative stress assay, the extract exhibited pro-oxidant action (10-100 μg/mL) but did not cause damage or cell death. Copyright © 2018 Elsevier Ltd. All rights reserved.
Tool geometry and damage mechanisms influencing CNC turning efficiency of Ti6Al4V
NASA Astrophysics Data System (ADS)
Suresh, Sangeeth; Hamid, Darulihsan Abdul; Yazid, M. Z. A.; Nasuha, Nurdiyanah; Ain, Siti Nurul
2017-12-01
Ti6Al4V, or Grade 5 titanium alloy, is widely used in the aerospace, medical, automotive and fabrication industries due to its distinctive combination of mechanical and physical properties. Ti6Al4V has, however, always been difficult to machine, ironically because of the same mix of properties mentioned earlier. Machining Ti6Al4V has resulted in short cutting tool life, which has led to objectionable surface integrity and rapid failure of the machined parts. The proven functional relevance of this material has nevertheless prompted extensive research into the optimization of machine parameters and cutting tool characteristics. Cutting tool geometry plays a vital role in ensuring dimensional and geometric accuracy of machined parts. In this study, an experimental investigation is carried out to optimize the nose radius and relief angles of the cutting tools and their interaction with different levels of machining parameters. The low elastic modulus and thermal conductivity of Ti6Al4V contribute to rapid tool damage, and the impact of these properties on tool tip damage is studied. An experimental design approach is utilized in the CNC turning of Ti6Al4V to statistically analyse and propose optimum levels of input parameters to lengthen tool life and enhance the surface characteristics of machined parts. A greater tool nose radius with a straight flank, combined with low feed rates, resulted in desirable surface integrity. The presence of a relief angle was shown to aggravate tool damage and dimensional instability in the CNC turning of Ti6Al4V.
NASA Astrophysics Data System (ADS)
Uezu, Tatsuya; Kiyokawa, Shuji
2016-06-01
We investigate the supervised batch learning of Boolean functions expressed by a two-layer perceptron with a tree-like structure. We adopt continuous weights (spherical model) and the Gibbs algorithm. We study the Parity and And machines and two types of noise, input and output noise, together with the noiseless case. We assume that only the teacher suffers from noise. By using the replica method, we derive the saddle point equations for order parameters under the replica symmetric (RS) ansatz. We study the critical value α_C of the loading rate α above which the learning phase exists for cases with and without noise. We find that α_C is nonzero for the Parity machine, while it is zero for the And machine. We derive the exponents β̄ of order parameters expressed as (α - α_C)^β̄ when α is near α_C. Furthermore, in the Parity machine, when noise exists, we find a spin glass solution, in which the overlap between the teacher and student vectors is zero but that between student vectors is nonzero. We perform Markov chain Monte Carlo simulations by simulated annealing and also by exchange Monte Carlo simulations in both machines. In the Parity machine, we study the de Almeida-Thouless stability, and by comparing theoretical and numerical results, we find that there exist parameter regions where the RS solution is unstable, and that the spin glass solution is metastable or unstable. We also study asymptotic learning behavior for large α and derive the exponents β̂ of order parameters expressed as α^(-β̂) when α is large in both machines. By simulated annealing simulations, we confirm these results and conclude that learning takes place for the input noise case with any noise amplitude and for the output noise case when the probability that the teacher's output is reversed is less than one-half.
Application of TRIZ approach to machine vibration condition monitoring problems
NASA Astrophysics Data System (ADS)
Cempel, Czesław
2013-12-01
Up to now machine condition monitoring has not been seriously approached by users of TRIZ (the Russian acronym for the Theory of Inventive Problem Solving, created by G. Altshuller ca. 50 years ago), and TRIZ methodology has not been applied there intensively. There are some introductory papers by the present author posted at the Diagnostic Congress in Cracow (Cempel, in press [11]) and in the Diagnostyka Journal as well. But there still seems to be a need to approach the subject from different sides in order to see if some new knowledge and technology will emerge. In doing this we need first to define the ideal final result (IFR) of our innovation problem. Next, we need a set of parameters to describe the problems of system condition monitoring (CM) in terms of the TRIZ language, and a set of inventive principles possible to apply on the way to the IFR. This means we should present the machine CM problem by means of contradictions and the contradiction matrix. When specifying the problem parameters and inventive principles, one should use analogy and metaphorical thinking, which by definition is not exact but fuzzy, and sometimes leads to unexpected results and outcomes. The paper undertakes this important problem again and brings some new insight into system and machine CM problems. This may mean, for example, the minimal dimensionality of the TRIZ engineering parameter set for the description of machine CM problems, and the set of most useful inventive principles applied to a given engineering parameter and the contradictions of TRIZ.
Rapid performance modeling and parameter regression of geodynamic models
NASA Astrophysics Data System (ADS)
Brown, J.; Duplyakin, D.
2016-12-01
Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
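The active-learning loop described above can be sketched with a plain Gaussian process posterior: fit on the experiments run so far, then pick the candidate with the largest predictive standard deviation as the next experiment. This is a minimal NumPy sketch with an RBF kernel and toy 1-D data standing in for the machine/scientific parameter space.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean and per-point predictive std via Cholesky factorisation."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Active learning step: sample where predictive uncertainty is largest
x_train = np.array([0.0, 1.0, 3.0])        # experiments already run
y_train = np.sin(x_train)                  # observed performance (toy)
x_cand = np.linspace(0.0, 3.0, 31)         # candidate configurations
mean, std = gp_posterior(x_train, y_train, x_cand)
x_next = x_cand[np.argmax(std)]
```

With training points at 0, 1 and 3, the largest uncertainty (and hence the next experiment) falls in the widest gap, near x = 2.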
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, the industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this issue has persisted. Therefore, a simple, easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling of the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while remaining fast.
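The PSO step used in the optimization stage can be sketched in a few lines: each particle tracks a velocity, a personal best and the swarm best. The surface-roughness cost function below is a made-up quadratic in cutting speed and feed, standing in for the fitted ELM model; the bounds and coefficients are illustrative only.

```python
import random

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the feasible cutting-parameter box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = cost(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical roughness model of (cutting speed m/min, feed mm/rev); not a fitted model
roughness = lambda p: (p[0] - 150.0) ** 2 / 1e4 + (p[1] - 0.12) ** 2 * 1e3
best, val = pso(roughness, [(50.0, 300.0), (0.05, 0.4)])
```

In the paper's pipeline the cost function would be the ELM-learned performance model rather than this toy quadratic.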
CNC Machining Of The Complex Copper Electrodes
NASA Astrophysics Data System (ADS)
Popan, Ioan Alexandru; Balc, Nicolae; Popan, Alina
2015-07-01
This paper presents the machining process of the complex copper electrodes. Machining of the complex shapes in copper is difficult because this material is soft and sticky. This research presents the main steps for processing those copper electrodes at a high dimensional accuracy and a good surface quality. Special tooling solutions are required for this machining process and optimal process parameters have been found for the accurate CNC equipment, using smart CAD/CAM software.
Lévy flight artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Sharma, Harish; Bansal, Jagdish Chand; Arya, K. V.; Yang, Xin-She
2016-08-01
The artificial bee colony (ABC) optimisation algorithm is a relatively simple and recent population-based probabilistic approach for global optimisation. The solution search equation of ABC is significantly influenced by a random quantity which helps in exploration at the cost of exploitation of the search space. In the ABC, there is a high chance of skipping the true solution due to its large step sizes. In order to balance diversity and convergence in the ABC, a Lévy flight inspired search strategy is proposed and integrated with ABC. The proposed strategy, named Lévy Flight ABC (LFABC), has both local and global search capability simultaneously, achieved by tuning the Lévy flight parameters and thus automatically tuning the step sizes. In the LFABC, new solutions are generated around the best solution, which helps to enhance the exploitation capability of ABC. Furthermore, to improve the exploration capability, the number of scout bees is increased. The experiments on 20 test problems of different complexities and five real-world engineering optimisation problems show that the proposed strategy outperforms the basic ABC and recent variants of ABC, namely Gbest-guided ABC, best-so-far ABC and modified ABC, in most of the experiments.
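Lévy-distributed step lengths, mostly small with occasional long jumps, are what give such a strategy simultaneous local and global search. Mantegna's algorithm is a common way to sample them; the abstract does not specify LFABC's exact sampler, so treat the sketch below (including the step scale 0.01) as an assumption.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Mantegna's algorithm: heavy-tailed step length for a Lévy flight with exponent beta."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)   # numerator: scaled normal
    v = rng.gauss(0.0, 1.0)       # denominator: standard normal
    return u / abs(v) ** (1 / beta)

# New candidate generated around the best solution (LFABC-style search, sketched)
random.seed(3)
best = 1.0
candidate = best + 0.01 * levy_step()
```

Most draws stay near zero (exploitation around the best solution) while the heavy tail occasionally produces a long jump (exploration), which is the diversity/convergence balance the paper targets.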
Infrastructure optimisation via MBR retrofit: a design guide.
Bagg, W K
2009-01-01
Wastewater management is continually evolving with the development and implementation of new, more efficient technologies. One of these is the Membrane Bioreactor (MBR). Although a relatively new technology in Australia, MBR wastewater treatment has been widely used elsewhere for over 20 years, with thousands of MBRs now in operation worldwide. Over the past 5 years, MBR technology has been enthusiastically embraced in Australia as a potential treatment upgrade option, and via retrofit typically offers two major benefits: (1) more capacity using mostly existing facilities, and (2) very high quality treated effluent. However, infrastructure optimisation via MBR retrofit is not a simple or low-cost solution and there are many factors which should be carefully evaluated before deciding on this method of plant upgrade. The paper reviews a range of design parameters which should be carefully evaluated when considering an MBR retrofit solution. Several actual and conceptual case studies are considered to demonstrate both advantages and disadvantages. Whilst optimising existing facilities and production of high quality water for reuse are powerful drivers, it is suggested that MBRs are perhaps not always the most sustainable Whole-of-Life solution for a wastewater treatment plant upgrade, especially by way of a retrofit.
Machine-learned and codified synthesis parameters of oxide materials
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Kevin; Tomala, Alex; Matthews, Sara; Strubell, Emma; Saunders, Adam; McCallum, Andrew; Olivetti, Elsa
2017-09-01
Predictive materials design has rapidly accelerated in recent years with the advent of large-scale resources, such as materials structure and property databases generated by ab initio computations. In the absence of analogous ab initio frameworks for materials synthesis, high-throughput and machine learning techniques have recently been harnessed to generate synthesis strategies for select materials of interest. Still, a community-accessible, autonomously-compiled synthesis planning resource which spans across materials systems has not yet been developed. In this work, we present a collection of aggregated synthesis parameters computed using the text contained within over 640,000 journal articles using state-of-the-art natural language processing and machine learning techniques. We provide a dataset of synthesis parameters, compiled autonomously across 30 different oxide systems, in a format optimized for planning novel syntheses of materials.
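The paper's pipeline uses state-of-the-art NLP and machine learning; as a deliberately toy illustration of what "codified synthesis parameters" means, a regex pass can pull temperature/time pairs from a synthesis sentence. The example sentence and patterns below are invented for illustration and capture none of the robustness of the actual system.

```python
import re

def extract_conditions(sentence):
    """Toy extraction of calcination/sintering temperatures (C) and durations (h)."""
    temps = [float(m) for m in re.findall(r"(\d+(?:\.\d+)?)\s*(?:°\s*C|C)\b", sentence)]
    times = [float(m) for m in re.findall(r"(\d+(?:\.\d+)?)\s*h\b", sentence)]
    return temps, times

temps, times = extract_conditions(
    "The powder was calcined at 900 C for 12 h and sintered at 1200 C for 4 h.")
```

Aggregating such tuples across hundreds of thousands of articles, with learned rather than hand-written extractors, is what yields the dataset the authors describe.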
The Simpsons program 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, in 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in a way quite similar to real machines, unlike existing tracking codes for synchrotrons, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as a function of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
NASA Astrophysics Data System (ADS)
Chilur, Rudragouda; Kumar, Sushilendra
2018-06-01
The maize (Zea mays L.) crop is one of the most important cereals in the agricultural production systems of the Northern Transition Zone (Hyderabad-Karnataka region) of India. Small and medium farmers of this region lack economical technologies for maize dehusking and shelling, operations which fulfil two major needs in farming, as food crops and as livestock feed. A portable, medium size (600 kg/h capacity), electric motor (2.23 kW) operated Maize Dehusker cum Sheller (MDS) was designed to resolve this issue by considering the engineering properties of maize. The developed trapezium-shaped MDS machine has overall dimensions (length × width (top and bottom) × height) of 1200 × (500 and 610) × 810 mm. The selected operational parameters, viz. cylinder peripheral speed (7.1 m/s), concave clearance (25 mm) and feed rate (600 kg/h), were studied for machine-performance and seed-quality parameters. Under these parameters the machine showed a dehusking efficiency of 99.56%, shelling efficiency of 98.01%, cleaning efficiency of 99.11%, total loss of 3.63%, machine capacity of 527.11 kg/kW-h and germination percentage of 98.93%. Overall machine performance was found satisfactory for the maize dehusking cum shelling operation as well as for producing maize grains for seeding purposes.
NASA Astrophysics Data System (ADS)
Mohan, Dhanya; Kumar, C. Santhosh
2016-03-01
Predicting the physiological condition (normal/abnormal) of a patient is highly desirable for enhancing the quality of health care. Multi-parameter patient monitors (MPMs) using heart rate, arterial blood pressure, respiration rate and oxygen saturation (SpO2) as input parameters were developed to monitor the condition of patients with minimum human resource utilization. The support vector machine (SVM), an advanced machine learning approach popularly used for classification and regression, is used for the realization of MPMs. To make MPMs cost effective, we experiment with a hardware implementation of the MPM using a support vector machine classifier. The training of the system is done in the Matlab environment and the detection of the alarm/no-alarm condition is implemented in hardware. We used different kernels for SVM classification and note that the best performance was obtained using the intersection kernel SVM (IKSVM). The intersection kernel support vector machine classifier MPM outperformed the best known MPM using a radial basis function kernel by an absolute improvement of 2.74% in accuracy, 1.86% in sensitivity and 3.01% in specificity. The hardware model was developed based on the improved performance system using the Verilog Hardware Description Language and was implemented on an Altera Cyclone-II development board.
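The intersection (histogram) kernel named above is simply K(x, y) = Σᵢ min(xᵢ, yᵢ), which is cheap to evaluate in fixed-point hardware. A minimal sketch of the Gram matrix computation, with made-up feature vectors standing in for the monitor's vital-sign features:

```python
import numpy as np

def intersection_kernel(X, Y):
    """Histogram intersection kernel matrix: K[i, j] = sum_k min(X[i, k], Y[j, k])."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=-1)

# Illustrative non-negative feature vectors (e.g. normalised vital-sign histograms)
X = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.7, 0.2]])
K = intersection_kernel(X, X)
```

A matrix-valued kernel function like this can typically be plugged into an SVM library that accepts callable or precomputed kernels; the paper's training was done in Matlab, so this Python form is only an illustration of the kernel itself.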
NASA Astrophysics Data System (ADS)
Chilur, Rudragouda; Kumar, Sushilendra
2018-02-01
Maize (Zea mays L.) is one of the most important cereal crops in the agricultural production systems of the Northern Transition Zone (Hyderabad-Karnataka region) of India. Small- and medium-scale farmers in this region lack economical technologies for maize dehusking and shelling, operations that serve the crop's two major uses in farming: food grain and livestock feed. A portable, medium-size (600 kg/h capacity), electric-motor (2.23 kW) operated Maize Dehusker cum Sheller (MDS) was designed to resolve this issue, taking the engineering properties of maize into account. The developed trapezium-shaped MDS machine has overall dimensions (length × width (top and bottom) × height) of 1200 × (500 and 610) × 810 mm. The selected operational parameters, viz. cylinder peripheral speed (7.1 m/s), concave clearance (25 mm) and feed rate (600 kg/h), were studied for machine-performance and seed-quality parameters. Under these parameters the machine showed a dehusking efficiency of 99.56%, shelling efficiency of 98.01%, cleaning efficiency of 99.11%, total loss of 3.63%, machine capacity of 527.11 kg/kW-h and germination percentage of 98.93%. Overall machine performance was found satisfactory for the maize dehusking cum shelling operation as well as for producing maize grain for seed purposes.
Gesture-controlled interfaces for self-service machines and other applications
NASA Technical Reports Server (NTRS)
Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)
2004-01-01
A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
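The "linear-in-parameters dynamic system fitted by linear least squares" idea can be sketched concretely. Below, a damped-oscillator form x'' = a·x + b·x' is assumed as the gesture model (the patent's actual model family is richer); the parameters (a, b) are recovered from a sampled trajectory with NumPy's least-squares solver.

```python
import numpy as np

# Synthetic "observed" feature position: a damped oscillation.
t = np.linspace(0.0, 5.0, 1001)
lam, omega = -0.2, 3.0
x = np.exp(lam * t) * np.cos(omega * t)

dt = t[1] - t[0]
xd = np.gradient(x, dt)    # numerical first derivative x'
xdd = np.gradient(xd, dt)  # numerical second derivative x''

# Linear-in-parameters model x'' = a*x + b*x': solve for (a, b).
A = np.column_stack([x, xd])
params, *_ = np.linalg.lstsq(A, xdd, rcond=None)
a, b = params
# Analytically this trajectory satisfies x'' = -(lam**2 + omega**2)*x
# + 2*lam*x', so (a, b) should come out close to (-9.04, -0.4).
```

In a recognition system, each predictor bin would hold one such fitted (a, b) pair, and the bin whose predicted motion best matches new measurements identifies the gesture.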
NASA Astrophysics Data System (ADS)
Pitts, James Daniel
Rotary ultrasonic machining (RUM), a hybrid process combining ultrasonic machining and diamond grinding, was created to increase material removal rates in the fabrication of hard and brittle workpieces. The objective of this research was to experimentally derive empirical equations for predicting multiple machined-surface roughness parameters for helically pocketed, rotary ultrasonic machined Zerodur glass-ceramic workpieces by means of a systematic statistical experimental approach. A Taguchi parametric screening design of experiments was employed to systematically determine the RUM process parameters with the largest effect on mean surface roughness. Next, empirically determined equations for seven common surface quality metrics were developed via Box-Behnken response surface experimental trials. Validation trials were conducted, resulting in predicted and experimental surface roughness values in varying levels of agreement. The reductions in cutting force and tool wear associated with RUM, reported by previous researchers, were experimentally verified to extend to helical pocketing of Zerodur glass-ceramic as well.
Developing Lathing Parameters for PBX 9501
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodrum, Randall Brock
This thesis presents work performed on lathing PBX 9501 to gather and analyze cutting force and temperature data during the machining process. These data will be used to decrease the federal-regulation-constrained machining time of the high explosive PBX 9501. The effects of the machining parameters depth of cut, surface feet per minute (SFM), and inches per revolution on cutting force and cutting interface temperature were evaluated. Cutting tools with tip radii of 0.005 inches and 0.05 inches were tested to determine what effect the tool shape had on the machining process as well. A consistently repeatable relationship of temperature to changing depth of cut and surface feet per minute was found, while only a weak dependence on changing inches per revolution was observed. Results also show the relation of cutting force to depth of cut and inches per revolution, with only a weak dependence on SFM. Conclusions suggest rapid, shallow cuts optimize machining time for a billet of PBX 9501 while minimizing temperature increase and cutting force.
Watson, Christopher J E; Jochmans, Ina
2018-01-01
The purpose of this review was to summarise how machine perfusion could contribute to viability assessment of donor livers. In both hypothermic and normothermic machine perfusion, perfusate transaminase measurement has allowed pretransplant assessment of hepatocellular damage. Hypothermic perfusion permits transplantation of marginal grafts but as yet has not permitted formal viability assessment. Livers undergoing normothermic perfusion have been investigated using parameters similar to those used to evaluate the liver in vivo. Lactate clearance, glucose evolution and pH regulation during normothermic perfusion seem promising measures of viability. In addition, bile chemistry might inform on cholangiocyte viability and the likelihood of post-transplant cholangiopathy. While the use of machine perfusion technology has the potential to reduce and even remove uncertainty regarding liver graft viability, analysis of large datasets, such as those derived from large multicenter trials of machine perfusion, is needed to provide sufficient information to enable viability parameters to be defined and validated.
NASA Astrophysics Data System (ADS)
Robert-Perron, Etienne; Blais, Carl; Pelletier, Sylvain; Thomas, Yannig
2007-06-01
The green machining process is an interesting approach for addressing the mediocre machining behavior of high-performance powder metallurgy (PM) steels, and appears to be a promising method for extending tool life and reducing machining costs. Recent improvements in binder/lubricant technologies have led to high-green-strength systems that enable green machining. So far, tool wear has been considered negligible when characterizing the machinability of green PM specimens. This inaccurate assumption may lead to the selection of suboptimal cutting conditions. The first part of this study involves optimizing the machining parameters to minimize the effects of tool wear on machinability in the turning of green PM components. The second part compares the sintered mechanical properties of components machined in the green state with those of components machined after sintering.
Highly Productive Tools For Turning And Milling
NASA Astrophysics Data System (ADS)
Vasilko, Karol
2015-12-01
Besides cutting speed, feed (shift) is another important machining parameter. Its considerable influence shows mainly in the microgeometry of the machined surface. In practice, it is mainly its combination with the radius of the cutting tool tip rounding that is exploited; options to further increase machining productivity and machined surface quality are hidden in this approach. The paper presents design variants of productive cutting tools for lathe work and milling, based on the laws relating the highest unevenness of the machined surface, the tool tip radius and the feed.
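The "laws" relating surface unevenness, tool tip radius and feed referred to here are presumably the classical kinematic roughness relation; a standard statement (our paraphrase, not reproduced from this paper) is:

```latex
% Theoretical peak-to-valley roughness left by a round-nosed turning tool:
%   f - feed (shift) per revolution
%   r - tool tip (nose) radius
R_{\max} \approx \frac{f^{2}}{8\,r}
```

This relation explains why increasing the tip radius, or tool designs that effectively enlarge it, permit a larger feed (higher productivity) at the same theoretical roughness.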
NASA Astrophysics Data System (ADS)
Serianni, G.; De Muri, M.; Muraro, A.; Veltri, P.; Bonomo, F.; Chitarin, G.; Pasqualotto, R.; Pavei, M.; Rizzolo, A.; Valente, M.; Franzen, P.; Ruf, B.; Schiesko, L.
2014-02-01
The Source for Production of Ion of Deuterium Extracted from Rf plasma (SPIDER) test facility is under construction in Padova to optimise the operation of the beam source of ITER neutral beam injectors. The SPIDER beam will be characterised by the instrumented calorimeter STRIKE, whose main components are one-directional carbon-fibre-carbon-composite tiles. A small-scale version of the entire system has been employed in the BAvarian Test MAchine for Negative ions (BATMAN) testbed by arranging two prototype tiles in the vertical direction. The paper presents a description of the mini-STRIKE system and of the data analysis procedures, as well as some results concerning the BATMAN beam under varying operating conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serianni, G., E-mail: gianluigi.serianni@igi.cnr.it; De Muri, M.; Veltri, P.
2014-02-15
The Source for Production of Ion of Deuterium Extracted from Rf plasma (SPIDER) test facility is under construction in Padova to optimise the operation of the beam source of ITER neutral beam injectors. The SPIDER beam will be characterised by the instrumented calorimeter STRIKE, whose main components are one-directional carbon-fibre-carbon-composite tiles. A small-scale version of the entire system has been employed in the BAvarian Test MAchine for Negative ions (BATMAN) testbed by arranging two prototype tiles in the vertical direction. The paper presents a description of the mini-STRIKE system and of the data analysis procedures, as well as some results concerning the BATMAN beam under varying operating conditions.
Parameter monitoring compensation system and method
Barkman, W.E.; Babelay, E.F.; DeMint, P.D.; Hebble, T.L.; Igou, R.E.; Williams, R.R.; Klages, E.J.; Rasnick, W.H.
1995-02-07
A compensation system is described for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation. It utilizes sensors for gathering information at a preselected stage of a machining operation relating to an actual condition. The controller compares the actual condition to a condition which the program presumes to exist at the preselected stage and alters the program in accordance with detected variations between the actual condition and the assumed condition. Such conditions may be related to process parameters, such as a position, dimension or shape of the cutting tool or workpiece or an environmental temperature associated with the machining operation, and such sensors may be a contact or a non-contact type of sensor or a temperature transducer. 7 figs.
NASA Astrophysics Data System (ADS)
Das, Anshuman; Patel, S. K.; Sateesh Kumar, Ch.; Biswal, B. B.
2018-03-01
Newer technological developments are exerting immense pressure on the production domain. Fabrication industries are seeking ways to reduce cutting costs, enhance machined-part quality and test different tool materials that can be adapted for cutting difficult-to-machine materials. High-speed machining has become a domain of paramount importance in mechanical engineering. In this study, the variation of the surface integrity parameters of hardened AISI 4340 alloy steel was analyzed. Surface integrity parameters such as surface roughness, microhardness, machined surface morphology and white layer formation were compared for coated and uncoated cermet inserts under dry cutting conditions. From the results, it was deduced that the coated insert outperformed the uncoated one in terms of the different surface integrity characteristics.
Biomachining - A new approach for micromachining of metals
NASA Astrophysics Data System (ADS)
Vigneshwaran, S. C. Sakthi; Ramakrishnan, R.; Arun Prakash, C.; Sashank, C.
2018-04-01
Machining is the process of removing material from a workpiece and can be accomplished by physical, chemical or biological methods. Though physical and chemical methods have been widely used, they have their own disadvantages, such as the development of a heat-affected zone and the use of hazardous chemicals. Biomachining is a machining process in which bacteria are used to remove material from metal parts. Chemolithotrophic bacteria such as Acidithiobacillus ferrooxidans have been used in the biomachining of metals such as copper and iron, exploiting their ability to catalyze the oxidation of inorganic substances. Biomachining is a suitable process for the micromachining of metals. This paper reviews the biomachining process and the various mechanisms involved, and also outlines the parameters/factors to be considered in biomachining and their effect on the metal removal rate.
High speed machining of space shuttle external tank liquid hydrogen barrel panel
NASA Technical Reports Server (NTRS)
Hankins, J. D.
1983-01-01
Actual and projected optimum High Speed Machining data for producing shuttle external tank liquid hydrogen barrel panels of aluminum alloy 2219-T87 are reported. The data include various machining parameters, e.g., spindle speeds, cutting speed, table feed, chip load, metal removal rate, horsepower, cutting efficiency, cutter wear (or lack thereof) and chip removal methods.
High speed machining of space shuttle external tank liquid hydrogen barrel panel
NASA Astrophysics Data System (ADS)
Hankins, J. D.
1983-11-01
Actual and projected optimum High Speed Machining data for producing shuttle external tank liquid hydrogen barrel panels of aluminum alloy 2219-T87 are reported. The data include various machining parameters, e.g., spindle speeds, cutting speed, table feed, chip load, metal removal rate, horsepower, cutting efficiency, cutter wear (or lack thereof) and chip removal methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlov, S V; Trofimov, N S; Chekhlova, T K
2014-07-31
A possibility of designing optical waveguide devices based on sol-gel SiO₂–TiO₂ films using the temperature dependence of the effective refractive index is shown. The dependences of the device characteristics on the parameters of the film and optical-system elements are analysed. The operation of a temperature recorder and a temperature limiter with a resolution of 0.6 K mm⁻¹ is demonstrated. The film and output-prism parameters are optimised. (Fibre-optic and nonlinear-optic devices)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerns, James R.; Followill, David S.; Imaging and Radiation Oncology Core-Houston, The University of Texas Health Science Center-Houston, Houston, Texas
Purpose: To compare radiation machine measurement data collected by the Imaging and Radiation Oncology Core at Houston (IROC-H) with institutional treatment planning system (TPS) values, to identify parameters with large differences in agreement; the findings will help institutions focus their efforts to improve the accuracy of their TPS models. Methods and Materials: Between 2000 and 2014, IROC-H visited more than 250 institutions and conducted independent measurements of machine dosimetric data points, including percentage depth dose, output factors, off-axis factors, multileaf collimator small fields, and wedge data. We compared these data with the institutional TPS values for the same points by energy, class, and parameter to identify differences and similarities using criteria involving both the medians and standard deviations for Varian linear accelerators. Distributions of differences between machine measurements and institutional TPS values were generated for basic dosimetric parameters. Results: On average, intensity modulated radiation therapy-style and stereotactic body radiation therapy-style output factors and upper physical wedge output factors were the most problematic. Percentage depth dose, jaw output factors, and enhanced dynamic wedge output factors agreed best between the IROC-H measurements and the TPS values. Although small differences were shown between 2 common TPS systems, neither was superior to the other. Parameter agreement was constant over time from 2000 to 2014. Conclusions: Differences in basic dosimetric parameters between machine measurements and TPS values vary widely depending on the parameter, although agreement does not seem to vary by TPS and has not changed over time. Intensity modulated radiation therapy-style output factors, stereotactic body radiation therapy-style output factors, and upper physical wedge output factors had the largest disagreement and should be carefully modeled to ensure accuracy.
Lestini, Giulia; Dumont, Cyrielle; Mentré, France
2015-01-01
Purpose In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e. when no adaptation is performed, using wrong prior parameters. Methods We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Results Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with small first cohort, but not better than the balanced two-stage design. Conclusions Two-stage ADs are useful when prior parameters are unreliable. In case of small first cohort, more adaptations are needed but these designs are complex to implement. PMID:26123680
Lestini, Giulia; Dumont, Cyrielle; Mentré, France
2015-10-01
In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e., when no adaptation is performed, using wrong prior parameters. We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with small first cohort, but not better than the balanced two-stage design. Two-stage ADs are useful when prior parameters are unreliable. In case of small first cohort, more adaptations are needed but these designs are complex to implement.
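Both versions of this record state that the optimal design maximises the determinant of the Fisher information matrix; in standard optimal-design notation (our paraphrase, not the paper's own formula), the D-optimality criterion used at each adaptation step is:

```latex
% D-optimal design xi* given population parameters Psi
% (M_F is the population Fisher information matrix):
\xi^{*} = \arg\max_{\xi} \, \det M_{F}(\xi, \Psi)
```

In an adaptive design, Ψ is replaced after each cohort by the estimate updated from the accumulated data, which is why the approach remains useful even when the prior Ψ0 is wrong.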
Afshari, Roya; Khaksar, Ramin; Mohammadifar, Mohammad Amin; Amiri, Zohre; Komeili, Rozita; Khaneghah, Amin Mousavi
2015-01-01
Summary In this study, the D-optimal mixture design methodology was applied to determine the optimised proportions of inulin, β-glucan and breadcrumbs in the formulation of low-fat beef burgers containing a pre-emulsified canola and olive oil blend. The effect of each ingredient individually, as well as their interactions, on the cooking characteristics, texture, colour and sensory properties of the low-fat beef burgers was also investigated. The results revealed that increasing the inulin content in the burger formulations led to lower cooking yield and moisture retention and to increased lightness, overall acceptability, mouldability and desired textural parameters. In contrast, incorporation of β-glucan increased the cooking yield and moisture retention and decreased the lightness, overall acceptability, mouldability and desired textural parameters of the burger patties. The interaction between inulin and β-glucan improved the cooking characteristics of the burgers without a significant negative effect on the colour or sensory properties. The results clearly indicated that the optimum mixture for the burger formulation consisted of (in g per 100 g): inulin 3.1, β-glucan 2.2 and breadcrumbs 2.7. The texture parameters and cooking characteristics were improved by using the mixture of inulin, β-glucan and breadcrumbs, without any negative effects on the sensory properties of the burgers. PMID:27904378
Research on axisymmetric aspheric surface numerical design and manufacturing technology
NASA Astrophysics Data System (ADS)
Wang, Zhen-zhong; Guo, Yin-biao; Lin, Zheng
2006-02-01
The key requirements for aspheric machining are an exact machining path and machining aspheric lenses with high accuracy and efficiency, even as traditional manual manufacturing has developed into today's numerical control (NC) machining. This paper presents a mathematical model relating a virtual cone to the aspheric surface equation, and discusses techniques for uniform grinding wheel wear and error compensation in aspheric machining. Finally, based on the above, a software system for high-precision aspheric surface manufacturing is designed and implemented. The system works out the grinding wheel path from the input parameters and generates the NC machining programs for aspheric surfaces.
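The paper's virtual-cone model is not reproduced in this record, but axisymmetric aspheric surfaces of this kind are conventionally described by the standard sag equation, which NC path generators typically start from (given here as general background, not as this paper's formula):

```latex
% Sag z at radial distance r from the axis:
%   c - vertex curvature (1/radius), k - conic constant,
%   a_{2i} - higher-order aspheric coefficients
z(r) = \frac{c\,r^{2}}{1 + \sqrt{1 - (1 + k)\,c^{2} r^{2}}}
     + \sum_{i=2}^{n} a_{2i}\, r^{2i}
```

The machining path is then obtained by sampling z(r), offsetting by the grinding wheel geometry, and compensating for measured wheel wear.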
Heidelberg Retina Tomograph 3 machine learning classifiers for glaucoma detection
Townsend, K A; Wollstein, G; Danks, D; Sung, K R; Ishikawa, H; Kagemann, L; Gabriele, M L; Schuman, J S
2010-01-01
Aims To assess performance of classifiers trained on Heidelberg Retina Tomograph 3 (HRT3) parameters for discriminating between healthy and glaucomatous eyes. Methods Classifiers were trained using HRT3 parameters from 60 healthy subjects and 140 glaucomatous subjects. The classifiers were trained on all 95 variables and smaller sets created with backward elimination. Seven types of classifiers, including Support Vector Machines with radial basis (SVM-radial), and Recursive Partitioning and Regression Trees (RPART), were trained on the parameters. The area under the ROC curve (AUC) was calculated for classifiers, individual parameters and HRT3 glaucoma probability scores (GPS). Classifier AUCs and leave-one-out accuracy were compared with the highest individual parameter and GPS AUCs and accuracies. Results The highest AUC and accuracy for an individual parameter were 0.848 and 0.79, for vertical cup/disc ratio (vC/D). For GPS, global GPS performed best with AUC 0.829 and accuracy 0.78. SVM-radial with all parameters showed significant improvement over global GPS and vC/D with AUC 0.916 and accuracy 0.85. RPART with all parameters provided significant improvement over global GPS with AUC 0.899 and significant improvement over global GPS and vC/D with accuracy 0.875. Conclusions Machine learning classifiers of HRT3 data provide significant enhancement over current methods for detection of glaucoma. PMID:18523087
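The SVM-radial-with-all-parameters setup described here can be sketched with scikit-learn. The synthetic 95-dimensional data below merely mimics the study's class sizes (60 healthy, 140 glaucomatous); the real HRT3 parameters, the backward-elimination step and the leave-one-out protocol are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for 95 HRT3 parameters: 60 healthy (label 0),
# 140 glaucomatous (label 1), with a small mean shift between classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (60, 95)),
               rng.normal(0.5, 1.0, (140, 95))])
y = np.r_[np.zeros(60), np.ones(140)]

# RBF-kernel SVM with feature scaling; cross-validated probabilities
# give an out-of-sample ROC curve, as in the study's AUC comparison.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(y, scores)
print(round(auc, 3))
```

Comparing this AUC against that of a single parameter (one column of X) reproduces, in miniature, the study's classifier-versus-individual-parameter comparison.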
Girault, C.; Chevron, V.; Richard, J. C.; Daudenthun, I.; Pasquis, P.; Leroy, J.; Bonmarchand, G.
1997-01-01
BACKGROUND: A study was undertaken to investigate the effects of non-invasive assist-control ventilation (ACV) by nasal mask on respiratory physiological parameters and comfort in acute on chronic respiratory failure (ACRF). METHODS: Fifteen patients with chronic obstructive pulmonary disease (COPD) were prospectively and randomly assigned to two non-invasive ventilation (NIV) sequences in spontaneous breathing (SB) and ACV mode. ACV settings were always optimised and therefore subsequently adjusted according to patient's tolerance and air leaks. RESULTS: ACV significantly decreased all the total inspiratory work of breathing (WOBinsp) parameters, pressure time product, and oesophageal pressure variation in comparison with SB mode. The ACV mode also resulted in a significant reduction in surface diaphragmatic electromyographic activity to 36% of the control values and significantly improved the breathing pattern. SB did not change the arterial blood gas tensions from baseline values whereas ACV significantly improved both the PaO2 from a mean (SD) of 8.45 (2.95) kPa to 13.31 (2.15) kPa, PaCO2 from 9.52 (1.61) kPa to 7.39 (1.39) kPa, and the pH from 7.32 (0.03) to 7.40 (0.07). The respiratory comfort was significantly lower with ACV than with SB. CONCLUSIONS: This study shows that the clinical benefit of non-invasive ACV in the management of ACRF in patients with COPD results in a reduced inspiratory muscle activity providing an improvement in breathing pattern and gas exchange. Despite respiratory discomfort, the muscle rest provided appears sufficient when ACV settings are optimised. PMID:9337827
Tugwell, J R; England, A; Hogg, P
2017-08-01
Physical and technical differences exist between imaging on an x-ray tabletop and imaging on a trolley. This study evaluates how trolley imaging affects image quality and radiation dose for an antero-posterior (AP) pelvis projection, and explores means of optimising this imaging examination. An anthropomorphic pelvis phantom was imaged on a commercially available trolley under various conditions. Variables explored included two mattresses, two image receptor holder positions, three source-to-image distances (SIDs) and four mAs values. Image quality was evaluated using relative visual grading analysis, with the reference image acquired on the x-ray tabletop, and the contrast-to-noise ratio (CNR) was calculated. Effective dose was established using Monte Carlo simulation. Optimisation scores were derived as a figure of merit by dividing effective dose by visual image quality score. Visual image quality reduced significantly (p < 0.05) whilst effective dose increased significantly (p < 0.05) for images acquired on the trolley using acquisition parameters identical to the reference image. The trolley image with the highest optimisation score was acquired using 130 cm SID, 20 mAs, the standard mattress and the platform not elevated. A difference of 12.8 mm was found between the images with the lowest and highest magnification factors (18%). The acquisition parameters used for AP pelvis on the x-ray tabletop are not transferable to trolley imaging and should be modified to compensate for the differences that exist. Exposure charts should be developed for trolley imaging to ensure optimal image quality at the lowest possible dose.
Designing synthetic networks in silico: a generalised evolutionary algorithm approach.
Smith, Robert W; van Sluijs, Bob; Fleck, Christian
2017-12-02
Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and by performing parameter optimisation on the oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that optimally balance different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network and parameter space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.
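The genotype-to-ODE-to-fitness loop described in this abstract can be illustrated with a deliberately tiny example. Everything below is an assumption for illustration: a one-equation "network" dx/dt = a - b·x stands in for the evolved ODE system, truncation selection with Gaussian mutation stands in for the paper's algorithm, and the fitness is the mismatch to a target time series.

```python
import numpy as np

# Genotype = (a, b) of dx/dt = a - b*x; phenotype = simulated trajectory.
def simulate(params, x0=0.0, dt=0.05, steps=100):
    a, b = params
    x, traj = x0, []
    for _ in range(steps):
        x += dt * (a - b * x)      # forward-Euler integration of the ODE
        traj.append(x)
    return np.array(traj)

rng = np.random.default_rng(42)
target = simulate((2.0, 0.5))      # "experimental" time series

pop = rng.uniform(0.1, 3.0, size=(20, 2))         # random initial genotypes
for gen in range(60):
    fit = np.array([np.mean((simulate(p) - target) ** 2) for p in pop])
    parents = pop[np.argsort(fit)[:5]]            # truncation selection
    children = np.repeat(parents, 4, axis=0)
    children[1:] += rng.normal(0.0, 0.05, (19, 2))  # mutate all but the elite
    pop = np.clip(children, 1e-3, None)

fit = np.array([np.mean((simulate(p) - target) ** 2) for p in pop])
best = pop[np.argmin(fit)]         # should approach (2.0, 0.5)
```

The paper's algorithm additionally evolves the network *structure* (which terms appear in each ODE) and handles multiple objectives; this sketch fixes the structure and optimises only the rates.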
NASA Astrophysics Data System (ADS)
Semenov, Mikhail A.; Stratonovitch, Pierre; Paul, Matthew J.
2017-04-01
Short periods of extreme weather, such as a spell of high temperature or drought during a sensitive stage of development, can result in substantial yield losses due to reductions in grain number and grain size. In a modelling study (Stratonovitch & Semenov 2015), heat tolerance around flowering in wheat was identified as a key trait for increased yield potential in Europe under climate change. Ji et al. (2010) demonstrated cultivar-specific responses of wheat yield to drought stress around flowering and hypothesised that carbohydrate supply to the anthers may be key to maintaining pollen fertility and grain number in wheat. Nuccio et al. (2015) showed that genetically modified varieties of maize with increased sucrose concentration in ear spikelets performed better under both non-drought and drought conditions in field experiments. The objective of this modelling study was to assess the potential benefits of tolerance to drought during reproductive development for wheat yield potential and yield stability across Europe. We used the Sirius wheat model to optimise wheat ideotypes for 2050 (HadGEM2, RCP8.5) climate scenarios at selected European sites. Eight cultivar parameters were optimised to maximise mean yields, including parameters controlling phenology, canopy growth and water limitation. At sites where water could be limiting, drought-sensitive ideotypes produced substantially lower mean yields and higher yield variability compared with tolerant ideotypes. Therefore, tolerance to drought during reproductive development is likely to be required for wheat cultivars optimised for the future climate in Europe in order to achieve high yield potential and yield stability.
Optimisation of quantitative lung SPECT applied to mild COPD: a software phantom simulation study.
Norberg, Pernilla; Olsson, Anna; Alm Carlsson, Gudrun; Sandborg, Michael; Gustafsson, Agnetha
2015-01-01
The amount of inhomogeneity in a (99m)Tc Technegas single-photon emission computed tomography (SPECT) lung image, caused by reduced ventilation in lung regions affected by chronic obstructive pulmonary disease (COPD), is correlated with disease advancement. A quantitative analysis method measuring these inhomogeneities, the CVT method, was proposed in earlier work. Detecting mild COPD is a difficult task and requires optimised parameter values. In this work, the CVT method was optimised with respect to the parameter values of acquisition, reconstruction and analysis. The ordered-subset expectation maximisation (OSEM) algorithm was used for reconstructing the lung SPECT images. As a first step towards clinical application of the CVT method in detecting mild COPD, this study was based on simulated SPECT images of an advanced anthropomorphic lung software phantom including respiratory and cardiac motion, where the mild-COPD lung had an overall ventilation reduction of 5%. The best separation between healthy and mild-COPD lung images, as determined using the CVT measure of ventilation inhomogeneity and 125 MBq of (99m)Tc, was obtained using a low-energy high-resolution (LEHR) collimator and a power-6 Butterworth post-filter with a cutoff frequency of 0.6 to 0.7 cm(-1). Sixty-four reconstruction updates and a small kernel size should be used when the whole lung is analysed; for the reduced lung, a greater number of updates and a larger kernel size are needed. A LEHR collimator and 125 MBq of (99m)Tc, together with an optimal combination of cutoff frequency, number of updates and kernel size, gave the best result. Suboptimal selection of cutoff frequency, number of updates or kernel size will reduce the imaging system's ability to detect mild COPD in the lung phantom.
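For context, the Butterworth post-filter referred to here, in one common SPECT convention (conventions differ in how the "power" enters the exponent, so this is general background rather than this paper's exact definition), has the frequency response:

```latex
% Butterworth low-pass post-filter of order (power) n and cutoff f_c;
% the study's optimised values were n = 6 and f_c = 0.6 to 0.7 cm^{-1}.
B(f) = \frac{1}{1 + \left( f / f_{c} \right)^{2n}}
```

Lowering f_c smooths the reconstruction more aggressively, which suppresses noise but also blurs the genuine ventilation inhomogeneities the CVT measure is trying to detect; hence the need to optimise it jointly with the number of OSEM updates and the kernel size.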
Material model of pelvic bone based on modal analysis: a study on the composite bone.
Henyš, Petr; Čapek, Lukáš
2017-02-01
Digital models based on finite element (FE) analysis are widely used in orthopaedics to predict the stress or strain in the bone due to bone-implant interaction. The usability of the model depends strongly on the bone material description. The material model that is most commonly used is based on a constant Young's modulus or on the apparent density of bone obtained from computer tomography (CT) data. The Young's modulus of bone is described in many experimental works with large variations in the results. The concept of measuring and validating the material model of the pelvic bone based on modal analysis is introduced in this pilot study. The modal frequencies, damping, and shapes of the composite bone were measured precisely by an impact hammer at 239 points. An FE model was built using the data pertaining to the geometry and apparent density obtained from the CT of the composite bone. The isotropic homogeneous Young's modulus and Poisson's ratio of the cortical and trabecular bone were estimated by an optimisation procedure including Gaussian statistical properties. The performance of the updated model was investigated through the sensitivity analysis of the natural frequencies with respect to the material parameters. The maximal error between the numerical and experimental natural frequencies of the bone reached 1.74 % in the first modal shape. Finally, the optimised parameters were matched with the data sheets of the composite bone. The maximal difference between the calibrated material properties and those obtained from the data sheet was 34 %. The optimisation scheme of the FE model based on the modal analysis data provides extremely useful calibration of the FE models with uncertainty bounds and without the influence of the boundary conditions.
NASA Astrophysics Data System (ADS)
Lu, Lihao; Zhang, Jianxiong; Tang, Wansheng
2016-04-01
An inventory system for perishable items with limited replenishment capacity is introduced in this paper. The demand rate depends on the stock quantity displayed in the store as well as the sales price. With the goal of profit maximisation, an optimisation problem is addressed to seek the optimal joint dynamic pricing and replenishment policy, which is obtained by solving the optimisation problem with Pontryagin's maximum principle. A joint mixed policy, in which the sales price is a static decision variable and the replenishment rate remains a dynamic decision variable, is presented for comparison with the joint dynamic policy. Numerical results demonstrate the advantages of the joint dynamic policy, and further show the effects of different system parameters on the optimal joint dynamic policy and the maximal total profit.
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2018-03-01
Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to identify the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, with a numerical model of the blade created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Damage of varying location and severity was simulated experimentally through the addition of small masses to the structure, intended to cause a structural alteration. A damaged model, representing four variable-magnitude non-structural masses at predefined points, was created and updated to provide a deterministic damage prediction and information on the parameter uncertainty via fuzzy updating.
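A toy illustration of the fuzzy updating idea: propagate α-cuts of a triangular fuzzy frequency back to parameter intervals through an invertible one-parameter surrogate model. The surrogate f = 10√E stands in for the paper's FE model and metaheuristic search, and is purely an assumption for illustration:

```python
def tri_alpha_cut(a, b, c, alpha):
    """Interval at membership level alpha of a triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def update_parameter(freq_tfn, inv_model, alphas=(0.0, 0.5, 1.0)):
    """Map a triangular fuzzy frequency to parameter intervals per alpha-cut,
    assuming the (hypothetical) model is monotonically increasing in the
    parameter, so interval endpoints map to interval endpoints."""
    return {alpha: tuple(inv_model(f) for f in tri_alpha_cut(*freq_tfn, alpha))
            for alpha in alphas}

# toy surrogate: f = 10 * sqrt(E)  (E in GPa, f in Hz) -- an assumption,
# not the paper's FE model; its inverse is E = (f / 10)^2
inv_model = lambda f: (f / 10.0) ** 2
cuts = update_parameter((90.0, 100.0, 112.0), inv_model)
```

At α = 1 the cut collapses to the crisp estimate; at α = 0 it gives the full support of the updated parameter.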
Computation of the Distribution of the Fiber-Matrix Interface Cracks in the Edge Trimming of CFRP
NASA Astrophysics Data System (ADS)
Wang, Fu-ji; Zhang, Bo-yu; Ma, Jian-wei; Bi, Guang-jian; Hu, Hai-bo
2018-04-01
Edge trimming is commonly used to bring CFRP components to the right dimensions and shape in the aerospace industry. However, various forms of undesirable machining damage occur frequently, significantly decreasing the material performance of CFRP. The damage is difficult to predict and control due to its complicated governing laws, causing unsatisfactory machining quality of CFRP components. Since most forms of damage share the same essence, namely fiber-matrix interface cracks, this study aims to calculate their distribution in the edge trimming of CFRP and thereby obtain the effects of the machining parameters, which can help guide the optimal selection of machining parameters in engineering. Through orthogonal cutting experiments, the quantitative relation between the fiber-matrix interface crack depth and the fiber cutting angle, cutting depth and cutting speed is established. From an analysis of the material removal process at any location of the workpiece in edge trimming, the instantaneous cutting parameters are calculated and the formation process of the fiber-matrix interface crack is revealed. Finally, the computational method for the fiber-matrix interface cracks in edge trimming of CFRP is proposed. Based on the computational results, it is found that the fiber orientation of the CFRP workpiece is the most significant factor affecting the fiber-matrix interface cracks, as it not only changes their depth from micrometers to millimeters but also controls their distribution pattern. The other machining parameters influence only the crack depth and have little effect on the distribution pattern.
AMS 4.0: consensus prediction of post-translational modifications in protein sequences.
Plewczynski, Dariusz; Basu, Subhadip; Saha, Indrajit
2012-08-01
We present here the 2011 update of the AutoMotif Service (AMS 4.0) that predicts a wide selection of 88 different types of single amino acid post-translational modifications (PTMs) in protein sequences. The selection of experimentally confirmed modifications is acquired from the latest UniProt and Phospho.ELM databases for training. The sequence vicinity of each modified residue is represented using amino acid physico-chemical features encoded using high quality indices (HQI) obtained by automatic clustering of known indices extracted from the AAindex database. For each type of numerical representation, the method builds an ensemble of Multi-Layer Perceptron (MLP) pattern classifiers, each optimising a different objective during training (for example recall, precision or the area under the ROC curve (AUC)). The consensus is built using brainstorming technology, which combines multi-objective instances of the machine learning algorithm and the data fusion of different representations of the training objects, in order to boost the overall prediction accuracy of conserved short sequence motifs. The performance of AMS 4.0 is compared with the accuracy of previous versions, which were constructed using single machine learning methods (artificial neural networks, support vector machines). Our software improves the average AUC score of the earlier version by close to 7 % as calculated on the test datasets of all 88 PTM types. Moreover, for the most difficult sequence motif types it is able to improve the prediction performance by almost 32 % when compared with the previously used single machine learning methods. Summarising, the brainstorming consensus meta-learning methodology on average boosts the AUC score up to around 89 %, averaged over all 88 PTM types. Detailed results for single machine learning methods and the consensus methodology are also provided, together with a comparison to previously published methods and state-of-the-art software tools.
The source code and precompiled binaries of brainstorming tool are available at http://code.google.com/p/automotifserver/ under Apache 2.0 licensing.
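The data-fusion step can be illustrated with a minimal sketch: per-classifier modification probabilities fused with weights derived from each classifier's validation AUC. This weighting rule is an assumption about one plausible consensus scheme, not the actual brainstorming implementation:

```python
import numpy as np

def consensus(probs, aucs):
    """Fuse per-classifier modification probabilities (n_clf x n_sites)
    into one consensus score per site, weighting each classifier in
    proportion to its validation AUC (a hypothetical fusion rule)."""
    w = np.asarray(aucs, dtype=float)
    w = w / w.sum()                      # normalise weights to sum to 1
    return w @ np.asarray(probs, dtype=float)
```

A site would then be called modified when its consensus score exceeds a chosen decision threshold.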
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume the connection between neural firings and movements to be stationary, which is not true according to recent studies observing time-varying neuron tuning properties. This property results from neural plasticity, motor learning and other factors, and leads to degraded decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, suggesting a promising way to design a long-term-performing model for Brain Machine Interface decoders.
Temperature based Restricted Boltzmann Machines
NASA Astrophysics Data System (ADS)
Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping
2016-01-01
Restricted Boltzmann machines (RBMs), which apply graphical models to learning a probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution that RBMs originate from. However, none of the existing schemes has considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
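The temperature parameter enters as a rescaling of the logistic activation; a minimal sketch of the hidden-unit firing probability (the exact parameterisation in the paper may differ):

```python
import numpy as np

def hidden_activation_prob(x, T=1.0):
    """P(h = 1 | pre-activation x) with a temperature-scaled logistic:
    lower T sharpens the sigmoid, making hidden units more selective;
    higher T flattens it toward 0.5 (less selective firing)."""
    return 1.0 / (1.0 + np.exp(-x / T))
```

At T = 1 this reduces to the ordinary RBM sigmoid, so temperature can be tuned without changing the rest of the training procedure.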
Kagkadis, K A; Rekkas, D M; Dallas, P P; Choulis, N H
1996-01-01
In this study a complex of ibuprofen and β-hydroxypropylcyclodextrin was prepared using a freeze-drying method. The production parameters and the final specifications of this product were optimized using response surface methodology. The results show that the freeze-dried complex meets the solubility requirements to be considered a possible injectable form.
Optimising the inactivation of grape juice spoilage organisms by pulse electric fields.
Marsellés-Fontanet, A Robert; Puig, Anna; Olmos, Paola; Mínguez-Sanz, Santiago; Martín-Belloso, Olga
2009-04-15
The effect of some pulsed electric field (PEF) processing parameters (electric field strength, pulse frequency and treatment time) on a mixture of microorganisms (Kloeckera apiculata, Saccharomyces cerevisiae, Lactobacillus plantarum, Lactobacillus hilgardii and Gluconobacter oxydans) typically present in grape juice and wine was evaluated. An experimental design based on response surface methodology (RSM) was used, and the results were also compared with those of a factorially designed experiment. The relationship between the levels of inactivation of microorganisms and the energy applied to the grape juice was analysed. Yeast and bacteria were inactivated by the PEF treatments, with reductions that ranged from 2.24 to 3.94 log units. All PEF parameters affected microbial inactivation. Optimal inactivation of the mixture of spoilage microorganisms was predicted by the RSM models at 35.0 kV cm(-1) with a pulse frequency of 303 Hz for 1 ms. Inactivation was greater for yeasts than for bacteria, as predicted by the RSM. The maximum efficacy of the PEF treatment for inactivation of microorganisms in grape juice was observed around 1500 MJ L(-1) for all the microorganisms investigated. The RSM could be used in the fruit juice industry to optimise the inactivation of spoilage microorganisms by PEF.
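The RSM fit itself amounts to ordinary least squares on a second-order polynomial model. A minimal sketch for two factors (a simplification; the study varied three PEF parameters):

```python
import numpy as np

def fit_rsm2(x1, x2, y):
    """Least-squares fit of a two-factor second-order response surface
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2,
    returning the coefficient vector (b0, b1, b2, b11, b22, b12)."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b
```

With a three-level factorial design the six coefficients are identifiable, and the fitted surface can then be searched for the optimum operating point.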
On an efficient multilevel inverter assembly: structural savings and design optimisations
NASA Astrophysics Data System (ADS)
Choupan, Reza; Nazarpour, Daryoush; Golshannavaz, Sajjad
2018-01-01
This study puts forward an efficient unit cell for use in multilevel inverter assemblies. The proposed structure reduces the number of direct current (dc) voltage sources, insulated-gate bipolar transistors (IGBTs), gate driver circuits, and the installation area, and hence the implementation costs. These structural savings do not sacrifice the technical performance of the proposed design; interestingly, an increased number of output voltage levels is attained. Targeting a techno-economic characteristic, the contemplated structure is included as the key unit of cascaded multilevel inverters. Such extensions require the development of applicable design procedures. To this end, two efficient strategies are elaborated to determine the magnitudes of the input dc voltage sources. As well, an optimisation process is developed to explore the role of different parameters in the overall performance of the proposed inverter: the number of IGBTs, dc sources and diodes, and the overall blocked voltage on the switches. In light of these characteristics, a comprehensive analysis is established to compare the proposed design with conventional and recently developed structures. Detailed simulation and experimental studies are conducted to assess the performance of the proposed design, and the obtained results are discussed in depth.
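The attainable number of output voltage levels for a given set of dc source magnitudes can be sketched by enumerating signed subset sums, assuming each cell can add, subtract, or bypass its source (a common idealisation for cascaded topologies, not this paper's specific cell):

```python
def output_levels(sources):
    """Distinct output voltage levels attainable by a cascaded multilevel
    inverter whose cells can each contribute +V, 0, or -V for their dc
    source of magnitude V."""
    levels = {0}
    for v in sources:
        levels = {l + s * v for l in levels for s in (-1, 0, 1)}
    return sorted(levels)
```

For example, binary-weighted magnitudes (1, 2) give 7 evenly spaced levels, while ternary-weighted magnitudes (1, 3) give the maximum 9 levels from the same number of sources, which is why source-magnitude selection strategies matter.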
Capacity-optimized mp2 audio watermarking
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Dittmann, Jana
2003-06-01
Today a number of audio watermarking algorithms have been proposed, some of them of a quality that makes them suitable for commercial applications. The focus of most of these algorithms is copyright protection. Therefore, transparency and robustness are the most discussed and optimised parameters. But other applications for audio watermarking can also be identified, stressing other parameters like complexity or payload. In our paper, we introduce a new mp2 audio watermarking algorithm optimised for high payload. Our algorithm uses the scale factors of an mp2 file for watermark embedding. They are grouped and masked based on a pseudo-random pattern generated from a secret key. In each group, we embed one bit. Depending on the bit to embed, we change the scale factors by adding 1 where necessary until the group contains more even or more odd scale factors, as required. A predominantly odd group has a 1 embedded, a predominantly even group a 0. The same rule is later applied to detect the watermark. The group size can be increased or decreased for a transparency/payload trade-off. We embed 160 bits or more per second in an mp2 file without reducing perceived quality. As an application example, we introduce a prototypic Karaoke system displaying song lyrics embedded as a watermark.
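The parity-based embedding rule described above can be sketched directly; the key-driven grouping and masking of scale factors are omitted:

```python
def embed_bit(group, bit):
    """Embed one bit into a group of mp2 scale factors: add 1 to factors
    of the 'wrong' parity until the desired parity holds a strict
    majority (bit 1 -> majority odd, bit 0 -> majority even)."""
    g = list(group)
    want = 1 if bit else 0
    while sum(1 for v in g if v % 2 == want) <= len(g) // 2:
        i = next(k for k, v in enumerate(g) if v % 2 != want)
        g[i] += 1                     # +1 flips that factor's parity
    return g

def detect_bit(group):
    """Majority parity of the group recovers the embedded bit."""
    odd = sum(v % 2 for v in group)
    return 1 if odd > len(group) - odd else 0
```

Larger groups change fewer factors per embedded bit on average (better transparency) but carry fewer bits per second (lower payload), which is the trade-off the abstract mentions.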
Integration of a Capacitive EIS Sensor into a FIA System for pH and Penicillin Determination
Rolka, David; Poghossian, Arshak; Schöning, Michael J.
2004-01-01
A field-effect based capacitive EIS (electrolyte-insulator-semiconductor) sensor with a p-Si-SiO2-Ta2O5 structure has been successfully integrated into a commercial FIA (flow-injection analysis) system, and the system performance has been evaluated and optimised for pH and penicillin detection. A flow-through cell was designed taking into account the requirement of a variable internal volume (from 12 μl up to 48 μl) as well as an easy replacement of the EIS sensor. The FIA parameters (sample volume, flow rate, distance between the injection valve and the EIS sensor) have been optimised in terms of high sensitivity and reproducibility as well as a minimum dispersion of the injected sample zone. An acceptable compromise between the different FIA parameters has been found. For the cell design used in this study, the best results have been achieved with a flow rate of 1.4 ml/min, a distance between the injection valve and the EIS sensor of 6.5 cm, a probe volume of 0.75 ml, and a cell internal volume of 12 μl. A sample throughput of at least 15 samples/h was typically obtained.
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.
2010-05-01
Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has led to increased attention to the management of evacuations in such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modelling solutions are obtained by sequentially solving two sub-models corresponding to the lower and upper bounds of the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.
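The interval-parameter idea can be illustrated minimally: for a fixed vehicle allocation, interval cost coefficients yield a lower- and an upper-bound objective value, mirroring the two sub-models. This is a toy sketch of the interval arithmetic only, not the full multi-objective model:

```python
def objective_bounds(x, c_lo, c_hi):
    """Best- and worst-case objective value of a non-negative allocation x
    when each cost coefficient is only known as an interval [c_lo_i, c_hi_i].
    Returns (lower-bound objective, upper-bound objective)."""
    lo = sum(xi * cl for xi, cl in zip(x, c_lo))
    hi = sum(xi * ch for xi, ch in zip(x, c_hi))
    return lo, hi
```

A decision maker can then compare allocations by their whole objective intervals rather than by single point estimates.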
Heat engine generator control system
Rajashekara, K.; Gorti, B.V.; McMullen, S.R.; Raibert, R.J.
1998-05-12
An electrical power generation system includes a heat engine having an output member operatively coupled to the rotor of a dynamoelectric machine. System output power is controlled by varying an electrical parameter of the dynamoelectric machine. A power request signal is related to an engine speed and the electrical parameter is varied in accordance with a speed control loop. Initially, the sense of change in the electrical parameter in response to a change in the power request signal is opposite that required to effectuate a steady state output power consistent with the power request signal. Thereafter, the electrical parameter is varied to converge the output member speed to the speed known to be associated with the desired electrical output power. 8 figs.
NASA Astrophysics Data System (ADS)
Belwanshi, Vinod; Topkar, Anita
2016-05-01
A finite element analysis study has been carried out to optimize the design parameters of bulk micro-machined silicon membranes for piezoresistive pressure sensing applications. The design is targeted at measurement of pressures up to 200 bar for nuclear reactor applications. The mechanical behavior of bulk micro-machined silicon membranes in terms of deflection and stress generation has been simulated. Based on the simulation results, optimization of the membrane design parameters in terms of length, width and thickness has been carried out. Subsequent to the optimization of the membrane geometrical parameters, the dimensions and location of the high stress concentration region for implantation of piezoresistors have been obtained for pressure sensing using the piezoresistive technique.
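For a first-pass sizing of such a membrane, classical small-deflection plate theory gives closed-form estimates. The clamped-square-plate coefficients below are textbook values for ν ≈ 0.3 and are an assumption for illustration, not results from the simulations:

```python
def clamped_square_plate(q, a, t, E, nu=0.3):
    """Small-deflection estimates for a clamped square membrane of side a
    and thickness t under uniform pressure q (classical plate theory;
    coefficients 0.00126 and 0.308 are standard for nu ~= 0.3).
    Returns (centre deflection, peak bending stress at the edge midpoint)."""
    D = E * t**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity
    w_max = 0.00126 * q * a**4 / D          # centre deflection
    s_max = 0.308 * q * (a / t)**2          # peak bending stress
    return w_max, s_max
```

The peak-stress location predicted by this formula (the edge midpoint) is also where piezoresistors are usually implanted, consistent with the FE-based placement described in the abstract.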
Chowdhury, M A K; Sharif Ullah, A M M; Anwar, Saqib
2017-09-12
Ti6Al4V alloys are difficult-to-cut materials that have extensive applications in the automotive and aerospace industries. A great deal of effort has been made to develop and improve the machining operations of Ti6Al4V alloys. This paper presents an experimental study that systematically analyzes the effects of the machining conditions (ultrasonic power, feed rate, spindle speed, and tool diameter) on the performance parameters (cutting force, tool wear, overcut error, and cylindricity error) while drilling high precision holes in workpieces made of Ti6Al4V alloys using rotary ultrasonic machining (RUM). Numerical results were obtained by conducting experiments following a design of experiments procedure. The effects of the machining conditions on each performance parameter have been determined by constructing a set of possibility distributions (i.e., trapezoidal fuzzy numbers) from the experimental data. A possibility distribution is a probability-distribution-neutral representation of uncertainty, and is effective in quantifying the uncertainty underlying physical quantities when there is a limited number of data points, as is the case here. Lastly, the optimal machining conditions have been identified using these possibility distributions.
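One plausible way to construct a trapezoidal fuzzy number from replicate measurements is to take the data range as the support and the inter-quartile range as the core. This construction rule is an assumption for illustration; the paper's rule may differ:

```python
def trapezoid_from_data(data):
    """Build a trapezoidal fuzzy number (a, b, c, d) from measurements:
    support [a, d] = data range, core [b, c] = inter-quartile range
    (a hypothetical rule; assumes enough distinct points that b > a, d > c)."""
    xs = sorted(data)
    n = len(xs)
    return xs[0], xs[n // 4], xs[(3 * n) // 4], xs[-1]

def membership(x, tfn):
    """Membership degree of x in the trapezoidal fuzzy number (a, b, c, d)."""
    a, b, c, d = tfn
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)
```

Comparing such trapezoids across machining conditions then gives a simple, distribution-free way to rank conditions under uncertainty.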
NASA Astrophysics Data System (ADS)
Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.
2018-01-01
Maraging steel (MDN 300) finds application in many industries, as its high hardness makes it a very difficult-to-machine material. Electro discharge machining (EDM) is an extensively popular machining process which can be used in machining of such materials. Optimization of the response parameters is essential for effective machining of these materials. Past researchers have already used the Taguchi method for obtaining the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse on time, pulse off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) with fuzzy logic is applied to this multi-objective optimization problem, and the responses are checked by implementing the derived parametric setting. It was found that the parametric setting derived by the proposed method results in a better response than those reported by the past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted result also shows a significant improvement in comparison to the results of past researchers.
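The GRA step (without the fuzzy-logic extension) can be sketched as follows: normalise each response toward its ideal, compute grey relational coefficients, and average them into a grade per experimental run. The distinguishing coefficient ζ = 0.5 is the conventional default; the responses here are assumed to vary across runs:

```python
import numpy as np

def grey_relational_grade(resp, larger_better, zeta=0.5):
    """Grey relational grade per run. resp: (n_runs, n_resp) response
    matrix; larger_better: one bool per response (True for MRR,
    False for TWR, RWR and SR)."""
    r = np.asarray(resp, dtype=float)
    norm = np.empty_like(r)
    for j, lb in enumerate(larger_better):
        lo, hi = r[:, j].min(), r[:, j].max()
        norm[:, j] = (r[:, j] - lo) / (hi - lo) if lb else (hi - r[:, j]) / (hi - lo)
    delta = 1.0 - norm                   # deviation from the ideal sequence
    gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return gamma.mean(axis=1)            # grade = mean coefficient per run
```

The run with the highest grade is taken as the best compromise setting across all responses.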
Machine processing of ERTS and ground truth data
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Peacock, K.
1973-01-01
The author has identified the following significant results. Results achieved by ERTS-Atmospheric Experiment PR303, whose objective is to establish a radiometric calibration technique, are reported. This technique, which determines and removes solar and atmospheric parameters that degrade the radiometric fidelity of ERTS-1 data, transforms the ERTS-1 sensor radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in machine processing ERTS-1 computer-compatible tapes and atmospheric parameters to obtain target reflectance are reviewed.
NASA Astrophysics Data System (ADS)
Ee, K. C.; Dillon, O. W.; Jawahir, I. S.
2004-06-01
This paper discusses the influence of major chip-groove parameters of a cutting tool on the chip formation process in orthogonal machining using finite element (FE) methods. In the FE formulation, a thermal elastic-viscoplastic material model is used together with a modified Johnson-Cook material law for the flow stress. The chip back-flow angle and the chip up-curl radius are calculated for a range of cutting conditions by varying the chip-groove parameters. The analysis provides greater understanding of the effectiveness of chip-groove configurations and points a way to correlate cutting conditions with tool-wear when machining with a grooved cutting tool.
NASA Astrophysics Data System (ADS)
Sudhakara, Dara; Prasanthi, Guvvala
2017-04-01
Wire Cut EDM is an unconventional machining process used to build components of complex shape. The current work mainly deals with optimization of surface roughness while machining P/M CW tool steel by Wire Cut EDM using the Taguchi method. The process parameters of the Wire Cut EDM are ON, OFF, IP, SV, WT, and WP. An L27 orthogonal array is used to design the experiments. ANOVA is employed to identify the parameters affecting the surface roughness. The optimum levels for minimum surface roughness are ON = 108 µs, OFF = 63 µs, IP = 11 A, SV = 68 V and WT = 8 g.
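The Taguchi criterion behind such an optimisation is typically the smaller-the-better signal-to-noise ratio, computed per parameter setting from replicate surface-roughness measurements:

```python
import math

def sn_smaller_better(ys):
    """Taguchi smaller-the-better signal-to-noise ratio in dB:
    SN = -10 * log10(mean(y^2)). A higher SN means lower, more
    consistent surface roughness."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))
```

The optimum level of each factor is the one with the highest mean S/N across the L27 runs in which it appears.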
NASA Astrophysics Data System (ADS)
Govorov, Michael; Gienko, Gennady; Putrenko, Viktor
2018-05-01
In this paper, several supervised machine learning algorithms were explored to define homogeneous regions of concentration of uranium in surface waters in Ukraine using multiple environmental parameters. The previous study was focused on finding the primary environmental parameters related to uranium in ground waters using several methods of spatial statistics and unsupervised classification. At this step, we refined the regionalization using Artificial Neural Network (ANN) techniques including the Multilayer Perceptron (MLP), Radial Basis Function (RBF) network, and Convolutional Neural Network (CNN). The study is focused on building local ANN models, which may significantly improve the prediction results of machine learning algorithms by taking into consideration non-stationarity and autocorrelation in spatial data.
Motion Simulation Analysis of Rail Weld CNC Fine Milling Machine
NASA Astrophysics Data System (ADS)
Mao, Huajie; Shu, Min; Li, Chao; Zhang, Baojun
The CNC fine milling machine is a new advanced machine for rail weld precision machining with high precision, high efficiency, low environmental pollution and other technical advantages. The motion performance of this machine directly affects its machining accuracy and stability, which makes it an important consideration in its design. Based on the design drawings, this article completed 3D modeling of a 60 kg/m rail weld CNC fine milling machine using Solidworks. After that, the geometry was imported into Adams to carry out the motion simulation analysis. The displacement, velocity, angular velocity and some other kinematic parameter curves of the main components were obtained in the post-processing, and these provide the scientific basis for the design and development of this machine.
Miranda-Fuentes, Antonio; Rodríguez-Lizana, Antonio; Gil, Emilio; Agüera-Vega, J; Gil-Ribes, Jesús A
2015-12-15
Olive is a key crop in Europe, especially in countries around the Mediterranean Basin. Optimising the parameters of a spray is essential for sustainable pesticide use, especially in high-input systems such as the super-intensive hedgerow system. Parameters may be optimised by adjusting the applied volume and airflow rate of sprays, in addition to the liquid-to-air proportion and the relationship between air velocity and airflow rate. Two spray experiments using a commercial airblast sprayer were conducted in a super-intensive orchard to study how varying the liquid volume rate (testing volumes of 182, 619, and 1603 l ha(-1)) and volumetric airflow rate (with flow rates of 11.93, 8.90, and 6.15 m(3) s(-1)) influences the coverage parameters and the amount and distribution of deposits in different zones of the canopy. Our results showed that an increase in the application volume raised the mean deposit and percentage coverage, but decreased the application efficiency, spray penetration, and deposit homogeneity. Furthermore, we found that the volumetric airflow rate had a lower influence on the studied parameters than the liquid volume; however, an increase in the airflow rate improved the application efficiency and homogeneity up to a certain threshold, after which the spray quality decreased. This decrease was observed in the high-flow treatment. Our results demonstrate that intermediate liquid volume rates and volumetric airflow rates are required for the optimal spraying of pesticides on super-intensive olive crops, and would reduce current pollution levels.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
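The KELM core is a closed-form kernel ridge solution. A minimal sketch with an RBF kernel follows; the CSA plus Nelder-Mead search over the (C, γ) pair described in the abstract is not reproduced:

```python
import numpy as np

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_train(X, T, C=100.0, gamma=1.0):
    """One standard KELM formulation: output weights
    beta = (I/C + K)^-1 T, with K the training kernel matrix."""
    beta = np.linalg.solve(np.eye(len(X)) / C + rbf(X, X, gamma), T)
    return X, beta, gamma

def kelm_predict(model, Xnew):
    """Predict targets for new samples: K(Xnew, X) @ beta."""
    X, beta, gamma = model
    return rbf(Xnew, X, gamma) @ beta
```

For the compensation task, X would hold (raw sensor output, temperature, static pressure) triples and T the reference differential pressure; the (C, γ) pair would then be tuned by the hybrid search.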
Investigation of irradiated 1H-Benzo[b]pyrrole by ESR, thermal methods and learning algorithm
NASA Astrophysics Data System (ADS)
Algul, Gulay; Ceylan, Yusuf; Usta, Keziban; Yumurtaci Aydogmus, Hacer; Usta, Ayhan; Asik, Biray
2016-05-01
1H-Benzo[b]pyrrole samples were irradiated in air with a gamma source at 0.969 kGy per hour at room temperature for 24, 48 and 72 h. After irradiation, electron spin resonance (ESR), thermogravimetry analysis (TGA) and differential thermal analysis (DTA) measurements were immediately carried out on the irradiated and unirradiated samples. The ESR measurements were performed between 320 and 400 K. ESR spectra were recorded from the samples irradiated for 48 and 72 h. The obtained spectra were observed to be dependent on temperature. Two radical-type centres were detected in the sample. The detected radiation-induced radicals were attributed to R-+•NH and R=•CC2H2. The g-values and hyperfine constants were calculated from the experimental spectra. It was also determined from the TGA spectra that both the unirradiated and irradiated samples decomposed in a single step with rising temperature. Moreover, a theoretical study was presented, and the success of machine learning methods was tested. It was found that bagging techniques, which are widely used in the machine learning literature, could optimise prediction accuracy noticeably.
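Bagging, the technique the study found helpful, is easy to illustrate: fit many copies of a base learner on bootstrap resamples and average their predictions. A generic sketch on synthetic data; the base learner (a cubic polynomial fit) and the data are stand-ins, not the paper's spectral features:

```python
import numpy as np

# Toy bagging sketch: average base learners fitted on bootstrap resamples.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy target

def bagged_predict(x, y, x_new, n_estimators=50):
    preds = []
    for _ in range(n_estimators):
        idx = rng.integers(0, x.size, x.size)    # bootstrap sample
        coef = np.polyfit(x[idx], y[idx], 3)     # base learner: cubic fit
        preds.append(np.polyval(coef, x_new))
    return np.mean(preds, axis=0)                # aggregate by averaging

y_hat = bagged_predict(x, y, x)
mse = float(np.mean((y_hat - np.sin(2 * np.pi * x)) ** 2))
print(round(mse, 3))  # error of the bagged ensemble against the clean signal
```

Averaging over resamples reduces the variance of the individual fits, which is the mechanism behind the accuracy improvement the abstract reports.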
The Integration of an API619 Screw Compressor Package into the Industrial Internet of Things
NASA Astrophysics Data System (ADS)
Milligan, W. J.; Poli, G.; Harrison, D. K.
2017-08-01
The Industrial Internet of Things (IIoT) is the industrial subset of the Internet of Things (IoT). IIoT incorporates big data technology, harnessing the instrumentation data, machine-to-machine communication and automation technologies that have existed in industrial settings for years. As industry in general trends towards the IIoT, and as the screw compressor packages developed by Howden Compressors are designed with a minimum design life of 25 years, it is imperative that this technology is embedded immediately. This paper provides the reader with a description of the Industrial Internet of Things before moving on to describe the scope of the problem for an organisation like Howden Compressors, which deploys multiple compressor technologies across multiple locations, and focuses on the critical measurements particular to high-specification screw compressor packages. A brief analysis of how this differs from high-volume package manufacturers deploying similar systems is offered. There then follows a description of how the measured information gets from the tip of the instrument in the process pipework or drive train through the different layers, with a description of each layer, into the final presentation layer. The functions available within the presentation layer are taken in turn and the benefits analysed, with specific focus on efficiency and availability. The paper concludes with how packagers adopting the IIoT can not only optimise their packages but, by utilising machine learning technology and pattern detection applications, can also adopt completely new business models.
Assessing the depth of hypnosis of xenon anaesthesia with the EEG.
Stuttmann, Ralph; Schultz, Arthur; Kneif, Thomas; Krauss, Terence; Schultz, Barbara
2010-04-01
Xenon was approved as an inhaled anaesthetic in Germany in 2005 and in other countries of the European Union in 2007. Owing to its low blood/gas partition coefficient, xenon's effects on the central nervous system show a fast onset and offset and, even after long xenon anaesthetics, the wake-up times are very short. The aim of this study was to examine which electroencephalogram (EEG) stages are reached during xenon application and whether these stages can be identified by an automatic EEG classification. Therefore, EEG recordings were performed during xenon anaesthetics (EEG monitor: Narcotrend®). A total of 300 EEG epochs were assessed visually with regard to the EEG stages. These epochs were also classified automatically by the EEG monitor Narcotrend® using multivariate algorithms. There was a high correlation between visual and automatic classification (Spearman's rank correlation coefficient r=0.957, prediction probability Pk=0.949). Furthermore, it was observed that very deep stages of hypnosis were reached, which are characterised by EEG activity in the low frequency range (delta waves). The burst suppression pattern was not seen. In deep hypnosis, in contrast to the xenon EEG, the propofol EEG was characterised by a marked superimposed higher-frequency activity. To ensure an optimised dosage for the individual patient, anaesthetic machines for xenon should be combined with EEG monitoring. To date, only a few anaesthetic machines for xenon are available. Because of the high price of xenon, new and further developments of machines focus on optimising xenon consumption.
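The agreement statistic quoted above, Spearman's rank correlation, is computed directly from ranks (with ties averaged). A self-contained sketch on hypothetical stage labels; the study's 300 epochs are not reproduced here:

```python
import numpy as np

# Spearman's rank correlation: Pearson correlation of the rank vectors,
# with tied values assigned their average rank.

def rankdata(a):
    a = np.asarray(a)
    order = np.argsort(a, kind="stable")
    ranks = np.empty(len(a), float)
    ranks[order] = np.arange(1, len(a) + 1)
    for v in np.unique(a):          # average the ranks of tied values
        mask = a == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(x, y):
    rx, ry = rankdata(x), rankdata(y)
    rx = rx - rx.mean()
    ry = ry - ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

visual    = [1, 2, 2, 3, 4, 5, 5, 6]   # hypothetical visual stage labels
automatic = [1, 2, 3, 3, 4, 5, 6, 6]   # hypothetical automatic labels
print(round(spearman(visual, automatic), 3))  # → 0.963
```

A value near 1, as in the study's r=0.957, indicates that the automatic classifier orders the epochs almost identically to the visual assessment.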
Pearson's Functions to Describe FSW Weld Geometry
NASA Astrophysics Data System (ADS)
Lacombe, D.; Gutierrez-Orrantia, M. E.; Coupard, D.; Tcherniaeff, S.; Girot, F.
2011-01-01
Friction stir welding (FSW) is a relatively new joining technique, particularly for aluminium alloys that are difficult to fusion weld. In this study, the geometry of the weld has been investigated and modelled using Pearson's functions. It has been demonstrated that the Pearson's parameters (mean, standard deviation, skewness, kurtosis and geometric constant) can be used to characterize the weld geometry and the tensile strength of the weld assembly. Pearson's parameters and process parameters are strongly correlated, allowing a control procedure to be defined for FSW assemblies that makes radiographic or ultrasonic controls unnecessary. Finally, an optimisation using a Generalized Gradient Method allows the weld geometry that maximises the assembly tensile strength to be determined.
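Four of the five Pearson parameters named above are the first standardized moments of the profile. A sketch of how they might be extracted from a measured weld cross-section, using synthetic depth samples rather than real FSW data:

```python
import numpy as np

# Mean, standard deviation, skewness and kurtosis of a profile -- the
# moment parameters that (with a geometric constant) fix a Pearson-type
# description of the weld shape.

def shape_moments(z):
    z = np.asarray(z, float)
    mu = z.mean()
    sigma = z.std()
    skew = float(np.mean(((z - mu) / sigma) ** 3))
    kurt = float(np.mean(((z - mu) / sigma) ** 4))
    return mu, sigma, skew, kurt

rng = np.random.default_rng(2)
profile = rng.normal(5.0, 0.8, 10_000)   # hypothetical depth samples (mm)
mu, sigma, skew, kurt = shape_moments(profile)
print(round(mu, 2), round(sigma, 2), round(skew, 2), round(kurt, 2))
```

For this Gaussian toy profile the moments come out near (5.0, 0.8, 0, 3); a real weld profile would show the asymmetry (skewness) and tail weight (kurtosis) that the Pearson family is designed to capture.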
Kerns, James R; Followill, David S; Lowenstein, Jessica; Molineu, Andrea; Alvarez, Paola; Taylor, Paige A; Stingo, Francesco C; Kry, Stephen F
2016-05-01
Accurate data regarding linear accelerator (Linac) radiation characteristics are important for treatment planning system modeling as well as regular quality assurance of the machine. The Imaging and Radiation Oncology Core-Houston (IROC-H) has measured the dosimetric characteristics of numerous machines through their on-site dosimetry review protocols. Photon data are presented and can be used as a secondary check of acquired values, as a means to verify commissioning of a new machine, or in preparation for an IROC-H site visit. Photon data from IROC-H on-site reviews from 2000 to 2014 were compiled and analyzed. Specifically, data from approximately 500 Varian machines were analyzed. Each dataset consisted of point measurements of several dosimetric parameters at various locations in a water phantom to assess the percentage depth dose, jaw output factors, multileaf collimator small field output factors, off-axis factors, and wedge factors. The data were analyzed by energy and parameter, with similarly performing machine models being assimilated into classes. Common statistical metrics are presented for each machine class. Measurement data were compared against other reference data where applicable. Distributions of the parameter data were shown to be robust and to follow a Student's t distribution. Based on statistical and clinical criteria, all machine models could be classified into two or three classes for each energy, except for 6 MV, for which there were eight classes. Quantitative analysis of the measurements for 6, 10, 15, and 18 MV photon beams is presented for each parameter; supplementary material has also been made available which contains further statistical information. IROC-H has collected numerous data on Varian Linacs, and the results of photon measurements from the past 15 years are presented. The data can be used as a comparison check of a physicist's acquired values.
Acquired values that are well outside the expected distribution should be verified by the physicist to identify whether the measurements are valid. Comparison of values to this reference data provides a redundant check to help prevent gross dosimetric treatment errors.
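The redundant check described here reduces, in its simplest form, to asking how many standard deviations an acquired value sits from its machine-class mean. A sketch with placeholder numbers; the class statistics and output-factor values below are invented, not IROC-H data:

```python
# Flag an acquired beam parameter that falls well outside the reference
# distribution for its machine class. Class mean/SD are placeholders.

def z_score(value, class_mean, class_sd):
    return (value - class_mean) / class_sd

def needs_verification(value, class_mean, class_sd, threshold=3.0):
    # |z| > 3 is one reasonable reading of "well outside the expected
    # distribution"; a site would pick its own action level.
    return abs(z_score(value, class_mean, class_sd)) > threshold

# Hypothetical output factor for a 6 MV machine class.
print(needs_verification(0.979, class_mean=0.980, class_sd=0.004))  # False
print(needs_verification(0.960, class_mean=0.980, class_sd=0.004))  # True
```

A flagged value is not necessarily wrong; as the abstract notes, it should prompt the physicist to re-verify the measurement before clinical use.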
AFM surface imaging of AISI D2 tool steel machined by the EDM process
NASA Astrophysics Data System (ADS)
Guu, Y. H.
2005-04-01
The surface morphology, surface roughness and micro-cracks of AISI D2 tool steel machined by the electrical discharge machining (EDM) process were analyzed by means of the atomic force microscopy (AFM) technique. Experimental results indicate that the surface texture after EDM is determined by the discharge energy during processing. An excellent machined finish can be obtained by setting the machine parameters at a low pulse energy. The surface roughness and the depth of the micro-cracks were proportional to the power input. Furthermore, the information about micro-crack depth yielded by the AFM is particularly important in the post-treatment of AISI D2 tool steel machined by EDM.
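As a small illustration of the kind of quantity extracted from such AFM scans, the arithmetic-mean roughness Ra of a height profile is the mean absolute deviation of the heights from their mean line. The heights below are hypothetical:

```python
import numpy as np

# Arithmetic-mean roughness Ra = mean |z - mean(z)| over the profile.
def roughness_ra(z):
    z = np.asarray(z, float)
    return float(np.mean(np.abs(z - z.mean())))

z = np.array([0.2, -0.1, 0.4, -0.3, 0.0, 0.1, -0.3])  # heights in µm
print(round(roughness_ra(z), 3))  # → 0.2
```

In the study's terms, higher pulse energy raises both this roughness figure and the micro-crack depth, which is why low pulse energy gives the excellent finish.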
Survey of beam instrumentation used in SLC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecklund, S.D.
A survey of beam instruments used at SLAC in the SLC machine is presented. The basic utility and operation of each device is briefly described. The various beam instruments used at the Stanford Linear Collider (SLC) can be classified by the function they perform. Beam intensity, position and size are typical of the beam parameters which are measured. Each type of parameter is important for adjusting or tuning the machine in order to achieve optimum performance. 39 refs.
Physical mechanism of ultrasonic machining
NASA Astrophysics Data System (ADS)
Isaev, A.; Grechishnikov, V.; Kozochkin, M.; Pivkin, P.; Petuhov, Y.; Romanov, V.
2016-04-01
In this paper, the main aspects of ultrasonic machining of constructional materials are considered. Influence of coolant on surface parameters is studied. Results of experiments on ultrasonic lathe cutting with application of tangential vibrations and with use of coolant are considered.
NASA Astrophysics Data System (ADS)
Soepangkat, Bobby O. P.; Suhardjono, Pramujati, Bambang
2017-06-01
Machining under minimum quantity lubrication (MQL) has drawn the attention of researchers as an alternative to the traditionally used wet and dry machining conditions, with the purpose of minimizing the cooling and lubricating cost as well as reducing cutting zone temperature, tool wear, and hole surface roughness. Drilling is one of the important operations in assembling machine components. The objective of this study was to optimize drilling parameters such as cutting feed, cutting speed, drill type and drill point angle with respect to the thrust force, torque, hole surface roughness and tool flank wear in drilling EMS 45 tool steel using MQL. In this study, experiments were carried out as per the Taguchi design of experiments, while an L18 orthogonal array was used to study the influence of various combinations of drilling parameters and tool geometries on the thrust force, torque, hole surface roughness and tool flank wear. The optimum drilling parameters were determined by using the grey relational grade obtained from grey relational analysis for multiple performance characteristics. The drilling experiments were carried out using twist drills and a CNC machining center. This work is useful for selecting optimum values of various drilling parameters and tool geometries that would not only minimize the thrust force and torque, but also reduce hole surface roughness and tool flank wear.
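The grey relational grade used to rank the experimental runs can be sketched in a few lines: normalise each response, measure each run's deviation from the ideal sequence, convert to grey relational coefficients, and average across responses. The response values below are invented for illustration, with all four responses treated as smaller-the-better:

```python
import numpy as np

# Grey relational analysis for multi-response optimisation.
# Rows = experimental runs; columns = responses (e.g. thrust force,
# torque, roughness, flank wear), all smaller-the-better here.

def grey_relational_grade(data, zeta=0.5):
    data = np.asarray(data, float)
    # 1) Normalise each response to [0, 1] (smaller-the-better).
    norm = (data.max(0) - data) / (data.max(0) - data.min(0))
    # 2) Deviation from the ideal sequence (all ones after normalising).
    delta = 1.0 - norm
    # 3) Grey relational coefficients, then the grade as the row mean.
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

runs = [[120, 0.8, 1.6, 0.12],     # hypothetical measured responses
        [100, 0.6, 1.9, 0.10],
        [140, 0.9, 1.4, 0.15]]
grade = grey_relational_grade(runs)
print(int(np.argmax(grade)))  # → 1 : run with the best overall grade
```

The run with the highest grade is the multi-response optimum; with a real L18 array, the grades would additionally be averaged per factor level to pick the optimal parameter combination.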
Machine Learning for Mapping Groundwater Salinity with Oil Well Log Data
NASA Astrophysics Data System (ADS)
Chang, W. H.; Shimabukuro, D.; Gillespie, J. M.; Stephens, M.
2016-12-01
An oil field may have thousands of wells with detailed petrophysical logs, and far fewer direct measurements of groundwater salinity. Can the former be used to extrapolate the latter into a detailed map of groundwater salinity? California Senate Bill 4, with its requirement to identify Underground Sources of Drinking Water, makes this a question worth answering. A well-known obstacle is that the basic petrophysical equations describe ideal scenarios ("clean wet sand") and even these equations contain many parameters that may vary with location and depth. Accounting for other common scenarios such as high-conductivity shaly sands or low-permeability diatomite (both characteristic of California's Central Valley) causes parameters to proliferate to the point where the model is underdetermined by the data. When parameters outnumber data points, however, is when machine learning methods are most advantageous. We present a method for modeling a generic oil field, where groundwater salinity and lithology are depth series parameters, and the constants in petrophysical equations are scalar parameters. The data are well log measurements (resistivity, porosity, spontaneous potential, and gamma ray) and a small number of direct groundwater salinity measurements. Embedded in the model are petrophysical equations that account for shaly sand and diatomite formations. As a proof of concept, we feed in well logs and salinity measurements from the Lost Hills Oil Field in Kern County, California, and show that with proper regularization and validation the model makes reasonable predictions of groundwater salinity despite the large number of parameters. The model is implemented using TensorFlow, open-source software released by Google in November 2015 that has been rapidly and widely adopted by machine learning researchers. The code will be made available on Github, and we encourage scrutiny and modification by machine learning researchers and hydrogeologists alike.
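The regularization idea invoked above can be shown with the simplest underdetermined model: more parameters than observations, made uniquely solvable by an L2 penalty. This generic ridge-regression sketch stands in for the paper's TensorFlow model, which it does not attempt to reproduce:

```python
import numpy as np

# With p > n, ordinary least squares has infinitely many solutions
# (X^T X is singular); adding lam*I makes the system well-posed.
rng = np.random.default_rng(3)
n, p = 30, 100                        # 30 observations, 100 parameters
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:3] = [2.0, -1.0, 0.5]         # only a few parameters matter
y = X @ true_w + rng.normal(0, 0.01, n)

lam = 0.1
# Ridge solution: w = (X^T X + lam I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

pred = X @ w
mse_train = float(np.mean((pred - y) ** 2))
print(round(mse_train, 6))  # the regularized fit reproduces the data
```

In the paper's setting the "parameters" are the depth series of salinity and lithology plus the petrophysical constants, and validation against held-out salinity measurements plays the role that the penalty strength `lam` plays here.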
The construction of support vector machine classifier using the firefly algorithm.
Chao, Chih-Feng; Horng, Ming-Huwi
2015-01-01
The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This method does not consider feature selection, because the SVM together with feature selection is not suitable for application in multiclass classification, especially for the one-against-all multiclass SVM. In experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten of the benchmark data sets of the University of California, Irvine (UCI), machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared to the original LIBSVM method associated with the grid search method and the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM to classify pattern classifications for maximum accuracy.
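The firefly algorithm itself is a small population heuristic: dimmer fireflies move toward brighter ones with an attractiveness that decays with distance, plus a shrinking random step. A minimal sketch minimising a stand-in objective (a sphere function rather than an actual SVM cross-validation error; all constants are conventional defaults, not the paper's settings):

```python
import math
import random

# Minimal firefly algorithm for continuous minimisation.
def firefly_minimise(f, dim=2, n=20, iters=80, beta0=1.0, gamma=1.0, alpha=0.2):
    random.seed(4)
    xs = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in xs]              # brightness (lower f = brighter)
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:          # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                    xs[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
        alpha *= 0.97                          # cool the random step
    best = min(xs, key=f)
    return best, f(best)

# Stand-in objective; in firefly-SVM this would be a cross-validation error
# evaluated at candidate (penalty, kernel) parameter points.
best, val = firefly_minimise(lambda x: sum(v * v for v in x))
print(round(val, 5))
```

The brightest firefly never moves within an iteration, so the best point found is never lost; improvements come from the attraction moves and the decaying random jitter.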
Giasin, Khaled; Ayvar-Soberanis, Sabino
2016-07-28
The rise in cutting temperatures during the machining process can influence the final quality of the machined part. The impact of cutting temperatures is more critical when machining composite-metal stacks and fiber metal laminates due to the stacking nature of those hybrids, which subjects the composite to heat from direct contact with the metallic part of the stack and the evacuated hot chips. In this paper, the workpiece surface temperature of two grades of fiber metal laminates commercially known as GLARE is investigated. An experimental study was carried out using thermocouples and infrared thermography to determine the emissivity of the upper, lower and side surfaces of GLARE laminates. In addition, infrared thermography was used to determine the maximum temperature of the bottom surface of machined holes during drilling GLARE under dry and minimum quantity lubrication (MQL) cooling conditions under different cutting parameters. The results showed that during the machining process, the workpiece surface temperature increased with the increase in feed rate, and fiber orientation influenced the developed temperature in the laminate.
Computational Fluid Dynamic Simulation of Flow in Abrasive Water Jet Machining
NASA Astrophysics Data System (ADS)
Venugopal, S.; Sathish, S.; Jothi Prakash, V. M.; Gopalakrishnan, T.
2017-03-01
Abrasive water jet cutting is one of the most recently developed non-traditional manufacturing technologies. In this machining process, the abrasives are mixed with a suspending liquid to form a semi-liquid mixture. The general nature of the flow through the machining head results in rapid wear of the nozzle, which decreases the cutting performance. The inlet pressure of the abrasive water suspension has the main effect on the major destruction characteristics of the inner surface of the nozzle. The aim of the project is to analyze the effect of inlet pressure on wall shear and exit kinetic energy. The analysis is carried out by changing the taper angle of the nozzle, so as to obtain optimized process parameters for minimum nozzle wear. The two-phase flow analysis was carried out using the computational fluid dynamics tool CFX, which is also used to analyze the flow characteristics of abrasive water jet machining on the inner surface of the nozzle. The availability of optimized process parameters for abrasive water jet machining (AWJM) is limited, and experimental tests can be cost-prohibitive. In this case, computational fluid dynamics analysis provides better results.
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm, especially for high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM. PMID:27143958
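The "adaptive data mean update" in the title is, at its core, a running mean that is revised as each chunk arrives, without revisiting old data. A minimal sketch of just that piece (the projection and network updates of OSPVM are not reproduced here):

```python
import numpy as np

# Chunk-by-chunk running mean: the centering vector used for data
# centering is updated incrementally from each incoming chunk.
class RunningMean:
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)

    def update(self, chunk):
        chunk = np.atleast_2d(np.asarray(chunk, float))
        total = self.n + len(chunk)
        # Weighted combination of the old mean and the chunk sum.
        self.mean = (self.n * self.mean + chunk.sum(axis=0)) / total
        self.n = total
        return self.mean

rm = RunningMean(2)
rm.update([[1.0, 2.0], [3.0, 4.0]])   # first chunk
rm.update([[5.0, 6.0]])               # second chunk (one-by-one mode)
print(rm.mean)  # → [3. 4.], identical to the batch mean of all rows
```

Because the update is exact, centering with this mean matches what a batch algorithm would compute, which is what lets the online model stay consistent as data streams in.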
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
NASA Astrophysics Data System (ADS)
Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.
2011-02-01
Machinable glass ceramic is an attractive advanced ceramic material for producing high-accuracy miniaturized components for many applications in industries such as aerospace, electronics, biomedical, automotive and environmental communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in the micro end-milling operation.
Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions
Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima
2013-01-01
The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles where their movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for the multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating the nondominated solutions as the guidance for a swarm rather than using the best solution from another swarm. In this paper, the performance of improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm. PMID:23737718
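The key ingredient of the improvement, the nondominated (Pareto) set used as guidance, is straightforward to extract from a list of objective vectors. A sketch for a two-objective minimisation problem with arbitrary example points:

```python
# Pareto filtering for minimisation: a solution is nondominated if no
# other solution is at least as good in every objective and strictly
# better in at least one.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 3.0)]
print(nondominated(pts))  # → [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

In the improved VEPSO, a swarm's particles are guided by members of this set rather than by the single best solution of another swarm, which is what spreads the search along the whole Pareto front.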
Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation
NASA Astrophysics Data System (ADS)
MacNish, Cara
2007-12-01
Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.
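A recursively generated self-similar landscape of the kind described can be sketched in one dimension with midpoint displacement: each level halves the segments and adds a displacement whose amplitude shrinks geometrically. This is in the spirit of the paper's fractal benchmarks, not its exact generator:

```python
import random

# 1-D self-similar test landscape via midpoint displacement.
def fractal_landscape(depth=8, roughness=0.5, seed=42):
    random.seed(seed)
    pts = [0.0, 0.0]                      # flat baseline between endpoints
    scale = 1.0
    for _ in range(depth):
        nxt = []
        for a, b in zip(pts, pts[1:]):
            mid = (a + b) / 2 + random.uniform(-scale, scale)
            nxt += [a, mid]               # keep left endpoint, insert midpoint
        nxt.append(pts[-1])
        pts = nxt
        scale *= roughness                # shrink displacement each level
    return pts

land = fractal_landscape()
print(len(land))  # → 257, i.e. 2^8 + 1 samples
```

Because the roughness recurs at every scale, such landscapes avoid the global regularities (symmetry, optima at nice coordinates) that can bias benchmarks toward particular algorithm styles.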
Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence
2015-01-01
This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays. Two real-time thermocyclers with different throughput capacities were used. The analysed performance evaluation criteria included the generation of the standard curve, reaction efficiency, analytical sensitivity, intra- and interassay repeatability, as well as the costs and practicability of the kits and thermocycling times. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, and intra- and interreproducibility between the two methods. The limit of detection at the 95% confidence level varied from 15 to 981 copies/µL for one-step kits and from 41 to 171 copies/µL for two-step kits. Of the ten kits tested, the most efficient kit was the Quantitect SYBR Green qRT-PCR, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the thermocyclers Rotor gene Q MDx and MX3005P, respectively. The study demonstrated the pivotal influence of the thermocycler on PCR performance for the detection of rabies RNA, as well as that of the master mixes.
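Reaction efficiency, one of the criteria above, is conventionally derived from the slope of the standard curve (Cq against log10 template copies) as E = 10^(-1/slope) - 1, with slope = -3.32 corresponding to 100% efficiency. A sketch with illustrative dilution data, not the study's measurements:

```python
# PCR amplification efficiency from a standard-curve slope.
def fit_slope(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

log_copies = [6, 5, 4, 3, 2]                  # log10 template copies
cq         = [15.1, 18.4, 21.7, 25.0, 28.3]  # illustrative Cq values
slope = fit_slope(log_copies, cq)
eff = 10 ** (-1 / slope) - 1                  # E = 10^(-1/slope) - 1
print(round(slope, 2), round(eff, 3))         # → -3.3 1.009
```

An efficiency near 1 (100%), as here, means the template roughly doubles each cycle; deviations on a given thermocycler/master-mix pairing are exactly the kind of difference the study set out to quantify.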
NASA Astrophysics Data System (ADS)
Zhou, Qianxiang; Liu, Zhongqi
With the development of crewed space technology, rendezvous and docking (RVD) will play an increasingly important role, and astronaut participation in the final close-approach phase through combined man-machine control is an important element of RVD technology. Spacecraft RVD control involves a total of 12 degrees of freedom, covering both position and attitude relative to inertial space and to the orbit. Therefore, in order to reduce the astronauts' operational workload, relax the safety requirements placed on ground stations, and achieve optimal performance of the overall man-machine system, it is necessary to determine how many control parameters should be assigned to the astronaut and how many to the spacecraft's automatic control system. In this study, an experimental system was developed under laboratory conditions on the ground to evaluate the performance of integrated man-machine RVD control. After the RVD precision requirements were determined, 26 male volunteers aged 20-40 took part in performance evaluation experiments, with RVD success rate and total thruster ignition time as the evaluation indices. Results show that when the subject handled no more than three RVD control parameters and automation completed the rest, the RVD success rate exceeded 88% and fuel consumption was optimised. In addition, two subjects managed all six RVD control parameters after sufficient training. In conclusion, if the astronaut's role is to be integrated into RVD control, assigning the heading, pitch and roll channels to the astronaut is suitable for ensuring high man-machine system performance. If astronauts are required to control all parameters, two conditions must be met: sufficient fuel and a sufficiently long operation time.
NASA Astrophysics Data System (ADS)
Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.
2018-03-01
Background/Objectives: This paper discusses optimum cutting parameters under different coolant conditions (1.0 mm nozzle orifice, wet, and dry) to optimise surface roughness, temperature and tool wear in the machining process, based on the selected setting parameters. The cutting parameters selected for this study were cutting speed, feed rate, depth of cut and coolant condition. Methods/Statistical Analysis: Experiments were conducted and analysed based on a Design of Experiments (DOE) with the Response Surface Method. This study of aggressive machining of aluminium alloy A319 for automotive applications is an effort to understand a machining process widely used across manufacturing industries, especially the automotive industry. Findings: The results show that surface roughness, temperature and tool wear increase during machining with the 1.0 mm nozzle orifice, and that this condition can also help minimise built-up edge on A319. Surface roughness, productivity and the optimisation of cutting speed in the technical and commercial aspects of manufacturing A319 automotive components are identified as further work. Applications/Improvements: The results are also beneficial in minimising costs and improving the productivity of manufacturing firms. Based on the mathematical models and equations generated by CCD-based RSM, experiments were performed, and a coolant-delivery technique using the selected nozzle size was found to reduce tool wear, surface roughness and temperature. The results were analysed and the cutting parameters optimised, showing that the effectiveness and efficiency of the system can be identified, helping to solve potential problems.
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparably little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose this reflects differences in the search landscape and can serve as a measure for problem difficulty and for suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
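The specific-heat diagnostic described in this abstract can be sketched in a few lines: during an annealing run, the variance of the energy samples collected at each temperature, divided by T², estimates the specific heat, and its peaks mark the search's "phase transitions". The sketch below is illustrative only, with generic cost and neighbour functions, not the authors' phylogeny implementation:

```python
import math, random

def specific_heat(energies, T):
    """Var(E)/T^2 -- peaks in this profile mark simulated-annealing 'phase transitions'."""
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    return var / T ** 2

def anneal(cost, neighbour, x0, T0=10.0, alpha=0.95, sweeps=60, samples=200):
    """Generic simulated-annealing loop that records a specific-heat profile
    (one value per temperature on a geometric cooling schedule)."""
    x, T, profile = x0, T0, []
    for _ in range(sweeps):
        energies = []
        for _ in range(samples):
            y = neighbour(x)
            dE = cost(y) - cost(x)
            # Metropolis acceptance: always downhill, uphill with prob exp(-dE/T)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                x = y
            energies.append(cost(x))
        profile.append((T, specific_heat(energies, T)))
        T *= alpha
    return x, profile
```

Plotting the second element of `profile` against temperature gives the material-like "melting curve" the authors propose as a measure of problem difficulty.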
Vandecasteele, Frederik P J; Hess, Thomas F; Crawford, Ronald L
2007-07-01
The functioning of natural microbial ecosystems is determined by biotic interactions, which are in turn influenced by abiotic environmental conditions. Direct experimental manipulation of such conditions can be used to purposefully drive ecosystems toward exhibiting desirable functions. When a set of environmental conditions can be manipulated to be present at a discrete number of levels, finding the right combination of conditions to obtain the optimal desired effect becomes a typical combinatorial optimisation problem. Genetic algorithms are a class of robust and flexible search and optimisation techniques from the field of computer science that may be very suitable for such a task. To verify this idea, datasets containing growth levels of the total microbial community of four different natural microbial ecosystems in response to all possible combinations of a set of five chemical supplements were obtained. Subsequently, the ability of a genetic algorithm to search this parameter space for combinations of supplements driving the microbial communities to high levels of growth was compared to that of a random search, a local search, and a hill-climbing algorithm, three intuitive alternative optimisation approaches. The results indicate that a genetic algorithm is very suitable for driving microbial ecosystems in desirable directions, which opens opportunities for both fundamental ecological research and industrial applications.
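A genetic algorithm over discrete supplement combinations, as used above, can be sketched with bit-strings (1 = supplement present, 0 = absent). This is a minimal illustrative sketch; in the study the fitness function would be the measured community growth for that combination, which is stood in for here by any user-supplied callable:

```python
import random

def ga(fitness, n_bits=5, pop_size=8, generations=30, p_mut=0.1):
    """Tiny generational GA over bit-strings (e.g. supplement present/absent)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)           # one-point crossover
            child = a[:cut] + b[cut:]
            # mutation: flip each bit with probability p_mut
            child = [g ^ (random.random() < p_mut) for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

With five binary supplements the space has only 32 points, but the same loop scales to the larger multi-level condition spaces the abstract envisages, where exhaustive testing becomes infeasible.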
Manogaran, Motharasan; Shukor, Mohd Yunus; Yasid, Nur Adeela; Khalil, Khalilah Abdul; Ahmad, Siti Aqlima
2018-02-01
The herbicide glyphosate is often used to control weeds on agricultural land. However, despite its ability to kill weeds effectively at low cost, health problems are still reported due to its toxicity. Glyphosate is usually removed from the environment by microbiological processes, since chemical degradation is ineffective owing to the presence of highly stable bonds. Finding glyphosate-degrading microorganisms in the soil of interest is therefore crucial for remediation. Burkholderia vietnamiensis strain AQ5-12 was found to have glyphosate-degrading ability. Optimisation of the biodegradation conditions was carried out using one factor at a time (OFAT) and response surface methodology (RSM). Five parameters were optimised: carbon source, nitrogen source, pH, temperature and glyphosate concentration. Based on the OFAT results, glyphosate degradation was optimal at a fructose concentration of 6 g/L, 0.5 g/L ammonium sulphate, pH 6.5, a temperature of 32 °C and a glyphosate concentration of 100 ppm. RSM gave better degradation than OFAT, with 92.32% of 100 ppm glyphosate degraded. The bacterium tolerated up to 500 ppm glyphosate, while increasing concentrations resulted in reduced degradation and bacterial growth rates.
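The OFAT-then-RSM workflow above fits a second-order response surface and reads the optimum off its stationary point. Below is a minimal single-factor sketch (fitting y = a + bx + cx² by least squares and returning the stationary point x* = -b/2c); the data in the usage example are illustrative placeholders, not the paper's measurements:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 (single-factor RSM sketch)."""
    n = len(xs)
    # Sums for the 3x3 normal equations
    Sx = sum(xs); Sx2 = sum(x ** 2 for x in xs); Sx3 = sum(x ** 3 for x in xs)
    Sx4 = sum(x ** 4 for x in xs)
    Sy = sum(ys); Sxy = sum(x * y for x, y in zip(xs, ys))
    Sx2y = sum(x * x * y for x, y in zip(xs, ys))
    # Augmented matrix, solved by Gauss-Jordan elimination
    A = [[n, Sx, Sx2, Sy], [Sx, Sx2, Sx3, Sxy], [Sx2, Sx3, Sx4, Sx2y]]
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for j in range(3):
            if j != i:
                A[j] = [vj - A[j][i] * vi for vi, vj in zip(A[i], A[j])]
    a, b, c = A[0][3], A[1][3], A[2][3]
    return a, b, c, -b / (2 * c)    # stationary point x* = -b/(2c)
```

For a maximum the fitted curvature c must be negative; full RSM designs extend the same idea to several factors plus interaction terms.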
Oladejo, Ayobami Olayemi; Ma, Haile
2016-08-01
Sweet potato is a highly nutritious tuber crop rich in β-carotene. Osmotic dehydration is a pretreatment for drying fruit and vegetables. Recently, ultrasound technology has been applied in food processing because of its numerous advantages, including time savings and little damage to food quality. There is therefore a need to investigate and optimise the process parameters [frequency (20-50 kHz), time (10-30 min) and sucrose concentration (20-60% w/v)] for ultrasound-assisted osmotic dehydration of sweet potato using response surface methodology. The optimised values obtained were a frequency of 33.93 kHz, a time of 30 min and a sucrose concentration of 35.69% (w/v), giving predicted values of 21.62, 4.40 and 17.23% for water loss, solid gain and weight reduction, respectively. Water loss and weight reduction increased as the ultrasound frequency rose from 20 to 35 kHz and then decreased as it rose from 35 to 50 kHz. These results show that low ultrasound frequency favours the osmotic dehydration of sweet potato and also reduces the amount of raw material (sucrose) needed. © 2015 Society of Chemical Industry.
Optimizing Polymer Infusion Process for Thin Ply Textile Composites with Novel Matrix System
Bhudolia, Somen K.; Perrotey, Pavel; Joshi, Sunil C.
2017-01-01
For mass production of structural composites, use of different textile patterns, custom preforming, room temperature cure high performance polymers and simplistic manufacturing approaches are desired. Woven fabrics are widely used for infusion processes owing to their high permeability but their localised mechanical performance is affected due to inherent associated crimps. The current investigation deals with manufacturing low-weight textile carbon non-crimp fabrics (NCFs) composites with a room temperature cure epoxy and a novel liquid Methyl methacrylate (MMA) thermoplastic matrix, Elium®. Vacuum assisted resin infusion (VARI) process is chosen as a cost effective manufacturing technique. Process parameters optimisation is required for thin NCFs due to intrinsic resistance it offers to the polymer flow. Cycles of repetitive manufacturing studies were carried out to optimise the NCF-thermoset (TS) and NCF with novel reactive thermoplastic (TP) resin. It was noticed that the controlled and optimised usage of flow mesh, vacuum level and flow speed during the resin infusion plays a significant part in deciding the final quality of the fabricated composites. The material selections, the challenges met during the manufacturing and the methods to overcome these are deliberated in this paper. An optimal three stage vacuum technique developed to manufacture the TP and TS composites with high fibre volume and lower void content is established and presented. PMID:28772654
NASA Astrophysics Data System (ADS)
Rudrapati, R.; Sahoo, P.; Bandyopadhyay, A.
2016-09-01
The main aim of the present work is to analyse the significance of turning parameters on surface roughness in computer numerically controlled (CNC) turning of an aluminium alloy. Spindle speed, feed rate and depth of cut have been considered as machining parameters. Experimental runs have been conducted as per the Box-Behnken design method. After experimentation, surface roughness is measured using a stylus profilometer. Factor effects have been studied through analysis of variance. Mathematical modelling has been done by response surface methodology to establish relationships between the input parameters and the output response. Finally, process optimisation has been carried out with the teaching-learning-based optimisation (TLBO) algorithm. The predicted turning condition has been validated through a confirmatory experiment.
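The TLBO algorithm used for the final optimisation step alternates a teacher phase (learners move toward the best solution, away from the class mean) and a learner phase (pairwise learning between random classmates). The following is a generic minimisation sketch for an arbitrary objective, not the paper's fitted roughness model:

```python
import random

def tlbo(f, bounds, pop_size=10, iters=50):
    """Teaching-learning-based optimisation (minimisation sketch)."""
    lo, hi = zip(*bounds)
    clip = lambda x: [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]
    pop = [[random.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: shift each learner toward the current best solution
        teacher = min(pop, key=f)
        mean = [sum(col) / pop_size for col in zip(*pop)]
        for i, x in enumerate(pop):
            tf = random.choice((1, 2))              # teaching factor, 1 or 2
            cand = clip([xv + random.random() * (tv - tf * mv)
                         for xv, tv, mv in zip(x, teacher, mean)])
            if f(cand) < f(x):
                pop[i] = cand
        # Learner phase: move toward a better classmate, away from a worse one
        for i, x in enumerate(pop):
            j = random.randrange(pop_size)
            if j == i:
                continue
            sign = 1 if f(x) < f(pop[j]) else -1
            cand = clip([xv + sign * random.random() * (xv - ov)
                         for xv, ov in zip(x, pop[j])])
            if f(cand) < f(x):
                pop[i] = cand
    return min(pop, key=f)
```

A notable feature of TLBO, relevant to its use here, is that it has no algorithm-specific tuning constants beyond population size and iteration count.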
Parameters estimation for reactive transport: A way to test the validity of a reactive model
NASA Astrophysics Data System (ADS)
Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme
The chemical parameters used in reactive transport models are not known accurately due to the complexity and the heterogeneous conditions of a real domain. We will present an efficient algorithm in order to estimate the chemical parameters using Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical model describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm will be used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
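The Monte-Carlo estimation described above can be sketched as a random search over prior parameter ranges, keeping the parameter set with the smallest misfit to the data. This is a generic illustrative sketch; the model and prior ranges are placeholders, not the TBT sorption model:

```python
import random

def monte_carlo_fit(model, data, priors, n_draws=20000):
    """Random-search parameter estimation: draw parameter sets uniformly
    from prior ranges and keep the one with the smallest sum of squared errors."""
    best, best_sse = None, float("inf")
    for _ in range(n_draws):
        theta = [random.uniform(lo, hi) for lo, hi in priors]
        sse = sum((model(x, theta) - y) ** 2 for x, y in data)
        if sse < best_sse:
            best, best_sse = theta, sse
    return best, best_sse
```

The robustness the abstract claims comes from the fact that nothing about the misfit surface (smoothness, convexity) is assumed, which suits the highly non-linear chemistry models; the price is many forward-model evaluations.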
Cardiac phenotyping in ex vivo murine embryos using microMRI.
Cleary, Jon O; Price, Anthony N; Thomas, David L; Scambler, Peter J; Kyriakopoulou, Vanessa; McCue, Karen; Schneider, Jürgen E; Ordidge, Roger J; Lythgoe, Mark F
2009-10-01
Microscopic MRI (microMRI) is an emerging technique for high-throughput phenotyping of transgenic mouse embryos, and is capable of visualising abnormalities in cardiac development. To identify cardiac defects in embryos, we have optimised embryo preparation and MR acquisition parameters to maximise image quality and assess the phenotypic changes in chromodomain helicase DNA-binding protein 7 (Chd7) transgenic mice. microMRI methods rely on tissue penetration with a gadolinium chelate contrast agent to reduce tissue T1, thus improving signal-to-noise ratio (SNR) in rapid gradient echo sequences. We investigated 15.5 days post coitum (dpc) wild-type CD-1 embryos fixed in gadolinium-diethylene triamine pentaacetic acid (Gd-DTPA) solutions for either 3 days (2 and 4 mM) or 2 weeks (2, 4, 8 and 16 mM). To assess penetration of the contrast agent into heart tissue and enable image contrast simulations, T1 and T2* were measured in heart and background agarose. Compared to 3-day fixation, 2-week fixation showed reduced mean T1 in the heart at both 2 and 4 mM concentrations (p < 0.0001), resulting in calculated signal gains of 23% (2 mM) and 29% (4 mM). Using T1 and T2* values from the 2-week concentrations, computer simulation of heart and background signal, and ex vivo 3D gradient echo imaging, we demonstrated that embryos fixed for 2 weeks in 8 mM Gd-DTPA, in combination with optimised parameters (TE/TR/flip angle/number of averages: 9 ms/20 ms/60°/7), produced the largest SNR in the heart (23.2 +/- 1.0) and heart chamber contrast-to-noise ratio (CNR) (27.1 +/- 1.6). These optimised parameters were then applied to an MRI screen of embryos heterozygous for the gene Chd7, implicated in coloboma of the eye, heart defects, atresia of the choanae, retardation of growth, genital/urinary abnormalities, ear abnormalities and deafness (CHARGE) syndrome (a condition partly characterised by cardiovascular birth defects in humans). 
A ventricular septal defect was readily identified in the screen, consistent with the human phenotype. (c) 2009 John Wiley & Sons, Ltd.
Neural network feedforward control of a closed-circuit wind tunnel
NASA Astrophysics Data System (ADS)
Sutcliffe, Peter
Accurate control of wind-tunnel test conditions can be dramatically enhanced using feedforward control architectures which allow operating conditions to be maintained at a desired setpoint through the use of mathematical models as the primary source of prediction. However, as the desired accuracy of the feedforward prediction increases, the model complexity also increases, so that an ever increasing computational load is incurred. This drawback can be avoided by employing a neural network that is trained offline using the output of a high fidelity wind-tunnel mathematical model, so that the neural network can rapidly reproduce the predictions of the model with a greatly reduced computational overhead. A novel neural network database generation method, developed through the use of fractional factorial arrays, was employed such that a neural network can accurately predict wind-tunnel parameters across a wide range of operating conditions whilst trained upon a highly efficient database. The subsequent network was incorporated into a Neural Network Model Predictive Control (NNMPC) framework to allow an optimised output schedule capable of providing accurate control of the wind-tunnel operating parameters. Facilitation of an optimised path through the solution space is achieved through the use of a chaos optimisation algorithm such that a more globally optimum solution is likely to be found with less computational expense than the gradient descent method. The parameters associated with the NNMPC such as the control horizon are determined through the use of a Taguchi methodology enabling the minimum number of experiments to be carried out to determine the optimal combination. The resultant NNMPC scheme was employed upon the Hessert Low Speed Wind Tunnel at the University of Notre Dame to control the test-section temperature such that it follows a pre-determined reference trajectory during changes in the test-section velocity. 
Experimental testing revealed that the derived NNMPC controller provided an excellent level of control over the test-section temperature in adherence to a reference trajectory even when faced with unforeseen disturbances such as rapid changes in the operating environment.
Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms
NASA Astrophysics Data System (ADS)
Negro Maggio, Valentina; Iocchi, Luca
2015-02-01
Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.
Hardware Design of the Energy Efficient Fall Detection Device
NASA Astrophysics Data System (ADS)
Skorodumovs, A.; Avots, E.; Hofmanis, J.; Korāts, G.
2016-04-01
Health issues for elderly people may lead to different injuries obtained during simple activities of daily living. Potentially the most dangerous are unintentional falls that may be critical or even lethal to some patients due to the heavy injury risk. In the project "Wireless Sensor Systems in Telecare Application for Elderly People", we have developed a robust fall detection algorithm for a wearable wireless sensor. To optimise the algorithm for hardware performance and test it in field, we have designed an accelerometer based wireless fall detector. Our main considerations were: a) functionality - so that the algorithm can be applied to the chosen hardware, and b) power efficiency - so that it can run for a very long time. We have picked and tested the parts, built a prototype, optimised the firmware for lowest consumption, tested the performance and measured the consumption parameters. In this paper, we discuss our design choices and present the results of our work.
Electrocardiographic signals and swarm-based support vector machine for hypoglycemia detection.
Nuryani, Nuryani; Ling, Steve S H; Nguyen, H T
2012-04-01
Cardiac arrhythmia relating to hypoglycemia is suggested as a cause of death in diabetic patients. This article introduces electrocardiographic (ECG) parameters for artificially induced hypoglycemia detection. In addition, a hybrid technique of swarm-based support vector machine (SVM) is introduced for hypoglycemia detection using the ECG parameters as inputs. In this technique, a particle swarm optimization (PSO) is proposed to optimize the SVM to detect hypoglycemia. In an experiment using medical data of patients with Type 1 diabetes, the introduced ECG parameters show significant contributions to the performance of the hypoglycemia detection and the proposed detection technique performs well in terms of sensitivity and specificity.
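The swarm-based tuning described above can be sketched with a generic PSO minimising an error function; in the paper's setting that function would be the SVM's cross-validated detection error over hyperparameters such as (log C, log γ). The error function below is a stand-in for illustration, not the clinical model:

```python
import random

def pso(err, bounds, n_particles=12, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Particle swarm optimisation (minimisation) over a box-bounded space,
    e.g. SVM hyperparameters (log C, log gamma)."""
    dim = len(bounds)
    pos = [[random.uniform(l, h) for l, h in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [err(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = err(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

Each evaluation of `err` would retrain and cross-validate the SVM, so the swarm size and iteration count bound the total training cost.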
Le, Van So; Do, Zoe Phuc-Hien; Le, Minh Khoi; Le, Vicki; Le, Natalie Nha-Truc
2014-06-10
Methods of increasing the performance of radionuclide generators used in nuclear medicine radiotherapy and SPECT/PET imaging were developed and detailed, with the 99Mo/99mTc and 68Ge/68Ga generators as case studies. Optimisation methods relating daughter nuclide build-up to stand-by time and/or specific activity, based on mean progress functions, were developed to increase generator performance. As a result of this optimisation, separation of the daughter nuclide from its parent should be performed at a defined optimal time, to avoid deterioration in the specific activity of the daughter nuclide and wasted generator stand-by time, while keeping the daughter nuclide yield reasonably high. A new characteristic parameter of the formation-decay kinetics of the parent/daughter nuclide system was found and used effectively in the practice of generator production and utilisation. A method of "early elution scheduling" was also developed to increase the daughter nuclide production yield and specific radioactivity, thus saving generator cost and improving the quality of the daughter radionuclide solution. These newly developed optimisation methods, combined with a recently developed integrated elution-purification-concentration system, offer the most suitable way to operate generators effectively, on the basis of economic use and improvement of the quality and specific activity of the produced daughter radionuclides. All these features benefit the economic use of the generator, the quality of labelling/scanning, and the cost of nuclear medicine procedures. 
In addition, a new method for setting up a quality control protocol for post-delivery testing of radionuclidic purity has been developed, based on the relationship between the gamma-ray spectrometric detection limit, the required limit of impure radionuclide activity, and its measurement certainty with respect to optimising the decay/measurement time and the product sample activity used for quality control. The optimisation ensures certainty in the measurement of the specific impure radionuclide and avoids wasting the useful amount of valuable purified/concentrated daughter nuclide product. This process is important for the spectrometric measurement of very low activities of impure radionuclide contamination in radioisotope products of much higher activity used in medical imaging and targeted radiotherapy.
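The build-up versus stand-by-time trade-off discussed above follows the classic parent-daughter kinetics: for a parent decay constant λp and daughter λd, the daughter activity peaks at t* = ln(λd/λp)/(λd − λp). A sketch for the 99Mo/99mTc case (66 h and 6.01 h half-lives), using textbook kinetics rather than the authors' mean-progress-function formulation:

```python
import math

def daughter_activity(t, lam_p, lam_d):
    """Bateman build-up of daughter activity from a unit-activity parent
    (daughter assumed absent at t = 0)."""
    return lam_d / (lam_d - lam_p) * (math.exp(-lam_p * t) - math.exp(-lam_d * t))

def optimal_separation_time(half_life_parent, half_life_daughter):
    """Time of maximum daughter build-up: t* = ln(lam_d/lam_p)/(lam_d - lam_p)."""
    lam_p = math.log(2) / half_life_parent
    lam_d = math.log(2) / half_life_daughter
    return math.log(lam_d / lam_p) / (lam_d - lam_p)

# 99Mo (66 h) / 99mTc (6.01 h): maximum build-up roughly a day after elution
t_star = optimal_separation_time(66.0, 6.01)
```

Eluting much later than t* wastes stand-by time and degrades specific activity, which is exactly the trade-off the abstract's optimisation addresses.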
Yang, Lingjian; Ainali, Chrysanthi; Tsoka, Sophia; Papageorgiou, Lazaros G
2014-12-05
Applying machine learning methods on microarray gene expression profiles for disease classification problems is a popular method to derive biomarkers, i.e. sets of genes that can predict disease state or outcome. Traditional approaches where expression of genes were treated independently suffer from low prediction accuracy and difficulty of biological interpretation. Current research efforts focus on integrating information on protein interactions through biochemical pathway datasets with expression profiles to propose pathway-based classifiers that can enhance disease diagnosis and prognosis. As most of the pathway activity inference methods in literature are either unsupervised or applied on two-class datasets, there is good scope to address such limitations by proposing novel methodologies. A supervised multiclass pathway activity inference method using optimisation techniques is reported. For each pathway expression dataset, patterns of its constituent genes are summarised into one composite feature, termed pathway activity, and a novel mathematical programming model is proposed to infer this feature as a weighted linear summation of expression of its constituent genes. Gene weights are determined by the optimisation model, in a way that the resulting pathway activity has the optimal discriminative power with regards to disease phenotypes. Classification is then performed on the resulting low-dimensional pathway activity profile. The model was evaluated through a variety of published gene expression profiles that cover different types of disease. We show that not only does it improve classification accuracy, but it can also perform well in multiclass disease datasets, a limitation of other approaches from the literature. Desirable features of the model include the ability to control the maximum number of genes that may participate in determining pathway activity, which may be pre-specified by the user. 
Overall, this work highlights the potential of building pathway-based multi-phenotype classifiers for accurate disease diagnosis and prognosis problems.
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190
Temperature Measurement and Numerical Prediction in Machining Inconel 718.
Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar
2017-06-30
Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials results in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the determination of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning.
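The two-color pyrometer works because, under the Wien approximation and a grey-body assumption, emissivity cancels from the ratio of intensities measured at two wavelengths, leaving temperature as the only unknown. A sketch of the ratio inversion (the wavelengths in the usage example are chosen for illustration, not the sensor's actual bands):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, T, eps=1.0):
    """Wien-approximation spectral intensity (arbitrary scale): eps * lam^-5 * exp(-C2/(lam*T))."""
    return eps * lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    """Two-colour (ratio) pyrometry: invert the Wien intensity ratio for T.
    Grey-body assumption: the same emissivity at both wavelengths cancels."""
    num = C2 * (1.0 / lam1 - 1.0 / lam2)
    den = 5.0 * math.log(lam2 / lam1) - math.log(i1 / i2)
    return num / den
```

Because the emissivity drops out of the ratio, the measurement is insensitive to surface condition and partial obscuration, which is the practical advantage of two-colour over single-band pyrometry in a cutting zone.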
Investigation of Machine-ability of Inconel 800 in EDM with Coated Electrode
NASA Astrophysics Data System (ADS)
Karunakaran, K.; Chandrasekaran, M.
2017-03-01
Inconel 800 is a high-temperature alloy classified as a nickel-based superalloy, with wide application in aerospace engineering, gas turbines, etc. Machinability studies on this material are limited. This research therefore focuses on machinability studies of EDM of Inconel 800 with a silver-coated electrolytic copper electrode, the purpose of the coating being to reduce tool wear. The factors pulse-on time, pulse-off time and peak current were considered to observe the responses of surface roughness, material removal rate and tool wear rate. A Taguchi full factorial design was employed to design the experiment. Specific findings are reported and the percentage contribution of each parameter is furnished.
Product design for energy reduction in concurrent engineering: An Inverted Pyramid Approach
NASA Astrophysics Data System (ADS)
Alkadi, Nasr M.
Energy factors in product design in concurrent engineering (CE) are becoming an emerging dimension for several reasons: (a) the rising interest in "green design and manufacturing", (b) national energy security concerns and the dramatic increase in energy prices, (c) global competition in the marketplace and global climate change commitments, including carbon taxes and emission trading systems, and (d) the widespread recognition of the need for sustainable development. This research presents a methodology for the intervention of energy factors in the concurrent engineering product development process to significantly reduce the manufacturing energy requirement. The work presented here is the first attempt at integrating design for energy into the concurrent engineering framework. It adds an important tool to the DFX toolbox for evaluating the impact of design decisions on the product manufacturing energy requirement early in the design phase. The research hypothesis states that "product manufacturing energy requirement is a function of design parameters". The hypothesis was tested by conducting experimental work in machining and heat treating at the manufacturing lab of the Industrial and Management Systems Engineering Department (IMSE) at West Virginia University (WVU) and at a major U.S. steel manufacturing plant, respectively. The objective of the machining experiment was to study the effect of changing specific product design parameters (material type and diameter) and process design parameters (metal removal rate) on the input power requirement of a gear head lathe through defined sets of machining experiments. The objective of the heat treating experiment was to study the effect of varying the product charging temperature on the fuel consumption of a walking-beam reheat furnace.
The experimental work in both directions has revealed important insights into energy utilization in machining and heat-treating processes and its variance with product, process, and system design parameters. An in-depth evaluation of how design and manufacturing normally happen in concurrent engineering provided a framework for developing energy system levels in machining within the concurrent engineering environment using the "Inverted Pyramid Approach" (IPA). The IPA features varying levels of energy-based output information depending on the input design parameters that are available during each stage (level) of product design. The experimental work, the in-depth evaluation of design and manufacturing in CE, and the developed energy system levels in machining provided a solid base for developing the model for design for energy reduction in CE. The model was used to analyze an example part in which 12 evolving designs were thoroughly reviewed to investigate the sensitivity of energy to design parameters in machining. The model allowed product design teams to address manufacturing energy concerns early in the design stage. As a result, ranges for energy-sensitive design parameters impacting product manufacturing energy consumption were found at earlier levels. As the designer proceeds to deeper levels in the model, this range tightens, resulting in significant energy reductions.
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes.
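As a minimal illustration of the kinetic interpretation step, the sketch below fits a first-order degradation model B(t) = B0·(1 − e^(−kt)) to synthetic batch respirometry data by grid-search least squares. The data, parameter ranges and grid search are invented for illustration; they are not the authors' ADM1 fractionation procedure.

```python
import math

# Synthetic batch test: cumulative CH4 production following first-order
# kinetics B(t) = B0 * (1 - exp(-k*t)); B0 and k are the unknowns that
# the numerical interpretation step would recover from respirometry data.
t_obs = [0, 2, 4, 8, 16, 32]                    # days
true_B0, true_k = 350.0, 0.15                   # mL CH4 / g COD, 1/day
b_obs = [true_B0 * (1 - math.exp(-true_k * t)) for t in t_obs]

def sse(B0, k):
    """Sum of squared errors between model and observations."""
    return sum((B0 * (1 - math.exp(-k * t)) - b) ** 2
               for t, b in zip(t_obs, b_obs))

# Coarse grid search over plausible ranges (a real implementation would
# use a proper optimiser over the full ADM1 input state variable set).
best = min(((sse(B0, k), B0, k)
            for B0 in range(200, 501, 5)
            for k in [i / 100 for i in range(5, 51)]),
           key=lambda x: x[0])
_, fit_B0, fit_k = best
```

Because the synthetic data are noise-free and the true parameters lie on the grid, the search recovers them exactly; with real batch data the residual quantifies kinetic identifiability.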
Generalised form of a power law threshold function for rainfall-induced landslides
NASA Astrophysics Data System (ADS)
Cepeda, Jose; Díaz, Manuel Roberto; Nadim, Farrokh; Høeg, Kaare; Elverhøi, Anders
2010-05-01
The following new function is proposed for estimating thresholds for rainfall-triggered landslides: I = α1·An^α2·D^β, where I is rainfall intensity in mm/h, D is rainfall duration in h, An is the n-hour or n-day antecedent precipitation, and α1, α2, β and n are threshold parameters. A threshold model that combines two functions with different durations of antecedent precipitation is also introduced. A storm observation exceeds the threshold when the storm parameters are located at or above the two functions simultaneously. A novel optimisation procedure for estimating the threshold parameters is proposed using Receiver Operating Characteristics (ROC) analysis. The new threshold function and optimisation procedure are applied for estimating thresholds for triggering of debris flows in the Western Metropolitan Area of San Salvador (AMSS), El Salvador, where up to 500 casualties were produced by a single event. The resulting thresholds are I = 2322·A7d^(-1)·D^(-0.43) and I = 28534·A150d^(-1)·D^(-0.43) for debris flows having volumes greater than 3000 m3. Thresholds are also derived for debris flows greater than 200 000 m3 and for hyperconcentrated flows initiating in burned areas caused by forest fires. The new thresholds show an improved performance compared to the traditional formulations, indicated by a reduction in false alarms from 51 to 5 for the 3000 m3 thresholds and from 6 to 0 false alarms for the 200 000 m3 thresholds.
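The ROC-based calibration can be sketched as follows, using an invented storm catalogue and parameter grid (not the AMSS data): each candidate set (α1, α2, β) defines a threshold I = α1·An^α2·D^β, and the set maximizing the true-positive rate minus the false-positive rate is retained.

```python
# Each storm: (intensity mm/h, duration h, n-day antecedent precip mm, landslide?)
storms = [
    (30, 2, 120, True), (12, 6, 200, True), (8, 10, 40, False),
    (25, 1, 15, False), (18, 4, 150, True), (5, 20, 60, False),
]

def exceeds(storm, a1, a2, beta):
    I, D, An, _ = storm
    return I >= a1 * (An ** a2) * (D ** beta)   # threshold I = a1 * An^a2 * D^beta

def youden(a1, a2, beta):
    """TPR - FPR: a simple ROC-derived score for one candidate threshold."""
    tp = sum(1 for s in storms if s[3] and exceeds(s, a1, a2, beta))
    fp = sum(1 for s in storms if not s[3] and exceeds(s, a1, a2, beta))
    pos = sum(1 for s in storms if s[3])
    neg = len(storms) - pos
    return tp / pos - fp / neg

# Grid search over the threshold parameters, keeping the best ROC point.
grid = [(a1, a2, beta)
        for a1 in (50, 100, 200, 400)
        for a2 in (-1.0, -0.5)
        for beta in (-0.6, -0.43, -0.2)]
best = max(grid, key=lambda p: youden(*p))
```

On this toy catalogue the search finds a parameter set that separates triggering and non-triggering storms perfectly; on real data the ROC score trades missed alarms against false alarms, as in the reported reduction from 51 to 5 false alarms.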
Geometric dependence of the parasitic components and thermal properties of HEMTs
NASA Astrophysics Data System (ADS)
Vun, Peter V.; Parker, Anthony E.; Mahon, Simon J.; Fattorini, Anthony
2007-12-01
For integrated circuit design up to 50 GHz and beyond, accurate models of the transistor access structures and intrinsic structures are necessary for prediction of circuit performance. The circuit design process relies on optimising transistor geometry parameters such as unit gate width, number of gates, number of vias and gate-to-gate spacing. The relationship between the electrical and thermal parasitic components in transistor access structures and the transistor geometry is therefore important to understand when developing models for transistors of differing geometries. Current approaches to describing the geometric dependence of models are limited to empirical methods which only describe a finite set of geometries and only include unit gate width and number of gates as variables. A better understanding of the geometric dependence is seen as a way to provide scalable models that remain accurate for continuous variation of all geometric parameters. Understanding the distribution of parasitic elements between the manifold, the terminal fingers, and the reference plane discontinuities is identified as important in this regard. Examination of dc characteristics and thermal images indicates that gate-to-gate thermal coupling and increased thermal conductance at the gate ends affect the device's total thermal conductance. Consequently, a distributed thermal model is proposed that accounts for these effects. This work is seen as a starting point for developing comprehensive scalable models that will allow RF circuit designers to optimise circuit performance parameters such as total die area, maximum output power, power-added efficiency (PAE) and channel temperature/lifetime.
Calculation of parameters of technological equipment for deep-sea mining
NASA Astrophysics Data System (ADS)
Yungmeister, D. A.; Ivanov, S. E.; Isaev, A. I.
2018-03-01
The actual problem of extracting minerals from the bottom of the world ocean is considered. On the ocean floor, three types of minerals are of interest: iron-manganese concretions (IMC), cobalt-manganese crusts (CMC) and sulphides. An analysis of known designs of machines and complexes for the extraction of IMC is performed. These machines are based on the principle of excavating the bottom surface; however, such methods do not always correspond to “gentle” methods of mining, and their ecological impact does not meet the necessary requirements. Such machines also require the transmission of high electric power through the water column, which in some cases is a significant challenge. The authors analyzed options for transporting the extracted mineral from the bottom. The paper describes the design of machines that collect IMC by vacuum suction: the gripping plates or drums are provided with cavities in which a vacuum is created, and individual IMC are attracted to the devices by the pressure drop. The operation of such machines can be called a “gentle” processing technology for the bottom areas; their environmental impact is significantly lower than that of mechanical devices that rake up the IMC. The parameters of the device for lifting the IMC collected on the bottom are calculated. With serially produced Kevlar ropes up to 0.06 m in diameter, a cycle time of up to 2 hours and a lifting speed of up to 3 m/s, a productivity of about 400,000 tons per year can be realized for IMC. The development of machines based on the calculated parameters and the approbation of their designs will create a unique complex for the extraction of minerals at oceanic deposits.
Machining process influence on the chip form and surface roughness by neuro-fuzzy technique
NASA Astrophysics Data System (ADS)
Anicic, Obrad; Jović, Srđan; Aksić, Danilo; Skulić, Aleksandar; Nedić, Bogdan
2017-04-01
The main aim of the study was to analyze the influence of six machining parameters on chip shape formation and surface roughness during turning of steel 30CrNiMo8. Three components of the cutting force were used as inputs together with cutting speed, feed rate, and depth of cut. It is crucial for engineers to use optimal machining parameters to obtain the best results and maintain tight control of the machining process, so the machining parameters for an optimal procedure need to be identified. An adaptive neuro-fuzzy inference system (ANFIS) was used to estimate the influence of the inputs on chip shape formation and surface roughness. According to the results, the cutting force in the direction of the depth of cut has the highest influence on the chip form, with a testing error of 0.2562; this cutting force determines the depth of cut. The depth of cut itself has the highest influence on the surface roughness, for which the corresponding cutting force has a testing error of 5.2753. In general, the depth of cut and the cutting force that produces it are the most dominant factors for chip form and surface roughness: any small change in either could drastically affect the chip form or the surface roughness of the work material.
NASA Astrophysics Data System (ADS)
Sahu, Anshuman Kumar; Chatterjee, Suman; Nayak, Praveen Kumar; Sankar Mahapatra, Siba
2018-03-01
Electrical discharge machining (EDM) is a non-traditional machining process which is widely used for difficult-to-machine materials. The EDM process can produce complex, intricately shaped components made of difficult-to-machine materials, and is largely applied in the aerospace, biomedical, and die and mold making industries. To meet the required applications, EDMed components need to possess high accuracy and an excellent surface finish. In this work, EDM is performed using Nitinol as the workpiece material and AlSiMg prepared by selective laser sintering (SLS) as the tool electrode, along with conventional copper and graphite electrodes. SLS is a rapid prototyping (RP) method to produce complex metallic parts by additive manufacturing (AM). Experiments have been carried out varying process parameters such as open circuit voltage (V), discharge current (Ip), duty cycle (τ), pulse-on time (Ton) and tool material. The surface roughness parameters average roughness (Ra), maximum height of the profile (Rt) and average height of the profile (Rz) are measured using a surface roughness measuring instrument (Talysurf). To reduce the number of experiments, a design of experiment (DOE) approach, Taguchi's L27 orthogonal array, has been chosen. The surface properties of the EDMed specimens are optimized by the desirability function approach and the best parametric setting is reported for the EDM process. Type of tool is the most significant parameter, followed by the interaction of tool type and duty cycle, duty cycle, discharge current and voltage. A better surface finish of the EDMed specimen can be obtained with low values of voltage (V), discharge current (Ip), duty cycle (τ) and pulse-on time (Ton) along with the use of the AlSiMg RP electrode.
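A toy version of the desirability-function step might look like the following; the electrode labels and roughness values are hypothetical, and a smaller-is-better linear desirability with a geometric-mean overall score is assumed.

```python
# Hypothetical roughness responses (Ra, Rt, Rz) for three electrode settings.
runs = {
    "Cu electrode": (3.2, 18.0, 14.1),
    "graphite":     (4.1, 21.5, 16.8),
    "AlSiMg RP":    (2.4, 15.2, 11.9),
}
# L and U: best (lowest) and worst (highest) observed value per response.
lows  = [min(r[i] for r in runs.values()) for i in range(3)]
highs = [max(r[i] for r in runs.values()) for i in range(3)]

def desirability(y, L, U):
    """Smaller-is-better: d = 1 at the best response, 0 at the worst."""
    return (U - y) / (U - L)

def overall(resp):
    """Overall desirability = geometric mean of the individual scores."""
    ds = [desirability(y, L, U) for y, L, U in zip(resp, lows, highs)]
    return (ds[0] * ds[1] * ds[2]) ** (1 / 3)

best_setting = max(runs, key=lambda k: overall(runs[k]))
```

With these invented numbers the AlSiMg RP electrode dominates all three responses, so its overall desirability is 1; in the actual study the scores would trade off across the L27 runs.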
PredicT-ML: a tool for automating machine learning model building with big clinical data.
Luo, Gang
2016-01-01
Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40 %, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.
Optimization of Machining Process Parameters for Surface Roughness of Al-Composites
NASA Astrophysics Data System (ADS)
Sharma, S.
2013-10-01
Metal matrix composites (MMCs) have become a leading material among the various types of composite materials for different applications due to their excellent engineering properties. Among the various types of composite materials, aluminum MMCs have received considerable attention in automobile and aerospace applications. These materials are known as difficult-to-machine materials because of the hardness and abrasive nature of reinforcement elements such as silicon carbide particles. In the present investigation an Al-SiC composite was produced by the stir casting process. The Brinell hardness of the alloy after SiC addition increased from 74 ± 2 to 95 ± 5. The composite was machined using a CNC turning center under different machining parameters such as cutting speed (S), feed rate (F), depth of cut (D) and nose radius (R). The effect of machining parameters on surface roughness (Ra) was studied using response surface methodology. A face-centred composite design with three levels of each factor was used for the surface roughness study of the developed composite. A response surface model for surface roughness was developed in terms of the main factors (S, F, D and R) and their significant interactions (SD, SR, FD and FR). The developed model was validated by conducting experiments under different conditions, and was then optimized for minimum surface roughness. An error of 3-7 % was observed between the modeled and experimental results. Further, it was found that the surface roughness of the Al-alloy at optimum conditions is lower than that of the Al-SiC composite.
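The response-surface fitting step can be sketched as a least-squares regression over a coded face-centred design. The coefficients and the restriction to three factors with one interaction term (S·D) below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Coded design points (speed, feed, depth) at three levels each, as in a
# face-centred design; Ra values are generated from an assumed noise-free
# model so the regression recovers the coefficients exactly.
X_raw = np.array([[s, f, d] for s in (-1, 0, 1)
                            for f in (-1, 0, 1)
                            for d in (-1, 0, 1)], dtype=float)
true_beta = np.array([2.0, -0.4, 0.9, 0.3, -0.2])   # b0, bS, bF, bD, bSD

def design(X):
    """Design matrix: intercept, main effects, and the S*D interaction."""
    s, f, d = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), s, f, d, s * d])

y = design(X_raw) @ true_beta
beta_hat, *_ = np.linalg.lstsq(design(X_raw), y, rcond=None)
```

Minimising the fitted polynomial over the coded region would then give the optimum cutting conditions; with experimental noise the fit carries the 3-7 % model error reported in the abstract.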
Continuous Rating for Diggability Assessment in Surface Mines
NASA Astrophysics Data System (ADS)
IPHAR, Melih
2016-10-01
Rocks can be loosened either by drilling and blasting or by direct excavation using powerful machines in opencast mining operations. The economics of rock excavation is considered for each method to be applied. If blasting is not preferred, and the geological structures and rock mass properties of the site are favourable for ripping or direct excavation by mining machines, the next step is to determine which machine or excavator should be selected for the excavation. Many researchers have proposed diggability or excavatability assessment methods for deciding on the excavator type to be used in the field. Most of these systems are based on assigning a rating to the parameters that are important in the rock excavation process. However, the sharp transitions between two adjacent classes for a given parameter can lead to uncertainties. In this paper, it is proposed that a varying rating, termed a "continuous rating", should be assigned to a given parameter instead of a constant rating for a given class.
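The proposed idea can be illustrated with a toy rating scheme: a conventional class-based rating jumps at class boundaries, while a piecewise-linear "continuous rating" varies smoothly. The parameter (here uniaxial compressive strength, UCS), class limits and rating values are invented for illustration.

```python
def stepped_rating(ucs):
    """Conventional class-based rating: constant within each class,
    so it jumps abruptly at the class boundaries."""
    if ucs < 25:
        return 2
    if ucs < 50:
        return 4
    if ucs < 100:
        return 7
    return 12

def continuous_rating(ucs, points=((0, 0), (25, 2), (50, 4), (100, 7), (200, 12))):
    """Piecewise-linear rating interpolated between class anchor points:
    no sharp transition between adjacent classes."""
    for (x0, r0), (x1, r1) in zip(points, points[1:]):
        if ucs <= x1:
            return r0 + (r1 - r0) * (ucs - x0) / (x1 - x0)
    return points[-1][1]
```

Two samples either side of the 50 MPa boundary get stepped ratings 4 and 7 (a jump of 3), while their continuous ratings differ only slightly, removing the boundary uncertainty the paper describes.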
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting
NASA Astrophysics Data System (ADS)
Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy
2018-06-01
Spacecraft overtesting is a long-running problem, and the main focus of most attempts to reduce it has been to adjust the base vibration input (i.e. notching). Instead, this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system and potentially the interface configuration mean the vibration at the interface may not act along a single axis, much less along the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which a piece of equipment is tested at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing for the best match between all specified equipment responses and those of the coupled-system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values, and finds the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress for an element near a fastening point. The angle optimisation method resulted in RMS values and PSD responses that were much closer to the coupled system than those of traditional testing, and the optimum testing configuration gave an overall average error significantly smaller than the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment-level test, as opposed to the traditional three orthogonal-direction tests.
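A much-simplified sketch of the angle-optimisation idea, assuming a toy linear model in which a single-axis input at angle θ splits between two equipment axes; the target RMS values and input level are invented, and a real implementation would optimise over the full set of coupled-system responses.

```python
import math

# Coupled-system target RMS accelerations along the equipment x/y axes.
target = (4.0, 2.5)

def equipment_rms(theta, level=5.0):
    """RMS responses when a single-axis input of size `level` is applied
    at angle theta to the equipment x axis (toy linear model)."""
    return (level * abs(math.cos(theta)), level * abs(math.sin(theta)))

def mismatch(theta):
    """Squared error between equipment-level and coupled-system RMS."""
    x, y = equipment_rms(theta)
    return (x - target[0]) ** 2 + (y - target[1]) ** 2

# Scan candidate test angles; a single angled test replaces separate
# orthogonal-axis tests when the mismatch is small enough.
angles = [math.radians(a) for a in range(0, 91)]
best_angle = min(angles, key=mismatch)
```

In this toy case the best single-test angle lands between the two angles that would match each axis individually, and its mismatch is far below that of a conventional 0° (single orthogonal axis) test.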
Intelligent Internet-based information system optimises diabetes mellitus management in communities.
Wei, Xuejuan; Wu, Hao; Cui, Shuqi; Ge, Caiying; Wang, Li; Jia, Hongyan; Liang, Wannian
2018-05-01
To evaluate the effect of an intelligent Internet-based information system on optimising the management of patients diagnosed with type 2 diabetes mellitus (T2DM). In 2015, a T2DM information system was introduced to optimise the management of T2DM patients for 1 year in the Fangzhuang community of Beijing, China. A total of 602 T2DM patients registered in the health service centre of the Fangzhuang community were enrolled using an isometric sampling technique, and data from 587 patients were used in the final analysis. The intervention effect was assessed by statistically comparing multiple parameters, such as the prevalence of glycaemic control, standard health management and annual outpatient consultation visits per person, before and after implementation of the T2DM information system. In 2015, a total of 1668 T2DM patients were newly registered in the Fangzhuang community. The glycaemic control rate was 37.65% in 2014 and rose significantly to 62.35% in 2015 (p < 0.001). After application of the Internet-based information system, the rate of standard health management increased from 48.04% to 85.01% (p < 0.001). Among all registered T2DM patients, the annual outpatient consultation visits per person in the Fangzhuang community decreased considerably from 24.88% in 2014 to 22.84% in 2015 (p < 0.001), and declined from 14.59% to 13.66% in general hospitals (p < 0.05). Application of the T2DM information system optimised the management of T2DM patients in the Fangzhuang community and decreased outpatient numbers in both community and general hospitals, playing a positive role in assisting T2DM patients and their healthcare providers to better manage this chronic illness.
Production of biosolid fuels from municipal sewage sludge: Technical and economic optimisation.
Wzorek, Małgorzata; Tańczuk, Mariusz
2015-08-01
The article presents a technical and economic analysis of the production of fuels from municipal sewage sludge. The analysis involved the production of two types of fuel compositions: sewage sludge with sawdust (PBT fuel) and sewage sludge with meat and bone meal (PBM fuel). The technology of the production line for these sewage fuels was proposed and analysed. The main objective of the study is to find the optimal production capacity. The optimisation analysis was performed for the adopted technical and economic parameters under Polish conditions. The objective function was set as the maximum of the net present value index, and the optimisation procedure was carried out for fuel production line input capacities from 0.5 to 3 t h(-1), using a search step of 0.5 t h(-1). On the basis of the technical and economic assumptions, economic efficiency indexes of the investment were determined for the case of optimal line productivity. The results of the optimisation analysis show that under appropriate conditions, such as the prices of components and of the produced fuels, the production of fuels from sewage sludge can be profitable. In the case of PBT fuel, the calculated economic indexes show the best profitability for plant capacities above 1.5 t h(-1), while production of PBM fuel is most beneficial at the maximum of the searched capacities, 3.0 t h(-1). Sensitivity analyses carried out during the investigation show that the influence of both technical and economic assumptions on the location of the maximum of the objective function (net present value) is significant.
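The capacity search can be sketched as follows, with a toy NPV model whose cost and revenue coefficients are invented (the study's actual cash-flow model is not given in the abstract):

```python
# Toy NPV model: revenue and operating costs scale with line capacity,
# while capital cost enjoys economies of scale. All figures are
# illustrative assumptions, not the study's data.
def npv(capacity_tph, years=10, rate=0.08):
    capex = 1.2e6 * capacity_tph ** 0.7                       # scale economies
    annual_cash = 900_000 * capacity_tph - 350_000 * capacity_tph ** 1.1
    discounted = sum(annual_cash / (1 + rate) ** t for t in range(1, years + 1))
    return discounted - capex

# Search capacities 0.5..3.0 t/h in 0.5 t/h steps, as in the study.
capacities = [0.5 * i for i in range(1, 7)]
best_cap = max(capacities, key=npv)
```

With these coefficients NPV grows with capacity, so the optimum sits at the upper end of the searched range, mirroring the PBM-fuel result; different price assumptions would move the maximum inside the range, as the sensitivity analysis found.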
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hussain, A
Purpose: Novel linac machines, TrueBeam (TB) and Elekta Versa, have updated head designs and software control systems, and include flattening-filter-free (FFF) photon and electron beams. FFF beams were later also introduced on C-Series machines. In this work, FFF beams of the same energy (6 MV) from different machine versions were studied with reference to beam data parameters. Methods: The 6MV-FFF percent depth doses, profile symmetry and flatness, dose rate tables, and multi-leaf collimator (MLC) transmission factors were measured during the commissioning of both C-Series and TrueBeam machines. The scanning and dosimetric data for the 6MV-FFF beam from the TrueBeam and C-Series linacs were compared. A correlation of the 6MV-FFF beam from the Elekta Versa with that of the Varian linacs was also found. Results: The scanning files were plotted for both qualitative and quantitative analysis. The dosimetric leaf gap (DLG) for the C-Series 6MV-FFF beam is 1.1 mm; the published value for the TrueBeam DLG is 1.16 mm. The 6MV MLC transmission factor varies between 1.3% and 1.4% in two separate measurements, and measured DLG values vary between 1.32 mm and 1.33 mm on the C-Series machine. The MLC transmission factor from the C-Series machine varies between 1.5% and 1.6%. Some of the measured data values from the C-Series FFF beam are compared with TrueBeam representative data. 6MV-FFF beam parameter values such as dmax, OP factors, beam symmetry and flatness, and additional parameters for the C-Series and TrueBeam linacs will be presented and compared in graphical and tabular form if selected. Conclusion: The 6MV flattening filter (FF) beam data from the C-Series and TrueBeam and the 6MV-FFF beam data from the TrueBeam have already been presented. This analysis comparing the 6MV-FFF beam from the C-Series and TrueBeam provides an opportunity to better characterize the FFF mode on novel machines. It was found that the C-Series and TrueBeam 6MV-FFF dosimetric and beam data were quite similar.
NASA Astrophysics Data System (ADS)
Qianxiang, Zhou
2012-07-01
It is very important to clarify the geometric characteristics of human body segments and to construct analysis models for ergonomic design and the application of ergonomic virtual humans. Typical anthropometric data of 1122 Chinese men aged 20-35 years were collected using a three-dimensional laser scanner for the human body. According to the correlations between different parameters, curve fitting was performed between seven trunk parameters and ten body parameters with SPSS 16.0 software. It can be concluded that hip circumference and shoulder breadth are the most important parameters in the models, and these two parameters are highly correlated with the other parameters of the human body. By comparison with conventional regression curves, the present regression equations with the seven trunk parameters forecast the geometric dimensions of the head, neck, height and the four limbs more accurately. This is therefore of great value for ergonomic design and analysis of man-machine systems, and the result will be very useful for astronaut body model analysis and application.
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of "Acceleration by Parallel Precomputation and Learning" (APPLe), which can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov chain Monte Carlo techniques to efficiently explore a likelihood function, posterior distribution or χ2 surface. It is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo, and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
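The idea of replacing a costly likelihood with a cheap precomputed surrogate can be caricatured in one dimension: evaluate χ2 on a grid (a trivially parallel step), interpolate, and run Metropolis sampling against the interpolant. Everything below is a toy stand-in, not PICo or RICO.

```python
import math, random

random.seed(0)

# Expensive likelihood stand-in: a 1-D Gaussian chi^2 surface, with the
# "true" parameter at 1.3 and posterior standard deviation 0.2.
def chi2(theta):
    return (theta - 1.3) ** 2 / 0.04

# Step 1 (the parallelisable precomputation): evaluate chi2 on a coarse grid.
grid = [i / 10 for i in range(-10, 41)]          # theta in [-1.0, 4.0]
table = {t: chi2(t) for t in grid}

# Step 2: cheap surrogate via linear interpolation of the precomputed table.
def chi2_surrogate(theta):
    lo = max(t for t in grid if t <= theta)
    hi = min(t for t in grid if t >= theta)
    if lo == hi:
        return table[lo]
    w = (theta - lo) / (hi - lo)
    return (1 - w) * table[lo] + w * table[hi]

# Step 3: Metropolis sampling against the surrogate instead of chi2 itself.
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + random.gauss(0, 0.3)
    if -1.0 <= prop <= 4.0 and random.random() < math.exp(
            0.5 * (chi2_surrogate(theta) - chi2_surrogate(prop))):
        theta = prop
    chain.append(theta)
mean_theta = sum(chain[2000:]) / len(chain[2000:])
```

The chain's posterior mean recovers the true parameter even though the sampler never calls the expensive function after the grid step; the serial MCMC cost is thus traded for an embarrassingly parallel precomputation.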
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Baolong (Department of Mathematics and Physics, Hefei University, Hefei 230022); Song, Qingming
We present a scheme to realize a special quantum cloning machine in separate cavities. The quantum cloning machine can copy the quantum information from a photon pulse to two distant atoms. By choosing different parameters, the method can perform optimal symmetric (asymmetric) universal quantum cloning and optimal symmetric (asymmetric) phase-covariant cloning.
Fast machine-learning online optimization of ultra-cold-atom experiments.
Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R
2016-05-16
We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
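The Gaussian-process learner loop can be sketched in a few lines: fit a GP to the (parameter, quality) pairs observed so far, propose the next experiment by maximizing an upper confidence bound, and repeat. The quality function, kernel length-scale and UCB acquisition rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy stand-in for "BEC quality" as a function of one ramp parameter;
# the learner only sees its evaluations, never its form.
def quality(x):
    return np.exp(-(x - 0.6) ** 2 / 0.05)

def rbf(a, b, ls=0.15):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X = list(np.array([0.1, 0.9]))          # two seed experiments
y = [quality(x) for x in X]
grid = np.linspace(0, 1, 101)

for _ in range(10):                     # online loop: model -> propose -> measure
    Xa = np.array(X)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(X))
    Ks = rbf(grid, Xa)
    alpha = np.linalg.solve(K, np.array(y))
    mu = Ks @ alpha                                            # GP mean
    var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)  # GP variance
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0, None))
    x_next = float(grid[np.argmax(ucb)])   # most promising next experiment
    X.append(x_next)
    y.append(quality(x_next))

best_x = X[int(np.argmax(y))]
```

The upper-confidence-bound rule balances exploiting the current model against exploring uncertain regions, which is how the statistical model also reveals which parameters matter.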
Vidyasagar, Mathukumalli
2015-01-01
This article reviews several techniques from machine learning that can be used to study the problem of identifying a small number of features, from among tens of thousands of measured features, that can accurately predict a drug response. Prediction problems are divided into two categories: sparse classification and sparse regression. In classification, the clinical parameter to be predicted is binary, whereas in regression, the parameter is a real number. Well-known methods for both classes of problems are briefly discussed. These include the SVM (support vector machine) for classification and various algorithms such as ridge regression, LASSO (least absolute shrinkage and selection operator), and EN (elastic net) for regression. In addition, several well-established methods that do not directly fall into machine learning theory are also reviewed, including neural networks, PAM (pattern analysis for microarrays), SAM (significance analysis for microarrays), GSEA (gene set enrichment analysis), and k-means clustering. Several references indicative of the application of these methods to cancer biology are discussed.
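As a concrete illustration of the sparse-regression methods the review discusses, the sketch below recovers a handful of informative features from synthetic data with a minimal LASSO solver (proximal gradient descent, i.e. ISTA). The dataset, the choice of λ, and the solver are illustrative stand-ins, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a drug-response dataset: 100 samples, 50 measured
# features, of which only features 3, 17 and 42 actually drive the response.
n_samples, n_features = 100, 50
X = rng.normal(size=(n_samples, n_features))
true_beta = np.zeros(n_features)
true_beta[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = X @ true_beta + rng.normal(0.0, 0.1, size=n_samples)

# LASSO via proximal gradient descent (ISTA): a gradient step on the mean
# squared error followed by soft-thresholding, which zeroes weak features.
def lasso_ista(X, y, lam=0.1, iters=1000):
    n = len(y)
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = beta - (X.T @ (X @ beta - y) / n) / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

beta_hat = lasso_ista(X, y)
selected = np.flatnonzero(np.abs(beta_hat) > 1e-8)
```

The soft-thresholding step is what distinguishes LASSO from ridge regression: it produces exact zeros, so `selected` contains only the few features that predict the response.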
NASA Astrophysics Data System (ADS)
Durga Prasada Rao, V.; Harsha, N.; Raghu Ram, N. S.; Navya Geethika, V.
2018-02-01
In this work, turning was performed to optimize the surface finish or roughness (Ra) of stainless steel 304 with uncoated and coated carbide tools under dry conditions. The carbide tools were coated with a Titanium Aluminium Nitride (TiAlN) nano-coating using the Physical Vapour Deposition (PVD) method. The machining parameters, viz., cutting speed, depth of cut and feed rate, which have a major impact on Ra, are considered during turning. The experiments are designed as per a Taguchi orthogonal array, and machining is carried out accordingly. Second-order regression equations are then developed for Ra in terms of the machining parameters on the basis of the experimental results. Regarding the effect of the machining parameters, Ra shows an upward trend with respect to feed rate, and as cutting speed increases, Ra increases slightly due to chatter and vibrations. The adequacy of the response variable (Ra) is tested by conducting additional experiments. The predicted Ra values closely match the corresponding experimental values for both uncoated and coated tools, with average percentage errors within acceptable limits. The surface roughness equations of the uncoated and coated tools are then set as the objectives of an optimization problem, which is solved using the Differential Evolution (DE) algorithm. The tool lives of the uncoated and coated tools are also predicted using Taylor's tool life equation.
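The final optimization step — minimizing a second-order regression model of Ra over the machining parameters with Differential Evolution — can be sketched as follows. The quadratic Ra model and its coefficients below are made up for illustration, not the paper's fitted regression; the DE routine is the classic DE/rand/1/bin scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative second-order Ra model in cutting speed v (m/min), feed rate
# f (mm/rev) and depth of cut d (mm); coefficients are hypothetical.
def ra_model(x):
    v, f, d = x
    return (2.0 - 0.004 * v + 8.0 * f + 0.5 * d
            + 1e-5 * v ** 2 + 20.0 * f ** 2 + 0.3 * d ** 2)

bounds = np.array([[100.0, 300.0],   # cutting speed
                   [0.05, 0.30],     # feed rate
                   [0.50, 2.00]])    # depth of cut

# Classic DE/rand/1/bin differential evolution with greedy selection.
def differential_evolution(f, bounds, pop_size=20, gens=100, F=0.8, CR=0.9):
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
            cross = rng.uniform(size=dim) < CR      # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            trial_cost = f(trial)
            if trial_cost < cost[i]:
                pop[i], cost[i] = trial, trial_cost
    return pop[np.argmin(cost)], cost.min()

best_x, best_ra = differential_evolution(ra_model, bounds)
```

For this toy model the analytical minimum sits at v = 200 with feed rate and depth of cut pinned at their lower bounds, which DE locates without any gradient information — the property that makes it attractive for regression surfaces of unknown shape.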
Sotomayor, Gonzalo; Hampel, Henrietta; Vázquez, Raúl F
2018-03-01
Non-supervised (k-means) and supervised (k-Nearest Neighbour combined with genetic algorithm optimisation, k-NN/GA) pattern recognition algorithms were applied to evaluate and interpret a large, complex matrix of water quality (WQ) data collected over five years (2008, 2010-2013) in the Paute river basin (southern Ecuador). 21 physical, chemical and microbiological parameters collected at 80 different WQ sampling stations were examined. First, the k-means algorithm was applied to identify classes of sampling stations according to their WQ status, considering three internal validation indexes, i.e., the Silhouette coefficient, Davies-Bouldin and Caliński-Harabasz. As a result, two WQ classes were identified, representing low (C1) and high (C2) pollution. The k-NN/GA algorithm was then applied to the available data to construct a classification model with the two WQ classes, previously defined by the k-means algorithm, as the dependent variable and the 21 physical, chemical and microbiological parameters as the independent ones. This algorithm led to a significant reduction of the multidimensional space of independent variables to only nine, which are likely to explain most of the structure of the two identified WQ classes: electric conductivity, faecal coliforms, dissolved oxygen, chlorides, total hardness, nitrate, total alkalinity, biochemical oxygen demand and turbidity. Further, the land use cover of the study basin agreed very well with the WQ spatial distribution suggested by the k-means algorithm, confirming the credibility of the main results of the WQ data mining approach.
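The first stage of the workflow — clustering the station-by-parameter matrix into low- and high-pollution classes with k-means — can be sketched as below. The data are a synthetic stand-in (80 stations, 3 standardised parameters, two pollution levels), and the clustering is plain Lloyd's algorithm rather than the validated multi-index procedure of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for the WQ matrix: 80 stations x 3 standardised
# parameters, drawn from a low-pollution and a high-pollution group.
low = rng.normal(loc=[-1.0, -1.0, -1.0], scale=0.3, size=(40, 3))
high = rng.normal(loc=[1.0, 1.0, 1.0], scale=0.3, size=(40, 3))
X = np.vstack([low, high])

# Plain k-means (Lloyd's algorithm): assign each station to the nearest
# centre, then move each centre to the mean of its assigned stations.
def kmeans(X, k=2, iters=50):
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centres[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centres = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return labels, centres

labels, centres = kmeans(X)
```

With k = 2 the recovered labels reproduce the low/high split, mirroring the C1/C2 classes that the study then feeds to the supervised k-NN/GA stage.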
Reference dosimetry study for a 3 MeV electron beam accelerator in Malaysia
NASA Astrophysics Data System (ADS)
Ali, Noriah Mod; Sunaga, Hiromi; Tanaka, Ryuichi
1995-09-01
An effective quality assurance programme has been initiated for the use of electron beams with energies up to 3 MeV. The key element of the programme is the establishment of a relationship between the standardised beam and the routine technique employed to verify the beam parameters. A total-absorption calorimeter was adopted as a suitable reference system; used in combination with an electron current density meter (ECD), it enables determination of the mean energy of electrons with energies between 1 and 3 MeV. An appropriate method of transferring the standard parameters is studied, and the work expected to optimise the accuracy attainable with routine checks of the irradiation parameters is presented.
NASA Astrophysics Data System (ADS)
Sui, Yi; Zheng, Ping; Cheng, Luming; Wang, Weinan; Liu, Jiaqi
2017-05-01
A single-phase axially-magnetized permanent-magnet (PM) oscillating machine, which can be integrated with a free-piston Stirling engine to generate electric power, is investigated for miniature aerospace power sources. The machine structure, operating principle and detent force characteristic are studied in detail. With the sinusoidal speed characteristic of the mover taken into account, the proposed machine is designed by 2D finite-element analysis (FEA), and the main structural parameters, such as air-gap diameter, PM dimensions, the pole pitches of both stator and mover, and the pole-pitch combinations, are optimized to improve both the power density and the force capability. Compared with three-phase PM linear machines, the proposed single-phase machine features less PM material, simple control and low controller cost. The power density of the proposed machine is higher than that of the three-phase radially-magnetized PM linear machine, but lower than that of the three-phase axially-magnetized PM linear machine.
Gradient Evolution-based Support Vector Machine Algorithm for Classification
NASA Astrophysics Data System (ADS)
Zulvia, Ferani E.; Kuo, R. J.
2018-03-01
This paper proposes a classification algorithm based on the support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm is widely used in classification; however, its results are significantly influenced by its parameters. This paper therefore proposes an improved SVM algorithm that finds the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to determine the SVM parameters: the GE algorithm acts as a global optimizer, finding the best parameters, which are then used by the SVM. The proposed GE-SVM algorithm is verified on several benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
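The overall pattern — a metaheuristic searching the classifier's hyperparameter space, scored by held-out accuracy — can be sketched as below. To stay self-contained, a kernel ridge classifier stands in for the SVM (it shares the same two hyperparameters: an RBF kernel width and a regularisation strength), and a simple keep-the-best-half evolutionary search stands in for the gradient evolution algorithm; neither is the paper's method.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-class dataset standing in for the benchmark datasets.
X0 = rng.normal(loc=[-1.0, -1.0], scale=0.5, size=(50, 2))
X1 = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([-1.0] * 50 + [1.0] * 50)
idx = rng.permutation(100)
Xtr, ytr = X[idx[:70]], y[idx[:70]]
Xte, yte = X[idx[70:]], y[idx[70:]]

def rbf(A, B, gamma):
    return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

# Kernel ridge classifier: same two hyperparameters (kernel width gamma,
# regularisation lam) as an RBF SVM; scored by hold-out accuracy.
def holdout_accuracy(gamma, lam):
    K = rbf(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    pred = np.sign(rbf(Xte, Xtr, gamma) @ alpha)
    return float((pred == yte).mean())

# Simple evolutionary search over (log10 gamma, log10 lam): keep the best
# half of the population each generation and mutate it.
pop = rng.uniform([-3.0, -3.0], [3.0, 3.0], size=(10, 2))
for _ in range(10):
    scores = np.array([holdout_accuracy(10 ** g, 10 ** l) for g, l in pop])
    parents = pop[np.argsort(scores)[-5:]]
    children = parents + rng.normal(0.0, 0.3, size=parents.shape)
    pop = np.vstack([parents, children])

final_scores = np.array([holdout_accuracy(10 ** g, 10 ** l) for g, l in pop])
best_gamma, best_lam = 10.0 ** pop[np.argmax(final_scores)]
best_accuracy = final_scores.max()
```

Searching in log-space is the key practical choice: kernel width and regularisation strength both vary over orders of magnitude, so uniform mutations in log10 coordinates explore the space far more evenly than mutations on the raw parameters.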