Sample records for process parameter selection

  1. Selective laser melting of Ni-rich NiTi: selection of process parameters and the superelastic response

    NASA Astrophysics Data System (ADS)

    Shayesteh Moghaddam, Narges; Saedi, Soheil; Amerinatanzi, Amirhesam; Saghaian, Ehsan; Jahadakbar, Ahmadreza; Karaca, Haluk; Elahinia, Mohammad

    2018-03-01

    Material and mechanical properties of NiTi shape memory alloys strongly depend on the fabrication process parameters and the resulting microstructure. In selective laser melting, the combination of parameters such as laser power, scanning speed, and hatch spacing determines the microstructural defects, grain size and texture. Processing parameters can therefore be adjusted to tailor the microstructure and mechanical response of the alloy. In this work, NiTi samples were fabricated from Ni50.8Ti (at.%) powder on a Phenix/3D Systems PXM SLM machine, and the effects of the processing parameters were systematically studied. The relationship between the processing parameters and the superelastic properties was investigated thoroughly. It is shown that energy density is not the only parameter that governs the material response, and that hatch spacing is the dominant factor in tailoring the superelastic response. With the right process parameters, perfect superelasticity with recoverable strains of up to 5.6% can be observed in the as-fabricated condition.
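
    The energy density referred to above is conventionally defined as the laser power divided by the product of scan speed, hatch spacing and layer thickness. A minimal sketch, with illustrative parameter values that are not taken from the paper:

```python
# Volumetric energy density commonly used to compare SLM parameter sets:
# E = P / (v * h * t), with laser power P [W], scan speed v [mm/s],
# hatch spacing h [mm], layer thickness t [mm] -> E in J/mm^3.
def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Illustrative values only, not the parameter set used in the paper.
E = energy_density(250.0, 1250.0, 0.12, 0.03)
print(f"{E:.1f} J/mm^3")
```

    Because the same E can be reached by many (P, v, h, t) combinations, the formula alone cannot predict the response, which is consistent with the abstract's point that hatch spacing matters independently of E.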

  2. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.

  3. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex, and the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Because of their optimization and space-searching capabilities, genetic algorithms were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
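
    The input-selection idea above can be sketched as a genetic algorithm over binary masks. The scoring function below is a toy stand-in for training and validating a neural network, and the "relevant" inputs are invented purely for illustration:

```python
import random

random.seed(0)

N_INPUTS = 8
RELEVANT = {0, 2, 5}   # hypothetical truly-informative inputs (invented)

def fitness(mask):
    # Proxy for validation quality of a trained approximator: reward
    # selecting relevant inputs, penalize superfluous ones.
    hits = sum(1 for i in RELEVANT if mask[i])
    extras = sum(mask) - hits
    return hits - 0.3 * extras

def evolve(pop_size=20, generations=30, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_INPUTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_INPUTS)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("selected inputs:", [i for i, bit in enumerate(best) if bit])
```

    In the actual application, evaluating `fitness` would mean training the network on the masked inputs, which is what makes the GA's sample efficiency important.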

  4. Method and apparatus for assessing weld quality

    DOEpatents

    Smartt, Herschel B.; Kenney, Kevin L.; Johnson, John A.; Carlson, Nancy M.; Clark, Denis E.; Taylor, Paul L.; Reutzel, Edward W.

    2001-01-01

    Apparatus for determining a quality of a weld produced by a welding device according to the present invention includes a sensor operatively associated with the welding device. The sensor is responsive to at least one welding process parameter during a welding process and produces a welding process parameter signal that relates to the at least one welding process parameter. A computer connected to the sensor is responsive to the welding process parameter signal produced by the sensor. A user interface operatively associated with the computer allows a user to select a desired welding process. The computer processes the welding process parameter signal produced by the sensor in accordance with one of a constant voltage algorithm, a short duration weld algorithm or a pulsed current analysis module depending on the desired welding process selected by the user. The computer produces output data indicative of the quality of the weld.

  5. Improving tablet coating robustness by selecting critical process parameters from retrospective data.

    PubMed

    Galí, A; García-Montoya, E; Ascaso, M; Pérez-Lozano, P; Ticó, J R; Miñarro, M; Suñé-Negre, J M

    2016-09-01

    Although tablet coating processes are widely used in the pharmaceutical industry, they often lack adequate robustness, and up-scaling can be challenging because minor changes in parameters can lead to varying quality results. The objective was to select critical process parameters (CPP) using retrospective data for a commercial product and to establish a design of experiments (DoE) that would improve the robustness of the coating process. A retrospective analysis was performed on data from 36 commercial batches, selected on the basis of the quality results generated during batch release, some of which revealed quality deviations in the appearance of the coated tablets. The product is already marketed and belongs to the portfolio of a multinational pharmaceutical company. Statgraphics 5.1 software was used for data processing to determine the critical process parameters and to propose new working ranges. This study confirms that it is possible to determine critical process parameters and create design spaces based on retrospective data from commercial batches, converting this type of analysis into a tool for optimizing the robustness of existing processes. Our results show that a design space can be established with minimal investment in experiments, since existing commercial batch data are processed statistically.

  6. Development of Processing Parameters for Organic Binders Using Selective Laser Sintering

    NASA Technical Reports Server (NTRS)

    Mobasher, Amir A.

    2003-01-01

    This document describes rapid prototyping, its relation to Computer Aided Design (CAD), and the application of these techniques to choosing parameters for Selective Laser Sintering (SLS). The document reviews the parameters selected by its author for his project, the SLS machine used, and its software.

  7. A Comparison of the One-, the Modified Three-, and the Three-Parameter Item Response Theory Models in the Test Development Item Selection Process.

    ERIC Educational Resources Information Center

    Eignor, Daniel R.; Douglass, James B.

    This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…

  8. Selection of the most influential factors on the water-jet assisted underwater laser process by adaptive neuro-fuzzy technique

    NASA Astrophysics Data System (ADS)

    Nikolić, Vlastimir; Petković, Dalibor; Lazov, Lyubomir; Milovančević, Miloš

    2016-07-01

    Water-jet assisted underwater laser cutting has shown some advantages, as it produces much less turbulence, gas bubbling and aerosol, resulting in a gentler process. However, this process has relatively low efficiency due to various losses in water, so it is important to determine which parameters are the most important. In this investigation, forecasting of the water-jet assisted underwater laser cutting parameters from different input parameters was analyzed. An adaptive neuro-fuzzy inference system (ANFIS) was applied to the data to select the most influential factors for forecasting the cutting parameters. Three inputs were considered: laser power, cutting speed and water-jet speed. The ANFIS variable-selection procedure was implemented to detect the predominant factors affecting the forecasts. According to the results, the combination of laser power and cutting speed is the most influential for predicting the water-jet assisted underwater laser cutting parameters. The best prediction was observed for the bottom kerf width (R2 = 0.9653) and the worst for dross area per unit length (R2 = 0.6804). The results also show that a greater improvement in estimation accuracy can be achieved by removing the unnecessary parameter.
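
    The abstract ranks predictions by the coefficient of determination R2. A minimal sketch of that goodness-of-fit measure, with invented observed/predicted values:

```python
# R^2 = 1 - SS_res / SS_tot, the measure used above to rank input combinations.
def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Invented data, purely for illustration.
obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(obs, pred), 4))  # -> 0.98
```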

  9. The impact of experimental measurement errors on long-term viscoelastic predictions. [of structural materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1986-01-01

    The impact of errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicate that long-term predictions are most sensitive to errors in the power-law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical of T300/5208 graphite-epoxy at 149 C. The selection process is described, and its individual steps are itemized.
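
    The dominance of the power-law exponent n at long times can be illustrated with a generic power-law compliance D(t) = D0 + D1 t^n: an error dn multiplies the transient term by t^dn, which grows with t. The constants below are arbitrary illustrative numbers, not T300/5208 data:

```python
# Arbitrary illustrative constants (not material data).
D0, D1, n = 1.0, 0.1, 0.2

def compliance(t, dn=0.0, dD1_rel=0.0):
    # Power-law creep compliance with optional errors in n and D1.
    return D0 + D1 * (1 + dD1_rel) * t ** (n + dn)

for t in (1e2, 1e6):
    err_n = abs(compliance(t, dn=0.02) - compliance(t)) / compliance(t)
    err_D1 = abs(compliance(t, dD1_rel=0.02) - compliance(t)) / compliance(t)
    print(f"t={t:.0e}: rel. error from 2% error in n = {err_n:.3f}, in D1 = {err_D1:.4f}")
```

    The same 2% perturbation produces a prediction error that grows by an order of magnitude between the two times when applied to n, but stays small when applied to D1, mirroring the abstract's conclusion.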

  10. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    NASA Astrophysics Data System (ADS)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extensively high-strength materials that are difficult to cut by any other manufacturing process. It uses a highly energized plasma arc to cut any conducting material with good dimensional accuracy in less time. This research work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel, a material used very extensively in manufacturing industries, of 10 mm thickness was taken as the workpiece. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach, with three levels selected for each process parameter. In all experiments, a clockwise cut direction was followed. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed on the measurements to evaluate the effect of each process parameter. The ANOVA reveals the effect of each input process parameter on the linear dimension along the X axis, and the analysis yields the optimal process parameter settings for that dimension. The results clearly show that a specific range of the input process parameters achieves improved machinability.
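
    The analysis of means (ANOM) step can be sketched as averaging the measured response at each level of each factor. The runs below are a four-run toy array with invented responses (the study itself used an L16 array with three levels):

```python
# Each run: (arc_voltage_level, standoff_level, speed_level, response).
# All values invented for illustration.
runs = [
    (1, 1, 1, 9.98), (1, 2, 2, 10.05), (2, 1, 2, 10.12), (2, 2, 1, 10.03),
]

def level_means(factor_index):
    # ANOM: group responses by the factor's level and average each group.
    levels = {}
    for run in runs:
        levels.setdefault(run[factor_index], []).append(run[-1])
    return {lvl: sum(vals) / len(vals) for lvl, vals in levels.items()}

for name, idx in [("arc voltage", 0), ("standoff", 1), ("cutting speed", 2)]:
    print(name, level_means(idx))
```

    The factor whose level means differ most is the most influential one; ANOVA then quantifies whether that difference is statistically significant.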

  11. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
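
    Silverman's rule of thumb, which the abstract proposes adapting, sets the bandwidth from the sample size and spread: h = 0.9 min(s, IQR/1.34) n^(-1/5). A sketch with invented score data:

```python
import statistics

def silverman_bandwidth(data):
    # h = 0.9 * min(sample sd, IQR/1.34) * n^(-1/5)
    n = len(data)
    sd = statistics.stdev(data)
    q = statistics.quantiles(data, n=4)   # [Q1, Q2, Q3]
    iqr = q[2] - q[0]
    return 0.9 * min(sd, iqr / 1.34) * n ** (-1 / 5)

# Invented test scores, purely for illustration.
scores = [12, 15, 15, 16, 18, 19, 21, 22, 24, 27]
print(round(silverman_bandwidth(scores), 3))
```

    Unlike the penalty-function minimization criticized in the abstract, this closed-form rule requires no iterative search.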

  12. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    NASA Astrophysics Data System (ADS)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend in automation of the modern manufacturing industry, human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in selecting the optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of an appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting tool expert based on their knowledge or an extensive search through a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks and tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. The intelligence-based optimal tool selection strategy was developed and implemented using Mathworks Matlab Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for selection of the appropriate cutting tool and optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  13. Experimental Research on Selective Laser Melting AlSi10Mg Alloys: Process, Densification and Performance

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Wei, Zhengying; Wei, Pei; Chen, Shenggui; Lu, Bingheng; Du, Jun; Li, Junfeng; Zhang, Shuzhe

    2017-12-01

    In this work, a set of experiments was designed to investigate the effect of process parameters on the relative density of the AlSi10Mg parts manufactured by SLM. The influence of laser scan speed v, laser power P and hatch space H, which were considered as the dominant parameters, on the powder melting and densification behavior was also studied experimentally. In addition, the laser energy density was introduced to evaluate the combined effect of the above dominant parameters, so as to control the SLM process integrally. As a result, a high relative density (> 97%) was obtained by SLM at an optimized laser energy density of 3.5-5.5 J/mm2. Moreover, a parameter-densification map was established to visually select the optimum process parameters for the SLM-processed AlSi10Mg parts with elevated density and required mechanical properties. The results provide an important experimental guidance for obtaining AlSi10Mg components with full density and gradient functional porosity by SLM.
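
    The combined parameter used above is the areal laser energy density E = P/(vH), and the reported optimized window is 3.5-5.5 J/mm2. A sketch with illustrative parameter values that are not taken from the paper:

```python
# Areal energy density for SLM: E = P / (v * H) in J/mm^2,
# with power P [W], scan speed v [mm/s], hatch space H [mm].
def areal_energy_density(power_w, speed_mm_s, hatch_mm):
    return power_w / (speed_mm_s * hatch_mm)

def in_window(e, low=3.5, high=5.5):
    # Optimized window reported in the abstract (J/mm^2).
    return low <= e <= high

# Illustrative values only, not the study's parameter set.
E = areal_energy_density(350.0, 1150.0, 0.07)
print(f"E = {E:.2f} J/mm^2, in optimized window: {in_window(E)}")
```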

  14. Effect of Electron Beam Freeform Fabrication (EBF3) Processing Parameters on Composition of Ti-6-4

    NASA Technical Reports Server (NTRS)

    Lach, Cynthia L.; Taminger, Karen; Schuszler, A. Bud, II; Sankaran, Sankara; Ehlers, Helen; Nasserrafi, Rahbar; Woods, Bryan

    2007-01-01

    The Electron Beam Freeform Fabrication (EBF3) process developed at NASA Langley Research Center was evaluated using a design of experiments approach to determine the effect of processing parameters on the composition and geometry of Ti-6-4 deposits. The effects of three processing parameters: beam power, translation speed, and wire feed rate, were investigated by varying one while keeping the remaining parameters constant. A three-factorial, three-level, fully balanced mutually orthogonal array (L27) design of experiments approach was used to examine the effects of low, medium, and high settings for the processing parameters on the chemistry, geometry, and quality of the resulting deposits. Single bead high deposits were fabricated and evaluated for 27 experimental conditions. Loss of aluminum in Ti-6-4 was observed in EBF3 processing due to selective vaporization of the aluminum from the sustained molten pool in the vacuum environment; therefore, the chemistries of the deposits were measured and compared with the composition of the initial wire and base plate to determine if the loss of aluminum could be minimized through careful selection of processing parameters. The influence of processing parameters and coupling between these parameters on bulk composition, measured by Direct Current Plasma (DCP), local microchemistries determined by Wavelength Dispersive Spectrometry (WDS), and deposit geometry will also be discussed.

  15. Investigation into the influence of laser energy input on selective laser melted thin-walled parts by response surface method

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Zhang, Jian; Pang, Zhicong; Wu, Weihui

    2018-04-01

    Selective laser melting (SLM) provides a feasible way to manufacture complex thin-walled parts directly. However, the energy input during the SLM process, derived from the laser power, scanning speed, layer thickness, scanning space, etc., has a great influence on thin-wall quality. The aim of this work is to relate the thin wall's responses, namely track width, surface roughness and hardness, to the process parameters considered in this research (laser power, scanning speed and layer thickness) and to find the optimal manufacturing conditions. A central composite design of experiments (DoE) was used to achieve better manufacturing quality. Mathematical models derived from the statistical analysis were used to establish the relationships between the process parameters and the responses, and the effects of the process parameters on each response were determined. A numerical optimization was then performed to find the process settings at which the quality features reach their desired values. Based on this study, the relationship between the process parameters and the SLM-built thin-walled structure was revealed, and the corresponding optimal process parameters can be used to manufacture thin-walled parts of high quality.

  16. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    NASA Astrophysics Data System (ADS)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). A D-optimal design of experiments (DoE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1 × 1 × 1 cm), we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model is constructed of the SLM process and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before making any printed samples. Establishing a parameter window gives the user freedom in parameter selection, such as choosing the parameters that result in the fastest print speed.

  17. An Attempt of Formalizing the Selection Parameters for Settlements Generalization in Small-Scales

    NASA Astrophysics Data System (ADS)

    Karsznia, Izabela

    2014-12-01

    The paper covers one of the most important problems concerning context-sensitive settlement selection for small-scale maps. So far, no formal parameters for small-scale settlement generalization have been specified, so the problem is an important and innovative challenge. It is also crucial from a practical point of view, as appropriate generalization algorithms are needed for generalizing the General Geographic Objects Database, an essential Spatial Data Infrastructure component in Poland. The author proposes and verifies quantitative generalization parameters for the settlement selection process in small-scale maps. The selection of settlements was carried out in two research areas, Lower Silesia and Łódź Province. Based on the conducted analysis, appropriate context-sensitive settlement selection parameters have been defined. Particular effort has been made to develop a methodology of quantitative settlement selection that would be useful in automation processes and that would keep the specifics of the generalized objects unchanged.

  18. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    NASA Astrophysics Data System (ADS)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.

  19. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
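
    The second step described above compares calibrations that re-estimate different parameter subsets using the Akaike information criterion, AIC = 2k - 2 ln L, keeping the lowest. A sketch with invented log-likelihoods:

```python
import math  # kept for completeness; ln L values are supplied directly below

def aic(n_params, log_likelihood):
    # AIC = 2k - 2 ln L: penalizes extra parameters against fit quality.
    return 2 * n_params - 2 * log_likelihood

# Subset name -> (number of re-estimated parameters, ln L). Invented values.
candidates = {
    "3 params": (3, -120.4),
    "5 params": (5, -119.8),
    "11 params": (11, -118.9),
}

scores = {name: aic(k, ll) for name, (k, ll) in candidates.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 1))
```

    With these invented numbers the small gain in likelihood from fitting more parameters does not pay for the complexity penalty, which is the same logic that let the study recalibrate with only three parameters.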

  20. Evaluating Acoustic Emission Signals as an in situ process monitoring technique for Selective Laser Melting (SLM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, Karl A.; Candy, Jim V.; Guss, Gabe

    2016-10-14

    In situ real-time monitoring of the Selective Laser Melting (SLM) process has significant implications for the AM community. The ability to adjust the SLM process parameters during a build (in real time) can save time and money and eliminate expensive material waste. A feedback loop in the process would allow the system to potentially ‘fix’ problem regions before the next powder layer is added. In this study we investigated acoustic emission (AE) phenomena generated during the SLM process and evaluated the results, in terms of a single process parameter, as a candidate in situ process monitoring technique.

  1. Parameters in selective laser melting for processing metallic powders

    NASA Astrophysics Data System (ADS)

    Kurzynowski, Tomasz; Chlebus, Edward; Kuźnicka, Bogumiła; Reiner, Jacek

    2012-03-01

    The paper presents results of studies on Selective Laser Melting. SLM is an additive manufacturing technology which may be used to process almost all metallic materials in powder form. The energy sources, mainly fiber lasers and Nd:YAG lasers with similar characteristics and a wavelength of 1.06-1.08 microns, are suited primarily to processing metallic powder materials with high absorption of laser radiation. The paper presents results for selected variable parameters (laser power, scanning time, scanning strategy) and fixed parameters such as the protective atmosphere (argon, nitrogen, helium), temperature, and the type and shape of the powder material. The thematic scope is very broad, so the work focused on optimizing the selective laser micrometallurgy process for producing fully dense parts. Density is closely linked with two other conditions: discontinuity of the microstructure (microcracks) and stability (repeatability) of the process. The materials used for the research were stainless steel 316L (AISI), tool steel H13 (AISI), and the titanium alloy Ti6Al7Nb (ISO 5832-11). Studies were performed with a scanning electron microscope, a light microscope, a confocal microscope and a μCT scanner.

  2. On selecting satellite conjunction filter parameters

    NASA Astrophysics Data System (ADS)

    Alfano, Salvatore; Finkleman, David

    2014-06-01

    This paper extends concepts of signal detection theory to predict the performance of conjunction screening techniques and to guide the selection of keepout and screening thresholds. The most efficient way to identify satellites likely to collide is to employ filters that eliminate orbiting pairs which should not come close enough over a prescribed time period to be considered hazardous. Such pairings can then be excluded from further computation to accelerate overall processing time. Approximations inherent in filtering techniques include screening using only unperturbed Newtonian two-body astrodynamics, as well as uncertainties in orbit elements. Therefore, every filtering process is vulnerable to including objects that are not threats and excluding some that are, i.e., Type I and Type II errors. The approach in this paper guides selection of the best operating point for the filters suited to a user's tolerance for false alarms and unwarned threats. We demonstrate the approach using three archetypal filters over an initial three-day span, select filter parameters based on performance, and then test those parameters using eight historical snapshots of the space catalog. This work provides a mechanism for selecting filter parameters, but the choices depend on the circumstances.
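
    The Type I/Type II trade-off described above can be illustrated by sweeping a keepout threshold over predicted miss distances; all numbers below are invented for illustration:

```python
# Invented predicted miss distances (km) for true threats and non-threats.
threats =     [0.8, 1.2, 2.0, 3.1]
non_threats = [2.5, 4.0, 5.5, 6.0, 8.2]

def error_rates(threshold):
    # A pair is flagged when its predicted miss distance <= threshold.
    type_ii = sum(d > threshold for d in threats) / len(threats)          # missed threats
    type_i = sum(d <= threshold for d in non_threats) / len(non_threats)  # false alarms
    return type_i, type_ii

for thr in (2.0, 4.0, 6.0):
    t1, t2 = error_rates(thr)
    print(f"threshold {thr} km: Type I = {t1:.2f}, Type II = {t2:.2f}")
```

    Raising the threshold drives missed threats toward zero at the cost of more false alarms; the paper's operating-point selection is exactly a choice along this curve.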

  3. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies have a significant role in investigating process variation, which is important in achieving product quality. Capability indices measure the inherent variability of a process and thus help improve its performance radically. The main objective of this paper is to assess whether the soft drinks processing unit of a premier brand marketed in India produces within specification. A few selected critical parameters in soft drinks processing, namely gas volume concentration, brix concentration and crock torque, were considered for this study, and relevant statistical measures of short-term and long-term capability were assessed from a process capability indices perspective. Real-time data were used from a soft drinks bottling company located in the state of Chhattisgarh, India. The analysis suggested reasons for variations in the process, which were validated using ANOVA, and a Taguchi loss function was used to estimate the monetary cost of waste; the organization can use these results to improve its process parameters. This research work has substantially benefitted the organization in understanding the variation of the selected critical parameters in pursuit of zero rejection.
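
    The short-term capability indices mentioned above are conventionally Cp = (USL - LSL)/(6σ) and Cpk, which also accounts for centering. The specification limits and sample statistics below are invented, not the plant's data:

```python
def cp(usl, lsl, sigma):
    # Potential capability: spec width over 6-sigma process spread.
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    # Actual capability: distance from the mean to the nearer limit.
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical spec limits and sample statistics for a gas-volume parameter.
usl, lsl = 4.2, 3.8
mean, sigma = 4.05, 0.05
print(round(cp(usl, lsl, sigma), 3), round(cpk(usl, lsl, mean, sigma), 3))
```

    Cpk < Cp signals an off-center process: here the spread alone would be capable, but the shifted mean eats into the margin on the upper side.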

  4. Terrestrial photovoltaic cell process testing

    NASA Technical Reports Server (NTRS)

    Burger, D. R.

    1985-01-01

    The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.

  5. Terrestrial photovoltaic cell process testing

    NASA Astrophysics Data System (ADS)

    Burger, D. R.

    The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.

  6. Distribution and avoidance of debris on epoxy resin during UV ns-laser scanning processes

    NASA Astrophysics Data System (ADS)

    Veltrup, Markus; Lukasczyk, Thomas; Ihde, Jörg; Mayer, Bernd

    2018-05-01

    In this paper the distribution of debris generated by a nanosecond UV laser (248 nm) on epoxy resin and the prevention of the corresponding re-deposition effects by parameter selection for a ns-laser scanning process were investigated. In order to understand the mechanisms behind the debris generation, in-situ particle measurements were performed during laser treatment. These measurements enabled the determination of the ablation threshold of the epoxy resin as well as the particle density and size distribution in relation to the applied laser parameters. The experiments showed that it is possible to reduce debris on the surface with an adapted selection of pulse overlap with respect to laser fluence. A theoretical model for the parameter selection was developed and tested. Based on this model, the correct choice of laser parameters with reduced laser fluence resulted in a surface without any re-deposited micro-particles.

  7. Extending the Peak Bandwidth of Parameters for Softmax Selection in Reinforcement Learning.

    PubMed

    Iwata, Kazunori

    2016-05-11

    Softmax selection is one of the most popular methods for action selection in reinforcement learning. Although various recently proposed methods may be more effective with full parameter tuning, implementing a complicated method that requires the tuning of many parameters can be difficult. Thus, softmax selection is still worth revisiting, considering the cost savings of its implementation and tuning. In fact, this method works adequately in practice with only one parameter appropriately set for the environment. The aim of this paper is to improve the variable setting of this method to extend the bandwidth of good parameters, thereby reducing the cost of implementation and parameter tuning. To achieve this, we take advantage of the asymptotic equipartition property in a Markov decision process to extend the peak bandwidth of softmax selection. Using a variety of episodic tasks, we show that our setting is effective in extending the bandwidth and that it yields a better policy in terms of stability. The bandwidth is quantitatively assessed in a series of statistical tests.
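
    The softmax (Boltzmann) selection rule discussed above can be sketched as follows; the Q-values and temperature are illustrative, and the single parameter to tune is the temperature tau.

```python
import math
import random

def softmax_select(q_values, tau=1.0, rng=random):
    """Boltzmann (softmax) action selection: higher-valued actions are chosen
    more often; tau controls exploration (tau -> 0 approaches greedy)."""
    # subtract the max for numerical stability before exponentiating
    m = max(q_values)
    exp_q = [math.exp((q - m) / tau) for q in q_values]
    z = sum(exp_q)
    probs = [e / z for e in exp_q]
    r, acc = rng.random(), 0.0
    for a, p in enumerate(probs):
        acc += p
        if r < acc:
            return a, probs
    return len(q_values) - 1, probs  # guard against rounding

action, probs = softmax_select([1.0, 2.0, 0.5], tau=0.5)
```

    Extending the bandwidth of good settings, as the paper proposes, amounts to making performance less sensitive to the exact value of tau.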

  8. Investigating the CO2 laser cutting parameters of MDF wood composite material

    NASA Astrophysics Data System (ADS)

    Eltawahni, H. A.; Olabi, A. G.; Benyounis, K. Y.

    2011-04-01

    Laser cutting of medium density fibreboard (MDF) is a complicated process, and the selection of the process parameter combination is essential to obtain the highest-quality cut section. This paper presents a means of selecting the process parameters for laser cutting of MDF based on the design of experiments (DOE) approach. A CO2 laser was used to cut three thicknesses, 4, 6 and 9 mm, of MDF panels. The process factors investigated are laser power, cutting speed, air pressure and focal point position. In this work, cutting quality was evaluated by measuring the upper kerf width, the lower kerf width, the ratio of the upper kerf width to the lower kerf width, the roughness of the cut section and the operating cost. The effect of each factor on the quality measures was determined. The optimal cutting combinations were presented in favour of high-quality process output and low cutting cost.

  9. Optimizing the availability of a buffered industrial process

    DOEpatents

    Martz, Jr., Harry F.; Hamada, Michael S.; Koehler, Arthur J.; Berg, Eric C.

    2004-08-24

    A computer-implemented process determines optimum configuration parameters for a buffered industrial process. A population is initialized by randomly selecting a first set of design and operation values associated with the subsystems and buffers of the buffered industrial process to form a set of operating parameters for each member of the population. An availability discrete event simulation (ADES) is performed on each member of the population to determine the product-based availability of each member. A new population is formed having members with a second set of design and operation values related to the first set through a genetic algorithm and the product-based availability determined by the ADES. Subsequent population members are then determined by iterating the genetic algorithm with the product-based availability determined by ADES to form improved design and operation values, from which the configuration parameters are selected for the buffered industrial process.
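
    The evaluate-then-evolve loop described above can be sketched as follows; the availability evaluation here is a stand-in stub for the ADES step, and the mutation-only update is a deliberate simplification of a full crossover-plus-mutation genetic algorithm.

```python
import random

def evaluate_availability(params):
    """Stand-in for the ADES step: score a (buffer_size, repair_rate) pair.
    A real ADES would simulate failures, repairs and buffer drains."""
    buffer_size, repair_rate = params
    return min(1.0, 0.5 + 0.01 * buffer_size + 0.2 * repair_rate)

def genetic_step(population, rng):
    """One generation: rank by availability, keep the best half (elitism),
    refill by mutating the survivors."""
    ranked = sorted(population, key=evaluate_availability, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = [(max(0, b + rng.randint(-2, 2)),
                 max(0.0, r + rng.uniform(-0.05, 0.05)))
                for b, r in survivors]
    return survivors + children

rng = random.Random(0)
population = [(rng.randint(0, 20), rng.uniform(0.0, 1.0)) for _ in range(8)]
for _ in range(10):
    population = genetic_step(population, rng)
best = max(population, key=evaluate_availability)
```

    Because the best half always survives, the best availability found never degrades between generations.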

  10. Statistical analysis of porosity of 17-4PH alloy processed by selective laser melting

    NASA Astrophysics Data System (ADS)

    Ponnusamy, P.; Masood, S. H.; Ruan, D.; Palanisamy, S.; Mohamed, O. A.

    2017-07-01

    Selective Laser Melting (SLM) is a powder-bed type Additive Manufacturing (AM) process, where parts are built layer-by-layer by laser melting of powder layers of metal. There are several SLM process parameters that affect the accuracy and quality of the metal parts produced by SLM. Therefore, it is essential to understand the effect of these parameters on the quality and properties of the parts built by this process. In this paper, using a Taguchi design of experiments, the effect of four SLM process parameters, namely laser power, defocus distance, layer thickness and build orientation, on the porosity of 17-4PH stainless steel parts built on a ProX200 SLM direct metal printer is considered. The porosity was found to be optimal (lowest) at a defocus distance of -4 mm and a laser power of 240 W, with a layer thickness of 30 μm and a vertical build orientation.

  11. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion used to measure the goodness of fit (a likelihood or any objective function) is an essential step in all of these methodologies and will affect the finally selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure, so in the course of the modelling process an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user then selects the two parameters to be visualised, an objective function, and a time period of interest. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations with a colour scale corresponding to the goodness of fit of each combination. Finally, a slider is available to change the colour mapping of the points: it provides a threshold to exclude non-behavioural parameter sets, and the colour scale is attributed only to the remaining sets. By interactively changing the settings and interpreting the graph, the user gains insight into the structural behaviour of the model, and a more deliberate choice of objective function and of periods with high information content can be made. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community, illustrating the power of the IPython notebook as a development environment for scientific computing (Shen, 2014).

  12. Process for depositing epitaxial alkaline earth oxide onto a substrate and structures prepared with the process

    DOEpatents

    McKee, Rodney A.; Walker, Frederick J.

    1996-01-01

    A process and structure involving a silicon substrate utilize molecular beam epitaxy (MBE) and/or electron beam evaporation methods and an ultra-high vacuum facility to grow a layup of epitaxial alkaline earth oxide films upon the substrate surface. By selecting metal constituents for the oxides, in the appropriate proportions, so that the lattice parameter of each oxide grown closely approximates that of the substrate or base layer upon which the oxide is grown, lattice strain at the film/film or film/substrate interface of adjacent films is appreciably reduced or relieved. Moreover, by selecting constituents for the oxides so that the lattice parameters of the materials of adjacent oxide films either increase or decrease in size from one parameter to another, a graded layup of films can be grown (with reduced strain levels therebetween) so that the outer film has a lattice parameter which closely approximates that of, and thus accommodates the epitaxial growth of, a perovskite chosen to be grown upon the outer film.

  13. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    PubMed

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-06-25

    Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation System (INS) based on the inertial reference frame are discussed in this paper. Both are based on gravity vector integration; therefore, the performance of these algorithms is determined by the integration time. In previous works, the integration time was selected based on experience. In order to provide a criterion for the selection process, and to make the selection of the integration time more accurate, an optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on an analysis of the error characteristics of these two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that under different operational conditions different coarse alignment algorithms should be adopted for FOG INS in order to achieve better performance. Lastly, the experimental results validate the effectiveness of the proposed algorithm.

  14. Equilibrium Noise in Ion Selective Field Effect Transistors.

    DTIC Science & Technology

    1982-07-21

    face. These parameters have been evaluated for several ion-selective membranes. ... the dependence of the "integrated circuit" noise on the processing parameters, which were different for the two laboratories. This variability in the "integrated circuit" ... systems and is useful in the identification of the parameters limiting the performance of these systems. In thermodynamic equilibrium, every

  15. Warpage optimization on a mobile phone case using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Lee, X. N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Shazzuan, S.

    2017-09-01

    Plastic injection moulding is a popular manufacturing method not only because it is reliable, but also because it is efficient and cost-saving. It is able to produce plastic parts with detailed features and complex geometry. However, defects in the injection moulding process degrade the quality and aesthetics of the injection moulded product. The most common defect occurring in the process is warpage. Inappropriate process parameter settings of the injection moulding machine are one of the reasons for the occurrence of warpage. The aims of this study were to improve the quality of the injection moulded part by investigating the optimal parameters for minimizing warpage using Response Surface Methodology (RSM). Subsequently, the most significant parameter was identified, and the recommended parameter setting was compared with the parameter setting optimized using RSM. In this research, a mobile phone case was selected as the case study. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variables, and warpage in the y-direction was selected as the response. The simulation was carried out using Autodesk Moldflow Insight 2012, and the RSM was performed using Design Expert 7.0. The warpage in the y-direction under the settings recommended by RSM was reduced by 70%. RSM performed well in solving the warpage issue.

  16. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate these uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. 
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. 
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
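
    The accept/reject core that adaptive samplers such as DRAM and DREAM build upon can be sketched as a plain random-walk Metropolis sampler; the standard-normal target below is illustrative, not the HIV or heat model posterior.

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler. DRAM adds delayed rejection
    and proposal adaptation, and DREAM adds differential-evolution proposals,
    on top of this same accept/reject step."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

# Target: standard normal log-posterior (illustrative)
chain = metropolis(lambda t: -0.5 * t * t, x0=0.0)
mean = sum(chain) / len(chain)
```

    Comparing such sampled densities against a direct numerical evaluation of Bayes' formula is exactly the kind of verification the dissertation describes.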

  17. Examining Mechanical Strength Characteristics of Selective Inhibition Sintered HDPE Specimens Using RSM and Desirability Approach

    NASA Astrophysics Data System (ADS)

    Rajamani, D.; Esakki, Balasubramanian

    2017-09-01

    Selective inhibition sintering (SIS) is a powder-based additive manufacturing (AM) technique for producing functional parts with an inexpensive system compared with other AM processes. The mechanical properties of SIS-fabricated parts depend strongly on various process parameters, most importantly layer thickness, heat energy, heater feedrate and printer feedrate. In this paper, the influence of these process parameters on mechanical properties such as tensile and flexural strength is examined using Response Surface Methodology (RSM). The test specimens are fabricated using high-density polyethylene (HDPE), and mathematical models are developed to correlate the control factors with the respective experimental responses. Further, optimal SIS process parameters are determined using the desirability approach to enhance the mechanical properties of the HDPE specimens. The optimization studies reveal that a combination of high heat energy, low layer thickness, and medium heater and printer feedrates yields superior mechanical strength characteristics.
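
    The desirability approach mentioned above can be sketched with Derringer-Suich-style individual desirabilities combined by a geometric mean; the strength values and ranges below are illustrative, not the paper's data.

```python
def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    """Derringer-Suich 'larger-is-better' desirability: 0 below y_min,
    1 above y_max, a power-law ramp in between."""
    if y <= y_min:
        return 0.0
    if y >= y_max:
        return 1.0
    return ((y - y_min) / (y_max - y_min)) ** weight

def overall_desirability(ds):
    """Geometric mean combines the responses; any zero vetoes the setting."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Illustrative: tensile and flexural strength, both larger-is-better
d_tensile = desirability_larger_is_better(32.0, y_min=20.0, y_max=40.0)   # 0.6
d_flexural = desirability_larger_is_better(45.0, y_min=30.0, y_max=50.0)  # 0.75
D = overall_desirability([d_tensile, d_flexural])
```

    The optimizer then searches the RSM models for the parameter combination that maximises D.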

  18. Determination of Process Parameters for High-Density, Ti-6Al-4V Parts Using Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamath, C.

    In our earlier work, we described an approach for determining the process parameters that result in high-density parts manufactured using the additive-manufacturing process of selective laser melting (SLM). Our approach, which combines simple simulations and experiments, was demonstrated using 316L stainless steel. We have also used the approach successfully for several other materials. This short note summarizes the results of our work in determining process parameters for Ti-6Al-4V using a Concept Laser M2 system.

  19. The design and development of transonic multistage compressors

    NASA Technical Reports Server (NTRS)

    Ball, C. L.; Steinke, R. J.; Newman, F. A.

    1988-01-01

    The development of the transonic multistage compressor is reviewed. Changing trends in design and performance parameters are noted. These changes are related to advances in compressor aerodynamics, computational fluid mechanics and other enabling technologies. The parameters normally given to the designer and those that need to be established during the design process are identified. Criteria and procedures used in the selection of these parameters are presented. The selection of tip speed, aerodynamic loading, flowpath geometry, incidence and deviation angles, blade/vane geometry, blade/vane solidity, stage reaction, aerodynamic blockage, inlet flow per unit annulus area, stage/overall velocity ratio, and aerodynamic losses are considered. Trends in these parameters both spanwise and axially through the machine are highlighted. The effects of flow mixing and methods for accounting for the mixing in the design process are discussed.

  20. Experiments for practical education in process parameter optimization for selective laser sintering to increase workpiece quality

    NASA Astrophysics Data System (ADS)

    Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas

    2016-04-01

    Selective Laser Sintering (SLS) is considered one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However, the influence of the different process parameters on mechanical workpiece properties is still poorly studied, so further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments were implemented to improve the understanding of the limitations and advantages of SLS on an educational level. The experiments are based on two different workstations used to teach students the fundamentals of SLS. First, a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the material under varied process parameters, analysed on a single-layered test piece. Second, the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces as a function of various process parameters. Finally, quality attributes are tested, including warpage, dimensional accuracy and tensile strength. For dimension measurements and evaluation of the surface structure, a telecentric lens in combination with a camera is used. A tensile test machine allows testing of the tensile strength and interpretation of the stress-strain curves. The developed laboratory experiments are suitable for teaching students the influence of the processing parameters. In this context, the students will be able to optimize the input parameters depending on the component to be manufactured and to increase the overall quality of the final workpiece.

  1. An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography

    NASA Technical Reports Server (NTRS)

    Ivancic, Will (Technical Monitor); Eddy, Wesley M.

    2005-01-01

    Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
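
    The doubling hazard described above is easy to see in affine coordinates, where the tangent slope divides by 2y: when y = 0 (an order-2 point), a naive implementation divides by zero, while a correct one returns the point at infinity. A minimal sketch over a small toy curve (not a standardized parameter set):

```python
def ec_double(point, a, p):
    """Affine doubling on y^2 = x^3 + a*x + b over GF(p).
    The tangent slope (3x^2 + a) / (2y) is undefined when y == 0, which is
    exactly the case a careless implementation mishandles: 2P is then the
    point at infinity (represented here as None)."""
    if point is None:        # point at infinity doubles to itself
        return None
    x, y = point
    if y % p == 0:           # order-2 point: doubling must give infinity
        return None
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return (x3, y3)

# Toy curve y^2 = x^3 + 2x + 3 over GF(97); (3, 6) lies on the curve
q = ec_double((3, 6), a=2, p=97)
```

    Choosing domain parameters whose curves avoid such degenerate inputs in normal operation, or specifying the special cases explicitly, is the interoperability criterion at issue.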

  2. Selective attention and the "Asynchrony Theory" in native Hebrew-speaking adult dyslexics: Behavioral and ERPs measures.

    PubMed

    Menashe, Shay

    2017-01-01

    The main aim of the present study was to determine whether adult dyslexic readers demonstrate the "Asynchrony Theory" (Breznitz [Reading Fluency: Synchronization of Processes, Lawrence Erlbaum and Associates, Mahwah, NJ, USA, 2006]) when selective attention is studied. Event-related potentials (ERPs) and behavioral parameters were collected from a group of nonimpaired readers and a group of dyslexic readers performing alphabetic and nonalphabetic tasks. The dyslexic readers were found to demonstrate asynchrony between the auditory and visual modalities when processing alphabetic stimuli. These findings were observed for both behavioral and ERP parameters. Unlike the dyslexic readers, the nonimpaired readers showed synchronized speed of processing in the auditory and visual modalities while processing alphabetic stimuli. The current study suggests that established reading depends on synchronization between the auditory and visual modalities even when it comes to selective attention.

  3. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and are often hampered by local minima problems. In this paper a new, straightforward and automatic procedure, based on the response surface method (RSM), is proposed for selecting the best identifiable parameters. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. In this paper, however, RSM is used for selecting the dominant parameters by evaluating parameter sensitivity in a predefined region. Good results obtained in the calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.

  4. A Computational Study on Porosity Evolution in Parts Produced by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Tan, J. L.; Tang, C.; Wong, C. H.

    2018-06-01

    Selective laser melting (SLM) is a powder-bed additive manufacturing process that uses laser to melt powders, layer by layer to generate a functional 3D part. There are many different parameters, such as laser power, scanning speed, and layer thickness, which play a role in determining the quality of the printed part. These parameters contribute to the energy density applied on the powder bed. Defects arise when insufficient or excess energy density is applied. A common defect in these cases is the presence of porosity. This paper studies the formation of porosities when inappropriate energy densities are used. A computational model was developed to simulate the melting and solidification process of SS316L powders in the SLM process. Three different sets of process parameters were used to produce 800-µm-long melt tracks, and the characteristics of the porosities were analyzed. It was found that when low energy density parameters were used, the pores were found to be irregular in shapes and were located near the top surface of the powder bed. However, when high energy density parameters were used, the pores were either elliptical or spherical in shapes and were usually located near the bottom of the keyholes.
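
    The energy density referred to above is commonly computed as a volumetric quantity E = P / (v * h * t) from laser power, scanning speed, hatch spacing and layer thickness; the values below are illustrative, not the study's settings.

```python
def energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density E = P / (v * h * t) in J/mm^3, a common
    lumped measure of the energy delivered to the powder bed."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Illustrative SLM settings (hypothetical, not from the paper)
e = energy_density(power_w=240, scan_speed_mm_s=1000,
                   hatch_mm=0.1, layer_mm=0.03)
# 240 / (1000 * 0.1 * 0.03) = 80 J/mm^3
```

    Too low an E leaves unmelted powder and irregular lack-of-fusion pores; too high an E drives keyholing and the rounded pores the paper describes near keyhole bottoms.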

  5. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    PubMed

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as improved process control, reduced process costs and improved product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimal parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing itself to be an important procedure. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
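
    A simple local stand-in for such an identifiability analysis is to inspect finite-difference sensitivity columns: a near-zero (or collinear) column flags a parameter the outputs cannot pin down. The toy model below is illustrative, not the fermentation model.

```python
def finite_diff_sensitivities(model, params, eps=1e-6):
    """Finite-difference sensitivity of each model output to each parameter.
    Near-zero or mutually collinear columns flag poorly identifiable
    parameters (a local stand-in for a full identifiability analysis)."""
    base = model(params)
    cols = []
    for j, pj in enumerate(params):
        bumped = list(params)
        bumped[j] = pj + eps * max(1.0, abs(pj))
        out = model(bumped)
        h = bumped[j] - pj
        cols.append([(o - b) / h for o, b in zip(out, base)])
    return cols  # cols[j][i] = d(output_i) / d(param_j)

# Toy model in which p[2] never affects the outputs -> unidentifiable
toy = lambda p: [p[0] + p[1], p[0] * p[1], p[0] - p[1]]
S = finite_diff_sensitivities(toy, [2.0, 3.0, 5.0])
norm = lambda v: sum(x * x for x in v) ** 0.5
```

    Dropping the flagged parameter from the estimation, as the paper does for its eight nonidentifiable parameters, shrinks the search space without losing fit quality.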

  6. Correlation of Selected Cognitive Abilities and Cognitive Processing Parameters: An Exploratory Study.

    ERIC Educational Resources Information Center

    Snow, Richard E.; And Others

    This pilot study investigated some relationships between tested ability variables and processing parameters obtained from memory search and visual search tasks. The 25 undergraduates who participated had also participated in a previous investigation by Chiang and Atkinson. A battery of traditional ability tests and several film tests were…

  7. Critical literature review of relationships between processing parameters and physical properties of particleboard

    Treesearch

    Myron W. Kelly

    1977-01-01

    The pertinent literature has been reviewed, and the apparent effects of selected processing parameters on the resultant particleboard properties, as generally reported in the literature, have been determined. Resin efficiency, type and level, furnish, and pressing conditions are reviewed for their reported effects on physical, strength, and moisture and dimensional...

  8. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    NASA Astrophysics Data System (ADS)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of an optimisation method to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into this study to analyse the warpage. A design of experiments (DOE) for Response Surface Methodology (RSM) was constructed, and Particle Swarm Optimisation (PSO) was then applied using the equation obtained from the RSM. The optimisation method results in optimised processing parameters with minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage as reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; the improvement of PSO over RSM is only 0.01%. Thus, the optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
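
    A minimal PSO of the kind applied to the RSM equation can be sketched as follows; the quadratic surrogate and bounds are stand-ins for the fitted warpage model, not the study's actual response surface.

```python
import random

def pso_minimise(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimiser: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia,
    cognitive pull and social pull."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # clamp each coordinate to its bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in warpage surrogate: quadratic bowl with its minimum at (60, 230)
surrogate = lambda x: (x[0] - 60) ** 2 + 0.1 * (x[1] - 230) ** 2
best, best_val = pso_minimise(surrogate, bounds=[(40, 80), (200, 260)])
```

    In the study's workflow, the surrogate would be the second-order RSM regression of warpage in the five processing parameters.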

  9. Punishment induced behavioural and neurophysiological variability reveals dopamine-dependent selection of kinematic movement parameters

    PubMed Central

    Galea, Joseph M.; Ruge, Diane; Buijink, Arthur; Bestmann, Sven; Rothwell, John C.

    2013-01-01

    Action selection describes the high-level process which selects between competing movements. In animals, behavioural variability is critical for the motor exploration required to select the action which optimizes reward and minimizes cost/punishment, and is guided by dopamine (DA). The aim of this study was to test in humans whether low-level movement parameters are affected by punishment and reward in ways similar to high-level action selection. Moreover, we addressed the proposed dependence of behavioural and neurophysiological variability on DA, and whether this may underpin the exploration of kinematic parameters. Participants performed an out-and-back index finger movement and were instructed that monetary reward and punishment were based on its maximal acceleration (MA). In fact, the feedback was not contingent on the participant’s behaviour but pre-determined. Blocks highly-biased towards punishment were associated with increased MA variability relative to blocks with either reward or without feedback. This increase in behavioural variability was positively correlated with neurophysiological variability, as measured by changes in cortico-spinal excitability with transcranial magnetic stimulation over the primary motor cortex. Following the administration of a DA-antagonist, the variability associated with punishment diminished and the correlation between behavioural and neurophysiological variability no longer existed. Similar changes in variability were not observed when participants executed a pre-determined MA, nor did DA influence resting neurophysiological variability. Thus, under conditions of punishment, DA-dependent processes influence the selection of low-level movement parameters. We propose that the enhanced behavioural variability reflects the exploration of kinematic parameters for less punishing, or conversely more rewarding, outcomes. PMID:23447607

  10. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions were computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
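
    The simulate-then-estimate procedure is straightforward to reproduce for a discrete-time stationary Gaussian process. This sketch uses an AR(1) process (the autocorrelation parameter, crossing level and run-length definition are illustrative choices, not the paper's exact setup):

```python
import math
import random

def simulate_ar1(n, rho, seed=0):
    """Stationary Gaussian AR(1) with unit variance:
    x[t] = rho * x[t-1] + sqrt(1 - rho^2) * e[t]."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(rho * x[-1] + s * rng.gauss(0.0, 1.0))
    return x

def overshoot_durations(x, level):
    """Lengths (in samples) of consecutive runs above the crossing level."""
    runs, run = [], 0
    for v in x:
        if v > level:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return runs

x = simulate_ar1(100000, rho=0.9)
durs = overshoot_durations(x, level=1.0)
mean_dur = sum(durs) / len(durs)  # mean overshoot duration, in samples
```

    Fitting a distribution to `durs` by the method of moments, as the abstract describes, then yields the probability of an overshoot lasting longer than a given time.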

  11. Pilot-Configurable Information on a Display Unit

    NASA Technical Reports Server (NTRS)

    Bell, Charles Frederick (Inventor); Ametsitsi, Julian (Inventor); Che, Tan Nhat (Inventor); Shafaat, Syed Tahir (Inventor)

    2017-01-01

    A small thin display unit that can be installed in the flight deck for displaying only flight crew-selected tactical information needed for the task at hand. The flight crew can select the tactical information to be displayed by means of any conventional user interface. Whenever the flight crew selects tactical information for display, the system processes the request, including periodically retrieving measured current values or computing current values for the requested tactical parameters and returning those current tactical parameter values to the display unit for display.

  12. TkPl_SU: An Open-source Perl Script Builder for Seismic Unix

    NASA Astrophysics Data System (ADS)

    Lorenzo, J. M.

    2017-12-01

    TkPl_SU (beta) is a graphical user interface (GUI) to select parameters for Seismic Unix (SU) modules. Seismic Unix (Stockwell, 1999) is a widely distributed free software package for seismic reflection processing and signal processing. Perl/Tk is a mature, well-documented and free object-oriented graphical user interface for Perl. In a classroom environment, shell scripting of SU modules engages students and helps focus on the theoretical limitations and strengths of signal processing. However, complex interactive processing stages, e.g., selection of optimal stacking velocities, killing bad data traces, or spectral analysis, require advanced flows beyond the scope of introductory classes. In a research setting, special functionality from other free seismic processing software such as SioSeis (UCSD-NSF) can be incorporated readily via an object-oriented style of programming. An object-oriented approach is a first step toward efficient extensible programming of multi-step processes, and a simple GUI simplifies parameter selection and decision making. Currently, in TkPl_SU, Perl 5 packages wrap 19 of the most common SU modules that are used in teaching undergraduate and first-year graduate student classes (e.g., filtering, display, velocity analysis and stacking). Perl packages (classes) can advantageously add new functionality around each module and clarify parameter names for easier usage. For example, through the use of methods, packages can isolate the user from repetitive control structures, as well as replace the names of abbreviated parameters with self-describing names. Moose, an extension of the Perl 5 object system, greatly facilitates an object-oriented style. Perl wrappers are self-documenting via Perl's plain old documentation (POD) markup.

  13. Microstructure and Magnetic Properties of Magnetic Material Fabricated by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Jhong, Kai Jyun; Huang, Wei-Chin; Lee, Wen Hsi

    Selective Laser Melting (SLM) is a powder-based additive manufacturing process capable of producing parts layer-by-layer from a 3D CAD model. The aim of this study is to adopt the selective laser melting technique for magnetic material fabrication. For the SLM process to be practical in industrial use, highly specific mechanical properties of the final product must be achieved. The integrity of the manufactured components depends strongly on each single laser-melted track and every single layer, as well as the strength of the connections between them. In this study, the effects of processing parameters, such as the spacing distance, on surface morphology are analyzed. Our hypothesis is that when a magnetic product is made by the selective laser melting technique instead of traditional techniques, the finished component will have more precise and effective properties. This study analyzed the magnitudes of magnetic properties in comparison with different parameters in the SLM process, and a complete product was fabricated to investigate its efficiency in contrast with products made with existing manufacturing processes.

  14. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    PubMed

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

    In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model implemented in BioWin software and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (delta(j)(msqr)). It was found that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of S(i,j) calculations. Half of the influential parameters are associated with growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for the parameters for which it has not been done yet. Copyright 2010 Elsevier Ltd. All rights reserved.
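
    The normalised sensitivity coefficient S(i,j) used here is the relative change in a model output divided by the relative change in a parameter. A hedged sketch with a toy Monod-type growth expression standing in for one activated-sludge output (the model and parameter values are illustrative, not BioWin's):

```python
def normalised_sensitivity(model, params, name, output_key, delta=0.1):
    """S(i,j) ~ (dy/y) / (dp/p): relative output change per relative
    parameter change, by a forward finite difference."""
    base = model(params)[output_key]
    perturbed = dict(params)
    perturbed[name] *= (1.0 + delta)
    y = model(perturbed)[output_key]
    return ((y - base) / base) / delta

# Toy Monod-type growth expression standing in for one model output
# (illustrative only; not the BioWin AS model).
def monod(p):
    return {"growth_rate": p["mu_max"] * p["S"] / (p["K_S"] + p["S"])}

p = {"mu_max": 6.0, "K_S": 4.0, "S": 20.0}
s_mu = normalised_sensitivity(monod, p, "mu_max", "growth_rate")  # output proportional to mu_max
s_ks = normalised_sensitivity(monod, p, "K_S", "growth_rate")     # small negative effect
```

    A parameter with |S| near 1 (here `mu_max`) would be flagged as influential; one with small |S| would not.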

  15. Perseveration in Tool Use: A Window for Understanding the Dynamics of the Action-Selection Process

    ERIC Educational Resources Information Center

    Smitsman, Ad W.; Cox, Ralf F. A.

    2008-01-01

    Two experiments investigated how 3-year-old children select a tool to perform a manual task, with a focus on their perseverative parameter choices for the various relationships involved in handling a tool: the actor-to-tool relation and the tool-to-target relation (topology). The first study concerned the parameter value for the tool-to-target…

  16. Investigation of Laser Welding of Ti Alloys for Cognitive Process Parameters Selection.

    PubMed

    Caiazzo, Fabrizia; Caggiano, Alessandra

    2018-04-20

    Laser welding of titanium alloys is attracting increasing interest as an alternative to traditional joining techniques for industrial applications, with particular reference to the aerospace sector, where welded assemblies allow for the reduction of the buy-to-fly ratio, compared to other traditional mechanical joining techniques. In this research work, an investigation on laser welding of Ti⁻6Al⁻4V alloy plates is carried out through an experimental testing campaign, under different process conditions, in order to perform a characterization of the produced weld bead geometry, with the final aim of developing a cognitive methodology able to support decision-making about the selection of the suitable laser welding process parameters. The methodology is based on the employment of artificial neural networks able to identify correlations between the laser welding process parameters, with particular reference to the laser power, welding speed and defocusing distance, and the weld bead geometric features, on the basis of the collected experimental data.

  17. Investigation of Laser Welding of Ti Alloys for Cognitive Process Parameters Selection

    PubMed Central

    2018-01-01

    Laser welding of titanium alloys is attracting increasing interest as an alternative to traditional joining techniques for industrial applications, with particular reference to the aerospace sector, where welded assemblies allow for the reduction of the buy-to-fly ratio, compared to other traditional mechanical joining techniques. In this research work, an investigation on laser welding of Ti–6Al–4V alloy plates is carried out through an experimental testing campaign, under different process conditions, in order to perform a characterization of the produced weld bead geometry, with the final aim of developing a cognitive methodology able to support decision-making about the selection of the suitable laser welding process parameters. The methodology is based on the employment of artificial neural networks able to identify correlations between the laser welding process parameters, with particular reference to the laser power, welding speed and defocusing distance, and the weld bead geometric features, on the basis of the collected experimental data. PMID:29677114
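
    The cognitive methodology in these two records rests on a feed-forward neural network mapping (laser power, welding speed, defocusing distance) to bead geometry. A minimal one-hidden-layer sketch on synthetic data (the architecture, input scaling and data are illustrative assumptions, not the authors' network):

```python
import math
import random

def train_mlp(data, hidden=4, epochs=2000, lr=0.05, seed=0):
    """One-hidden-layer tanh network trained by per-sample gradient descent.
    Returns (loss before training, loss after training)."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return h, sum(w * hi for w, hi in zip(W2, h)) + b2

    def mse():
        return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

    before = mse()
    for _ in range(epochs):
        for x, y in data:
            h, out = forward(x)
            err = out - y
            for j in range(hidden):
                gh = err * W2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                for i in range(n_in):
                    W1[j][i] -= lr * gh * x[i]
                b1[j] -= lr * gh
                W2[j] -= lr * err * h[j]
            b2 -= lr * err
    return before, mse()

# Synthetic (power, speed, defocus) -> bead-width data, inputs scaled to [0, 1].
gen = random.Random(1)
samples = []
for _ in range(40):
    p, v, f = gen.random(), gen.random(), gen.random()
    samples.append(([p, v, f], 0.5 + 0.8 * p - 0.5 * v + 0.2 * f))
before, after = train_mlp(samples)
```

    Once trained on experimental weld-bead measurements, such a network can be queried in reverse: scan candidate parameter settings and keep those predicted to give the desired bead geometry.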

  18. TU-FG-209-11: Validation of a Channelized Hotelling Observer to Optimize Chest Radiography Image Processing for Nodule Detection: A Human Observer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, A; Little, K; Chung, J

    Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO’s trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.

  19. Deploying response surface methodology (RSM) and glowworm swarm optimization (GSO) in optimizing warpage on a mobile phone cover

    NASA Astrophysics Data System (ADS)

    Lee, X. N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Shazzuan, S.

    2017-09-01

    Plastic injection moulding is a popular manufacturing method not only because it is reliable, but also because it is efficient and cost-saving. It is able to produce plastic parts with detailed features and complex geometry. However, defects in the injection moulding process degrade the quality and aesthetics of the injection moulded product. The most common defect occurring in the process is warpage. Inappropriate process parameter settings of the injection moulding machine are one of the reasons that lead to the occurrence of warpage. The aims of this study were to improve the quality of the injection moulded part by investigating the optimal parameters in minimizing warpage using Response Surface Methodology (RSM) and Glowworm Swarm Optimization (GSO). Subsequent to this, the most significant parameter was identified, and the recommended parameter setting was compared with the optimized parameter settings from RSM and GSO. In this research, a mobile phone case was selected as the case study. The mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as variables, whereas warpage in the y-direction was selected as the response. The simulation was carried out using Autodesk Moldflow Insight 2012. In addition, RSM was performed using Design Expert 7.0, whereas GSO was implemented in MATLAB. The warpage in the y-direction recommended by RSM was reduced by 70%, and that recommended by GSO was decreased by 61%. The resulting warpages under the optimal parameter settings from RSM and GSO were validated by simulation in AMI 2012. RSM performed better than GSO in solving the warpage issue.

  20. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
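
    The active-learning loop described here relies on Gaussian process regression: fit a GP to the experiments run so far, then pick the next experiment where the posterior is most uncertain. A minimal sketch of the GP posterior with an RBF kernel (the data points and length scale are illustrative):

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xq, noise=1e-6):
    """GP posterior mean and variance at query point xq."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    ks = [rbf(x, xq) for x in xs]
    mean = sum(k * a for k, a in zip(ks, solve(K, ys)))
    var = rbf(xq, xq) - sum(k * v for k, v in zip(ks, solve(K, ks)))
    return mean, var

# Hypothetical measurements: a performance functional vs. one model parameter.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 2.5, 2.2]
m0, v0 = gp_predict(xs, ys, 1.0)  # at a training point: mean ~ 2.0, variance ~ 0
m1, v1 = gp_predict(xs, ys, 1.5)  # between points: larger variance
```

    In an active-learning step, the next experiment would be placed where the returned variance (optionally scaled by its computational cost) is largest.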

  1. Mathematical Model Of Variable-Polarity Plasma Arc Welding

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1996-01-01

    Mathematical model of variable-polarity plasma arc (VPPA) welding process developed for use in predicting characteristics of welds and thus serves as guide for selection of process parameters. Parameters include welding electric currents in, and durations of, straight and reverse polarities; rates of flow of plasma and shielding gases; and sizes and relative positions of welding electrode, welding orifice, and workpiece.

  2. Simulating soil moisture change in a semiarid rangeland watershed with a process-based water-balance model

    Treesearch

    Howard Evan Canfield; Vicente L. Lopes

    2000-01-01

    A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...

  3. Optimization of Gas Metal Arc Welding Process Parameters

    NASA Astrophysics Data System (ADS)

    Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.

    2016-09-01

    This study presents the application of the Taguchi method combined with grey relational analysis to optimize the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steels for multiple quality characteristics (bead width, bead height, weld penetration and heat affected zone). An L9 orthogonal array has been employed for fabrication of the joints. The experiments have been conducted according to combinations of voltage (V), current (A) and welding speed (Ws). The results revealed that the welding speed is the most significant process parameter. By analyzing the grey relational grades, optimal parameters are obtained, and significant factors are identified using ANOVA. The welding parameters such as speed, welding current and voltage have been optimized for material AISI 1020 using the GMAW process. To fortify the robustness of the experimental design, a confirmation test was performed at the selected optimal process parameter setting. Observations from this method may be useful for automotive sub-assemblies, shipbuilding and vessel fabricators and operators to obtain optimal welding conditions.
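
    The grey relational grade combines several quality characteristics into a single performance index: responses are normalised, distances from the ideal are converted to grey relational coefficients, and the coefficients are averaged. A sketch with hypothetical responses (the data and equal response weights are illustrative; the paper's L9 results differ):

```python
def grey_relational_grades(obs, larger_better, zeta=0.5):
    """obs[i][j] is response j of experiment i. Normalise each response to
    [0, 1], then average the grey relational coefficients over responses.
    After normalisation the global delta-min is 0 and delta-max is 1, so the
    usual coefficient (dmin + zeta*dmax) / (d + zeta*dmax) reduces to
    zeta / (d + zeta)."""
    n, m = len(obs), len(obs[0])
    norm = []
    for j in range(m):
        col = [obs[i][j] for i in range(n)]
        lo, hi = min(col), max(col)
        if larger_better[j]:
            norm.append([(v - lo) / (hi - lo) for v in col])
        else:
            norm.append([(hi - v) / (hi - lo) for v in col])
    grades = []
    for i in range(n):
        deltas = [1.0 - norm[j][i] for j in range(m)]  # distance from the ideal
        grades.append(sum(zeta / (d + zeta) for d in deltas) / m)
    return grades

# Hypothetical responses: bead width (smaller is better), penetration (larger is better).
obs = [[6.2, 3.1], [5.8, 3.4], [6.5, 2.9], [5.5, 3.8]]
grades = grey_relational_grades(obs, larger_better=[False, True])
best = grades.index(max(grades))  # experiment 3 dominates both responses here
```

    The experiment with the highest grade is read off as the best parameter combination; ANOVA on the grades then apportions the contribution of each factor.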

  4. Microstructure based simulations for prediction of flow curves and selection of process parameters for inter-critical annealing in DP steel

    NASA Astrophysics Data System (ADS)

    Deepu, M. J.; Farivar, H.; Prahl, U.; Phanikumar, G.

    2017-04-01

    Dual phase steels are versatile advanced high strength steels that are used for sheet metal applications in the automotive industry. They also have the potential for application in bulk components like gears. Inter-critical annealing in dual phase steels is one of the crucial steps that determine the mechanical properties of the material. Selection of the process parameters for inter-critical annealing, in particular the inter-critical annealing temperature and time, is important as it plays a major role in determining the volume fractions of ferrite and martensite, which in turn determine the mechanical properties. Selecting these process parameters to obtain a particular required mechanical property requires a large number of experimental trials. Simulation of microstructure evolution and virtual compression/tensile testing can help in reducing the number of such experimental trials. In the present work, phase field modeling implemented in the commercial software Micress® is used to predict the microstructure evolution during inter-critical annealing. Virtual compression tests are performed on the simulated microstructure using the finite element method implemented in the commercial software, to obtain the effective flow curve of the macroscopic material. The flow curves obtained by simulation are experimentally validated with physical simulation in Gleeble® and compared with those obtained using a linear rule of mixture. The methodology could be used to determine the inter-critical annealing process parameters required for achieving a particular flow curve.

  5. Investigation of selected structural parameters in Fe 95Si 5 amorphous alloy during crystallization process

    NASA Astrophysics Data System (ADS)

    Fronczyk, Adam

    2007-04-01

    In this study, we report on the crystallization behavior of Fe 95Si 5 metallic glasses using differential scanning calorimetry (DSC) and X-ray diffraction. The paper presents the results of an experimental investigation of Fe 95Si 5 amorphous alloy subjected to a crystallizing process by isothermal annealing. The objective of the experiment was to determine changes in the structural parameters during the crystallization process of the examined alloy. Crystallite diameter and the lattice constant of the crystallizing phase were used as parameters to evaluate structural changes in the material.

  6. Large Area Scene Selection Interface (LASSI). Methodology of Selecting Landsat Imagery for the Global Land Survey 2005

    NASA Technical Reports Server (NTRS)

    Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry

    2009-01-01

    The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.

  7. Underground Mining Method Selection Using WPM and PROMETHEE

    NASA Astrophysics Data System (ADS)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. It is achieved by using two multi-attribute decision making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). In this paper, the analytic hierarchy process is used for calculating the weights of the attributes (i.e. the parameters used in this paper). Mining method selection depends on physical parameters, mechanical parameters, economical parameters and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and mining methods. The proposed techniques give higher accuracy and faster computation capability when compared with other decision making techniques. The proposed techniques are applied to determine the effective mining method for a bauxite mine. The results of these techniques are compared with methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
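
    The weighted product method ranks alternatives by the product of normalised criterion values raised to their weights. A sketch with hypothetical mining methods, criteria and AHP-style weights (all values are illustrative, not the paper's data):

```python
def wpm_scores(matrix, weights, cost):
    """Weighted product method: score_i = prod_j norm(x[i][j]) ** w[j].
    Benefit criteria are normalised by the column maximum, cost criteria
    by inverting against the column minimum."""
    m = len(matrix[0])
    norm_cols = []
    for j in range(m):
        col = [row[j] for row in matrix]
        if cost[j]:
            norm_cols.append([min(col) / v for v in col])  # smaller is better
        else:
            norm_cols.append([v / max(col) for v in col])  # larger is better
    scores = []
    for i in range(len(matrix)):
        s = 1.0
        for j in range(m):
            s *= norm_cols[j][i] ** weights[j]
        scores.append(s)
    return scores

# Hypothetical alternatives scored on recovery (benefit), dilution (cost),
# and cost per tonne (cost); the weights are assumed AHP outputs.
methods = ["cut-and-fill", "sublevel stoping", "room-and-pillar"]
matrix = [[0.90, 0.08, 24.0],
          [0.80, 0.15, 15.0],
          [0.70, 0.05, 12.0]]
weights = [0.5, 0.2, 0.3]
scores = wpm_scores(matrix, weights, cost=[False, True, True])
ranked = sorted(zip(methods, scores), key=lambda t: -t[1])
```

    The multiplicative form makes WPM dimensionless and insensitive to the units of each criterion, which is one reason it is favoured over simple weighted sums.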

  8. Efficient packet forwarding using cyber-security aware policies

    DOEpatents

    Ros-Giralt, Jordi

    2017-04-04

    For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.

  9. Efficient packet forwarding using cyber-security aware policies

    DOEpatents

    Ros-Giralt, Jordi

    2017-10-25

    For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.

  10. Disentangling inhibition-based and retrieval-based aftereffects of distractors: Cognitive versus motor processes.

    PubMed

    Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian

    2018-05-01

    Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, 2 approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, it is also typical that both processes are hard to disentangle. For instance, in the negative priming literature (for a review Frings, Schneider, & Fox, 2015) this has been a continuous debate since the early 1980s. In the present study, we attempted to prove that both processes exist, but that they reflect distractor processing at different levels of representation. Distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of both processes. For our argument that both processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential Gaussian reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition component was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
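
    The exponential-Gaussian decomposition used in this study splits a reaction-time distribution into a Gaussian component (mu, sigma) and an exponential tail (tau). A method-of-moments sketch on simulated data (the parameter values are illustrative, and the study's actual fitting procedure may differ):

```python
import random
import statistics

def exgauss_sample(mu, sigma, tau, n, seed=0):
    """Ex-Gaussian deviates: Normal(mu, sigma) plus Exponential(mean tau)."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau) for _ in range(n)]

def exgauss_moments(xs):
    """Method-of-moments estimates using mean = mu + tau,
    variance = sigma^2 + tau^2, third central moment = 2 * tau^3."""
    mean = statistics.fmean(xs)
    var = statistics.pvariance(xs)
    m3 = sum((x - mean) ** 3 for x in xs) / len(xs)
    tau = (m3 / 2.0) ** (1.0 / 3.0) if m3 > 0 else 0.0
    sigma = max(var - tau ** 2, 0.0) ** 0.5
    return mean - tau, sigma, tau

# Illustrative "RT" data in milliseconds.
rts = exgauss_sample(mu=400.0, sigma=40.0, tau=100.0, n=200000)
mu_hat, sigma_hat, tau_hat = exgauss_moments(rts)
```

    Separate estimates of (mu, sigma) and tau per condition are what allow effects on the Gaussian (motor-linked) and exponential (cognitive-linked) components to be dissociated, as the abstract describes.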

  11. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting—Selection Guidelines

    PubMed Central

    Konda Gokuldoss, Prashanth; Kolla, Sri; Eckert, Jürgen

    2017-01-01

    Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexity in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most of the research on additive manufacturing techniques is focused on the development of materials/process parameters/product design with different additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting; there are no guidelines that discuss the selection of the most suitable additive manufacturing process, depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Considering the very fact that no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered, in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties. PMID:28773031

  12. Additive Manufacturing Processes: Selective Laser Melting, Electron Beam Melting and Binder Jetting-Selection Guidelines.

    PubMed

    Gokuldoss, Prashanth Konda; Kolla, Sri; Eckert, Jürgen

    2017-06-19

    Additive manufacturing (AM), also known as 3D printing or rapid prototyping, is gaining increasing attention due to its ability to produce parts with added functionality and increased complexity in geometrical design, on top of the fact that it is theoretically possible to produce any shape without limitations. However, most of the research on additive manufacturing techniques is focused on the development of materials/process parameters/product design with different additive manufacturing processes such as selective laser melting, electron beam melting, or binder jetting; there are no guidelines that discuss the selection of the most suitable additive manufacturing process, depending on the material to be processed, the complexity of the parts to be produced, or the design considerations. Considering the very fact that no reports deal with this process selection, the present manuscript aims to discuss the different selection criteria that are to be considered, in order to select the best AM process (binder jetting/selective laser melting/electron beam melting) for fabricating a specific component with a defined set of material properties.

  13. Parallel optimization of signal detection in active magnetospheric signal injection experiments

    NASA Astrophysics Data System (ADS)

    Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor

    2018-05-01

    Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.
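
    The search-and-prune idea described above can be sketched as a beam search: expand the parameter grid one dimension at a time and keep only the best partial settings, rather than scoring the full Cartesian product. The quality function and grid below are illustrative assumptions, not the paper's ELF/VLF pipeline:

```python
def beam_search_params(score, grid, beam=3):
    """Expand parameter settings one dimension at a time, keeping only the
    `beam` best partial settings: a pruned breadth-first search instead of
    scoring the full Cartesian product."""
    frontier = [()]
    for name, values in grid:
        cands = [partial + ((name, v),) for partial in frontier for v in values]
        cands.sort(key=lambda c: -score(dict(c)))
        frontier = cands[:beam]
    return dict(max(frontier, key=lambda c: score(dict(c))))

# Illustrative filter-quality score, peaked at lo=2, hi=5, thr=0.3;
# it tolerates partial settings so partial candidates can be ranked.
def quality(p):
    q = 0.0
    if "lo" in p:
        q -= (p["lo"] - 2) ** 2
    if "hi" in p:
        q -= (p["hi"] - 5) ** 2
    if "thr" in p:
        q -= 10.0 * (p["thr"] - 0.3) ** 2
    return q

grid = [("lo", [1, 2, 3]), ("hi", [4, 5, 6]), ("thr", [0.1, 0.3, 0.5])]
best = beam_search_params(quality, grid, beam=3)
```

    Pruning visits beam x |values| candidates per dimension instead of the exponential full grid, which is the source of the speedups the abstract reports; each level's candidates can also be scored in parallel.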

  14. Grey Relational Analysis Coupled with Principal Component Analysis for Optimization of Stereolithography Process to Enhance Part Quality

    NASA Astrophysics Data System (ADS)

    Raju, B. S.; Sekhar, U. Chandra; Drakshayani, D. N.

    2017-08-01

    The paper investigates optimization of the stereolithography process for SL5530 epoxy resin material to enhance part quality. The performance characteristics selected to evaluate the process are tensile strength, flexural strength, impact strength, and density, and the corresponding process parameters are layer thickness, orientation, and hatch spacing. Because the process intrinsically involves tuning multiple parameters, grey relational analysis, which uses the grey relational grade as a performance index, is adopted to determine the optimal combination of process parameters. Moreover, principal component analysis is applied to evaluate the weighting values corresponding to the various performance characteristics so that their relative importance can be properly and objectively determined. The results of confirmation experiments reveal that grey relational analysis coupled with principal component analysis can effectively identify the optimal combination of process parameters. Hence, the proposed approach can be a useful tool for improving process parameters in the stereolithography process, which is valuable information for machine designers as well as RP machine users.
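    The core of grey relational analysis is mechanical enough to sketch: normalize each response, measure each run's deviation from the ideal sequence, convert deviations to grey relational coefficients, and combine them into a grade per run. The sketch below assumes larger-the-better responses that actually vary between runs, and uses equal weights where the paper derives weights from PCA; the run data are hypothetical.

    ```python
    def grey_relational_grades(runs, weights=None, zeta=0.5):
        """Grey relational grades for larger-the-better responses.
        `runs` is a list of response vectors, one per experiment.
        (Illustrative sketch; the paper derives `weights` from PCA,
        here they default to equal weighting.)"""
        n_resp = len(runs[0])
        weights = weights or [1.0 / n_resp] * n_resp
        # Normalise each response column to [0, 1] (larger-the-better).
        cols = list(zip(*runs))
        norm = [[(x - min(c)) / (max(c) - min(c)) for x in c] for c in cols]
        # Deviation from the ideal sequence (all ones).
        dev = [[1.0 - v for v in c] for c in norm]
        dmin = min(min(c) for c in dev)
        dmax = max(max(c) for c in dev)
        # Grey relational coefficient with distinguishing factor zeta.
        coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in c]
                 for c in dev]
        # Grade = weighted sum of coefficients per experiment.
        return [sum(w * coeff[j][i] for j, w in enumerate(weights))
                for i in range(len(runs))]

    # Hypothetical runs: (tensile, flexural, impact) responses.
    runs = [[30.0, 45.0, 2.0], [35.0, 50.0, 2.5], [32.0, 48.0, 3.0]]
    grades = grey_relational_grades(runs)
    best_run = max(range(len(runs)), key=lambda i: grades[i])
    ```

    The run with the highest grade (here run 1, which dominates two of the three responses) identifies the most promising parameter combination.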

  15. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example].

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying the terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values are chosen for them has an important impact on the simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many studies, but the temporal and spatial heterogeneity of the optimal parameter values has received less attention. In this paper, the BIOME-BGC model was used as an example. For evergreen broad-leaved forest, deciduous broad-leaved forest, and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites under each vegetation type. An objective function was constructed using a simulated annealing algorithm combined with flux data to obtain monthly optimal values of the sensitive parameters at each site. We then constructed temporal, spatial, and combined temporal-spatial heterogeneity judgment indices to quantitatively analyze the heterogeneity of the optimal values of the model's sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters mostly presented temporal and spatial heterogeneity to degrees that varied with vegetation type. Sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger heterogeneity.
In addition, the temporal heterogeneity of the optimal values of the sensitive parameters showed a significant linear correlation with the spatial heterogeneity under all three vegetation types. According to the temporal and spatial heterogeneity of the optimal values, the parameters of the BIOME-BGC model could be classified so that different parameter strategies can be adopted in practical applications. These conclusions help in understanding the parameters and optimal values of ecological process models, and provide a reference for obtaining reasonable parameter values in model applications.

  16. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation]

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  17. New generation photoelectric converter structure optimization using nano-structured materials

    NASA Astrophysics Data System (ADS)

    Dronov, A.; Gavrilin, I.; Zheleznyakova, A.

    2014-12-01

    In the present work, the influence of anodizing process parameters on PAOT geometric parameters was studied with the aim of optimizing and increasing ETA-cell efficiency. Optimal geometric parameters were obtained through calculations. Parameters such as anodizing current density, electrolyte composition and temperature, and anodic oxidation process time were selected for this investigation. Using the optimized TiO2 photoelectrode layer, with a 3.6 μm porous layer thickness and pore diameters greater than 80 nm, the ETA-cell efficiency was increased threefold compared to the non-nanostructured TiO2 photoelectrode.

  18. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration, not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation, and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictive parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted in four categories: (1) basic image statistics, (2) the gray-level co-occurrence matrix (GLCM), (3) the gray-level run-length matrix (GLRLM), and (4) Tamura texture features. To obtain the ranking of discrimination among these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish the predictive parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to a manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
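    Sequential forward selection, as used in this abstract, is a greedy loop: repeatedly add whichever remaining feature most improves a scoring function. A minimal sketch follows; the feature names and the additive `score` function are hypothetical illustrations, not the paper's t-test-based criterion.

    ```python
    def sequential_forward_selection(features, score, k):
        """Greedy SFS: at each step add the feature that maximises
        `score(subset + [feature])`; stop after `k` features.
        (Illustrative sketch of the SFS stage; `score` is any
        user-supplied figure of merit.)"""
        selected = []
        remaining = list(features)
        while remaining and len(selected) < k:
            best = max(remaining, key=lambda f: score(selected + [f]))
            selected.append(best)
            remaining.remove(best)
        return selected

    # Hypothetical per-feature utilities standing in for discrimination power.
    utility = {"glcm_contrast": 3.0, "mean": 2.0,
               "tamura_coarseness": 1.5, "glrlm_rle": 0.5}
    def score(subset):
        return sum(utility[f] for f in subset)

    chosen = sequential_forward_selection(utility, score, k=2)
    print(chosen)  # ['glcm_contrast', 'mean']
    ```

    With a purely additive score the greedy choice is exact; with interacting features (as in real texture sets) SFS is only a heuristic, which is why the paper ranks features first.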

  19. Recapturing Graphite-Based Fuel Element Technology for Nuclear Thermal Propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trammell, Michael P; Jolly, Brian C; Miller, James Henry

    ORNL is currently recapturing graphite-based fuel forms for Nuclear Thermal Propulsion (NTP). This effort involves research and development on materials selection, extrusion, and coating processes to produce fuel elements representative of historical ROVER and NERVA fuel. Initially, lab-scale specimens were fabricated using surrogate oxides to develop processing parameters that could be applied to full-length NTP fuel elements. Progress toward understanding the effect of these processing parameters on surrogate fuel microstructure is presented.

  20. Optimisation of process parameters on thin shell part using response surface methodology (RSM) and genetic algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    This study conducts a simulation-based optimisation of injection moulding process parameters using Autodesk Moldflow Insight (AMI) software. The process parameters considered are melt temperature, mould temperature, packing pressure, and cooling time, and their effect on the warpage value of the part is analysed. A part made of polypropylene (PP) was selected for the study. The combinations of process parameters are analysed using Analysis of Variance (ANOVA), and the optimised values are obtained using Response Surface Methodology (RSM). RSM and a Genetic Algorithm (GA) are applied in Design Expert software to minimise the warpage value. The outcome of this study shows that the warpage value is improved by using RSM and GA.
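    The essence of RSM optimisation in studies like this is fitting a low-order polynomial response surface to a handful of runs and then minimising the fitted surface. The toy sketch below fits a one-variable quadratic exactly through three runs and finds its stationary point; the (pressure, warpage) data are invented for illustration, and a real RSM fit would be a multi-variable least-squares model.

    ```python
    def fit_quadratic(points):
        """Fit y = a*x**2 + b*x + c exactly through three (x, y) points
        via Lagrange interpolation (a minimal stand-in for the
        least-squares response surface RSM would fit over many runs)."""
        (x1, y1), (x2, y2), (x3, y3) = points
        denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
        a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
        b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
        c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
             + x1 * x2 * (x1 - x2) * y3) / denom
        return a, b, c

    # Hypothetical runs: (packing pressure in MPa, warpage in mm).
    runs = [(60.0, 0.42), (80.0, 0.31), (100.0, 0.38)]
    a, b, c = fit_quadratic(runs)
    optimum = -b / (2 * a)  # stationary point of the fitted surface
    ```

    Because the middle run has the lowest warpage, the fitted parabola opens upward and its vertex (about 82 MPa here) is the predicted warpage-minimising pressure, which would then be verified by a confirmation run.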

  1. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  2. Process Simulation of Gas Metal Arc Welding Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, Paul E.

    2005-09-06

    ARCWELDER is a Windows-based application that simulates gas metal arc welding (GMAW) of steel and aluminum. The software simulates the welding process in an accurate and efficient manner, provides menu items for process parameter selection, and includes a graphical user interface with the option to animate the process. The user enters the base and electrode material, open circuit voltage, wire diameter, wire feed speed, welding speed, and standoff distance. The program computes the size and shape of a square-groove or V-groove weld in the flat position. The program also computes the current, arc voltage, arc length, electrode extension, transfer of droplets, heat input, filler metal deposition, base metal dilution, and centerline cooling rate, in English or SI units. The simulation may be used to select welding parameters that lead to desired operation conditions.

  3. IMMAN: free software for information theory-based chemometric analysis.

    PubMed

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new, free computational program for chemometric analysis named IMMAN (an acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches each. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. In addition, a generalization scheme for the previously defined differential Shannon entropy is discussed, and the Jeffreys information measure is introduced for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty, are incorporated into the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ) following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities such as missing-value processing, dataset partitioning, and browsing, and provides single-parameter or ensemble (multi-criteria) ranking options. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, and comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset demonstrated similar behavior. In addition, the use of IMMAN unsupervised feature selection methods is shown to improve the performance of both IMMAN and WEKA supervised algorithms.

  4. Systematic development of technical textiles

    NASA Astrophysics Data System (ADS)

    Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.

    2016-07-01

    Technical textiles are used in various fields of applications, ranging from small scale (e.g. medical applications) to large scale products (e.g. aerospace applications). The development of new products is often complex and time consuming, due to multiple interacting parameters. These interacting parameters are production process related and also a result of the textile structure and used material. A huge number of iteration steps are necessary to adjust the process parameter to finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected. The aim of the method is to support a development team within the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.

  5. Using the Modification Index and Standardized Expected Parameter Change for Model Modification

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.

    2012-01-01

    Model modification is oftentimes conducted after discovering a badly fitting structural equation model. During the modification process, the modification index (MI) and the standardized expected parameter change (SEPC) are 2 statistics that may be used to aid in the selection of parameters to add to a model to improve the fit. The purpose of this…

  6. Multiobjective Optimization of Atmospheric Plasma Spray Process Parameters to Deposit Yttria-Stabilized Zirconia Coatings Using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Ramachandran, C. S.; Balasubramanian, V.; Ananthapadmanabhan, P. V.

    2011-03-01

    Atmospheric plasma spraying is used extensively to make Thermal Barrier Coatings of 7-8% yttria-stabilized zirconia powders. The main problem faced in the manufacture of yttria-stabilized zirconia coatings by the atmospheric plasma spraying process is the selection of the optimum combination of input variables for achieving the required qualities of coating. This problem can be solved by the development of empirical relationships between the process parameters (input power, primary gas flow rate, stand-off distance, powder feed rate, and carrier gas flow rate) and the coating quality characteristics (deposition efficiency, tensile bond strength, lap shear bond strength, porosity, and hardness) through effective and strategic planning and the execution of experiments by response surface methodology. This article highlights the use of response surface methodology by designing a five-factor five-level central composite rotatable design matrix with full replication for planning, conduction, execution, and development of empirical relationships. Further, response surface methodology was used for the selection of optimum process parameters to achieve desired quality of yttria-stabilized zirconia coating deposits.

  7. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
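    The two-loop structure the abstract describes, an outer loop over error-state selections with an inner loop tuning parameters for each selection, can be sketched directly. Everything concrete below (the state subsets, the parameter grid, the toy `performance` function) is a hypothetical stand-in for ENFAD's Monte Carlo evaluation.

    ```python
    def design_filter(state_subsets, param_grid, performance):
        """Two nested search loops: the outer loop varies the
        error-state selection, the inner loop tunes the filter
        parameter for that fixed selection. `performance` is a
        hypothetical stand-in for a Monte Carlo figure of merit."""
        best = None
        for states in state_subsets:
            # Inner loop: tune the parameter for this state selection.
            tuned = max(param_grid, key=lambda p: performance(states, p))
            result = performance(states, tuned)
            if best is None or result > best[0]:
                best = (result, states, tuned)
        return best[1], best[2]

    # Toy performance model: reward three error states and q near 0.1.
    def performance(states, q):
        return -abs(len(states) - 3) - abs(q - 0.1)

    subsets = [("pos",), ("pos", "vel"), ("pos", "vel", "gyro_bias")]
    grid = [0.01, 0.1, 1.0]
    states, q = design_filter(subsets, grid, performance)
    print(states, q)  # ('pos', 'vel', 'gyro_bias') 0.1
    ```

    Nesting the loops this way keeps the parameter tuning conditioned on a fixed state selection, mirroring the abstract's description, at the cost of re-running the inner search for every candidate subset.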

  8. Selective Laser Sintering of Nano Al2O3 Infused Polyamide

    PubMed Central

    Warnakula, Anthony; Singamneni, Sarat

    2017-01-01

    Nano Al2O3 polyamide composites are evaluated for processing by selective laser sintering. A thermal characterization of the polymer composite powders allowed us to establish the possible initial settings. Initial experiments were conducted to identify the most suitable combinations of process parameters. Based on the results of the initial trials, more promising ranges of the different process parameters could be identified. The post-sintering characterization showed evidence of sufficient inter-particle sintering and intra-layer coalescence. While the inter-particle coalescence gradually improved, the porosity levels slightly decreased with increasing laser power. The nano-filler particles tend to agglomerate around the beads along the solid tracks, possibly due to van der Waals forces. The tensile stress results showed an almost linear increase with increasing nano-filler content. PMID:28773220

  9. Optimisation of process parameters on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Faiz, J. M.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Rashidi, M. M.

    2017-09-01

    This study focuses on the optimisation of process parameters by simulation using Autodesk Moldflow Insight (AMI) software. The process parameters are taken as the input in order to analyse the warpage value, which is the output of this study. The significant parameters used are melt temperature, mould temperature, packing pressure, and cooling time. A plastic part made of polypropylene (PP) was selected as the study part. Optimisation of the process parameters is performed in Design Expert software with the aim of minimising the warpage value. Response Surface Methodology (RSM) is applied together with Analysis of Variance (ANOVA) in order to investigate the interactions between parameters that are significant to the warpage value. Thus, the optimised warpage value can be obtained from the model designed using RSM, owing to its minimal error value. This study shows that the warpage value is improved by using RSM.

  10. A novel process for production of spherical PBT powders and their processing behavior during laser beam melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Jochen, E-mail: jochen.schmidt@fau.de; Sachs, Marius; Fanselow, Stephanie

    2016-03-09

    Additive manufacturing processes like laser beam melting of polymers are established for the production of prototypes and individualized parts. Their transfer to other areas of application and to serial production is currently hindered by the limited availability of polymer powders with good processability. Within this contribution, a novel process route for the production of spherical, micron-sized polymer particles of good flowability has been established and applied to produce polybutylene terephthalate (PBT) powders. Moreover, the applicability of the PBT powders in selective laser beam melting and the dependence of device properties on process parameters are outlined. First, polymer micro-particles are produced by a novel wet grinding method. To improve flowability, the shape of the produced particles is optimized by rounding in a heated downer reactor. A further improvement in the flowability of the cohesive spherical PBT particles is realized by dry coating. An improvement in flowability by a factor of about 5 is achieved by subsequent rounding of the comminution product and dry coating, as proven by tensile strength measurements of the powders. The produced PBT powders were characterized with respect to their processability; their thermal, rheological, optical, and bulk properties were analyzed. Based on these investigations, a range of processing parameters was derived. Parameter studies on thin layers, produced in a selective laser melting system, were conducted. Hence, appropriate parameters for processing the PBT powders by laser beam melting, such as building chamber temperature, scan speed, and laser power, have been identified.

  11. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most of these methods, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time use. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyros.
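    The diagnostic behind residual tuning is that a well-tuned filter produces white (uncorrelated) measurement residuals, so residual autocorrelation far from zero signals mistuned noise parameters. A minimal whiteness check is sketched below; the residual sequences are invented examples, and the paper's method goes further by sequentially estimating corrections rather than merely detecting the problem.

    ```python
    def lag1_autocorrelation(residuals):
        """Lag-1 autocorrelation of filter measurement residuals.
        A well-tuned filter yields a white residual sequence, so a
        value far from zero flags mistuned noise parameters
        (an illustrative check, not the full sequential estimator)."""
        n = len(residuals)
        mean = sum(residuals) / n
        var = sum((r - mean) ** 2 for r in residuals)
        cov = sum((residuals[i] - mean) * (residuals[i + 1] - mean)
                  for i in range(n - 1))
        return cov / var

    # Hypothetical residuals: roughly white vs. strongly correlated.
    white = [0.3, 0.1, -0.2, 0.4, -0.1, -0.3, 0.2, -0.2]
    correlated = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
    ```

    In a residual-tuning loop, a persistently positive autocorrelation like that of the ramp sequence would drive an upward correction to the assumed process noise.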

  12. SENSITIVE PARAMETER EVALUATION FOR A VADOSE ZONE FATE AND TRANSPORT MODEL

    EPA Science Inventory

    This report presents information pertaining to the quantitative evaluation of the potential impact of selected parameters on the output of vadose zone transport and fate models used to describe the behavior of hazardous chemicals in soil. The Vadose Zone Interactive Processes (VIP) model...

  13. An experimental analysis of process parameters to manufacture micro-channels in AISI H13 tempered steel by laser micro-milling

    NASA Astrophysics Data System (ADS)

    Teixidor, D.; Ferrer, I.; Ciurana, J.

    2012-04-01

    This paper reports the characterization of the laser machining (milling) process used to manufacture micro-channels, in order to understand the influence of process parameters on the final features. Selection of operational process parameters is highly critical for successful laser micromachining. A set of designed experiments was carried out in a pulsed Nd:YAG laser system using AISI H13 hardened tool steel as the work material. Several micro-channels were manufactured as micro-mold cavities by varying parameters such as scanning speed (SS), pulse intensity (PI), and pulse frequency (PF). Results were obtained by evaluating the dimensions and surface finish of the micro-channels. The dimensions and shape of the micro-channels produced by the laser micro-milling process exhibit variations. In general, low scanning speeds increase the quality of the feature in both surface finish and dimensional accuracy.

  14. Aqueous enzymatic extraction of Moringa oleifera oil.

    PubMed

    Mat Yusoff, Masni; Gordon, Michael H; Ezeh, Onyinye; Niranjan, Keshavan

    2016-11-15

    This paper reports on the extraction of Moringa oleifera (MO) oil using an aqueous enzymatic extraction (AEE) method. The effect of different process parameters on oil recovery was determined by statistical optimization, along with the effect of selected parameters on the formation of oil-in-water cream emulsions. Within the pre-determined ranges, the use of pH 4.5, a moisture/kernel ratio of 8:1 (w/w), and a 300 stroke/min shaking speed at 40°C for a 1 h incubation time resulted in the highest oil recovery of approximately 70% (g oil/g solvent-extracted oil). These optimized parameters also resulted in a very thin emulsion layer, indicating that a minute amount of emulsion formed. Zero oil recovery with a thick emulsion was observed when the used aqueous phase was re-utilized for another AEE process. The findings suggest that the critical selection of AEE parameters is key to high oil recovery with minimum emulsion formation, thereby lowering the load on the de-emulsification step. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Identification of Optimum Magnetic Behavior of Nanocrystalline Co2FeAl Type Heusler Alloy Powders Using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Srivastava, Y.; Srivastava, S.; Boriwal, L.

    2016-09-01

    Mechanical alloying is a novel solid-state process that has received considerable attention due to its many advantages over conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from a premix of basic powders of cobalt (Co), iron (Fe), and aluminum (Al) in the stoichiometry 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. The magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometry (VSM). A two-factor, five-level design matrix was applied to the experimental process, and the experimental results were used for response surface methodology. The interaction between the input process parameters and the response was established with the help of regression analysis. Analysis of variance was then applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameters that were not selected for the main experimentation but fell within the same ranges. Using response surface methodology, the process parameters were optimized to obtain improved magnetic properties, and the optimum process parameters were identified using numerical and graphical optimization techniques.

  16. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap given the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters yield a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels, rather than using a fixed set, can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.

  17. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

    Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.

  18. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    NASA Astrophysics Data System (ADS)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper carries out a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect-requirements process) and studies stakeholder collaboration methodologies (i.e., from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier applies a weighted scoring model, while the second tier develops SWOT matrices on the basis of the weighted scoring model's findings for selecting an appropriate requirements negotiation model. Finally, the results are presented with the help of statistical pie charts. On the basis of these results for the prevalent models and approaches to negotiation, a unified approach for requirements negotiations and stakeholder collaborations is proposed, in which the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside required external parameters such as MBTI and opportunity analysis.

  19. Exploring Several Methods of Groundwater Model Selection

    NASA Astrophysics Data System (ADS)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13, and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using four approaches: (1) rank the models by the root mean square error (RMSE) obtained after UCODE-based model calibration; (2) calculate model probability using the GLUE method; (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC); and (4) evaluate model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytic hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting groundwater flow models. These methods selected, as the best model, the one with average complexity (10 parameters) and the best parameter estimation (model 3).
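As a sketch of the criteria mentioned in this abstract, the snippet below computes AIC, AICc, and BIC for a least-squares model from its RMSE, the number of observations, and the number of calibrated parameters (Gaussian-error assumption; KIC additionally requires the Fisher information matrix and is omitted). The example numbers are illustrative, not the paper's models.

```python
import math

def information_criteria(rmse, n_obs, k_params):
    """AIC, AICc, and BIC for a least-squares model with Gaussian errors;
    lower values indicate the preferred model."""
    rss = n_obs * rmse ** 2                   # residual sum of squares
    fit = n_obs * math.log(rss / n_obs)       # n * ln(RSS/n), the fit term
    aic = fit + 2 * k_params
    aicc = aic + 2 * k_params * (k_params + 1) / (n_obs - k_params - 1)
    bic = fit + k_params * math.log(n_obs)
    return aic, aicc, bic

# A slightly better fit does not always win: extra parameters are penalized.
simple = information_criteria(rmse=0.80, n_obs=50, k_params=10)
complex_ = information_criteria(rmse=0.78, n_obs=50, k_params=15)
print(simple, complex_)
```

Here the 10-parameter model scores lower (better) on all three criteria despite its slightly larger RMSE, which is the over-parameterization guard the abstract describes.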

  20. Morphological effects on the selectivity of intramolecular versus intermolecular catalytic reaction on Au nanoparticles.

    PubMed

    Wang, Dan; Sun, Yuanmiao; Sun, Yinghui; Huang, Jing; Liang, Zhiqiang; Li, Shuzhou; Jiang, Lin

    2017-06-14

    Controlling the selectivity of a catalytic reaction with metal nanoparticle catalysts in a simple process is difficult. In this work, we obtain active Au nanoparticle catalysts with high selectivity for the hydrogenation of aromatic nitro compounds, simply by employing spine-like Au nanoparticles. Density functional theory (DFT) calculations further elucidate that the morphological effect on thermal selectivity control is a key internal parameter for modulating the nitro hydrogenation process on the surface of Au spines. These results show that controlled morphological effects may play an important role in catalytic reactions of noble metal NPs with high selectivity.

  1. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel-based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is effective in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
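The baseline this paper improves on can be sketched concretely: solve the LS-SVM linear system for a single RBF kernel and pick the kernel width and regularization parameter by hold-out search (not the paper's SDP formulation). The dataset and grid values are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, sigma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, sigma, gamma):
    """Solve the LS-SVM regression system
       [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                       # bias b, dual coefficients alpha

def lssvm_predict(Xtr, b, alpha, sigma, Xte):
    return rbf(Xte, Xtr, sigma) @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(60)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

def val_mse(sigma, gamma):
    b, alpha = lssvm_fit(Xtr, ytr, sigma, gamma)
    return np.mean((lssvm_predict(Xtr, b, alpha, sigma, Xva) - yva) ** 2)

# Joint hold-out search over kernel width and regularization parameter.
grid = [(s, g) for s in (0.1, 0.5, 1.0, 2.0) for g in (1.0, 10.0, 100.0)]
sigma_best, gamma_best = min(grid, key=lambda p: val_mse(*p))
print(sigma_best, gamma_best, val_mse(sigma_best, gamma_best))
```

The grid grows multiplicatively with each extra kernel, which is exactly the inflexibility the SDP formulation is designed to avoid.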

  2. Systems and methods for optimal power flow on a radial network

    DOEpatents

    Low, Steven H.; Peng, Qiuyu

    2018-04-24

    Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller comprising a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node; wherein the node controller is configured to: send node operating parameters to the nodes in the set; receive operating parameters from the nodes in the set; calculate a plurality of updated node operating parameters using an iterative process driven by the node operating parameters of the node and of the set, where the iterative process involves evaluation of a closed-form solution; and adjust the node operating parameters.

  3. A Comparative Study on Ni-Based Coatings Prepared by HVAF, HVOF, and APS Methods for Corrosion Protection Applications

    NASA Astrophysics Data System (ADS)

    Sadeghimeresht, E.; Markocsan, N.; Nylén, P.

    2016-12-01

    Selection of the thermal spray process is the most important step toward a proper coating solution for a given application, as important coating characteristics such as adhesion and microstructure depend heavily on it. In the present work, a process-microstructure-properties-performance correlation study was performed to determine the main characteristics and corrosion performance of coatings produced by different thermal spray techniques: high-velocity air fuel (HVAF), high-velocity oxy fuel (HVOF), and atmospheric plasma spraying (APS). Previously optimized HVOF and APS process parameters were used to deposit Ni, NiCr, and NiAl coatings, which were compared with HVAF-sprayed coatings produced with randomly selected process parameters. As the HVAF process presented the best coating characteristics and corrosion behavior, a few process parameters, such as feed rate and standoff distance (SoD), were investigated to systematically optimize the HVAF coatings for low porosity and high corrosion resistance. Ni and NiAl coatings with lower porosity and better corrosion behavior were obtained at an average SoD of 300 mm and a feed rate of 150 g/min. The NiCr coating sprayed at a SoD of 250 mm and a feed rate of 75 g/min showed the highest corrosion resistance among all investigated samples.

  4. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most, such as maximum likelihood, subspace, and observer/Kalman filter identification methods, require extensive offline processing and are not suitable for real-time processing. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters results in a non-white sequence of filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimating process noise. Equations for estimating the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
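The residual-based diagnosis the abstract describes can be illustrated on a toy scalar filter: when the tuning is right, the innovations normalized by the filter's predicted innovation variance are white with unit variance, so a sample variance far from 1 flags a mis-specified noise parameter. This is a minimal sketch with invented numbers, not the WIRE filter.

```python
import numpy as np

# Scalar random-walk model x_{k+1} = x_k + w, z = x + v, with the
# measurement noise deliberately under-estimated (R_assumed << R_true).
Q, R_true, R_assumed = 0.01, 1.0, 0.1
rng = np.random.default_rng(1)
x_true, x_est, P = 0.0, 0.0, 1.0
norm_innov = []
for _ in range(500):
    x_true += np.sqrt(Q) * rng.standard_normal()
    z = x_true + np.sqrt(R_true) * rng.standard_normal()
    P += Q                          # time update (F = 1)
    S = P + R_assumed               # filter's predicted innovation variance
    nu = z - x_est                  # measurement residual (innovation)
    norm_innov.append(nu / np.sqrt(S))
    K = P / S                       # Kalman gain
    x_est += K * nu
    P *= 1.0 - K

ratio = np.var(norm_innov)
print(ratio)                        # >> 1 signals the assumed R is too small
R_corrected = R_assumed * ratio     # crude one-shot inflation of R
```

The sequential residual-tuning equations in the paper refine this idea, running the correction recursively alongside the filter instead of in one batch.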

  5. A Computational approach in optimizing process parameters of GTAW for SA 106 Grade B steel pipes using Response surface methodology

    NASA Astrophysics Data System (ADS)

    Sumesh, A.; Sai Ramnadh, L. V.; Manish, P.; Harnath, V.; Lakshman, V.

    2016-09-01

    Welding is one of the most common metal joining techniques, used in industry for decades. In the global manufacturing scenario, products must be cost effective, so selecting the right process with optimal parameters helps industry minimize production cost. SA 106 Grade B steel has wide application in automobile chassis structures, boiler tubes, and pressure vessel industries. Employing a central composite design, the process parameters for gas tungsten arc welding (GTAW) were optimized. The input parameters chosen were weld current, peak current, and frequency; joint tensile strength was the response considered in this study. Analysis of variance was performed to determine the statistical significance of the parameters, and a regression analysis was performed to determine the effect of the input parameters on the response. In the experiments, the maximum tensile strength obtained was 95 kN, reported for a weld current of 95 A, a frequency of 50 Hz, and a peak current of 100 A. Aiming to maximize the joint strength, a target value of 100 kN was set in the response optimizer and the regression models were optimized; the output indicates this is achievable with a weld current of 62.6148 A, a frequency of 23.1821 Hz, and a peak current of 65.9104 A. Using dye penetrant testing, the weld joints were also classified into two categories, good welds and welds with defects. This will also help in obtaining defect-free joints when welding is performed using the GTAW process.
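The response-surface step can be sketched as fitting a full second-order model by least squares and solving for the stationary point of the fitted surface. The two-factor response below is synthetic with a known optimum, not the paper's weld data.

```python
import numpy as np

def design_matrix(X):
    """Second-order response-surface model in two factors:
       1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Synthetic 'strength' response with a known optimum at (1, -0.5).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (40, 2))
y = 90 - (X[:, 0] - 1) ** 2 - 2 * (X[:, 1] + 0.5) ** 2 \
    + 0.1 * rng.standard_normal(40)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
b1, b2, b12, b11, b22 = beta[1:]
# Stationary point: set the gradient of the fitted quadratic to zero.
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(H, -np.array([b1, b2]))
print(np.round(opt, 2))            # recovers a point near (1, -0.5)
```

A central composite design would place the 40 runs deterministically rather than at random, but the fitting and optimization steps are the same.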

  6. A Flexible Pilot-Scale Setup for Real-Time Studies in Process Systems Engineering

    ERIC Educational Resources Information Center

    Panjapornpon, Chanin; Fletcher, Nathan; Soroush, Masoud

    2006-01-01

    This manuscript describes a flexible, pilot-scale setup that can be used for training students and carrying out research in process systems engineering. The setup allows one to study a variety of process systems engineering concepts such as design feasibility, design flexibility, control configuration selection, parameter estimation, process and…

  7. Image processing for IMRT QA dosimetry.

    PubMed

    Zaini, Mehran R; Forest, Gary J; Loshek, David D

    2005-01-01

    We have automated the determination of the placement location of the dosimetry ion chamber within intensity-modulated radiotherapy (IMRT) fields, as part of streamlining the entire IMRT quality assurance process. This paper describes the mathematical image-processing techniques used to arrive at the appropriate measurement locations within the planar dose maps of the IMRT fields. A specific spot within the found region is identified based on its flatness, radiation magnitude, location, area, and the avoidance of the interleaf spaces. The techniques used include applying a Laplacian, dilation, erosion, region identification, and measurement point selection based on three user-adjustable parameters: the size of the erosion operator, the gradient, and the importance of a region's area versus its magnitude. The first parameter requires tweaking only on extremely rare occasions, the gradient requires rare adjustment, and the last parameter needs occasional fine-tuning. This algorithm has been tested in over 50 cases. In about 5% of cases, the algorithm does not find a measurement point, owing to extremely steep and narrow regions within the fluence maps. In such cases, our code allows manual selection of a point, which is also difficult to ascertain, since the fluence map does not lend itself to an appropriate measurement point selection.
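The erosion step is the key to staying away from steep dose gradients: eroding the above-threshold mask keeps only points whose whole neighbourhood is above threshold. A minimal sketch with a fixed 3x3 cross element (the paper's erosion operator size is adjustable; the toy dose map is invented):

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 cross structuring element: a pixel
    survives only if it and its 4 neighbours are all in the mask."""
    m = np.pad(mask, 1)            # zero padding, so borders erode away
    return (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
            & m[1:-1, :-2] & m[1:-1, 2:])

# Hypothetical high-dose plateau: a 5x5 block inside a 7x7 map.
dose = np.zeros((7, 7))
dose[1:6, 1:6] = 1.0
mask = dose > 0.5
interior = erode(mask)
print(interior.sum())              # only the 3x3 interior survives
```

Repeating the erosion (or enlarging the element) shrinks the candidate region further, which is how the adjustable operator size trades safety margin against losing small regions entirely.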

  8. Selected algorithms for measurement data processing in impulse-radar-based system for monitoring of human movements

    NASA Astrophysics Data System (ADS)

    Miękina, Andrzej; Wagner, Jakub; Mazurek, Paweł; Morawski, Roman Z.

    2016-11-01

    The importance of research on new technologies that could be employed in care services for elderly and disabled persons is highlighted. The advantages of impulse-radar sensors, when applied to non-intrusive monitoring of such persons in their home environment, are indicated. Selected algorithms for measurement data preprocessing, viz. algorithms for clutter suppression and echo parameter estimation, as well as for estimation of the two-dimensional position of a monitored person, are proposed. The capability of an impulse-radar-based system to provide some application-specific parameters, viz. parameters characterising the patient's health condition, is also demonstrated.

  9. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
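The core trick, estimating parameters with an extended Kalman filter by augmenting the state vector with the unknown parameter, can be shown on a toy first-order decay model (not the paper's heat-shock model; all numbers are illustrative):

```python
import numpy as np

# Estimate the decay rate k of x' = -k*x from noisy measurements of x,
# via the augmented state s = [x, k] and an EKF.
dt, k_true = 0.05, 0.8
rng = np.random.default_rng(2)

s = np.array([1.0, 0.3])          # initial guess: x = 1, k = 0.3
P = np.diag([0.01, 1.0])          # large initial uncertainty on k
Q = np.diag([1e-6, 1e-6])         # small process noise keeps k adaptable
R = 1e-4
H = np.array([[1.0, 0.0]])        # only x is measured

x_true = 1.0
for _ in range(400):
    x_true *= (1 - k_true * dt)
    z = x_true + np.sqrt(R) * rng.standard_normal()
    # Predict: s -> [x*(1 - k*dt), k]; F is the Jacobian of that map.
    F = np.array([[1 - s[1] * dt, -s[0] * dt], [0.0, 1.0]])
    s = np.array([s[0] * (1 - s[1] * dt), s[1]])
    P = F @ P @ F.T + Q
    # Measurement update.
    S = H @ P @ H.T + R
    K = P @ H.T / S
    s = s + (K * (z - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(round(s[1], 2))             # estimated k, near the true value 0.8
```

The paper adds an identifiability test and a refinement optimization on top of this recursive estimate; the sketch shows only the EKF stage.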

  10. Data mining and statistical inference in selective laser melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamath, Chandrika

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.

  11. Data mining and statistical inference in selective laser melting

    DOE PAGES

    Kamath, Chandrika

    2016-01-11

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
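A minimal sketch of the feature-selection idea: rank process parameters by their absolute correlation with a response. The parameter names and the synthetic "density" response below are illustrative assumptions, not the paper's data; real studies would use richer importance measures.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
power, speed, hatch = rng.uniform(0, 1, (3, n))
# Synthetic response: density depends on power and speed; hatch is inert here.
density = 0.8 * power - 0.5 * speed + 0.02 * rng.standard_normal(n)

features = {"laser_power": power, "scan_speed": speed, "hatch_spacing": hatch}
scores = {k: abs(np.corrcoef(v, density)[0, 1]) for k, v in features.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)   # influential parameters outrank the inert one
```

Dropping low-ranked variables shrinks the design space that both the surrogate models and the sampling plans have to cover, which is where the cost savings come from.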

  12. Error propagation of partial least squares for parameters optimization in NIR modeling.

    PubMed

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II error. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55%, and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrate how, and to what extent, the different modeling parameters affect the error propagation of PLS during parameter optimization in NIR modeling; the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.

  13. Error propagation of partial least squares for parameters optimization in NIR modeling

    NASA Astrophysics Data System (ADS)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II error. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55%, and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrate how, and to what extent, the different modeling parameters affect the error propagation of PLS during parameter optimization in NIR modeling; the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models.

  14. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model; specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter remains a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that, for suitable kernel parameters, the difference between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized. Experiments with various standard protein subcellular localization datasets show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones but also reduce the computational time and thus improve efficiency.
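Why the Gaussian scale parameter matters can be shown with a much simpler heuristic than the paper's reconstruction-error criterion (which this sketch does not implement): when sigma is too small the kernel matrix collapses toward the identity, and when it is too large toward all-ones; both extremes carry no discriminative information, so one can pick sigma by maximizing the spread of the off-diagonal kernel values. Data and candidate values are illustrative.

```python
import numpy as np

def gaussian_kernel(X, sigma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_spread(X, sigma):
    """Variance of off-diagonal kernel entries: ~0 at both extremes."""
    K = gaussian_kernel(X, sigma)
    off = K[~np.eye(len(X), dtype=bool)]
    return off.var()

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
sigmas = [0.01, 0.1, 1.0, 3.0, 100.0]
best = max(sigmas, key=lambda s: kernel_spread(X, s))
print(best)   # an intermediate sigma wins; the extremes are degenerate
```

The paper's edge-versus-interior reconstruction criterion plays the same role as this spread score, but is tied directly to KDA's discriminative objective rather than to the raw kernel matrix.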

  15. 3D Printing Optical Engine for Controlling Material Microstructure

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Chin; Chang, Kuang-Po; Wu, Ping-Han; Wu, Chih-Hsien; Lin, Ching-Chih; Chuang, Chuan-Sheng; Lin, De-Yau; Liu, Sung-Ho; Horng, Ji-Bin; Tsau, Fang-Hei

    Controlling the cooling rate of an alloy during melting and resolidification is the most commonly used method for varying the material microstructure and consequently the resulting properties. However, the cooling rate of a selective laser melting (SLM) production run is restricted by the preset optimal parameters of a fully dense product, so the headroom for locally manipulating material properties within a process is marginal. In this study, we present an optical engine for locally controlling material microstructure in an SLM process. It provides an innovative method to control and adjust the thermal history of the solidification process to obtain the desired material microstructure and consequently drastically improve quality. Process parameters are selected locally for specific material requirements according to the designed characteristics, using the thermodynamic principles of the solidification process. The engine utilizes complex laser beam shaping with an adaptive irradiation profile to permit local control of material characteristics as desired. This technology could be useful for industrial applications such as medical implants and the aerospace and automobile industries.

  16. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.

    2012-06-05

    An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.

  17. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul [Boulder, CO; Lammlein, David H [Houston, TX; Cook, George E [Brentwood, TN; Wilkes, Don Mitchell [Nashville, TN; Strauss, Alvin M [Nashville, TN; Delapp, David R [Ashland City, TN; Hartman, Daniel A [Fairhope, AL

    2011-11-08

    Friction stir methods are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.

  18. TEM study of the FSW nugget in AA2195-T81

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.; Nunes, A. C., Jr.; Chen, P. S.; Steele, G.

    2004-01-01

    During friction stir welding (FSW), the material being joined is subjected to a thermal-mechanical process in which the temperature, strain, and strain rates are not completely understood. To produce a defect-free weld, process parameters for the weld and the tool pin design must be chosen carefully. The ability to select the weld parameters based on the thermal processing requirements of the material would allow optimization of mechanical properties in the weld region. In this study, an attempt is made to correlate the microstructure with the variation in thermal history the material experiences during the FSW process.

  19. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    PubMed

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty of dealing with individuals whose brain anatomy is morphologically different from the atlas, especially in older brains. To increase segmentation precision in these cases, without any manual intervention, multiple atlases can be used; however, registration to many templates leads to a high computational cost. Researchers have proposed an atlas pre-selection technique based on meta-information, followed by selection of an atlas based on image similarity. Unfortunately, this method also carries a high computational cost due to the image-similarity step. Thus, it is desirable to pre-select a smaller number of atlases, as long as this does not impact segmentation quality. To pick out the atlas that provides the best registration, we evaluate the use of three meta-information parameters: medical condition, age range, and gender. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus available from ADNI. Atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in segmentation accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
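The evaluation metric used above, the Dice similarity coefficient, is simple to state: twice the overlap of two binary segmentations divided by the sum of their sizes. A minimal sketch on toy 2-D masks (real use is on 3-D voxel volumes):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentations:
       2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two hypothetical 6x6 segmentations, shifted by one voxel diagonally.
auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True
ref  = np.zeros((10, 10), bool); ref[3:9, 3:9] = True
print(round(dice(auto, ref), 3))   # 2*25 / (36+36) ≈ 0.694
```

Values like the paper's 0.62 vs 0.68 are averages of this score over all segmented volumes, so even a 0.06 gain reflects a consistent improvement in overlap.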

  20. Selection of site specific vibration equation by using analytic hierarchy process in a quarry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalayci, Ulku, E-mail: ukalayci@istanbul.edu.tr; Ozer, Umit, E-mail: uozer@istanbul.edu.tr

    This paper presents a new approach for selecting the most accurate SSVA (site-specific vibration attenuation) equation for blasting processes in a quarry located near settlements in Istanbul, Turkey. In this context, the SSVA equations obtained from the same study area in the literature were considered in terms of the distance between the shot points and buildings and the amount of explosive charge. For this purpose, the forecasting capabilities of 11 different SSVA equations obtained from the study area over the past 12 years were investigated under designated new conditions, using 102 vibration records obtained from the study area as test data. AHP (analytic hierarchy process) was selected as the analysis method for determining the most accurate of the 11 SSVA equations, and parameters of the equations such as year, distance, charge, and r² were used as AHP criteria. Finally, the most appropriate equation was selected from the existing ones, and the process of selecting according to different target criteria was presented. Furthermore, it was noted that the forecasting results of the selected equation are more accurate than those formed using the test results. - Highlights: • The optimum site-specific vibration attenuation equation for blasting in a quarry located near settlements was determined. • SSVA equations changing over the years do not always give accurate estimates under changing conditions. • Selection of the blast-induced SSVA equation was made using AHP. • The equation selection method was highlighted based on parameters such as charge, distance, and quarry geometry changes (year).
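The AHP machinery behind such a selection reduces to one linear-algebra step: build a pairwise-comparison matrix of the criteria on Saaty's 1-9 scale and take its normalized principal eigenvector as the criterion weights. The matrix entries below are illustrative, not the paper's judgments.

```python
import numpy as np

# Hypothetical pairwise comparisons of four criteria
# (year, distance, charge, r^2); A[i, j] = importance of i over j,
# with reciprocal entries A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0, 1/3, 1/5, 1/2],
    [3.0, 1.0, 1/2, 2.0],
    [5.0, 2.0, 1.0, 3.0],
    [2.0, 1/2, 1/3, 1.0],
])

# Priority weights = normalized principal eigenvector (power iteration).
w = np.ones(len(A)) / len(A)
for _ in range(100):
    w = A @ w
    w /= w.sum()
print(np.round(w, 3))   # 'charge' (row 3) receives the largest weight here
```

With criterion weights in hand, each of the 11 candidate equations is scored against the criteria and the weighted scores are summed; the highest total identifies the selected SSVA equation. A full AHP would also check the consistency ratio of A before trusting the weights.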

  1. Additives influence on spinning solution and nano web properties

    NASA Astrophysics Data System (ADS)

    Kukle, S.; Jegina, S.; Sutka, A.; Makovska, R.

    2017-10-01

    Needleless electrospinning, operated as a one-stage process that produces nanofibre webs from spinning solutions with properties corresponding to the final use, appears to have good future prospects. Designing a complicated spinning solution starts with the selection of the composition and component proportions, followed by establishing the pre-processing sequence and parameters for every component and for their mixing. Spinning solution viscosity and electrical conductivity, together with the spinning distance and the intensity of the electromagnetic field, are the main parameters determining spinnability and the properties of the obtained nanofibres. The influence of selected component pre-processing parameters, of combinations of organic and non-organic components, and of their concentrations on spinning solution viscosity and conductivity, as well as on fibre diameters, is discussed.

  2. A holistic approach towards defined product attributes by Maillard-type food processing.

    PubMed

    Davidek, Tomas; Illmann, Silke; Rytz, Andreas; Blank, Imre

    2013-07-01

    A fractional factorial experimental design was used to quantify the impact of process and recipe parameters on selected product attributes of extruded products (colour, viscosity, acrylamide, and the flavour marker 4-hydroxy-2,5-dimethyl-3(2H)-furanone, HDMF). The study has shown that recipe parameters (lysine, phosphate) can be used to modulate the HDMF level without changing the specific mechanical energy (SME) and consequently the texture of the product, while processing parameters (temperature, moisture) impact both HDMF and SME in parallel. Similarly, several parameters, including phosphate level, temperature and moisture, simultaneously impact both HDMF and acrylamide formation, while pH and addition of lysine showed different trends. Therefore, the latter two options can be used to mitigate acrylamide without a negative impact on flavour. Such a holistic approach has been shown as a powerful tool to optimize various product attributes upon food processing.

  3. On selecting a prior for the precision parameter of Dirichlet process mixture models

    USGS Publications Warehouse

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
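
    For intuition about why the prior on the precision parameter matters: under a Dirichlet process, the expected number of distinct clusters among n observations is E[K] = Σ_{i=1..n} α/(α+i−1), which grows roughly as α log n. A small sketch (the α values are illustrative):

```python
def expected_clusters(alpha, n):
    """Expected number of distinct clusters among n draws from a
    Dirichlet process with precision parameter alpha:
    E[K] = sum_{i=1..n} alpha / (alpha + i - 1)."""
    return sum(alpha / (alpha + i) for i in range(n))

# A larger precision parameter implies more clusters a priori,
# which is why inference on clustering is sensitive to its prior.
few  = expected_clusters(0.1, 100)   # strong prior pull toward few clusters
many = expected_clusters(5.0, 100)   # prior favours many clusters
```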

  4. Image processing methods in two and three dimensions used to animate remotely sensed data. [cloud cover

    NASA Technical Reports Server (NTRS)

    Hussey, K. J.; Hall, J. R.; Mortensen, R. A.

    1986-01-01

    Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.
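
    The third program's interpolation of parameter sets between key frames can be sketched as per-parameter linear interpolation (the 3D-perspective parameter names below are illustrative, not the VICAR2/TAE program's actual inputs):

```python
def interpolate_keyframes(key_a, key_b, n_frames):
    """Linearly interpolate every parameter between two key frames,
    returning n_frames parameter sets from key_a to key_b inclusive."""
    frames = []
    for k in range(n_frames):
        t = k / (n_frames - 1)      # 0.0 at key_a, 1.0 at key_b
        frames.append({name: (1 - t) * key_a[name] + t * key_b[name]
                       for name in key_a})
    return frames

# Illustrative 3D perspective parameters: azimuth, elevation, zoom
start = {"azimuth": 0.0,  "elevation": 30.0, "zoom": 1.0}
end   = {"azimuth": 90.0, "elevation": 60.0, "zoom": 2.0}
frames = interpolate_keyframes(start, end, 5)
# frames[2] is the midpoint: azimuth 45.0, elevation 45.0, zoom 1.5
```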

  5. Deinking selectivity (Z-factor) : a new parameter to evaluate the performance of flotation deinking process

    Treesearch

    J.Y. Zhu; F. Tan; K.L. Scallon; Y.L. Zhao; Y. Deng

    2005-01-01

    This study proposes a deinking selectivity concept that considers both ink removal and fiber yield in determining the performance of deinking operations. The defined deinking selectivity. or Z -factor, is expressed by the ratio of ink removal expressed by the International Standards Organization (ISO) brightness gain or the reduction in relative effective residual ink...

  6. Deinking selectivity (Z-factor) : a new parameter to evaluate the performance of flotation deinking process

    Treesearch

    J.Y. Zhu; F. Tan; K.L. Scallon; Y. Zhao; Y. Deng

    2004-01-01

    Reducing fiber loss is also important to conserve resources and reduce the cost of secondary fibers. This study proposes a deinking selectivity concept that considers both ink removal and fiber yield in determining the performance of deinking operations. The defined deinking selectivity, or Z-factor, is expressed by the ratio of ink removal expressed by the...

  7. Disentangling the adult attention-deficit hyperactivity disorder endophenotype: parametric measurement of attention.

    PubMed

    Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina

    2011-11-01

    Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.

  8. New horizons in selective laser sintering surface roughness characterization

    NASA Astrophysics Data System (ADS)

    Vetterli, M.; Schmid, M.; Knapp, W.; Wegener, K.

    2017-12-01

    Powder-based additive manufacturing of polymers and metals has evolved from a prototyping technology to an industrial process for the fabrication of small to medium series of complex geometry parts. Unfortunately, due to the processing of powder as a basis material and the successive addition of layers to produce components, a significant surface roughness inherent to the process has been observed since the first use of such technologies. A novel characterization method based on an elastomeric pad coated with a reflective layer, the Gelsight, was found to be reliable and fast for characterizing surfaces processed by selective laser sintering (SLS) of polymers. With the help of this method, a qualitative and quantitative investigation of SLS surfaces is feasible. Repeatability and reproducibility investigations are performed for both 2D and 3D areal roughness parameters. Based on the good results, the Gelsight is used for the optimization of vertical SLS surfaces. A model built on laser scanning parameters is proposed and, after confirmation, could achieve a roughness reduction of 10% based on the Sq parameter. The Gelsight could be successfully identified as a fast, reliable and versatile surface topography characterization method, as it applies to all kinds of surfaces.

  9. Influence of tool geometry and processing parameters on welding defects and mechanical properties for friction stir welding of 6061 Aluminium alloy

    NASA Astrophysics Data System (ADS)

    Daneji, A.; Ali, M.; Pervaiz, S.

    2018-04-01

    Friction stir welding (FSW) is a form of solid state welding process for joining metals, alloys, and selected composites. Over the years, FSW development has provided an improved way of producing welded joints, and it has consequently been accepted in numerous industries such as aerospace, automotive, rail and marine. In FSW, the base metal properties control the material's plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in the quality of the weld. In the current investigation, an array of square butt joints of 6061 Aluminium alloy was welded under varying FSW process and tool geometry related parameters, after which the resulting welds were evaluated for the corresponding mechanical properties and welding defects. The study incorporates FSW process and tool parameters such as welding speed, pin height and pin thread pitch as input parameters, while the weld quality related defects and mechanical properties were treated as output parameters. The experimentation paves the way to investigate the correlation between the inputs and the outputs. This correlation was used as a tool to predict the optimized FSW process and tool parameters for a desired weld output of the base metals under investigation. The study also provides reflection on the effect of said parameters on welding defects such as wormholes.

  10. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in real manufacturing systems taking into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. 
The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements such as maximizing the dimple height while minimizing the dimple lower surface area.
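
    Estimating a process fallout rate under stochastic variation of the inputs, as in step (ii), can be sketched with Monte Carlo sampling through a surrogate model. The linear surrogate, the noise level and the specification limits below are illustrative stand-ins, not the fitted MARS model from the paper:

```python
import random

def fallout_rate(surrogate, nominal, sigma, lower, upper, n=20000, seed=1):
    """Fraction of simulated parts whose surrogate-predicted output
    violates the [lower, upper] requirement when the process input
    varies normally around its nominal value."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(n):
        x = rng.gauss(nominal, sigma)   # stochastic process variation
        if not (lower <= surrogate(x) <= upper):
            bad += 1
    return bad / n

# Illustrative surrogate: dimple height (mm) as a function of a power setting
surrogate = lambda power: 0.1 + 0.002 * power

rate = fallout_rate(surrogate, nominal=200.0, sigma=10.0,
                    lower=0.47, upper=0.53)
# the spec limits sit ~1.5 sigma from nominal, so roughly 13% fall out
```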

  11. Approach to in-process tool wear monitoring in drilling: Application of Kalman filter theory

    NASA Astrophysics Data System (ADS)

    He, Ning; Zhang, Youzhen; Pan, Liangxian

    1993-05-01

    The two parameters often used in adaptive control, tool wear and wear rate, are important factors affecting machinability. In this paper, modern cybernetics is applied to the in-process tool wear monitoring problem by using Kalman filter theory to monitor drill wear quantitatively. Based on the experimental results, a dynamic model, a measuring model and a measurement conversion model suitable for the Kalman filter are established. It is proved that the monitoring system possesses complete observability but does not possess complete controllability. A discriminant for selecting the characteristic parameters is put forward, and by this discriminant the thrust force Fz is selected as the characteristic parameter for monitoring tool wear. An in-process Kalman filter drill wear monitoring system composed of a force sensor, microphotography and a microcomputer is established. The results obtained by the Kalman filter, by a common indirect measuring method, and the real drill wear measured with the aid of microphotography are compared. The comparison shows that the Kalman filter has high measurement precision and satisfies the real-time requirement.
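
    A scalar Kalman filter of the kind used for the drill wear estimate can be sketched as follows. The random-walk wear model and the noise variances are illustrative, not the models identified in the paper:

```python
import random

def kalman_1d(measurements, q=0.01, r=0.0625, x0=0.0, p0=1.0):
    """Track a wear state from noisy indirect measurements.

    State model:  x_k = x_{k-1} + w,  w ~ N(0, q)
    Measurement:  z_k = x_k + v,      v ~ N(0, r)
    Returns the filtered state estimates.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy indirect observations of a wear value that is really 0.30 mm
rng = random.Random(0)
true_wear = 0.30
zs = [true_wear + rng.gauss(0, 0.02) for _ in range(200)]
est = kalman_1d(zs)
# the filtered estimate settles near the true wear value
```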

  12. Application of taguchi method for selection parameter bleaching treatments against mechanical and physical properties of agave cantala fiber

    NASA Astrophysics Data System (ADS)

    Yudhanto, F.; Jamasri; Rochardjo, Heru S. B.

    2018-05-01

    The agave cantala fiber characterized in this research, sourced from Sumenep, Madura, Indonesia, was chemically processed using sodium hydroxide (NaOH) and hydrogen peroxide (H2O2) solutions; treatment with both solutions is called the bleaching process. Single-fiber tensile tests were used to obtain the mechanical properties for the selection of process parameters (temperature, pH and H2O2 concentration) with an L9 orthogonal array by the Taguchi method. The results indicate that pH is the most significant parameter influencing tensile strength, followed by temperature and H2O2 concentration. The bleaching treatment increased the crystallinity index of the fiber by 21%, reflecting the loss of the hemicellulose and lignin layers, which can be seen from the changes in the FTIR bands at 1735 (C=O), 1627 (OH), 1319 (CH2) and 1250 (C-O). SEM photographs showed that bleaching leaves the fiber surfaces rougher and cleaner than those of untreated fibers.
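
    The Taguchi analysis above ranks parameters by a signal-to-noise ratio computed over the L9 runs. A minimal sketch of the larger-is-better S/N ratio, the usual choice for tensile strength (the strength replicates are illustrative, not the study's data):

```python
import math

def sn_larger_is_better(replicates):
    """Taguchi larger-is-better signal-to-noise ratio (dB):
    S/N = -10 * log10( mean(1 / y_i^2) )."""
    n = len(replicates)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in replicates) / n)

# Illustrative tensile-strength replicates (MPa) for two L9 runs
run_a = [310.0, 295.0, 305.0]
run_b = [355.0, 348.0, 360.0]
sn_a = sn_larger_is_better(run_a)
sn_b = sn_larger_is_better(run_b)   # higher, more consistent strength -> higher S/N
```

Averaging the S/N ratio over the runs at each level of a factor, and comparing the spread of those averages across factors, is what identifies pH as the dominant parameter.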

  13. A Parameter Tuning Scheme of Sea-ice Model Based on Automatic Differentiation Technique

    NASA Astrophysics Data System (ADS)

    Kim, J. G.; Hovland, P. D.

    2001-05-01

    Automatic differentiation (AD) was used to illustrate a new approach to parameter tuning of an uncoupled sea-ice model. The atmospheric forcing field of 1992 obtained from NCEP data was used as the forcing variable in the study. The simulation results were compared with the observed ice movement provided by the International Arctic Buoy Programme (IABP). All of the numerical experiments were based on a widely used dynamic and thermodynamic model for simulating the seasonal sea-ice change of the main Arctic ocean. We selected five dynamic and thermodynamic parameters for the tuning process, in which the cost function defined by the norm of the difference between observed and simulated ice drift locations was minimized. The selected parameters are the air and ocean drag coefficients, the ice strength constant, the turning angle at the ice-air/ocean interface, and the bulk sensible heat transfer coefficient. The drag coefficients were the major parameters controlling sea-ice movement and extent. The results show that more realistic simulations of the ice thickness distribution were produced by tuning the simulated ice drift trajectories. In the tuning process, the L-BFGS-B minimization algorithm of a quasi-Newton method was used. The derivative information required in the minimization iterations was provided by the AD-processed Fortran code. Compared with a conventional approach, the AD-generated derivative code provided fast and robust computation of derivative information.
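
    The tuning loop above minimizes a misfit between observed and simulated drift, with exact derivatives supplied by AD. A heavily simplified stand-in: a one-parameter toy drift model with a hand-coded analytic derivative (playing the role of the AD-generated Fortran code) and plain gradient descent in place of L-BFGS-B:

```python
def cost_and_gradient(drag, forcing, observed):
    """Sum-of-squares drift misfit and its analytic derivative w.r.t.
    the drag coefficient, for the toy model drift = drag * forcing."""
    residuals = [drag * f - o for f, o in zip(forcing, observed)]
    cost = sum(r * r for r in residuals)
    grad = sum(2.0 * r * f for r, f in zip(residuals, forcing))
    return cost, grad

forcing  = [1.0, 2.0, 3.0, 4.0]           # toy wind forcing
observed = [1.2 * f for f in forcing]     # synthetic "observations"; true drag = 1.2

drag, step = 0.5, 0.01                    # initial guess and step size
for _ in range(200):
    c, g = cost_and_gradient(drag, forcing, observed)
    drag -= step * g                      # gradient-descent update
# drag converges to the value that reproduces the observed drift (1.2)
```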

  14. QbD for pediatric oral lyophilisates development: risk assessment followed by screening and optimization.

    PubMed

    Casian, Tibor; Iurian, Sonia; Bogdan, Catalina; Rus, Lucia; Moldovan, Mirela; Tomuta, Ioan

    2017-12-01

    This study proposed the development of oral lyophilisates with respect to pediatric medicine development guidelines, by applying risk management strategies and DoE as an integrated QbD approach. Product critical quality attributes were overviewed by generating Ishikawa diagrams for risk assessment purposes, considering process-, formulation- and methodology-related parameters. Failure Mode Effect Analysis was applied to highlight critical formulation and process parameters with an increased probability of occurrence and a high impact on product performance. To investigate the effect of qualitative and quantitative formulation variables, D-optimal designs were used for screening and optimization purposes. Process parameters related to suspension preparation and lyophilization were classified as significant factors and were controlled by implementing risk mitigation strategies. Both quantitative and qualitative formulation variables introduced in the experimental design influenced the product's disintegration time, mechanical resistance and dissolution properties, selected as CQAs. The optimum formulation selected through the Design Space presented an ultra-fast disintegration time (5 seconds) and a good dissolution rate (above 90%) combined with high mechanical resistance (above 600 g load). Combining FMEA and DoE allowed the science-based development of a product with respect to the defined quality target profile by providing better insights into the relevant parameters throughout the development process. The utility of risk management tools in pharmaceutical development was demonstrated.
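
    The FMEA step can be sketched as ranking failure modes by risk priority number, RPN = severity × occurrence × detection. The failure modes and scores below are illustrative, not those of the study:

```python
def rank_by_rpn(failure_modes):
    """Rank failure modes by risk priority number:
    RPN = severity * occurrence * detection (each typically scored 1-10)."""
    scored = [(name, s * o * d) for name, (s, o, d) in failure_modes.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Illustrative (severity, occurrence, detection) scores
modes = {
    "incomplete lyophilization": (8, 4, 5),
    "suspension sedimentation":  (6, 7, 3),
    "API degradation":           (9, 2, 4),
}
ranking = rank_by_rpn(modes)
# the highest-RPN mode is addressed first by risk mitigation
```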

  15. Beyond time and space: The effect of a lateralized sustained attention task and brain stimulation on spatial and selective attention.

    PubMed

    Shalev, Nir; De Wandel, Linde; Dockree, Paul; Demeyere, Nele; Chechlacz, Magdalena

    2017-10-03

    The Theory of Visual Attention (TVA) provides a mathematical formalisation of the "biased competition" account of visual attention. Applying this model to individual performance in a free recall task allows the estimation of 5 independent attentional parameters: visual short-term memory (VSTM) capacity, speed of information processing, perceptual threshold of visual detection; attentional weights representing spatial distribution of attention (spatial bias), and the top-down selectivity index. While the TVA focuses on selection in space, complementary accounts of attention describe how attention is maintained over time, and how temporal processes interact with selection. A growing body of evidence indicates that different facets of attention interact and share common neural substrates. The aim of the current study was to modulate a spatial attentional bias via transfer effects, based on a mechanistic understanding of the interplay between spatial, selective and temporal aspects of attention. Specifically, we examined here: (i) whether a single administration of a lateralized sustained attention task could prime spatial orienting and lead to transferable changes in attentional weights (assigned to the left vs right hemi-field) and/or other attentional parameters assessed within the framework of TVA (Experiment 1); (ii) whether the effects of such spatial-priming on TVA parameters could be further enhanced by bi-parietal high frequency transcranial random noise stimulation (tRNS) (Experiment 2). Our results demonstrate that spatial attentional bias, as assessed within the TVA framework, was primed by sustaining attention towards the right hemi-field, but this spatial-priming effect did not occur when sustaining attention towards the left. Furthermore, we show that bi-parietal high-frequency tRNS combined with the rightward spatial-priming resulted in an increased attentional selectivity. 
To conclude, we present a novel, theory-driven method for attentional modulation providing important insights into how the spatial and temporal processes in attention interact with attentional selection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.

  17. Influence of laser power on the penetration depth and geometry of scanning tracks in selective laser melting

    NASA Astrophysics Data System (ADS)

    Stopyra, Wojciech; Kurzac, Jarosław; Gruber, Konrad; Kurzynowski, Tomasz; Chlebus, Edward

    2016-12-01

    SLM technology allows the production of fully functional objects from metal and ceramic powders, with a true density of more than 99.9%. The quality of items manufactured by the SLM method is affected by more than 100 parameters, which can be divided into fixed and variable. Fixed parameters are those whose values should be defined before the process and maintained in an appropriate range during the process, e.g. the chemical composition and morphology of the powder, the oxygen level in the working chamber, and the heating temperature of the substrate plate. In SLM technology, five parameters are variables whose optimal set allows parts to be produced without defects (pores, cracks) and at an acceptable speed. These parameters are: laser power, distance between points, exposure time, distance between lines and layer thickness. To develop optimal parameters, thin-wall or single-track experiments are performed, narrowing the selection of the best sets to three parameters: laser power, exposure time and distance between points. In this paper, the effect of laser power on the penetration depth and geometry of a scanned single track is shown. In this experiment, a titanium (grade 2) substrate plate was used and scanned by a fibre laser of 1064 nm wavelength. For each track, the width, height and penetration depth of the laser beam were measured.
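
    The five variable parameters listed above are often collapsed into a single volumetric energy density for point-exposure SLM machines. One common convention is E = P·t_exp / (d_point · d_line · t_layer); a hedged sketch with illustrative parameter values, not the paper's settings:

```python
def energy_density(power_w, exposure_s, point_dist_m, line_dist_m, layer_m):
    """Volumetric energy density (J/m^3) for point-exposure SLM:
    E = P * t_exposure / (point distance * line distance * layer thickness)."""
    return power_w * exposure_s / (point_dist_m * line_dist_m * layer_m)

# Illustrative parameter set: 200 W, 80 us exposure, 60 um point distance,
# 100 um line (hatch) distance, 30 um layer thickness
e = energy_density(200.0, 80e-6, 60e-6, 100e-6, 30e-6)
e_j_mm3 = e / 1e9   # convert J/m^3 to the customary J/mm^3
```

As the abstract in record 1 of this collection notes, however, energy density alone does not determine the material response: different parameter combinations with the same E can melt very differently.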

  18. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
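
    The optimization objective above, Shannon entropy of the processed image's grey-level histogram, can be sketched in a few lines. The toy 8-bit pixel lists are illustrative; in the real pipeline this objective is evaluated after CLAHE and high-pass filtering:

```python
import math

def image_entropy(pixels, levels=256):
    """Shannon entropy (bits) of an image's grey-level histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c > 0)

# A well-spread histogram has higher entropy than a compressed one,
# which is why maximizing entropy favours contrast-enhanced results.
low_contrast  = [100, 101, 102, 103] * 64   # only 4 grey levels used
high_contrast = list(range(256))            # all 256 levels used
e_low  = image_entropy(low_contrast)        # 2 bits
e_high = image_entropy(high_contrast)       # 8 bits
```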

  19. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
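
    The BIC comparison described above can be sketched as follows. The maximized log-likelihoods and parameter counts are illustrative placeholders for fits of the competing two-, three- and four-state kinetic models:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Illustrative maximized log-likelihoods for models fitted to a
# trajectory of 2000 photons (parameter counts are also illustrative)
n_obs = 2000
candidates = {
    "two-state":   bic(-5210.0, 3,  n_obs),
    "three-state": bic(-5150.0, 6,  n_obs),
    "four-state":  bic(-5147.0, 10, n_obs),
}
best = min(candidates, key=candidates.get)
# the extra parameters of the four-state model do not improve the
# likelihood enough to pay their ln(n) penalty, so three-state wins
```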

  20. Ballistic projectile trajectory determining system

    DOEpatents

    Karr, Thomas J.

    1997-01-01

    A computer controlled system determines the three-dimensional trajectory of a ballistic projectile. To initialize the system, predictions of state parameters for a ballistic projectile are received at an estimator. The estimator uses the predictions of the state parameters to estimate first trajectory characteristics of the ballistic projectile. A single stationary monocular sensor then observes the actual first trajectory characteristics of the ballistic projectile. A comparator generates an error value related to the predicted state parameters by comparing the estimated first trajectory characteristics of the ballistic projectile with the observed first trajectory characteristics. If the error value is equal to or greater than a selected limit, the predictions of the state parameters are adjusted, new estimates for the trajectory characteristics are made, and these are again compared with the observed trajectory characteristics. This process is repeated until the error value is less than the selected limit. Once the error value is less than the selected limit, a calculator calculates trajectory characteristics such as the origin and destination of the ballistic projectile.
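
    The estimate-compare-adjust loop can be sketched for a single state parameter, the launch speed of a drag-free projectile, with the observed range as the compared trajectory characteristic. The physics, gain and tolerance here are illustrative simplifications of the patented system:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def predicted_range(speed, angle_deg):
    """Range of a drag-free ballistic projectile on flat ground."""
    a = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * a) / G

# Observed trajectory characteristic (illustrative sensor measurement)
observed_range = 2500.0   # metres
angle = 45.0

speed = 100.0             # initial prediction of the state parameter
limit = 0.01              # selected error limit (metres)
while True:
    error = predicted_range(speed, angle) - observed_range
    if abs(error) < limit:
        break                 # error below limit: accept the estimate
    speed -= 0.001 * error    # adjust the predicted state parameter

# speed now reproduces the observed range to within the limit
```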

  1. The application of a multi-parameter analysis in choosing the location of a new solid waste landfill in Serbia.

    PubMed

    Milosevic, Igor; Naunovic, Zorana

    2013-10-01

    This article presents a process of evaluation and selection of the most favourable location for a sanitary landfill facility from three alternative locations, by applying a multi-criteria decision-making (MCDM) method. An incorrect choice of location for a landfill facility can have a significant negative economic and environmental impact, such as the pollution of air, ground and surface waters. The aim of this article is to present several improvements in the practical process of landfill site selection using the VIKOR MCDM compromise ranking method integrated with a fuzzy analytic hierarchy process approach for determining the evaluation criteria weighting coefficients. The VIKOR method focuses on ranking and selecting from a set of alternatives in the presence of conflicting and non-commensurable (different units) criteria, and on proposing a compromise solution that is closest to the ideal solution. The work shows that valuable site ranking lists can be obtained using the VIKOR method, which is a suitable choice when there is a large number of relevant input parameters.
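
    The core of the VIKOR ranking can be sketched compactly: each alternative receives a group-utility score S, an individual-regret score R, and a compromise index Q that blends them. The site scores and weights below are illustrative, and all criteria are treated as benefit criteria (higher is better):

```python
def vikor(scores, weights, v=0.5):
    """VIKOR compromise ranking. scores[i][j] is alternative i on
    benefit criterion j. Returns the Q index per alternative; lower is better."""
    n_crit = len(weights)
    best  = [max(row[j] for row in scores) for j in range(n_crit)]
    worst = [min(row[j] for row in scores) for j in range(n_crit)]
    S, R = [], []
    for row in scores:
        d = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
             for j in range(n_crit)]
        S.append(sum(d))   # group utility (weighted total distance to ideal)
        R.append(max(d))   # individual regret (worst single criterion)
    s_min, s_max = min(S), max(S)
    r_min, r_max = min(R), max(R)
    return [v * (S[i] - s_min) / (s_max - s_min)
            + (1 - v) * (R[i] - r_min) / (r_max - r_min)
            for i in range(len(scores))]

# Three candidate landfill sites scored on three benefit criteria
sites = [[7.0, 5.0, 8.0],
         [6.0, 8.0, 6.0],
         [4.0, 6.0, 9.0]]
weights = [0.5, 0.3, 0.2]
q = vikor(sites, weights)
best_site = q.index(min(q))   # alternative with the lowest Q
```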

  2. Nanoscale plasma chemistry enables fast, size-selective nanotube nucleation.

    PubMed

    Ostrikov, Kostya Ken; Mehdipour, Hamid

    2012-03-07

    The possibility of fast, narrow-size/chirality nucleation of thin single-walled carbon nanotubes (SWCNTs) at low, device-tolerant process temperatures in a plasma-enhanced chemical vapor deposition (CVD) is demonstrated using multiphase, multiscale numerical experiments. These effects are due to the unique nanoscale reactive plasma chemistry (NRPC) on the surfaces and within Au catalyst nanoparticles. The computed three-dimensional process parameter maps link the nanotube incubation times and the relative differences between the incubation times of SWCNTs of different sizes/chiralities to the main plasma- and precursor gas-specific parameters and explain recent experimental observations. It is shown that the unique NRPC leads not only to much faster nucleation of thin nanotubes at much lower process temperatures, but also to better selectivity between the incubation times of SWCNTs with different sizes and chiralities, compared to thermal CVD. These results are used to propose a time-programmed kinetic approach based on fast-responding plasmas which control the size-selective, narrow-chirality nucleation and growth of thin SWCNTs. This approach is generic and can be used for other nanostructure and materials systems. © 2012 American Chemical Society

  3. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those of neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships incorporated as one of the objective functions. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
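
    The regionalization selection step can be sketched as follows. The weighted-L1 form of the closeness measure, the parameter values, and the weights below are hypothetical stand-ins for illustration; the abstract does not specify the exact form of its measure.

```python
# Choose, from a basin's Pareto-optimal parameter sets, the one closest to an
# a priori parameter set, weighting parameters assumed similar across basins
# more heavily. The weighted-L1 form of the measure is an assumption.

def closeness(candidate, apriori, weights):
    return sum(w * abs(c - a) for c, a, w in zip(candidate, apriori, weights))

def select_pareto_set(pareto_sets, apriori, weights):
    return min(pareto_sets, key=lambda p: closeness(p, apriori, weights))

apriori = [1.0, 0.5, 2.0]   # a priori values for three model parameters
weights = [0.7, 0.2, 0.1]   # higher weight = parameter assumed more similar across basins
pareto = [[1.2, 0.9, 1.5], [0.9, 0.4, 3.0], [2.0, 0.5, 2.0]]
chosen = select_pareto_set(pareto, apriori, weights)
```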

  4. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
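
    The AIC/BIC model-selection step mentioned above can be sketched as follows; the candidate models, log-likelihoods, and observation count are invented for illustration.

```python
import math

# Score candidate models by AIC = 2k - 2*lnL and BIC = k*ln(n) - 2*lnL,
# where lnL is the maximised log-likelihood, k the number of free
# parameters, and n the number of observations; lower scores win.

def aic(loglik, k):
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

# hypothetical candidates: (maximised log-likelihood, parameter count)
candidates = {"model_A": (-120.0, 4), "model_B": (-114.0, 9)}
n_obs = 30  # number of sparse observations

best_aic = min(candidates, key=lambda m: aic(*candidates[m]))
best_bic = min(candidates, key=lambda m: bic(*candidates[m], n_obs))
```

    Note that the two criteria can disagree: BIC's stronger per-parameter penalty (ln 30 ≈ 3.4 versus AIC's 2) favours the smaller model in this toy case, which is one reason to report both.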

  5. Tailoring Selective Laser Melting Process Parameters for NiTi Implants

    NASA Astrophysics Data System (ADS)

    Bormann, Therese; Schumacher, Ralf; Müller, Bert; Mertmann, Matthias; de Wild, Michael

    2012-12-01

    Complex-shaped NiTi constructions become more and more essential for biomedical applications especially for dental or cranio-maxillofacial implants. The additive manufacturing method of selective laser melting allows realizing complex-shaped elements with predefined porosity and three-dimensional micro-architecture directly out of the design data. We demonstrate that the intentional modification of the applied energy during the SLM-process allows tailoring the transformation temperatures of NiTi entities within the entire construction. Differential scanning calorimetry, x-ray diffraction, and metallographic analysis were employed for the thermal and structural characterizations. In particular, the phase transformation temperatures, the related crystallographic phases, and the formed microstructures of SLM constructions were determined for a series of SLM-processing parameters. The SLM-NiTi exhibits pseudoelastic behavior. In this manner, the properties of NiTi implants can be tailored to build smart implants with pre-defined micro-architecture and advanced performance.

  6. Visualization and processing of computed solid-state NMR parameters: MagresView and MagresPython.

    PubMed

    Sturniolo, Simone; Green, Timothy F G; Hanson, Robert M; Zilka, Miri; Refson, Keith; Hodgkinson, Paul; Brown, Steven P; Yates, Jonathan R

    2016-09-01

    We introduce two open source tools to aid the processing and visualisation of ab-initio computed solid-state NMR parameters. The Magres file format for computed NMR parameters (as implemented in CASTEP v8.0 and QuantumEspresso v5.0.0) is implemented. MagresView is built upon the widely used Jmol crystal viewer, and provides an intuitive environment to display computed NMR parameters. It can provide simple pictorial representation of one- and two-dimensional NMR spectra as well as output a selected spin-system for exact simulations with dedicated spin-dynamics software. MagresPython provides a simple scripting environment to manipulate large numbers of computed NMR parameters to search for structural correlations. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  7. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance determination (GPR-ARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures was used to determine model quality. The inherently nonlinear nature of the skin dataset was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information.
However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
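
As a schematic of wrapper-style descriptor selection, the sketch below exhaustively scores every descriptor subset with a stand-in quality function. The scorer is a toy that merely echoes the abstract's findings (log P and hydrogen-bond donors matter, and melting point/molecular weight are interchangeable); it is not the paper's Gaussian process model.

```python
from itertools import combinations

def best_subset(features, score):
    """Exhaustive wrapper feature selection: keep the highest-scoring subset."""
    best, best_score = None, float("-inf")
    for r in range(1, len(features) + 1):
        for subset in combinations(features, r):
            s = score(subset)
            if s > best_score:
                best, best_score = set(subset), s
    return best, best_score

def toy_score(subset):
    # Toy quality function (an assumption, not the paper's GP likelihood):
    # rewards logP and hydrogen-bond donors (HBD), treats melting point (MP)
    # and molecular weight (MW) as interchangeable, penalises subset size.
    s = set(subset)
    gain = sum(d in s for d in ("logP", "HBD")) + ("MP" in s or "MW" in s)
    return gain - 0.1 * len(s)

descriptors = ["logP", "MP", "MW", "HBD", "SP"]
chosen, best_score = best_subset(descriptors, toy_score)
```

In a real application the scorer would be a cross-validated model-quality estimate, and exhaustive search would give way to greedy or stochastic search as the descriptor count grows.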

  8. Inside the Mind of a Medicinal Chemist: The Role of Human Bias in Compound Prioritization during Drug Discovery

    PubMed Central

    Kutchukian, Peter S.; Vasilyeva, Nadya Y.; Xu, Jordan; Lindvall, Mika K.; Dillon, Michael P.; Glick, Meir; Coley, John D.; Brooijmans, Natasja

    2012-01-01

    Medicinal chemists’ “intuition” is critical for success in modern drug discovery. Early in the discovery process, chemists select a subset of compounds for further research, often from many viable candidates. These decisions determine the success of a discovery campaign, and ultimately what kind of drugs are developed and marketed to the public. Surprisingly little is known about the cognitive aspects of chemists’ decision-making when they prioritize compounds. We investigate 1) how and to what extent chemists simplify the problem of identifying promising compounds, 2) whether chemists agree with each other about the criteria used for such decisions, and 3) how accurately chemists report the criteria they use for these decisions. Chemists were surveyed and asked to select chemical fragments that they would be willing to develop into a lead compound from a set of ∼4,000 available fragments. Based on each chemist’s selections, computational classifiers were built to model each chemist’s selection strategy. Results suggest that chemists greatly simplified the problem, typically using only 1–2 of many possible parameters when making their selections. Although chemists tended to use the same parameters to select compounds, differing value preferences for these parameters led to an overall lack of consensus in compound selections. Moreover, what little agreement there was among the chemists was largely in what fragments were undesirable. Furthermore, chemists were often unaware of the parameters (such as compound size) which were statistically significant in their selections, and overestimated the number of parameters they employed. A critical evaluation of the problem space faced by medicinal chemists and cognitive models of categorization were especially useful in understanding the low consensus between chemists. PMID:23185259

  9. The study on injection parameters of selected alternative fuels used in diesel engines

    NASA Astrophysics Data System (ADS)

    Balawender, K.; Kuszewski, H.; Lejda, K.; Lew, K.

    2016-09-01

    The paper presents selected results concerning the fuel charging and spraying processes for selected alternative fuels, including regular diesel fuel, rape oil, FAME, blends of these fuels in various proportions, and blends of rape oil with diesel fuel. Examination of the process included fuel charge measurements. To this end, a test stand for examination of Common Rail-type injection systems was used, constructed on the basis of the Bosch EPS-815 test bench, from which the high-pressure pump drive system was adopted. For tests concerning the spraying process, a constant-volume visualisation chamber was utilised. The fuel spray development was registered with the use of VisioScope (AVL).

  10. Multiphysics modeling of selective laser sintering/melting

    NASA Astrophysics Data System (ADS)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net shape parts with complicated geometries. In SLS/SLM parts are built up layer-by-layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three dimensional, reduced order, coupled discrete element - finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. 
Next, thermal stresses upon cooling are calculated using the finite difference method. Different case studies are performed and general trends can be seen. This work concludes by discussing future extensions of this model and the need for a multi-scale approach to achieve comprehensive part-level models of the SLS/SLM process.
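
The laser-heating part of such a model can be caricatured with a one-dimensional explicit finite-difference scheme. The sketch below uses homogenised material properties and invented parameter values, and is far simpler than the coupled discrete element/finite difference model of the dissertation.

```python
# 1-D transient conduction through a powder layer with an absorbed laser
# flux at the surface node. All property values are illustrative.

def heat_layer(nsteps, nz=20, dz=5e-6, dt=1e-7,
               alpha=1e-6, flux=1e6, rho_c=2e6):
    """Return the temperature profile (K) after nsteps time steps."""
    r = alpha * dt / dz ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    T = [300.0] * nz  # uniform initial temperature, K
    for _ in range(nsteps):
        Tn = T[:]
        for i in range(1, nz - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        # surface node: conduction plus absorbed laser flux
        Tn[0] = T[0] + 2 * r * (T[1] - T[0]) + dt * flux / (rho_c * dz)
        Tn[-1] = Tn[-2]  # crude insulated far boundary
        T = Tn
    return T

T = heat_layer(2000)
```

Even this caricature shows why parameter studies matter: the peak temperature scales with the absorbed flux (laser power) and dwell time (scan speed), which is exactly the trade-off the full model is built to explore.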

  11. Spacecraft orbit/earth scan derivations, associated APL program, and application to IMP-6

    NASA Technical Reports Server (NTRS)

    Smith, G. A.

    1971-01-01

    The derivation of a time-shared, remote-site, demand-processed computer program is discussed. The computer program analyzes the effects of selected orbit, attitude, and spacecraft parameters on earth sensor detections of earth. For prelaunch analysis, the program may be used to simulate the effects of nominal parameters used in preparing attitude data processing programs. After launch, comparison of results from a simulation and from satellite data will produce deviations helpful in isolating problems.

  12. Selective laser sintering: A qualitative and objective approach

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjay

    2003-10-01

    This article presents an overview of selective laser sintering (SLS) work as reported in various journals and proceedings. Selective laser sintering was first done mainly on polymers and nylon to create prototypes for audio-visual help and fit-to-form tests. Gradually it was expanded to include metals and alloys to manufacture functional prototypes and develop rapid tooling. The growth gained momentum with the entry of commercial entities such as DTM Corporation and EOS GmbH Electro Optical Systems. Computational modeling has been used to understand the SLS process, optimize the process parameters, and enhance the efficiency of the sintering machine.

  13. Genetic parameters for carcass traits and body weight using a Bayesian approach in the Canchim cattle.

    PubMed

    Meirelles, S L C; Mokry, F B; Espasandín, A C; Dias, M A D; Baena, M M; de A Regitano, L C

    2016-06-10

    Genetic parameters and correlations for backfat thickness (BFT), rib eye area (REA), and body weight (BW) were estimated for Canchim beef cattle raised in natural pastures of Brazil. Data from 1648 animals were analyzed using multi-trait (BFT, REA, and BW) animal models by the Bayesian approach. The model included the effects of contemporary group, age, and individual heterozygosity as covariates, in addition to direct additive genetic and random residual effects. Heritabilities estimated for BFT (0.16), REA (0.50), and BW (0.44) indicated their potential for genetic improvement and response to selection. Furthermore, genetic correlations between BW and the remaining traits were high (P > 0.50), suggesting that selection for BW could improve REA and BFT. On the other hand, the genetic correlation between BFT and REA was low (P = 0.39 ± 0.17) and showed considerable variation, suggesting that these traits can be jointly included as selection criteria without influencing each other. We found that REA and BFT, as measured by ultrasound, responded to the selection process. Therefore, selection for yearling weight results in changes in REA and BFT.

  14. The Influence of Friction Stir Weld Tool Form and Welding Parameters on Weld Structure and Properties: Nugget Bulge in Self-Reacting Friction Stir Welds

    NASA Technical Reports Server (NTRS)

    Schneider, Judy; Nunes, Arthur C., Jr.; Brendel, Michael S.

    2010-01-01

    Although friction stir welding (FSW) was patented in 1991, process development has been based upon trial and error and the literature still exhibits little understanding of the mechanisms determining weld structure and properties. New concepts emerging from a better understanding of these mechanisms enhance the ability of FSW engineers to think about the FSW process in new ways, inevitably leading to advances in the technology. A kinematic approach in which the FSW flow process is decomposed into several simple flow components has been found to explain the basic structural features of FSW welds and to relate them to tool geometry and process parameters. Using this modelling approach, this study reports on a correlation between the features of the weld nugget, process parameters, weld tool geometry, and weld strength. This correlation presents a way to select process parameters for a given tool geometry so as to optimize weld strength. It also provides clues that may ultimately explain why the weld strength varies within the sample population.

  15. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
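
    The idea of absorbing kernel-parameter tuning into the first level of inference can be caricatured with a tiny stand-in: below, a one-parameter kernel smoother (not an LS-SVM) picks its RBF width by minimising a penalised training criterion over a grid, rather than in a separate model-selection stage. The dataset, penalty form, and grid are all invented for illustration.

```python
import math

def nw_predict(x, xs, ys, width):
    """Nadaraya-Watson kernel estimate (toy surrogate for a kernel expansion)."""
    ws = [math.exp(-((x - xi) / width) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

def criterion(width, xs, ys, lam=0.02):
    # mean squared training error plus a regularisation term acting on the
    # kernel parameter itself, discouraging degenerate (tiny) widths
    err = sum((nw_predict(x, xs, ys, width) - y) ** 2 for x, y in zip(xs, ys))
    return err / len(xs) + lam / width

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]
widths = [round(0.1 * k, 1) for k in range(1, 31)]
best_width = min(widths, key=lambda w: criterion(w, xs, ys))
```

    The point mirrors the paper's: once the kernel parameter is regularised inside the training criterion, only the regularisation constants remain to be set at the second level, shrinking the model-selection search space and the associated risk of over-fitting it.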

  16. Study of Material Consolidation at Higher Throughput Parameters in Selective Laser Melting of Inconel 718

    NASA Technical Reports Server (NTRS)

    Prater, Tracie

    2016-01-01

    Selective Laser Melting (SLM) is a powder bed fusion additive manufacturing process used increasingly in the aerospace industry to reduce the cost, weight, and fabrication time for complex propulsion components. SLM stands poised to revolutionize propulsion manufacturing, but there are a number of technical questions that must be addressed in order to achieve rapid, efficient fabrication and ensure adequate performance of parts manufactured using this process in safety-critical flight applications. Previous optimization studies for SLM using the Concept Laser M1 and M2 machines at NASA Marshall Space Flight Center have centered on machine default parameters. The objective of this work is to characterize the impact of higher throughput parameters (a previously unexplored region of the manufacturing operating envelope for this application) on material consolidation. In phase I of this work, density blocks were analyzed to explore the relationship between build parameters (laser power, scan speed, hatch spacing, and layer thickness) and material consolidation (assessed in terms of as-built density and porosity). Phase II additionally considers the impact of post-processing, specifically hot isostatic pressing and heat treatment, as well as deposition pattern on material consolidation in the same higher energy parameter regime considered in the phase I work. Density and microstructure represent the "first-gate" metrics for determining the adequacy of the SLM process in this parameter range and, as a critical initial indicator of material quality, will factor into a follow-on DOE that assesses the impact of these parameters on mechanical properties. 
This work will contribute to creating a knowledge base (understanding material behavior in all ranges of the AM equipment operating envelope) that is critical to transitioning AM from the custom low rate production sphere it currently occupies to the world of mass high rate production, where parts are fabricated at a rapid rate with confidence that they will meet or exceed all stringent functional requirements for spaceflight hardware. These studies will also provide important data on the sensitivity of material consolidation to process parameters that will inform the design and development of future flight articles using SLM.

  17. Analytical network process based optimum cluster head selection in wireless sensor network.

    PubMed

    Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad

    2017-01-01

    Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable sensors for health monitoring and other plethora of applications. WSN is equipped with hundreds and thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more highlighted. In such a case, network lifetime mainly depends on efficient use of available resources. Organizing nearby nodes into clusters make it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work of grid-based hybrid network deployment approach, in which merge and split technique has been proposed to construct network topology. Constructing topology through our proposed technique, in this paper we have used analytical network process (ANP) model for cluster head selection in WSN. Five distinct parameters: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN) are considered for CH selection. The problem of CH selection based on these parameters is tackled as a multi criteria decision system, for which ANP method is used for optimum cluster head selection. Main contribution of this work is to check the applicability of ANP model for cluster head selection in WSN. In addition, sensitivity analysis is carried out to check the stability of alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy efficient clustering protocols in terms of optimum CH selection and minimizing CH reselection process that results in extending overall network lifetime. 
The analysis also shows that using the ANP method for CH selection provides a better understanding of the dependencies among the different components involved in the evaluation process.
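
ANP proper derives its weights from pairwise comparisons and a supermatrix; as a much-simplified stand-in, the sketch below ranks candidate cluster heads by a weighted sum of the five criteria named in the abstract, with invented weights and node values.

```python
# Simple additive weighting over the five CH-selection criteria. Criterion
# values are assumed pre-scaled to [0, 1]; for cost-type criteria (smaller
# is better) the complement is used. Weights are illustrative assumptions.

CRITERIA = ["DistNode", "REL", "DistCent", "TCH", "MN"]
WEIGHTS = [0.25, 0.35, 0.20, 0.10, 0.10]
COST = [True, False, True, True, False]  # DistNode, DistCent, TCH: lower is better

def score(node):
    return sum(w * ((1.0 - v) if is_cost else v)
               for v, w, is_cost in zip(node, WEIGHTS, COST))

# candidate nodes with criterion values in CRITERIA order
nodes = {
    "n1": [0.2, 0.9, 0.3, 0.1, 1.0],
    "n2": [0.6, 0.7, 0.2, 0.5, 0.0],
    "n3": [0.4, 0.5, 0.8, 0.2, 1.0],
}
cluster_head = max(nodes, key=lambda n: score(nodes[n]))
```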

  19. Selection of Levels of Dressing Process Parameters by Using TOPSIS Technique for Surface Roughness of En-31 Work piece in CNC Cylindrical Grinding Machine

    NASA Astrophysics Data System (ADS)

    Patil, Sanjay S.; Bhalerao, Yogesh J.

    2017-02-01

    Grinding is a metal cutting process used mainly for finishing automobile components. The grinding wheel becomes dull with repeated use, so it must be reshaped for consistent performance; removing the dull grains of the grinding wheel is known as the dressing process. The surface finish produced on the work piece in the subsequent grinding operation depends on the dressing parameters. A multi-point diamond dresser has four important parameters: the dressing cross feed rate, the dressing depth of cut, the width of the diamond dresser and the drag angle of the dresser. The cross feed rate ranges from 80-100 mm/min, the depth of cut from 10-30 micron, the width of the diamond dresser from 0.8-1.10 mm and the drag angle from 40°-50°. The relative closeness to the ideal levels of the dressing parameters is found for the surface finish produced on the En-31 work piece during the subsequent grinding operation by using the Technique of Order Preference by Similarity to Ideal Solution (TOPSIS). In the present work, the closeness to the ideal solution, i.e. the levels of the dressing parameters, is determined for a Computer Numerical Control (CNC) cylindrical angular grinding machine. The TOPSIS analysis found that Level I has a closeness value of 0.9738 and gives the best surface finish on the En-31 work piece in the subsequent grinding operation, which helps the user select the correct levels (combinations) of dressing parameters.
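
    The TOPSIS ranking itself is mechanical and can be sketched as below. The decision matrix, weights, and criteria are invented for illustration; the article's reported closeness of 0.9738 for Level I comes from its own measured data.

```python
import math

def topsis(matrix, weights, benefit):
    """Closeness-to-ideal scores; rows = alternatives (levels), columns = criteria."""
    ncol = len(weights)
    # vector-normalise each column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncol)]
    V = [[weights[j] * row[j] / norms[j] for j in range(ncol)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_pos = math.sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(ncol)))
        d_neg = math.sqrt(sum((row[j] - anti[j]) ** 2 for j in range(ncol)))
        scores.append(d_neg / (d_pos + d_neg))  # 1 = at the ideal solution
    return scores

# rows = dressing-parameter levels; columns = [Ra roughness (cost), removal rate (benefit)]
levels = [[0.4, 8.0], [0.6, 9.0], [0.9, 7.0]]
closeness = topsis(levels, weights=[0.6, 0.4], benefit=[False, True])
best_level = closeness.index(max(closeness))
```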

  20. Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Anderson, B. J.; Justus, C. G.; Batts, G. W.

    2001-01-01

    Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems with various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in the interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the selection of parameters for input into extreme hot and cold thermal analyses and design specifications. This reprocessing also yielded greatly improved values of the cold-case OLR for high inclination orbits. Thermal parameters for satellites in low, medium, and high inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information on the interpretation and application of this material and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.

  1. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    PubMed

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.
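
    The Pareto-filtering step that underlies TreePOD's selection guidance can be sketched as follows; the candidate trees and the two objectives (accuracy and an interpretability proxy, both to be maximised) are invented for illustration.

```python
# Keep only candidate trees not dominated on the (accuracy, simplicity) trade-off.

def dominates(a, b):
    """a dominates b if it is at least as good on both objectives and differs."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates)]

# (accuracy, simplicity) pairs for five hypothetical candidate trees
trees = [(0.90, 0.2), (0.85, 0.6), (0.80, 0.9), (0.84, 0.5), (0.70, 0.3)]
front = pareto_front(trees)
```

    Restricting attention to this front is what lets the tool present only candidates for which no alternative is better on both objectives at once.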

  2. Influence of Selective Laser Melting Processing Parameters of Co-Cr-W Powders on the Roughness of Exterior Surfaces

    NASA Astrophysics Data System (ADS)

    Baciu, M. A.; Baciu, E. R.; Bejinariu, C.; Toma, S. L.; Danila, A.; Baciu, C.

    2018-06-01

    Selective Laser Melting (SLM) represents an Additive Manufacturing method widely used in medical practice, mainly in dental medicine. The powder of a 59% Co, 25% Cr, 2.5% W alloy (Starbond CoS Powder 55, S&S Scheftner C, Germany) was processed by SLM on a Realizer SLM 50 device (SLM Solution, Germany). After laser processing and either simple sanding with Al2O3 or two-phase sanding (Al2O3 and glass balls), measurements of surface roughness were conducted. This paper presents the influence of laser power (P = 60 W, 80 W and 100 W), scanning speed (vscan = 333 mm/s, 500 mm/s and 1000 mm/s) and exposure time (te = 20 µs, 40 µs and 60 µs) on the roughness of surfaces obtained by SLM processing. Based on the experimental results for roughness (Ra), recommendations are made regarding favorable combinations of the technological parameters under study for obtaining the surface quality required by subsequent applications of the SLM-processed parts.

  3. Process Optimization and Microstructure Characterization of Ti6Al4V Manufactured by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Li, Junfeng; Wei, Zhengying

    2017-11-01

    Process optimization and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters, including laser power, scan speed, and hatch distance. The volume energy density (VED) was defined to account for the combined effect of the main process parameters on the relative density. The results showed that the relative density changed with VED, and the optimized process window is 55–60 J/mm³. Furthermore, comparing laser power, scan speed, and hatch distance by the Taguchi method, it was found that scan speed had the greatest effect on the relative density. Comparing the cross-sectional microstructures of specimens produced at different scanning speeds showed similar characteristics: all consisted of needle-like martensite distributed in the β matrix. However, the microstructure becomes finer as the scanning speed increases, while lower scan speeds lead to coarsening.
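
    Volume energy density is commonly defined as E = P / (v · h · t), with laser power P, scan speed v, hatch distance h and layer thickness t. The abstract does not spell out the formula, so this definition and the sample parameter values below are assumptions for illustration.

```python
def volume_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """Volume energy density E = P / (v * h * t), in J/mm^3."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

def in_optimal_window(ved, low=55.0, high=60.0):
    """Check a VED value against the 55-60 J/mm^3 window reported for Ti6Al4V."""
    return low <= ved <= high

# Illustrative (hypothetical) parameter set: 175 W, 1000 mm/s, 0.10 mm hatch, 0.03 mm layer
ved = volume_energy_density(175, 1000, 0.10, 0.03)  # 58.33 J/mm^3
```

A set like this lands inside the reported window; halving the scan speed would double the VED and overshoot it, which is why the abstract singles out scan speed as the dominant factor.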

  4. Harmony search optimization in dimensional accuracy of die sinking EDM process using SS316L stainless steel

    NASA Astrophysics Data System (ADS)

    Deris, A. M.; Zain, A. M.; Sallehuddin, R.; Sharif, S.

    2017-09-01

    Electric discharge machining (EDM) is one of the widely used nonconventional machining processes for hard and difficult-to-machine materials. Due to the large number of machining parameters in EDM and its complicated structure, the selection of the optimal machining parameters for minimizing the machining response remains a challenging task for researchers. This paper presents an experimental investigation and optimization of machining parameters for the EDM process on a stainless steel 316L workpiece using the Harmony Search (HS) algorithm. A mathematical model was developed using a regression approach, with four input parameters (pulse on-time, peak current, servo voltage, and servo speed) and one output response, dimensional accuracy (DA). The optimal result of the HS approach was compared with regression analysis, and HS was found to give the better result, yielding the lowest DA value.
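
    A minimal harmony search loop of the kind the abstract applies can be sketched as follows. The quadratic stand-in for the regression model of DA, the parameter bounds, and the HS settings (memory size, hmcr, par) are all illustrative assumptions, not the paper's actual model.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimal harmony search: keep a memory of candidate vectors, draw new
    harmonies from memory (rate hmcr) with pitch adjustment (rate par),
    and replace the worst member whenever the new harmony improves on it."""
    rng = random.Random(seed)
    dim = len(bounds)
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in hm]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = hm[rng.randrange(hms)][d]          # memory consideration
                if rng.random() < par:                 # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.05 * (hi - lo)
                x = min(max(x, lo), hi)
            else:
                x = rng.uniform(lo, hi)                # random selection
            new.append(x)
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return hm[best], scores[best]

# Hypothetical quadratic surrogate for the regression model of dimensional accuracy,
# over pulse on-time, peak current, servo voltage, servo speed
def da_model(x):
    ton, ip, sv, ss = x
    return (ton - 4) ** 2 + (ip - 8) ** 2 + 0.01 * (sv - 40) ** 2 + 1e-4 * (ss - 200) ** 2

bounds = [(1, 10), (1, 16), (20, 80), (50, 400)]
best_x, best_da = harmony_search(da_model, bounds)
```

In the paper, the objective would be the fitted regression equation rather than this surrogate; the search mechanics are the same.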

  5. Color separation in forensic image processing using interactive differential evolution.

    PubMed

    Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

    2015-01-01

    Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) with a color separation technique, so that users no longer need to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent, as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.
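
    The differential evolution loop underlying IDE can be sketched as follows. In the interactive method the fitness comes from human visual judgment of the separated image, so the numeric error function and the two control parameters here (a target hue and a tolerance) are purely illustrative assumptions.

```python
import random

def differential_evolution(fitness, bounds, pop_size=20, f=0.8, cr=0.9, gens=100, seed=7):
    """Basic DE/rand/1/bin minimizer; in IDE the fitness evaluation is
    replaced by the user's visual judgment of each candidate."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < cr or d == jrand:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])  # mutation
                    v = min(max(v, lo), hi)
                else:
                    v = pop[i][d]                                # crossover keeps parent gene
                trial.append(v)
            s = fitness(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Hypothetical control parameters: target hue (deg) and tolerance of the unwanted color
def separation_error(x):
    hue, tol = x
    return (hue - 120.0) ** 2 + (tol - 15.0) ** 2

best, err = differential_evolution(separation_error, [(0, 360), (1, 60)])
```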

  6. Investigation on Effect of Material Hardness in High Speed CNC End Milling Process.

    PubMed

    Dhandapani, N V; Thangarasu, V S; Sureshkannan, G

    2015-01-01

    This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear in high speed CNC end milling of various ferrous and nonferrous materials. It addresses the challenge of making material-specific decisions on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of coating for the cutting tool, for the required quality and quantity of production. Generally, decisions made by the operator on the shop floor are based on values suggested by the tool manufacturer or on trial and error. This paper describes the effect of various parameters on the surface roughness characteristics of the precision-machined part. The suggested prediction method is based on experimental analyses of the parameters under different combinations of input conditions, which would benefit the industry in standardizing high speed CNC end milling processes. The results provide a basis for selecting parameters that achieve better surface roughness values, as predicted by the case study.

  7. Investigation on Effect of Material Hardness in High Speed CNC End Milling Process

    PubMed Central

    Dhandapani, N. V.; Thangarasu, V. S.; Sureshkannan, G.

    2015-01-01

    This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear in high speed CNC end milling of various ferrous and nonferrous materials. It addresses the challenge of making material-specific decisions on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of coating for the cutting tool, for the required quality and quantity of production. Generally, decisions made by the operator on the shop floor are based on values suggested by the tool manufacturer or on trial and error. This paper describes the effect of various parameters on the surface roughness characteristics of the precision-machined part. The suggested prediction method is based on experimental analyses of the parameters under different combinations of input conditions, which would benefit the industry in standardizing high speed CNC end milling processes. The results provide a basis for selecting parameters that achieve better surface roughness values, as predicted by the case study. PMID:26881267

  8. Energy reduction for the spot welding process in the automotive industry

    NASA Astrophysics Data System (ADS)

    Cullen, J. D.; Athi, N.; Al-Jader, M. A.; Shaw, A.; Al-Shamma'a, A. I.

    2007-07-01

    When performing spot welding on galvanised metals, higher welding force and current are required than on uncoated steels. This has implications for the energy used in creating each spot weld, of which there are approximately 4300 in each passenger car. This paper presents an overview of electrode current selection and its variation over the lifetime of the electrode tip. It also describes the proposed analysis system for selecting welding parameters for the spot welding process as the electrode tip wears.

  9. Experimental investigation on selective laser melting of 17-4PH stainless steel

    NASA Astrophysics Data System (ADS)

    Hu, Zhiheng; Zhu, Haihong; Zhang, Hu; Zeng, Xiaoyan

    2017-01-01

    Selective laser melting (SLM) is an additive manufacturing (AM) technique that uses powders to fabricate 3D parts directly. The objective of this paper is to perform an experimental investigation of selective laser melted 17-4PH stainless steel. The investigation covered the influence of individual processing parameters on the density, defects, and microhardness, and the influence of heat treatment on the mechanical properties. The outcomes of this study show that scan velocity and slice thickness have significant effects on the density and the pore characteristics of the SLMed parts. The effect of hatch spacing depends on the scan velocity. The processing parameters (scan velocity, hatch spacing, and slice thickness) also affect microhardness. Compared to the non-heat-treated samples, the yield strength of the heat-treated sample increases significantly and the elongation decreases, due to the transformation of the microstructure and changes in the precipitation-strengthening phases. Through a combination of changes in composition and precipitation strengthening, the microhardness improved.

  10. Genetic algorithm optimization of DWT-DCT based image watermarking

    NASA Astrophysics Data System (ADS)

    Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan

    2017-01-01

    Hiding data in image content is essential for establishing image ownership. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, a 2D DWT transforms the selected layer, yielding four subbands, of which one is selected. A block-based 2D DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based pixel selection. A delta parameter replacing the pixels in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". The parameters optimized by the genetic algorithm (GA) are the selected color space, layer, selected subband of the DWT decomposition, block size, embedding range, and delta. Simulation results show that GA can determine parameters that achieve optimum imperceptibility and robustness for any watermarked-image condition, whether attacked or not. The DWT step in DCT-based image watermarking, optimized by GA, has improved watermarking performance. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the proposed method achieves perfect watermark recovery with BER = 0, and the watermarked image quality measured by PSNR is about 5 dB higher than that of the previous method.
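
    The sign-based delta embedding described above ("+delta represents bit 1 and -delta represents bit 0") can be sketched as follows. The zigzag ordering and range-based coefficient selection are omitted; replacing the first few AC coefficients is a simplifying assumption for illustration.

```python
def embed_bits(coeffs, bits, delta=4.0):
    """Replace the first len(bits) AC coefficients with +delta for bit 1
    and -delta for bit 0 (sign-based embedding, simplified)."""
    out = list(coeffs)
    for i, bit in enumerate(bits):
        out[i] = delta if bit else -delta
    return out

def extract_bits(coeffs, n):
    """Recover the watermark bits from the sign of the first n coefficients."""
    return [1 if c > 0 else 0 for c in coeffs[:n]]

bits = [1, 0, 1, 1, 0]
marked = embed_bits([3.1, -0.4, 2.2, 0.0, 5.5, 1.0], bits)
recovered = extract_bits(marked, len(bits))
```

Because extraction only reads coefficient signs, moderate-magnitude attacks that preserve the sign of the embedded coefficients leave the recovered watermark intact, which is what the reported BER = 0 reflects.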

  11. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul; Lammlein, David; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David; Hartman, Daniel A.

    2010-12-14

    A friction stir system for processing at least a first workpiece includes a spindle actuator coupled to a rotary tool comprising a rotating member for contacting and processing the first workpiece. A detection system is provided for obtaining information related to a lateral alignment of the rotating member. The detection system comprises at least one sensor for measuring a force experienced by the rotary tool or a parameter related to the force experienced by the rotary tool during processing, wherein the sensor provides sensor signals. A signal processing system is coupled to receive and analyze the sensor signals and determine a lateral alignment of the rotating member relative to a selected lateral position, a selected path, or a direction to decrease a lateral distance relative to the selected lateral position or selected path. In one embodiment, the friction stir system can be embodied as a closed loop tracking system, such as a robot-based tracked friction stir welding (FSW) or friction stir processing (FSP) system.

  12. Multi-Criteria selection of technology for processing ore raw materials

    NASA Astrophysics Data System (ADS)

    Gorbatova, E. A.; Emelianenko, E. A.; Zaretckii, M. V.

    2017-10-01

    The development of Computer-Aided Process Planning (CAPP) for the Ore Beneficiation process is considered. The set of parameters defining the quality of the Ore Beneficiation process is identified, and the ontological model of CAPP for the process is described. A hybrid method for choosing the most appropriate variant of the Ore Beneficiation process, based on Logical Conclusion Rules and the Fuzzy Multi-Criteria Decision Making (MCDM) approach, is proposed.

  13. Multiple performance characteristics optimization for Al 7075 on electric discharge drilling by Taguchi grey relational theory

    NASA Astrophysics Data System (ADS)

    Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj

    2015-12-01

    Electric discharge drilling (EDDM) is a spark-erosion process used to produce micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental, and automobile industries. To evaluate the performance of an electric discharge drilling machine, it is necessary to study the process parameters of the machine tool. In this research, a 2 mm diameter brass rod was selected as the tool electrode. The experiments generated output responses such as tool wear rate (TWR). The parameters pulse on-time, pulse off-time, and water pressure were studied for the best machining characteristics. This investigation applies the Taguchi approach to improve TWR in the drilling of Al 7075. A plan of experiments based on an L27 Taguchi design was selected for drilling the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al 7075 by EDDM. The optimal combination of levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.
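
    Taguchi analysis of a smaller-the-better response such as TWR rests on the signal-to-noise ratio S/N = -10 · log10(mean(y²)); the combination with the highest S/N across the L27 runs is taken as optimal. The replicate values below are hypothetical, not the paper's data.

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better S/N ratio: -10 * log10(mean(y^2)).
    A higher S/N means lower and more consistent tool wear rate."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical TWR replicates (mg/min) for two parameter combinations
sn_a = sn_smaller_the_better([0.012, 0.014, 0.013])
sn_b = sn_smaller_the_better([0.020, 0.026, 0.023])
```

Combination A would be preferred here, since its S/N ratio is higher (lower wear, tighter spread).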

  14. The role of the electrolyte in the selective dissolution of metal alloys

    NASA Astrophysics Data System (ADS)

    Policastro, Steven A.

    Dealloying plays an important role in several corrosion processes, including pitting corrosion, through the formation of local cathodes from the selective dissolution of intermetallic particles, and stress-corrosion cracking, in which it is responsible for injecting cracks from the surface into the undealloyed bulk material. Additionally, directed dealloying in the laboratory to form nanoporous structures has been the subject of much recent study because of the unique structural properties that the porous layer provides. In order to better understand the physical reasons for dealloying, as well as the parameters that influence the evolution of the microstructure, several models have been proposed. Current theoretical descriptions of dealloying have been very successful in explaining some features of selective dissolution, but additional behaviors can be included in the model to improve understanding of the dealloying process. In the present work, the effects of electrolyte component interactions, temperature, alloy cohesive energies, and applied potential on the development of nanoporosity via the selective dissolution of the less-noble (LN) component from binary and ternary alloys are considered. Both a kinetic Monte Carlo (KMC) model of the behavior of the metal atoms and the electrolyte ions at the metal-solution interface and a phase-yield model of ligament coarsening are developed. Adding these parameters to the KMC model yields a rich set of behaviors in the simulation results, which suggest that selectively dissolving a binary alloy in a very aggressive electrolyte that targets the LN atoms could provide a porous microstructure that retains a higher concentration of LN atoms in its ligaments and thus more of the mechanical properties of the bulk alloy.
In addition, by adding even a small fraction of a third, noble component to form a ternary alloy the dissolution kinetics of the least noble component can be dramatically altered, providing a means of controlling dealloying depth. Some molecular dynamics calculations are used to justify the assumptions of metal atom motion in the KMC model. A recently developed parameter-space exploration technique, COERCE, is employed to optimize the process of obtaining meaningful parameter values from the KMC simulation.

  15. Machinery Bearing Fault Diagnosis Using Variational Mode Decomposition and Support Vector Machine as a Classifier

    NASA Astrophysics Data System (ADS)

    Rama Krishna, K.; Ramachandran, K. I.

    2018-02-01

    Crack propagation is a major cause of failure in rotating machines. It adversely affects productivity, safety, and machining quality. Hence, accurately detecting crack severity is imperative for the predictive maintenance of such machines. Fault diagnosis is an established approach to identifying faults by observing the non-linear behaviour of vibration signals under various operating conditions. In this work, we find the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode function components and recombining them accordingly. Obtaining the classification efficiencies involves three phases: feature extraction, feature selection, and feature classification. In the feature-extraction phase, statistical features are computed individually from the original and reconstructed signals. A few statistical parameters are then chosen in the feature-selection phase and classified using the SVM classifier. The results identify the best parameters and the appropriate kernel for the SVM classifier for detecting bearing faults. We conclude that the VMD-plus-SVM process yields better results than SVM applied to the raw signals, owing to the denoising and filtering of the raw vibration signals.
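
    The feature-extraction phase can be sketched with a few of the statistical features commonly used for bearing-fault signals. The synthetic "healthy" and "faulty" signals below are illustrative assumptions, not the paper's data; impulsive spikes from a damaged bearing typically raise kurtosis and crest factor.

```python
import math

def statistical_features(signal):
    """Mean, standard deviation, RMS, kurtosis, and crest factor of a signal."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    kurt = (sum((x - mean) ** 4 for x in signal) / n) / (var ** 2) if var else 0.0
    crest = max(abs(x) for x in signal) / rms if rms else 0.0
    return {"mean": mean, "std": std, "rms": rms, "kurtosis": kurt, "crest": crest}

# Synthetic signals: a clean sine (healthy) and the same sine with periodic
# impulsive spikes standing in for bearing-defect impacts (faulty)
healthy = [math.sin(2 * math.pi * k / 64) for k in range(1024)]
faulty = [x + (3.0 if k % 128 == 0 else 0.0) for k, x in enumerate(healthy)]
```

Feeding such feature vectors to an SVM (as the paper does) then reduces fault detection to a standard classification problem over the selected features.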

  16. Software and Hardware System for Fast Processes Study When Preparing Foundation Beds of Oil and Gas Facilities

    NASA Astrophysics Data System (ADS)

    Gruzin, A. V.; Gruzin, V. V.; Shalay, V. V.

    2018-04-01

    Analysis of existing technologies for preparing foundation beds of oil and gas buildings and structures has revealed a lack of well-founded recommendations on the selection of rational technical and technological parameters of compaction. To study the dynamics of fast processes during compaction of foundation beds of oil and gas facilities, a specialized software and hardware system was developed. The method for calculating the basic technical parameters of the equipment for recording fast processes is presented, as well as the algorithm for processing the experimental data. Preliminary studies confirmed the soundness of the design decisions and of the calculations performed.

  17. Optimization of Coolant Technique Conditions for Machining A319 Aluminium Alloy Using Response Surface Method (RSM)

    NASA Astrophysics Data System (ADS)

    Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.

    2018-03-01

    Background/Objectives: The paper discusses the optimum cutting parameters under different coolant-technique conditions (1.0 mm nozzle orifice, wet, and dry) to optimize surface roughness, temperature, and tool wear in the machining process, based on the selected setting parameters. The cutting parameters selected for this study were cutting speed, feed rate, depth of cut, and coolant-technique condition. Methods/Statistical Analysis: Experiments were conducted and investigated based on a Design of Experiments (DOE) with the Response Surface Method. The research on the aggressive machining of aluminium alloy A319 for automotive applications is an effort to understand the machining concept, which is widely used in a variety of manufacturing industries, especially the automotive industry. Findings: The results show that surface roughness, temperature, and tool wear increase during machining when using the 1.0 mm nozzle orifice, and that built-up edge on the A319 can be minimized. The exploration of surface roughness, productivity, and the optimization of cutting speed in the technical and commercial aspects of manufacturing A319 automotive components is discussed as further work. Applications/Improvements: The results are also beneficial in minimizing costs and improving the productivity of manufacturing firms. According to the mathematical model and equations generated by the CCD-based RSM, experiments were performed, and a coolant-condition technique using the selected nozzle size that reduces tool wear, surface roughness, and temperature was obtained. The results have been analyzed and optimization carried out for selecting cutting parameters, showing that the effectiveness and efficiency of the system can be identified, which helps to solve potential problems.

  18. Influence of ionospheric disturbances onto long-baseline relative positioning in kinematic mode

    NASA Astrophysics Data System (ADS)

    Wezka, Kinga; Herrera, Ivan; Cokrlic, Marija; Galas, Roman

    2013-04-01

    Ionospheric disturbances are fast, random variations in the ionosphere, and they are difficult to detect and model. Strong disturbances can, among other effects, interrupt the GNSS signal or even lead to loss of signal lock. These phenomena are especially harmful for kinematic real-time applications, where system availability is one of the most important parameters influencing positioning reliability. Our investigations used long time series of GNSS observations gathered at high latitude, where ionospheric disturbances occur more frequently. A selected processing strategy was used to monitor ionospheric signatures in the time series of the coordinates. The quality of the input data and of the processing results was examined and described by a set of proposed parameters. Variations in the coordinates were compared with available information about the state of the ionosphere derived from the Neustrelitz TEC Model (NTCM) and with the time series of raw observations. Some selected parameters were also calculated with the "iono-tools" module of the TUB-NavSolutions software, developed by the Precise Navigation and Positioning Group at Technische Universitaet Berlin. The paper presents the first results of an evaluation of the robustness of positioning algorithms with respect to ionospheric anomalies, using the NTCM model and our calculated ionospheric parameters.

  19. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties

    PubMed Central

    Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties in the field of SLM metals, owing to tungsten's intrinsic properties. The key factors for SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and optimized experimentally. Pure tungsten products with a density of 19.01 g/cm³ (98.50% of theoretical density) were produced by SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behavior are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which ensures good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those of tungsten produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing. PMID:29707073

  20. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties.

    PubMed

    Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties in the field of SLM metals, owing to tungsten's intrinsic properties. The key factors for SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and optimized experimentally. Pure tungsten products with a density of 19.01 g/cm³ (98.50% of theoretical density) were produced by SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behavior are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which ensures good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those of tungsten produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing.

  1. Polishing tool and the resulting TIF for three variable machine parameters as input for the removal simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Robert; Haberl, Alexander; Rascher, Rolf

    2017-06-01

    The trend in the optics industry shows that it is increasingly important to be able to manufacture complex lens geometries with a high level of precision. Above a certain required shape accuracy of optical workpieces, processing changes from two-dimensional to point-shaped processing. It is very important that the process be as stable as possible during point-shaped processing. To ensure stability, usually only one process parameter is varied during processing; commonly this parameter is the feed rate, which corresponds to the dwell time. In the research project ArenA-FOi (Application-oriented analysis of resource-saving and energy-efficient design of industrial facilities for the optical industry), a contact-based point process is used, and it is examined closely whether varying several process parameters during processing is meaningful. The commercially available ADAPT tool in size R20 from Satisloh AG is used. The behavior of the tool is tested under constant conditions in the MCP 250 CNC by OptoTech GmbH. A series of experiments is intended to enable the TIF (tool influence function) to be determined using three variable parameters. Furthermore, the maximum error frequency that can be processed is calculated as an example for one parameter set and serves as an outlook for further investigations. The test results serve as the basis for the later removal simulation, which must be able to handle a variable TIF. This topic has already been successfully implemented in another research project of the Institute for Precision Manufacturing and High-Frequency Technology (IPH), so that algorithm can be reused. The next step is the useful implementation of the collected knowledge: the TIF must be selected on the basis of the measured data, and it is important to know the error frequencies in order to select the optimal TIF.
Thus, it is possible to compare the simulated results with real measurement data and to carry out a revision. From this point onwards, the potential of this approach can be evaluated and, in the ideal case, it will be researched further and later adopted in production.

  2. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and therefore uses all of the information contained in an observed photon trajectory. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low-excitation regime, where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision, and those for a three-state model with about 20% precision.
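
    The BIC comparison described above follows BIC = k · ln(n) - 2 · ln(L), with k free parameters and n observed photons, and the model with the lowest BIC is selected. The log-likelihood values below and the assumed MMPP parameter counts (m intensities plus m(m-1) transition rates for an m-state model) are hypothetical, for illustration only.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k * ln(n) - 2 * ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical maximized log-likelihoods and parameter counts for 2-, 3-,
# and 4-state MMPP fits to a trajectory of 2000 photons
candidates = {2: (-5210.0, 4), 3: (-5180.0, 9), 4: (-5176.0, 16)}
bics = {m: bic(ll, k, 2000) for m, (ll, k) in candidates.items()}
best = min(bics, key=bics.get)
```

With these numbers the four-state fit has the highest likelihood, but its extra parameters are penalized by the k·ln(n) term, so the three-state model is selected, mirroring the behavior the abstract reports.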

  3. Ballistic projectile trajectory determining system

    DOEpatents

    Karr, T.J.

    1997-05-20

    A computer controlled system determines the three-dimensional trajectory of a ballistic projectile. To initialize the system, predictions of state parameters for a ballistic projectile are received at an estimator. The estimator uses the predictions of the state parameters to estimate first trajectory characteristics of the ballistic projectile. A single stationary monocular sensor then observes the actual first trajectory characteristics of the ballistic projectile. A comparator generates an error value related to the predicted state parameters by comparing the estimated first trajectory characteristics of the ballistic projectile with the observed first trajectory characteristics. If the error value is equal to or greater than a selected limit, the predictions of the state parameters are adjusted. New estimates for the trajectory characteristics of the ballistic projectile are made and are then compared with the actual observed trajectory characteristics. This process is repeated until the error value is less than the selected limit. Once the error value is less than the selected limit, a calculator calculates trajectory characteristics such as the origin and destination of the ballistic projectile. 8 figs.
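
    The estimator/comparator loop of the patent (predict, observe, compare, and adjust until the error falls below the selected limit) can be sketched for a single state parameter. The vacuum-range flight model and the bisection update are simplifying assumptions, not the patent's actual estimator.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def predicted_range(v0, angle_deg):
    """Ground-to-ground ballistic range in vacuum (simplified flight model)."""
    a = math.radians(angle_deg)
    return v0 * v0 * math.sin(2 * a) / G

def estimate_speed(observed_range, angle_deg, v_lo=1.0, v_hi=2000.0, limit=1e-6):
    """Adjust the speed estimate until the prediction error is below the
    selected limit; range grows monotonically with v0, so bisection suffices."""
    while v_hi - v_lo > limit:
        mid = 0.5 * (v_lo + v_hi)
        if predicted_range(mid, angle_deg) < observed_range:
            v_lo = mid  # prediction falls short: raise the speed estimate
        else:
            v_hi = mid  # prediction overshoots: lower the speed estimate
    return 0.5 * (v_lo + v_hi)

# Recover a launch speed of 300 m/s at 40 degrees from its own predicted range
v_est = estimate_speed(predicted_range(300.0, 40.0), 40.0)
```

Once the error is below the limit, the converged state (here, launch speed) can be propagated backward and forward to compute origin and destination, as the patent's calculator does.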

  4. PERSEUS QC: preparing statistic data sets

    NASA Astrophysics Data System (ADS)

    Belokopytov, Vladimir; Khaliulin, Alexey; Ingerov, Andrey; Zhuk, Elena; Gertman, Isaac; Zodiatis, George; Nikolaidis, Marios; Nikolaidis, Andreas; Stylianou, Stavros

    2017-09-01

    The Desktop Oceanographic Data Processing Module was developed for visual analysis of interdisciplinary cruise measurements. The program provides the possibility of data selection based on different criteria, map plotting, sea horizontal sections, and sea depth vertical profiles. The data selection in the area of interest can be specified according to a set of different physical and chemical parameters, complemented by additional parameters such as the cruise number, ship name, and time period. The visual analysis of a set of vertical profiles in the selected area makes it possible to determine the quality of the data and the location and time of the in-situ measurements, and to exclude any questionable data from the statistical analysis. For each selected set of profiles, the average vertical profile, the minimal and maximal values of the parameter under examination and the root mean square (r.m.s.) are estimated. These estimates are compared with the parameter ranges set for each sub-region by the MEDAR/MEDATLAS-II and SeaDataNet2 projects. In the framework of the PERSEUS project, certain parameters which lacked a range were calculated from scratch, while some of the previously used ranges were re-defined using more comprehensive data sets based on the SeaDataNet2, SESAME and PERSEUS projects. In some cases we have used additional sub-regions to redefine the ranges more precisely. The recalculated ranges are used to improve the PERSEUS Data Quality Control.
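    The range-check step of such a quality-control workflow can be sketched as follows; the parameter ranges and profile values are illustrative stand-ins for the MEDAR/MEDATLAS-II / SeaDataNet2 tables:

```python
import math

# Illustrative sub-region ranges (parameter -> (min, max)); the real
# ranges come from the MEDAR/MEDATLAS-II and SeaDataNet2 tables.
RANGES = {"temperature": (5.0, 30.0), "salinity": (30.0, 41.0)}

def profile_stats(values):
    # Average, extremes and r.m.s. deviation of a vertical profile.
    mean = sum(values) / len(values)
    rms = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return {"mean": mean, "min": min(values), "max": max(values), "rms": rms}

def qc_flag(parameter, values):
    # Flag a profile as questionable if any value falls outside the
    # sub-region range for that parameter.
    lo, hi = RANGES[parameter]
    return "good" if all(lo <= v <= hi for v in values) else "questionable"

temps = [18.2, 17.9, 16.5, 14.1, 12.8]
stats = profile_stats(temps)
flag = qc_flag("temperature", temps)
```

    Profiles flagged "questionable" would be excluded from the statistical analysis before the average, extreme and r.m.s. estimates are compared against the sub-region ranges.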

  5. Application of the Taguchi analytical method for optimization of effective parameters of the chemical vapor deposition process controlling the production of nanotubes/nanobeads.

    PubMed

    Sharon, Maheshwar; Apte, P R; Purandare, S C; Zacharia, Renju

    2005-02-01

    Seven variable parameters of the chemical vapor deposition system have been optimized with the help of the Taguchi analytical method for obtaining a desired product, e.g., carbon nanotubes or carbon nanobeads. Almost all of the selected parameters influence the growth of carbon nanotubes; among them, the nature of the precursor (racemic, R or technical grade camphor) and the carrier gas (hydrogen, argon, or an argon/hydrogen mixture) seem to be the more important parameters affecting growth. For the growth of nanobeads, by contrast, only two of the seven parameters, the catalyst (powder of iron, cobalt, or nickel) and the temperature (1023 K, 1123 K, or 1273 K), are the most influential. Systematic defects or islands on the substrate surface enhance nucleation of novel carbon materials. Quantitative contributions of process parameters as well as optimum factor levels are obtained by performing analysis of variance (ANOVA) and analysis of mean (ANOM), respectively.
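    The ANOM step reduces to averaging the response at each level of each factor and picking the best level. A minimal sketch with a made-up design; the factor levels are taken from the abstract, but the yields are invented for illustration:

```python
# Analysis of mean (ANOM) sketch: average the response at each level of
# each factor in a designed experiment, then pick the level with the
# best (here: highest) mean response.
runs = [
    # (catalyst, temperature_K, yield_percent) -- illustrative data
    ("Fe", 1023, 62.0), ("Fe", 1123, 71.0), ("Fe", 1273, 55.0),
    ("Co", 1023, 58.0), ("Co", 1123, 80.0), ("Co", 1273, 60.0),
    ("Ni", 1023, 50.0), ("Ni", 1123, 75.0), ("Ni", 1273, 52.0),
]

def level_means(runs, factor_index):
    sums, counts = {}, {}
    for run in runs:
        level, y = run[factor_index], run[-1]
        sums[level] = sums.get(level, 0.0) + y
        counts[level] = counts.get(level, 0) + 1
    return {level: sums[level] / counts[level] for level in sums}

cat_means = level_means(runs, 0)
temp_means = level_means(runs, 1)
best_catalyst = max(cat_means, key=cat_means.get)
best_temp = max(temp_means, key=temp_means.get)
```

    ANOVA then apportions the response variance among the factors to quantify each parameter's contribution.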

  6. Selection of process parameters for producing high quality defatted sesame flour at pilot scale.

    PubMed

    Manikantan, M R; Sharma, Rajiv; Yadav, D N; Gupta, R K

    2015-03-01

    The present work was undertaken to study the effect of pearling duration, soaking time, steaming duration and drying temperature on the quality of sesame seeds and mechanically extracted partially defatted sesame cake. On the basis of quality attributes i.e. high protein, low crude fibre, low residual oil and low oxalic acid, the optimum process parameters were selected. The combination of 20 min of pearling duration, 15 min of soaking, 15 min of steaming at 100 kPa pressure and drying at 50 °C yielded high quality partially defatted protein rich sesame flour as compared to untreated defatted sesame flour. The developed high quality partially defatted protein rich sesame flour may be used in various food applications as a vital ingredient to increase the nutritional significance of the prepared foodstuffs.

  7. Development of a fuzzy logic expert system for pile selection. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulshafer, M.L.

    1989-01-01

    This thesis documents the development of a prototype expert system for pile selection for use on microcomputers. It concerns the initial selection of a pile foundation, taking into account parameters such as soil condition, pile length, loading scenario, material availability, contractor experience, and noise or vibration constraints. The prototype expert system, called Pile Selection, version 1 (PS1), was developed using the expert system shell FLOPS, a shell based on the AI language OPS5 with many unique features, all of which PS1 utilizes. Among the features used are approximate reasoning with fuzzy set theory, the blackboard architecture, and the emulated parallel processing of fuzzy production rules. A comprehensive review of the parameters used in selecting a pile was made, and the effects of the uncertainties associated with the vagueness of these parameters were examined in detail. Fuzzy set theory was utilized to deal with such uncertainties and provides the basis for developing a method for determining the best possible choice of piles for a given situation. Details of the development of PS1, including documenting and collating pile information for use in the expert knowledge data bases, are discussed.

  8. Iron-Based Amorphous Coatings Produced by HVOF Thermal Spray Processing-Coating Structure and Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beardsley, M B

    2008-03-26

    The feasibility of coating large SNF/HLW containers with a structurally amorphous material (SAM) was demonstrated on sub-scale models fabricated from Type 316L stainless steel. The sub-scale models were coated with SAM 1651 material using a kerosene-fueled high velocity oxygen fuel (HVOF) torch to thicknesses ranging from 1 mm to 2 mm. Process parameters such as standoff distance, oxygen flow, and kerosene flow were optimized in order to improve the corrosion properties of the coatings. Testing in an electrochemical cell and long-term exposure to a salt spray environment were used to guide the selection of process parameters.

  9. Vapor hydrogen peroxide as alternative to dry heat microbial reduction

    NASA Astrophysics Data System (ADS)

    Chung, S.; Kern, R.; Koukol, R.; Barengoltz, J.; Cash, H.

    2008-09-01

    The Jet Propulsion Laboratory (JPL), in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA approved sterilization technique for spacecraft subsystems and systems. The goal was to include this technique, with an appropriate specification, in NASA Procedural Requirements 8020.12 as a low-temperature complementary technique to the dry heat sterilization process. The VHP process is widely used by the medical industry to sterilize surgical instruments and biomedical devices, but high doses of VHP may degrade the performance of flight hardware or compromise material compatibility. The goal of this study was to determine the minimum VHP process conditions for planetary protection acceptable microbial reduction levels. Experiments were conducted by the STERIS Corporation, under contract to JPL, to evaluate the effectiveness of vapor hydrogen peroxide for the inactivation of the standard spore challenge, Geobacillus stearothermophilus. VHP process parameters were determined that provide significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. In addition to the obvious process parameters of interest (hydrogen peroxide concentration, number of injection cycles, and exposure duration), the investigation also considered the possible effect on lethality of environmental parameters: temperature, absolute humidity, and material substrate. This study delineated a range of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst case D-value may be imposed, a process humidity range for which the worst case D-value may be imposed, and the dependence on selected spacecraft material substrates. The derivation of D-values from the lethality data permitted conservative planetary protection recommendations.

  10. Analysis of the shrinkage at the thick plate part using response surface methodology

    NASA Astrophysics Data System (ADS)

    Hatta, N. M.; Azlan, M. Z.; Shayfull, Z.; Roselina, S.; Nasir, S. M.

    2017-09-01

    Injection moulding is a well-known manufacturing process, especially for producing plastic products. The quality of the final product depends strongly on the parameter settings chosen at the initial stage of the process. If these parameters are set incorrectly, defects may occur; one of the best-known defects in the injection moulding process is shrinkage. To overcome this problem, the parameter settings must be optimised, and this paper focuses on analysing shrinkage at a thick plate part by optimising the parameters with the help of Response Surface Methodology (RSM) and ANOVA analysis. In a previous study, the parameter that stood out in minimising shrinkage at the moulded part was packing pressure. Therefore, with reference to the previous literature, packing pressure was selected as a parameter setting for this study, along with three other parameters: melt temperature, cooling time and mould temperature. The analysis of the process was obtained from simulation with Autodesk Moldflow Insight (AMI) software, and the material used for the moulded part was Acrylonitrile Butadiene Styrene (ABS). The results showed that shrinkage can be minimised, and the significant parameters were found to be packing pressure, mould temperature and melt temperature.
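    The RSM step amounts to fitting a second-order model to the measured responses and locating its stationary point. A one-factor sketch for packing pressure, with invented shrinkage data (not values from the study):

```python
import numpy as np

# Minimal RSM-style sketch: fit a second-order model of shrinkage (%) as
# a function of packing pressure (MPa), then take the stationary point.
pressure = np.array([60.0, 70.0, 80.0, 90.0, 100.0])
shrinkage = np.array([1.30, 1.05, 0.95, 1.00, 1.20])

a, b, c = np.polyfit(pressure, shrinkage, 2)  # shrinkage ~ a*p**2 + b*p + c
p_opt = -b / (2 * a)                          # vertex of the fitted parabola
```

    In practice all four factors (and their interactions) enter the quadratic model, and ANOVA decides which terms are significant.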

  11. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous simulation hydrologic models have a large number of parameters for potential adjustment during the calibration process. Traditional manual calibration of such modeling systems is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions, which then also evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being simultaneously calibrated. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
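    One way to read the proposal: add a term to the objective function that scores the parameter decision space against expert-preferred values. A minimal sketch; the penalty form, the weight, and all numbers are assumptions for illustration, not the authors' formulation:

```python
def nash_sutcliffe(sim, obs):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit.
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def expert_penalty(params, preferred):
    # Penalize parameter sets that drift from expert-preferred values,
    # mimicking the judgment applied during manual calibration.
    return sum(abs(p - q) / abs(q) for p, q in zip(params, preferred)) / len(params)

def objective(sim, obs, params, preferred, weight=0.1):
    # Maximize streamflow fitness while discouraging unrealistic parameters.
    return nash_sutcliffe(sim, obs) - weight * expert_penalty(params, preferred)

obs = [10.0, 12.0, 15.0, 11.0]
sim = [10.5, 11.5, 14.0, 11.5]
score = objective(sim, obs, params=[0.9, 50.0], preferred=[1.0, 40.0])
```

    An evolutionary search over `params` would then trade fitness against realism, pruning the compensatory solutions described above.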

  12. Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.

    PubMed

    Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A

    2017-04-15

    Optimization of a multivariate calibration process has been undertaken for a Visible-Near Infrared (400-1100 nm) sensor system applied in the monitoring of the fermentation process of the cider produced in the Basque Country (Spain). The main parameters monitored included alcoholic proof, l-lactic acid content, glucose+fructose and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable pre-processing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens uncertainty test, interval Partial Least Squares regression (iPLS) and the Genetic Algorithm (GA). This procedure arises from the need to improve the calibration models' prediction ability for cider monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gongzhang, R.; Xiao, B.; Lardner, T.

    2014-02-18

    This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on the parameter selection, especially when the signal to noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band, in which a defect is present when all frequency components are in uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect position. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied on austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples and, consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
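    The uniform-sign test at the heart of such sub-band processing can be sketched with a toy numpy implementation; the eight-band split with contiguous band edges is an arbitrary choice for illustration, not the published parameterization:

```python
import numpy as np

def band_signals(x, n_bands):
    # Split the spectrum into contiguous sub-bands and reconstruct a
    # time-domain signal from each band.
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1, dtype=int)
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros_like(X)
        mask[lo:hi] = X[lo:hi]
        out.append(np.fft.irfft(mask, n=len(x)))
    return np.array(out)

def coherence_profile(x, n_bands=8):
    # Keep samples where every sub-band agrees in sign (uniform polarity),
    # then average; everywhere else the output is suppressed to zero.
    bands = band_signals(x, n_bands)
    uniform = np.all(bands > 0, axis=0) | np.all(bands < 0, axis=0)
    return np.where(uniform, bands.mean(axis=0), 0.0)
```

    A broadband echo keeps the same polarity across all sub-bands at its arrival time and survives the test, while narrowband grain clutter changes sign between bands and is suppressed.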

  14. The predictive consequences of parameterization

    NASA Astrophysics Data System (ADS)

    White, J.; Hughes, J. D.; Doherty, J. E.

    2013-12-01

    In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including the use of piecewise constant zones, the use of pilot points with Tikhonov regularization, and the use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.

  15. Advantages and disadvantages of an objective selection process for early intervention in employees at risk for sickness absence

    PubMed Central

    Duijts, Saskia FA; Kant, IJmert; Swaen, Gerard MH

    2007-01-01

    Background It is unclear if objective selection of employees, for an intervention to prevent sickness absence, is more effective than subjective 'personal enlistment'. We hypothesize that objectively selected employees are 'at risk' for sickness absence and eligible to participate in the intervention program. Methods The dispatch of 8603 screening instruments forms the starting point of the objective selection process. Different stages of this process, throughout which employees either dropped out or were excluded, were described and compared with the subjective selection process. Characteristics of ineligible and ultimately selected employees, for a randomized trial, were described and quantified using sickness absence data. Results Overall response rate on the screening instrument was 42.0%. Response bias was found for the parameters sex and age, but not for sickness absence. Sickness absence was higher in the 'at risk' (N = 212) group (42%) compared to the 'not at risk' (N = 2503) group (25%) (OR 2.17 CI 1.63–2.89; p = 0.000). The selection process ended with the successful inclusion of 151 eligible, i.e. 2% of the approached employees in the trial. Conclusion The study shows that objective selection of employees for early intervention is effective. Despite methodological and practical problems, selected employees are actually those at risk for sickness absence, who will probably benefit more from the intervention program than others. PMID:17474980

  16. Warpage analysis on thin shell part using glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    The Autodesk Moldflow Insight (AMI) software was used in this study, which focuses on analysing the plastic injection moulding process and relating the input parameters to the output parameter. The material used to produce the moulded plastic part was Acrylonitrile Butadiene Styrene (ABS). MATLAB software was used to find the best parameter settings. The variables selected in this study were melt temperature, packing pressure, coolant temperature and cooling time.

  17. Investigation of Springback Associated with Composite Material Component Fabrication (MSFC Center Director's Discretionary Fund Final Report, Project 94-09)

    NASA Technical Reports Server (NTRS)

    Benzie, M. A.

    1998-01-01

    The objective of this research project was to examine processing and design parameters in the fabrication of composite components to obtain a better understanding of, and attempt to minimize, springback associated with composite materials. To accomplish this, both processing and design parameters were included in a Taguchi-designed experiment. Composite angled panels were fabricated by hand layup techniques, and the fabricated panels were inspected for springback effects. This experiment yielded several significant results. The confirmation experiment validated the reproducibility of the factorial effects, accounted for error, and established the experiment as reliable. The material used in the design of tooling needs to be a major consideration when fabricating composite components, as expected. The factors dealing with resin flow, however, raise several potentially serious material and design questions that must be dealt with up front in order to minimize springback: the viscosity of the resin, vacuum bagging of the part for cure, and the curing method selected. These factors directly affect design, material selection, and processing methods.

  18. Direct injection analysis of fatty and resin acids in papermaking process waters by HPLC/MS.

    PubMed

    Valto, Piia; Knuutinen, Juha; Alén, Raimo

    2011-04-01

    A novel HPLC-atmospheric pressure chemical ionization/MS (HPLC-APCI/MS) method was developed for the rapid analysis of selected fatty and resin acids typically present in papermaking process waters. A mixture of palmitic, stearic, oleic, linolenic, and dehydroabietic acids was separated by a commercial HPLC column (a modified stationary C(18) phase) using gradient elution with methanol/0.15% formic acid (pH 2.5) as a mobile phase. The internal standard (myristic acid) method was used to calculate the correlation coefficients and to quantify the results. In a thorough measurement of quality parameters, a mixture of these model acids in aqueous media as well as in six different paper machine process waters was quantitatively determined. The measured quality parameters, such as selectivity, linearity, precision, and accuracy, clearly indicated that, compared with traditional gas chromatographic techniques, the simple method developed provided a faster chromatographic analysis with almost real-time monitoring of these acids. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Design of Backpack to Aid Elderly for the Mazu Touring Procession in Taiwan

    NASA Astrophysics Data System (ADS)

    Chao, F. L.; Huang, Y. C.; Su, J. Y.; Sun, C. L.; Chen, C. C.

    2017-09-01

    The Dajia Mazu Touring Procession is a nine-day religious event held annually. For elderly participants, however, it is a heavy burden, especially on physical strength. The goal of the backpack design is to reduce the physiological stress on the elderly during the procession. Firstly, physical parameters were measured in user tests to establish the dimensional parameters. Because the height of a chair differs from that of a kneeling pad, a smooth curve coordinating the two was chosen as the main outline of the backpack. Secondly, materials were selected under the following constraints: (1) acceptable weight and size, (2) moderate price and (3) a design befitting the Dajia event. The material and structural strength of wood, bamboo and stainless steel were evaluated. Two design concepts were proposed; wood was selected for construction and user testing. Rush grass was used to cover the backpack's external surface, in keeping with local cultural features.

  20. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    NASA Astrophysics Data System (ADS)

    Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu

    2018-06-01

    A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted, and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is effective and has better accuracy in predicting the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating rate and cooling rate increase with increasing scan speed at constant laser power, and likewise with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM.

  1. Study of thermo-fluidic behavior of micro-droplet in inkjet-based micro manufacturing processes

    NASA Astrophysics Data System (ADS)

    Das, Raju; Mahapatra, Abhijit; Ball, Amit Kumar; Roy, Shibendu Shekhar; Murmu, Naresh Chandra

    2017-06-01

    Inkjet printing, a maskless, non-contact patterning operation, has been a revelation in the field of micro and nano manufacturing because it allows the selective deposition of desired materials, making it an exciting alternative to technologies such as lithography for printing functional materials onto a substrate. Selective deposition of functional materials on desired substrates is a basic requirement in many printing-based micro and nano manufacturing operations, such as the fabrication of microelectronic devices, solar cells and light-emitting diodes (LEDs), and in research fields such as drug discovery in the pharmaceutical industry and DNA microarrays in biotechnology. In this paper, an attempt has been made to design and develop an indigenous electrohydrodynamic inkjet printing system for micro fabrication and to study the interrelationships between various thermo-fluidic parameters of the ink material in the printing process. The effect of printing process parameters on printing performance characteristics has also been studied, and the applicability of the process has been demonstrated experimentally. The experimental results were satisfactory and consistent with the intended applications.

  2. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
This study is hoped to provide a new insight in developing more accurate and reliable biological models based on limited and low quality experimental data. PMID:23593445

  3. Using FEP's List and a PA Methodology for Evaluating Suitable Areas for the LLW Repository in Italy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risoluti, P.; Ciabatti, P.; Mingrone, G.

    2002-02-26

    In Italy, nuclear energy has been phased out following a referendum held in 1987. Since 1998, a general site selection process covering the whole Italian territory has been under way. A GIS (Geographic Information System) methodology was implemented in three steps using the ESRI Arc/Info and Arc/View platforms. The screening identified approximately 0.8% of the Italian territory as suitable for locating the LLW Repository. 200 areas were identified as suitable for the location of the LLW Repository, using a multiple exclusion criteria procedure applied at national (1:500,000), regional (1:100,000) and local (1:25,000-1:10,000) scales. A methodology for evaluating these areas has been developed allowing, along with the evaluation of the long term efficiency of the engineered barrier system (EBS), the characterization of the selected areas in terms of physical and safety factors and planning factors. The first step was to identify, on a referenced FEPs list, a group of geomorphological, geological, hydrogeological, climatic and human-caused processes and/or events which were considered of importance for the site evaluation, taking into account the Italian situation. A site evaluation system was established ascribing weighted scores to each of these processes and events, which were identified as parameters of the new evaluation system. The score of each parameter ranges from 1 (low suitability) to 3 (high suitability). The corresponding weight is calculated considering the effect of the parameter in terms of total dose to the critical group, using an upgraded AMBER model for PA calculation. At the end of the process, an index obtained by a score-weighted sum gives the degree of suitability of the selected areas for the LLW Repository location. The application of the methodology to two selected sites is given in the paper.
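    The score-weighted suitability index described above is a plain weighted mean of per-parameter scores in [1, 3]; the parameters, scores and weights below are illustrative, not the project's actual values:

```python
# Each FEP-derived parameter gets a score from 1 (low suitability) to
# 3 (high suitability) and a weight reflecting its effect on total dose
# to the critical group.
def suitability_index(scores, weights):
    assert set(scores) == set(weights)
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

site_a = {"erosion": 3, "seismicity": 2, "hydrogeology": 3, "land_use": 2}
site_b = {"erosion": 1, "seismicity": 2, "hydrogeology": 2, "land_use": 3}
weights = {"erosion": 0.4, "seismicity": 0.3, "hydrogeology": 0.2, "land_use": 0.1}

index_a = suitability_index(site_a, weights)
index_b = suitability_index(site_b, weights)
```

    Sites can then be ranked by index, with the weights anchored to dose calculations from the performance-assessment model.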

  4. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the four PFT-dependent parameters. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity.
This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
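The quasi-Monte Carlo sampling step described above can be sketched as follows. This is a minimal, stdlib-only illustration that uses a Halton sequence as a stand-in for the Sobol sequence used in the study; the parameter names and prior ranges are hypothetical, not taken from the paper.

```python
def radical_inverse(index, base):
    """Van der Corput radical inverse of `index` in the given base, in [0, 1)."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def halton_samples(n, bounds, bases=(2, 3, 5, 7)):
    """Low-discrepancy samples scaled to per-parameter (low, high) bounds."""
    samples = []
    for i in range(1, n + 1):
        unit = [radical_inverse(i, b) for b in bases[:len(bounds)]]
        samples.append([lo + u * (hi - lo)
                        for u, (lo, hi) in zip(unit, bounds)])
    return samples

# Hypothetical uniform prior ranges for four PFT-dependent parameters
bounds = [(2.0, 9.0),     # slope of conductance-photosynthesis relationship
          (0.01, 0.06),   # specific leaf area at canopy top
          (20.0, 60.0),   # leaf C:N ratio
          (0.05, 0.25)]   # fraction of leaf N in RuBisCO
ensemble = halton_samples(1024, bounds)
```

Each row of `ensemble` would parameterize one member of the 1024-member simulation ensemble.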

  5. Method for Predicting and Optimizing System Parameters for Electrospinning System

    NASA Technical Reports Server (NTRS)

    Wincheski, Russell A. (Inventor)

    2011-01-01

    An electrospinning system using a spinneret and a counter electrode is first operated for a fixed amount of time at known system and operational parameters to generate a fiber mat having a measured fiber mat width associated therewith. Next, acceleration of the fiberizable material at the spinneret is modeled to determine values of mass, drag, and surface tension associated with the fiberizable material at the spinneret output. The model is then applied in an inversion process to generate predicted values of an electric charge at the spinneret output and an electric field between the spinneret and electrode required to fabricate a selected fiber mat design. The electric charge and electric field are indicative of design values for system and operational parameters needed to fabricate the selected fiber mat design.

  6. Evidence accumulation as a model for lexical selection.

    PubMed

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) largely resulting from initial stimulus recognition. We present a thorough case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to, or combined with, conventional psycholinguistic theories and their simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
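One simple instance of the evidence-accumulation paradigm is a race model: each lexical candidate is an independent noisy accumulator, and the first to reach threshold is selected. The sketch below is a generic illustration of that idea, not the specific model fitted in the paper; the drift rates, noise level, and threshold are arbitrary assumptions.

```python
import random

def race_trial(drifts, rng, threshold=1.0, noise=0.3, dt=0.01):
    """One trial of a race model: independent noisy accumulators, one per
    lexical candidate; returns (index of the winner, decision time)."""
    x = [0.0] * len(drifts)
    t = 0.0
    while True:
        t += dt
        for i, drift in enumerate(drifts):
            # Euler step of a drift-diffusion increment for accumulator i.
            x[i] += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
            if x[i] >= threshold:
                return i, t

rng = random.Random(42)
# Candidate 0 is the target word with the strongest activation (highest drift).
wins = sum(race_trial([1.5, 0.4, 0.4], rng)[0] == 0 for _ in range(200))
```

With a clearly dominant drift rate, the target candidate wins the large majority of trials, while the distribution of decision times mimics response-time data.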

  7. Partial Transient Liquid-Phase Bonding, Part II: A Filtering Routine for Determining All Possible Interlayer Combinations

    NASA Astrophysics Data System (ADS)

    Cook, Grant O.; Sorensen, Carl D.

    2013-12-01

Partial transient liquid-phase (PTLP) bonding is currently an esoteric joining process with limited applications. However, it has clear advantages over typical joining techniques and is the best joining technique for certain applications. Specifically, it can bond hard-to-join materials as well as dissimilar material types, and bonding is performed at comparatively low temperatures. Part of the difficulty in applying PTLP bonding is finding suitable interlayer combinations (ICs). A novel interlayer selection procedure has been developed to facilitate the identification of ICs that will create successful PTLP bonds and is explained in a companion article. An integral part of the selection procedure is a filtering routine that identifies all possible ICs for a given application. This routine utilizes a set of customizable parameters that are based on key characteristics of PTLP bonding. These parameters include important design considerations such as bonding temperature, target remelting temperature, bond solid type, and interlayer thicknesses. The output from this routine provides a detailed view of each candidate IC along with a broad view of the entire candidate set, greatly facilitating the selection of ideal ICs. This routine provides a new perspective on the PTLP bonding process. In addition, the use of this routine, by way of the accompanying selection procedure, will expand PTLP bonding as a viable joining process.

  8. Quality Control Analysis of Selected Aspects of Programs Administered by the Bureau of Student Financial Assistance. Task 1 and Quality Control Sample; Error-Prone Modeling Analysis Plan.

    ERIC Educational Resources Information Center

    Saavedra, Pedro; And Others

    Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…

  9. Part weight verification between simulation and experiment of plastic part in injection moulding process

    NASA Astrophysics Data System (ADS)

    Amran, M. A. M.; Idayu, N.; Faizal, K. M.; Sanusi, M.; Izamshah, R.; Shahir, M.

    2016-11-01

In this study, the main objective is to determine the percentage difference in part weight between experimental and simulation work. The effect of process parameters on the weight of the plastic part is also investigated. The process parameters involved were mould temperature, melt temperature, injection time and cooling time. Autodesk Simulation Moldflow software was used to run the simulation of the plastic part. The Taguchi method was selected as the Design of Experiments approach. Then, the simulation result was validated against the experimental result. It was found that the minimum and maximum percentage differences in part weight between simulation and experimental work are 0.35 % and 1.43 % respectively. In addition, the most significant parameter affecting part weight is the mould temperature, followed by melt temperature, injection time and cooling time.

  10. Commercialization of the Conversion of Bagasse to Ethanol. Summary quarterly report for the period January-September 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2000-02-01

These studies were intended to further refine parameters which affect sugar yield, such as feedstock particle size, debris, acid soak time, temperature, dewatering, and pretreatment conditions (such as temperature, reaction time, percentage solids concentration, and acid concentration), liquid-solids separation, and detoxification parameters (such as time, temperature, and mixing of detoxification ingredients); to validate and refine parameters which affect ethanol yield, such as the detoxification conditions mentioned above and fermenter conditions such as temperature, pH adjustment, aeration, nutrients, and charging sequence (materials of construction will be evaluated also); to evaluate stillage to determine the clarification process and suitability for recycle; to evaluate lignocellulosic cake for thermal energy recovery to produce heat and electricity for the process; and to support studies at UF on toxin amelioration and fermentation. TVA work will provide pre-hydrolysates for the evaluation of BCI proprietary methods of toxin amelioration. Pre-hydrolysates from batch studies will allow the determination of the range of allowable hydrolysis conditions that can be used to produce a fermentable sugar stream. This information is essential to guide selection of process parameters for refinement and validation in the continuous pretreatment reactor, and for overall process design. Additional work will be conducted at UFRFI to develop improved strains that are resistant to inhibitors. The authors are quite optimistic about the long-term prospects for this advancement, having recently developed strains with a 25%-50% increase in ethanol production. The biocatalyst platform selected originally, genetically engineered Escherichia coli B, has proven to be quite robust and adaptable.

  11. Stabilometric parameters are affected by anthropometry and foot placement.

    PubMed

    Chiari, Lorenzo; Rocchi, Laura; Cappello, Angelo

    2002-01-01

To recognize and quantify the influence of biomechanical factors, namely anthropometry and foot placement, on the more common measures of stabilometric performance, including new-generation stochastic parameters. Fifty normal-bodied young adults were selected in order to cover a sufficiently wide range of anthropometric properties. They were allowed to choose their preferred side-by-side foot position, and their quiet stance was recorded with eyes open and closed by a force platform. Biomechanical factors are known to influence postural stability, but their impact on stabilometric parameters has not yet been extensively explored. Principal component analysis was used for feature selection among several biomechanical factors. A collection of 55 stabilometric parameters from the literature was estimated from the center-of-pressure time series. Linear relations between stabilometric parameters and selected biomechanical factors were investigated by robust regression techniques. The feature selection process returned height, weight, maximum foot width, base-of-support area, and foot opening angle as the relevant biomechanical variables. Only eleven out of the 55 stabilometric parameters were completely immune from a linear dependence on these variables. The remaining parameters showed a moderate to high dependence that was strengthened upon eye closure. For these parameters, a normalization procedure was proposed to remove what can well be considered, in clinical investigations, a spurious source of between-subject variability. Care should be taken when quantifying postural sway through stabilometric parameters. It is suggested as good practice to include some anthropometric measurements in the experimental protocol, and to standardize or trace foot position.
Although the role of anthropometry and foot placement has been investigated in specific studies, no studies in the literature systematically explore the relationship between such biomechanical factors and stabilometric parameters. This knowledge may contribute to better defining the experimental protocol and improving the functional evaluation of postural sway for clinical purposes, e.g. by removing through normalization the spurious effects of body properties and foot position on postural performance.

  12. Dual ant colony operational modal analysis parameter estimation method

    NASA Astrophysics Data System (ADS)

    Sitarz, Piotr; Powałka, Bartosz

    2018-01-01

Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  13. A hybrid artificial neural network as a software sensor for optimal control of a wastewater treatment process.

    PubMed

    Choi, D J; Park, H

    2001-11-01

For the control and automation of biological treatment processes, the lack of reliable on-line sensors for measuring water quality parameters is one of the most important problems to overcome. Many parameters cannot be measured directly with on-line sensors. The accuracy of existing hardware sensors is also insufficient, and maintenance problems such as electrode fouling often cause trouble. This paper deals with the development of software sensor techniques that estimate a target water quality parameter from other parameters using the correlations between water quality parameters. We focus our attention on the preprocessing of noisy data and the selection of the model best suited to the situation. Problems of existing approaches are also discussed. We propose a hybrid neural network as a software sensor for inferring wastewater quality parameters. Multivariate regression, artificial neural networks (ANN), and a hybrid technique that combines principal component analysis as a preprocessing stage are applied to data from industrial wastewater processes. The hybrid ANN technique shows enhanced prediction capability and reduces the overfitting problem of neural networks. The result shows that the hybrid ANN technique can be used to extract information from noisy data and to describe the nonlinearity of complex wastewater treatment processes.

  14. Multi Response Optimization of Process Parameters Using Grey Relational Analysis for Turning of Al-6061

    NASA Astrophysics Data System (ADS)

    Deepak, Doreswamy; Beedu, Rajendra

    2017-08-01

Al-6061 is one of the most widely used materials in the manufacturing of products. The major qualities of aluminium are reasonably good strength, corrosion resistance and thermal conductivity, which have made it a suitable material for various applications. While manufacturing these products, companies strive to reduce production cost by increasing the Material Removal Rate (MRR). Meanwhile, surface quality needs to be maintained at an acceptable value. This paper aims at striking a compromise between the high-MRR and low-surface-roughness requirements by applying Grey Relational Analysis (GRA). This article presents the selection of controllable parameters, namely longitudinal feed, cutting speed and depth of cut, to arrive at optimum values of MRR and surface roughness (Ra). The process parameters for experiments were selected based on Taguchi's L9 array with two replications. Grey relational analysis, being well suited to multi-response optimization, is adopted for the optimization. The result shows that feed rate is the most significant factor influencing MRR and surface finish.
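The core of grey relational analysis can be sketched in a few lines: normalize each response (larger-the-better for MRR, smaller-the-better for Ra), compute the grey relational coefficient of each run against the ideal, and average the coefficients into a grade. The run data below are illustrative only, not the paper's measurements.

```python
def grey_relational_grade(responses, larger_better, zeta=0.5):
    """Grey relational grade for each experimental run.
    `responses` holds one row per run (e.g. [MRR, Ra]); `larger_better` flags
    which responses should be maximized. Assumes each response varies across
    runs (otherwise the normalization divides by zero)."""
    cols = list(zip(*responses))
    norm = []
    for col, lb in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) if lb else (hi - v) / (hi - lo)
                     for v in col])
    grades = []
    for run in zip(*norm):
        # After normalization the ideal value is 1.0, so delta_min = 0 and
        # delta_max = 1; the coefficient is zeta / (delta + zeta).
        coeffs = [zeta / ((1.0 - x) + zeta) for x in run]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Illustrative (not the paper's) runs: [MRR, Ra] per run
runs = [[10.0, 2.0], [20.0, 1.0], [15.0, 1.5]]
grades = grey_relational_grade(runs, larger_better=[True, False])
```

The run with the highest grade is the best compromise across both responses; here run 2 dominates on both MRR and Ra, so its grade is 1.0.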

  15. Evolutionary algorithm for vehicle driving cycle generation.

    PubMed

    Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott

    2011-09-01

    Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
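The roulette-wheel selection with elitist strategy mentioned above can be sketched as follows. The microtrip encoding and the fitness function (inverse error against a target average speed) are hypothetical simplifications; the actual algorithm also applies the customized mutation, crossover, and karyotype-alteration operators described in the paper.

```python
import random

def roulette_select(population, fitnesses, rng):
    """Roulette wheel: pick one individual with probability proportional to fitness."""
    pick = rng.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if pick <= acc:
            return individual
    return population[-1]  # guard against floating-point round-off

def next_generation(population, fitness_fn, rng, elite=1):
    """Elitist strategy: the best individuals survive unchanged; the rest of
    the generation is drawn by roulette-wheel selection (mutation/crossover
    of the selected parents would be applied here)."""
    fits = [fitness_fn(p) for p in population]
    ranked = sorted(population, key=fitness_fn, reverse=True)
    new = ranked[:elite]
    while len(new) < len(population):
        new.append(roulette_select(population, fits, rng))
    return new

# Hypothetical encoding: an individual is a tuple of microtrip indices;
# fitness penalizes error between the cycle's average speed and a target.
def fitness(cycle, trip_speeds=(12.0, 25.0, 40.0), target=24.0):
    avg = sum(trip_speeds[i] for i in cycle) / len(cycle)
    return 1.0 / (1.0 + abs(avg - target))

rng = random.Random(0)
population = [(0, 0), (0, 1), (1, 2), (2, 2)]
new_pop = next_generation(population, fitness, rng, elite=1)
```

In the full algorithm the fitness would combine errors on all target cycle parameters (average speed, stops per mile, kinetic intensity, and so on).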

  16. Experimental design of a twin-column countercurrent gradient purification process.

    PubMed

    Steinebach, Fabian; Ulmer, Nicole; Decker, Lara; Aumann, Lars; Morbidelli, Massimo

    2017-04-07

As is typical for separation processes, single-unit batch chromatography exhibits a trade-off between purity and yield. The twin-column MCSGP (multi-column countercurrent solvent gradient purification) process allows alleviating such trade-offs, particularly in the case of difficult separations. In this work an efficient and reliable procedure for the design of the twin-column MCSGP process is developed. It is based on a single batch chromatogram, which is selected as the design chromatogram. The derived MCSGP operation is not intended to provide optimal performance, but it provides the target product of the selected fraction of the batch chromatogram at a higher yield. The design procedure is illustrated for the isolation of the main charge isoform of a monoclonal antibody from Protein A eluate with ion-exchange chromatography. The main charge isoform was obtained at a purity and yield larger than 90%. At the same time, process-related impurities such as HCP and leached Protein A, as well as aggregates, were at least equally well removed. Additionally, the impact of several design parameters on the process performance in terms of purity, yield, productivity and buffer consumption is discussed. The obtained results can be used for further fine-tuning of the process parameters so as to improve performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
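To make the "periodic parameters" idea concrete, the sketch below simulates the simplest PARMA member, a periodic AR(1), where the autoregressive coefficient and innovation scale cycle with the season. The monthly coefficient values are invented for illustration, not estimated from any streamflow record.

```python
import random

def simulate_par1(phi, sigma, n, seed=0):
    """Simulate a periodic AR(1): x[t] = phi[t % P] * x[t-1] + noise, where
    the coefficient and innovation scale cycle with period P = len(phi)
    (e.g. P = 12 for monthly streamflow)."""
    rng = random.Random(seed)
    period = len(phi)
    x = [rng.gauss(0.0, sigma[0])]
    for t in range(1, n):
        s = t % period
        x.append(phi[s] * x[-1] + rng.gauss(0.0, sigma[s]))
    return x

# Hypothetical monthly coefficients: stronger persistence in winter months.
phi = [0.8, 0.8, 0.7, 0.5, 0.4, 0.3, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7]
sigma = [1.0] * 12
series = simulate_par1(phi, sigma, n=240)
```

A full PARMA model would add periodic moving-average terms, and the paper's Fourier-expansion idea would replace the 12 free coefficients with a few harmonics.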

  18. Evolution with Stochastic Fitness and Stochastic Migration

    PubMed Central

    Rice, Sean H.; Papadopoulos, Anthony

    2009-01-01

Background Migration between local populations plays an important role in evolution, influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased, overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways; one result is that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values.
The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580

  19. Determination of material distribution in heading process of small bimetallic bar

    NASA Astrophysics Data System (ADS)

    Presz, Wojciech; Cacko, Robert

    2018-05-01

Electrical connectors mostly have silver contacts joined by riveting. In order to reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. There is a wide range of commercially available bimetallic (silver-copper) rivets on the market for the production of contacts. New conditions are therefore created in the riveting process because a bimetallic object is riveted. The analyzed example is a small object, which can be placed at the border of microforming. Based on FEM modeling of the loading process of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice was justified. Possible material distributions were parameterized with two parameters referring to desirable distribution characteristics. A parameter, the Coefficient of Mutual Interaction of Plastic Deformations, and the method of its determination are proposed. The parameter is determined based on two-parameter stress-strain curves and is a function of these parameters and the range of equivalent strains occurring in the analyzed process. The proposed method was applied to the upsetting process of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the head of the rivet and to support the appropriate selection of a pair of materials to achieve the desired distribution.

  20. Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters

    NASA Astrophysics Data System (ADS)

    Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.

    2018-06-01

Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role in the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of ductile iron and the thermal cycle) on key features of the heat treatment (such as the minimum required time for austenitization and austempering and the microstructure of the final product). A computational simulation of the austempering heat treatment is reported in this work, which accounts for a coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at sensitivity indices and scatter plots. The sensitivity indices are determined using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help select values of the appropriate process parameters to obtain parts with a required microstructural characteristic.
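A variance-based sensitivity index of the kind mentioned above can be estimated by binning: the first-order index of an input X is Var(E[Y|X]) / Var(Y). The sketch below is a generic stand-in (the paper's exact estimator is not given in the abstract), with synthetic inputs in place of heat-treatment parameters.

```python
import random

def first_order_index(xs, ys, bins=10):
    """Variance-based first-order sensitivity index S = Var(E[Y|X]) / Var(Y),
    estimated by binning the input parameter X and comparing the variance of
    the per-bin conditional means with the total output variance."""
    n = len(ys)
    mean_y = sum(ys) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0
    groups = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        groups[min(int((x - lo) / width), bins - 1)].append(y)
    cond_var = sum(len(g) * (sum(g) / len(g) - mean_y) ** 2
                   for g in groups if g) / n
    return cond_var / var_y

rng = random.Random(3)
x1 = [rng.random() for _ in range(5000)]   # stand-in for a thermal-cycle parameter
x2 = [rng.random() for _ in range(5000)]   # stand-in for a microstructure parameter
y = [a + 0.1 * b for a, b in zip(x1, x2)]  # output depends mostly on x1
```

An index near 1 marks a dominant parameter; an index near 0 marks a parameter whose variation barely moves the output on its own.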

  1. Effects of process parameters on solid self-microemulsifying particles in a laboratory scale fluid bed.

    PubMed

    Mukherjee, Tusharmouli; Plakogiannis, Fotios M

    2012-01-01

The purpose of this study was to select the critical process parameters of the fluid bed processes impacting the quality attributes of a solid self-microemulsifying (SME) system of albendazole (ABZ). A fractional factorial design (2(4-1)) with four parameters (spray rate, inlet air temperature, inlet air flow, and atomization air pressure) was created with MINITAB software. Batches were manufactured in a laboratory top-spray fluid bed at 625-g scale. Loss on drying (LOD) samples were taken throughout each batch to build the entire moisture profiles. All dried granulations were sieved using a 20-mesh screen and analyzed for particle size distribution (PSD), morphology, density, and flow. It was found that as spray rate increased, the Sauter mean diameter (D(s)) also increased. The effect of inlet air temperature on the peak moisture, which is directly related to the mean particle size, was found to be significant. There were two-way interactions between the studied process parameters. The main effects of inlet air flow rate and atomization air pressure could not be determined, as the data were inconclusive. The partial least squares (PLS) regression model was found significant (P < 0.01) and predictive for optimization. This study established a design space for the parameters of the solid SME manufacturing process.
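A 2^(4-1) fractional factorial design like the one above halves the 16 full-factorial runs to 8 by generating the fourth factor from the other three. The sketch below uses the standard resolution-IV generator D = A*B*C; the abstract does not state which generator the study actually used.

```python
from itertools import product

def half_fraction_design():
    """2^(4-1) fractional factorial in coded -1/+1 levels for (spray rate,
    inlet air temperature, inlet air flow, atomization air pressure),
    assuming the standard resolution-IV generator D = A*B*C."""
    return [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

design = half_fraction_design()
```

Each of the 8 rows is one batch setting; the defining relation I = ABCD means main effects are aliased only with three-factor interactions, while two-factor interactions are aliased with each other, consistent with the study's observation that some main effects could not be resolved.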

  2. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant.
When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates the use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870

  3. Friction Stir Welding at MSFC: Kinematics

    NASA Technical Reports Server (NTRS)

    Nunes, A. C., Jr.

    2001-01-01

    In 1991 The Welding Institute of the United Kingdom patented the Friction Stir Welding (FSW) process. In FSW a rotating pin-tool is inserted into a weld seam and literally stirs the faying surfaces together as it moves up the seam. By April 2000 the American Welding Society International Welding and Fabricating Exposition featured several exhibits of commercial FSW processes and the 81st Annual Convention devoted a technical session to the process. The FSW process is of interest to Marshall Space Flight Center (MSFC) as a means of avoiding hot-cracking problems presented by the 2195 aluminum-lithium alloy, which is the primary constituent of the Lightweight Space Shuttle External Tank. The process has been under development at MSFC for External Tank applications since the early 1990's. Early development of the FSW process proceeded by cut-and-try empirical methods. A substantial and complex body of data resulted. A theoretical model was wanted to deal with the complexity and reduce the data to concepts serviceable for process diagnostics, optimization, parameter selection, etc. A first step in understanding the FSW process is to determine the kinematics, i.e., the flow field in the metal in the vicinity of the pin-tool. Given the kinematics, the dynamics, i.e., the forces, can be targeted. Given a completed model of the FSW process, attempts at rational design of tools and selection of process parameters can be made.

  4. The use of database management systems and artificial intelligence in automating the planning of optical navigation pictures

    NASA Technical Reports Server (NTRS)

    Davis, Robert P.; Underwood, Ian M.

    1987-01-01

    The use of database management systems (DBMS) and AI to minimize human involvement in the planning of optical navigation pictures for interplanetary space probes is discussed, with application to the Galileo mission. Parameters characterizing the desirability of candidate pictures, and the program generating them, are described. How these parameters automatically build picture records in a database, and the definition of the database structure, are then discussed. The various rules, priorities, and constraints used in selecting pictures are also described. An example is provided of an expert system, written in Prolog, for automatically performing the selection process.
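The rule-based selection described above (hard constraints plus priorities) can be sketched in a filter-then-rank form. The original system was written in Prolog; the Python sketch below is only an illustration of the pattern, and every field name, constraint, and weight is hypothetical rather than taken from the Galileo planning system.

```python
def select_pictures(candidates, max_pictures=3):
    """Rule-based selection sketch: hard constraints filter the candidate
    pictures, then a priority score ranks the survivors."""
    def admissible(pic):
        # Hard constraints (hypothetical): target in view, adequate lighting,
        # no conflict with other spacecraft activities.
        return (pic["target_visible"]
                and pic["sun_angle_deg"] > 15.0
                and not pic["conflicts_with_downlink"])

    def priority(pic):
        # Hypothetical priority: navigational value, discounted for pictures
        # whose shuttering opportunity is further away.
        return pic["nav_value"] - 0.1 * pic["days_until_opportunity"]

    survivors = [p for p in candidates if admissible(p)]
    return sorted(survivors, key=priority, reverse=True)[:max_pictures]

candidates = [
    {"id": 1, "target_visible": True, "sun_angle_deg": 30.0,
     "conflicts_with_downlink": False, "nav_value": 5.0, "days_until_opportunity": 2},
    {"id": 2, "target_visible": False, "sun_angle_deg": 40.0,
     "conflicts_with_downlink": False, "nav_value": 9.0, "days_until_opportunity": 1},
    {"id": 3, "target_visible": True, "sun_angle_deg": 20.0,
     "conflicts_with_downlink": False, "nav_value": 7.0, "days_until_opportunity": 5},
]
picked = select_pictures(candidates, max_pictures=2)
```

In the DBMS-backed system, the candidate records would come from the picture database built by the parameter-generation program, and the rules would be Prolog clauses rather than Python predicates.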

  5. Threshold parameters of the mechanisms of selective nanophotothermolysis with gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Pustovalov, Victor; Zharov, Vladimir

    2008-02-01

Photothermal effects in and around gold nanoparticles under the action of short (nano-, pico- and femtosecond) laser pulses are analyzed, with a focus on photoacoustic effects due to the thermal expansion of nanoparticles and the liquid around them, thermal protein denaturation, explosive liquid vaporization, melting and evaporation of the nanoparticle, and optical breakdown initiated by nanoparticles and accompanied by shock waves and explosion (fragmentation) of gold nanoparticles. Characteristic parameters for these processes, such as the temperature and laser intensity thresholds, are summarized to provide a basis for comparison of the different mechanisms of selective nanophotothermolysis of different targets (e.g., cancer cells, bacteria, viruses, fungi, and helminths).

  6. Landfill site selection using combination of GIS and fuzzy AHP, a case study: Iranshahr, Iran.

    PubMed

    Torabi-Kaveh, M; Babazadeh, R; Mohammadi, S D; Zaresefat, M

    2016-03-09

    One of the most important recent challenges in solid waste management throughout the world is the site selection of sanitary landfills. Because social, environmental, and technical parameters simultaneously affect the suitability of a landfill site, landfill site selection is commonly a complex process that depends on several criteria and regulations. This study develops a multi-criteria decision analysis (MCDA) process, which combines geographic information system (GIS) analysis with a fuzzy analytical hierarchy process (FAHP), to determine suitable sites for landfill construction in Iranshahr County, Iran. The GIS was used to calculate and classify the selected criteria, and FAHP was used to assess the criteria weights based on their influence on the selection of potential landfill sites. Finally, a suitability map was prepared by overlay analyses and suitable areas were identified. Four suitability classes within the study area were separated, including high, medium, low, and very low suitability areas, which represented 18%, 15%, 55%, and 12% of the study area, respectively. © The Author(s) 2016.
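
The general mechanics of AHP weighting followed by weighted overlay can be sketched compactly. This is a minimal crisp-AHP sketch (the study uses a fuzzy AHP variant); the three criteria and the pairwise judgments are illustrative assumptions, not values from the paper.

```python
import math

def ahp_weights(pairwise):
    """Approximate the AHP priority vector by normalized row geometric means."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise judgments for three criteria:
# distance-to-water, distance-to-settlements, slope.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(pairwise)

def suitability(scores, weights):
    """Weighted overlay: classified criterion scores combined linearly."""
    return sum(s * wt for s, wt in zip(scores, weights))

# One candidate map cell scored 1..4 (very low .. high) on each criterion.
cell = [4, 3, 2]
print(round(suitability(cell, w), 3))
```

In a GIS workflow the `suitability` step would be applied per raster cell; the resulting scores are then classified into the suitability bands reported in the abstract.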

  7. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used in system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
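
The minimum-entropy idea can be illustrated with a toy example: among candidate model parameters, prefer the one whose modeling-error distribution is most concentrated (lowest entropy). The data, the candidate model family, and the histogram entropy estimator below are illustrative assumptions, not the paper's method.

```python
import math

def hist_entropy(errors, bins=10):
    """Shannon entropy (nats) of errors from a simple histogram estimate."""
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for e in errors:
        counts[min(int((e - lo) / width), bins - 1)] += 1
    n = len(errors)
    return -sum(c / n * math.log(c / n) for c in counts if c)

# Toy data: y = 2*x plus a small, tightly concentrated "noise"; candidate
# slopes play the role of model parameters to be selected.
xs = [0.1 * i for i in range(50)]
noise = [0.01 if i % 2 == 0 else -0.01 for i in range(50)]
ys = [2.0 * x + n for x, n in zip(xs, noise)]

def residuals(slope):
    return [y - slope * x for x, y in zip(xs, ys)]

candidates = [1.0, 1.5, 2.0, 2.5]
best = min(candidates, key=lambda s: hist_entropy(residuals(s)))
print(best)  # 2.0 -- its residuals are the most concentrated
```

The wrong slopes leave a trend in the residuals, spreading them out (high entropy); the correct slope leaves only the concentrated noise, so entropy minimization selects it.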

  8. Development of mathematical models and optimization of the process parameters of laser surface hardened EN25 steel using elitist non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.

    2018-02-01

    The ultimate goal of all production entities is to select the process parameters that yield maximum strength and minimum wear and friction. Friction and wear are serious problems in most industries, influenced by the working set of parameters, oxidation characteristics, and the mechanism of wear formation. The experimental input parameters, namely sliding distance, applied load, and temperature, are used to find an optimized solution for the desired output responses: coefficient of friction, wear rate, and volume loss. The optimization is performed with the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.
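
The core step that gives NSGA-II its name is non-dominated sorting: partitioning candidate solutions into successive Pareto fronts. The sketch below shows that step for a two-objective minimization (think wear rate and coefficient of friction); the sample points are illustrative, not data from the study.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Partition points into successive Pareto fronts, NSGA-II style."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# (wear_rate, friction_coefficient) pairs, both to be minimized.
pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
fronts = non_dominated_fronts(pts)
print(fronts[0])  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)] -- the best trade-offs
```

In the full algorithm, these front ranks (plus a crowding-distance measure) drive elitist selection from the combined parent and offspring populations.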

  9. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.

    PubMed

    White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K

    2016-12-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.

  10. The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems

    PubMed Central

    Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.

    2016-01-01

    We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model. PMID:27923060

  11. Influence of Wire Electrical Discharge Machining (WEDM) process parameters on surface roughness

    NASA Astrophysics Data System (ADS)

    Yeakub Ali, Mohammad; Banu, Asfana; Abu Bakar, Mazilah

    2018-01-01

    In obtaining the best quality of engineering components, the surface quality of machined parts plays an important role: it improves the fatigue strength, wear resistance, and corrosion resistance of the workpiece. This paper investigates the effects of wire electrical discharge machining (WEDM) process parameters on the surface roughness of stainless steel, using distilled water as the dielectric fluid and brass wire as the tool electrode. The parameters selected are open voltage, wire speed, wire tension, voltage gap, and off time. An empirical model was developed for the estimation of surface roughness. The analysis revealed that off time has a major influence on surface roughness. The optimum machining parameters for minimum surface roughness were found to be a 10 V open voltage, 2.84 μs off time, 12 m/min wire speed, 6.3 N wire tension, and 54.91 V voltage gap.
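
Fitting an empirical roughness model amounts to least-squares regression of measured roughness on the process parameters. The sketch below uses a single predictor (off time) for clarity, with made-up data points; the paper's actual model covers all five parameters.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

off_time = [2.0, 3.0, 4.0, 5.0]       # microseconds (hypothetical runs)
roughness = [1.9, 2.4, 3.1, 3.4]      # Ra in micrometres (hypothetical)
a, b = fit_line(off_time, roughness)
print(f"Ra = {a:.2f} + {b:.2f} * off_time")  # Ra = 0.88 + 0.52 * off_time
```

A positive slope like this is what "off time has a major influence on surface roughness" looks like in the fitted empirical model; with several predictors the same normal-equations idea extends to multiple regression.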

  12. Characterizing a porous road pavement using surface impedance measurement: a guided numerical inversion procedure.

    PubMed

    Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel

    2013-12-01

    This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. The procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete, using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
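
The inversion step can be sketched as simulated annealing searching a parameter range to minimize the misfit between measured and modeled values. The forward model below is a trivial stand-in, not the Zwikker and Kosten model, and all values are illustrative assumptions.

```python
import math
import random

def forward(porosity, freq):
    """Toy forward model standing in for an equivalent-fluid impedance model."""
    return 1.0 / porosity + 0.001 * freq

freqs = [200.0, 400.0, 800.0]
true_porosity = 0.22
measured = [forward(true_porosity, f) for f in freqs]  # synthetic "measurements"

def misfit(porosity):
    return sum((forward(porosity, f) - m) ** 2 for f, m in zip(freqs, measured))

def anneal(lo, hi, steps=5000, t0=1.0, seed=0):
    """Simulated annealing over [lo, hi] with a linear cooling schedule."""
    rng = random.Random(seed)
    x = (lo + hi) / 2
    best = x
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9
        cand = min(hi, max(lo, x + rng.gauss(0, 0.05)))
        delta = misfit(cand) - misfit(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand                      # accept improvements, and some worse moves
        if misfit(x) < misfit(best):
            best = x
    return best

est = anneal(0.05, 0.5)
print(round(est, 2))  # close to the true porosity of 0.22
```

Early on, the high temperature lets the search escape local minima; as the temperature falls, acceptance becomes nearly greedy, so the estimate settles near the misfit minimum.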

  13. Study of the influence of selected anisotropic parameter in the Barlat's model on the drawpiece shape

    NASA Astrophysics Data System (ADS)

    Kaldunski, Pawel; Kukielka, Leon; Patyk, Radoslaw; Kulakowska, Agnieszka; Bohdal, Lukasz; Chodor, Jaroslaw; Kukielka, Krzysztof

    2018-05-01

    In this paper, numerical analysis and computer simulation of the deep drawing process are presented. An incremental model of the process in an updated Lagrangian formulation, accounting for geometrical and physical nonlinearity, has been evaluated by variational and finite element methods. Frederic Barlat's model, which takes into consideration the anisotropy of materials in three main and six tangent directions, has been used. The application developed in the Ansys/LS-Dyna program allows complex step-by-step analysis and prediction of the shape, dimensions, and the state of stress and strain of the drawpiece. The paper presents the influence of a selected anisotropic parameter in Barlat's model on the drawpiece shape, including its height, sheet thickness, and maximum drawing force. The important factors determining the proper forming of the drawpiece, and the ways of determining them, are described.

  14. The change of steel surface chemistry regarding oxygen partial pressure and dew point

    NASA Astrophysics Data System (ADS)

    Norden, Martin; Blumenau, Marc; Wuttke, Thiemo; Peters, Klaus-Josef

    2013-04-01

    By investigating the surface state of a Ti-IF, a TiNb-IF, and a MnCr-DP steel after several series of intercritical annealing, the impact of the annealing gas composition on the selective oxidation process is discussed. Based on the presented results, it can be concluded that the general oxygen partial pressure in the annealing furnace, which results from the equilibrium reaction of water and hydrogen, is not the main driving force for the selective oxidation process. It is shown that the amount of gases adsorbed at the strip surface, and the effective oxygen partial pressure resulting from these adsorbed gases, which is mainly dependent on the water content of the annealing furnace, drive the selective oxidation processes occurring during intercritical annealing. Thus it is concluded that, for industrial applications, the dew point must be the key parameter for process control.

  15. Manufacturing Feasibility and Forming Properties of Cu-4Sn in Selective Laser Melting.

    PubMed

    Mao, Zhongfa; Zhang, David Z; Wei, Peitang; Zhang, Kaifei

    2017-03-24

    Copper alloys, combined with selective laser melting (SLM) technology, have attracted increasing attention in the aerospace, automotive, and medical fields. However, SLM forming of copper alloys is difficult owing to their low laser absorption and excellent thermal conductivity, so it is necessary to explore a suitable copper alloy for SLM. In this research, the manufacturing feasibility and forming properties of Cu-4Sn in SLM were investigated through a systematic experimental approach. Single-track experiments were used to narrow down the processing parameter windows. A Greco-Latin square design with orthogonal parameter arrays was employed to control the forming quality of specimens. Analysis of variance was applied to establish statistical relationships describing the effects of different processing parameters (i.e., laser power, scanning speed, and hatch space) on the relative density (RD) and Vickers hardness of specimens. Cu-4Sn specimens were successfully manufactured by SLM for the first time, and both their RD and Vickers hardness were mainly determined by the laser power. The maximum RD exceeded 93% of theoretical density, and the maximum Vickers hardness reached 118 HV 0.3/5. The best tensile strength of 316-320 MPa is inferior to that of pressure-processed Cu-4Sn and can be improved further by reducing defects.

  16. Stochastic evolutionary voluntary public goods game with punishment in a Quasi-birth-and-death process.

    PubMed

    Quan, Ji; Liu, Wei; Chu, Yuqing; Wang, Xianjia

    2017-11-23

    The traditional replicator dynamics model and the corresponding concept of an evolutionarily stable strategy (ESS) only take into account whether the system can return to equilibrium after being subjected to a small disturbance. In the real world, due to continuous noise, the ESS of the system may not be stochastically stable. In this paper, a model of the voluntary public goods game with punishment is studied in a stochastic setting. Unlike existing models, we describe the evolutionary process of strategies in the population as a generalized quasi-birth-and-death process, and we investigate the stochastic stable equilibrium (SSE) instead. Through numerical experiments, we obtain all possible SSEs of the system for any combination of parameters and investigate the influence of the parameters on the probabilities of the system selecting different equilibria. It is found that in the stochastic situation, the introduction of punishment and non-participation strategies can change the evolutionary dynamics of the system and the equilibrium of the game. There is a large range of parameters over which the system selects the cooperative states as its SSE with high probability. This result provides insight into, and a control method for, the evolution of cooperation in the public goods game in stochastic situations.

  17. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  18. The nearly neutral and selection theories of molecular evolution under the fisher geometrical framework: substitution rate, population size, and complexity.

    PubMed

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A

    2012-06-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population's phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. 
When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models.

  19. The Nearly Neutral and Selection Theories of Molecular Evolution Under the Fisher Geometrical Framework: Substitution Rate, Population Size, and Complexity

    PubMed Central

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A.

    2012-01-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population’s phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. 
When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models. PMID:22426879

  20. Description, characteristics and testing of the NASA airborne radar

    NASA Technical Reports Server (NTRS)

    Jones, W. R.; Altiz, O.; Schaffner, P.; Schrader, J. H.; Blume, H. J. C.

    1991-01-01

    Presented here is a description of a coherent radar scatterometer and its associated signal processing hardware, which have been specifically designed to detect microbursts and record their radar characteristics. Radar parameters, signal processing techniques, and detection algorithms, all under computer control, combine to sense and process reflectivity, clutter, and microburst data. Also presented is the system's high-density, high-data-rate recording system. This digital system is capable of recording many minutes of the in-phase and quadrature components, and corresponding receiver gains, of the scattered returns for selected spatial regions, as well as other aircraft- and hardware-related parameters of interest for post-flight analysis. Information is given in viewgraph form.

  1. AN APPROACH TO COST EFFECTIVENESS OF A SELECTIVE MECHANIZED DOCUMENT PROCESSING SYSTEM. ARMY TECHNICAL LIBRARY IMPROVEMENT STUDIES (ATLIS), REPORT NO. 12.

    ERIC Educational Resources Information Center

    SEGARRA, CARLOS O.

    THE PURPOSE OF THE PROJECT WAS TO IDENTIFY AND DEFINE THE PARAMETERS OF AN ECONOMICAL AND PRACTICAL INFORMATION SYSTEM FOR THE U.S. ARMY ENGINEER RESEARCH AND DEVELOPMENT LABORATORIES. THE PROGRAM INCLUDED FOUR PHASES--(1) DATA REQUIREMENTS DEFINITION, (2) COST ANALYSIS AND SYSTEM DEFINITION, (3) HARDWARE SELECTION, SYSTEM TEST AND EVALUATION, AND…

  2. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
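
The consensus idea behind multi-atlas label fusion can be sketched as weighted voting per voxel, where the weights play the role of the local similarity ranking. The labels, weights, and tiny "image" below are illustrative assumptions, not MUSE's actual fusion rule.

```python
def fuse_labels(candidate_labels, weights):
    """Weighted voting across atlases, one vote tally per voxel position."""
    fused = []
    for voxel in zip(*candidate_labels):
        tally = {}
        for label, w in zip(voxel, weights):
            tally[label] = tally.get(label, 0.0) + w
        fused.append(max(tally, key=tally.get))   # label with the largest weighted vote
    return fused

# Labels proposed for four voxels by three warped atlases.
atlases = [
    ["GM", "WM", "WM", "CSF"],   # atlas 1
    ["GM", "WM", "GM", "CSF"],   # atlas 2
    ["WM", "WM", "GM", "GM"],    # atlas 3
]
weights = [0.40, 0.35, 0.25]     # e.g. local similarity to the target image
print(fuse_labels(atlases, weights))  # ['GM', 'WM', 'GM', 'CSF']
```

At the third voxel the two lower-weighted atlases together outvote the highest-weighted one, which is why an ensemble of registrations can outperform any single warping method.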

  3. Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Soni, P. K.; Krishna, C. M.

    2018-04-01

    The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine, and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR), and tool wear rate (TWR), which usually depend on the input process parameters, namely cutting speed, feed in mm/min, depth of cut, and step-over ratio. Many researchers have carried out research in this area, but very few have also taken the step-over ratio (radial depth of cut) as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 rpm. Step-over ratio, depth of cut, and feed rate are the other input variables considered. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on the high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR, and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. Selection of the input process parameters that give the best machining outputs is the key contribution of this research work.
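
A Taguchi L9 design covers four three-level factors in only nine runs, and runs are commonly ranked by a signal-to-noise ratio. The sketch below builds the standard L9 array and applies the smaller-is-better S/N ratio; the flatness measurements are hypothetical, not data from the study.

```python
import math

# Standard L9 orthogonal array: 9 runs x 4 factors, levels coded 1..3.
# Every pair of columns contains each of the 9 level combinations exactly once.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

def sn_smaller_is_better(values):
    """S/N = -10*log10(mean of squared responses); higher is better."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical flatness measurements (mm) for each of the nine runs.
flatness = [0.12, 0.09, 0.15, 0.08, 0.11, 0.10, 0.14, 0.07, 0.13]
sn = [sn_smaller_is_better([f]) for f in flatness]
best_run = max(range(9), key=lambda i: sn[i])
print(best_run + 1, L9[best_run])  # 8 [3, 2, 1, 3]
```

In a full Taguchi analysis the S/N ratios would next be averaged per factor level to pick the best level of each factor, rather than simply taking the single best run.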

  4. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. The user currently has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions, and bands. Image fusion can be a machine- and time-consuming endeavour, and it requires knowledge of remote sensing, image fusion, digital image processing, and the application. FAST shall provide the user with a quick overview of the processing flows available to reach the target. FAST will ask for the available images, application parameters, and desired information, and will process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques, and it will provide an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  5. Laser Processing of Multilayered Thermal Spray Coatings: Optimal Processing Parameters

    NASA Astrophysics Data System (ADS)

    Tewolde, Mahder; Zhang, Tao; Lee, Hwasoo; Sampath, Sanjay; Hwang, David; Longtin, Jon

    2017-12-01

    Laser processing offers an innovative approach for the fabrication and transformation of a wide range of materials. As a rapid, non-contact, and precision material removal technology, lasers are natural tools to process thermal spray coatings. Recently, a thermoelectric generator (TEG) was fabricated using thermal spray and laser processing. The TEG device represents a multilayer, multimaterial functional thermal spray structure, with laser processing serving an essential role in its fabrication. Several unique challenges are presented when processing such multilayer coatings, and the focus of this work is on the selection of laser processing parameters for optimal feature quality and device performance. A parametric study is carried out using three short-pulse lasers, where laser power, repetition rate and processing speed are varied to determine the laser parameters that result in high-quality features. The resulting laser patterns are characterized using optical and scanning electron microscopy, energy-dispersive x-ray spectroscopy, and electrical isolation tests between patterned regions. The underlying laser interaction and material removal mechanisms that affect the feature quality are discussed. Feature quality was found to improve both by using a multiscanning approach and an optional assist gas of air or nitrogen. Electrically isolated regions were also patterned in a cylindrical test specimen.

  6. Global sensitivity analysis for identifying important parameters of nitrogen nitrification and denitrification under model uncertainty and scenario uncertainty

    NASA Astrophysics Data System (ADS)

    Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong

    2018-06-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. Using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three sources of uncertainty. A combination of three soil-temperature scenarios and two soil-moisture scenarios creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among models and scenarios. The denitrification and nitrification processes are sensitive to the soil moisture status rather than to the moisture-function parameter. The nitrification process becomes more important at low moisture content and low temperature; however, how the importance of nitrification activity changes with temperature depends strongly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution while reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); and the empirical constant in the moisture response function (m) is the least important. The vertical distribution of soil moisture, but not of temperature, plays the predominant role in controlling nitrogen reactions.
    This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting important parameters when future temperature and soil moisture carry uncertainties or when modelers face multiple ways of establishing nitrogen models.
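The reduction-function structure described in this abstract can be illustrated with a short sketch. This is a hedged, generic illustration: the Q10-type temperature function and the power-law moisture function below are common textbook forms assumed for the example, and all numerical values are invented; only the parameter names Kden, Tr, Q10 and m come from the abstract.

```python
def temperature_factor(T, Tr=20.0, Q10=2.0):
    """Q10-type temperature reduction function: the rate doubles (for
    Q10 = 2) for every 10 degrees C above the reference temperature Tr."""
    return Q10 ** ((T - Tr) / 10.0)

def moisture_factor(theta, theta_sat, m=2.0):
    """Assumed power-law moisture reduction function; m is the empirical
    constant of the moisture response function named in the abstract."""
    return (theta / theta_sat) ** m

def actual_denitrification_rate(Kden, T, theta, theta_sat,
                                Tr=20.0, Q10=2.0, m=2.0):
    """Actual rate = optimal rate Kden x temperature reduction x moisture reduction."""
    return Kden * temperature_factor(T, Tr, Q10) * moisture_factor(theta, theta_sat, m)

# At the reference temperature and full saturation both factors equal 1,
# so the actual rate equals the optimal rate Kden.
rate = actual_denitrification_rate(Kden=1.0, T=20.0, theta=0.4, theta_sat=0.4)
```

The sensitivity of `rate` to Tr, Q10 and m under different temperature and moisture scenarios is exactly what the paper's global sensitivity analysis ranks.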

  7. Calibration by Hydrological Response Unit of a National Hydrologic Model to Improve Spatial Representation and Distribution of Parameters

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II

    2015-12-01

    The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default, each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values are commonly adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE) and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g., the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.

  8. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e., parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero values may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations.
    Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. It can therefore be concluded that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users in defining a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
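A minimal sketch of the dummy parameter idea, assuming a variance-based (Sobol') estimator of the classic Homma-Saltelli form; the toy model and sample size are illustrative, not those of the SWAT study. The dummy column never enters the model, so its index estimate reflects only Monte Carlo error and can serve as the screening threshold:

```python
import random

def first_order_indices_with_dummy(model, n_params, n=4096, seed=1):
    """Estimate first-order Sobol' indices, appending one 'dummy' column
    that never enters the model.  Its small but non-zero index estimates
    the numerical-approximation error and acts as the screening threshold."""
    rng = random.Random(seed)
    d = n_params + 1                              # last column is the dummy
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [model(r[:n_params]) for r in A]
    fB = [model(r[:n_params]) for r in B]
    f0sq = (sum(fA) / n) * (sum(fB) / n)          # estimate of squared mean
    mu = sum(fA) / n
    var = sum((y - mu) ** 2 for y in fA) / (n - 1)
    S = []
    for i in range(d):
        # AB_i: rows of A with column i replaced by the value from B.
        # For the dummy column the model output is unchanged, so its index
        # is a pure Monte Carlo noise estimate.
        fABi = [model((r[:i] + [B[k][i]] + r[i + 1:])[:n_params])
                for k, r in enumerate(A)]
        S.append((sum(fB[k] * fABi[k] for k in range(n)) / n - f0sq) / var)
    return S

# Toy model: x0 is strong, x1 weaker; S[-1] is the dummy's index (threshold).
model = lambda x: 5.0 * x[0] + 2.0 * x[1]
S = first_order_indices_with_dummy(model, n_params=2)
influential = [i for i in range(2) if S[i] > S[-1]]
```

Parameters whose estimated indices exceed `S[-1]` are screened in as influential; the rest fall within the numerical error band.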

  9. Multi-locus analysis of genomic time series data from experimental evolution.

    PubMed

    Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S

    2015-04-01

    Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
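The discrete-time Wright-Fisher dynamics with selection that the Gaussian process approximates can be simulated directly. The sketch below is a minimal single-locus version (the paper's method is multi-locus and accounts for linkage, which this toy version does not), and all parameter values are illustrative assumptions:

```python
import random

def wright_fisher_trajectory(p0, s, N, generations, seed=None):
    """Allele-frequency time series under the discrete Wright-Fisher model
    with selection coefficient s in a population of N diploids.
    Each generation: deterministic selection update, then binomial drift."""
    rng = random.Random(seed)
    p, traj = p0, [p0]
    for _ in range(generations):
        # Selection shifts the expected frequency of the favoured allele.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Drift: binomial sampling of 2N gametes (sum of Bernoulli draws).
        p = sum(rng.random() < p_sel for _ in range(2 * N)) / (2 * N)
        traj.append(p)
        if p in (0.0, 1.0):   # fixation or loss ends the trajectory
            break
    return traj

# One replicate of a hypothetical E&R time course.
traj = wright_fisher_trajectory(p0=0.1, s=0.1, N=500, generations=60, seed=42)
```

Fitting such simulated trajectories back with an inference method is the standard way to check the power calculations the abstract describes.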

  10. Production Process of Biocompatible Magnesium Alloy Tubes Using Extrusion and Dieless Drawing Processes

    NASA Astrophysics Data System (ADS)

    Kustra, Piotr; Milenin, Andrij; Płonka, Bartłomiej; Furushima, Tsuyoshi

    2016-06-01

    The development of a production process for biocompatible magnesium tubes for medical applications is the subject of the present paper. The technology consists of two stages: an extrusion process followed by a dieless drawing process. Mg alloys for medical applications, such as MgCa0.8, are characterized by low technological plasticity during deformation, which is why optimization of the production parameters is necessary to obtain a good-quality product. Thus, the authors developed yield stress and ductility models for the investigated Mg alloy and then used numerical simulations to evaluate proper manufacturing conditions. The Grid Extrusion3d software developed by the authors was used to determine optimum process parameters for extrusion: a billet temperature of 400 °C and an extrusion velocity of 1 mm/s. Based on these parameters, a defect-free tube with an external diameter of 5 mm was manufactured. The commercial Abaqus software was then used to model dieless drawing. It was shown that a reduction in area of 60% can be realized for the MgCa0.8 magnesium alloy. Tubes with a final diameter of 3 mm were selected as a case study to present the capabilities of the proposed processes.

  11. Digital analysis of changes by Plasmodium vivax malaria in erythrocytes.

    PubMed

    Edison, Maombi; Jeeva, J B; Singh, Megha

    2011-01-01

    Blood samples of malaria patients (n = 30), selected based on the severity of parasitemia, were divided into low (LP), medium (MP) and high (HP) parasitemia groups, which represent increasing levels of disease severity. Healthy subjects (n = 10) without any history of disease were selected as a control group. By processing the erythrocyte images, cell contours were obtained, and from these the shape parameters area, perimeter and form factor were derived. The gray-level intensity was determined by scanning each erythrocyte along its largest diameter. A comparison with normal cells showed a significant change in the shape parameters. The gray-level intensity decreases with increasing severity of the disease. The changes in shape parameters correlate directly, and the gray-level intensity variation inversely, with the increase in parasite density due to the disease.

  12. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, methods that process multiple frequency hopping signals both effectively and simultaneously are scarce. A method of hybrid FH signal selection and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and basic PRI-transformation theory to realize the sorting and parameter estimation of the components in the hybrid frequency hopping signal. The simulation results show that this method can correctly classify the frequency hopping component signals; at an SNR of 10 dB the estimation error of the frequency hopping period is about 5% and the estimation error of the frequency hopping frequency is less than 1%. However, the performance of this method deteriorates seriously at low SNR.

  13. Laser confocal microscope for analysis of 3013 inner container closure weld region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Rodriguez, M. J.

    As part of the protocol to investigate corrosion in the inner container closure weld region (ICCWR), a laser confocal microscope (LCM) was used to perform close visual examination of the surface and measurements of corrosion features on the surface. However, initial analysis of selected destructively evaluated (DE) containers using the LCM revealed several challenges for acquiring, processing and interpreting the data. These challenges include the topography of the ICCWR sample, surface features, and the amount of surface area for collecting data under high-magnification conditions. In FY17, the LCM parameters were investigated to identify the appropriate parameter values for data acquisition and identification of regions of interest. Using these parameter values, selected DE containers were analyzed to determine the extent of the ICCWR to be examined.

  14. Multi-Response Optimization of WEDM Process Parameters Using Taguchi Based Desirability Function Analysis

    NASA Astrophysics Data System (ADS)

    Majumder, Himadri; Maity, Kalipada

    2018-03-01

    A shape memory alloy has the unique capability to return to its original shape after physical deformation upon the application of heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of a Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kgf was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi's signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, which affirmed that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
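The DFA step can be sketched as follows, assuming the common Derringer-Suich-style linear desirability transforms and a geometric-mean composite; the response values and limits below are hypothetical, not the paper's measurements:

```python
def d_larger(y, lo, hi):
    """Larger-the-better desirability (e.g. cutting speed)."""
    if y <= lo: return 0.0
    if y >= hi: return 1.0
    return (y - lo) / (hi - lo)

def d_smaller(y, lo, hi):
    """Smaller-the-better desirability (e.g. kerf width, surface roughness)."""
    if y <= lo: return 1.0
    if y >= hi: return 0.0
    return (hi - y) / (hi - lo)

def composite_desirability(ds):
    """Overall desirability: geometric mean of the individual desirabilities."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses for one WEDM parameter setting:
ds = [d_larger(3.2, 1.0, 4.0),       # cutting speed (mm/min)
      d_smaller(0.30, 0.25, 0.40),   # kerf width (mm)
      d_smaller(2.1, 1.5, 3.5)]      # surface roughness (um)
D = composite_desirability(ds)       # the setting with the highest D wins
```

Ranking the experimental runs by `D` and picking the best is the multi-response selection the abstract describes.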

  15. EVALUATION OF SIMULTANEOUS SO2/NOX CONTROL TECHNOLOGY

    EPA Science Inventory

    The report gives results of work concentrating on characterizing three process operational parameters of a technology that combines sorbent injection and selective non-catalytic reduction for simultaneous sulfur dioxide/nitrogen oxide (SO2/NOx) removal from coal-fired industrial ...

  16. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, image segmentation is a preliminary step for later analysis such as semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, the object-oriented remote sensing image processing method and its basic ideas have prevailed. The core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on the study and improvement of that algorithm: existing segmentation algorithms are analyzed, and the watershed algorithm is selected as the optimum initialization. The algorithm is then modified by adjusting an area parameter and further combining the area parameter with a heterogeneity parameter. Several experiments are carried out which show that the modified FNEA algorithm achieves better segmentation results than the traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.

  17. Characteristics of Friction Stir Processed UHMW Polyethylene Based Composite

    NASA Astrophysics Data System (ADS)

    Hussain, G.; Khan, I.

    2018-01-01

    Ultra-high molecular weight polyethylene (UHMWPE) based composites are widely used in the biomedical and food industries because of their biocompatibility and enhanced properties. The aim of this study was to fabricate a UHMWPE/nHA composite through heat-assisted friction stir processing. The rotational speed (ω), feed rate (f), volume fraction of nHA (v) and shoulder temperature (T) were selected as the process parameters. Macroscopic and microscopic analysis revealed that these parameters have significant effects on the distribution of the reinforcing material, defect formation and material mixing. Defects were observed especially at low levels of (ω, T) and high levels of (f, v). A low level of v with medium levels of the other parameters resulted in better mixing and minimum defects. A 10% increase in strength with only a 1% reduction in percent elongation was observed at this set of conditions. Moreover, the resulting hardness of the composite was higher than that of the parent material.

  18. Gene flow from domesticated species to wild relatives: migration load in a model of multivariate selection.

    PubMed

    Tufto, Jarle

    2010-01-01

    Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between the average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the effective distance by which immigrants deviate from the local optimum is reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.

  19. Rapid permeation measurement system for the production control of monolayer and multilayer films

    NASA Astrophysics Data System (ADS)

    Botos, J.; Müller, K.; Heidemeyer, P.; Kretschmer, K.; Bastian, M.; Hochrein, T.

    2014-05-01

    Plastics have long been used for packaging films. Until now the development of new formulations for film applications, including process optimization, has been a time-consuming and cost-intensive process for gases like oxygen (O2) or carbon dioxide (CO2). By using helium (He), the permeation measurement can be accelerated from hours or days to a few minutes. To this end, a manometric measuring system for tests according to ISO 15105-1 is coupled with a mass spectrometer to determine the helium flow rate and to calculate the helium permeation rate. Thanks to the accelerated determination, the permeation quality of monolayer and multilayer films can be measured at-line. Such a system can be used, for example, to predict the helium permeation rate of filled polymer films. Defined quality limits for the permeation rate can be specified, as can the prompt correction of process parameters if the results do not meet the specification. This method for process control was tested on a pilot line with a co-rotating twin-screw extruder for monolayer films. Selected process parameters were varied iteratively without changing the material formulation to obtain the best process parameter set and thus the lowest permeation rate. Beyond that, the influence of different parameters on the helium permeation rate was examined on monolayer films. The results were evaluated conventionally as well as with artificial neural networks in order to determine the non-linear correlation between all process parameters.

  20. A Design of Experiment approach to predict product and process parameters for a spray dried influenza vaccine.

    PubMed

    Kanojia, Gaurav; Willems, Geert-Jan; Frijlink, Henderik W; Kersten, Gideon F A; Soema, Peter C; Amorij, Jean-Pierre

    2016-09-25

    Spray dried vaccine formulations might be an alternative to traditional lyophilized vaccines. Compared to lyophilization, spray drying is a fast and cheap process extensively used for drying biologicals. The current study provides an approach that utilizes Design of Experiments (DoE) for the spray drying process to stabilize a whole inactivated influenza virus (WIV) vaccine. The approach included systematically screening and optimizing the spray drying process variables, determining the desired process parameters and predicting product quality parameters. The process parameters inlet air temperature, nozzle gas flow rate and feed flow rate, and their effects on WIV vaccine powder characteristics such as particle size, residual moisture content (RMC) and powder yield, were investigated. Vaccine powders with a broad range of physical characteristics (RMC 1.2-4.9%, particle size 2.4-8.5 μm and powder yield 42-82%) were obtained. WIV showed no significant loss in antigenicity, as revealed by a hemagglutination test. Furthermore, the descriptive models generated by the DoE software could be used to determine and select (set) spray drying process parameters. This was used to generate a dried WIV powder with predefined (predicted) characteristics. Moreover, the spray dried vaccine powders retained their antigenic stability even after storage for 3 months at 60°C. The approach used here enabled the generation of a thermostable, antigenic WIV vaccine powder with the desired physical characteristics that could potentially be used for pulmonary administration. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
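As a toy version of the DoE screening step, the sketch below estimates main effects of the three spray drying factors from a coded two-level full factorial; the design and the residual-moisture responses are invented for illustration and are not the study's data:

```python
from itertools import product

# Coded 2^3 full-factorial design: -1 = low level, +1 = high level for
# inlet air temperature, nozzle gas flow rate and feed flow rate.
design = list(product([-1, 1], repeat=3))

# Hypothetical measured residual moisture content (%) for the 8 runs.
rmc = [4.9, 2.6, 4.3, 2.2, 4.1, 1.9, 3.6, 1.4]

def main_effect(design, y, factor):
    """Main effect = mean response at the high level minus mean at the low level."""
    hi = [yi for run, yi in zip(design, y) if run[factor] == 1]
    lo = [yi for run, yi in zip(design, y) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(design, rmc, i)
           for i, name in enumerate(["T_inlet", "gas_flow", "feed_flow"])}
```

Ranking `effects` by magnitude identifies which process parameter dominates a given powder characteristic, which is the kind of descriptive model the DoE software fits.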

  1. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is then to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
    This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life-usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique only needs to be performed during the system design process, it places no additional computational burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.

  2. A case study: application of statistical process control tool for determining process capability and sigma level.

    PubMed

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory authorities, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for the selection of critical quality attributes, determination of the normal probability distribution, and assessment of the statistical stability and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to the out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determining the quality of any production process. This tool is new to the pharmaceutical tablet production process, where the quality control parameters act as quality assessment parameters. Application of risk assessment enables the selection of critical quality attributes from among the quality control parameters.
    Sequential application of normal probability distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process capability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process, which is the one liable for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
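The capability calculation behind a sigma-level statement can be sketched as follows; the tablet-weight data and specification limits are hypothetical, and the short-term "sigma level = 3 x Cpk" convention is one common choice, not necessarily the one used in this case study:

```python
from statistics import mean, stdev

def capability(data, lsl, usl):
    """Process capability indices from sample data and spec limits.
    Cp compares the specification width to the process spread (6 sigma);
    Cpk additionally penalises an off-centre process."""
    mu, sigma = mean(data), stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk, 3 * cpk      # short-term sigma level ~ 3 * Cpk

# Hypothetical tablet weights (mg) against a 245-255 mg specification.
weights = [249.2, 250.1, 250.8, 249.5, 250.3,
           249.9, 250.6, 250.0, 249.7, 250.4]
cp, cpk, sigma_level = capability(weights, lsl=245.0, usl=255.0)
```

A Cpk well above 1.33 indicates a capable process; comparing Cp with Cpk shows how much capability is lost to off-centre operation.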

  3. Friction spinning - Twist phenomena and the capability of influencing them

    NASA Astrophysics Data System (ADS)

    Lossen, Benjamin; Homberg, Werner

    2016-10-01

    The friction spinning process belongs to the incremental forming techniques. The process combines elements of both metal spinning and friction welding. The selective combination of process elements from these two processes results in the integration of friction sub-processes in a spinning process. This implies self-induced heat generation, with the possibility of manufacturing functionally graded parts from tubes and sheets. Compared with conventional spinning processes, this in-process heat treatment permits the extension of existing forming limits and also the production of more complex geometries. Furthermore, a defined adjustment of part properties such as strength, grain size/orientation and surface conditions can be achieved through appropriate process parameter settings, and consequently by setting a specific temperature profile in combination with the degree of deformation. The results presented for tube forming start with an investigation into the twist phenomena arising in flange processing. The influence of the main parameters, such as rotation speed, feed rate, forming paths and tool friction surface, and their effects on temperature, forces and finally the twist behavior, are analyzed. Following this, the significant correlations with the parameters and a new process strategy are set out in order to show the possibility of achieving a defined grain texture orientation.

  4. Surveillance system and method having parameter estimation and operating mode partitioning

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.

  5. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  6. A comparative study of electrochemical machining process parameters by using GA and Taguchi method

    NASA Astrophysics Data System (ADS)

    Soni, S. K.; Thomas, B.

    2017-11-01

    In electrochemical machining, the quality of the machined surface depends strongly on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm in MATLAB to maximize the metal removal rate and minimize the surface roughness and overcut. A comparative study is presented for drilling of LM6 Al/B4C composites, examining the significant impact of machining process parameters such as electrolyte concentration (g/l), machining voltage (V) and frequency (Hz) on the response parameters (surface roughness, material removal rate and overcut). A Taguchi L27 orthogonal array was chosen in the Minitab 17 software for the investigation of the experimental results, and multi-objective optimization was performed with a genetic algorithm in MATLAB. After obtaining optimized results from the Taguchi method and the genetic algorithm, the comparative results are presented.
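The Taguchi signal-to-noise ratios used to rank parameter settings can be computed directly from replicate measurements. A minimal sketch, with invented replicate data (the S/N definitions are the standard larger-the-better and smaller-the-better forms):

```python
from math import log10

def sn_larger_the_better(values):
    """S/N ratio for responses to maximize, e.g. material removal rate."""
    return -10 * log10(sum(1 / v ** 2 for v in values) / len(values))

def sn_smaller_the_better(values):
    """S/N ratio for responses to minimize, e.g. surface roughness, overcut."""
    return -10 * log10(sum(v ** 2 for v in values) / len(values))

# Hypothetical replicate measurements at one parameter setting:
mrr = [0.82, 0.85, 0.80]    # material removal rate (g/min) - maximize
ra = [2.4, 2.6, 2.5]        # surface roughness (um) - minimize
sn_mrr = sn_larger_the_better(mrr)
sn_ra = sn_smaller_the_better(ra)
```

For each response, the parameter level with the highest mean S/N ratio across the orthogonal-array runs is selected as optimal; a higher S/N always means "better", regardless of whether the raw response is maximized or minimized.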

  7. Image data-processing system for solar astronomy

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.

    1977-01-01

    The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.

  8. Radiation dosimetry for quality control of food preservation and disinfestation

    NASA Astrophysics Data System (ADS)

    McLaughlin, W. L.; Miller, A.; Uribe, R. M.

    In the use of x and gamma rays and scanned electron beams to extend the shelf life of food by delay of sprouting and ripening, killing of microbes, and control of insect population, quality assurance is provided by standardized radiation dosimetry. By strategic placement of calibrated dosimeters that are sufficiently stable and reproducible, it is possible to monitor minimum and maximum radiation absorbed dose levels and dose uniformity for a given processed foodstuff. The dosimetry procedure is especially important in the commissioning of a process and in making adjustments of process parameters (e.g. conveyor speed) to meet changes that occur in product and source parameters (e.g. bulk density and radiation spectrum). Routine dosimetry methods and certain corrections of dosimetry data may be selected for the radiations used in typical food processes.

  9. Characterization of the interfacial heat transfer coefficient for hot stamping processes

    NASA Astrophysics Data System (ADS)

    Luan, Xi; Liu, Xiaochuan; Fang, Haomiao; Ji, Kang; El Fakir, Omer; Wang, LiLiang

    2016-08-01

    In hot stamping processes, the interfacial heat transfer coefficient (IHTC) between the forming tools and hot blank is an essential parameter which determines the quenching rate of the process and hence the resulting material microstructure. The present work focuses on the characterization of the IHTC between an aluminium alloy 7075-T6 blank and two different die materials, cast iron (G3500) and H13 die steel, at various contact pressures. It was found that the IHTC between AA7075 and cast iron had values 78.6% higher than that obtained between AA7075 and H13 die steel. Die materials and contact pressures had pronounced effects on the IHTC, suggesting that the IHTC can be used to guide the selection of stamping tool materials and the precise control of processing parameters.

  10. Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics

    NASA Astrophysics Data System (ADS)

    Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.

    2018-01-01

    This article describes the application of the Taguchi method to the selective laser melting of a combustion chamber sector, using numerical and physical experiments to achieve minimum thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. The following optimization parameters (independent factors) were chosen: the laser beam power and velocity, plus two factors compensating for the effect of residual thermal stresses, namely the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with factor variation at three levels (L9). As quality criteria, the distortion values for 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, grey relational analysis was used to solve the optimization problem for multiple quality parameters. As a result, combustion chamber segments of the gas turbine engine were manufactured according to the parameters obtained.
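    The grey relational step reduces several quality responses to a single grade per experimental run. A generic sketch with equal criterion weights and distinguishing coefficient 0.5; the distortion and strength values below are invented:

```python
import numpy as np

def grey_relational_grade(y, larger_better, zeta=0.5):
    """Grey relational analysis for multiple quality criteria.
    y: (runs x criteria) response matrix; larger_better: one bool per
    column; each criterion must vary across runs. Returns one grade per
    run; the best parameter combination maximizes the grade."""
    y = np.asarray(y, dtype=float)
    norm = np.empty_like(y)
    for j, lb in enumerate(larger_better):
        lo, hi = y[:, j].min(), y[:, j].max()
        norm[:, j] = (y[:, j] - lo) / (hi - lo) if lb else (hi - y[:, j]) / (hi - lo)
    delta = 1.0 - norm                                       # deviation from ideal
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)                                 # equal-weight grade

# Hypothetical distortion (minimize) and strength (maximize) for 3 runs:
grades = grey_relational_grade([[0.12, 900.0], [0.08, 950.0], [0.15, 870.0]],
                               larger_better=[False, True])
print(grades.argmax())  # index of the best run
```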

  11. Targeted versus statistical approaches to selecting parameters for modelling sediment provenance

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick

    2017-04-01

    One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps in this approach is selecting the appropriate suite of parameters or fingerprints used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting are reviewed, followed by the opportunities and limitations of these approaches and some future research directions. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behavior is characterized by constancy: the properties of sediment sources should remain constant through sediment detachment, transportation and deposition processes, or at the very least, any variation should occur in a predictable and measurable way. One approach to selecting conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of discrimination statistics (e.g. Kruskal-Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward, and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling.
Moving forward, it would be beneficial if researchers test their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
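    As a sketch of the discrimination step mentioned above, the Kruskal-Wallis H statistic can be computed directly from ranks (version without tie correction; the caesium-137 activities below are invented for illustration):

```python
import numpy as np

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction): a rank-based test
    of whether a candidate fingerprint property discriminates between
    sediment sources. Larger H means stronger discrimination."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    ranks = np.empty_like(data)
    ranks[np.argsort(data)] = np.arange(1, data.size + 1)
    n_total = data.size
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * r.mean() ** 2
        start += len(g)
    return 12.0 / (n_total * (n_total + 1)) * h - 3.0 * (n_total + 1)

# Hypothetical 137Cs activity (Bq/kg) in surface vs subsurface sources:
surface = [12.1, 14.3, 11.8, 13.5]
subsurface = [1.2, 0.9, 2.1, 1.5]
print(kruskal_wallis_h(surface, subsurface))  # fully separated groups
```

    Properties whose H statistic is significant survive to the parameter-selection stage (e.g. Discriminant Function Analysis).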

  12. Experimental comparison of residual stresses for a thermomechanical model for the simulation of selective laser melting

    DOE PAGES

    Hodge, N. E.; Ferencz, R. M.; Vignes, R. M.

    2016-05-30

    Selective laser melting (SLM) is an additive manufacturing process in which multiple, successive layers of metal powders are heated via laser in order to build a part. Modeling of SLM requires consideration of the complex interaction between heat transfer and solid mechanics. The present work describes the authors' initial efforts to validate their first-generation model. In particular, a comparison of model-generated solid mechanics results, including both deformation and stresses, is presented. Additionally, results of various perturbations of the process parameters and modeling strategies are discussed.

  13. Process to evaluate hematological parameters that reflex to manual differential cell counts in a pediatric institution.

    PubMed

    Guarner, Jeannette; Atuan, Maria Ana; Nix, Barbara; Mishak, Christopher; Vejjajiva, Connie; Curtis, Cheri; Park, Sunita; Mullins, Richard

    2010-01-01

    Each institution sets specific parameters obtained by automated hematology analyzers to trigger manual counts. We designed a process to decrease the number of manual differential cell counts without impacting patient care. We selected new criteria that prompt manual counts and studied the impact these changes had on 2 days of work and on samples from patients with newly diagnosed leukemia, sickle cell disease, and presence of left shift. By using fewer parameters and expanding our ranges, we decreased the number of manual counts by 20%. The parameters that prompted manual counts most frequently were the presence of blast flags and nucleated red blood cells, 2 parameters that were not changed. The parameters that accounted for the decrease in the number of manual counts were the white blood cell count and large unstained cells. Eight of 32 patients with newly diagnosed leukemia did not show blast flags; however, other parameters triggered manual counts. In 47 patients with sickle cell disease, nucleated red cells and red cell variability prompted manual review. Bands were observed in 18% of the specimens, and 4% would not have been counted manually with the new criteria; for the latter, the mean band count was 2.6%. The process we followed to evaluate hematological parameters that reflex to manual differential cell counts increased efficiency without compromising patient care in our hospital system.
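    The reflex decision itself is rule evaluation over analyzer output. A schematic sketch; the flag names and thresholds below are placeholders, not the institution's actual criteria:

```python
def needs_manual_differential(cbc, rules):
    """Reflex logic sketch: an automated CBC result triggers a manual
    differential when any flag is present or any numeric parameter
    falls outside its review range. All thresholds are hypothetical."""
    if cbc.get("blast_flag") or cbc.get("nrbc_flag"):
        return True
    return any(not lo <= cbc[name] <= hi
               for name, (lo, hi) in rules.items() if name in cbc)

# Hypothetical review ranges (wider ranges = fewer manual counts):
rules = {"wbc": (1.5, 30.0), "luc_percent": (0.0, 4.0)}
print(needs_manual_differential({"wbc": 45.0, "blast_flag": False}, rules))
```

    Widening a range (as done here for white blood cells and large unstained cells) reduces triggers, while flag-based rules such as blasts stay untouched.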

  14. Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore, systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information, including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information, including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. 
Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
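    A toy stand-in for the merit computation and the search over sensor suites (the actual strategy uses a genetic algorithm and an inverse engine model; here the fault signatures are invented and the search is exhaustive):

```python
import itertools
import numpy as np

def suite_quality(signatures, suite):
    """Toy merit measure: given each fault's expected deviation signature
    over all candidate sensors, score a suite by the smallest pairwise
    distance between fault signatures restricted to that suite. Better
    separation means easier fault isolation."""
    sig = signatures[:, list(suite)]
    return min(np.linalg.norm(a - b)
               for a, b in itertools.combinations(sig, 2))

def best_suite(signatures, k):
    """Exhaustive stand-in for the genetic search over k-sensor suites."""
    n = signatures.shape[1]
    return max(itertools.combinations(range(n), k),
               key=lambda s: suite_quality(signatures, s))

# Hypothetical deviation signatures for 3 faults over 4 candidate sensors;
# sensor 2 responds identically to every fault, so it carries no information:
sigs = np.array([[1.0, 0.0, 0.2, 0.0],
                 [0.0, 1.0, 0.2, 0.0],
                 [0.0, 0.0, 0.2, 1.0]])
print(best_suite(sigs, 2))  # an informative pair, never sensor 2
```

    The real quality measure also folds in detection fidelity, risk-reduction weights, and noise statistics, which is why a genetic search rather than enumeration is used at realistic scale.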

  15. A single scaling parameter as a first approximation to describe the rainfall pattern of a place: application on Catalonia

    NASA Astrophysics Data System (ADS)

    Casas-Castillo, M. Carmen; Llabrés-Brustenga, Alba; Rius, Anna; Rodríguez-Solà, Raúl; Navarro, Xavier

    2018-02-01

    As in other natural processes, it has frequently been observed that the phenomenon arising from the rainfall generation process presents fractal self-similarity of a statistical type, and thus rainfall series generally show scaling properties. Based on this fact, the simple scaling methodology is used quite broadly to derive or reproduce the intensity-duration-frequency curves of a place. In the present work, the relationship between the simple scaling parameter and the characteristic rainfall pattern of the study area has been investigated. The scaling parameter was calculated from 147 selected daily rainfall series covering the period between 1883 and 2016 over the Catalonian territory (Spain) and its nearby surroundings. The spatial distribution of the scaling parameter is discussed in relation to the rainfall pattern, together with trends in this parameter over the past decades, possibly due to climate change.
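    Under simple scaling, a single exponent transfers rainfall quantiles (and hence IDF depths) between durations. A sketch; the depths and the exponent value below are invented:

```python
import numpy as np

def estimate_beta(durations, mean_depths):
    """Simple-scaling exponent: the slope of log(mean depth) versus
    log(duration), assuming moments scale as a power law of duration."""
    return np.polyfit(np.log(durations), np.log(mean_depths), 1)[0]

def scale_quantiles(x_ref, d_ref, d_target, beta):
    """Simple scaling: rainfall of duration d is distributed like
    (d/d_ref)**beta times rainfall of the reference duration, so
    quantiles transfer between durations with one exponent."""
    return np.asarray(x_ref, dtype=float) * (d_target / d_ref) ** beta

# Hypothetical 24 h depths (mm) for several return periods, rescaled to 1 h:
depth_24h = [60.0, 85.0, 110.0]
depth_1h = scale_quantiles(depth_24h, d_ref=24.0, d_target=1.0, beta=0.55)
print(depth_1h)
```

    The single exponent beta is what makes the parameter a compact descriptor of a station's rainfall pattern.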

  16. Kinetics modelling of color deterioration during thermal processing of tomato paste with the use of response surface methodology

    NASA Astrophysics Data System (ADS)

    Ganje, Mohammad; Jafari, Seid Mahdi; Farzaneh, Vahid; Malekjani, Narges

    2018-06-01

    To study the kinetics of color degradation, tomato paste was processed at three temperatures (60, 70 and 80 °C) for 25, 50, 75 and 100 min. The a/b ratio, total color difference (TCD), saturation index and hue angle were calculated from the three main color parameters: L (lightness), a (redness-greenness) and b (yellowness-blueness). The kinetics of color degradation were described by the Arrhenius equation and the changes were modelled using response surface methodology (RSM). All of the studied responses followed first-order reaction kinetics, with the exception of the TCD parameter (zeroth order). TCD, with the highest activation energy, showed the highest sensitivity to temperature changes, and a/b, with the lowest activation energy, the lowest. The maximum and minimum rates of change were observed for the TCD and b parameters, respectively. All of the studied responses were affected by the selected independent parameters.
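    The two fitting steps, first-order rate constants at each temperature followed by the activation energy from the Arrhenius line, can be sketched as follows (all concentration values are invented):

```python
import numpy as np

def first_order_k(t, c):
    """First-order kinetics C(t) = C0*exp(-k*t): estimate k (per unit
    time) as the slope of -ln(C/C0) against time."""
    t, c = np.asarray(t, dtype=float), np.asarray(c, dtype=float)
    return np.polyfit(t, -np.log(c / c[0]), 1)[0]

def activation_energy(temps_K, ks):
    """Arrhenius equation ln k = ln A - Ea/(R*T): recover Ea (J/mol)
    from the slope of ln k versus 1/T."""
    R = 8.314  # J/(mol K)
    slope = np.polyfit(1.0 / np.asarray(temps_K, dtype=float), np.log(ks), 1)[0]
    return -slope * R

# Invented a/b-ratio decay at 60, 70, 80 degC (times in minutes):
t = [0.0, 25.0, 50.0, 75.0, 100.0]
ks = [first_order_k(t, [2.00, 1.90, 1.81, 1.72, 1.64]),
      first_order_k(t, [2.00, 1.86, 1.73, 1.61, 1.50]),
      first_order_k(t, [2.00, 1.81, 1.64, 1.48, 1.34])]
Ea = activation_energy([333.15, 343.15, 353.15], ks)
print(ks, Ea)  # rate constants rise with temperature; Ea is positive
```

    A higher activation energy means the rate constant, and hence the response, is more sensitive to temperature, which is the comparison the abstract draws between TCD and a/b.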

  17. Scheduling on the basis of the research of dependences among the construction process parameters

    NASA Astrophysics Data System (ADS)

    Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga

    2017-10-01

    The article investigates the dependences among three construction process parameters: the average integrated qualification of the shift, the number of workers per shift, and the average daily amount of completed work, on the basis of correlation coefficients. Basic data for the research of these dependences were collected during the construction of two standard objects, A and B (monolithic houses), over four months of construction (October, November, December, January). A Cobb-Douglas production function, which is simple to use and well suited to describing the considered dependences, yielded correlation coefficients close to 1. A development function describing the relationship among the considered parameters of the construction process is derived; it makes it possible to select the optimal quantitative and qualification structure of the brigade link for the next period of time, according to a preset amount of work. A function of the optimized amounts of work, reflecting the interrelation of the key parameters of the construction process, is also developed; its values should be used as the average standard for scheduling the storming periods of construction.
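    Fitting a Cobb-Douglas production function reduces to linear regression in log space. A sketch with synthetic data; the interpretation of the factors as workers per shift and shift qualification follows the abstract, while the numbers are invented:

```python
import numpy as np

def fit_cobb_douglas(q, x1, x2):
    """Fit Q = A * x1**a * x2**b by least squares on the log-linear form
    ln Q = ln A + a*ln x1 + b*ln x2."""
    X = np.column_stack([np.ones(len(q)), np.log(x1), np.log(x2)])
    coef, *_ = np.linalg.lstsq(X, np.log(q), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]

# Synthetic daily output for (workers per shift, mean qualification),
# generated from an exact Cobb-Douglas model so the fit is recoverable:
workers = np.array([4.0, 6.0, 8.0, 10.0])
qualif = np.array([2.0, 3.0, 2.5, 3.5])
output = 2.0 * workers**0.7 * qualif**0.3
A, a, b = fit_cobb_douglas(output, workers, qualif)
print(A, a, b)  # recovers A=2, a=0.7, b=0.3
```

    With real data the exponents come with residual error, and the fitted function can then be inverted to size the brigade for a preset amount of work.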

  18. Adaptive convex combination approach for the identification of improper quaternion processes.

    PubMed

    Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P

    2014-01-01

    Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
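    A real-valued sketch of the convex-combination idea (the paper operates in the quaternion domain with QLMS and WL-QLMS; here two real LMS filters, one on the raw regressor and one on an augmented regressor, stand in for that pair, and the mixing parameter lam tracks which sub-model fits better):

```python
import numpy as np

def cc_lms(x, d, n_taps=4, mu=0.05, mu_a=4.0):
    """Convex combination of two adaptive filters: y = lam*y1 + (1-lam)*y2
    with lam = sigmoid(a). Each sub-filter adapts on its own error, while
    'a' descends the overall squared error, so lam drifts toward the
    better sub-model. Returns the history of lam."""
    w1 = np.zeros(n_taps)          # stand-in for the strictly linear filter
    w2 = np.zeros(2 * n_taps)      # stand-in for the widely linear filter
    a, lam_hist = 0.0, []
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]
        u2 = np.concatenate([u, np.abs(u)])       # toy augmented regressor
        y1, y2 = w1 @ u, w2 @ u2
        lam = 1.0 / (1.0 + np.exp(-a))
        e = d[k] - (lam * y1 + (1.0 - lam) * y2)
        w1 += mu * (d[k] - y1) * u
        w2 += mu * (d[k] - y2) * u2
        a = float(np.clip(a + mu_a * e * (y1 - y2) * lam * (1.0 - lam), -4.0, 4.0))
        lam_hist.append(lam)
    return np.array(lam_hist)

# Target that only the augmented model can capture (analogue of an
# improper process): d depends on |x|, not linearly on x.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.zeros_like(x)
d[2:] = 0.7 * np.abs(x[1:-1]) + 0.3 * np.abs(x[:-2])
lam = cc_lms(x, d)
print(lam[-1])  # drifts below 0.5: the augmented sub-filter wins
```

    Monitoring lam in this way is the real-valued analogue of tracking improperness through the convex mixing parameter.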

  19. Algorithm For Solution Of Subset-Regression Problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, Michel

    1991-01-01

    A reliable and flexible algorithm for solution of the subset-regression problem performs QR decomposition with a new column-pivoting strategy, enabling selection of the subset directly from the originally defined regression parameters. This feature, in combination with a number of extensions, makes the algorithm very flexible for use in the analysis of subset-regression problems in which the parameters have physical meanings. The algorithm is also extended to enable joint processing of columns contaminated by noise with those free of noise, without using scaling techniques.
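    The effect of column pivoting can be sketched with a greedy Gram-Schmidt variant (the paper's algorithm is a full QR decomposition with further extensions; this toy only shows how a redundant regressor is pivoted last):

```python
import numpy as np

def qr_column_pivot_order(X):
    """Column order produced by QR with column pivoting: at each step,
    pick the remaining column with the largest residual norm, then
    orthogonalize the others against it. Columns that add little new
    information (near-duplicates) end up last, so the first k pivoted
    columns form a natural regression subset."""
    A = np.array(X, dtype=float)
    cols = list(range(A.shape[1]))
    order = []
    while cols:
        j = max(cols, key=lambda c: np.linalg.norm(A[:, c]))
        order.append(j)
        cols.remove(j)
        q = A[:, j] / np.linalg.norm(A[:, j])
        for c in cols:
            A[:, c] -= (q @ A[:, c]) * q
    return order

# Toy design matrix: column 2 is almost a copy of column 0, so whichever
# of the pair is picked first leaves the other ranked last:
rng = np.random.default_rng(0)
x0, x1 = rng.standard_normal(50), rng.standard_normal(50)
X = np.column_stack([x0, x1, x0 + 1e-6 * rng.standard_normal(50)])
print(qr_column_pivot_order(X))
```

    Because the pivot order is expressed in the original column indices, the selected subset keeps its physical interpretation, which is the point the abstract emphasizes.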

  20. Theoretical study of hydrogen absorption-desorption on LaNi3.8Al1.2-xMnx using statistical physics treatment

    NASA Astrophysics Data System (ADS)

    Bouaziz, Nadia; Ben Manaa, Marwa; Ben Lamine, Abdelmottaleb

    2017-11-01

    The hydrogen absorption-desorption isotherms on LaNi3.8Al1.2-xMnx alloy at temperature T = 433 K are studied through various theoretical models. The analytical expressions of these models were deduced using the grand canonical ensemble in statistical physics under some simplifying hypotheses. Among these models, an adequate model presenting a good correlation with the experimental curves has been selected. The physicochemical parameters intervening in the absorption-desorption processes and involved in the model expressions could be directly deduced from the experimental isotherms by numerical simulation. Six parameters of the model are adjusted, namely the numbers of hydrogen atoms per site n1 and n2, the receptor-site densities N1m and N2m, and the energetic parameters P1 and P2. The behaviors of these parameters are discussed in relation to the absorption and desorption processes to better understand and compare these phenomena. From the energetic parameters, the sorption energies were calculated; they typically range between 266 and 269.4 kJ/mol for the absorption process and between 267 and 269.5 kJ/mol for the desorption process, comparable to usual chemical bond energies. Using the adopted model expression, the thermodynamic potential functions which govern the absorption/desorption process, such as the internal energy Eint, the Gibbs free enthalpy G and the entropy Sa, are derived.
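    The six adjusted parameters suggest a two-site expression. The Hill-type form below is an illustrative assumption about the kind of isotherm such grand-canonical treatments produce, not the paper's exact expression:

```python
import numpy as np

def two_site_isotherm(P, n1, n2, N1m, N2m, P1, P2):
    """Illustrative two-site Hill-type isotherm: each site family i
    contributes n_i * N_im / (1 + (P_i/P)**n_i), with n_i hydrogen atoms
    per site, N_im the receptor-site density and P_i an energetic
    (half-saturation) pressure. Saturates at n1*N1m + n2*N2m.
    (Assumed functional form, for illustration only.)"""
    P = np.asarray(P, dtype=float)
    return (n1 * N1m / (1.0 + (P1 / P) ** n1)
            + n2 * N2m / (1.0 + (P2 / P) ** n2))

# Hypothetical parameter values; in practice all six are fitted to the
# measured isotherm by numerical simulation:
pressures = np.array([0.1, 0.5, 1.0, 5.0, 20.0])
print(two_site_isotherm(pressures, 1.0, 2.0, 3.0, 4.0, 1.0, 2.5))
```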

  1. Multi-surface topography targeted plateau honing for the processing of cylinder liner surfaces of automotive engines

    NASA Astrophysics Data System (ADS)

    Lawrence, K. Deepak; Ramamoorthy, B.

    2016-03-01

    Cylinder bores of automotive engines are 'engineered' surfaces that are processed using multi-stage honing process to generate multiple layers of micro geometry for meeting the different functional requirements of the piston assembly system. The final processed surfaces should comply with several surface topographic specifications that are relevant for the good tribological performance of the engine. Selection of the process parameters in three stages of honing to obtain multiple surface topographic characteristics simultaneously within the specification tolerance is an important module of the process planning and is often posed as a challenging task for the process engineers. This paper presents a strategy by combining the robust process design and gray-relational analysis to evolve the operating levels of honing process parameters in rough, finish and plateau honing stages targeting to meet multiple surface topographic specifications on the final running surface of the cylinder bores. Honing experiments were conducted in three stages namely rough, finish and plateau honing on cast iron cylinder liners by varying four honing process parameters such as rotational speed, oscillatory speed, pressure and honing time. Abbott-Firestone curve based functional parameters (Rk, Rpk, Rvk, Mr1 and Mr2) coupled with mean roughness depth (Rz, DIN/ISO) and honing angle were measured and identified as the surface quality performance targets to be achieved. The experimental results have shown that the proposed approach is effective to generate cylinder liner surface that would simultaneously meet the explicit surface topographic specifications currently practiced by the industry.

  2. A process for quantifying aesthetic and functional breast surgery: I. Quantifying optimal nipple position and vertical and horizontal skin excess for mastopexy and breast reduction.

    PubMed

    Tebbetts, John B

    2013-07-01

    This article defines a comprehensive process using quantified parameters for objective decision making, operative planning, technique selection, and outcomes analysis in mastopexy and breast reduction, and defines quantified parameters for nipple position and vertical and horizontal skin excess. Future submissions will detail application of the processes for skin envelope design and address composite, three-dimensional parenchyma modification options. Breast base width was used to define a proportional, desired nipple-to-inframammary fold distance for optimal aesthetics. Vertical and horizontal skin excess were measured, documented, and used for technique selection and skin envelope design in mastopexy and breast reduction. This method was applied in 124 consecutive mastopexy and 122 consecutive breast reduction cases. Average follow-up was 4.6 years (range, 6 to 14 years). No changes were made to the basic algorithm of the defined process during the study period. No patient required nipple repositioning. Complications included excessive lower pole restretch (4 percent), periareolar scar hypertrophy (0.8 percent), hematoma (1.2 percent), and areola shape irregularities (1.6 percent). Delayed healing at the junction of vertical and horizontal scars occurred in two of 124 reduction patients (1.6 percent), neither of whom required revision. The overall reoperation rate was 6.5 percent (16 of 246). This study defines the first steps of a comprehensive process for using objectively defined parameters that surgeons can apply to skin envelope design for mastopexy and breast reduction. The method can be used in conjunction with, or in lieu of, other described methods to determine nipple position.

  3. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivities are presented for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing and the fixed-abrasive slicing technique (FAST). Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life) and utilities, and production parameters such as slicing rate, slices per centimeter and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
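    A generic sketch of the normalized (elasticity-type) sensitivity computation for a price model; the toy price formula below is illustrative, not the actual IPEG formula:

```python
def sensitivity(f, params, name, delta=0.01):
    """Normalized sensitivity of a price model f to one parameter:
    percent change in price per percent change in the parameter,
    via a central finite difference."""
    up, dn = dict(params), dict(params)
    up[name] *= 1.0 + delta
    dn[name] *= 1.0 - delta
    return (f(up) - f(dn)) / (2.0 * delta * f(params))

# Toy add-on price model (hypothetical structure: costs divided by throughput):
def price(p):
    return (p["equipment"] + p["labor"] + p["materials"]) / (p["slicing_rate"] * p["yield"])

params = {"equipment": 100.0, "labor": 50.0, "materials": 30.0,
          "slicing_rate": 2.0, "yield": 0.9}
print(sensitivity(price, params, "yield"))  # close to -1: price ~ 1/yield
```

    Ranking parameters by the magnitude of this elasticity is what points technology development at the cost drivers that matter.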

  4. Fixation probabilities on superstars, revisited and revised.

    PubMed

    Jamieson-Lane, Alastair; Hauert, Christoph

    2015-10-07

    Population structures can be crucial determinants of evolutionary processes. For the Moran process on graphs, certain structures suppress selective pressure, while others amplify it (Lieberman et al., 2005). Evolutionary amplifiers suppress random drift and enhance selection. Recently, some results for the most powerful known evolutionary amplifier, the superstar, have been invalidated by a counter example (Díaz et al., 2013). Here we correct the original proof and derive improved upper and lower bounds, which indicate that the fixation probability remains close to 1 - 1/(r^4 H) for population size N→∞ and structural parameter H ≫ 1. This correction resolves the differences between the two aforementioned papers. We also confirm that in the limit N,H→∞ superstars remain capable of eliminating random drift and hence of providing arbitrarily strong selective advantages to any beneficial mutation. In addition, we investigate the robustness of amplification in superstars and find that it appears to be a fragile phenomenon with respect to changes in the selection or mutation processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
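    For reference, the baseline against which amplifiers are measured is the fixation probability on the complete graph (the standard Moran-process result, not a formula from this paper's superstar analysis):

```python
def moran_fixation(r, N):
    """Fixation probability of a single mutant with relative fitness
    r != 1 in the Moran process on the complete graph (unstructured
    population): rho = (1 - 1/r) / (1 - 1/r**N). Amplifiers push this
    probability toward 1 for any beneficial mutation (r > 1)."""
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r**N)

# A 10% fitness advantage in a population of 100 fixes far less often
# than its advantage might suggest:
print(moran_fixation(1.1, 100))
```

    For large N this tends to 1 - 1/r, so even strongly beneficial mutants are frequently lost to drift on unstructured graphs, which is what makes amplifying structures interesting.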

  5. Characterisation of Aronia powders obtained by different drying processes.

    PubMed

    Horszwald, Anna; Julien, Heritier; Andlauer, Wilfried

    2013-12-01

    Nowadays, the food industry is facing challenges connected with the preservation of the highest possible quality of fruit products obtained after processing. Attention has been drawn to Aronia fruits due to the numerous health-promoting properties of their products. However, processing of Aronia, like other berries, leads to difficulties that stem from the preparation process, as well as changes in the composition of bioactive compounds. Consequently, in this study, Aronia commercial juice was subjected to different drying techniques: spray drying, freeze drying and vacuum drying with a temperature range of 40-80 °C. All powders obtained had a high content of total polyphenols. Powders obtained by spray drying had the highest values, which corresponded to a high content of total flavonoids, total monomeric anthocyanins, cyanidin-3-glucoside and total proanthocyanidins. Analysis of the results revealed a correlation between selected bioactive compounds and their antioxidant capacity. In conclusion, drying techniques have an impact on selected quality parameters, and different drying techniques cause changes in the content of the bioactives analysed. Spray drying can be recommended for preservation of bioactives in Aronia products. Powder quality depends mainly on the process applied and the parameters chosen. Therefore, Aronia powder production should be adapted to the requirements and design of the final product. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Breeding of Acrocomia aculeata using genetic diversity parameters and correlations to select accessions based on vegetative, phenological, and reproductive characteristics.

    PubMed

    Coser, S M; Motoike, S Y; Corrêa, T R; Pires, T P; Resende, M D V

    2016-10-17

    Macaw palm (Acrocomia aculeata) is a promising species for use in biofuel production, and establishing breeding programs is important for the development of commercial plantations. The aim of the present study was to analyze genetic diversity, verify correlations between traits, estimate genetic parameters, and select different accessions of A. aculeata in the Macaw Palm Germplasm Bank located in Universidade Federal de Viçosa, to develop a breeding program for this species. Accessions were selected based on precocity (PREC), total spathe (TS), diameter at breast height (DBH), height of the first spathe (HFS), and canopy area (CA). The traits were evaluated in 52 accessions during the 2012/2013 season and analyzed by restricted estimation maximum likelihood/best linear unbiased predictor procedures. Genetic diversity resulted in the formation of four groups by Tocher's clustering method. The correlation analysis showed it was possible to have indirect and early selection for the traits PREC and DBH. Estimated genetic parameters strengthened the genetic variability verified by cluster analysis. Narrow-sense heritability was classified as moderate (PREC, TS, and CA) to high (HFS and DBH), resulting in strong genetic control of the traits and success in obtaining genetic gains by selection. Accuracy values were classified as moderate (PREC and CA) to high (TS, HFS, and DBH), reinforcing the success of the selection process. Selection of accessions for PREC, TS, and HFS by the rank-average method permits selection gains of over 100%, emphasizing the successful use of the accessions in breeding programs and obtaining superior genotypes for commercial plantations.
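    The genetic-gain logic behind the heritability and accuracy estimates can be sketched with the standard quantitative-genetics formulas (the variance components below are hypothetical, not the study's REML estimates):

```python
def narrow_sense_heritability(var_additive, var_phenotypic):
    """Narrow-sense heritability h^2 = sigma_A^2 / sigma_P^2: the share
    of phenotypic variance that is additive-genetic and therefore
    available to selection."""
    return var_additive / var_phenotypic

def expected_response(h2, selection_differential):
    """Breeder's equation R = h^2 * S: expected genetic gain from
    selecting parents S units above the population mean."""
    return h2 * selection_differential

# Hypothetical variance components for a trait such as DBH:
h2 = narrow_sense_heritability(0.36, 0.60)
print(h2, expected_response(h2, 1.5))
```

    A moderate-to-high h^2, as reported for HFS and DBH, is what makes direct selection on those traits effective, and correlated traits allow indirect and early selection.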

  7. Two-degree-of-freedom fractional order-PID controllers design for fractional order processes with dead-time.

    PubMed

    Li, Mingjie; Zhou, Ping; Zhao, Zhicheng; Zhang, Jinggang

    2016-03-01

    Recently, fractional order (FO) processes with dead-time have attracted more and more attention from researchers in the control field, but the FO-PID controller design techniques available for FO processes with dead-time lack direct systematic approaches. In this paper, a simple design and parameter tuning approach for a two-degree-of-freedom (2-DOF) FO-PID controller based on internal model control (IMC) is proposed for FO processes with dead-time; conventional one-degree-of-freedom control exhibits the shortcoming of coupling robustness and dynamic response performance. 2-DOF control overcomes this weakness by decoupling robustness and dynamic performance from each other. The adjustable parameter η2 of the FO-PID controller is directly related to the robustness of the closed-loop system, and an analytical expression is given relating the maximum sensitivity specification Ms and the parameter η2. In addition, according to the dynamic performance requirement of the practical system, the parameter η1 can also be selected easily. By approximating the dead-time term of the process model with a first-order Padé or Taylor series expansion, expressions for the 2-DOF FO-PID controller parameters are derived for three classes of FO processes with dead-time. Moreover, compared with other methods, the proposed method is simple and easy to implement. Finally, simulation results are given to illustrate the effectiveness of this method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
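    The dead-time substitution used in such derivations can be sketched with the first-order Padé approximation (a standard identity; the specific tuning rules of the paper are not reproduced here):

```python
import numpy as np

def pade1(L):
    """First-order Pade approximation of the dead-time term:
    exp(-L*s) ~ (1 - L*s/2) / (1 + L*s/2). Coefficients are returned
    in ascending powers of s, which makes the controller expressions
    rational and hence tractable in closed form."""
    return np.array([1.0, -L / 2.0]), np.array([1.0, L / 2.0])

def eval_rational(num, den, s):
    """Evaluate num(s)/den(s) with ascending-power coefficient arrays."""
    return np.polyval(num[::-1], s) / np.polyval(den[::-1], s)

# The substitution is accurate where |L*s| is small, i.e. at the low
# frequencies that dominate closed-loop tuning:
L = 2.0
num, den = pade1(L)
for s in (0.01, 0.05, 0.1):
    print(s, abs(eval_rational(num, den, s) - np.exp(-L * s)))
```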

  8. Geometry characteristics modeling and process optimization in coaxial laser inside wire cladding

    NASA Astrophysics Data System (ADS)

    Shi, Jianjun; Zhu, Ping; Fu, Geyan; Shi, Shihong

    2018-05-01

    The coaxial laser inside wire cladding method is very promising, as it has a very high efficiency and a consistent interaction between the laser and the wire. In this paper, the energy and mass conservation laws and a regression algorithm are used together to establish mathematical models of the relationship between the layer geometry characteristics (width, height and cross-section area) and the process parameters (laser power, scanning velocity and wire feeding speed). Over the selected parameter ranges, the values predicted by the models are compared with the experimentally measured results; minor errors exist, but both reflect the same regularity. From the models, it can be seen that the width of the cladding layer is proportional to both the laser power and the wire feeding speed, while it first increases and then decreases with increasing scanning velocity. The height of the cladding layer is proportional to the scanning velocity and feeding speed and inversely proportional to the laser power. The cross-section area increases with increasing feeding speed and decreasing scanning velocity. Using the mathematical models, the geometry characteristics of the cladding layer can be predicted from known process parameters; conversely, the process parameters can be calculated from the targeted geometry characteristics. The models are also suitable for the multi-layer forming process. Using the optimized process parameters calculated from the models, a 45 mm-high thin-wall part was formed with smooth side surfaces.
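    The regression step can be sketched generically. The sketch below fits a linear model of layer width on laser power P, scanning velocity v and wire feed speed f by ordinary least squares via the normal equations; the data and coefficients are synthetic assumptions, not the paper's measurements, and the paper's actual models need not be linear.

```python
# Hedged sketch: least-squares regression of cladding-layer width on
# process parameters, using synthetic data from an assumed linear law.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= k * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def fit(X, y):
    """Least squares with intercept: solve (X^T X) beta = X^T y."""
    X = [[1.0] + row for row in X]
    n = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    return solve(XtX, Xty)

# synthetic law (assumed, mm): width = 0.5 + 0.002*P - 0.1*v + 0.3*f
data = [(P, v, f) for P in (800, 1000, 1200) for v in (2, 4) for f in (1.0, 1.5)]
y = [0.5 + 0.002 * P - 0.1 * v + 0.3 * f for P, v, f in data]
beta = fit([list(d) for d in data], y)
print([round(b, 4) for b in beta])   # recovers [0.5, 0.002, -0.1, 0.3]
```

Once fitted, the same model can be inverted numerically to go from a target geometry back to process parameters, which is the bidirectional use the abstract describes.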

  9. Selective laser melting of Inconel super alloy-a review

    NASA Astrophysics Data System (ADS)

    Karia, M. C.; Popat, M. A.; Sangani, K. B.

    2017-07-01

    Additive manufacturing is a relatively young technology that uses the principle of layer-by-layer addition of material in solid, liquid or powder form to develop a component or product. The quality of additively manufactured parts is one of the challenges to be addressed, and researchers are continuously working at various levels of additive manufacturing technology. One of the significant powder bed processes for metals is Selective Laser Melting (SLM). Laser-based processes are attracting increasing attention from researchers and industry, and the potential of this technique is yet to be fully explored. Due to its very high strength and creep resistance, Inconel is an extensively used nickel-based superalloy for manufacturing components for the aerospace, automobile and nuclear industries. Due to its low content of aluminum and titanium, it also exhibits good fabricability. The alloy is therefore ideally suited to selective laser melting for manufacturing intricate components with high strength requirements. The selection of a suitable manufacturing process for a specific component depends on geometrical complexity, production quantity, cost and required strength. Numerous researchers are working on various aspects such as metallurgical and microstructural investigations, mechanical properties, geometrical accuracy, the effects of process parameters and their optimization, and mathematical modeling. The present paper presents a comprehensive overview of the selective laser melting process for the Inconel group of alloys.

  10. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm

    PubMed Central

    Tamjidy, Mehran; Baharudin, B. T. Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-01-01

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset significantly influence the mechanical properties of the friction stir welded joints. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi-objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected using two different decision-making techniques, the technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon’s entropy. PMID:28772893
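    TOPSIS itself is a small, well-defined procedure for picking one alternative from a Pareto set. The sketch below applies it to three invented welding outcomes (the alternatives, weights, and criteria directions are illustrative assumptions, not the paper's data).

```python
import math

# Minimal TOPSIS sketch for choosing one solution from a Pareto set.
# Criteria here (assumed): UTS in MPa (benefit), elongation in % (benefit),
# HAZ hardness drop in HV (cost). All numbers are invented.

def topsis(matrix, weights, benefit):
    m = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(m)]
    V = [[weights[j] * row[j] / norms[j] for j in range(m)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]
    scores = []
    for row in V:
        d_plus = math.dist(row, ideal)    # distance to the ideal point
        d_minus = math.dist(row, worst)   # distance to the anti-ideal point
        scores.append(d_minus / (d_plus + d_minus))
    return scores

alts = [[210.0, 8.0, 30.0],
        [230.0, 6.5, 40.0],
        [220.0, 7.5, 25.0]]
scores = topsis(alts, [0.4, 0.3, 0.3], [True, True, False])
best = max(range(len(alts)), key=scores.__getitem__)
print(best, [round(s, 3) for s in scores])
```

The alternative closest to the ideal point and farthest from the anti-ideal point wins; entropy weighting (as in the paper) would replace the hand-picked weights with weights derived from the data's dispersion.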

  11. Selection of Wire Electrical Discharge Machining Process Parameters on Stainless Steel AISI Grade-304 using Design of Experiments Approach

    NASA Astrophysics Data System (ADS)

    Lingadurai, K.; Nagasivamuni, B.; Muthu Kamatchi, M.; Palavesam, J.

    2012-06-01

    Wire electrical discharge machining (WEDM) is a specialized thermal machining process capable of accurately machining parts of hard materials with complex shapes. Parts having sharp edges that are difficult to machine by mainstream machining processes can be easily machined by WEDM. In this work, a Design of Experiments (DOE) approach is reported for stainless steel AISI grade-304, which is used in cryogenic vessels, evaporators, hospital surgical equipment, marine equipment, fasteners, nuclear vessels, feed water tubing, valves, refrigeration equipment, etc., machined by WEDM with a brass wire electrode. The DOE method is used to formulate the experimental layout, to analyze the effect of each parameter on the machining characteristics, and to predict the optimal choice for each WEDM parameter, namely voltage, pulse ON time, pulse OFF time and wire feed. It is found that these parameters have a significant influence on machining characteristics such as metal removal rate (MRR), kerf width and surface roughness (SR). The analysis of the DOE reveals that, in general, the pulse ON time significantly affects the kerf width and the wire feed rate affects the SR, while the input voltage mainly affects the MRR.
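    A two-level full-factorial layout over the four parameters named above, and the main-effect estimates derived from it, can be sketched as follows. The factor levels and the response law are invented stand-ins; the study's actual layout and measured responses differ.

```python
from itertools import product

# Sketch of a 2-level full-factorial DOE over the four WEDM parameters
# named in the abstract. Levels and the MRR "response" are synthetic.

factors = {"voltage": (40, 60), "pulse_on": (4, 8),
           "pulse_off": (6, 10), "wire_feed": (2, 6)}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))   # 2^4 = 16 runs

def mrr(run):
    # assumed linear response, for illustration only
    return 0.5 * run["voltage"] + 2.0 * run["pulse_on"] - 0.8 * run["pulse_off"]

def main_effect(name):
    """Average response at the high level minus average at the low level."""
    lo, hi = factors[name]
    hi_avg = sum(mrr(r) for r in runs if r[name] == hi) / (len(runs) / 2)
    lo_avg = sum(mrr(r) for r in runs if r[name] == lo) / (len(runs) / 2)
    return hi_avg - lo_avg

effects = {n: round(main_effect(n), 3) for n in factors}
print(effects)
```

Ranking the absolute main effects is the simplest DOE screening step; a real analysis would add replication, ANOVA, and interaction terms.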

  13. Effect of Thermal Budget on the Electrical Characterization of Atomic Layer Deposited HfSiO/TiN Gate Stack MOSCAP Structure

    PubMed Central

    Khan, Z. N.; Ahmed, S.; Ali, M.

    2016-01-01

    Metal Oxide Semiconductor (MOS) capacitors (MOSCAPs) have been instrumental in realizing CMOS nano-electronics across back-to-back technology nodes. High-k gate stacks, including the desirable metal gate processing and its integration into CMOS technology, remain an active research area projecting the solution to address the requirements of technology roadmaps. Screening, selection and deposition of high-k gate dielectrics, post-deposition thermal processing, the choice of metal gate structure and its post-metal-deposition annealing are important parameters for optimizing the process and possibly addressing the energy efficiency of CMOS electronics at the nano scale. The atomic layer deposition technique is used throughout this work because of its known deposition kinetics, resulting in excellent electrical properties and a conformal device structure. The dynamics of annealing greatly influence the electrical properties of the gate stack and consequently the reliability of the process as well as of the manufacturable device. Again, the choice of annealing technique (migration of thermal flux into the layer), time-temperature cycle and sequence are key parameters influencing the device’s output characteristics. This work presents a careful selection of annealing process parameters to provide a sufficient thermal budget to a Si MOSCAP with an atomic-layer-deposited HfSiO high-k gate dielectric and a TiN gate metal. Post-process annealing temperatures in the range of 600 °C to 1000 °C with rapid dwell times provide a better trade-off between the desirable Capacitance-Voltage hysteresis performance and the leakage current. Defect dynamics are thought to be responsible for the evolution of the electrical characteristics in this Si MOSCAP structure, which was specifically designed to tune the trade-off at low frequency for device applications. PMID:27571412

  14. Physical properties and microstructure study of stainless steel 316L alloy fabricated by selective laser melting

    NASA Astrophysics Data System (ADS)

    Islam, Nurul Kamariah Md Saiful; Harun, Wan Sharuzi Wan; Ghani, Saiful Anwar Che; Omar, Mohd Asnawi; Ramli, Mohd Hazlen; Ismail, Muhammad Hussain

    2017-12-01

    Selective Laser Melting (SLM) exemplifies the 21st century's manufacturing infrastructure, in which powdered raw material is melted by a high-energy focused laser and built up layer by layer until it forms three-dimensional metal parts. The SLM process involves a variety of process parameters which affect the final material properties. 316L stainless steel compacts were manufactured by SLM while manipulating the building orientation and powder layer thickness parameters. The effects of the manipulated parameters on the relative density and dimensional accuracy of the 316L stainless steel compacts, in the as-built condition, were examined and analysed. The relationship between the microstructures and the physical properties of the fabricated 316L stainless steel compacts was also investigated in this study. The results revealed that the 90° building orientation yields higher relative density and dimensional accuracy than the 0° building orientation. Building orientation was found to have a more significant effect on the dimensional accuracy and relative density of SLM compacts than build layer thickness. Nevertheless, the presence of a large number of pores of various sizes greatly reduces the achievable density.

  15. Thermal dynamic behavior during selective laser melting of K418 superalloy: numerical simulation and experimental verification

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Xiang, Yu; Wei, Zhengying; Wei, Pei; Lu, Bingheng; Zhang, Lijuan; Du, Jun

    2018-04-01

    During selective laser melting (SLM) of K418 powder, the influence of process parameters such as laser power P and scanning speed v on the dynamic thermal behavior and morphology of the melted tracks was investigated numerically. A 3D finite difference method was established to predict the dynamic thermal behavior and flow mechanism of K418 powder irradiated by a Gaussian laser beam. A three-dimensional randomly packed powder bed composed of spherical particles was generated by the discrete element method, taking into account powder particle information including the particle size distribution and packing density. The volume shrinkage and temperature-dependent thermophysical parameters such as thermal conductivity, specific heat, and other physical properties were also considered. The volume-of-fluid method was applied to reconstruct the free surface of the molten pool during SLM. The geometrical features, continuity boundaries, and irregularities of the molten pool were shown to be largely determined by the laser energy density. The numerical results are in good agreement with the experiments, indicating that the approach is reasonable and effective. The results provide in-depth insight into the complex physical behavior during SLM and guide the optimization of process parameters.

  16. IN718 Additive Manufacturing Properties and Influences

    NASA Technical Reports Server (NTRS)

    Lambert, Dennis M.

    2015-01-01

    The results of tensile, fracture, and fatigue testing of IN718 coupons produced using the selective laser melting (SLM) additive manufacturing technique are presented. The data have been "sanitized" to remove the numerical values, although certain references to material standards are provided. This document provides some knowledge of the effect of variation of the controlled build parameters used in the SLM process, offers a snapshot of the capabilities of SLM in industry at present, and shares some of the lessons learned along the way. For the build parameter characterization, the parameters were varied over a range centered about the machine manufacturer's recommended value, and in each case they were varied individually, although some co-variance of those parameters would be expected. Tensile, fracture, and high-cycle fatigue properties equivalent to wrought IN718 are achievable with SLM-produced IN718. Build and post-build processes need to be determined and then controlled to established limits to accomplish this. It is recommended that a multi-variable evaluation, e.g., design-of-experiment (DOE), of the build parameters be performed to better evaluate the co-variance of the parameters.

  18. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, quantization is responsible for quality losses; on the other hand, it allows bit rates to be reduced to the target one. If the mode selection is based on the rate-distortion criterion, the quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for Intra coding.
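    The rate-distortion selection itself is a one-line decision rule: minimize the Lagrangian cost J = D + λR over the candidates. The sketch below applies it to invented (QP, distortion, rate) triples; the numbers are illustrative stand-ins, not measurements from the paper's encoder.

```python
# Sketch of Lagrangian rate-distortion selection, J = D + lambda * R.
# The candidate triples (QP, distortion D, rate R) are hypothetical.

def select_qp(candidates, lam):
    """Return the QP whose Lagrangian cost D + lam*R is smallest."""
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])["QP"]

candidates = [
    {"QP": 22, "D": 10.0, "R": 900.0},
    {"QP": 27, "D": 25.0, "R": 500.0},
    {"QP": 32, "D": 60.0, "R": 260.0},
]
print(select_qp(candidates, lam=0.01))   # small lambda: distortion dominates
print(select_qp(candidates, lam=0.20))   # large lambda: rate dominates
```

Sweeping λ traces out the encoder's operational rate-distortion curve; the hardware described in the paper effectively evaluates several such candidates per block by re-processing the same residuals.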

  19. Space processing of crystalline materials: A study of known methods of electrical characterization of semiconductors: Bibliography

    NASA Technical Reports Server (NTRS)

    Castle, J. G.

    1976-01-01

    A selective bibliography is given on electrical characterization techniques for semiconductors. Emphasis is placed on noncontacting techniques for the standard electrical parameters for monitoring crystal growth in space, preferably in real time with high resolution.

  20. Bayesian inference for OPC modeling

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.

    2016-03-01

    The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDIs) to reveal champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not and outline continued experiments to vet the method.
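    The MCMC idea can be sketched in miniature. The paper uses an affine invariant ensemble sampler over a full lithographic model; the stand-in below is a plain random-walk Metropolis sampler fitting the mean of a Gaussian likelihood to fixed, invented observations, purely to illustrate how a chain explores a posterior.

```python
import math, random

# Minimal random-walk Metropolis sampler for one model parameter.
# This is a simplified stand-in for the paper's AIES sampler; the data
# and flat prior are invented for illustration.

def log_post(mu, data, sigma=1.0):
    # flat prior; Gaussian log-likelihood up to an additive constant
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(data, n_steps=5000, step=0.5, seed=0):
    rng = random.Random(seed)
    mu, chain = 0.0, []
    for _ in range(n_steps):
        prop = mu + rng.gauss(0, step)
        delta = log_post(prop, data) - log_post(mu, data)
        if delta >= 0 or rng.random() < math.exp(delta):
            mu = prop                      # accept the proposal
        chain.append(mu)
    return chain[n_steps // 2:]            # discard burn-in

data = [2.6, 3.1, 2.9, 3.4, 3.0, 2.7, 3.2, 3.1, 2.8, 3.3]
chain = metropolis(data)
print(round(sum(chain) / len(chain), 2))   # close to the sample mean, 3.01
```

The retained chain approximates the posterior; its quantiles give the highest-density-interval style summaries the abstract refers to. An ensemble sampler replaces the single walker with many that propose moves using each other's positions.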

  1. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    NASA Astrophysics Data System (ADS)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran, and regional analysis of this parameter is therefore important. The ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity, so the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting the annual ET0 and its four effective parameters. The RMSE results showed that the abilities of the PPCC test and the L-moment method for regional analysis of reference evapotranspiration and its effective parameters were similar. The results also showed that the distribution type of the parameters affecting ET0 values can affect the distribution of reference evapotranspiration.
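    The L-moment method rests on sample L-moments. A minimal sketch of computing the first two L-moments and the L-skewness t3 (the quantities plotted on L-moment diagrams when choosing among candidate distributions) is given below; the input series is invented, standing in for a station's annual ET0 record.

```python
# Sketch of sample L-moments from probability-weighted moments b0, b1, b2:
#   l1 = b0, l2 = 2*b1 - b0, l3 = 6*b2 - 6*b1 + b0, t3 = l3 / l2.
# The input series is illustrative, not the stations' ET0 data.

def l_moments(sample):
    x = sorted(sample)                 # order statistics, ascending
    n = len(x)
    b0 = sum(x) / n
    # b1, b2 weight each order statistic by its (0-based) rank j
    b1 = sum(j * x[j] for j in range(n)) / (n * (n - 1))
    b2 = sum(j * (j - 1) * x[j] for j in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2             # mean, L-scale, L-skewness t3

vals = [1120, 1185, 1210, 1250, 1290, 1335, 1400, 1510]
l1, l2, t3 = l_moments(vals)
print(round(l1, 1), round(l2, 1), round(t3, 3))
```

Plotting (t3, t4) pairs from many stations against theoretical distribution curves is how an L-moment diagram singles out a regional distribution such as Pearson type III.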

  2. Methodological development for selection of significant predictors explaining fatal road accidents.

    PubMed

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an open research question. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection issues. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during the indicated years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.

  3. The principle of sufficiency and the evolution of control: using control analysis to understand the design principles of biological systems.

    PubMed

    Brown, Guy C

    2010-10-01

    Control analysis can be used to try to understand why (quantitatively) systems are the way that they are, from rate constants within proteins to the relative amount of different tissues in organisms. Many biological parameters appear to be optimized to maximize rates under the constraint of minimizing space utilization. For any biological process with multiple steps that compete for control in series, evolution by natural selection will tend to even out the control exerted by each step. This is for two reasons: (i) shared control maximizes the flux for minimum protein concentration, and (ii) the selection pressure on any step is proportional to its control, and selection will, by increasing the rate of a step (relative to other steps), decrease its control over a pathway. The control coefficient of a parameter P over fitness can be defined as (∂N/N)/(∂P/P), where N is the number of individuals in the population, and ∂N is the change in that number as a result of the change in P. This control coefficient is equal to the selection pressure on P. I argue that biological systems optimized by natural selection will conform to a principle of sufficiency, such that the control coefficient of all parameters over fitness is 0. Thus in an optimized system small changes in parameters will have a negligible effect on fitness. This principle naturally leads to (and is supported by) the dominance of wild-type alleles over null mutants.
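    The control coefficient defined above, C_P = (∂N/N)/(∂P/P), can be illustrated numerically with a small finite perturbation. The saturating fitness function below is an arbitrary example, not from the paper; it shows control falling toward zero as the parameter saturates, echoing the principle of sufficiency.

```python
# Numeric illustration of a control coefficient, C_P = (dN/N)/(dP/P),
# estimated by a small relative perturbation of P. The fitness function
# N(P) = P/(1+P) is an arbitrary saturating example.

def control_coefficient(fitness, P, rel_step=1e-6):
    dP = P * rel_step
    N0, N1 = fitness(P), fitness(P + dP)
    return ((N1 - N0) / N0) / (dP / P)

fitness = lambda P: P / (1.0 + P)
print(round(control_coefficient(fitness, 0.1), 3))    # ~0.909: strong control
print(round(control_coefficient(fitness, 100.0), 3))  # ~0.010: near sufficiency
```

For this function the coefficient is analytically 1/(1+P), so increasing P drives its own control toward zero, exactly the self-limiting selection dynamic the abstract describes.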

  4. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
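    The filter view of power-law noise can be sketched with the standard fractional-difference recursion, a common construction in this literature: coloured noise is white noise convolved with the impulse response h_0 = 1, h_k = h_{k-1}(k - 1 + a/2)/k, where a is the power-law index. Whether this matches the author's exact filter is an assumption; the recursion itself is standard.

```python
import random

# Sketch: generate 1/f^a power-law noise by filtering white noise with the
# fractional-difference impulse response. a = 0 gives white noise; a = 2
# gives a random walk (all filter weights equal to 1).

def powerlaw_filter(n, a):
    h = [1.0]
    for k in range(1, n):
        h.append(h[-1] * (k - 1 + a / 2) / k)
    return h

def powerlaw_noise(n, a, seed=1):
    rng = random.Random(seed)
    w = [rng.gauss(0, 1) for _ in range(n)]       # white driving noise
    h = powerlaw_filter(n, a)
    # causal convolution of the white noise with the filter
    return [sum(h[k] * w[i - k] for k in range(i + 1)) for i in range(n)]

print(powerlaw_filter(5, 2.0))   # a = 2: [1.0, 1.0, 1.0, 1.0, 1.0]
```

Because the coloured series is white noise passed through a known filter, the corresponding covariance structure can be handled through the filter rather than by inverting a dense covariance matrix directly, which is the efficiency gain the abstract alludes to.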

  6. Parameter learning for performance adaptation

    NASA Technical Reports Server (NTRS)

    Peek, Mark D.; Antsaklis, Panos J.

    1990-01-01

    A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
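    The optimization step above can be sketched with a simplified Hooke-and-Jeeves-style exploratory search (step halving only; the pattern move of the full algorithm is omitted for brevity). The quadratic objective is an invented stand-in for a measured performance index.

```python
# Simplified Hooke-and-Jeeves-style pattern search: exploratory moves along
# each coordinate with step halving on failure. The objective below is an
# invented quadratic standing in for a measured performance index.

def hooke_jeeves(f, x0, step=1.0, tol=1e-6):
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):      # keep any improving move
                    x, improved = trial, True
                    break
        if not improved:
            step /= 2.0                  # shrink the exploratory step
    return x

f = lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2   # optimum at (2, -1)
x = hooke_jeeves(f, [0.0, 0.0])
print([round(v, 3) for v in x])
```

Like the learning system in the abstract, this needs no model of the plant: f can be any black-box performance measurement from simulation or experiment.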

  7. Predictive modeling, simulation, and optimization of laser processing techniques: UV nanosecond-pulsed laser micromachining of polymers and selective laser melting of powder metals

    NASA Astrophysics Data System (ADS)

    Criales Escobar, Luis Ernesto

    One of the most rapidly evolving areas of research is the utilization of lasers for micro-manufacturing and additive manufacturing purposes. The use of the laser beam as a manufacturing tool arises from the need for flexible and rapid manufacturing at low-to-mid cost. Laser micro-machining provides an advantage over mechanical micro-machining due to faster production times for large batch sizes and the avoidance of the high costs associated with dedicated tools. Laser-based additive manufacturing enables the processing of powder metals for direct and rapid fabrication of products. Therefore, laser processing can be viewed as a fast, flexible, and cost-effective approach compared to traditional manufacturing processes. Two types of laser processing techniques are studied: laser ablation of polymers for micro-channel fabrication and selective laser melting of metal powders. Initially, an experimental feasibility study of laser-based micro-channel fabrication in poly(dimethylsiloxane) (PDMS) is presented; in particular, the effectiveness of utilizing a nanosecond-pulsed laser as the energy source for laser ablation is studied. The results are analyzed statistically and a relationship between process parameters and micro-channel dimensions is established. Additionally, a process model is introduced for predicting channel depth, and model outputs are compared and analyzed against experimental results. The second part of this research focuses on a physics-based FEM approach for predicting the temperature profile and melt pool geometry in selective laser melting (SLM) of metal powders. Temperature profiles are calculated for a moving laser heat source to understand the temperature rise due to heating during SLM. Based on the predicted temperature distributions, the melt pool geometry, i.e. the locations at which melting of the powder material occurs, is determined. 
Simulation results are compared against data obtained from experimental Inconel 625 test coupons fabricated at the National Institute for Standards & Technology via response surface methodology techniques. The main goal of this research is to develop a comprehensive predictive model with which the effect of powder material properties and laser process parameters on the built quality and integrity of SLM-produced parts can be better understood. By optimizing process parameters, SLM as an additive manufacturing technique is not only possible, but also practical and reproducible.

  8. Surveillance system and method having parameter estimation and operating mode partitioning

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2003-01-01

    A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
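    The partition-train-select loop in the claim can be sketched compactly. The sketch below is an illustrative reduction, not the patented method: it uses a hypothetical per-mode mean model as the "process submodel" and assumes the operating mode of the current observation has already been determined.

```python
import statistics

def train_submodels(training_data):
    """Partition training examples by operating mode and fit one trivial
    submodel (the per-signal mean) per mode. The mean model is a
    stand-in for whatever process submodel is actually trained."""
    by_mode = {}
    for mode, signals in training_data:
        by_mode.setdefault(mode, []).append(signals)
    return {mode: [statistics.fmean(col) for col in zip(*rows)]
            for mode, rows in by_mode.items()}

def estimate(models, mode, observed):
    """Select the submodel for the determined operating mode and return
    the estimated signal values plus residuals for surveillance."""
    expected = models[mode]
    residuals = [o - e for o, e in zip(observed, expected)]
    return expected, residuals
```

    A surveillance loop would then alarm on residuals exceeding a mode-specific threshold.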

  9. MicroCT parameters for multimaterial elements assessment

    NASA Astrophysics Data System (ADS)

    de Araújo, Olga M. O.; Silva Bastos, Jaqueline; Machado, Alessandra S.; dos Santos, Thaís M. P.; Ferreira, Cintia G.; Rosifini Alves Claro, Ana Paula; Lopes, Ricardo T.

    2018-03-01

Microtomography is a non-destructive testing technique for quantitative and qualitative analysis. Investigating multimaterial elements with large density differences can produce artifacts that degrade image quality, depending on the choice of additional filtration. The aim of this study is to select the parameters most appropriate for the analysis of bone tissue containing a metallic implant. The results present MCNPX-code simulations of the energy distribution without an additional filter and with aluminum, copper and brass filters, together with the corresponding reconstructed images, showing the importance of these parameter choices in the image acquisition process in computed microtomography.

  10. Method for depositing layers of high quality semiconductor material

    DOEpatents

    Guha, Subhendu; Yang, Chi C.

    2001-08-14

    Plasma deposition of substantially amorphous semiconductor materials is carried out under a set of deposition parameters which are selected so that the process operates near the amorphous/microcrystalline threshold. This threshold varies as a function of the thickness of the depositing semiconductor layer; and, deposition parameters, such as diluent gas concentrations, must be adjusted as a function of layer thickness. Also, this threshold varies as a function of the composition of the depositing layer, and in those instances where the layer composition is profiled throughout its thickness, deposition parameters must be adjusted accordingly so as to maintain the amorphous/microcrystalline threshold.

  11. Decision support for operations and maintenance (DSOM) system

    DOEpatents

    Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA

    2006-03-21

A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring parameters necessary to characterize the actual operating condition of the process, validating data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, drawing from a set of pre-defined algorithms created using best engineering practices an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.

  12. Hybrid approach of selecting hyperparameters of support vector machine for regression.

    PubMed

    Jeng, Jin-Tsong

    2006-06-01

To select the hyperparameters of the support vector machine for regression (SVR), a hybrid approach is proposed to determine the kernel parameter of the Gaussian kernel function and the epsilon value of Vapnik's epsilon-insensitive loss function. The proposed hybrid approach combines a competitive agglomeration (CA) clustering algorithm and a repeated SVR (RSVR) approach. Since the CA clustering algorithm finds a nearly optimal number of clusters and the cluster centers during the clustering process, it is applied to select the Gaussian kernel parameter. Additionally, an RSVR approach that relies on the standard deviation of the training error is proposed to obtain the epsilon in the loss function. Finally, two functions, one real data set (a time series of the quarterly unemployment rate for West Germany) and the identification of a nonlinear plant are used to verify the usefulness of the hybrid approach.
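    The epsilon-selection idea can be illustrated with a toy regressor. The sketch below substitutes ordinary least squares for SVR and omits the CA clustering step for the kernel parameter; setting epsilon from the standard deviation of the training error is the only part of the RSVR procedure it reproduces.

```python
import statistics

def fit_line(xs, ys):
    """Ordinary least squares y = a*x + b. A stand-in regressor; the
    paper fits a support vector machine for regression at this step."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def epsilon_from_training_error(xs, ys):
    """Fit once, then set Vapnik's epsilon to the standard deviation of
    the training error -- the noise-level heuristic behind the RSVR step
    (the repeated refitting of the full RSVR loop is omitted here)."""
    a, b = fit_line(xs, ys)
    residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
    return statistics.pstdev(residuals)
```

    On noise-free data this yields epsilon near zero (a hard tube), while noisy data widens the tube roughly to the noise level.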

  13. Optimising Drug Solubilisation in Amorphous Polymer Dispersions: Rational Selection of Hot-melt Extrusion Processing Parameters.

    PubMed

    Li, Shu; Tian, Yiwei; Jones, David S; Andrews, Gavin P

    2016-02-01

The aim of this article was to construct a T-ϕ phase diagram for a model drug (FD) and an amorphous polymer (Eudragit® EPO) and to use this information to understand how temperature-composition coordinates influence the final properties of the extrudate. Defining process boundaries and understanding drug solubility in polymeric carriers is of utmost importance and will help in the successful manufacture of new delivery platforms for BCS class II drugs. Physically mixed felodipine (FD)-Eudragit® EPO (EPO) binary mixtures with pre-determined weight fractions were analysed using DSC to measure the endset of melting and the glass transition temperature. Extrudates of 10 wt% FD-EPO were processed using temperatures (110°C, 126°C, 140°C and 150°C) selected from the temperature-composition (T-ϕ) phase diagrams and processing screw speeds of 20, 100 and 200 rpm. Extrudates were characterised using powder X-ray diffraction (PXRD) and optical, polarised light and Raman microscopy. To ensure formation of a binary amorphous drug dispersion (ADD) at a specific composition, HME processing temperatures should at least equal, or exceed, the corresponding temperature value on the liquid-solid curve in a F-H T-ϕ phase diagram. If extruded between the spinodal and liquid-solid curves, the lack of a thermodynamic driving force to attain complete drug amorphisation may be compensated for through the use of an increased screw speed. Constructing F-H T-ϕ phase diagrams is valuable not only for understanding drug-polymer miscibility behaviour but also for rationalising the selection of important HME processing parameters to ensure miscibility of drug and polymer.

  14. Effect of key parameters on the selective acid leach of nickel from mixed nickel-cobalt hydroxide

    NASA Astrophysics Data System (ADS)

    Byrne, Kelly; Hawker, William; Vaughan, James

    2017-01-01

Mixed nickel-cobalt hydroxide precipitate (MHP) is a relatively recent intermediate product in primary nickel production. The material is now being produced on a large scale (approximately 60,000 t/y Ni as MHP) at facilities in Australia (Ravensthorpe, First Quantum Minerals) and Papua New Guinea (Ramu, MCC/Highlands Pacific). The University of Queensland Hydrometallurgy research group developed a new processing technology to refine MHP based on a selective acid leach. This process provides a streamlined route to a high-purity nickel product compared with conventional leaching/solvent extraction processes. The selective leaching of nickel from MHP involves stabilising manganese and cobalt in the solid phase using an oxidant. This paper describes a batch reactor study investigating the effect of the timing of acid and oxidant addition on the rate and extent of nickel, cobalt and manganese leaching from industrial MHP. For the conditions studied, it is concluded that the simultaneous addition of acid and oxidant provides the best process outcomes.

  15. Effect of insecticide resistance on development, longevity and reproduction of field or laboratory selected Aedes aegypti populations.

    PubMed

    Martins, Ademir Jesus; Ribeiro, Camila Dutra e Mello; Bellinato, Diogo Fernandes; Peixoto, Alexandre Afranio; Valle, Denise; Lima, José Bento Pereira

    2012-01-01

Aedes aegypti dispersion is the major reason for the increase in dengue transmission in South America. In Brazil, control of this mosquito strongly relies on the use of pyrethroids and organophosphates against adults and larvae, respectively. In consequence, many Ae. aegypti field populations are resistant to these compounds. Resistance has a significant adaptive value in the presence of insecticide treatment. However, some selected mechanisms can influence important biological processes, leading to a high fitness cost in the absence of insecticide pressure. We investigated the dynamics of insecticide resistance and its potential fitness cost in five field populations and in a lineage selected for deltamethrin resistance in the laboratory for nine generations. For all populations, the life-trait parameters investigated were larval development, sex ratio, adult longevity, relative amount of ingested blood, rate of ovipositing females, egg-laying size and egg viability. In the five natural populations, the effects on the life-trait parameters were discrete but directly proportional to the resistance level. In addition, several viability parameters were strongly affected in the laboratory-selected population compared to its unselected control. Our results suggest that mechanisms selected for organophosphate and pyrethroid resistance caused the accumulation of alleles with negative effects on different life-traits and corroborate the hypothesis that insecticide resistance is associated with a high fitness cost.

  16. Effect of Insecticide Resistance on Development, Longevity and Reproduction of Field or Laboratory Selected Aedes aegypti Populations

    PubMed Central

    Bellinato, Diogo Fernandes; Peixoto, Alexandre Afranio; Valle, Denise; Lima, José Bento Pereira

    2012-01-01

Aedes aegypti dispersion is the major reason for the increase in dengue transmission in South America. In Brazil, control of this mosquito strongly relies on the use of pyrethroids and organophosphates against adults and larvae, respectively. In consequence, many Ae. aegypti field populations are resistant to these compounds. Resistance has a significant adaptive value in the presence of insecticide treatment. However, some selected mechanisms can influence important biological processes, leading to a high fitness cost in the absence of insecticide pressure. We investigated the dynamics of insecticide resistance and its potential fitness cost in five field populations and in a lineage selected for deltamethrin resistance in the laboratory for nine generations. For all populations, the life-trait parameters investigated were larval development, sex ratio, adult longevity, relative amount of ingested blood, rate of ovipositing females, egg-laying size and egg viability. In the five natural populations, the effects on the life-trait parameters were discrete but directly proportional to the resistance level. In addition, several viability parameters were strongly affected in the laboratory-selected population compared to its unselected control. Our results suggest that mechanisms selected for organophosphate and pyrethroid resistance caused the accumulation of alleles with negative effects on different life-traits and corroborate the hypothesis that insecticide resistance is associated with a high fitness cost. PMID:22431967

  17. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling-time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
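    The underlying design criterion, choosing sampling times that minimize the variance of the parameter estimates, can be shown on a one-parameter decay model. The brute-force search below stands in for the paper's quantum-inspired evolutionary algorithm; the model y(t) = exp(-k*t) and unit-variance additive noise are assumptions made for illustration.

```python
import math
from itertools import combinations

def fisher_info(time_points, k=0.5):
    """Fisher information for the decay rate k in y(t) = exp(-k*t) with
    unit-variance noise: I(k) = sum over t of (dy/dk)^2 = (t*exp(-k*t))^2.
    The asymptotic variance of the estimate is 1/I(k)."""
    return sum((t * math.exp(-k * t)) ** 2 for t in time_points)

def best_design(candidates, n_points, k=0.5):
    """Exhaustively pick the n_points sampling times that maximise the
    Fisher information (i.e. minimise the variance of k-hat). A
    brute-force stand-in for the evolutionary search in the paper."""
    return max(combinations(candidates, n_points),
               key=lambda tp: fisher_info(tp, k))
```

    For k = 0.5 the sensitivity t*exp(-k*t) peaks at t = 1/k = 2, so the optimal design clusters its samples around that time rather than spreading them uniformly.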

  18. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem, in which there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach applied to select the sensors and model tuning parameters that minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented and compared to those from an alternative sensor selection strategy.
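    The flavor of the sensor-selection problem can be sketched with a linear toy model. This is not the paper's Kalman-filter formulation: it greedily picks rows of a hypothetical sensitivity matrix H to minimize trace((H^T H)^-1), the summed least-squares estimation variance for two health parameters.

```python
def trace_inv_normal(rows):
    """trace((H^T H)^-1) for a matrix whose rows are the selected
    sensors' sensitivities to two health parameters (2x2 case)."""
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    d = sum(r[1] * r[1] for r in rows)
    det = a * d - b * b
    if det <= 1e-12:
        return float("inf")  # parameters not yet jointly observable
    return (a + d) / det

def select_sensors(H, n_select):
    """Greedy selection: repeatedly add the sensor that most reduces the
    summed estimation variance. A simplified stand-in for minimizing
    the Kalman filter mean squared estimation error."""
    chosen, remaining = [], list(range(len(H)))
    for _ in range(n_select):
        best = min(remaining, key=lambda i: trace_inv_normal(
            [H[j] for j in chosen] + [H[i]]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

    Note how the criterion rewards complementary sensors: a nearly redundant row barely lowers the variance even if it is individually sensitive.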

  19. Seeing the unseen: Complete volcano deformation fields by recursive filtering of satellite radar interferograms

    NASA Astrophysics Data System (ADS)

    Gonzalez, Pablo J.

    2017-04-01

Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing approach demands the pre-described or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is the interferometric phase filtering. There are a large number of phase filtering methods; however, the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter needs essentially two parameters: the size of the filter window and a parameter that sets the filter smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter, basing it on the local interferometric coherence level, but still requires specifying the dimension of the filtering window. Optimal filtered-phase quality usually requires careful selection of those parameters. Therefore, there is a strong need to develop automatic filtering methods suited to automatic processing while maximizing filtered-phase quality. Here, in this paper, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter is based upon the modified Goldstein filter [Baran et al., 2003]. This filtering method improves the quality of the interferograms by performing a recursive iteration using variable (cascade) kernel sizes, and improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission. 
Here, I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (100-200 m) and variable temporal baselines of 70 and 190 days over variably vegetated volcanoes (Mt. Etna, Hawaii and Nyiragongo-Nyamulagira). The differential phase of these examples shows intense localized volcano deformation as well as vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high, and by effectively suppressing phase noise in regions of smooth phase variation. Finally, the method has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M.P., Kampes, B.M., Perski, Z., Lilly, P. (2003). A modification to the Goldstein radar interferogram filter. IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 9, doi:10.1109/TGRS.2003.817212. Goldstein, R.M., Werner, C.L. (1998). Radar interferogram filtering for geophysical applications. Geophysical Research Letters, vol. 25, no. 21, 4035-4038, doi:10.1029/1998GL900033.
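    The core Goldstein operation, weighting each spectral bin by its own magnitude raised to a power, can be shown in one dimension. The sketch below uses a slow pure-Python DFT and a single patch; real implementations work on overlapping 2-D windows with FFTs, and the recursive cascade of kernel sizes proposed in the abstract is not reproduced here.

```python
import cmath

def dft(x):
    """Naive O(N^2) discrete Fourier transform (for illustration only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def goldstein_filter_1d(phase, alpha=0.8):
    """1-D analogue of the Goldstein interferogram filter: transform the
    complex phase signal, scale each spectral bin by |Z_k|^alpha to
    boost dominant fringes relative to noise, transform back."""
    z = [cmath.exp(1j * p) for p in phase]
    Z = dft(z)
    Zf = [Zk * (abs(Zk) ** alpha) for Zk in Z]
    return [cmath.phase(v) for v in idft(Zf)]
```

    A clean fringe (a single spectral peak) passes through unchanged up to wrapping, while broadband noise bins are suppressed relative to it, which is exactly the behaviour the smoothing parameter alpha controls in the 2-D filter.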

  20. Statistical Development and Application of Cultural Consensus Theory

    DTIC Science & Technology

    2012-03-31

Bulletin & Review, 17, 275-286. Schmittmann, V.D., Dolan, C.V., Raijmakers, M.E.J., and Batchelder, W.H. (2010). Parameter identification in... Wu, H., Myung, J.I., and Batchelder, W.H. (2010). Minimum description length model selection of multinomial processing tree models. Psychonomic

  1. Efficiency of the Inertia Friction Welding Process and Its Dependence on Process Parameters

    NASA Astrophysics Data System (ADS)

    Senkov, O. N.; Mahaffey, D. W.; Tung, D. J.; Zhang, W.; Semiatin, S. L.

    2017-07-01

It has been widely assumed, but never proven, that the efficiency of the inertia friction welding (IFW) process is independent of process parameters and is relatively high, i.e., 70 to 95 pct. In the present work, the effect of IFW parameters on process efficiency was established. For this purpose, a series of IFW trials was conducted for the solid-state joining of two dissimilar nickel-base superalloys (LSHR and Mar-M247) using various combinations of initial kinetic energy (i.e., the total weld energy, E_o), initial flywheel angular velocity (ω_o), flywheel moment of inertia (I), and axial compression force (P). The kinetics of the conversion of the welding energy to heating of the faying sample surfaces (i.e., the sample energy) vs parasitic losses to the welding machine itself were determined by measuring the friction torque on the sample surfaces (M_S) and in the machine bearings (M_M). It was found that the rotating parts of the welding machine can consume a significant fraction of the total energy. Specifically, the parasitic losses ranged from 28 to 80 pct of the total weld energy. The losses increased (and the corresponding IFW process efficiency decreased) as P increased (at constant I and E_o), I decreased (at constant P and E_o), and E_o (or ω_o) increased (at constant P and I). The results of this work thus provide guidelines for selecting process parameters which minimize energy losses and increase process efficiency during IFW.
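    The efficiency measurement described above reduces to an energy balance: integrate the sample-side torque times angular velocity over the weld and divide by the initial flywheel kinetic energy. A minimal sketch, assuming sampled torque and speed traces:

```python
def ifw_efficiency(I, omega0, times, M_S, omega):
    """Process efficiency of an inertia friction weld: the energy
    delivered to the faying surfaces (trapezoidal integral of
    M_S * omega over time) divided by the initial flywheel kinetic
    energy 0.5 * I * omega0^2. Inputs are assumed measured traces."""
    E_total = 0.5 * I * omega0 ** 2
    E_sample = sum(
        0.5 * (M_S[i] * omega[i] + M_S[i + 1] * omega[i + 1]) * (times[i + 1] - times[i])
        for i in range(len(times) - 1))
    return E_sample / E_total
```

    The remainder, 1 minus this ratio, is the parasitic fraction lost to the machine, the quantity the study found to range from 28 to 80 pct.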

  2. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
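    The "every detected event contributes" idea can be illustrated for the simplest case. The sketch below evaluates a grid posterior for a mono-exponential decay rate from individual photon delay times, with a flat prior and no instrument response function (which the paper's full method does model).

```python
import math

def decay_rate_posterior(delays, rates):
    """Normalised posterior weights (flat prior) for the rate of a
    mono-exponential decay, evaluated on a grid of candidate rates.
    Each detected photon delay enters the likelihood individually:
    log L(r) = n*log(r) - r*sum(delays)."""
    n, total = len(delays), sum(delays)
    logs = [n * math.log(r) - r * total for r in rates]
    m = max(logs)                      # subtract max for numerical stability
    w = [math.exp(L - m) for L in logs]
    s = sum(w)
    return [x / s for x in w]
```

    The posterior peaks at r = n / sum(delays), i.e. the reciprocal of the mean delay, and its width shrinks as more photons are collected, which is the precision gain the Bayesian treatment quantifies event by event.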

  3. Antioxidant defense parameters as predictive biomarkers for fermentative capacity of active dried wine yeast.

    PubMed

    Gamero-Sandemetrio, Esther; Gómez-Pastor, Rocío; Matallana, Emilia

    2014-08-01

The production of active dried yeast (ADY) is a common practice in industry for the maintenance of yeast starters and as a means of long-term storage. The process, however, causes multiple cell injuries, with oxidative damage being one of the most important stresses. Consequently, dehydration tolerance is a highly appreciated property in yeast for ADY production. In this study we analyzed the cellular redox environment in three Saccharomyces cerevisiae wine strains, which show markedly different fermentative capacities after dehydration. To quantify the effect of dehydration on the S. cerevisiae strains, we used: (i) fluorescent probes; (ii) antioxidant enzyme activities; (iii) intracellular damage; (iv) antioxidant metabolites; and (v) gene expression, to select a minimal set of biochemical parameters capable of predicting desiccation tolerance in wine yeasts. Our results show that naturally enhanced antioxidant defenses prevent oxidative damage after wine yeast biomass dehydration and improve fermentative capacity. Based on these results we chose four easily assayable parameters/biomarkers for the selection of industrial yeast strains of interest for ADY production: trehalose and glutathione levels, and glutathione reductase and catalase enzymatic activities. Yeast strains selected in accordance with this process display high levels of trehalose, low levels of oxidized glutathione, a high induction of glutathione reductase activity, and a high basal level and sufficient induction of catalase activity, which are properties inherent in superior ADY strains.

  4. Optimizing pulsed Nd:YAG laser beam welding process parameters to attain maximum ultimate tensile strength for thin AISI316L sheet using response surface methodology and simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Torabi, Amir; Kolahan, Farhad

    2018-07-01

Pulsed laser welding is a powerful technique especially suitable for joining thin sheet metals. In this study, based on experimental data, pulsed laser welding of thin AISI316L austenitic stainless steel sheet has been modeled and optimized. The experimental data required for modeling were gathered as per a Central Composite Design matrix in Response Surface Methodology (RSM) with full replication of 31 runs. Ultimate Tensile Strength (UTS) is considered the main quality measure in laser welding. Furthermore, the important process parameters, including peak power, pulse duration, pulse frequency and welding speed, are selected as input process parameters. The relation between the input parameters and the output response is established via a full quadratic response surface regression at a confidence level of 95%. The adequacy of the regression model was verified using Analysis of Variance results. The main effects of each factor and its interactions with the other factors were analyzed graphically in contour and surface plots. Next, to maximize joint UTS, the best combinations of parameter levels were specified using RSM. Moreover, the mathematical model was embedded in a Simulated Annealing (SA) optimization algorithm to determine the optimal values of the process parameters. The results obtained by the SA and RSM optimization techniques are in good agreement. The optimal parameter settings of 1800 W peak power, 4.5 ms pulse duration, 4.2 Hz frequency and 0.5 mm/s welding speed result in a welded joint with 96% of the base metal UTS. Computational results clearly demonstrate that the proposed modeling and optimization procedures perform quite well for the pulsed laser welding process.
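    The RSM-plus-SA coupling can be sketched generically. The quadratic surface below is a made-up stand-in for the fitted UTS regression (its coefficients are not the paper's), and the annealer is a minimal textbook implementation over two of the four process parameters:

```python
import math, random

def response_surface(p):
    """Illustrative concave quadratic standing in for the fitted RSM
    model of UTS (in % of base metal); coefficients are invented,
    with a peak placed at 1800 W peak power and 4.5 ms pulse duration."""
    peak_power, pulse_dur = p
    return 96.0 - (peak_power - 1800.0) ** 2 / 1e4 - (pulse_dur - 4.5) ** 2

def anneal(f, start, bounds, n_iter=5000, T0=5.0, seed=1):
    """Minimal simulated annealing maximiser: random neighbour moves,
    accept worse moves with probability exp(dE/T), geometric cooling."""
    rng = random.Random(seed)
    x, fx = list(start), f(start)
    best, fbest = list(x), fx
    T = T0
    for _ in range(n_iter):
        cand = [min(hi, max(lo, xi + rng.uniform(-0.05, 0.05) * (hi - lo)))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        if fc >= fx or rng.random() < math.exp((fc - fx) / T):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = list(x), fx
        T *= 0.999
    return best, fbest
```

    Run on the stand-in surface, the annealer recovers the built-in optimum, mirroring how the paper's SA search and the RSM optimum agreed.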

  5. Ring rolling process simulation for geometry optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

Ring rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS Isight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for any combination of the input parameters. After calculating the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms was applied, minimizing the error between each obtained dimension and its nominal value. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.

  6. Fluid density and concentration measurement using noninvasive in situ ultrasonic resonance interferometry

    DOEpatents

    Pope, Noah G.; Veirs, Douglas K.; Claytor, Thomas N.

    1994-01-01

The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure.
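    The "peak value determined by curve fitting" step can be illustrated with three-point parabolic interpolation, one common way to locate a spectral peak between FFT bins; the input here is simply any list of magnitude samples:

```python
def parabolic_peak(values):
    """Refine the location of the largest sample in a spectrum by fitting
    a parabola through it and its two neighbours, returning a fractional
    (sub-bin) peak position. Endpoint bins are excluded so both
    neighbours always exist."""
    i = max(range(1, len(values) - 1), key=lambda k: values[k])
    y0, y1, y2 = values[i - 1], values[i], values[i + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom != 0.0 else 0.0
    return i + offset
```

    The refined peak position would then be mapped through the calibration curves to specific gravity or concentration.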

  7. Fluid density and concentration measurement using noninvasive in situ ultrasonic resonance interferometry

    DOEpatents

    Pope, N.G.; Veirs, D.K.; Claytor, T.N.

    1994-10-25

    The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure. 7 figs.

  8. Improving the quality of extracting dynamics from interspike intervals via a resampling approach

    NASA Astrophysics Data System (ADS)

    Pavlova, O. N.; Pavlov, A. N.

    2018-04-01

    We address the problem of improving the quality of characterizing chaotic dynamics based on point processes produced by different types of neuron models. Despite the presence of embedding theorems for non-uniformly sampled dynamical systems, the case of short data analysis requires additional attention because the selection of algorithmic parameters may have an essential influence on estimated measures. We consider how the preliminary processing of interspike intervals (ISIs) can increase the precision of computing the largest Lyapunov exponent (LE). We report general features of characterizing chaotic dynamics from point processes and show that independently of the selected mechanism for spike generation, the performed preprocessing reduces computation errors when dealing with a limited amount of data.
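    One simple example of ISI preprocessing is resampling the interval sequence onto a uniform time grid before attractor reconstruction. The linear-interpolation scheme below is an illustrative choice, not necessarily the preprocessing used by the authors:

```python
def resample_isi(isis, dt):
    """Turn an interspike-interval sequence into a uniformly sampled
    series: place each ISI value at its spike time, then linearly
    interpolate on a grid of spacing dt. Uniform sampling is what
    standard embedding/Lyapunov estimators implicitly assume."""
    spike_times = []
    t = 0.0
    for isi in isis:
        t += isi
        spike_times.append(t)
    out, t, j = [], spike_times[0], 0
    while t <= spike_times[-1]:
        while spike_times[j + 1] < t:   # advance to the bracketing interval
            j += 1
        t0, t1 = spike_times[j], spike_times[j + 1]
        v0, v1 = isis[j], isis[j + 1]
        out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
        t += dt
    return out
```

    The resampled series can then be delay-embedded and fed to a largest-Lyapunov-exponent estimator in the usual way.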

  9. Simulation and Experimental Studies on Grain Selection and Structure Design of the Spiral Selector for Casting Single Crystal Ni-Based Superalloy.

    PubMed

    Zhang, Hang; Xu, Qingyan

    2017-10-27

Grain selection is an important process in single crystal turbine blade manufacturing. The selector structure is a controlling factor in grain selection, as is directional solidification (DS). In this study, the grain selection and structure design of the spiral selector were investigated through experimentation and simulation. A heat transfer model and a 3D microstructure growth model were established based on the Cellular Automaton-Finite Difference (CA-FD) method for the grain selector. Consequently, the temperature field, the microstructure and the grain orientation distribution were simulated and further verified. The average error of the temperature result was less than 1.5%. The grain selection mechanisms were further analyzed and validated through simulations. The structural design specifications of the selector were suggested based on the two grain selection effects. The structural parameters of the spiral selector, namely, the spiral tunnel diameter (dw), the spiral pitch (hb) and the spiral diameter (hs), were studied and the design criteria for these parameters were proposed. The experimental and simulation results demonstrated that the improved selector could accurately and efficiently produce a single crystal structure.

  10. Simulation and Experimental Studies on Grain Selection and Structure Design of the Spiral Selector for Casting Single Crystal Ni-Based Superalloy

    PubMed Central

    Zhang, Hang; Xu, Qingyan

    2017-01-01

    Grain selection is an important process in single crystal turbine blades manufacturing. Selector structure is a control factor of grain selection, as well as directional solidification (DS). In this study, the grain selection and structure design of the spiral selector were investigated through experimentation and simulation. A heat transfer model and a 3D microstructure growth model were established based on the Cellular automaton-Finite difference (CA-FD) method for the grain selector. Consequently, the temperature field, the microstructure and the grain orientation distribution were simulated and further verified. The average error of the temperature result was less than 1.5%. The grain selection mechanisms were further analyzed and validated through simulations. The structural design specifications of the selector were suggested based on the two grain selection effects. The structural parameters of the spiral selector, namely, the spiral tunnel diameter (dw), the spiral pitch (hb) and the spiral diameter (hs), were studied and the design criteria of these parameters were proposed. The experimental and simulation results demonstrated that the improved selector could accurately and efficiently produce a single crystal structure. PMID:29077067

  11. New Possibilities in the Fabrication of Hybrid Components with Big Dimensions by Means of Selective Laser Melting (SLM)

    NASA Astrophysics Data System (ADS)

    Ascari, A.; Fortunato, A.; Liverani, E.; Gamberoni, A.; Tomesani, L.

    The application of laser technology to welding of dissimilar AISI316 stainless steel components manufactured with selective laser melting (SLM) and traditional methods has been investigated. The role of laser parameters on weld bead formation has been studied experimentally, with particular attention placed on effects occurring at the interface between the two parts. In order to assess weld bead characteristics, standardised tensile tests were carried out on suitable specimens and the fracture zone was analysed. The results highlighted the possibility of exploiting suitable process parameters to appropriately shape the heat affected and fusion zones in order to maximise the mechanical performance of the component and minimise interactions between the two parent metals in the weld bead.

  12. Parameter optimization for selective laser melting of TiAl6V4 alloy by CO2 laser

    NASA Astrophysics Data System (ADS)

    Baitimerov, R. M.; Lykov, P. A.; Radionova, L. V.; Safonov, E. V.

    2017-10-01

    TiAl6V4 alloy is one of the most widely used materials in powder bed fusion additive manufacturing technologies. In recent years selective laser melting (SLM) of TiAl6V4 alloy by fiber laser has been well studied, but SLM by CO2 laser has not. SLM of TiAl6V4 powder by CO2 laser was studied in this paper. Nine 10×10×10 mm cubic specimens were fabricated using different SLM process parameters. All fabricated specimens had a dense structure and a good surface finish without dimensional distortion. The lowest porosity achieved was about 0.5%.
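
    Several of these SLM records weigh individual process parameters against a lumped energy density. For reference, the volumetric energy density commonly used in SLM work combines laser power P, scan speed v, hatch spacing h and layer thickness t as E = P/(v·h·t). A small sketch with illustrative values (not taken from the paper):

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """E = P / (v * h * t), in J/mm^3 -- the lumped quantity these records
    note is useful but, per the NiTi study above, not sufficient on its own
    to predict material response."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# illustrative parameter set only (not from the paper)
E = volumetric_energy_density(power_w=100, scan_speed_mm_s=600,
                              hatch_mm=0.12, layer_mm=0.03)
print(round(E, 1))  # → 46.3 (J/mm^3)
```

    Distinct parameter sets can share the same E yet melt differently, which is why hatch spacing can dominate the response even at fixed energy density.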

  13. Parametric behaviors of CLUBB in simulations of low clouds in the Community Atmosphere Model (CAM)

    DOE PAGES

    Guo, Zhun; Wang, Minghuai; Qian, Yun; ...

    2015-07-03

    In this study, we investigate the sensitivity of simulated low clouds to 14 selected tunable parameters of Cloud Layers Unified By Binormals (CLUBB), a higher order closure (HOC) scheme, and 4 parameters of the Zhang-McFarlane (ZM) deep convection scheme in the Community Atmosphere Model version 5 (CAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space and a generalized linear model is applied to study the responses of simulated cloud fields to tunable parameters. Our results show that the variance in simulated low-cloud properties (cloud fraction and liquid water path) can be explained by the selected tunable parameters in two different ways: macrophysics itself and its interaction with microphysics. First, the parameters related to dynamic and thermodynamic turbulent structure and double Gaussians closure are found to be the most influential parameters for simulating low clouds. The spatial distributions of the parameter contributions show clear cloud-regime dependence. Second, because of the coupling between cloud macrophysics and cloud microphysics, the coefficient of the dissipation term in the total water variance equation is influential. This parameter affects the variance of in-cloud cloud water, which further influences microphysical process rates, such as autoconversion, and eventually low-cloud fraction. Furthermore, this study improves understanding of HOC behavior associated with parameter uncertainties and provides valuable insights for the interaction of macrophysics and microphysics.
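
    The QMC-plus-regression workflow in this record can be sketched compactly: draw a low-discrepancy sample of the parameter space, evaluate the model, and regress the response on the parameters so the coefficients rank parameter influence. The Halton sequence and the toy response below are illustrative stand-ins for the paper's QMC sampler and the CAM5 cloud fields.

```python
import numpy as np

def halton(n, dims):
    """Simple Halton low-discrepancy sequence -- a quasi-Monte Carlo
    design standing in for the paper's sampler."""
    primes = [2, 3, 5, 7, 11, 13][:dims]
    out = np.empty((n, dims))
    for j, b in enumerate(primes):
        for i in range(n):
            f, r, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= b
                r += f * (k % b)
                k //= b
            out[i, j] = r
    return out

# toy "model response": strongly sensitive to parameter 0, weakly to 1,
# insensitive to 2, plus a little noise
X = halton(256, 3)
noise = 0.05 * np.random.default_rng(0).standard_normal(256)
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2] + noise

# generalized-linear-model-style sensitivity: regress response on parameters
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print([round(c, 2) for c in coef[1:]])  # coefficients rank parameter influence
```

    In the paper the same idea separates influential turbulence-structure parameters from the rest; here the recovered coefficients should approximate the generating values 3.0, 0.5 and 0.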

  14. Defect Detection in Arc-Welding Processes by Means of the Line-to-Continuum Method and Feature Selection.

    PubMed

    Garcia-Allende, P Beatriz; Mirapeix, Jesus; Conde, Olga M; Cobo, Adolfo; Lopez-Higuera, Jose M

    2009-01-01

    Plasma optical spectroscopy is widely employed in on-line welding diagnostics. The determination of the plasma electron temperature, which is typically selected as the output monitoring parameter, implies the identification of the atomic emission lines. As a consequence, additional processing stages are required, with a direct impact on the real-time performance of the technique. The line-to-continuum method is a feasible alternative spectroscopic approach and is particularly interesting in terms of its computational efficiency. However, the monitoring signal depends strongly on the chosen emission line. In this paper, a feature selection methodology is proposed to solve the uncertainty regarding the selection of the optimum spectral band, which allows the employment of the line-to-continuum method for on-line welding diagnostics. Field tests have been conducted to demonstrate the feasibility of the solution.
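
    The line-to-continuum method reduces each spectrum to the ratio of an emission line's integrated intensity to the neighbouring continuum level, which is what makes it computationally cheap. A sketch on a synthetic spectrum (band limits and line position are illustrative, not those of the paper):

```python
import numpy as np

def line_to_continuum(wavelength, intensity, line_band, cont_band):
    """Ratio of the background-subtracted, integrated line intensity to the
    nearby continuum level -- the monitoring signal of the method."""
    line = (wavelength >= line_band[0]) & (wavelength <= line_band[1])
    cont = (wavelength >= cont_band[0]) & (wavelength <= cont_band[1])
    continuum = intensity[cont].mean()
    return (intensity[line] - continuum).sum() / continuum

# synthetic spectrum: flat continuum plus one Gaussian emission line at 510 nm
wl = np.linspace(500.0, 520.0, 2001)
spectrum = 10.0 + 100.0 * np.exp(-0.5 * ((wl - 510.0) / 0.2) ** 2)
r = line_to_continuum(wl, spectrum, line_band=(509.0, 511.0),
                      cont_band=(515.0, 519.0))
print(round(r, 1))
```

    The paper's feature-selection step then asks which spectral band makes this single ratio most informative about weld quality.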

  15. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
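
    The selection strategy described here can be sketched as follows: each parameter combination yields a feature-detected image, each image yields an ROC point against the estimated ground truth, and the combination whose point lies closest to the ideal corner (FPR = 0, TPR = 1) wins. The thresholded toy masks below stand in for feature-detected images, and the closest-to-corner criterion is one common choice of optimal ROC point, not necessarily the authors' exact rule.

```python
import numpy as np

def roc_point(detected, truth):
    """(false-positive rate, true-positive rate) of a detection mask."""
    tp = np.logical_and(detected, truth).sum()
    fp = np.logical_and(detected, ~truth).sum()
    return fp / (~truth).sum(), tp / truth.sum()

def best_parameters(candidates, truth):
    """Pick the parameter set whose ROC point is closest to (FPR=0, TPR=1)."""
    def dist(item):
        fpr, tpr = roc_point(item[1], truth)
        return np.hypot(fpr, 1 - tpr)
    return min(candidates.items(), key=dist)[0]

# toy stand-in for estimated ground truth plus three "detections"
rng = np.random.default_rng(1)
truth = rng.random((64, 64)) < 0.2
noise = rng.random((64, 64))
candidates = {
    "low_thresh":  truth | (noise < 0.4),   # many false positives
    "high_thresh": truth & (noise < 0.3),   # many misses
    "balanced":    truth ^ (noise < 0.02),  # few errors of either kind
}
print(best_parameters(candidates, truth))  # → balanced
```

    In the registration setting the "candidates" would be feature detections swept over parameter combinations, and the winner's parameters feed the downstream registration.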

  16. Physical evaluations of Co-Cr-Mo parts processed using different additive manufacturing techniques

    NASA Astrophysics Data System (ADS)

    Ghani, Saiful Anwar Che; Mohamed, Siti Rohaida; Harun, Wan Sharuzi Wan; Noar, Nor Aida Zuraimi Md

    2017-12-01

    In recent years, additive manufacturing with a high degree of design customization has become an important fabrication technique in the aerospace and medical fields. Despite the ability of the process to produce complex components with highly controlled architectural geometrical features, maintaining part accuracy, fabricating fully functional high-density components and overcoming inferior surface quality are the major obstacles to producing final parts by additive manufacturing for any selected application. This study aims to evaluate the physical properties of cobalt chrome molybdenum (Co-Cr-Mo) alloy parts fabricated by different additive manufacturing techniques. The fully dense Co-Cr-Mo parts were produced by Selective Laser Melting (SLM) and Direct Metal Laser Sintering (DMLS) with default process parameters. The density and relative density of the samples were calculated using Archimedes' principle, while the surface roughness on the top and side surfaces was measured using a surface profiler. The roughness average (Ra) of the top surface is 3.4 µm for SLM-produced parts and 2.83 µm for DMLS-produced parts. The Ra of the side surfaces is 4.57 µm for SLM-produced parts and 9.0 µm for DMLS-produced parts. The higher Ra values on the side surfaces compared to the top faces for both manufacturing techniques are due to the balling effect phenomenon. The relative density of the Co-Cr-Mo parts produced by both SLM and DMLS is 99.3%. Higher energy density contributed to the higher density of the samples produced by the SLM and DMLS processes. The findings of this work demonstrate that the SLM and DMLS processes with default process parameters effectively produced fully dense Co-Cr-Mo parts with high density, good geometrical accuracy and good surface finish. Although both processes yielded components of high density, the current findings show that the SLM technique could produce components with smoother surface quality than the DMLS process with default parameters.
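
    The Archimedes measurement used in this record reduces to two weighings: bulk density follows from the mass in air, the apparent mass in the immersion fluid, and the fluid density, and the relative density is the ratio to the alloy's theoretical density. The masses below and the 8.29 g/cm3 theoretical density assumed for Co-Cr-Mo are illustrative values chosen only to reproduce a plausible 99.3% figure, not data from the paper.

```python
def archimedes_density(mass_air_g, mass_fluid_g, fluid_density=0.9982):
    """Bulk density from Archimedes' principle: weighing in air and in a
    fluid (water at ~20 C assumed here), in g/cm^3."""
    return mass_air_g / (mass_air_g - mass_fluid_g) * fluid_density

THEORETICAL_CO_CR_MO = 8.29  # g/cm^3, assumed reference value

rho = archimedes_density(mass_air_g=10.000, mass_fluid_g=8.787)
relative = 100 * rho / THEORETICAL_CO_CR_MO
print(round(rho, 3), round(relative, 1))  # → 8.229 99.3
```

    The buoyant mass difference is what encodes the volume, so closed porosity lowers the measured density directly; open porosity needs extra precautions (e.g. sealing) not shown here.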

  17. Development Of Simulation Model For Fluid Catalytic Cracking

    NASA Astrophysics Data System (ADS)

    Ghosh, Sobhan

    2010-10-01

    Fluid Catalytic Cracking (FCC) is the most widely used secondary conversion process in the refining industry for producing gasoline, olefins, and middle distillate from heavier petroleum fractions. There are more than 500 units in the world, with a total processing capacity of about 17 to 20% of the crude capacity. FCC catalyst is the most heavily consumed catalyst in the process industry. On one hand, FCC is quite flexible with respect to its ability to process a wide variety of crudes with a flexible product yield pattern; on the other hand, the interdependence of the major operating parameters makes the process extremely complex. An operating unit is self-balancing, and some fluctuations in the independent parameters are automatically adjusted by changing the temperatures and flow rates at different sections. However, a good simulation model is very useful to the refiner to get the best out of the process, in terms of selecting the best catalyst and coping with the day-to-day changes in feed quality and in the demands for different products from the FCC unit. In addition, a good model is of great help in designing the process units and peripherals. A simple empirical model is often adequate to monitor day-to-day operations, but it is of no use in handling other problems such as catalyst selection or design/modification of the plant. For this, a rigorous kinetics-based model is required. Considering the complexity of the process, with a large number of chemical species undergoing many parallel and consecutive reactions, it is virtually impossible to develop a simulation model based on the full kinetic parameters. The most common approach is to settle for a semi-empirical model. We shall take up the key issues in developing an FCC model and the contribution of such models to the optimum operation of the plant.

  18. Intensification of the Reverse Cationic Flotation of Hematite Ores with Optimization of Process and Hydrodynamic Parameters of Flotation Cell

    NASA Astrophysics Data System (ADS)

    Poperechnikova, O. Yu; Filippov, L. O.; Shumskaya, E. N.; Filippova, I. V.

    2017-07-01

    The demand for high grade iron ore concentrates is a major issue due to the depletion of rich iron-bearing ores and high competitiveness in the iron ore market. Iron ore producers are forced to upgrade flowsheets to decrease the silica content in the pellets. Different types of ore have different mineral compositions and texture-structural features, which require different mineral processing methods and technologies. The paper presents a comparative study of the cationic and anionic flotation routes to process a fine-grain oxidized iron ore. Modified carboxymethyl cellulose was found to be the most efficient depressant in reverse cationic flotation. The results of flotation optimization of hematite ores, using a second-order center rotatable uniform design matrix, allowed the collector concentration, impeller rotation speed and air flowrate to be identified as the main flotation parameters impacting the iron ore concentrate quality and iron recovery in a laboratory flotation machine. These parameters were selected as independent variables during the experiments.

  19. Crystallization using reverse micelles and water-in-oil microemulsion systems: the highly selective tool for the purification of organic compounds from complex mixtures.

    PubMed

    Kljajic, Alen; Bester-Rogac, Marija; Klobcar, Andrej; Zupet, Rok; Pejovnik, Stane

    2013-02-01

    The active pharmaceutical ingredient orlistat is usually manufactured using a semi-synthetic procedure, producing crude product and complex mixtures of highly related impurities with minimal side-chain structure variability. It is therefore crucial for the overall success of industrial/pharmaceutical application to develop an effective purification process. In this communication, we present a newly developed crystallization process based on water-in-oil reversed micelles and microemulsion systems. Physicochemical properties of the crystallization media were varied through surfactant and water composition, and the impact on efficiency was measured through variation of these two parameters. Using precisely defined properties of the dispersed water phase in the crystallization media, a highly efficient separation process in terms of selectivity and yield was developed. Small-angle X-ray scattering, high-performance liquid chromatography, mass spectrometry, and scanning electron microscopy were used to monitor and analyze the separation processes and the orlistat products obtained. Typical process characteristics, especially selectivity and yield with regard to reference examples, were compared and discussed. Copyright © 2012 Wiley Periodicals, Inc.

  20. Determination of the performance of vermicomposting process applied to sewage sludge by monitoring of the compost quality and immune responses in three earthworm species: Eisenia fetida, Eisenia andrei and Dendrobaena veneta.

    PubMed

    Suleiman, Hanine; Rorat, Agnieszka; Grobelak, Anna; Grosser, Anna; Milczarek, Marcin; Płytycz, Barbara; Kacprzak, Małgorzata; Vandenbulcke, Franck

    2017-10-01

    The aim of this study was to assess the effectiveness of the vermicomposting process applied to three different sewage sludges (precomposted with grass clippings, sawdust and municipal solid wastes) using three different earthworm species. Selected immune parameters, namely biomarkers of stress, and metal body burdens were used to biomonitor the vermicomposting process and to assess the impact of contaminants on earthworm physiology. Biotic and abiotic parameters were also used in order to monitor the process and the quality of the final product. Dendrobaena veneta exhibited much lower resistance in all experimental conditions, as the bodyweight and the total number of circulating immune cells decreased in the most contaminated conditions. All earthworm species accumulated heavy metals as follows: Cd>Co>Cu>Zn>Ni>Pb>Cr; Eisenia sp. worms exhibited the highest ability to accumulate several heavy metals. Vermicompost obtained after 45 days was acceptable according to agronomic parameters and to compost quality norms in France and Poland. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Technology Estimating 2: A Process to Determine the Cost and Schedule of Space Technology Research and Development

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.

    2014-01-01

    As a leader in space technology research and development, NASA is continuing the development of the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low-maturity technology research and development, where the Technology Readiness Level is less than TRL 6. NASA's Technology Roadmap consists of 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TA): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This research report continues the development of technology estimating efforts completed during 2013-2014, and addresses the refinement of the parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to the Cost Estimating Relationships (CERs) used in parametric cost estimating analysis. This research addresses the architecture for administration of the Technology Cost and Schedule Estimating tool, the parameters suggested for computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.

  2. Combining super-ensembles and statistical emulation to improve a regional climate and vegetation model

    NASA Astrophysics Data System (ADS)

    Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.

    2017-12-01

    Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere, with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub-grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using the 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated, and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth-system processes was selected.
This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.

  3. Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests

    NASA Technical Reports Server (NTRS)

    Douglas, Freddie; Bourgeois, Edit Kaminsky

    2005-01-01

    The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).

  4. Analysis of acoustic emission signals and monitoring of machining processes

    PubMed

    Govekar; Gradisek; Grabec

    2000-03-01

    Monitoring of a machining process on the basis of sensor signals requires a selection of informative inputs in order to reliably characterize and model the process. In this article, a system for selection of informative characteristics from signals of multiple sensors is presented. For signal analysis, methods of spectral analysis and methods of nonlinear time series analysis are used. With the aim of modeling relationships between signal characteristics and the corresponding process state, an adaptive empirical modeler is applied. The application of the system is demonstrated by characterization of different parameters defining the states of a turning machining process, such as chip form, tool wear, and the onset of chatter vibration. The results show that, in spite of the complexity of the turning process, the state of the process can be well characterized by just a few proper characteristics extracted from a representative sensor signal. The process characterization can be further improved by joining characteristics from multiple sensors and by application of chaotic characteristics.

  5. Robustness. [in space systems

    NASA Technical Reports Server (NTRS)

    Ryan, Robert

    1993-01-01

    The concept of robustness includes design simplicity, component and path redundancy, desensitization to parameter and environment variations, control of parameter variations, and punctual operations. These characteristics must be traded with functional concepts, materials, and fabrication approach against the criteria of performance, cost, and reliability. The paper describes the robustness design process, which includes the following seven major coherent steps: translation of vision into requirements, definition of the desired robustness characteristics, criteria formulation of required robustness, concept selection, detail design, manufacturing and verification, and operations.

  6. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
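
    The subset-selection idea in this record can be illustrated with a greedy criterion: starting from an ill-conditioned design matrix (here one column nearly duplicating another, mimicking the rank degeneracy caused by poor sky coverage), keep only the columns that preserve a healthy smallest singular value. This is a simple stand-in for the report's method, not its specific algorithm.

```python
import numpy as np

def select_subset(A, k):
    """Greedy column subset selection: grow a set of k columns of the
    design matrix, at each step adding the column that maximizes the
    smallest singular value of the chosen submatrix."""
    chosen = []
    for _ in range(k):
        best, best_s = None, -1.0
        for j in range(A.shape[1]):
            if j in chosen:
                continue
            s = np.linalg.svd(A[:, chosen + [j]], compute_uv=False)[-1]
            if s > best_s:
                best, best_s = j, s
        chosen.append(best)
    return sorted(chosen)

# toy pointing-model design matrix: column 2 nearly duplicates column 0,
# so fitting all three coefficients is numerically degenerate
rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 30)
A = np.column_stack([np.sin(az), np.cos(az),
                     np.sin(az) + 1e-6 * rng.standard_normal(30)])
print(select_subset(A, 2))
```

    The selector keeps one of the near-duplicate pair plus the independent column, so the reduced least-squares fit is well conditioned even though the full model is not.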

  7. ASRM propellant and igniter propellant development and process scale-up

    NASA Technical Reports Server (NTRS)

    Landers, L. C.; Booth, D. W.; Stanley, C. B.; Ricks, D. W.

    1993-01-01

    A program of formulation and process development for ANB-3652 motor propellant was conducted to validate design concepts and screen critical propellant composition and process parameters. Design experiments resulted in the selection of a less active grade of ferric oxide to provide better burning rate control, the establishment of AP fluidization conditions that minimized the adverse effects of particle attrition, and the selection of a higher mix temperature to improve mechanical properties. It is shown that the propellant can be formulated with AP and aluminum powder from various producers. An extended duration pilot plant run demonstrated stable equipment operation and excellent reproducibility of propellant properties. A similar program of formulation and process optimization culminating in large batch scale-up was conducted for ANB-3672 igniter propellant. The results for both ANB-3652 and ANB-3672 confirmed that their processing characteristics are compatible with full-scale production.

  8. Enantio-selective molecular dynamics of (±)-o,p-DDT uptake and degradation in water-sediment system.

    PubMed

    Ali, Imran; Alharbi, Omar M L; Alothman, Zeid A; Alwarthan, Abdulrahman

    2018-01-01

    Enantio-selective molecular dynamics of (±)-o,p-DDT uptake and degradation in a water-sediment system is described. Both the uptake and degradation of (-)-o,p-DDT were slightly higher than for the (+)-o,p-DDT enantiomer. The optimized uptake parameters were an o,p-DDT concentration of 7.0 μg/L, a contact time of 60 min, pH 5.0, a riverine sediment amount of 6.0 g/L and a temperature of 25 °C. The maximum degradation of both (-)- and (+)-o,p-DDT was obtained with 16 days, an o,p-DDT concentration of 0.4 μg/L, pH 7 and 35 °C. Both the uptake and degradation processes followed first-order kinetics. Thermodynamic parameters indicated the exothermic nature of the uptake and degradation processes. Both uptake and degradation were slightly higher for the (-)-enantiomer in comparison to the (+)-enantiomer of o,p-DDT. It was concluded that both uptake and degradation are responsible for the removal of o,p-DDT from nature, but uptake plays the crucial role. The percentage degradations of (-)- and (+)-o,p-DDT were 30.1 and 29.5, respectively. This study may be useful in managing o,p-DDT contamination of the earth's ecosystem. Copyright © 2017. Published by Elsevier Inc.

  9. Survey of selective solar absorbers and their limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mattox, D.M.; Sowell, R.R.

    1980-01-01

    A number of selective absorber coating systems with high solar absorptance exist which may be used in the mid-temperature range. Some of the systems are more chemically and thermally stable than others. Unfortunately, there are large gaps in the stability data for a large number of the systems. In an inert environment, the principal degradation mechanisms are interdiffusion between the layers or phases and changes in surface morphology. These degradation mechanisms would be minimized by using refractory metals and compounds for the absorbing layer and using refractory materials or diffusion barriers for the underlayer. For use in a reactive environment, the choice of materials is much more restrictive since internal chemical reactions can change phase compositions and interfacial reactions can lead to loss of adhesion. For a coating process to be useful, it is necessary to determine what parameters influence the performance of the coating and the limits to these parameters. This process sensitivity has a direct influence on the production process controls necessary to produce a good product. Experience with electroplated black chrome has been rather disappointing. Electroplating should be a low cost deposition process, but the extensive bath analysis and optical monitoring necessary to produce a thermally stable product for use to 320 °C has increased cost significantly. 49 references.

  10. Formation of manganese nanoclusters in a sputtering/aggregation source and the roles of individual operating parameters

    NASA Astrophysics Data System (ADS)

    Khojasteh, Malak; Kresin, Vitaly V.

    2016-12-01

    We describe the production of size-selected manganese nanoclusters using a dc magnetron sputtering/aggregation source. Since nanoparticle production is sensitive to a range of overlapping operating parameters (in particular, the sputtering discharge power, the inert gas flow rates, and the aggregation length) we focus on a detailed map of the influence of each parameter on the average nanocluster size. In this way it is possible to identify the main contribution of each parameter to the physical processes taking place within the source. The discharge power and argon flow supply the atomic vapor, and argon also plays the crucial role in the formation of condensation nuclei via three-body collisions. However, neither the argon flow nor the discharge power has a strong effect on the average nanocluster size in the exiting beam. Here the defining role is played by the source residence time, which is governed by the helium supply and the aggregation path length. The size of the mass-selected nanoclusters was verified by atomic force microscopy of deposited particles.

  11. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve an optimization goal such as the shortest duration, the smallest cost or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing constraints and resource constraints. In theory the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling and flow shop scheduling. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies include no optimization of the main parameters of the genetic algorithm itself; the parameters are generally chosen empirically, which cannot ensure that they are optimal. In this paper we address this blind selection of parameters in the process of solving the RCPSP: we performed a sampling analysis, established a surrogate model, and ultimately solved for the optimal parameters.
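
    The tuning loop this record argues for, evaluating GA performance over sampled parameter settings instead of fixing them by rule of thumb, can be sketched on a toy problem. OneMax stands in for the RCPSP objective, and the brute-force sample of settings stands in for the paper's surrogate-model approach; all values are illustrative.

```python
import random

def ga_onemax(pop_size, mut_rate, cx_rate, n=30, gens=40, seed=0):
    """Tiny GA on the OneMax toy problem (maximize the number of 1-bits);
    returns the best fitness in the final population. A stand-in for the
    RCPSP solver, whose encoding and decoding are more involved."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sum, reverse=True)          # fitness = number of 1s
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n) if rng.random() < cx_rate else 0
            child = a[:cut] + b[cut:]            # one-point crossover
            child = [1 - g if rng.random() < mut_rate else g for g in child]
            children.append(child)
        pop = children
    return max(sum(ind) for ind in pop)

# crude tuning: sample a few (mutation rate, crossover rate) settings
# and keep the best, instead of fixing them empirically
settings = [(0.001, 0.6), (0.02, 0.9), (0.3, 0.5)]
scores = {s: ga_onemax(20, *s) for s in settings}
best = max(scores, key=scores.get)
print(best, scores[best])
```

    Even on this toy problem the extremes behave as the abstract warns: heavy mutation degrades convergence, so a sampled comparison beats a guessed setting.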

  12. Membrane processes in biotechnology: an overview.

    PubMed

    Charcosset, Catherine

    2006-01-01

    Membrane processes are increasingly reported for various applications in both upstream and downstream technology, such as the established ultrafiltration and microfiltration, and emerging processes such as membrane bioreactors, membrane chromatography, and membrane contactors for the preparation of emulsions and particles. Membrane systems exploit the inherent properties of high selectivity, high surface-area-per-unit-volume, and their potential for controlling the level of contact and/or mixing between two phases. This review presents these various membrane processes by focusing more precisely on membrane materials, module design, operating parameters and the large range of possible applications.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knudsen, J.K.; Smith, C.L.

    The steps involved in incorporating parameter uncertainty into the Nuclear Regulatory Commission (NRC) accident sequence precursor (ASP) models are covered in this paper. Three different uncertainty distributions (i.e., lognormal, beta, gamma) were evaluated to determine the most appropriate distribution. From the evaluation, it was determined that the lognormal distribution will be used for the ASP models' uncertainty parameters. Selection of the uncertainty parameters for the basic events is also discussed. This paper covers the process of determining uncertainty parameters for the supercomponent basic events (i.e., basic events that are comprised of more than one component, which can have more than one failure mode) that are utilized in the ASP models. Once this is completed, the ASP model is ready to be utilized to propagate parameter uncertainty for event assessments.

  14. Retrieval of Dry Snow Parameters from Radiometric Data Using a Dense Medium Model and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.

    2005-01-01

    In this paper, GA-based techniques are used to invert the equations of an electromagnetic model based on Dense Medium Radiative Transfer Theory (DMRT) under the Quasi-Crystalline Approximation with Coherent Potential in order to retrieve snow depth, mean grain size and fractional volume from microwave brightness temperatures. The technique is first tested on both noisy and noise-free simulated data. During this phase, different configurations of genetic algorithm parameters are considered to quantify how changing them affects the algorithm's performance. A configuration of GA parameters is then selected and the algorithm is applied to experimental data acquired during the NASA Cold Land Processes Experiment. Snow parameters retrieved with the GA-DMRT technique are then compared with snow parameters measured in the field.

  15. Warpage investigation on side arms using response surface methodology (RSM) and glow-worm swarm optimizations (GSO)

    NASA Astrophysics Data System (ADS)

    Sow, C. K.; Fathullah, M.; Nasir, S. M.; Shayfull, Z.; Shazzuan, S.

    2017-09-01

    This paper discusses an injection moulding analysis to determine the optimum processing parameters for manufacturing catheter side arms while minimizing warpage. The optimization method used was RSM. This research also seeks the most significant factors affecting warpage. From the previous literature, the parameters most significant for warpage defects were selected: melt temperature, packing time, packing pressure, mould temperature and cooling time. First, the side arm was drawn using CATIA V5 software. Then, the Moldflow and Design Expert software packages were employed to analyse the warpage issues. After that, GSO artificial intelligence was applied, using the mathematical model from Design Expert, to further optimize the RSM result. Recommended parameter settings from the simulation work were then compared with the optimization results of RSM and GSO. The results show that the warpage of the side arm was improved by 3.27%.

  16. Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis

    NASA Astrophysics Data System (ADS)

    Perttu, A. B.; Taisne, B.

    2016-12-01

    Infrasound is sound below the threshold of human hearing: approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method in infrasound research; however, it produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.

  17. Improvements in the malaxation process to enhance the aroma quality of extra virgin olive oils.

    PubMed

    Reboredo-Rodríguez, P; González-Barreiro, C; Cancho-Grande, B; Simal-Gándara, J

    2014-09-01

    The influence of olive paste preparation conditions on the standard quality parameters, as well as on the volatile profiles, of extra virgin olive oils (EVOOs) from the Morisca and Manzanilla de Sevilla cultivars, produced in an emerging olive-growing area in north-western Spain and processed in an oil mill plant, was investigated. For this purpose, two malaxation temperatures (20/30 °C) and two malaxation times (30/90 min), selected in accordance with the customs of the area's producers, were tested. The volatile profile of the oils underwent a substantial change in terms of odorant series when different malaxation parameters were applied. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. HIGH-SHEAR GRANULATION PROCESS: INFLUENCE OF PROCESSING PARAMETERS ON CRITICAL QUALITY ATTRIBUTES OF ACETAMINOPHEN GRANULES AND TABLETS USING DESIGN OF EXPERIMENT APPROACH.

    PubMed

    Fayed, Mohamed H; Abdel-Rahman, Sayed I; Alanazi, Fars K; Ahmed, Mahrous O; Tawfeek, Hesham M; Al-Shedfat, Ramadan I

    2017-01-01

    Applying quality by design (QbD) to the high-shear granulation process is critical and requires recognizing the correlations between the granulation process parameters and the properties of the intermediate (granules) and the corresponding final product (tablets). The present work examined the influence of water amount (X1) and wet massing time (X2) as independent process variables on the critical quality attributes of granules and the corresponding tablets using the design of experiments (DoE) technique. A two-factor, three-level (3²) full factorial design was performed; each variable was investigated at three levels to characterize its strength and interactions. The dried granules were analyzed for size distribution, density and flow pattern. Additionally, the produced tablets were investigated for weight uniformity, crushing strength, friability, percent capping, disintegration time and drug dissolution. A statistically significant impact (p < 0.05) of water amount was identified for granule growth, percent fines, distribution width and flow behavior. Granule density and compressibility were found to be significantly influenced (p < 0.05) by both operating conditions. Water amount also had a significant effect (p < 0.05) on tablet weight uniformity, friability and percent capping. Moreover, tablet disintegration time and drug dissolution appeared to be significantly influenced (p < 0.05) by both process variables. The relationship of the process parameters with the critical quality attributes of the granules and the final tablet product was thus identified and correlated. Ultimately, judicious selection of process parameters in a high-shear granulation process will allow a product of the desired quality to be obtained.
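
    A 3² full factorial analysis like the one described above can be sketched as follows. The response values here are invented for illustration (they are not the paper's data): the two factors are placed at coded levels -1/0/+1 and a quadratic model is fitted by least squares.

```python
import itertools
import numpy as np

# Two factors at coded levels -1/0/+1: X1 = water amount, X2 = wet massing time.
design = list(itertools.product((-1, 0, 1), repeat=2))      # 9 runs

# Illustrative (invented) responses, e.g. granule median size in µm,
# listed in the same order as the design points above.
y = np.array([210, 250, 300, 230, 280, 340, 260, 320, 390], dtype=float)

# Quadratic model: y = b0 + b1*X1 + b2*X2 + b12*X1*X2 + b11*X1^2 + b22*X2^2
X = np.array([[1, x1, x2, x1 * x2, x1 ** 2, x2 ** 2] for x1, x2 in design])
b0, b1, b2, b12, b11, b22 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"water effect={b1:.1f}, massing-time effect={b2:.1f}, interaction={b12:.1f}")
```

    Because the factorial design is orthogonal in the linear and interaction columns, these least-squares estimates coincide with the classical contrast calculations, which is what makes the full factorial convenient for screening effect strengths.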

  19. Numerical Study of the Features of Ti-Nb Alloy Crystallization during Selective Laser Sintering

    NASA Astrophysics Data System (ADS)

    Dmitriev, A. I.; Nikonov, A. Y.

    2016-07-01

    The demand for implants with individual shapes requires the development of new methods and approaches to their production. The obvious advantages of additive technologies and selective laser sintering are the capabilities to form both the external shape of the product and its internal structure. Beta titanium-niobium alloys, which have appeared recently and have mechanical properties similar to those of cortical bone, are attractive from the perspective of biomechanical compatibility. This paper studies the processes occurring at different stages of laser sintering using atomic-scale computer simulation. The effect of cooling rate on the resulting crystal structure of the Ti-Nb alloy was analysed, and the dependence of the tensile strength of sintered particles on heating time and cooling rate was studied. It was shown that the main parameter determining the adhesive properties of sintered particles is the contact area obtained during the sintering process. The simulation results can both help define the technological parameters of the process that provide the desired mechanical properties of the resulting products and serve as a necessary basis for calculations at larger scales aimed at studying the behaviour of actually used implants.

  20. Laser Peening Process and Its Impact on Materials Properties in Comparison with Shot Peening and Ultrasonic Impact Peening

    PubMed Central

    Gujba, Abdullahi K.; Medraj, Mamoun

    2014-01-01

    The laser shock peening (LSP) process using a Q-switched pulsed laser beam for surface modification has been reviewed. The development of the LSP technique and its numerous advantages over the conventional shot peening (SP) such as better surface finish, higher depths of residual stress and uniform distribution of intensity were discussed. Similar comparison with ultrasonic impact peening (UIP)/ultrasonic shot peening (USP) was incorporated, when possible. The generation of shock waves, processing parameters, and characterization of LSP treated specimens were described. Special attention was given to the influence of LSP process parameters on residual stress profiles, material properties and structures. Based on the studies so far, more fundamental understanding is still needed when selecting optimized LSP processing parameters and substrate conditions. A summary of the parametric studies of LSP on different materials has been presented. Furthermore, enhancements in the surface micro and nanohardness, elastic modulus, tensile yield strength and refinement of microstructure which translates to increased fatigue life, fretting fatigue life, stress corrosion cracking (SCC) and corrosion resistance were addressed. However, research gaps related to the inconsistencies in the literature were identified. Current status, developments and challenges of the LSP technique were discussed. PMID:28788284

  1. Review on innovative techniques in oil sludge bioremediation

    NASA Astrophysics Data System (ADS)

    Mahdi, Abdullah M. El; Aziz, Hamidi Abdul; Eqab, Eqab Sanoosi

    2017-10-01

    Petroleum hydrocarbon waste is produced in refineries worldwide in significant amounts. In Libya, approximately 10,000 tons of oil sludge (hydrocarbon waste mixtures) is generated in oil refineries annually. Insufficient treatment of these wastes can threaten human health and safety as well as the environment. One of the major challenges faced by petroleum refineries is the safe disposal of oil sludge generated during the cleaning and refining stages of crude storage facilities. This paper reviews hydrocarbon sludge characteristics and conventional methods for remediating oil hydrocarbons from sludge. The study focuses intensively on the earlier literature to describe recently developed innovative technologies for the bioremediation of oily hydrocarbon sludge. Conventional characterization parameters, or measurable factors, can be grouped into chemical, physical, and biological categories: (1) chemical parameters are necessary when the topsoil environment is to be utilized, as they relate to the presence of nutrients and toxic compounds; (2) physical parameters provide general data on sludge processing and handling; (3) biological parameters provide data on microbial activity and the presence of organic matter, which are used to evaluate the safety of the facilities. The objective of this research is to assess the feasibility of bioremediating oil sludge from the Marsa El Hariga Terminal and Refinery (Tobruk).

  2. Determination of melt pool dimensions using DOE-FEM and RSM with process window during SLM of Ti6Al4V powder

    NASA Astrophysics Data System (ADS)

    Zhuang, Jyun-Rong; Lee, Yee-Ting; Hsieh, Wen-Hsin; Yang, An-Shik

    2018-07-01

    Selective laser melting (SLM) shows a positive prospect as an additive manufacturing (AM) technique for the fabrication of 3D parts with complicated structures. A transient thermal model was developed with the finite element method (FEM) to simulate the thermal behavior, predicting the time evolution of the temperature field and the melt pool dimensions of Ti6Al4V powder during SLM. The FEM predictions were then compared with published experimental measurements and calculation results for model validation. This study applied the design of experiments (DOE) scheme together with the response surface method (RSM) to conduct a regression analysis based on four processing parameters (namely, the laser power, scanning speed, preheating temperature and hatch spacing) for predicting the dimensions of the melt pool in SLM. The preliminary RSM results were used to quantify the effects of those parameters on the melt pool size. A process window based on two criteria, the width and depth of the molten pool, was then applied to screen out impractical combinations of the four parameters and delimit their practical ranges. The FEM simulations confirmed the good accuracy of the critical RSM models in predicting melt pool dimensions for three typical SLM working scenarios.

  3. Effect of internal and external conditions on ionization processes in the FAPA ambient desorption/ionization source.

    PubMed

    Orejas, Jaime; Pfeuffer, Kevin P; Ray, Steven J; Pisonero, Jorge; Sanz-Medel, Alfredo; Hieftje, Gary M

    2014-11-01

    Ambient desorption/ionization (ADI) sources coupled to mass spectrometry (MS) offer outstanding analytical features: direct analysis of real samples without sample pretreatment, combined with the selectivity and sensitivity of MS. Since ADI sources typically work in the open atmosphere, ambient conditions can affect the desorption and ionization processes. Here, the effects of internal source parameters and ambient humidity on the ionization processes of the flowing atmospheric pressure afterglow (FAPA) source are investigated. The interaction of reagent ions with a range of analytes is studied in terms of sensitivity and based upon the processes that occur in the ionization reactions. The results show that internal parameters which lead to higher gas temperatures afforded higher sensitivities, although fragmentation is also affected. In the case of humidity, only extremely dry conditions led to higher sensitivities, while fragmentation remained unaffected.

  4. Geometric correction of synchronous scanned Operational Modular Imaging Spectrometer II hyperspectral remote sensing images using spatial positioning data of an inertial navigation system

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao

    2015-01-01

    High-precision geometric correction in airborne hyperspectral remote sensing image processing has long been a difficult problem, and conventional correction methods based on selecting ground control points are not suitable for airborne hyperspectral images. An optical scanning system combining an inertial measurement unit with a differential global positioning system (IMU/DGPS) is introduced to correct synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. Attitude parameters synchronized with OMIS II were first obtained from the IMU/DGPS. Second, coordinate conversion and flight attitude parameter calculations were conducted. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled. With this procedure, markedly better correction results were achieved.

  5. Mutagenicity of fume particles from metal arc welding on stainless steel in the Salmonella/microsome test.

    PubMed

    Maxild, J; Andersen, M; Kiel, P

    1978-01-01

    Mutagenic activity of fume particles produced by metal arc welding on stainless steel (ss) is demonstrated using the Salmonella/microsome mutagenicity test described by Ames et al., with strain TA100 (base-pair substitution) and TA98 (frame-shift reversion). Results from a representative but limited selection of processes and materials show that mutagenic activity is a function of the process and its parameters. Welding on stainless steel produces particles that are mutagenic, whereas welding on mild steel (ms) produces particles that are not. Manual metal arc (MMA) welding on stainless steel produces particles of higher mutagenic activity than does metal inert gas (MIG) welding, or MIG welding under short-arc transfer. Further studies of welding fumes (both particles and gases) must be performed to determine the process parameters of significance for the mutagenic activity.

  6. The Friction Force Determination of Large-Sized Composite Rods in Pultrusion

    NASA Astrophysics Data System (ADS)

    Grigoriev, S. N.; Krasnovskii, A. N.; Kazakov, I. A.

    2014-08-01

    The simple pull-force models of the pultrusion process in use today are not suitable for large-sized rods because they do not consider the chemical shrinkage and thermal expansion acting in the cured material inside the die. Yet the pulling force on the resin-impregnated fibers as they travel through the heated die is an essential factor in the pultrusion process. In order to minimize the number of trial-and-error experiments, a new mathematical approach to determining the frictional force is presented. The governing equations of the model are stated in general terms, and various simplifications are implemented in order to obtain solutions without extensive numerical effort. The influence of different pultrusion parameters on the frictional force is investigated. The results obtained with the model can establish a foundation by which process control parameters are selected to achieve an appropriate pull-force, and can be used for optimization of the pultrusion process.

  7. The interaction of host genetics and disease processes in chronic livestock disease: a simulation model of ovine footrot.

    PubMed

    Russell, V N L; Green, L E; Bishop, S C; Medley, G F

    2013-03-01

    A stochastic, individual-based, simulation model of footrot in a flock of 200 ewes was developed that included flock demography, disease processes, host genetic variation for traits influencing infection and disease processes, and bacterial contamination of the environment. Sensitivity analyses were performed using ANOVA to examine the contribution of unknown parameters to outcome variation. The infection rate and bacterial death rate were the most significant factors determining the observed prevalence of footrot, as well as the heritability of resistance. The dominance of infection parameters in determining outcomes implies that observational data cannot be used to accurately estimate the strength of genetic control of underlying traits describing the infection process, i.e. resistance. Further work will allow us to address the potential for genetic selection to control ovine footrot. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. The Influence of Selective Laser Melting (SLM) Process Parameters on In-Vitro Cell Response.

    PubMed

    Wysocki, Bartłomiej; Idaszek, Joanna; Zdunek, Joanna; Rożniatowski, Krzysztof; Pisarek, Marcin; Yamamoto, Akiko; Święszkowski, Wojciech

    2018-05-30

    The use of laser 3D printers is very promising for the fabrication of solid and porous implants made of various polymers, metals, and their alloys. The Selective Laser Melting (SLM) process, in which consolidated powders are fully melted in each layer, makes it possible to fabricate personalized implants based on a Computer-Aided Design (CAD) model. During SLM fabrication on a 3D printer, depending on the system applied, the energy density (J/mm³) transferred to the consolidated powders can be set, thus controlling porosity, contact angle and roughness. In this study, we controlled the energy density delivered to titanium powder in the range 8-45 J/mm³ by setting various levels of laser power (25-45 W), exposure time (20-80 µs) and distance between exposure points (20-60 µm). Increasing the energy density within the studied range raised sample density from 63 to 90% and reduced the Ra roughness parameter from 31 to 13 µm. Surface energies of 55-466 mN/m were achieved, with contact angles in the ranges 72-128° and 53-105° for water and formamide, respectively. Human mesenchymal stem cell (hMSC) adhesion after 4 h decreased with increasing energy density within each parameter group. Differences in cell proliferation were clearly seen after a 7-day incubation; proliferation decreased with increasing energy density delivered to the samples. This phenomenon was explained by the chemical composition of the oxide layers, which affects surface energy and internal stresses. We observed that TiO₂, the main oxide of the raw titanium powder, disintegrated during the selective laser melting process and oxygen was transferred into the metallic titanium. Post-processing methods typical for 3D-printed parts, such as chemical polishing in hydrofluoric (HF) or hydrofluoric/nitric (HF/HNO₃) acid solutions and thermal treatments, were used to restore the surface chemistry of the raw powders and improve the surface.
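
    The energy density range quoted above can be related to the exposure settings via the commonly used volumetric energy density formula for point-exposure SLM systems. The hatch spacing and layer thickness values below are assumptions for illustration, not taken from the study.

```python
# Volumetric energy density E = P / (v * h * t) for a point-exposure scan,
# where the effective scan speed v is point distance / exposure time.
# Hatch spacing and layer thickness defaults are assumed, not from the paper.
def energy_density(power_w, exposure_us, point_dist_um,
                   hatch_um=60.0, layer_um=30.0):
    """Return energy density in J/mm^3 for pulsed-exposure SLM scanning."""
    speed_mm_s = (point_dist_um / 1000.0) / (exposure_us * 1e-6)
    return power_w / (speed_mm_s * (hatch_um / 1000.0) * (layer_um / 1000.0))

# e.g. 35 W laser power, 50 µs exposure, 40 µm point distance
print(round(energy_density(35, 50, 40), 1), "J/mm^3")   # prints 24.3 J/mm^3
```

    A setting like this lands inside the 8-45 J/mm³ window studied above; raising exposure time or lowering the point distance increases the delivered energy density.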

  9. Optimizing MRI Logistics: Focused Process Improvements Can Increase Throughput in an Academic Radiology Department.

    PubMed

    O'Brien, Jeremy J; Stormann, Jeremy; Roche, Kelli; Cabral-Goncalves, Ines; Monks, Annamarie; Hallett, Donna; Mortele, Koenraad J

    2017-02-01

    The purpose of this study was to describe and evaluate the effect of focused process improvements on protocol selection and scheduling in the MRI division of a busy academic medical center, as measured by examination and room times, magnet fill rate, and potential revenue increases and cost savings to the department. Focused process improvements, led by a multidisciplinary team at a large academic medical center, were directed at streamlining MRI protocols and optimizing the matching of protocol ordering to scheduling while maintaining or improving image quality. Data were collected before (June 2013) and after (March 2015) implementation of the focused process improvements and divided by subspecialty according to type of examination, allotted examination time, actual examination time, and MRI parameters. Direct and indirect costs were compiled and analyzed in consultation with the business department. Data were compared to evaluate effects on selected outcome and efficiency measures, as well as revenue and cost considerations. Statistical analysis was performed using a t test. During the month of June 2013, 2145 MRI examinations were performed at our center; 2702 were performed in March 2015. Neuroradiology examinations were the most common (59% in June 2013, 56% in March 2015), followed by body examinations (25% and 27%). All protocols and parameters were analyzed and streamlined for each examination, with slice thickness, TR, and echo train length among the most adjusted parameters. Mean time per examination decreased from 43.4 minutes to 36.7 minutes, and mean room time per patient decreased from 46.3 to 43.6 minutes (p = 0.009). Potential revenue from increased throughput may yield up to $3 million yearly (at $800 net revenue per scan) or produce cost savings if the facility can reduce staffed scanner hours or the number of scanners in its fleet.
Actual revenue and expense impacts depend on the facility's fixed and variable cost structure, payer contracts, MRI fleet composition, and unmet MRI demand. Focused process improvements in selecting MRI protocols and scheduling examinations significantly increased throughput in the MRI division, thereby increasing capacity and revenue. Shorter scan and department times may also improve patient experience.

  10. Optimal design and experimental validation of a simulated moving bed chromatography for continuous recovery of formic acid in a model mixture of three organic acids from Actinobacillus bacteria fermentation.

    PubMed

    Park, Chanhun; Nam, Hee-Geun; Lee, Ki Bong; Mun, Sungyong

    2014-10-24

    The economically efficient separation of formic acid from acetic acid and succinic acid has been a key issue in the production of formic acid by Actinobacillus bacteria fermentation. To address this issue, an optimal three-zone simulated moving bed (SMB) chromatography for continuous separation of formic acid from acetic acid and succinic acid was developed in this study. As a first step, the adsorption isotherm and mass-transfer parameters of each organic acid on the qualified adsorbent (Amberchrom-CG300C) were determined through a series of multiple frontal experiments. The determined parameters were then used in optimizing the SMB process for the considered separation. During this optimization, an additional investigation was carried out to select, from two possible SMB port configurations, the one more advantageous for attaining better process performance. It was found that if the properly selected port configuration was adopted in the SMB of interest, the throughput and the formic-acid product concentration could be increased by 82% and 181%, respectively. Finally, the optimized SMB process based on the properly selected port configuration was tested experimentally using a self-assembled three-zone SMB unit. The SMB experimental results and the relevant computer simulations verified that the developed process was successful in continuous recovery of formic acid from the ternary organic-acid mixture of interest with high throughput, high purity, high yield, and high product concentration. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Application of response surface methodology to optimize microwave-assisted extraction of silymarin from milk thistle seeds

    USDA-ARS?s Scientific Manuscript database

    Several parameters of microwave-assisted extraction (MAE), including extraction time, extraction temperature, ethanol concentration and solid-liquid ratio, were selected to describe the MAE processing. The silybin content, measured by UV-Vis spectrophotometry, was considered as the silymarin yield....

  12. Effects of Processing Parameters on Surface Roughness of Additive Manufactured Ti-6Al-4V via Electron Beam Melting

    PubMed Central

    Sin, Wai Jack; Nai, Mui Ling Sharon; Wei, Jun

    2017-01-01

    As one of the powder bed fusion additive manufacturing technologies, electron beam melting (EBM) is gaining more and more attention due to its near-net-shape production capacity with low residual stress and good mechanical properties. These characteristics also allow EBM-built parts to be used as produced, without post-processing. However, the rough as-built surface has a detrimental influence on the mechanical properties of metallic alloys; understanding the effects of processing parameters on a part's surface roughness therefore becomes critical. This paper has focused on varying the processing parameters of two types of contour scanning strategies in EBM, namely multispot and non-multispot. The results suggest that beam current and speed function are the most significant processing parameters for the non-multispot contouring scanning strategy, while for the multispot strategy the number of spots, spot time, and spot overlap have greater effects than focus offset and beam current. Improved surface roughness was obtained with both contouring scanning strategies. Furthermore, under the optimized conditions the non-multispot contouring scanning strategy gives a lower surface roughness value but poorer geometrical accuracy than its multispot counterpart. These findings could be used as a guideline for selecting the contouring type for specific industrial parts built using EBM. PMID:28937638

  13. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    A CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function; nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this practice persists. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best parameters for their turning operations. The system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied; this modelling technique tends to converge faster than other artificial-intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results in addition to being fast.
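
    The optimization stage can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: an assumed quadratic surface roughness model Ra(v, f) stands in for the trained ELM model, and a basic PSO loop searches the cutting speed/feed rate space. All coefficients and bounds are invented.

```python
import random

random.seed(1)

# Assumed stand-in for a trained ELM model: surface roughness Ra as a
# function of cutting speed v (m/min) and feed rate f (mm/rev).
def ra_model(v, f):
    return 0.8 + 0.002 * (v - 150) ** 2 / 100 + 25 * (f - 0.1) ** 2

bounds = [(50.0, 250.0), (0.05, 0.3)]        # assumed machine limits
n, w, c1, c2 = 20, 0.7, 1.5, 1.5             # swarm size, inertia, accelerations

pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]                  # each particle's best position

for _ in range(100):
    gbest = min(pbest, key=lambda p: ra_model(*p))   # swarm's best position
    for i in range(n):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            lo, hi = bounds[d]
            pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
        if ra_model(*pos[i]) < ra_model(*pbest[i]):
            pbest[i] = pos[i][:]

gbest = min(pbest, key=lambda p: ra_model(*p))
print(f"optimal cutting speed={gbest[0]:.1f} m/min, feed={gbest[1]:.3f} mm/rev")
```

    The same loop works for any fitted performance model: only `ra_model` changes, which is what makes the two-stage model-then-optimize design easy for a shop floor to adopt.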

  14. Vapor Hydrogen Peroxide as Alternative to Dry Heat Microbial Reduction

    NASA Technical Reports Server (NTRS)

    Cash, Howard A.; Kern, Roger G.; Chung, Shirley Y.; Koukol, Robert C.; Barengoltz, Jack B.

    2006-01-01

    The Jet Propulsion Laboratory, in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal is to include this technique, with an appropriate specification, in NPG 8020.12C as a low-temperature technique complementary to the dry heat sterilization process. A series of experiments was conducted in vacuum to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. With this knowledge of D values, sensible margins can be applied in a planetary protection specification. The outcome of this study was an optimization of the test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D value may be imposed, a process humidity range for which the worst-case D value may be imposed, and robustness to selected spacecraft material substrates.
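
    As a brief illustration of the D value referred to above (the exposure time for a one-log, i.e. 90%, reduction in viable spores), assuming log-linear inactivation and invented spore counts:

```python
import math

def d_value(exposure_min, n_initial, n_survivors):
    """D value: minutes of exposure per 10-fold reduction in viable spores,
    assuming log-linear (first-order) inactivation kinetics."""
    return exposure_min / (math.log10(n_initial) - math.log10(n_survivors))

# Invented example: 1e6 spores reduced to 1e2 survivors after 30 min
# is a 4-log reduction, so D = 30 / 4 = 7.5 min.
print(d_value(30, 1e6, 1e2), "min per log reduction")
```

    With a D value in hand, a specification margin is a matter of multiplying: for instance, a required 6-log reduction at this rate would call for at least 6 × 7.5 = 45 min of exposure, plus whatever safety factor the specification imposes.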

  15. [Development of an analyzing system for soil parameters based on NIR spectroscopy].

    PubMed

    Zheng, Li-Hua; Li, Min-Zan; Sun, Hong

    2009-10-01

    A rapid estimation system for soil parameters based on spectral analysis was developed using object-oriented (OO) technology. A SOIL class was designed, whose instances are soil-sample objects with a particular type, specific physical properties and spectral characteristics. By extracting the effective information from the modeling spectral data of a soil object, a mapping model is established between a soil parameter and its spectral data, and the mapping model's parameters can be saved in the model database. When forecasting the content of any soil parameter, the corresponding prediction model can be selected by matching the soil type and similar physical properties of the objects; after the target soil-sample object is passed into the prediction model and processed by the system, an accurate forecast of the target sample's content is obtained. The system includes modules for file operations, spectra pretreatment, sample analysis, calibration and validation, and sample content forecasting. The system was designed to run independently of the measuring equipment. The parameters and spectral data files (*.xls) of known soil samples can be input into the system, and various data pretreatments can be selected according to the conditions at hand; the predicted contents appear in the terminal, and the forecasting model can be stored in the model database. The system reads the prediction models and their parameters saved in the model database from the module interface, and then transfers the data of the tested samples into the selected model; finally, the content of the soil parameters is predicted. The system was programmed with Visual C++ 6.0 and Matlab 7.0, and Access XP was used to create and manage the model database.
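
    The model-selection step described above can be sketched as a lookup of stored models keyed by soil type and target parameter. All names, keys, and coefficients here are invented for illustration; the real system stores calibrated spectral models in its Access database, not hard-coded linear coefficients.

```python
import numpy as np

# Hypothetical model database: (soil type, property) -> linear model on
# pre-extracted spectral features (real models would be e.g. PLS regressions).
model_db = {
    ("loam", "organic_matter"): {"coef": np.array([0.4, -0.2, 0.1]), "intercept": 1.2},
    ("clay", "organic_matter"): {"coef": np.array([0.3, -0.1, 0.2]), "intercept": 0.9},
}

def predict(soil_type, prop, features):
    """Select the model matching the sample's soil type, then predict."""
    m = model_db[(soil_type, prop)]
    return float(m["coef"] @ features + m["intercept"])

print(predict("loam", "organic_matter", np.array([1.0, 0.5, 2.0])))   # ~1.7
```

    Keying the lookup on soil type mirrors the system's design choice: a prediction model is only applied to target samples whose type and physical properties match those of the samples it was calibrated on.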

  16. Toward Understanding Drug Incorporation and Delivery from Biocompatible Metal-Organic Frameworks in View of Cutaneous Administration.

    PubMed

    Rojas, Sara; Colinet, Isabel; Cunha, Denise; Hidalgo, Tania; Salles, Fabrice; Serre, Christian; Guillou, Nathalie; Horcajada, Patricia

    2018-03-31

    Although metal-organic frameworks (MOFs) have widely demonstrated their convenient performances as drug-delivery systems, there is still work to do to fully understand the drug incorporation/delivery processes from these materials. In this work, a combined experimental and computational investigation of the main structural and physicochemical parameters driving drug adsorption/desorption kinetics was carried out. Two model drugs (aspirin and ibuprofen) and three water-stable, biocompatible MOFs (MIL-100(Fe), UiO-66(Zr), and MIL-127(Fe)) have been selected to obtain a variety of drug-matrix couples with different structural and physicochemical characteristics. This study evidenced that the drug-loading and drug-delivery processes are mainly governed by structural parameters (accessibility of the framework and drug volume) as well as the MOF/drug hydrophobic/hydrophilic balance. As a result, the delivery of the drug under simulated cutaneous conditions (aqueous media at 37 °C) demonstrated that these systems fulfill the requirements to be used as topical drug-delivery systems, such as released payload between 1 and 7 days. These results highlight the importance of the rational selection of MOFs, evidencing the effect of geometrical and chemical parameters of both the MOF and the drug on the drug adsorption and release.

  17. Toward Understanding Drug Incorporation and Delivery from Biocompatible Metal–Organic Frameworks in View of Cutaneous Administration

    PubMed Central

    2018-01-01

    Although metal–organic frameworks (MOFs) have widely demonstrated their convenient performances as drug-delivery systems, there is still work to do to fully understand the drug incorporation/delivery processes from these materials. In this work, a combined experimental and computational investigation of the main structural and physicochemical parameters driving drug adsorption/desorption kinetics was carried out. Two model drugs (aspirin and ibuprofen) and three water-stable, biocompatible MOFs (MIL-100(Fe), UiO-66(Zr), and MIL-127(Fe)) have been selected to obtain a variety of drug–matrix couples with different structural and physicochemical characteristics. This study evidenced that the drug-loading and drug-delivery processes are mainly governed by structural parameters (accessibility of the framework and drug volume) as well as the MOF/drug hydrophobic/hydrophilic balance. As a result, the delivery of the drug under simulated cutaneous conditions (aqueous media at 37 °C) demonstrated that these systems fulfill the requirements to be used as topical drug-delivery systems, such as released payload between 1 and 7 days. These results highlight the importance of the rational selection of MOFs, evidencing the effect of geometrical and chemical parameters of both the MOF and the drug on the drug adsorption and release. PMID:29623304

  18. Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.

    2018-03-01

Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physicochemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling approach aimed at obtaining an equivalent circuit that not only fits the experimental spectra correctly, but whose elements have a mechanistic physical meaning. To obtain such a circuit, 12 different models with defined physical meanings were proposed and fitted to the measured EIS spectra. A two-step selection process was then performed. In the first step, a group of 4 circuits was preselected from the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
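The kind of equivalent-circuit fitting described here can be sketched with a minimal Randles-type circuit (a series resistance R0 plus a parallel R1-C1 branch). All parameter values are invented, and a one-dimensional grid search stands in for the complex nonlinear least-squares fit a real EIS analysis would use:

```python
import math

def z_model(freq_hz, r0, r1, c1):
    """Complex impedance of R0 in series with a parallel R1-C1 branch
    (a minimal Randles-type equivalent circuit; values are hypothetical)."""
    w = 2 * math.pi * freq_hz
    return r0 + r1 / (1 + 1j * w * r1 * c1)

freqs = [10.0 ** k for k in range(-1, 5)]        # 0.1 Hz .. 10 kHz
true = {"r0": 0.02, "r1": 0.15, "c1": 0.5}       # ohm, ohm, farad
measured = [z_model(f, **true) for f in freqs]   # noise-free stand-in spectrum

def sse(r1_guess):
    """Sum of squared complex residuals for a candidate R1."""
    return sum(abs(zm - z_model(f, true["r0"], r1_guess, true["c1"])) ** 2
               for f, zm in zip(freqs, measured))

# Crude 1-D grid search over the charge-transfer resistance; a real fit
# would optimize all parameters at once and, as in the abstract, compare
# candidate circuit topologies by determination coefficient and
# fitted-parameter uncertainty before judging physical consistency.
best_r1 = min((r / 1000.0 for r in range(50, 300)), key=sse)
```

In the low-frequency limit the model correctly reduces to R0 + R1, which is the kind of physical-meaning check the paper's second selection step relies on.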

  19. Retention of membrane charge attributes by cryopreserved-thawed sperm and zeta selection.

    PubMed

    Kam, Tricia L; Jacobson, John D; Patton, William C; Corselli, Johannah U; Chan, Philip J

    2007-09-01

Mature sperm can be selected based on their negative zeta electrokinetic potential, but the performance of zeta selection on cryopreserved sperm is unknown. The objective was to study the effect of zeta processing on the morphology and kinematic parameters of cryopreserved-thawed sperm. Colloid-washed sperm (N = 9 cases) were cryopreserved for 24 h, thawed and diluted in serum-free medium in positive-charged tubes. After centrifugation, the tubes were decanted, serum-supplemented medium was added and the resuspended sperm were analyzed. Untreated sperm and fresh sperm served as the controls. There were improvements in strict normal morphology in fresh (11.8 +/- 0.3 versus control 8.8 +/- 0.3%, mean +/- SEM) and thawed (8.7 +/- 0.2 versus control 5.4 +/- 0.2%) sperm after zeta processing. Percent sperm necrosis was reduced after zeta processing (66.0 +/- 0.6 versus unprocessed 74.6 +/- 0.3%). Progression, but not total motility, decreased by 50% after zeta processing of thawed sperm. The results suggested that the cryopreservation process did not impact the sperm membrane net zeta potential, and that higher percentages of sperm with normal strict morphology, acrosome integrity and reduced necrosis were recovered. The zeta method was simple and improved the selection of quality sperm after cryopreservation, but more studies would be needed before routine clinical application.

  20. A risk-based approach to management of leachables utilizing statistical analysis of extractables.

    PubMed

    Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M

    2015-04-01

    To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle.
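The designed-experiment analysis described above can be illustrated with a toy 2x2 molding DOE in coded units. The factors, levels, and responses below are invented, and a bare main-effect calculation stands in for the full ANOVA modelling the paper used:

```python
# Hypothetical 2x2 molding DOE: coded factor levels (-1/+1) for melt
# temperature and hold pressure; response = level of one extractable
# peak from the GC profile. All numbers are invented for illustration.
runs = [
    # (temp, pressure, peak_level)
    (-1, -1, 10.0),
    (+1, -1, 14.0),
    (-1, +1, 11.0),
    (+1, +1, 15.0),
]

def main_effect(runs, idx):
    """Average response at the high level minus at the low level."""
    hi = [y for *x, y in runs if x[idx] == +1]
    lo = [y for *x, y in runs if x[idx] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

temp_effect = main_effect(runs, 0)       # temperature dominates this toy set
pressure_effect = main_effect(runs, 1)
```

Comparing effect sizes this way is how a DOE flags which molding parameters are correlated with extractable peak levels; the abstract's point is that the significant combination differs by compound class.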

  1. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
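The active-learning loop described here can be sketched in miniature. Everything below is a stand-in: a trivial function plays the expensive simulator, a 1-nearest-neighbour score plays the artificial neural network, and the grid, budget, and acceptance region are invented:

```python
import itertools

def simulate(a, b):
    """Cheap stand-in for an expensive simulation run: True when the
    parameter combination reproduces the observed outcomes."""
    return abs(a - 0.5) + abs(b - 0.5) < 0.2

grid = [(a / 10.0, b / 10.0)
        for a, b in itertools.product(range(11), repeat=2)]   # 121 combos

evaluated = {}                                  # combo -> accepted?
for seed in [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]:
    evaluated[seed] = simulate(*seed)           # initial training runs

def surrogate(combo):
    """Predicted acceptance from the nearest already-evaluated combo
    (a 1-NN stand-in for the neural-network learner)."""
    nearest = min(evaluated,
                  key=lambda e: (e[0] - combo[0]) ** 2 + (e[1] - combo[1]) ** 2)
    return evaluated[nearest]

BUDGET = 40                                     # far below the 121 total
while len(evaluated) < BUDGET:
    pending = [c for c in grid if c not in evaluated]
    best = max(pending, key=surrogate)          # most promising combo next
    evaluated[best] = simulate(*best)           # run only that simulation

found = [c for c, ok in evaluated.items() if ok]
```

The design choice mirrors the abstract: the surrogate is retrained (here, implicitly, by growing the evaluated set) after every run, so simulation effort concentrates on combinations likely to match observed data instead of sweeping the whole grid.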

  2. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  3. Development and evaluation of a dimensionless mechanistic pan coating model for the prediction of coated tablet appearance.

    PubMed

    Niblett, Daniel; Porter, Stuart; Reynolds, Gavin; Morgan, Tomos; Greenamoyer, Jennifer; Hach, Ronald; Sido, Stephanie; Karan, Kapish; Gabbott, Ian

    2017-08-07

    A mathematical, mechanistic tablet film-coating model has been developed for pharmaceutical pan coating systems based on the mechanisms of atomisation, tablet bed movement and droplet drying with the main purpose of predicting tablet appearance quality. Two dimensionless quantities were used to characterise the product properties and operating parameters: the dimensionless Spray Flux (relating to area coverage of the spray droplets) and the Niblett Number (relating to the time available for drying of coating droplets). The Niblett Number is the ratio between the time a droplet needs to dry under given thermodynamic conditions and the time available for the droplet while on the surface of the tablet bed. The time available for drying on the tablet bed surface is critical for appearance quality. These two dimensionless quantities were used to select process parameters for a set of 22 coating experiments, performed over a wide range of multivariate process parameters. The dimensionless Regime Map created can be used to visualise the effect of interacting process parameters on overall tablet appearance quality and defects such as picking and logo bridging. Copyright © 2017 Elsevier B.V. All rights reserved.
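The Niblett Number defined above (droplet drying time over time available on the tablet-bed surface) is a simple ratio, sketched below. The numeric values and the regime threshold of 1 are illustrative readings of the definition, not figures from the paper:

```python
def niblett_number(t_dry_s, t_avail_s):
    """Ratio of the time a droplet needs to dry to the time it spends on
    the tablet-bed surface, per the abstract's definition."""
    return t_dry_s / t_avail_s

# Illustrative values (not from the paper): a droplet that needs 0.6 s
# to dry but leaves the bed surface after 0.4 s is still wet at that
# point, the regime associated with defects such as picking.
nb = niblett_number(0.6, 0.4)
still_wet = nb > 1.0   # threshold of 1 is the natural reading, assumed here
```

Pairing this number with the dimensionless spray flux gives the two axes of the regime map the paper uses to place process-parameter combinations.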

  4. Process qualification and testing of LENS deposited AY1E0125 D-bottle brackets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atwood, Clinton J.; Smugeresky, John E.; Jew, Michael

    2006-11-01

The LENS Qualification team had the goal of performing a process qualification for the Laser Engineered Net Shaping™ (LENS®) process. Process qualification requires that a part be selected for process demonstration. The AY1E0125 D-Bottle Bracket from the W80-3 was selected for this work. The repeatability of the LENS process was baselined to determine process parameters. Six D-Bottle brackets were deposited using LENS, machined to final dimensions, and tested in comparison to conventionally processed brackets. The tests, taken from ES1E0003, included a mass analysis and structural dynamic testing including free-free and assembly-level modal tests, and Haversine shock tests. The LENS brackets performed with very similar characteristics to the conventionally processed brackets. Based on the results of the testing, it was concluded that the performance of the brackets made them eligible for parallel path testing in subsystem level tests. The testing results and process rigor qualified the LENS process as detailed in EER200638525A.

  5. Fuzzy adaptive strong tracking scaled unscented Kalman filter for initial alignment of large misalignment angles

    NASA Astrophysics Data System (ADS)

    Li, Jing; Song, Ningfang; Yang, Gongliu; Jiang, Rui

    2016-07-01

In the initial alignment process of a strapdown inertial navigation system (SINS), large misalignment angles introduce a nonlinear problem, which can usually be handled using the scaled unscented Kalman filter (SUKF). In this paper, the problem of large misalignment angles in SINS alignment is further investigated, and the strong tracking scaled unscented Kalman filter (STSUKF) is proposed with fixed parameters to improve convergence speed; these parameters, however, are artificially constructed and uncertain in real applications. To further improve alignment stability and reduce the burden of parameter selection, this paper proposes a fuzzy adaptive strategy combined with STSUKF (FUZZY-STSUKF). An initial alignment scheme for large misalignment angles based on FUZZY-STSUKF is then designed and verified by simulations and a turntable experiment. The results show that the scheme improves the accuracy and convergence speed of SINS initial alignment compared with schemes based on SUKF and STSUKF.

  6. Pareto-Zipf law in growing systems with multiplicative interactions

    NASA Astrophysics Data System (ADS)

    Ohtsuki, Toshiya; Tanimoto, Satoshi; Sekiyama, Makoto; Fujihara, Akihiro; Yamamoto, Hiroshi

    2018-06-01

    Numerical simulations of multiplicatively interacting stochastic processes with weighted selections were conducted. A feedback mechanism to control the weight w of selections was proposed. It becomes evident that when w is moderately controlled around 0, such systems spontaneously exhibit the Pareto-Zipf distribution. The simulation results are universal in the sense that microscopic details, such as parameter values and the type of control and weight, are irrelevant. The central ingredient of the Pareto-Zipf law is argued to be the mild control of interactions.
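A toy version of a weighted multiplicative process can be sketched as below. This is only a loose illustration of the ingredients the abstract names (multiplicative updates, selection weighted by a controlled exponent w near 0), not the paper's actual interaction rules or feedback mechanism:

```python
import random

def step(vals, w, rng):
    """One interaction: select two agents with probability proportional to
    value**w, then multiply each by a random factor. A toy version of a
    weighted multiplicative process, not the paper's exact dynamics."""
    weights = [v ** w for v in vals]
    i, j = rng.choices(range(len(vals)), weights=weights, k=2)
    vals[i] *= rng.uniform(0.5, 1.5)
    vals[j] *= rng.uniform(0.5, 1.5)

rng = random.Random(0)
vals = [1.0] * 200
for _ in range(5000):
    step(vals, w=0.1, rng=rng)   # w kept moderately close to 0
```

Even this stripped-down run develops a strongly skewed size distribution; the paper's claim is the sharper one, that with w mildly controlled around 0 the stationary distribution follows the Pareto-Zipf law regardless of such microscopic details.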

  7. ISRU System Model Tool: From Excavation to Oxygen Production

    NASA Technical Reports Server (NTRS)

    Santiago-Maldonado, Edgardo; Linne, Diane L.

    2007-01-01

In the late 1980s, conceptual designs for an in situ oxygen production plant were documented in a study by Eagle Engineering [1]. In the "Summary of Findings" of this study, it is clearly pointed out that: "reported process mass and power estimates lack a consistent basis to allow comparison." The study goes on to say: "A study to produce a set of process mass, power, and volume requirements on a consistent basis is recommended." Today, approximately twenty years later, as humans plan to return to the moon and venture beyond, the need for flexible, up-to-date models of the oxygen production process has become even clearer. Multiple processes for the production of oxygen from lunar regolith are being investigated by NASA, academia, and industry. Three processes that have shown technical merit are molten regolith electrolysis, hydrogen reduction, and carbothermal reduction. These processes have been selected by NASA as the basis for the development of the ISRU System Model Tool (ISMT). In working to develop up-to-date system models for these processes, NASA hopes to accomplish the following: (1) help in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the excavation and oxygen production processes, and (4) provide estimates of energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters. Also, as confidence and high fidelity are achieved with each component's model, new techniques and processes can be introduced and analyzed at a fraction of the cost of traditional hardware development and test approaches. A first-generation ISRU System Model Tool has been used to provide inputs to the Lunar Architecture Team studies.

  8. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

In this article, we propose a new data mining algorithm that can both capture the non-linearity in data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. The algorithm introduces interpretable parameters by transforming the original inputs while providing a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method can offer an optimal and stable model that minimizes the mean square error and variability, combining all-possible-subsets selection with variable transformations and interactions. Moreover, the method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
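The core idea of scoring transformed predictors and keeping the best-fitting one can be shown in a minimal form. The candidate transformations, data, and closed-form simple regression below are illustrative, far simpler than the paper's full subset-selection machinery:

```python
import math

def fit_line(xs, ys):
    """Closed-form least squares for y ~ a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

TRANSFORMS = {
    "identity": lambda x: x,
    "square":   lambda x: x ** 2,
    "log":      lambda x: math.log(x),
}

xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [3.0 + 2.0 * math.log(x) for x in xs]     # true relation is log-linear

# Score each candidate transformation by residual error and keep the best.
scores = {name: fit_line([f(x) for x in xs], ys)[2]
          for name, f in TRANSFORMS.items()}
best = min(scores, key=scores.get)
```

Because the generating relation is log-linear, the log transform wins with essentially zero residual; the paper extends this idea to many predictors, interactions, and multicollinearity control at once.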

  9. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
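The trade-off at the heart of this abstract, larger learning rates converge faster but settle with more error, can be demonstrated with a scalar constant-gain estimator. This toy stand-in for the paper's adaptive Bayesian filters uses invented noise levels and thresholds:

```python
import random

def run(lr, steps=4000, truth=1.0, noise=0.05, seed=1):
    """Constant-gain adaptive estimate w <- w + lr*(y - w) of a fixed
    parameter from noisy observations: a scalar stand-in for the
    adaptive Bayesian filters discussed in the abstract."""
    rng = random.Random(seed)
    w, trace = 0.0, []
    for _ in range(steps):
        y = truth + rng.gauss(0.0, noise)
        w += lr * (y - w)
        trace.append(w)
    return trace

def convergence_time(trace, truth=1.0, tol=0.1):
    """First step after which the estimate stays within tol of truth."""
    for t in range(len(trace)):
        if all(abs(x - truth) <= tol for x in trace[t:]):
            return t
    return len(trace)

def steady_state_var(trace, truth=1.0):
    """Mean squared error over the second half of the run."""
    tail = trace[len(trace) // 2:]
    return sum((x - truth) ** 2 for x in tail) / len(tail)

fast = run(lr=0.2)    # large learning rate: quick but noisy
slow = run(lr=0.01)   # small learning rate: slow but precise
```

The fast run converges in far fewer steps but retains a larger steady-state error variance; the paper's contribution is an analytical rule for picking the learning rate that meets a bound on one quantity while minimizing the other.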

  10. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.

  11. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

In simulations with an ecosystem process model, parameter adjustment is indispensable for improving prediction accuracy. This procedure, however, requires much time and effort to bring the simulation results close to measurements in models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model and examined its validity by comparing simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year normal simulations were performed following a spin-up run. To select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, the parameters were optimized using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the object function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon at each of the eleven years. In an alternative run, errors at the last year (at the field survey) were weighted for priority. We compared several global optimization methods of Dakota, starting from the default parameters of Biome-BGC. In the sensitivity analysis, the carbon allocation parameters between coarse root and leaf and between stem and leaf, and SLA, had high contributions to both leaf and woody biomass changes; these parameters were selected for optimization. The measured leaf, above- and below-ground woody biomass carbon densities at the last year were 0.22, 1.81 and 0.86 kgC m-2, respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m-2, respectively. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m-2, respectively. The coliny global optimization method gave better fitness than the efficient global and ncsu direct methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and lower SLA than the defaults, consistent with the general water-physiological response in a dry climate. The simulation using the weighted object function resulted in closer agreement with the measurements at the last year, at the cost of lower fitness in the previous years.
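The object function described here, a sum of relative errors between simulated and measured carbon pools, can be sketched with a trivially simplified "simulator". The allocation-fraction model and the coarse grid search below are illustrative stand-ins for Biome-BGC and Dakota's global optimizers; only the measured pool values come from the abstract:

```python
MEASURED = {"leaf": 0.22, "wood_above": 1.81, "wood_below": 0.86}  # kgC m-2
TOTAL = sum(MEASURED.values())

def simulate(fracs):
    """Toy stand-in for Biome-BGC: split the total accumulated carbon
    among the pools according to allocation fractions (leaf, above, below)."""
    return {k: f * TOTAL for k, f in zip(MEASURED, fracs)}

def objective(fracs):
    """Sum of relative errors between simulated and measured pools,
    mirroring the object function described in the abstract."""
    sim = simulate(fracs)
    return sum(abs(sim[k] - MEASURED[k]) / MEASURED[k] for k in MEASURED)

# Coarse grid search over the two free allocation fractions; Dakota's
# global optimizers would explore this space far more efficiently.
candidates = [(a / 50.0, b / 50.0, 1.0 - a / 50.0 - b / 50.0)
              for a in range(1, 49) for b in range(1, 49) if a + b < 50]
best = min(candidates, key=objective)
```

Relative (rather than absolute) errors keep the small leaf pool from being swamped by the much larger woody pools, which is presumably why the study summed them this way.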

  12. Gluten-Free Precooked Rice-Yellow Pea Pasta: Effect of Extrusion-Cooking Conditions on Phenolic Acids Composition, Selected Properties and Microstructure.

    PubMed

    Bouasla, Abdallah; Wójtowicz, Agnieszka; Zidoune, Mohammed Nasereddine; Olech, Marta; Nowak, Renata; Mitrus, Marcin; Oniszczuk, Anna

    2016-05-01

Rice/yellow pea flour blend (2/1 ratio) was used to produce gluten-free precooked pasta using a single-screw modified extrusion-cooker TS-45. The effect of moisture content (28%, 30%, and 32%) and screw speed (60, 80, and 100 rpm) on selected quality parameters was assessed. The phenolic acids profile and selected pasta properties were tested, including pasting properties, water absorption capacity, cooking loss, texture characteristics, microstructure, and overall sensory acceptability. Results indicated that dough moisture content influenced all tested quality parameters of the precooked pasta except firmness, whereas screw speed affected only some of them. Extrusion-cooking at 30% dough moisture and 80 rpm is appropriate to obtain rice-yellow pea precooked pasta with a high content of phenolics and adequate quality. These pasta products exhibited a firm texture, low stickiness, and a regular, compact internal structure, confirmed by a high score in overall sensory acceptability. © 2016 Institute of Food Technologists®

  13. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
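The kind of parameter non-uniqueness this presentation warns about can be shown in one line of algebra: in a model where only a product of parameters enters the prediction, the individual parameters cannot be estimated even from ideal data. The tiny example below is illustrative, not drawn from the presentation:

```python
# Structural non-identifiability in miniature: in y = a * b * x only the
# product a*b is constrained by the data, so very different (a, b) pairs
# fit ideal, noise-free data equally well.
xs = [1.0, 2.0, 3.0]
ys = [6.0 * x for x in xs]                 # generated with a * b = 6

def sse(a, b):
    """Sum of squared residuals of y = a*b*x against the data."""
    return sum((y - a * b * x) ** 2 for x, y in zip(xs, ys))

fit1 = sse(2.0, 3.0)     # both pairs reproduce the data exactly,
fit2 = sse(0.5, 12.0)    # so a and b are not individually estimable
```

This is exactly the symptom the abstract lists: an optimizer initialised differently will return wildly different, equally "good" values of a and b, and neither value is physically meaningful on its own.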

  14. The specificity of the effects of stimulant medication on classroom learning-related measures of cognitive processing for attention deficit disorder children.

    PubMed

    Balthazor, M J; Wagner, R K; Pelham, W E

    1991-02-01

    There appear to be beneficial effects of stimulant medication on daily classroom measures of cognitive functioning for Attention Deficit Disorder (ADD) children, but the specificity and origin of such effects are unclear. Consistent with previous results, 0.3 mg/kg methylphenidate improved ADD children's performance on a classroom reading comprehension measure. Using the Posner letter-matching task and four additional measures of phonological processing, we attempted to isolate the effects of methylphenidate to parameter estimates of (a) selective attention, (b) the basic cognitive process of retrieving name codes from permanent memory, and (c) a constant term that represented nonspecific aspects of information processing. Responses to the letter-matching stimuli were faster and more accurate with medication compared to placebo. The improvement in performance was isolated to the parameter estimate that reflected nonspecific aspects of information processing. A lack of medication effect on the other measures of phonological processing supported the Posner task findings in indicating that methylphenidate appears to exert beneficial effects on academic processing through general rather than specific aspects of information processing.

  15. Airborne Wind Profiling With the Data Acquisition and Processing System for a Pulsed 2-Micron Coherent Doppler Lidar System

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.

    2012-01-01

    A pulsed 2-micron coherent Doppler lidar system from NASA Langley Research Center in Virginia flew on NASA's DC-8 aircraft during the NASA Genesis and Rapid Intensification Processes (GRIP) campaign in the summer of 2010. The participation was part of the project Doppler Aerosol Wind Lidar (DAWN) Air. Selected results of airborne wind profiling are presented and compared with dropsonde data for verification purposes. Panoramic presentations of different wind parameters over a nominal observation time span are also presented for selected GRIP data sets. The real-time data acquisition and analysis software that was employed during the GRIP campaign is introduced with its unique features.

  16. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests) can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced by this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, these hybrid algorithms provide a new way to perform feature selection and parameter optimization.
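
The hybrid-search idea of minimizing the out-of-bag (OOB) error can be sketched as follows. This is a schematic stand-in: `mock_oob_error` replaces actually training a random forest on the selected features, and plain random search replaces the paper's genetic, particle-swarm and fish-swarm optimizers.

```python
import random

# Sketch only: treat the forest's out-of-bag (OOB) error as the
# objective and search over candidate feature subsets. A real
# implementation would fit an RF on each subset and read its OOB error.

def mock_oob_error(feature_mask):
    # Hypothetical objective: features 0 and 2 are informative, so the
    # error drops when they are included and rises slightly with each
    # extra (noisy) feature.
    informative = {0, 2}
    selected = {i for i, on in enumerate(feature_mask) if on}
    missed = len(informative - selected)
    noise = len(selected - informative)
    return 0.10 + 0.30 * missed + 0.02 * noise

def random_search(n_features, iters=200, seed=0):
    rng = random.Random(seed)
    best_mask, best_err = None, float("inf")
    for _ in range(iters):
        mask = tuple(rng.random() < 0.5 for _ in range(n_features))
        err = mock_oob_error(mask)
        if err < best_err:
            best_mask, best_err = mask, err
    return best_mask, best_err

mask, err = random_search(n_features=5)
print(mask, err)
```

With this toy objective the search recovers the two informative features; swapping in a genetic or swarm optimizer only changes how candidate masks are proposed.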

  17. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of the extracted thematic information depends on them. On the basis of WorldView-2 high-resolution data, an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction was developed through the following processes. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different sets of segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were then multi-scale segmented with the optimal segmentation parameters, and a hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762
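
One common way to pick a segmentation scale from per-object statistics is a rate-of-change heuristic over a weighted variance curve. The sketch below assumes that generic heuristic and made-up per-object numbers; the paper's improved weighted mean-variance method differs in its details.

```python
# Sketch of scale selection from per-object statistics. For each
# candidate scale, take the area-weighted mean of within-object
# variance; the heuristic assumed here picks the scale where that
# curve's relative rate of change peaks.

def weighted_mean_variance(objects):
    # objects: list of (area, variance) pairs for one segmentation scale
    total = sum(a for a, _ in objects)
    return sum(a * v for a, v in objects) / total

def best_scale(per_scale):
    # per_scale: {scale: [(area, variance), ...]}
    scales = sorted(per_scale)
    wmv = [weighted_mean_variance(per_scale[s]) for s in scales]
    roc = [(wmv[i] - wmv[i - 1]) / wmv[i - 1] for i in range(1, len(wmv))]
    return scales[1 + roc.index(max(roc))]

# Illustrative numbers only:
per_scale = {
    10: [(50, 4.0), (50, 6.0)],     # wmv 5.0
    20: [(60, 6.0), (40, 9.0)],     # wmv 7.2
    30: [(70, 14.0), (30, 18.0)],   # wmv 15.2  <- biggest relative jump
    40: [(100, 16.0)],              # wmv 16.0
}
print(best_scale(per_scale))  # 30
```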

  18. Effect of Process Parameters on Catalytic Incineration of Solvent Emissions

    PubMed Central

    Ojala, Satu; Lassi, Ulla; Perämäki, Paavo; Keiski, Riitta L.

    2008-01-01

    Catalytic oxidation is a feasible and affordable technology for solvent emission abatement. However, finding optimal operating conditions is important, since they are strongly dependent on the application area of VOC incineration. This paper presents the results of laboratory experiments concerning the four most central parameters, that is, the effects of concentration, gas hourly space velocity (GHSV), temperature, and moisture on the oxidation of n-butyl acetate. Both fresh and industrially aged commercial Pt/Al2O3 catalysts were tested to determine optimal process conditions and the significance order and level of the selected parameters. The effects of these parameters were evaluated by computer-aided statistical experimental design. According to the results, GHSV was the most dominant parameter in the oxidation of n-butyl acetate. Decreasing GHSV and increasing temperature increased the conversion of n-butyl acetate. The interaction effect of GHSV and temperature was more significant than the effect of concentration; both of these affected the reaction by increasing the conversion of n-butyl acetate. Moisture had only a minor decreasing effect on the conversion, but it also slightly decreased the formation of by-products. Ageing did not change the significance order of the above-mentioned parameters; however, the effects of the individual parameters increased slightly as a function of ageing. PMID:18584032

  19. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data, the optimal segmentation parameters methodof object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of high-resolution remote sensing image. An improved weighted mean-variance method was proposed andused to calculatethe optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. Ahierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore the results of this procedure may be incorporated into a classification scheme.

  20. Application of Radiation Chemistry to Some Selected Technological Issues Related to the Development of Nuclear Energy.

    PubMed

    Bobrowski, Krzysztof; Skotnicki, Konrad; Szreder, Tomasz

    2016-10-01

    The most important contributions of radiation chemistry to some selected technological issues related to water-cooled reactors, reprocessing of spent nuclear fuel and high-level radioactive wastes, and fuel evolution during final radioactive waste disposal are highlighted. Chemical reactions occurring at the operating temperatures and pressures of reactors and involving primary transients and stable products from water radiolysis are presented and discussed in terms of the kinetic parameters and radiation chemical yields. Knowledge of these parameters is essential since they serve as input data to the models of water radiolysis in the primary loop of light water reactors and supercritical water reactors. Selected features of water radiolysis in heterogeneous systems, such as aqueous nanoparticle suspensions and slurries, ceramic oxide surfaces, nanoporous materials, and cement-based materials, are discussed. They are of particular concern in the primary cooling loops of nuclear reactors and in the long-term storage of nuclear waste in geological repositories. This also includes radiation-induced processes related to corrosion of cladding materials and copper-coated iron canisters, dissolution of spent nuclear fuel, and changes in the properties of bentonite clays. Radiation-induced processes affecting the stability of solvents and solvent extraction ligands, as well as the oxidation states of actinide metal ions during recycling of the spent nuclear fuel, are also briefly summarized.

  1. Rotary wave-ejector enhanced pulse detonation engine

    NASA Astrophysics Data System (ADS)

    Nalim, M. R.; Izzy, Z. A.; Akbari, P.

    2012-01-01

    The use of a non-steady ejector based on wave rotor technology is modeled for pulse detonation engine performance improvement and for compatibility with turbomachinery components in hybrid propulsion systems. The rotary wave ejector device integrates a pulse detonation process with an efficient momentum transfer process in specially shaped channels of a single wave-rotor component. In this paper, a quasi-one-dimensional numerical model is developed to help design the basic geometry and operating parameters of the device. The unsteady combustion and flow processes are simulated and compared with a baseline PDE without ejector enhancement. A preliminary performance assessment is presented for the wave ejector configuration, considering the effect of key geometric parameters, which are selected for high specific impulse. It is shown that the rotary wave ejector concept has significant potential for thrust augmentation relative to a basic pulse detonation engine.

  2. Simulation and design of ECT differential bobbin probes for the inspection of cracks in bolts

    NASA Astrophysics Data System (ADS)

    Ra, S. W.; Im, K. H.; Lee, S. G.; Kim, H. J.; Song, S. J.; Kim, S. K.; Cho, Y. T.; Woo, Y. D.; Jung, J. A.

    2015-12-01

    Various defects can be generated in bolts used in oil filters during the manufacturing process, affecting bolt safety and quality. Fine defects may also be embedded in the oil filter system during the multiple forging processes, so it is very important that such defects be investigated and screened out during manufacturing. Therefore, in order to evaluate the fine defects effectively, the design parameters for bobbin-type probes were selected using finite element method (FEM) simulations of eddy current testing (ECT). In particular, the FEM simulations were performed to characterize crack detection in the bolts, and parameters such as the number of turns of the coil, the coil size, and the applied frequency were determined based on the simulation results.

  3. Automated ensemble assembly and validation of microbial genomes.

    PubMed

    Koren, Sergey; Treangen, Todd J; Hill, Christopher M; Pop, Mihai; Phillippy, Adam M

    2014-05-03

    The continued democratization of DNA sequencing has sparked a new wave of development of genome assembly and assembly validation methods. As individual research labs, rather than centralized centers, begin to sequence the majority of new genomes, it is important to establish best practices for genome assembly. However, recent evaluations such as GAGE and the Assemblathon have concluded that there is no single best approach to genome assembly. Instead, it is preferable to generate multiple assemblies and validate them to determine which is most useful for the desired analysis; this is a labor-intensive process that is often infeasible. To encourage best practices supported by the community, we present iMetAMOS, an automated ensemble assembly pipeline; iMetAMOS encapsulates the process of running, validating, and selecting a single assembly from multiple assemblies. iMetAMOS packages several leading open-source tools into a single binary that automates parameter selection and execution of multiple assemblers, scores the resulting assemblies based on multiple validation metrics, and annotates the assemblies for genes and contaminants. We demonstrate the utility of the ensemble process on 225 previously unassembled Mycobacterium tuberculosis genomes as well as a Rhodobacter sphaeroides benchmark dataset. On these real data, iMetAMOS reliably produces validated assemblies and identifies potential contamination without user intervention. In addition, intelligent parameter selection produces assemblies of R. sphaeroides comparable to or exceeding the quality of those from the GAGE-B evaluation, affecting the relative ranking of some assemblers. Ensemble assembly with iMetAMOS provides users with multiple, validated assemblies for each genome.
Although computationally limited to small or mid-sized genomes, this approach is the most effective and reproducible means for generating high-quality assemblies and enables users to select an assembly best tailored to their specific needs.

  4. Sensor selection and chemo-sensory optimization: toward an adaptable chemo-sensory system.

    PubMed

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that would meet the growing needs in personal health (implantable sensors), environmental monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state of the art of chemical sensing. A fundamental issue in this context is that most chemical sensors depend on interactions between the targeted species and surfaces functionalized with receptors that bind the target species selectively, and these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to "adapt" in response to their environments. Accordingly, in this review we feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to developing strategies that provide tunability and adaptability to single sensor devices or sensor-array systems. In particular, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve.

  5. Sensor Selection and Chemo-Sensory Optimization: Toward an Adaptable Chemo-Sensory System

    PubMed Central

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that would meet the growing needs in personal health (implantable sensors), environmental monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state of the art of chemical sensing. A fundamental issue in this context is that most chemical sensors depend on interactions between the targeted species and surfaces functionalized with receptors that bind the target species selectively, and these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to “adapt” in response to their environments. Accordingly, in this review we feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to developing strategies that provide tunability and adaptability to single sensor devices or sensor-array systems. In particular, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve. 
PMID:22319492

  6. Fossil fuel furnace reactor

    DOEpatents

    Parkinson, William J.

    1987-01-01

    A fossil fuel furnace reactor is provided for simulating a continuous processing plant with a batch reactor. An internal reaction vessel contains a batch of shale oil, with the vessel having a relatively thin wall thickness for a heat transfer rate effective to simulate a process temperature history in the selected continuous processing plant. A heater jacket is disposed about the reactor vessel and defines a number of independent controllable temperature zones axially spaced along the reaction vessel. Each temperature zone can be energized to simulate a time-temperature history of process material through the continuous plant. A pressure vessel contains both the heater jacket and the reaction vessel at an operating pressure functionally selected to simulate the continuous processing plant. The process yield from the oil shale may be used as feedback information to software simulating operation of the continuous plant to provide operating parameters, i.e., temperature profiles, ambient atmosphere, operating pressure, material feed rates, etc., for simulation in the batch reactor.

  7. Ti Alloys Processed By Selective Laser Melting And By Laser Cladding: Microstructures And Mechanical Properties

    NASA Astrophysics Data System (ADS)

    Mertens, Anne; Contrepois, Quentin; Dormal, Thierry; Lemaire, Olivier; Lecomte-Beckers, Jacqueline

    2012-07-01

    In this study, samples of alloy Ti-6Al-4V have been processed by Selective Laser Melting (SLM) and by Laser Cladding (LC), two layer-by-layer near-net-shape processes allowing for economic production of complex parts. The resulting microstructures have been characterised in detail, so as to allow for a better understanding of the solidification process and of the subsequent phase transformations taking place upon cooling in both techniques. On the one hand, a new “MesoClad” laser with a maximum power of 300 W has been used successfully to produce thin wall samples by LC. On the other hand, the influence of processing parameters on the mechanical properties was investigated by means of uniaxial tensile testing performed on SLM samples produced with different orientations with respect to the loading direction. The strong anisotropy observed in mechanical behaviour was interpreted in relation to the microstructures and processing conditions.

  8. Estimating Soil Moisture Using Polsar Data: a Machine Learning Approach

    NASA Astrophysics Data System (ADS)

    Khedri, E.; Hasanlou, M.; Tabatabaeenejad, A.

    2017-09-01

    Soil moisture is an important parameter that affects several environmental processes and has many important functions in numerous sciences, including agriculture, hydrology, aerology, flood prediction, and drought occurrence. However, field procedures for measuring soil moisture are not feasible over vast agricultural territories, owing to their high cost and to the spatial and temporal variability of soil moisture. Polarimetric synthetic aperture radar (PolSAR) imaging is a powerful tool for estimating soil moisture, as these images provide a wide field of view and high spatial resolution. In this study, a support vector regression (SVR) model is proposed for estimating soil moisture, based on data acquired by AIRSAR in 2003 in the C, L, and P channels. In this endeavor, sequential forward selection (SFS) and sequential backward selection (SBS) are evaluated for selecting suitable features of the polarized image dataset for efficient modeling. We compare the results with in-situ data. The output shows that the SBS-SVR method yields higher modeling accuracy than the SFS-SVR model. Statistical parameters obtained from this method show an R2 of 97% and an RMSE lower than 0.00041 (m3/m3) for the P, L, and C channels, a better accuracy than that of the other feature selection algorithms.
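
Sequential backward selection itself is simple to sketch: start from all features and greedily drop the one whose removal improves (or least degrades) a model score. The `toy_score` below is a hypothetical stand-in for scoring an SVR model on PolSAR channel features.

```python
# Minimal sequential backward selection (SBS) sketch.

def sbs(features, score, k_target):
    selected = list(features)
    while len(selected) > k_target:
        # Try removing each remaining feature; keep the best subset.
        candidates = [
            [f for f in selected if f != drop] for drop in selected
        ]
        selected = max(candidates, key=score)
    return selected

# Toy score: subsets containing "L" and "P" do best, with a small
# penalty per feature (hypothetical, for illustration only).
def toy_score(subset):
    return ("L" in subset) + ("P" in subset) - 0.01 * len(subset)

print(sbs(["C", "L", "P", "texture"], toy_score, k_target=2))
```

Sequential forward selection is the mirror image: start empty and greedily add the feature that most improves the score.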

  9. RTM: Cost-effective processing of composite structures

    NASA Technical Reports Server (NTRS)

    Hasko, Greg; Dexter, H. Benson

    1991-01-01

    Resin transfer molding (RTM) is a promising method for cost effective fabrication of high strength, low weight composite structures from textile preforms. In this process, dry fibers are placed in a mold, resin is introduced either by vacuum infusion or pressure, and the part is cured. RTM has been used in many industries, including automotive, recreation, and aerospace. Each of these industries has different requirements of material strength, weight, reliability, environmental resistance, cost, and production rate. These requirements drive the selection of fibers and resins, fiber volume fractions, fiber orientations, mold design, and processing equipment. Research into applying RTM to primary aircraft structures, which require high strength and stiffness at low density, is described. The material requirements of various industries are discussed, along with methods of orienting and distributing fibers, mold configurations, and processing parameters. Processing and material parameters such as resin viscosity, preform compaction and permeability, and tool design concepts are discussed. Experimental methods to measure preform compaction and permeability are presented.

  10. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key quality parameter (Mooney viscosity) can only be obtained offline, with a delay of 4-6 h. It would therefore be quite helpful for industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing, but they often perform poorly owing to the multi-phase and nonlinear nature of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, phase information is extracted during local modeling. Using the Gaussian local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors on samples from a real industrial rubber mixing process.
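
The just-in-time (JIT) local-modelling idea can be sketched as follows; the distance-based neighbour selection and distance-weighted averaging below are simplified stand-ins for the paper's GMMD sample-selection criterion and local Gaussian model.

```python
import math

# JIT sketch: for each new query, pick the most similar historical
# samples and fit a local model on just those, instead of one global
# model over all phases of the process.

def jit_predict(history, query, k=3):
    # history: list of (x_vector, y) pairs from past batches.
    neighbours = sorted(
        history,
        key=lambda xy: math.dist(xy[0], query),
    )[:k]
    # Distance-weighted average as a toy local model:
    weights = [1.0 / (1e-9 + math.dist(x, query)) for x, _ in neighbours]
    return sum(w * y for w, (_, y) in zip(weights, neighbours)) / sum(weights)

history = [([1.0, 1.0], 10.0), ([1.1, 0.9], 11.0), ([5.0, 5.0], 50.0)]
print(jit_predict(history, [1.05, 0.95], k=2))
```

The distant sample (a different process phase) is excluded automatically, which is the point of the local selection step.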

  11. Use of in-die powder densification parameters in the implementation of process analytical technologies for tablet production on industrial scale.

    PubMed

    Cespi, Marco; Perinelli, Diego R; Casettari, Luca; Bonacucina, Giulia; Caporicci, Giuseppe; Rendina, Filippo; Palmieri, Giovanni F

    2014-12-30

    The use of process analytical technologies (PAT) to ensure final product quality is by now a well-established practice in the pharmaceutical industry. To date, most of the efforts in this field have focused on the development of analytical methods using spectroscopic techniques (i.e., NIR, Raman, etc.). This work evaluated the possibility of using the parameters derived from the processing of in-line raw compaction data (the forces and displacements of the punches) as a PAT tool for controlling the tableting process. To reach this goal, two commercially available formulations were used, changing the quantitative composition and compressing them on a fully instrumented rotary press. The Heckel yield pressure and the compaction energies, together with tablet hardness and compaction pressure, were selected and evaluated as discriminating parameters in all the prepared formulations. The apparent yield pressure, as shown in the obtained results, has the necessary sensitivity to be effectively included in a PAT strategy to monitor the tableting process. Additional investigations were performed to understand the criticalities and the mechanisms behind this performance and the associated implications. Specifically, it was found that the efficiency of the apparent yield pressure depends on the nominal drug content, the drug densification mechanism and the error in pycnometric density. In this study, the potential of using parameters derived from the raw compaction data has been demonstrated to be an attractive alternative and complementary method to the well-established spectroscopic techniques for monitoring and controlling the tableting process. The compaction data monitoring method is also easy to set up and very cost-effective. Copyright © 2014 Elsevier B.V. All rights reserved.
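
The Heckel analysis behind the apparent yield pressure fits ln(1/(1-D)) = K*P + A to in-die data (D = relative density, P = compaction pressure) and reports Py = 1/K. A minimal sketch on synthetic data (the numbers are illustrative, not from the paper):

```python
import math

def heckel_yield_pressure(pressures, rel_densities):
    """Least-squares slope K of ln(1/(1-D)) vs P, returned as Py = 1/K."""
    ys = [math.log(1.0 / (1.0 - d)) for d in rel_densities]
    n = len(pressures)
    mean_p = sum(pressures) / n
    mean_y = sum(ys) / n
    k = (sum((p - mean_p) * (y - mean_y) for p, y in zip(pressures, ys))
         / sum((p - mean_p) ** 2 for p in pressures))
    return 1.0 / k

# Synthetic data generated with K = 0.01 MPa^-1 (Py = 100 MPa), A = 0.5:
ps = [50.0, 100.0, 150.0, 200.0]
ds = [1.0 - math.exp(-(0.01 * p + 0.5)) for p in ps]
print(round(heckel_yield_pressure(ps, ds), 1))  # 100.0
```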

  12. [Effects of anxiety and the COMT gene on cortical evoked potentials and performance effectiveness of selective attention].

    PubMed

    Alfimova, M V; Golimbet, V E; Lebedeva, I S; Korovaĭtseva, G I; Lezheĭko, T V

    2014-01-01

    We studied the influence of the anxiety-related trait Harm Avoidance and the COMT gene, an important modulator of prefrontal functioning, on event-related potentials in an oddball paradigm and on the performance effectiveness of selective attention. For 50 individuals, the accuracy and time of searching for words among letters were measured, first at any desired rate and then under an instruction to perform the task as quickly and accurately as possible. Scores on the Harm Avoidance scale of Cloninger's Temperament and Character Inventory, N100 and P300 parameters, and COMT Val158Met genotypes were obtained as well. Search accuracy and time were mainly related to N100 amplitude. The COMT genotype and Harm Avoidance did not affect N100 amplitude; however, N100 amplitude modulated their effects on accuracy and time dynamics. Harm Avoidance was positively correlated with P300 latency. The results suggest that the effects of anxiety and the COMT gene on the performance effectiveness of selective attention depend on cognitive processes reflected in N100 parameters.

  13. Lessons learned in deploying software estimation technology and tools

    NASA Technical Reports Server (NTRS)

    Panlilio-Yap, Nikki; Ho, Danny

    1994-01-01

    Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.

  14. The XCatDB, a Rich 3XMM Catalogue Interface

    NASA Astrophysics Data System (ADS)

    Michel, L.; Grisé, F.; Motch, C.; Gomez-Moran, A. N.

    2015-09-01

    The latest release of the XMM catalog, 3XMM-DR4, published in July 2013, is the largest X-ray catalog ever built. It includes numerous data products such as spectra, time series, images, previews, and extractions of archival catalogs matching the positions of X-ray sources. The Strasbourg Observatory built an original interface called the XCatDB, designed to make the best of this wide set of related products with an emphasis on the images. In addition, it offers easy access to all other catalog parameters. Users can select data with very elaborate queries and can process them with online services such as an X-ray spectral fitting routine. The combination of all these features allows users to select data of interest by eye as well as to filter on catalog parameters. Data selections can be picked out for further scientific analysis thanks to an interface driving external VO clients. The XCatDB has been developed with Saada.

  15. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses a Monte Carlo approach to generate parameter sets that are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined by its probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
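The sampling scheme described above can be sketched as a simple Metropolis random walk. Everything below is illustrative: the road-load form, noise level, flat positive prior, and step sizes are assumptions, and synthetic data stand in for logged vehicle data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "logged" operation data: speed v (m/s) and acceleration a (m/s^2).
v = rng.uniform(5.0, 25.0, 200)
a = rng.normal(0.0, 0.5, 200)

def road_load(theta, v, a, rho=1.2, g=9.81):
    """A common road-load form; theta = (mass kg, Cd*A m^2, Crr)."""
    m, cda, crr = theta
    return m * a + 0.5 * rho * cda * v**2 + m * g * crr

true_theta = np.array([15000.0, 5.0, 0.007])     # assumed ground truth
measured = road_load(true_theta, v, a) + rng.normal(0.0, 200.0, v.size)

def log_prob(theta):
    if np.any(theta <= 0.0):
        return -np.inf                           # flat positive prior
    resid = measured - road_load(theta, v, a)
    return -0.5 * np.sum((resid / 200.0) ** 2)   # Gaussian likelihood

# Metropolis random walk: accept proposals by the probability ratio to the
# current state, so the chain history gives a distribution of parameter sets.
theta = np.array([14000.0, 4.0, 0.01])
step = np.array([50.0, 0.1, 0.0002])
chain = []
for _ in range(5000):
    prop = theta + step * rng.normal(size=3)
    if np.log(rng.uniform()) < log_prob(prop) - log_prob(theta):
        theta = prop
    chain.append(theta)
post = np.array(chain)[1000:]                    # discard burn-in
print(post.mean(axis=0))
```

The spread of `post` along each column is what gives the quality-of-estimate information the abstract mentions.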

  16. Increasing signal processing sophistication in the calculation of the respiratory modulation of the photoplethysmogram (DPOP).

    PubMed

    Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D

    2015-06-01

    DPOP (∆POP or Delta-POP) is a non-invasive parameter which measures the strength of respiratory modulations present in the pulse oximetry photoplethysmogram (pleth) waveform. It has been proposed as a non-invasive surrogate for pulse pressure variation (PPV), used in predicting the response to volume expansion in hypovolemic patients. Many groups have reported on the DPOP parameter and its correlation with PPV using various semi-automated algorithmic implementations. The study reported here demonstrates the performance gains made by adding increasingly sophisticated signal processing components to a fully automated DPOP algorithm. A DPOP algorithm was coded and its performance systematically enhanced through a series of code module alterations and additions. Each algorithm iteration was tested on data from 20 mechanically ventilated OR patients. Correlation coefficients and ROC curve statistics were computed at each stage. For the purposes of the analysis we split the data into a manually selected 'stable' subset containing relatively noise-free segments and a 'global' set incorporating the whole data record. Performance gains were measured in terms of correlation against PPV measurements in OR patients undergoing controlled mechanical ventilation. Through increasingly advanced pre-processing and post-processing enhancements to the algorithm, the correlation coefficient between DPOP and PPV improved from a baseline value of R = 0.347 to R = 0.852 for the stable data set, and, correspondingly, from R = 0.225 to R = 0.728 for the more challenging global data set. Marked gains in algorithm performance are achievable for manually selected stable regions of the signals using relatively simple algorithm enhancements. Significant additional algorithm enhancements, including a correction for low perfusion values, were required before similar gains were realised for the more challenging global data set.
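The study's algorithm adds substantial pre- and post-processing around the core parameter, but the parameter itself is commonly defined analogously to PPV. A minimal sketch, assuming beat amplitudes have already been extracted from the pleth (the amplitude values below are hypothetical):

```python
import numpy as np

def dpop(beat_amplitudes: np.ndarray) -> float:
    """DPOP (%) over one respiratory cycle of pleth beat amplitudes,
    defined analogously to PPV: 100*(AMPmax-AMPmin)/mean(AMPmax, AMPmin)."""
    amp_max, amp_min = beat_amplitudes.max(), beat_amplitudes.min()
    return 100.0 * (amp_max - amp_min) / ((amp_max + amp_min) / 2.0)

# Hypothetical pleth beat amplitudes modulated by ventilation:
amps = np.array([1.00, 0.95, 0.85, 0.80, 0.88, 0.97])
print(round(dpop(amps), 1))
```

The enhancements the study describes (stable-segment selection, low-perfusion correction) act on the signal before and after this calculation rather than changing the definition itself.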

  17. A Control Algorithm for Chaotic Physical Systems

    DTIC Science & Technology

    1991-10-01

    [Indexed OCR fragment] ... revision expands the grid to cover the entire area of any attractor that is present. 5 Map Selection. The final choices of the state-space mapping process ... interval h; overrange R0; control parameter interval Δk0 and range [k_low, k_high]; iteration depth. State-space mapping: 1. Set up grid by expanding ...

  18. [Leaching of nonferrous metals from copper-smelting slag with acidophilic microorganisms].

    PubMed

    Murav'ev, M I; Fomchenko, N V

    2013-01-01

    The leaching of copper and zinc from copper converter slag with sulphuric acid solutions of trivalent iron sulphate, obtained using an association of acidophilic chemolithotrophic microorganisms, was investigated. The best parameters of chemical leaching (a temperature of 70 degrees C, an initial concentration of trivalent iron in the leaching solution of 10.1 g/L, and a solid-phase content in the suspension of 10%) were selected. Carrying out the process under these parameters resulted in the recovery of 89.4% of the copper and 39.3% of the zinc into the solution. The possibility of the bioregeneration of trivalent iron in the solution obtained after the chemical leaching of slag, by iron-oxidizing acidophilic chemolithotrophic microorganisms and without inhibiting their activity, was demonstrated.

  19. Multi-Response Optimization of Laser Micro-Marking Process: A Grey-Fuzzy Approach

    NASA Astrophysics Data System (ADS)

    Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.

    2017-07-01

    The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. The optimal parametric combination provides better machining, which improves productivity and product quality and subsequently reduces production cost and time. This paper presents a hybrid approach of grey relational analysis and fuzzy logic to obtain the optimal parametric combination for laser beam micro-marking on gallium nitride (GaN) work material. Response surface methodology was implemented for the design of experiments, considering three parameters at five levels each. Current, frequency, and scanning speed were taken as the process parameters, and mark width, mark depth, and mark intensity as the process responses.
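Grey relational analysis reduces several responses to one grade per parameter combination: normalize each response, compute grey relational coefficients against the ideal, and average. A sketch with hypothetical response values (not the paper's measurements):

```python
import numpy as np

# Hypothetical responses for 5 parameter combinations:
# columns: mark width (smaller-better), mark depth (larger-better),
# mark intensity (larger-better).
y = np.array([[0.42, 11.0, 62.0],
              [0.38, 14.0, 70.0],
              [0.45,  9.0, 55.0],
              [0.36, 13.0, 74.0],
              [0.40, 12.0, 66.0]])
smaller_better = np.array([True, False, False])

# Grey relational normalization to [0, 1] per response.
lo, hi = y.min(axis=0), y.max(axis=0)
norm = np.where(smaller_better, (hi - y) / (hi - lo), (y - lo) / (hi - lo))

# Grey relational coefficient (distinguishing coefficient zeta = 0.5)
# and grade (mean over the responses).
delta = 1.0 - norm
zeta = 0.5
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coeff.mean(axis=1)
print(grade.argmax())  # index of the best parameter combination
```

The fuzzy step of a grey-fuzzy approach replaces the plain mean with fuzzy inference over the coefficients; the ranking idea is the same.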

  20. Spatial Analysis of Traffic and Routing Path Methods for Tsunami Evacuation

    NASA Astrophysics Data System (ADS)

    Fakhrurrozi, A.; Sari, A. M.

    2018-02-01

    A tsunami strikes relatively fast and has large-scale impacts on both material and non-material aspects. Community evacuation can cause mass panic, crowds, and traffic congestion. Further research into spatially based modelling, traffic engineering, and split-zone evacuation simulation is therefore crucial as an effort to reduce losses. This review covers findings from previous research. Complex parameters include route selection, destination selection, the timing of both departure from the source and arrival at the destination, and other result parameters across various methods. The discussion emphasizes the simulation processes and results, traffic modelling, and routing analyses that come closest to real conditions in a tsunami evacuation. The method we highlight is the Clearance Time Estimate based on Location Priority, whose computational results are superior to the others despite several drawbacks. This study is expected to provide input for improving and devising a new method that can become part of decision support systems for tsunami disaster risk reduction.

  1. Qualitative modeling of the decision-making process using electrooculography.

    PubMed

    Zargari Marandi, Ramtin; Sabzpoushan, S H

    2015-12-01

    A novel method based on electrooculography (EOG) has been introduced in this work to study the decision-making process. An experiment was designed and implemented wherein subjects were asked to choose between two items from the same category presented within a limited time. The EOG and voice signals of the subjects were recorded during the experiment. A calibration task was performed to map the EOG signals to their corresponding gaze positions on the screen using an artificial neural network. To analyze the data, 16 parameters were extracted from the response times and EOG signals of the subjects. Evaluation and comparison of these parameters, together with the subjects' choices, revealed functional information. On the basis of this information, subjects switched their gaze between the items about three times on average. We also found, according to statistical hypothesis testing, that is, a t test, t(10) = 71.62, SE = 1.25, p < .0001, that the correspondence rate between a subject's gaze at the moment of selection and the selected item was significant. Ultimately, on the basis of these results, we propose a qualitative choice model for the decision-making task.

  2. a Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of individual pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness, and band weights) to be set by the analyst, the scale parameter stands out as the most important one in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size, and the characteristics of the study area, is crucially important for increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate, and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation on the classified images showed that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
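The LV-RoC scale-selection logic can be sketched as follows: compute the rate of change (RoC) of local variance (LV) between consecutive segmentation scales, and take local RoC peaks as candidate fine/moderate/coarse scales. The LV values below are hypothetical, standing in for what the ESP-2 tool would report:

```python
import numpy as np

# Hypothetical local-variance (LV) values, one per segmentation scale
# (scale step = 10), as the ESP-2 tool would report them.
scales = np.arange(10, 110, 10)
lv = np.array([4.1, 5.0, 5.9, 6.2, 7.4, 7.6, 8.5, 8.6, 9.3, 9.4])

# Rate of change of LV between consecutive scales (in percent).
roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]

# Local RoC peaks mark candidate scales (fine / moderate / coarse).
peaks = [int(scales[i + 1]) for i in range(1, len(roc) - 1)
         if roc[i] > roc[i - 1] and roc[i] > roc[i + 1]]
print(peaks)
```

In the region-based strategy this analysis is repeated per sub-region, yielding a separate scale triple for each.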

  3. Selection of meteorological parameters affecting rainfall estimation using neuro-fuzzy computing methodology

    NASA Astrophysics Data System (ADS)

    Hashim, Roslan; Roy, Chandrabhushan; Motamedi, Shervin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Lee, Siew Cheng

    2016-05-01

    Rainfall is a complex atmospheric process that varies over time and space. Researchers have used various empirical and numerical methods to enhance the estimation of rainfall intensity. In this study, we developed a novel prediction model, with an emphasis on accuracy, to identify the most significant meteorological parameters affecting rainfall. For this, we used five input parameters: wet day frequency (dwet), vapor pressure (e̅a), maximum and minimum air temperatures (Tmax and Tmin), and cloud cover (cc). The data were obtained from the Indian Meteorological Department for Patna city, Bihar, India. A soft-computing method known as the adaptive neuro-fuzzy inference system (ANFIS) was then applied to the available data. Observations from 1901 to 2000 were employed for testing, validating, and estimating monthly rainfall via the simulated model. In addition, the ANFIS variable-selection process was implemented to detect the predominant variables affecting rainfall prediction. Finally, the performance of the model was compared to other soft-computing approaches, including the artificial neural network (ANN), support vector machine (SVM), extreme learning machine (ELM), and genetic programming (GP). The results revealed that ANN, ELM, ANFIS, SVM, and GP had R2 values of 0.9531, 0.9572, 0.9764, 0.9525, and 0.9526, respectively. We therefore conclude that ANFIS is the best of these methods for predicting monthly rainfall. Moreover, dwet was found to be the most influential parameter for rainfall prediction and the best single predictor. This study also identified the sets of two and three meteorological parameters that give the best predictions.
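The variable-selection idea, ranking subsets of the five predictors by goodness of fit, can be sketched with synthetic data and a plain least-squares fit standing in for ANFIS (no ANFIS library is assumed; the dependence structure is invented for illustration):

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the five predictors (dwet, ea, Tmax, Tmin, cc);
# rainfall is made to depend mainly on dwet and ea.
names = ["dwet", "ea", "Tmax", "Tmin", "cc"]
X = rng.normal(size=(300, 5))
rain = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 4] + rng.normal(0, 0.5, 300)

def r2(cols):
    """R^2 of a least-squares fit on the chosen columns (ANFIS stand-in)."""
    A = np.column_stack([X[:, list(cols)], np.ones(len(X))])
    resid = rain - A @ np.linalg.lstsq(A, rain, rcond=None)[0]
    return 1.0 - resid.var() / rain.var()

# Best single predictor and best pair, as in the variable-selection search.
best1 = max(combinations(range(5), 1), key=r2)
best2 = max(combinations(range(5), 2), key=r2)
print([names[i] for i in best1], [names[i] for i in best2])
```

With the invented dependence above, the search recovers dwet as the dominant single predictor, mirroring the study's finding.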

  4. Evaluation of orbits with incomplete knowledge of the mathematical expectancy and the matrix of covariation of errors

    NASA Technical Reports Server (NTRS)

    Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.

    1980-01-01

    The problem of selecting the optimal algorithm of filtration and the optimal composition of the measurements is examined assuming that the precise values of the mathematical expectancy and the matrix of covariation of errors are unknown. It is demonstrated that the optimal algorithm of filtration may be utilized for making some parameters more precise (for example, the parameters of the gravitational fields) after preliminary determination of the elements of the orbit by a simpler method of processing (for example, the method of least squares).

  5. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to missing-data gaps. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines, and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction at the two locations studied (Caribbean Sea and West Atlantic).
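A minimal version of the hybrid selector, with a linear fit standing in for the extreme learning machine and synthetic buoy data, might look like this (population size, generation count, and the per-feature penalty are all assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic wave parameters from nearby buoys (10 candidates); Hs at the
# failed buoy depends on three of them. A linear fit stands in for the
# extreme learning machine to keep the sketch short.
X = rng.normal(size=(400, 10))
hs = 2.0 * X[:, 0] + 1.0 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.3, 400)

def fitness(mask):
    """Negative reconstruction RMSE with a small penalty per selected feature."""
    if not mask.any():
        return -np.inf
    A = np.column_stack([X[:, mask], np.ones(len(X))])
    resid = hs - A @ np.linalg.lstsq(A, hs, rcond=None)[0]
    return -np.sqrt(np.mean(resid**2)) - 0.01 * mask.sum()

# Minimal genetic algorithm: tournament selection, uniform crossover, bit-flip.
pop = rng.random((30, 10)) < 0.5
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[[max(rng.integers(0, 30, 3), key=lambda i: scores[i])
                   for _ in range(30)]]
    cross = rng.random((30, 10)) < 0.5
    children = np.where(cross, parents, parents[rng.permutation(30)])
    children ^= rng.random((30, 10)) < 0.02      # mutation
    pop = children

best = max(pop, key=fitness)
print(np.flatnonzero(best))  # indices of the selected wave parameters
```

The selected subset can then be handed to any of the other regressors, which is the second half of the paper's evaluation.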

  6. Evolutionary model selection and parameter estimation for protein-protein interaction network based on differential evolution algorithm

    PubMed Central

    Huang, Lei; Liao, Li; Wu, Cathy H.

    2016-01-01

    Revealing the underlying evolutionary mechanism plays an important role in understanding protein interaction networks in the cell. While many evolutionary models have been proposed, applying these models to real network data, and in particular differentiating which model better describes the evolutionary process behind an observed network, remains a challenge. The traditional approach is to use a model with presumed parameters to generate a network and then evaluate the fit by summary statistics, which, however, cannot capture the complete network structure information or estimate parameter distributions. In this work we developed a novel method based on Approximate Bayesian Computation and modified Differential Evolution (ABC-DEP) that is capable of conducting model selection and parameter estimation simultaneously and of detecting the underlying evolutionary mechanisms more accurately. We tested our method for its power in differentiating models and estimating parameters on simulated data and found significant improvement over a previous method in performance benchmarks. We further applied our method to real protein interaction networks in human and yeast. Our results show the Duplication Attachment model as the predominant evolutionary mechanism for the human PPI network and the Scale-Free model as the predominant mechanism for the yeast PPI network. PMID:26357273
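The ABC idea behind ABC-DEP can be shown on a much smaller toy problem: propose (model, parameter) pairs from a prior, simulate, and keep pairs whose summary statistics land close to the observed ones; acceptance counts then approximate posterior model probabilities. This sketch uses simple degree-like samples, not actual network evolution models:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" degree-like data drawn from a heavy-tailed (geometric) mechanism.
observed = rng.geometric(0.25, size=500)
obs_stat = np.array([observed.mean(), observed.var()])

def simulate(model, mean, n=500):
    """Draw n samples from the candidate mechanism with the given mean."""
    if model == "poisson":
        return rng.poisson(mean, n)
    return rng.geometric(1.0 / mean, n)   # geometric with mean `mean`

# ABC rejection: sample (model, parameter) from the prior, keep pairs whose
# summary statistics land near the observed ones.
accepted = {"poisson": 0, "geometric": 0}
for _ in range(2000):
    model = "poisson" if rng.random() < 0.5 else "geometric"
    mean = rng.uniform(1.0, 8.0)          # prior over the mean
    sim = simulate(model, mean)
    stat = np.array([sim.mean(), sim.var()])
    if np.linalg.norm((stat - obs_stat) / obs_stat) < 0.1:
        accepted[model] += 1
print(accepted)  # acceptance counts approximate posterior model probabilities
```

ABC-DEP replaces this rejection step with a differential-evolution-driven sampler and network-level statistics, but the accept-if-close logic is the same.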

  7. Demonstration of Aerosol Property Profiling by Multi-wavelength Lidar Under Varying Relative Humidity Conditions

    NASA Technical Reports Server (NTRS)

    Whiteman, D.N.; Veselovskii, I.; Kolgotin, A.; Korenskii, M.; Andrews, E.

    2008-01-01

    The feasibility of using a multi-wavelength Mie-Raman lidar based on a tripled Nd:YAG laser for profiling aerosol physical parameters in the planetary boundary layer (PBL) under varying conditions of relative humidity (RH) is studied. The lidar quantifies three aerosol backscattering and two extinction coefficients, and from these optical data particle parameters such as concentration, size, and complex refractive index are retrieved through inversion with regularization. The column-integrated, lidar-derived parameters are compared with results from the AERONET sun photometer. The lidar and sun photometer agree well in the characterization of the fine-mode parameters; however, the lidar shows less sensitivity to the coarse mode. The lidar results reveal a strong dependence of particle properties on RH. The height regions with enhanced RH are characterized by an increase in the backscattering and extinction coefficients and a decrease in the Angstrom exponent, coinciding with an increase in particle size. We present data selection techniques useful for selecting cases that can support the calculation of hygroscopic growth parameters using lidar. Hygroscopic growth factors calculated using these techniques agree with expectations despite the lack of co-located radiosonde data. These results demonstrate the potential of the multi-wavelength Raman lidar technique for studying the aerosol humidification process.

  8. Abnormal externally guided movement preparation in recent-onset schizophrenia is associated with impaired selective attention to external input.

    PubMed

    Smid, Henderikus G O M; Westenbroek, Joanna M; Bruggeman, Richard; Knegtering, Henderikus; Van den Bosch, Robert J

    2009-11-30

    Several theories propose that the primary cognitive impairment in schizophrenia concerns a deficit in the processing of external input information. There is also evidence, however, for impaired motor preparation in schizophrenia. This raises the question of whether the impaired motor preparation in schizophrenia is a secondary consequence of disturbed (selective) processing of the input needed for that preparation, or an independent primary deficit. The aim of the present study was to discriminate between these hypotheses by investigating externally guided movement preparation in relation to selective stimulus processing. The sample comprised 16 recent-onset schizophrenia patients and 16 controls, who performed a movement-precuing task. In this task, a precue delivered information about one, two, or no parameters of a movement summoned by a subsequent stimulus. Performance measures and measures derived from the electroencephalogram showed that patients gained smaller benefits from the precues and showed less cue-based preparatory activity in advance of the imperative stimulus than the controls, suggesting a response preparation deficit. However, patients also showed less activity reflecting selective attention to the precue. We therefore conclude that the existing evidence for an impairment of externally guided motor preparation in schizophrenia is most likely due to a deficit in selective attention to the external input, which lends support to theories proposing that the primary cognitive deficit in schizophrenia concerns the processing of input information.

  9. Design Space Approach in Optimization of Fluid Bed Granulation and Tablets Compression Process

    PubMed Central

    Djuriš, Jelena; Medarević, Djordje; Krstić, Marko; Vasiljević, Ivana; Mašić, Ivana; Ibrić, Svetlana

    2012-01-01

    The aim of this study was to optimize fluid bed granulation and tablet compression processes using the design space approach. Type of diluent, binder concentration, temperature during mixing, granulation and drying, spray rate, and atomization pressure were recognized as critical formulation and process parameters. They were varied in the first set of experiments, using a Plackett-Burman experimental design, in order to estimate their influence on the critical quality attributes, that is, granule characteristics (size distribution, flowability, bulk density, tapped density, Carr's index, Hausner's ratio, and moisture content). Type of diluent and atomization pressure were selected as the most important parameters. In the second set of experiments, a design space for the process parameters (atomization pressure and compression force) and their influence on tablet characteristics was developed. Percentage of paracetamol released and tablet hardness were determined as the critical quality attributes. Artificial neural networks (ANNs) were applied in order to determine the design space. The ANN models showed that atomization pressure mainly influences the dissolution profile, whereas compression force mainly affects tablet hardness. Based on the obtained ANN models, it is possible to predict tablet hardness and the paracetamol release profile for any combination of the analyzed factors. PMID:22919295
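A Plackett-Burman design screens many factors in few runs. As a sketch of how such a design is constructed (the 12-run, up-to-11-factor case; unused columns serve as dummy factors), the standard cyclic generator gives an orthogonal design matrix:

```python
import numpy as np

# 12-run Plackett-Burman design from the standard 11-element generator row:
# cyclic shifts of the generator plus a final row of all -1.
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
rows = [np.roll(gen, i) for i in range(11)]
design = np.vstack(rows + [np.full(11, -1)])

# Orthogonality check: every pair of factor columns is balanced,
# so main effects can be estimated independently.
print(design.shape, (design.T @ design == 12 * np.eye(11)).all())
```

Each row is one experimental run; each column assigns a factor its low (-1) or high (+1) level, which is how the seven parameters above were screened for importance.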

  10. Processing of AlGaAs/GaAs quantum-cascade structures for terahertz laser

    NASA Astrophysics Data System (ADS)

    Szerling, Anna; Kosiel, Kamil; Szymański, Michał; Wasilewski, Zbig; Gołaszewska, Krystyna; Łaszcz, Adam; Płuska, Mariusz; Trajnerowicz, Artur; Sakowicz, Maciej; Walczakowski, Michał; Pałka, Norbert; Jakieła, Rafał; Piotrowska, Anna

    2015-01-01

    We report research results with regard to AlGaAs/GaAs structure processing for THz quantum-cascade lasers (QCLs). We focus on the processes of Ti/Au cladding fabrication for metal-metal waveguides and wafer bonding with indium solder. Particular emphasis is placed on optimization of technological parameters for the said processes that result in working devices. A wide range of technological parameters was studied using test structures and the analysis of their electrical, optical, chemical, and mechanical properties performed by electron microscopic techniques, energy dispersive x-ray spectrometry, secondary ion mass spectroscopy, atomic force microscopy, Fourier-transform infrared spectroscopy, and circular transmission line method. On that basis, a set of technological parameters was selected for the fabrication of devices lasing at a maximum temperature of 130 K from AlGaAs/GaAs structures grown by means of molecular beam epitaxy. Their resulting threshold-current densities were on a level of 1.5 kA/cm2. Furthermore, initial stage research regarding fabrication of Cu-based claddings is reported as these are theoretically more promising than the Au-based ones with regard to low-loss waveguide fabrication for THz QCLs.

  11. Multisensor-based real-time quality monitoring by means of feature extraction, selection and modeling for Al alloy in arc welding

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben

    2015-08-01

    Multisensory data fusion-based online welding quality monitoring has gained increasing attention in intelligent welding processes. This paper focuses on the automatic detection of typical welding defects for Al alloy in gas tungsten arc welding (GTAW) by analyzing the arc spectrum, sound, and voltage signals. Based on algorithms developed in the time and frequency domains, 41 feature parameters were extracted from these signals to characterize the welding process and seam quality. The proposed feature selection approach, a hybrid Fisher-based filter and wrapper, was then successfully utilized to evaluate the sensitivity of each feature and reduce the feature dimensionality. Finally, an optimal subset of 19 features was selected, yielding the highest accuracy, 94.72%, with the established classification model. This study provides a guideline for feature extraction, selection, and dynamic modeling based on heterogeneous multisensory data to achieve a reliable online defect detection system in arc welding.
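A Fisher-based filter ranks each feature by the ratio of between-class to within-class scatter. A minimal sketch on synthetic two-class data (the labels stand in for normal versus defective welds; the data are invented):

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature for binary labels y:
    between-class scatter over within-class scatter."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    v0, v1 = X[y == 0].var(axis=0), X[y == 1].var(axis=0)
    n0, n1 = (y == 0).sum(), (y == 1).sum()
    mu = X.mean(axis=0)
    return (n0 * (m0 - mu) ** 2 + n1 * (m1 - mu) ** 2) / (n0 * v0 + n1 * v1)

rng = np.random.default_rng(4)
y = np.repeat([0, 1], 100)          # 0 = normal weld, 1 = defect
X = rng.normal(size=(200, 4))
X[:, 2] += 2.0 * y                  # only feature 2 separates the classes
print(fisher_score(X, y).argmax())
```

In the hybrid scheme, this filter prunes the 41 candidates before a wrapper search evaluates the surviving subsets against the classifier itself.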

  12. Simultaneously selecting appropriate partners for gaming and strategy adaptation to enhance network reciprocity in the prisoner's dilemma

    NASA Astrophysics Data System (ADS)

    Tanimoto, Jun

    2014-01-01

    Network reciprocity is one mechanism for adding social viscosity, which leads to cooperative equilibrium in 2 × 2 prisoner's dilemma games. Previous studies have shown that cooperation can be enhanced by using a skewed, rather than a random, selection of partners for either strategy adaptation or the gaming process. Here we show that combining both processes for selecting a gaming partner and an adaptation partner further enhances cooperation, provided that an appropriate selection rule and parameters are adopted. We also show that this combined model significantly enhances cooperation by reducing the degree of activity in the underlying network; we measure the degree of activity with a quantity called effective degree. More precisely, during the initial evolutionary stage, in which the global cooperation fraction declines as initially allocated cooperators become defectors, the model shows that weak cooperative clusters perish and only a few strong cooperative clusters survive. This finding is the most important key to attaining significant network reciprocity.
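A minimal flavor of such models: a spatial prisoner's dilemma where the adaptation partner is chosen with payoff-proportional (skewed) probability and strategies are imitated by the Fermi rule. The payoff values, network, and update schedule below are illustrative, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal spatial prisoner's dilemma on a ring (degree 4) with Fermi-rule
# strategy adaptation; the adaptation partner is chosen with probability
# proportional to payoff (a simple "skewed" selection) rather than uniformly.
N, b, kT = 200, 1.3, 0.1          # players, temptation payoff, noise
nbrs = np.array([[(i + d) % N for d in (-2, -1, 1, 2)] for i in range(N)])
coop = rng.random(N) < 0.5        # True = cooperator

def payoffs(coop):
    """Weak PD: R=1, P=S=0, T=b, summed over the four neighbours."""
    c = coop.astype(float)
    pay = np.zeros(N)
    for d in range(4):
        opp = c[nbrs[:, d]]
        pay += np.where(coop, opp, b * opp)
    return pay

for _ in range(300):
    pay = payoffs(coop)
    i = rng.integers(N)
    w = pay[nbrs[i]] + 1e-9       # payoff-proportional partner choice
    j = nbrs[i][rng.choice(4, p=w / w.sum())]
    # Fermi rule: imitate j with probability 1/(1+exp((pay_i-pay_j)/kT)).
    if rng.random() < 1.0 / (1.0 + np.exp((pay[i] - pay[j]) / kT)):
        coop[i] = coop[j]
print(f"cooperator fraction: {coop.mean():.2f}")
```

The paper's contribution is to apply skewed selection to the gaming partner as well, and to track cluster survival during the initial decline phase.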

  13. Effects of raw material extrusion and steam conditioning on feed pellet quality and nutrient digestibility of growing meat rabbits.

    PubMed

    Liao, Kuoyao; Cai, Jingyi; Shi, Zhujun; Tian, Gang; Yan, Dong; Chen, Delin

    2017-06-01

    This study was conducted to investigate the effects of raw material extrusion and steam conditioning on the feed pellet quality and nutrient digestibility of growing meat rabbits, in order to determine appropriate rabbit feed processing methods and parameters. In Exp. 1, an orthogonal design was adopted. Barrel temperature, material moisture content, and feed rate were selected as test factors, and acid detergent fiber (ADF) content was selected as the evaluation index to find the optimum extrusion parameters. In Exp. 2, a two-factor design was adopted. Four kinds of rabbit feed were processed, with raw material extrusion using the optimum extrusion parameters from Exp. 1. A total of 40 healthy, 42-day-old rabbits of similar weight were used in a randomized design consisting of 4 groups with 10 replicates per group (1 rabbit per replicate). The adaptation period lasted 7 d, and the digestion trial lasted 4 d. The results were as follows: 1) ADF content was significantly affected by barrel temperature (P < 0.05); the optimum extrusion parameters were a barrel temperature of 125 °C, a moisture content of 16%, and a feed rate of 9 Hz. 2) Raw material extrusion and steam conditioning both significantly decreased powder percentage, pulverization ratio, and protein solubility (P < 0.05), and significantly improved the hardness and starch gelatinization degree of the feed (P < 0.05). They also had significant interaction effects on the processing quality of the feed (P < 0.05). 3) Extrusion significantly improved the apparent digestibility of dry matter and total energy (P < 0.05). Extrusion and steam conditioning both significantly improved the apparent digestibility of crude fiber (CF), ADF, and NDF (P < 0.05), but they had no interaction effects on apparent digestibility. Thus, using extrusion and steam conditioning together in the processing of weaning rabbit feed can improve pellet quality and the apparent nutrient digestibility of the feed.

  14. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. 
Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
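The Gauss-Newton iteration at the heart of the Parameter-Estimation Process can be sketched on a toy two-parameter problem (the model, weights, and damping below are illustrative, and the Jacobian uses forward differences, whereas MODFLOW-2000 computes sensitivities by the more accurate sensitivity-equation method):

```python
import numpy as np

# Toy weighted Gauss-Newton iteration; the two-parameter exponential
# below is a stand-in, not an actual MODFLOW simulation.
t = np.linspace(0.5, 10.0, 20)

def sim(p):
    """Hypothetical simulated values for parameters p = (amplitude, rate)."""
    return p[0] * (1.0 - np.exp(-p[1] * t))

rng = np.random.default_rng(6)
obs = sim(np.array([5.0, 0.6])) + rng.normal(0.0, 0.05, t.size)
w = np.full(t.size, 1.0 / 0.05**2)        # weights = 1 / error variance

p = np.array([3.0, 0.3])                  # starting parameter values
for _ in range(20):
    # Forward-difference sensitivities (Jacobian).
    J = np.column_stack([(sim(p + 1e-6 * np.eye(2)[k]) - sim(p)) / 1e-6
                         for k in range(2)])
    r = obs - sim(p)                      # weighted least-squares residuals
    # Normal equations with a small Marquardt-style damping term.
    A = J.T @ (w[:, None] * J) + 1e-8 * np.eye(2)
    p = p + np.linalg.solve(A, J.T @ (w * r))
    p[1] = max(p[1], 1e-3)                # keep the rate physical
print(p.round(3))
```

Each iteration minimizes the weighted least-squares objective locally; the "modified" aspects in MODFLOW-2000 (damping, scaling, convergence tests) refine this basic step.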

  15. Dream controller

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L; Wang, Qiang; Chow, Andrew J

    2013-11-26

    A method and apparatus for intelligently controlling continuous process variables. A Dream Controller comprises an Intelligent Engine mechanism and a number of Model-Free Adaptive (MFA) controllers, each of which is suitable to control a process with specific behaviors. The Intelligent Engine can automatically select the appropriate MFA controller and its parameters so that the Dream Controller can be easily used by people with limited control experience and those who do not have the time to commission, tune, and maintain automatic controllers.

  16. Analysis of 3D printing parameters of gears for hybrid manufacturing

    NASA Astrophysics Data System (ADS)

    Budzik, Grzegorz; Przeszlowski, Łukasz; Wieczorowski, Michal; Rzucidlo, Arkadiusz; Gapinski, Bartosz; Krolczyk, Grzegorz

    2018-05-01

    The paper deals with the analysis and selection of parameters for rapid prototyping of gears by selective sintering of metal powders. Based on an analysis of the market for additive manufacturing technology in different sectors of industry, the presented results show the wide spectrum of applications of RP systems in manufacturing processes for machine elements; considerable growth of these methods over the past years can be observed. The characteristic errors of the printed model with respect to the ideal one were pointed out for each technique. Special attention was paid to the method of preparation of numerical data (CAD/STL/RP). Moreover, an analysis of the manufacturing processes of gear-type elements is presented. The tested gears were modeled with different allowances for final machining and made by DMLS (direct metal laser sintering). Metallographic analysis and strength tests on prepared specimens were performed and used to compare the real properties of the material with the nominal ones. To improve the surface quality after sintering, the gears were subjected to final machining. The geometry of the gears after the hybrid manufacturing method was analyzed (fig. 1). The manufacturing process was defined in a traditional way as well as with the aid of modern manufacturing techniques. The methodology and obtained results can be applied to machine elements other than gears, and contribute to the general theory of production processes in rapid prototyping methods as well as in the design and implementation of production.

  17. Micropatterning of poly(dimethylsiloxane) using a photoresist lift-off technique for selective electrical insulation of microelectrode arrays

    PubMed Central

    Park, Jaewon; Kim, Hyun Soo; Han, Arum

    2009-01-01

    A poly(dimethylsiloxane) (PDMS) patterning method based on a photoresist lift-off technique to make an electrical insulation layer with selective openings is presented. The method enables creating PDMS patterns with small features and various thicknesses without design limitations and without the need for complicated processes or expensive equipment. Patterned PDMS layers were created by spin-coating liquid-phase PDMS on top of a substrate having sacrificial photoresist patterns, followed by a photoresist lift-off process. The thickness of the patterned PDMS layers could be accurately controlled (6.5–24 µm) by adjusting processing parameters such as PDMS spin-coating speed, PDMS dilution ratio, and sacrificial photoresist thickness. PDMS features as small as 15 µm were successfully patterned and the effects of each processing parameter on the final patterns were investigated. Electrical resistance tests between adjacent electrodes with and without the insulation layer showed that the patterned PDMS layer functions properly as an electrical insulation layer. Biocompatibility of the patterned PDMS layer was confirmed by culturing primary neuron cells on top of the layer for up to two weeks. An extensive neuronal network was successfully formed, showing that this PDMS patterning method can be applied to various biosensing microdevices. The utility of this fabrication method was further demonstrated by successfully creating a patterned electrical insulation layer on flexible substrates containing multi-electrode arrays. PMID:19946385

  18. Optimization of Straight Cylindrical Turning Using Artificial Bee Colony (ABC) Algorithm

    NASA Astrophysics Data System (ADS)

    Prasanth, Rajanampalli Seshasai Srinivasa; Hans Raj, Kandikonda

    2017-04-01

    The artificial bee colony (ABC) algorithm, which mimics the intelligent foraging behavior of honey bees, is increasingly gaining acceptance in the field of process optimization, as it is capable of handling nonlinearity, complexity and uncertainty. Straight cylindrical turning is a complex and nonlinear machining process which involves the selection of appropriate cutting parameters that affect the quality of the workpiece. This paper presents the estimation of optimal cutting parameters of the straight cylindrical turning process using the ABC algorithm. The ABC algorithm is first tested on four benchmark problems of numerical optimization and its performance is compared with the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm. Results indicate that the rate of convergence of the ABC algorithm is better than that of GA and ACO. Then, the ABC algorithm is used to predict optimal cutting parameters such as cutting speed, feed rate, depth of cut and tool nose radius to achieve good surface finish. Results indicate that the ABC algorithm estimated a surface finish comparable with that of the real-coded genetic algorithm and the differential evolution algorithm.
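    As a rough illustration of the foraging metaphor, the sketch below implements a minimal ABC loop with the three standard phases (employed bees, fitness-biased onlookers, and scouts that abandon exhausted sources). It is not the authors' implementation; the population size, trial limit, cycle count, and test function are arbitrary choices.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, cycles=200, seed=1):
    """Minimal artificial bee colony minimizer for a function f of `dim` variables."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def greedy_move(i):
        # perturb one coordinate toward/away from a random partner source
        k, j = rng.randrange(n_food), rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(hi, max(lo, cand[j]))
        fc = f(cand)
        if fc < fits[i]:                       # greedy selection
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):                # employed-bee phase
            greedy_move(i)
        for _ in range(n_food):                # onlooker phase: better sources chosen more often
            weights = [1.0 / (1.0 + ft) for ft in fits]  # assumes f >= 0
            greedy_move(rng.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):                # scout phase: abandon exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = f(foods[i]), 0

    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]
```

    Run on the 3-D sphere benchmark (one of the classic numerical test problems of the kind the abstract mentions), the loop drives the objective close to zero within a few thousand evaluations.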

  19. Design Optimization of Microalloyed Steels Using Thermodynamics Principles and Neural-Network-Based Modeling

    NASA Astrophysics Data System (ADS)

    Mohanty, Itishree; Chintha, Appa Rao; Kundu, Saurabh

    2018-06-01

    The optimization of process parameters and composition is essential to achieve the desired properties with minimal additions of alloying elements in microalloyed steels. In some cases, it may be possible to substitute such steels for those which are more richly alloyed. However, process control involves a large number of parameters, making the relationship between structure and properties difficult to assess. In this work, neural network models have been developed to estimate the mechanical properties of steels containing Nb + V or Nb + Ti. The outcomes have been validated by thermodynamic calculations and plant data. It has been shown that subtle thermodynamic trends can be captured by the neural network model. Some experimental rolling data have also been used to support the model, which has additionally been applied to calculate the cost of optimizing microalloyed steel. The generated Pareto fronts identify many combinations of strength and elongation, making it possible to select composition and process parameters for a range of applications. The ANN model and the optimization model are being used for prediction of properties in a running plant and for development of new alloys, respectively.

  20. Development of Experimental Setup of Metal Rapid Prototyping Machine using Selective Laser Sintering Technique

    NASA Astrophysics Data System (ADS)

    Patil, S. N.; Mulay, A. V.; Ahuja, B. B.

    2018-04-01

    Unlike traditional manufacturing processes, additive manufacturing (rapid prototyping) allows designers to produce parts that were previously considered too complex to make economically. A shift is taking place from plastic prototypes to fully functional metallic parts made by direct deposition of metallic powders, as the produced parts can be used directly for their intended purpose. This work is directed towards the development of an experimental setup of a metal rapid prototyping machine using selective laser sintering (SLS), and studies the various parameters that play an important role in metal rapid prototyping using the SLS technique. The machine structure is mainly divided into three categories: (1) Z-movement of the bed and table, (2) X-Y movement arrangement for laser movement and (3) the feeder mechanism. Z-movement of the bed is controlled using a lead screw, bevel gear pair and stepper motor, which maintains the accuracy of the layer thickness. X-Y movements are controlled using timing belts and stepper motors for precise movement of the laser source. A feeder mechanism was then developed to control the uniformity of the metal-powder layer thickness. Simultaneously, a study was carried out on the selection of material. Various types of metal powders can be used for metal RP: a single metal powder, a mixture of two metal powders, or a combination of metal and polymer powders. The study concludes that a mixture of two metal powders minimizes problems such as the balling effect and porosity. The developed system can be validated by conducting various experiments on manufactured parts to check mechanical and metallurgical properties. After studying the results of these experiments, various process parameters such as laser properties (power, speed, etc.) and material properties (grain size and structure, etc.) will be optimized.
This work is mainly focused on the design and development of a cost-effective experimental setup for metal rapid prototyping using the SLS technique, which gives a feel for the metal rapid prototyping process and its important parameters.

  1. Synthesis and Explosive Consolidation of Titanium, Aluminium, Boron and Carbon Containing Powders

    NASA Astrophysics Data System (ADS)

    Chikhradze, Mikheil; Oniashvili, George; Chikhradze, Nikoloz; D. S Marquis, Fernand

    2016-10-01

    The development of modern technologies in the field of materials science has increased interest in bulk materials with improved physical, chemical and mechanical properties. Composites fabricated in Ti-Al-B-C systems are characterized by unique physical and mechanical properties, and are attractive for aerospace, power engineering, machine and chemical applications. The paper describes technologies to fabricate ultrafine-grained powder and bulk materials in the Ti-Al-B-C system. It includes results of theoretical and experimental investigations for the selection of powder compositions and the determination of thermodynamic conditions for blend preparation, as well as optimal technological parameters for mechanical alloying and adiabatic compaction. Coarse crystalline Ti, Al and C powders and amorphous B were used as precursors, and blends with different compositions of Ti-Al, Ti-Al-C, Ti-B-C and Ti-Al-B were prepared. Preliminary determination/selection of the blend compositions was made on the basis of phase diagrams. The powders were mixed according to the selected ratios of components to produce the blends. Blends were processed in a "Fritsch" Planetary premium line ball mill for mechanical alloying, synthesis of new phases, amorphization and ultrafine powder production. The blend processing time was varied from 1 to 20 hours. The optimal technological regimes of nano-blend preparation were determined experimentally. Ball-milled nano blends were placed in a metallic tube and loaded by shock waves to realize consolidation in the adiabatic regime. The structure and properties of the obtained ultrafine-grained materials are investigated and discussed as a function of the processing parameters. For consolidation of the mixture, explosive compaction technology was applied at room temperature: the prepared mixtures were placed in a low-carbon steel tube and blast energy was used for explosive consolidation of the compositions.
The paper also describes how the ball-milling parameters and explosive consolidation conditions affect the structure and properties of the obtained samples.

  2. Multi-parameter phenotypic profiling: using cellular effects to characterize small-molecule compounds.

    PubMed

    Feng, Yan; Mitchison, Timothy J; Bender, Andreas; Young, Daniel W; Tallarico, John A

    2009-07-01

    Multi-parameter phenotypic profiling of small molecules provides important insights into their mechanisms of action, as well as a systems-level understanding of biological pathways and their responses to small-molecule treatments. It therefore deserves more attention at an early step in the drug discovery pipeline. Here, we summarize the technologies currently in use for phenotypic profiling in the drug discovery context, including mRNA-, protein- and imaging-based multi-parameter profiling. We think that an earlier integration of phenotypic profiling technologies, combined with effective experimental and in silico target identification approaches, can improve success rates of lead selection and optimization in the drug discovery process.

  3. Cryogenic Etching of High Aspect Ratio 400 nm Pitch Silicon Gratings.

    PubMed

    Miao, Houxun; Chen, Lei; Mirzaeimoghri, Mona; Kasica, Richard; Wen, Han

    2016-10-01

    The cryogenic process and the Bosch process are two widely used processes for reactive ion etching of high aspect ratio silicon structures. This paper focuses on the cryogenic deep etching of 400 nm pitch silicon gratings with various etching mask materials including polymer, Cr, SiO2 and Cr-on-polymer. The undercut is found to be the key factor limiting the achievable aspect ratio for the direct hard masks of Cr and SiO2, while etch selectivity is the limiting factor for the polymer mask. The Cr-on-polymer mask provides the same high selectivity as Cr and reduces the excessive undercut introduced by direct hard masks. By optimizing the etching parameters, we etched a 400 nm pitch grating to ≈10.6 μm depth, corresponding to an aspect ratio of ≈53.
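    The quoted aspect ratio follows directly from the grating geometry: at 400 nm pitch with an assumed ~50% duty cycle (the duty cycle is not stated in the abstract), the etched trench is about 200 nm wide, so a 10.6 µm depth gives roughly 53. As a quick check:

```python
# Relate grating pitch, assumed duty cycle, and etch depth to aspect ratio.
pitch_nm = 400
duty_cycle = 0.5               # assumption: equal line and trench widths
depth_um = 10.6

trench_um = pitch_nm * duty_cycle / 1000.0   # trench width in micrometres
aspect_ratio = depth_um / trench_um
print(round(aspect_ratio))                   # prints 53, matching the reported value
```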

  4. Effects of binge drinking and hangover on response selection sub-processes-a study using EEG and drift diffusion modeling.

    PubMed

    Stock, Ann-Kathrin; Hoffmann, Sven; Beste, Christian

    2017-09-01

    Effects of binge drinking on cognitive control and response selection are increasingly recognized in research on alcohol (ethanol) effects. Yet, little is known about how those processes are modulated by hangover effects. Given that acute intoxication and hangover seem to be characterized by partly divergent effects and mechanisms, further research on this topic is needed. In the current study, we hence investigated this with a special focus on potentially differential effects of alcohol intoxication and subsequent hangover on sub-processes involved in the decision to select a response. We do so by combining drift diffusion modeling of behavioral data with neurophysiological (EEG) data. Contrary to what might be expected, the results do not show impairment across all assessed measures. Instead, they show specific effects of high-dose alcohol intoxication and hangover on selective drift diffusion model and EEG parameters (as compared to a sober state). While the acute intoxication induced by binge drinking decreased the drift rate, it was increased by the subsequent hangover, indicating more efficient information accumulation during hangover. Further, the non-decisional processes of information encoding decreased with intoxication, but not during hangover. These effects were reflected in modulations of the N2, P1 and N1 event-related potentials, which reflect conflict monitoring, perceptual gating and attentional selection processes, respectively. As regards the functional neuroanatomical architecture, the anterior cingulate cortex (ACC) as well as occipital networks seem to be modulated. Even though alcohol is known to have broad neurobiological effects, its effects on cognitive processes are rather specific. © 2016 Society for the Study of Addiction.
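    The drift diffusion account can be illustrated with a toy simulation: noisy evidence accumulates at a mean drift rate until it hits a decision boundary, and a non-decision time (encoding plus motor processes) is added to the first-passage time. All parameter values below are illustrative, not those fitted in the study.

```python
import random

def ddm_trial(drift, boundary=1.0, non_decision=0.3, dt=0.001, noise=1.0, rng=random):
    """One diffusion trial: accumulate evidence from 0 to +/-boundary.
    Returns (choice, reaction_time); +1 means the upper (correct) boundary."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (1 if x > 0 else -1), t + non_decision

def accuracy(drift, n=500, seed=7):
    """Fraction of trials ending at the upper boundary, over n simulated trials."""
    rng = random.Random(seed)
    return sum(ddm_trial(drift, rng=rng)[0] == 1 for _ in range(n)) / n
```

    A higher drift rate, as the abstract reports during hangover, means evidence piles up toward the correct boundary faster, so simulated accuracy rises with drift.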

  5. Airbreathing engine selection criteria for SSTO propulsion system

    NASA Astrophysics Data System (ADS)

    Ohkami, Yoshiaki; Maita, Masataka

    1995-02-01

    This paper presents airbreathing engine selection criteria to be applied to the propulsion system of a Single Stage To Orbit (SSTO) vehicle. To establish the criteria, a relation among three major parameters, i.e., delta-V capability, weight penalty, and effective specific impulse of the engine subsystem, is derived and compared with these parameters for the LH2/LOX rocket engine. The effective specific impulse is a function of the engine Isp and the vehicle thrust-to-drag ratio, which is approximated by a function of the vehicle velocity. The weight penalty includes the engine dry weight and the cooling subsystem weight. The delta-V capability is defined by the velocity region from the minimum operating velocity up to the maximum velocity. The vehicle feasibility is investigated in terms of the structural and propellant weights, which requires an iteration process adjusting the system parameters. The system parameters are computed by iteration based on the Newton-Raphson method. It has been concluded that performance in the higher velocity region is extremely important, so that the airbreathing engines are required to operate beyond the velocity equivalent to the rocket engine exhaust velocity (approximately 4500 m/s).
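    The Newton-Raphson iteration used for the system parameters works, in its simplest scalar form, as sketched below. The worked example (solving the rocket equation for the propellant mass fraction that delivers the 4500 m/s figure quoted above, with an assumed effective Isp of 450 s) is purely illustrative and not taken from the paper; a closed form exists here, whereas the paper's coupled sizing problem has none.

```python
import math

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative numbers: find the propellant fraction pf such that
# g0 * Isp * ln(1 / (1 - pf)) = dv  (Tsiolkovsky rocket equation).
g0, isp, dv = 9.80665, 450.0, 4500.0
f = lambda pf: g0 * isp * math.log(1.0 / (1.0 - pf)) - dv
fp = lambda pf: g0 * isp / (1.0 - pf)
pf = newton(f, fp, 0.5)
```

    The iteration converges in a handful of steps; the closed-form answer 1 - exp(-dv / (g0 * Isp)) serves as a check.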

  6. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it is the component that requires most of the processing time necessary to segment an image. The main contribution of this work is focused on how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving recognition rate. Vector quantization is used in order to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used in order to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between recognition rate and processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of this new cell segmentation quality criterion produces efficient cell segmentation.
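    The vector quantization step can be sketched with plain k-means: the huge expert-labeled pixel set is replaced by a small set of prototype vectors, and the SVM is then trained on the prototypes instead of every pixel. This is a generic sketch under that reading of the abstract, not the authors' code; the initialization scheme and cluster count are arbitrary choices.

```python
def kmeans(points, k, iters=30):
    """Plain k-means vector quantizer: shrink a pixel feature set to k prototypes."""
    # simple spread initialization over the input order (k-means++ would be better)
    centers = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # move each center to its cluster mean; keep it if the cluster emptied
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers
```

    Training the classifier on, say, a few hundred prototypes per class rather than millions of raw pixels is what shrinks the SVM decision function and hence the per-image segmentation time.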

  7. Selection of solubility parameters for characterization of pharmaceutical excipients.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam; Héberger, Károly

    2007-11-09

    The solubility parameter (δ2), the corrected solubility parameter (δT) and its components (δd, δp, δh) were determined for a series of pharmaceutical excipients by using inverse gas chromatography (IGC). Principal component analysis (PCA) was applied for the selection of the solubility parameters which assure the complete characterization of the examined materials. Application of PCA suggests that a complete description of the examined materials is achieved with four solubility parameters, i.e. δ2 and the Hansen solubility parameters (δd, δp, δh). Selection of the excipients through PCA of their solubility parameter data can be used for prediction of their behavior in a multi-component system, e.g. for selection of the best materials to form stable pharmaceutical liquid mixtures or a stable coating formulation.
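    The mechanics of the PCA step can be illustrated with a minimal power-iteration sketch that extracts the leading component of a small, mean-centred data matrix. The input values in the test are made up for illustration, not the measured solubility parameters from the paper.

```python
def first_principal_component(rows, iters=100):
    """Leading PCA direction of a list of equal-length numeric tuples,
    found by power iteration on the sample covariance matrix."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]   # mean-centre
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                             # renormalize
    return v
```

    In the paper's setting, the loadings of the leading components indicate which of the solubility parameters carry independent information, which is what drives the selection of the four-parameter description.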

  8. Influence of Structural Features and Fracture Processes on Surface Roughness: A Case Study from the Krosno Sandstones of the Górka-Mucharz Quarry (Little Beskids, Southern Poland)

    NASA Astrophysics Data System (ADS)

    Pieczara, Łukasz

    2015-09-01

    The paper presents the results of analysis of surface roughness parameters in the Krosno Sandstones of Mucharz, southern Poland. It was aimed at determining whether these parameters are influenced by structural features (mainly the laminar distribution of mineral components and directional distribution of non-isometric grains) and fracture processes. The tests applied in the analysis enabled us to determine and describe the primary statistical parameters used in the quantitative description of surface roughness, as well as specify the usefulness of contact profilometry as a method of visualizing spatial differentiation of fracture processes in rocks. These aims were achieved by selecting a model material (Krosno Sandstones from the Górka-Mucharz Quarry) and an appropriate research methodology. The schedule of laboratory analyses included: identification analyses connected with non-destructive ultrasonic tests, aimed at the preliminary determination of rock anisotropy, strength point load tests (cleaved surfaces were obtained due to destruction of rock samples), microscopic analysis (observation of thin sections in order to determine the mechanism of inducing fracture processes) and a test method of measuring surface roughness (two- and three-dimensional diagrams, topographic and contour maps, and statistical parameters of surface roughness). The highest values of roughness indicators were achieved for surfaces formed under the influence of intragranular fracture processes (cracks propagating directly through grains). This is related to the structural features of the Krosno Sandstones (distribution of lamination and bedding).
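    Two of the primary statistical parameters used in the quantitative description of surface roughness, the arithmetic mean deviation Ra and the root-mean-square roughness Rq, can be computed from a sampled profile as follows. This is a generic sketch of the standard definitions, not the paper's own analysis code.

```python
def roughness_stats(profile):
    """Ra (arithmetic mean deviation) and Rq (RMS roughness) of a sampled
    height profile, measured about the profile's mean line."""
    n = len(profile)
    mean = sum(profile) / n
    dev = [z - mean for z in profile]          # heights relative to the mean line
    ra = sum(abs(z) for z in dev) / n
    rq = (sum(z * z for z in dev) / n) ** 0.5
    return ra, rq
```

    By the Cauchy-Schwarz inequality Rq is never smaller than Ra, which is why both are usually reported together: their ratio hints at how spiky the surface is.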

  9. GRCop-84 Rolling Parameter Study

    NASA Technical Reports Server (NTRS)

    Loewenthal, William S.; Ellis, David L.

    2008-01-01

    This report is a section of the final report on the GRCop-84 task of the Constellation Program and incorporates the results obtained between October 2000 and September 2005, when the program ended. NASA Glenn Research Center (GRC) has developed a new copper alloy, GRCop-84 (Cu-8 at.% Cr-4 at.% Nb), for rocket engine main combustion chamber components that will improve rocket engine life and performance. This work examines the sensitivity of GRCop-84 mechanical properties to rolling parameters as a means to better define rolling parameters for commercial warm rolling. Experiment variables studied were total reduction, rolling temperature, rolling speed, and post rolling annealing heat treatment. The responses were tensile properties measured at 23 and 500 C, hardness, and creep at three stress-temperature combinations. Understanding these relationships will better define boundaries for a robust commercial warm rolling process. The four processing parameters were varied within limits consistent with typical commercial production processes. Testing revealed that the rolling-related variables selected have a minimal influence on tensile, hardness, and creep properties over the range of values tested. Annealing had the expected result of lowering room temperature hardness and strength while increasing room temperature elongations with 600 C (1112 F) having the most effect. These results indicate that the process conditions to warm roll plate and sheet for these variables can range over wide levels without negatively impacting mechanical properties. Incorporating broader process ranges in future rolling campaigns should lower commercial rolling costs through increased productivity.

  10. Process Parameters Optimization in Single Point Incremental Forming

    NASA Astrophysics Data System (ADS)

    Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh

    2016-04-01

    This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of degrees of freedom (DOF). The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels each of sheet thickness, step size, tool rotational speed, feed rate and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The process parameters most influential on formability and surface roughness have been identified with the help of statistical tools (response table, main effect plot and ANOVA). The parameter with the utmost influence on both formability and surface roughness is lubrication. For formability, lubrication is followed by tool rotational speed, feed rate, sheet thickness, step size and tool radius in descending order of influence, whereas for surface roughness, lubrication is followed by feed rate, step size, tool radius, sheet thickness and tool rotational speed. The predicted optimal values for the wall angle and surface roughness are 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm respectively.
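    Taguchi analyses of this kind typically convert the measured responses into signal-to-noise (S/N) ratios before building the response table: larger-the-better for the wall angle, smaller-the-better for the roughness. The formulas below are the standard textbook ones, not the authors' spreadsheet; the level means of these S/N values per factor are what populate a response table.

```python
import math

def sn_larger_better(values):
    """Taguchi S/N ratio (dB) when larger is better, e.g. wall angle:
    -10 * log10( mean of 1/y^2 )."""
    return -10.0 * math.log10(sum(1.0 / (v * v) for v in values) / len(values))

def sn_smaller_better(values):
    """Taguchi S/N ratio (dB) when smaller is better, e.g. surface roughness:
    -10 * log10( mean of y^2 )."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))
```

    A factor level is "better" when it yields a higher S/N ratio; ranking factors by their S/N range is what produces the descending-order influence lists quoted in the abstract.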

  11. Model of the best-of-N nest-site selection process in honeybees.

    PubMed

    Reina, Andreagiovanni; Marshall, James A R; Trianni, Vito; Bose, Thomas

    2017-05-01

    The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modeled and theoretically analyzed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision-process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalize our findings to a best-of-N decision scenario with one superior nest and N-1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signaling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signaling behaviors. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signaling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signaling ratio.
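    The binary value-sensitive mechanism that this work generalizes can be sketched as a small system of population equations: discovery and recruitment scale with nest value v_i, abandonment with 1/v_i, and committed populations suppress each other through cross-inhibitory stop-signaling. The rate structure follows the general form of such models, but the specific coefficients, parameter values, and forward-Euler integration below are illustrative assumptions, not the paper's analysis.

```python
def value_sensitive(v, sigma=4.0, dt=0.01, steps=5000):
    """Euler integration of a binary value-sensitive decision model.
    v = (v1, v2) are the two nest qualities; sigma is the stop-signal rate.
    Returns the committed population fractions (x1, x2)."""
    x = [0.0, 0.0]
    for _ in range(steps):
        u = 1.0 - x[0] - x[1]                 # uncommitted fraction
        dx = [v[i] * u                        # independent discovery
              - x[i] / v[i]                   # abandonment (slower for better nests)
              + v[i] * u * x[i]               # recruitment of the uncommitted
              - sigma * x[1 - i] * x[i]       # cross-inhibitory stop-signaling
              for i in range(2)]
        x = [x[i] + dt * dx[i] for i in range(2)]
    return x
```

    With unequal nest values and enough stop-signaling, the population converges on the better option, which is the baseline behavior the best-of-N analysis extends to N alternatives.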

  12. Response terminated displays unload selective attention

    PubMed Central

    Roper, Zachary J. J.; Vecera, Shaun P.

    2013-01-01

    Perceptual load theory successfully replaced the early vs. late selection debate by appealing to adaptive control over the efficiency of selective attention. Early selection is observed unless perceptual load (p-Load) is sufficiently low to grant attentional “spill-over” to task-irrelevant stimuli. Many studies exploring load theory have used limited display durations that perhaps impose artificial limits on encoding processes. We extended the exposure duration in a classic p-Load task to alleviate temporal encoding demands that may otherwise tax mnemonic consolidation processes. If the load effect arises from perceptual demands alone, then freeing-up available mnemonic resources by extending the exposure duration should have little effect. The results of Experiment 1 falsify this prediction. We observed a reliable flanker effect under high p-Load, response-terminated displays. Next, we orthogonally manipulated exposure duration and task-relevance. Counter-intuitively, we found that the likelihood of observing the flanker effect under high p-Load resides with the duration of the task-relevant array, not the flanker itself. We propose that stimulus and encoding demands interact to produce the load effect. Our account clarifies how task parameters differentially impinge upon cognitive processes to produce attentional “spill-over” by appealing to visual short-term memory as an additional processing bottleneck when stimuli are briefly presented. PMID:24399983

  13. Response terminated displays unload selective attention.

    PubMed

    Roper, Zachary J J; Vecera, Shaun P

    2013-01-01

    Perceptual load theory successfully replaced the early vs. late selection debate by appealing to adaptive control over the efficiency of selective attention. Early selection is observed unless perceptual load (p-Load) is sufficiently low to grant attentional "spill-over" to task-irrelevant stimuli. Many studies exploring load theory have used limited display durations that perhaps impose artificial limits on encoding processes. We extended the exposure duration in a classic p-Load task to alleviate temporal encoding demands that may otherwise tax mnemonic consolidation processes. If the load effect arises from perceptual demands alone, then freeing-up available mnemonic resources by extending the exposure duration should have little effect. The results of Experiment 1 falsify this prediction. We observed a reliable flanker effect under high p-Load, response-terminated displays. Next, we orthogonally manipulated exposure duration and task-relevance. Counter-intuitively, we found that the likelihood of observing the flanker effect under high p-Load resides with the duration of the task-relevant array, not the flanker itself. We propose that stimulus and encoding demands interact to produce the load effect. Our account clarifies how task parameters differentially impinge upon cognitive processes to produce attentional "spill-over" by appealing to visual short-term memory as an additional processing bottleneck when stimuli are briefly presented.

  14. Model of the best-of-N nest-site selection process in honeybees

    NASA Astrophysics Data System (ADS)

    Reina, Andreagiovanni; Marshall, James A. R.; Trianni, Vito; Bose, Thomas

    2017-05-01

    The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modeled and theoretically analyzed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision-process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalize our findings to a best-of-N decision scenario with one superior nest and N -1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signaling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signaling behaviors. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signaling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signaling ratio.

  15. Physicochemical Characteristic of Municipal Wastewater in Tropical Area: Case Study of Surabaya City, Indonesia

    NASA Astrophysics Data System (ADS)

    Wijaya, I. M. W.; Soedjono, E. S.

    2018-03-01

    Municipal wastewater is a main contributor to diverse water pollution problems. To prevent pollution risks, wastewater has to be treated before being discharged into receiving waters. Selecting an appropriate treatment process requires information on wastewater characteristics as a design consideration. This study aims to analyse the physicochemical characteristics of municipal wastewater at the inlet and outlet of ABR units around Surabaya City. Medokan Semampir and Genteng Candi Rejo were selected as wastewater sampling points. The samples were analysed in the laboratory for parameters such as pH, TSS, COD, BOD, NH4+, NO3-, NO2-, P, and detergent. The results showed that all parameters at both locations are below the national standard for discharged water quality. In other words, the treated water can be safely discharged to the river.
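The comparison of measured parameters against a discharge standard, as performed in this study, can be sketched as a simple compliance check. The limit values below are hypothetical placeholders, not the Indonesian national standard.

```python
# Hypothetical discharge limits (low, high) -- placeholders, not the actual
# Indonesian national standard values.
LIMITS = {"pH": (6.0, 9.0), "TSS": (None, 30.0),
          "BOD": (None, 30.0), "COD": (None, 100.0)}

def compliant(sample):
    """True if every measured parameter lies within its (low, high) limits."""
    for name, (lo, hi) in LIMITS.items():
        value = sample[name]
        if lo is not None and value < lo:
            return False
        if hi is not None and value > hi:
            return False
    return True

# hypothetical measurements at an ABR inlet and outlet
inlet = {"pH": 7.2, "TSS": 45.0, "BOD": 18.0, "COD": 80.0}
outlet = {"pH": 7.2, "TSS": 25.0, "BOD": 18.0, "COD": 80.0}
```

Here `compliant(outlet)` is `True` while `compliant(inlet)` is `False`, since the inlet TSS exceeds the (hypothetical) limit.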

  16. Coupled parametric design of flow control and duct shape

    NASA Technical Reports Server (NTRS)

    Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)

    2009-01-01

    A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
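The final selection step described above, evaluating several performance metrics in a Pareto analysis, can be sketched as a non-dominated filter over candidate designs. The geometry names and metric values below are hypothetical.

```python
def pareto_front(designs):
    """Return names of non-dominated designs, minimizing every metric."""
    front = []
    for name, metrics in designs:
        dominated = any(
            all(o <= v for o, v in zip(other, metrics))
            and any(o < v for o, v in zip(other, metrics))
            for _, other in designs
        )
        if not dominated:
            front.append(name)
    return front

# hypothetical duct geometries scored on (pressure-loss coeff., distortion index)
candidates = [
    ("geom_A", (0.030, 0.12)),
    ("geom_B", (0.025, 0.15)),
    ("geom_C", (0.035, 0.20)),   # dominated by geom_A on both metrics
    ("geom_D", (0.022, 0.18)),
]
front = pareto_front(candidates)   # -> ['geom_A', 'geom_B', 'geom_D']
```

A design is kept only if no other candidate is at least as good on every metric and strictly better on at least one; the designer then trades off along the resulting front.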

  17. Selection of Compositions in Ti-Cr-C-Steel, Ti-B, Ti-B-Me Systems and Establishing Synthesis Parameters for Obtaining Product by “SHS-Electrical Rolling”

    NASA Astrophysics Data System (ADS)

    Aslamazashvili, Zurab; Tavadze, Giorgi; Chikhradze, Mikheil; Namicheishvili, Teimuraz; Melashvili, Zaqaria

    2017-12-01

    For the production of materials by the proposed Self-propagating High-Temperature Synthesis (SHS)-Electric Rolling method, there are no limitations on the length of the material, and the width depends only on the length of the rolls. This innovative method enables the process to run in a nonstop regime, made possible by combining the SHS method with electric rolling. For the process to be realized it is necessary and sufficient that the initial components, after initiation by a thermal pulse, react with sufficient heat emission to sustain self-propagation of the synthesis front by heat transfer through the whole sample. Immediately after initiation, the rolls start rotating at a set speed to move the material; this speed should equal the propagation speed of the synthesis front. The synthesized product, in a hot plastic condition, is delivered to the rolls in a nonstop regime while current is supplied in the deformation zone to compensate for energy losses. As a result, the innovative SHS-Electric Rolling technology yields long metal-ceramic products. In the present paper, optimal compositions of SHS charges were selected in the Ti-Cr-C-Steel, Ti-B and Ti-B-Me systems. For the selection of the compositions, a thermodynamic analysis was carried out, which made it possible to determine the adiabatic synthesis temperature theoretically and to determine the equilibrium concentrations of the synthesized product at the synthesis temperature. The thermodynamic analysis also made it possible to determine optimal charge compositions and to define the conditions important for correct realization of the synthesis process. To obtain non-porous materials and products by SHS-Electric Rolling, the synthesis and compacting parameters, namely the pressure and the time, must be selected correctly.
In the Ti-Cr-C-Steel, Ti-B and Ti-B-Me systems, the high quality (non-porous or low porosity, <2%) of materials and products depends directly on the liquid-phase content just after the synthesis front passes through the sample: the higher the liquid-phase content, the higher the material quality. The liquid-phase content in turn depends on the synthesis parameters, the speed and temperature of synthesis; the higher these are, the more liquid phase is formed. The speed and temperature of synthesis depend on the relative density Δρ of the sample formed from the initial charge, that is, on the pressure applied when forming the sample. The paper reports the optimal pressures determined for the Ti-Cr-C-Steel, Ti-B and Ti-B-Me systems: 50-70 MPa, 180-220 MPa and 45-70 MPa, respectively.

  18. Continuous separation of copper ions from a mixture of heavy metal ions using a three-zone carousel process packed with metal ion-imprinted polymer.

    PubMed

    Jo, Se-Hee; Lee, See-Young; Park, Kyeong-Mok; Yi, Sung Chul; Kim, Dukjoon; Mun, Sungyong

    2010-11-05

    In this study, a three-zone carousel process based on a suitable molecularly imprinted polymer (MIP) resin was developed for the continuous separation of Cu(2+) from Mn(2+) and Co(2+). For this task, the Cu(II)-imprinted polymer (Cu-MIP) resin was first synthesized and used to pack the chromatographic columns of a three-zone carousel process. Prior to the experiments on the carousel process based on the Cu-MIP resin (MIP-carousel process), a series of single-column experiments were performed to estimate the intrinsic parameters of the three heavy metal ions and to determine appropriate conditions for regeneration and re-equilibration. The results from these single-column experiments and additional computer simulations were then used to determine the operating parameters of the MIP-carousel process under consideration. Based on the determined operating parameters, the MIP-carousel experiments were carried out. The experimental results confirmed that the proposed MIP-carousel process was markedly effective in separating Cu(2+) from Mn(2+) and Co(2+) in a continuous mode with high purity and relatively small losses. Thus, the MIP-carousel process developed in this study merits attention in materials-processing and metal-related industries, where the selective separation of heavy metal ions with the same charge has been a major concern. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Additive Manufacturing of Nickel Superalloys: Opportunities for Innovation and Challenges Related to Qualification

    NASA Astrophysics Data System (ADS)

    Babu, S. S.; Raghavan, N.; Raplee, J.; Foster, S. J.; Frederick, C.; Haines, M.; Dinwiddie, R.; Kirka, M. K.; Plotkowski, A.; Lee, Y.; Dehoff, R. R.

    2018-06-01

    Innovative designs for turbines can be achieved through advances in nickel-based superalloys and manufacturing methods, including the adoption of additive manufacturing (AM). In this regard, selective electron beam melting (SEBM) and selective laser melting (SLM) of nickel-based superalloys provide distinct advantages. Furthermore, directed energy deposition (DED) processes can be used for repair and reclamation of nickel alloy components. The current paper explores opportunities for innovation and qualification challenges with respect to the deployment of AM as a disruptive manufacturing technology. In the first part of the paper, fundamental correlations of processing parameters to defect tendency and microstructure evolution are explored using the DED process. In the second part, opportunities for innovation in terms of site-specific control of microstructure during processing are discussed. In the third part, challenges in the qualification of AM parts for service are discussed, and potential methods to alleviate these issues through in situ process monitoring and big-data analytics are proposed.

  20. Layerwise Monitoring of the Selective Laser Melting Process by Thermography

    NASA Astrophysics Data System (ADS)

    Krauss, Harald; Zeugner, Thomas; Zaeh, Michael F.

    Selective Laser Melting is utilized to build parts directly from CAD data. In this study layerwise monitoring of the temperature distribution is used to gather information about the process stability and the resulting part quality. The heat distribution varies with different kinds of parameters including scan vector length, laser power, layer thickness and inter-part distance in the job layout. By integration of an off-axis mounted uncooled thermal detector, the solidification as well as the layer deposition are monitored and evaluated. This enables the identification of hot spots in an early stage during the solidification process and helps to avoid process interrupts. Potential quality indicators are derived from spatially resolved measurement data and are correlated to the resulting part properties. A model of heat dissipation is presented based on the measurement of the material response for varying heat input. Current results show the feasibility of process surveillance by thermography for a limited section of the building platform in a commercial system.

  1. Hemodynamic changes in a rat parietal cortex after endothelin-1-induced middle cerebral artery occlusion monitored by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ma, Yushu; Dou, Shidan; Wang, Yi; La, Dongsheng; Liu, Jianghong; Ma, Zhenhe

    2016-07-01

    A blockage of the middle cerebral artery (MCA) on the cortical branch will seriously affect the blood supply of the cerebral cortex. Real-time monitoring of MCA hemodynamic parameters is critical for therapy and rehabilitation. Optical coherence tomography (OCT) is a powerful imaging modality that can produce not only structural images but also functional information on the tissue. We use OCT to detect hemodynamic changes after MCA branch occlusion. We injected a selected dose of endothelin-1 (ET-1) at a depth of 1 mm near the MCA and let the blood vessels follow a process first of occlusion and then of slow reperfusion as realistically as possible to simulate local cerebral ischemia. During this period, we used optical microangiography and Doppler OCT to obtain multiple hemodynamic MCA parameters. The change trend of these parameters from before to after ET-1 injection clearly reflects the dynamic regularity of the MCA. These results show the mechanism of the cerebral ischemia-reperfusion process after a transient middle cerebral artery occlusion and confirm that OCT can be used to monitor hemodynamic parameters.

  2. ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, Muthumari; Tamang, Santosh

    2017-08-01

    Metal Matrix Composites (MMCs) show improved properties in comparison with non-reinforced alloys and have found increasing application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of the desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons, and surface roughness was the output neuron. A 3-5-1 ANN architecture was found to be optimum, and the model predicts with an average percentage error of 7.72%. The Particle Swarm Optimization (PSO) technique is used for optimizing parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergent capability with a minimum number of iterations.
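The coupling of a trained roughness model with PSO, as described above, might be sketched as follows. The `surrogate_roughness` function is a hypothetical polynomial stand-in for the trained 3-5-1 ANN, and the coefficients and parameter bounds are illustrative, not the paper's values.

```python
import random

def surrogate_roughness(N, f, d):
    """Hypothetical stand-in for the trained 3-5-1 ANN (Ra in um)."""
    return 0.8 + 40.0 * f + 1500.0 * f * f - 0.0008 * N + 0.05 * d

def pso_minimize(obj, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO with positions clamped to the box bounds."""
    random.seed(1)                         # reproducible sketch
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], bounds[j][0]), bounds[j][1])
            val = obj(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# N in rpm, f in mm/rev, d in mm -- illustrative ranges
bounds = [(500, 2000), (0.05, 0.25), (0.5, 2.0)]
(best_N, best_f, best_d), ra = pso_minimize(surrogate_roughness, bounds)
```

Because the surrogate penalizes feed rate most heavily, the swarm is driven toward low f and high N, the usual qualitative result for surface-roughness minimization in turning.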

  3. [Active surveillance of adverse drug reaction in the era of big data: challenge and opportunity for control selection].

    PubMed

    Wang, S F; Zhan, S Y

    2016-07-01

    Electronic healthcare databases have become an important source for active surveillance of drug safety in the era of big data. The traditional epidemiology research designs are needed to confirm the association between drug use and adverse events based on these datasets, and the selection of the comparative control is essential to each design. This article aims to explain the principle and application of each type of control selection, introduce the methods and parameters for method comparison, and describe the latest achievements in the batch processing of control selection, which would provide important methodological reference for the use of electronic healthcare databases to conduct post-marketing drug safety surveillance in China.

  4. Perspective and prospective of pretreatment of corn straw for butanol production.

    PubMed

    Baral, Nawa Raj; Li, Jiangzheng; Jha, Ajay Kumar

    2014-01-01

    Corn straw, lignocellulosic biomass, is a potential substrate for microbial production of bio-butanol. Bio-butanol is a superior second generation biofuel among its kinds. Present researches are focused on the selection of butanol tolerant clostridium strain(s) to optimize butanol yield in the fermentation broth because of toxicity of bio-butanol to the clostridium strain(s) itself. However, whatever the type of the strain(s) used, pretreatment process always affects not only the total sugar yield before fermentation but also the performance and growth of microbes during fermentation due to the formation of hydroxyl-methyl furfural, furfural and phenolic compounds. In addition, the lignocellulosic biomasses also resist physical and biological attacks. Thus, selection of best pretreatment process and its parameters is crucial. In this context, worldwide research efforts are increased in past 12 years and researchers are tried to identify the best pretreatment method, pretreatment conditions for the actual biomass. In this review, effect of particle size, status of most common pretreatment method and enzymatic hydrolysis particularly for corn straw as a substrate is presented. This paper also highlights crucial parameters necessary to consider during most common pretreatment processes such as hydrothermal, steam explosion, ammonia explosion, sulfuric acid, and sodium hydroxide pretreatment. Moreover, the prospective of pretreatment methods and challenges is discussed.

  5. Reforming options for hydrogen production from fossil fuels for PEM fuel cells

    NASA Astrophysics Data System (ADS)

    Ersoz, Atilla; Olgun, Hayati; Ozdogan, Sibel

    PEM fuel cell systems are considered a sustainable option for the future transport sector. There is great interest in converting current hydrocarbon-based transportation fuels into hydrogen-rich gases acceptable to PEM fuel cells on board vehicles. In this paper, we compare the results of our simulation studies for 100 kW PEM fuel cell systems utilizing three major reforming technologies, namely steam reforming (SREF), partial oxidation (POX) and autothermal reforming (ATR). Natural gas, gasoline and diesel are the selected hydrocarbon fuels. We investigate the effect of the selected fuel reforming options on the overall fuel cell system efficiency, which depends on the fuel processing, PEM fuel cell and auxiliary system efficiencies. The Aspen-HYSYS 3.1 code has been used for simulation purposes. Process parameters of the fuel preparation steps have been determined considering the limitations set by the catalysts and hydrocarbons involved. Results indicate that fuel properties, the fuel processing system and its operating parameters, and PEM fuel cell characteristics all affect the overall system efficiencies. Steam reforming appears to be the most efficient fuel preparation option for all investigated fuels. Natural gas with steam reforming shows the highest fuel cell system efficiency. Good heat integration within the fuel cell system is absolutely necessary to achieve acceptable overall system efficiencies.

  6. A novel orbiter mission concept for venus with the EnVision proposal

    NASA Astrophysics Data System (ADS)

    de Oliveira, Marta R. R.; Gil, Paulo J. S.; Ghail, Richard

    2018-07-01

    In space exploration, planetary orbiter missions are essential to gain insight into planets as a whole, and to help uncover unanswered scientific questions. In particular, the planets closest to the Earth have been a privileged target of the world's leading space agencies. EnVision is a mission proposal designed for Venus and competing for ESA's next launch opportunity with the objective of studying Earth's closest neighbor. The main goal is to study geological and atmospheric processes, namely surface processes, interior dynamics and atmosphere, to determine the reasons behind Venus and Earth's radically different evolution despite the planets' similarities. To achieve these goals, the operational orbit selection is a fundamental element of the mission design process. The design of an orbit around Venus faces specific challenges, such as the impossibility of choosing Sun-synchronous orbits. In this paper, an innovative genetic algorithm optimization was applied to select the optimal orbit based on the parameters with more influence in the mission planning, in particular the mission duration and the coverage of sites of interest on the Venusian surface. The solution obtained is a near-polar circular orbit with an altitude of 259 km that enables the coverage of all priority targets almost two times faster than with the parameters considered before this study.
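A genetic-algorithm orbit selection of the kind described can be sketched over two design variables, altitude and inclination. The fitness function below is a hypothetical stand-in for the paper's coverage/duration model, chosen so that a near-polar orbit around 260 km scores best.

```python
import random

def fitness(alt_km, inc_deg):
    """Hypothetical mission score favoring a near-polar, ~260 km orbit
    (a stand-in for the paper's coverage and mission-duration objectives)."""
    coverage = 1.0 / (1.0 + abs(inc_deg - 90.0) / 10.0)
    duration = 1.0 / (1.0 + abs(alt_km - 260.0) / 100.0)
    return coverage * duration

def ga_orbit_search(pop_size=30, gens=60, p_mut=0.2):
    random.seed(7)                         # reproducible sketch
    pop = [(random.uniform(200, 600), random.uniform(60, 120))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: -fitness(*ind))
        parents = pop[: pop_size // 2]     # truncation selection, elitist
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)  # arithmetic crossover
            if random.random() < p_mut:                          # Gaussian mutation
                child = (child[0] + random.gauss(0, 20), child[1] + random.gauss(0, 5))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(*ind))

best_alt, best_inc = ga_orbit_search()
```

Keeping the top half of each generation makes the search elitist, so the best score never regresses while crossover and mutation refine the orbit toward the optimum.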

  7. syris: a flexible and efficient framework for X-ray imaging experiments simulation.

    PubMed

    Faragó, Tomáš; Mikulík, Petr; Ershov, Alexey; Vogelgesang, Matthias; Hänschke, Daniel; Baumbach, Tilo

    2017-11-01

    An open-source framework for conducting a broad range of virtual X-ray imaging experiments, syris, is presented. The simulated wavefield created by a source propagates through an arbitrary number of objects until it reaches a detector. The objects in the light path and the source are time-dependent, which enables simulations of dynamic experiments, e.g. four-dimensional time-resolved tomography and laminography. The high-level interface of syris is written in Python and its modularity makes the framework very flexible. The computationally demanding parts behind this interface are implemented in OpenCL, which enables fast calculations on modern graphics processing units. The combination of flexibility and speed opens new possibilities for studying novel imaging methods and systematic search of optimal combinations of measurement conditions and data processing parameters. This can help to increase the success rates and efficiency of valuable synchrotron beam time. To demonstrate the capabilities of the framework, various experiments have been simulated and compared with real data. To show the use case of measurement and data processing parameter optimization based on simulation, a virtual counterpart of a high-speed radiography experiment was created and the simulated data were used to select a suitable motion estimation algorithm; one of its parameters was optimized in order to achieve the best motion estimation accuracy when applied on the real data. syris was also used to simulate tomographic data sets under various imaging conditions which impact the tomographic reconstruction accuracy, and it is shown how the accuracy may guide the selection of imaging conditions for particular use cases.

  8. Application of Anaerobic Digestion Model No. 1 for simulating anaerobic mesophilic sludge digestion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendes, Carlos, E-mail: carllosmendez@gmail.com; Esquerre, Karla, E-mail: karlaesquerre@ufba.br; Matos Queiroz, Luciano, E-mail: lmqueiroz@ufba.br

    2015-01-15

    Highlights: • The behavior of an anaerobic reactor was evaluated through modeling. • Parametric sensitivity analysis was used to select the most sensitive parameters of the ADM1. • The results indicate that the ADM1 was able to predict the experimental results. • An organic loading rate above 35 kg/m³·day affects the performance of the process. - Abstract: Improving anaerobic digestion of sewage sludge by monitoring common indicators such as volatile fatty acids (VFAs), gas composition and pH is a suitable solution for better sludge management. Modeling is an important tool to assess and to predict process performance. The present study focuses on the application of the Anaerobic Digestion Model No. 1 (ADM1) to simulate the dynamic behavior of a reactor fed with sewage sludge under mesophilic conditions. Parametric sensitivity analysis is used to select the most sensitive ADM1 parameters for estimation using a numerical procedure, while other parameters are applied without any modification to the original values presented in the ADM1 report. The results indicate that the ADM1 model after parameter estimation was able to predict the experimental results of effluent acetate, propionate, composites and biogas flows and pH with reasonable accuracy. The simulation of the effect of organic shock loading clearly showed that an organic shock loading rate above 35 kg/m³·day affects the performance of the reactor. The results demonstrate that simulations can be helpful to support decisions on predicting the anaerobic digestion process of sewage sludge.
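The parametric sensitivity screening used to pick the most sensitive ADM1 parameters can be illustrated with a one-at-a-time (OAT) relative-sensitivity sketch. The `biogas_flow` function is a toy Monod-type stand-in for a single ADM1 output, not the real model.

```python
def oat_sensitivity(model, params, delta=0.1):
    """One-at-a-time relative sensitivity: S_i = (dY/Y) / (dp_i/p_i)."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params, **{name: value * (1.0 + delta)})
        sens[name] = ((model(perturbed) - base) / base) / delta
    return sens

def biogas_flow(p):
    """Toy Monod-type stand-in for an ADM1 output (NOT the real ADM1):
    flow ~ k_m * S / (K_S + S)."""
    return p["k_m"] * p["S"] / (p["K_S"] + p["S"])

s = oat_sensitivity(biogas_flow, {"k_m": 8.0, "K_S": 0.5, "S": 2.0})
ranked = sorted(s, key=lambda k: -abs(s[k]))   # most sensitive parameter first
```

In this toy case the maximum uptake rate `k_m` has relative sensitivity 1.0 and would be selected for estimation, while the half-saturation and substrate terms are less influential.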

  9. Standardization of pitch-range settings in voice acoustic analysis.

    PubMed

    Vogel, Adam P; Maruff, Paul; Snyder, Peter J; Mundt, James C

    2009-05-01

    Voice acoustic analysis is typically a labor-intensive, time-consuming process that requires the application of idiosyncratic parameters tailored to individual aspects of the speech signal. Such processes limit the efficiency and utility of voice analysis in clinical practice as well as in applied research and development. In the present study, we analyzed 1,120 voice files, using standard techniques (case-by-case hand analysis), taking roughly 10 work weeks of personnel time to complete. The results were compared with the analytic output of several automated analysis scripts that made use of preset pitch-range parameters. After pitch windows were selected to appropriately account for sex differences, the automated analysis scripts reduced processing time of the 1,120 speech samples to less than 2.5 h and produced results comparable to those obtained with hand analysis. However, caution should be exercised when applying the suggested preset values to pathological voice populations.
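The preset pitch-range idea can be sketched as a lookup of sex-specific analysis windows. The 75-300 Hz and 100-500 Hz values are typical defaults in common pitch trackers, not necessarily the presets used in this study, and the widening rule is a hypothetical safeguard of the kind the authors' caution about pathological voices suggests.

```python
# Typical default analysis windows in common pitch trackers -- illustrative
# presets, not necessarily those used in the study.
PITCH_WINDOWS_HZ = {"male": (75, 300), "female": (100, 500)}

def select_pitch_window(sex, measured_f0=None):
    """Return a preset (floor, ceiling) pitch window for acoustic analysis.

    If a rough measured F0 is supplied and falls outside the preset,
    the window is widened (a hypothetical safeguard for atypical voices).
    """
    lo, hi = PITCH_WINDOWS_HZ[sex]
    if measured_f0 is not None:
        lo = min(lo, 0.5 * measured_f0)
        hi = max(hi, 2.0 * measured_f0)
    return lo, hi

window = select_pitch_window("male")            # (75, 300)
widened = select_pitch_window("female", 60.0)   # floor drops to 30.0 Hz
```

An automated script would apply such a window to every file in a batch, replacing the case-by-case hand-tuning that dominated the processing time in the study.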

  10. Plasma-enhanced atomic layer deposition of titanium oxynitrides films: A comparative spectroscopic and electrical study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowińska, Małgorzata, E-mail: malgorzata.sowinska@b-tu.de; Henkel, Karsten; Schmeißer, Dieter

    2016-01-15

    The impact of plasma-enhanced atomic layer deposition (PE-ALD) process parameters on the oxygen-to-nitrogen (O/N) ratio in titanium oxynitride (TiOxNy) films was studied. Titanium(IV) isopropoxide combined with NH3 plasma, and tetrakis(dimethylamino)titanium with N2 plasma, were investigated. Samples were characterized by in situ spectroscopic ellipsometry, X-ray photoelectron spectroscopy, and electrical characterization (current-voltage, I-V, and capacitance-voltage, C-V). The electrical properties of the TiOxNy films, such as conductivity, dielectric breakdown, and permittivity, are found to be very sensitive to the O/N ratio. Our results indicate that these PE-ALD film properties can be tuned, via the O/N ratio, by the selection of the process parameters and the precursor/coreactant combination.

  11. Research on the influence of ozone dissolved in the fuel-water emulsion on the parameters of the CI engine

    NASA Astrophysics Data System (ADS)

    Wojs, M. K.; Orliński, P.; Kamela, W.; Kruczyński, P.

    2016-09-01

    The article presents the results of empirical research on the impact of ozone dissolved in fuel-water emulsion on combustion process and concentration of toxic substances in CI engine. The effect of ozone presence in the emulsion and its influence on main engine characteristics (power, torque, fuel consumption) and selected parameters that characterize combustion process (levels of pressures and temperatures in combustion chamber, period of combustion delay, heat release rate, fuel burnt rate) is shown. The change in concentration of toxic components in exhausts gases when engine is fueled with ozonized emulsion was also identified. The empirical research and their analysis showed significant differences in the combustion process when fuel-water emulsion containing ozone was used. These differences include: increased power and efficiency of the engine that are accompanied by reduction in time of combustion delay and beneficial effects of ozone on HC, PM, CO and NOX emissions.

  12. Investigation of the Bitumen Modification Process Regime Parameters Influence on Polymer-Bitumen Bonding Qualitative Indicators

    NASA Astrophysics Data System (ADS)

    Belyaev, P. S.; Mishchenko, S. V.; Belyaev, V. P.; Belousov, O. A.; Frolov, V. A.

    2018-04-01

    The objects of this study are petroleum road bitumen and a polymer-bitumen binder for road surfaces obtained with polymer materials. The subject of the study is monitoring changes in polymer-bitumen binder quality as a result of varying the bitumen modification process. The purpose of the work is to identify the patterns of the modification process and to build a mathematical model that supports the calculation and selection of technological equipment. It is shown that polymer-bitumen binder with specified quality parameters can be produced in apparatuses with agitators in turbulent mode, without the use of colloidal mills. Limiting indicators for the bitumen mix and modifying additives, which can be used as inequality constraints in the mathematical model, are defined. A mathematical model of polymer-bitumen binder preparation has been developed and its adequacy has been confirmed.

  13. Advanced Oxidation Processes: Process Mechanisms, Affecting Parameters and Landfill Leachate Treatment.

    PubMed

    Su-Huan, Kow; Fahmi, Muhammad Ridwan; Abidin, Che Zulzikrami Azner; Soon-An, Ong

    2016-11-01

    Advanced oxidation processes (AOPs) are of special interest in treating landfill leachate as they are the most promising procedures to degrade recalcitrant compounds and improve the biodegradability of wastewater. This paper aims to refresh the information base of AOPs and to discover the research gaps of AOPs in landfill leachate treatment. A brief overview of mechanisms involving in AOPs including ozone-based AOPs, hydrogen peroxide-based AOPs and persulfate-based AOPs are presented, and the parameters affecting AOPs are elaborated. Particularly, the advancement of AOPs in landfill leachate treatment is compared and discussed. Landfill leachate characterization prior to method selection and method optimization prior to treatment are necessary, as the performance and practicability of AOPs are influenced by leachate matrixes and treatment cost. More studies concerning the scavenging effects of leachate matrixes towards AOPs, as well as the persulfate-based AOPs in landfill leachate treatment, are necessary in the future.

  14. Decreasing diameter fluctuation of polymer optical fiber with optimized drawing conditions

    NASA Astrophysics Data System (ADS)

    Çetinkaya, Onur; Wojcik, Grzegorz; Mergo, Pawel

    2018-05-01

    The diameter fluctuations of poly(methyl methacrylate)-based polymer optical fibers during drawing processes have been comprehensively studied. Several drawing parameters were selected for investigation, such as drawing tension, preform diameter, preform feeding speed, and argon flow. Fibers were drawn at varied drawing tensions while the other parameters were held constant. At a later stage in the process, micro-structured polymer optical fibers were drawn under optimized drawing conditions. Fiber diameter deviations were reduced to 2.2% when a 0.2 N drawing tension was employed during the drawing process. Higher drawing tensions led to higher diameter fluctuations. The Young's modulus of fibers drawn with different tensions was also measured. Our results showed that fiber elasticity increased as drawing tension decreased. The inhomogeneity of the fibers was also determined by comparing the deviation of the Young's modulus.

  15. Cost of ownership for inspection equipment

    NASA Astrophysics Data System (ADS)

    Dance, Daren L.; Bryson, Phil

    1993-08-01

    Cost of Ownership (CoO) models are increasingly a part of the semiconductor equipment evaluation and selection process. These models enable semiconductor manufacturers and equipment suppliers to quantify a system in terms of dollars per wafer. Because of the complex nature of the semiconductor manufacturing process, there are several key attributes that must be considered in order to accurately reflect the true 'cost of ownership'. While most CoO work to date has been applied to production equipment, the need to understand cost of ownership for inspection and metrology equipment presents unique challenges. Critical parameters such as detection sensitivity as a function of size and type of defect are not included in current CoO models yet are, without question, major factors in the technical evaluation process and life-cycle cost. This paper illustrates the relationship between these parameters, as components of the alpha and beta risk, and cost of ownership.
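A dollars-per-wafer CoO ratio extended with the alpha and beta risk components the paper argues for might be sketched as follows. This is an illustrative simplification, not the SEMI cost-of-ownership formula, and all input numbers are hypothetical.

```python
def cost_per_wafer(capital, life_years, annual_operating, wafers_per_hour,
                   hours_per_year, utilization,
                   alpha_risk=0.0, cost_false_reject=0.0,
                   beta_risk=0.0, cost_missed_defect=0.0):
    """Simplified dollars-per-wafer CoO for an inspection tool.

    Adds per-wafer expected costs for alpha risk (good product flagged bad)
    and beta risk (defective product passed), reflecting the paper's point
    that detection performance belongs in life-cycle cost. Illustrative
    only; not the standard SEMI CoO model.
    """
    wafers_per_year = wafers_per_hour * hours_per_year * utilization
    annualized_capital = capital / life_years
    risk_cost = alpha_risk * cost_false_reject + beta_risk * cost_missed_defect
    return (annualized_capital + annual_operating) / wafers_per_year + risk_cost

# all inputs below are hypothetical
c = cost_per_wafer(capital=2_000_000, life_years=5, annual_operating=300_000,
                   wafers_per_hour=60, hours_per_year=8760, utilization=0.8,
                   alpha_risk=0.01, cost_false_reject=50.0,
                   beta_risk=0.001, cost_missed_defect=5000.0)
```

Note how the risk terms dominate this example: even small alpha and beta probabilities add several dollars per wafer on top of the roughly $1.66 equipment cost, which is exactly why sensitivity-versus-defect-type matters in the evaluation.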

  16. On Nb Silicide Based Alloys: Alloy Design and Selection.

    PubMed

    Tsakiropoulos, Panos

    2018-05-18

    The development of Nb-silicide based alloys is frustrated by the lack of composition-process-microstructure-property data for the new alloys, and by the shortage of and/or disagreement between thermodynamic data for key binary and ternary systems that are essential for designing (selecting) alloys to meet property goals. Recent publications have discussed the importance of the parameters δ (related to atomic size), Δχ (related to electronegativity) and valence electron concentration (VEC) (number of valence electrons per atom filled into the valence band) for the alloying behavior of Nb-silicide based alloys (J Alloys Compd 748 (2018) 569), their solid solutions (J Alloys Compd 708 (2017) 961), the tetragonal Nb₅Si₃ (Materials 11 (2018) 69), and hexagonal C14-NbCr₂ and cubic A15-Nb₃X phases (Materials 11 (2018) 395) and eutectics with Nb ss and Nb₅Si₃ (Materials 11 (2018) 592). The parameter values were calculated using actual compositions for alloys, their phases and eutectics. This paper is about the relationships that exist between the alloy parameters δ, Δχ and VEC, and creep rate and isothermal oxidation (weight gain) and the concentrations of solute elements in the alloys. Different approaches to alloy design (selection) that use property goals and these relationships for Nb-silicide based alloys are discussed and examples of selected alloy compositions and their predicted properties are given. The alloy design methodology, which has been called NICE (Niobium Intermetallic Composite Elaboration), enables one to design (select) new alloys and to predict their creep and oxidation properties and the macrosegregation of Si in cast alloys.
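The parameters δ, Δχ and VEC can be computed from an at.% composition using the commonly used mixture definitions (δ as concentration-weighted atomic-size mismatch, Δχ as electronegativity spread, VEC as the concentration-weighted mean). The elemental radii and electronegativities below are common literature values quoted for illustration, and the example composition is hypothetical; NICE works with actual measured compositions of alloys, phases and eutectics.

```python
import math

# (atomic radius in angstrom, Pauling electronegativity, valence electron count);
# common literature values, quoted here for illustration only
ELEMENT_DATA = {"Nb": (1.46, 1.60, 5), "Si": (1.17, 1.90, 4), "Ti": (1.47, 1.54, 4)}

def alloy_parameters(at_pct):
    """delta (atomic-size mismatch, %), delta-chi and VEC for an at.% composition."""
    c = {el: x / 100.0 for el, x in at_pct.items()}
    r_bar = sum(c[el] * ELEMENT_DATA[el][0] for el in c)
    chi_bar = sum(c[el] * ELEMENT_DATA[el][1] for el in c)
    delta = 100.0 * math.sqrt(sum(c[el] * (1.0 - ELEMENT_DATA[el][0] / r_bar) ** 2
                                  for el in c))
    d_chi = math.sqrt(sum(c[el] * (ELEMENT_DATA[el][1] - chi_bar) ** 2 for el in c))
    vec = sum(c[el] * ELEMENT_DATA[el][2] for el in c)
    return delta, d_chi, vec

# a hypothetical Nb-18Si-13Ti composition (at.%)
delta, d_chi, vec = alloy_parameters({"Nb": 69, "Si": 18, "Ti": 13})
```

Screening candidate compositions through such parameter calculations, then filtering against property goals, is the kind of selection loop the NICE methodology formalizes.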

  17. On Nb Silicide Based Alloys: Alloy Design and Selection

    PubMed Central

    Tsakiropoulos, Panos.

    2018-01-01

    The development of Nb-silicide based alloys is frustrated by the lack of composition-process-microstructure-property data for the new alloys, and by the shortage of and/or disagreement between thermodynamic data for key binary and ternary systems that are essential for designing (selecting) alloys to meet property goals. Recent publications have discussed the importance of the parameters δ (related to atomic size), Δχ (related to electronegativity) and valence electron concentration (VEC) (number of valence electrons per atom filled into the valence band) for the alloying behavior of Nb-silicide based alloys (J Alloys Compd 748 (2018) 569), their solid solutions (J Alloys Compd 708 (2017) 961), the tetragonal Nb5Si3 (Materials 11 (2018) 69), and hexagonal C14-NbCr2 and cubic A15-Nb3X phases (Materials 11 (2018) 395) and eutectics with Nbss and Nb5Si3 (Materials 11 (2018) 592). The parameter values were calculated using actual compositions for alloys, their phases and eutectics. This paper is about the relationships that exist between the alloy parameters δ, Δχ and VEC, and creep rate and isothermal oxidation (weight gain) and the concentrations of solute elements in the alloys. Different approaches to alloy design (selection) that use property goals and these relationships for Nb-silicide based alloys are discussed and examples of selected alloy compositions and their predicted properties are given. The alloy design methodology, which has been called NICE (Niobium Intermetallic Composite Elaboration), enables one to design (select) new alloys and to predict their creep and oxidation properties and the macrosegregation of Si in cast alloys. PMID:29783707

  18. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 of the 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects.
Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
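    The analysis-of-variance attribution used above can be sketched, in a one-factor toy form, as the ratio of between-group variance to total variance across a factorial ensemble. This is a simplified stand-in for the authors' multiple-way implementation:

```python
from statistics import mean, pvariance

def first_order_fraction(runs, factor):
    """runs: list of (levels_tuple, output). Returns the share of total
    output variance explained by one factor (between-group variance over
    total variance), i.e. a first-order sensitivity in the ANOVA sense."""
    outputs = [y for _, y in runs]
    grand, total = mean(outputs), pvariance(outputs)
    groups = {}
    for levels, y in runs:
        groups.setdefault(levels[factor], []).append(y)
    between = sum(len(g) * (mean(g) - grand) ** 2
                  for g in groups.values()) / len(runs)
    return between / total
```

    In a 2 × 2 toy design whose output depends only on the first factor, the function attributes all of the variance to that factor and none to the second, which is the qualitative pattern the study reports for its 5 dominant parameters.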

  19. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration in which the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit, metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. 
    Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied for calibration of the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
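    Stage 1 of a strategy like HAMS yields a set of non-dominated calibration alternatives. A minimal sketch of such a non-domination filter (assuming all objectives are minimized; not the GOMORS algorithm itself) is:

```python
def pareto_front(points):
    """Return the non-dominated subset of objective vectors, all objectives
    minimized: p is dominated if some other q is <= p in every objective
    and strictly < in at least one. Assumes no duplicate points."""
    front = []
    for p in points:
        dominated = any(all(qk <= pk for qk, pk in zip(q, p)) and
                        any(qk < pk for qk, pk in zip(q, p))
                        for q in points if q != p)
        if not dominated:
            front.append(p)
    return front
```

    With two calibration errors per alternative, e.g. `[(1, 2), (2, 1), (2, 2), (3, 3)]`, only the first two survive: they are the trade-off candidates a Stage 2/3 decision tool would then present to the expert.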

  20. Generalization of the photo process window and its application to OPC test pattern design

    NASA Astrophysics Data System (ADS)

    Eisenmann, Hans; Peter, Kai; Strojwas, Andrzej J.

    2003-07-01

    From the early development phase up to the production phase, test patterns play a key role in microlithography. The requirement for test patterns is to represent the design well and to cover the space of all process conditions, e.g. to investigate the full process window and all other process parameters. This paper shows that current state-of-the-art test patterns do not address these requirements sufficiently, and makes suggestions for a better selection of test patterns. We present a new methodology to analyze an existing layout (e.g. logic library, test pattern or full chip) for critical layout situations which does not need precise process data. We call this method "process space decomposition", because it is aimed at decomposing the process impact on a layout feature into a sum of single independent contributions, the dimensions of the process space. This is a generalization of the classical process window, which examines the defocus and exposure dependency of given test patterns, e.g. the CD value of dense and isolated lines. In our process space we additionally define the dimensions resist effects, etch effects, mask error and misalignment, which describe the deviation of the printed silicon pattern from its target. We further extend it by the pattern space using a product-based layout (library, full chip or synthetic test patterns). The criticality of patterns is defined by their deviation due to the aerial image, their sensitivity to the respective dimension, or combinations of these. By exploring the process space for a given design, the method makes it possible to find the most critical patterns independent of specific process parameters. 
    The paper provides examples of different applications of the method: (1) selection of design-oriented test patterns for lithography development; (2) test-pattern reduction in process characterization; (3) verification/optimization of printability and performance of post-processing procedures (like OPC); (4) creation of a sensitive process monitor.

  1. Application of Twin Screw Extrusion in the Manufacture of Cocrystals, Part I: Four Case Studies

    PubMed Central

    Daurio, Dominick; Medina, Cesar; Saw, Robert; Nagapudi, Karthik; Alvarez-Núñez, Fernando

    2011-01-01

    The application of twin screw extrusion (TSE) as a scalable and green process for the manufacture of cocrystals was investigated. Four model cocrystal-forming systems, Caffeine-Oxalic acid, Nicotinamide-trans-cinnamic acid, Carbamazepine-Saccharin, and Theophylline-Citric acid, were selected for the study. The parameters of the extrusion process that influenced cocrystal formation were examined. TSE was found to be an effective method to make cocrystals for all four systems studied. It was demonstrated that temperature and extent of mixing in the extruder were the primary process parameters that influenced the extent of conversion to the cocrystal in neat TSE experiments. In addition to neat extrusion, liquid-assisted TSE was also demonstrated for the first time as a viable process for making cocrystals. Notably, the use of catalytic amounts of benign solvents led to a lowering of the processing temperatures required to form the cocrystal in the extruder. TSE should be considered an efficient, scalable, and environmentally friendly process for the manufacture of cocrystals with little to no solvent requirements. PMID:24310598

  2. Effect of pilot-scale aseptic processing on tomato soup quality parameters.

    PubMed

    Colle, Ines J P; Andrys, Anna; Grundelius, Andrea; Lemmens, Lien; Löfgren, Anders; Buggenhout, Sandy Van; Loey, Ann; Hendrickx, Marc Van

    2011-01-01

    Tomatoes are often processed into shelf-stable products. However, the different processing steps might have an impact on the product quality. In this study, a model tomato soup was prepared and the impact of pilot-scale aseptic processing, including heat treatment and high-pressure homogenization, on some selected quality parameters was evaluated. The vitamin C content, the lycopene isomer content, and the lycopene bioaccessibility were considered as health-promoting attributes. As a structural characteristic, the viscosity of the tomato soup was investigated. A tomato soup without oil as well as a tomato soup containing 5% olive oil were evaluated. Thermal processing had a negative effect on the vitamin C content, while lycopene degradation was limited. For both compounds, high-pressure homogenization caused additional losses. High-pressure homogenization also resulted in a higher viscosity that was accompanied by a decrease in lycopene bioaccessibility. The presence of lipids clearly enhanced the lycopene isomerization susceptibility and improved the bioaccessibility. The results obtained in this study are of relevance for product formulation and process design of tomato-based food products. © 2011 Institute of Food Technologists®

  3. Continuous recovery of valine in a model mixture of amino acids and salt from Corynebacterium bacteria fermentation using a simulated moving bed chromatography.

    PubMed

    Park, Chanhun; Nam, Hee-Geun; Jo, Se-Hee; Wang, Nien-Hwa Linda; Mun, Sungyong

    2016-02-26

    The economic efficiency of valine production in related industries is largely affected by the performance of the valine separation process, in which valine is to be separated from leucine, alanine, and ammonium sulfate. Such separation is currently handled by a batch-mode hybrid process based on ion-exchange and crystallization schemes. To make a substantial improvement in the economic efficiency of industrial valine production, such a batch-mode process based on two different separation schemes needs to be converted into a continuous-mode separation process based on a single separation scheme. To address this issue, simulated moving bed (SMB) technology was applied in this study to the development of a continuous-mode valine-separation chromatographic process with uniformity in adsorbent and liquid phases. It was first found that a Chromalite-PCG600C resin could serve as the adsorbent for such a process, particularly at an industrial scale. The intrinsic parameters of each component on the Chromalite-PCG600C adsorbent were determined and then utilized in selecting a proper set of configurations for SMB units, columns, and ports, under which the SMB operating parameters were optimized with a genetic algorithm. Finally, the optimized SMB based on the selected configurations was tested experimentally, which confirmed its effectiveness in continuous separation of valine from leucine, alanine, and ammonium sulfate with high purity, high yield, high throughput, and high valine product concentration. It is thus expected that the SMB process developed in this study will serve as a trustworthy way of improving the economic efficiency of industrial valine production. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Capture and isotopic exchange method for water and hydrogen isotopes on zeolite catalysts up to technical scale for pre-study of processing highly tritiated water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michling, R.; Braun, A.; Cristescu, I.

    2015-03-15

    Highly tritiated water (HTW) may be generated at ITER by various processes and, due to its excessive radiotoxicity, self-radiolysis and exceedingly corrosive properties, a potential hazard is associated with its storage and processing. Therefore, the capture and exchange method for HTW utilizing Molecular Sieve Beds (MSB) was investigated in view of adsorption capacity, isotopic exchange performance and process parameters. For the MSB, different types of zeolite were selected. All zeolite materials were additionally coated with platinum. The work comprised the selection of the most efficient zeolite candidate based on detailed parametric studies during the H₂/D₂O laboratory-scale exchange experiments (about 25 g zeolite per bed) at the Tritium Laboratory Karlsruhe (TLK). For zeolite characterization, analytical techniques such as infrared spectroscopy, thermogravimetry and online mass spectrometry were employed. In a subsequent investigation of the selected zeolite catalyst under full technical operation, an MSB (about 22 kg zeolite) was processed with hydrogen flow rates up to 60 mol·h⁻¹ and deuterated water loads up to 1.6 kg, in view of later ITER processing of arising HTW. (authors)

  5. A review of engineering aspects of intensification of chemical synthesis using ultrasound.

    PubMed

    Sancheti, Sonam V; Gogate, Parag R

    2017-05-01

    Cavitation generated using ultrasound can enhance the rates of several chemical reactions, giving better selectivity based on the physical and chemical effects. The present review presents an overview of the different reactions that can be intensified using ultrasound, followed by a discussion of the chemical kinetics of ultrasound-assisted reactions, engineering aspects related to reactor designs, and the effect of operating parameters on the degree of intensification obtained for chemical synthesis. The cavitational effects, in terms of the magnitudes of collapse temperatures and collapse pressures, the number of free radicals generated and the extent of turbulence, are strongly dependent on operating parameters such as ultrasonic power, frequency, duty cycle and temperature, as well as the physicochemical parameters of the liquid medium, which control the inception of cavitation. Guidelines are presented for optimum selection based on a critical analysis of the existing literature so that maximum process intensification benefits can be obtained. Different reactor designs have also been analyzed, with guidelines for efficient scale-up of the sonochemical reactor, which would be dependent on the type of reaction, the controlling mechanism of the reaction, the catalyst and the activation energy requirements. Overall, it has been established that sonochemistry offers considerable potential for green and sustainable processing, and efficient scale-up procedures are required so as to harness these effects at commercial scale. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Models for selecting GMA Welding Parameters for Improving Mechanical Properties of Weld Joints

    NASA Astrophysics Data System (ADS)

    Srinivasa Rao, P.; Ramachandran, Pragash; Jebaraj, S.

    2016-02-01

    During Gas Metal Arc Welding (GMAW), the mechanical properties of weld joints are influenced by welding parameters such as welding current and arc voltage. These parameters directly influence the quality of the weld in terms of mechanical properties. Even a small variation in any of the cited parameters may have an important effect on the depth of penetration and on joint strength. In this study, S45C constructional steel is taken as the base metal to be tested using the parameters wire feed rate, voltage and type of shielding gas. The mechanical properties considered in the present study are tensile strength and hardness. The testing of weld specimens is carried out as per ASTM standards. Mathematical models to predict the tensile strength and depth of penetration of the weld joint have been developed by regression analysis using the experimental results.
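    A regression model of the kind described can be illustrated with a closed-form one-predictor least-squares fit. The feed-rate and strength values below are invented for demonstration and are not the paper's measurements:

```python
def fit_linear(x, y):
    """Ordinary least squares for y = b0 + b1*x (closed-form solution)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b1 = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
          / sum((xi - xm) ** 2 for xi in x))
    return ym - b1 * xm, b1  # intercept, slope

# Hypothetical wire feed rates (m/min) vs. tensile strength (MPa)
feed = [4.0, 5.0, 6.0, 7.0]
uts = [420.0, 445.0, 470.0, 495.0]
b0, b1 = fit_linear(feed, uts)  # strength ≈ b0 + b1 * feed
```

    A study like this one would extend the same idea to several predictors (feed rate, voltage, shielding gas) via multiple regression, then validate the fitted model against held-out weld specimens.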

  7. Kernel parameter variation-based selective ensemble support vector data description for oil spill detection on the ocean via hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Uslu, Faruk Sukru

    2017-07-01

    Oil spills on the ocean surface cause serious environmental, political, and economic problems. These catastrophic threats to marine ecosystems therefore require detection and monitoring. Hyperspectral sensors are powerful optical sensors used for oil spill detection with the help of detailed spectral information on materials. However, the huge amounts of data in hyperspectral imaging (HSI) require fast and accurate computation methods for detection problems. Support vector data description (SVDD) is one of the most suitable methods for detection, especially for large data sets. Nevertheless, the selection of kernel parameters is one of the main problems in SVDD. This paper presents a method, inspired by ensemble learning, for improving the performance of SVDD without tuning its kernel parameters. Additionally, a classifier selection technique is proposed to obtain further gains. The proposed approach also aims to solve the small-sample-size problem, which is very important for processing high-dimensional data in HSI. The algorithm is applied to two HSI data sets for detection problems. In the first HSI data set, various targets are detected; in the second HSI data set, oil spill detection in situ is realized. The experimental results demonstrate the feasibility and performance improvement of the proposed algorithm for oil spill detection problems.
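    The ensemble-over-kernel-parameters idea can be illustrated with a toy one-dimensional detector. The scoring function below is a simple kernel-similarity stand-in for SVDD (not the actual SVDD optimization), and the quantile thresholding rule is an assumption for demonstration:

```python
import math

def rbf_score(x, train, gamma):
    """Mean RBF similarity of x to the training set: a crude stand-in
    for an SVDD decision value."""
    return sum(math.exp(-gamma * (x - t) ** 2) for t in train) / len(train)

def ensemble_inlier(x, train, gammas, quantile=0.1):
    """Vote across several kernel widths instead of tuning one gamma:
    each member thresholds at the given quantile of training scores."""
    votes = 0
    for g in gammas:
        scores = sorted(rbf_score(t, train, g) for t in train)
        threshold = scores[int(quantile * len(scores))]
        votes += rbf_score(x, train, g) >= threshold
    return votes > len(gammas) // 2  # majority vote: True = inlier
```

    Points near the training cluster pass under every kernel width, while a far outlier fails them all, so no single gamma has to be chosen correctly: this is the intuition behind the paper's selective ensemble.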

  8. The Influence of Selected Parameters on Evaluation of the Geometrical Shape Deviation - Cylindricity in 3D Measuring Machine Workspace

    NASA Astrophysics Data System (ADS)

    Drbúl, Mário; Šajgalík, Michal; Litvaj, Ivan; Babík, Ondrej

    2016-12-01

    Each part, as a final product, is composed of various geometric elements, even though its surface may at first glance seem smooth and shiny. During the manufacturing process there are a number of influences (e.g. the selected manufacturing technology, the production process, human factors, the measurement strategy, scanning speed, the shape of the measurement contact tip, temperature, or surface tension and the like) which hinder the production of components with ideally shaped elements. From the economic and design points of view (in accordance with the relevant GPS standards), it is necessary to analyze and evaluate these elements quickly and accurately. The presented article deals with the influence of scanning speed and measuring strategy on the assessment of shape deviations.

  9. Talent identification and selection process of outfield players and goalkeepers in a professional soccer club.

    PubMed

    Gil, Susana María; Zabala-Lili, Jon; Bidaurrazaga-Letona, Iraia; Aduna, Badiola; Lekue, Jose Antonio; Santos-Concejero, Jordan; Granados, Cristina

    2014-12-01

    The aim of this study was to analyse the talent identification process of a professional soccer club. A preselection of players (n = 64) aged 9-10 years and a final selection (n = 21) were performed by the technical staff through observation during training sessions and matches. Also, 34 age-matched players of an open soccer camp (CampP) acted as controls. All participants underwent anthropometric, maturity and performance measurements. Preselected outfield players (OFs) were older and leaner than CampP (P < 0.05). Besides, they performed better in velocity, agility, endurance and jump tests (P < 0.05). A discriminant analysis showed that velocity and agility were the most important parameters. Finally, selected OFs were older and displayed better agility and endurance compared to the nonselected OFs (P < 0.05). Goalkeepers (GKs) were taller and heavier and had more body fat than OFs; also, they performed worse in the physical tests (P < 0.05). Finally, selected GKs were older and taller, had a higher predicted height and advanced maturity and performed better in the handgrip (dynamometry) and jump tests (P < 0.05). Thus, the technical staff selected OFs with a particular anthropometry and best performance, particularly agility and endurance, while GKs had a different profile. Moreover, chronological age had an important role in the whole selection process.

  10. A genetic algorithm for optimization of neural network capable of learning to search for food in a maze

    NASA Astrophysics Data System (ADS)

    Budilova, E. V.; Terekhin, A. T.; Chepurnov, S. A.

    1994-09-01

    A hypothetical neural scheme is proposed that ensures efficient decision making by an animal searching for food in a maze. Only the general structure of the network is fixed; its quantitative characteristics are found by numerical optimization that simulates the process of natural selection. Selection is aimed at maximization of the expected number of descendants, which is directly related to the energy stored during the reproductive cycle. The main parameters to be optimized are the increments of the interneuronal links and the working-memory constants.
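    A numerical-optimization loop of the kind described can be sketched as a minimal genetic algorithm. The operators, population size, and quadratic test fitness below are illustrative assumptions, not the paper's reproductive-energy model:

```python
import random

def genetic_optimize(fitness, n_params, pop_size=30, generations=40, seed=1):
    """Minimal genetic algorithm: elitist selection, one-point crossover,
    Gaussian mutation. Maximizes `fitness` over real-valued genomes."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1.0, 1.0) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(n_params)          # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_params)] += rng.gauss(0.0, 0.1)  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

    In the paper's setting the genome would encode interneuronal link increments and working-memory constants, and the fitness would be the energy stored over a simulated reproductive cycle rather than a closed-form function.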

  11. Assay Development Process | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    Typical steps involved in the development of a  mass spectrometry-based targeted assay include: (1) selection of surrogate or signature peptides corresponding to the targeted protein or modification of interest; (2) iterative optimization of instrument and method parameters for optimal detection of the selected peptide; (3) method development for protein extraction from biological matrices such as tissue, whole cell lysates, or blood plasma/serum and proteolytic digestion of proteins (usually with trypsin); (4) evaluation of the assay in the intended biological matrix to determine if e

  12. Flammability Parameters of Candles

    NASA Astrophysics Data System (ADS)

    Balog, Karol; Kobetičová, Hana; Štefko, Tomáš

    2017-06-01

    The paper deals with the assessment of selected fire safety characteristics of candles. Weight loss of a candle during the burning process, candle burning rate, soot index, heat release rate and yield of carbon oxides were determined. Soot index was determined according to EN 15426: 2007 - Candles - Specification for Sooting Behavior. All samples met the prescribed amount of produced soot. Weight loss, heat release rate and the yield of carbon oxides were determined for one selected sample. While yield of CO increased during the measurement, the yield of CO2 decreased by half in 40 minutes.

  13. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  14. Characterization of the Fe-Co-1.5V soft ferromagnetic alloy processed by Laser Engineered Net Shaping (LENS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kustas, Andrew B.; Susan, Donald F.; Johnson, Kyle L.

    Processing of the low workability Fe-Co-1.5V (Hiperco® equivalent) alloy is demonstrated using the Laser Engineered Net Shaping (LENS) metals additive manufacturing technique. As an innovative and highly localized solidification process, LENS is shown to overcome workability issues that arise during conventional thermomechanical processing, enabling the production of bulk, near net-shape forms of the Fe-Co alloy. Bulk LENS structures appeared to be ductile with no significant macroscopic defects. Atomic ordering was evaluated and significantly reduced in as-built LENS specimens relative to an annealed condition, tailorable through selection of processing parameters. Fine equiaxed grain structures were observed in as-built specimens following solidification, which then evolved toward a highly heterogeneous bimodal grain structure after annealing. The microstructure evolution in Fe-Co is discussed in the context of classical solidification theory and selective grain boundary pinning processes. Finally, magnetic properties were also assessed and shown to fall within the extremes of conventionally processed Hiperco® alloys.

  15. Characterization of the Fe-Co-1.5V soft ferromagnetic alloy processed by Laser Engineered Net Shaping (LENS)

    DOE PAGES

    Kustas, Andrew B.; Susan, Donald F.; Johnson, Kyle L.; ...

    2018-02-21

    Processing of the low workability Fe-Co-1.5V (Hiperco® equivalent) alloy is demonstrated using the Laser Engineered Net Shaping (LENS) metals additive manufacturing technique. As an innovative and highly localized solidification process, LENS is shown to overcome workability issues that arise during conventional thermomechanical processing, enabling the production of bulk, near net-shape forms of the Fe-Co alloy. Bulk LENS structures appeared to be ductile with no significant macroscopic defects. Atomic ordering was evaluated and significantly reduced in as-built LENS specimens relative to an annealed condition, tailorable through selection of processing parameters. Fine equiaxed grain structures were observed in as-built specimens following solidification, which then evolved toward a highly heterogeneous bimodal grain structure after annealing. The microstructure evolution in Fe-Co is discussed in the context of classical solidification theory and selective grain boundary pinning processes. Finally, magnetic properties were also assessed and shown to fall within the extremes of conventionally processed Hiperco® alloys.

  16. Theoretical and practical investigation into sustainable metal joining process for the automotive industry

    NASA Astrophysics Data System (ADS)

    Al-Jader, M. A.; Cullen, J. D.; Shaw, Andy; Al-Shamma'a, A. I.

    2011-08-01

    Currently there are about 4300 weld points on the average steel vehicle. Errors and problems due to tip damage and wear can cause great losses due to production line downtime. Current industrial monitoring systems check the quality of the nugget, on average once every two weeks, after processing 15 cars. The nuggets are examined off line using a destructive process which takes approximately 10 days to complete, causing a long delay in the production process. In this paper, simulation results obtained using the software package SORPAS are presented to determine the sustainability factors in the spot welding process, including voltage, current, force, water cooling rates, and material thicknesses and usage. The experimental results of various spot welding processes are investigated and reported. The correlation of experimental results shows that SORPAS simulations can be used as an off-line measurement to reduce factory energy usage. This paper also provides an overview of electrode current selection and its variance over the lifetime of the electrode tip, and describes the proposed analysis system for the selection of welding parameters for the spot welding process as the electrode tip wears.

  17. An R-Shiny Based Phenology Analysis System and Case Study Using Digital Camera Dataset

    NASA Astrophysics Data System (ADS)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeat photography from digital cameras is a useful and voluminous data source for phenological analysis, but processing and mining phenological data remains a major challenge: there is no single tool or universal solution for big-data processing and visualization in the field of phenology extraction. In this paper, we propose an R Shiny based web application for vegetation phenological parameter extraction and analysis. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. The long-term observational photography data from the Freemanwood site in 2013 are processed by this system as an example. The results show that: (1) the system is capable of analyzing large data sets using a distributed framework; (2) the combination of multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different method combinations in individual study areas. Vegetation with a single growth peak is well suited to the double logistic model for fitting the growth trajectory, while vegetation with multiple growth peaks is better fitted with the spline method.
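    For the single-peak case, a double logistic growth model is commonly used in phenology work; the parameterization below is one standard form (the abstract does not give the system's exact formula), with hypothetical parameter values for illustration:

```python
import math

def double_logistic(t, vmin, vmax, sos, rate_up, eos, rate_down):
    """Greenness at day-of-year t: a logistic rise centred on sos (start
    of season) and a logistic decline centred on eos (end of season),
    each with its own steepness. Dormant value vmin, peak value vmax."""
    rise = 1.0 / (1.0 + math.exp(-rate_up * (t - sos)))
    fall = 1.0 / (1.0 + math.exp(rate_down * (t - eos)))
    return vmin + (vmax - vmin) * (rise + fall - 1.0)
```

    The curve sits at vmin in winter, reaches half amplitude exactly at t = sos and t = eos, and plateaus near vmax mid-season; fitting it to a camera-derived greenness index yields sos and eos as the extracted phenology dates.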

  18. Adaptive Modeling Procedure Selection by Data Perturbation.

    PubMed

    Zhang, Yongli; Shen, Xiaotong

    2015-10-01

    Many procedures have been developed to deal with the high-dimensional problems emerging in various business and economics areas. To evaluate and compare these procedures, the modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into the modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of the perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.
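    The data-perturbation idea can be sketched in a toy setting: perturb the responses with noise of a chosen size and measure how often the selected model changes. The two-candidate selection rule and the penalty below are illustrative assumptions, not the authors' procedure or their optimal perturbation size:

```python
import random

def select_model(x, y, penalty=2.0):
    """Toy selection between an intercept-only model (0) and a simple
    linear model (1) by penalized sum of squared errors."""
    n = len(y)
    xm, ym = sum(x) / n, sum(y) / n
    b1 = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
          / sum((xi - xm) ** 2 for xi in x))
    b0 = ym - b1 * xm
    score0 = sum((yi - ym) ** 2 for yi in y) + penalty * 1
    score1 = sum((yi - (b0 + b1 * xi)) ** 2
                 for xi, yi in zip(x, y)) + penalty * 2
    return 0 if score0 <= score1 else 1

def selection_instability(x, y, tau, reps=200, seed=0):
    """Fraction of noise perturbations of size tau that flip the
    selected model: a simple measure of selection uncertainty."""
    rng = random.Random(seed)
    base = select_model(x, y)
    changed = sum(
        select_model(x, [yi + rng.gauss(0.0, tau) for yi in y]) != base
        for _ in range(reps))
    return changed / reps
```

    On data with a strong linear signal, small perturbations never flip the selection, while very large ones can; choosing tau so that the perturbed data still resemble the original is exactly the sizing question the paper addresses.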

  19. [Ecological security of wastewater treatment processes: a review].

    PubMed

    Yang, Sai; Hua, Tao

    2013-05-01

    Although the routine indicators of treated wastewater can meet discharge requirements and reuse standards, this does not mean the effluent is harmless. From a sustainability point of view, comprehensive toxicity should be considered when discharge standards are set, so as to ensure ecological and human safety. To improve the ecological security of wastewater treatment processes, toxicity reduction should be considered when selecting and optimizing treatment processes. This paper reviewed research on the ecological security of wastewater treatment processes, focusing on the purposes of various treatment processes, including processes for special wastewater treatment, for wastewater reuse, and for the safety of receiving waters. Conventional biological treatment combined with advanced oxidation technologies can enhance toxicity reduction on the basis of pollutant removal, which is worthy of further study. For processes aimed at wastewater reuse, integrating different process units can combine the advantages of conventional pollutant removal and toxicity reduction. For processes aimed at the ecological security of receiving waters, the emphasis should be placed on optimizing process parameters and selecting process units for toxicity reduction. Suggestions for problems in current research and for future research directions are put forward.

  20. New Design Heaters Using Tubes Finned by Deforming Cutting Method

    NASA Astrophysics Data System (ADS)

    Zubkov, N. N.; Nikitenko, S. M.; Nikitenko, M. S.

    2017-10-01

    The article presents the results of research aimed at selecting and assigning technological processing parameters for producing outer fins on heat-exchange tubes by the deformational cutting method, for use in a new design of industrial water-air heaters. Thermohydraulic results of comparative engineering tests of the new and standard air-heater designs are presented.
