Sample records for process parameter development

  1. Optimization of Primary Drying in Lyophilization during Early Phase Drug Development using a Definitive Screening Design with Formulation and Process Factors.

    PubMed

    Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram

    2018-06-08

    Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined where near-optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.

  2. ASRM test report: Autoclave cure process development

    NASA Technical Reports Server (NTRS)

    Nachbar, D. L.; Mitchell, Suzanne

    1992-01-01

    ASRM insulated segments will be autoclave cured following insulation pre-form installation and strip wind operations. Following competitive bidding, Aerojet ASRM Division (AAD) Purchase Order 100142 was awarded to American Fuel Cell and Coated Fabrics Company, Inc. (Amfuel), Magnolia, AR, for subcontracted insulation autoclave cure process development. Autoclave cure process development test requirements were included in Task 3 of TM05514, Manufacturing Process Development Specification for Integrated Insulation Characterization and Stripwind Process Development. The test objective was to establish autoclave cure process parameters for ASRM insulated segments. Six tasks were completed to: (1) evaluate cure parameters that control acceptable vulcanization of ASRM Kevlar-filled EPDM insulation material; (2) identify first and second order impact parameters on the autoclave cure process; and (3) evaluate insulation material flow-out characteristics to support pre-form configuration design.

  3. Development of functionally-oriented technological processes of electroerosive processing

    NASA Astrophysics Data System (ADS)

    Syanov, S. Yu

    2018-03-01

    The stages of developing functionally oriented technological processes for electroerosive processing are described, from the separation of the surfaces of parts and their service functions to the determination of the parameters of the electric erosion process, which will provide not only the required quality parameters of the surface layer but also the required operational properties.

  4. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
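
    As a hedged illustration of the residual-tuning idea summarized above (not the specific Jazwinski formulation or the WIRE implementation), the sketch below runs a scalar Kalman filter and nudges its process-noise variance whenever the sample variance of recent innovations (residuals) disagrees with the variance the filter predicts for them. The random-walk model, noise levels, window length and correction gain are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth: a random-walk state with process-noise variance q_true; the filter
# starts with a deliberately wrong guess for Q and must tune it from residuals.
n_steps, q_true, r_true = 500, 0.05, 0.5
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q_true), n_steps))
z = x_true + rng.normal(0.0, np.sqrt(r_true), n_steps)

x_hat, P = 0.0, 1.0      # state estimate and its covariance
Q, R = 1e-4, r_true      # poor initial process-noise guess, known measurement noise
window = []              # recent innovations used for tuning

for k in range(n_steps):
    # Predict (state transition is the identity for a random walk).
    P_pred = P + Q
    # Update with measurement z[k]; the measurement matrix H is 1.
    S = P_pred + R                    # innovation variance predicted by the filter
    K = P_pred / S
    nu = z[k] - x_hat                 # innovation (measurement residual)
    x_hat += K * nu
    P = (1.0 - K) * P_pred

    # Residual tuning: if the sample innovation variance exceeds the filter's
    # prediction S, attribute the excess to an under-modelled Q and correct it.
    window.append(nu)
    if len(window) > 50:
        window.pop(0)
        c_nu = np.mean(np.square(window))
        Q = max(1e-8, Q + 0.1 * (c_nu - S))

print(f"tuned Q = {Q:.4f}   (true Q = {q_true})")
```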

  5. Prediction of Tensile Strength of Friction Stir Weld Joints with Adaptive Neuro-Fuzzy Inference System (ANFIS) and Neural Network

    NASA Technical Reports Server (NTRS)

    Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.

    2015-01-01

    Friction-stir-welding (FSW) is a solid-state joining process where joint properties are dependent on welding process parameters. In the current study three critical process parameters, including spindle speed, plunge force, and welding speed, are considered key factors in the determination of ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. It is observed that all three process parameters have direct influence on UTS of the welded joints. Utilizing experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict UTS of FSW joints. A total of 1200 models were developed by varying the number of membership functions (MFs), type of MFs, and combination of the four input variables (spindle speed, plunge force, welding speed, and EFI) utilizing a MATLAB platform. Note EFI denotes an empirical force index derived from the three process parameters. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from FSW process parameters. By comparing ANFIS and ANN predicted results, it was found that optimized ANFIS models provide better results than ANN. This newly developed best ANFIS model could be utilized for prediction of UTS of FSW joints.
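
    As a minimal sketch of the neural-network side of the comparison described above (not the authors' optimized ANFIS or ANN models), the code below trains a tiny one-hidden-layer network written with NumPy to map three weld parameters to UTS. The 73 "weld schedules" and the relation between spindle speed, plunge force, welding speed and UTS are synthetic stand-ins, invented purely so the example runs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "weld schedules": spindle speed (rpm), plunge force (kN), welding speed (mm/min).
X = rng.uniform([200.0, 15.0, 100.0], [400.0, 35.0, 300.0], size=(73, 3))
# Invented UTS response (MPa) with noise, standing in for the measured tensile data.
uts = 250 + 0.10 * X[:, 0] + 2.0 * X[:, 1] - 0.15 * X[:, 2] + rng.normal(0, 5, 73)

# Standardize inputs and target for stable training.
Xs = (X - X.mean(0)) / X.std(0)
ys = (uts - uts.mean()) / uts.std()

# One hidden layer of tanh units, trained by plain batch gradient descent.
W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr, n = 0.05, len(ys)
for _ in range(3000):
    h = np.tanh(Xs @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - ys
    # Backpropagate the mean-squared-error gradient.
    g2 = h.T @ err[:, None] / n
    g1 = err[:, None] * W2.T * (1 - h ** 2)
    W2 -= lr * g2
    b2 -= lr * err.mean()
    W1 -= lr * Xs.T @ g1 / n
    b1 -= lr * g1.mean(0)

rmse = np.sqrt(np.mean((pred * uts.std() + uts.mean() - uts) ** 2))
print(f"training RMSE: {rmse:.1f} MPa")
```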

  6. Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.; Kachare, A. H.

    1981-01-01

    The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.

  7. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    PubMed

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.

  8. Modeling spray/puddle dissolution processes for deep-ultraviolet acid-hardened resists

    NASA Astrophysics Data System (ADS)

    Hutchinson, John M.; Das, Siddhartha; Qian, Qi-De; Gaw, Henry T.

    1993-10-01

    A study of the dissolution behavior of acid-hardened resists (AHR) was undertaken for spray and spray/puddle development processes. The Site Services DSM-100 end-point detection system is used to measure both spray and puddle dissolution data for a commercially available deep-ultraviolet AHR resist, Shipley SNR-248. The DSM allows in situ measurement of dissolution rate on the wafer chuck and hence allows parameter extraction for modeling spray and puddle processes. The dissolution data for spray and puddle processes were collected across a range of exposure doses and postexposure bake temperatures. The development recipe was varied to decouple the contribution of the spray and puddle modes to the overall dissolution characteristics. The mechanisms involved in spray versus puddle dissolution and the impact of spray versus puddle dissolution on process performance metrics have been investigated. We used the effective-dose-modeling approach and the measurement capability of the DSM-100 and developed a lumped parameter model for acid-hardened resists that incorporates the effects of exposure, postexposure bake temperature and time, and development condition. The PARMEX photoresist-modeling program is used to determine parameters for the spray and for the puddle process. The lumped parameter AHR model developed showed good agreement with experimental data.

  9. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, a method for optimizing the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
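
    A hedged sketch of the general workflow described above, not the authors' model or data: fit a quadratic response-surface model of interlaminar shear strength to designed-experiment data, then trace a single-parameter sensitivity curve by sweeping one factor while holding the others at their centre values. All factor ranges, coefficients and measurements below are fabricated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated experiments: temperature (C), tension (N), pressure (N), velocity (m/s).
X = rng.uniform([80, 200, 500, 0.1], [160, 450, 1800, 0.6], size=(30, 4))
# Invented interlaminar shear strength response (MPa), for illustration only.
y = (40 - 0.002 * (X[:, 0] - 130) ** 2 + 0.01 * X[:, 1]
     + 0.003 * X[:, 2] - 8 * (X[:, 3] - 0.3) ** 2 + rng.normal(0, 0.5, 30))

def quad_features(X):
    """Intercept, linear and pure quadratic terms (interactions omitted for brevity)."""
    return np.column_stack([np.ones(len(X)), X, X ** 2])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Single-parameter sensitivity curve for temperature, other factors held at centre values.
centre = X.mean(0)
temps = np.linspace(80, 160, 9)
grid = np.tile(centre, (len(temps), 1))
grid[:, 0] = temps
strength = quad_features(grid) @ beta
sensitivity = np.gradient(strength, temps)     # d(strength)/d(temperature)

for t, s in zip(temps, sensitivity):
    print(f"T = {t:5.1f} C   dISS/dT = {s:+.3f} MPa/C")
```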

  10. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.

  11. White-light Interferometry using a Channeled Spectrum: II. Calibration Methods, Numerical and Experimental Results

    NASA Technical Reports Server (NTRS)

    Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.

    2007-01-01

    In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.

  12. Intelligent methods for the process parameter determination of plastic injection molding

    NASA Astrophysics Data System (ADS)

    Gao, Huang; Zhang, Yun; Zhou, Xundao; Li, Dequn

    2018-03-01

    Injection molding is one of the most widely used material processing methods in producing plastic products with complex geometries and high precision. The determination of process parameters is important in obtaining qualified products and maintaining product quality. This article reviews the recent studies and developments of the intelligent methods applied in the process parameter determination of injection molding. These intelligent methods are classified into three categories: case-based reasoning methods, expert system-based methods, and data fitting and optimization methods. A framework of process parameter determination is proposed after comprehensive discussions. Finally, the conclusions and future research topics are discussed.

  13. Effect of processing parameters on reaction bonding of silicon nitride

    NASA Technical Reports Server (NTRS)

    Richman, M. H.; Gregory, O. J.; Magida, M. B.

    1980-01-01

    Reaction bonded silicon nitride was developed. The relationship between the various processing parameters and the resulting microstructures was studied in order to design and synthesize reaction bonded materials with improved room temperature mechanical properties.

  14. Optimum Design of Forging Process Parameters and Preform Shape under Uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2004-06-01

    Forging is a highly complex non-linear process that is vulnerable to various uncertainties, such as variations in billet geometry, die temperature, material properties, workpiece and forging equipment positional errors and process parameters. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion and production risk. Identifying the sources of uncertainties, quantifying and controlling them will reduce risk in the manufacturing environment, which will minimize the overall cost of production. In this paper, various uncertainties that affect forging tool life and preform design are identified, and their cumulative effect on the forging process is evaluated. Since the forging process simulation is computationally intensive, the response surface approach is used to reduce time by establishing a relationship between the system performance and the critical process design parameters. Variability in system performance due to randomness in the parameters is computed by applying Monte Carlo Simulations (MCS) on generated Response Surface Models (RSM). Finally, a Robust Methodology is developed to optimize forging process parameters and preform shape. The developed method is demonstrated by applying it to an axisymmetric H-cross section disk forging to improve the product quality and robustness.
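
    The response-surface-plus-Monte-Carlo workflow outlined above can be sketched as follows; the quadratic surrogate, the choice of die stress as the response, and the input scatter are invented stand-ins for the authors' FE-based response surface models and uncertainty quantification.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed quadratic response surface, standing in for a surrogate fitted to FE
# forging simulations: predicted die stress (MPa) vs billet temperature (C) and friction factor.
def die_stress(temp, friction):
    return 900 - 0.8 * (temp - 1100) + 2500 * (friction - 0.3) ** 2

# Uncertain inputs scattered around their nominal values (assumed distributions).
n = 100_000
temp = rng.normal(1100, 15, n)        # furnace temperature scatter
friction = rng.normal(0.30, 0.03, n)  # lubrication variability

stress = die_stress(temp, friction)
print(f"mean die stress     : {stress.mean():7.1f} MPa")
print(f"std of die stress   : {stress.std():7.1f} MPa")
print(f"P(stress > 950 MPa) : {np.mean(stress > 950):7.3f}")
```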

  15. 'Scaling' analysis of the ice accretion process on aircraft surfaces

    NASA Technical Reports Server (NTRS)

    Keshock, E. G.; Tabrizi, A. H.; Missimer, J. R.

    1982-01-01

    A comprehensive set of scaling parameters is developed for the ice accretion process by analyzing the energy equations of the dynamic freezing zone and the already frozen ice layer, the continuity equation associated with supercooled liquid droplets entering into and impacting within the dynamic freezing zone, and the energy equation of the ice layer. No initial arbitrary judgments are made regarding the relative magnitudes of each of the terms. The method of intrinsic reference variables is employed in order to develop the appropriate scaling parameters and their relative significance in rime icing conditions in an orderly process, rather than utilizing empiricism. The significance of these parameters is examined and the parameters are combined with scaling criteria related to droplet trajectory similitude.

  16. Scheduling on the basis of the research of dependences among the construction process parameters

    NASA Astrophysics Data System (ADS)

    Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga

    2017-10-01

    The dependences among construction process parameters are investigated in the article: the average integrated qualification value of the shift, the number of workers per shift, and the average daily amount of completed work are considered on the basis of correlation coefficients. Basic data for studying the dependences among the above-stated parameters were collected during the construction of two standard objects, A and B (monolithic houses), over four months of construction (October, November, December, January). A Cobb-Douglas production function yielded correlation coefficients close to 1; the function is simple to use and well suited to describing the considered dependences. A development function describing the relationship among the considered parameters of the construction process is derived. This function makes it possible to select the optimum quantitative and qualitative (qualification) composition of the brigade link for the next period of work, according to a preset amount of work. A function of the optimized amounts of work, which reflects the interrelation of the key parameters of the construction process, is also developed; its values should be used as the average standard for scheduling the storming periods of construction.
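
    As a hedged illustration of fitting a Cobb-Douglas production function to the kind of data described above (crew qualification, crew size per shift, daily amount of completed work), the sketch below linearizes the function with logarithms and solves it by least squares; the daily figures are fabricated, not the site data from objects A and B.

```python
import numpy as np

# Fabricated daily records: average crew qualification score, workers per shift,
# and completed work volume -- stand-ins for the site data described above.
qual    = np.array([3.1, 3.4, 2.9, 3.8, 3.5, 3.2, 3.9, 3.6])
workers = np.array([12,  14,  10,  16,  15,  12,  18,  14])
output  = np.array([58,  70,  47,  88,  79,  60,  98,  72])

# Cobb-Douglas form: output = A * qual**alpha * workers**beta.
# Taking logarithms gives a linear model solvable by ordinary least squares.
X = np.column_stack([np.ones(len(output)), np.log(qual), np.log(workers)])
coef, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
A, alpha, beta = np.exp(coef[0]), coef[1], coef[2]
print(f"A = {A:.2f}, alpha = {alpha:.2f}, beta = {beta:.2f}")

# Planning use: predicted daily output for a proposed crew composition.
print("predicted output:", round(A * 3.5 ** alpha * 15 ** beta, 1))
```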

  17. Process Development of Porcelain Ceramic Material with Binder Jetting Process for Dental Applications

    NASA Astrophysics Data System (ADS)

    Miyanaji, Hadi; Zhang, Shanshan; Lassell, Austin; Zandinejad, Amirali; Yang, Li

    2016-03-01

    Custom ceramic structures possess significant potentials in many applications such as dentistry and aerospace where extreme environments are present. Specifically, highly customized geometries with adequate performance are needed for various dental prostheses applications. This paper demonstrates the development of process and post-process parameters for a dental porcelain ceramic material using binder jetting additive manufacturing (AM). Various process parameters such as binder amount, drying power level, drying time and powder spread speed were studied experimentally for their effect on geometrical and mechanical characteristics of green parts. In addition, the effects of sintering and printing parameters on the qualities of the densified ceramic structures were also investigated experimentally. The results provide insights into the process-property relationships for the binder jetting AM process, and some of the challenges of the process that need to be further characterized for the successful adoption of the binder jetting technology in high quality ceramic fabrications are discussed.

  18. Intelligent Modeling Combining Adaptive Neuro Fuzzy Inference System and Genetic Algorithm for Optimizing Welding Process Parameters

    NASA Astrophysics Data System (ADS)

    Gowtham, K. N.; Vasudevan, M.; Maduraimuthu, V.; Jayakumar, T.

    2011-04-01

    Modified 9Cr-1Mo ferritic steel is used as a structural material for steam generator components of power plants. Generally, tungsten inert gas (TIG) welding is preferred for welding these steels, but the depth of penetration achievable during autogenous welding is limited. Therefore, activated flux TIG (A-TIG) welding, a novel welding technique, has been developed in-house to increase the depth of penetration. In modified 9Cr-1Mo steel joints produced by the A-TIG welding process, weld bead width, depth of penetration, and heat-affected zone (HAZ) width play an important role in determining the mechanical properties as well as the performance of the weld joints during service. To obtain the desired weld bead geometry and HAZ width, it becomes important to set the welding process parameters. In this work, an adaptive neuro-fuzzy inference system is used to develop independent models correlating the welding process parameters like current, voltage, and torch speed with weld bead shape parameters like depth of penetration, bead width, and HAZ width. Then a genetic algorithm is employed to determine the optimum A-TIG welding process parameters to obtain the desired weld bead shape parameters and HAZ width.

  19. Parameter extraction with neural networks

    NASA Astrophysics Data System (ADS)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe, because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time-consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs with desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.
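
    A minimal sketch of the "train forward, then run in reverse" idea described above. To keep it short, a quadratic least-squares surrogate stands in for the neural network, and the lithography "process" is a made-up analytic function; the grid-search inversion at the end illustrates extracting hidden input parameters that reproduce a measured output, under those stated substitutions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up "process": critical dimension (nm) as a function of exposure dose and focus.
def process(dose, focus):
    return 180 - 1.2 * dose + 60 * focus ** 2 + rng.normal(0, 0.5)

# Step 1: build a training set by sampling the process (as a NN training set would be built).
doses = rng.uniform(20, 40, 200)
foci = rng.uniform(-0.5, 0.5, 200)
cds = np.array([process(d, f) for d, f in zip(doses, foci)])

# Step 2: fit a forward surrogate.  A quadratic least-squares model stands in
# for the neural network purely to keep this sketch short.
def features(d, f):
    return np.column_stack([np.ones(np.size(d)), d, f, d * f, np.square(d), np.square(f)])

beta, *_ = np.linalg.lstsq(features(doses, foci), cds, rcond=None)

# Step 3: "run the model in reverse" -- search for inputs that reproduce a measured
# CD of 155 nm, i.e. extract the hidden process parameters behind that observation.
dg, fg = np.meshgrid(np.linspace(20, 40, 200), np.linspace(-0.5, 0.5, 200))
pred = features(dg.ravel(), fg.ravel()) @ beta
best = np.argmin(np.abs(pred - 155.0))
print(f"extracted dose ~ {dg.ravel()[best]:.1f} mJ/cm2, focus ~ {fg.ravel()[best]:+.2f} um")
```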

  20. Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak

    2018-03-01

    The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resultant surface roughness and kerf width characteristics. The research is based on Design of Experiments (DoE) analysis. Response Surface Methodology (RSM), one of the most practical and effective techniques for developing a process model, is used in this work. Although RSM has previously been used for optimization of laser processes, this research applies it to investigate the best conditions for laser cutting of materials such as composite wood (veneer). The input parameters evaluated are focal length, power supply and cutting speed, with the output responses being kerf width, surface roughness and temperature. To efficiently optimize and customize the kerf width and surface roughness characteristics, a laser cutting process model using the Taguchi L9 orthogonal methodology was proposed.
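
    A hedged sketch of the Taguchi workflow named above: build the standard L9 orthogonal array for three 3-level factors (focal length, power, cutting speed), attach kerf-width responses, and rank factor levels with a smaller-the-better signal-to-noise ratio. The level values and the nine kerf measurements are invented; they are not the paper's data.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (levels coded 0..2); the first three columns
# are assigned to focal length, laser power and cutting speed.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

focal = np.array([50, 63, 75])     # mm   (assumed levels)
power = np.array([60, 80, 100])    # W    (assumed levels)
speed = np.array([5, 10, 15])      # mm/s (assumed levels)

# Invented kerf-width measurements (mm) for the nine runs.
kerf = np.array([0.42, 0.38, 0.35, 0.47, 0.40, 0.45, 0.52, 0.49, 0.44])

# Smaller-the-better signal-to-noise ratio for each run.
sn = -10 * np.log10(kerf ** 2)

# Mean S/N per level of each factor; the highest mean marks the preferred level.
for name, col in [("focal length", 0), ("power", 1), ("speed", 2)]:
    means = np.array([sn[L9[:, col] == lvl].mean() for lvl in range(3)])
    print(f"{name:12s}  S/N by level: {np.round(means, 2)}  -> best level {means.argmax()}")
```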

  1. Sensitivity analysis of the add-on price estimate for the silicon web growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1981-01-01

    The web growth process, a silicon-sheet technology option developed for the flat plate solar array (FSA) project, was examined. Base case data for the technical and cost parameters are projected for the technical and commercial readiness phases of the FSA project. The process add-on price is analyzed using the base case data for cost parameters such as equipment, space, direct labor, materials and utilities, and for production parameters such as growth rate and run length, with a computer program developed specifically to perform the sensitivity analysis with improved price estimation. Silicon price, sheet thickness and cell efficiency are also discussed.

  2. Technology Estimating 2: A Process to Determine the Cost and Schedule of Space Technology Research and Development

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.

    2014-01-01

    As a leader in space technology research and development, NASA is continuing the development of the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low maturity technology research and development, where the Technology Readiness Level is less than TRL 6. NASA's Technology Roadmap comprises 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TA): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This research report continues the development of technology estimating efforts completed during 2013-2014, and addresses the refinement of the parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to Cost Estimating Relationships (CERs) used in the parametric cost estimating analysis. This research addresses the architecture for administration of the Technology Cost and Scheduling Estimating tool, the parameters suggested for computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.

  3. The physicochemical process of bacterial attachment to abiotic surfaces: Challenges for mechanistic studies, predictability and the development of control strategies.

    PubMed

    Wang, Yi; Lee, Sui Mae; Dykes, Gary

    2015-01-01

    Bacterial attachment to abiotic surfaces can be explained as a physicochemical process. Mechanisms of the process have been widely studied but are not yet well understood due to their complexity. Physicochemical processes can be influenced by various interactions and factors in attachment systems, including, but not limited to, hydrophobic interactions, electrostatic interactions and substratum surface roughness. Mechanistic models and control strategies for bacterial attachment to abiotic surfaces have been established based on the current understanding of the attachment process and the interactions involved. Due to a lack of process control and standardization in the methodologies used to study the mechanisms of bacterial attachment, however, various challenges are apparent in the development of models and control strategies. In this review, the physicochemical mechanisms, interactions and factors affecting the process of bacterial attachment to abiotic surfaces are described. Mechanistic models established based on these parameters are discussed in terms of their limitations. Currently employed methods to study these parameters and bacterial attachment are critically compared. The roles of these parameters in the development of control strategies for bacterial attachment are reviewed, and the challenges that arise in developing mechanistic models and control strategies are assessed.

  4. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system.

  5. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system.

  6. Recapturing Graphite-Based Fuel Element Technology for Nuclear Thermal Propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trammell, Michael P; Jolly, Brian C; Miller, James Henry

    ORNL is currently recapturing graphite based fuel forms for Nuclear Thermal Propulsion (NTP). This effort involves research and development on materials selection, extrusion, and coating processes to produce fuel elements representative of historical ROVER and NERVA fuel. Initially, lab scale specimens were fabricated using surrogate oxides to develop processing parameters that could be applied to full length NTP fuel elements. Progress toward understanding the effect of these processing parameters on surrogate fuel microstructure is presented.

  7. PLAN-TA9-2443(U), Rev. B Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing Standard Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Geoffrey Wayne

    2016-03-16

    This document identifies the scope and some general procedural steps for performing Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing. This Test Plan describes the requirements, responsibilities, and process for preparing and testing a range of chemical surrogates intended to mimic the energetic response of waste created during processing of legacy nitrate salts. The surrogates developed are expected to bound the thermal and mechanical sensitivity of such waste, allowing for the development of process parameters required to minimize the risk to workers and the public when processing this waste. Such parameters will be based on the worst-case kinetic parameters as derived from APTAC measurements as well as the development of controls to mitigate sensitivities that may exist due to friction, impact, and spark. This Test Plan will define the scope and technical approach for activities that implement Quality Assurance requirements relevant to formulation and testing.

  8. Effect of spray drying processing parameters on the insecticidal activity of two encapsulated formulations of baculovirus

    USDA-ARS?s Scientific Manuscript database

    The aim of this work was to evaluate the effect of spray dryer processing parameters on the process yield and insecticidal activity of baculovirus to support the development of this beneficial group of microbes as biopesticides. For each of two baculoviruses [granulovirus (GV) from Pieris rapae (L....

  9. QbD for pediatric oral lyophilisates development: risk assessment followed by screening and optimization.

    PubMed

    Casian, Tibor; Iurian, Sonia; Bogdan, Catalina; Rus, Lucia; Moldovan, Mirela; Tomuta, Ioan

    2017-12-01

    This study proposed the development of oral lyophilisates with respect to pediatric medicine development guidelines, applying risk management strategies and DoE as an integrated QbD approach. Product critical quality attributes were overviewed by generating Ishikawa diagrams for risk assessment purposes, considering process, formulation and methodology-related parameters. Failure Mode Effect Analysis was applied to highlight critical formulation and process parameters with an increased probability of occurrence and a high impact on product performance. To investigate the effect of qualitative and quantitative formulation variables, D-optimal designs were used for screening and optimization purposes. Process parameters related to suspension preparation and lyophilization were classified as significant factors and were controlled by implementing risk mitigation strategies. Both quantitative and qualitative formulation variables introduced in the experimental design influenced the product's disintegration time, mechanical resistance and dissolution properties, selected as CQAs. The optimum formulation selected through the Design Space presented an ultra-fast disintegration time (5 seconds) and a good dissolution rate (above 90%) combined with high mechanical resistance (above 600 g load). Combining FMEA and DoE allowed the science-based development of a product with respect to the defined quality target profile by providing better insight into the relevant parameters throughout the development process. The utility of risk management tools in pharmaceutical development was demonstrated.
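
    The FMEA step mentioned above reduces to ranking failure modes by a risk priority number (severity x occurrence x detectability) and carrying high-RPN parameters forward into the screening design; the sketch below shows that arithmetic with invented failure modes, scores and threshold, which are assumptions rather than the study's actual risk assessment.

```python
# Hypothetical failure modes scored 1-10 for severity, occurrence and detectability.
failure_modes = [
    ("suspension homogeneity drift", 7, 6, 5),
    ("freezing rate variation",      6, 4, 6),
    ("binder concentration error",   8, 3, 3),
    ("filling volume variation",     5, 5, 2),
]

# Risk priority number = S * O * D; modes above an assumed threshold are carried
# forward into the screening DoE, the rest are handled procedurally.
threshold = 150
for name, s, o, d in sorted(failure_modes, key=lambda m: -(m[1] * m[2] * m[3])):
    rpn = s * o * d
    action = "investigate in DoE" if rpn >= threshold else "control procedurally"
    print(f"{name:30s} RPN = {rpn:3d}  -> {action}")
```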

  10. CAD/CAM interface design of excimer laser micro-processing system

    NASA Astrophysics Data System (ADS)

    Jing, Liang; Chen, Tao; Zuo, Tiechuan

    2005-12-01

    Recently, CAD/CAM technology has gradually come into use in the field of laser processing. Before the CAD/CAM interface was designed, the excimer laser micro-processing system recognized only G-code instructions, and designing a part with G instructions was difficult for users: efficiency was low and the probability of making errors was high. Using the secondary development technology of AutoCAD with Visual Basic, an application was developed to pick up each entity's information in a drawing and convert it into that entity's processing parameters. An additional function was also added to the former control software to read these processing parameters for each entity and realize continuous processing of the graphic. Based on this CAD/CAM interface, users can design a part in AutoCAD instead of writing G instructions, and the time needed to design a part is sharply shortened. The new design approach also ensures that the processing parameters of the part are correct and unambiguous. Processing of a complex novel bio-chip has been realized with this new function.

  11. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality has been met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed using Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results are promising, and it can be concluded that the performance of the model is satisfactory for this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them with various sets of data.
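
    A compact, hedged sketch of the Gravitational Search Algorithm concept described above, applied to a toy two-parameter deposition objective; the paper's model optimizes four sputtering parameters and was implemented in Java, whereas the objective function, constants and parameter ranges here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy objective: a made-up film-quality penalty vs (RF power in W, substrate temperature in C).
def penalty(x):
    p, t = x[:, 0], x[:, 1]
    return (p - 150) ** 2 / 400 + (t - 300) ** 2 / 900

lo, hi = np.array([50.0, 100.0]), np.array([300.0, 500.0])
n_agents, n_iter = 20, 200
X = rng.uniform(lo, hi, (n_agents, 2))
V = np.zeros_like(X)

for it in range(n_iter):
    fit = penalty(X)
    G = 100 * np.exp(-20 * it / n_iter)            # decaying gravitational constant
    m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
    M = m / m.sum()                                # better agents get larger mass

    # Acceleration of each agent from the "gravity" of all others.
    acc = np.zeros_like(X)
    for i in range(n_agents):
        diff = X - X[i]
        dist = np.linalg.norm(diff, axis=1) + 1e-12
        acc[i] = np.sum(G * M[:, None] * diff / dist[:, None] * rng.random((n_agents, 1)), axis=0)

    V = rng.random(X.shape) * V + acc              # stochastic velocity update
    X = np.clip(X + V, lo, hi)

best = X[np.argmin(penalty(X))]
print(f"best RF power ~ {best[0]:.1f} W, substrate temperature ~ {best[1]:.1f} C")
```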

  12. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method for calculating parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
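
    To make the deconvolution step concrete, here is a hedged sketch of a standard truncated-SVD deconvolution as commonly used in CTP post-processing (not necessarily the algorithm analysed in the paper): the tissue curve is modelled as the arterial input function convolved with a scaled residue function, and the truncation threshold plays the role of the regularization-strength parameter whose influence on quantitative accuracy is at issue. All curves and values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

dt, n = 1.0, 40                         # 1 s sampling, 40 time points
t = np.arange(n) * dt

# Synthetic arterial input function (gamma-variate shape) and true residue function.
aif = np.maximum(t - 5, 0) ** 3 * np.exp(-np.maximum(t - 5, 0) / 1.5)
aif /= aif.max()
cbf_true = 0.6                          # "flow" scale factor (arbitrary units)
residue = np.exp(-t / 4.0)

# Tissue curve = CBF * (AIF convolved with residue) + noise, via a Toeplitz matrix A.
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
tissue = cbf_true * (A @ residue) + rng.normal(0, 0.01, n)

# Truncated-SVD deconvolution: discard singular values below a fraction of the largest.
U, s, Vt = np.linalg.svd(A)
for frac in (0.05, 0.2):
    s_inv = np.where(s > frac * s[0], 1.0 / s, 0.0)
    k = Vt.T @ (s_inv * (U.T @ tissue))            # recovered CBF * residue function
    print(f"truncation {frac:4.2f}: estimated CBF = {k.max():.3f}  (true {cbf_true})")
```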

  13. Production Process of Biocompatible Magnesium Alloy Tubes Using Extrusion and Dieless Drawing Processes

    NASA Astrophysics Data System (ADS)

    Kustra, Piotr; Milenin, Andrij; Płonka, Bartłomiej; Furushima, Tsuyoshi

    2016-06-01

    Development of a technological production process for biocompatible magnesium tubes for medical applications is the subject of the present paper. The technology consists of two stages: extrusion and dieless drawing. Mg alloys for medical applications such as MgCa0.8 are characterized by low technological plasticity during deformation, which is why optimization of the production parameters is necessary to obtain a good quality product. Thus, the authors developed yield stress and ductility models for the investigated Mg alloy and then used numerical simulations to evaluate proper manufacturing conditions. The Grid Extrusion3d software developed by the authors was used to determine the optimum process parameters for extrusion: a billet temperature of 400 °C and an extrusion velocity of 1 mm/s. Based on these parameters, a defect-free tube with an external diameter of 5 mm was manufactured. Then, commercial Abaqus software was used for modeling the dieless drawing. It was shown that a reduction in area of 60% can be realized for the MgCa0.8 magnesium alloy. Tubes with a final diameter of 3 mm were selected as a case study to present the capabilities of the proposed processes.

  14. Mathematical modeling of heat treatment processes conserving biological activity of plant bioresources

    NASA Astrophysics Data System (ADS)

    Rodionova, N. S.; Popov, E. S.; Pozhidaeva, E. A.; Pynzar, S. S.; Ryaskina, L. O.

    2018-05-01

    The aim of this study is to develop a mathematical model of the heat exchange process in LT-processing in order to estimate the dynamics of temperature field changes and optimize the regime parameters, taking into account the non-stationarity of the process and the physicochemical and thermophysical properties of food systems. The application of LT-processing, based on the use of low-temperature modes in the thermal culinary processing of raw materials with preliminary vacuum packaging in a polymer heat-resistant film, is a promising trend in the development of equipment and technology in the catering field. The application of LT-processing to food raw materials guarantees the preservation of biologically active substances in food environments that are characterized by a certain thermolability, as well as extended shelf life and high consumer characteristics of food systems, which are capillary-porous bodies. When performing the mathematical modeling of the LT-processing process, the symbolic mathematics package "Maple" was used, as well as the mathematical package flexPDE, which uses the finite element method for modeling objects with distributed parameters. The processing of experimental results was carried out with the help of software developed in the programming language Python 3.4. To calculate and optimize the parameters of the LT-processing of polycomponent food systems, the differential equation of non-stationary thermal conductivity was used, the solution of which makes it possible to identify the temperature change at any point of the solid at different moments. The present study specifies data on the thermophysical characteristics of a polycomponent food system based on plant raw materials, with the help of which a physico-mathematical model of the LT-processing process has been developed. The obtained mathematical model makes it possible to determine the dynamics of the temperature field in different sections of the LT-processed polycomponent food systems by calculating the evolution profiles of the temperature fields, which enables analysis of the efficiency of the regime parameters of heat treatment.

  15. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
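
    The Latin hypercube step mentioned above can be sketched in a few lines; the parameter names and ranges below are placeholders, not the RHESSys parameters actually varied in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n_samples, bounds):
    """One stratified sample per interval in each dimension, randomly paired across dimensions."""
    d = len(bounds)
    # Stratify [0, 1) into n_samples bins and jitter within each bin.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, d))) / n_samples
    # Permute the bin order independently per dimension to decorrelate the columns.
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Placeholder parameter ranges (not the study's actual values).
bounds = [(0.1, 0.9),      # e.g. allocation fraction to roots
          (0.001, 0.05),   # e.g. soil water drainage coefficient
          (20.0, 40.0)]    # e.g. optimal photosynthesis temperature (C)

samples = latin_hypercube(100, bounds)
print(samples[:5])
```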

  16. CO2 laser cutting of MDF. 1. Determination of process parameter settings

    NASA Astrophysics Data System (ADS)

    Lum, K. C. P.; Ng, S. L.; Black, I.

    2000-02-01

    This paper details an investigation into the laser processing of medium-density fibreboard (MDF). Part 1 reports on the determination of process parameter settings for the effective cutting of MDF by CO2 laser, using an established experimental methodology developed to study the interrelationships between, and the effects of, varying laser set-up parameters. Results are presented for both continuous wave (CW) and pulse mode (PM) cutting, and the associated cut quality effects have been commented on.

  17. Process Parameter Optimization for Wobbling Laser Spot Welding of Ti6Al4V Alloy

    NASA Astrophysics Data System (ADS)

    Vakili-Farahani, F.; Lungershausen, J.; Wasmer, K.

    Laser beam welding (LBW) coupled with the "wobble effect" (fast oscillation of the laser beam) is very promising for the high-precision micro-joining industry. For this process, as in conventional LBW, the laser welding process parameters play a very significant role in determining the quality of a weld joint. Consequently, four process parameters (laser power, wobble frequency, number of rotations within a single laser pulse and focus position) and five responses (penetration, width, area of the fusion zone, area of the heat-affected zone (HAZ) and hardness) were investigated for spot welding of Ti6Al4V alloy (grade 5) using a design of experiments (DoE) approach. This paper presents experimental results showing the effects of varying the most important process parameters on the spot weld quality of Ti6Al4V alloy. Semi-empirical mathematical models were developed to correlate the laser welding parameters to each of the measured weld responses. The adequacy of the models was then examined by various methods such as ANOVA. These models not only allow a better understanding of the wobble laser welding process and prediction of the process performance, but also make it possible to determine optimal process parameters. The optimal combination of process parameters was therefore determined with respect to a defined set of quality criteria.

  18. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most strongly influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, taking into account also the microstructural evolution of the material used (the nickel superalloy Waspalloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed in order to find the combination of process parameters which allows the average grain size to be minimized. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, by using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. Finally, the minimum value of the average grain size with respect to the input parameters has been found.

  19. Robust parameter design for automatically controlled systems and nanostructure synthesis

    NASA Astrophysics Data System (ADS)

    Dasgupta, Tirthankar

    2007-12-01

    This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, at large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a Multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte-Carlo simulations. The second part of the research deals with the development of an experimental design methodology, tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is proposed for exploring the best process conditions for synthesis of nanowires. The SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.

  20. Application of quality by design principles to the development and technology transfer of a major process improvement for the manufacture of a recombinant protein.

    PubMed

    Looby, Mairead; Ibarra, Neysi; Pierce, James J; Buckley, Kevin; O'Donovan, Eimear; Heenan, Mary; Moran, Enda; Farid, Suzanne S; Baganz, Frank

    2011-01-01

    This study describes the application of quality by design (QbD) principles to the development and implementation of a major manufacturing process improvement for a commercially distributed therapeutic protein produced in Chinese hamster ovary cell culture. The intent of this article is to focus on QbD concepts, and provide guidance and understanding on how the various components combine together to deliver a robust process in keeping with the principles of QbD. A fed-batch production culture and a virus inactivation step are described as representative examples of upstream and downstream unit operations that were characterized. A systematic approach incorporating QbD principles was applied to both unit operations, involving risk assessment of potential process failure points, small-scale model qualification, design and execution of experiments, definition of operating parameter ranges and process validation acceptance criteria followed by manufacturing-scale implementation and process validation. Statistical experimental designs were applied to the execution of process characterization studies evaluating the impact of operating parameters on product quality attributes and process performance parameters. Data from process characterization experiments were used to define the proven acceptable range and classification of operating parameters for each unit operation. Analysis of variance and Monte Carlo simulation methods were used to assess the appropriateness of process design spaces. Successful implementation and validation of the process in the manufacturing facility and the subsequent manufacture of hundreds of batches of this therapeutic protein verifies the approaches taken as a suitable model for the development, scale-up and operation of any biopharmaceutical manufacturing process. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  1. Wind speed vector restoration algorithm

    NASA Astrophysics Data System (ADS)

    Baranov, Nikolay; Petrov, Gleb; Shiriaev, Ilia

    2018-04-01

    Impulse wind lidar (IWL) signal-processing software developed by JSC «BANS» recovers the full wind speed vector from its radial projections and provides wind parameter information out to a distance of 2 km. This research studies signal-processing techniques that increase the accuracy and speed of the wind parameter calculation. Measurement results from the IWL and from a continuous scanning lidar were compared, and IWL data-processing modeling results were analyzed.
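
    The abstract does not detail the retrieval algorithm, but a generic least-squares reconstruction of the wind vector from radial projections is sketched below; the beam-geometry convention and the numbers are assumptions for illustration only.

    ```python
    # Sketch: recover the wind vector (u, v, w) from lidar radial-velocity projections
    # by linear least squares. Geometry convention (azimuth from north) is assumed.
    import numpy as np

    def wind_vector(radial_speeds, azimuth_deg, elevation_deg):
        az, el = np.broadcast_arrays(np.radians(azimuth_deg), np.radians(elevation_deg))
        # Each radial speed is the projection of (u, v, w) onto the beam direction
        A = np.column_stack([np.sin(az) * np.cos(el),   # east component weight
                             np.cos(az) * np.cos(el),   # north component weight
                             np.sin(el)])               # vertical component weight
        uvw, *_ = np.linalg.lstsq(A, np.asarray(radial_speeds), rcond=None)
        return uvw

    # Example: four beams at 90-degree azimuth steps, 75-degree elevation
    u, v, w = wind_vector([2.1, -0.4, -1.9, 0.6], [0, 90, 180, 270], 75)
    print(f"u = {u:.2f} m/s, v = {v:.2f} m/s, w = {w:.2f} m/s")
    ```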

  2. Computer-Controlled Cylindrical Polishing Process for Large X-Ray Mirror Mandrels

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    We are developing high-energy grazing incidence shell optics for hard-x-ray telescopes. The resolution of the mirror shells depends on the quality of the cylindrical mandrel from which they are replicated. Mid-spatial-frequency axial figure error is a dominant contributor in the error budget of the mandrel. This paper presents our efforts to develop a deterministic cylindrical polishing process in order to keep the mid-spatial-frequency axial figure errors to a minimum. Simulation software is developed to model the residual surface figure errors of a mandrel due to the polishing process parameters and the tools used, as well as to compute the optical performance of the optics. The study carried out using the developed software was focused on establishing a relationship between the polishing process parameters and the mid-spatial-frequency error generation. The process parameters modeled are the speeds of the lap and the mandrel, the tools' influence function, the contour path (dwell) of the tools, their shape, and the distribution of the tools on the polishing lap. Using the inputs from the mathematical model, a mandrel having a conically approximated Wolter-1 geometry has been polished on a newly developed computer-controlled cylindrical polishing machine. The preliminary results of a series of polishing experiments demonstrate a qualitative agreement with the developed model. We report our first experimental results and discuss plans for further improvements in the polishing process. The ability to simulate the polishing process is critical to optimizing the polishing process, improving the mandrel quality, and significantly reducing the cost of mandrel production.

  3. Effect of Electron Beam Freeform Fabrication (EBF3) Processing Parameters on Composition of Ti-6-4

    NASA Technical Reports Server (NTRS)

    Lach, Cynthia L.; Taminger, Karen; Schuszler, A. Bud, II; Sankaran, Sankara; Ehlers, Helen; Nasserrafi, Rahbar; Woods, Bryan

    2007-01-01

    The Electron Beam Freeform Fabrication (EBF3) process developed at NASA Langley Research Center was evaluated using a design of experiments approach to determine the effect of processing parameters on the composition and geometry of Ti-6-4 deposits. The effects of three processing parameters (beam power, translation speed, and wire feed rate) were investigated by varying one while keeping the remaining parameters constant. A three-factor, three-level, fully balanced mutually orthogonal array (L27) design of experiments approach was used to examine the effects of low, medium, and high settings for the processing parameters on the chemistry, geometry, and quality of the resulting deposits. Single-bead-high deposits were fabricated and evaluated for 27 experimental conditions. Loss of aluminum in Ti-6-4 was observed in EBF3 processing due to selective vaporization of the aluminum from the sustained molten pool in the vacuum environment; therefore, the chemistries of the deposits were measured and compared with the composition of the initial wire and base plate to determine if the loss of aluminum could be minimized through careful selection of processing parameters. The influence of processing parameters and coupling between these parameters on bulk composition, measured by Direct Current Plasma (DCP), local microchemistries determined by Wavelength Dispersive Spectrometry (WDS), and deposit geometry will also be discussed.

  4. Process Integration and Optimization of ICME Carbon Fiber Composites for Vehicle Lightweighting: A Preliminary Development

    DOE PAGES

    Xu, Hongyi; Li, Yang; Zeng, Danielle

    2017-01-02

    Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compound (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structure preformation simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.

  5. Hot-crack test for aluminium alloys welds using TIG process

    NASA Astrophysics Data System (ADS)

    Niel, A.; Deschaux-Beaume, F.; Bordreuil, C.; Fras, G.

    2010-06-01

    Hot cracking is a critical defect frequently observed during welding of aluminium alloys. In order to better understand the interaction between cracking phenomenon, process parameters, mechanical factors and microstructures resulting from solidification after welding, an original hot-cracking test during welding is developed. According to in-situ observations and post mortem analyses, hot cracking mechanisms are investigated, taking into account the interaction between microstructural parameters, depending on the thermal cycles, and mechanical parameters, depending on geometry and clamping conditions of the samples and on the thermal field on the sample. Finally, a process map indicating the limit between cracking and non-cracking zones according to welding parameters is presented.

  6. A preliminary evaluation of an F100 engine parameter estimation process using flight data

    NASA Technical Reports Server (NTRS)

    Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.

    1990-01-01

    The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.
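
    A minimal sketch of the first step described above, assuming a random-walk state model for the deterioration parameters and an illustrative linearized measurement matrix; this is the generic Kalman filter recursion, not the published F100 engine model.

    ```python
    # Sketch: Kalman filter tracking five slowly varying deterioration parameters from
    # noisy engine measurements. H, Q, R, and the data are illustrative assumptions.
    import numpy as np

    n_params, n_meas = 5, 3
    rng = np.random.default_rng(1)
    H = rng.normal(size=(n_meas, n_params))   # linearized sensitivities of measurements
    Q = 1e-4 * np.eye(n_params)               # process noise: parameters drift slowly
    R = 1e-2 * np.eye(n_meas)                 # measurement noise

    x = np.zeros(n_params)                    # deterioration parameter estimate
    P = np.eye(n_params)                      # estimate covariance

    def kalman_step(x, P, z):
        P_pred = P + Q                        # predict (random-walk state model)
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x + K @ (z - H @ x)           # update with the measurement residual
        P_new = (np.eye(n_params) - K @ H) @ P_pred
        return x_new, P_new

    for z in rng.normal(size=(100, n_meas)):  # stand-in for flight measurements
        x, P = kalman_step(x, P, z)
    print("estimated deterioration parameters:", np.round(x, 3))
    ```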

  7. A preliminary evaluation of an F100 engine parameter estimation process using flight data

    NASA Technical Reports Server (NTRS)

    Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.

    1990-01-01

    The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the 'compact engine model' (CEM). In this step the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion-control-law development.

  8. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  9. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and to eliminate the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include deviations from reference values in aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not as optional elements like those just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.

  10. Design of a Uranium Dioxide Spheroidization System

    NASA Technical Reports Server (NTRS)

    Cavender, Daniel P.; Mireles, Omar R.; Frendi, Abdelkader

    2013-01-01

    The plasma spheroidization system (PSS) is the first process in the development of tungsten-uranium dioxide (W-UO2) fuel cermets. The PSS process improves particle spherocity and surface morphology for coating by the chemical vapor deposition (CVD) process. Angular, fully dense particles melt in an argon-hydrogen plasma jet at 32-36 kW and become spherical due to surface tension. Surrogate CeO2 powder was used in place of UO2 for system and process parameter development. Particles range in size from 50 to 100 microns in diameter. Student's t-test and the hypothesis test of two proportions were applied to characterize and compare the spherocity of pre- and post-process powders. Particle spherocity was determined by irregularity parameter. Processed powders show a greater than 800% increase in the number of spherical particles over the stock powder, with the mean spherocity only mildly improved. It is recommended that powders be processed two to three times in order to reach the desired spherocity, and that process parameters be optimized for a narrower particle size range. Keywords: spherocity, spheroidization, plasma, uranium-dioxide, cermet, nuclear, propulsion
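
    The two-proportion comparison mentioned above can be sketched as a standard z-test; the particle counts below are invented for illustration and are not the measured data.

    ```python
    # Sketch: two-proportion z-test comparing the fraction of spherical particles in
    # the stock powder versus the plasma-processed powder (counts are illustrative).
    from math import sqrt
    from scipy.stats import norm

    def two_proportion_z(success_a, n_a, success_b, n_b):
        p_a, p_b = success_a / n_a, success_b / n_b
        p_pool = (success_a + success_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return z, 2 * norm.sf(abs(z))            # two-sided p-value

    # e.g. 40/500 spherical particles before processing vs. 360/500 after
    z, p = two_proportion_z(40, 500, 360, 500)
    print(f"z = {z:.1f}, p = {p:.3g}")
    ```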

  11. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
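
    A minimal sketch of the active-learning loop described above: a neural-network surrogate ranks unevaluated parameter combinations by their predicted chance of matching the calibration targets, so only the most promising ones are simulated. The run_simulation() acceptance rule, pool size, and batch sizes are placeholders, not the UWBCS model.

    ```python
    # Sketch: active learning for calibration. A classifier trained on evaluated
    # parameter combinations steers which combinations get simulated next.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    pool = rng.uniform(size=(5000, 4))               # candidate parameter combinations

    def run_simulation(theta):                       # placeholder for the simulation model
        return np.linalg.norm(theta - 0.5) < 0.45    # True if outputs match observed data

    idx = rng.choice(len(pool), 50, replace=False)   # small random seed batch
    X, y = pool[idx], np.array([run_simulation(t) for t in pool[idx]])
    unseen = np.ones(len(pool), bool)
    unseen[idx] = False

    for _ in range(10):                              # active-learning rounds
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                            random_state=0).fit(X, y)
        scores = clf.predict_proba(pool)[:, 1]       # predicted P(match)
        scores[~unseen] = -np.inf                    # never re-evaluate a combination
        pick = np.argsort(-scores)[:50]
        unseen[pick] = False
        X = np.vstack([X, pool[pick]])
        y = np.concatenate([y, [run_simulation(t) for t in pool[pick]]])

    print(f"matching combinations found: {int(y.sum())} of {len(y)} evaluated")
    ```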

  12. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  13. Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.

    2002-05-01

    Pacific Northwest National Laboratory is in the process of developing and implementing an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification, to identify and document the major features and assumptions of each conceptual model; the process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation, to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments, which will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization, to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty, to develop the pdfs for the important uncertain parameters, including identification of any correlations among parameters; c) Propagation of Uncertainty, to propagate parameter uncertainties (e.g., by first-order second-moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
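
    The propagation-of-uncertainty step (item c above) can be sketched with a plain Monte Carlo sample pushed through a toy model; the distributions, the travel-time model, and the threshold below are illustrative placeholders, not the Hanford site model.

    ```python
    # Sketch: Monte Carlo propagation of parameter uncertainty to a CCDF of a
    # prediction of interest (here, a toy advective travel time).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20_000

    # Hypothetical uncertain parameters (assumed uncorrelated here)
    conductivity = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # m/d
    porosity = rng.uniform(0.10, 0.30, size=n)

    def travel_time(k, phi, gradient=1e-3, distance=1000.0):
        """Toy advective travel time (years) standing in for the site model."""
        velocity = k * gradient / phi                           # m/d
        return distance / velocity / 365.25

    t = travel_time(conductivity, porosity)
    threshold = 100.0                                           # years
    print(f"CCDF at {threshold:.0f} yr: P(T > {threshold:.0f}) = {np.mean(t > threshold):.3f}")
    ```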

  14. Mathematical Model of Nonstationary Separation Processes Proceeding in the Cascade of Gas Centrifuges in the Process of Separation of Multicomponent Isotope Mixtures

    NASA Astrophysics Data System (ADS)

    Orlov, A. A.; Ushakov, A. A.; Sovach, V. P.

    2017-03-01

    We have developed and implemented in software a mathematical model of the nonstationary separation processes proceeding in cascades of gas centrifuges during the separation of multicomponent isotope mixtures. Using this model, the parameters of the separation process for germanium isotopes have been calculated. It has been shown that the model adequately describes the nonstationary processes in the cascade and is suitable for calculating their parameters during the separation of multicomponent isotope mixtures.

  15. A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.

    PubMed

    Faya, Paul; Stamey, James D; Seaman, John W

    2017-01-01

    For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
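
    A minimal sketch of the Bayesian idea for the survivor-curve case, using a grid approximation rather than the paper's full method: surviving counts are modelled as Poisson with mean N0*10^(-t/D) and a flat prior is placed on D, so any probabilistic sterility statement becomes a sum over the posterior. The data, prior range, and acceptance criterion are invented.

    ```python
    # Sketch: grid-approximation posterior for a D-value from survivor-curve data,
    # then a probabilistic statement about meeting an (illustrative) 12-D criterion.
    import numpy as np

    times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])                  # exposure times (min)
    counts = np.array([1.0e6, 9.5e4, 1.1e4, 9.0e2, 1.1e2])       # surviving organisms
    n0 = counts[0]

    D_grid = np.linspace(0.5, 2.0, 2000)                         # candidate D-values (min)
    lam = n0 * 10.0 ** (-times / D_grid[:, None])                # Poisson means, one row per D
    log_like = np.sum(counts * np.log(lam) - lam, axis=1)        # up to an additive constant
    post = np.exp(log_like - log_like.max())
    post /= post.sum()

    # Posterior probability that a 15-minute hold delivers a 12-log reduction,
    # i.e. that 12 * D <= 15 (purely illustrative criterion)
    p_ok = post[12.0 * D_grid <= 15.0].sum()
    print(f"posterior mean D = {np.sum(D_grid * post):.3f} min, P(12D <= 15 min) = {p_ok:.3f}")
    ```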

  16. Analysing the influence of FSP process parameters on IGC susceptibility of AA5083 using Sugeno - Fuzzy model

    NASA Astrophysics Data System (ADS)

    Jayakarthick, C.; Povendhan, A. P.; Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Aluminium alloy AA5083 was friction stir processed to improve its intergranular corrosion (IGC) resistance. FSP trials were performed by varying the process parameters as per Taguchi's L18 orthogonal array. The IGC resistance of the friction stir processed specimens was determined by immersing them in concentrated nitric acid and measuring the mass loss per unit area. Results indicate that dispersion and partial dissolution of the secondary phase increased the IGC resistance of the friction stir processed specimens. A Sugeno fuzzy model was developed to study the effect of FSP process parameters on the IGC susceptibility of friction stir processed specimens. Tool rotation speed, tool traverse speed, and shoulder diameter have a significant effect on the IGC susceptibility of the friction stir processed specimens.

  17. Development of AACAP practice parameters for gender nonconformity and gender discordance in children and adolescents.

    PubMed

    Adelson, Stewart L

    2011-10-01

    The American Academy of Child and Adolescent Psychiatry (AACAP) is preparing a publication, Practice Parameter on Gay, Lesbian or Bisexual Sexual Orientation, Gender-Nonconformity, and Gender Discordance in Children and Adolescents. This article discusses the development of the part of the parameter related to gender nonconformity and gender discordance and describes the practice parameter preparation process, rationale, key scientific evidence, and methodology. Also discussed are terminology considerations, related clinical issues and practice skills, and the overall organization of information, including influences on gender development, gender role behavior, gender nonconformity and gender discordance, and their relationship to the development of sexual orientation.

  18. Identification of Optimum Magnetic Behavior of Nanocrystalline Co2FeAl Type Heusler Alloy Powders Using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Srivastava, Y.; Srivastava, S.; Boriwal, L.

    2016-09-01

    Mechanical alloying is a novel solid-state process that has received considerable attention due to its many advantages over other conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from premixed basic powders of cobalt (Co), iron (Fe), and aluminum (Al) in the stoichiometric ratio 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. Magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometer (VSM). A two-factor, five-level design matrix was applied to the experimental process. The experimental results were used for response surface methodology, and the interaction between the input process parameters and the response was established with the help of regression analysis. The analysis of variance technique was further applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameter settings that were not selected for the main experimentation but lay within the same ranges. Using response surface methodology, the process parameters were optimized to obtain improved magnetic properties. The optimum process parameters were then identified using numerical and graphical optimization techniques.

  19. Process parameter and surface morphology of pineapple leaf electrospun nanofibers (PALF)

    NASA Astrophysics Data System (ADS)

    Surip, S. N.; Aziz, F. M. A.; Bonnia, N. N.; Sekak, K. A.; Zakaria, M. N.

    2017-09-01

    In recent times, nanofibers have attracted the attention of researchers due to their pronounced micro and nano structural characteristics that enable the development of advanced materials that have sophisticated applications. The production of nanofibers by the electrospinning process is influenced both by the electrostatic forces and the viscoelastic behavior of the polymer. Process parameters, like solution feed rate, applied voltage, nozzle-collector distance, and spinning environment, and material properties, like solution concentration, viscosity, surface tension, conductivity, and solvent vapor pressure, influence the structure and properties of electrospun nanofibers. Significant work has been done to characterize the properties of PALF nanofibers as a function of process and material parameters.

  20. Relationship between the erosion properties of soils and other parameters

    USDA-ARS?s Scientific Manuscript database

    Soil parameters are essential for erosion process prediction and ultimately improved model development, especially as they relate to dam and levee failure. Soil parameters including soil texture and structure, soil classification, soil compaction, moisture content, and degree of saturation can play...

  1. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is further demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
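
    A minimal sketch of the integrated-process-model idea, assuming two invented unit-operation models chained together: Monte Carlo samples of the process parameters are pushed through the chain so that interactions propagate to the final CQA and an out-of-specification rate can be estimated. Models, parameter distributions, and the specification limit are placeholders, not the study's data.

    ```python
    # Sketch: chain two unit-operation response models and propagate parameter
    # variation by Monte Carlo to estimate the probability of an OOS result.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50_000

    # Unit operation 1 (fermentation): titre as a function of pH and temperature
    ph = rng.normal(7.0, 0.05, n)
    temp = rng.normal(36.5, 0.3, n)
    titre = 5.0 - 4.0 * (ph - 7.0) ** 2 - 0.15 * (temp - 36.5) + rng.normal(0, 0.1, n)

    # Unit operation 2 (capture step): purity depends on the load (from titre) and pH
    load = 0.8 * titre
    purity = 98.5 - 0.6 * (load - 4.0) ** 2 + 2.0 * (ph - 7.0) + rng.normal(0, 0.2, n)

    oos = np.mean(purity < 97.0)            # CQA specification: purity >= 97 %
    print(f"predicted OOS rate: {oos:.2%}")
    ```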

  2. Trajectory Dispersed Vehicle Process for Space Launch System

    NASA Technical Reports Server (NTRS)

    Statham, Tamara; Thompson, Seth

    2017-01-01

    The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans that include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) approach and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process to target the uncertainty bias. These vehicles represent various missions and configurations, which are used as key inputs into a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.

  3. Design of a uranium-dioxide powder spheroidization system by plasma processing

    NASA Astrophysics Data System (ADS)

    Cavender, Daniel

    The plasma spheroidization system (PSS) is the first process in the development of a tungsten-uranium dioxide (W-UO2) ceramic-metallic (cermet) fuel for nuclear thermal rocket (NTR) propulsion. For the purposes of fissile fuel retention, UO2 spheroids ranging in size from 50 to 100 micrometers (μm) in diameter will be encapsulated in a tungsten shell. The PSS produces spherical particles by melting angular stock particles in an argon-hydrogen plasma jet, where they become spherical due to surface tension. Surrogate CeO2 powder was used in place of UO2 for system and process parameter development. Stock and spheroidized powders were micrographed using optical and scanning electron microscopy and evaluated by statistical methods to characterize and compare the spherocity of pre- and post-process powders. Particle spherocity was determined by irregularity parameter. Processed powders showed a statistically significant improvement in spherocity, with greater than 60% of the examined particles having an irregularity parameter equal to or lower than 1.2, compared to the stock powder.

  4. A risk-based approach to management of leachables utilizing statistical analysis of extractables.

    PubMed

    Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M

    2015-04-01

    To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle.

  5. A Design of Experiments Approach Defining the Relationships Between Processing and Microstructure for Ti-6Al-4V

    NASA Technical Reports Server (NTRS)

    Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.

    2004-01-01

    A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam free form fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat-affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks will be discussed.
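
    The abstract does not state which 16-run design was used, but a standard choice for five two-level factors is a 2^(5-1) half-fraction; the sketch below generates one with the assumed defining relation E = ABCD, using coded levels -1/+1 and the five factor names from the study.

    ```python
    # Sketch: generate a 16-run 2^(5-1) fractional factorial (generator E = ABCD).
    from itertools import product

    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c * d                     # fifth factor aliased with the 4-way interaction
        runs.append((a, b, c, d, e))

    header = ("voltage", "current", "speed", "wire_feed", "focus")
    print(" ".join(f"{h:>9}" for h in header))
    for run in runs:
        print(" ".join(f"{v:>9d}" for v in run))
    ```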

  6. The design and development of transonic multistage compressors

    NASA Technical Reports Server (NTRS)

    Ball, C. L.; Steinke, R. J.; Newman, F. A.

    1988-01-01

    The development of the transonic multistage compressor is reviewed. Changing trends in design and performance parameters are noted. These changes are related to advances in compressor aerodynamics, computational fluid mechanics and other enabling technologies. The parameters normally given to the designer and those that need to be established during the design process are identified. Criteria and procedures used in the selection of these parameters are presented. The selection of tip speed, aerodynamic loading, flowpath geometry, incidence and deviation angles, blade/vane geometry, blade/vane solidity, stage reaction, aerodynamic blockage, inlet flow per unit annulus area, stage/overall velocity ratio, and aerodynamic losses are considered. Trends in these parameters both spanwise and axially through the machine are highlighted. The effects of flow mixing and methods for accounting for the mixing in the design process are discussed.

  7. Effect of pulsed current GTA welding parameters on the fusion zone microstructure of AA 6061 aluminium alloy

    NASA Astrophysics Data System (ADS)

    Kumar, T. Senthil; Balasubramanian, V.; Babu, S.; Sanavullah, M. Y.

    2007-08-01

    AA6061 aluminium alloy (Al-Mg-Si alloy) has gathered wide acceptance in the fabrication of food processing equipment, chemical containers, passenger cars, road tankers, and railway transport systems. The preferred process for welding these aluminium alloys is frequently Gas Tungsten Arc (GTA) welding due to its comparatively easy applicability and lower cost. In the case of single pass GTA welding of thinner sections of this alloy, the pulsed current has been found beneficial due to its advantages over the conventional continuous current processes. The use of pulsed current parameters has been found to improve the mechanical properties of the welds compared to those of continuous current welds of this alloy due to grain refinement occurring in the fusion zone. In this investigation, an attempt has been made to develop a mathematical model to predict the fusion zone grain diameter incorporating pulsed current welding parameters. Statistical tools such as design of experiments, analysis of variance, and regression analysis are used to develop the mathematical model. The developed model can be effectively used to predict the fusion grain diameter at a 95% confidence level for the given pulsed current parameters. The effect of pulsed current GTA welding parameters on the fusion zone grain diameter of AA 6061 aluminium alloy welds is reported in this paper.

  8. Optimization of hybrid laser - TIG welding of 316LN steel using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Ragavendran, M.; Chandrasekhar, N.; Ravikumar, R.; Saxena, Rajesh; Vasudevan, M.; Bhaduri, A. K.

    2017-07-01

    In the present study, the hybrid laser-TIG welding parameters for welding of 316LN austenitic stainless steel have been investigated by combining a pulsed laser beam with a TIG welding heat source at the weld pool. Laser power, pulse frequency, pulse duration, and TIG current were taken as the welding process parameters, whereas weld bead width, weld cross-sectional area, and depth of penetration (DOP) were considered as the process responses. A central composite design was used to construct the design matrix, and welding experiments were conducted based on the design matrix. Weld bead measurements were then carried out to generate the dataset. Multiple regression models correlating the process parameters with the responses have been developed, and the accuracy of the models was found to be good. The desirability approach optimization technique was then employed to determine the optimum process parameters for obtaining the desired weld bead profile. Validation experiments were then carried out at the determined optimum process parameters. There was good agreement between the predicted and measured values.
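
    A minimal sketch of the response-surface and desirability steps described above: quadratic models are fitted to the responses by least squares, each predicted response is mapped to a 0-1 desirability, and the setting maximizing the combined desirability is located by a grid search. The data, coefficient values, and desirability targets are invented placeholders.

    ```python
    # Sketch: fit quadratic response surfaces, then pick the setting with the best
    # combined (geometric-mean) desirability. Two factors and two responses assumed.
    import numpy as np

    rng = np.random.default_rng(3)

    def quad_terms(x):                       # x columns: laser power (kW), TIG current (A)
        p, i = x[:, 0], x[:, 1]
        return np.column_stack([np.ones_like(p), p, i, p * i, p ** 2, i ** 2])

    # Hypothetical design points: (power, current) -> (depth of penetration, bead width)
    X = rng.uniform([1.0, 80.0], [3.0, 160.0], size=(20, 2))
    dop = 1.0 + 1.2 * X[:, 0] + 0.010 * X[:, 1] + rng.normal(0, 0.05, 20)    # mm
    width = 2.0 + 0.8 * X[:, 0] + 0.020 * X[:, 1] + rng.normal(0, 0.05, 20)  # mm

    coef_dop, *_ = np.linalg.lstsq(quad_terms(X), dop, rcond=None)
    coef_w, *_ = np.linalg.lstsq(quad_terms(X), width, rcond=None)

    def desirability(value, low, high, maximise=True):
        d = (value - low) / (high - low) if maximise else (high - value) / (high - low)
        return np.clip(d, 0.0, 1.0)

    # Grid search: deep penetration is desirable, a narrow bead is desirable
    grid = np.array([(p, i) for p in np.linspace(1.0, 3.0, 41)
                            for i in np.linspace(80.0, 160.0, 41)])
    T = quad_terms(grid)
    D = np.sqrt(desirability(T @ coef_dop, 3.0, 6.0) *
                desirability(T @ coef_w, 3.0, 6.0, maximise=False))
    best = grid[np.argmax(D)]
    print(f"suggested setting: power = {best[0]:.2f} kW, current = {best[1]:.0f} A")
    ```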

  9. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The purpose of this program is to demonstrate the technical readiness of a cost-effective process sequence that has the potential for the production of flat-plate photovoltaic modules that meet the 1986 price goal of $0.70 or less per watt peak. Program efforts included: preliminary design review, preliminary cell fabrication using the proposed process sequence, verification of sandblasting back cleanup, study of resist parameters, evaluation of pull strength of the proposed metallization, measurement of contact resistance of electroless Ni contacts, optimization of process parameters, design of the MEPSDU module, identification and testing of insulator tapes, development of a lamination process sequence, identification, discussions, demonstrations, and visits with candidate equipment vendors, and evaluation of proposals for a tabbing and stringing machine.

  10. Nonlinear Multiscale Modeling of 3D Woven Fiber Composites under Ballistic Loading

    DTIC Science & Technology

    2013-07-11

    The effect of contact parameters on the underlying damage processes is being studied. A material model based on continuum damage mechanics, suitable particularly for polymer materials, is being further developed.

  11. An Evaluation of Compressed Work Schedules and Their Impact on Electricity Use

    DTIC Science & Technology

    2010-03-01

    problems by introducing uncertainty to the known parameters of a given process (Sobol, 1975). The MCS output represents approximate values of the...process within the observed parameters; the output is provided within a statistical distribution of likely outcomes (Sobol, 1975). In this...The Monte Carlo method is appropriate for "any process whose development is affected by random factors" (Sobol, 1975:10). MCS introduces

  12. Process Optimization of Dual-Laser Beam Welding of Advanced Al-Li Alloys Through Hot Cracking Susceptibility Modeling

    NASA Astrophysics Data System (ADS)

    Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra

    2016-07-01

    Laser welding of advanced Al-Li alloys has been developed to meet the increasing demand for light-weight and high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to the tendency for hot cracking. Finding suitable welding parameters and filler material for this combination currently requires extensive and costly trial and error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds. Such a model can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model. The finite element process model predicts thermal field data for the subsequent HCS hot cracking prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.

  13. Mining manufacturing data for discovery of high productivity process characteristics.

    PubMed

    Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou

    2010-06-01

    Modern manufacturing facilities for bioproducts are highly automated, with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource to comprehend the complex characteristics of bioprocesses and enhance production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate a real-time decision-making process and thereby improve the robustness of large-scale bioprocesses. 2010 Elsevier B.V. All rights reserved.
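
    A minimal sketch of the kernel-based regression idea with synthetic stand-ins for the archived runs: an RBF-kernel support vector regression maps summarised temporal parameters to a final performance measure, and permutation importance gives a crude relevance ranking. Feature names, sizes, and data are assumptions, not the facility's records.

    ```python
    # Sketch: RBF-kernel support vector regression predicting a harvest performance
    # measure from many summarised process parameters, plus a relevance ranking.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(7)
    n_runs, n_features = 108, 30                      # runs x summarised temporal parameters
    X = rng.normal(size=(n_runs, n_features))
    titre = (2.0 + 0.4 * X[:, 0] - 0.3 * X[:, 5] + 0.2 * X[:, 10]
             + rng.normal(0, 0.2, n_runs))            # synthetic outcome variable

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
    scores = cross_val_score(model, X, titre, cv=5, scoring="r2")
    print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")

    model.fit(X, titre)
    imp = permutation_importance(model, X, titre, n_repeats=10, random_state=0)
    print("most relevant parameters (feature indices):",
          np.argsort(-imp.importances_mean)[:5])
    ```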

  14. The Effect of Process Parameters and Tool Geometry on Thermal Field Development and Weld Formation in Friction Stir Welding of the Alloys AZ31 and AZ61

    NASA Astrophysics Data System (ADS)

    Zettler, R.; Blanco, A. C.; dos Santos, J. F.; Marya, S.

    An increase in the use of magnesium (Mg) in the car manufacturing industry has raised questions concerning its weldability. Friction Stir Welding (FSW) has the advantage of achieving metallic bonding below the melting point of the base material, thus avoiding many of the metallurgical problems associated with the solidification process. The present study presents the results of a development program carried out to investigate the response of the Mg alloys AZ31 and AZ61 to different FSW tool geometries and process parameters. Temperature development across the weld zone was monitored, and the produced welds have been subjected to microstructural analysis and mechanical testing. Defect-free welds have been produced with an optimised FSW tool and parameters. The microstructure of the welded joint resulted in ductility and hardness levels similar to those of the base material. The results also demonstrated that tool geometry plays a fundamental role in the response of the investigated alloys to the FSW process.

  15. A hybrid optimization approach in non-isothermal glass molding

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz

    2016-10-01

    Intensively growing demands for complex yet low-cost precision glass optics from today's photonics market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Compared with state-of-the-art replication-based methods, non-isothermal glass molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption, and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precision requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the outset, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach in non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. A hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) approach is first carried out to determine the optimal process parameters and the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in non-isothermal glass molding.

  16. Workflow for Criticality Assessment Applied in Biopharmaceutical Process Validation Stage 1.

    PubMed

    Zahel, Thomas; Marschall, Lukas; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Mueller, Eric M; Murphy, Patrick; Natschläger, Thomas; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-12

    Identification of critical process parameters that impact product quality is a central task during regulatory requested process validation. Commonly, this is done via design of experiments and identification of parameters significantly impacting product quality (rejection of the null hypothesis that the effect equals 0). However, parameters that show a large uncertainty and might drive product quality beyond a limit critical to the product may be missed. This might occur during the evaluation of experiments, since the residual/un-modelled variance in the experiments is larger than expected a priori. Estimation of such a risk is the task of the presented novel retrospective power analysis permutation test. This is evaluated using a data set for two unit operations established during characterization of a biopharmaceutical process in industry. The results show that, for one unit operation, the observed variance in the experiments is much larger than expected a priori, resulting in low power levels for all non-significant parameters. Moreover, we present a workflow for mitigating the risk associated with overlooked parameter effects. This enables a statistically sound identification of critical process parameters. The developed workflow will substantially support industry in delivering constant product quality, reducing process variance, and increasing patient safety.

  17. Influence of tool geometry and processing parameters on welding defects and mechanical properties for friction stir welding of 6061 Aluminium alloy

    NASA Astrophysics Data System (ADS)

    Daneji, A.; Ali, M.; Pervaiz, S.

    2018-04-01

    Friction stir welding (FSW) is a form of solid-state welding process for joining metals, alloys, and selective composites. Over the years, FSW development has provided an improved way of producing welded joints and has consequently been accepted in numerous industries such as aerospace, automotive, rail, and marine. In FSW, the base metal properties control the material's plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in the quality of the weld. In the current investigation, an array of square butt joints of 6061 aluminium alloy was welded under varying FSW process and tool geometry related parameters, after which the resulting welds were evaluated for the corresponding mechanical properties and welding defects. The study incorporates FSW process and tool parameters such as welding speed, pin height, and pin thread pitch as input parameters, while the weld-quality-related defects and mechanical properties were treated as output parameters. The experimentation paves the way to investigate the correlation between the inputs and the outputs. The correlation between inputs and outputs was used as a tool to predict the optimized FSW process and tool parameters for a desired weld output of the base metals under investigation. The study also provides reflection on the effect of said parameters on welding defects such as wormholes.

  18. Process-Structure Linkages Using a Data Science Approach: Application to Simulated Additive Manufacturing Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Evdokia; Rodgers, Theron M.; Gong, Xinyi

    A novel data science workflow is developed and demonstrated to extract process-structure linkages (i.e., reduced-order models) for microstructure evolution problems when the final microstructure depends on (simulation or experimental) processing parameters. Our workflow consists of four main steps: data pre-processing, microstructure quantification, dimensionality reduction, and extraction/validation of process-structure linkages. The methods that can be employed within each step vary based on the type and amount of available data. In this paper, this data-driven workflow is applied to a set of synthetic additive manufacturing microstructures obtained using the Potts-kinetic Monte Carlo (kMC) approach. Additive manufacturing techniques inherently produce complex microstructures that can vary significantly with processing conditions. Using the developed workflow, a low-dimensional data-driven model was established to correlate process parameters with the predicted final microstructure. In addition, the modular workflows developed and presented in this work facilitate easy dissemination and curation by the broader community.

  19. Process-Structure Linkages Using a Data Science Approach: Application to Simulated Additive Manufacturing Data

    DOE PAGES

    Popova, Evdokia; Rodgers, Theron M.; Gong, Xinyi; ...

    2017-03-13

    A novel data science workflow is developed and demonstrated to extract process-structure linkages (i.e., reduced-order models) for microstructure evolution problems when the final microstructure depends on (simulation or experimental) processing parameters. Our workflow consists of four main steps: data pre-processing, microstructure quantification, dimensionality reduction, and extraction/validation of process-structure linkages. The methods that can be employed within each step vary based on the type and amount of available data. In this paper, this data-driven workflow is applied to a set of synthetic additive manufacturing microstructures obtained using the Potts-kinetic Monte Carlo (kMC) approach. Additive manufacturing techniques inherently produce complex microstructures that can vary significantly with processing conditions. Using the developed workflow, a low-dimensional data-driven model was established to correlate process parameters with the predicted final microstructure. In addition, the modular workflows developed and presented in this work facilitate easy dissemination and curation by the broader community.

  20. Systematic development of technical textiles

    NASA Astrophysics Data System (ADS)

    Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.

    2016-07-01

    Technical textiles are used in various fields of application, ranging from small-scale products (e.g. medical applications) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time consuming due to multiple interacting parameters. These interacting parameters are related to the production process and are also a result of the textile structure and the material used. A huge number of iteration steps is necessary to adjust the process parameters and finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected. The aim of the method is to support a development team in the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.

  1. Steps Towards Industrialization of Cu–III–VI2 Thin-Film Solar Cells: Linking Materials/Device Designs to Process Design for Non-stoichiometric Photovoltaic Materials

    PubMed Central

    Chang, Hsueh‐Hsin; Sharma, Poonam; Letha, Arya Jagadhamma; Shao, Lexi; Zhang, Yafei; Tseng, Bae‐Heng

    2016-01-01

    The concept of in-line sputtering and selenization has become the industrial standard for Cu–III–VI2 solar cell fabrication, but it is still very difficult to control and predict the optical and electrical parameters, which are closely related to the chemical composition distribution of the thin film. The present review article addresses the material design, device design, and process design using parameters closely related to the chemical compositions. Variation in composition leads to changes in the Poisson equation, current equation, and continuity equation governing the device design. To make the device design more realistic and meaningful, we need to build a model that relates the opto-electrical properties to the chemical composition. The material parameters as well as the device structural parameters are loaded into the process simulation to give a complete set of process control parameters. The neutral defect concentrations of non-stoichiometric CuMSe2 (M = In and Ga) have been calculated under specific atomic chemical potential conditions using this methodology. The optical and electrical properties have also been investigated for the development of a full-function analytical solar cell simulator. The future prospects regarding the development of copper–indium–gallium–selenide thin-film solar cells have also been discussed. PMID:27840790

  2. Steps Towards Industrialization of Cu-III-VI2 Thin-Film Solar Cells: Linking Materials/Device Designs to Process Design for Non-stoichiometric Photovoltaic Materials.

    PubMed

    Hwang, Huey-Liang; Chang, Hsueh-Hsin; Sharma, Poonam; Letha, Arya Jagadhamma; Shao, Lexi; Zhang, Yafei; Tseng, Bae-Heng

    2016-10-01

    In-line sputtering and selenization have become the industrial standard for Cu-III-VI2 solar cell fabrication, but it remains difficult to control and predict the optical and electrical parameters, which are closely related to the chemical composition distribution of the thin film. The present review article addresses material design, device design and process design using parameters closely related to the chemical composition, whose variation changes the Poisson equation, current equation and continuity equation governing the device design. To make the device design realistic and meaningful, a model is needed that relates the opto-electrical properties to the chemical composition. The material parameters as well as the device structural parameters are loaded into the process simulation to give a complete set of process control parameters. The neutral defect concentrations of non-stoichiometric CuMSe2 (M = In and Ga) have been calculated under specific atomic chemical potential conditions using this methodology. The optical and electrical properties have also been investigated for the development of a full-function analytical solar cell simulator. The future prospects regarding the development of copper-indium-gallium-selenide thin-film solar cells have also been discussed.

  3. A hyperbolastic type-I diffusion process: Parameter estimation by means of the firefly algorithm.

    PubMed

    Barrera, Antonio; Román-Román, Patricia; Torres-Ruiz, Francisco

    2018-01-01

    A stochastic diffusion process, whose mean function is a hyperbolastic curve of type I, is presented. The main characteristics of the process are studied and the problem of maximum likelihood estimation for the parameters of the process is considered. To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development. Copyright © 2017 Elsevier B.V. All rights reserved.
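
    The following Python sketch shows a minimal firefly algorithm of the kind referred to, applied to a toy objective; the hyperbolastic type-I likelihood and the stagewise bounding of the parametric space from the paper are not reproduced, so the objective and bounds below are placeholders.

      # Minimal firefly-algorithm sketch for minimizing a likelihood-type objective.
      # The quadratic objective is a stand-in for the paper's negative log-likelihood.
      import numpy as np

      def firefly_minimize(f, bounds, n_fireflies=20, n_iter=100,
                           beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          x = rng.uniform(lo, hi, size=(n_fireflies, len(bounds)))
          fx = np.array([f(p) for p in x])
          for _ in range(n_iter):
              for i in range(n_fireflies):
                  for j in range(n_fireflies):
                      if fx[j] < fx[i]:                        # move i towards the brighter firefly j
                          r2 = np.sum((x[i] - x[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                          x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(len(bounds)) - 0.5)
                          x[i] = np.clip(x[i], lo, hi)
                          fx[i] = f(x[i])
          best = np.argmin(fx)
          return x[best], fx[best]

      # Toy usage: recover (a, b) of a quadratic surrogate objective.
      obj = lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2
      print(firefly_minimize(obj, bounds=[(-5, 5), (-5, 5)]))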

  4. Melt-Pool Temperature and Size Measurement During Direct Laser Sintering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    List, III, Frederick Alyious; Dinwiddie, Ralph Barton; Carver, Keith

    2017-08-01

    Additive manufacturing has demonstrated the ability to fabricate complex geometries and components not possible with conventional casting and machining. In many cases, industry has demonstrated the ability to fabricate complex geometries with improved efficiency and performance. However, qualification and certification of processes is challenging, leaving companies to focus on certification of material through design-allowable based approaches, which significantly reduces the business case for additive manufacturing. Therefore, real-time monitoring of the melt pool can be used to detect the development of flaws, such as porosity or un-sintered powder, and aid in the certification process. Characteristics of the melt pool in the Direct Laser Sintering (DLS) process are also of great interest to modelers who are developing the simulation models needed to improve and perfect the DLS process. Such models could provide a means to rapidly develop the optimum processing parameters for new alloy powders and optimize processing parameters for specific part geometries. Stratonics' ThermaViz system will be integrated with the Renishaw DLS system in order to demonstrate its ability to measure melt pool size, shape and temperature. These results will be compared with data from an existing IR camera to determine the best approach for the determination of these critical parameters.

  5. Application of Quality by Design to the characterization of the cell culture process of an Fc-Fusion protein.

    PubMed

    Rouiller, Yolande; Solacroup, Thomas; Deparis, Véronique; Barbafieri, Marco; Gleixner, Ralf; Broly, Hervé; Eon-Duval, Alex

    2012-06-01

    The production bioreactor step of an Fc-Fusion protein manufacturing cell culture process was characterized following Quality by Design principles. Using scientific knowledge derived from the literature and process knowledge gathered during development studies and manufacturing to support clinical trials, potential critical and key process parameters with a possible impact on product quality and process performance, respectively, were determined during a risk assessment exercise. The identified process parameters were evaluated using a design of experiment approach. The regression models generated from the data allowed characterizing the impact of the identified process parameters on quality attributes. The main parameters having an impact on product titer were pH and dissolved oxygen, while those having the highest impact on process- and product-related impurities and variants were pH and culture duration. The models derived from characterization studies were used to define the cell culture process design space. The design space limits were set in such a way as to ensure that the drug substance material would consistently have the desired quality. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. A high-throughput 2D-analytical technique to obtain single protein parameters from complex cell lysates for in silico process development of ion exchange chromatography.

    PubMed

    Kröner, Frieder; Elsäßer, Dennis; Hubbuch, Jürgen

    2013-11-29

    The accelerating growth of the market for biopharmaceutical proteins, the market entry of biosimilars and the growing interest in new, more complex molecules constantly pose new challenges for bioseparation process development. In the presented work we demonstrate the application of a multidimensional, analytical separation approach to obtain the relevant physicochemical parameters of single proteins in a complex mixture for in silico chromatographic process development. A complete cell lysate containing a low titre target protein was first fractionated by multiple linear salt gradient anion exchange chromatography (AEC) with varying gradient length. The collected fractions were subsequently analysed by high-throughput capillary gel electrophoresis (HT-CGE) after being desalted and concentrated. From the obtained data of the 2D-separation the retention-volumes and the concentration of the single proteins were determined. The retention-volumes of the single proteins were used to calculate the related steric-mass action model parameters. In a final evaluation experiment the received parameters were successfully applied to predict the retention behaviour of the single proteins in salt gradient AEC. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Development of polyvinyl acetate thin films by electrospinning for sensor applications

    NASA Astrophysics Data System (ADS)

    Veerabhadraiah, Amith; Ramakrishna, Sridhar; Angadi, Gangadhar; Venkatram, Mamtha; Kanivebagilu Ananthapadmanabha, Vishnumurthy; Hebbale NarayanaRao, Narasimha Murthy; Munishamaiah, Krishna

    2017-10-01

    Electrospinning is an effective process for synthesizing polymer fibers with diameters ranging from nanometers to micrometers by employing the electrostatic force developed through the application of a high voltage. The present work aims to develop an electrospinning system and optimize the process parameters for the synthesis of polyvinyl acetate thin films used in gas and humidity sensors. Taguchi's design of experiments was adopted, considering three main factors at three levels each, for optimization of the process parameters. The factors considered were flow rate (0.5, 0.6 and 0.7 ml/h), voltage (18, 19 and 20 kV) and spinneret-to-collector distance (8, 9 and 10 cm), with fiber diameter as the response. The main effect plots and interaction plots of the parameters were studied to determine the most influential parameter: flow rate was the most significant factor, followed by spinneret-to-collector distance. The smallest fiber diameter of 24.83 nm was observed at 19 kV, 0.5 ml/h flow rate and 8 cm spinneret-to-collector distance. SEM images revealed uniform fiber diameters at the lower flow rate, while bead formation increased monotonically with rising flow rate.
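
    A brief Python sketch of the Taguchi analysis step is given below: smaller-the-better signal-to-noise ratios for fiber diameter over an L9 orthogonal array and per-level main effects. The diameter values are invented for illustration; only the factor names follow the abstract.

      # Taguchi smaller-the-better S/N analysis over an L9 array (illustrative data).
      import numpy as np

      # L9 orthogonal array, three factors coded at levels 0, 1, 2
      L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                     [1, 0, 1], [1, 1, 2], [1, 2, 0],
                     [2, 0, 2], [2, 1, 0], [2, 2, 1]])
      diam = np.array([60., 45., 38., 52., 30., 41., 70., 55., 48.])   # fiber diameter, nm (hypothetical)

      sn = -10.0 * np.log10(diam ** 2)                 # smaller-the-better S/N ratio
      for f, name in enumerate(["flow rate", "voltage", "distance"]):
          means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
          print(name, "level S/N means:", np.round(means, 2),
                "delta:", round(max(means) - min(means), 2))   # larger delta = more influential factor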

  8. Stability of Intercellular Exchange of Biochemical Substances Affected by Variability of Environmental Parameters

    NASA Astrophysics Data System (ADS)

    Mihailović, Dragutin T.; Budinčević, Mirko; Balaž, Igor; Mihailović, Anja

    Communication between cells is realized by the exchange of biochemical substances. Due to the internal organization of living systems and the variability of external parameters, this exchange is heavily influenced by perturbations of various parameters at almost all stages of the process. Since communication is one of the essential processes for the functioning of living systems, it is of interest to investigate the conditions for its stability. Using a previously developed simplified model of bacterial communication in the form of coupled difference logistic equations, we investigate the stability of the exchange of signaling molecules under variability of internal and external parameters.
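
    A minimal Python sketch of two coupled difference logistic equations of this general kind is shown below; the diffusive coupling form and all parameter values are assumptions for illustration, not the authors' exact model.

      # Two diffusively coupled logistic maps as a toy model of signal exchange.
      import numpy as np

      def coupled_logistic(r1=3.6, r2=3.6, c=0.1, n_steps=200, x0=0.4, y0=0.6):
          x, y = np.empty(n_steps), np.empty(n_steps)
          x[0], y[0] = x0, y0
          for t in range(n_steps - 1):
              fx = r1 * x[t] * (1.0 - x[t])
              fy = r2 * y[t] * (1.0 - y[t])
              x[t + 1] = (1.0 - c) * fx + c * fy   # each cell receives a fraction c of
              y[t + 1] = (1.0 - c) * fy + c * fx   # the other's signal at every step
          return x, y

      x, y = coupled_logistic()
      print("late-time synchronization error:", np.abs(x[-50:] - y[-50:]).mean())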

  9. DOE Program on Seismic Characterization for Regions of Interest to CTBT Monitoring,

    DTIC Science & Technology

    1995-08-14

    processing of the monitoring network data). While developing and testing the corrections and other parameters needed by the automated processing systems...the secondary network. Parameters tabulated in the knowledge base must be appropriate for routine automated processing of network data, and must also...operation of the PNDC, as well as to results of investigations of "special events" (i.e., those events that fail to locate or discriminate during automated

  10. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    NASA Astrophysics Data System (ADS)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays quality plays a vital role in all products. Hence, developments in manufacturing processes focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters was performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and analysis of variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
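
    The grey relational step described above can be sketched as follows in Python: each response is normalized (larger-is-better for material removal rate, smaller-is-better for surface roughness), grey relational coefficients are computed against the ideal sequence, and their mean gives the grade used to rank parameter settings. The response values are made up for illustration.

      # Grey relational analysis for two responses (illustrative numbers only).
      import numpy as np

      mrr = np.array([12.1, 15.4, 18.2, 14.0])   # material removal rate (larger is better)
      ra = np.array([3.2, 2.8, 3.9, 2.5])        # surface roughness (smaller is better)

      def normalize(x, larger_is_better):
          if larger_is_better:
              return (x - x.min()) / (x.max() - x.min())
          return (x.max() - x) / (x.max() - x.min())

      def grey_relational_grade(responses, zeta=0.5):
          # responses: columns normalized to [0, 1]; the reference sequence is all ones
          delta = 1.0 - responses
          coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
          return coeff.mean(axis=1)              # equal weights across responses

      norm = np.column_stack([normalize(mrr, True), normalize(ra, False)])
      print(grey_relational_grade(norm))         # the highest grade marks the best setting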

  11. Development of Processing Parameters for Organic Binders Using Selective Laser Sintering

    NASA Technical Reports Server (NTRS)

    Mobasher, Amir A.

    2003-01-01

    This document describes rapid prototyping, its relation to Computer Aided Design (CAD), and the application of these techniques to choosing parameters for Selective Laser Sintering (SLS). The document reviews the parameters selected by its author for his project, the SLS machine used, and its software.

  12. Mammalian cell culture process for monoclonal antibody production: nonlinear modelling and parameter estimation.

    PubMed

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
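
    A compact Python sketch of the kind of particle swarm estimation described is given below, fitting a simple Monod-type stand-in model by least squares; the actual mammalian cell culture model and experimental data from the paper are not reproduced.

      # PSO-based least-squares parameter estimation for a toy growth-rate model.
      import numpy as np

      def model(params, t):
          mu_max, ks = params
          s = 10.0 * np.exp(-0.1 * t)             # assumed substrate profile (placeholder)
          return mu_max * s / (ks + s)            # Monod-type specific growth rate

      t_obs = np.linspace(0, 48, 20)
      true = np.array([0.05, 2.0])
      y_obs = model(true, t_obs) + np.random.default_rng(1).normal(0, 1e-3, t_obs.size)

      def sse(p):
          return np.sum((model(p, t_obs) - y_obs) ** 2)

      def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          x = rng.uniform(lo, hi, (n_particles, len(bounds)))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_f)]
          for _ in range(n_iter):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.array([f(p) for p in x])
              improved = fx < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], fx[improved]
              g = pbest[np.argmin(pbest_f)]
          return g, pbest_f.min()

      print(pso(sse, bounds=[(0.0, 0.2), (0.1, 10.0)]))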

  13. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    PubMed Central

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies. PMID:25685797

  14. Report of the Association of Coloproctology of Great Britain and Ireland/British Society of Gastroenterology Colorectal Polyp Working Group: the development of a complex colorectal polyp minimum dataset.

    PubMed

    Chattree, A; Barbour, J A; Thomas-Gibson, S; Bhandari, P; Saunders, B P; Veitch, A M; Anderson, J; Rembacken, B J; Loughrey, M B; Pullan, R; Garrett, W V; Lewis, G; Dolwani, S; Rutter, M D

    2017-01-01

    The management of large non-pedunculated colorectal polyps (LNPCPs) is complex, with widespread variation in management and outcome, even amongst experienced clinicians. Variations in the assessment and decision-making processes are likely to be a major factor in this variability. The creation of a standardized minimum dataset to aid decision-making may therefore result in improved clinical management. An official working group of 13 multidisciplinary specialists was appointed by the Association of Coloproctology of Great Britain and Ireland (ACPGBI) and the British Society of Gastroenterology (BSG) to develop a minimum dataset on LNPCPs. The literature review used to structure the ACPGBI/BSG guidelines for the management of LNPCPs was used by a steering subcommittee to identify various parameters pertaining to the decision-making processes in the assessment and management of LNPCPs. A modified Delphi consensus process was then used for voting on proposed parameters over multiple voting rounds with at least 80% agreement defined as consensus. The minimum dataset was used in a pilot process to ensure rigidity and usability. A 23-parameter minimum dataset with parameters relating to patient and lesion factors, including six parameters relating to image retrieval, was formulated over four rounds of voting with two pilot processes to test rigidity and usability. This paper describes the development of the first reported evidence-based and expert consensus minimum dataset for the management of LNPCPs. It is anticipated that this dataset will allow comprehensive and standardized lesion assessment to improve decision-making in the assessment and management of LNPCPs. Colorectal Disease © 2016 The Association of Coloproctology of Great Britain and Ireland.

  15. A Systematic Approach of Employing Quality by Design Principles: Risk Assessment and Design of Experiments to Demonstrate Process Understanding and Identify the Critical Process Parameters for Coating of the Ethylcellulose Pseudolatex Dispersion Using Non-Conventional Fluid Bed Process.

    PubMed

    Kothari, Bhaveshkumar H; Fahmy, Raafat; Claycamp, H Gregg; Moore, Christine M V; Chatterjee, Sharmista; Hoag, Stephen W

    2017-05-01

    The goal of this study was to utilize risk assessment techniques and statistical design of experiments (DoE) to gain process understanding and to identify critical process parameters for the manufacture of controlled release multiparticulate beads using a novel disk-jet fluid bed technology. The material attributes and process parameters were systematically assessed using the Ishikawa fish bone diagram and failure mode and effect analysis (FMEA) risk assessment methods. The high risk attributes identified by the FMEA analysis were further explored using resolution V fractional factorial design. To gain an understanding of the processing parameters, a resolution V fractional factorial study was conducted. Using knowledge gained from the resolution V study, a resolution IV fractional factorial study was conducted; the purpose of this IV study was to identify the critical process parameters (CPP) that impact the critical quality attributes and understand the influence of these parameters on film formation. For both studies, the microclimate, atomization pressure, inlet air volume, product temperature (during spraying and curing), curing time, and percent solids in the coating solutions were studied. The responses evaluated were percent agglomeration, percent fines, percent yield, bead aspect ratio, median particle size diameter (d50), assay, and drug release rate. Pyrobuttons® were used to record real-time temperature and humidity changes in the fluid bed. The risk assessment methods and process analytical tools helped to understand the novel disk-jet technology and to systematically develop models of the coating process parameters like process efficiency and the extent of curing during the coating process.

  16. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    NASA Astrophysics Data System (ADS)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining, and one machining process that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to select machining parameters that minimize both processing time and environmental impact. This research develops a multi-objective optimization model to minimize processing time and environmental impact in the CNC turning process, resulting in optimal decision variables of cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
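
    As a rough illustration of the trade-off involved, the Python sketch below performs an equal-weight scalarized grid search over cutting speed and feed rate using the standard turning-time relation t = pi*D*L/(1000*v*f); the power and impact coefficients are assumptions, not the paper's Eco-indicator 99 data or its OptQuest formulation.

      # Illustrative bi-objective grid search for a single turning pass.
      import numpy as np

      D, L = 50.0, 100.0                        # workpiece diameter and length, mm (assumed)
      speeds = np.linspace(100.0, 300.0, 21)    # cutting speed v, m/min
      feeds = np.linspace(0.1, 0.4, 16)         # feed rate f, mm/rev

      best = None
      for v in speeds:
          for f in feeds:
              t = np.pi * D * L / (1000.0 * v * f)     # machining time, min
              power = 0.02 * v * f * 100.0             # assumed cutting power, kW
              impact = 0.05 * power * t / 60.0         # assumed impact points per pass
              score = 0.5 * t + 0.5 * impact           # equal-weight scalarization
              if best is None or score < best[0]:
                  best = (score, v, f, t, impact)

      print("best (score, v, f, time, impact):", best)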

  17. Optical components damage parameters database system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong

    2012-10-01

    Optical components are key elements of large-scale laser devices; their damage resistance (load capacity) directly determines the output capability of the device, and this load capacity depends on many factors. By digitizing the damage parameters of optical components and the factors affecting load capacity, a database provides a scientific, data-supported basis for assessing the load capacity of optical components. Using business-process and model-driven approaches, a component damage parameter information model and database system were established. Application of the system shows that it meets the business-process and data-management requirements of optical component damage testing; its parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical component damage tests.

  18. Active vs. Passive Television Viewing: A Model of the Development of Television Information Processing by Children.

    ERIC Educational Resources Information Center

    Wright, John C.; And Others

    A conceptual model of how children process televised information was developed with the goal of identifying those parameters of the process that are both measurable and manipulable in research settings. The model presented accommodates the nature of information processing both by the child and by the presentation by the medium. Presentation is…

  19. Assessing heat treatment of chicken breast cuts by impedance spectroscopy.

    PubMed

    Schmidt, Franciny C; Fuentes, Ana; Masot, Rafael; Alcañiz, Miguel; Laurindo, João B; Barat, José M

    2017-03-01

    The aim of this work was to develop a new system based on impedance spectroscopy to assess the heat treatment of previously cooked chicken meat by two experiments; in the first, samples were cooked at different temperatures (from 60 to 90 ℃) until core temperature of the meat reached the water bath temperature. In the second approach, temperature was 80 ℃ and the samples were cooked for different times (from 5 to 55 min). Impedance was measured once samples had cooled. The examined processing parameters were the maximum temperature reached in thermal centre of the samples, weight loss, moisture and the integral of the temperature profile during the cooking-cooling process. The correlation between the processing parameters and impedance was studied by partial least square regressions. The models were able to predict the studied parameters. Our results are essential for developing a new system to control the technological, sensory and safety aspects of cooked meat products on the whole meat processing line.
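
    A hedged Python sketch of the regression step (here with scikit-learn's PLS implementation rather than the authors' software) is shown below, predicting one processing parameter from impedance spectra; the spectra and target values are synthetic placeholders.

      # PLS regression from impedance spectra to a processing parameter (synthetic data).
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_samples, n_freqs = 60, 100
      spectra = rng.normal(size=(n_samples, n_freqs))            # impedance at 100 frequencies (placeholder)
      weight_loss = spectra[:, :10].sum(axis=1) + rng.normal(0, 0.1, n_samples)   # assumed target

      pls = PLSRegression(n_components=5)
      r2 = cross_val_score(pls, spectra, weight_loss, cv=5, scoring="r2")
      print("cross-validated R^2:", r2.mean())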

  20. Mathematical Model Of Variable-Polarity Plasma Arc Welding

    NASA Technical Reports Server (NTRS)

    Hung, R. J.

    1996-01-01

    Mathematical model of variable-polarity plasma arc (VPPA) welding process developed for use in predicting characteristics of welds and thus serves as guide for selection of process parameters. Parameters include welding electric currents in, and durations of, straight and reverse polarities; rates of flow of plasma and shielding gases; and sizes and relative positions of welding electrode, welding orifice, and workpiece.

  1. Development of numerical processing in children with typical and dyscalculic arithmetic skills—a longitudinal study

    PubMed Central

    Landerl, Karin

    2013-01-01

    Numerical processing has been demonstrated to be closely associated with arithmetic skills, however, our knowledge on the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period from beginning of Grade 2, when children were 7; 6 years old, to beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development in numerical processing while within-task effects remained largely constant and showed low long-term stability before middle of Grade 3. Children with dyscalculia showed less efficient numerical processing reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an untypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for identification of children who struggle in their numerical development. PMID:23898310

  2. In-depth analysis and characterization of a dual damascene process with respect to different CD

    NASA Astrophysics Data System (ADS)

    Krause, Gerd; Hofmann, Detlef; Habets, Boris; Buhl, Stefan; Gutsch, Manuela; Lopez-Gomez, Alberto; Kim, Wan-Soo; Thrun, Xaver

    2018-03-01

    In a 200 mm high-volume environment, we studied data from a dual damascene process. Dual damascene is a combination of lithography, etch and CMP that is used to create copper lines and contacts in one single step. During these process steps, different metal CDs are measured by different measurement methods. In this study, we analyze the key numbers of the different measurements after the different process steps and develop simple models to predict the electrical behavior. In addition, radial profiles of both inline measurement parameters and electrical parameters have been analyzed, and a matching method was developed based on inline and electrical data. Finally, a correlation analysis for radial signatures is presented that can be used to predict excursions in electrical signatures.

  3. Formation of the predicted training parameters in the form of a discrete information stream

    NASA Astrophysics Data System (ADS)

    Smolentseva, T. E.; Sumin, V. I.; Zolnikov, V. K.; Lavlinsky, V. V.

    2018-03-01

    The training process is considered as a discrete information stream. At each stage of the process, the portions of training information and the quality of their assimilation are analysed, and the individual characteristics and reactions of the trainee to each portion of information in the corresponding sections are determined. A training control algorithm with a predicted number of control checks is considered, which makes it possible to determine the controlling influence that needs to be created for the trainee. On the basis of this algorithm, a vector of probabilities of not knowing elements of the training information is obtained. As a result of the conducted research, an algorithm for forming the predicted training parameters is developed. The duration of training obtained experimentally is compared with the predicted duration, and a conclusion is drawn on the efficiency of forming the predicted training parameters. A program complex is developed that, based on the values of individual parameters obtained experimentally for each trainee, calculates individual characteristics, forms a rating and monitors changes in the training parameters.

  4. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm

    PubMed Central

    Tamjidy, Mehran; Baharudin, B. T. Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-01-01

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon’s entropy. PMID:28772893
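
    The TOPSIS selection step mentioned above can be sketched as follows in Python: objectives are vector-normalized and weighted, distances to the ideal and anti-ideal points are computed, and the alternative with the highest relative closeness is chosen. The candidate points and weights are illustrative, not the study's Pareto frontier.

      # TOPSIS ranking of a few candidate solutions (illustrative data and weights).
      import numpy as np

      # columns: tensile strength (max), elongation (max), HAZ hardness (max)
      pareto = np.array([[210.0, 6.5, 62.0],
                         [225.0, 5.8, 60.0],
                         [200.0, 7.2, 64.0]])
      weights = np.array([0.4, 0.3, 0.3])

      norm = pareto / np.sqrt((pareto ** 2).sum(axis=0))   # vector normalization
      v = norm * weights
      ideal, anti = v.max(axis=0), v.min(axis=0)           # all criteria treated as benefits here
      d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
      d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
      closeness = d_minus / (d_plus + d_minus)
      print("best alternative index:", int(np.argmax(closeness)), closeness)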

  5. Performance Assessment Uncertainty Analysis for Japan's HLW Program Feasibility Study (H12)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BABA,T.; ISHIGURO,K.; ISHIHARA,Y.

    1999-08-30

    Most HLW programs in the world recognize that any estimate of long-term radiological performance must be couched in terms of the uncertainties derived from natural variation, changes through time and lack of knowledge about the essential processes. The Japan Nuclear Cycle Development Institute followed a relatively standard procedure to address two major categories of uncertainty. First, a FEatures, Events and Processes (FEPs) listing, screening and grouping activity was pursued in order to define the range of uncertainty in system processes as well as possible variations in engineering design. A reference and many alternative cases representing various groups of FEPs were defined and individual numerical simulations performed for each to quantify the range of conceptual uncertainty. Second, parameter distributions were developed for the reference case to represent the uncertainty in the strength of these processes, the sequencing of activities and geometric variations. Both point estimates using high and low values for individual parameters as well as a probabilistic analysis were performed to estimate parameter uncertainty. A brief description of the conceptual model uncertainty analysis is presented. This paper focuses on presenting the details of the probabilistic parameter uncertainty assessment.
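
    A minimal Python sketch contrasting the two treatments mentioned (bounding point estimates versus probabilistic sampling) is given below; the response function and parameter ranges are placeholders, not the H12 performance-assessment model.

      # Bounding cases vs. Monte Carlo propagation through a stand-in response function.
      import numpy as np

      def response(k_sorption, flow_rate):
          return 1.0 / (1.0 + k_sorption) * flow_rate      # stand-in dose-like measure (assumed)

      bounds = {"k_sorption": (0.1, 10.0), "flow_rate": (0.01, 1.0)}

      # Point-estimate bounding cases
      lo = response(bounds["k_sorption"][1], bounds["flow_rate"][0])
      hi = response(bounds["k_sorption"][0], bounds["flow_rate"][1])

      # Probabilistic case: log-uniform sampling over the same ranges
      rng = np.random.default_rng(0)
      ks = np.exp(rng.uniform(*np.log(bounds["k_sorption"]), 10_000))
      qs = np.exp(rng.uniform(*np.log(bounds["flow_rate"]), 10_000))
      samples = response(ks, qs)
      print(f"bounding: [{lo:.3g}, {hi:.3g}]  95th percentile: {np.percentile(samples, 95):.3g}")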

  6. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm.

    PubMed

    Tamjidy, Mehran; Baharudin, B T Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-05-15

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy.

  7. Group Contribution Methods for Phase Equilibrium Calculations.

    PubMed

    Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian

    2015-01-01

    The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why for the development of powerful group contribution methods almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.

  8. Stochastic Modeling and Analysis of Multiple Nonlinear Accelerated Degradation Processes through Information Fusion

    PubMed Central

    Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product’s performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner’s ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters. PMID:27509499
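
    A short Python sketch of a single nonlinear (time-transformed) Wiener degradation path of the general form X(t) = a*Lambda(t) + sigma*B(Lambda(t)) is shown below; the power-law time transformation and parameter values are assumptions, and the copula-based dependence between multiple performance parameters is not reproduced.

      # Simulate one nonlinear Wiener degradation path on a transformed time scale.
      import numpy as np

      def simulate_path(a=0.5, sigma=0.1, q=1.3, t_end=100.0, n=200, seed=0):
          rng = np.random.default_rng(seed)
          t = np.linspace(0.0, t_end, n)
          lam = t ** q                                 # nonlinear time scale Lambda(t) (assumed power law)
          dlam = np.diff(lam, prepend=0.0)
          increments = a * dlam + sigma * rng.normal(size=n) * np.sqrt(dlam)
          return t, np.cumsum(increments)

      t, x = simulate_path()
      print("degradation level at t_end:", x[-1])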

  9. Stochastic Modeling and Analysis of Multiple Nonlinear Accelerated Degradation Processes through Information Fusion.

    PubMed

    Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao

    2016-08-06

    Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product's performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner's ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters.

  10. Numerical simulation of electron beam welding with beam oscillations

    NASA Astrophysics Data System (ADS)

    Trushnikov, D. N.; Permyakov, G. L.

    2017-02-01

    This research examines the process of electron-beam welding in a keyhole mode with the use of beam oscillations. We study the impact of various beam oscillations and their parameters on the shape of the keyhole, the flow of heat and mass transfer processes and weld parameters to develop methodological recommendations. A numerical three-dimensional mathematical model of electron beam welding is presented. The model was developed on the basis of a heat conduction equation and a Navier-Stokes equation taking into account phase transitions at the interface of a solid and liquid phase and thermocapillary convection (Marangoni effect). The shape of the keyhole is determined based on experimental data on the parameters of the secondary signal by using the method of a synchronous accumulation. Calculations of thermal and hydrodynamic processes were carried out based on a computer cluster, using a simulation package COMSOL Multiphysics.

  11. Development of system design information for carbon dioxide using an amine type sorber

    NASA Technical Reports Server (NTRS)

    Rankin, R. L.; Roehlich, F.; Vancheri, F.

    1971-01-01

    Development work on system design information for amine type carbon dioxide sorber is reported. Amberlite IR-45, an aminated styrene divinyl benzene matrix, was investigated to determine the influence of design parameters of sorber particle size, process flow rate, CO2 partial pressure, total pressure, and bed designs. CO2 capacity and energy requirements for a 4-man size system were related mathematically to important operational parameters. Some fundamental studies in CO2 sorber capacity, energy requirements, and process operation were also performed.

  12. Standardization of domestic frying processes by an engineering approach.

    PubMed

    Franke, K; Strijowski, U

    2011-05-01

    An approach was developed to enable a better standardization of domestic frying of potato products. For this purpose, 5 domestic fryers differing in heating power and oil capacity were used. A very defined frying process using a highly standardized model product and a broad range of frying conditions was carried out in these fryers and the development of browning representing an important quality parameter was measured. Product-to-oil ratio, oil temperature, and frying time were varied. Quite different color changes were measured in the different fryers although the same frying process parameters were applied. The specific energy consumption for water evaporation (spECWE) during frying related to product amount was determined for all frying processes to define an engineering parameter for characterizing the frying process. A quasi-linear regression approach was applied to calculate this parameter from frying process settings and fryer properties. The high significance of the regression coefficients and a coefficient of determination close to unity confirmed the suitability of this approach. Based on this regression equation, curves for standard frying conditions (SFC curves) were calculated which describe the frying conditions required to obtain the same level of spECWE in the different domestic fryers. Comparison of browning results from the different fryers operated at conditions near the SFC curves confirmed the applicability of the approach. © 2011 Institute of Food Technologists®

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    N.D. Francis

    The objective of this calculation is to develop a time dependent in-drift effective thermal conductivity parameter that will approximate heat conduction, thermal radiation, and natural convection heat transfer using a single mode of heat transfer (heat conduction). In order to reduce the physical and numerical complexity of the heat transfer processes that occur (and must be modeled) as a result of the emplacement of heat generating wastes, a single parameter will be developed that approximates all forms of heat transfer from the waste package surface to the drift wall (or from one surface exchanging heat with another). Subsequently, with this single parameter, one heat transfer mechanism (e.g., conduction heat transfer) can be used in the models. The resulting parameter is to be used as input in the drift-scale process-level models applied in total system performance assessments for the site recommendation (TSPA-SR). The format of this parameter will be a time-dependent table for direct input into the thermal-hydrologic (TH) and the thermal-hydrologic-chemical (THC) models.

  14. A Telemetry Browser Built with Java Components

    NASA Astrophysics Data System (ADS)

    Poupart, E.

    In the context of CNES balloon scientific campaigns and telemetry monitoring, a generic telemetry processing product, called TelemetryBrowser in the following, was developed by reusing COTS, most of them Java components. The connection between those components relies on a software architecture based on parameter producers and parameter consumers: the former transmit parameter values to the latter, which register with them. All of these producers and consumers can be spread over the network thanks to CORBA, and run on any kind of workstation thanks to Java. This gives a very powerful means of adapting to constraints such as network bandwidth or workstation processing power and memory. It is also very useful for displaying and correlating, at the same time, information coming from multiple and various sources. An important point of this architecture is that the coupling between parameter producers and parameter consumers is reduced to a minimum and that transmission of information on the network is asynchronous. So, if a parameter consumer goes down or runs slowly, there is no consequence for the other consumers, because producers do not wait for their consumers to finish processing data before sending it to other consumers. Another interesting point is that parameter producers, also called TelemetryServers in the following, are generated nearly automatically from a telemetry description using the Flavor component (Flavor, the Formal Language for Audio-Visual Object Representation, is an object-oriented media representation language developed at Columbia University as an extension of Java and C++; the CORBA implementation used is OpenORB, a Java implementation of the OMG CORBA 2.4.2 specification).

  15. Investigation on influence of Wurster coating process parameters for the development of delayed release minitablets of Naproxen.

    PubMed

    Shah, Neha; Mehta, Tejal; Aware, Rahul; Shetty, Vasant

    2017-12-01

    The present work aims at studying process parameters affecting coating of minitablets (3 mm in diameter) through Wurster coating process. Minitablets of Naproxen with high drug loading were manufactured using 3 mm multi-tip punches. The release profile of core pellets (published) and minitablets was compared with that of marketed formulation. The core formulation of minitablets was found to show similarity in dissolution profile with marketed formulation and hence was further carried forward for functional coating over it. Wurster processing was implemented to pursue functional coating over core formulation. Different process parameters were screened and control strategy was applied for factors significantly affecting the process. Modified Plackett Burman Design was applied for studying important factors. Based on the significant factors and minimum level of coating required for functionalization, optimized process was executed. Final coated batch was evaluated for coating thickness, surface morphology, and drug release study.

  16. Autonomous sensor particle for parameter tracking in large vessels

    NASA Astrophysics Data System (ADS)

    Thiele, Sebastian; Da Silva, Marco Jose; Hampel, Uwe

    2010-08-01

    A self-powered and neutrally buoyant sensor particle has been developed for the long-term measurement of spatially distributed process parameters in the chemically harsh environments of large vessels. One intended application is the measurement of flow parameters in stirred fermentation biogas reactors. The prototype sensor particle is a robust and neutrally buoyant capsule, which allows free movement with the flow. It contains measurement devices that log the temperature, absolute pressure (immersion depth) and 3D-acceleration data. A careful calibration including an uncertainty analysis has been performed. Furthermore, autonomous operation of the developed prototype was successfully proven in a flow experiment in a stirred reactor model. It showed that the sensor particle is feasible for future application in fermentation reactors and other industrial processes.

  17. Comparison of two methods for calculating the P sorption capacity parameter in soils

    USDA-ARS?s Scientific Manuscript database

    Phosphorus (P) cycling in soils is an important process affecting P movement through the landscape. The P cycling routines in many computer models are based on the relationships developed for the EPIC model. An important parameter required for this model is the P sorption capacity parameter (PSP). I...

  18. In-line monitoring of the coffee roasting process with near infrared spectroscopy: Measurement of sucrose and colour.

    PubMed

    Santos, João Rodrigo; Viegas, Olga; Páscoa, Ricardo N M J; Ferreira, Isabel M P L V O; Rangel, António O S S; Lopes, João Almeida

    2016-10-01

    In this work, a real-time and in-situ analytical tool based on near infrared spectroscopy is proposed to predict two of the most relevant coffee parameters during the roasting process, sucrose and colour. The methodology was developed taking in consideration different coffee varieties (Arabica and Robusta), coffee origins (Brazil, East-Timor, India and Uganda) and roasting process procedures (slow and fast). All near infrared spectroscopy-based calibrations were developed resorting to partial least squares regression. The results proved the suitability of this methodology as demonstrated by range-error-ratio and coefficient of determination higher than 10 and 0.85 respectively, for all modelled parameters. The relationship between sucrose and colour development during the roasting process is further discussed, in light of designing in real-time coffee products with similar visual appearance and distinct organoleptic profile. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. On selecting a prior for the precision parameter of Dirichlet process mixture models

    USGS Publications Warehouse

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
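
    The influence of the precision parameter can be illustrated with the standard identity E[K | alpha, n] = sum over i = 1..n of alpha/(alpha + i - 1) for the prior expected number of clusters among n observations; the short Python sketch below evaluates it for a few candidate values of alpha.

      # Prior expected number of clusters in a DP mixture as a function of alpha.
      import numpy as np

      def expected_clusters(alpha, n):
          i = np.arange(1, n + 1)
          return np.sum(alpha / (alpha + i - 1.0))

      n = 50
      for alpha in (0.1, 0.5, 1.0, 2.0, 5.0):
          print(f"alpha={alpha:4.1f}  E[K | n={n}] = {expected_clusters(alpha, n):5.2f}")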

  20. A fortran program for Monte Carlo simulation of oil-field discovery sequences

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Davis, J.C.

    1993-01-01

    We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
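
    A hedged Python sketch of such a discovery-sequence simulation is given below: a parent population of field sizes is drawn and fields are then sampled without replacement with probability proportional to size raised to a discoverability exponent. A lognormal stands in for the paper's three-parameter log gamma, and the exponent value is an assumption.

      # Monte Carlo discovery sequence: size-biased sampling without replacement.
      import numpy as np

      def simulate_discovery_sequence(n_fields=200, n_discoveries=50, beta=0.8, seed=0):
          rng = np.random.default_rng(seed)
          sizes = rng.lognormal(mean=2.0, sigma=1.2, size=n_fields)   # stand-in parent population
          remaining = list(range(n_fields))
          sequence = []
          for _ in range(n_discoveries):
              w = sizes[remaining] ** beta                 # discoverability proportional to size**beta
              pick = rng.choice(len(remaining), p=w / w.sum())
              sequence.append(sizes[remaining.pop(pick)])
          return np.array(sequence)

      seq = simulate_discovery_sequence()
      print("mean size, first vs last ten discoveries:", seq[:10].mean(), seq[-10:].mean())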

  1. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby to optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation when applied in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100 % for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication and establish the role of improved charge transport in the polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical composition of the PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
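
    For context, the single-diode relation that such parameter-extraction schemes typically fit is I = Iph - I0[exp((V + I*Rs)/(n*VT)) - 1] - (V + I*Rs)/Rsh; the Python sketch below evaluates this implicit equation and the RMSE objective a swarm optimizer would minimize. The I-V data and parameter values are synthetic placeholders, not measurements from the ZnO/PCDTBT devices.

      # Single-diode model and RMSE objective for PV parameter extraction (synthetic data).
      import numpy as np
      from scipy.optimize import brentq

      K_B, Q, T = 1.380649e-23, 1.602176634e-19, 300.0
      VT = K_B * T / Q                                   # thermal voltage at 300 K

      def diode_current(V, Iph, I0, n, Rs, Rsh):
          # Solve I = Iph - I0*(exp((V + I*Rs)/(n*VT)) - 1) - (V + I*Rs)/Rsh for I
          def f(I):
              return Iph - I0 * (np.exp((V + I * Rs) / (n * VT)) - 1.0) - (V + I * Rs) / Rsh - I
          return brentq(f, -1.0, Iph + 1.0)

      def rmse(params, V_meas, I_meas):
          I_model = np.array([diode_current(v, *params) for v in V_meas])
          return np.sqrt(np.mean((I_model - I_meas) ** 2))

      V_meas = np.linspace(0.0, 0.55, 20)
      true = (0.03, 1e-9, 1.5, 2.0, 500.0)               # Iph, I0, n, Rs, Rsh (assumed values)
      I_meas = np.array([diode_current(v, *true) for v in V_meas])
      print("RMSE at the true parameters:", rmse(true, V_meas, I_meas))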

  2. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  3. Real-time control data wrangling for development of mathematical control models of technological processes

    NASA Astrophysics Data System (ADS)

    Vasilyeva, N. V.; Koteleva, N. I.; Fedorova, E. R.

    2018-05-01

    The relevance of this research stems from the need to stabilize the composition of the melting products of copper-nickel sulfide raw materials in the Vanyukov furnace. The goal is to identify the most suitable methods for aggregating real-time data for the development of a mathematical model for control of the technological process of melting copper-nickel sulfide raw materials in the Vanyukov furnace. Statistical methods for analyzing the historical data of a real technological object and a correlation analysis of the process parameters are described. Factors that exert the greatest influence on the main output parameter (copper content in matte) and govern the physical-chemical transformations are revealed. An approach to processing the real-time data for the development of a control model for the melting process is proposed, and the stages of processing the real-time information are considered. The adopted methodology for aggregating data suitable for developing a control model for the technological process of melting copper-nickel sulfide raw materials in the Vanyukov furnace allows the obtained results to be interpreted for further practical application.

  4. Modeling of feed-forward control using the partial least squares regression method in the tablet compression process.

    PubMed

    Hattori, Yusuke; Otsuka, Makoto

    2017-05-30

    In the pharmaceutical industry, the implementation of continuous manufacturing has been widely promoted in lieu of the traditional batch manufacturing approach. More specifically, in recent years, the innovative concept of feed-forward control has been introduced in relation to process analytical technology. In the present study, we successfully developed a feed-forward control model for the tablet compression process by integrating data obtained from near-infrared (NIR) spectra and the physical properties of granules. In the pharmaceutical industry, batch manufacturing routinely allows for the preparation of granules with the desired properties through the manual control of process parameters. On the other hand, continuous manufacturing demands the automatic determination of these process parameters. Here, we proposed the development of a control model using the partial least squares regression (PLSR) method. The most significant feature of this method is the use of a dataset integrating both the NIR spectra and the physical properties of the granules. Using our model, we determined that the properties of the products, such as tablet weight and thickness, need to be included as independent variables in the PLSR analysis in order to predict unknown process parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
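
    A hedged Python sketch of a feed-forward model in this spirit is shown below: a PLS regression predicting one process parameter (here, hypothetically, compression force) from granule NIR spectra, granule physical properties and the target tablet properties. All data, dimensions and the choice of predicted parameter are illustrative assumptions.

      # Feed-forward PLS sketch: suggest a process parameter for an incoming granule lot.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      n_batches = 40
      nir = rng.normal(size=(n_batches, 200))             # granule NIR spectra (placeholder)
      granule_props = rng.normal(size=(n_batches, 3))     # e.g. size, density, moisture (assumed)
      targets = rng.normal(size=(n_batches, 2))           # desired tablet weight, thickness (assumed)
      X = np.hstack([nir, granule_props, targets])
      compression_force = granule_props[:, 0] + targets[:, 1] + rng.normal(0, 0.1, n_batches)

      model = PLSRegression(n_components=4).fit(X, compression_force)
      new_batch = X[:1]                                   # incoming granule lot plus its product spec
      print("suggested compression force:", model.predict(new_batch).ravel()[0])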

  5. Investigation of Parametric Influence on the Properties of Al6061-SiCp Composite

    NASA Astrophysics Data System (ADS)

    Adebisi, A. A.; Maleque, M. A.; Bello, K. A.

    2017-03-01

    The influence of process parameters in stir casting plays a major role in the development of aluminium composites reinforced with silicon carbide particles (Al-SiCp). This study investigates the influence of process parameters on the wear and density of an Al-SiCp composite produced by the stir casting technique. Experimental data were generated based on a four-factor, five-level central composite design of response surface methodology. Analysis of variance was used to confirm the adequacy and validity of the developed models, considering the significant model terms. Optimization of the process parameters adequately predicts the Al-SiCp composite properties, with stirring speed as the most influential factor. The aim of the optimization was to minimize wear and maximize density. The multiple-objective optimization (MOO) achieved an optimal combination of 14 wt% reinforcement fraction (RF), 460 rpm stirring speed (SS), 820 °C processing temperature (PTemp) and 150 s processing time (PT). At this optimum parametric combination, a minimum wear mass loss of 1 x 10-3 g and a maximum density of 2.780 g/mm3 were achieved with a confidence and desirability level of 95.5%.

  6. A Comparison of the One-, the Modified Three-, and the Three-Parameter Item Response Theory Models in the Test Development Item Selection Process.

    ERIC Educational Resources Information Center

    Eignor, Daniel R.; Douglass, James B.

    This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…

  7. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, the setting-up of initial and boundary conditions, and the formation of difference equations in cases where the forward solution is obtained numerically. Gaussian process based approaches such as Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require the explicit setting-up of initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. The results show that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
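    A heavily simplified illustration of the gradient-matching idea (not the AGM scheme of the paper): smooth noisy observations with a Gaussian process, differentiate the smoothed curve, and fit the parameters of a toy soil-moisture ODE to that derivative, so the equation is never solved forward. Everything below is synthetic.

    ```python
    # Simplified gradient-matching sketch: estimate parameters of a toy ODE
    #   d(theta)/dt = -k * (theta - theta_r)
    # by matching the model right-hand side to the derivative of a GP-smoothed fit.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    k_true, theta_r_true = 0.3, 0.10
    t = np.linspace(0.0, 20.0, 60)
    theta_clean = theta_r_true + (0.35 - theta_r_true) * np.exp(-k_true * t)
    theta_obs = theta_clean + rng.normal(scale=0.005, size=t.size)

    # Smooth the noisy observations with a Gaussian process
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(1e-4),
                                  normalize_y=True)
    gp.fit(t.reshape(-1, 1), theta_obs)

    t_dense = np.linspace(0.0, 20.0, 400)
    theta_hat = gp.predict(t_dense.reshape(-1, 1))
    dtheta_dt = np.gradient(theta_hat, t_dense)          # derivative of the GP mean

    # Match the model right-hand side to the smoothed derivative
    def residuals(p):
        k, theta_r = p
        return dtheta_dt - (-k * (theta_hat - theta_r))

    fit = least_squares(residuals, x0=[0.1, 0.05])
    print("estimated k, theta_r:", fit.x)                # should approach 0.3, 0.10
    ```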

  8. Influence of manufacturing parameters on the strength of PLA parts using Layered Manufacturing technique: A statistical approach

    NASA Astrophysics Data System (ADS)

    Jaya Christiyan, K. G.; Chandrasekhar, U.; Mathivanan, N. Rajesh; Venkateswarlu, K.

    2018-02-01

    3D printing was successfully used to fabricate samples of polylactic acid (PLA). Processing parameters such as lay-up speed, lay-up thickness, and printing nozzle diameter were varied. All samples were tested for flexural strength using a three-point load test. A statistical mathematical model was developed to correlate the processing parameters with flexural strength. The results clearly demonstrated that lay-up thickness and nozzle diameter influenced flexural strength significantly, whereas lay-up speed hardly influenced it.

  9. Identification of drought in Dhalai river watershed using MCDM and ANN models

    NASA Astrophysics Data System (ADS)

    Aher, Sainath; Shinde, Sambhaji; Guha, Shantamoy; Majumder, Mrinmoy

    2017-03-01

    An innovative approach for drought identification is developed using Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN) models from surveyed drought parameter data around the Dhalai river watershed in the Tripura hinterlands, India. A total of eight drought parameters, i.e., precipitation, soil moisture, evapotranspiration, vegetation canopy, cropping pattern, temperature, cultivated land, and groundwater level, were obtained from expert, literature, and cultivator surveys. The Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP) were then used for weighting the parameters and for Drought Index Identification (DII). Field data for the weighted parameters in the meso-scale Dhalai River watershed were collected and used to train the ANN model. The developed ANN model was used in the same watershed for identification of drought. Results indicate that the Limited-Memory Quasi-Newton algorithm performed better than the commonly used training method. The drought index developed from the ANN model for the study area ranges from 0.32 to 0.72. Overall analysis revealed that, with appropriate training, the ANN model can be used in the areas where it is calibrated, or in other areas where the range of input parameters is similar to that of the calibrated region, for drought identification.

  10. The Use of Logistics in the Quality Parameters Control System of Material Flow

    ERIC Educational Resources Information Center

    Karpova, Natalia P.; Toymentseva, Irina A.; Shvetsova, Elena V.; Chichkina, Vera D.; Chubarkova, Elena V.

    2016-01-01

    The relevance of the research problem stems from the need to justify the use of logistics methodologies in the quality parameter control process of material flows. The goal of the article is to develop theoretical principles and practical recommendations for logistical control of the quality parameters of material flows. A leading…

  11. Classification of video sequences into chosen generalized use classes of target size and lighting level.

    PubMed

    Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin

    The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance in influencing the ability to achieve a recognition task: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. Nevertheless, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% compliance to the end-users' opinion. Lighting levels of an entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results; the user interface is very simple and requires only minimal user interaction.

  12. Minimizing energy dissipation of matrix multiplication kernel on Virtex-II

    NASA Astrophysics Data System (ADS)

    Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook

    2002-07-01

    In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total (system-wide) energy dissipation. We then explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.

  13. Developing a probability-based model of aquifer vulnerability in an agricultural region

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei

    2013-04-01

    Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a groundwater quality protection scheme. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of six parameters in the probability-based DRASTIC model were probabilistically characterized according to the parameter classification methods of selecting a maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment gave us an excellent insight into propagating the uncertainty of parameters due to limited observation data. To examine the prediction capacity of the developed probability-based DRASTIC model, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N exceeding 0.5 mg/L, indicating anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizing the parameter uncertainty via the probability estimation processes.

  14. The Influence of Friction Stir Weld Tool Form and Welding Parameters on Weld Structure and Properties: Nugget Bulge in Self-Reacting Friction Stir Welds

    NASA Technical Reports Server (NTRS)

    Schneider, Judy; Nunes, Arthur C., Jr.; Brendel, Michael S.

    2010-01-01

    Although friction stir welding (FSW) was patented in 1991, process development has been based upon trial and error and the literature still exhibits little understanding of the mechanisms determining weld structure and properties. New concepts emerging from a better understanding of these mechanisms enhance the ability of FSW engineers to think about the FSW process in new ways, inevitably leading to advances in the technology. A kinematic approach in which the FSW flow process is decomposed into several simple flow components has been found to explain the basic structural features of FSW welds and to relate them to tool geometry and process parameters. Using this modelling approach, this study reports on a correlation between the features of the weld nugget, process parameters, weld tool geometry, and weld strength. This correlation presents a way to select process parameters for a given tool geometry so as to optimize weld strength. It also provides clues that may ultimately explain why the weld strength varies within the sample population.

  15. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivity for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing, and the fixed-abrasive slicing technique (FAST) are presented. Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of the price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life), and utilities, and to production parameters such as slicing rate, slices per centimeter, and process yield, using a computer program specifically developed to perform sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
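    The following one-at-a-time sensitivity sketch mirrors the kind of analysis described above, but with a deliberately simplified cost model; the coefficients and baseline values are invented placeholders rather than the actual IPEG formulation.

    ```python
    # Illustrative one-at-a-time sensitivity sketch for a simplified wafering
    # add-on price model. The cost structure and all numbers are hypothetical
    # stand-ins, not the IPEG coefficients used in the study.
    def addon_price(equipment, space, labor, materials, utilities,
                    slicing_rate, slices_per_cm, process_yield):
        annual_cost = equipment + space + labor + materials + utilities   # $/yr (toy model)
        wafers_per_year = slicing_rate * slices_per_cm * 8000.0 * process_yield
        return annual_cost / wafers_per_year                              # $/wafer

    base = dict(equipment=120_000, space=10_000, labor=60_000, materials=40_000,
                utilities=8_000, slicing_rate=0.5, slices_per_cm=20, process_yield=0.90)

    p0 = addon_price(**base)
    print(f"baseline add-on price: {p0:.3f} $/wafer")

    # Perturb each parameter by +10% and report the relative change in price
    for name, value in base.items():
        perturbed = dict(base, **{name: value * 1.10})
        p1 = addon_price(**perturbed)
        print(f"{name:>14}: {100.0 * (p1 - p0) / p0:+.1f} % price change")
    ```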

  16. Resist Parameter Extraction from Line-and-Space Patterns of Chemically Amplified Resist for Extreme Ultraviolet Lithography

    NASA Astrophysics Data System (ADS)

    Kozawa, Takahiro; Oizumi, Hiroaki; Itani, Toshiro; Tagawa, Seiichi

    2010-11-01

    The development of extreme ultraviolet (EUV) lithography has progressed owing to worldwide effort. As the development status of EUV lithography approaches the requirements for the high-volume production of semiconductor devices with a minimum line width of 22 nm, the extraction of resist parameters becomes increasingly important, both for the accurate evaluation of resist materials during resist screening and for accurate process simulation in process and mask design. In this study, we demonstrated that resist parameters (namely, quencher concentration, acid diffusion constant, proportionality constant of line edge roughness, and dissolution point) can be extracted from scanning electron microscopy (SEM) images of patterned resists, without knowledge of the details of the resist composition, using two types of the latest EUV resists.

  17. Experiments for practical education in process parameter optimization for selective laser sintering to increase workpiece quality

    NASA Astrophysics Data System (ADS)

    Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas

    2016-04-01

    Selective Laser Sintering (SLS) is considered one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However, the influence of the different process parameters on mechanical workpiece properties is still poorly studied, so further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments are implemented to improve the understanding of the limitations and advantages of SLS on an educational level. The experiments are based on two different workstations used to teach students the fundamentals of SLS. First, a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the material for varied process parameters by analyzing a single-layered test piece. Second, the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces as a function of various process parameters. Finally, quality attributes are tested, including warpage, dimensional accuracy, and tensile strength. For dimension measurements and evaluation of the surface structure, a telecentric lens in combination with a camera is used. A tensile test machine allows testing of the tensile strength and interpretation of stress-strain curves. The developed laboratory experiments are suitable for teaching students the influence of processing parameters. In this context, students will be able to optimize the input parameters depending on the component to be manufactured and to increase the overall quality of the final workpiece.

  18. Hierarchical Bayesian Model for Combining Geochemical and Geophysical Data for Environmental Applications Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2013-05-01

    A hierarchical Bayesian model was developed to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation, using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency, and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g., ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: 1) a data model, providing links between geochemical and geophysical attributes; 2) a process model, describing the spatial and temporal variability of geochemical properties in the subsurface system; and 3) a parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatio-temporal distribution of ferrous iron, sulfate, and sulfide, together with the associated uncertainty information. The obtained results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.

  19. Strategic Planning: A (Site) Sight-Based Approach to Curriculum and Staff Development.

    ERIC Educational Resources Information Center

    Johnson, Daniel P.

    The purpose of (Colorado's) Clear Creek School District's strategic planning process has been to develop basic district-wide parameters to promote instructional improvement through a process of shared leadership. The approach is termed "sight-based" to indicate the school district's commitment to connecting curriculum and…

  20. The topographic development and areal parametric characterization of a stratified surface polished by mass finishing

    NASA Astrophysics Data System (ADS)

    Walton, Karl; Blunt, Liam; Fleming, Leigh

    2015-09-01

    Mass finishing is amongst the most widely used finishing processes in modern manufacturing, in applications from deburring to edge radiusing and polishing. Processing objectives are varied, ranging from the cosmetic to the functionally critical. One such critical application is the hydraulically smooth polishing of the gas-washed surfaces of aero engine components. In this, and many other applications, the drive to improve process control and finish tolerance is ever present. Considering its widespread use, mass finishing has seen limited research activity, particularly with respect to surface characterization. The objectives of the current paper are to: characterise the mass finished stratified surface and its development process using areal surface parameters; provide guidance on the optimal parameters and sampling method to characterise this surface type for a given application; and detail the spatial variation in surface topography due to coupon edge shadowing. Blasted and peened square plate coupons in titanium alloy are wet (vibro) mass finished iteratively with increasing duration. Measurement fields are precisely relocated between iterations by fixturing and an image superimposition alignment technique. Surface topography development is detailed with ‘log of process duration’ plots of the ‘areal parameters for scale-limited stratified functional surfaces’ (the Sk family). Characteristic features of the Smr2 plot are seen to map out the processing of peak, core and dale regions in turn. These surface process regions also become apparent in the ‘log of process duration’ plot for Sq, where the lower core and dale regions are well modelled by logarithmic functions. Surface finish (Ra or Sa) as a function of mass finishing duration is currently predicted with an exponential model. This model is shown to be limited for the current surface type at a critical range of surface finishes. Statistical analysis provides a group of areal parameters, including Vvc, Sq, and Sdq, showing optimal discrimination for a specific range of surface finish outcomes. As a consequence of edge shadowing, surface segregation is suggested for characterization purposes.

  1. Parametric Study and Multi-Criteria Optimization in Laser Cladding by a High Power Direct Diode Laser

    NASA Astrophysics Data System (ADS)

    Farahmand, Parisa; Kovacevic, Radovan

    2014-12-01

    In laser cladding, the performance of deposited layers subjected to severe working conditions (e.g., wear and high temperatures) depends on the mechanical properties, the metallurgical bond to the substrate, and the percentage of dilution. The clad geometry and mechanical characteristics of the deposited layer are influenced greatly by the type of laser used as a heat source and the process parameters used. The quality of coatings fabricated by laser cladding and the efficiency of the process have improved thanks to the development of high-power diode lasers with power up to 10 kW. In this study, laser cladding by a high power direct diode laser (HPDDL), as a new heat source in laser cladding, was investigated in detail. High alloy tool steel (AISI H13) feedstock was deposited on mild steel (ASTM A36) by an HPDDL with up to 8 kW of power and a newly designed lateral feeding nozzle. The influences of the main process parameters (laser power, powder flow rate, and scanning speed) on the clad-bead geometry (specifically layer height and depth of the heat affected zone) and clad microhardness were studied. Multiple regression analysis was used to develop analytical models for the desired output properties as functions of the input process parameters. Analysis of variance was applied to check the accuracy of the developed models. Response surface methodology (RSM) and a desirability function were used for multi-criteria optimization of the cladding process. In order to investigate the effect of process parameters on the molten pool evolution, in-situ monitoring was utilized. Finally, the validation results for the optimized process conditions show that the predicted results were in good agreement with the measured values. The multi-criteria optimization makes it possible to obtain an efficient process for the combined control of clad geometrical and mechanical characteristics.
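    To make the desirability-based multi-criteria step concrete, the sketch below combines two hypothetical fitted response models (clad height as target-is-best, heat-affected-zone depth as smaller-is-better) into an overall desirability and searches it on a coarse grid. The model coefficients, limits, and factor ranges are illustrative assumptions, not the study's regressions.

    ```python
    # Hedged sketch of multi-criteria optimization with desirability functions
    # over invented second-order response models for clad height and HAZ depth.
    import numpy as np

    def clad_height(p, f, v):      # p = power (kW), f = powder rate (g/min), v = speed (mm/s)
        return 0.2 + 0.35 * p + 0.05 * f - 0.08 * v - 0.01 * p * v

    def haz_depth(p, f, v):        # hypothetical model for heat-affected-zone depth (mm)
        return 0.05 + 0.12 * p - 0.01 * f - 0.015 * v + 0.004 * p ** 2

    def d_target(y, low, target, high):        # desirability for a "hit the target" response
        if y <= low or y >= high:
            return 0.0
        return (y - low) / (target - low) if y <= target else (high - y) / (high - target)

    def d_smaller(y, target, high):            # desirability for a "smaller is better" response
        return float(np.clip((high - y) / (high - target), 0.0, 1.0))

    def overall(p, f, v):                      # geometric mean of individual desirabilities
        d1 = d_target(clad_height(p, f, v), low=0.5, target=1.0, high=1.5)
        d2 = d_smaller(haz_depth(p, f, v), target=0.2, high=0.8)
        return (d1 * d2) ** 0.5

    grid_p = np.linspace(2.0, 8.0, 25)
    grid_f = np.linspace(10.0, 40.0, 25)
    grid_v = np.linspace(4.0, 15.0, 25)
    best = max(((overall(p, f, v), (p, f, v)) for p in grid_p for f in grid_f for v in grid_v),
               key=lambda t: t[0])
    print(f"best desirability = {best[0]:.3f} at (power, feed, speed) = {best[1]}")
    ```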

  2. Beam engineering for zero conicity cutting and drilling with ultra fast laser (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Letan, Amelie; Mishchik, Konstantin; Audouard, Eric; Hoenninger, Clemens; Mottay, Eric P.

    2017-03-01

    With the development of high average power, high repetition rate, industrial ultrafast lasers, it is now possible to achieve high throughput with femtosecond laser processing, provided that the operating parameters are finely tuned to the application. Femtosecond lasers play a key role in these processes due to their ability to perform high-quality micro-processing. They are able to drill holes through high thicknesses (up to 1 mm) with arbitrary shapes, such as zero conicity or even inverse taper, and can also perform zero-taper cutting. A clear understanding of all the processing steps necessary to optimize the processing speed is a main challenge for industrial developments. Indeed, the laser parameters are not independent of the beam steering devices. Pulse energy and repetition rate have to be precisely adjusted to the beam angle with the sample and to the temporal and spatial sequences of pulse superposition. The purpose of the present work is to identify the role of these parameters for high-aspect-ratio drilling and cutting, not only with experimental trials but also with numerical estimations, using a simple engineering model based on the two-temperature description of ultrafast ablation. Assuming a nonlinear logarithmic response of the material to ultrafast pulses, each material can be described by only two adjustable parameters. Simple assumptions then allow the effect of beam velocity and non-normal beam incidence to be predicted, and profile shapes and processing time to be estimated.
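    The two-parameter logarithmic ablation response mentioned in the abstract lends itself to a back-of-the-envelope estimate of drilling time, sketched below. The penetration depth, threshold fluence, applied fluence, and repetition rate are hypothetical values, and the calculation ignores the efficiency drop with hole depth.

    ```python
    # Back-of-the-envelope sketch of a two-parameter logarithmic ablation response:
    # depth removed per pulse d = delta * ln(F / F_th). All numbers are hypothetical.
    import math

    delta = 0.05e-6        # effective penetration depth per pulse (m), assumed
    F_th = 0.2             # ablation threshold fluence (J/cm^2), assumed
    F = 2.0                # applied peak fluence (J/cm^2), assumed
    rep_rate = 500e3       # laser repetition rate (Hz)
    hole_depth = 1.0e-3    # target drilling depth (m), e.g. a 1 mm thick sample

    depth_per_pulse = delta * math.log(F / F_th) if F > F_th else 0.0
    pulses_needed = hole_depth / depth_per_pulse
    drill_time = pulses_needed / rep_rate

    print(f"removal per pulse : {depth_per_pulse * 1e9:.1f} nm")
    print(f"pulses needed     : {pulses_needed:.0f}")
    print(f"ideal drill time  : {drill_time * 1e3:.1f} ms")
    ```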

  3. Pharmaceutical quality by design: product and process development, understanding, and control.

    PubMed

    Yu, Lawrence X

    2008-04-01

    The purpose of this paper is to discuss pharmaceutical Quality by Design (QbD) and describe how it can be used to ensure pharmaceutical quality. The QbD was described and some of its elements identified. Process parameters and quality attributes were identified for each unit operation during manufacture of solid oral dosage forms. The use of QbD was contrasted with the evaluation of product quality by testing alone. QbD is a systematic approach to pharmaceutical development. It means designing and developing formulations and manufacturing processes to ensure predefined product quality. Some of the QbD elements include: defining the target product quality profile; designing the product and manufacturing processes; identifying critical quality attributes, process parameters, and sources of variability; and controlling manufacturing processes to produce consistent quality over time. Using QbD, pharmaceutical quality is assured by understanding and controlling formulation and manufacturing variables. Product testing confirms the product quality. Implementation of QbD will enable transformation of the chemistry, manufacturing, and controls (CMC) review of abbreviated new drug applications (ANDAs) into a science-based pharmaceutical quality assessment.

  4. Additive Manufacturing of Fuel Injectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadek Tadros, Dr. Alber Alphonse; Ritter, Dr. George W.; Drews, Charles Donald

    Additive manufacturing (AM), also known as 3D-printing, has been shifting from a novelty prototyping paradigm to a legitimate manufacturing tool capable of creating components for highly complex engineered products. An emerging AM technology for producing metal parts is the laser powder bed fusion (L-PBF) process; however, industry manufacturing specifications and component design practices for L-PBF have not yet been established. Solar Turbines Incorporated (Solar), an industrial gas turbine manufacturer, has been evaluating AM technology for development and production applications with the desire to enable accelerated product development cycle times, overall turbine efficiency improvements, and supply chain flexibility relative to conventional manufacturing processes (casting, brazing, welding). Accordingly, Solar teamed with EWI on a joint two-and-a-half-year project with the goal of developing a production L-PBF AM process capable of consistently producing high-nickel alloy material suitable for high temperature gas turbine engine fuel injector components. The project plan tasks were designed to understand the interaction of the process variables and their combined impact on the resultant AM material quality. The compositions of the high-nickel alloy powders selected for this program met the conventional cast Hastelloy X compositional limits and were commercially available in different particle size distributions (PSD) from two suppliers. Solar produced all the test articles and both EWI and Solar shared responsibility for analyzing them. The effects of powder metal input stock, laser parameters, heat treatments, and post-finishing methods were evaluated. This process knowledge was then used to generate tensile, fatigue, and creep material properties data curves suitable for component design activities. The key process controls for ensuring consistent material properties were documented in AM powder and process specifications. The basic components of the project were: • Powder metal input stock: powder characterization, dimensional accuracy, metallurgical characterization, and mechanical properties evaluation. • Process parameters: laser parameter effects, post-printing heat-treatment development, mechanical properties evaluation, and post-finishing technique. • Material design curves: room and elevated temperature tensile, low cycle fatigue, and creep rupture properties curves generated. • AM specifications: key metal powder characteristics, laser parameters, and heat-treatment controls identified.

  5. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized, and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system, taking into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization, considering a multiple-input (key control characteristics) and multiple-output (key performance indicators) system, by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space), based on the developed surrogate model, that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements, such as maximizing the dimple height while minimizing the dimple lower surface area.
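    A minimal sketch of the process-capability idea follows: given a surrogate model of dimple height versus process inputs, propagate assumed stochastic variation of those inputs by Monte Carlo and estimate the fallout rate against a specification window. The surrogate polynomial, input distributions, and limits are all hypothetical, not the MARS model or requirements from the paper.

    ```python
    # Hedged Monte Carlo sketch: fallout-rate estimation from a hypothetical
    # surrogate for dimple height under stochastic variation of the inputs.
    import numpy as np

    rng = np.random.default_rng(42)

    def dimple_height(power_kw, duration_ms):          # hypothetical surrogate (mm)
        return 0.02 + 0.08 * power_kw + 0.003 * duration_ms - 0.004 * power_kw ** 2

    # Nominal process settings with assumed stochastic variation
    power = rng.normal(loc=3.0, scale=0.25, size=100_000)      # kW
    duration = rng.normal(loc=20.0, scale=3.0, size=100_000)   # ms

    heights = dimple_height(power, duration)

    lsl, usl = 0.25, 0.35                              # hypothetical spec limits (mm)
    fallout = np.mean((heights < lsl) | (heights > usl))
    print(f"predicted fallout rate: {100 * fallout:.2f} %")
    ```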

  6. A Novel Scale Up Model for Prediction of Pharmaceutical Film Coating Process Parameters.

    PubMed

    Suzuki, Yasuhiro; Suzuki, Tatsuya; Minami, Hidemi; Terada, Katsuhide

    2016-01-01

    In the pharmaceutical tablet film coating process, we clarified that a difference in exhaust air relative humidity can be used to detect differences in process parameter values, that the relative humidity of the exhaust air differed under different atmospheric air humidity conditions even though all setting values of the manufacturing process parameters were the same, and that the water content of the tablets was correlated with the exhaust air relative humidity. Based on these experimental data, the exhaust air relative humidity index (EHI), an empirical equation whose functional parameters include the pan coater type, heated air flow rate, spray rate of the coating suspension, saturated water vapor pressure at the heated air temperature, and partial water vapor pressure at atmospheric air pressure, was developed. The values of exhaust relative humidity predicted using EHI were in good correlation with the experimental data (correlation coefficient of 0.966) in all datasets. EHI was verified using the data of seven different drug products at different manufacturing scales. The EHI model will support formulation researchers by enabling them to set film coating process parameters when the batch size or pan coater type changes, without the time and expense of further extensive testing.

  7. Thermomechanical Simulation of the Splashing of Ceramic Droplets on a Rigid Substrate

    NASA Astrophysics Data System (ADS)

    Bertagnolli, Mauro; Marchese, Maurizio; Jacucci, Gianni; St. Doltsinis, Ioannis; Noelting, Swen

    1997-05-01

    Finite element simulation techniques have been applied to the spreading process of single ceramic liquid droplets impacting on a flat, cold surface under plasma-spraying conditions. The goal of the present investigation is to predict the geometrical form of the splat as a function of technological process parameters, such as initial temperature and velocity, and to follow the thermal field developing in the droplet up to solidification. A non-linear finite element programming system has been utilized in order to model the complex physical phenomena involved in the impact process. The Lagrangean description of the motion of the viscous melt in the drops, as constrained by surface tension and the developing contact with the target, has been coupled to an analysis of transient thermal phenomena, accounting also for the solidification of the material. The present study considers a parameter spectrum drawn from experimental data of technological relevance. The significance of the process parameters for the most pronounced physical phenomena is discussed, as are the consequences of the modelling choices. The issue of solidification is also considered, as is the effect of partially unmelted material.

  8. Effects of developer exhaustion on DFL Contrast FV-58 and Kodak Insight dental films.

    PubMed

    de Carvalho, Fabiano Pachêco; da Silveira, M M F; Frazão, M A G; de Santana, S T; dos Anjos Pontual, M L

    2011-09-01

    The aim of this study was to compare the properties of the DFL Contrast FV-58 F-speed film (DFL Co., Rio de Janeiro, Brazil) with the Kodak Insight E/F-speed film (Eastman Kodak, Rochester, NY) in fresh and exhausted processing solutions. The parameters studied were the speed, average gradient and latitude. Five samples of each type of film were exposed under standardized conditions over 5 weeks. The films were developed in fresh and progressively exhausted processing solutions. Characteristic curves were constructed from values of optical density and radiation dose and were used to calculate the parameters. An analysis of variance was performed separately for film type and time. DFL Contrast FV-58 film has a speed and average gradient that are significantly higher than those of Insight film, whereas the values of latitude are lower. Exhaustion of the processing solutions had no significant effect on the parameters studied. DFL Contrast FV-58 film has stable properties when exhausted manual processing solutions are used and can be recommended for use in dental practice, contributing to dose reduction.

  9. Effects of developer exhaustion on DFL Contrast FV-58 and Kodak Insight dental films

    PubMed Central

    de Carvalho, FP; da Silveira, MMF; Frazão, MAG; de Santana, ST; dos Anjos Pontual, ML

    2011-01-01

    Objectives The aim of this study was to compare the properties of the DFL Contrast FV-58 F-speed film (DFL Co., Rio de Janeiro, Brazil) with the Kodak Insight E/F-speed film (Eastman Kodak, Rochester, NY) in fresh and exhausted processing solutions. The parameters studied were the speed, average gradient and latitude. Methods Five samples of each type of film were exposed under standardized conditions over 5 weeks. The films were developed in fresh and progressively exhausted processing solutions. Characteristic curves were constructed from values of optical density and radiation dose and were used to calculate the parameters. An analysis of variance was performed separately for film type and time. Results DFL Contrast FV-58 film has a speed and average gradient that are significantly higher than those of Insight film, whereas the values of latitude are lower. Exhaustion of the processing solutions had no significant effect on the parameters studied. Conclusion DFL Contrast FV-58 film has stable properties when exhausted manual processing solutions are used and can be recommended for use in dental practice, contributing to dose reduction. PMID:21831975

  10. Multiobjective Optimization of Atmospheric Plasma Spray Process Parameters to Deposit Yttria-Stabilized Zirconia Coatings Using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Ramachandran, C. S.; Balasubramanian, V.; Ananthapadmanabhan, P. V.

    2011-03-01

    Atmospheric plasma spraying is used extensively to make thermal barrier coatings from 7-8% yttria-stabilized zirconia powders. The main problem faced in the manufacture of yttria-stabilized zirconia coatings by the atmospheric plasma spraying process is the selection of the optimum combination of input variables for achieving the required coating qualities. This problem can be solved by developing empirical relationships between the process parameters (input power, primary gas flow rate, stand-off distance, powder feed rate, and carrier gas flow rate) and the coating quality characteristics (deposition efficiency, tensile bond strength, lap shear bond strength, porosity, and hardness) through effective and strategic planning and execution of experiments by response surface methodology. This article highlights the use of response surface methodology by designing a five-factor, five-level central composite rotatable design matrix with full replication for the planning, conduct, and execution of experiments and the development of empirical relationships. Further, response surface methodology was used for the selection of optimum process parameters to achieve the desired quality of yttria-stabilized zirconia coating deposits.

  11. The Influence of Welding Parameters on the Nugget Formation of Resistance Spot Welding of Inconel 625 Sheets

    NASA Astrophysics Data System (ADS)

    Rezaei Ashtiani, Hamid Reza; Zarandooz, Roozbeh

    2015-09-01

    A 2D axisymmetric electro-thermo-mechanical finite element (FE) model is developed to investigate the effect of current intensity, welding time, and electrode tip diameter on temperature distributions and nugget size in the resistance spot welding (RSW) process of Inconel 625 superalloy sheets, using the ABAQUS commercial software package. A coupled electro-thermal analysis and an uncoupled thermal-mechanical analysis are used for modeling the process. In order to improve the accuracy of the simulation, material properties, including physical, thermal, and mechanical properties, have been considered to be temperature dependent. The thickness and diameter of the computed weld nuggets are compared with experimental results and good agreement is observed. The FE model developed in this paper therefore suitably predicts the quality and shape of the weld nuggets and the temperature distributions as each process parameter is varied. Utilizing this FE model assists in adjusting RSW parameters, so that expensive experimental trials can be avoided. The results show that increasing welding time and current intensity leads to an increase in the nugget size and electrode indentation, whereas increasing the electrode tip diameter decreases nugget size and electrode indentation.

  12. On the Modeling of Vacuum Arc Remelting Process in Titanium Alloys

    NASA Astrophysics Data System (ADS)

    Patel, Ashish; Fiore, Daniel

    2016-07-01

    Mathematical modeling is routinely used in the process development and production of advanced aerospace alloys to gain greater insight into the effect of process parameters on final properties. This article describes the application of a 2-D mathematical VAR model presented at previous LMPC meetings. The impact of process parameters on melt pool geometry, solidification behavior, fluid flow, and chemistry in a Ti-6Al-4V ingot is discussed. Model predictions are validated against published data from an industrial-size ingot, and the results of a parametric study on particle dissolution are also discussed.

  13. Laser Peening Process and Its Impact on Materials Properties in Comparison with Shot Peening and Ultrasonic Impact Peening

    PubMed Central

    Gujba, Abdullahi K.; Medraj, Mamoun

    2014-01-01

    The laser shock peening (LSP) process using a Q-switched pulsed laser beam for surface modification has been reviewed. The development of the LSP technique and its numerous advantages over the conventional shot peening (SP) such as better surface finish, higher depths of residual stress and uniform distribution of intensity were discussed. Similar comparison with ultrasonic impact peening (UIP)/ultrasonic shot peening (USP) was incorporated, when possible. The generation of shock waves, processing parameters, and characterization of LSP treated specimens were described. Special attention was given to the influence of LSP process parameters on residual stress profiles, material properties and structures. Based on the studies so far, more fundamental understanding is still needed when selecting optimized LSP processing parameters and substrate conditions. A summary of the parametric studies of LSP on different materials has been presented. Furthermore, enhancements in the surface micro and nanohardness, elastic modulus, tensile yield strength and refinement of microstructure which translates to increased fatigue life, fretting fatigue life, stress corrosion cracking (SCC) and corrosion resistance were addressed. However, research gaps related to the inconsistencies in the literature were identified. Current status, developments and challenges of the LSP technique were discussed. PMID:28788284

  14. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates the model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and parameter evolution process, and controls the narrowing of parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
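    The following toy sketch illustrates the joint state-parameter idea behind the SEnKF on a one-pool carbon model: the parameter ensemble is shrunk toward its mean (kernel smoothing) before each forecast, and both state and parameter are updated from the observation. It is a didactic simplification, not the authors' implementation, and all numbers are synthetic.

    ```python
    # Toy joint state-parameter ensemble Kalman filter with kernel smoothing.
    # Model: x[t+1] = x[t] + u[t] - k*x[t], observation y[t] = x[t] + noise,
    # where the respiration coefficient k is unknown.
    import numpy as np

    rng = np.random.default_rng(7)
    T, N = 200, 100                     # time steps, ensemble size
    k_true, obs_std, a = 0.05, 0.5, 0.98   # a = kernel smoothing (shrinkage) factor

    # Synthetic truth and observations
    u = 1.0 + 0.5 * np.sin(np.arange(T) / 10.0)      # forcing (e.g. carbon input)
    x_true = np.zeros(T); x_true[0] = 10.0
    for t in range(T - 1):
        x_true[t + 1] = x_true[t] + u[t] - k_true * x_true[t]
    y_obs = x_true + rng.normal(scale=obs_std, size=T)

    # Ensembles of the state x and the parameter k
    x_ens = rng.normal(10.0, 2.0, size=N)
    k_ens = rng.uniform(0.0, 0.2, size=N)

    for t in range(T - 1):
        # Kernel smoothing of the parameter ensemble (shrink toward the mean + jitter)
        k_ens = a * k_ens + (1 - a) * k_ens.mean() + rng.normal(scale=0.002, size=N)
        # Forecast step
        x_ens = x_ens + u[t] - k_ens * x_ens
        # Analysis step: update state and parameter from the observation of x
        y_pert = y_obs[t + 1] + rng.normal(scale=obs_std, size=N)
        innov = y_pert - x_ens
        var_x = x_ens.var()
        cov_kx = np.cov(k_ens, x_ens)[0, 1]
        x_ens = x_ens + (var_x / (var_x + obs_std ** 2)) * innov
        k_ens = k_ens + (cov_kx / (var_x + obs_std ** 2)) * innov

    print(f"true k = {k_true:.3f}, estimated k = {k_ens.mean():.3f}")
    ```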

  15. Application of physicochemical properties and process parameters in the development of a neural network model for prediction of tablet characteristics.

    PubMed

    Sovány, Tamás; Papós, Kitti; Kása, Péter; Ilič, Ilija; Srčič, Stane; Pintye-Hódi, Klára

    2013-06-01

    The importance of in silico modeling in the pharmaceutical industry is continuously increasing. The aim of the present study was the development of a neural network model for prediction of the postcompressional properties of scored tablets based on the application of existing data sets from our previous studies. Some important process parameters and physicochemical characteristics of the powder mixtures were used as training factors to achieve the best applicability in a wide range of possible compositions. The results demonstrated that, after some pre-processing of the factors, an appropriate prediction performance could be achieved. However, because of the poor extrapolation capacity, broadening of the training data range appears necessary.
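    A minimal sketch of this kind of network, assuming hypothetical process and material factors and synthetic training data, might look as follows; it is not the published model or dataset.

    ```python
    # Hedged sketch: a feed-forward neural network mapping process parameters and
    # powder physicochemical properties to post-compressional tablet properties.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(3)
    n = 200
    # Hypothetical factors: compression force, turret speed, particle size, moisture, flowability
    X = rng.uniform([5, 20, 50, 1, 5], [40, 80, 300, 5, 25], size=(n, 5))
    # Hypothetical responses: tablet hardness and friability (toy relationships + noise)
    hardness = 0.3 * X[:, 0] - 0.01 * X[:, 2] + rng.normal(scale=0.5, size=n)
    friability = 1.5 - 0.02 * X[:, 0] + 0.002 * X[:, 2] + rng.normal(scale=0.05, size=n)
    Y = np.column_stack([hardness, friability])

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                       random_state=0))
    model.fit(X, Y)

    new_batch = np.array([[25.0, 50.0, 150.0, 3.0, 15.0]])
    print("predicted [hardness, friability]:", model.predict(new_batch))
    ```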

  16. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is a high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter sets. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) model within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being simultaneously calibrated. High degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
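    One simple way to encode such expert knowledge is as a penalty term added to the calibration objective, as in the hedged sketch below; the toy model, the monotonic melt-factor expectation, and all data are assumptions for illustration, not the SNOW-17/SAC-SMA configuration.

    ```python
    # Hedged sketch: expert knowledge as a penalty in an automatic calibration
    # objective. The three melt factors only affect the fit through their weighted
    # sum, so many parameter sets fit equally well (compensatory calibration);
    # the penalty selects the physically ordered one.
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(5)
    doy = np.arange(365)
    seasonal = np.maximum(0.0, np.sin(2 * np.pi * (doy - 80) / 365))   # crude melt-season signal

    def simulate(params):
        mf_low, mf_mid, mf_high, baseflow = params                     # toy hydrologic stand-in
        return baseflow + (0.2 * mf_low + 0.3 * mf_mid + 0.5 * mf_high) * 10.0 * seasonal

    obs = simulate([1.0, 1.5, 2.0, 3.0]) + rng.normal(scale=0.5, size=doy.size)

    def nse(sim):
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def expert_penalty(params):
        mf_low, mf_mid, mf_high, _ = params
        # Expert expectation: melt factors increase with elevation zone
        return max(0.0, mf_low - mf_mid) + max(0.0, mf_mid - mf_high)

    def objective(params):
        return -nse(simulate(params)) + 10.0 * expert_penalty(params)

    bounds = [(0.5, 3.0), (0.5, 3.0), (0.5, 3.0), (0.0, 5.0)]
    result = differential_evolution(objective, bounds, seed=0)
    print("calibrated parameters:", np.round(result.x, 2),
          " NSE:", round(nse(simulate(result.x)), 3))
    ```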

  17. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
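    The hybrid idea can be sketched as a firefly move followed by a greedy DE/rand/1 trial step, as below, applied to a toy two-parameter decay model; the exact hybridization used by the authors may differ, and all data are synthetic.

    ```python
    # Sketch of a hybrid firefly / differential-evolution parameter search applied
    # to a toy least-squares fit of y(t) = A*exp(-k*t). Not the authors' algorithm.
    import numpy as np

    rng = np.random.default_rng(11)

    t = np.linspace(0, 10, 30)
    true_params = np.array([2.0, 0.4])
    data = true_params[0] * np.exp(-true_params[1] * t) + rng.normal(scale=0.02, size=t.size)

    def cost(p):
        return np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

    n_fireflies, dim, iters = 20, 2, 200
    beta0, gamma, alpha, F, CR = 1.0, 1.0, 0.05, 0.5, 0.9
    lo, hi = np.array([0.0, 0.0]), np.array([5.0, 2.0])

    pop = rng.uniform(lo, hi, size=(n_fireflies, dim))
    fit = np.array([cost(p) for p in pop])

    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fit[j] < fit[i]:                      # move firefly i toward brighter j
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.normal(size=dim)
                    pop[i] = np.clip(pop[i], lo, hi)
                    fit[i] = cost(pop[i])
            # DE/rand/1 trial vector, accepted greedily if it improves the cost
            a, b, c = pop[rng.choice(n_fireflies, 3, replace=False)]
            trial = np.clip(np.where(rng.random(dim) < CR, a + F * (b - c), pop[i]), lo, hi)
            if cost(trial) < fit[i]:
                pop[i], fit[i] = trial, cost(trial)

    best = pop[np.argmin(fit)]
    print("estimated [A, k]:", best, "cost:", fit.min())
    ```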

  18. An Evolutionary Firefly Algorithm for the Estimation of Nonlinear Biological Model Parameters

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N. V.

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172

  19. Subsonic flight test evaluation of a propulsion system parameter estimation process for the F100 engine

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Gilyard, Glenn B.

    1992-01-01

    Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.

  20. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    PubMed

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies the economic and technical consequences of changing conditions in wastewater systems at the utility level. It has been developed based on data from stakeholders and ministries, collected by a survey that determined the resulting effects and the measures adopted. The model comprises all substantial cost-relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer, describing the state development of sewer systems; ii) WWTP, covering the process parameters of wastewater treatment plants (WWTP); and iii) Cost Accounting, calculating expenses in the cost categories and the resulting charges. The validity and accuracy of this model were verified using historical data from an exemplary wastewater utility. The calculated process and economic parameters show high accuracy compared with the measured parameters and actual expenses. The model is therefore proposed to support strategic, process-oriented decision making at the utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Optimization of Process Parameters of Pulsed Electro Deposition Technique for Nanocrystalline Nickel Coating Using Gray Relational Analysis (GRA)

    NASA Astrophysics Data System (ADS)

    Venkatesh, C.; Sundara Moorthy, N.; Venkatesan, R.; Aswinprasad, V.

    The moving parts of any mechanism and machine are always subjected to significant wear due to friction, so addressing wear problems is of utmost importance in the present environment. Replacing worn-out parts, however, becomes increasingly complex when the parts are highly precise. Advances in surface engineering ensure minimum surface wear through the introduction of polycrystalline nano-nickel coatings. The enhanced tribological properties of the nano-nickel coating are achieved through the refinement of grain size and the increase in hardness of the surface. This study therefore focuses on optimizing the parameters of pulsed electrodeposition used to develop such a coating. Taguchi's method coupled with gray relational analysis was employed, considering the pulse frequency, average current density, and duty cycle as the chief process parameters. The grain size and hardness were considered as the responses. In total, nine experiments were conducted as per an L9 design of experiments. Additionally, the response graph method was applied to determine the parameter with the most significant influence on both responses. In order to improve the degree of validation, a confirmation test was carried out and the gray grade predicted using the optimized parameters. A significant improvement in gray grade was observed for the optimal parameters.
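    The gray relational analysis step is mechanical enough to show in a few lines: normalize each response, form deviation sequences, compute gray relational coefficients, and average them into a grade per run. The L9 response values below are invented placeholders, not the study's measurements.

    ```python
    # Worked sketch of gray relational analysis (GRA) for two responses over nine runs.
    import numpy as np

    # Hypothetical responses: grain size (nm, smaller-better), hardness (HV, larger-better)
    grain = np.array([45, 38, 52, 30, 41, 36, 48, 33, 29], dtype=float)
    hard = np.array([420, 455, 400, 510, 440, 470, 415, 490, 520], dtype=float)

    def normalize(y, larger_is_better):
        if larger_is_better:
            return (y - y.min()) / (y.max() - y.min())
        return (y.max() - y) / (y.max() - y.min())

    def gray_coefficient(z, zeta=0.5):
        delta = 1.0 - z                          # deviation from the normalized ideal (= 1)
        return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    coeff = np.column_stack([gray_coefficient(normalize(grain, larger_is_better=False)),
                             gray_coefficient(normalize(hard, larger_is_better=True))])
    grade = coeff.mean(axis=1)                   # gray relational grade per L9 run

    best_run = int(np.argmax(grade)) + 1
    print("gray relational grades:", np.round(grade, 3))
    print("best parameter combination: run", best_run)
    ```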

  2. The degree of mutual anisotropy of biological liquids polycrystalline nets as a parameter in diagnostics and differentiations of hominal inflammatory processes

    NASA Astrophysics Data System (ADS)

    Angelsky, O. V.; Ushenko, Yu. A.; Balanetska, V. O.

    2011-09-01

    To characterize the degree of consistency of the parameters of the optically uniaxial birefringent protein nets of blood plasma, a new parameter, the complex degree of mutual anisotropy, is suggested. A technique for polarization measurement of the coordinate distributions of the complex degree of mutual anisotropy of blood plasma is developed. It is shown that a statistical approach to the analysis of the complex degree of mutual anisotropy distributions of blood plasma is effective in the diagnostics and differentiation of acute inflammatory processes, including acute and gangrenous appendicitis.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Hongyi; Li, Yang; Zeng, Danielle

    Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: sheet molding compound (SMC) short fiber composites and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction, and structure performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters, and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.

  4. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  5. Modelling the Cast Component Weight in Hot Chamber Die Casting using Combined Taguchi and Buckingham's π Approach

    NASA Astrophysics Data System (ADS)

    Singh, Rupinder

    2018-02-01

    The hot chamber (HC) die casting process is one of the most widely used commercial processes for casting low temperature metals and alloys. The process gives near-net shape products with high dimensional accuracy. However, in the actual field environment the best settings of the input parameters are often conflicting, as the shape and size of the casting change and one has to trade off among output parameters such as hardness, dimensional accuracy, casting defects and microstructure. For online inspection of cast component properties (without affecting the production line), weight measurement has been established as a cost-effective method in the field environment, since the difference in weight between sound and unsound castings reflects possible casting defects. In the present work, at the first stage the effect of three input process parameters (pressure at the second phase of HC die casting, metal pouring temperature and die opening time) was studied to optimize the cast component weight `W' as the output parameter in the form of a macro model based on a Taguchi L9 OA. Buckingham's π approach was then applied to the Taguchi-based macro model to develop a micro model. The study presents this combined Taguchi-Buckingham approach as a case study for converting a macro model into a micro model, identifying the optimum levels of the input parameters (based on the Taguchi approach) and developing a mathematical model (based on Buckingham's π approach). The developed mathematical model can be used for predicting W in the HC die casting process with greater flexibility. The results highlight a second-degree polynomial equation for predicting cast component weight in HC die casting and suggest that pressure at the second phase is one of the most significant factors controlling casting defects and the weight of the casting.

  6. Setting priorities in health care organizations: criteria, processes, and parameters of success.

    PubMed

    Gibson, Jennifer L; Martin, Douglas K; Singer, Peter A

    2004-09-08

    Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly.

  7. Fabrication of Microstripline Wiring for Large Format Transition Edge Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Chervenak, James A.; Adams, J. M.; Bailey, C. N.; Bandler, S.; Brekosky, R. P.; Eckart, M. E.; Erwin, A. E.; Finkbeiner, F. M.; Kelley, R. L.; Kilbourne, C. A.

    2012-01-01

    We have developed a process to integrate microstripline wiring with transition edge sensors (TES). The process includes additional layers for metal-etch stop and dielectric adhesion to enable recovery of parameters achieved in non-microstrip pixel designs. We report on device parameters in close-packed TES arrays achieved with the microstrip process, including R_n, G, and T_c uniformity. Further, we investigate limits of this method of producing high-density microstrip wiring, including critical current, to determine the ultimate scalability of TES arrays with two layers of wiring.

  8. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
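
    As a minimal illustration of the calibration idea described above, the sketch below fits the two parameters of a toy single-bucket (linear-reservoir) runoff model to synthetic observations with scipy's least_squares. The model structure, synthetic forcing and "observations" are invented for illustration and are not part of the chapter.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.4, scale=5.0, size=120)   # synthetic daily rainfall (mm)

def linear_reservoir(params, rain):
    """Toy single-bucket model: storage s, runoff out = k*s, other losses c*s."""
    k, c = params
    s, q = 0.0, []
    for p in rain:
        s = s + p
        out = k * s
        s = max(s - out - c * s, 0.0)
        q.append(out)
    return np.asarray(q)

# Synthetic "observed" runoff generated with known parameters plus noise,
# purely to illustrate the calibration workflow.
q_obs = linear_reservoir([0.25, 0.05], rain) + rng.normal(0.0, 0.2, size=rain.size)

def residuals(params):
    return linear_reservoir(params, rain) - q_obs

fit = least_squares(residuals, x0=[0.1, 0.1], bounds=([0.0, 0.0], [1.0, 1.0]))
print("calibrated k, c:", np.round(fit.x, 3))
```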

  9. Estimation of environment-related properties of chemicals for design of sustainable processes: development of group-contribution+ (GC+) property models and uncertainty analysis.

    PubMed

    Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul

    2012-11-26

    The aim of this work is to develop group-contribution+ (GC+) method based property models (combining the group-contribution (GC) method and the atom connectivity index (CI) method) to provide reliable estimates of environment-related properties of organic chemicals together with the uncertainties of the estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine the parameters of the property models and an uncertainty analysis step to establish statistical information about the quality of the parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values for a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.), taken from the database of the US Environmental Protection Agency (EPA) and from the USEtox database, are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and the atom connectivity index method have been considered. In total, 22 environment-related properties have been modeled and analyzed, including the fathead minnow 96-h LC50, Daphnia magna 48-h LC50, oral rat LD50, aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, and emissions to urban air, continental rural air, continental fresh water, continental seawater, continental natural soil, and continental agricultural soil (each carcinogenic and noncarcinogenic). The application of the developed property models for the estimation of environment-related properties and the uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of the environment-related properties needed to perform synthesis, design, and analysis of sustainable chemical processes, and allow one to evaluate the effect of uncertainties in the estimated property values on the calculated performance of processes, giving useful insight into the quality and reliability of the design of sustainable processes.
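
    The following sketch illustrates, in simplified form, the parameter-estimation and uncertainty-analysis steps described above: a linear group-contribution model is fit by least squares, the parameter covariance is derived from the residual variance, and a prediction with an approximate 95% confidence interval is produced. The group-occurrence matrix and property values are illustrative placeholders, not the Marrero-Gani groups or EPA/USEtox data.

```python
import numpy as np

# Hypothetical group-occurrence matrix: rows = chemicals, columns = groups
# (e.g. CH3, CH2, OH). Property values y are illustrative placeholders only.
N = np.array([
    [2, 4, 0], [1, 6, 1], [2, 2, 1], [3, 3, 0],
    [1, 8, 0], [2, 5, 1], [1, 3, 2], [2, 7, 0],
], dtype=float)
y = np.array([3.1, 4.9, 3.4, 3.6, 5.8, 4.7, 4.0, 5.2])

# Least-squares estimate of the group contributions
C, *_ = np.linalg.lstsq(N, y, rcond=None)
resid = y - N @ C
dof = N.shape[0] - N.shape[1]
sigma2 = resid @ resid / dof                     # residual variance
cov_C = sigma2 * np.linalg.inv(N.T @ N)          # parameter covariance matrix

# Predicted property and its standard error for a new chemical
n_new = np.array([2.0, 6.0, 1.0])
y_hat = n_new @ C
se = np.sqrt(n_new @ cov_C @ n_new)
print(f"prediction: {y_hat:.2f} +/- {1.96 * se:.2f} (approx. 95% CI)")
```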

  10. Comprehensive process maps for synthesizing high density aluminum oxide-carbon nanotube coatings by plasma spraying for improved mechanical and wear properties

    NASA Astrophysics Data System (ADS)

    Keshri, Anup Kumar

    Plasma sprayed aluminum oxide ceramic coating is widely used due to its outstanding wear, corrosion, and thermal shock resistance, but porosity is an integral feature of plasma sprayed coatings which exponentially degrades their properties. In this study, process maps were developed to obtain Al2O3-CNT composite coatings with the highest density (i.e. lowest porosity) and improved mechanical and wear properties. A process map is defined as a set of relationships that correlates a large number of plasma processing parameters to the coating properties. Carbon nanotubes (CNTs) were added as reinforcement to the Al2O3 coating to improve the fracture toughness and wear resistance. Two novel powder processing approaches, viz. spray drying and chemical vapor growth, were adopted to disperse CNTs in Al2O3 powder. The degree of CNT dispersion via chemical vapor deposition (CVD) was superior to spray drying, but CVD could not synthesize powder in large amounts. Hence, optimization of the plasma processing parameters and process map development was limited to spray dried Al2O3 powder containing 0, 4 and 8 wt.% CNTs. An empirical model using a Pareto diagram was developed to link the plasma processing parameters with the porosity of the coating. Splat morphology as a function of the plasma processing parameters was also studied to understand its effect on mechanical properties. Addition of a mere 1.5 wt.% CNTs via the CVD technique showed ~27% and ~24% increases in the elastic modulus and fracture toughness, respectively. The improved toughness was attributed to the combined effect of lower porosity and uniform dispersion of CNTs, which promoted toughening by CNT bridging, crack deflection and a strong CNT/Al2O3 interface. The Al2O3-8 wt.% CNT coating synthesized using spray dried powder showed a 73% improvement in fracture toughness when the porosity was reduced from 4.7% to 3.0%. The wear resistance of all coatings at room and elevated temperatures (573 K, 873 K) improved with CNT addition and decreased porosity. Such behavior was due to the improved mechanical properties, protective film formation by tribochemical reaction, and CNT bridging between the splats. Finally, process maps correlating porosity content, CNT content, mechanical properties, and wear properties were developed.

  11. Development and evaluation of a dimensionless mechanistic pan coating model for the prediction of coated tablet appearance.

    PubMed

    Niblett, Daniel; Porter, Stuart; Reynolds, Gavin; Morgan, Tomos; Greenamoyer, Jennifer; Hach, Ronald; Sido, Stephanie; Karan, Kapish; Gabbott, Ian

    2017-08-07

    A mathematical, mechanistic tablet film-coating model has been developed for pharmaceutical pan coating systems based on the mechanisms of atomisation, tablet bed movement and droplet drying with the main purpose of predicting tablet appearance quality. Two dimensionless quantities were used to characterise the product properties and operating parameters: the dimensionless Spray Flux (relating to area coverage of the spray droplets) and the Niblett Number (relating to the time available for drying of coating droplets). The Niblett Number is the ratio between the time a droplet needs to dry under given thermodynamic conditions and the time available for the droplet while on the surface of the tablet bed. The time available for drying on the tablet bed surface is critical for appearance quality. These two dimensionless quantities were used to select process parameters for a set of 22 coating experiments, performed over a wide range of multivariate process parameters. The dimensionless Regime Map created can be used to visualise the effect of interacting process parameters on overall tablet appearance quality and defects such as picking and logo bridging. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Modelling of intermittent microwave convective drying: parameter sensitivity

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food and is solved in the COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis of the microwave power level process shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have significant effects on the process, whereas the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change and only becomes influential when it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve; however, the water saturation at the medium surface and in the centre show different results. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
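
    A minimal sketch of the one-at-a-time ±20% sensitivity screening described above, using an invented scalar surrogate in place of the COMSOL multiphase drying model; the parameter names and baseline values are placeholders, not values from the study.

```python
import numpy as np

def drying_model(params):
    """Placeholder surrogate for the full drying simulation: returns a scalar
    output (e.g. final moisture content) from a parameter dictionary.
    The expression below is illustrative only."""
    return (params["ambient_T"] * 0.01
            + params["gas_diffusivity"] * 50.0
            + np.sqrt(params["evap_rate_const"]))

baseline = {"ambient_T": 300.0, "gas_diffusivity": 2.6e-5, "evap_rate_const": 1000.0}
y0 = drying_model(baseline)

# One-at-a-time +/-20% perturbation, as in the sensitivity study described above
for name in baseline:
    for factor in (0.8, 1.2):
        p = dict(baseline)
        p[name] *= factor
        rel_change = (drying_model(p) - y0) / y0
        print(f"{name:18s} x{factor:.1f}: {100 * rel_change:+6.1f} % change in output")
```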

  13. Experimental design approach to the process parameter optimization for laser welding of martensitic stainless steels in a constrained overlap configuration

    NASA Astrophysics Data System (ADS)

    Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.

    2011-02-01

    This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics; these were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were minimization of the weld width and maximization of the weld penetration depth, resistance length and shearing force. Laser power and welding speed in the ranges 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm, were identified as the optimal set of process parameters. However, the laser power and welding speed can be reduced to 800-840 W and increased to 4.75-5.37 m/min, respectively, to obtain stronger and better welds.

  14. Development of a Rational Design Space for Optimizing Mixing Conditions for Formation of Adhesive Mixtures for Dry-Powder Inhaler Formulations.

    PubMed

    Sarkar, Saurabh; Minatovicz, Bruna; Thalberg, Kyrre; Chaudhuri, Bodhisattwa

    2017-01-01

    The purpose of the present study was to develop guidance toward rational choice of blenders and processing conditions to make robust and high performing adhesive mixtures for dry-powder inhalers and to develop quantitative experimental approaches for optimizing the process. Mixing behavior of carrier (LH100) and AstraZeneca fine lactose in high-shear and low-shear double cone blenders was systematically investigated. Process variables impacting the mixing performance were evaluated for both blenders. The performance of the blenders with respect to the mixing time, press-on forces, static charging, and abrasion of carrier fines was monitored, and for some of the parameters, distinct differences could be detected. A comparison table is presented, which can be used as a guidance to enable rational choice of blender and process parameters based on the user requirements. Segregation of adhesive mixtures during hopper discharge was also investigated. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  15. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company increases its profit. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process; besides affecting production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, which is solved using the OptQuest tool of Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters so as to minimize both the production cost and the environmental impact.
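
    To make the multi-objective formulation concrete, the sketch below minimizes a weighted sum of normalized production-cost and environmental-impact surrogates over cutting speed and feed rate with scipy. The cost and impact expressions, bounds and weights are invented stand-ins for the paper's detailed machining-time and eco-indicator 99 model, which is solved with OptQuest rather than scipy.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative cost and impact surrogates; the real model in the paper uses
# detailed machining-time, tool-life and eco-indicator 99 expressions.
def production_cost(x):
    v, f = x                          # cutting speed (m/min), feed rate (mm/rev)
    t_machining = 500.0 / (v * f)     # machining time falls as speed and feed increase
    t_tool_wear = 1e-6 * v ** 2 / f   # wear-related cost grows with speed
    return t_machining + t_tool_wear

def environmental_impact(x):
    v, f = x
    return 0.002 * 500.0 / (v * f) + 1e-5 * v   # energy use plus a speed penalty

def weighted_objective(x, w=0.5):
    # Normalize by rough reference values so the two objectives are comparable
    return w * production_cost(x) / 10.0 + (1 - w) * environmental_impact(x) / 0.05

res = minimize(weighted_objective, x0=[150.0, 0.2],
               bounds=[(60.0, 300.0), (0.05, 0.5)], method="L-BFGS-B")
print("optimal cutting speed and feed rate:", np.round(res.x, 3))
```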

  16. Practice Parameter for Psychiatric Consultation to Schools

    ERIC Educational Resources Information Center

    Journal of the American Academy of Child and Adolescent Psychiatry, 2005

    2005-01-01

    This practice parameter reviews the topic of psychiatric consultation to schools. The review covers the history of school consultation and current consultative models; the process of developing a consultative relationship; school administrative procedures, personnel, and milieu; legal protections for students with mental disabilities; and issues…

  17. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    EPA Science Inventory

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  18. HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.

    PubMed

    Schroeder, M J Mark J; Perreault, Bill; Ewert, D L Daniel L; Koenig, S C Steven C

    2004-07-01

    A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ASCII or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework through which Good Laboratory Practice (GLP) compliance can be achieved. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study-specific analysis files according to their specific needs. Copyright 2003 Elsevier Ltd.

  19. Probabilistic modeling of percutaneous absorption for risk-based exposure assessments and transdermal drug delivery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Clifford Kuofei

    Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.

  20. Harmony search optimization in dimensional accuracy of die sinking EDM process using SS316L stainless steel

    NASA Astrophysics Data System (ADS)

    Deris, A. M.; Zain, A. M.; Sallehuddin, R.; Sharif, S.

    2017-09-01

    The electric discharge machine (EDM) is one of the most widely used nonconventional machining processes for hard and difficult-to-machine materials. Due to the large number of machining parameters in EDM and its complicated structure, selecting the machining parameters that give the best machining performance remains a challenging task for researchers. This paper presents an experimental investigation and optimization of the machining parameters for the EDM process on a stainless steel 316L workpiece using the Harmony Search (HS) algorithm. A mathematical model was developed using a regression approach with four input parameters (pulse on time, peak current, servo voltage and servo speed) and dimensional accuracy (DA) as the output response. The optimal result of the HS approach was compared with the regression analysis, and HS was found to give the better result, yielding the lowest DA value compared with the regression approach.
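
    A compact sketch of the harmony search idea applied to this kind of problem: the algorithm below improvises new parameter vectors from a harmony memory (with memory consideration, pitch adjustment and random play) to minimize a hypothetical stand-in for the regression model of dimensional accuracy. The memory size, HMCR, PAR and bandwidth values are illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def predicted_da(x):
    """Hypothetical stand-in for the regression model of dimensional accuracy;
    x = [pulse_on_time, peak_current, servo_voltage, servo_speed], scaled to 0-1."""
    return 0.05 + 0.04 * x[0] + 0.03 * x[1] ** 2 - 0.02 * x[2] + 0.01 * x[0] * x[3]

def harmony_search(f, dim=4, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Basic harmony search: memory consideration, pitch adjustment, random play."""
    memory = rng.random((hms, dim))
    scores = np.apply_along_axis(f, 1, memory)
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                    # pick from harmony memory
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                 # pitch adjustment
                    new[j] = np.clip(new[j] + bw * rng.uniform(-1, 1), 0.0, 1.0)
            else:                                      # random improvisation
                new[j] = rng.random()
        score = f(new)
        worst = int(np.argmax(scores))
        if score < scores[worst]:                      # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = int(np.argmin(scores))
    return memory[best], scores[best]

x_best, da_best = harmony_search(predicted_da)
print("best scaled parameters:", np.round(x_best, 3), "predicted DA:", round(da_best, 4))
```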

  1. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo

    PubMed Central

    Golightly, Andrew; Wilkinson, Darren J.

    2011-01-01

    Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
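
    The sketch below illustrates the particle MCMC idea on a much smaller example than the Lotka-Volterra and auto-regulatory networks in the paper: a bootstrap particle filter provides an unbiased log-likelihood estimate for an Euler-Maruyama (SDE) approximation of a 1-D stochastic logistic model, and a pseudo-marginal random-walk Metropolis step updates the log-parameters. The model, synthetic data, flat prior and tuning constants are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, x0=10.0, n_obs=30, dt=0.1, obs_every=5, obs_sd=2.0):
    """Synthetic observations from a 1-D stochastic logistic SDE (Euler-Maruyama)
    with additive Gaussian measurement noise. Illustration only."""
    r, K = theta
    x, obs = x0, []
    for t in range(n_obs * obs_every):
        x = max(x + r * x * (1 - x / K) * dt
                + np.sqrt(max(x, 1e-6) * dt) * rng.normal(), 0.0)
        if (t + 1) % obs_every == 0:
            obs.append(x)
    return np.array(obs) + obs_sd * rng.normal(size=n_obs)

y = simulate((0.4, 100.0))     # data generated with "true" r = 0.4, K = 100

def pf_loglik(theta, y, n_part=100, x0=10.0, dt=0.1, obs_every=5, obs_sd=2.0):
    """Bootstrap particle filter: unbiased estimate of the log-likelihood."""
    r, K = theta
    x = np.full(n_part, x0)
    ll = 0.0
    for obs in y:
        for _ in range(obs_every):       # propagate particles through the SDE
            x = np.maximum(x + r * x * (1 - x / K) * dt
                           + np.sqrt(np.maximum(x, 1e-6) * dt)
                           * rng.normal(size=n_part), 0.0)
        w = np.exp(-0.5 * ((obs - x) / obs_sd) ** 2)   # Gaussian observation weights
        if w.sum() <= 0.0:
            return -np.inf
        ll += np.log(w.mean()) - 0.5 * np.log(2 * np.pi * obs_sd ** 2)
        x = x[rng.choice(n_part, size=n_part, p=w / w.sum())]   # resample
    return ll

# Pseudo-marginal random-walk Metropolis over log(r), log(K) with a flat prior
theta, ll = np.log([0.2, 80.0]), -np.inf
chain = []
for _ in range(300):                     # deliberately short chain, illustration only
    prop = theta + 0.05 * rng.normal(size=2)
    ll_prop = pf_loglik(np.exp(prop), y)
    if np.log(rng.random()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(np.exp(theta))
chain = np.array(chain)
print("posterior mean of (r, K):", np.round(chain[150:].mean(axis=0), 2))
```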

  2. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. Developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovary (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
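
    Raman calibration models of this kind are commonly built with partial least squares; the sketch below shows such a workflow on synthetic "spectra" using scikit-learn's PLSRegression with cross-validation. The data, number of latent variables and target variable are illustrative assumptions, not the proprietary CHO datasets described above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic stand-in for Raman spectra: 60 spectra x 500 wavenumber channels.
# Two latent "chemical" signals drive both the spectra and the reference value
# (e.g. a metabolite concentration measured off-line). Purely illustrative.
latent = rng.normal(size=(60, 2))
loadings = rng.normal(size=(2, 500))
X = latent @ loadings + 0.1 * rng.normal(size=(60, 500))
y = 2.0 * latent[:, 0] - 1.0 * latent[:, 1] + 0.05 * rng.normal(size=60)

pls = PLSRegression(n_components=2)
scores = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(scores, 3))

pls.fit(X, y)
print("predicted value for first spectrum:", float(pls.predict(X[:1]).ravel()[0]))
```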

  3. Encapsulation of brewing yeast in alginate/chitosan matrix: lab-scale optimization of lager beer fermentation.

    PubMed

    Naydenova, Vessela; Badova, Mariyana; Vassilev, Stoyan; Iliev, Vasil; Kaneva, Maria; Kostov, Georgi

    2014-03-04

    Two mathematical models were developed for studying the effect of main fermentation temperature (T_MF), immobilized cell mass (M_IC) and original wort extract (OE) on beer fermentation with alginate-chitosan microcapsules with a liquid core. During the experiments, the investigated parameters were varied in order to find the optimal conditions for beer fermentation with immobilized cells. The basic beer characteristics, i.e. extract, ethanol, biomass concentration, pH and colour, as well as the concentration of aldehydes and vicinal diketones, were measured. The results suggested that the process parameters represented a powerful tool in controlling the fermentation time. Subsequently, the optimized process parameters were used to produce beer in laboratory batch fermentation. The system productivity was also investigated and the data were used for the development of another mathematical model.

  4. Encapsulation of brewing yeast in alginate/chitosan matrix: lab-scale optimization of lager beer fermentation

    PubMed Central

    Naydenova, Vessela; Badova, Mariyana; Vassilev, Stoyan; Iliev, Vasil; Kaneva, Maria; Kostov, Georgi

    2014-01-01

    Two mathematical models were developed for studying the effect of main fermentation temperature (T_MF), immobilized cell mass (M_IC) and original wort extract (OE) on beer fermentation with alginate-chitosan microcapsules with a liquid core. During the experiments, the investigated parameters were varied in order to find the optimal conditions for beer fermentation with immobilized cells. The basic beer characteristics, i.e. extract, ethanol, biomass concentration, pH and colour, as well as the concentration of aldehydes and vicinal diketones, were measured. The results suggested that the process parameters represented a powerful tool in controlling the fermentation time. Subsequently, the optimized process parameters were used to produce beer in laboratory batch fermentation. The system productivity was also investigated and the data were used for the development of another mathematical model. PMID:26019512

  5. New Ultrasonic Controller and Characterization System for Low Temperature Drying Process Intensification

    NASA Astrophysics Data System (ADS)

    Andrés, R. R.; Blanco, A.; Acosta, V. M.; Riera, E.; Martínez, I.; Pinto, A.

    Process intensification constitutes a highly interesting and promising industrial area. It aims to modify conventional processes or develop new technologies in order to reduce energy needs, increase yields and improve product quality. This research group (CSIC) has demonstrated that power ultrasound has great potential in food drying processes: the effects associated with the application of power ultrasound can enhance heat and mass transfer and may constitute a route to process intensification. The objective of this work has been the design and development of a new ultrasonic system for the power characterization of piezoelectric plate transducers, covering the excitation, monitoring, analysis, control and characterization of their nonlinear response. For this purpose, the system proposes a new, efficient and economical approach that separates the effects of the different process parameters and variables (voltage, current, frequency, impedance, vibration velocity, acoustic pressure and temperature) related to the excitation, the medium and the transducer, by observing the electrical, mechanical, acoustical and thermal behaviour and controlling the vibrational state.

  6. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
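
    A minimal numpy implementation of the Morris one-at-a-time screening idea (here restricted to +delta moves on a 4-level grid) applied to a toy five-parameter function standing in for the climate model; mu* and sigma are the usual Morris sensitivity summaries. The trajectory count, level count and test function are illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def test_model(x):
    """Toy stand-in for a model output (e.g. TOA radiation balance); x holds
    unit-interval-scaled input parameters. Illustrative only."""
    return 3.0 * x[0] + 2.0 * x[1] ** 2 + 4.0 * x[2] * x[3] + 0.1 * x[4]

def morris_screening(f, n_params, n_traj=30, levels=4):
    """Morris one-at-a-time screening: returns mu* (mean |elementary effect|)
    and sigma (std of elementary effects) for each parameter."""
    delta = levels / (2.0 * (levels - 1))
    # start levels chosen so that x + delta stays inside [0, 1]
    grid = np.arange(levels - levels // 2) / (levels - 1)
    ee = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.choice(grid, size=n_params)       # random base point on the grid
        y0 = f(x)
        for i in rng.permutation(n_params):       # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            y1 = f(x_new)
            ee[t, i] = (y1 - y0) / delta
            x, y0 = x_new, y1                      # walk along the trajectory
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

mu_star, sigma = morris_screening(test_model, n_params=5)
for i, (m, s) in enumerate(zip(mu_star, sigma)):
    print(f"parameter {i}: mu* = {m:.2f}, sigma = {s:.2f}")
```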

  7. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    NASA Astrophysics Data System (ADS)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transportation by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed both at the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies and the results of the estimation pave the way for a regional or global-scale Bayesian calibration of wetland emission models.

  8. Sludge stabilization through aerobic digestion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, R.B.; Smith, D.G.; Bennett, E.R.

    1979-10-01

    The aerobic digestion process with certain modifications is evaluated as an alternative for sludge processing capable of developing a product with characteristics required for land application. Environmental conditions, including temperature, solids concentration, and digestion time, that affect the aerobic digestion of a mixed primary sludge-trickling filter humus are investigated. Variations in these parameters that influence the characteristics of digested sludge are determined, and the parameters are optimized to: provide the maximum rate of volatile solids reduction; develop a stable, nonodorous product sludge; and provide the maximum rate of oxidation of the nitrogenous material present in the feed sludge. (3 diagrams, 9 graphs, 15 references, 3 tables)

  9. The practical use of simplicity in developing ground water models

    USGS Publications Warehouse

    Hill, M.C.

    2006-01-01

    The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.

  10. Development of an aerosol microphysical module: Aerosol Two-dimensional bin module for foRmation and Aging Simulation (ATRAS)

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Koike, M.; Kondo, Y.; Fast, J. D.; Takigawa, M.

    2014-09-01

    Number concentrations, size distributions, and mixing states of aerosols are essential parameters for accurate estimations of aerosol direct and indirect effects. In this study, we develop an aerosol module, designated the Aerosol Two-dimensional bin module for foRmation and Aging Simulation (ATRAS), that can explicitly represent these parameters by considering new particle formation (NPF), black carbon (BC) aging, and secondary organic aerosol (SOA) processes. A two-dimensional bin representation is used for particles with dry diameters from 40 nm to 10 μm to resolve both aerosol sizes (12 bins) and BC mixing states (10 bins) for a total of 120 bins. The particles with diameters between 1 and 40 nm are resolved using additional eight size bins to calculate NPF. The ATRAS module is implemented in the WRF-Chem model and applied to examine the sensitivity of simulated mass, number, size distributions, and optical and radiative parameters of aerosols to NPF, BC aging, and SOA processes over East Asia during the spring of 2009. The BC absorption enhancement by coating materials is about 50% over East Asia during the spring, and the contribution of SOA processes to the absorption enhancement is estimated to be 10-20% over northern East Asia and 20-35% over southern East Asia. A clear north-south contrast is also found between the impacts of NPF and SOA processes on cloud condensation nuclei (CCN) concentrations: NPF increases CCN concentrations at higher supersaturations (smaller particles) over northern East Asia, whereas SOA increases CCN concentrations at lower supersaturations (larger particles) over southern East Asia. The application of ATRAS in East Asia also shows that the impact of each process on each optical and radiative parameter depends strongly on the process and the parameter in question. The module can be used in the future as a benchmark model to evaluate the accuracy of simpler aerosol models and examine interactions between NPF, BC aging, and SOA processes under different meteorological conditions and emissions.

  11. Parametric study of two planar high power flexible solar array concepts

    NASA Technical Reports Server (NTRS)

    Garba, J. A.; Kudija, D. A.; Zeldin, B.; Costogue, E. N.

    1978-01-01

    The design parameters examined were: frequency, aspect ratio, packaging constraints, and array blanket flatness. Specific power-to-mass ratios for both solar arrays as a function of array frequency and array width were developed and plotted. Summaries of the baseline design data, developed equations, the computer program operation, plots of the parameters, and the process for using the information as a design manual are presented.

  12. Continuous separation of copper ions from a mixture of heavy metal ions using a three-zone carousel process packed with metal ion-imprinted polymer.

    PubMed

    Jo, Se-Hee; Lee, See-Young; Park, Kyeong-Mok; Yi, Sung Chul; Kim, Dukjoon; Mun, Sungyong

    2010-11-05

    In this study, a three-zone carousel process based on a proper molecular imprinted polymer (MIP) resin was developed for continuous separation of Cu2+ from Mn2+ and Co2+. For this task, the Cu(II)-imprinted polymer (Cu-MIP) resin was synthesized first and used to pack the chromatographic columns of a three-zone carousel process. Prior to the experiment of the carousel process based on the Cu-MIP resin (MIP-carousel process), a series of single-column experiments were performed to estimate the intrinsic parameters of the three heavy metal ions and to find out the appropriate conditions of regeneration and re-equilibration. The results from these single-column experiments and the additional computer simulations were then used for determination of the operating parameters of the MIP-carousel process under consideration. Based on the determined operating parameters, the MIP-carousel experiments were carried out. It was confirmed from the experimental results that the proposed MIP-carousel process was markedly effective in separating Cu2+ from Mn2+ and Co2+ in a continuous mode with high purity and a relatively small loss. Thus, the MIP-carousel process developed in this study deserves sufficient attention in materials processing industries or metal-related industries, where the selective separation of heavy metal ions with the same charge has been a major concern. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Integrated Application of Quality-by-Design Principles to Drug Product Development: A Case Study of Brivanib Alaninate Film-Coated Tablets.

    PubMed

    Badawy, Sherif I F; Narang, Ajit S; LaMarche, Keirnan R; Subramanian, Ganeshkumar A; Varia, Sailesh A; Lin, Judy; Stevens, Tim; Shah, Pankaj A

    2016-01-01

    Modern drug product development is expected to follow quality-by-design (QbD) paradigm. At the same time, although there are several issue-specific examples in the literature that demonstrate the application of QbD principles, a holistic demonstration of the application of QbD principles to drug product development and control strategy, is lacking. This article provides an integrated case study on the systematic application of QbD to product development and demonstrates the implementation of QbD concepts in the different aspects of product and process design for brivanib alaninate film-coated tablets. Using a risk-based approach, the strategy for development entailed identification of product critical quality attributes (CQAs), assessment of risks to the CQAs, and performing experiments to understand and mitigate identified risks. Quality risk assessments and design of experiments were performed to understand the quality of the input raw materials required for a robust formulation and the impact of manufacturing process parameters on CQAs. In addition to the material property and process parameter controls, the proposed control strategy includes use of process analytical technology and conventional analytical tests to control in-process material attributes and ensure quality of the final product. Copyright © 2016. Published by Elsevier Inc.

  14. Automated data acquisition technology development:Automated modeling and control development

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.

    1995-01-01

    This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rack-mounted PCs. This research was initiated because the Metal Processing Branch of NASA Marshall Space Flight Center identified a need for a mobile data acquisition and data analysis system customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this thesis, WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage versus current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can then be used to develop a RAIL function to control welding startup and shutdown without torch crashing.

  15. Predictive Modeling and Optimization of Vibration-assisted AFM Tip-based Nanomachining

    NASA Astrophysics Data System (ADS)

    Kong, Xiangcheng

    The tip-based vibration-assisted nanomachining process offers a low-cost, low-effort technique for fabricating nanometer-scale 2D/3D structures in the sub-100 nm regime. To understand its mechanism, and to provide guidelines for process planning and optimization, we have systematically studied this nanomachining technique in this work. To understand the mechanism, we first analyzed the interaction between the AFM tip and the workpiece surface during the machining process. A 3D voxel-based numerical algorithm has been developed to calculate the material removal rate as well as the contact area between the AFM tip and the workpiece surface. As a critical factor in understanding the mechanism of this nanomachining process, the cutting force has been analyzed and modeled. A semi-empirical model has been proposed by correlating the cutting force with the material removal rate, and was validated using experimental data from different machining conditions. With this understanding of the mechanism, we have developed guidelines for process planning of this nanomachining technique. To provide a guideline for parameter selection, the effect of machining parameters on the feature dimensions (depth and width) has been analyzed. Based on ANOVA test results, the feature width is controlled only by the XY vibration amplitude, while the feature depth is affected by several machining parameters such as setpoint force and feed rate. A semi-empirical model was first proposed to predict the machined feature depth under a given machining condition. Then, to reduce the computational intensity, linear and nonlinear regression models were also proposed and validated using experimental data. Given the desired feature dimensions, feasible machining parameters can be provided using these predictive feature dimension models. As tip wear is unavoidable during the machining process, the machining precision will gradually decrease. To maintain machining quality, a guideline for when to change the tip should be provided. In this study, we have developed several metrics to detect tip wear, such as the tip radius and the pull-off force. The effect of machining parameters on the tip wear rate has been studied using these metrics, and the machining distance before a tip must be changed has been modeled using these machining parameters. Finally, optimization functions have been built for unit production time and unit production cost subject to realistic constraints, and the optimal machining parameters can be found by solving these functions.

  16. Friction Pull Plug Welding in Aluminum Alloys

    NASA Technical Reports Server (NTRS)

    Brooke, Shane A.; Bradford, Vann; Burkholder, Jonathon

    2011-01-01

    NASA's Marshall Space Flight Center (MSFC) has recently invested much time and effort into the process development of Friction Pull Plug Welding (FPPW). FPPW is a welding process similar to Friction Push Plug Welding in that there is a small rotating part (plug) being spun and simultaneously pulled (forged) into a larger part. These two processes differ in that push plug welding requires an internal reaction support, while pull plug welding reacts to the load externally. FPPW was originally conceived as a post proof repair technique for the External Tank. FPPW was easily selected as the primary process used to close out the termination hole on the Constellation Program's ARES I Upper Stage circumferential Self-Reacting Friction Stir Welds (SR-FSW). The versatility of FPPW allows it to also be used as a repair technique for both SR-FSW and Conventional Friction Stir Welds. To date, all MSFC led development has been concentrated on aluminum alloys (2195, 2219, and 2014). Much work has been done to fully understand and characterize the process's limitations. A heavy emphasis has been spent on plug design, to match the various weldland thicknesses and alloy combinations. This presentation will summarize these development efforts including weld parameter development, process control, parameter sensitivity studies, plug repair techniques, material properties including tensile, fracture and failure analysis.

  17. Friction Pull Plug Welding in Aluminum Alloys

    NASA Technical Reports Server (NTRS)

    Brooke, Shane A.; Bradford, Vann

    2012-01-01

    NASA's Marshall Space Flight Center (MSFC) has recently invested much time and effort into the process development of Friction Pull Plug Welding (FPPW). FPPW is a welding process similar to Friction Push Plug Welding in that there is a small rotating part (plug) being spun and simultaneously pulled (forged) into a larger part. These two processes differ in that push plug welding requires an internal reaction support, while pull plug welding reacts to the load externally. FPPW was originally conceived as a post proof repair technique for the Space Shuttle's External Tank. FPPW was easily selected as the primary weld process used to close out the termination hole on the Constellation Program's ARES I Upper Stage circumferential Self-Reacting Friction Stir Welds (SR-FSW). The versatility of FPPW allows it to also be used as a repair technique for both SR-FSW and Conventional Friction Stir Welds. To date, all MSFC led development has been concentrated on aluminum alloys (2195, 2219, and 2014). Much work has been done to fully understand and characterize the process's limitations. A heavy emphasis has been spent on plug design, to match the various weldland thicknesses and alloy combinations. This presentation will summarize these development efforts including weld parameter development, process control, parameter sensitivity studies, plug repair techniques, material properties including tensile, fracture and failure analysis.

  18. A hybrid artificial neural network as a software sensor for optimal control of a wastewater treatment process.

    PubMed

    Choi, D J; Park, H

    2001-11-01

    For control and automation of biological treatment processes, lack of reliable on-line sensors to measure water quality parameters is one of the most important problems to overcome. Many parameters cannot be measured directly with on-line sensors. The accuracy of existing hardware sensors is also not sufficient and maintenance problems such as electrode fouling often cause trouble. This paper deals with the development of software sensor techniques that estimate the target water quality parameter from other parameters using the correlation between water quality parameters. We focus our attention on the preprocessing of noisy data and the selection of the best model feasible to the situation. Problems of existing approaches are also discussed. We propose a hybrid neural network as a software sensor inferring wastewater quality parameter. Multivariate regression, artificial neural networks (ANN), and a hybrid technique that combines principal component analysis as a preprocessing stage are applied to data from industrial wastewater processes. The hybrid ANN technique shows an enhancement of prediction capability and reduces the overfitting problem of neural networks. The result shows that the hybrid ANN technique can be used to extract information from noisy data and to describe the nonlinearity of complex wastewater treatment processes.
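
    A sketch of the hybrid soft-sensor idea described above, assuming a scikit-learn pipeline that chains standardization, PCA preprocessing and a small multilayer perceptron. The synthetic "plant" data, the number of principal components and the network size are illustrative assumptions rather than details from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic stand-in for noisy, correlated on-line measurements (flow, pH,
# conductivity, temperature, ...) used to infer a hard-to-measure target such
# as effluent COD. Values are illustrative only.
X = rng.normal(size=(300, 8))
X[:, 4:] = X[:, :4] @ rng.normal(size=(4, 4)) + 0.3 * rng.normal(size=(300, 4))
y = 30.0 + 5.0 * X[:, 0] - 3.0 * X[:, 1] ** 2 + 2.0 * X[:, 2] * X[:, 3] \
    + 1.0 * rng.normal(size=300)

# Hybrid soft sensor: PCA compresses the correlated/noisy inputs before the ANN
soft_sensor = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0),
)
r2 = cross_val_score(soft_sensor, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", np.round(r2, 3))
```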

  19. Parameter-induced uncertainty quantification of crop yields, soil N2O and CO2 emission for 8 arable sites across Europe using the LandscapeDNDC model

    NASA Astrophysics Data System (ADS)

    Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf

    2014-05-01

    When using biogeochemical models to estimate greenhouse gas emissions at the site to regional/national level, the assessment and quantification of the uncertainties of the simulation results are of significant importance. The uncertainties in the simulation results of process-based ecosystem models may result from uncertainties in the process parameters that describe the model's processes, from model structure inadequacy, and from uncertainties in the observations. The data for developing and testing the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC to simulate crop yields and N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Gelman convergence statistics and parallel computing techniques enable multiple Markov chains to run independently in parallel and create a random walk to estimate the joint model parameter distribution. From this distribution we limit the parameter space, obtain probabilities of parameter values and find the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of the simulation results and compare them with the measurement data.

  20. Optimization of process parameters of pulsed TIG welded maraging steel C300

    NASA Astrophysics Data System (ADS)

    Deepak, P.; Jualeash, M. J.; Jishnu, J.; Srinivasan, P.; Arivarasu, M.; Padmanaban, R.; Thirumalini, S.

    2016-09-01

    Pulsed TIG welding technology provides excellent welding performance on thin sections, which helps to increase productivity, enhance weld quality, minimize weld costs and boost operator efficiency, and this has drawn the attention of the welding community. Maraging C300 steel is extensively used in the defence and aerospace industries, so its welding is of paramount importance. In pulsed TIG welding, weld quality depends on the process parameters used. In this work, pulsed TIG bead-on-plate welding was performed on a 5 mm thick maraging C300 plate at different combinations of the input parameters peak current (Ip), base current (Ib) and pulsing frequency (Hz), as per a Box-Behnken design with three levels for each factor. Response surface methodology is utilized to establish a mathematical model for predicting the weld bead depth, and the effect of Ip, Ib and Hz on the weld bead depth is investigated using the developed model. The weld bead depth is found to be affected by all three parameters. Surface and contour plots developed from the regression equation are used to optimize the processing parameters for maximizing the weld bead depth. The optimum values of Ip, Ib and Hz are obtained as 259 A, 120 A and 8 Hz respectively; under this optimum condition, the maximum weld bead depth is predicted to be 4.325 mm.
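
    The sketch below reproduces the general response-surface workflow described above: a full quadratic model in the coded Box-Behnken factors is fit by least squares and the coded design space is searched for the maximum predicted bead depth. The bead-depth values are invented placeholders, so the resulting optimum is illustrative and will not match the 259 A / 120 A / 8 Hz optimum reported in the paper.

```python
import numpy as np
from itertools import product

# Full quadratic feature vector in the coded factors (Ip, Ib, pulse frequency)
def features(x):
    ip, ib, hz = x
    return np.array([1, ip, ib, hz, ip*ib, ip*hz, ib*hz, ip**2, ib**2, hz**2])

# Coded Box-Behnken runs (-1, 0, +1); the depth values are illustrative only.
runs = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)
depth = np.array([3.1, 3.9, 3.3, 4.2, 3.0, 3.8, 3.4, 4.1,
                  3.2, 3.5, 3.6, 3.9, 3.7, 3.7, 3.8])

A = np.vstack([features(x) for x in runs])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)     # quadratic response surface

# Grid search over the coded design space for the maximum predicted bead depth
grid = np.linspace(-1, 1, 41)
best = max(product(grid, repeat=3), key=lambda x: features(x) @ coef)
print("coded optimum (Ip, Ib, Hz):", np.round(best, 2),
      "predicted depth:", round(float(features(best) @ coef), 3))
```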

  1. Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.

    PubMed

    Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran

    2018-06-15

    Monitoring of the gas and leachate parameters of a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closure, but its definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the optimization of the monitoring process, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters that describe the physical, chemical and biological processes in a closed methanogenic landfill while making the monitoring less expensive. The research included the development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open source software GeoGebra, which is often used for solving optimization problems. The optimization identified the most favourable monitoring model, which fulfils all the defined criteria not only from the point of view of the mathematical analysis but also from the point of view of environmental protection. The final outcome of this research is a precise definition of the minimal set of parameters that should be included in landfill monitoring. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. A Knowledge Database on Thermal Control in Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Hirasawa, Shigeki; Satoh, Isao

    A prototype version of a knowledge database on thermal control in manufacturing processes, specifically molding, semiconductor manufacturing, and micro-scale manufacturing, has been developed. The knowledge database has search functions for technical data, evaluated benchmark data, academic papers, and patents. The database also displays trends and future roadmaps for research topics. It has quick-calculation functions for basic design. This paper summarizes present research topics and future research on thermal control in manufacturing engineering to collate the information into the knowledge database. In the molding process, the initial mold and melt temperatures are very important parameters. In addition, thermal control is related to many semiconductor processes, and the main parameter is temperature variation in wafers; accurate in-situ temperature measurement of wafers is important. Many technologies are also being developed to manufacture micro-structures. Accordingly, the knowledge database will help further advance these technologies.

  3. Fabrication of large area woodpile structure in polymer

    NASA Astrophysics Data System (ADS)

    Gupta, Jaya Prakash; Dutta, Neilanjan; Yao, Peng; Sharkawy, Ahmed S.; Prather, Dennis W.

    2009-02-01

    A fabrication process for three-dimensional woodpile photonic crystals based on multilayer photolithography with the commercially available photoresist SU8 has been demonstrated. A 6-layer, 2 mm × 2 mm woodpile has been fabricated. Different factors that influence the spin thickness upon multiple resist applications have been studied. The fabrication method removes the problem of intermixing and is more repeatable and robust than previously reported multilayer fabrication techniques for three-dimensional photonic crystal structures. Each layer is developed before the next layer of photoresist is spun, instead of developing the whole structure in the final step as in the usual multilayer process. The desired thickness for each layer is achieved by calibration of the spin speed and use of different photoresist compositions. Deep UV exposure confinement has been the defining parameter in this process. The uniformity of every layer is independent of the previously developed layers and depends on the photoresist planarizing capability, spin parameters and baking conditions. The intermixing problem, which results from uncrosslinked photoresist left in the previous layers, is completely eliminated in this process because the previous layers are fully developed, avoiding any intermixing between the newly spun and previous layers. This process also gives the freedom to redo any spin any number of times without affecting the previously made structure, which is not possible in other multilayer processes where intermediate developing is not performed.

  4. Parameter Stability of the Functional–Structural Plant Model GREENLAB as Affected by Variation within Populations, among Seasons and among Growth Stages

    PubMed Central

    Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael

    2007-01-01

    Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141

  5. A Computational Study on Porosity Evolution in Parts Produced by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Tan, J. L.; Tang, C.; Wong, C. H.

    2018-06-01

    Selective laser melting (SLM) is a powder-bed additive manufacturing process that uses a laser to melt powders layer by layer to generate a functional 3D part. There are many different parameters, such as laser power, scanning speed, and layer thickness, which play a role in determining the quality of the printed part. These parameters contribute to the energy density applied to the powder bed. Defects arise when insufficient or excess energy density is applied. A common defect in these cases is the presence of porosity. This paper studies the formation of porosities when inappropriate energy densities are used. A computational model was developed to simulate the melting and solidification process of SS316L powders in the SLM process. Three different sets of process parameters were used to produce 800-µm-long melt tracks, and the characteristics of the porosities were analyzed. It was found that when low energy density parameters were used, the pores were irregular in shape and were located near the top surface of the powder bed. However, when high energy density parameters were used, the pores were either elliptical or spherical in shape and were usually located near the bottom of the keyholes.
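
    A short illustration of the volumetric energy density that these parameters combine into, using the commonly cited relation E = P / (v · h · t); the parameter values below are placeholders, not those of the study:

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """E = P / (v * h * t), in J/mm^3, a common SLM process metric."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Placeholder SS316L-like settings: low, nominal and high energy density cases
for p, v in [(120, 1200), (195, 1083), (250, 600)]:
    e = volumetric_energy_density(p, v, hatch_mm=0.09, layer_mm=0.03)
    print(f"P={p} W, v={v} mm/s -> E = {e:.1f} J/mm^3")
```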

  6. Box-Behnken Design of Experiments Investigation of Hydroxyapatite Synthesis for Orthopedic Applications

    NASA Astrophysics Data System (ADS)

    Kehoe, S.; Stokes, J.

    2011-03-01

    Physicochemical properties of hydroxyapatite (HAp) synthesized by the chemical precipitation method are heavily dependent on the chosen process parameters. A Box-Behnken three-level experimental design was therefore chosen to determine the optimum set of process parameters and their effect on various HAp characteristics. These effects were quantified using design of experiments (DoE) to develop mathematical models, based on the Box-Behnken design, in terms of the chemical precipitation process parameters. Findings from this research show that HAp possessing optimum powder characteristics for orthopedic application via a thermal spray technique can be prepared using the following chemical precipitation process parameters: reaction temperature 60 °C, ripening time 48 h, and stirring speed 1500 rpm using high reagent concentrations. Ripening time and stirring speed significantly affected the final phase purity for the experimental conditions of the Box-Behnken design. An increase in both the ripening time (36-48 h) and stirring speed (1200-1500 rpm) was found to result in an increase of phase purity from 47(±2)% to 85(±2)%. Crystallinity, crystallite size, lattice parameters, and mean particle size were also optimized within the research to find settings that achieve results suitable for FDA regulations.
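
    For reference, a three-factor Box-Behnken plan of the kind used here can be generated directly; the factor names follow the abstract (reaction temperature, ripening time, stirring speed), but the coding and the lower factor levels below are illustrative assumptions:

```python
import itertools
import numpy as np

def box_behnken_3factor(n_center=3):
    """12 edge-midpoint runs (pairs of factors at +/-1, third at 0) plus centre points."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * n_center
    return np.array(runs, dtype=float)

# Decode from (-1, 0, +1) to physical settings: temperature (degC), ripening time (h), stirring (rpm)
low = np.array([40.0, 24.0, 900.0])      # assumed low levels, for illustration only
high = np.array([60.0, 48.0, 1500.0])    # high levels quoted as the optimum in the abstract
coded = box_behnken_3factor()
physical = low + (coded + 1) / 2 * (high - low)
print(physical)
```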

  7. Improved model reduction and tuning of fractional-order PI(λ)D(μ) controllers for analytical rule extraction with genetic programming.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava

    2012-03-01

    Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
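
    A minimal sketch of the kind of weighted time-domain cost (error index plus controller effort) that such tuning minimizes, here evaluated for an ordinary PID on a first-order-plus-delay process discretized in pure NumPy; the plant, gains and weights are placeholders, not the paper's test-bench of higher-order processes:

```python
import numpy as np

def pid_cost(Kp, Ki, Kd, w_e=1.0, w_u=0.01, dt=0.01, t_end=10.0, tau=1.0, delay=0.2):
    """Weighted cost J = w_e * ITAE + w_u * integral(u^2) for a unit step setpoint."""
    n = int(t_end / dt)
    d = int(delay / dt)
    y = np.zeros(n)                        # process output
    u_hist = np.zeros(n)                   # control input history (for the dead time)
    integ, prev_err, J_e, J_u = 0.0, 0.0, 0.0, 0.0
    for k in range(1, n):
        err = 1.0 - y[k - 1]
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = Kp * err + Ki * integ + Kd * deriv
        u_hist[k] = u
        u_delayed = u_hist[k - d] if k >= d else 0.0
        y[k] = y[k - 1] + dt * (-y[k - 1] + u_delayed) / tau   # first-order plant with delay
        J_e += (k * dt) * abs(err) * dt    # ITAE error index
        J_u += u * u * dt                  # control effort term
    return w_e * J_e + w_u * J_u

print(pid_cost(Kp=2.0, Ki=1.0, Kd=0.2))
```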

  8. Multiphase porous media modelling: A novel approach to predicting food processing performance.

    PubMed

    Khan, Md Imran H; Joardder, M U H; Kumar, Chandan; Karim, M A

    2018-03-04

    The development of a physics-based model of food processing is essential to improve the quality of processed food and optimize energy consumption. Food materials, particularly plant-based food materials, are complex in nature as they are porous and have hygroscopic properties. A multiphase porous media model for simultaneous heat and mass transfer can provide a realistic understanding of transport processes and thus can help to optimize energy consumption and improve food quality. Although the development of a multiphase porous media model for food processing is a challenging task because of its complexity, many researchers have attempted it. The primary aim of this paper is to present a comprehensive review of the multiphase models available in the literature for different methods of food processing, such as drying, frying, cooking, baking, heating, and roasting. A critical review of the parameters that should be considered for multiphase modelling is presented which includes input parameters, material properties, simulation techniques and the hypotheses. A discussion on the general trends in outcomes, such as moisture saturation, temperature profile, pressure variation, and evaporation patterns, is also presented. The paper concludes by considering key issues in the existing multiphase models and future directions for development of multiphase models.

  9. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    USDA-ARS?s Scientific Manuscript database

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  10. Product development using process monitoring and NDE data fusion

    NASA Astrophysics Data System (ADS)

    Peterson, Todd; Bossi, Richard H.

    1998-03-01

    Composite process/product development relies on both process monitoring information and nondestructive evaluation (NDE) measurements for determining application suitability. In the past these activities have been performed and analyzed independently. Our present approach is to bring the process monitoring and NDE data together in a data fusion workstation. This methodology leads to final product acceptance based on combined process monitoring and NDE criteria. The data fusion workstation combines process parameter and NDE data in a single workspace, enabling all the data to be used in the acceptance/rejection decision process. An example application is the induction welding process, a unique joining method for assembling primary composite structure that offers significant cost and weight advantages over traditional fastened structure. The determination of the required time, temperature and pressure conditions used in the process to achieve a complete weld is being aided by the use of ultrasonic inspection techniques. Full-waveform ultrasonic inspection data is employed to evaluate the quality of the spar cap to skin fit, an essential element of the welding process, and is processed to find a parameter that can be used for weld acceptance. Certification of the completed weld incorporates the data fusion methodology.

  11. A Bayesian Uncertainty Framework for Conceptual Snowmelt and Hydrologic Models Applied to the Tenderfoot Creek Experimental Forest

    NASA Astrophysics Data System (ADS)

    Smith, T.; Marshall, L.

    2007-12-01

    In many mountainous regions, the single most important parameter in forecasting the controls on regional water resources is snowpack (Williams et al., 1999). In an effort to bridge the gap between theoretical understanding and functional modeling of snow-driven watersheds, a flexible hydrologic modeling framework is being developed. The aim is to create a suite of models that move from parsimonious structures, concentrated on aggregated watershed response, to those focused on representing finer scale processes and distributed response. This framework will operate as a tool to investigate the link between hydrologic model predictive performance, uncertainty, model complexity, and observable hydrologic processes. Bayesian methods, and particularly Markov chain Monte Carlo (MCMC) techniques, are extremely useful in uncertainty assessment and parameter estimation of hydrologic models. However, these methods have some difficulties in implementation. In a traditional Bayesian setting, it can be difficult to reconcile multiple data types, particularly those offering different spatial and temporal coverage, depending on the model type. These difficulties are also exacerbated by the sensitivity of MCMC algorithms to model initialization and by complex parameter interdependencies. As a way of circumventing some of the computational complications, adaptive MCMC algorithms have been developed to take advantage of the information gained from each successive iteration. Two adaptive algorithms are compared in this study: the Adaptive Metropolis (AM) algorithm, developed by Haario et al. (2001), and the Delayed Rejection Adaptive Metropolis (DRAM) algorithm, developed by Haario et al. (2006). While neither algorithm is truly Markovian, it has been proven that each satisfies the desired ergodicity and stationarity properties of Markov chains. Both algorithms were implemented as the uncertainty and parameter estimation framework for a conceptual rainfall-runoff model based on the Probability Distributed Model (PDM) developed by Moore (1985). We implement the modeling framework in the Stringer Creek watershed in the Tenderfoot Creek Experimental Forest (TCEF), Montana. The snowmelt-driven watershed offers the additional challenge of modeling snow accumulation and melt, and current efforts are aimed at developing a temperature- and radiation-index snowmelt model. Auxiliary data available from within TCEF's watersheds are used to support the understanding of information value as it relates to predictive performance. Because the model is based on lumped parameters, auxiliary data are hard to incorporate directly. However, these additional data offer benefits through the ability to inform prior distributions of the lumped model parameters. By incorporating data offering different information into the uncertainty assessment process, a cross-validation technique is engaged to better ensure that modeled results reflect real process complexity.
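
    A compact sketch of the Adaptive Metropolis idea compared above (proposal covariance updated from the chain history, after Haario et al., 2001), with a toy two-parameter posterior standing in for the rainfall-runoff model; the dimensions, scaling constants and target are illustrative only:

```python
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=10000, t0=500, eps=1e-6, seed=0):
    """Random-walk Metropolis whose proposal covariance adapts to the sampled history."""
    rng = np.random.default_rng(seed)
    d = len(theta0)
    sd = 2.4 ** 2 / d                        # scaling factor suggested by Haario et al.
    chain = np.empty((n_iter, d))
    theta = np.asarray(theta0, float)
    logp = log_post(theta)
    cov = np.eye(d) * 0.1                    # initial non-adaptive proposal covariance
    for i in range(n_iter):
        if i > t0:                           # start adapting after an initial period
            cov = sd * (np.cov(chain[:i].T) + eps * np.eye(d))
        proposal = rng.multivariate_normal(theta, cov)
        logp_new = log_post(proposal)
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        chain[i] = theta
    return chain

# Toy correlated Gaussian posterior standing in for the PDM parameter posterior
prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_post = lambda t: -0.5 * t @ prec @ t
samples = adaptive_metropolis(log_post, np.zeros(2))
print("posterior mean estimate:", samples[2000:].mean(axis=0))
```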

  12. Extensions of Rasch's Multiplicative Poisson Model.

    ERIC Educational Resources Information Center

    Jansen, Margo G. H.; van Duijn, Marijtje A. J.

    1992-01-01

    A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)

  13. Upscaling from research watersheds: an essential stage of trustworthy general-purpose hydrologic model building

    NASA Astrophysics Data System (ADS)

    McNamara, J. P.; Semenova, O.; Restrepo, P. J.

    2011-12-01

    Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed are too specific to that watershed and not representative even of the larger watershed that contains it. Thus, models developed based on those partial observations may not be suitable for general hydrologic use. Demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is therefore essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose, process-based, distributed hydrologic system. In its applications and further development we evaluate the scaling of model concepts and parameters in a wide range of hydrologic landscapes. All models, either lumped or distributed, are based on a discretization concept. It is common practice that watersheds are discretized into so-called hydrologic units or hydrologic landscapes possessing assumed homogeneous hydrologic functioning. If a model structure is fixed, differences in hydrologic functioning (differences in hydrologic landscapes) should be reflected by specific sets of model parameters. Research watersheds allow reasonably detailed combination of processes into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. Here, by upscaling we mean not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km2) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results in terms of runoff and variable states of soil and snow over the simulation period 2000-2009. The parameters of the model were hand-adjusted based on rational judgment, observational data and available understanding of underlying processes. For the first run, some processes, such as the impact of riparian vegetation on runoff and streamflow/groundwater interaction, were handled in a conceptual way. It was shown that the use of the Hydrograph model, which requires a modest amount of parameter calibration, may also serve as a quality control for observations. Based on the obtained parameter values and the process understanding gained at the research watershed, the model was applied to larger watersheds located in a similar environment: the Boise River at South Fork (1660 km2) and Twin Springs (2155 km2). The evaluation of the results of this upscaling will be presented.

  14. Manufacturing process modeling for composite materials and structures, Sandia blade reliability collaborative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guest, Daniel A.; Cairns, Douglas S.

    2014-02-01

    The increased use of and interest in wind energy over the last few years has necessitated an increase in the manufacturing of wind turbine blades. This increase in manufacturing has in many ways outstripped the current understanding of not only the materials used but also the manufacturing methods used to construct composite laminates. The goal of this study is to develop a list of process parameters which influence the quality of composite laminates manufactured using vacuum assisted resin transfer molding and to evaluate how they influence laminate quality. Known primary factors for the manufacturing process are resin flow rate and vacuum pressure. An incorrect balance of these parameters will often cause porosity or voids in laminates that ultimately degrade the strength of the composite. Fiber waviness has also been seen as a major contributor to failures in wind turbine blades and is often the effect of mishandling during the lay-up process. Based on laboratory tests conducted, a relationship between these parameters and laminate quality has been established which will be a valuable tool in developing best practices and standard procedures for the manufacture of wind turbine blade composites.

  15. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indices require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially, to approximately 25 for each of the three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
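
    As an illustration of the Sobol step that follows the screening, a minimal example assuming the SALib package, applied to a cheap stand-in function (the problem definition, parameter names and model are placeholders for the screened NOAH-MP parameters, not the actual LSM):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in problem: three screened parameters with unit ranges (names are illustrative)
problem = {
    "num_vars": 3,
    "names": ["soil_b", "wilting_point", "stomatal_resistance"],
    "bounds": [[0.0, 1.0]] * 3,
}

param_values = saltelli.sample(problem, 1024)          # Saltelli sampling scheme

def toy_model(x):
    """Cheap surrogate standing in for a NOAH-MP output such as latent heat."""
    return np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

Y = toy_model(param_values)
Si = sobol.analyze(problem, Y)
print("first-order indices:", Si["S1"])
print("total-order indices:", Si["ST"])
```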

  16. Process development for robust removal of aggregates using cation exchange chromatography in monoclonal antibody purification with implementation of quality by design.

    PubMed

    Xu, Zhihao; Li, Jason; Zhou, Joe X

    2012-01-01

    Aggregate removal is one of the most important aspects of monoclonal antibody (mAb) purification. Cation-exchange chromatography (CEX), a widely used polishing step in mAb purification, is able to clear both process-related impurities and product-related impurities. In this study, with the implementation of quality by design (QbD), a process development approach for robust removal of aggregates using CEX is described. First, resin screening studies were performed and a suitable CEX resin was chosen because of its relatively better selectivity and higher dynamic binding capacity. Second, a pH-conductivity hybrid gradient elution method for the CEX was established, and a risk assessment for the process was carried out. Third, a process characterization study was used to evaluate the impact of the potentially important process parameters on the process performance with respect to aggregate removal. Accordingly, a process design space was established. The aggregate level in the load is the critical parameter; its operating range is set at 0-3% and its acceptable range at 0-5%. The equilibration buffer is the key parameter; its operating range is set at 40 ± 5 mM acetate, pH 5.0 ± 0.1, and its acceptable range at 40 ± 10 mM acetate, pH 5.0 ± 0.2. Elution buffer, load mass, and gradient elution volume are non-key parameters; their operating ranges and acceptable ranges are both set at 250 ± 10 mM acetate, pH 6.0 ± 0.2, 45 ± 10 g/L resin, and 10 ± 20% CV, respectively. Finally, the process was scaled up 80 times and the impurity removal profiles were revealed. Three scaled-up runs showed that the size-exclusion chromatography (SEC) purity of the CEX pool was 99.8% or above and the step yield was above 92%, thereby proving that the process is both consistent and robust.

  17. Modeling and Experiment of Melt Impregnation of Continuous Fiber-reinforced Thermoplastic with Pins

    NASA Astrophysics Data System (ADS)

    Yang, Jian-Jun; Xin, Chun-Ling; Tang, Ke; Zhang, Zhi-Cheng; Yan, Bao-Rui; Ren, Feng; He, Ya-Dong

    2016-05-01

    Melt impregnation is a crucial method for producing continuous fiber-reinforced thermoplastics. It was developed years ago for thermosetting plastics, but it is now widely applied to thermoplastic matrices, which have a much higher viscosity. In this paper, we propose a mathematical model based on Darcy's law that combines processing parameters with the physical parameters of the materials. We then use this model to predict the influence of processing parameters on the degree of impregnation of the prepreg, and the predicted trends are consistent with the experimental results. The exhaustive numerical study therefore makes it possible to define the optimal processing conditions for complete impregnation. The results are shown to be effective tools for finding the optimal pulling speed, pin number and pressure for a given fluid/fiber pair.
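
    A minimal sketch of a Darcy-type impregnation estimate of the kind such models build on, assuming the classical penetration-depth relation h = sqrt(2 K ΔP t / (μ φ)) and a contact time set by pin contact length over pulling speed; all symbols and values are illustrative, not taken from the paper:

```python
import numpy as np

def impregnation_degree(pull_speed_m_s, n_pins, contact_len_m=0.02,
                        K=1e-11, dP=2e5, mu=300.0, phi=0.5, tow_half_thickness_m=2e-4):
    """Fraction of tow thickness impregnated, from a Darcy-flow penetration depth."""
    t_contact = n_pins * contact_len_m / pull_speed_m_s       # total residence time on the pins
    depth = np.sqrt(2.0 * K * dP * t_contact / (mu * phi))    # Darcy penetration depth (m)
    return min(1.0, depth / tow_half_thickness_m)

for v in (0.02, 0.05, 0.1):   # pulling speed in m/s
    print(f"v = {v} m/s -> degree of impregnation = {impregnation_degree(v, n_pins=5):.2f}")
```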

  18. Examining Mechanical Strength Characteristics of Selective Inhibition Sintered HDPE Specimens Using RSM and Desirability Approach

    NASA Astrophysics Data System (ADS)

    Rajamani, D.; Esakki, Balasubramanian

    2017-09-01

    Selective inhibition sintering (SIS) is a powder based additive manufacturing (AM) technique to produce functional parts with an inexpensive system compared with other AM processes. Mechanical properties of SIS fabricated parts depend strongly on various process parameters, most importantly layer thickness, heat energy, heater feedrate, and printer feedrate. In this paper, the influence of these process parameters on mechanical properties such as tensile and flexural strength is examined using Response Surface Methodology (RSM). The test specimens are fabricated using high density polyethylene (HDPE), and mathematical models are developed to correlate the control factors with the respective experimental responses. Further, optimal SIS process parameters are determined using the desirability approach to enhance the mechanical properties of the HDPE specimens. Optimization studies reveal that a combination of high heat energy, low layer thickness, and medium heater and printer feedrates yields superior mechanical strength characteristics.
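
    A brief sketch of the composite desirability calculation used in this kind of multi-response optimization, assuming larger-is-better desirabilities for tensile and flexural strength; the target ranges and candidate predictions are placeholders, not the paper's data:

```python
import numpy as np

def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    """Derringer-Suich style desirability: 0 below y_min, 1 above y_max."""
    d = (y - y_min) / (y_max - y_min)
    return np.clip(d, 0.0, 1.0) ** weight

def composite_desirability(responses, limits):
    """Geometric mean of the individual desirabilities."""
    d = [desirability_larger_is_better(y, lo, hi) for y, (lo, hi) in zip(responses, limits)]
    return np.prod(d) ** (1.0 / len(d))

# Placeholder model predictions at two candidate parameter settings: (tensile, flexural) in MPa
candidates = {"setting_A": (22.5, 30.1), "setting_B": (25.8, 27.4)}
limits = [(18.0, 28.0), (24.0, 34.0)]
for name, resp in candidates.items():
    print(name, "->", round(composite_desirability(resp, limits), 3))
```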

  19. Fault detection in heavy duty wheels by advanced vibration processing techniques and lumped parameter modeling

    NASA Astrophysics Data System (ADS)

    Malago`, M.; Mucchi, E.; Dalpiaz, G.

    2016-03-01

    Heavy duty wheels are used in applications such as automatic vehicles and are mainly composed of a polyurethane tread glued to a cast iron hub. In the manufacturing process, the adhesive application between tread and hub is a critical assembly phase, since it is performed entirely by an operator and contamination of the bond area may occur. Furthermore, the presence of rust on the hub surface can worsen the adhesion at the interface, reducing the operating life. In this scenario, a quality control procedure for fault detection, to be used at the end of the manufacturing process, has been developed. This procedure is based on vibration processing techniques and takes advantage of the results of a lumped parameter model. Indicators based on cyclostationarity can be considered key parameters to be adopted in a monitoring test station at the end of the production line due to their non-deterministic character.

  20. Performance of the Extravehicular Mobility Unit (EMU) Airlock Coolant Loop Remediation (A/L CLR) Hardware - Final

    NASA Technical Reports Server (NTRS)

    Steele, John W.; Rector, Tony; Gazda, Daniel; Lewis, John

    2011-01-01

    An EMU water processing kit (Airlock Coolant Loop Recovery -- A/L CLR) was developed as a corrective action to Extravehicular Mobility Unit (EMU) coolant flow disruptions experienced on the International Space Station (ISS) in May of 2004 and thereafter. A conservative duty cycle and set of use parameters for A/L CLR use and component life were initially developed and implemented based on prior analysis results and analytical modeling. Several initiatives were undertaken to optimize the duty cycle and use parameters of the hardware. Examination of post-flight samples and EMU Coolant Loop hardware provided invaluable information on the performance of the A/L CLR and has allowed for an optimization of the process. The intent of this paper is to detail the evolution of the A/L CLR hardware, efforts to optimize the duty cycle and use parameters, and the final recommendations for implementation in the post-Shuttle retirement era.

  1. Development and evaluation of paclitaxel nanoparticles using a quality-by-design approach.

    PubMed

    Yerlikaya, Firat; Ozgen, Aysegul; Vural, Imran; Guven, Olgun; Karaagaoglu, Ergun; Khan, Mansoor A; Capan, Yilmaz

    2013-10-01

    The aims of this study were to develop and characterize paclitaxel nanoparticles, to identify and control critical sources of variability in the process, and to understand the impact of formulation and process parameters on the critical quality attributes (CQAs) using a quality-by-design (QbD) approach. For this, a risk assessment study was performed with various formulation and process parameters to determine their impact on the CQAs of the nanoparticles, which were determined to be average particle size, zeta potential, and encapsulation efficiency. Potential risk factors were identified using an Ishikawa diagram and screened by a Plackett-Burman design, and finally the nanoparticles were optimized using a Box-Behnken design. The optimized formulation was further characterized by Fourier transform infrared spectroscopy, X-ray diffractometry, differential scanning calorimetry, scanning electron microscopy, atomic force microscopy, and gas chromatography. It was observed that paclitaxel transformed from a crystalline to an amorphous state upon being fully encapsulated into the nanoparticles. The nanoparticles were spherical, smooth, and homogeneous with no dichloromethane residue. An in vitro cytotoxicity test showed that the developed nanoparticles are more efficient than free paclitaxel in terms of antitumor activity (by more than 25%). In conclusion, this study demonstrated that understanding formulation and process parameters with the philosophy of QbD is useful for the optimization of complex drug delivery systems. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.

  2. Rapid permeation measurement system for the production control of monolayer and multilayer films

    NASA Astrophysics Data System (ADS)

    Botos, J.; Müller, K.; Heidemeyer, P.; Kretschmer, K.; Bastian, M.; Hochrein, T.

    2014-05-01

    Plastics have been used for packaging films for a long time. Until now, the development of new formulations for film applications, including process optimization, has been a time-consuming and cost-intensive process when using gases like oxygen (O2) or carbon dioxide (CO2). By using helium (He), the permeation measurement can be accelerated from hours or days to a few minutes. For this purpose, a manometric measuring system for tests according to ISO 15105-1 is coupled with a mass spectrometer to determine the helium flow rate and to calculate the helium permeation rate. Due to this accelerated determination, the permeation quality of monolayer and multilayer films can be measured at-line. Such a system can be used to predict, for example, the helium permeation rate of filled polymer films. Defined quality limits for the permeation rate can be specified, and process parameters can be promptly corrected if the results do not meet the specification. This method for process control was tested on a pilot line with a corotating twin-screw extruder for monolayer films. Selected process parameters were varied iteratively without changing the material formulation to obtain the best process parameter set and thus the lowest permeation rate. Beyond that, the influence of different parameters on the helium permeation rate was examined on monolayer films. The results were evaluated conventionally as well as with artificial neural networks in order to determine the non-linear correlations between all process parameters.

  3. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially the unavoidable comprehensive parameter tuning during the model development stage.
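
    For illustration, the final downhill simplex step can be reproduced with SciPy's Nelder-Mead optimizer on a stand-in skill metric; the two "parameters" and the metric below are placeholders, not the GCM's cloud and convection parameters:

```python
import numpy as np
from scipy.optimize import minimize

def skill_metric(params):
    """Stand-in objective: lower is better, mimicking a model-vs-observation error score."""
    entrainment, autoconversion = params
    return (entrainment - 0.7) ** 2 + 2.0 * (autoconversion - 0.3) ** 2

# Step 3 of the methodology: downhill simplex started from the best initial value found in step 2
x0 = np.array([0.5, 0.5])
result = minimize(skill_metric, x0, method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 500})
print("optimum parameters:", result.x, "metric:", result.fun)
```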

  4. Process of prototyping coronary stents from biodegradable Fe-Mn alloys.

    PubMed

    Hermawan, Hendra; Mantovani, Diego

    2013-11-01

    Biodegradable stents are considered to be a recent innovation, and their feasibility and applicability have been proven in recent years. Research in this area has focused on materials development and biological studies, rather than on how to transform the developed biodegradable materials into the stent itself. Currently available stent technology, the laser cutting-based process, might be adapted to fabricate biodegradable stents. In this work, the fabrication, characterization and testing of biodegradable Fe-Mn stents are described. A standard process for fabricating and testing stainless steel 316L stents was referred to. The influence of process parameters on the physical, metallurgical and mechanical properties of the stents, and the quality of the produced stents, were investigated. It was found that some steps of the standard process such as laser cutting can be directly applied, but changes to parameters are needed for annealing, and alternatives are needed to replace electropolishing. Copyright © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  5. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key parameter (Mooney viscosity), which is used to evaluate the quality of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for industry if the parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not perform well due to the multi-phase and nonlinear nature of the process. The purpose of this paper is to develop an efficient soft sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, the phase information is extracted in the local modeling. Using the Gaussian local modeling method within the just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
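
    A minimal just-in-time Gaussian process soft-sensor sketch in the spirit of the approach described (select the most similar historical samples for each query, fit a local GPR, predict); the features, similarity measure and data are placeholders, not the paper's GMMD criterion or plant data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def jit_gpr_predict(X_hist, y_hist, x_query, n_local=50):
    """Fit a local GPR on the n_local most similar historical samples and predict."""
    dist = np.linalg.norm(X_hist - x_query, axis=1)       # Euclidean similarity (placeholder)
    idx = np.argsort(dist)[:n_local]
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X_hist[idx], y_hist[idx])
    mean, std = gpr.predict(x_query.reshape(1, -1), return_std=True)
    return mean[0], std[0]

# Synthetic "historical" mixing data: process variables -> Mooney viscosity surrogate
rng = np.random.default_rng(0)
X_hist = rng.uniform(0, 1, size=(500, 4))                 # e.g. temperature, power, ram pressure, time
y_hist = 60 + 20 * np.sin(3 * X_hist[:, 0]) + 5 * X_hist[:, 1] + rng.normal(0, 1, 500)

mu, sigma = jit_gpr_predict(X_hist, y_hist, x_query=np.array([0.4, 0.6, 0.5, 0.7]))
print(f"predicted Mooney viscosity: {mu:.1f} +/- {sigma:.1f}")
```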

  6. Modelling of peak temperature during friction stir processing of magnesium alloy AZ91

    NASA Astrophysics Data System (ADS)

    Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Friction stir processing (FSP) is a solid state processing technique with potential to modify the properties of the material through microstructural modification. The study of heat transfer in FSP aids in the identification of defects like flash, inadequate heat input, poor material flow and mixing etc. In this paper, transient temperature distribution during FSP of magnesium alloy AZ91 was simulated using finite element modelling. The numerical model results were validated using the experimental results from the published literature. The model was used to predict the peak temperature obtained during FSP for various process parameter combinations. The simulated peak temperature results were used to develop a statistical model. The effect of process parameters namely tool rotation speed, tool traverse speed and shoulder diameter of the tool on the peak temperature was investigated using the developed statistical model. It was found that peak temperature was directly proportional to tool rotation speed and shoulder diameter and inversely proportional to tool traverse speed.

  7. Making the purchase decision: factors other than price.

    PubMed

    Lyons, D M

    1992-05-01

    Taking price out of the limelight and concentrating on customer relations, mutual respect, and build-in/buy-in; involving the user; developing communication and evaluation processes; and being process oriented to attain the results needed require commitment on the part of administration and materiel management. There must be a commitment of time to develop the process, commitment of resources to work through the process, and a commitment of support to enhance the process. With those three parameters in place, price will no longer be the only factor in the purchasing decision.

  8. Spectral estimation of received phase in the presence of amplitude scintillation

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.

    1988-01-01

    A technique is demonstrated for obtaining the spectral parameters of the received carrier phase in the presence of carrier amplitude scintillation, by means of a digital phase-locked loop. Since the random amplitude fluctuations generate time-varying loop characteristics, straightforward processing of the phase detector output does not provide accurate results. The method developed here performs a time-varying inverse filtering operation on the corrupted observables, thus recovering the original phase process and enabling accurate estimation of its underlying parameters.

  9. Low Hydrogen Embrittlement (LHE) Zinc-Nickel (Zn-Ni) Qualification Test Result and Process Parameters Development

    DTIC Science & Technology

    2011-02-09

    Tri-Chromium ... conversion coating (CC) (Hexavalent vs. Trivalent) and parameters:
    ▪ Baking before and after conversion coating
      • Hexavalent CC: must be applied after bake
      • Trivalent CC: can be applied before or after bake (process time savings)
    ▪ Paint adhesion performance per ASTM D3359
      • Hex-CC

  10. Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm to the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
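
    A toy illustration of grid-searching the period while fitting the remaining Gaussian process hyperparameters by maximum likelihood, using scikit-learn's periodic kernel on synthetic sparse data; this is a generic stand-in, not the authors' semi-parametric model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.default_rng(42)
true_period = 310.0                                    # days, synthetic Mira-like signal
t = np.sort(rng.uniform(0, 1500, 60))                  # sparse, irregular sampling epochs
mag = np.sin(2 * np.pi * t / true_period) + 0.15 * rng.normal(size=t.size)

def profile_log_likelihood(period):
    """Fix the periodicity, optimize the other hyperparameters, return the log marginal likelihood."""
    kernel = ExpSineSquared(length_scale=1.0, periodicity=period,
                            periodicity_bounds="fixed") + WhiteKernel(0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(t.reshape(-1, 1), mag)
    return gp.log_marginal_likelihood_value_

periods = np.linspace(100, 600, 101)                    # dense period grid
ll = np.array([profile_log_likelihood(p) for p in periods])
print("estimated period:", periods[ll.argmax()], "days")
```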

  11. Multi Objective Optimization of Multi Wall Carbon Nanotube Based Nanogrinding Wheel Using Grey Relational and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Sethuramalingam, Prabhu; Vinayagam, Babu Kupusamy

    2016-07-01

    A carbon nanotube-mixed grinding wheel is used in the grinding process to analyze the surface characteristics of AISI D2 tool steel. Until now, no work has been carried out using a carbon nanotube-based grinding wheel. A carbon nanotube-based grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. In the present study, multi-response optimization of the grinding process responses, surface roughness and metal removal rate, using single-wall carbon nanotube (CNT) mixed cutting fluids is undertaken using an orthogonal array with grey relational analysis. Experiments are performed with the grinding conditions designated by the L9 orthogonal array. Based on the results of the grey relational analysis, a set of optimum grinding parameters is obtained. Using the analysis of variance approach, the significant machining parameters are identified. An empirical model for predicting the output parameters has been developed using regression analysis, and the results are compared for grinding with and without the CNT grinding wheel.
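
    A compact sketch of the grey relational grade computation underlying this kind of multi-response optimization (normalize each response, compute grey relational coefficients, average them); the L9 response values below are placeholders, not the experimental data:

```python
import numpy as np

def grey_relational_grade(responses, larger_is_better, zeta=0.5):
    """Grey relational grades for a (runs x responses) matrix of experimental results."""
    responses = np.asarray(responses, float)
    norm = np.empty_like(responses)
    for j, lib in enumerate(larger_is_better):
        col = responses[:, j]
        if lib:   # larger-the-better normalization
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:     # smaller-the-better normalization
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm                                        # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

# Placeholder L9 results: column 0 = surface roughness (minimize), column 1 = MRR (maximize)
results = np.array([[0.82, 4.1], [0.75, 4.6], [0.71, 5.0],
                    [0.68, 5.2], [0.64, 5.9], [0.60, 6.3],
                    [0.58, 6.1], [0.55, 6.8], [0.52, 7.2]])
grades = grey_relational_grade(results, larger_is_better=[False, True])
print("best run:", int(grades.argmax()) + 1, "grade:", round(grades.max(), 3))
```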

  12. Development of mathematical models and optimization of the process parameters of laser surface hardened EN25 steel using elitist non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.

    2018-02-01

    The ultimate goal of all production entities is to select process parameters that yield maximum strength and minimum wear and friction. Friction and wear are serious problems in most industries and are influenced by the operating parameters, oxidation characteristics and the mechanisms involved in wear formation. Experimental input parameters such as sliding distance, applied load, and temperature are used to find the optimized solution for the desired output responses: coefficient of friction, wear rate, and volume loss. The optimization is performed with the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), an evolutionary method. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.
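
    As a small illustration of the non-dominated sorting at the heart of NSGA-II, the following extracts the first Pareto front from candidate solutions evaluated on two objectives to be minimized; the (coefficient of friction, wear rate) values are random stand-ins, not the paper's RSM regression models:

```python
import numpy as np

def pareto_front(objectives):
    """Boolean mask of non-dominated points for an (n_points x n_objectives) minimization problem."""
    obj = np.asarray(objectives, float)
    n = len(obj)
    nondominated = np.ones(n, dtype=bool)
    for i in range(n):
        if not nondominated[i]:
            continue
        # A point dominates i if it is no worse in all objectives and strictly better in at least one
        dominates_i = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        if dominates_i.any():
            nondominated[i] = False
    return nondominated

# Random stand-ins for (coefficient of friction, wear rate) of candidate parameter sets
rng = np.random.default_rng(7)
candidates = rng.uniform([0.2, 1e-4], [0.8, 9e-4], size=(200, 2))
front = candidates[pareto_front(candidates)]
print(f"{len(front)} non-dominated candidates on the first Pareto front")
```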

  13. The solution of private problems for optimization heat exchangers parameters

    NASA Astrophysics Data System (ADS)

    Melekhin, A.

    2017-11-01

    The relevance of the topic stems from the need to solve the problem of resource economy in building heating systems. To solve this problem, we have developed an integrated research method that allows optimization of heat exchanger parameters. This method solves a multicriteria optimization problem by means of nonlinear programming software, using as input an array of temperatures obtained by thermography. The author has developed a mathematical model of the heat exchange process on the heat exchange surfaces of the apparatus, solved the multicriteria optimization problem, and checked its adequacy against an experimental stand with visualization of the thermal fields. This yields an optimal range of controlled parameters influencing the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger, as well as regularities of the heat exchange process in the form of generalized dependencies for the temperature distribution over the heat-release surface of the heat exchangers, and demonstrates convergence between results calculated from the theoretical dependencies and those obtained by solving the mathematical model.

  14. Estimation of water quality parameters of inland and coastal waters with the use of a toolkit for processing of remote sensing data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekker, A.G.; Hoogenboom, H.J.; Rijkeboer, M.

    1997-06-01

    Deriving thematic maps of water quality parameters from a remote sensing image requires a number of processing steps, such as calibration, atmospheric correction, air/water interface correction, and application of water quality algorithms. A prototype software environment has recently been developed that enables the user to perform and control these processing steps. The main parts of this environment are: (i) access to the MODTRAN 3 radiative transfer code for removing atmospheric and air-water interface influences, (ii) a tool for analysis of algorithms for estimating water quality and (iii) a spectral database containing apparent and inherent optical properties and associated water quality parameters. The use of the software is illustrated by applying implemented algorithms for estimating chlorophyll to data from a spectral library of Dutch inland waters with CHL ranging from 1 to 500 µg l-1. The algorithms currently implemented in the Toolkit software are recommended for optically simple waters, but for optically complex waters development of more advanced retrieval methods is required.

  15. An integrated toolbox for processing and analysis of remote sensing data of inland and coastal waters - atmospheric correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haan, J.F. de; Kokke, J.M.M.; Hoogenboom, H.J.

    1997-06-01

    Deriving thematic maps of water quality parameters from a remote sensing image requires a number of processing steps, such as calibration, atmospheric correction, air-water interface correction, and application of water quality algorithms. A prototype version of an integrated software environment has recently been developed that enables the user to perform and control these processing steps. Major parts of this environment are: (i) access to the MODTRAN 3 radiative transfer code, (ii) a database of water quality algorithms, and (iii) a spectral library of Dutch coastal and inland waters, containing subsurface irradiance reflectance spectra and associated water quality parameters. The atmospheric correction part of this environment is discussed here. It is shown that this part can be used to accurately retrieve spectral signatures of inland water for wavelengths between 450 and 750 nm, provided in situ measurements are used to determine atmospheric model parameters. Assessment of the usefulness of the completely integrated software system in an operational environment requires a revised version that is presently being developed.

  16. Kinetics of Sub-Micron Grain Size Refinement in 9310 Steel

    NASA Astrophysics Data System (ADS)

    Kozmel, Thomas; Chen, Edward Y.; Chen, Charlie C.; Tin, Sammy

    2014-05-01

    Recent efforts have focused on the development of novel manufacturing processes capable of producing microstructures dominated by sub-micron grains. For structural applications, grain refinement has been shown to enhance mechanical properties such as strength, fatigue resistance, and fracture toughness. Through control of the thermo-mechanical processing parameters, dynamic recrystallization mechanisms were used to produce microstructures consisting of sub-micron grains in 9310 steel. Starting with initial bainitic grain sizes of 40 to 50 μm, various levels of grain refinement were observed following hot deformation of 9310 steel samples at temperatures of 755 K to 922 K (482 °C and 649 °C) and strain rates of 1 to 0.001/s. The resulting deformation microstructures were characterized using scanning electron microscopy and electron backscatter diffraction techniques to quantify the extent of carbide coarsening and grain refinement occurring during deformation. Microstructural models based on the Zener-Hollomon parameter were developed and modified to include the effect of the ferrite/carbide interactions within the system. These models were shown to effectively correlate microstructural attributes to the thermo-mechanical processing parameters.
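
    For reference, the Zener-Hollomon parameter that such models are built around is Z = strain rate × exp(Q/RT); a small sketch evaluating it over the deformation conditions quoted above, with an activation energy that is only a placeholder value:

```python
import numpy as np

R = 8.314          # J/(mol K), universal gas constant
Q = 280e3          # J/mol, apparent activation energy for deformation (placeholder value)

def zener_hollomon(strain_rate_per_s, temperature_k):
    """Z = strain_rate * exp(Q / (R * T)); higher Z generally favours finer recrystallized grains."""
    return strain_rate_per_s * np.exp(Q / (R * temperature_k))

for T in (755.0, 922.0):                 # K, the deformation temperatures studied
    for rate in (1.0, 0.001):            # 1/s, the strain-rate range studied
        print(f"T = {T:.0f} K, strain rate = {rate:g} /s -> Z = {zener_hollomon(rate, T):.3e}")
```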

  17. Energetic investigation of the adsorption process of CH4, C2H6 and N2 on activated carbon: Numerical and statistical physics treatment

    NASA Astrophysics Data System (ADS)

    Ben Torkia, Yosra; Ben Yahia, Manel; Khalfaoui, Mohamed; Al-Muhtaseb, Shaheen A.; Ben Lamine, Abdelmottaleb

    2014-01-01

    The adsorption energy distribution (AED) function of a commercial activated carbon (BDH-activated carbon) was investigated. For this purpose, the integral equation is derived by using a purely analytical statistical physics treatment. The description of the heterogeneity of the adsorbent is significantly clarified by defining the parameter N(E). This parameter represents the energetic density of the spatial density of the effectively occupied sites. To solve the integral equation, a numerical method was used based on an adequate algorithm. The Langmuir model was adopted as a local adsorption isotherm. This model is developed by using the grand canonical ensemble, which allows defining the physico-chemical parameters involved in the adsorption process. The AED function is estimated by a normal Gaussian function. This method is applied to the adsorption isotherms of nitrogen, methane and ethane at different temperatures. The development of the AED using a statistical physics treatment provides an explanation of the gas molecules behaviour during the adsorption process and gives new physical interpretations at microscopic levels.

  18. Design Optimization of Microalloyed Steels Using Thermodynamics Principles and Neural-Network-Based Modeling

    NASA Astrophysics Data System (ADS)

    Mohanty, Itishree; Chintha, Appa Rao; Kundu, Saurabh

    2018-06-01

    The optimization of process parameters and composition is essential to achieve the desired properties with minimal additions of alloying elements in microalloyed steels. In some cases, it may be possible to substitute such steels for those which are more richly alloyed. However, process control involves a larger number of parameters, making the relationship between structure and properties difficult to assess. In this work, neural network models have been developed to estimate the mechanical properties of steels containing Nb + V or Nb + Ti. The outcomes have been validated by thermodynamic calculations and plant data. It has been shown that subtle thermodynamic trends can be captured by the neural network model. Some experimental rolling data have also been used to support the model, which in addition has been applied to calculate the costs of optimizing microalloyed steel. The generated Pareto fronts identify many combinations of strength and elongation, making it possible to select composition and process parameters for a range of applications. The ANN model and the optimization model are being used for prediction of properties in a running plant and for development of new alloys, respectively.

  19. Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Anderson, B. J.; Justus, C. G.; Batts, G. W.

    2001-01-01

    Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and the outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems with various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the process of selecting the parameters for input into extreme hot and cold thermal analyses and design specifications. In the process, greatly improved cold-case OLR values for high-inclination orbits were derived. Thermal parameters for satellites in low-, medium-, and high-inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information on the interpretation and application of the information and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, which is in preparation.

  20. Hubert: Software for efficient analysis of in-situ nuclear forward scattering experiments

    NASA Astrophysics Data System (ADS)

    Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel

    2016-10-01

    The combination of short data acquisition times and local investigation of the solid state through hyperfine parameters makes nuclear forward scattering (NFS) a unique experimental technique for the investigation of fast processes. However, the total number of acquired NFS time spectra may be very high, so an efficient way of evaluating the data is needed. In this paper we report the development of the Hubert software package as a response to the rapidly developing field of in-situ NFS experiments. Hubert offers several useful features for processing data files and can significantly shorten the evaluation time by using a simple connection between neighboring time spectra through their input and output parameter values.

  1. Radar systems for the water resources mission, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, R. K.; Claassen, J. P.; Erickson, R. L.; Fong, R. K. T.; Hanson, B. C.; Komen, M. J.; Mcmillan, S. B.; Parashar, S. K.

    1976-01-01

    The state of the art was determined for radar measurement of soil moisture, snow, standing and flowing water, and lake and river ice; the required spacecraft radar parameters were determined; synthetic-aperture radar systems to meet these parametric requirements were studied; and techniques for on-board processing of the radar data were examined. Significant new concepts developed include the following: scanning synthetic-aperture radar to achieve wide-swath coverage; single-sideband radar; and comb-filter range-sequential, range-offset SAR processing. The state of the art in radar measurement of water resources parameters is outlined. The feasibility of immediate development of a spacecraft water resources SAR was established. Numerous candidates for the on-board processor were examined.

  2. Application of a mechanistic model as a tool for on-line monitoring of pilot scale filamentous fungal fermentation processes-The importance of evaporation effects.

    PubMed

    Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Sin, Gürkan; Gernaey, Krist V

    2017-03-01

    A mechanistic model-based soft sensor is developed and validated for 550 L filamentous fungus fermentations operated at Novozymes A/S. The soft sensor comprises a parameter estimation block based on a stoichiometric balance, coupled to a dynamic process model. The on-line parameter estimation block models the changing rates of formation of product, biomass, and water, and the rate of consumption of feed, using standard, available on-line measurements. This parameter estimation block is coupled to a mechanistic process model, which solves the current states of biomass, product, substrate, dissolved oxygen and mass, as well as other process parameters including kLa, viscosity and the partial pressure of CO2. State estimation at this scale requires a robust mass model including evaporation, which is a factor not often considered at smaller scales of operation. The model is developed using a historical data set of 11 batches from the fermentation pilot plant (550 L) at Novozymes A/S. The model is then implemented on-line in 550 L fermentation processes operated at Novozymes A/S in order to validate the state estimator on 14 new batches utilizing a new strain. The product concentration in the validation batches was predicted with an average root mean sum of squared error (RMSSE) of 16.6%. In addition, calculation of the Janus coefficient for the validation batches shows a suitably calibrated model. The robustness of the model prediction is assessed with respect to the accuracy of the input data, and a parameter estimation uncertainty analysis is also carried out. The application of this on-line state estimator allows for on-line monitoring of pilot scale batches, including real-time estimates of multiple parameters which cannot otherwise be monitored on-line. Successful application of a soft sensor at this scale allows for improved process monitoring, as well as opening up further possibilities for on-line control algorithms utilizing these on-line model outputs. Biotechnol. Bioeng. 2017;114: 589-599. © 2016 Wiley Periodicals, Inc.
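
    The coupling of estimated rates to a dynamic mass balance can be illustrated with a toy sketch. The rate expressions, yield coefficients and evaporation term below are assumptions for illustration only and do not reproduce the published soft-sensor model:

```python
# Toy sketch of a dynamic fermentation mass balance with an evaporation term.
# All kinetics, yields and numbers are assumed, not the Novozymes model.
import numpy as np
from scipy.integrate import solve_ivp

def fermenter(t, y, feed_rate, evap_rate):
    X, P, S, M = y                       # biomass, product, substrate, broth mass (kg)
    S = max(S, 0.0)                      # keep the toy kinetics well-behaved
    mu = 0.05 * S / (S + 1.0)            # assumed Monod-type specific growth rate (1/h)
    qp = 0.02 * mu                       # assumed growth-associated production
    dX = mu * X
    dP = qp * X
    dS = feed_rate - (dX / 0.5 + dP / 0.3)   # assumed yields Yxs = 0.5, Yps = 0.3
    dM = feed_rate - evap_rate               # evaporation matters at pilot scale
    return [dX, dP, dS, dM]

sol = solve_ivp(fermenter, (0.0, 100.0), [1.0, 0.0, 20.0, 400.0],
                args=(0.5, 0.2), dense_output=True)
print(sol.y[:, -1])   # final state estimates
```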

  3. Optimization of process parameters for a quasi-continuous tablet coating system using design of experiments.

    PubMed

    Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah

    2011-03-01

    The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) resolution IV fractional factorial design. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. Main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined by using response surface plots. The process parameters exerted various effects on the different response variables. Hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions which produced coated tablets of high drug-loading efficiency, low incidences of tablet damage and low coat thickness variation were defined. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating robustness in the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists
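
    The screening design mentioned above can be generated programmatically. The sketch below builds a 2^(6-1) resolution IV half-fraction in coded units; the factor names follow the abstract, while the choice of generated factor and generator is an illustrative assumption rather than the design actually used:

```python
# Minimal sketch of a 2^(6-1) resolution IV half-fraction in coded (-1, +1) units.
# Generator chosen for illustration: coating_level = batch_size * inlet_temp * atomizing_p,
# i.e. defining relation I = ABCF (resolution IV).
from itertools import product

base_factors = ["batch_size", "inlet_temp", "atomizing_p", "plenum_p", "spray_rate"]
runs = []
for levels in product((-1, +1), repeat=5):
    run = dict(zip(base_factors, levels))
    run["coating_level"] = levels[0] * levels[1] * levels[2]   # generated column
    runs.append(run)

print(len(runs), "runs;", runs[0])
```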

  4. Continuous manufacturing of solid lipid nanoparticles by hot melt extrusion.

    PubMed

    Patil, Hemlata; Kulkarni, Vijay; Majumdar, Soumyajit; Repka, Michael A

    2014-08-25

    Solid lipid nanoparticles (SLN) can either be produced by hot homogenization of melted lipids at higher temperatures or by a cold homogenization process. This paper proposes and demonstrates the formulation of SLN for pharmaceutical applications by combining two processes: hot melt extrusion (HME) technology for melt-emulsification and high-pressure homogenization (HPH) for size reduction. This work aimed at developing continuous and scalable processes for SLN by mixing a lipid and an aqueous phase containing an emulsifier in the extruder barrel at temperatures above the melting point of the lipid and further reducing the particle size of the emulsion by HPH linked to HME in sequence. The developed novel platform demonstrated better process control and size reduction compared to the conventional process of hot homogenization (batch process). Varying the process parameters enabled the production of SLN below 200 nm (for a 60 mg/ml lipid solution at a flow rate of 100 ml/min). Among the several process parameters investigated, the lipid concentration, residence time and screw design played major roles in influencing the size of the SLN. This new process demonstrates the potential use of hot melt extrusion technology for continuous and large-scale production of SLN. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Development of low-stress Iridium coatings for astronomical x-ray mirrors

    NASA Astrophysics Data System (ADS)

    Döhring, Thorsten; Probst, Anne-Catherine; Stollenwerk, Manfred; Wen, Mingwu; Proserpio, Laura

    2016-07-01

    Previously used mirror technologies are not suitable for the challenging needs of future X-ray telescopes, so the required high-precision mirror manufacturing is triggering new technical developments around the world. Some aspects of X-ray mirror production are studied within the interdisciplinary project INTRAAST, a German acronym for "industry transfer of astronomical mirror technologies". The project is embedded in a cooperation between Aschaffenburg University of Applied Sciences and the Max-Planck-Institute for extraterrestrial Physics. One important task is the development of low-stress Iridium coatings for X-ray mirrors based on slumped thin glass substrates. The surface figure of the glass substrates is measured before and after the coating process by optical methods. Correlating the surface shape deformation with the coating deposition parameters, especially the Argon sputtering pressure, allows the process to be optimized. The sputtering parameters also influence the coating layer density and the micro-roughness of the coatings, which in turn affect their X-ray reflection properties. Unfortunately the optimum coating process parameters appear to be conflicting: low Argon pressure results in better micro-roughness and higher density, whereas higher pressure leads to lower coating stress. Therefore additional measures such as intermediate coating layers and temperature treatment will be considered for further optimization. The technical approach for the low-stress Iridium coating development, the experimental equipment, and the first experimental results are presented in this paper.

  6. Effects of process variables on the properties of YBa2Cu3O(7-x) ceramics formed by investment casting

    NASA Technical Reports Server (NTRS)

    Hooker, M. W.; Taylor, T. D.; Leigh, H. D.; Wise, S. A.; Buckley, J. D.; Vasquez, P.; Buck, G. M.; Hicks, L. P.

    1993-01-01

    An investment casting process has been developed to produce net-shape, superconducting ceramics. In this work, a factorial experiment was performed to determine the critical process parameters for producing cast YBa2Cu3O7 ceramics with optimum properties. An analysis of variance procedure indicated that the key variables in casting superconductive ceramics are the particle size distribution and sintering temperature. Additionally, the interactions between the sintering temperature and the other process parameters (e.g., particle size distribution and the use of silver dopants) were also found to influence the density, porosity, and critical current density of the fired ceramics.

  7. Low-Cost Detection of Thin Film Stress during Fabrication

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy A.

    2015-01-01

    NASA's Marshall Space Flight Center has developed a simple, cost-effective optical method for thin film stress measurements during growth and/or subsequent annealing processes. Stress arising in thin film fabrication presents production challenges for electronic devices, sensors, and optical coatings; it can lead to substrate distortion and deformation, impacting the performance of thin film products. NASA's technique measures in-situ stress using a simple, noncontact fiber optic probe in the thin film vacuum deposition chamber. This enables real-time monitoring of stress during the fabrication process and allows for efficient control of deposition process parameters. By modifying process parameters in real time during fabrication, thin film stress can be optimized or controlled, improving thin film product performance.

  8. A combination of UV curing technology with the ATL process

    NASA Astrophysics Data System (ADS)

    Balbzioui, I.; Hasiaoui, B.; Barbier, G.; L'hostis, G.; Laurent, F.; Ibrahim, A.; Durand, B.

    2017-10-01

    In order to reduce the time and cost of manufacturing composites, UV curing technology combined with an automated tape placement (ATL) process, based on a reverse approach working with a fixed head, was studied in this article. First, a brief description of the developed placement head is presented. Mechanical properties are then evaluated by varying process parameters, including compaction force and tape placement speed. Finally, a parametric study is carried out to identify suitable materials and process parameters to manufacture a photo-cured composite material with high mechanical performance. The obtained results show that UV curing is a very good alternative to thermal polymerization because of its fast cure speed and lower dependency on temperature.

  9. Software and Hardware System for Fast Processes Study When Preparing Foundation Beds of Oil and Gas Facilities

    NASA Astrophysics Data System (ADS)

    Gruzin, A. V.; Gruzin, V. V.; Shalay, V. V.

    2018-04-01

    Analysis of existing technologies for preparing foundation beds of oil and gas buildings and structures has revealed the lack of reasoned recommendations on the selection of rational technical and technological parameters of compaction. To study the nature of the dynamics of fast processes during compaction of foundation beds of oil and gas facilities, a specialized software and hardware system was developed. The method of calculating the basic technical parameters of the equipment for recording fast processes is presented, as well as the algorithm for processing the experimental data. The performed preliminary studies confirmed the accuracy of the decisions made and the calculations performed.

  10. AmapSim: a structural whole-plant simulator based on botanical knowledge and designed to host external functional models.

    PubMed

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-05-01

    AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment.

  11. AmapSim: A Structural Whole-plant Simulator Based on Botanical Knowledge and Designed to Host External Functional Models

    PubMed Central

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-01-01

    Background and Aims AmapSim is a tool that implements a structural plant growth model based on a botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open up the simulator to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by means of a numerical value called physiological age, which controls the value for each parameter in the model. The set of possible values for physiological ages is called the reference axis. In order to mimic morphological and architectural metamorphosis, the value allocated for the physiological age of buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic ability provided by AmapSim, e.g. the reference axis, enables unified control to be exercised over plant development parameter values, depending on the biological process target: how to affect the local pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also offers a broadened field of applications and thus allows feedback between plant growth and the physical environment. PMID:17766310

  12. Machining of bone: Analysis of cutting force and surface roughness by turning process.

    PubMed

    Noordin, M Y; Jiawkok, N; Ndaruhadi, P Y M W; Kurniawan, D

    2015-11-01

    There are millions of orthopedic surgeries and dental implantation procedures performed every year globally. Most of them involve machining of bones and cartilage. However, theoretical and analytical study of bone machining is lagging behind its practice and implementation. This study views bone machining as a machining process with bovine bone as the workpiece material. The turning process, which forms the basis of the actually used drilling process, was investigated experimentally. The focus is on evaluating the effects of three machining parameters, that is, cutting speed, feed, and depth of cut, on machining responses, that is, cutting forces and surface roughness resulting from the turning process. Response surface methodology was used to quantify the relation between the machining parameters and the machining responses. The turning process was done at various cutting speeds (29-156 m/min), depths of cut (0.03-0.37 mm), and feeds (0.023-0.11 mm/rev). Empirical models of the resulting cutting force and surface roughness as functions of cutting speed, depth of cut, and feed were developed. Observation using the developed empirical models found that, within the range of machining parameters evaluated, the most influential machining parameter for the cutting force is depth of cut, followed by feed and cutting speed. The lowest cutting force was obtained at the lowest cutting speed, lowest depth of cut, and highest feed setting. For surface roughness, feed is the most significant machining condition, followed by cutting speed, while depth of cut showed no effect. The finest surface finish was obtained at the lowest cutting speed and feed setting. © IMechE 2015.
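
    The response-surface approach described above amounts to fitting a second-order polynomial in the machining parameters by least squares. The sketch below uses synthetic data and assumed coefficients purely to illustrate the form of such a model:

```python
# Minimal sketch: fit a second-order response-surface model for Ra as a function of
# cutting speed v, feed f and depth of cut d using ordinary least squares.
# The data are synthetic; coefficients are not the paper's empirical model.
import numpy as np

rng = np.random.default_rng(0)
v = rng.uniform(29, 156, 30)      # cutting speed, m/min
f = rng.uniform(0.023, 0.11, 30)  # feed, mm/rev
d = rng.uniform(0.03, 0.37, 30)   # depth of cut, mm
Ra = 0.8 + 2.0 * f - 0.002 * v + 0.1 * d + rng.normal(0, 0.05, 30)  # synthetic response

X = np.column_stack([np.ones_like(v), v, f, d, v * f, v * d, f * d, v**2, f**2, d**2])
coef, *_ = np.linalg.lstsq(X, Ra, rcond=None)
print(dict(zip(["b0", "v", "f", "d", "vf", "vd", "fd", "v2", "f2", "d2"], coef.round(4))))
```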

  13. Co-extrusion of semi-finished aluminium-steel compounds

    NASA Astrophysics Data System (ADS)

    Thürer, S. E.; Uhe, J.; Golovko, O.; Bonk, C.; Bouguecha, A.; Klose, C.; Behrens, B.-A.; Maier, H. J.

    2017-10-01

    The combination of light metals and steels allows for new lightweight components with wear-resistant functional surfaces. Within the Collaborative Research Centre 1153 novel process chains are developed for the manufacture of such hybrid components. Here, the production process of a hybrid bearing bushing made of the aluminium alloy EN AW-6082 and the case-hardened steel 20MnCr5 is developed. Hybrid semi-finished products are an attractive alternative to conventional ones resulting from massive forming processes where the individual components are joined after the forming process. The actual hybrid semi-finished products were manufactured using a lateral angular co-extrusion (LACE) process. The bearing bushings are subsequently produced by die forging. In the present study, a tool concept for the LACE process is described, which renders the continuous joining of a steel rod with an aluminium tube possible. During the LACE process, the rod is fed into the extrusion die at an angle of approx. 90°. Metallographic analysis of the hybrid profile showed that the mechanical bonding between the different materials begins about 75 mm after the edge of the aluminium sheath. In order to improve the bonding strength, the steel rod is to be preheated during extrusion. Systematic investigations using a dilatometer, considering the maximum possible co-extrusion process parameters, were carried out. The variable parameters for the dilatometer experiments were determined by numerical simulation. In order to form a bond between the materials, the oxide layer needs to be disrupted during the co-extrusion process. In an attempt to better understand this effect, a modified sample geometry with chamfered steel was developed for the dilatometer experiments. The influence of the process parameters on the formation of the intermetallic phase at the interface was analysed by scanning electron microscopy and X-ray diffraction. This article, which was originally published online on 16 October 2017, contained an error in the press ratio, where 9:1 should be 6:1. The corrected ratio appears in the Corrigendum attached to the pdf.

  14. Dynamic single photon emission computed tomography—basic principles and cardiac applications

    PubMed Central

    Gullberg, Grant T; Reutter, Bryan W; Sitek, Arkadiusz; Maltz, Jonathan S; Budinger, Thomas F

    2011-01-01

    The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time–activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time–activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements. PMID:20858925

  15. TOPICAL REVIEW: Dynamic single photon emission computed tomography—basic principles and cardiac applications

    NASA Astrophysics Data System (ADS)

    Gullberg, Grant T.; Reutter, Bryan W.; Sitek, Arkadiusz; Maltz, Jonathan S.; Budinger, Thomas F.

    2010-10-01

    The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time-activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time-activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements.

  16. Identification of Spey engine dynamics in the augmentor wing jet STOL research aircraft from flight data

    NASA Technical Reports Server (NTRS)

    Dehoff, R. L.; Reed, W. B.; Trankle, T. L.

    1977-01-01

    The development and validation of a Spey engine model are described. An analysis of the dynamical interactions involved in the propulsion unit is presented. The model was reduced to contain only significant effects and was used, in conjunction with flight data obtained from an augmentor wing jet STOL research aircraft, to develop initial estimates of parameters in the system. The theoretical background employed in estimating the parameters is outlined. The software package developed for processing the flight data is described. Results are summarized.

  17. Non-dimensional groups in the description of finite-amplitude sound propagation through aerosols

    NASA Technical Reports Server (NTRS)

    Scott, D. S.

    1976-01-01

    Several parameters, which have fairly transparent physical interpretations, appear in the analytic description of finite-amplitude sound propagation through aerosols. Typically, each of these parameters characterizes, in some sense, either the sound or the aerosol. It also turns out that fairly obvious combinations of these parameters yield non-dimensional groups which, in turn, characterize the nature of the acoustic-aerosol interaction. This theme is developed in order to illustrate how a quick examination of such parameters and groups can yield information about the nature of the processes involved, without the necessity of extensive mathematical analysis. This concept is developed primarily from the viewpoint of sound propagation through aerosols, although complementary acoustic-aerosol interaction phenomena are briefly noted.

  18. An Attempt of Formalizing the Selection Parameters for Settlements Generalization in Small-Scales

    NASA Astrophysics Data System (ADS)

    Karsznia, Izabela

    2014-12-01

    The paper covers one of the most important problems concerning context-sensitive settlement selection for small-scale maps. So far, no formal parameters for small-scale settlement generalization have been specified, hence the problem is an important and innovative challenge. It is also crucial from the practical point of view, as appropriate generalization algorithms must be developed for the generalization of the General Geographic Objects Database, which is an essential Spatial Data Infrastructure component in Poland. The author proposes and verifies quantitative generalization parameters for the settlement selection process in small-scale maps. The selection of settlements was carried out in two research areas: Lower Silesia and Łódź Province. Based on the conducted analysis, appropriate context-sensitive settlement selection parameters have been defined. Particular effort has been made to develop a methodology of quantitative settlement selection which would be useful in automation processes and which would preserve the specifics of the generalized objects.

  19. ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, Muthumari; Tamang, Santosh

    2017-08-01

    Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in automotive and aerospace industries. The selection of optimum machining parameters to produce components of desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons and surface roughness was the output neuron. An ANN architecture of 3-5-1 was found to be optimum, and the model predicts with an average percentage error of 7.72%. A Particle Swarm Optimization (PSO) technique is used for optimizing the parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergence capability with a minimum number of iterations.
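
    The PSO part of the approach can be illustrated with a compact sketch. The surrogate objective below stands in for the trained ANN roughness/time model, and all bounds and coefficients are assumptions:

```python
# Minimal particle swarm optimisation sketch (pure NumPy, illustrative only):
# the objective is an assumed surrogate for machining time/roughness, not the paper's ANN.
import numpy as np

def objective(x):
    N, f, d = x.T                        # spindle speed, feed, depth of cut (coded units)
    return (1.0 / (N + 1e-3)) + 5.0 * f**2 + 2.0 * (d - 0.5) ** 2   # assumed surrogate

rng = np.random.default_rng(1)
lo, hi = np.array([0.1, 0.01, 0.1]), np.array([2.0, 0.5, 1.0])
pos = rng.uniform(lo, hi, size=(30, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()]

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]

print("best parameters:", gbest, "objective:", pbest_val.min())
```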

  20. Regarding to the Variance Analysis of Regression Equation of the Surface Roughness obtained by End Milling process of 7136 Aluminium Alloy

    NASA Astrophysics Data System (ADS)

    POP, A. B.; ȚÎȚU, M. A.

    2016-11-01

    In the metal cutting process, surface quality is intrinsically related to the cutting parameters and to the cutting tool geometry. At the same time, metal cutting processes are closely related to machining costs. The purpose of this paper is to reduce manufacturing costs and processing time. A study was made, based on mathematical modelling of the arithmetic mean of the absolute profile deviations (Ra) resulting from the end milling of 7136 aluminium alloy, as a function of the cutting process parameters. The novel element of this paper is the 7136 aluminium alloy chosen for the experiments, a material developed and patented by Universal Alloy Corporation. This aluminium alloy is used in the aircraft industry to make parts from extruded profiles, and it has not previously been studied in the proposed research direction. Based on this research, a mathematical model of the surface roughness Ra was established as a function of the cutting parameters studied over a set experimental field. A regression analysis was performed, which identified the quantitative relationships between the cutting parameters and the surface roughness. Using analysis of variance (ANOVA), the degree of confidence in the results achieved by the regression equation was determined, along with the suitability of this equation at every point of the experimental field.
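
    The regression-plus-ANOVA step can be illustrated as follows. The data are synthetic and the model form is a simple first-order example; only the mechanics of computing the overall F-statistic are shown:

```python
# Minimal sketch (synthetic data): fit a roughness regression and compute the
# ANOVA F-statistic for the overall regression by hand.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 27
v = rng.uniform(200, 600, n)          # cutting speed
f = rng.uniform(0.05, 0.2, n)         # feed per tooth
d = rng.uniform(0.2, 1.0, n)          # depth of cut
Ra = 0.3 + 4.0 * f + 0.2 * d - 0.0002 * v + rng.normal(0, 0.03, n)

X = np.column_stack([np.ones(n), v, f, d])
beta, *_ = np.linalg.lstsq(X, Ra, rcond=None)
fitted = X @ beta
sse = np.sum((Ra - fitted) ** 2)                    # residual sum of squares
ssr = np.sum((fitted - Ra.mean()) ** 2)             # regression sum of squares
p = X.shape[1] - 1
F = (ssr / p) / (sse / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)
print(f"F = {F:.1f}, p-value = {p_value:.2e}")
```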

  1. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during production were identified using the Ishikawa diagram and failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although defects are considerably reduced by the Taguchi method, a genetic algorithm technique is then applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.
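
    The Taguchi step ranks parameter levels by signal-to-noise ratio. A minimal smaller-the-better example, with hypothetical defect counts, is sketched below:

```python
# Minimal sketch: rank process-parameter levels by the Taguchi smaller-the-better
# signal-to-noise ratio, S/N = -10*log10(mean(y^2)). Defect counts are hypothetical.
import numpy as np

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical defect counts from replicate runs at two coating-thickness levels
runs = {"thickness_low": [3, 4, 2], "thickness_high": [7, 6, 8]}
for level, y in runs.items():
    print(level, round(sn_smaller_the_better(y), 2), "dB")
# The level with the larger (less negative) S/N ratio is preferred.
```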

  2. Identification of open quantum systems from observable time traces

    DOE PAGES

    Zhang, Jun; Sarovar, Mohan

    2015-05-27

    Estimating the parameters that dictate the dynamics of a quantum system is an important task for quantum information processing and quantum metrology, as well as fundamental physics. In this paper we develop a method for parameter estimation for Markovian open quantum systems using a temporal record of measurements on the system. The method is based on system realization theory and is a generalization of our previous work on identification of Hamiltonian parameters.

  3. Kepler: A Search for Terrestrial Planets - SOC 9.3 DR25 Pipeline Parameter Configuration Reports

    NASA Technical Reports Server (NTRS)

    Campbell, Jennifer R.

    2017-01-01

    This document describes the manner in which the pipeline and algorithm parameters for the Kepler Science Operations Center (SOC) science data processing pipeline were managed. This document is intended for scientists and software developers who wish to better understand the software design for the final Kepler codebase (SOC 9.3) and the effect of the software parameters on the Data Release (DR) 25 archival products.

  4. Harnessing the Sun for development: Actions for consideration by the international community at the UN Conference on New and Renewable Sources of Energy for promoting the use of renewable energy in developing countries

    NASA Astrophysics Data System (ADS)

    Jhirad, D. J.; Mubayi, V.; Weingart, J.

    1981-08-01

    The technical and economic evidence for solar industrial process heat is reviewed, highlighting the fact that financial parameters such as tax credits and depreciation allowances play a very large role in determining the economic competitiveness of solar investments. An analysis of the energy (and oil) consumed in providing industrial process heat in a number of selected developing countries is presented. Solar industrial process heat technology is discussed, including the operating experience of several demonstration plants in the US. Solar ponds are also described briefly. A financial and economic analysis of solar industrial process heat systems under different assumptions on future oil prices and various financial parameters is given. Financial analyses are summarized for a solar industrial process heat retrofit of a brewery in Zimbabwe and for a high-efficiency system operating under financial conditions typical of the US and a number of other industrialized nations. A set of recommended policy actions is presented for countries wishing to enhance the commercial feasibility of renewable energy technologies in the commercial and industrial sectors.

  5. Method of Individual Forecasting of Technical State of Logging Machines

    NASA Astrophysics Data System (ADS)

    Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.

    2018-03-01

    Developing a model that evaluates the probability of failure requires knowledge of how the technical-condition parameters of machines in service change over time. Studying these regularities requires stochastic models that take into account the physical nature of the degradation of the machines' structural elements, the technology of their production, the stochastic properties of the technical-state parameters, and the conditions and modes of operation.

  6. Simulation of dendritic growth reveals necessary and sufficient parameters to describe the shapes of dendritic trees

    NASA Astrophysics Data System (ADS)

    Trottier, Olivier; Ganguly, Sujoy; Bowne-Anderson, Hugo; Liang, Xin; Howard, Jonathon

    For the last 120 years, the development of neuronal shapes has been of great interest to the scientific community. Over the last 30 years, significant work has been done on the molecular processes responsible for dendritic development. In our ongoing research, we use the class IV sensory neurons of the Drosophila melanogaster larva as a model system to understand the growth of dendritic arbors. Our main goal is to elucidate the mechanisms that the neuron uses to determine the shape of its dendritic tree. We have observed the development of the class IV neuron's dendritic tree in the larval stage and have concluded that morphogenesis is defined by 3 distinct processes: 1) branch growth, 2) branching and 3) branch retraction. As the first step towards understanding dendritic growth, we have implemented these three processes in a computational model. Our simulations are able to reproduce the branch length distribution, number of branches and fractal dimension of the class IV neurons for a small range of parameters.
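
    The three processes listed above lend themselves to a very simple stochastic simulation. The rates in the sketch below are illustrative placeholders, not the fitted values from the class IV neuron data:

```python
# Minimal sketch of the three processes: branch growth, branching, and retraction,
# applied to a list of branch tips in discrete time steps. All rates are assumed.
import random

random.seed(0)
GROWTH, BRANCH_P, RETRACT_P, DT, STEPS = 1.0, 0.05, 0.02, 1.0, 200

branches = [0.0]                      # lengths of active branch tips (um)
for _ in range(STEPS):
    new = []
    for length in branches:
        if random.random() < RETRACT_P:
            length -= GROWTH * DT     # retraction
            if length <= 0:
                continue              # branch fully retracted
        else:
            length += GROWTH * DT     # growth
            if random.random() < BRANCH_P:
                new.append(0.0)       # branching: a new tip appears
        new.append(length)
    branches = new

print(f"{len(branches)} branch tips, total dendrite length {sum(branches):.0f} um")
```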

  7. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    NASA Astrophysics Data System (ADS)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d) usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  8. Software Analytical Instrument for Assessment of the Process of Casting Slabs

    NASA Astrophysics Data System (ADS)

    Franěk, Zdeněk; Kavička, František; Štětina, Josef; Masarik, Miloš

    2010-06-01

    The paper describes the design and function of software for assessing the slab casting process. The program system LITIOS was developed and implemented at EVRAZ Vitkovice Steel Ostrava on the continuous casting equipment (hereafter ECC). The system works on a data warehouse of casting process parameters and slab quality parameters. It enables an ECC technologist to analyze the course of each cast melt and, using statistical methods, to determine the influence of individual process parameters on the quality of the final slabs. The system also enables long-term monitoring and optimization of production.

  9. A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer

    PubMed Central

    Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie

    2014-01-01

    Sensor simulators can be used in forecasting the imaging quality of a new hyperspectral imaging spectrometer, and generating simulated data for the development and validation of the data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in the hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. In order to enhance the simulation accuracy, spatial interpolation-resampling, which is implemented before the spatial degradation, is developed to compromise the direction error and the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex grating optical system is accurately modeled by its configuration parameters. The non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the parameters of modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability, band width of the monochromator for the spectral calibration, and the integrating sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric response of the sensor simulator, respectively. The experiment results indicate that the sensor simulator is valid. PMID:25615727

  10. A computer model for liquid jet atomization in rocket thrust chambers

    NASA Astrophysics Data System (ADS)

    Giridharan, M. G.; Lee, J. G.; Krishnan, A.; Yang, H. Q.; Ibrahim, E.; Chuech, S.; Przekwas, A. J.

    1991-12-01

    The process of atomization has been used as an efficient means of burning liquid fuels in rocket engines, gas turbine engines, internal combustion engines, and industrial furnaces. Despite its widespread application, this complex hydrodynamic phenomenon has not been well understood, and predictive models for this process are still in their infancy. The difficulty in simulating the atomization process arises from the relatively large number of parameters that influence it, including the details of the injector geometry, liquid and gas turbulence, and the operating conditions. In this study, numerical models are developed from first principles, to quantify factors influencing atomization. For example, the surface wave dynamics theory is used for modeling the primary atomization and the droplet energy conservation principle is applied for modeling the secondary atomization. The use of empirical correlations has been minimized by shifting the analyses to fundamental levels. During applications of these models, parametric studies are performed to understand and correlate the influence of relevant parameters on the atomization process. The predictions of these models are compared with existing experimental data. The main tasks of this study were the following: development of a primary atomization model; development of a secondary atomization model; development of a model for impinging jets; development of a model for swirling jets; and coupling of the primary atomization model with a CFD code.

  11. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
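
    The relative sensitivity function mentioned above can be approximated by finite differences of the cost function. The toy model below is only a stand-in for the sigma-coordinate sediment model:

```python
# Minimal sketch of a relative sensitivity check by finite differences:
# S_p = (dJ/dp) * (p/J), where J is a model-data misfit (cost) and p a parameter.
# The "model" is a toy settling/erosion stand-in, not the adjoint sediment model.
import numpy as np

obs = np.array([0.8, 0.6, 0.45, 0.35])

def model(settling_velocity, erosion_rate, t=np.arange(1, 5)):
    return erosion_rate * np.exp(-settling_velocity * t)   # toy concentration curve

def cost(params):
    return 0.5 * np.sum((model(*params) - obs) ** 2)

params = np.array([0.2, 1.0])           # [settling velocity, erosion rate]
J = cost(params)
for i, name in enumerate(["settling_velocity", "erosion_rate"]):
    dp = 1e-6 * params[i]
    perturbed = params.copy()
    perturbed[i] += dp
    dJdp = (cost(perturbed) - J) / dp
    print(name, "relative sensitivity:", round(dJdp * params[i] / J, 3))
```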

  12. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for the sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
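
    The spirit of the three-step approach, screening out insensitive parameters and then applying the downhill simplex method to the rest, is sketched below with a toy evaluation metric; SciPy's Nelder-Mead implementation stands in for the tuning step, and all thresholds and parameter values are assumptions:

```python
# Minimal sketch: one-at-a-time sensitivity screening followed by Nelder-Mead tuning
# of only the sensitive parameters. The "skill" function is a toy stand-in.
import numpy as np
from scipy.optimize import minimize

def skill(params):
    """Toy model-evaluation metric (lower is better); stands in for the GCM metric."""
    a, b, c = params
    return (a - 1.2) ** 2 + 10.0 * (b - 0.4) ** 2 + 0.001 * c ** 2

x0 = np.array([0.5, 1.0, 1.0])
base = skill(x0)
sens = [abs(skill(x0 + 0.1 * np.eye(3)[i]) - base) for i in range(3)]
sensitive = [i for i, s in enumerate(sens) if s > 0.01 * base]   # keep only impactful ones

def reduced(x_sub):
    x = x0.copy()
    x[sensitive] = x_sub
    return skill(x)

res = minimize(reduced, x0[sensitive], method="Nelder-Mead")
print("tuned sensitive parameters:", res.x, "metric:", res.fun)
```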

  13. Model-based high-throughput design of ion exchange protein chromatography.

    PubMed

    Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo

    2016-08-12

    This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate it has the ability to strongly accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Simulation of aerobic and anaerobic biodegradation processes at a crude oil spill site

    USGS Publications Warehouse

    Essaid, Hedeff I.; Bekins, Barbara A.; Godsy, E. Michael; Warren, Ean; Baedecker, Mary Jo; Cozzarelli, Isabelle M.

    1995-01-01

    A two-dimensional, multispecies reactive solute transport model with sequential aerobic and anaerobic degradation processes was developed and tested. The model was used to study the field-scale solute transport and degradation processes at the Bemidji, Minnesota, crude oil spill site. The simulations included the biodegradation of volatile and nonvolatile fractions of dissolved organic carbon by aerobic processes, manganese and iron reduction, and methanogenesis. Model parameter estimates were constrained by published Monod kinetic parameters, theoretical yield estimates, and field biomass measurements. Despite the considerable uncertainty in the model parameter estimates, results of simulations reproduced the general features of the observed groundwater plume and the measured bacterial concentrations. In the simulation, 46% of the total dissolved organic carbon (TDOC) introduced into the aquifer was degraded. Aerobic degradation accounted for 40% of the TDOC degraded. Anaerobic processes accounted for the remaining 60% of degradation of TDOC: 5% by Mn reduction, 19% by Fe reduction, and 36% by methanogenesis. Thus anaerobic processes account for more than half of the removal of DOC at this site.
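
    The Monod-kinetics structure of such sequential aerobic/anaerobic degradation can be illustrated with a toy two-pathway model. All rate constants, half-saturation constants and stoichiometric factors below are assumptions, not the calibrated Bemidji values:

```python
# Toy sketch of sequential-electron-acceptor degradation with Monod terms:
# aerobic respiration plus O2-inhibited Fe(III) reduction. All constants are assumed.
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y):
    doc, o2, fe3, x = y                     # dissolved organic carbon, O2, Fe(III), biomass
    doc, o2, fe3 = max(doc, 0.0), max(o2, 0.0), max(fe3, 0.0)
    r_aer = 0.5 * x * (doc / (doc + 1.0)) * (o2 / (o2 + 0.1))                   # aerobic
    r_fe = 0.05 * x * (doc / (doc + 1.0)) * (fe3 / (fe3 + 5.0)) * (0.1 / (o2 + 0.1))  # Fe(III) reduction
    return [-(r_aer + r_fe),                # DOC consumed by both pathways
            -2.7 * r_aer,                   # assumed stoichiometric O2 demand
            -4.0 * r_fe,                    # assumed Fe(III) demand
            0.1 * (r_aer + r_fe) - 0.01 * x]  # biomass growth minus decay

sol = solve_ivp(rates, (0, 200), [10.0, 8.0, 50.0, 0.1])
print({k: round(v, 2) for k, v in zip(["DOC", "O2", "FeIII", "X"], sol.y[:, -1])})
```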

  15. Modeling microbiological and chemical processes in municipal solid waste bioreactor, Part II: Application of numerical model BIOKEMOD-3P.

    PubMed

    Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh

    2010-02-01

    Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires the knowledge of various process reactions and corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model and often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species. Increasing the number of species and user defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three-phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in application of this model to full-scale landfill operation.

  16. Single Plant Root System Modeling under Soil Moisture Variation

    NASA Astrophysics Data System (ADS)

    Yabusaki, S.; Fang, Y.; Chen, X.; Scheibe, T. D.

    2016-12-01

    A prognostic Virtual Plant-Atmosphere-Soil System (vPASS) model is being developed that integrates comprehensively detailed mechanistic single-plant modeling with microbial, atmospheric, and soil system processes in its immediate environment. Three broad areas of process module development are targeted: (1) incorporating models for root growth and function, rhizosphere interactions with bacteria and other organisms, and litter decomposition and soil respiration into established porous media flow and reactive transport models; (2) incorporating root/shoot transport, growth, photosynthesis and carbon allocation process models into an integrated plant physiology model; and (3) incorporating transpiration, volatile organic compound (VOC) emission, particulate deposition and local atmospheric processes into a coupled plant/atmosphere model. The integrated plant ecosystem simulation capability is being developed as open source process modules and associated interfaces under a modeling framework. The initial focus addresses the coupling of root growth, the vascular transport system, and soil under drought scenarios. Two types of root water uptake modeling approaches are tested: continuous root distribution and constitutive root system architecture. The continuous root distribution models are based on spatially averaged root development process parameters, which are relatively straightforward to accommodate in the continuum soil flow and reactive transport module. Conversely, the constitutive root system architecture models use root growth rates, root growth direction, and root branching to evolve explicit root geometries. The branching topologies require more complex data structures and additional input parameters. Preliminary results are presented for root model development and the vascular response to temporal and spatial variations in soil conditions.

  17. Overview of Icing Physics Relevant to Scaling

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2005-01-01

    An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes and the associated similarity parameters that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter and the freezing fraction. Other parameters dealing with the leading-edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated, a set of equations is produced which can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be gained by observing whether ice shapes change, and the extent of the change, when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.

  18. Integrating artificial and human intelligence into tablet production process.

    PubMed

    Gams, Matjaž; Horvat, Matej; Ožek, Matej; Luštrek, Mitja; Gradišek, Anton

    2014-12-01

    We developed a new machine learning-based method in order to facilitate the manufacturing processes of pharmaceutical products, such as tablets, in accordance with the Process Analytical Technology (PAT) and Quality by Design (QbD) initiatives. Our approach combines the data available from prior production runs with machine learning algorithms that are assisted by a human operator with expert knowledge of the production process. The process parameters encompass those that relate to the attributes of the precursor raw materials and those that relate to the manufacturing process itself. During manufacturing, our method allows the production operator to inspect the impacts of various settings of process parameters within their proven acceptable range with the purpose of choosing the most promising values in advance of the actual batch manufacture. The interaction between the human operator and the artificial intelligence system provides improved performance and quality. We successfully implemented the method on data provided by a pharmaceutical company for a particular product, a tablet, under development. We tested the accuracy of the method in comparison with some other machine learning approaches. The method is especially suitable for analyzing manufacturing processes characterized by a limited amount of data.

  19. Hyperspectral recognition of processing tomato early blight based on GA and SVM

    NASA Astrophysics Data System (ADS)

    Yin, Xiaojun; Zhao, SiFeng

    2013-03-01

    Early blight seriously affects the yield and quality of processing tomato. The leaf spectra of processing tomato were measured at different severity levels of early blight, and the spectral bands sensitive to the disease were used as the input vector of a support vector machine (SVM). A genetic algorithm (GA) was applied to optimize the SVM parameters so that the different severity levels of early blight could be recognized. The results show that the sensitive bands for the different severity levels are 628-643 nm and 689-692 nm. With these bands as the GA-SVM input vector, the best penalty parameter is 0.129 and the best kernel function parameter is 3.479. Classification training and testing were carried out with polynomial, radial basis function, and sigmoid kernels; the radial basis function kernel gave the best classification model, with a training accuracy of 84.615% and a testing accuracy of 80.681%. Combining GA and SVM achieves multi-class recognition of processing tomato early blight and provides technical support for predicting the occurrence, development, and diffusion of the disease over large areas.
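
    The GA-tuned SVM workflow described above can be sketched as follows. The spectral features, the class labels, and the simple evolutionary loop (a random-search stand-in for the paper's genetic algorithm) are illustrative assumptions; only the idea of evolving the SVM penalty and RBF kernel parameters against cross-validated accuracy is taken from the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder spectral features: in the real study these would be reflectances
# in the 628-643 nm and 689-692 nm bands extracted from the leaf spectra.
X = rng.normal(size=(120, 6))
y = rng.integers(0, 3, size=120)          # three illustrative severity classes

def fitness(params):
    C, gamma = params
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

# A very small evolutionary loop: sample candidates around the current best
# and keep improvements (a stand-in for the full genetic algorithm).
best = np.array([1.0, 0.1])
best_fit = fitness(best)
for generation in range(20):
    candidates = best * rng.lognormal(mean=0.0, sigma=0.5, size=(10, 2))
    for cand in candidates:
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f

print(f"best C={best[0]:.3f}, gamma={best[1]:.3f}, CV accuracy={best_fit:.3f}")
```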

  20. Development of a distributed-parameter mathematical model for simulation of cryogenic wind tunnels

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1983-01-01

    A one-dimensional distributed-parameter dynamic model of a cryogenic wind tunnel was developed which accounts for internal and external heat transfer, viscous momentum losses, and slotted-test-section dynamics. Boundary conditions imposed by liquid-nitrogen injection, gas venting, and the tunnel fan were included. A time-dependent numerical solution to the resultant set of partial differential equations was obtained on a CDC CYBER 203 vector-processing digital computer at a usable computational rate. Preliminary computational studies were performed by using parameters of the Langley 0.3-Meter Transonic Cryogenic Tunnel. Studies were performed by using parameters from the National Transonic Facility (NTF). The NTF wind-tunnel model was used in the design of control loops for Mach number, total temperature, and total pressure and for determining interactions between the control loops. It was employed in the application of optimal linear-regulator theory and eigenvalue-placement techniques to develop Mach number control laws.

  1. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL, and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool built specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can therefore provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems under the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.
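
    A minimal sketch of the HPC side of such a tool is shown below: it writes a SLURM job-array script in which each array task would run one parallel PEST worker, then submits it with sbatch. The worker wrapper script name, the paths, and the resource requests are hypothetical placeholders, not details of the tool described in the abstract, and a SLURM installation is assumed.

```python
import subprocess
from pathlib import Path

def write_worker_array_job(n_workers, workdir, script_name="pest_workers.sbatch"):
    """Write a SLURM job-array script in which each task runs one PEST worker.

    run_pest_worker.sh is a hypothetical wrapper around the parallel PEST
    worker executable; adapt the command and resources to the local setup.
    """
    script = Path(workdir) / script_name
    script.write_text(
        "#!/bin/bash\n"
        "#SBATCH --job-name=wrfhydro-pest\n"
        f"#SBATCH --array=1-{n_workers}\n"
        "#SBATCH --time=24:00:00\n"
        "#SBATCH --ntasks=1\n"
        f"cd {workdir}/worker_${{SLURM_ARRAY_TASK_ID}}\n"
        "./run_pest_worker.sh\n"
    )
    return script

if __name__ == "__main__":
    # In practice workdir would be a scratch directory shared by all workers.
    job_script = write_worker_array_job(n_workers=32, workdir=".")
    subprocess.run(["sbatch", str(job_script)], check=True)  # needs SLURM
```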

  2. Process characterization and Design Space definition.

    PubMed

    Hakemeyer, Christian; McKnight, Nathan; St John, Rick; Meier, Steven; Trexler-Schmidt, Melody; Kelley, Brian; Zettl, Frank; Puskeiler, Robert; Kleinjans, Annika; Lim, Fred; Wurth, Christine

    2016-09-01

    Quality by design (QbD) is a global regulatory initiative with the goal of enhancing pharmaceutical development through the proactive design of the pharmaceutical manufacturing process and controls to consistently deliver the intended performance of the product. The principles of pharmaceutical development relevant to QbD are described in the ICH guidance documents (ICHQ8-11). An integrated set of risk assessments and their related elements developed at Roche/Genentech were designed to provide an overview of product and process knowledge for the production of a recombinant monoclonal antibody (MAb). This chapter describes the tools used for the characterization and validation of a MAb manufacturing process under the QbD paradigm. This comprises risk assessments for the identification of potential Critical Process Parameters (pCPPs), statistically designed experimental studies as well as studies assessing the linkage of the unit operations. The outcome of the studies is the classification of process parameters according to their criticality and the definition of appropriate acceptable ranges of operation. The process and product knowledge gained in these studies can lead to the approval of a Design Space. Additionally, the information gained in these studies is used to define the 'impact' that the manufacturing process can have on the variability of the CQAs, which is used to define the testing and monitoring strategy. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  3. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
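
    A stripped-down numerical illustration of the idea is given below: two competing models are defined for each of two processes, each with a random parameter, and the process sensitivity index for one process is estimated as the variance of conditional means (outer loop over that process's model choice and parameter, inner expectation over everything else) divided by the total output variance. The toy models, equal model weights, and sample sizes are assumptions for demonstration, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two competing "recharge" models and two competing "geology" models, each
# with its own random parameter (purely illustrative functional forms).
recharge_models = [lambda p: 0.2 * p, lambda p: 0.1 * p**1.5]
geology_models  = [lambda k: 10.0 + k, lambda k: 5.0 + 2.0 * k]

def output(i_r, p, i_g, k):
    """Toy system output combining one recharge and one geology model."""
    return recharge_models[i_r](p) * geology_models[i_g](k)

def sample_output(n, fix_recharge=None):
    """Monte Carlo sample of the output; optionally fix the recharge process
    (model index and parameter) to compute a conditional expectation."""
    ys = []
    for _ in range(n):
        if fix_recharge is None:
            i_r, p = rng.integers(2), rng.uniform(1, 3)
        else:
            i_r, p = fix_recharge
        i_g, k = rng.integers(2), rng.uniform(0, 2)
        ys.append(output(i_r, p, i_g, k))
    return np.array(ys)

total_var = sample_output(20000).var()

# Variance of the conditional means over the recharge process (model choice
# plus parameter), normalized by the total variance, gives the process
# sensitivity index for recharge.
cond_means = []
for _ in range(400):
    fixed = (int(rng.integers(2)), rng.uniform(1, 3))
    cond_means.append(sample_output(200, fix_recharge=fixed).mean())
ps_recharge = np.var(cond_means) / total_var
print(f"process sensitivity index (recharge) ~ {ps_recharge:.2f}")
```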

  4. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  5. Simulation of Structural Transformations in Heating of Alloy Steel

    NASA Astrophysics Data System (ADS)

    Kurkin, A. S.; Makarov, E. L.; Kurkin, A. B.; Rubtsov, D. E.; Rubtsov, M. E.

    2017-07-01

    A mathematical model for computer simulation of structural transformations in an alloy steel under the conditions of the thermal cycle of multipass welding is presented. The austenitic transformation during heating and the processes of decomposition of bainite and martensite during repeated heating are considered. A method for determining the necessary temperature-time parameters of the model from the chemical composition of the steel is described. Published data are processed and the results used to derive regression models of the temperature ranges and parameters of transformation kinetics of alloy steels. The method developed is used in computer simulation of the process of multipass welding of pipes by the finite-element method.

  6. Analytical Modelling and Optimization of the Temperature-Dependent Dynamic Mechanical Properties of Fused Deposition Fabricated Parts Made of PC-ABS.

    PubMed

    Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal

    2016-11-04

    Fused deposition modeling (FDM) additive manufacturing has been intensively used for many industrial applications due to its attractive advantages over traditional manufacturing processes. The process parameters used in FDM have significant influence on the part quality and its properties. This process produces the plastic part through complex mechanisms and it involves complex relationships between the manufacturing conditions and the quality of the processed part. In the present study, the influence of multi-level manufacturing parameters on the temperature-dependent dynamic mechanical properties of FDM processed parts was investigated using IV-optimality response surface methodology (RSM) and multilayer feed-forward neural networks (MFNNs). The process parameters considered for optimization and investigation are slice thickness, raster to raster air gap, deposition angle, part print direction, bead width, and number of perimeters. Storage compliance and loss compliance were considered as response variables. The effect of each process parameter was investigated using developed regression models and multiple regression analysis. The surface characteristics are studied using a scanning electron microscope (SEM). Furthermore, the performance of the optimum conditions was determined and validated by conducting a confirmation experiment. The comparison between the experimental values and the predicted values by IV-Optimal RSM and MFNN was conducted for each experimental run and results indicate that the MFNN provides better predictions than IV-Optimal RSM.
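
    A rough sketch of the neural-network part of such a study is given below, using scikit-learn's MLPRegressor as a generic stand-in for the multilayer feed-forward network and synthetic data in place of the measured storage compliance; the parameter columns and response function are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)

# Columns stand for slice thickness, air gap, deposition angle, print
# direction, bead width, and number of perimeters (synthetic placeholders).
X = rng.uniform(size=(200, 6))
# Synthetic "storage compliance" response with noise, for illustration only.
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] * X[:, 4] + rng.normal(0, 0.05, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("test R^2:", r2_score(y_test, model.predict(X_test)))
```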

  7. Analytical Modelling and Optimization of the Temperature-Dependent Dynamic Mechanical Properties of Fused Deposition Fabricated Parts Made of PC-ABS

    PubMed Central

    Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal

    2016-01-01

    Fused deposition modeling (FDM) additive manufacturing has been intensively used for many industrial applications due to its attractive advantages over traditional manufacturing processes. The process parameters used in FDM have significant influence on the part quality and its properties. This process produces the plastic part through complex mechanisms and it involves complex relationships between the manufacturing conditions and the quality of the processed part. In the present study, the influence of multi-level manufacturing parameters on the temperature-dependent dynamic mechanical properties of FDM processed parts was investigated using IV-optimality response surface methodology (RSM) and multilayer feed-forward neural networks (MFNNs). The process parameters considered for optimization and investigation are slice thickness, raster to raster air gap, deposition angle, part print direction, bead width, and number of perimeters. Storage compliance and loss compliance were considered as response variables. The effect of each process parameter was investigated using developed regression models and multiple regression analysis. The surface characteristics are studied using scanning electron microscope (SEM). Furthermore, performance of optimum conditions was determined and validated by conducting confirmation experiment. The comparison between the experimental values and the predicted values by IV-Optimal RSM and MFNN was conducted for each experimental run and results indicate that the MFNN provides better predictions than IV-Optimal RSM. PMID:28774019

  8. Application of quality by design concepts in the development of fluidized bed granulation and tableting processes.

    PubMed

    Djuris, Jelena; Medarevic, Djordje; Krstic, Marko; Djuric, Zorica; Ibric, Svetlana

    2013-06-01

    This study illustrates the application of experimental design and multivariate data analysis in defining the design space for granulation and tableting processes. According to the quality by design concepts, critical quality attributes (CQAs) of granules and tablets, as well as critical parameters of granulation and tableting processes, were identified and evaluated. Acetaminophen was used as the model drug, and one of the study aims was to investigate the possibility of the development of immediate- or extended-release acetaminophen tablets. Granulation experiments were performed in the fluid bed processor using polyethylene oxide polymer as a binder in the direct granulation method. Tablets were compressed in a laboratory eccentric tablet press. The first set of experiments was organized according to a Plackett-Burman design, followed by the full factorial experimental design. Principal component analysis and partial least squares regression were applied as the multivariate analysis techniques. By using these different methods, CQAs and process parameters were identified and quantified. Furthermore, an in-line method was developed to monitor the temperature during the fluidized bed granulation process, to foresee possible defects in granule CQAs. Various control strategies that are based on the process understanding and assure desired quality attributes of the product are proposed. Copyright © 2013 Wiley Periodicals, Inc.
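
    As a hedged illustration of the multivariate-analysis step, the sketch below fits a partial least squares (PLS) regression relating process parameters to CQAs using scikit-learn; the parameter names, responses, and data are synthetic placeholders rather than values from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)

# Rows: granulation/tableting runs; columns: process parameters such as inlet
# air temperature, spray rate, binder amount, and compression force
# (synthetic values, for illustration of the PLS workflow only).
X = rng.normal(size=(30, 4))
# CQAs, e.g. granule size and tablet dissolution (again synthetic).
Y = np.column_stack([
    1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 30),
    0.8 * X[:, 2] + 0.4 * X[:, 3] + rng.normal(0, 0.1, 30),
])

pls = PLSRegression(n_components=2)
pls.fit(X, Y)

# Regression coefficients indicate which process parameters drive which CQA.
print(pls.coef_)
print("overall R^2:", pls.score(X, Y))
```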

  9. Food drying process by power ultrasound.

    PubMed

    de la Fuente-Blanco, S; Riera-Franco de Sarabia, E; Acosta-Aparicio, V M; Blanco-Blanco, A; Gallego-Juárez, J A

    2006-12-22

    Drying processes, which have a great significance in the food industry, are frequently based on the use of thermal energy. Nevertheless, such methods may produce structural changes in the products. Consequently, a great emphasis is presently given to novel treatments where the quality will be preserved. Such is the case of the application of high-power ultrasound, which represents an emergent and promising technology. During the last few years, we have been involved in the development of an ultrasonic dehydration process, based on the application of the ultrasonic vibration in direct contact with the product. Such a process has been the object of a detailed study at the laboratory stage on the influence of the different parameters involved. This paper deals with the development and testing of a prototype system for the application and evaluation of the process at a pre-industrial stage. The prototype is based on a high-power rectangular plate transducer, working at a frequency of 20 kHz, with a power capacity of about 100 W. In order to study mechanical and thermal effects, the system is provided with a series of sensors which permit monitoring the parameters of the process. Specific software has also been developed to facilitate data collection and analysis. The system has been tested with vegetable samples.

  10. Development of a Rational Modeling Approach for the Design, and Optimization of the Multifiltration Unit. Volume 1

    NASA Technical Reports Server (NTRS)

    Hand, David W.; Crittenden, John C.; Ali, Anisa N.; Bulloch, John L.; Hokanson, David R.; Parrem, David L.

    1996-01-01

    This thesis includes the development and verification of an adsorption model for analysis and optimization of the adsorption processes within the International Space Station multifiltration beds. The fixed bed adsorption model includes multicomponent equilibrium and both external and intraparticle mass transfer resistances. Single solute isotherm parameters were used in the multicomponent equilibrium description to predict the competitive adsorption interactions occurring during the adsorption process. The multicomponent equilibrium description used the Fictive Component Analysis to describe adsorption in unknown background matrices. Multicomponent isotherms were used to validate the multicomponent equilibrium description. Column studies were used to develop and validate external and intraparticle mass transfer parameter correlations for compounds of interest. The fixed bed model was verified using a shower and handwash ersatz water which served as a surrogate to the actual shower and handwash wastewater.

  11. Influence of Wire Electrical Discharge Machining (WEDM) process parameters on surface roughness

    NASA Astrophysics Data System (ADS)

    Yeakub Ali, Mohammad; Banu, Asfana; Abu Bakar, Mazilah

    2018-01-01

    In obtaining the best quality of engineering components, the surface quality of machined parts plays an important role. It improves the fatigue strength, wear resistance, and corrosion resistance of the workpiece. This paper investigates the effects of wire electrical discharge machining (WEDM) process parameters on the surface roughness of stainless steel using distilled water as the dielectric fluid and brass wire as the tool electrode. The parameters selected are open voltage, wire speed, wire tension, voltage gap, and off time. An empirical model was developed for the estimation of surface roughness. The analysis revealed that off time has a major influence on surface roughness. The optimum machining parameters for minimum surface roughness were found to be at a 10 V open voltage, 2.84 μs off time, 12 m/min wire speed, 6.3 N wire tension, and 54.91 V voltage gap.

  12. Characterizing a porous road pavement using surface impedance measurement: a guided numerical inversion procedure.

    PubMed

    Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel

    2013-12-01

    This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
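
    The inversion step can be sketched as below with SciPy's dual_annealing optimizer minimizing the misfit between a parametric surface-impedance model and measured data. The forward model here is a deliberately simple placeholder, not the Zwikker and Kosten model used in the paper, and the data are synthetic; only the simulated-annealing inversion pattern is illustrated.

```python
import numpy as np
from scipy.optimize import dual_annealing

freqs = np.linspace(200, 2000, 50)            # Hz, illustrative range

def surface_impedance(params, f):
    """Placeholder forward model: NOT the Zwikker-Kosten model used in the
    paper, just a smooth parametric curve standing in for Z_s(f)."""
    a, b, c = params
    return a + b / np.sqrt(f) + c * np.sqrt(f)

# Synthetic "measured" impedance generated from known parameters plus noise.
true_params = (2.0, 40.0, 0.02)
rng = np.random.default_rng(3)
z_meas = surface_impedance(true_params, freqs) + rng.normal(0, 0.05, freqs.size)

def misfit(params):
    return np.sum((surface_impedance(params, freqs) - z_meas) ** 2)

bounds = [(0.0, 10.0), (0.0, 100.0), (0.0, 0.1)]
result = dual_annealing(misfit, bounds, seed=42)
print("recovered parameters:", result.x)
```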

  13. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970's but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments was used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  14. Development of an aerosol microphysical module: Aerosol Two-dimensional bin module for foRmation and Aging Simulation (ATRAS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsui, H.; Koike, Makoto; Kondo, Yutaka

    2014-09-30

    Number concentrations, size distributions, and mixing states of aerosols are essential parameters for accurate estimation of aerosol direct and indirect effects. In this study, we developed an aerosol module, designated Aerosol Two-dimensional bin module for foRmation and Aging Simulation (ATRAS), that can represent these parameters explicitly by considering new particle formation (NPF), black carbon (BC) aging, and secondary organic aerosol (SOA) processes. A two-dimensional bin representation is used for particles with dry diameters from 40 nm to 10 µm to resolve both aerosol size (12 bins) and BC mixing state (10 bins) for a total of 120 bins. The particles with diameters from 1 to 40 nm are resolved using an additional 8 size bins to calculate NPF. The ATRAS module was implemented in the WRF-chem model and applied to examine the sensitivity of simulated mass, number, size distributions, and optical and radiative parameters of aerosols to NPF, BC aging and SOA processes over East Asia during the spring of 2009. BC absorption enhancement by coating materials was about 50% over East Asia during the spring, and the contribution of SOA processes to the absorption enhancement was estimated to be 10 – 20% over northern East Asia and 20 – 35% over southern East Asia. A clear north-south contrast was also found between the impacts of NPF and SOA processes on cloud condensation nuclei (CCN) concentrations: NPF increased CCN concentrations at higher supersaturations (smaller particles) over northern East Asia, whereas SOA increased CCN concentrations at lower supersaturations (larger particles) over southern East Asia. Application of ATRAS to East Asia also showed that the impact of each process on each optical and radiative parameter depended strongly on the process and the parameter in question. The module can be used in the future as a benchmark model to evaluate the accuracy of simpler aerosol models and examine interactions between NPF, BC aging, and SOA processes under different meteorological conditions and emissions.

  15. Radar target classification studies: Software development and documentation

    NASA Astrophysics Data System (ADS)

    Kamis, A.; Garber, F.; Walton, E.

    1985-09-01

    Three computer programs were developed to process and analyze calibrated radar returns. The first program, called DATABASE, was developed to create and manage a random accessed data base. The second program, called FTRAN DB, was developed to process horizontal and vertical polarization radar returns into different formats (i.e., time domain, circular polarizations and polarization parameters). The third program, called RSSE, was developed to simulate a variety of radar systems and to evaluate their ability to identify radar returns. Complete computer listings are included in the appendix volumes.

  16. The use of Graphic User Interface for development of a user-friendly CRS-Stack software

    NASA Astrophysics Data System (ADS)

    Sule, Rachmat; Prayudhatama, Dythia; Perkasa, Muhammad D.; Hendriyana, Andri; Fatkhan; Sardjito; Adriansyah

    2017-04-01

    The development of a user-friendly Common Reflection Surface (CRS) Stack software built with a Graphical User Interface (GUI) is described in this paper. The original CRS-Stack software developed by the WIT Consortium is compiled in a unix/linux environment and is not user-friendly: the user must write the commands and parameters manually in a script file. Because of this limitation the CRS-Stack has remained an unpopular method, although applying it is a promising way to obtain better seismic sections with improved reflector continuity and S/N ratio. After obtaining successful results in tests using several seismic data sets belonging to oil companies in Indonesia, the idea arose to develop a user-friendly software in our own laboratory. A graphical user interface is a type of user interface that allows people to interact with computer programs in a better way: rather than typing commands and module parameters, users interact with graphical icons and visual indicators, and the use of complicated seismic unix shell scripts can be avoided. The Java Swing GUI library is used to develop this CRS-Stack GUI, and every shell script that represents a seismic processing step is invoked from the Java environment. Besides providing an interactive GUI for CRS-Stack processing, the software is designed to help geophysicists manage projects with complex seismic processing procedures. The CRS-Stack GUI software is composed of an input directory, operators, and an output directory, which together define a seismic data processing workflow. The CRS-Stack processing workflow involves four steps: automatic CMP stack, initial CRS-Stack, optimized CRS-Stack, and CRS-Stack Supergather. These operations are visualized in an informative flowchart with a self-explanatory system that guides the user in entering the parameter values for each operation. The knowledge of the CRS-Stack processing procedure is thus preserved in the software, which is easy and efficient to learn. The software will continue to be developed, and new seismic processing workflows will be added to this GUI software in the future.

  17. Development of a copula-based particle filter (CopPF) approach for hydrologic data assimilation under consideration of parameter interdependence

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, G. H.; Baetz, B. W.; Li, Y. P.; Huang, K.

    2017-06-01

    In this study, a copula-based particle filter (CopPF) approach was developed for sequential hydrological data assimilation by considering parameter correlation structures. In CopPF, multivariate copulas are proposed to reflect parameter interdependence before the resampling procedure with new particles then being sampled from the obtained copulas. Such a process can overcome both particle degeneration and sample impoverishment. The applicability of CopPF is illustrated with three case studies using a two-parameter simplified model and two conceptual hydrologic models. The results for the simplified model indicate that model parameters are highly correlated in the data assimilation process, suggesting a demand for full description of their dependence structure. Synthetic experiments on hydrologic data assimilation indicate that CopPF can rejuvenate particle evolution in large spaces and thus achieve good performances with low sample size scenarios. The applicability of CopPF is further illustrated through two real-case studies. It is shown that, compared with traditional particle filter (PF) and particle Markov chain Monte Carlo (PMCMC) approaches, the proposed method can provide more accurate results for both deterministic and probabilistic prediction with a sample size of 100. Furthermore, the sample size would not significantly influence the performance of CopPF. Also, the copula resampling approach dominates parameter evolution in CopPF, with more than 50% of particles sampled by copulas in most sample size scenarios.
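
    One concrete instance of the copula resampling idea is sketched below for a Gaussian copula: particles are mapped to uniform scores through weighted empirical marginals, a correlation matrix is fitted in normal-score space, and new particles are drawn from that copula and mapped back through the empirical quantile functions. This is a simplified stand-in for the multivariate copulas proposed in the paper, with the marginal treatment and test data chosen purely for illustration.

```python
import numpy as np
from scipy import stats

def gaussian_copula_resample(particles, weights, n_new, rng):
    """Resample parameter particles from a Gaussian copula fitted to the
    weighted ensemble, preserving the parameter correlation structure."""
    n, d = particles.shape
    # 1. Map each parameter to uniform scores via its weighted empirical CDF.
    u = np.empty_like(particles)
    for j in range(d):
        order = np.argsort(particles[:, j])
        cdf = np.cumsum(weights[order])
        cdf /= cdf[-1]
        ranks = np.empty(n)
        ranks[order] = np.clip(cdf - 0.5 * weights[order], 1e-6, 1 - 1e-6)
        u[:, j] = ranks
    # 2. Fit the copula correlation in normal-score space.
    z = stats.norm.ppf(u)
    corr = np.corrcoef(z, rowvar=False)
    # 3. Draw new normal scores, convert to uniforms, and map back through the
    #    weighted empirical quantile function of each parameter.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    u_new = stats.norm.cdf(z_new)
    new_particles = np.empty((n_new, d))
    for j in range(d):
        order = np.argsort(particles[:, j])
        cdf = np.cumsum(weights[order]) / np.sum(weights)
        new_particles[:, j] = np.interp(u_new[:, j], cdf, particles[order, j])
    return new_particles

rng = np.random.default_rng(0)
parts = rng.multivariate_normal([1.0, 5.0], [[1.0, 0.8], [0.8, 1.0]], size=500)
w = np.full(500, 1.0 / 500)
print(gaussian_copula_resample(parts, w, n_new=5, rng=rng))
```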

  18. Basic research on design analysis methods for rotorcraft vibrations

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1991-01-01

    The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.

  19. Development of process parameters for 22 nm PMOS using 2-D analytical modeling

    NASA Astrophysics Data System (ADS)

    Maheran, A. H. Afifah; Menon, P. S.; Ahmad, I.; Shaari, S.; Faizah, Z. A. Noor

    2015-04-01

    Scaling and integration of the complementary metal-oxide-semiconductor field-effect transistor (CMOSFET) have become a major challenge. Innovation in transistor structures and integration of novel materials are necessary to sustain the performance trend. CMOS variability has become a very important concern in scaled technologies due to limitations of process control and to statistical variability related to fundamental discreteness and materials. Minimizing transistor variation through technology optimization while ensuring robust product functionality and performance is the major issue. In this article, the continuation study on process parameter variations is extended and presented in detail in order to achieve a minimum leakage current (ILEAK) in a planar PMOS transistor with a 22 nm gate length. Several device parameters are varied systematically using the Taguchi method to predict the optimum combination of fabrication process parameters. A combination of a high-permittivity (high-k) material and a metal gate is used for the gate structure, with titanium dioxide (TiO2) and tungsten silicide (WSix) as the materials. The L9 Taguchi orthogonal array is then used to organize the device simulations, and the signal-to-noise ratio (SNR) under the smaller-the-better (STB) scheme is analyzed through the percentage influence of each process parameter. The aim is a minimum ILEAK, where according to the International Technology Roadmap for Semiconductors (ITRS) 2011 the maximum ILEAK should not exceed 100 nA/µm. The final results show that the compensation implantation dose is the dominant factor, with a 68.49% contribution to lowering the device's leakage current. The optimum combination of process parameters results in a mean ILEAK of 3.96821 nA/µm, which is far below the predicted limit.
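
    The smaller-the-better signal-to-noise ratio used in such Taguchi analyses can be computed as in the short sketch below; the replicate leakage values are illustrative placeholders, not outputs of the study's simulations.

```python
import numpy as np

def snr_smaller_the_better(y):
    """Taguchi signal-to-noise ratio for a smaller-the-better response:
    SNR = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Illustrative leakage currents (nA/um) for three replicate simulations of
# one L9 run, placeholder numbers rather than values from the study.
run_leakage = [4.1, 3.9, 4.3]
print(f"SNR (STB) = {snr_smaller_the_better(run_leakage):.2f} dB")
```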

  20. Development of process parameters for 22 nm PMOS using 2-D analytical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maheran, A. H. Afifah; Menon, P. S.; Shaari, S.

    2015-04-24

    Scaling and integration of the complementary metal-oxide-semiconductor field-effect transistor (CMOSFET) have become a major challenge. Innovation in transistor structures and integration of novel materials are necessary to sustain the performance trend. CMOS variability has become a very important concern in scaled technologies due to limitations of process control and to statistical variability related to fundamental discreteness and materials. Minimizing transistor variation through technology optimization while ensuring robust product functionality and performance is the major issue. In this article, the continuation study on process parameter variations is extended and presented in detail in order to achieve a minimum leakage current (ILEAK) in a planar PMOS transistor with a 22 nm gate length. Several device parameters are varied systematically using the Taguchi method to predict the optimum combination of fabrication process parameters. A combination of a high-permittivity (high-k) material and a metal gate is used for the gate structure, with titanium dioxide (TiO2) and tungsten silicide (WSix) as the materials. The L9 Taguchi orthogonal array is then used to organize the device simulations, and the signal-to-noise ratio (SNR) under the smaller-the-better (STB) scheme is analyzed through the percentage influence of each process parameter. The aim is a minimum ILEAK, where according to the International Technology Roadmap for Semiconductors (ITRS) 2011 the maximum ILEAK should not exceed 100 nA/µm. The final results show that the compensation implantation dose is the dominant factor, with a 68.49% contribution to lowering the device's leakage current. The optimum combination of process parameters results in a mean ILEAK of 3.96821 nA/µm, which is far below the predicted limit.

  1. Optimization of Process Parameters for High Efficiency Laser Forming of Advanced High Strength Steels within Metallurgical Constraints

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, Ghazal; Griffiths, Jonathan; Dearden, Geoff; Edwardson, Stuart P.

    Laser forming (LF) has been shown to be a viable alternative for forming automotive grade advanced high strength steels (AHSS). Due to their high strength, heat sensitivity and low conventional formability, these steels show early fracture, larger springback, batch-to-batch inconsistency and high tool wear when formed conventionally. In this paper, optimisation of the LF process parameters has been conducted to further understand the impact of a surface heat treatment on DP1000. An FE numerical simulation has been developed to analyse the dynamic thermo-mechanical effects and has been verified against empirical data. The goal of the optimisation has been to develop a usable process window for the LF of AHSS within strict metallurgical constraints. Results indicate that it is possible to laser form this material; however, a complex relationship has been found between the generation and maintenance of hardness values in the heated zone. A laser surface hardening effect has been observed that could be beneficial to the efficiency of the process.

  2. Strategic planning for the International Space Station

    NASA Technical Reports Server (NTRS)

    Griner, Carolyn S.

    1990-01-01

    The concept for utilization and operations planning for the International Space Station Freedom was developed in a NASA Space Station Operations Task Force in 1986. Since that time the concept has been further refined to definitize the process and products required to integrate the needs of the international user community with the operational capabilities of the Station in its evolving configuration. The keystone to the process is the development of individual plans by the partners, with the parameters and formats common to the degree that electronic communications techniques can be effectively utilized, while maintaining the proper level and location of configuration control. The integration, evaluation, and verification of the integrated plan, called the Consolidated Operations and Utilization Plan (COUP), is being tested in a multilateral environment to prove out the parameters, interfaces, and process details necessary to produce the first COUP for Space Station in 1991. This paper will describe the concept, process, and the status of the multilateral test case.

  3. Fast machine-learning online optimization of ultra-cold-atom experiments.

    PubMed

    Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R

    2016-05-16

    We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
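
    A generic sketch of this kind of Gaussian-process online optimizer is given below, using scikit-learn's GaussianProcessRegressor and an expected-improvement acquisition over a one-dimensional control parameter. The objective function is a synthetic stand-in for the measured BEC quality, and the kernel, grid, and loop length are illustrative choices rather than the authors' settings.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def bec_quality(x):
    """Placeholder for the measured BEC quality as a function of one ramp
    parameter (the real learner controls many parameters per experiment)."""
    return -np.sin(3 * x) - x**2 + 0.7 * x + rng.normal(0, 0.02)

X = rng.uniform(-1, 2, size=(4, 1))                 # initial random settings
y = np.array([bec_quality(x[0]) for x in X])
grid = np.linspace(-1, 2, 400).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4,
                                  normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    # Expected improvement over the best observation so far.
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, bec_quality(x_next[0]))

print("best setting:", X[np.argmax(y)][0], "quality:", y.max())
```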

  4. Behavioral Competence as a Positive Youth Development Construct: A Conceptual Review

    PubMed Central

    Ma, Hing Keung

    2012-01-01

    Behavioral competence is delineated in terms of four parameters: (a) Moral and Social Knowledge, (b) Social Skills, (c) Positive Characters and Positive Attributes, and (d) Behavioral Decision Process and Action Taking. Since Ma's other papers in this special issue have already discussed the moral and social knowledge as well as the associated social skills in detail, this paper focuses on the last two parameters. It is hypothesized that the following twelve positive characters are highly related to behavioral competence: humanity, intelligence, courage, conscience, autonomy, respect, responsibility, naturalness, loyalty, humility, assertiveness, and perseverance. Large-scale empirical future studies should be conducted to substantiate the predictive validity of the complete set of these positive characters. The whole judgment and behavioral decision process is constructed based on the information processing approach. The direction of future studies should focus more on the complex input, central control, and output subprocesses and the interactions among these sub-processes. The understanding of the formation of behavior is crucial to whole-person education and positive youth development. PMID:22645434

  5. Fast machine-learning online optimization of ultra-cold-atom experiments

    PubMed Central

    Wigley, P. B.; Everitt, P. J.; van den Hengel, A.; Bastian, J. W.; Sooriyabandara, M. A.; McDonald, G. D.; Hardman, K. S.; Quinlivan, C. D.; Manju, P.; Kuhn, C. C. N.; Petersen, I. R.; Luiten, A. N.; Hope, J. J.; Robins, N. P.; Hush, M. R.

    2016-01-01

    We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our ‘learner’ discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system. PMID:27180805

  6. Technique for Determination of Rational Boundaries in Combining Construction and Installation Processes Based on Quantitative Estimation of Technological Connections

    NASA Astrophysics Data System (ADS)

    Gusev, E. V.; Mukhametzyanov, Z. R.; Razyapov, R. V.

    2017-11-01

    The problems of the existing methods for determining how to combine technologically interlinked construction processes and activities are considered under the modern construction conditions of various facilities. The necessity of identifying common parameters that characterize the nature of the interaction of all technology-related construction and installation processes and activities is shown. Research on the technologies of construction and installation processes for buildings and structures was conducted with the goal of determining a common parameter for evaluating the relationship between technologically interconnected processes and construction works. The result of this research was a quantitative evaluation of the interaction of construction and installation processes and activities, expressed as the minimum technologically necessary volume of the preceding process that allows one to plan and organize the execution of a subsequent, technologically interconnected process. This quantitative evaluation is used as the basis for calculating the optimum range of combination of processes and activities. The calculation method is based on graph theory. The authors applied a generic characterization parameter to reveal the technological links between construction and installation processes, and the proposed technique has adaptive properties that are key for wide use in forming organizational decisions. The article also describes the practical significance of the developed technique.

  7. The Soil Model Development and Intercomparison Panel (SoilMIP) of the International Soil Modeling Consortium (ISMC)

    NASA Astrophysics Data System (ADS)

    Vanderborght, Jan; Priesack, Eckart

    2017-04-01

    The Soil Model Development and Intercomparison Panel (SoilMIP) is an initiative of the International Soil Modeling Consortium. Its mission is to foster the further development of soil models that can predict soil functions and their changes (i) due to soil use and land management and (ii) due to external impacts of climate change and pollution. Since soil functions and soil threats are diverse but linked with each other, the overall aim is to develop holistic models that represent the key functions of the soil system and the links between them. These models should be scaled up and integrated in terrestrial system models that describe the feedbacks between processes in the soil and the other terrestrial compartments. We propose and illustrate a few steps that could be taken to achieve these goals. A first step is the development of scenarios that compare simulations by models that predict the same or different soil services. Scenarios can be considered at three different levels of comparison: scenarios that compare the numerics (accuracy but also speed) of models, scenarios that compare the effect of differences in process descriptions, and scenarios that compare simulations with experimental data. A second step involves the derivation of metrics or summary statistics that effectively compare model simulations and disentangle parameterization from model concept differences. These metrics can be used to evaluate how more complex model simulations can be represented by simpler models using an appropriate parameterization. A third step relates to the parameterization of models. Application of simulation models implies that appropriate model parameters have to be defined for a range of environmental conditions and locations. Spatial modelling approaches are used to derive parameter distributions. Considering that soils and their properties emerge from the interaction between physical, chemical and biological processes, the combination of spatial models with process models would lead to consistent parameter distributions and correlations and could potentially represent self-organizing processes in soils and landscapes.

  8. Dimensionless Analysis and Mathematical Modeling of Electromagnetic Levitation (EML) of Metals

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Shi, Zhe; Li, Donghui; Yang, Yindong; Zhang, Guifang; McLean, Alexander; Chattopadhyay, Kinnor

    2016-02-01

    Electromagnetic levitation (EML), a contactless metal melting method, can be used to produce ultra-pure metals and alloys. In the EML process, the levitation force exerted on the droplet is of paramount importance and is affected by many parameters. In this paper, the relationship between the levitation force and the parameters affecting the levitation process was investigated by dimensionless analysis. The general formula developed by dimensionless analysis was tested and evaluated by numerical modeling. This technique can be employed to design levitation systems for a variety of materials.

  9. Smart laser hole drilling for gas turbine combustors

    NASA Astrophysics Data System (ADS)

    Laraque, Edy

    1991-04-01

    A smart laser drilling system, which incorporates in-process air flow inspection of the holes and intelligent real-time process parameter corrections, is described. The system, along with careful laser parameter development, has proved efficient for producing cooling holes that meet the highest aeronautical standards. To date, the system has been used for percussion drilling of combustion chamber cooling holes. The system is considered very economical due to its drilling-on-the-fly capability, which can drill up to 3 holes of 0.025-in. diameter per second.

  10. Multi-Criteria selection of technology for processing ore raw materials

    NASA Astrophysics Data System (ADS)

    Gorbatova, E. A.; Emelianenko, E. A.; Zaretckii, M. V.

    2017-10-01

    The development of Computer-Aided Process Planning (CAPP) for the Ore Beneficiation process is considered. The set of parameters that defines the quality of the Ore Beneficiation process is identified. The ontological model of CAPP for the Ore Beneficiation process is described. A hybrid method for choosing the most appropriate variant of the Ore Beneficiation process, based on Logical Conclusion Rules and the Fuzzy Multi-Criteria Decision Making (MCDM) approach, is proposed.

  11. ISRU System Model Tool: From Excavation to Oxygen Production

    NASA Technical Reports Server (NTRS)

    Santiago-Maldonado, Edgardo; Linne, Diane L.

    2007-01-01

    In the late 80's, conceptual designs for an in situ oxygen production plant were documented in a study by Eagle Engineering [1]. In the "Summary of Findings" of this study, it is clearly pointed out that: "reported process mass and power estimates lack a consistent basis to allow comparison." The study goes on to say: "A study to produce a set of process mass, power, and volume requirements on a consistent basis is recommended." Today, approximately twenty years later, as humans plan to return to the moon and venture beyond, the need for flexible up-to-date models of the oxygen extraction production process has become even more clear. Multiple processes for the production of oxygen from lunar regolith are being investigated by NASA, academia, and industry. Three processes that have shown technical merit are molten regolith electrolysis, hydrogen reduction, and carbothermal reduction. These processes have been selected by NASA as the basis for the development of the ISRU System Model Tool (ISMT). In working to develop up-to-date system models for these processes NASA hopes to accomplish the following: (1) help in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the excavation and oxygen production processes, and (4) provide estimates on energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters. Also, as confidence and high fidelity is achieved with each component's model, new techniques and processes can be introduced and analyzed at a fraction of the cost of traditional hardware development and test approaches. A first generation ISRU System Model Tool has been used to provide inputs to the Lunar Architecture Team studies.

  12. Development of a General Form CO2 and Brine Flux Input Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansoor, K.; Sun, Y.; Carroll, S.

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.

  13. Quantification of chromatin condensation level by image processing.

    PubMed

    Irianto, Jerome; Lee, David A; Knight, Martin M

    2014-03-01

    The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts on gene expression. Chromatin condensation changes during cell cycle, progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
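
    A simplified Python re-implementation of the idea (the published procedure was developed in MATLAB) is sketched below: the Sobel gradient magnitude is thresholded inside the nucleus mask and the edge-pixel density is reported as an approximate condensation parameter. The threshold, normalization, and synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def chromatin_condensation_parameter(nucleus_img, nucleus_mask, edge_threshold=0.1):
    """Approximate chromatin condensation parameter: density of Sobel edge
    pixels inside the nucleus (a simplified version of the published method)."""
    img = nucleus_img.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalize intensities
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    grad = np.hypot(gx, gy)
    edges = (grad > edge_threshold) & nucleus_mask
    return edges.sum() / nucleus_mask.sum()           # edge pixels per nucleus pixel

# Synthetic example: a smooth random field standing in for a confocal image.
rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((128, 128)), sigma=4)
mask = np.ones_like(img, dtype=bool)
print("CCP ~", chromatin_condensation_parameter(img, mask))
```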

  14. Thermoplastic matrix composite processing model

    NASA Technical Reports Server (NTRS)

    Dara, P. H.; Loos, A. C.

    1985-01-01

    The effects the processing parameters pressure, temperature, and time have on the quality of continuous graphite fiber reinforced thermoplastic matrix composites were quantitatively assessed by defining the extent to which intimate contact and bond formation have occurred at successive ply interfaces. Two models are presented predicting the extents to which the ply interfaces have achieved intimate contact and cohesive strength. The models are based on experimental observation of compression molded laminates and neat resin conditions, respectively. Identified as the mechanism explaining the phenomenon by which the plies bond to themselves is the theory of autohesion (or self diffusion). Theoretical predictions from the Reptation Theory between autohesive strength and contact time are used to explain the effects of the processing parameters on the observed experimental strengths. The application of a time-temperature relationship for autohesive strength predictions is evaluated. A viscoelastic compression molding model of a tow was developed to explain the phenomenon by which the prepreg ply interfaces develop intimate contact.
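
    The reptation-theory prediction referred to above is commonly written in the following form; the symbols are generic and the expression is quoted as the standard scaling, not necessarily the authors' exact notation.

```latex
% Degree of autohesion D_h (interfacial strength relative to the fully healed
% strength) as commonly predicted by reptation theory for contact times t
% shorter than the reptation time t_r:
\[
  D_h \;=\; \frac{\sigma(t)}{\sigma_\infty} \;=\; \left(\frac{t}{t_r}\right)^{1/4},
  \qquad t \le t_r ,
\]
% with D_h = 1 for t > t_r; temperature enters through the temperature
% dependence of t_r, which is the basis for time-temperature shifting.
```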

  15. Prediction of multi performance characteristics of wire EDM process using grey ANFIS

    NASA Astrophysics Data System (ADS)

    Kumanan, Somasundaram; Nair, Anish

    2017-09-01

    Super alloys are used to fabricate components in ultra-supercritical power plants. These hard-to-machine materials are processed using non-traditional machining methods like wire cut electrical discharge machining and need attention. This paper details the multi-performance optimization of the wire EDM process using grey ANFIS. Experiments are designed to establish the performance characteristics of wire EDM such as surface roughness, material removal rate, wire wear rate and geometric tolerances. The control parameters are pulse on time, pulse off time, current, voltage, flushing pressure, wire tension, table feed and wire speed. Grey relational analysis is employed to optimize the multiple objectives. Analysis of variance of the grey grades is used to identify the critical parameters. A regression model is developed and used to generate datasets for the training of the proposed adaptive neuro fuzzy inference system. The developed prediction model is tested for its prediction ability.
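
    The grey relational analysis step can be sketched as below: each response is normalized according to whether it is larger-the-better or smaller-the-better, grey relational coefficients are computed against the ideal sequence with a distinguishing coefficient of 0.5, and their mean gives the grade for each run. The example responses are illustrative placeholders, not data from the study.

```python
import numpy as np

def grey_relational_grade(responses, larger_is_better, zeta=0.5):
    """Grey relational analysis: normalize each response column, compute grey
    relational coefficients against the ideal sequence, and average them into
    a single grade per experiment."""
    responses = np.asarray(responses, dtype=float)
    norm = np.empty_like(responses)
    for j, larger in enumerate(larger_is_better):
        col = responses[:, j]
        if larger:   # larger-the-better, e.g. material removal rate
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:        # smaller-the-better, e.g. surface roughness or wire wear
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm                                  # deviation from ideal
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

# Illustrative responses for four runs: [surface roughness, MRR, wire wear].
runs = [[2.1, 8.0, 0.12], [1.8, 6.5, 0.10], [2.5, 9.2, 0.15], [1.6, 5.9, 0.09]]
grades = grey_relational_grade(runs, larger_is_better=[False, True, False])
print("grey relational grades:", np.round(grades, 3))
```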

  16. Process Parameter Evaluation and Optimization for Advanced Material Development Final Report CRADA No. TC-1234-96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrubesh, L.; McGann, T. W.

    This project was established as a three-year collaboration to produce and characterize silica aerogels prepared by a Rapid Supercritical Extraction (RSCE) process to meet BNA, Inc. application requirements. The objectives of this project were to study the parameters necessary to produce optimized aerogel parts with narrowly specified properties and establish the range and limits of the process for producing such aerogels. The project also included development of new aerogel materials useful for high temperature applications. The results of the project were expected to set the conditions necessary to produce quantities of aerogels having particular specifications such as size, shape, density, and mechanical strength. BNA, Inc. terminated the project on April 7, 1999, 10 months prior to the anticipated completion date, due to termination of corporate funding for the project. The technical accomplishments achieved are outlined in Paragraph C below.

  17. [The history of development of evolutionary methods in St. Petersburg school of computer simulation in biology].

    PubMed

    Menshutkin, V V; Kazanskiĭ, A B; Levchenko, V F

    2010-01-01

    The history of the rise and development of evolutionary methods in the Saint Petersburg school of biological modelling is traced and analyzed. Some pioneering works in the simulation of ecological and evolutionary processes performed in the St. Petersburg school became exemplars for many followers in Russia and abroad. The individual-based approach became the crucial point in the history of the school as an adequate instrument for constructing models of biological evolution. This approach is natural for simulating the evolution of life-history parameters and adaptive processes in populations and communities. In some cases the simulated evolutionary process was used for solving an inverse problem, i.e., for estimating uncertain life-history parameters of a population. Evolutionary computation is one further application of this approach across a great many fields. The problems and prospects of ecological and evolutionary modelling in general are discussed.

  18. A method for predicting optimized processing parameters for surfacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupont, J.N.; Marder, A.R.

    1994-12-31

    Welding is used extensively for surfacing applications. To operate a surfacing process efficiently, the variables must be optimized to produce low levels of dilution with the substrate while maintaining high deposition rates. An equation for dilution in terms of the welding variables, thermal efficiency factors, and thermophysical properties of the overlay and substrate was developed by balancing energy and mass terms across the welding arc. To test the validity of the resultant dilution equation, the PAW, GTAW, GMAW, and SAW processes were used to deposit austenitic stainless steel onto carbon steel over a wide range of parameters. Arc efficiency measurements were conducted using a Seebeck arc welding calorimeter. Melting efficiency was determined based on knowledge of the arc efficiency. Dilution was determined for each set of processing parameters using a quantitative image analysis system. The pertinent equations indicate dilution is a function of arc power (corrected for arc efficiency), filler metal feed rate, melting efficiency, and thermophysical properties of the overlay and substrate. With the aid of the dilution equation, the effect of processing parameters on dilution is presented by a new processing diagram. A new method is proposed for determining dilution from welding variables. Dilution is shown to depend on the arc power, filler metal feed rate, arc and melting efficiency, and the thermophysical properties of the overlay and substrate. Calculated dilution levels were compared with measured values over a large range of processing parameters and good agreement was obtained. The results have been applied to generate a processing diagram which can be used to: (1) predict the maximum deposition rate for a given arc power while maintaining adequate fusion with the substrate, and (2) predict the resultant level of dilution with the substrate.
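
    The abstract does not reproduce the equation itself; the sketch below is an illustrative energy/mass balance of the same general form (arc power corrected for arc and melting efficiency, filler feed rate, and volumetric melting enthalpies), not the report's exact expression:

      def dilution(arc_power, arc_eff, melt_eff, feed_rate, h_filler, h_substrate):
          """Estimate dilution from a simple energy/mass balance (illustrative sketch).

          arc_power   : W, voltage times current
          arc_eff     : fraction of arc power transferred to the workpiece
          melt_eff    : fraction of transferred power used for melting
          feed_rate   : m^3/s of filler metal delivered
          h_filler    : J/m^3 to heat and melt the filler (overlay) metal
          h_substrate : J/m^3 to heat and melt the substrate
          """
          melting_power = arc_power * arc_eff * melt_eff
          substrate_rate = max(melting_power - feed_rate * h_filler, 0.0) / h_substrate
          return substrate_rate / (substrate_rate + feed_rate)   # melted substrate fraction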

  19. Investigation of Advanced Processed Single-Crystal Turbine Blade Alloys

    NASA Technical Reports Server (NTRS)

    Peters, B. J.; Biondo, C. M.; DeLuca, D. P.

    1995-01-01

    This investigation studied the influence of thermal processing and microstructure on the mechanical properties of the single-crystal, nickel-based superalloys PWA 1482 and PWA 1484. The objective of the program was to develop an improved single-crystal turbine blade alloy that is specifically tailored for use in hydrogen-fueled rocket engine turbopumps. High-gradient casting, hot isostatic pressing (HIP), and alternate heat treatment (HT) processing parameters were developed to produce pore-free, eutectic-free microstructures with different (gamma)' precipitate morphologies. Test materials were cast in high thermal gradient solidification (greater than 30 C/cm (137 F/in.)) casting furnaces for reduced dendrite arm spacing, improved chemical homogeneity, and reduced interdendritic pore size. The HIP processing was conducted in 40 cm (15.7 in.) diameter production furnaces using a set of parameters selected from a trial matrix study. Metallography was conducted on test samples taken from each respective trial run to characterize the as-HIP microstructure. Post-HIP alternate HT processes were developed for each of the two alloys. The goal of the alternate HT processing was to fully solution the eutectic gamma/(gamma)' phase islands and to develop a series of modified (gamma)' morphologies for subsequent characterization testing. This was accomplished by slow cooling through the (gamma)' solvus at controlled rates to precipitate volume fractions of large (gamma)'. Post-solution alternate HT parameters were established for each alloy providing additional volume fractions of finer precipitates. Screening tests included tensile, high-cycle fatigue (HCF), smooth and notched low-cycle fatigue (LCF), creep, and fatigue crack growth evaluations performed in air and high pressure (34.5 MPa (5 ksi)) hydrogen at room and elevated temperature. Under the most severe embrittling conditions (HCF and smooth and notched LCF in 34.5 MPa (5 ksi) hydrogen at 20 C (68 F)), screening test results showed increases in fatigue life typically on the order of 10X, when compared to the current Space Shuttle Main Engine (SSME) Alternate Turbopump (AT) blade alloy (PWA 1480).

  20. An improved probit method for assessment of domino effect to chemical process equipment caused by overpressure.

    PubMed

    Mingguang, Zhang; Juncheng, Jiang

    2008-10-30

    Overpressure is one important cause of domino effects in accidents involving chemical process equipment. Damage probability and the relative threshold value are two necessary parameters in the QRA of this phenomenon. Some simple models have been proposed based on scarce data or oversimplified assumptions. Hence, more data about damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and damage degree of equipment was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements of the present models were demonstrated through comparison with other models in the literature, taking into account parameters such as consistency between models and data and the depth of quantitativeness in QRA.
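
    Probit models of this kind typically take the form Y = k1 + k2 ln(P), with the damage probability obtained from the standard normal distribution after subtracting the conventional offset of 5. A minimal sketch with purely illustrative coefficients (not those developed in the paper) is shown below:

      from math import log
      from statistics import NormalDist

      def damage_probability(overpressure_pa, k1, k2):
          """Probit damage model Y = k1 + k2*ln(P); probability via the normal CDF.
          k1 and k2 are equipment-category coefficients (illustrative values only)."""
          probit = k1 + k2 * log(overpressure_pa)
          return NormalDist().cdf(probit - 5.0)

      # Example with made-up coefficients for a hypothetical equipment category.
      print(damage_probability(30_000.0, k1=-20.0, k2=2.5))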

  1. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image-based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return the best configuration of image processing and classification algorithms, and of their parameters, with respect to classification accuracy. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.

  2. Validation of column-based chromatography processes for the purification of proteins. Technical report No. 14.

    PubMed

    2008-01-01

    PDA Technical Report No. 14 has been written to provide current best practices, such as the application of risk-based decision making grounded in sound science, as a foundation for the validation of column-based chromatography processes, and to expand upon the information provided in Technical Report No. 42, Process Validation of Protein Manufacturing. The intent of this technical report is to provide an integrated validation life-cycle approach that begins with the use of process development data for the definition of operational parameters as a basis for validation, confirmation and/or minor adjustment of these parameters at manufacturing scale during production of conformance batches, and maintenance of the validated state throughout the product's life cycle.

  3. Evaluating the process parameters of the dry coating process using a 2(5-1) factorial design.

    PubMed

    Kablitz, Caroline Désirée; Urbanetz, Nora Anne

    2013-02-01

    A recent development in coating technology is dry coating, where polymer powder and liquid plasticizer are layered on the cores without using organic solvents or water. Several studies evaluating the process have been reported in the literature; however, little information about the critical process parameters (CPPs) is given. The aim of the study was the investigation and optimization of CPPs with respect to one of the critical quality attributes (CQAs), the coating efficiency of the dry coating process in a rotary fluid bed. Theophylline pellets were coated with hydroxypropyl methylcellulose acetate succinate as enteric film former and triethyl citrate and acetylated monoglyceride as plasticizers. A 2^(5-1) design of experiments (DoE) was created investigating five independent process parameters, namely coating temperature, curing temperature, feeding/spraying rate, air flow and rotor speed. The results were evaluated by multilinear regression using the software Modde® 7. It is shown that, generally, low feeding/spraying rates and low rotor speeds increase coating efficiency. High coating temperatures enhance coating efficiency, whereas medium curing temperatures have been found to be optimum in terms of coating efficiency. This study provides a scientific basis for the design of efficient dry coating processes with respect to coating efficiency.
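
    One common way to construct such a half-fraction design is to generate the full 2^4 factorial in four factors and derive the fifth column from the generator E = ABCD (defining relation I = ABCDE). A short sketch, with the factor names taken from the study but the coded levels purely illustrative:

      from itertools import product

      factors = ["coating_temp", "curing_temp", "feed_spray_rate", "air_flow", "rotor_speed"]

      runs = []
      for a, b, c, d in product((-1, +1), repeat=4):
          e = a * b * c * d                       # generator column E = ABCD
          runs.append(dict(zip(factors, (a, b, c, d, e))))

      print(len(runs), "runs")                    # 16 runs instead of the full 32
      for r in runs[:4]:
          print(r)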

  4. Study on voids of epoxy matrix composites sandwich structure parts

    NASA Astrophysics Data System (ADS)

    He, Simin; Wen, Youyi; Yu, Wenjun; Liu, Hong; Yue, Cheng; Bao, Jing

    2017-03-01

    Voids are the most common small defects in composite materials. Porosity is closely related to the properties of composite structures. The void formation behaviour in composite sandwich structural parts with carbon fiber reinforced epoxy resin skins was studied by adjusting the manufacturing process parameters. Composite laminates with different porosities were prepared using different process parameters. An ultrasonic non-destructive measurement method for the porosity was developed and verified through microscopic examination. The analysis results show that the compaction pressure during the manufacturing process influenced the porosity in the laminate area. Increasing the compaction pressure and compaction time reduces the porosity of the laminates. The bond-line between the honeycomb core and the carbon fiber reinforced epoxy resin skins was also analyzed through microscopic examination. The mechanical properties of sandwich structure composites were studied. The optimized process parameters and the ultrasonic porosity measurement method have been applied to the production of composite sandwich structure parts.

  5. Investigation of Laser Welding of Ti Alloys for Cognitive Process Parameters Selection.

    PubMed

    Caiazzo, Fabrizia; Caggiano, Alessandra

    2018-04-20

    Laser welding of titanium alloys is attracting increasing interest as an alternative to traditional joining techniques for industrial applications, with particular reference to the aerospace sector, where welded assemblies allow for the reduction of the buy-to-fly ratio, compared to other traditional mechanical joining techniques. In this research work, an investigation on laser welding of Ti-6Al-4V alloy plates is carried out through an experimental testing campaign, under different process conditions, in order to perform a characterization of the produced weld bead geometry, with the final aim of developing a cognitive methodology able to support decision-making about the selection of the suitable laser welding process parameters. The methodology is based on the employment of artificial neural networks able to identify correlations between the laser welding process parameters, with particular reference to the laser power, welding speed and defocusing distance, and the weld bead geometric features, on the basis of the collected experimental data.

  6. Investigation of Laser Welding of Ti Alloys for Cognitive Process Parameters Selection

    PubMed Central

    2018-01-01

    Laser welding of titanium alloys is attracting increasing interest as an alternative to traditional joining techniques for industrial applications, with particular reference to the aerospace sector, where welded assemblies allow for the reduction of the buy-to-fly ratio, compared to other traditional mechanical joining techniques. In this research work, an investigation on laser welding of Ti–6Al–4V alloy plates is carried out through an experimental testing campaign, under different process conditions, in order to perform a characterization of the produced weld bead geometry, with the final aim of developing a cognitive methodology able to support decision-making about the selection of the suitable laser welding process parameters. The methodology is based on the employment of artificial neural networks able to identify correlations between the laser welding process parameters, with particular reference to the laser power, welding speed and defocusing distance, and the weld bead geometric features, on the basis of the collected experimental data. PMID:29677114

  7. User's guide: Programs for processing altimeter data over inland seas

    NASA Technical Reports Server (NTRS)

    Au, A. Y.; Brown, R. D.; Welker, J. E.

    1989-01-01

    The programs described were developed to process GEODYN-formatted satellite altimeter data, and to apply the processed results to predict geoid undulations and gravity anomalies of inland sea areas. These programs are written in standard FORTRAN 77 and are designed to run on the NSESCC IBM 3081(MVS) computer. Because of the experimental nature of these programs they are tailored to the geographical area analyzed. The attached program listings are customized for processing the altimeter data over the Black Sea. Users interested in the Caspian Sea data are expected to modify each program, although the required modifications are generally minor. Program control parameters are defined in the programs via PARAMETER statements and/or DATA statements. Other auxiliary parameters, such as labels, are hard-wired into the programs. Large data files are read in or written out through different input or output units. The program listings of these programs are accompanied by sample IBM job control language (JCL) images. Familiarity with IBM JCL and the TEMPLATE graphic package is assumed.

  8. Development Of Simulation Model For Fluid Catalytic Cracking

    NASA Astrophysics Data System (ADS)

    Ghosh, Sobhan

    2010-10-01

    Fluid Catalytic Cracking (FCC) is the most widely used secondary conversion process in the refining industry for producing gasoline, olefins, and middle distillate from heavier petroleum fractions. There are more than 500 units in the world with a total processing capacity of about 17 to 20% of the crude capacity. The FCC catalyst is the most heavily consumed catalyst in the process industry. On one hand, FCC is quite flexible with respect to its ability to process a wide variety of crudes with a flexible product yield pattern; on the other hand, the interdependence of the major operating parameters makes the process extremely complex. An operating unit is self-balancing, and some fluctuations in the independent parameters are automatically adjusted by changes in the temperatures and flow rates at different sections. However, a good simulation model is very useful to the refiner to get the best out of the process, in terms of selecting the best catalyst and coping with day-to-day changes in feed quality and in the demands for different products from the FCC unit. In addition, a good model is of great help in designing the process units and peripherals. A simple empirical model is often adequate to monitor day-to-day operations, but such models are of no use in handling other problems such as catalyst selection or design/modification of the plant. For this, a rigorous kinetics-based model is required. Considering the complexity of the process, with a large number of chemical species undergoing numerous parallel and consecutive reactions, it is virtually impossible to develop a simulation model based purely on kinetic parameters. The most common approach is to settle for a semi-empirical model. We take up the key issues in developing an FCC model and the contribution of such models to the optimum operation of the plant.

  9. Reduced order model based on principal component analysis for process simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Y.; Malacina, A.; Biegler, L.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
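
    A minimal sketch of a PCA-based reduced-order model of this general kind, assuming snapshot CFD fields collected at sampled operating conditions and a simple linear regression from inputs to modal coefficients (the study's actual ROM construction may differ):

      import numpy as np

      def build_pca_rom(inputs, snapshots, n_modes=5):
          """Fit a PCA (POD) reduced-order model from CFD snapshots (illustrative).

          inputs    : (n_runs, n_params) sampled operating conditions
          snapshots : (n_runs, n_cells) CFD field results, one row per run
          Returns a callable mapping a new input vector to an approximate field.
          """
          mean = snapshots.mean(axis=0)
          _, _, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
          modes = vt[:n_modes]                              # principal components
          coeffs = (snapshots - mean) @ modes.T             # modal coefficients per run

          # Map inputs -> coefficients with a simple linear least-squares regression.
          design = np.hstack([np.ones((inputs.shape[0], 1)), inputs])
          beta, *_ = np.linalg.lstsq(design, coeffs, rcond=None)

          def predict(x_new):
              c = np.hstack([1.0, np.asarray(x_new)]) @ beta
              return mean + c @ modes
          return predict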

  10. Development of analysis technique to predict the material behavior of blowing agent

    NASA Astrophysics Data System (ADS)

    Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo

    2014-11-01

    In order to numerically simulate the foaming behavior of a mastic sealer containing a blowing agent, foaming and driving-force models are needed that incorporate the foaming characteristics. An elastic stress model is also required to represent the material behavior of the co-existing liquid and cured-polymer phases. It is important to determine thermal properties such as thermal conductivity and specific heat because the foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to explain the foaming process and the material behavior during and after the process. To obtain the material parameters in each model, the following experiments and numerical simulations are performed: a thermal test, a simple shear test and a foaming test. Error functions are defined as the differences between the experimental measurements and the numerical simulation results, and the parameters are then determined by minimizing these error functions. To ensure the validity of the obtained parameters, a confirmation simulation for each model is conducted by applying the determined parameters. Cross-verification is performed by measuring the foaming/shrinkage force, and its results tended to follow the experimental results. Interestingly, it was possible to estimate the micro-deformation occurring in the automobile roof surface by applying the proposed model to an oven process analysis. The application of the developed analysis technique will contribute to designs with minimized micro-deformation.

  11. Modeling and experimental characterization of Blackglas(TM) polymer pyrolysis to ceramic and thermodynamic characterization of Blackglas(TM) ceramic

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2000-10-01

    The transformation of Blackglas(TM) polymer to ceramic is characterized by TGA-RGA/MS, Si29 and C13 NMR. Si29 NMR reveals a dependence between the postcure temperature and the microstructure of the resin. The postcure temperature that appears to give optimal mechanical and oxidative properties of Blackglas(TM) ceramic is around 150°C. The pyrolysis processing models, which are the Lumped Parameters Model (LPM), the Mechanistic Kinetic Model (MKM) and the Redistribution Reaction Model (RRM), are developed to provide an effective window of processing parameters rather than a costly, time-consuming trial and error approach. The Lumped Parameters Model (LPM) is developed to study the effects of various parameters such as temperature, curing conditions and heating rates on mass loss during the pyrolysis of resin and green composites. It can be used for the model-predictive control of the pyrolysis process; The Mechanistic Kinetic Model (MKM) is developed on the basis of known chemistry and architecture of the polysiloxane for the transformation of Blackglas(TM) polymer to ceramic and the evolution of gases. The effects of various heating protocols on the outgassing kinetics have been studied to develop an optimum protocol for a rapid pyrolysis process which gives a composite with desirable mechanical properties; The Redistribution Reaction Model (RRM) is proposed to describe how the microcompositions of silicon oxycarbide change with respect to temperature, and to the ratio O/Si in the polymer precursor. A Thermodynamic Additivity Model (TAM) is developed to estimate the heat capacity, standard heat of formation and entropy of Blackglas(TM) ceramic by means of the Neumann Kopp rule and the available thermodynamic data of the Si-C and Si-O systems. Thermal stability of this ceramic is investigated by constructing predominance diagrams, and it is shown that the internal degradation reactions, which account for a significant loss of strength, will proceed further in the Blackglas(TM) matrix than in the Nicalon fibers. This probably will induce failure in the matrix at lower temperatures than in the fibers. The predominance diagrams also explain the high temperature oxidation, reduction and volatilization experiments on silicon and silicon carbide in high vacuum.

  12. Development of AHPDST Vulnerability Indexing Model for Groundwater Vulnerability Assessment Using Hydrogeophysical Derived Parameters and GIS Application

    NASA Astrophysics Data System (ADS)

    Mogaji, K. A.

    2017-04-01

    A bias-free vulnerability assessment map is greatly needed for planning groundwater quality protection schemes. This study developed a GIS-based AHPDST vulnerability index model for producing a groundwater vulnerability map of hard rock terrain in Nigeria by exploiting the potentials of the analytic hierarchy process (AHP) and Dempster-Shafer theory (DST) data mining models. The acquired borehole and geophysical data in the study area were processed to derive five groundwater vulnerability conditioning factors (GVCFs), namely recharge rate, aquifer transmissivity, hydraulic conductivity, transverse resistance and longitudinal conductance. The produced GVCF thematic maps were analyzed multi-criterially by employing the mechanisms of the AHP and DST models to determine the normalized weight (W) parameter for the GVCFs and the mass function factor (MFF) parameter for the GVCF thematic maps' class boundaries, respectively. Based on the application of the weighted linear average technique, the determined W and MFF parameters were synthesized to develop a groundwater vulnerability potential index (GVPI)-based AHPDST model algorithm. The developed model was applied to establish four GVPI mass/belief function indices. The estimates based on the applied GVPI belief function indices were processed in a GIS environment to create prospective groundwater vulnerability potential index maps. The most representative of the resulting vulnerability maps (the GVPIBel map) was used to produce the groundwater vulnerability potential zones (GVPZ) map for the area. The produced GVPZ map established that 48 and 52% of the areal extent are covered by the low/moderate and high vulnerability zones, respectively. The success and prediction rates of the produced GVPZ map were determined using the relative operating characteristic technique to be 82.3 and 77.7%, respectively. The results reveal that the developed GVPI-based AHPDST model algorithm is capable of producing an efficient groundwater vulnerability potential zone prediction map and of characterizing the predicted zones' uncertainty via the DST mechanism. The produced GVPZ map can be used by decision makers to formulate appropriate groundwater management strategies, and the approach may well be adopted in other hard rock regions of the world, especially in economically poor nations.
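
    A minimal sketch of the AHP weighting step, assuming the classical principal-eigenvector method and Saaty's consistency check; the pairwise judgements below are hypothetical, not those elicited in the study:

      import numpy as np

      def ahp_weights(pairwise):
          """Normalized weights from an AHP pairwise-comparison matrix via the
          principal eigenvector, plus the consistency ratio (illustrative)."""
          A = np.asarray(pairwise, dtype=float)
          n = A.shape[0]
          eigvals, eigvecs = np.linalg.eig(A)
          k = np.argmax(eigvals.real)
          w = np.abs(eigvecs[:, k].real)
          w /= w.sum()

          # Saaty consistency check (random index RI tabulated here for n = 3..5 only).
          ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]
          ci = (eigvals.real[k] - n) / (n - 1)
          return w, ci / ri

      # Hypothetical judgements for the five conditioning factors (GVCFs).
      A = np.array([[1, 2, 3, 4, 5],
                    [1/2, 1, 2, 3, 4],
                    [1/3, 1/2, 1, 2, 3],
                    [1/4, 1/3, 1/2, 1, 2],
                    [1/5, 1/4, 1/3, 1/2, 1]])
      weights, cr = ahp_weights(A)
      print(weights.round(3), "CR =", round(cr, 3))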

  13. Process modeling and parameter optimization using radial basis function neural network and genetic algorithm for laser welding of dissimilar materials

    NASA Astrophysics Data System (ADS)

    Ai, Yuewei; Shao, Xinyu; Jiang, Ping; Li, Peigen; Liu, Yang; Yue, Chen

    2015-11-01

    The welded joints of dissimilar materials have been widely used in automotive, ship and space industries. The joint quality is often evaluated by weld seam geometry, microstructures and mechanical properties. To obtain the desired weld seam geometry and improve the quality of welded joints, this paper proposes a process modeling and parameter optimization method to obtain the weld seam with minimum width and desired depth of penetration for laser butt welding of dissimilar materials. During the process, Taguchi experiments are conducted on the laser welding of the low carbon steel (Q235) and stainless steel (SUS301L-HT). The experimental results are used to develop the radial basis function neural network model, and the process parameters are optimized by genetic algorithm. The proposed method is validated by a confirmation experiment. Simultaneously, the microstructures and mechanical properties of the weld seam generated from optimal process parameters are further studied by optical microscopy and tensile strength test. Compared with the unoptimized weld seam, the welding defects are eliminated in the optimized weld seam and the mechanical properties are improved. The results show that the proposed method is effective and reliable for improving the quality of welded joints in practical production.
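
    A compact sketch of the surrogate-plus-genetic-algorithm idea, using scipy's RBFInterpolator as a stand-in for the paper's radial basis function neural network and a toy single-objective GA minimizing the predicted seam width; the data, bounds and response function below are entirely hypothetical:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)

      # Hypothetical experiments: parameters (power, speed, defocus) -> seam width.
      lo, hi = np.array([1.0, 10.0, -2.0]), np.array([3.0, 40.0, 2.0])
      X = rng.uniform(lo, hi, size=(27, 3))
      width = 0.5 + 0.4*X[:, 0] - 0.01*X[:, 1] + 0.05*X[:, 2]**2 + rng.normal(0, 0.02, 27)

      surrogate = RBFInterpolator(X, width, kernel="thin_plate_spline")

      # Minimal genetic algorithm over the surrogate.
      pop = rng.uniform(lo, hi, size=(40, 3))
      for _ in range(100):
          fitness = surrogate(pop)
          parents = pop[np.argsort(fitness)[:20]]                  # selection
          kids = (parents[rng.integers(0, 20, 20)] + parents[rng.integers(0, 20, 20)]) / 2
          kids += rng.normal(0, 0.05, kids.shape) * (hi - lo)      # crossover + mutation
          pop = np.clip(np.vstack([parents, kids]), lo, hi)

      best = pop[np.argmin(surrogate(pop))]
      print("predicted optimum parameters:", best.round(3))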

  14. Continuous welding of unidirectional fiber reinforced thermoplastic tape material

    NASA Astrophysics Data System (ADS)

    Schledjewski, Ralf

    2017-10-01

    Continuous welding techniques such as thermoplastic tape placement with in situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, several important processing issues still need to be solved before it becomes an economically viable process. Intensive process analysis and optimization have been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality. The effects of material imperfections or process parameter variations are well known. But using this knowledge to control the process, based on online process monitoring and corresponding adaptation of the process parameters, is still challenging. Solving inverse problems and using methods for automated code generation that allow fast implementation of algorithms on target hardware are required. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques and how to use them for a model-based process control system are presented.

  15. PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Shiyuan; Huang, Jianhua Z.; Long, James

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
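
    As a much-simplified stand-in for the semi-parametric model, the sketch below fits a single sinusoid plus offset by least squares at each trial period over a dense grid and keeps the best-fitting period; the synthetic 330-day light curve is purely illustrative:

      import numpy as np

      def estimate_period(t, mag, periods):
          """Grid search over trial periods with a least-squares sinusoid fit at each.
          A simplified stand-in for the semi-parametric Gaussian-process model."""
          best_period, best_rss = None, np.inf
          for p in periods:
              phase = 2.0 * np.pi * t / p
              design = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)])
              coef, *_ = np.linalg.lstsq(design, mag, rcond=None)
              rss = np.sum((mag - design @ coef) ** 2)
              if rss < best_rss:
                  best_period, best_rss = p, rss
          return best_period

      # Sparse synthetic Mira-like light curve; should recover a period near 330 days.
      rng = np.random.default_rng(1)
      t = np.sort(rng.uniform(0, 1500, 40))
      mag = 18.0 + 1.5 * np.sin(2 * np.pi * t / 330.0) + rng.normal(0, 0.1, t.size)
      print(estimate_period(t, mag, np.linspace(100, 1000, 2000)))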

  16. Modeling and optimization of joint quality for laser transmission joint of thermoplastic using an artificial neural network and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Zhang, Cheng; Li, Pin; Wang, Kai; Hu, Yang; Zhang, Peng; Liu, Huixia

    2012-11-01

    A central composite rotatable experimental design (CCRD) is used to design experiments for laser transmission joining of the thermoplastic polycarbonate (PC). An artificial neural network was used to establish the relationships between the laser transmission joining process parameters (laser power, velocity, clamp pressure, scanning number) and the joint strength and joint seam width. The developed mathematical models are tested by the analysis of variance (ANOVA) method to check their adequacy, and the effects of the process parameters on the responses and the interaction effects of key process parameters on quality are analyzed and discussed. Finally, the desirability function coupled with a genetic algorithm is used to carry out the optimization of the joint strength and joint width. The results show that the predicted optima are in good agreement with the experimental results, so this study provides an effective method to enhance joint quality.
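
    A minimal sketch of a composite desirability of the usual Derringer-Suich form, maximizing joint strength and minimizing seam width; the acceptable ranges below are hypothetical placeholders, not values from the study:

      import numpy as np

      def desirability_maximize(y, low, high):
          """Larger-is-better desirability (0 below `low`, 1 above `high`)."""
          return np.clip((y - low) / (high - low), 0.0, 1.0)

      def desirability_minimize(y, low, high):
          """Smaller-is-better desirability (1 below `low`, 0 above `high`)."""
          return np.clip((high - y) / (high - low), 0.0, 1.0)

      def overall_desirability(strength, seam_width):
          """Geometric mean of the individual desirabilities (illustrative ranges)."""
          d1 = desirability_maximize(strength, low=200.0, high=600.0)   # N, hypothetical
          d2 = desirability_minimize(seam_width, low=1.0, high=3.0)     # mm, hypothetical
          return np.sqrt(d1 * d2)

      print(overall_desirability(np.array([450.0]), np.array([1.6])))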

  17. Consumer perception of dry-cured sheep meat products: Influence of process parameters under different evoked contexts.

    PubMed

    de Andrade, Juliana Cunha; Nalério, Elen Silveira; Giongo, Citieli; de Barcellos, Marcia Dutra; Ares, Gastón; Deliza, Rosires

    2017-08-01

    The development of high-quality air-dried cured sheep meat products adapted to meet consumer demands represents an interesting option to add value to the meat of adult animals. The present study aimed to evaluate the influence of process parameters on consumer choice of two sheep meat products under different evoked contexts, considering product concepts. A total of 375 Brazilian participants completed a choice-based conjoint task with three 2-level variables for each product: maturation time, smoking, and sodium reduction for dry-cured sheep ham, and natural antioxidant, smoking, and sodium reduction for sheep meat coppa. A between-subjects experimental design was used to evaluate the influence of consumption context on consumer choices. All the process parameters significantly influenced consumer choice. However, their relative importance was affected by the evoked context. Copyright © 2017. Published by Elsevier Ltd.

  18. Materials Development for Auxiliary Components for Large Compact Mo/Au TES Arrays

    NASA Technical Reports Server (NTRS)

    Finkbeiner, F. M.; Chervenak, J. A.; Bandler, S. R.; Brekosky, R.; Brown, A. D.; Figueroa-Feliciano, E.; Iyomoto, N.; Kelley, R. L.; Kilbourne, C. A.; Porter, F. S.

    2007-01-01

    We describe our current fabrication process for arrays of superconducting transition edge sensor microcalorimeters, which incorporates superconducting Mo/Au bilayers and micromachined silicon structures. We focus on materials and integration methods for array heatsinking with our bilayer and micromachining processes. The thin superconducting molybdenum bottom layer strongly influences the superconducting behavior and overall film characteristics of our molybdenum/gold transition-edge sensors (TES). Concurrent with our successful TES microcalorimeter array development, we have started to investigate the thin film properties of molybdenum monolayers within a given phase space of several important process parameters. The monolayers are sputtered or electron-beam deposited exclusively on LPCVD silicon nitride coated silicon wafers. In our current bilayer process, molybdenum is electron-beam deposited at high wafer temperatures in excess of 500 degrees C. Identifying process parameters that yield high quality bilayers at a significantly lower temperature will increase options for incorporating process-sensitive auxiliary array components (AAC) such as array heat sinking and electrical interconnects into our overall device process. We are currently developing two competing technical approaches for heat sinking large compact TES microcalorimeter arrays. Our efforts to improve array heat sinking and mitigate thermal cross-talk between pixels include copper backside deposition on completed device chips and copper-filled micro-trenches surface-machined into wafers. In addition, we fabricated prototypes of copper through-wafer microvias as a potential way to read out the arrays. We present an overview on the results of our molybdenum monolayer study and its implications concerning our device fabrication. We discuss the design, fabrication process, and recent test results of our AAC development.

  19. Numerical Simulation of Thermal Performance of Glass-Fibre-Reinforced Polymer

    NASA Astrophysics Data System (ADS)

    Zhao, Yuchao; Jiang, Xu; Zhang, Qilin; Wang, Qi

    2017-10-01

    Glass-Fibre-Reinforced Polymer (GFRP), as a developing construction material, has seen rapidly increasing application in civil engineering, especially bridge engineering, in recent years, mainly as decorative material and reinforcing bars for now. Compared with traditional construction materials, these composite materials have obvious advantages such as high strength, low density, resistance to corrosion and ease of processing. There are different processing methods to form members, such as pultrusion and resin transfer moulding (RTM), which process the raw material directly into the desired shape. Meanwhile, GFRP, as a polymer composite, possesses several particular physical and mechanical properties, and its thermal behaviour is one of them. The polymer matrix behaves differently after heating, which gives these composite materials a potential for hot processing but also a poor fire resistance. This paper focuses on the thermal performance of GFRP panels, and corresponding studies are conducted. First, a dynamic thermomechanical analysis (DMA) experiment is conducted to obtain the glass transition temperature (Tg) of the GFRP in question, and the curve of bending elastic modulus versus temperature is calculated from the experimental data. The values of various other thermal parameters are then estimated from the DMA experiment and the literature, and numerical simulations are conducted for two conditions: (1) the heat transfer process in a GFRP panel whose surface is heated above Tg, and the hot processing under this temperature field; and (2) the physical and mechanical performance of a GFRP panel under fire conditions. Condition (1) is mainly used to guide the development of high-temperature processing equipment, while condition (2) indicates that GFRP's performance under fire is unsatisfactory, so measures must be taken when it is adopted. Since composite materials' properties differ from each other and their high-temperature parameters cannot be obtained through common methods, some parameters are estimated; the simulation is intended to guide the actual high-temperature experiment, and the parameters will be adjusted accordingly.

  20. Information Use Differences in Hot and Cold Risk Processing: When Does Information About Probability Count in the Columbia Card Task?

    PubMed

    Markiewicz, Łukasz; Kubińska, Elżbieta

    2015-01-01

    This paper aims to provide insight into information processing differences between hot and cold risk taking decision tasks within a single domain. Decision theory defines risky situations using at least three parameters: outcome one (often a gain) with its probability and outcome two (often a loss) with a complementary probability. Although a rational agent should consider all of the parameters, s/he could potentially narrow their focus to only some of them, particularly when explicit Type 2 processes do not have the resources to override implicit Type 1 processes. Here we investigate differences in risky situation parameters' influence on hot and cold decisions. Although previous studies show lower information use in hot than in cold processes, they do not provide decision weight changes and therefore do not explain whether this difference results from worse concentration on each parameter of a risky situation (probability, gain amount, and loss amount) or from ignoring some parameters. Two studies were conducted, with participants performing the Columbia Card Task (CCT) in either its Cold or Hot version. In the first study, participants also performed the Cognitive Reflection Test (CRT) to monitor their ability to override Type 1 processing cues (implicit processes) with Type 2 explicit processes. Because hypothesis testing required comparison of the relative importance of risky situation decision weights (gain, loss, probability), we developed a novel way of measuring information use in the CCT by employing a conjoint analysis methodology. Across the two studies, results indicated that in the CCT Cold condition decision makers concentrate on each information type (gain, loss, probability), but in the CCT Hot condition they concentrate mostly on a single parameter: probability of gain/loss. We also show that an individual's CRT score correlates with information use propensity in cold but not hot tasks. Thus, the affective dimension of hot tasks inhibits correct information processing, probably because it is difficult to engage Type 2 processes in such circumstances. Individuals' Type 2 processing abilities (measured by the CRT) assist greater use of information in cold tasks but do not help in hot tasks.

  1. Information Use Differences in Hot and Cold Risk Processing: When Does Information About Probability Count in the Columbia Card Task?

    PubMed Central

    Markiewicz, Łukasz; Kubińska, Elżbieta

    2015-01-01

    Objective: This paper aims to provide insight into information processing differences between hot and cold risk taking decision tasks within a single domain. Decision theory defines risky situations using at least three parameters: outcome one (often a gain) with its probability and outcome two (often a loss) with a complementary probability. Although a rational agent should consider all of the parameters, s/he could potentially narrow their focus to only some of them, particularly when explicit Type 2 processes do not have the resources to override implicit Type 1 processes. Here we investigate differences in risky situation parameters' influence on hot and cold decisions. Although previous studies show lower information use in hot than in cold processes, they do not provide decision weight changes and therefore do not explain whether this difference results from worse concentration on each parameter of a risky situation (probability, gain amount, and loss amount) or from ignoring some parameters. Methods: Two studies were conducted, with participants performing the Columbia Card Task (CCT) in either its Cold or Hot version. In the first study, participants also performed the Cognitive Reflection Test (CRT) to monitor their ability to override Type 1 processing cues (implicit processes) with Type 2 explicit processes. Because hypothesis testing required comparison of the relative importance of risky situation decision weights (gain, loss, probability), we developed a novel way of measuring information use in the CCT by employing a conjoint analysis methodology. Results: Across the two studies, results indicated that in the CCT Cold condition decision makers concentrate on each information type (gain, loss, probability), but in the CCT Hot condition they concentrate mostly on a single parameter: probability of gain/loss. We also show that an individual's CRT score correlates with information use propensity in cold but not hot tasks. Thus, the affective dimension of hot tasks inhibits correct information processing, probably because it is difficult to engage Type 2 processes in such circumstances. Individuals' Type 2 processing abilities (measured by the CRT) assist greater use of information in cold tasks but do not help in hot tasks. PMID:26635652

  2. Idea and implementation studies of populating TOPO250 component with the data from TOPO10 - generalization of geographic information in the BDG database. (Polish Title: Koncepcja i studium implementacji procesu zasilania komponentu TOPO250 danymi TOPO10 - generalizacja informacji geograficznej w bazie danych BDG )

    NASA Astrophysics Data System (ADS)

    Olszewski, R.; Pillich-Kolipińska, A.; Fiedukowicz, A.

    2013-12-01

    Implementation of the INSPIRE Directive in Poland requires not only legal transposition but also the development of a number of technological solutions. One such task, associated with the creation of the Spatial Information Infrastructure in Poland, is developing a comprehensive model of the georeference database. Significant funding for the GBDOT project enables development of the national basic topographic database as a multiresolution database (MRDB). Effective implementation of this type of database requires developing procedures for the generalization of geographic information (generalization of the digital landscape model, DLM) which, treating the TOPO10 component as the only source for the creation of the TOPO250 component, will keep conceptual and classification consistency between those database elements. To carry out this task, the implementation of the system concept prepared previously for the Head Office of Geodesy and Cartography is required. Such a system executes the generalization process using constraint-based modelling and keeps the topological relationships between objects as well as between object classes. Full implementation of the designed generalization system requires comprehensive tests to support its calibration and the parameterization of the generalization procedures (related to the character of the generalized area). Parameterization of this process will allow determining the criteria for the selection of specific objects, the simplification algorithms, and the order of operations. Tests using generalization process parameters differentiated according to the character of the area have now become the priority issue. Parameters are delivered to the system as XML files which, with the help of a dedicated tool, are generated from spreadsheet (XLS) files filled in by the user. Using XLS files makes entering and modifying the parameters easier. Among the other elements defined by the external parametric files are: criteria for object selection, metric parameters of generalization algorithms (e.g. simplification or aggregation) and the sequence of operations. Testing on trial areas of diverse character will allow developing the rules for carrying out the generalization process and its parameterization with the proposed tool within the multiresolution reference database. The authors have attempted to develop a parameterization of the generalization process for a number of different trial areas. Generalization of the results will contribute to the development of a holistic system of generalized reference data stored in the national geodetic and cartographic resources.

  3. On the identifiability of inertia parameters of planar Multi-Body Space Systems

    NASA Astrophysics Data System (ADS)

    Nabavi-Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2018-04-01

    This work describes a new formulation to study the identifiability characteristics of Serially Linked Multi-body Space Systems (SLMBSS). The process exploits the so-called Lagrange formulation to develop a form of the equations of motion that is linear with respect to the system inertia parameters (IPs). Having developed a specific form of the regressor matrix, we aim to expedite the identification process. The new approach allows analytical as well as numerical identification and identifiability analysis for different SLMBSS configurations. Moreover, the explicit forms of the SLMBSS identifiable parameters are derived by analyzing the identifiability characteristics of the robot. We further show that any SLMBSS designed with variable-configuration joints allows all IPs to be identified by comparing two successive identification outcomes. This feature paves the way for designing a new class of SLMBSS for which accurate identification of all IPs is at hand. Different case studies reveal that the proposed formulation provides fast and accurate results, as required by space applications. Further studies might be necessary for cases where the planar-body assumption becomes inaccurate.

  4. Model development for naphthenic acids ozonation process.

    PubMed

    Al Jibouri, Ali Kamel H; Wu, Jiangning

    2015-02-01

    Naphthenic acids (NAs) are toxic constituents of oil sands process-affected water (OSPW), which is generated during the extraction of bitumen from oil sands. NAs consist mainly of carboxylic acids which are generally biorefractory. For the treatment of OSPW, ozonation is a very beneficial method. It can significantly reduce the concentration of NAs and it can also convert NAs from biorefractory to biodegradable. In this study, a 2^4 factorial design was used for the ozonation of OSPW to study the influences of the operating parameters (ozone concentration, oxygen/ozone flow rate, pH, and mixing) on the removal of a model NA in a semi-batch reactor. It was found that ozone concentration had the most significant effect on the NA concentration compared to the other parameters. An empirical model was developed to correlate the concentration of NAs with ozone concentration, oxygen/ozone flow rate, and pH. In addition, a theoretical analysis was conducted to gain insight into the relationship between the removal of NAs and the operating parameters.

  5. An Experimental Investigation into the Optimal Processing Conditions for the CO2 Laser Cladding of 20 MnCr5 Steel Using Taguchi Method and ANN

    NASA Astrophysics Data System (ADS)

    Mondal, Subrata; Bandyopadhyay, Asish.; Pal, Pradip Kumar

    2010-10-01

    This paper presents the prediction and evaluation of the laser clad profile formed by means of a CO2 laser, applying the Taguchi method and an artificial neural network (ANN). Laser cladding is one of the surface modification technologies by which desired surface characteristics of a component, such as good corrosion resistance, wear resistance and hardness, can be achieved. The laser is used as a heat source to melt the anti-corrosive Inconel-625 (superalloy) powder to form a coating on a 20 MnCr5 substrate. A parametric study of this technique is also attempted here. The data obtained from the experiments have been used to develop a linear regression equation and then the neural network model. Moreover, the data obtained from the regression equations have also been used as supporting data to train the neural network. The ANN is used to establish the relationship between the input and output parameters of the process. The established ANN model is then indirectly integrated with the optimization technique. It has been seen that the developed neural network model shows a good degree of approximation to the experimental data. In order to obtain the combination of process parameters, such as laser power, scan speed and powder feed rate, for which the output parameters become optimum, the experimental data have been used to develop the response surfaces.

  6. Friction Stir Welding in Wrought and Cast Aluminum Alloys: Weld Quality Evaluation and Effects of Processing Parameters on Microstructure and Mechanical Properties

    NASA Astrophysics Data System (ADS)

    Pan, Yi; Lados, Diana A.

    2017-04-01

    Friction stir welding (FSW) is a solid-state process widely used for joining similar and dissimilar materials for critical applications in the transportation sector. Understanding the effects of the process on microstructure and mechanical properties is critical in design for structural integrity. In this study, four aluminum alloy systems (wrought 6061-T651 and cast A356, 319, and A390) were processed in both as-fabricated and pre-weld heat-treated (T6) conditions using various processing parameters. The effects of processing and heat treatment on the resulting microstructures, macro-/micro-hardness, and tensile properties were systematically investigated and mechanistically correlated to changes in grain size, characteristic phases, and strengthening precipitates. Tensile tests were performed at room temperature both along and across the welding zones. A new method able to evaluate weld quality (using a weld quality index) was developed based on the stress concentration calculated under tensile loading. Optimum processing parameter domains that provide both defect-free welds and good mechanical properties were determined for each alloy and associated with the thermal history of the process. These results were further related to characteristic microstructural features, which can be used for component design and materials/process optimization.

  7. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors decide on treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture oriented to clinical application. The whole system encompassed the following three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of the bone, parametric modeling of the fracture face, parametric modeling of the fixation screws and their positions, and input and transmission of model parameters. The finite element mechanical analysis module included grid division (meshing), element type setting, material property setting, contact setting, constraint and load setting, analysis method setting and batch processing. The post-processing module included extraction and display of the batch processing results, image generation for the batch runs, execution of the optimization program and display of the optimal result. The system carried out the whole workflow from input of the fracture parameters to output of the optimal fixation plan according to the specific patient's real fracture parameters and the optimization rules, which demonstrated the effectiveness of the system. Meanwhile, the system has a friendly interface and simple operation, and its functions can be improved quickly by modifying a single module.

  8. 3D-liquid chromatography as a complex mixture characterization tool for knowledge-based downstream process development.

    PubMed

    Hanke, Alexander T; Tsintavi, Eleni; Ramirez Vazquez, Maria Del Pilar; van der Wielen, Luuk A M; Verhaert, Peter D E M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel

    2016-09-01

    Knowledge-based development of chromatographic separation processes requires efficient techniques to determine the physicochemical properties of the product and the impurities to be removed. These characterization techniques are usually divided into approaches that determine molecular properties, such as charge, hydrophobicity and size, or molecular interactions with auxiliary materials, commonly in the form of adsorption isotherms. In this study we demonstrate the application of a three-dimensional liquid chromatography approach to a clarified cell homogenate containing a therapeutic enzyme. Each separation dimension determines a molecular property relevant to the chromatographic behavior of each component. Matching of the peaks across the different separation dimensions and against a high-resolution reference chromatogram allows the determined parameters to be assigned to pseudo-components, so that the most promising technique for the removal of each impurity can be identified. More detailed process design using mechanistic models requires isotherm parameters. For this purpose, the second dimension consists of multiple linear gradient separations on columns in a high-throughput screening compatible format, which allows regression of isotherm parameters with an average standard error of 8%. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1283-1291, 2016. © 2016 American Institute of Chemical Engineers.

  9. Interactive effects of aging parameters of AA6056

    NASA Astrophysics Data System (ADS)

    Dehghani, Kamran; Nekahi, Atiye

    2012-10-01

    The effect of thermomechanical treatment on the aging behavior of AA6056 aluminum alloy was modeled using response surface methodology (RSM). Two models were developed to predict the final yield stress (FYS) and elongation amounts as well as the optimum conditions of aging process. These were done based on the interactive effects of applied thermomechanical parameters. The optimum condition predicted by the model to attain the maximum strength was pre-aging at 80 °C for 15 h, followed by 70% cold work and subsequent final aging at 165 °C for 4 h, which resulted in the FYS of about 480 MPa. As for the elongation, the optimum condition was pre-aging at 80 °C for 15 h, followed by 30% cold work and final-aging at 165 °C for 4 h, which led to 21% elongation. To verify the suggested optimum conditions, the tests were carried out confirming the high accuracy (above 94%) of the RSM technique as well as the developed models. It is shown that the RSM can be used successfully to optimize the aging process, to determine the significance of aging parameters and to model the combination effect of process variables on the aging behavior of AA6056.

  10. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler

    PubMed Central

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-01-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on their ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology for setting up liquid class pipetting parameters for each solution split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The running of the appropriate pipetting scripts, data acquisition, and reporting, up to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system were simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
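
    The accuracy-adjustment step (step 2) can be pictured as fitting and then inverting a calibration curve. The sketch below assumes a linear relation between commanded and gravimetrically measured volume; the numbers are illustrative, not TECAN liquid-class data.

```python
# Sketch of the accuracy adjustment: fit a straight-line calibration curve,
# then invert it to obtain corrected command volumes. Values are illustrative.
import numpy as np

commanded = np.array([3, 10, 50, 100, 300, 900], dtype=float)      # µL
measured  = np.array([2.7, 9.3, 47.8, 96.1, 291.0, 878.0])         # µL from balance

slope, intercept = np.polyfit(commanded, measured, 1)               # calibration curve
print(f"measured ≈ {slope:.4f} * commanded + {intercept:.2f} µL")

def corrected_command(target_ul: float) -> float:
    """Volume to request so that the delivered volume hits the target."""
    return (target_ul - intercept) / slope

for v in (3, 100, 900):
    print(f"target {v} µL -> command {corrected_command(v):.2f} µL")
```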

  11. Mathematical Modeling of Ammonia Electro-Oxidation on Polycrystalline Pt Deposited Electrodes

    NASA Astrophysics Data System (ADS)

    Diaz Aldana, Luis A.

    The ammonia electrolysis process has been proposed as a feasible way for electrochemical generation of fuel grade hydrogen (H2). Ammonia is identified as one of the most suitable energy carriers due to its high hydrogen density, and its safe and efficient distribution chain. Moreover, the fact that this process can be applied even at low ammonia concentration feedstock opens its application to wastewater treatment along with H2 co-generation. In the ammonia electrolysis process, ammonia is electro-oxidized at the anode to produce N2 while H2 is evolved from water reduction at the cathode. A thermodynamic energy requirement of just five percent of the energy used in hydrogen production from water electrolysis is expected from ammonia electrolysis. However, the absence of a complete understanding of the reaction mechanism and kinetics involved in the ammonia electro-oxidation has not yet allowed the full commercialization of this process. For that reason, a kinetic model that can be trusted in the design and scale up of the ammonia electrolyzer needs to be developed. This research focused on the elucidation of the reaction mechanism and kinetic parameters for the ammonia electro-oxidation. The definition of the most relevant elementary reaction steps was obtained through the parallel analysis of experimental data and the development of a mathematical model of the ammonia electro-oxidation in a well-defined hydrodynamic system, such as the rotating disk electrode (RDE). Ammonia electro-oxidation to N2 as final product was concluded to be a slow surface confined process where parallel reactions leading to the deactivation of the catalyst are present. Through the development of this work it was possible to define a reaction mechanism and values for the kinetic parameters for ammonia electro-oxidation that allow an accurate representation of the experimental observations on a RDE system. Additionally, the validity of the reaction mechanism and kinetic parameters was supplemented by means of process scale up, performance evaluation, and hydrodynamic analysis in a flow cell electrolyzer. An adequate simulation of the flow electrolyzer performance was accomplished using the obtained kinetic parameters.

  12. Kinetics modelling of color deterioration during thermal processing of tomato paste with the use of response surface methodology

    NASA Astrophysics Data System (ADS)

    Ganje, Mohammad; Jafari, Seid Mahdi; Farzaneh, Vahid; Malekjani, Narges

    2018-06-01

    To study the kinetics of color degradation, tomato paste was processed at three temperatures (60, 70 and 80 °C) for 25, 50, 75 and 100 min. The a/b ratio, total color difference (TCD), saturation index and hue angle were calculated from the three main color parameters, L (lightness), a (redness-greenness) and b (yellowness-blueness). The kinetics of color degradation was described by the Arrhenius equation, and the changes were modelled using response surface methodology (RSM). All of the studied responses followed first-order reaction kinetics, with the exception of the TCD parameter (zeroth order). TCD and a/b showed, respectively, the highest and lowest activation energies and hence the strongest and weakest sensitivity to temperature changes. The maximum and minimum rates of change were observed for the TCD and b parameters, respectively. All of the studied responses were clearly affected by the selected independent parameters.
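
    The kinetic treatment described above can be sketched as two regressions: a first-order fit of a color parameter at each temperature, followed by an Arrhenius fit of the resulting rate constants. The rate constants and noise level below are assumed values, not the tomato-paste measurements.

```python
# Minimal sketch: first-order kinetic fits at each temperature, then an
# Arrhenius regression for the activation energy. Data are synthetic.
import numpy as np

R = 8.314                                   # J mol-1 K-1
t = np.array([25, 50, 75, 100], float)      # min
temps_C = [60, 70, 80]
true_k = {60: 0.004, 70: 0.007, 80: 0.012}  # assumed first-order constants, 1/min

k_fit = []
for T in temps_C:
    c = np.exp(-true_k[T] * t) * (1 + 0.01 * np.random.default_rng(T).standard_normal(t.size))
    # first order: ln(C/C0) = -k t, so a linear fit of ln(c) vs t gives -k
    k = -np.polyfit(t, np.log(c), 1)[0]
    k_fit.append(k)

# Arrhenius: ln k = ln A - Ea / (R T)
invT = 1.0 / (np.array(temps_C) + 273.15)
slope, lnA = np.polyfit(invT, np.log(k_fit), 1)
print(f"Ea ≈ {-slope * R / 1000:.1f} kJ/mol, A ≈ {np.exp(lnA):.3g} 1/min")
```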

  13. An advanced technique for the prediction of decelerator system dynamics.

    NASA Technical Reports Server (NTRS)

    Talay, T. A.; Morris, W. D.; Whitlock, C. H.

    1973-01-01

    An advanced two-body six-degree-of-freedom computer model employing an indeterminate structures approach has been developed for the parachute deployment process. The program determines both vehicular and decelerator responses to aerodynamic and physical property inputs. A better insight into the dynamic processes that occur during parachute deployment has been developed. The model is of value in sensitivity studies to isolate important parameters that affect the vehicular response.

  14. Theory of a general class of dissipative processes.

    NASA Technical Reports Server (NTRS)

    Hale, J. K.; Lasalle, J. P.; Slemrod, M.

    1972-01-01

    Development of a theory of periodic processes that is sufficiently general to be applied to systems defined by partial differential equations (distributed parameter systems) and functional differential equations of the retarded and neutral type (hereditary systems), as well as to systems arising in the theory of elasticity. In particular, an attempt is made to develop a meaningful general theory of dissipative periodic systems with a wide range of applications.

  15. Moment-Based Physical Models of Broadband Clutter due to Aggregations of Fish

    DTIC Science & Technology

    2013-09-30

    statistical models for signal-processing algorithm development. These in turn will help to develop a capability to statistically forecast the impact of...aggregations of fish based on higher-order statistical measures describable in terms of physical and system parameters. Environmentally, these models...processing. In this experiment, we had good ground truth on (1) and (2), and had control over (3) and (4) except for environmentally-imposed restrictions

  16. Modeling the Secondary Drying Stage of Freeze Drying: Development and Validation of an Excel-Based Model.

    PubMed

    Sahni, Ekneet K; Pikal, Michael J

    2017-03-01

    Although several mathematical models of primary drying have been developed over the years, with significant impact on the efficiency of process design, models of secondary drying have been confined to highly complex formulations. The simple-to-use Excel-based model developed here is, in essence, a series of steady-state calculations of heat and mass transfer in the two halves of the dry layer; the drying time is divided into a large number of time steps, and steady-state conditions are assumed to prevail within each step. Water desorption isotherm and mass transfer coefficient data are required. We use the Excel "Solver" to estimate the parameters that define the mass transfer coefficient by minimizing the deviations in water content between the calculation and a calibration drying experiment. The tool allows the user to input the parameters specific to the product, process, container, and equipment. Temporal variations in average moisture content and product temperature are outputs and are compared with experiment. We observe good agreement between experiments and calculations, generally well within experimental error, for sucrose at various concentrations, temperatures, and ice nucleation temperatures. We conclude that this model can serve as an important process development tool for process design and manufacturing problem-solving. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
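
    A highly simplified analogue of the "Solver" step is sketched below: a single desorption coefficient in a one-parameter, first-order moisture model is estimated by minimizing the squared deviation from a calibration moisture profile. The model form and the data are assumptions, not the Excel tool itself.

```python
# Sketch only: estimate a desorption mass-transfer coefficient by fitting a
# stepwise-calculated moisture profile to a (synthetic) calibration experiment.
import numpy as np
from scipy.optimize import minimize_scalar

t_h = np.array([0, 1, 2, 4, 6, 8], float)            # drying time, hours
w_exp = np.array([5.0, 3.4, 2.4, 1.4, 0.95, 0.75])   # measured water content, % w/w
w_eq = 0.5                                            # equilibrium moisture, assumed

def calc_profile(k, dt=0.01):
    """March the moisture content in small time steps (quasi-steady state)."""
    w, out, t = w_exp[0], [], 0.0
    for target in t_h:
        while t < target:
            w += -k * (w - w_eq) * dt
            t += dt
        out.append(w)
    return np.array(out)

res = minimize_scalar(lambda k: np.sum((calc_profile(k) - w_exp) ** 2),
                      bounds=(1e-3, 5.0), method="bounded")
print(f"fitted desorption coefficient k ≈ {res.x:.3f} 1/h")
```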

  17. Microstructure based procedure for process parameter control in rolling of aluminum thin foils

    NASA Astrophysics Data System (ADS)

    Johannes, Kronsteiner; Kabliman, Evgeniya; Klimek, Philipp-Christoph

    2018-05-01

    In the present work, a microstructure-based procedure is used for the numerical prediction of the strength properties of Al-Mg-Sc thin foils during a hot rolling process. For this purpose, the following techniques were developed and implemented. First, a toolkit for the numerical analysis of experimental stress-strain curves obtained during hot compression testing in a deformation dilatometer was developed. The implemented techniques allow for the correction of the temperature increase in the samples due to adiabatic heating and for the determination of the yield strength needed to separate the elastic and plastic deformation regimes during numerical simulation of multi-pass hot rolling. Next, an asymmetric Hot Rolling Simulator (adjustable table inlet/outlet height as well as separate roll infeed) was developed in order to match the exact processing conditions of a semi-industrial rolling procedure. At each element of the finite element mesh the total strength is calculated by an in-house Flow Stress Model based on the evolution of the mean dislocation density. The strength values obtained by numerical modelling were found to be in reasonable agreement with the results of tensile tests on thin Al-Mg-Sc foils. The proposed simulation procedure may therefore allow the processing parameters to be optimized with respect to the microstructure development.

  18. Ventilation equations for improved exothermic process control.

    PubMed

    McKernan, John L; Ellenbecker, Michael J

    2007-04-01

    Exothermic or heated processes create potentially unsafe work environments for an estimated 5-10 million American workers each year. Excessive heat and process contaminants have the potential to cause acute health effects such as heat stroke, and chronic effects such as manganism in welders. Although millions of workers are exposed to exothermic processes, insufficient attention has been given to continuously improving engineering technologies for these processes to provide effective and efficient control. Currently there is no specific occupational standard established by OSHA regarding exposure to heat from exothermic processes; therefore, it is important to investigate techniques that can mitigate known and potential adverse occupational health effects. The current understanding of engineering controls for exothermic processes is primarily based on a book chapter written by W. C. L. Hemeon in 1955. Improvements in heat transfer and meteorological theory necessary to design improved process controls have occurred since this time. The research presented involved a review of the physical properties, heat transfer and meteorological theories governing buoyant air flow created by exothermic processes. These properties and theories were used to identify parameters and develop equations required for the determination of buoyant volumetric flow to assist in improving ventilation controls. Goals of this research were to develop and describe a new (i.e. proposed) flow equation, and compare it to currently accepted ones by Hemeon and the American Conference of Governmental Industrial Hygienists (ACGIH). Numerical assessments were conducted to compare solutions from the proposed equations for plume area, mean velocity and flow to those from the ACGIH and Hemeon. Parameters were varied for the dependent variables and solutions from the proposed, ACGIH, and Hemeon equations for plume area, mean velocity and flow were analyzed using a randomized complete block statistical design (ANOVA). Results indicate that the proposed plume mean velocity equation provides significantly greater means than either the ACGIH or Hemeon equations throughout the range of parameters investigated. The proposed equations for plume area and flow also provide significantly greater means than either the ACGIH or Hemeon equations at distances >1 m above exothermic processes. With an accurate solution for the total volumetric flow, ventilation engineers and practicing industrial hygienists are equipped with the necessary information to design and size hoods, as well as place them at an optimal distance from the source to provide adequate control of the rising plume. The equations developed will allow researchers and practitioners to determine the critical control parameters for exothermic processes, such as the exhaust flow necessary to improve efficacy and efficiency, while ensuring adequate worker protection.

  19. Nickel-Phosphorous Development for Total Solar Irradiance Measurement

    NASA Astrophysics Data System (ADS)

    Carlesso, F.; Berni, L. A.; Vieira, L. E. A.; Savonov, G. S.; Nishimori, M.; Dal Lago, A.; Miranda, E.

    2017-10-01

    The development of an absolute radiometer instrument for TSI measurements is currently an ongoing effort at INPE. In this work, we describe the development of black Ni-P coatings for the absorptive cavities of TSI radiometers. We present a study of the surface blackening process and the relationships between morphological structure, chemical composition and coating absorption. Ni-P deposits with different phosphorous contents were obtained by electroless techniques on aluminum substrates with a thin zincate layer. Appropriate phosphorus composition and etching process parameters produce low-reflectance black coatings.

  20. Advanced multivariate data analysis to determine the root cause of trisulfide bond formation in a novel antibody–peptide fusion

    PubMed Central

    Goldrick, Stephen; Holmes, William; Bond, Nicholas J.; Lewis, Gareth; Kuiper, Marcel; Turner, Richard

    2017-01-01

    ABSTRACT Product quality heterogeneities, such as trisulfide bond (TSB) formation, can be influenced by multiple interacting process parameters. Identifying their root cause is a major challenge in biopharmaceutical production. To address this issue, this paper describes the novel application of advanced multivariate data analysis (MVDA) techniques to identify the process parameters influencing TSB formation in a novel recombinant antibody‐peptide fusion expressed in mammalian cell culture. The screening dataset was generated with a high‐throughput (HT) micro‐bioreactor system (Ambr™ 15) using a design of experiments (DoE) approach. The complex dataset was first analyzed through the development of a multiple linear regression model focusing solely on the DoE inputs, which identified the temperature, pH and initial nutrient feed day as important process parameters influencing this quality attribute. To further scrutinize the dataset, a partial least squares model was subsequently built incorporating both on‐line and off‐line process parameters, which enabled accurate predictions of the TSB concentration at harvest. Process parameters identified by the models to promote and suppress TSB formation were implemented on five 7 L bioreactors and the resultant TSB concentrations were comparable to the model predictions. This study demonstrates the ability of MVDA to enable predictions of the key performance drivers influencing TSB formation that remain valid upon scale‐up. Biotechnol. Bioeng. 2017;114: 2222–2234. © 2017 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:28500668
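
    A hedged sketch of the partial least squares step follows: a PLS model relating a handful of process parameters to a quality attribute, fitted and cross-validated on synthetic data. The variable names and the assumed relationship are illustrative; the study's Ambr 15 on-line/off-line data are not reproduced here.

```python
# Illustrative PLS regression of a quality attribute on process parameters,
# using synthetic data. Not the study's dataset or model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 60
temperature = rng.uniform(34, 37, n)
pH          = rng.uniform(6.8, 7.2, n)
feed_day    = rng.integers(2, 6, n).astype(float)
noise_vars  = rng.standard_normal((n, 5))            # other logged parameters
X = np.column_stack([temperature, pH, feed_day, noise_vars])
# assumed relationship: higher temperature and later feed promote the attribute
y = 0.8 * temperature + 0.5 * feed_day - 2.0 * pH + rng.normal(0, 0.3, n)

pls = PLSRegression(n_components=3).fit(X, y)
r2_cv = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated R2 ≈ {r2_cv:.2f}")
print("X-loadings of first latent variable:", np.round(pls.x_loadings_[:, 0], 2))
```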

  1. Multi-step high-throughput conjugation platform for the development of antibody-drug conjugates.

    PubMed

    Andris, Sebastian; Wendeler, Michaela; Wang, Xiangyang; Hubbuch, Jürgen

    2018-07-20

    Antibody-drug conjugates (ADCs) form a rapidly growing class of biopharmaceuticals which attracts a lot of attention throughout the industry due to its high potential for cancer therapy. They combine the specificity of a monoclonal antibody (mAb) and the cell-killing capacity of highly cytotoxic small molecule drugs. Site-specific conjugation approaches involve a multi-step process for covalent linkage of antibody and drug via a linker. Despite the range of parameters that have to be investigated, high-throughput methods are scarcely used so far in ADC development. In this work an automated high-throughput platform for a site-specific multi-step conjugation process on a liquid-handling station is presented by use of a model conjugation system. A high-throughput solid-phase buffer exchange was successfully incorporated for reagent removal by utilization of a batch cation exchange step. To ensure accurate screening of conjugation parameters, an intermediate UV/Vis-based concentration determination was established including feedback to the process. For conjugate characterization, a high-throughput compatible reversed-phase chromatography method with a runtime of 7 min and no sample preparation was developed. Two case studies illustrate the efficient use for mapping the operating space of a conjugation process. Due to the degree of automation and parallelization, the platform is capable of significantly reducing process development efforts and material demands and shorten development timelines for antibody-drug conjugates. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Concurrent Software Engineering Project

    ERIC Educational Resources Information Center

    Stankovic, Nenad; Tillo, Tammam

    2009-01-01

    Concurrent engineering or overlapping activities is a business strategy for schedule compression on large development projects. Design parameters and tasks from every aspect of a product's development process and their interdependencies are overlapped and worked on in parallel. Concurrent engineering suffers from negative effects such as excessive…

  3. Commercialization of the Conversion of Bagasse to Ethanol. Summary quarterly report for the period January-September 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2000-02-01

    These studies were intended to further refine the parameters which affect sugar yield, such as feedstock particle size, debris, acid soak time, temperature, dewatering, pretreatment conditions (such as temperature, reaction time, percentage solids concentration, acid concentration), liquid-solids separation, and detoxification parameters (such as time, temperature and mixing of detoxification ingredients); and to validate and refine the parameters which affect ethanol yield, such as the detoxification conditions mentioned above and fermenter conditions such as temperature, pH adjustment, aeration, nutrients, and charging sequence. Materials of construction will be evaluated also. Further objectives are to evaluate stillage to determine the clarification process and suitability for recycle; to evaluate lignocellulosic cake for thermal energy recovery to produce heat and electricity for the process; and to support studies at UF on toxin amelioration and fermentation. TVA work will provide pre-hydrolysates for the evaluation of BCI proprietary methods of toxin amelioration. Pre-hydrolysates from batch studies will allow the determination of the range of allowable hydrolysis conditions that can be used to produce a fermentable sugar stream. This information is essential to guide selection of process parameters for refinement and validation in the continuous pretreatment reactor, and for overall process design. Additional work will be conducted at UFRFI to develop improved strains that are resistant to inhibitors. The authors are quite optimistic about the long-term prospects for this advancement, having recently developed strains with a 25%-50% increase in ethanol production. The biocatalyst platform selected originally, genetically engineered Escherichia coli B, has proven to be quite robust and adaptable.

  4. A feasibility investigation for modeling and optimization of temperature in bone drilling using fuzzy logic and Taguchi optimization methodology.

    PubMed

    Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar

    2014-11-01

    Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion to fixate fracture devices and implants. The increase in temperature during such a procedure increases the chances of thermal invasion of the bone, which can cause thermal osteonecrosis, resulting in longer healing times or a reduction in the stability and strength of the fixation. Therefore, drilling of bone with minimum temperature rise is a major challenge for orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments were conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership functions predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determining the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for online condition monitoring of the process. © IMechE 2014.
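
    For illustration, the sketch below computes Taguchi smaller-the-better signal-to-noise ratios and factor main effects for a drilling-temperature response. The factor levels and temperatures are made up; the study's L25 array and measurements are not reproduced.

```python
# Illustrative Taguchi smaller-the-better S/N analysis for drilling temperature.
import numpy as np

# factor levels used in each (hypothetical) run: (speed level, feed level)
runs = [(1, 1), (1, 2), (2, 1), (2, 2)]
temps = {  # replicate temperature measurements (°C) per run
    (1, 1): [41.0, 42.5], (1, 2): [44.0, 45.5],
    (2, 1): [39.0, 40.0], (2, 2): [43.0, 42.0],
}

def sn_smaller_is_better(y):
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

sn = {run: sn_smaller_is_better(temps[run]) for run in runs}

# main effect of each factor = mean S/N at each of its levels (higher is better)
for factor, name in [(0, "speed"), (1, "feed")]:
    for level in (1, 2):
        vals = [sn[r] for r in runs if r[factor] == level]
        print(f"{name} level {level}: mean S/N = {np.mean(vals):.2f} dB")
```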

  5. Reconstruction of atmospheric pollutant concentrations from remote sensing data - An application of distributed parameter observer theory

    NASA Technical Reports Server (NTRS)

    Koda, M.; Seinfeld, J. H.

    1982-01-01

    The reconstruction of a concentration distribution from spatially averaged and noise-corrupted data is a central problem in processing atmospheric remote sensing data. Distributed parameter observer theory is used to develop reconstructibility conditions for distributed parameter systems having measurements typical of those in remote sensing. The relation of the reconstructibility condition to the stability of the distributed parameter observer is demonstrated. The theory is applied to a variety of remote sensing situations, and it is found that those in which concentrations are measured as a function of altitude satisfy the conditions of distributed state reconstructibility.

  6. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048

  7. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  8. Distribution and avoidance of debris on epoxy resin during UV ns-laser scanning processes

    NASA Astrophysics Data System (ADS)

    Veltrup, Markus; Lukasczyk, Thomas; Ihde, Jörg; Mayer, Bernd

    2018-05-01

    In this paper the distribution of debris generated by a nanosecond UV laser (248 nm) on epoxy resin and the prevention of the corresponding re-deposition effects by parameter selection for a ns-laser scanning process were investigated. In order to understand the mechanisms behind the debris generation, in-situ particle measurements were performed during laser treatment. These measurements enabled the determination of the ablation threshold of the epoxy resin as well as the particle density and size distribution in relation to the applied laser parameters. The experiments showed that it is possible to reduce debris on the surface with an adapted selection of pulse overlap with respect to laser fluence. A theoretical model for the parameter selection was developed and tested. Based on this model, the correct choice of laser parameters with reduced laser fluence resulted in a surface without any re-deposited micro-particles.

  9. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.

  10. Bayesian calibration of the Community Land Model using surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
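
    The surrogate-plus-sampler idea can be sketched compactly: a Gaussian-process surrogate is trained on a few runs of an "expensive" model and then used inside a random-walk Metropolis sampler. The one-parameter toy model, prior and noise level below are assumptions, not CLM or the authors' setup.

```python
# Conceptual sketch of surrogate-based Bayesian calibration: a GP surrogate
# replaces the expensive simulator inside a random-walk Metropolis sampler.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(theta):                 # stand-in for the simulator
    return np.sin(3.0 * theta) + 0.5 * theta

# 1) train the surrogate on a handful of simulator runs
theta_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_model(theta_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(theta_train, y_train)

# 2) Metropolis sampling of p(theta | observation) using the surrogate
obs, sigma = expensive_model(np.array([[1.3]])).item(), 0.05
def log_post(theta):
    if not 0.0 <= theta <= 2.0:             # uniform prior on [0, 2]
        return -np.inf
    pred = gp.predict(np.array([[theta]])).item()
    return -0.5 * ((obs - pred) / sigma) ** 2

rng = np.random.default_rng(0)
chain, theta = [], 0.5
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print(f"posterior mean ≈ {np.mean(chain[1000:]):.3f} (truth 1.3)")
```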

  11. Future heavy duty trucking engine requirements

    NASA Technical Reports Server (NTRS)

    Strawhorn, L. W.; Suski, V. A.

    1985-01-01

    Developers of advanced heavy duty diesel engines are engaged in probing the opportunities presented by new materials and techniques. This process is technology driven, but there is neither assurance that the eventual users of the engines so developed will be comfortable with them nor, indeed, that those consumers will continue to exist in either the same form, or numbers as they do today. To ensure maximum payoff of research dollars, the equipment development process must consider user needs. This study defines motor carrier concerns, cost tolerances, and the engine parameters which match the future projected industry needs. The approach taken to do that is to be explained and the results presented. The material to be given comes basically from a survey of motor carrier fleets. It provides indications of the role of heavy duty vehicles in the 1998 period and their desired maintenance and engine performance parameters.

  12. [Interconnection of stress and physical development processes in young persons].

    PubMed

    Barbarash, N A; Kuvshinkov, D Iu; Tul'chinskiĭ, M Ia

    2003-01-01

    The physical development (PD) rates, constitutional peculiarities and an integral level of different manifestations of stress reactivity (SR) were evaluated in 201 students of a Medical Academy (73 males and 138 females) aged 17-21; the above parameters were tested by the Luscher color method, by Taylor's anxiety assessment, by "Individual Minute" measurements, by the iridoscopic count of iris nervous rings, by the "Mathematical Count" technique and by calculating the index of regulatory systems' tension from heart rate variability. The highest total SR index, including the cardiac manifestations of SR, was found in youths to correlate with the lowest PD index. The integral SR level in youths correlated inversely with the PD parameters. Such relations are more pronounced in individuals of the abdominal somatic type. The mechanisms and biological significance of the correlations of SR with the processes of growth and development are discussed.

  13. Statistical Optimization of Reactive Plasma Cladding to Synthesize a WC-Reinforced Fe-Based Alloy Coating

    NASA Astrophysics Data System (ADS)

    Wang, Miqi; Zhou, Zehua; Wu, Lintao; Ding, Ying; Xu, Feilong; Wang, Zehua

    2018-04-01

    A new compound Fe-W-C powder for reactive plasma cladding was fabricated by precursor carbonization process using sucrose as a precursor. The application of quadratic general rotary unitized design was highlighted to develop a mathematical model to predict and accomplish the desired surface hardness of plasma-cladded coating. The microstructure and microhardness of the coating with optimal parameters were also investigated. According to the developed empirical model, the optimal process parameters were determined as follows: 1.4 for C/W atomic ratio, 20 wt.% for W content, 130 A for scanning current and 100 mm/min (1.67 mm/s) for scanning rate. The confidence level of the model was 99% according to the results of the F-test and lack-of-fit test. Microstructural study showed that the dendritic structure was comprised of a mechanical mixture of α-Fe and carbides, while the interdendritic structure was a eutectic of α-Fe and carbides in the composite coating with optimal parameters. WC phase generation can be confirmed from the XRD pattern. Due to good preparation parameters, the average microhardness of cladded coating can reach 1120 HV0.1, which was four times the substrate microhardness.

  14. Development of a Premium Quality Plasma-derived IVIg (IQYMUNE®) Utilizing the Principles of Quality by Design-A Worked-through Case Study.

    PubMed

    Paolantonacci, Philippe; Appourchaux, Philippe; Claudel, Béatrice; Ollivier, Monique; Dennett, Richard; Siret, Laurent

    2018-01-01

    Polyvalent human normal immunoglobulins for intravenous use (IVIg), indicated for rare and often severe diseases, are complex plasma-derived protein preparations. A quality by design approach has been used to develop the Laboratoire Français du Fractionnement et des Biotechnologies new-generation IVIg, targeting a high level of purity to generate an enhanced safety profile while maintaining a high level of efficacy. A modular approach of quality by design was implemented consisting of five consecutive steps to cover all the stages from the product design to the final product control strategy. A well-defined target product profile was translated into 27 product quality attributes that formed the basis of the process design. In parallel, a product risk analysis was conducted and identified 19 critical quality attributes among the product quality attributes. Process risk analysis was carried out to establish the links between process parameters and critical quality attributes. Twelve critical steps were identified, and for each of these steps a risk mitigation plan was established. Among the different process risk mitigation exercises, five process robustness studies were conducted at qualified small scale with a design of experiment approach. For each process step, critical process parameters were identified and, for each critical process parameter, proven acceptable ranges were established. The quality risk management and risk mitigation outputs, including verification of proven acceptable ranges, were used to design the process verification exercise at industrial scale. Finally, the control strategy was established using a mix, or hybrid, of the traditional approach plus elements of the quality by design enhanced approach, as illustrated, to more robustly assign material and process controls and in order to securely meet product specifications. The advantages of this quality by design approach were improved process knowledge for industrial design and process validation and a clear justification of the process and product specifications as a basis for control strategy and future comparability exercises. © PDA, Inc. 2018.

  15. Critical zone evolution and the origins of organised complexity in watersheds

    NASA Astrophysics Data System (ADS)

    Harman, C.; Troch, P. A.; Pelletier, J.; Rasmussen, C.; Chorover, J.

    2012-04-01

    The capacity of the landscape to store and transmit water is the result of a historical trajectory of landscape, soil and vegetation development, much of which is driven by hydrology itself. Progress in geomorphology and pedology has produced models of surface and sub-surface evolution in soil-mantled uplands. These dissected, denuding modeled landscapes are emblematic of the kinds of dissipative self-organized flow structures whose hydrologic organization may also be understood by low-dimensional hydrologic models. They offer an exciting starting-point for examining the mapping between the long-term controls on landscape evolution and the high-frequency hydrologic dynamics. Here we build on recent theoretical developments in geomorphology and pedology to try to understand how the relative rates of erosion, sediment transport and soil development in a landscape determine catchment storage capacity and the relative dominance of runoff process, flow pathways and storage-discharge relationships. We do so by using a combination of landscape evolution models, hydrologic process models and data from a variety of sources, including the University of Arizona Critical Zone Observatory. A challenge to linking the landscape evolution and hydrologic model representations is the vast differences in the timescales implicit in the process representations. Furthermore the vast array of processes involved makes parameterization of such models an enormous challenge. The best data-constrained geomorphic transport and soil development laws only represent hydrologic processes implicitly, through the transport and weathering rate parameters. In this work we propose to avoid this problem by identifying the relationship between the landscape and soil evolution parameters and macroscopic climate and geological controls. These macroscopic controls (such as the aridity index) have two roles: 1) they express the water and energy constraints on the long-term evolution of the landscape system, and 2) they bound the range of plausible short-term hydroclimatic regimes that may drive a particular landscape's hydrologic dynamics. To ensure that the hydrologic dynamics implicit in the evolutionary parameters are compatible with the dynamics observed in the hydrologic modeling, a set of consistency checks based on flow process dominance are developed.

  16. LPV gain-scheduled control of SCR aftertreatment systems

    NASA Astrophysics Data System (ADS)

    Meisami-Azad, Mona; Mohammadpour, Javad; Grigoriadis, Karolos M.; Harold, Michael P.; Franchek, Matthew A.

    2012-01-01

    Hydrocarbons, carbon monoxide and some other polluting emissions produced by diesel engines are usually lower than those produced by gasoline engines. While great strides have been made in the exhaust aftertreatment of vehicular pollutants, the elimination of nitrogen oxides (NOx) from diesel vehicles is still a challenge. The primary reason is that diesel combustion is a fuel-lean process, and hence there is significant unreacted oxygen in the exhaust. Selective catalytic reduction (SCR) is a well-developed technology for power plants and has recently been employed for reducing NOx emissions from automotive sources and, in particular, heavy-duty diesel engines. In this article, we develop a linear parameter-varying (LPV) feedforward/feedback control design method for the SCR aftertreatment system to decrease NOx emissions while keeping ammonia slippage to a desired low level downstream of the catalyst. The performance of the closed-loop system obtained from the interconnection of the SCR system and the output feedback LPV control strategy is then compared with other control design methods including sliding mode, and observer-based static state-feedback parameter-varying control. To reduce the computational complexity involved in the control design process, the number of LPV parameters in the developed quasi-LPV (qLPV) model is reduced by applying the principal component analysis technique. An LPV feedback/feedforward controller is then designed for the qLPV model with a reduced number of scheduling parameters. The designed full-order controller is further simplified to a first-order transfer function with a parameter-varying gain and pole. Finally, simulation results using both a low-order model and a high-fidelity, high-order model of SCR reactions in GT-POWER interfaced with MATLAB/SIMULINK illustrate the high NOx conversion efficiency of the closed-loop SCR system using the proposed parameter-varying control law.
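
    The scheduling-parameter reduction step can be illustrated with a small PCA example: several correlated candidate scheduling signals are standardized and projected onto the leading principal components, which then serve as the reduced scheduling variables. The signals below are synthetic, not SCR data.

```python
# Sketch of PCA-based reduction of correlated scheduling signals for a qLPV model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
t = np.linspace(0, 100, 2000)
exhaust_T = 250 + 50 * np.sin(0.05 * t) + rng.normal(0, 2, t.size)
space_vel = 30 + 10 * np.sin(0.05 * t + 0.3) + rng.normal(0, 1, t.size)   # correlated with T
nox_in    = 400 + 80 * np.cos(0.02 * t) + rng.normal(0, 5, t.size)
signals = np.column_stack([exhaust_T, space_vel, nox_in])

# standardize, then keep the components covering ~95% of the variance
z = (signals - signals.mean(0)) / signals.std(0)
pca = PCA(n_components=0.95).fit(z)
rho = pca.transform(z)                      # reduced scheduling parameters
print("retained components:", pca.n_components_,
      "explained variance:", np.round(pca.explained_variance_ratio_, 3))
```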

  17. Optimization of laser welding thin-gage galvanized steel via response surface methodology

    NASA Astrophysics Data System (ADS)

    Zhao, Yangyang; Zhang, Yansong; Hu, Wei; Lai, Xinmin

    2012-09-01

    The increasing demand for light weight and durability makes thin-gage galvanized steels (<0.6 mm) attractive for future automotive applications. Laser welding, well known for its deep penetration, high speed and small heat affected zone, provides a potential solution for welding thin-gage galvanized steels in the automotive industry. In this study, the effect of the laser welding parameters (i.e. laser power, welding speed, gap and focal position) on the weld bead geometry (i.e. weld depth, weld width and surface concavity) of 0.4 mm-thick galvanized SAE1004 steel in a lap joint configuration has been investigated experimentally. The process windows of the concerned process parameters were thereby determined. Then, response surface methodology (RSM) was used to develop models that predict the relationship between the processing parameters and the laser weld bead profile and identify the correct and optimal combination of the laser welding input variables to obtain a superior weld joint. Under the optimal welding parameters, defect-free welds were produced, and the average aspect ratio increased by about 30%, from 0.62 to 0.83.

  18. Developing a CD-CBM Anticipatory Approach for Cavitation - Defining a Model Descriptor Consistent Between Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, G.O.; Dress, W.B.; Kercel, S.W.

    1999-05-10

    A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomena, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector, or model vector. This model-based descriptor encodes the specific information that describes the phenomenon and its dynamics and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomenon. The second is the parameter list associated with the functional model. The third is a figure of merit, a single number in [0,1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not content. The extended process feature vector is formulated as follows: [descriptive model, {parameter list}, confidence factor]. (1) For this study, the model that characterized cavitation was a chirped, exponentially decaying sinusoid. Using the parameters defined by this model, the parameter list included frequency, decay, and chirp rate. Based on this, the process feature vector has the form: [chirped exponentially decaying sinusoid, {frequency = a, decay = b, chirp rate = c}, cf = 0.80]. (2) In this experiment a reversible catastrophe was examined. The reason for this is that the same catastrophe could be repeated to ensure the statistical significance of the data.
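
    As an illustration of extracting such a descriptor, the sketch below fits a chirped, exponentially decaying sinusoid to a synthetic pressure-fluctuation record and derives the frequency, decay and chirp-rate entries of the feature vector, together with a crude confidence factor. The signal, parameter values and the confidence heuristic are assumptions, not the report's data.

```python
# Illustrative fit of the chirped, exponentially decaying sinusoid model form.
import numpy as np
from scipy.optimize import curve_fit

def chirped_sinusoid(t, A, f0, chirp, decay, phase):
    return A * np.exp(-decay * t) * np.sin(2 * np.pi * (f0 + chirp * t) * t + phase)

t = np.linspace(0, 0.05, 2000)                          # 50 ms synthetic record
true = (1.0, 800.0, 4000.0, 60.0, 0.3)
y = chirped_sinusoid(t, *true) + np.random.default_rng(7).normal(0, 0.02, t.size)

popt, _ = curve_fit(chirped_sinusoid, t, y,
                    p0=(0.8, 750.0, 3000.0, 40.0, 0.0), maxfev=20000)
residual = y - chirped_sinusoid(t, *popt)
cf = max(0.0, 1.0 - residual.std() / y.std())           # crude confidence factor in [0, 1]
print(f"frequency ≈ {popt[1]:.0f} Hz, decay ≈ {popt[3]:.0f} 1/s, "
      f"chirp ≈ {popt[2]:.0f} Hz/s, cf ≈ {cf:.2f}")
```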

  19. Biomachining - A new approach for micromachining of metals

    NASA Astrophysics Data System (ADS)

    Vigneshwaran, S. C. Sakthi; Ramakrishnan, R.; Arun Prakash, C.; Sashank, C.

    2018-04-01

    Machining is the process of removing material from a workpiece. It can be done by physical, chemical or biological methods. Though physical and chemical methods have been widely used, they have their own disadvantages, such as the development of a heat-affected zone and the use of hazardous chemicals. Biomachining is a machining process in which bacteria are used to remove material from metal parts. Chemolithotrophic bacteria such as Acidithiobacillus ferrooxidans have been used in the biomachining of metals like copper and iron; these bacteria are used because of their ability to catalyze the oxidation of inorganic substances. Biomachining is a suitable process for the micromachining of metals. This paper reviews the biomachining process and the various mechanisms involved, and also outlines the parameters/factors to be considered in biomachining and their effect on the metal removal rate.

  20. Sliding mode control: an approach to regulate nonlinear chemical processes

    PubMed

    Camacho; Smith

    2000-01-01

    A new approach for the design of sliding mode controllers based on a first-order-plus-deadtime model of the process is developed. This approach results in a fixed structure controller with a set of tuning equations as a function of the characteristic parameters of the model. The controller performance is judged by simulations on two nonlinear chemical processes.

  1. Additive Manufacturing of IN100 Superalloy Through Scanning Laser Epitaxy for Turbine Engine Hot-Section Component Repair: Process Development, Modeling, Microstructural Characterization, and Process Control

    NASA Astrophysics Data System (ADS)

    Acharya, Ranadip; Das, Suman

    2015-09-01

    This article describes additive manufacturing (AM) of IN100, a high gamma-prime nickel-based superalloy, through scanning laser epitaxy (SLE), aimed at the creation of thick deposits onto like-chemistry substrates for enabling repair of turbine engine hot-section components. SLE is a metal powder bed-based laser AM technology developed for nickel-base superalloys with equiaxed, directionally solidified, and single-crystal microstructural morphologies. Here, we combine process modeling, statistical design-of-experiments (DoE), and microstructural characterization to demonstrate fully metallurgically bonded, crack-free and dense deposits exceeding 1000 μm of SLE-processed IN100 powder onto IN100 cast substrates produced in a single pass. A combined thermal-fluid flow-solidification model of the SLE process complements DoE-based process development. A customized quantitative metallography technique analyzes digital cross-sectional micrographs and extracts various microstructural parameters, enabling process model validation and process parameter optimization. Microindentation measurements show an increase in the hardness by 10 pct in the deposit region compared to the cast substrate due to microstructural refinement. The results illustrate one of the very few successes reported for the crack-free deposition of IN100, a notoriously "non-weldable" hot-section alloy, thus establishing the potential of SLE as an AM method suitable for hot-section component repair and for future new-make components in high gamma-prime containing crack-prone nickel-based superalloys.

  2. Meltlets(®) of soy isoflavones: process optimization and the effect of extrusion spheronization process parameters on antioxidant activity.

    PubMed

    Deshmukh, Ketkee; Amin, Purnima

    2013-07-01

    In the current research work an attempt was made to develop "Melt in mouth pellets" (Meltlets(®)) containing 40% herbal extract of soy isoflavones, intended to provide antioxidant activity in menopausal women. The process of extrusion-spheronization was optimized for extruder speed, extruder screen size, spheronization speed, and time. While doing so, the herbal extract incorporated in the pellet matrix was subjected to various processing conditions, such as the presence of other excipients, mixing or kneading to prepare the wet mass, and the heat generated during extrusion, spheronization, and drying. The work therefore further investigates the effect of these processing parameters on the antioxidant activity of the soy isoflavone herbal extract incorporated in the formula. The antioxidant activity of the soya bean herbal extract, the Meltlets(®) and the placebo pellets was evaluated using the DPPH free radical scavenging assay and the total reduction capacity.

  3. Simulation Modeling of Software Development Processes

    NASA Technical Reports Server (NTRS)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  4. Final Report: Superconducting Joints Between (RE)Ba 2Cu 3O 7-x Coated Conductors via Electric Field Assisted Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, Justin

    Here we report the results from a project aimed at developing a fully superconducting joint between two REBCO coated conductors using electric field processing (EFP). Due to a reduction in the budget and time period of this contract, we reduced the project scope and focused first on the key scientific issues for forming a strong bond between conductors, and subsequently focused on improving through-the-joint transport. A modified timeline and task list is shown in Table 1, summarizing accomplishments to date. In the first period, we accomplished initial surface characterization as well as rounds of EFP experiments to begin to understand the processing parameters which produce well-bonded tapes. In the second phase, we explored the effects of two fundamental EFP parameters, voltage and pressure, and the limitations they place on the process. In the third phase, we achieved superconducting joints and established base characteristics of both the bonding process and the types of tapes best suited to this process. Finally, we investigated some of the parameters related to kinetics which appeared to inhibit joint quality and performance.

  5. Process development and exergy cost sensitivity analysis of a hybrid molten carbonate fuel cell power plant and carbon dioxide capturing process

    NASA Astrophysics Data System (ADS)

    Mehrpooya, Mehdi; Ansarinasab, Hojat; Moftakhari Sharifzadeh, Mohammad Mehdi; Rosen, Marc A.

    2017-10-01

    An integrated power plant with a net electrical power output of 3.71 × 10^5 kW is developed and investigated. The electrical efficiency of the process is found to be 60.1%. The process includes three main sub-systems: a molten carbonate fuel cell system, a heat recovery section and a cryogenic carbon dioxide capturing process. Conventional and advanced exergoeconomic methods are used for analyzing the process. Advanced exergoeconomic analysis is a comprehensive evaluation tool which combines an exergetic approach with economic analysis procedures. With this method, investment and exergy destruction costs of the process components are divided into endogenous/exogenous and avoidable/unavoidable parts. Results of the conventional exergoeconomic analyses demonstrate that the combustion chamber has the largest exergy destruction rate (182 MW) and cost rate (13,100 $/h). Also, the total process cost rate can be decreased by reducing the cost rate of the fuel cell and improving the efficiency of the combustion chamber and heat recovery steam generator. Based on the total avoidable endogenous cost rate, the priority for modification is the heat recovery steam generator, a compressor and a turbine of the power plant, in rank order. A sensitivity analysis is performed to investigate the exergoeconomic factors by varying the effective parameters.

  6. Development of a corn and soybean labeling procedure for use with profile parameter classification

    NASA Technical Reports Server (NTRS)

    Magness, E. R. (Principal Investigator)

    1982-01-01

    Some essential processes for the development of a green-number-based logic for identifying (labeling) crops in LANDSAT imagery are documented. The supporting data and subsequent conclusions that resulted from development of a specific labeling logic for corn and soybean crops in the United States are recorded.

  7. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820

  8. Product design for energy reduction in concurrent engineering: An Inverted Pyramid Approach

    NASA Astrophysics Data System (ADS)

    Alkadi, Nasr M.

    Energy factors in product design in concurrent engineering (CE) are becoming an emerging dimension for several reasons: (a) the rising interest in "green design and manufacturing", (b) national energy security concerns and the dramatic increase in energy prices, (c) global competition in the marketplace and global climate change commitments including carbon tax and emission trading systems, and (d) the widespread recognition of the need for sustainable development. This research presents a methodology for the intervention of energy factors in the concurrent engineering product development process to significantly reduce the manufacturing energy requirement. The work presented here is the first attempt at integrating design for energy into the concurrent engineering framework. It adds an important tool to the DFX toolbox for evaluating the impact of design decisions on the product manufacturing energy requirement early during the design phase. The research hypothesis states that "Product Manufacturing Energy Requirement is a Function of Design Parameters". The hypothesis was tested by conducting experimental work in machining and heat treating that took place at the manufacturing lab of the Industrial and Management Systems Engineering Department (IMSE) at West Virginia University (WVU) and at a major U.S. steel manufacturing plant, respectively. The objective of the machining experiment was to study the effect of changing specific product design parameters (material type and diameter) and process design parameters (metal removal rate) on the input power requirement of a gear head lathe through performing defined sets of machining experiments. The objective of the heat treating experiment was to study the effect of varying product charging temperature on the fuel consumption of a walking beams reheat furnace. The experimental work in both directions has revealed important insights into energy utilization in machining and heat-treating processes and its variance based on product, process, and system design parameters. An in-depth evaluation of how design and manufacturing normally happen in concurrent engineering provided a framework to develop energy system levels in machining within the concurrent engineering environment using the "Inverted Pyramid Approach" (IPA). The IPA features varying levels of output energy-based information depending on the input design parameters that are available during each stage (level) of the product design. The experimental work, the in-depth evaluation of design and manufacturing in CE, and the developed energy system levels in machining provided a solid base for the development of the model for design for energy reduction in CE. The model was used to analyze an example part where 12 evolving designs were thoroughly reviewed to investigate the sensitivity of energy to design parameters in machining. The model allowed product design teams to address manufacturing energy concerns early during the design stage. As a result, ranges for energy-sensitive design parameters impacting product manufacturing energy consumption were found in earlier levels. As the designer proceeds to deeper levels in the model, this range tightens and results in significant energy reductions.

  9. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
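
    As a minimal illustration of why repeated numerical solution is costly (the approach the basis-expansion methods above are designed to avoid), the following sketch estimates a single diffusion coefficient in a 1-D heat equation by nesting an explicit finite-difference solver inside a least-squares fit; the synthetic data and all names are assumptions for illustration only.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def solve_heat(D, u0, dx, dt, n_steps):
            """Explicit finite-difference solution of u_t = D * u_xx with fixed ends."""
            u = u0.copy()
            for _ in range(n_steps):
                u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
            return u

        x = np.linspace(0, 1, 51)
        u0 = np.sin(np.pi * x)
        dx, dt, n_steps = x[1] - x[0], 1e-4, 500
        true_D = 0.1
        data = solve_heat(true_D, u0, dx, dt, n_steps) + 0.01 * np.random.randn(x.size)

        # Each candidate D requires a full numerical solve -- the computational bottleneck
        loss = lambda D: np.sum((solve_heat(D, u0, dx, dt, n_steps) - data) ** 2)
        D_hat = minimize_scalar(loss, bounds=(0.01, 1.0), method="bounded").x
        print(f"estimated D = {D_hat:.3f} (true {true_D})")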

  10. Automatic management system for dose parameters in interventional radiology and cardiology.

    PubMed

    Ten, J I; Fernandez, J M; Vaño, E

    2011-09-01

    The purpose of this work was to develop an automatic management system to archive and analyse the major study parameters and patient doses for fluoroscopy-guided procedures performed on cardiology and interventional radiology systems. The X-ray systems used for this trial have the capability to export, at the end of the procedure and via e-mail, the technical parameters of the study and the patient dose values. An application was developed to query and retrieve from a mail server all study reports sent by the imaging modalities and to store them in a Microsoft SQL Server database. Reports from 3538 interventional studies generated by 7 interventional systems were processed. For some technical parameters and patient doses, alarms were added so that malfunction alerts are received and appropriate corrective actions can be taken immediately.
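
    A minimal sketch of the ingestion idea on made-up data: the report format, field names, alert threshold and SQLite backend below are hypothetical stand-ins for the e-mailed reports and SQL Server database described in the paper.

        import re
        import sqlite3

        conn = sqlite3.connect("dose_registry.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS studies
                        (study_id TEXT, system TEXT, dap_gycm2 REAL, fluoro_time_s REAL)""")

        def ingest_report(text):
            # Parse "key = value" lines from a plain-text study report
            fields = dict(re.findall(r"^(\w+)\s*=\s*(.+)$", text, flags=re.M))
            dap = float(fields["DAP"])
            conn.execute("INSERT INTO studies VALUES (?, ?, ?, ?)",
                         (fields["StudyID"], fields["System"], dap, float(fields["FluoroTime"])))
            conn.commit()
            if dap > 300.0:  # example alert threshold (illustrative only)
                print(f"ALERT: high DAP {dap} Gy*cm2 in study {fields['StudyID']}")

        ingest_report("StudyID = 2011-0042\nSystem = CathLab1\nDAP = 145.2\nFluoroTime = 612")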

  11. Processing tracking in jMRUI software for magnetic resonance spectra quantitation reproducibility assurance.

    PubMed

    Jabłoński, Michał; Starčuková, Jana; Starčuk, Zenon

    2017-01-23

    Proton magnetic resonance spectroscopy is a non-invasive measurement technique which provides information about the concentrations of up to 20 metabolites participating in intracellular biochemical processes. In order to obtain any metabolic information from measured spectra, the data must be processed in specialized software, such as jMRUI. The processing is interactive and complex and often requires many trials before a correct result is obtained. This paper proposes a jMRUI enhancement for efficient and unambiguous history tracking and file identification. A database storing all processing steps, parameters and files used in processing was developed for jMRUI. The solution was developed in Java; the authors used an SQL database for robust storage of parameters and SHA-256 hash codes for unambiguous file identification. The developed system was integrated directly into jMRUI and will be publicly available. A graphical user interface was implemented in order to make the user experience more comfortable. The database operation is invisible to the ordinary user; all tracking operations are performed in the background. The implemented jMRUI database is a tool that can significantly help the user track the processing history performed on data in jMRUI. The tool is designed to be user-friendly, robust and easy to use. The database GUI allows the user to browse the whole processing history of a selected file and learn, e.g., what processing led to the results and where the original data are stored, or to obtain the list of all processing actions performed on the spectra.
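
    A minimal sketch of the tracking idea: identify each data file by its SHA-256 hash and append every processing action to a history table. The table layout and function names are assumptions, not the actual jMRUI (Java/SQL) implementation.

        import hashlib
        import sqlite3

        def sha256_of(path, chunk=1 << 20):
            """Unambiguous file identification by content hash."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        db = sqlite3.connect("processing_history.db")
        db.execute("""CREATE TABLE IF NOT EXISTS history
                      (file_hash TEXT, action TEXT, parameters TEXT,
                       timestamp DATETIME DEFAULT CURRENT_TIMESTAMP)""")

        def log_step(path, action, parameters):
            db.execute("INSERT INTO history (file_hash, action, parameters) VALUES (?, ?, ?)",
                       (sha256_of(path), action, parameters))
            db.commit()

        # e.g. log_step("spectrum.mrui", "HLSVD water removal", "components=10")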

  12. Inverse analysis of water profile in starch by non-contact photopyroelectric method

    NASA Astrophysics Data System (ADS)

    Frandas, A.; Duvaut, T.; Paris, D.

    2000-07-01

    The photopyroelectric (PPE) method in a non-contact configuration was proposed to study water migration in starch sheets used for biodegradable packaging. A 1-D theoretical model was developed, allowing the study of samples having a water profile characterized by an arbitrary continuous function. An experimental setup was designed for this purpose, which included the choice of the excitation source, detection of signals, signal and data processing, and cells for conditioning the samples. We report here the development of an inversion procedure allowing for the determination of the parameters that influence the PPE signal. This procedure led to the optimization of experimental conditions in order to identify the parameters related to the water profile in the sample, and to monitor the dynamics of the process.

  13. Fatigue Crack Growth Database for Damage Tolerance Analysis

    NASA Technical Reports Server (NTRS)

    Forman, R. G.; Shivakumar, V.; Cardinal, J. W.; Williams, L. C.; McKeighan, P. C.

    2005-01-01

    The objective of this project was to begin the process of developing a fatigue crack growth database (FCGD) of metallic materials for use in damage tolerance analysis of aircraft structure. For this initial effort, crack growth rate data in the NASGRO (Registered trademark) database, the United States Air Force Damage Tolerant Design Handbook, and other publicly available sources were examined and used to develop a database that characterizes crack growth behavior for specific applications (materials). The focus of this effort was on materials for general commercial aircraft applications, including large transport airplanes, small transport commuter airplanes, general aviation airplanes, and rotorcraft. The end products of this project are the FCGD software and this report. The specific goal of this effort was to present fatigue crack growth data in three usable formats: (1) NASGRO equation parameters, (2) Walker equation parameters, and (3) tabular data points. The development of this FCGD will begin the process of developing a consistent set of standard fatigue crack growth material properties. It is envisioned that the end product of the process will be a general repository for credible and well-documented fracture properties that may be used as a default standard in damage tolerance analyses.
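
    As an illustration of the second output format, one common statement of the Walker crack-growth relation is sketched below; the coefficient values are placeholders rather than entries from the FCGD, and the NASGRO equation used in the database carries additional threshold and toughness terms.

        def walker_growth_rate(delta_K, R, C=1.0e-10, n=3.0, gamma=0.5):
            """Crack growth rate da/dN for stress-intensity range delta_K and stress ratio R,
            using da/dN = C * [delta_K / (1 - R)**(1 - gamma)]**n (placeholder constants)."""
            dK_eff = delta_K / (1.0 - R) ** (1.0 - gamma)   # R-ratio corrected range
            return C * dK_eff ** n

        print(walker_growth_rate(delta_K=10.0, R=0.1))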

  14. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    PubMed

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix containing all multimodality image parameters at the superpixels; (3) forming and clustering a covariance or correlation matrix of the image parameters to discover major image "signatures"; (4) clustering the superpixels and organizing the parameter order of the data matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures"; and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was applied first to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats" plus their overlaps were created. To test the generalizability of the processing pipeline, a second image dataset from GBM, acquired on scanners different from the first one, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats," the patterns of recurrence of the patients were analyzed together with image parameters acquired pre-chemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.
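
    A minimal sketch of steps (3)-(5) on synthetic numbers (the real pipeline operates on registered multimodality images; the data, cluster counts and library choices here are illustrative assumptions):

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 10))                 # 500 superpixels x 10 image parameters

        corr = np.corrcoef(X, rowvar=False)            # 10 x 10 parameter correlation matrix
        dist = squareform(1.0 - corr, checks=False)    # correlation distance between parameters
        sig_labels = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
        print("parameter -> signature:", sig_labels)   # which parameters form each "signature"

        # Assign each superpixel to a "habitat" by clustering in the parameter space
        hab_labels = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
        print("superpixel habitat counts:", np.bincount(hab_labels)[1:])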

  15. Binary logistic regression-Instrument for assessing museum indoor air impact on exhibits.

    PubMed

    Bucur, Elena; Danet, Andrei Florin; Lehr, Carol Blaziu; Lehr, Elena; Nita-Lazar, Mihai

    2017-04-01

    This paper presents a new way to assess the environmental impact on historical artifacts using binary logistic regression. The prediction of the impact on the exhibits under certain pollution scenarios (environmental impact) was calculated by a mathematical model based on binary logistic regression; it allows the identification of those environmental parameters, from a multitude of possible parameters, that have a significant impact on the exhibits, and ranks them according to the severity of their effect. Air quality (NO2, SO2, O3 and PM2.5) and microclimate (temperature, humidity) monitoring data from a case study conducted within exhibition and storage spaces of the Romanian National Aviation Museum Bucharest have been used for developing and validating the binary logistic regression method and the mathematical model. The logistic regression analysis was applied to 794 data combinations (715 to develop the model and 79 to validate it) using the Statistical Package for the Social Sciences (SPSS 20.0). The results of the binary logistic regression analysis demonstrated that, of the six parameters taken into consideration, four have a significant effect upon the exhibits, in the following order: O3 > PM2.5 > NO2 > humidity, followed at a significant distance by the effects of SO2 and temperature. The mathematical model developed in this study correctly predicted 95.1% of the cumulated effect of the environmental parameters upon the exhibits. Moreover, this model could also be used in the decision process regarding the preventive preservation measures that should be implemented within the exhibition space. The mathematical model developed on the environmental parameters analyzed by the binary logistic regression method could be useful in a decision-making process establishing the best measures for pollution reduction and preventive preservation of the exhibits.
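
    A minimal sketch of the statistical core of the approach, here with synthetic data and statsmodels in place of SPSS; the coefficients and variable values are placeholders, not the museum monitoring data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 794
        names = ["O3", "PM2.5", "NO2", "humidity", "SO2", "temperature"]
        X = rng.normal(size=(n, 6))                       # standardized predictor values
        logit_p = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.6 * X[:, 2] + 0.4 * X[:, 3]
        y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)  # impact / no impact

        model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
        for name, coef, p in zip(names, model.params[1:], model.pvalues[1:]):
            print(f"{name:12s} coef={coef:+.2f}  p={p:.3g}")   # rank parameters by effect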

  16. A hybrid silicon membrane spatial light modulator for optical information processing

    NASA Technical Reports Server (NTRS)

    Pape, D. R.; Hornbeck, L. J.

    1984-01-01

    A new two-dimensional, fast, analog, electrically addressable, silicon-based membrane spatial light modulator (SLM) was developed for optical information processing applications. Coherent light reflected from the mirror elements is phase modulated, producing an optical Fourier transform of an analog signal input to the device. The architecture and operating parameters of this deformable mirror device (DMD) related to this application are presented. A model is developed that describes the optical Fourier transform properties of the DMD.

  17. Fabrication of Large YBCO Superconducting Disks

    NASA Technical Reports Server (NTRS)

    Koczor, Ronald J.; Noever, David A.; Robertson, Glen A.

    1999-01-01

    We have undertaken fabrication of large bulk items to develop a repeatable process and to provide test articles in laboratory experiments investigating reported coupling of electromagnetic fields with the local gravity field in the presence of rotating superconducting disks. A successful process was developed which resulted in fabrication of 30 cm diameter annular disks. The disks were fabricated of the superconductor YBa2Cu3O(7-x). Various material parameters of the disks were measured.

  18. Experimental Methods Using Photogrammetric Techniques for Parachute Canopy Shape Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Downey, James M.; Lunsford, Charles B.; Desabrais, Kenneth J.; Noetscher, Gregory

    2007-01-01

    NASA Langley Research Center in partnership with the U.S. Army Natick Soldier Center has collaborated on the development of a payload instrumentation package to record the physical parameters observed during parachute air drop tests. The instrumentation package records a variety of parameters including canopy shape, suspension line loads, payload 3-axis acceleration, and payload velocity. This report discusses the instrumentation design and development process, as well as the photogrammetric measurement technique used to provide shape measurements. The scaled model tests were conducted in the NASA Glenn Plum Brook Space Propulsion Facility, OH.

  19. Simulation of the microwave heating of a thin multilayered composite material: A parameter analysis

    NASA Astrophysics Data System (ADS)

    Tertrais, Hermine; Barasinski, Anaïs; Chinesta, Francisco

    2018-05-01

    Microwave (MW) technology relies on volumetric heating: thermal energy is transferred to the material, which can absorb it at specific frequencies. The complex physics involved in this process is far from being fully understood, which is why a simulation tool has been developed to solve the electromagnetic and thermal equations in a material as complex as a multilayered composite part. The code is based on the in-plane-out-of-plane separated representation within the Proper Generalized Decomposition framework. To improve the knowledge of the process, a parameter study is carried out in this paper.

  20. Gaussian Process Regression Model in Spatial Logistic Regression

    NASA Astrophysics Data System (ADS)

    Sofro, A.; Oktaviarina, A.

    2018-01-01

    Spatial analysis has developed very quickly in the last decade. One of the favorite approaches is based on the neighbourhood of the region. Unfortunately, there are some limitations, such as difficulty in prediction. Therefore, we offer Gaussian process regression (GPR) to accommodate this issue. In this paper, we will focus on spatial modeling with GPR for binomial data with a logit link function. The performance of the model will be investigated. We will discuss inference, namely how to estimate the parameters and hyper-parameters, as well as how to predict. Furthermore, simulation studies will be explained in the last section.
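
    A minimal sketch of GP-based binary spatial prediction, using scikit-learn's Laplace-approximation classifier as a stand-in for GPR with a logit link; the coordinates, kernel and outcomes are illustrative assumptions:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessClassifier
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 10, size=(200, 2))              # spatial locations (x, y)
        p_true = 1 / (1 + np.exp(-(coords[:, 0] - 5)))          # smooth spatial trend
        y = (rng.random(200) < p_true).astype(int)              # binomial (0/1) outcomes

        gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=2.0)).fit(coords, y)
        grid = np.array([[2.0, 5.0], [8.0, 5.0]])               # new locations to predict
        print(gpc.predict_proba(grid)[:, 1])                    # predicted probability of y = 1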

  1. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  2. Evaluation of parameters of color profile models of LCD and LED screens

    NASA Astrophysics Data System (ADS)

    Zharinov, I. O.; Zharinov, O. O.

    2017-12-01

    The purpose of the research relates to the problem of parametric identification of the color profile model of LCD (liquid crystal display) and LED (light emitting diode) screens. The color profile model of a screen is based on Grassmann's law of additive color mixture. Mathematically, the problem is to evaluate the unknown parameters (numerical coefficients) of the matrix transformation between different color spaces. Several methods for evaluating these screen profile coefficients were developed. These methods are based either on the processing of colorimetric measurements or on the processing of technical documentation data.
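
    A minimal sketch of one way such coefficients can be evaluated from colorimetric measurements: a linear least-squares fit of the 3x3 matrix mapping device RGB to measured XYZ under Grassmann additivity (the numbers below are placeholders, not measurements from the paper):

        import numpy as np

        rgb = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                        [1, 1, 1], [0.5, 0.5, 0.0]], dtype=float)     # displayed stimuli
        xyz = np.array([[41.2, 21.3, 1.9], [35.8, 71.5, 11.9], [18.0, 7.2, 95.0],
                        [95.0, 100.0, 108.8], [38.5, 46.4, 6.9]])     # measured tristimulus values

        M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)   # solves rgb @ M ~ xyz
        M = M.T                                          # so that xyz ~ M @ rgb per pixel
        print(np.round(M, 2))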

  3. Numerical modeling of heat-transfer and the influence of process parameters on tailoring the grain morphology of IN718 in electron beam additive manufacturing

    DOE PAGES

    Raghavan, Narendran; Dehoff, Ryan; Pannala, Sreekanth; ...

    2016-04-26

    The fabrication of 3-D parts from CAD models by additive manufacturing (AM) is a disruptive technology that is transforming the metal manufacturing industry. The correlation between solidification microstructure and mechanical properties has been well understood in the casting and welding processes over the years. This paper focuses on extending these principles to additive manufacturing to understand the transient phenomena of repeated melting and solidification during the electron beam powder melting process, in order to achieve site-specific microstructure control within a fabricated component. In this paper, we have developed a novel melt scan strategy for electron beam melting of a nickel-base superalloy (Inconel 718) and also analyzed the 3-D heat transfer conditions using a parallel numerical solidification code (Truchas) developed at Los Alamos National Laboratory. The spatial and temporal variations of the temperature gradient (G) and growth velocity (R) at the liquid-solid interface of the melt pool were calculated as a function of the electron beam parameters. By manipulating the relative number of voxels that lie in the columnar or equiaxed region, the crystallographic texture of the components can be controlled to an extent. The analysis of the parameters provided optimum processing conditions that will result in a columnar-to-equiaxed transition (CET) during solidification. Furthermore, the results from the numerical simulations were validated by experimental processing and characterization, thereby demonstrating the potential of the additive manufacturing process to achieve site-specific crystallographic texture control within a fabricated component.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belley, M; Schmidt, M; Knutson, N

    Purpose: Physics second-checks for external beam radiation therapy are performed, in part, to verify that the machine parameters in the Record-and-Verify (R&V) system that will ultimately be sent to the LINAC exactly match the values initially calculated by the Treatment Planning System (TPS). While performing the second-check, a large portion of the physicists' time is spent navigating and arranging display windows to locate and compare the relevant numerical values (MLC position, collimator rotation, field size, MU, etc.). Here, we describe the development of a software tool that guides the physicist by aggregating and succinctly displaying machine parameter data relevant to the physics second-check process. Methods: A data retrieval software tool was developed using Python to aggregate data and generate a list of machine parameters that are commonly verified during the physics second-check process. This software tool imported values from (i) the TPS RT Plan DICOM file and (ii) the MOSAIQ (R&V) Structured Query Language (SQL) database. The machine parameters aggregated for this study included: MLC positions, X&Y jaw positions, collimator rotation, gantry rotation, MU, dose rate, wedges and accessories, cumulative dose, energy, machine name, couch angle, and more. Results: A GUI interface was developed to generate a side-by-side display of the aggregated machine parameter values for each field, presented to the physicist for direct visual comparison. This software tool was tested for 3D conformal, static IMRT, sliding window IMRT, and VMAT treatment plans. Conclusion: This software tool facilitated the data collection process needed for the physicist to conduct a second-check, thus yielding an optimized second-check workflow that was both more user-friendly and time-efficient. Utilizing this software tool, the physicist was able to spend less time searching through the TPS PDF plan document and the R&V system and to focus the second-check efforts on assessing patient-specific plan quality.
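
    A minimal sketch of the DICOM side of such aggregation using pydicom; the R&V values are represented by a hard-coded dictionary because the MOSAIQ SQL query is site-specific, and the file name, beam naming and tolerance are illustrative assumptions:

        import pydicom

        plan = pydicom.dcmread("rtplan.dcm")
        rv_values = {"Field 1": {"gantry": 180.0, "collimator": 10.0, "mu": 125.3}}  # placeholder

        for beam, ref in zip(plan.BeamSequence,
                             plan.FractionGroupSequence[0].ReferencedBeamSequence):
            cp0 = beam.ControlPointSequence[0]
            tps = {"gantry": float(cp0.GantryAngle),
                   "collimator": float(cp0.BeamLimitingDeviceAngle),
                   "mu": float(ref.BeamMeterset)}
            rv = rv_values.get(beam.BeamName, {})
            for key, value in tps.items():
                ok = abs(value - rv.get(key, float("nan"))) < 0.1   # example tolerance
                flag = "" if ok else "  <-- CHECK"
                print(f"{beam.BeamName}: {key:10s} TPS={value:8.2f} R&V={rv.get(key)}{flag}")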

  5. A novel pulsed gas metal arc welding system with direct droplet transfer close-loop control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Q.; Li, P.; Zhang, L.

    1994-12-31

    In pulsed gas metal arc welding (GMAW), a predominant parameter that has to be monitored and controlled in real time for maintaining process stability and ensuring weld quality is droplet transfer. Based on the close correlation between droplet transfer and arc light radiant flux in GMAW of steel and aluminum, a direct closed-loop droplet transfer control system for pulsed GMAW with an arc light sensor has been developed. By sensing the droplet transfer directly via the arc light signal, a pulsed GMAW process with true and exact one-pulse, one-droplet transfer has been achieved. The novel pulsed GMAW machine consists of three parts: a sensing system, a controlling system, and a welding power system. The software used in this control system is capable of data sampling and processing, parameter matching, optimum parameter restoring, and resetting. A novel arc light sensing system has been developed. The sensor is small enough to be clamped to a semiautomatic welding torch. Based on this sensing system, a closed-loop droplet transfer control system for GMAW of steel and aluminum has been built and a commercial prototype has been made. The system is capable of maintaining one-pulse, one-droplet transfer against external interferences. The welding process with this control system has proved to be stable and quiet, with no spatter, and to provide good weld formation.

  6. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  7. Heat and Mass Transfer Processes in Scrubber of Flue Gas Heat Recovery Device

    NASA Astrophysics Data System (ADS)

    Veidenbergs, Ivars; Blumberga, Dagnija; Vigants, Edgars; Kozuhars, Grigorijs

    2010-01-01

    The paper deals with research on heat and mass transfer processes in a flue gas heat recovery device, where complicated cooling, evaporation and condensation processes take place simultaneously. The analogy between heat and mass transfer is used in the analysis. In order to perform a detailed process analysis based on the equations describing the heat and mass transfer processes, as well as on the correlations for wet gas parameter calculation, software is being developed in the Microsoft Office Excel environment.

  8. Processing of Copper Zinc Tin Sulfide Nanocrystal Dispersions for Thin Film Solar Cells

    NASA Astrophysics Data System (ADS)

    Williams, Bryce Arthur

    A scalable and inexpensive renewable energy source is needed to meet the expected increase in electricity demand throughout the developed and developing world in the next 15 years without contributing further to global warming through CO2 emissions. Photovoltaics may meet this need, but current technologies are less than ideal, requiring complex manufacturing processes and/or the use of toxic, rare-earth materials. Copper zinc tin sulfide (Cu2ZnSnS4, CZTS) solar cells offer a true "green" alternative based upon non-toxic and abundant elements. Solution-based processes utilizing CZTS nanocrystal dispersions followed by high temperature annealing have received significant research attention due to their compatibility with traditional roll-to-roll coating processes. In this work, CZTS nanocrystal (5-35 nm diameter) dispersions were utilized as a production pathway to form solar absorber layers. Aerosol-based coating methods (aerosol jet printing and ultrasonic spray coating) were optimized for the formation of dense, crack-free CZTS nanocrystal coatings. The primary variables underlying the determination of coating morphology within the aerosol-coating parameter space were investigated. It was found that the liquid content of the aerosol droplets at the time of substrate impingement plays a critical role. Evaporation of the liquid from the aerosol droplets during coating was altered through changes to coating parameters as well as to the CZTS nanocrystal dispersions. In addition, factors influencing the conversion of CZTS nanocrystal coatings into dense, large-grained polycrystalline films suitable for solar cell development during thermal annealing were studied. Nanocrystal size, carbon content, sodium uptake, and sulfur pressure were found to play pivotal roles in film microstructure evolution. The effects of these parameters on film morphology, grain growth rates, and chemical makeup were analyzed from electron microscopy images as well as with compositional analysis techniques. From these results, a deeper understanding of the interplay between the numerous annealing variables was achieved and improved annealing processes were developed.

  9. Verification and Validation of Residual Stresses in Bi-Material Composite Rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy

    Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials' coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials' curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time- and cost-prohibitive. As an alternative to physical measurement, it is possible for computational tools to be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simplistic method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the SIERRA/SolidMechanics code developed by Sandia National Laboratories. Concurrent with the model development, two simple, bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, are fabricated and the residual stresses are quantified through the measurement of deformation. Then, in the process of validating the developed modeling approach with the experimental residual stress data, manufacturing process simulations of the two simple structures are developed and undergo a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations' final results show adequate agreement with the experimental measurements, indicating the validity of a simple modeling approach, as well as a necessity for the inclusion of material parameter uncertainty in the final residual stress predictions.

  10. Friction Stir Welding at MSFC: Kinematics

    NASA Technical Reports Server (NTRS)

    Nunes, A. C., Jr.

    2001-01-01

    In 1991 The Welding Institute of the United Kingdom patented the Friction Stir Welding (FSW) process. In FSW a rotating pin-tool is inserted into a weld seam and literally stirs the faying surfaces together as it moves up the seam. By April 2000 the American Welding Society International Welding and Fabricating Exposition featured several exhibits of commercial FSW processes and the 81st Annual Convention devoted a technical session to the process. The FSW process is of interest to Marshall Space Flight Center (MSFC) as a means of avoiding hot-cracking problems presented by the 2195 aluminum-lithium alloy, which is the primary constituent of the Lightweight Space Shuttle External Tank. The process has been under development at MSFC for External Tank applications since the early 1990's. Early development of the FSW process proceeded by cut-and-try empirical methods. A substantial and complex body of data resulted. A theoretical model was wanted to deal with the complexity and reduce the data to concepts serviceable for process diagnostics, optimization, parameter selection, etc. A first step in understanding the FSW process is to determine the kinematics, i.e., the flow field in the metal in the vicinity of the pin-tool. Given the kinematics, the dynamics, i.e., the forces, can be targeted. Given a completed model of the FSW process, attempts at rational design of tools and selection of process parameters can be made.

  11. Life Prediction/Reliability Data of Glass-Ceramic Material Determined for Radome Applications

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    2002-01-01

    Brittle ceramic materials are candidates for a variety of structural applications over a wide range of temperatures. However, the process of slow crack growth, occurring in any loading configuration, limits the service life of structural components. Therefore, it is important to accurately determine the slow crack growth parameters required for component life prediction using an appropriate test methodology. This test methodology also should be useful in determining the influence of component processing and composition variables on the slow crack growth behavior of newly developed or existing materials, thereby allowing the component processing and composition to be tailored and optimized to specific needs. Through the American Society for Testing and Materials (ASTM), the authors recently developed two test methods to determine the life prediction parameters of ceramics. The two test standards, ASTM C 1368 for room temperature and ASTM C 1465 for elevated temperatures, were published in the 2001 Annual Book of ASTM Standards, Vol. 15.01. Briefly, the test method employs constant stress-rate (or dynamic fatigue) testing to determine flexural strengths as a function of the applied stress rate. The merit of this test method lies in its simplicity: strengths are measured in a routine manner in flexure at four or more applied stress rates with an appropriate number of test specimens at each applied stress rate. The slow crack growth parameters necessary for life prediction are then determined from a simple relationship between the strength and the applied stress rate. Extensive life prediction testing was conducted at the NASA Glenn Research Center using the developed ASTM C 1368 test method to determine the life prediction parameters of a glass-ceramic material that the Navy will use for radome applications.
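
    As background on the relationship exploited by constant stress-rate testing (stated here in its commonly used general form, not quoted from the standards), the measured strength varies with the applied stress rate as

        \log \sigma_f \;=\; \frac{1}{n+1}\,\log \dot{\sigma} \;+\; \log D

    so that the slow-crack-growth exponent n follows from the slope of a linear regression of log strength on log stress rate, and the parameter D, which combines the inert strength and the crack-growth coefficient, follows from the intercept.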

  12. Development, Characterization, and Resultant Properties of a Carbon, Boron, and Chromium Ternary Diffusion System

    NASA Astrophysics Data System (ADS)

    Domec, Brennan S.

    In today's industry, engineering materials are continuously pushed to the limits. Often, the application only demands high-specification properties in a narrowly-defined region of the material, such as the outermost surface. This, in combination with the economic benefits, makes case hardening an attractive solution to meet industry demands. While case hardening has been in use for decades, applications demanding high hardness, deep case depth, and high corrosion resistance are often under-served by this process. Instead, new solutions are required. The goal of this study is to develop and characterize a new borochromizing process applied to a pre-carburized AISI 8620 alloy steel. The process was successfully developed using a combination of computational simulations, calculations, and experimental testing. Process kinetics were studied by fitting case depth measurement data to Fick's Second Law of Diffusion and an Arrhenius equation. Results indicate that the kinetics of the co-diffusion method are unaffected by the addition of chromium to the powder pack. The results also show that significant structural degradation of the case occurs when chromizing is applied sequentially to an existing boronized case. The amount of degradation is proportional to the chromizing parameters. Microstructural evolution was studied using metallographic methods, simulation and computational calculations, and analytical techniques. While the co-diffusion process failed to enrich the substrate with chromium, significant enrichment is obtained with the sequential diffusion process. The amount of enrichment is directly proportional to the chromizing parameters with higher parameters resulting in more enrichment. The case consists of M7C3 and M23C6 carbides nearest the surface, minor amounts of CrB, and a balance of M2B. Corrosion resistance was measured with salt spray and electrochemical methods. These methods confirm the benefit of surface enrichment by chromium in the sequential diffusion method with corrosion resistance increasing directly with chromium concentration. The results also confirm the deleterious effect of surface-breaking case defects and the need to reduce or eliminate them. The best combination of microstructural integrity, mean surface hardness, effective case depth, and corrosion resistance is obtained in samples sequentially boronized and chromized at 870°C for 6hrs. Additional work is required to further optimize process parameters and case properties.
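
    The kinetic treatment mentioned above typically reduces to a parabolic growth law with an Arrhenius rate constant, of the general form (background only, not the dissertation's fitted values)

        x \;=\; k\,\sqrt{t}, \qquad k \;=\; k_0 \exp\!\left(-\frac{Q}{R\,T}\right)

    where x is the case depth after time t at absolute temperature T, Q is the apparent activation energy and R the gas constant; k_0 and Q are obtained from an Arrhenius plot of ln k against 1/T.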

  13. The time-lapse AVO difference inversion for changes in reservoir parameters

    NASA Astrophysics Data System (ADS)

    Longxiao, Zhi; Hanming, Gu; Yan, Li

    2016-12-01

    The result of conventional time-lapse seismic processing is the amplitude difference of the post-stack seismic data. Although stack processing can improve the signal-to-noise ratio (SNR) of seismic data, it also causes a considerable loss of important information about the amplitude changes and only gives a qualitative interpretation. To predict the changes in reservoir fluid more precisely and accurately, we also need quantitative information about the reservoir. To achieve this aim, we develop the method of time-lapse AVO (amplitude versus offset) difference inversion. For the inversion of reservoir changes in elastic parameters, we apply the Gardner equation as the constraint and convert the three-parameter inversion of elastic parameter changes into a two-parameter inversion to make the inversion more stable. For the inversion of variations in the reservoir parameters, we derive the relation between the reflection-coefficient difference and the variations in the reservoir parameters, and then invert for reservoir parameter changes directly. The results of the theoretical modeling computation and practical application show that our method can estimate the relative variations in reservoir density, P-wave and S-wave velocity, calculate reservoir changes in water saturation and effective pressure accurately, and thus provide a reference for the rational exploitation of the reservoir.
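
    The Gardner constraint referred to above is commonly written, and used in differential form for time-lapse differences, as (general form for orientation, not the authors' exact expressions)

        \rho \;=\; a\,V_P^{1/4} \quad\Longrightarrow\quad \frac{\Delta\rho}{\rho} \;\approx\; \frac{1}{4}\,\frac{\Delta V_P}{V_P}

    which ties the relative density change to the relative P-wave velocity change and so removes density as an independent unknown, leaving a two-parameter inversion for the velocity changes.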

  14. Modeling, analysis, and simulation of the co-development of road networks and vehicle ownership

    NASA Astrophysics Data System (ADS)

    Xu, Mingtao; Ye, Zhirui; Shan, Xiaofeng

    2016-01-01

    A two-dimensional logistic model is proposed to describe the co-development of road networks and vehicle ownership. The endogenous interaction between road networks and vehicle ownership and how natural market forces and policies transformed into their co-development are considered jointly in this model. If the involved parameters satisfy a certain condition, the proposed model can arrive at a steady equilibrium level and the final development scale will be within the maximum capacity of an urban traffic system; otherwise, the co-development process will be unstable and even manifest chaotic behavior. Then sensitivity tests are developed to determine the proper values for a series of parameters in this model. Finally, a case study, using Beijing City as an example, is conducted to explore the applicability of the proposed model to the real condition. Results demonstrate that the proposed model can effectively simulate the co-development of road network and vehicle ownership for Beijing City. Furthermore, we can obtain that their development process will arrive at a stable equilibrium level in the years 2040 and 2045 respectively, and the equilibrium values are within the maximum capacity.
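
    A generic two-dimensional logistic system with mutual promotion, written here only to illustrate the model class (the paper's calibrated equations and parameter values are not reproduced), has the form

        \frac{dx}{dt} = r_1\,x\left(1 - \frac{x}{K_1} + \alpha\,\frac{y}{K_2}\right), \qquad
        \frac{dy}{dt} = r_2\,y\left(1 - \frac{y}{K_2} + \beta\,\frac{x}{K_1}\right)

    where x and y stand for road-network scale and vehicle ownership, K_1 and K_2 for their capacities, r_1 and r_2 for intrinsic growth rates, and \alpha, \beta for the interaction coefficients; whether the trajectories settle at a stable equilibrium or become unstable depends on the values of these parameters.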

  15. The Role of Core Grammar in Pidgin Development.

    ERIC Educational Resources Information Center

    Macedo, Donaldo P.

    1986-01-01

    Examines the process of pidgin development within the context of the Government and Binding Theory proposed by Chomsky in 1981. Hypothesizes that the contact of various languages may produce a new experience which subsequently fixes the parameters of Universal Grammar, providing a pidgin core grammar. (SED)

  16. PREDICTION OF THE VAPOR PRESSURE, BOILING POINT, HEAT OF VAPORIZATION AND DIFFUSION COEFFICIENT OF ORGANIC COMPOUNDS

    EPA Science Inventory

    The prototype computer program SPARC has been under development for several years to estimate physical properties and chemical reactivity parameters of organic compounds strictly from molecular structure. SPARC solute-solute physical process models have been developed and tested...

  17. Quality Control Analysis of Selected Aspects of Programs Administered by the Bureau of Student Financial Assistance. Task 1 and Quality Control Sample; Error-Prone Modeling Analysis Plan.

    ERIC Educational Resources Information Center

    Saavedra, Pedro; And Others

    Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…

  18. [Research advances in secondary development of Chinese patent medicines based on quality by design concept].

    PubMed

    Gong, Xing-Chu; Chen, Teng; Qu, Hai-Bin

    2017-03-01

    Quality by design (QbD) is an advanced pharmaceutical quality control concept. The application of the QbD concept in the research and development of pharmaceutical processes for traditional Chinese medicines (TCM) mainly comprises five parts: the definition of critical processes and their evaluation criteria, the determination of critical process parameters and critical material attributes, the establishment of quantitative models, the development of the design space, and the application and continuous improvement of the control strategy. In this work, recent research advances in QbD implementation methods in the secondary development of Chinese patent medicines are reviewed, and five promising fields for the implementation of the QbD concept are pointed out, including the research and development of new TCM drugs and Chinese medicine granules for formulation, the modeling of pharmaceutical processes, the development of control strategies based on industrial big data, the strengthening of research on process scale-up rules, and the development of new pharmaceutical equipment. Copyright © by the Chinese Pharmaceutical Association.

  19. Evaluation of methods for application of epitaxial layers of superconductor and buffer layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-06-01

    The recent achievements in a number of laboratories of critical currents in excess of 1.0x10^6 amp/cm^2 at 77 K in YBCO deposited over suitably textured buffer/substrate composites have stimulated interest in the potential applications of coated conductors at high temperatures and high magnetic fields. As of today, two different approaches for obtaining the textured substrates have been identified. These are: Los Alamos National Laboratory's (LANL) ion-beam assisted deposition, called IBAD, to obtain a highly textured yttria-stabilized zirconia (YSZ) buffer on nickel alloy strips, and Oak Ridge National Laboratory's (ORNL) rolling-assisted, bi-axially textured substrate option, called RABiTS. Similarly, based on the published literature, the available options to form High Temperature Superconductor (HTS) films on metallic, semi-metallic or ceramic substrates can be divided into physical methods and non-physical or chemical methods. Under these two major groups, the schemes being proposed consist of: - Sputtering - Electron-Beam Evaporation - Flash Evaporation - Molecular Beam Epitaxy - Laser Ablation - Electrophoresis - Chemical Vapor Deposition (including Metal-Organic Chemical Vapor Deposition) - Sol-Gel - Metal-Organic Decomposition - Electrodeposition, and - Aerosol/Spray Pyrolysis. In general, a spool-to-spool or reel-to-reel type of continuous manufacturing scheme developed out of any of the above techniques would consist of: - Preparation of Substrate Material - Preparation and Application of the Buffer Layer(s) - Preparation and Application of the HTS Material and Required Post-Annealing, and - Preparation and Application of the External Protective Layer. These operations would be affected by various process parameters, which can be classified into Chemistry and Material Related Parameters, and Engineering and Environmental Based Parameters. Thus, one can see that for successful development of the coated conductor manufacturing process, an extensive review of the available options was necessary. Under the U.S. Department of Energy's (DOE) sponsorship, the University of Tennessee Space Institute (UTSI) was given the responsibility of performing this review. In UTSI's efforts to review the available options, Oak Ridge National Laboratory (ORNL), especially Mr. Robert Hawsey and Dr. M. Paranthaman, provided very valuable guidance and technical assistance. This report describes the review carried out by the UTSI staff, students and faculty members. It also provides the approach being used to develop the cost information, as well as the major operational parameters/variables that will have to be monitored and the relevant control systems. In particular, the report includes: - Process Flow Schemes and Involved Operations - Multi-Attribute Analysis Carried out for Objective and Subjective Criteria - Manufacturing Parameters to Process 6,000 km/year of Quality Coated Conductor Material - Metal Organics (MOD), Sol-Gel, and E-Beam as the Leading Candidates, and Technical Concerns/Issues that Need to be Resolved to Develop a Commercially Viable Option Out of Each of Them - Process Control Needs for Various Schemes - Approach/Methodology for Developing the Cost of Coated Conductors. This report also includes generic areas in which additional research and development work is needed. In general, it is our feeling that the science and chemistry that are being developed in the coated conductor wire program now need proper engineering assistance/viewpoints to develop the leading options into a viable commercial process.

  20. Analysis of the structural behaviour of colonic segments by inflation tests: Experimental activity and physio-mechanical model.

    PubMed

    Carniel, Emanuele L; Mencattelli, Margherita; Bonsignori, Gabriella; Fontanella, Chiara G; Frigo, Alessandro; Rubini, Alessandro; Stefanini, Cesare; Natali, Arturo N

    2015-11-01

    A coupled experimental and computational approach is provided for the identification of the structural behaviour of gastrointestinal regions, accounting for both elastic and visco-elastic properties. The developed procedure is applied to characterize the mechanics of gastrointestinal samples from pig colons. Experimental data about the structural behaviour of colonic segments are provided by inflation tests. Different inflation processes are performed according to progressively increasing top pressure conditions. Each inflation test consists of an air in-flow at an almost constant pressure-increase rate of about 3.5 mmHg/s, up to a prescribed top pressure, which is held constant for about 300 s to allow the development of creep phenomena. Successive tests are separated by 600 s of rest to allow the recovery of the tissues' mechanical condition. Data from the structural tests are post-processed with a physio-mechanical model in order to identify the mechanical parameters that describe both the non-linear elastic behaviour of the sample (the instantaneous pressure-stretch trend) and the time-dependent response (the stretch increase during the creep processes). The parameters are identified by minimizing the discrepancy between experimental and model results. Different sets of parameters are evaluated for different specimens from different pigs. A statistical analysis is performed to evaluate the distribution of the parameters and to assess the reliability of the experimental and computational activities. © IMechE 2015.

  1. Motion compensated image processing and optimal parameters for egg crack detection using modified pressure

    USDA-ARS?s Scientific Manuscript database

    Shell eggs with microcracks are often undetected during egg grading processes. In the past, a modified pressure imaging system was developed to detect eggs with microcracks without adversely affecting the quality of normal intact eggs. The basic idea of the modified pressure imaging system was to ap...

  2. A Resource Guide for Oceanography and Coastal Processes.

    ERIC Educational Resources Information Center

    Walker, Sharon H., Ed.; Damon-Randall, Kimberly, Ed.; Walters, Howard D., Ed.

    This resource guide was developed for elementary, middle, and high school teachers to teach about oceanography and coastal processes. This guide contains information on the program's history and names and contact information for all Operation Pathfinder participants since 1993. The body is divided into 6 topics. Topic 1 is on Physical Parameters,…

  3. Solid-Liquid and Liquid-Liquid Mixing Laboratory for Chemical Engineering Undergraduates

    ERIC Educational Resources Information Center

    Pour, Sanaz Barar; Norca, Gregory Benoit; Fradette, Louis; Legros, Robert; Tanguy, Philippe A.

    2007-01-01

    Solid-liquid and liquid-liquid mixing experiments have been developed to provide students with a practical experience on suspension and emulsification processes. The laboratory focuses on the characterization of the process efficiency, specifically the influence of the main operating parameters and the effect of the impeller type. (Contains 2…

  4. Clinching for sheet materials

    PubMed Central

    He, Xiaocong

    2017-01-01

    Abstract Latest developments in the clinching of sheet materials are reviewed in this article. Important issues are discussed, such as tool design, process parameters and joinability of some new lightweight sheet materials. Hybrid and modified clinching processes are introduced to a general reader. Several unaddressed issues in the clinching of sheet materials are identified. PMID:28656065

  5. Infrared thermography of welding zones produced by polymer extrusion additive manufacturing

    PubMed Central

    Seppala, Jonathan E.; Migler, Kalman D.

    2016-01-01

    In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded though a rastering nozzle, welded onto neighboring layers and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging - in conjunction with necessary reflection corrections and calibration procedures - to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperatures can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step in the development of strategies to control and model the printing processes and in the ability to develop models that correlate critical part strength with material and processing parameters. PMID:29167755

  6. Fermentation of Saccharomyces cerevisiae - Combining kinetic modeling and optimization techniques points out avenues to effective process design.

    PubMed

    Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara

    2018-09-14

    A combined experimental/theoretical approach is presented, for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed explicitly taking into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed, in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations. Copyright © 2018 Elsevier Ltd. All rights reserved.
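
    A minimal sketch of the kind of mechanistic model involved, using a single Monod growth term and generic yield coefficients (the actual model of the paper, its parameter values and its ethanol/overflow kinetics are not reproduced here):

        import numpy as np
        from scipy.integrate import solve_ivp

        mu_max, Ks, Yxs = 0.4, 0.2, 0.5          # 1/h, g/L, g/g -- placeholder values
        y_co2, y_o2 = 0.9, 0.8                   # generic CO2 / O2 yields per unit growth

        def rhs(t, z):
            X, S = z                             # biomass and substrate concentrations
            mu = mu_max * S / (Ks + S)           # Monod specific growth rate
            return [mu * X, -mu * X / Yxs]

        sol = solve_ivp(rhs, (0, 12), [0.1, 20.0], dense_output=True)
        t = np.linspace(0, 12, 7)
        X, S = sol.sol(t)
        mu = mu_max * S / (Ks + S)
        CER, OUR = y_co2 * mu * X, y_o2 * mu * X  # CO2 evolution and O2 uptake rates
        print(np.round(CER / OUR, 2))             # respiratory quotient RQ (constant here;
                                                  # metabolic regime shifts make it vary in practice)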

  7. Architectural setup for online monitoring and control of process parameters in robot-based ISF

    NASA Astrophysics Data System (ADS)

    Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2017-10-01

    This article describes new developments in an incremental, robot-based sheet metal forming process (Roboforming) for the production of sheet metal components for small lot sizes and prototypes. The dieless kinematic-based generation of the shape is implemented by means of two industrial robots, which are interconnected to a cooperating robot system. Compared to other incremental sheet forming (ISF) machines, this system offers high geometrical design flexibility without the need of any part-dependent tools. However, the industrial application of ISF is still limited by certain constraints, e.g. the low geometrical accuracy. Responding to these constraints, the authors introduce a new architectural setup extending the current one by a superordinate process control. This sophisticated control consists of two modules, i.e. the compensation of the two industrial robots' low structural stiffness as well as a combined force/torque control. It is assumed that this contribution will lead to future research and development projects in which the authors will thoroughly investigate ISF process parameters influencing the geometric accuracy of the forming results.

  8. Infrared thermography of welding zones produced by polymer extrusion additive manufacturing.

    PubMed

    Seppala, Jonathan E; Migler, Kalman D

    2016-10-01

    In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded though a rastering nozzle, welded onto neighboring layers and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging - in conjunction with necessary reflection corrections and calibration procedures - to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperatures can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step in the development of strategies to control and model the printing processes and in the ability to develop models that correlate critical part strength with material and processing parameters.

  9. Assessment of Spatial and Temporal Variation of Surface Water Quality in Streams Affected by Coalbed Methane Development

    NASA Astrophysics Data System (ADS)

    Chitrakar, S.; Miller, S. N.; Liu, T.; Caffrey, P. A.

    2015-12-01

    Water quality data have been collected from three representative stream reaches in a coalbed methane (CBM) development area for over five years to improve the understanding of salt loading in the system. These streams are located within the Atlantic Rim development area of Muddy Creek in south-central Wyoming. Significant development of CBM wells is ongoing in the study area. The three representative sampling reaches were Duck Pond Draw and Cow Creek, which receive co-produced water, and South Fork Creek and upstream Cow Creek, which do not receive co-produced water. Water samples were assayed for various parameters, including sodium, calcium, magnesium, fluoride, chlorine, nitrate, O-phosphate, sulfate, carbonate and bicarbonate, together with other water quality parameters such as pH, conductivity, and TDS. Based on these water quality parameters, we have investigated various hydrochemical and geochemical processes responsible for the high variability in water quality in the region. However, effective interpretation of complex databases to understand the aforementioned processes has been a challenging task due to the system's complexity. In this work we applied multivariate statistical techniques, including cluster analysis (CA), principal component analysis (PCA) and discriminant analysis (DA), to analyze the water quality data and identify similarities and differences among our locations. First, the CA technique was applied to group the monitoring sites based on their multivariate similarities. Second, the PCA technique was applied to identify the prevalent parameters responsible for the variation of water quality in each group. Third, the DA technique was used to identify the most important factors responsible for the variation of water quality during the low-flow and high-flow seasons. The purpose of this study is to improve the understanding of factors or sources influencing the spatial and temporal variation of water quality. The ultimate goal of this research is to develop a coupled salt-loading and GIS-based hydrological modelling tool that will be able to simulate salt loadings under various user-defined scenarios in regions undergoing CBM development. Therefore, the findings from this study will be used to formulate the predominant processes responsible for solute loading.
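
    A minimal sketch of the CA and PCA steps on a synthetic sample-by-parameter table (the real analysis also includes DA and uses the monitoring data, not random numbers):

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(0)
        data = rng.normal(size=(60, 8))                 # 60 samples x 8 water-quality parameters
        Z = StandardScaler().fit_transform(data)        # standardize before multivariate analysis

        groups = fcluster(linkage(Z, method="ward"), t=3, criterion="maxclust")  # cluster analysis
        print("samples per group:", np.bincount(groups)[1:])

        pca = PCA(n_components=3).fit(Z)                # principal component analysis
        print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
        print("PC1 loadings:", np.round(pca.components_[0], 2))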

  10. Use of in-die powder densification parameters in the implementation of process analytical technologies for tablet production on industrial scale.

    PubMed

    Cespi, Marco; Perinelli, Diego R; Casettari, Luca; Bonacucina, Giulia; Caporicci, Giuseppe; Rendina, Filippo; Palmieri, Giovanni F

    2014-12-30

    The use of process analytical technologies (PAT) to ensure final product quality is by now a well-established practice in the pharmaceutical industry. To date, most of the efforts in this field have focused on the development of analytical methods using spectroscopic techniques (e.g., NIR, Raman). This work evaluated the possibility of using parameters derived from the processing of in-line raw compaction data (the forces on and displacement of the punches) as a PAT tool for controlling the tableting process. To reach this goal, two commercially available formulations were used, changing their quantitative composition and compressing them on a fully instrumented rotary pressing machine. The Heckel yield pressure and the compaction energies, together with the tablet hardness and compaction pressure, were selected and evaluated as discriminating parameters in all the prepared formulations. The results show that the apparent yield pressure has the necessary sensitivity to be effectively included in a PAT strategy to monitor the tableting process. Additional investigations were performed to understand the criticalities and mechanisms behind this discriminating parameter and the associated implications. Specifically, it was found that the effectiveness of the apparent yield pressure depends on the nominal drug content, the drug densification mechanism and the error in pycnometric density. In this study, the potential of using parameters derived from the raw compaction data has been demonstrated to be an attractive alternative and complementary method to the well-established spectroscopic techniques for monitoring and controlling the tableting process. The compaction data monitoring method is also easy to set up and very cost effective. Copyright © 2014 Elsevier B.V. All rights reserved.
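
    For readers unfamiliar with the Heckel analysis mentioned above, the apparent yield pressure is obtained from the linear region of ln(1/(1-D)) versus compaction pressure P; the sketch below fits that line on made-up pressure/density values (the data are not from the study).

```python
import numpy as np

# Illustrative in-die compaction data: applied pressure P (MPa) and relative density D
P = np.array([25.0, 50.0, 75.0, 100.0, 150.0, 200.0])
D = np.array([0.62, 0.71, 0.78, 0.83, 0.89, 0.92])

# Heckel equation: ln(1/(1-D)) = k*P + A ; apparent yield pressure Py = 1/k
y = np.log(1.0 / (1.0 - D))
k, A = np.polyfit(P, y, 1)
print(f"apparent yield pressure ~ {1.0 / k:.0f} MPa")
```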

  11. Simulation based analysis of laser beam brazing

    NASA Astrophysics Data System (ADS)

    Dobler, Michael; Wiethop, Philipp; Schmid, Daniel; Schmidt, Michael

    2016-03-01

    Laser beam brazing is a well-established joining technology in car body manufacturing, with main applications in the joining of divided tailgates and of roof and side panels. A key advantage of laser-brazed joints is the seam's visual quality, which satisfies the highest requirements. However, the laser beam brazing process is very complex and its dynamics are only partially understood. In order to gain deeper knowledge of the laser beam brazing process, to determine optimal process parameters and to test process variants, a transient three-dimensional simulation model of laser beam brazing is developed. This model takes into account energy input and heat transfer as well as the fluid and wetting dynamics that lead to the formation of the brazing seam. The simulation model is validated by metallographic analysis and thermocouple measurements for different parameter sets of the brazing process. These results show that the multi-physical simulation model can be used not only to gain insight into the laser brazing process but also to optimize the process in industrial applications. The model's ability to determine optimal process parameters is demonstrated for the laser power as an example. Small deviations in the energy input can affect the brazing results significantly; therefore, the simulation model is used to analyze the effect of the lateral laser beam position on the energy input and the resulting brazing seam.

  12. Modeling and Analysis of Process Parameters for Evaluating Shrinkage Problems During Plastic Injection Molding of a DVD-ROM Cover

    NASA Astrophysics Data System (ADS)

    Öktem, H.

    2012-01-01

    Plastic injection molding plays a key role in the production of high-quality plastic parts. Shrinkage is one of the most significant quality problems of a plastic part in plastic injection molding. This article focuses on the modeling and analysis of the effects of process parameters on shrinkage by evaluating the quality of the plastic part of a DVD-ROM cover made of Acrylonitrile Butadiene Styrene (ABS) polymer. An effective regression model was developed to determine the mathematical relationship between the process parameters (mold temperature, melt temperature, injection pressure, injection time, and cooling time) and the volumetric shrinkage by utilizing the analysis data. Finite element (FE) analyses, designed using a Taguchi L27 orthogonal array, were run in the Moldflow simulation program. Analysis of variance (ANOVA) was then performed to check the adequacy of the regression model and to determine the effect of the process parameters on shrinkage. Experiments were conducted to check the accuracy of the regression model against the FE analyses obtained from Moldflow. The results show that the regression model agrees very well with the FE analyses and the experiments. From this, it can be concluded that this study succeeded in modeling the shrinkage problem in our application.
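
    A minimal sketch of the kind of regression model described above is shown below: a linear model linking the five process parameters to volumetric shrinkage. The DoE matrix and response values are synthetic placeholders, not the Moldflow results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic DoE results: columns are mold temp (C), melt temp (C), injection
# pressure (MPa), injection time (s), cooling time (s); y is volumetric shrinkage (%).
rng = np.random.default_rng(1)
X = rng.uniform([40, 220, 60, 0.5, 10], [80, 260, 100, 2.0, 30], size=(27, 5))
y = 3.0 + 0.02 * (X[:, 1] - 240) - 0.01 * (X[:, 2] - 80) + rng.normal(0, 0.05, 27)

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)
print("R^2:", model.score(X, y))
```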

  13. Estimation of fundamental kinetic parameters of polyhydroxybutyrate fermentation process of Azohydromonas australica using statistical approach of media optimization.

    PubMed

    Gahlawat, Geeta; Srivastava, Ashok K

    2012-11-01

    Polyhydroxybutyrate (PHB) is a biodegradable and biocompatible thermoplastic with many interesting applications in medicine, food packaging, and tissue engineering materials. The present study deals with the enhanced production of PHB by Azohydromonas australica using sucrose and the estimation of fundamental kinetic parameters of the PHB fermentation process. Preliminary culture growth inhibition studies were followed by statistical optimization of the medium recipe using response surface methodology to increase PHB production. Batch cultivation in a 7-L bioreactor was then carried out using the optimum concentrations of medium components (process variables) obtained from the statistical design to identify the batch growth and product kinetics parameters of PHB fermentation. A. australica exhibited maximum biomass and PHB concentrations of 8.71 and 6.24 g/L, respectively, in the bioreactor, with an overall PHB production rate of 0.75 g/h. Bioreactor cultivation studies demonstrated that the specific biomass and PHB yields on sucrose were 0.37 and 0.29 g/g, respectively. The kinetic parameters obtained in the present investigation will be used in the development of a batch kinetic mathematical model for PHB production, which will serve as a launching pad for further process optimization studies, e.g., the design of bioreactor cultivation strategies to further enhance biopolymer production.

  14. Determination of melt pool dimensions using DOE-FEM and RSM with process window during SLM of Ti6Al4V powder

    NASA Astrophysics Data System (ADS)

    Zhuang, Jyun-Rong; Lee, Yee-Ting; Hsieh, Wen-Hsin; Yang, An-Shik

    2018-07-01

    Selective laser melting (SLM) shows promise as an additive manufacturing (AM) technique for the fabrication of 3D parts with complicated structures. A transient thermal model was developed using the finite element method (FEM) to simulate the thermal behavior and predict the time evolution of the temperature field and melt pool dimensions of Ti6Al4V powder during SLM. The FEM predictions were then compared with published experimental measurements and calculation results for model validation. This study applied a design of experiments (DOE) scheme together with the response surface method (RSM) to conduct regression analysis on four processing parameters (namely, laser power, scanning speed, preheating temperature and hatch spacing) for predicting the dimensions of the melt pool in SLM. The preliminary RSM results were used to quantify the effects of those parameters on the melt pool size. A process window based on two criteria, the width and depth of the molten pool, was then applied to screen out impractical combinations of the four parameters and retain their practical ranges. The FEM simulations confirmed the good accuracy of the resulting RSM models in predicting melt pool dimensions for three typical SLM working scenarios.
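
    The RSM step described above amounts to fitting a quadratic response surface to the DOE-FEM results and then evaluating it inside the process window. The sketch below shows that pattern with scikit-learn on synthetic data; the parameter ranges, melt-pool response and window limits are assumptions, not the paper's values.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic DOE-FEM results: laser power (W), scan speed (mm/s), preheat (C),
# hatch spacing (um) -> melt pool width (um).
rng = np.random.default_rng(2)
X = rng.uniform([100, 400, 25, 60], [400, 1600, 200, 140], size=(30, 4))
width = 40 + 0.4 * X[:, 0] - 0.03 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 5, 30)

# Full quadratic response surface (intercept, linear, interaction and squared terms)
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, width)

# Evaluate a candidate parameter set and check it against an assumed process window
candidate = np.array([[250.0, 900.0, 100.0, 100.0]])
w = rsm.predict(candidate)[0]
print("predicted melt pool width:", w, "inside window:", 80.0 <= w <= 150.0)
```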

  15. Finite Element Method (FEM) Modeling of Freeze-drying: Monitoring Pharmaceutical Product Robustness During Lyophilization.

    PubMed

    Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V

    2015-12-01

    Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is transferring the process parameters developed on a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combined computational and experimental approach to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or sublimation tests, and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients vary at different lyophilization scales; hence, we present an approach to use appropriate factors while scaling up from lab scale to commercial scale. As a result, one can predict the commercial-scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; because of its simple and minimalistic nature, it is also a less capital-intensive path with minimal use of expensive drug substance/active material.

  16. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

    Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the physics being simulated in the model. In order to effectively simulate real-world processes, the model's output must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, defined as the error between the simulation model outputs and the observed measurements, is minimized. We developed an auxiliary package which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
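
    The calibration loop described above, minimizing the misfit between simulated and observed values by tuning thermal conductivity parameters, can be sketched as follows. This is not the package's Dakota interface; it only illustrates the objective-function idea with a toy forward model and SciPy's optimizer, and every constant in it is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model standing in for the permafrost heat-flow code;
# k[0], k[1] play the role of the thermal-conductivity input parameters.
depths = np.linspace(0.5, 10.0, 20)
def forward(k):
    return -5.0 + k[0] * np.log1p(depths) + k[1] * 0.1 * depths

k_true = np.array([1.2, 2.0])
observed = forward(k_true) + np.random.default_rng(3).normal(0, 0.05, depths.size)

def objective(k):
    # Sum-of-squares misfit between simulated and observed temperatures
    return np.sum((forward(k) - observed) ** 2)

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print("recovered parameters:", result.x)
```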

  17. Advanced multivariate data analysis to determine the root cause of trisulfide bond formation in a novel antibody-peptide fusion.

    PubMed

    Goldrick, Stephen; Holmes, William; Bond, Nicholas J; Lewis, Gareth; Kuiper, Marcel; Turner, Richard; Farid, Suzanne S

    2017-10-01

    Product quality heterogeneities, such as trisulfide bond (TSB) formation, can be influenced by multiple interacting process parameters. Identifying their root cause is a major challenge in biopharmaceutical production. To address this issue, this paper describes the novel application of advanced multivariate data analysis (MVDA) techniques to identify the process parameters influencing TSB formation in a novel recombinant antibody-peptide fusion expressed in mammalian cell culture. The screening dataset was generated with a high-throughput (HT) micro-bioreactor system (Ambr™ 15) using a design of experiments (DoE) approach. The complex dataset was first analyzed through the development of a multiple linear regression model focusing solely on the DoE inputs, which identified temperature, pH and initial nutrient feed day as important process parameters influencing this quality attribute. To further scrutinize the dataset, a partial least squares model was subsequently built incorporating both on-line and off-line process parameters, which enabled accurate predictions of the TSB concentration at harvest. Process parameters identified by the models to promote and suppress TSB formation were implemented on five 7 L bioreactors, and the resultant TSB concentrations were comparable to the model predictions. This study demonstrates the ability of MVDA to enable predictions of the key performance drivers influencing TSB formation that remain valid upon scale-up. Biotechnol. Bioeng. 2017;114:2222-2234. © 2017 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc.
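
    A partial least squares model of the kind described above can be prototyped in a few lines with scikit-learn. The sketch below uses a synthetic matrix of DoE set points plus on-line/off-line measurements and a synthetic TSB response; the number of latent components and all values are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

# Synthetic dataset: 48 micro-bioreactor runs x 12 process variables;
# y is the TSB concentration at harvest (arbitrary units).
rng = np.random.default_rng(4)
X = rng.normal(size=(48, 12))
y = 1.5 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 48)

Xs = StandardScaler().fit_transform(X)
pls = PLSRegression(n_components=3).fit(Xs, y)

# Screen variable importance via the absolute regression coefficients
print("top drivers (column indices):", np.argsort(np.abs(pls.coef_.ravel()))[::-1][:3])
print("R^2:", pls.score(Xs, y))
```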

  18. Stepwise sensitivity analysis from qualitative to quantitative: Application to the terrestrial hydrological modeling of a Conjunctive Surface-Subsurface Process (CSSP) land surface model

    NASA Astrophysics Data System (ADS)

    Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan

    2015-06-01

    An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14-year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential for ET, whereas the reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on the hydroclimatic regime, as well as vegetation type and soil texture.
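
    The quantitative stage described above, Sobol' indices estimated from a sampled parameter space, can be illustrated with the SALib package (an assumption here; the study used a MARS-based estimator). The parameter names, bounds and toy response below are placeholders.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical screening of 4 (of the 24) land-surface parameters
problem = {
    "num_vars": 4,
    "names": ["sat_cond_ref_depth", "vcmax", "stomatal_slope", "soil_b"],
    "bounds": [[0.1, 3.0], [20.0, 120.0], [4.0, 12.0], [2.0, 12.0]],
}

X = saltelli.sample(problem, 256)        # Saltelli sampling for Sobol' indices

def toy_model(x):                        # stand-in for the CSSP streamflow response
    return x[0] * np.sin(x[1] / 40.0) + 0.5 * x[2] + 0.1 * x[0] * x[3]

Y = np.apply_along_axis(toy_model, 1, X)
Si = sobol.analyze(problem, Y)
print("first-order indices:", Si["S1"])
print("total-order indices:", Si["ST"])
```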

  19. Influence of dielectric barrier discharge treatment on mechanical and dyeing properties of wool

    NASA Astrophysics Data System (ADS)

    Rahul, NAVIK; Sameera, SHAFI; Md Miskatul, ALAM; Md Amjad, FAROOQ; Lina, LIN; Yingjie, CAI

    2018-06-01

    The physical and chemical properties of the wool surface significantly affect the absorbency, the rate of dye bath exhaustion and the fixation of industrial dyes. Hence, surface modification is a necessary operation prior to the coloration process in wool wet processing industries. Plasma treatment is an effective alternative for physiochemical modification of the wool surface. However, the optimum processing parameters to obtain the expected modification are still under investigation, and this technology is therefore still under development in the wool wet processing industries. In this paper, the treatment parameters of a simple dielectric barrier discharge plasma reactor using air as the plasma gas, a combination that could be promising for treating wool substrates at industrial scale, were systematically studied, and their influence on the water absorbency, mechanical properties and dyeing properties of twill-woven wool fabric samples is reported. It is expected that the results will assist the wool coloration industry in improving its dyeing processes.

  20. Thermodynamic and economic analysis of heat pumps for energy recovery in industrial processes

    NASA Astrophysics Data System (ADS)

    Urdaneta-B, A. H.; Schmidt, P. S.

    1980-09-01

    A computer code has been developed for analyzing the thermodynamic performance, cost and economic return for heat pump applications in industrial heat recovery. Starting with basic defining characteristics of the waste heat stream and the desired heat sink, the algorithm first evaluates the potential for conventional heat recovery with heat exchangers, and if applicable, sizes the exchanger. A heat pump system is then designed to process the residual heating and cooling requirements of the streams. In configuring the heat pump, the program searches a number of parameters, including condenser temperature, evaporator temperature, and condenser and evaporator approaches. All system components are sized for each set of parameters, and economic return is estimated and compared with system economics for conventional processing of the heated and cooled streams (i.e., with process heaters and coolers). Two case studies are evaluated, one in a food processing application and the other in an oil refinery unit.
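
    A very condensed version of the heat-pump screening step described above is sketched here: sweeping condenser and evaporator temperatures and estimating a heating COP as a fixed fraction of the Carnot limit. The temperature grids and the 45% Carnot fraction are assumptions for illustration, not values from the original code.

```python
import numpy as np

T_evap = np.arange(30.0, 61.0, 10.0) + 273.15   # K, set by the waste-heat stream
T_cond = np.arange(80.0, 121.0, 10.0) + 273.15  # K, set by the heat-sink demand
carnot_fraction = 0.45                          # assumed overall cycle efficiency

for Te in T_evap:
    for Tc in T_cond:
        if Tc <= Te:
            continue
        cop = carnot_fraction * Tc / (Tc - Te)  # heating COP estimate
        print(f"T_evap={Te-273.15:3.0f} C  T_cond={Tc-273.15:3.0f} C  COP={cop:4.1f}")
```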

  1. Parameter estimating state reconstruction

    NASA Technical Reports Server (NTRS)

    George, E. B.

    1976-01-01

    Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
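
    The observer idea described above, reconstructing unmeasured states from the measured output so that parameters can then be estimated, is sketched below for a toy second-order plant using a Luenberger-style gain. The plant matrices, pole locations and step size are illustrative assumptions, not the POGO system model.

```python
import numpy as np
from scipy.signal import place_poles

# Toy plant: only position y = x1 is measured; velocity x2 is reconstructed.
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Observer gain L chosen so the error dynamics (A - L C) are faster than the plant
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T

dt, steps = 0.001, 5000
x = np.array([1.0, 0.0])     # true state
xh = np.zeros(2)             # observer estimate
for _ in range(steps):
    u = 0.0
    y = C @ x
    x = x + dt * (A @ x + B.ravel() * u)
    xh = xh + dt * (A @ xh + B.ravel() * u + L @ (y - C @ xh))
print("final estimation error:", np.abs(x - xh))
```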

  2. In-Flight Calibration Processes for the MMS Fluxgate Magnetometers

    NASA Technical Reports Server (NTRS)

    Bromund, K. R.; Leinweber, H. K.; Plaschke, F.; Strangeway, R. J.; Magnes, W.; Fischer, D.; Nakamura, R.; Anderson, B. J.; Russell, C. T.; Baumjohann, W.

    2015-01-01

    The calibration effort for the Magnetospheric Multiscale Mission (MMS) Analog Fluxgate (AFG) and Digital Fluxgate (DFG) magnetometers is a coordinated effort between three primary institutions: University of California, Los Angeles (UCLA); Space Research Institute, Graz, Austria (IWF); and Goddard Space Flight Center (GSFC). Since the successful deployment of all 8 magnetometers on 17 March 2015, the effort to confirm and update the ground calibrations has been underway during the MMS commissioning phase. The in-flight calibration processes evaluate twelve parameters that determine the alignment, orthogonalization, offsets, and gains for all 8 magnetometers using algorithms originally developed by UCLA and the Technical University of Braunschweig and tailored to MMS by IWF, UCLA, and GSFC. We focus on the processes run at GSFC to determine the eight parameters associated with spin tones and harmonics. We will also discuss the processing flow and interchange of parameters between GSFC, IWF, and UCLA. IWF determines the low range spin axis offsets using the Electron Drift Instrument (EDI). UCLA determines the absolute gains and sensor azimuth orientation using Earth field comparisons. We evaluate the performance achieved for MMS and give examples of the quality of the resulting calibrations.

  3. United States Air Force Summer Research Program -- 1993 Summer Research Program Final Reports. Volume 11. Arnold Engineering Development Center, Frank J. Seiler Research Laboratory, Wilford Hall Medical Center

    DTIC Science & Technology

    1993-01-01

    ... external parameters such as airflow, temperature, pressure, etc., are measured. Turbine engine testing generates massive volumes of data at very high ... a form that describes the signal flow graph topology as well as specific parameters of the processing blocks in the diagram. On multiprocessor ... provides an interface to the symbolic builder and control functions such that parameters may be set during the build operation that will affect the ...

  4. Multisite EPR oximetry from multiple quadrature harmonics.

    PubMed

    Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C

    2012-01-01

    Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover

    NASA Technical Reports Server (NTRS)

    Dangelo, K. R.

    1974-01-01

    A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least square approximation is used in order to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
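
    The use of both height and gradient observations in one least-squares fit can be sketched as below for a quadratic surface patch; this collapses the paper's two-step procedure into a single stacked system purely for illustration, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
xy = rng.uniform(-1, 1, size=(15, 2))
x, y = xy[:, 0], xy[:, 1]
true = np.array([0.2, 0.5, -0.3, 0.8, 0.1, -0.4])   # a, b, c, d, e, f

# Surface z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 and its two gradient components
def rows_z(x, y):
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
def rows_dzdx(x, y):
    z0, o = np.zeros_like(x), np.ones_like(x)
    return np.column_stack([z0, o, z0, 2*x, y, z0])
def rows_dzdy(x, y):
    z0, o = np.zeros_like(x), np.ones_like(x)
    return np.column_stack([z0, z0, o, z0, x, 2*y])

z  = rows_z(x, y)    @ true + rng.normal(0, 0.02, x.size)   # noisy heights
gx = rows_dzdx(x, y) @ true + rng.normal(0, 0.05, x.size)   # noisy dz/dx
gy = rows_dzdy(x, y) @ true + rng.normal(0, 0.05, x.size)   # noisy dz/dy

# Stack height and gradient equations into one least-squares system
Amat = np.vstack([rows_z(x, y), rows_dzdx(x, y), rows_dzdy(x, y)])
b = np.concatenate([z, gx, gy])
coeff, *_ = np.linalg.lstsq(Amat, b, rcond=None)
print("recovered coefficients:", np.round(coeff, 2))
```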

  6. Limited ability driven phase transitions in the coevolution process in Axelrod's model

    NASA Astrophysics Data System (ADS)

    Wang, Bing; Han, Yuexing; Chen, Luonan; Aihara, Kazuyuki

    2009-04-01

    We study the coevolution process in Axelrod's model by taking into account agents' abilities to access information, which is described by a parameter α to control the geographical range of communication. We observe two kinds of phase transitions in both cultural domains and network fragments, which depend on the parameter α. By simulation, we find that not all rewiring processes pervade the dissemination of culture, that is, a very limited ability to access information constrains the cultural dissemination, while an exceptional ability to access information aids the dissemination of culture. Furthermore, by analyzing the network characteristics at the frozen states, we find that there exists a stage at which the network develops to be a small-world network with community structures.

  7. Application of 'Six Sigma™' and 'Design of Experiment' for Cementation - Recipe Development for Evaporator Concentrate for NPP Ling AO, Phase II (China) - 12555

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehrmann, Henning; Perdue, Robert

    2012-07-01

    Cementation of radioactive waste is a common technology. The waste is mixed with cement and water and forms a stable, solid block. Physical properties such as compressive strength and low leachability depend strongly on the cement recipe. Because the waste-cement mixture has to fulfill special requirements, recipe development is necessary. The Six Sigma™ DMAIC methodology, together with the Design of Experiments (DoE) approach, was employed to optimize the recipe development process for cementation at the Ling Ao nuclear power plant (NPP) in China. DMAIC offers a structured, systematic and traceable process for deriving test parameters. The DoE test plans and statistical analysis are efficient in terms of the number of test runs and yield the added benefit of a transfer function. A transfer function enables simulation, which is useful for optimizing the subsequent process and responding to changes. The DoE method was successfully applied to develop a cementation recipe for both evaporator concentrate and resin waste in the plant. The key input parameters were determined and evaluated, and the control of these parameters was included in the design. The applied Six Sigma™ tools can help to organize the thinking during the engineering process. Data are organized and clearly presented, and the variables can be limited to the most important ones. The Six Sigma™ tools help to make the thinking and decision process traceable and support data-driven decisions (e.g., the C and E Matrix). However, the tools are not a golden path on their own: results from scoring tools like the C and E Matrix need close review before use. The DoE is an effective tool for generating test plans. DoE can be used with a small number of test runs, yet gives a valuable result from an engineering perspective in the form of a transfer function. The DoE predictions, however, are only valid within the tested area, so a careful selection of input parameters and their limits when setting up a DoE is very important. Extrapolation of results is not recommended because the results are not reliable outside the tested area. (authors)

  8. Earth resources data acquisition sensor study

    NASA Technical Reports Server (NTRS)

    Grohse, E. W.

    1975-01-01

    The minimum data collection and data processing requirements are investigated for the development of water monitoring systems, which disregard redundant and irrelevant data and process only those data predictive of the onset of significant pollution events. Two approaches are immediately suggested: (1) adaptation of a presently available ambient air monitoring system developed by TVA, and (2) consideration of an air, water, and radiological monitoring system developed by the Georgia Tech Experiment Station. In order to apply monitoring systems, threshold values and maximum allowable rates of change of critical parameters such as dissolved oxygen and temperature are required.

  9. Automation of data processing and calculation of retention parameters and thermodynamic data for gas chromatography

    NASA Astrophysics Data System (ADS)

    Makarycheva, A. I.; Faerman, V. A.

    2017-02-01

    Automation patterns are analyzed and a programming solution is developed for processing chromatographic data and storing the results, built on a software package of Mathcad and MS Excel spreadsheets. The proposed approach allows the data-processing algorithm to be modified without the involvement of programming experts. The approach provides calculation of retention times and retention volumes, specific retention volumes, differential molar free energies of adsorption, partial molar solution enthalpies, and isosteric heats of adsorption. The developed solution is aimed at use in a small research group and was tested on a series of new gas chromatography sorbents. Retention parameters and thermodynamic sorption quantities were calculated for more than 20 analytes. The resulting data are presented in a form suitable for comparative analysis and can be used to identify sorbents with the most favorable properties for specific analytical problems.

  10. Ceramization of low and intermediate level radioactive wastes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiquet, O.; Berson, X.

    1993-12-31

    Ceramic conditioning is studied for a large variety of low- and intermediate-level wastes. These wastes arise from several waste streams coming from all process steps of the fuel cycle. The physical properties of ceramics can advantageously be used for radioactive waste immobilization. Their chemical durability can offer a barrier against external aggression. Moreover, some minerals have possible host sites in their crystal structure for heavy elements, which can provide the best immobilization mechanism. The general route for development studies is described, giving compositions and process choices. Investigations have been conducted on clay materials and on the process parameters which condition the final product properties. Two practical examples are described, concerning chemical precipitation sludge resulting from liquid waste treatment and chamotte used as a fluidized bed in a graphite incinerator. Important process parameters are identified, and the possibility of pilot plant development is briefly mentioned. The results of these investigations are promising for defining a new route of conditioning.

  11. Direct injection analysis of fatty and resin acids in papermaking process waters by HPLC/MS.

    PubMed

    Valto, Piia; Knuutinen, Juha; Alén, Raimo

    2011-04-01

    A novel HPLC-atmospheric pressure chemical ionization/MS (HPLC-APCI/MS) method was developed for the rapid analysis of selected fatty and resin acids typically present in papermaking process waters. A mixture of palmitic, stearic, oleic, linolenic, and dehydroabietic acids was separated on a commercial HPLC column (a modified stationary C(18) phase) using gradient elution with methanol/0.15% formic acid (pH 2.5) as the mobile phase. The internal standard (myristic acid) method was used to calculate the correlation coefficients and to quantify the results. For a thorough assessment of quality parameters, a mixture of these model acids was quantitatively determined both in aqueous media and in six different paper machine process waters. The measured quality parameters, such as selectivity, linearity, precision, and accuracy, clearly indicated that, compared with traditional gas chromatographic techniques, the simple method developed provided a faster chromatographic analysis with almost real-time monitoring of these acids. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Modelling methane emissions from natural wetlands by development and application of the TRIPLEX-GHG model

    USGS Publications Warehouse

    Zhu, Qing; Liu, Jinxun; Peng, C.; Chen, H.; Fang, X.; Jiang, H.; Yang, G.; Zhu, D.; Wang, W.; Zhou, X.

    2014-01-01

    A new process-based model TRIPLEX-GHG was developed based on the Integrated Biosphere Simulator (IBIS), coupled with a new methane (CH4) biogeochemistry module (incorporating CH4 production, oxidation, and transportation processes) and a water table module to investigate CH4 emission processes and dynamics that occur in natural wetlands. Sensitivity analysis indicates that the most sensitive parameters to evaluate CH4 emission processes from wetlands are r (defined as the CH4 to CO2 release ratio) and Q10 in the CH4 production process. These two parameters were subsequently calibrated to data obtained from 19 sites collected from approximately 35 studies across different wetlands globally. Being heterogeneously spatially distributed, r ranged from 0.1 to 0.7 with a mean value of 0.23, and the Q10 for CH4 production ranged from 1.6 to 4.5 with a mean value of 2.48. The model performed well when simulating magnitude and capturing temporal patterns in CH4 emissions from natural wetlands. Results suggest that the model is able to be applied to different wetlands under varying conditions and is also applicable for global-scale simulations.
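
    The Q10 parameter calibrated above enters the CH4 production rate through the standard exponential temperature response; a minimal sketch, using the reported mean Q10 of 2.48 purely for illustration, is shown below.

```python
import numpy as np

def q10_rate(rate_ref, T, T_ref=25.0, Q10=2.48):
    """Scale a reference CH4 production rate with a Q10 temperature response."""
    return rate_ref * Q10 ** ((T - T_ref) / 10.0)

T = np.array([5.0, 15.0, 25.0, 35.0])    # soil temperatures, deg C
print(q10_rate(rate_ref=1.0, T=T))       # relative CH4 production rates
```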

  13. Risk management for moisture related effects in dry manufacturing processes: a statistical approach.

    PubMed

    Quiroz, Jorge; Strong, John; Zhang, Lanju

    2016-03-01

    A risk- and science-based approach to controlling quality in pharmaceutical manufacturing includes a full understanding of how product attributes and process parameters relate to product performance through a proactive approach in formulation and process development. For dry manufacturing, where moisture content is not directly manipulated within the process, variability in the moisture of the incoming raw materials can impact both processability and drug product quality attributes. A statistical approach is developed using historical lots of the individual raw materials as a basis for the calculation of tolerance intervals for drug product moisture content, so that risks associated with excursions in moisture content can be mitigated. The proposed method is based on a model-independent approach that uses available data to estimate parameters of interest describing the population of blend moisture content values, without requiring knowledge of the individual blend moisture content values. Another advantage of the proposed tolerance intervals is that they do not require the use of tabulated tolerance factors, which facilitates implementation in any spreadsheet program such as Microsoft Excel. A computational example is used to demonstrate the proposed method.
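
    To make the tolerance-interval idea concrete, the sketch below computes a two-sided normal tolerance interval from a small set of hypothetical lot moisture values using Howe's approximation for the tolerance factor. This is a generic textbook construction, not the specific model-independent estimator proposed in the paper, and the coverage, confidence and data are assumptions.

```python
import numpy as np
from scipy import stats

def two_sided_tolerance_factor(n, coverage=0.99, confidence=0.95):
    """Approximate two-sided normal tolerance factor (Howe's method)."""
    df = n - 1
    z = stats.norm.ppf(0.5 + coverage / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, df)   # lower quantile of chi-square
    return z * np.sqrt(df * (1.0 + 1.0 / n) / chi2)

# Hypothetical moisture contents (% w/w) of historical raw-material lots
moisture = np.array([2.1, 2.4, 2.2, 2.6, 2.3, 2.5, 2.2, 2.4, 2.3, 2.7])
k = two_sided_tolerance_factor(len(moisture))
m, s = moisture.mean(), moisture.std(ddof=1)
print(f"tolerance interval: {m - k*s:.2f} to {m + k*s:.2f} % moisture")
```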

  14. Modeling of microstructure evolution in direct metal laser sintering: A phase field approach

    NASA Astrophysics Data System (ADS)

    Nandy, Jyotirmoy; Sarangi, Hrushikesh; Sahoo, Seshadev

    2017-02-01

    Direct Metal Laser Sintering (DMLS) is a new technology in the field of additive manufacturing, which builds metal parts in a layer-by-layer fashion directly from the powder bed. The process occurs within a very short time period with a rapid solidification rate. Slight variations in the process parameters may cause enormous changes in the final built parts. The physical and mechanical properties of the final built parts depend on the solidification rate, which directly affects the microstructure of the material. Thus, the evolution of microstructure plays a vital role in process parameter optimization. Nowadays, the increase in computational power allows for direct simulations of microstructures during materials processing for specific manufacturing conditions. In this study, modeling of the microstructure evolution of Al-Si-10Mg powder in the DMLS process was carried out using a phase field approach. A MATLAB code was developed to solve the set of phase field equations, where the simulation parameters include temperature gradient, laser scan speed and laser power. The effects of temperature gradient on microstructure evolution were studied, and it was found that with an increase in temperature gradient, the dendrite tip grows at a faster rate.

  15. Condition monitoring of turning process using infrared thermography technique - An experimental approach

    NASA Astrophysics Data System (ADS)

    Prasad, Balla Srinivasa; Prabha, K. Aruna; Kumar, P. V. S. Ganesh

    2017-03-01

    In metal cutting, major factors that affect cutting tool life are machine tool vibrations, tool tip/chip temperature and surface roughness, along with machining parameters such as cutting speed, feed rate, depth of cut and tool geometry. It is therefore important for the manufacturing industry to find suitable levels of process parameters for maintaining tool life. Heat generation has always been a major topic of study in machining. Recent advances in signal processing and information technology have enabled the use of multiple sensors to develop tool condition monitoring systems with improved accuracy. From a process improvement point of view, it is definitely more advantageous to proactively monitor quality directly in the process instead of the product, so that the consequences of a defective part can be minimized or even eliminated. In the present work, a real-time process monitoring method is explored using multiple sensors. It focuses on the development of a test bed for monitoring the tool condition in turning of AISI 316L steel using both coated and uncoated carbide inserts. The proposed tool condition monitoring (TCM) approach is evaluated in high-speed turning using multiple sensors, namely a laser Doppler vibrometer and infrared thermography. The results indicate the feasibility of using the dominant frequency of the vibration signals, along with the temperature gradient, for monitoring high-speed turning operations. A possible correlation is identified for both regular and irregular cutting tool wear. Cutting speed and feed rate proved to be influential parameters on the measured temperatures, while depth of cut was less influential. Generally, it is observed that lower heat and temperatures are generated when coated inserts are employed, and that cutting temperatures gradually increase as edge wear and deformation develop.

  16. Modeling urbanized watershed flood response changes with distributed hydrological model: key hydrological processes, parameterization and case studies

    NASA Astrophysics Data System (ADS)

    Chen, Y.

    2017-12-01

    Urbanization has been a worldwide development trend for the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings, but also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but quantifying this effect still faces great challenges. For example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in an urbanized or urbanizing watershed. The Pearl River Delta has seen some of the most rapid urbanization in China over the past decades, and dozens of highly urbanized watersheds have appeared there. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the flood hydrological processes of highly urbanized watersheds in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into these processes and supports parameter optimization. On this basis, the model is set up in the Songmushan watershed, where observed hydrological data are available. A model parameter optimization and updating strategy is proposed based on remotely sensed LUC types, which optimizes model parameters with the PSO algorithm and updates them when the LUC types change. The model parameters calibrated in the Songmushan watershed are regionalized to other Pearl River Delta watersheds based on their LUC types. A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta were studied for flood response changes due to urbanization, and the results show that urbanization has a large impact on watershed flood responses: peak flows increased severalfold after urbanization, a change much larger than previously reported.

  17. The trade-off between morphology and control in the co-optimized design of robots.

    PubMed

    Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real-world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in face of new search techniques.

  18. Engineering model for ultrafast laser microprocessing

    NASA Astrophysics Data System (ADS)

    Audouard, E.; Mottay, E.

    2016-03-01

    Ultrafast laser micro-machining relies on complex laser-matter interaction processes that lead to virtually athermal laser ablation. The development of industrial ultrafast laser applications benefits from a better understanding of these processes. To this end, a number of sophisticated scientific models have been developed, providing valuable insights into the physics of the interaction. Yet, from an engineering point of view, they are often difficult to use and require a number of adjustable parameters. We present a simple engineering model for ultrafast laser processing, applied in various real-life applications: percussion drilling, line engraving, and non-normal-incidence trepanning. The model requires only two global parameters. Analytical results are derived for single-pulse percussion drilling and single-pass engraving. Simple assumptions allow the effect of non-normally incident beams to be predicted, yielding key parameters for trepanning drilling. The model is compared to experimental data on stainless steel over a wide range of laser characteristics (pulse duration, repetition rate, pulse energy) and machining conditions (sample or beam speed). Ablation depth and volume ablation rate are modeled for pulse durations from 100 fs to 1 ps. A trepanning time of 5.4 s with a conicity of 0.15° is obtained for a hole of 900 μm depth and 100 μm diameter.
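
    The abstract does not spell out which two global parameters the model uses; a common two-parameter description of ultrafast ablation, shown below purely as an assumed illustration, is the logarithmic law with a threshold fluence and an effective penetration depth.

```python
import numpy as np

delta = 0.03   # um per pulse, effective energy penetration depth (assumed value)
F_th = 0.10    # J/cm^2, ablation threshold fluence (assumed value)

def depth_per_pulse(F):
    """Logarithmic ablation law: d = delta * ln(F / F_th) above threshold."""
    F = np.asarray(F, dtype=float)
    return np.where(F > F_th, delta * np.log(F / F_th), 0.0)

F = np.array([0.05, 0.2, 0.5, 1.0, 2.0])   # peak fluences, J/cm^2
print("depth after 500 pulses (um):", 500 * depth_per_pulse(F))
```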

  19. Sensitivity analysis of 1-D dynamical model for basin analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, S.

    1987-01-01

    Geological processes related to petroleum generation, migration and accumulation are very complicated in terms of the time and variables involved, and it is very difficult to simulate these processes by laboratory experiments. For this reason, many mathematical/computer models have been developed to simulate these geological processes based on geological, geophysical and geochemical principles. The sensitivity analysis in this study is a comprehensive examination of how geological, geophysical and geochemical parameters influence the reconstructions of geohistory, thermal history and hydrocarbon generation history using the 1-D fluid flow/compaction model developed in the Basin Modeling Group at the University of South Carolina. This study shows the effects of commonly used parameters such as depth, age, lithology, porosity, permeability, unconformity (eroded thickness and erosion time), temperature at the sediment surface, bottom-hole temperature, present-day heat flow, thermal gradient, thermal conductivity, and kerogen type and content on the evolution of formation thickness, porosity, permeability and pressure with time and depth, heat flow with time, temperature with time and depth, vitrinite reflectance (Ro) and TTI with time and depth, the oil window in terms of time and depth, and the amount of hydrocarbons generated with time and depth. Lithology, present-day heat flow and thermal conductivity are the most sensitive parameters in the reconstruction of temperature history.

  20. The trade-off between morphology and control in the co-optimized design of robots

    PubMed Central

    Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real-world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in face of new search techniques. PMID:29023482

  1. Quantifying Uranium Isotope Ratios Using Resonance Ionization Mass Spectrometry: The Influence of Laser Parameters on Relative Ionization Probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isselhardt, Brett H.

    2011-09-01

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure relative uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process to provide a distinction between uranium atoms and potential isobars without the aid of chemical purification and separation. We explore the laser parameters critical to the ionization process and their effects on the measured isotope ratio. Specifically, the use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of 235U/238U ratios to decrease laser-induced isotopic fractionation. By broadening the bandwidth of the first laser in a 3-color, 3-photon ionization process from a bandwidth of 1.8 GHz to about 10 GHz, the variation in sequential relative isotope abundance measurements decreased from >10% to less than 0.5%. This procedure was demonstrated for the direct interrogation of uranium oxide targets with essentially no sample preparation. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variation in laser parameters on the measured isotope ratio. This work demonstrates that RIMS can be used for the robust measurement of uranium isotope ratios.

  2. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1995-01-01

    When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index (CI)' is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with a Fortran code 'Sequitor'.

  3. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod

    Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and thus are inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed image, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE block size and clip limiting parameters. The goal of the optimization is to maximize the entropy of the processed image. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
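
    A minimal sketch of the three processing steps listed above is given below using OpenCV; the file name, background threshold, Gaussian weighting factor, block size and clip limit are placeholders standing in for the values the entropy-based optimization would select.

```python
import cv2
import numpy as np

# Assumed 8-bit grayscale setup image; the path is a placeholder.
img = cv2.imread("setup.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# 1) crude background/noise suppression (threshold is illustrative)
img[img < 10] = 0

# 2) high-pass filtering: subtract a weighted Gaussian-smoothed copy
alpha = 0.7                                    # assumed Gaussian weighting factor
lowpass = cv2.GaussianBlur(img, (0, 0), sigmaX=15)
hp = np.clip(img - alpha * lowpass, 0, 255).astype(np.uint8)

# 3) contrast limited adaptive histogram equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(hp)
cv2.imwrite("setup_enhanced.png", enhanced)
```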

  4. Recent Advances in the Development of Thick-Section Melt-Infiltrated C/SiC Composites

    NASA Technical Reports Server (NTRS)

    Babcock, Jason R.; Ramachandran, Gautham; Williams, Brian E.; Effinger, Michael R.

    2004-01-01

    Using a pressureless melt infiltration and in situ reaction process to form the silicon carbide (SiC) matrix, Ultramet has been developing a means to rapidly fabricate ceramic matrix composites (CMCs) targeting thicker sections. The process also employs a unique route for the application of oxide fiber interface coatings designed to protect the fiber and impart fiber-matrix debond. Working toward a 12 inch diameter, 2.5 inch thick demonstrator component, the effect of various processing parameters on room temperature flexure strength is being studied with plans for more extensive elevated temperature mechanical strength evaluation to follow this initial optimization process.

  5. Effects of processing parameters in thermally induced phase separation technique on porous architecture of scaffolds for bone tissue engineering.

    PubMed

    Akbarzadeh, Rosa; Yousefi, Azizeh-Mitra

    2014-08-01

    Tissue engineering makes use of 3D scaffolds to sustain three-dimensional growth of cells and guide new tissue formation. To meet the multiple requirements for regeneration of biological tissues and organs, a wide range of scaffold fabrication techniques have been developed, aiming to produce porous constructs with the desired pore size range and pore morphology. Among different scaffold fabrication techniques, thermally induced phase separation (TIPS) method has been widely used in recent years because of its potential to produce highly porous scaffolds with interconnected pore morphology. The scaffold architecture can be closely controlled by adjusting the process parameters, including polymer type and concentration, solvent composition, quenching temperature and time, coarsening process, and incorporation of inorganic particles. The objective of this review is to provide information pertaining to the effect of these parameters on the architecture and properties of the scaffolds fabricated by the TIPS technique. © 2014 Wiley Periodicals, Inc.

  6. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    NASA Astrophysics Data System (ADS)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  7. Predictive process simulation of cryogenic implants for leading edge transistor design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh

    2012-11-06

    Two cryogenic implant TCAD-modules have been developed: (i) A continuum-based compact model targeted towards a TCAD production environment calibrated against an extensive data-set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data-set. (ii) A Kinetic Monte Carlo (kMC) model including the full time dependence of ion-exposure that a particular spot on the wafer experiences, as well as the resulting temperature vs. time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time-structure of the beam for the amorphization process: Assuming an average dose-rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.

  8. Acceptance procedures for dense-graded mixes

    DOT National Transportation Integrated Search

    2001-03-01

    Recent literature related to acceptance procedures for dense-graded mixtures is summarized. Current state of practice and development of acceptance procedures are reviewed. Many agencies are reducing the number of process control-related parameters i...

  9. Data Driven Ionospheric Modeling in Relation to Space Weather: Percent Cloud Coverage

    NASA Astrophysics Data System (ADS)

    Tulunay, Y.; Senalp, E. T.; Tulunay, E.

    2009-04-01

    Since 1990, a small group at METU has been developing data-driven models to forecast critical system parameters related to near-Earth space processes. This background has contributed to the COST 724 activities and will contribute to the new ES0803 activities. This work describes one of those contributions, namely the forecasting of meteorological parameters by considering the probable influence of cosmic rays (CR) and sunspot numbers (SSN). The data-driven method is generic and applicable to many near-Earth space processes, including ionospheric/plasmaspheric interactions. It is believed that the EURIPOS initiative would be useful in supplying a wide range of reliable data to the models developed. Quantifying the physical mechanisms that causally link space weather to the Earth's weather has been a challenging task. On this basis, the percent cloud coverage (%CC) and cloud top temperatures (CTT) were forecast one month ahead between geographic coordinates (22.5˚N; 57.5˚N) and (7.5˚W; 47.5˚E) at 96 grid locations, covering the years 1983 to 2000, using the Middle East Technical University Fuzzy Neural Network Model (METU-FNN-M) [Tulunay, 2008]. Near-Earth space variability at several different time scales arises from a number of separate factors, and the physics of the variations cannot be fully modeled because of a lack of information about the parameters of several natural processes. CR are shielded by the magnetosphere to a certain extent, but they can modulate low-level cloud cover. METU-FNN-M was developed, trained and applied to forecast %CC and CTT by considering the history of those meteorological variables; cloud optical depth (COD); the ionization (I) value, which is formulated and computed using CR data and CTT; SSN; temporal variables; and defuzzified cloudiness. The temporal and spatial variables and the cut-off rigidity are used to compute the defuzzified cloudiness. The forecast %CC and CTT values at uniformly spaced grids over the region of interest are used for mapping by Bezier surfaces. The major advantage of the fuzzy model is that it uses its inputs and expert knowledge in coordination. A long-term cloud analysis was performed on a region with contrasting atmospheric activity in order to show the generalization capability. Both global and local parameters of the process were considered: CR flux and SSN reflect the influence of space weather on the general planetary situation, while the other model inputs reflect the local situation. Error and correlation analyses of the forecast and observed parameters were performed, and the correlations between them are very promising. The model results support a dependence of the cloud formation process on CR fluxes. The one-month-ahead forecasts can also be used as inputs to other models that forecast other local or global parameters, in order to further test the hypothesis of possible link(s) between space weather and the Earth's weather. The model-based, theoretical and numerical works mentioned are promising and have potential for future research and development. Reference: Tulunay Y., E.T. Şenalp, Ş. Öz, L.I. Dorman, E. Tulunay, S.S. Menteş and M.E. Akcan (2008), A Fuzzy Neural Network Model to Forecast the Percent Cloud Coverage and Cloud Top Temperature Maps, Ann. Geophys., 26(12), 3945-3954.

  10. Neutron coincidence measurements when nuclear parameters vary during the multiplication process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.

    1995-07-01

    In a recent paper, a physical/mathematical model was developed for neutron coincidence counting, taking explicit account of neutron absorption and leakage, and using dual probability generating function to derive explicit formulae for the single and multiple count-rates in terms of the physical parameters of the system. The results of this modeling proved very successful in a number of cases in which the system parameters (neutron reaction cross-sections, detection probabilities, etc.) remained the same at the various stages of the process (i.e. from collision to collision). However, there are practical circumstances in which such system parameters change from collision to collision, and it is necessary to accommodate these, too, in a general theory, applicable to such situations. For instance, in the case of the neutron coincidence collar (NCC), the parameters for the initial, spontaneous fission neutrons, are not the same as those for the succeeding induced fission neutrons, and similar situations can be envisaged for certain other experimental configurations. This present document shows how the previous considerations can be elaborated to embrace these more general requirements.

  11. Automatic genetic optimization approach to two-dimensional blade profile design for steam turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trigg, M.A.; Tubby, G.R.; Sheard, A.G.

    1999-01-01

    In this paper a systematic approach to the optimization of two-dimensional blade profiles is presented. A genetic optimizer has been developed that modifies the blade profile and calculates its profile loss. This process is automatic, producing profile designs significantly faster and with significantly lower loss than has previously been possible. The optimizer developed uses a genetic algorithm to optimize a two-dimensional profile, defined using 17 parameters, for minimum loss with a given flow condition. The optimizer works with a population of two-dimensional profiles with varied parameters. A CFD mesh is generated for each profile, and the result is analyzed using a two-dimensional blade-to-blade solver, written for steady viscous compressible flow, to determine profile loss. The loss is used as the measure of a profile's fitness. The optimizer uses this information to select the members of the next population, applying crossovers, mutations, and elitism in the process. Using this method, the optimizer tends toward the best values for the parameters defining the profile with minimum loss.
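
    The abstract above describes a standard generational GA loop (selection, crossover, mutation, elitism) with the CFD-computed profile loss as the fitness. The sketch below reproduces only that loop shape; the blade-to-blade CFD solve is replaced by a stand-in quadratic loss so the example runs, and the population size, mutation scale, and parameter bounds are invented.

    ```python
    # Minimal GA sketch in the spirit of the approach described above. The real
    # fitness call is a 2-D viscous blade-to-blade CFD solve; here a stand-in
    # quadratic "loss" is used so the loop runs. Bounds and rates are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    N_PARAMS, POP, GENS, ELITE, MUT_SIGMA = 17, 40, 50, 2, 0.05

    def profile_loss(params):                 # placeholder for the CFD loss evaluation
        return float(np.sum((params - 0.3) ** 2))

    pop = rng.uniform(0.0, 1.0, size=(POP, N_PARAMS))
    for gen in range(GENS):
        loss = np.array([profile_loss(p) for p in pop])
        order = np.argsort(loss)
        elite = pop[order[:ELITE]]            # elitism: carry best profiles forward
        parents = pop[order[:POP // 2]]       # truncation selection
        children = []
        while len(children) < POP - ELITE:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_PARAMS)   # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0.0, MUT_SIGMA, N_PARAMS)   # mutation
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.vstack([elite, children])

    print("best stand-in loss:", profile_loss(pop[0]))
    ```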

  12. Optimization of CO2 laser cutting parameters on Austenitic type Stainless steel sheet

    NASA Astrophysics Data System (ADS)

    Parthiban, A.; Sathish, S.; Chandrasekaran, M.; Ravikumar, R.

    2017-03-01

    Thin AISI 316L stainless steel sheet is widely used in sheet metal processing industries for specific applications. CO2 laser cutting is one of the most popular processes for cutting sheets into different profiles. In the present work, cutting parameters such as laser power (2000-4000 W), cutting speed (3500-5500 mm/min) and assist gas pressure (0.7-0.9 MPa) were varied for cutting 2 mm thick AISI 316L stainless steel sheet. The experimentation was conducted based on a Box-Behnken design. The aim of this work is to develop a mathematical model of kerf width for straight and curved profiles through response surface methodology. The developed mathematical models for the straight and curved profiles have been compared. The quadratic models show the best agreement with the experimental data, and the shape of the profile plays a substantial role in minimizing the kerf width. Finally, a numerical optimization technique has been used to find the optimum laser cutting parameters for both straight and curved profile cuts.
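
    As a hedged illustration of the response-surface step described above, the sketch below fits a full quadratic model of kerf width in laser power, cutting speed, and gas pressure by least squares, of the kind obtained from a Box-Behnken data set. The data rows are invented placeholders, not the study's measurements.

    ```python
    # Sketch of a quadratic response-surface fit for kerf width in (power, speed,
    # pressure). The data rows below are illustrative placeholders only.
    import numpy as np

    # columns: laser power (W), cutting speed (mm/min), gas pressure (MPa), kerf (mm)
    data = np.array([
        [2000, 3500, 0.8, 0.42], [4000, 3500, 0.8, 0.55], [2000, 5500, 0.8, 0.35],
        [4000, 5500, 0.8, 0.46], [3000, 4500, 0.7, 0.44], [3000, 4500, 0.9, 0.41],
        [2000, 4500, 0.7, 0.40], [4000, 4500, 0.9, 0.50], [3000, 3500, 0.7, 0.47],
        [3000, 5500, 0.9, 0.38], [3000, 4500, 0.8, 0.43], [3000, 4500, 0.8, 0.44],
    ])
    X, y = data[:, :3], data[:, 3]

    def quad_terms(x):
        p, v, g = x
        return [1.0, p, v, g, p*v, p*g, v*g, p*p, v*v, g*g]   # full quadratic model

    A = np.array([quad_terms(row) for row in X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    print("R^2 =", 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2))
    ```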

  13. Critical modeling parameters identified for 3D CFD modeling of rectangular final settling tanks for New York City wastewater treatment plants.

    PubMed

    Ramalingam, K; Xanthos, S; Gong, M; Fillos, J; Beckmann, K; Deur, A; McCorquodale, J A

    2012-01-01

    New York City Environmental Protection is in the process of incorporating biological nitrogen removal (BNR) in its wastewater treatment plants (WWTPs) which entails operating the aeration tanks with higher levels of mixed liquor suspended solids (MLSS) than a conventional activated sludge process. The objective of this paper is to discuss two of the important parameters introduced in the 3D CFD model that has been developed by the City College of New York (CCNY) group: (a) the development of the 'discrete particle' measurement technique to carry out the fractionation of the solids in the final settling tank (FST) which has critical implications in the prediction of the effluent quality; and (b) the modification of the floc aggregation (K(A)) and floc break-up (K(B)) coefficients that are found in Parker's flocculation equation (Parker et al. 1970, 1971) used in the CFD model. The dependence of these parameters on the predictions of the CFD model will be illustrated with simulation results on one of the FSTs at the 26th Ward WWTP in Brooklyn, NY.
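
    For orientation, a commonly quoted form of the Parker et al. flocculation rate for the primary-particle concentration n is dn/dt = K_B·X·G^m − K_A·X·G·n, where X is the suspended-solids concentration, G is the root-mean-square velocity gradient, and K_A and K_B are the aggregation and break-up coefficients referred to above. The break-up exponent m and the exact form used in the CCNY CFD model should be taken from the cited papers, so treat this expression as an indicative sketch rather than the model's own equation.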

  14. Development of Self-Healing Coatings Based on Linseed Oil as Autonomous Repairing Agent for Corrosion Resistance.

    PubMed

    Thanawala, Karan; Mutneja, Nisha; Khanna, Anand S; Raman, R K Singh

    2014-11-11

    In recent years corrosion-resistant self-healing coatings have witnessed strong growth and their successful laboratory design and synthesis categorises them in the family of smart/multi-functional materials. Among various approaches for achieving self-healing, microcapsule embedment through the material matrix is the main one for self-healing ability in coatings. The present work focuses on optimizing the process parameters for developing microcapsules by in-situ polymerization of linseed oil as core and urea-formaldehyde as shell material. Characteristics of these microcapsules with respect to change in processing parameters such as stirring rate and reaction time were studied by using optical microscopy (OM), scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR). The effectiveness of these microcapsules in coatings was characterized by studying their adhesion, performance, and mechanical properties.

  15. Dilution in single pass arc welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DuPont, J.N.; Marder, A.R.

    1996-06-01

    A study was conducted on dilution of single pass arc welds of type 308 stainless steel filler metal deposited onto A36 carbon steel by the plasma arc welding (PAW), gas tungsten arc welding (GTAW), gas metal arc welding (GMAW), and submerged arc welding (SAW) processes. Knowledge of the arc and melting efficiency was used in a simple energy balance to develop an expression for dilution as a function of welding variables and thermophysical properties of the filler metal and substrate. Comparison of calculated and experimentally determined dilution values shows the approach provides reasonable predictions of dilution when the melting efficiency can be accurately predicted. The conditions under which such accuracy is obtained are discussed. A diagram is developed from the dilution equation which readily reveals the effect of processing parameters on dilution to aid in parameter optimization.
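
    The kind of energy balance described above can be sketched as follows (a simplified form, not necessarily the exact expression developed in the paper): with η_a the arc efficiency, η_m the melting efficiency, P the arc power, V̇_f the volumetric deposition rate of filler metal, and E_f, E_s the enthalpies per unit volume needed to melt filler and substrate, then η_a·η_m·P ≈ V̇_f·E_f + V̇_s·E_s, so the melted substrate rate is V̇_s ≈ (η_a·η_m·P − V̇_f·E_f)/E_s and the dilution is D = V̇_s/(V̇_s + V̇_f).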

  16. Development of Self-Healing Coatings Based on Linseed Oil as Autonomous Repairing Agent for Corrosion Resistance

    PubMed Central

    Thanawala, Karan; Mutneja, Nisha; Khanna, Anand S.; Singh Raman, R. K.

    2014-01-01

    In recent years corrosion-resistant self-healing coatings have witnessed strong growth and their successful laboratory design and synthesis categorises them in the family of smart/multi-functional materials. Among various approaches for achieving self-healing, microcapsule embedment through the material matrix is the main one for self-healing ability in coatings. The present work focuses on optimizing the process parameters for developing microcapsules by in-situ polymerization of linseed oil as core and urea-formaldehyde as shell material. Characteristics of these microcapsules with respect to change in processing parameters such as stirring rate and reaction time were studied by using optical microscopy (OM), scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FT-IR). The effectiveness of these microcapsules in coatings was characterized by studying their adhesion, performance, and mechanical properties. PMID:28788249

  17. Discrete element weld model, phase 2

    NASA Technical Reports Server (NTRS)

    Prakash, C.; Samonds, M.; Singhal, A. K.

    1987-01-01

    A numerical method was developed for analyzing the tungsten inert gas (TIG) welding process. The phenomena being modeled include melting under the arc and the flow in the melt under the action of buoyancy, surface tension, and electromagnetic forces. The latter entails the calculation of the electric potential and the computation of electric current and magnetic field therefrom. Melting may occur at a single temperature or over a temperature range, and the electrical and thermal conductivities can be a function of temperature. Results of sample calculations are presented and discussed at length. A major research contribution has been the development of numerical methodology for the calculation of phase change problems in a fixed grid framework. The model has been implemented on CHAM's general purpose computer code PHOENICS. The inputs to the computer model include: geometric parameters, material properties, and weld process parameters.

  18. Prediction of porosity of food materials during drying: Current challenges and directions.

    PubMed

    Joardder, Mohammad U H; Kumar, C; Karim, M A

    2017-07-18

    Pore formation in food samples is a common physical phenomenon observed during dehydration processes. The pore evolution during drying significantly affects the physical properties and quality of dried foods. Therefore, it should be taken into consideration when predicting transport processes in the drying sample. Characteristics of pore formation depend on the drying process parameters, product properties and processing time. Understanding the physics of pore formation and evolution during drying will assist in accurately predicting the drying kinetics and quality of food materials. Researchers have been trying to develop mathematical models to describe the pore formation and evolution during drying. In this study, existing porosity models are critically analysed and limitations are identified. Better insight into the factors affecting porosity is provided, and suggestions are proposed to overcome the limitations. These include considerations of process parameters such as glass transition temperature, sample temperature, and variable material properties in the porosity models. Several researchers have proposed models for porosity prediction of food materials during drying. However, these models are either very simplistic or empirical in nature and failed to consider relevant significant factors that influence porosity. In-depth understanding of characteristics of the pore is required for developing a generic model of porosity. A micro-level analysis of pore formation is presented for better understanding, which will help in developing an accurate and generic porosity model.

  19. Containerless processing of undercooled melts

    NASA Technical Reports Server (NTRS)

    Perepezko, J. H.

    1993-01-01

    The investigation focused on the control of microstructural evolution in Mn-Al, Fe-Ni, Ni-V, and Au-Pb-Sb alloys through the high undercooling levels provided by containerless processing, and provided fundamental new information on the control of nucleation. Solidification analysis was conducted by means of thermal analysis, x-ray diffraction, and metallographic characterization on samples processed in a laboratory scale drop tube system. The Mn-Al alloy system offers a useful model system with the capability of phase separation on an individual particle basis, thus permitting a more complete understanding of the operative kinetics and the key containerless processing variables. This system provided the opportunity of analyzing the nucleation rate as a function of processing conditions and allowed for the quantitative assessment of the relevant processing parameters. These factors are essential in the development of a containerless processing model which has a predictive capability. Similarly, Ni-V is a model system that was used to study duplex partitionless solidification, which is a structure possible only in high-undercooling solidification processes. Nucleation kinetics for the competing bcc and fcc phases were studied to determine how this structure can develop and the conditions under which it may occur. The Fe-Ni alloy system was studied to identify microstructural transitions with controlled variations in sample size and composition during containerless solidification. This work was carried forward to develop a microstructure map which delineates regimes of structural evolution and provides a unified analysis of experimental observations. The Au-Pb-Sb system was investigated to characterize the thermodynamic properties of the undercooled liquid phase and to characterize the glass transition under a variety of processing conditions. By analyzing key containerless processing parameters in a ground based drop tube study, a carefully designed flight experiment may be planned to utilize the extended duration microgravity conditions of orbiting spacecraft.

  20. Laser-Assisted Cold-Sprayed Corrosion- and Wear-Resistant Coatings: A Review

    NASA Astrophysics Data System (ADS)

    Olakanmi, E. O.; Doyoyo, M.

    2014-06-01

    Laser-assisted cold spray (LACS) process will be increasingly employed for depositing coatings because of its unique advantages: solid-state deposition of dense, homogeneous, and pore-free coatings onto a range of substrates; and high build rate at reduced operating costs without the use of expensive heating and process inert gases. Depositing coatings with excellent performance indicators via LACS demands an accurate knowledge and control of processing and materials' variables. By varying the LACS process parameters and their interactions, the functional properties of coatings can be manipulated. Moreover, thermal effect due to laser irradiation and microstructural evolution complicate the interpretation of LACS mechanical deformation mechanism which is essential for elucidating its physical phenomena. In order to provide a basis for follow-on-research that leads to the development of high-productivity LACS processing of coatings, this review focuses on the latest developments in depositing corrosion- and wear-resistant coatings with the emphasis on the composition, structure, and mechanical and functional properties. Historical developments and fundamentals of LACS are addressed in an attempt to describe the physics behind the process. Typical technological applications of LACS coatings are also identified. The investigations of all process sequences, from laser irradiation of the powder-laden gas stream and the substrate, to the impingement of thermally softened particles on the deposition site, and subsequent further processes, are described. Existing gaps in the literature relating to LACS-dependent microstructural evolution, mechanical deformation mechanisms, correlation between functional properties and process parameters, processing challenges, and industrial applications have been identified in order to provide insights for further investigations and innovation in LACS deposition of wear- and corrosion-resistant coatings.

  1. Hybrid Modeling of Cell Signaling and Transcriptional Reprogramming and Its Application in C. elegans Development.

    PubMed

    Fertig, Elana J; Danilova, Ludmila V; Favorov, Alexander V; Ochs, Michael F

    2011-01-01

    Modeling of signal driven transcriptional reprogramming is critical for understanding of organism development, human disease, and cell biology. Many current modeling techniques discount key features of the biological sub-systems when modeling multiscale, organism-level processes. We present a mechanistic hybrid model, GESSA, which integrates a novel pooled probabilistic Boolean network model of cell signaling and a stochastic simulation of transcription and translation responding to a diffusion model of extracellular signals. We apply the model to simulate the well studied cell fate decision process of the vulval precursor cells (VPCs) in C. elegans, using experimentally derived rate constants wherever possible and shared parameters to avoid overfitting. We demonstrate that GESSA recovers (1) the effects of varying scaffold protein concentration on signal strength, (2) amplification of signals in expression, (3) the relative external ligand concentration in a known geometry, and (4) feedback in biochemical networks. We demonstrate that setting model parameters based on wild-type and LIN-12 loss-of-function mutants in C. elegans leads to correct prediction of a wide variety of mutants including partial penetrance of phenotypes. Moreover, the model is relatively insensitive to parameters, retaining the wild-type phenotype for a wide range of cell signaling rate parameters.

  2. Journal: Efficient Hydrologic Tracer-Test Design for Tracer ...

    EPA Pesticide Factsheets

    Hydrological tracer testing is the most reliable diagnostic technique available for the determination of basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed to facilitate the design of tracer tests by root determination of the one-dimensional advection-dispersion equation (ADE) using a preset average tracer concentration, which provides a theoretical basis for an estimate of necessary tracer mass. The method uses basic measured field parameters (e.g., discharge, distance, cross-sectional area) that are combined in functional relationships that describe solute-transport processes related to flow velocity and time of travel. These initial estimates for time of travel and velocity are then applied to a hypothetical continuous stirred tank reactor (CSTR) as an analog for the hydrological-flow system to develop initial estimates for tracer concentration, tracer mass, and axial dispersion. Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be necessary for descri
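
    The following sketch illustrates, under simplifying assumptions, the root-finding idea described above: predict a breakthrough curve from the one-dimensional ADE for an instantaneous release and solve for the tracer mass whose average breakthrough concentration matches a preset target. The field values, the dispersion estimate, and the averaging window are invented, and this is a simplified reading of the EHTD workflow rather than the EHTD program itself.

    ```python
    # Simplified sketch of the root-finding idea behind EHTD (not the EHTD program
    # itself): choose the tracer mass M so that the average concentration of the
    # predicted breakthrough curve matches a preset target. Field values are invented.
    import numpy as np
    from scipy.optimize import brentq

    Q, A, x = 0.05, 2.0, 500.0         # discharge (m^3/s), cross-section (m^2), distance (m)
    v = Q / A                          # mean velocity from measured field parameters
    D = 0.05 * x * v                   # crude longitudinal dispersion estimate (assumed)
    C_target = 10e-6                   # preset average breakthrough concentration (kg/m^3)

    t = np.linspace(1.0, 4.0 * x / v, 4000)

    def breakthrough(M):               # 1-D ADE, instantaneous injection of mass M
        return (M / (A * np.sqrt(4.0 * np.pi * D * t))) * np.exp(-(x - v * t) ** 2 / (4.0 * D * t))

    def avg_minus_target(M):
        c = breakthrough(M)
        window = c > 0.05 * c.max()    # average over the main body of the curve
        return c[window].mean() - C_target

    M_needed = brentq(avg_minus_target, 1e-6, 100.0)
    print("estimated tracer mass to release: %.3f kg" % M_needed)
    ```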

  3. Tracer-Test Planning Using the Efficient Hydrologic Tracer ...

    EPA Pesticide Factsheets

    Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be

  4. EFFICIENT HYDROLOGICAL TRACER-TEST DESIGN (EHTD ...

    EPA Pesticide Factsheets

    Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass.Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to

  5. Optics Program Simplifies Analysis and Design

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Engineers at Goddard Space Flight Center partnered with software experts at Mide Technology Corporation, of Medford, Massachusetts, through a Small Business Innovation Research (SBIR) contract to design the Disturbance-Optics-Controls-Structures (DOCS) Toolbox, a software suite for performing integrated modeling for multidisciplinary analysis and design. The DOCS Toolbox integrates various discipline models into a coupled process math model that can then predict system performance as a function of subsystem design parameters. The system can be optimized for performance; design parameters can be traded; parameter uncertainties can be propagated through the math model to develop error bounds on system predictions; and the model can be updated, based on component, subsystem, or system level data. The Toolbox also allows the definition of process parameters as explicit functions of the coupled model and includes a number of functions that analyze the coupled system model and provide for redesign. The product is being sold commercially by Nightsky Systems Inc., of Raleigh, North Carolina, a spinoff company that was formed by Mide specifically to market the DOCS Toolbox. Commercial applications include use by any contractors developing large space-based optical systems, including Lockheed Martin Corporation, The Boeing Company, and Northrop Grumman Corporation, as well as companies providing technical audit services, like General Dynamics Corporation.

  6. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many
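
    As an illustration of the Fourier amplitude sensitivity test used above, the sketch below runs FAST on a toy three-parameter stand-in for a hydrologic response using the SALib package (assumed installed). The parameter names, bounds, and response function are invented and are not PRMS.

    ```python
    # Sketch of a Fourier amplitude sensitivity test (FAST) on a toy stand-in for a
    # hydrologic response, using SALib (an assumption: SALib is installed).
    import numpy as np
    from SALib.sample import fast_sampler
    from SALib.analyze import fast

    problem = {
        "num_vars": 3,
        "names": ["snow_melt_coeff", "soil_moist_max", "gw_flow_coeff"],
        "bounds": [[1.0, 8.0], [2.0, 10.0], [0.01, 0.5]],
    }

    X = fast_sampler.sample(problem, 1000)          # FAST sample of the parameter space

    def toy_runoff(row):                            # stand-in for a PRMS output statistic
        melt, soil, gw = row
        return 0.6 * melt + 0.1 * soil**2 + 5.0 * gw + 0.2 * melt * gw

    Y = np.array([toy_runoff(r) for r in X])
    Si = fast.analyze(problem, Y)
    for name, s1 in zip(problem["names"], Si["S1"]):
        print(f"{name:>16s}  first-order sensitivity = {s1:.3f}")
    ```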

  7. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.

  8. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048

  9. Thermochemical Ablation Analysis of the Orion Heatshield

    NASA Technical Reports Server (NTRS)

    Sixel, William

    2015-01-01

    The Orion Multi-Purpose Crew Vehicle will one day carry astronauts to the Moon and beyond, and Orion's heatshield is a critical component in ensuring their safe return to Earth. The Orion heatshield is the structural component responsible for absorbing the intense heating environment caused by re-entry to Earth's atmosphere. The heatshield is primarily composed of Avcoat, an ablative material that is consumed during the re-entry process. Ablation is primarily characterized by two processes: pyrolysis and recession. The decomposition of in-depth virgin material is known as pyrolysis. Recession occurs when the exposed surface of the heatshield reacts with the surrounding flow. The Orion heatshield design was changed from an individually filled Avcoat honeycomb to a molded block Avcoat design. The molded block Avcoat heatshield relies on an adhesive bond to keep it attached to the capsule. In some locations on the heatshield, the integrity of the adhesive bond cannot be verified. For these locations, a mechanical retention device was proposed. Avcoat ablation was modelled in CHAR and the in-depth virgin material temperatures were used in a Thermal Desktop model of the mechanical retention device. The retention device was analyzed and shown to cause a large increase in the maximum bondline temperature. In order to study the impact of individual ablation modelling parameters on the heatshield sizing process, a Monte Carlo simulation of the sizing process was proposed. The simulation will give the sensitivity of the ablation model to each of its input parameters. As part of the Monte Carlo simulation, statistical uncertainties on material properties were required for Avcoat. Several properties were difficult to acquire uncertainties for: the pyrolysis gas enthalpy, non-dimensional mass loss rate (B´c), and Arrhenius equation parameters. Variability in the elemental composition of Avcoat was used as the basis for determining the statistical uncertainty in pyrolysis gas enthalpy and B´c. A MATLAB program was developed to allow for faster, more accurate and automated computation of Arrhenius reaction parameters. These parameters are required for a material model to be used in the CHAR ablation analysis program. This MATLAB program, along with thermogravimetric analysis (TGA) data, was used to generate uncertainties on the Arrhenius parameters for Avcoat. In addition, the TGA fitting program was developed to provide Arrhenius parameters for the ablation model of the gap filler material, RTV silicone.
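
    The MATLAB fitting program itself is not reproduced here, but the underlying calculation is the standard linearized Arrhenius fit, ln k = ln A − Ea/(R·T). The sketch below performs that fit on synthetic (T, k) pairs standing in for rate estimates derived from a TGA mass-loss curve; the numbers are placeholders.

    ```python
    # Sketch of extracting Arrhenius parameters (A, Ea) from thermogravimetric data by
    # a linearized fit ln k = ln A - Ea/(R*T). The (T, k) pairs below are synthetic
    # placeholders standing in for rates derived from a TGA mass-loss curve.
    import numpy as np

    R = 8.314                                        # J/(mol K)
    T = np.array([550.0, 600.0, 650.0, 700.0, 750.0])  # K
    rng = np.random.default_rng(1)
    k = 1.0e9 * np.exp(-150_000.0 / (R * T)) * (1 + 0.02 * rng.standard_normal(5))

    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)   # ln k vs 1/T
    Ea = -slope * R                                  # activation energy, J/mol
    A = np.exp(intercept)                            # pre-exponential factor, 1/s
    print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {A:.2e} 1/s")
    ```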

  10. Design of high productivity antibody capture by protein A chromatography using an integrated experimental and modeling approach.

    PubMed

    Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G

    2012-06-15

    An integrated experimental and modeling approach for the design of high productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required in order to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the best design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (or approximately 2 weeks). Copyright © 2012 Elsevier B.V. All rights reserved.
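
    To illustrate the kind of productivity trade-off optimized above, the sketch below scans load time for a batch capture step using a crude saturating-capacity model, with productivity defined as captured mass per volume of resin per cycle time. The capacity, titre, flow rate, and non-load times are invented placeholders, not the AbSolute or process parameters from the study.

    ```python
    # Toy productivity scan for a batch capture step: productivity = captured mass /
    # (resin volume * cycle time). All numbers below are invented placeholders.
    import numpy as np

    CV = np.pi * (0.25 ** 2) * 8.5              # column volume, mL (5 mm dia., 8.5 cm bed)
    q_max = 45.0                                # usable binding capacity, mg IgG / mL resin
    feed = 1.5                                  # feed titre, mg/mL
    flow = 2.0                                  # load flow rate, mL/min
    t_fixed = 40.0                              # wash + elute + regenerate time, min

    t_load = np.linspace(5.0, 400.0, 1000)      # candidate load times, min
    loaded = feed * flow * t_load / CV          # mg loaded per mL of resin
    captured = np.minimum(loaded, q_max)        # resin saturates at q_max (crude model)
    productivity = captured / (t_load + t_fixed)   # mg / (mL resin * min)

    best = np.argmax(productivity)
    print(f"best load time ~ {t_load[best]:.0f} min, "
          f"productivity ~ {productivity[best] * 1440 / 1000:.2f} kg/L/day")
    ```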

  11. Laser Direct Metal Deposition of 2024 Al Alloy: Trace Geometry Prediction via Machine Learning.

    PubMed

    Caiazzo, Fabrizia; Caggiano, Alessandra

    2018-03-19

    Laser direct metal deposition is an advanced additive manufacturing technology suitably applicable in maintenance, repair, and overhaul of high-cost products, allowing for minimal distortion of the workpiece, reduced heat affected zones, and superior surface quality. Special interest is growing for the repair and coating of 2024 aluminum alloy parts, extensively utilized for a wide range of applications in the automotive, military, and aerospace sectors due to its excellent plasticity, corrosion resistance, electric conductivity, and strength-to-weight ratio. A critical issue in the laser direct metal deposition process is related to the geometrical parameters of the cross-section of the deposited metal trace that should be controlled to meet the part specifications. In this research, a machine learning approach based on artificial neural networks is developed to find the correlation between the laser metal deposition process parameters and the output geometrical parameters of the deposited metal trace produced by laser direct metal deposition on 5-mm-thick 2024 aluminum alloy plates. The results show that the neural network-based machine learning paradigm is able to accurately estimate the appropriate process parameters required to obtain a specified geometry for the deposited metal trace.
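
    A minimal version of the neural-network regression idea can be sketched as below: a small multilayer perceptron maps (laser power, scan speed, powder feed rate) to deposited-trace (width, height). The training data are synthetic stand-ins and the input/output choices are assumptions for illustration, not the study's measured data or network architecture.

    ```python
    # Sketch of the neural-network regression idea: map (laser power, scan speed,
    # powder feed rate) to trace (width, height). Training data are synthetic stand-ins.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = np.column_stack([
        rng.uniform(1500, 3000, 200),     # laser power, W (assumed range)
        rng.uniform(300, 900, 200),       # scan speed, mm/min (assumed range)
        rng.uniform(2, 8, 200),           # powder feed rate, g/min (assumed range)
    ])
    width = 0.8 + 8e-4 * X[:, 0] - 1e-3 * X[:, 1] + rng.normal(0, 0.05, 200)
    height = 0.1 + 0.08 * X[:, 2] - 2e-4 * X[:, 1] + rng.normal(0, 0.02, 200)
    Y = np.column_stack([width, height])          # mm

    scaler = StandardScaler().fit(X)
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(scaler.transform(X), Y)
    print("predicted (width, height) mm:",
          model.predict(scaler.transform([[2200, 600, 5]])).round(2))
    ```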

  12. A Systematic Evaluation of Blood Serum and Plasma Pre-Analytics for Metabolomics Cohort Studies

    PubMed Central

    Jobard, Elodie; Trédan, Olivier; Postoly, Déborah; André, Fabrice; Martin, Anne-Laure; Elena-Herrmann, Bénédicte; Boyault, Sandrine

    2016-01-01

    The recent thriving development of biobanks and associated high-throughput phenotyping studies requires the elaboration of large-scale approaches for monitoring biological sample quality and compliance with standard protocols. We present a metabolomic investigation of human blood samples that delineates pitfalls and guidelines for the collection, storage and handling procedures for serum and plasma. A series of eight pre-processing technical parameters is systematically investigated along variable ranges commonly encountered across clinical studies. While metabolic fingerprints, as assessed by nuclear magnetic resonance, are not significantly affected by altered centrifugation parameters or delays between sample pre-processing (blood centrifugation) and storage, our metabolomic investigation highlights that both the delay and storage temperature between blood draw and centrifugation are the primary parameters impacting serum and plasma metabolic profiles. Storing the blood drawn at 4 °C is shown to be a reliable routine to confine variability associated with idle time prior to sample pre-processing. Based on their fine sensitivity to pre-analytical parameters and protocol variations, metabolic fingerprints could be exploited as valuable ways to determine compliance with standard procedures and quality assessment of blood samples within large multi-omic clinical and translational cohort studies. PMID:27929400

  13. Laser Direct Metal Deposition of 2024 Al Alloy: Trace Geometry Prediction via Machine Learning

    PubMed Central

    2018-01-01

    Laser direct metal deposition is an advanced additive manufacturing technology suitably applicable in maintenance, repair, and overhaul of high-cost products, allowing for minimal distortion of the workpiece, reduced heat affected zones, and superior surface quality. Special interest is growing for the repair and coating of 2024 aluminum alloy parts, extensively utilized for a wide range of applications in the automotive, military, and aerospace sectors due to its excellent plasticity, corrosion resistance, electric conductivity, and strength-to-weight ratio. A critical issue in the laser direct metal deposition process is related to the geometrical parameters of the cross-section of the deposited metal trace that should be controlled to meet the part specifications. In this research, a machine learning approach based on artificial neural networks is developed to find the correlation between the laser metal deposition process parameters and the output geometrical parameters of the deposited metal trace produced by laser direct metal deposition on 5-mm-thick 2024 aluminum alloy plates. The results show that the neural network-based machine learning paradigm is able to accurately estimate the appropriate process parameters required to obtain a specified geometry for the deposited metal trace. PMID:29562682

  14. The Mars mapper science and mission planning tool

    NASA Technical Reports Server (NTRS)

    Lo, Martin W.

    1993-01-01

    The Mars Mapper Program (MOm) is an interactive tool for science and mission design developed for the Mars Observer Mission (MO). MOm is a function of the Planning and Sequencing Element of the MO Ground Data System. The primary users of MOm are members of the science and mission planning teams. Using MOm, the user can display digital maps of Mars in various projections and resolutions ranging from 1 to 256 pixels per degree squared. The user can overlay the maps with ground tracks of the MO spacecraft (S/C) and footprints and swaths of the various instruments on-board the S/C. Orbital and instrument geometric parameters can be computed on demand and displayed on the digital map or plotted in XY-plots. The parameter data can also be saved into files for other uses. MOm is divided into 3 major processes: Generator, Mapper, Plotter. The Generator Process is the main control which spawns all other processes. The processes communicate via sockets. At any one time, only 1 copy of MOm may operate on the system. However, up to 5 copies of each of the major processes may be invoked from the Generator. MOm is developed on the Sun SPARCStation 2GX with menu driven graphical user interface (GUI). The map window and its overlays are mouse-sensitized to permit on-demand calculations of various parameters along an orbit. The program is currently under testing and will be delivered to the MO Mission System Configuration Management for distribution to the MO community in 3/93.

  15. Discrete element modeling of the mass movement and loose material supplying the gully process of a debris avalanche in the Bayi Gully, Southwest China

    NASA Astrophysics Data System (ADS)

    Zhou, Jia-wen; Huang, Kang-xin; Shi, Chong; Hao, Ming-hui; Guo, Chao-xu

    2015-03-01

    The dynamic process of a debris avalanche in mountainous areas is influenced by the landslide volume, topographical conditions, mass-material composition, mechanical properties and other factors. A good understanding of the mass movement and loose material supplying the gully process is very important for understanding the dynamic properties of debris avalanches. Three-dimensional particle flow code (PFC3D) was used to simulate a debris avalanche in Quaternary deposits at the Bayi Gully, Southwest China. FORTRAN and AutoCAD were used for the secondary development to display the mass movement process and to quantitatively describe the mass movement and loose material supplying the gully process. The simulated results show that after the landslide is initiated, the gravitational potential energy is converted into kinetic energy with varying velocity for the sliding masses. Two stages exist for the average-movement velocity: the acceleration stage and the slowdown stage, which are influenced by the topographical conditions. For the loose materials supplying the gully process, the cumulative volume of the sliding masses into the gully gradually increases over time. When the landslide volume is not large enough, the increasing landslide volume does not obviously influence the movement process of the sliding masses. The travel distance and movement velocity increase with the decreasing numerical parameters, and the mass-movement process is finished more quickly using low-value parameters. The deposition area of the sliding masses decreases with the increasing numerical parameters and the corresponding deposition thickness increases. The mass movement of the debris avalanche is not only influenced by the mechanical parameters but is also controlled by the topographical conditions.

  16. The dynamic nature of conflict in Wikipedia

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Sampaio dos Aidos, F.; Carvalho, J.

    2014-10-01

    The voluntary process of Wikipedia edition provides an environment in which the outcome is clearly a collective product of interactions involving a large number of people. We propose a simple agent-based model, developed from real data, to reproduce the collaborative process of Wikipedia edition. With a small number of simple ingredients, our model mimics several interesting features of real human behaviour, namely in the context of edit wars. We show that the level of conflict is determined by a tolerance parameter, which measures the editors' capability to accept different opinions and to change their own opinion. We propose to measure conflict with a parameter based on mutual reverts, which increases only in contentious situations. Using this parameter, we find a distribution for the inter-peace periods that is heavy tailed. The effects of wiki-robots in the conflict levels and in the edition patterns are also studied. Our findings are compared with previous parameters used to measure conflicts in edit wars.
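
    As a toy illustration of a mutual-revert-based measure, the sketch below tallies reciprocated reverts between editor pairs from a small, invented edit list; the exact weighting used in the article may differ.

    ```python
    # Toy mutual-revert tally in the spirit of the conflict measure described above
    # (the exact weighting used in the article may differ). An edit is recorded as
    # (editor, reverted_editor or None); mutual reverts between a pair raise conflict.
    from collections import defaultdict
    from itertools import combinations

    edits = [
        ("A", None), ("B", "A"), ("A", "B"), ("C", None),
        ("B", "A"), ("A", "B"), ("C", "B"),
    ]

    reverts = defaultdict(int)                    # (x, y) -> number of times x reverted y
    for editor, reverted in edits:
        if reverted is not None and reverted != editor:
            reverts[(editor, reverted)] += 1

    editors = {e for e, _ in edits}
    conflict = 0
    for x, y in combinations(sorted(editors), 2):
        mutual = min(reverts[(x, y)], reverts[(y, x)])   # counts only reciprocated reverts
        conflict += mutual
    print("mutual-revert conflict score:", conflict)
    ```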

  17. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  18. Continued Data Acquisition Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwellenbach, David

    This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology based on work in FY13. Typically PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real time mode and sequential mode. Real time mode is the preferred system but has proven to be a significant challenge since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post processing to extract the data.

  19. A Workflow for Global Sensitivity Analysis of PBPK Models

    PubMed Central

    McNally, Kevin; Cotton, Richard; Loizou, George D.

    2011-01-01

    Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structures of PBPK models are ideal frameworks into which disparate in vitro and in vivo data can be integrated and utilized to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include descriptions of well-known non-linear biological processes, such as enzyme saturation, and of interactions between parameters, such as organ mass and body mass. Therefore, an appropriate sensitivity analysis (SA) technique is required which can quantify the influences associated with individual parameters, interactions between parameters and any non-linear processes. In this report we have defined the elements of a workflow for SA of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors, and regulators. PMID:21772819

  20. Study of Material Consolidation at Higher Throughput Parameters in Selective Laser Melting of Inconel 718

    NASA Technical Reports Server (NTRS)

    Prater, Tracie

    2016-01-01

    Selective Laser Melting (SLM) is a powder bed fusion additive manufacturing process used increasingly in the aerospace industry to reduce the cost, weight, and fabrication time for complex propulsion components. SLM stands poised to revolutionize propulsion manufacturing, but there are a number of technical questions that must be addressed in order to achieve rapid, efficient fabrication and ensure adequate performance of parts manufactured using this process in safety-critical flight applications. Previous optimization studies for SLM using the Concept Laser M1 and M2 machines at NASA Marshall Space Flight Center have centered on machine default parameters. The objective of this work is to characterize the impact of higher throughput parameters (a previously unexplored region of the manufacturing operating envelope for this application) on material consolidation. In phase I of this work, density blocks were analyzed to explore the relationship between build parameters (laser power, scan speed, hatch spacing, and layer thickness) and material consolidation (assessed in terms of as-built density and porosity). Phase II additionally considers the impact of post-processing, specifically hot isostatic pressing and heat treatment, as well as deposition pattern on material consolidation in the same higher energy parameter regime considered in the phase I work. Density and microstructure represent the "first-gate" metrics for determining the adequacy of the SLM process in this parameter range and, as a critical initial indicator of material quality, will factor into a follow-on DOE that assesses the impact of these parameters on mechanical properties. This work will contribute to creating a knowledge base (understanding material behavior in all ranges of the AM equipment operating envelope) that is critical to transitioning AM from the custom low rate production sphere it currently occupies to the world of mass high rate production, where parts are fabricated at a rapid rate with confidence that they will meet or exceed all stringent functional requirements for spaceflight hardware. These studies will also provide important data on the sensitivity of material consolidation to process parameters that will inform the design and development of future flight articles using SLM.
