Sample records for key process parameter

  1. Controlling Ethylene for Extended Preservation of Fresh Fruits and Vegetables

    DTIC Science & Technology

    2008-12-01

    …into a process simulation to determine the effects of key design parameters on the overall performance of the system. Integrating process simulation… [garbled table of decay ratings for Asian pears, avocados, bananas, cantaloupe, and cherimoya omitted] …ozonolysis. Process simulation was subsequently used to understand the effect of key system parameters on EEU performance. Using this modeling work…

  2. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
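    The privacy-amplification step named above is commonly implemented with universal (Toeplitz) hashing. A minimal sketch under that assumption, with toy key sizes and names of our own choosing (the record does not give the authors' actual construction):

```python
import numpy as np

def toeplitz_hash(sifted_key, seed_bits, out_len):
    """Privacy amplification by universal (Toeplitz) hashing: compress the
    error-corrected key of length n down to out_len bits over GF(2)."""
    n = len(sifted_key)
    # A Toeplitz matrix of shape (out_len, n) is fully determined by
    # out_len + n - 1 random seed bits shared between the two parties.
    assert len(seed_bits) == out_len + n - 1
    k = np.array(sifted_key, dtype=np.uint8)
    out = []
    for i in range(out_len):
        # Entry (i, j) of a Toeplitz matrix depends only on i - j.
        row = [seed_bits[i - j + n - 1] for j in range(n)]
        out.append(int(np.dot(row, k)) % 2)
    return out

rng = np.random.default_rng(7)
sifted = list(rng.integers(0, 2, 32))        # 32-bit error-corrected key (toy)
seed = list(rng.integers(0, 2, 16 + 32 - 1)) # shared hashing seed
final_key = toeplitz_hash(sifted, seed, 16)
```

    The seed must itself travel over the authenticated classical channel, which is why the record also lists channel authentication among the steps.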

  3. Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties

    USDA-ARS's Scientific Manuscript database

    Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as alter the...

  4. Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties

    USDA-ARS's Scientific Manuscript database

    Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as skew the...

  5. A risk-based approach to management of leachables utilizing statistical analysis of extractables.

    PubMed

    Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M

    2015-04-01

    To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle.

  6. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of key process parameters, including temperature, tension, pressure, and velocity, is calculated, and the single-parameter sensitivity curves are obtained. From the analysis of the sensitivity curves, the stable and unstable ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis shows that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s].
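    As an illustration of single-parameter sensitivity curves, the sketch below differentiates a hypothetical one-variable quadratic response-surface model by central finite differences; the coefficients and the stability threshold are invented for the example and are not the paper's model:

```python
import numpy as np

# Hypothetical second-order response-surface model of interlaminar shear
# strength (MPa) versus one normalized process parameter x in [0, 1].
def strength(x):
    return 40.0 + 12.0 * x - 10.0 * x**2   # illustrative coefficients only

def sensitivity_curve(f, xs, h=1e-4):
    """Single-parameter sensitivity: df/dx estimated by a central finite
    difference at each point of the parameter range."""
    return [(f(x + h) - f(x - h)) / (2 * h) for x in xs]

xs = np.linspace(0.0, 1.0, 11)
sens = sensitivity_curve(strength, xs)
# A 'stable' sub-range can be read off as the region of small |sensitivity|
# (the 2.5 cut-off is arbitrary, for illustration).
stable = [float(x) for x, s in zip(xs, sens) if abs(s) < 2.5]
```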

  7. Key management of the double random-phase-encoding method using public-key encryption

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2010-03-01

    Public-key encryption has been used to encode the key of the encryption process. In the proposed technique, an input image is encrypted with the double random-phase-encoding method using the extended fractional Fourier transform. The key of the encryption process is encoded with the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm, and the encoded key is then transmitted to the receiver along with the encrypted image. In the decryption process, the encoded key is first decrypted using the secret key, and the encrypted image is then decrypted using the retrieved key parameters. The proposed technique has an advantage over the double random-phase-encoding method because the problem associated with transmitting the key is eliminated by public-key encryption. Computer simulation has been carried out to validate the proposed technique.
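    The RSA encoding of a key parameter can be sketched with textbook-RSA toy numbers (no padding; real deployments use large moduli and OAEP-style padding):

```python
# Textbook-RSA sketch of encoding an encryption-key parameter.
p, q = 61, 53
n = p * q                      # public modulus (toy size)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)            # private exponent (modular inverse)

key_param = 1234               # e.g. a key parameter scaled to an integer < n
cipher = pow(key_param, e, n)  # sender encodes the key with the public key
recovered = pow(cipher, d, n)  # receiver decodes with the private key
```

    The receiver never needs the plaintext key in transit, which is the advantage the abstract claims over plain double random-phase encoding.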

  8. Field spectrometer (S191H) preprocessor tape quality test program design document

    NASA Technical Reports Server (NTRS)

    Campbell, H. M.

    1976-01-01

    Program QA191H performs quality assurance tests on field spectrometer data recorded on 9-track magnetic tape. The quality testing involves the comparison of key housekeeping and data parameters with historic and predetermined tolerance limits. Samples of key parameters are processed during the calibration period and wavelength cal period, and the results are printed out and recorded on an historical file tape.
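    The tolerance-limit comparison described above amounts to a simple range check per sampled parameter; a sketch with hypothetical parameter names and limits:

```python
# Minimal sketch of the tolerance-limit test: each sampled key parameter is
# compared against predetermined limits (names and values are illustrative).
LIMITS = {"detector_temp_K": (70.0, 90.0), "lamp_current_mA": (180.0, 220.0)}

def qa_check(samples):
    """Return {parameter: 'PASS'/'FAIL'} for a dict of sampled values."""
    report = {}
    for name, value in samples.items():
        lo, hi = LIMITS[name]
        report[name] = "PASS" if lo <= value <= hi else "FAIL"
    return report
```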

  9. Trajectory Dispersed Vehicle Process for Space Launch System

    NASA Technical Reports Server (NTRS)

    Statham, Tamara; Thompson, Seth

    2017-01-01

    The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans, which include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties using a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process uses a Design of Experiments (DOE) and response surface methodology (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process for targeting the uncertainty biases. These vehicles represent various missions and configurations and are used as key inputs to a variety of analyses in the SLS design process, including 6 DOF dispersions, separation clearances, and engine-out failure studies.

  10. Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters

    NASA Astrophysics Data System (ADS)

    Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.

    2018-06-01

    Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role in the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of the ductile iron and the thermal cycle) on key features of the heat treatment, such as the minimum time required for austenitization and austempering and the microstructure of the final product. A computational simulation of the austempering heat treatment is reported in this work, which accounts for coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at a sensitivity index and scatter plots. The sensitivity indices are determined using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help select values of the appropriate process parameters to obtain parts with a required microstructural characteristic.

  11. Research on a Defects Detection Method in the Ferrite Phase Shifter Cementing Process Based on a Multi-Sensor Prognostic and Health Management (PHM) System.

    PubMed

    Wan, Bo; Fu, Guicui; Li, Yanruoyue; Zhao, Youhu

    2016-08-10

    The cementing manufacturing process of ferrite phase shifters has the defect that cementing strength is insufficient and fractures frequently appear. A method for detecting these defects was studied using multi-sensor Prognostic and Health Management (PHM) theory. The causes of the defects are analyzed in this paper. The key process parameters were determined, and Differential Scanning Calorimetry (DSC) tests during the cure process of the resin cement were carried out. To obtain data on changing cementing strength, multiple groups of cementing process tests with different key process parameters were designed and conducted. A relational model of cementing strength versus cure temperature, time, and pressure was established by combining the DSC and process-test data on the basis of the Avrami formula. Through sensitivity analysis of the three process parameters, the on-line detection decision criterion and the process parameters with an obvious impact on cementing strength were determined. A PHM system with multiple temperature and pressure sensors was established on this basis, realizing on-line detection, diagnosis, and control of ferrite phase shifter cementing process defects. Subsequent production verified that the on-line detection system improved the reliability of the ferrite phase shifter cementing process and reduced the incidence of insufficient-cementing-strength defects.
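    The Avrami formula underlying the strength model describes the transformed (cured) fraction as a function of time; a minimal sketch, with the rate constant and exponent left as free illustrative inputs:

```python
import math

def avrami_fraction(k, n, t):
    """Avrami relation X(t) = 1 - exp(-k * t**n): fraction transformed
    (here read as resin cured) after time t. k and n are material- and
    temperature-dependent constants fitted from data such as DSC tests."""
    return 1.0 - math.exp(-k * t**n)
```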

  12. The status of membrane bioreactor technology.

    PubMed

    Judd, Simon

    2008-02-01

    In this article, the current status of membrane bioreactor (MBR) technology for wastewater treatment is reviewed. Fundamental facets of the MBR process and membrane and process configurations are outlined and the advantages and disadvantages over conventional suspended growth-based biotreatment are briefly identified. Key process design and operating parameters are defined and their significance explained. The inter-relationships between these parameters are identified and their implications discussed, with particular reference to impacts on membrane surface fouling and channel clogging. In addition, current understanding of membrane surface fouling and identification of candidate foulants is appraised. Although much interest in this technology exists and its penetration of the market will probably increase significantly, there remains a lack of understanding of key process constraints such as membrane channel clogging, and of the science of membrane cleaning.

  13. Systems Analysis of the Hydrogen Transition with HyTrans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leiby, Paul Newsome; Greene, David L; Bowman, David Charles

    2007-01-01

    The U.S. Federal government is carefully considering the merits and long-term prospects of hydrogen-fueled vehicles. The NAS (1) has called for the careful application of systems analysis tools to structure the complex assessment required. Others, raising cautionary notes, question whether a consistent and plausible transition to hydrogen light-duty vehicles can be identified (2) and whether that transition would, on balance, be environmentally preferred. Modeling the market transition to hydrogen-powered vehicles is an inherently complex process, encompassing hydrogen production, delivery and retailing, vehicle manufacturing, and vehicle choice and use. We describe the integration of key technological and market factors in a dynamic transition model, HyTrans. The usefulness of HyTrans and its predictions depends on three key factors: (1) the validity of the economic theories that underpin the model, (2) the authenticity with which the key processes are represented, and (3) the accuracy of specific parameter values used in the process representations. This paper summarizes the theoretical basis of HyTrans and highlights the implications of key parameter specifications with sensitivity analysis.

  14. An empirical-statistical model for laser cladding of Ti-6Al-4V powder on Ti-6Al-4V substrate

    NASA Astrophysics Data System (ADS)

    Nabhani, Mohammad; Razavi, Reza Shoja; Barekat, Masoud

    2018-03-01

    In this article, Ti-6Al-4V powder alloy was directly deposited on a Ti-6Al-4V substrate using the laser cladding process. In this process, key parameters such as laser power (P), laser scanning rate (V), and powder feeding rate (F) play important roles. Using linear regression analysis, this paper develops empirical-statistical relations between these key parameters and the geometrical characteristics of single clad tracks (i.e., clad height, clad width, penetration depth, wetting angle, and dilution) in the form of a combined parameter (P^α V^β F^γ). The results indicated that the clad width depended linearly on P V^(-1/3), with the powder feeding rate having no effect on it. The dilution was controlled by the combined parameter V F^(-1/2), with laser power a dispensable factor. Laser power was, however, the dominant factor for the clad height, penetration depth, and wetting angle, which were proportional to P V^(-1) F^(1/4), P V F^(-1/8), and P^(3/4) V^(-1) F^(-1/4), respectively. Based on the correlation coefficients (R > 0.9) and analysis of residuals, it was confirmed that these empirical-statistical relations were in good agreement with the measured values of the single clad tracks. Finally, these relations led to the design of a processing map that can predict the geometrical characteristics of single clad tracks from the key parameters.
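    Exponents of such a combined parameter can be recovered by linear regression on logarithms, since log y = log c + α log P + β log V + γ log F. A sketch on synthetic noise-free data with a known clad-width law (the data ranges and units are invented for the example):

```python
import numpy as np

# Recover exponents (alpha, beta, gamma) of a combined parameter
# c * P**alpha * V**beta * F**gamma by least squares on logarithms.
rng = np.random.default_rng(1)
P = rng.uniform(200, 400, 50)   # laser power, W (synthetic)
V = rng.uniform(2, 10, 50)      # scan rate, mm/s (synthetic)
F = rng.uniform(1, 5, 50)       # powder feed rate, g/min (synthetic)
y = 0.3 * P * V**(-1 / 3)       # 'clad width' generated with known exponents

# Design matrix for log y = log c + a*log P + b*log V + g*log F.
A = np.column_stack([np.ones_like(P), np.log(P), np.log(V), np.log(F)])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
logc, alpha, beta, gamma = coef
```

    With noise-free data the fit returns the generating exponents exactly; with measured tracks the residual analysis mentioned in the abstract judges the fit quality.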

  15. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses of high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes using the Bethe-Heitler formula, the Tsai formula, and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time of this fast simulation method is 10^4 times shorter than that of the full GEANT4 simulation.

  16. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    NASA Astrophysics Data System (ADS)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
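    The role of statistical fluctuations in finite-size parameter estimation can be illustrated with a simple concentration bound; the sketch below uses a Hoeffding-style upper confidence bound on an error rate estimated from Bernoulli trials (a generic bound for illustration, not the paper's tighter binomial analysis):

```python
import math

def hoeffding_upper(k, n, eps):
    """Upper confidence bound on an underlying Bernoulli probability after
    observing k 'errors' in n trials, with failure probability eps.
    The sqrt(log(1/eps) / (2n)) term is the finite-size penalty that
    shrinks as the sample grows, improving the final key rate."""
    return k / n + math.sqrt(math.log(1.0 / eps) / (2.0 * n))
```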

  17. A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.

    PubMed

    Faya, Paul; Stamey, James D; Seaman, John W

    2017-01-01

    For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction-negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction-negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
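    The F_0 value itself is the accumulated lethality F_0 = Σ 10^((T − 121.1)/z) Δt over the measured temperature profile; a minimal deterministic sketch (the paper's contribution is placing probability distributions on such quantities, which is not shown here):

```python
def f0_value(temps_c, dt_min, z=10.0, t_ref=121.1):
    """Equivalent sterilization time F0 (minutes at the reference
    temperature 121.1 degC) accumulated over a temperature profile
    sampled every dt_min minutes, for a z value of 10 degC."""
    return sum(10.0 ** ((t - t_ref) / z) * dt_min for t in temps_c)
```

    For example, ten minutes held exactly at 121.1 degC contributes an F0 of 10 minutes, while time spent 10 degC cooler contributes only a tenth as much per minute.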

  18. At-line monitoring of key parameters of nisin fermentation by near infrared spectroscopy, chemometric modeling and model improvement.

    PubMed

    Guo, Wei-Liang; Du, Yi-Ping; Zhou, Yong-Can; Yang, Shuang; Lu, Jia-Hui; Zhao, Hong-Yu; Wang, Yao; Teng, Li-Rong

    2012-03-01

    An analytical procedure has been developed for at-line (fast off-line) monitoring of four key parameters during a nisin fermentation process: nisin titer (NT), the concentration of reducing sugars, cell concentration, and pH. The procedure is based on near-infrared (NIR) spectroscopy and Partial Least Squares (PLS). Samples without any preprocessing were collected at intervals of 1 h during fifteen batches of fermentation. These fermentations were run in three different 5 L fermentors under various conditions. NIR spectra of the samples were collected within 10 min, and PLS was then used to model the relationship between the NIR spectra and the key parameters as determined by reference methods. Monte Carlo Partial Least Squares (MCPLS) was applied to identify outliers and to select the most efficacious spectral preprocessing methods, wavelengths, and number of latent variables (n_LV). The optimal models for determining NT, the concentration of reducing sugars, cell concentration, and pH were then established, with calibration-set correlation coefficients (R_c) of 0.8255, 0.9000, 0.9883, and 0.9581, respectively. These results demonstrate that the method can be successfully applied to at-line monitoring of NT, the concentration of reducing sugars, cell concentration, and pH during nisin fermentation processes.
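    The PLS step can be sketched with a minimal NIPALS-style PLS1 (single response) on synthetic data; this is a generic textbook implementation for illustration, not the authors' MCPLS procedure:

```python
import numpy as np

def pls1_fit(X, y, n_lv):
    """Minimal PLS1 by NIPALS-style deflation. Returns coefficients b such
    that yhat = (X - X.mean(0)) @ b + y.mean()."""
    Xc = X - X.mean(0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    Xk, yk = Xc.copy(), yc.copy()
    for _ in range(n_lv):
        w = Xk.T @ yk                      # weight vector
        w = w / np.linalg.norm(w)
        t = Xk @ w                         # scores
        tt = t @ t
        p = Xk.T @ t / tt                  # X loadings
        q = (yk @ t) / tt                  # y loading
        Xk = Xk - np.outer(t, p)           # deflate X
        yk = yk - q * t                    # deflate y
        W.append(w); P.append(p); Q.append(q)
    W = np.array(W).T; P = np.array(P).T; q = np.array(Q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(3)
X = rng.standard_normal((40, 5))           # stand-in for NIR spectra
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(40)
b = pls1_fit(X, y, n_lv=5)
```

    With as many latent variables as predictors, PLS1 reproduces the ordinary least-squares fit; the value of fewer latent variables shows up on collinear, high-dimensional spectra.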

  19. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanic compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. 
    We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters against literature field cases. The quality of the parameter estimates is then analyzed as a function of the number, type, and location of data.
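    A first-order Sobol index is the fraction Var(E[y|x_i])/Var(y) of output variance attributable to input x_i alone; for a toy surrogate it can be estimated by the pick-freeze Monte Carlo trick, sketched below with an invented linear model (the paper computes these indices from PCE coefficients instead):

```python
import numpy as np

# Toy surrogate y = 2*x1 + 0.5*x2 with independent standard-normal inputs.
# Analytically: Var(y) = 4.25, S1 = 4/4.25, S2 = 0.25/4.25.
rng = np.random.default_rng(0)
N = 200_000

def model(x1, x2):
    return 2.0 * x1 + 0.5 * x2

a = rng.standard_normal((2, N))    # sample A
b = rng.standard_normal((2, N))    # independent sample B
ya = model(a[0], a[1])
var_y = ya.var()

def sobol_first(i):
    """Pick-freeze estimate of the first-order Sobol index of input i:
    keep input i from sample A, redraw the others from sample B."""
    mixed = [a[j] if j == i else b[j] for j in range(2)]
    yb = model(mixed[0], mixed[1])
    return float(np.cov(ya, yb)[0, 1] / var_y)

s1, s2 = sobol_first(0), sobol_first(1)
```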

  20. Overview of Characterization Techniques for High Speed Crystal Growth

    NASA Technical Reports Server (NTRS)

    Ravi, K. V.

    1984-01-01

    Features of characterization requirements for crystals, devices and completed products are discussed. Key parameters of interest in semiconductor processing are presented. Characterization as it applies to process control, diagnostics and research needs is discussed with appropriate examples.

  1. Optical components damage parameters database system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong

    2012-10-01

    Optical components are key to the development of large-scale laser devices: their damage (load) capacity is directly related to the device's output-capacity indicators, and that capacity depends on many factors. By digitizing the various factors affecting component load capacity into a damage-parameter database, the system provides a scientific, data-supported basis for assessing the load capacity of optical components. Using business-process and model-driven approaches, a component damage-parameter information model and database system were established. Application results show that the system meets the business-process and data-management requirements of optical component damage testing; its parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical component damage testing.

  2. Crystal growth of device quality GaAs in space

    NASA Technical Reports Server (NTRS)

    Gatos, H. C.; Lagowski, J.

    1979-01-01

    The optimization of space processing of GaAs is described. The detailed compositional, structural, and electronic characterization of GaAs on a macro- and microscale and the relationships between growth parameters and the properties of GaAs are among the factors discussed. The key parameters limiting device performance are assessed.

  3. A theoretical and experimental study on the pulsed laser dressing of bronze-bonded diamond grinding wheels

    NASA Astrophysics Data System (ADS)

    Deng, H.; Chen, G. Y.; Zhou, C.; Zhou, X. C.; He, J.; Zhang, Y.

    2014-09-01

    A series of theoretical analyses and experimental investigations were performed to examine a pulsed fiber-laser tangential profiling and radial sharpening technique for bronze-bonded diamond grinding wheels. The mechanisms for the pulsed laser tangential profiling and radial sharpening of grinding wheels were theoretically analyzed, and the four key processing parameters that determine the quality, accuracy, and efficiency of pulsed laser dressing, namely, the laser power density, laser spot overlap ratio, laser scanning track line overlap ratio, and number of laser scanning cycles, were proposed. Further, by utilizing cylindrical bronze wheels (without diamond grains) and bronze-bonded diamond grinding wheels as the experimental subjects, the effects of these four processing parameters on the removal efficiency and the surface smoothness of the bond material after pulsed laser ablation, as well as the effects on the contour accuracy of the grinding wheels, the protrusion height of the diamond grains, the sharpness of the grain cutting edges, and the graphitization degree of the diamond grains after pulsed laser dressing, were explored. The optimal values of the four key processing parameters were identified.

  4. Multiobjective Sensitivity Analysis Of Sediment And Nitrogen Processes With A Watershed Model

    EPA Science Inventory

    This paper presents a computational analysis for evaluating critical non-point-source sediment and nutrient (specifically nitrogen) processes and management actions at the watershed scale. In the analysis, model parameters that bear key uncertainties were presumed to reflect the ...

  5. The Research and Implementation of Vehicle Bluetooth Hands-free Devices Key Parameters Downloading Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-bo; Wang, Zhi-xue; Li, Jian-xin; Ma, Jian-hui; Li, Yang; Li, Yan-qiang

    To facilitate the realization of Bluetooth functions and allow information to be tracked effectively during production, vehicle Bluetooth hands-free devices need key parameters such as the Bluetooth address, CVC license, and base-plate number downloaded to them. The aim is therefore a simple and effective method to download these parameters to each vehicle Bluetooth hands-free device and to control and record the use of the parameters. In this paper, a Bluetooth Serial Peripheral Interface programmer device is used to switch the parallel port to SPI. The first step in downloading parameters is to simulate SPI with the parallel port, operating the port in accordance with SPI timing to perform the SPI function. The next step is to implement the SPI data transmit/receive functions according to the selected programming parameters. With this new method, parameter downloading is fast and accurate and fully meets vehicle Bluetooth hands-free device production requirements; it has played a large role on the production line.
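    Bit-banged SPI of the kind described (driving SPI timing from a parallel port) can be sketched as follows; the port object here is a fake that records what a mode-0 device would latch, since the real hardware interface is not given in the record:

```python
# Bit-banged SPI write, sketched against a hypothetical parallel-port-like
# object (real code would toggle actual parallel-port data/strobe pins).
class FakePort:
    """Stand-in for parallel-port I/O: records the bits a mode-0 SPI
    device would latch on each rising clock edge."""
    def __init__(self):
        self.mosi = 0
        self.sampled = []

    def set_mosi(self, bit):
        self.mosi = bit

    def pulse_clock(self):
        # SPI mode 0: the device samples MOSI on the rising edge.
        self.sampled.append(self.mosi)

def spi_write_byte(port, byte):
    """Shift one byte out MSB-first: set MOSI, then pulse the clock,
    observing SPI mode-0 ordering."""
    for i in range(7, -1, -1):
        port.set_mosi((byte >> i) & 1)
        port.pulse_clock()
```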

  6. Image processing methods in two and three dimensions used to animate remotely sensed data. [cloud cover

    NASA Technical Reports Server (NTRS)

    Hussey, K. J.; Hall, J. R.; Mortensen, R. A.

    1986-01-01

    Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.
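    The key-frame interpolation step can be sketched as linear interpolation of each projection parameter between two key frames (parameter names are illustrative):

```python
def interpolate_frames(key_a, key_b, n_between):
    """Linearly interpolate parameter sets for the animation frames between
    two key frames; each key frame is a dict of projection parameters."""
    frames = []
    for step in range(1, n_between + 1):
        t = step / (n_between + 1)   # fractional position between key frames
        frames.append({k: key_a[k] + t * (key_b[k] - key_a[k]) for k in key_a})
    return frames
```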

  7. Optimization of process parameters for RF sputter deposition of tin-nitride thin-films

    NASA Astrophysics Data System (ADS)

    Jangid, Teena; Rao, G. Mohan

    2018-05-01

    Radio-frequency magnetron sputtering was employed to deposit tin-nitride thin films on Si and glass substrates at different process parameters. The influence of varying parameters such as substrate temperature, target-substrate distance, and RF power is studied in detail. X-ray diffraction is used as the key technique for analyzing changes in the stoichiometric and structural properties of the deposited films. Depending on the combination of deposition parameters, crystalline as well as amorphous films were obtained. Pure tin-nitride thin films were deposited at 15 W RF power and 600 °C substrate temperature with the target-substrate distance fixed at 10 cm. The bandgap of 1.6 eV calculated for the film deposited at the optimum process conditions matches well with reported values.

  8. Polyoxylglycerides and glycerides: effects of manufacturing parameters on API stability, excipient functionality and processing.

    PubMed

    Jannin, Vincent; Rodier, Jean-David; Musakhanian, Jasmine

    2014-05-15

    Lipid-based formulations are a viable option to address modern drug delivery challenges such as increasing the oral bioavailability of poorly water-soluble active pharmaceutical ingredients (APIs), or sustaining the drug release of molecules intended for chronic diseases. Esters of fatty acids and glycerol (glycerides) and polyethylene-glycols (polyoxylglycerides) are two main classes of lipid-based excipients used by oral, dermal, rectal, vaginal or parenteral routes. These lipid-based materials are more and more commonly used in pharmaceutical drug products but there is still a lack of understanding of how the manufacturing processes, processing aids, or additives can impact the chemical stability of APIs within the drug product. In that regard, this review summarizes the key parameters to look at when formulating with lipid-based excipients in order to anticipate a possible impact on drug stability or variation of excipient functionality. The introduction presents the chemistry of natural lipids, fatty acids and their properties in relation to the extraction and refinement processes. Then, the key parameters during the manufacturing process influencing the quality of lipid-based excipients are provided. Finally, their critical characteristics are discussed in relation with their intended functionality and ability to interact with APIs and others excipients within the formulation. Copyright © 2014. Published by Elsevier B.V.

  9. Application of Quality by Design to the characterization of the cell culture process of an Fc-Fusion protein.

    PubMed

    Rouiller, Yolande; Solacroup, Thomas; Deparis, Véronique; Barbafieri, Marco; Gleixner, Ralf; Broly, Hervé; Eon-Duval, Alex

    2012-06-01

    The production bioreactor step of an Fc-Fusion protein manufacturing cell culture process was characterized following Quality by Design principles. Using scientific knowledge derived from the literature and process knowledge gathered during development studies and manufacturing to support clinical trials, potential critical and key process parameters with a possible impact on product quality and process performance, respectively, were determined during a risk assessment exercise. The identified process parameters were evaluated using a design of experiments approach. The regression models generated from the data made it possible to characterize the impact of the identified process parameters on quality attributes. The main parameters having an impact on product titer were pH and dissolved oxygen, while those having the highest impact on process- and product-related impurities and variants were pH and culture duration. The models derived from the characterization studies were used to define the cell culture process design space. The design space limits were set in such a way as to ensure that the drug substance material would consistently have the desired quality. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized, and the vapour may become trapped within the melt pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during welding, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system that takes into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization, treating a multiple-input (i.e., key control characteristics) and multiple-output (i.e., key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space), based on the developed surrogate model, that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements, such as maximizing the dimple height while minimizing the dimple lower surface area.
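
    A process capability space of this kind can be sketched as a Monte Carlo fallout estimate: given a surrogate model mapping process parameters to a key performance indicator, sample the parameters with their stochastic variation and count specification violations. The surrogate coefficients, parameter distributions, and specification limits below are hypothetical illustrations, not values from the paper.

```python
import random

def surrogate_dimple_height(power_w, speed_mm_s):
    """Hypothetical surrogate: dimple height (mm) as a linear response surface."""
    return 0.05 + 0.004 * power_w - 0.002 * speed_mm_s

def fallout_rate(lo=0.19, hi=0.23, n=100_000, seed=1):
    """Estimated fraction of parts whose dimple height falls outside [lo, hi] mm
    when the process parameters vary stochastically around their nominals."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(n):
        power = rng.gauss(60.0, 3.0)   # laser power, W (assumed nominal and spread)
        speed = rng.gauss(40.0, 2.0)   # scan speed, mm/s (assumed)
        h = surrogate_dimple_height(power, speed)
        if not (lo <= h <= hi):
            bad += 1
    return bad / n
```

    Tightening or relaxing the specification limits directly trades off the estimated fallout rate, which is the quantity the capability space makes explicit.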

  11. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once released, a DNAPL serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long term. In industrialized countries the number of such contaminated sites is so high that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities, and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution, and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model set-ups in which we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions show strong shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.
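
    The effect of parameter dependencies on output uncertainty can be illustrated with a minimal sketch: sampling two log-parameters with and without a (negative) correlation changes the spread of a derived log mass flux. The lognormal model, the variances, and the correlation value are illustrative assumptions, not the paper's actual framework; the negative correlation stands in for the physical coupling in which high DNAPL saturation locally reduces water flow.

```python
import random, math

def flux_spread(rho, n=200_000, seed=2):
    """Standard deviation of a log mass flux when log-velocity and
    log-saturation are sampled with correlation rho (hypothetical model)."""
    rng = random.Random(seed)
    total, total2 = 0.0, 0.0
    for _ in range(n):
        z1 = rng.gauss(0, 1)                              # log-velocity factor
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)  # log-saturation
        log_flux = 0.5 * z1 + 0.5 * z2                    # flux ~ v * s (in logs)
        total += log_flux
        total2 += log_flux * log_flux
    mean = total / n
    return math.sqrt(total2 / n - mean * mean)
```

    With independent inputs the spread is sqrt(0.5); with rho = -0.6 it drops to sqrt(0.2), mirroring the paper's observation that neglecting interactions overestimates output uncertainty.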

  12. Method for extracting relevant electrical parameters from graphene field-effect transistors using a physical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boscá, A., E-mail: alberto.bosca@upm.es; Dpto. de Ingeniería Electrónica, E.T.S.I. de Telecomunicación, Universidad Politécnica de Madrid, Madrid 28040; Pedrós, J.

    2015-01-28

    Due to its intrinsic high mobility, graphene has proved to be a suitable material for high-speed electronics, where the graphene field-effect transistor (GFET) has shown excellent properties. In this work, we present a method for extracting relevant electrical parameters from GFET devices using a simple electrical characterization and a model fitting. With experimental data from the device output characteristics, the method allows parameters such as the mobility, the contact resistance, and the fixed charge to be calculated. Differentiated electron and hole mobilities and a direct connection with intrinsic material properties are some of the key aspects of this method. Moreover, the method output values can be correlated with several issues during key fabrication steps, such as the graphene growth and transfer, the lithographic steps, or the metallization processes, providing a flexible tool for quality control in GFET fabrication, as well as valuable feedback for improving the material-growth process.
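
    A sketch of this kind of model-based extraction, assuming a commonly used simplified GFET resistance model (a contact resistance in series with a channel whose carrier density combines a residual density with the gate-induced one) and a brute-force least-squares fit. The model form, constants, and parameter grids are illustrative assumptions, not the cited work's actual model or values.

```python
import math

def gfet_resistance(vg, rc, mu, v_dirac=0.0, n0=5e11,
                    cox=1.15e-8, q=1.602e-19, geom=2.0):
    """Total GFET resistance (ohm): contacts plus channel.
    n0: residual carrier density (cm^-2), cox: gate capacitance (F/cm^2),
    geom: channel length/width ratio. All defaults are illustrative."""
    n_gate = cox * abs(vg - v_dirac) / q          # gate-induced density, cm^-2
    n_tot = math.sqrt(n0 ** 2 + n_gate ** 2)      # combine with residual density
    return rc + geom / (q * mu * n_tot)

def fit_rc_mu(vgs, rs):
    """Brute-force least-squares fit of contact resistance (ohm) and
    mobility (cm^2/Vs) to a measured transfer curve."""
    best = None
    for rc in range(0, 2001, 25):
        for mu in range(500, 5001, 25):
            err = sum((gfet_resistance(v, rc, mu) - r) ** 2
                      for v, r in zip(vgs, rs))
            if best is None or err < best[0]:
                best = (err, rc, mu)
    return best[1], best[2]
```

    On synthetic noiseless data the fit recovers the generating parameters exactly; on real curves a continuous optimizer and separate electron/hole branches would replace the coarse grid.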

  13. Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.

    PubMed

    Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing

    2009-06-01

    Methods of image encryption based on the fractional Fourier transform have an inherent flaw in security. We show that these schemes have the deficiency that, for one group of encryption keys, many other groups of keys can decrypt the encrypted image correctly, for several reasons. In some schemes, many factors produce the deficiencies, such as the encryption scheme based on the multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)]. A modified method is proposed to avoid all of these deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. (c) 2009 Optical Society of America.

  14. Methods of mesophyll conductance estimation: its impact on key biochemical parameters and photosynthetic limitations in phosphorus-stressed soybean across CO2

    USDA-ARS?s Scientific Manuscript database

    Photosynthetic potential in C3 plants is largely limited by CO2 diffusion through stomata (Ls) and mesophyll (Lm) and photo-biochemical (Lb) processes. Accurate estimation of mesophyll conductance (gm) using gas exchange (GE) and chlorophyll fluorescence (CF) parameters of the photosynthetic proces...

  15. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2, and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  16. Mining manufacturing data for discovery of high productivity process characteristics.

    PubMed

    Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou

    2010-06-01

    Modern manufacturing facilities for bioproducts are highly automated, with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource for comprehending the complex characteristics of bioprocesses and enhancing production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run constitutes over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum-margin-based support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and the final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large-scale bioprocesses. 2010 Elsevier B.V. All rights reserved.
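
    The cited study ranks parameters with a kernel-based support vector regression model; as a much simpler illustration of the relevance-ranking idea, the sketch below scores each process parameter by the absolute Pearson correlation of its values with the run outcome. The parameter names and numbers are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_parameters(runs, outcome):
    """Rank process parameters by |correlation| with the run outcome.
    runs: {param_name: [value per run]}, outcome: [value per run]."""
    scores = {p: abs(pearson(vals, outcome)) for p, vals in runs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

    A correlation screen of this kind only captures linear, univariate associations; the kernel regression in the paper is what handles nonlinear and temporal structure.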

  17. Numerical Simulation and Optimization of Directional Solidification Process of Single Crystal Superalloy Casting

    PubMed Central

    Zhang, Hang; Xu, Qingyan; Liu, Baicheng

    2014-01-01

    The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as mushy zone width and temperature difference at the cast-mold interface, were selected as the input variables. The input variables were processed with the multivariable fuzzy rules to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rules were built based on structural features of the casting, such as the relationship between section area and the delay time of the temperature response to changes in v, as well as the professional experience of the operator. Then, the fuzzy control model coupled with the CA-FD method could be used to optimize v in real time during the manufacturing process. The optimized process was proven to be more flexible and adaptive for a steady, stray-grain-free DS process. PMID:28788535
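
    A minimal sketch of such a fuzzy rule, assuming triangular membership functions over a hypothetical mushy-zone width and a Sugeno-style weighted-average output for the withdrawal-rate adjustment; the actual multivariable rules in the paper are more elaborate and all breakpoints below are invented.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def withdrawal_rate_adjust(mushy_width_mm):
    """Sugeno-style fuzzy rules (illustrative): a wide mushy zone calls for
    slowing the withdrawal rate, a narrow one allows speeding it up."""
    wide = tri(mushy_width_mm, 8, 15, 22)
    narrow = tri(mushy_width_mm, 0, 5, 10)
    rules = [
        (wide, -1.0),     # wide mushy zone: decrease v
        (narrow, +0.5),   # narrow mushy zone: increase v slightly
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

    Between the two membership peaks the output blends smoothly, which is what makes the controller adaptive rather than a hard threshold.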

  18. Laser Trimming of CuAlMo Thin-Film Resistors: Effect of Laser Processing Parameters

    NASA Astrophysics Data System (ADS)

    Birkett, Martin; Penlington, Roger

    2012-08-01

    This paper reports the effect of varying laser trimming process parameters on the electrical performance of a novel CuAlMo thin-film resistor material. The films were prepared on Al2O3 substrates by direct-current (DC) magnetron sputtering before being laser trimmed to the target resistance value. The effects of varying the key laser parameters of power, Q-rate, and bite size on resistor stability and tolerance accuracy were systematically investigated. By reducing laser power and bite size and balancing these against the Q-rate setting, significant improvements in resistor stability were obtained, and resistor tolerance accuracies of better than ±0.5% were achieved.

  19. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in the clinical diagnosis of various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements under the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, the sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
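
    Wiener estimation of this kind can be sketched in its least-squares (normal-equations) form: learn a linear map from color measurements to a tissue parameter on training phantoms, then apply it to new measurements. The training colors and parameter values below are synthetic, and the single-parameter linear model is a simplification of the paper's sequential weighted scheme.

```python
def mat_solve(a, b):
    """Solve A x = b by Gauss-Jordan elimination (small systems only)."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))  # partial pivot
        m[i], m[p] = m[p], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

def wiener_matrix(params, colors):
    """Least-squares (Wiener-type) estimator mapping a color measurement to a
    tissue parameter: w = argmin ||C w - p||^2 via the normal equations."""
    k = len(colors[0])
    ctc = [[sum(c[i] * c[j] for c in colors) for j in range(k)] for i in range(k)]
    ctp = [sum(c[i] * p for c, p in zip(colors, params)) for i in range(k)]
    return mat_solve(ctc, ctp)

def estimate(w, color):
    """Apply the learned estimator to a new color measurement."""
    return sum(wi * ci for wi, ci in zip(w, color))
```

    The sequential weighted variant in the paper additionally reweights the color channels per parameter before solving, so each parameter leans on the channels most informative for it.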

  20. Laboratory Studies on Surface Sampling of Bacillus anthracis Contamination: Summary, Gaps, and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca

    2011-11-28

    This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed, and recommendations are given for future studies.
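
    One of the key parameters mentioned, the number of samples needed for a clearance decision at a given confidence, can be sketched under a simple independent-sampling assumption: if a fraction p of sampled locations would test positive when the site is contaminated, the smallest n with P(at least one positive) >= C follows from (1 - p)^n <= 1 - C. This is an illustrative textbook calculation, not the report's full statistical treatment (which must also account for recovery efficiency and false negatives).

```python
import math

def samples_for_clearance(p_hot, confidence):
    """Smallest n such that at least one positive is detected with the given
    confidence, when a fraction p_hot of sampled units is contaminated
    (independent samples, perfect recovery assumed)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_hot))
```

    The steep growth of n as p_hot shrinks is exactly why sampling-plan design is a risk-management bottleneck: detecting 1% contamination at 95% confidence takes an order of magnitude more samples than detecting 10%.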

  2. Process development for robust removal of aggregates using cation exchange chromatography in monoclonal antibody purification with implementation of quality by design.

    PubMed

    Xu, Zhihao; Li, Jason; Zhou, Joe X

    2012-01-01

    Aggregate removal is one of the most important aspects in monoclonal antibody (mAb) purification. Cation-exchange chromatography (CEX), a widely used polishing step in mAb purification, is able to clear both process-related impurities and product-related impurities. In this study, with the implementation of quality by design (QbD), a process development approach for robust removal of aggregates using CEX is described. First, resin screening studies were performed and a suitable CEX resin was chosen because of its relatively better selectivity and higher dynamic binding capacity. Second, a pH-conductivity hybrid gradient elution method for the CEX was established, and the risk assessment for the process was carried out. Third, a process characterization study was used to evaluate the impact of the potentially important process parameters on the process performance with respect to aggregate removal. Accordingly, a process design space was established. Aggregate level in load is the critical parameter. Its operating range is set at 0-3% and the acceptable range is set at 0-5%. Equilibration buffer is the key parameter. Its operating range is set at 40 ± 5 mM acetate, pH 5.0 ± 0.1, and acceptable range is set at 40 ± 10 mM acetate, pH 5.0 ± 0.2. Elution buffer, load mass, and gradient elution volume are non-key parameters; their operating ranges and acceptable ranges are equally set at 250 ± 10 mM acetate, pH 6.0 ± 0.2, 45 ± 10 g/L resin, and 10 ± 20% CV respectively. Finally, the process was scaled up 80 times and the impurities removal profiles were revealed. Three scaled-up runs showed that the size-exclusion chromatography (SEC) purity of the CEX pool was 99.8% or above and the step yield was above 92%, thereby proving that the process is both consistent and robust.

  3. Effects of Processing Parameters on the Forming Quality of C-Shaped Thermosetting Composite Laminates in Hot Diaphragm Forming Process

    NASA Astrophysics Data System (ADS)

    Bian, X. X.; Gu, Y. Z.; Sun, J.; Li, M.; Liu, W. P.; Zhang, Z. G.

    2013-10-01

    In this study, the effects of processing temperature and vacuum application rate on the forming quality of C-shaped carbon fiber reinforced epoxy resin matrix composite laminates during the hot diaphragm forming process were investigated. C-shaped prepreg preforms were produced using home-made hot diaphragm forming equipment. The thickness variations of the preforms and the manufacturing defects after the diaphragm forming process, including fiber wrinkling and voids, were evaluated to understand the forming mechanism. Furthermore, both the interlaminar slipping friction and the compaction behavior of the prepreg stacks were experimentally analyzed to show the importance of the processing parameters. In addition, autoclave processing was used to cure the C-shaped preforms to investigate the changes in the defects before and after the cure process. The results show that C-shaped prepreg preforms with good forming quality can be achieved by increasing the processing temperature and reducing the vacuum application rate, which obviously promote the prepreg interlaminar slipping process. The processing temperature and forming rate in the hot diaphragm forming process strongly influence the prepreg interply frictional force, and the maximum interlaminar frictional force can be taken as a key parameter for processing parameter optimization. The autoclave process is effective in eliminating voids in the preforms and can alleviate fiber wrinkles to a certain extent.

  4. On the synergistic use of microwave and infrared satellite observations to monitor soil moisture and flooding

    USDA-ARS?s Scientific Manuscript database

    Extreme hydrological processes are often very dynamic and destructive.A better understanding of these processes requires an accurate mapping of key variables that control them. In this regard, soil moisture is perhaps the most important parameter that impacts the magnitude of flooding events as it c...

  5. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Treesearch

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  6. Titanium nitride plasma-chemical synthesis with titanium tetrachloride raw material in the DC plasma-arc reactor

    NASA Astrophysics Data System (ADS)

    Kirpichev, D. E.; Sinaiskiy, M. A.; Samokhin, A. V.; Alexeev, N. V.

    2017-04-01

    The possibility of plasma-chemical synthesis of titanium nitride is demonstrated in this paper. Results of the thermodynamic analysis of the TiCl4 - H2 - N2 system are presented, and key parameters of the TiN synthesis process are calculated. The influence of the parameters of the plasma-chemical titanium nitride synthesis process in a reactor with an arc plasmatron on the characteristics of the produced powders is experimentally investigated. The dependence of the structure, chemical composition, and morphology on the plasma jet enthalpy and on the stoichiometric excess of hydrogen and nitrogen in the plasma jet is determined.

  7. Setting priorities in health care organizations: criteria, processes, and parameters of success.

    PubMed

    Gibson, Jennifer L; Martin, Douglas K; Singer, Peter A

    2004-09-08

    Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly.

  8. Effects of process variables on the properties of YBa2Cu3O(7-x) ceramics formed by investment casting

    NASA Technical Reports Server (NTRS)

    Hooker, M. W.; Taylor, T. D.; Leigh, H. D.; Wise, S. A.; Buckley, J. D.; Vasquez, P.; Buck, G. M.; Hicks, L. P.

    1993-01-01

    An investment casting process has been developed to produce net-shape, superconducting ceramics. In this work, a factorial experiment was performed to determine the critical process parameters for producing cast YBa2Cu3O7 ceramics with optimum properties. An analysis of variance procedure indicated that the key variables in casting superconductive ceramics are the particle size distribution and sintering temperature. Additionally, the interactions between the sintering temperature and the other process parameters (e.g., particle size distribution and the use of silver dopants) were also found to influence the density, porosity, and critical current density of the fired ceramics.

  9. Synchrotron-Based X-ray Microtomography Characterization of the Effect of Processing Variables on Porosity Formation in Laser Power-Bed Additive Manufacturing of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Cunningham, Ross; Narra, Sneha P.; Montgomery, Colt; Beuth, Jack; Rollett, A. D.

    2017-03-01

    The porosity observed in additively manufactured (AM) parts is a potential concern for components intended to undergo high-cycle fatigue without post-processing to remove such defects. The morphology of pores can help identify their cause: irregularly shaped lack of fusion or key-holing pores can usually be linked to incorrect processing parameters, while spherical pores suggest trapped gas. Synchrotron-based x-ray microtomography was performed on laser powder-bed AM Ti-6Al-4V samples over a range of processing conditions to investigate the effects of processing parameters on porosity. The process mapping technique was used to control melt pool size. Tomography was also performed on the powder to measure porosity within the powder that may transfer to the parts. As observed previously in experiments with electron beam powder-bed fabrication, significant variations in porosity were found as a function of the processing parameters. A clear connection between processing parameters and resulting porosity formation mechanism was observed in that inadequate melt pool overlap resulted in lack-of-fusion pores whereas excess power density produced keyhole pores.

  10. The Impact of Dielectric Material and Temperature on Dielectric Charging in RF MEMS Capacitive Switches

    NASA Astrophysics Data System (ADS)

    Papaioannou, George

    The present work attempts to provide better insight into dielectric charging in RF-MEMS capacitive switches, a key parameter limiting their commercialization. The dependence of the charging process on the nature of the dielectric materials widely used in these devices, such as SiO2, Si3N4, AlN, Al2O3, Ta2O5, and HfO2, which consist of covalent or ionic bonds and may exhibit piezoelectric properties, is discussed taking into account the effect of deposition conditions and the resulting material stoichiometry. Another key parameter is temperature, which accelerates the charging and discharging processes by providing enough energy for trapped charges to be released and for dipoles to overcome potential barriers and randomize their orientation; its effect is investigated as well. Finally, the effect of device structure is also taken into account.

  11. Defining process design space for monoclonal antibody cell culture.

    PubMed

    Abu-Absi, Susan Fugett; Yang, LiYing; Thompson, Patrick; Jiang, Canping; Kandula, Sunitha; Schilling, Bernhard; Shukla, Abhinav A

    2010-08-15

    The concept of design space has been taking root as a foundation of in-process control strategies for biopharmaceutical manufacturing processes. During mapping of the process design space, the multidimensional combination of operational variables is studied to quantify the impact on process performance in terms of productivity and product quality. An efficient methodology to map the design space for a monoclonal antibody cell culture process is described. A failure modes and effects analysis (FMEA) was used as the basis for the process characterization exercise. This was followed by an integrated study of the inoculum stage of the process, which includes progressive shake flask and seed bioreactor steps. The operating conditions for the seed bioreactor were studied in an integrated fashion with the production bioreactor using a two-stage design of experiments (DOE) methodology to enable optimization of operating conditions. A two-level Resolution IV design was followed by a central composite design (CCD). These experiments enabled identification of the edge of failure and classification of the operational parameters as non-key, key, or critical. In addition, the models generated from the data provide further insight into balancing productivity of the cell culture process with product quality considerations. Finally, process- and product-related impurity clearance was evaluated by studies linking the upstream process with downstream purification. Production bioreactor parameters that directly influence antibody charge variants and glycosylation in CHO systems were identified.
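
    The two-stage DOE described above can be sketched by generating the corresponding design matrices in coded units. The snippet below builds a two-level full factorial with center points and the axial points that upgrade it to a central composite design; the study's first stage was a Resolution IV fraction, which this simplified sketch replaces with the full factorial.

```python
from itertools import product

def factorial_design(k, center_points=3):
    """Two-level full factorial in coded units (-1/+1) plus center points."""
    runs = [list(p) for p in product((-1, 1), repeat=k)]
    runs += [[0] * k for _ in range(center_points)]
    return runs

def ccd_axial(k, alpha=None):
    """Axial (star) points of a central composite design; the default alpha
    makes the design rotatable."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    pts = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            pts.append(pt)
    return pts
```

    Each run list maps back to real operating conditions by scaling the coded levels to each parameter's chosen low/high range, which is how the edge of failure is probed systematically.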

  12. Process Integration and Optimization of ICME Carbon Fiber Composites for Vehicle Lightweighting: A Preliminary Development

    DOE PAGES

    Xu, Hongyi; Li, Yang; Zeng, Danielle

    2017-01-02

    Process integration and optimization is the key enabler of the Integrated Computational Materials Engineering (ICME) of carbon fiber composites. In this paper, automated workflows are developed for two types of composites: Sheet Molding Compound (SMC) short fiber composites, and multi-layer unidirectional (UD) composites. For SMC, the proposed workflow integrates material processing simulation, microstructure representative volume element (RVE) models, material property prediction and structural performance simulation to enable multiscale, multidisciplinary analysis and design. Processing parameters, microstructure parameters and vehicle subframe geometry parameters are defined as the design variables; the stiffness and weight of the structure are defined as the responses. For the multi-layer UD structure, this work focuses on the discussion of different design representation methods and their impacts on the optimization performance. Challenges in ICME process integration and optimization are also summarized and highlighted. Two case studies are conducted to demonstrate the integrated process and its application in optimization.

  13. Fault detection in heavy duty wheels by advanced vibration processing techniques and lumped parameter modeling

    NASA Astrophysics Data System (ADS)

    Malago`, M.; Mucchi, E.; Dalpiaz, G.

    2016-03-01

    Heavy duty wheels are used in applications such as automatic vehicles and are mainly composed of a polyurethane tread glued to a cast iron hub. In the manufacturing process, the adhesive application between tread and hub is a critical assembly phase, since it is performed entirely by an operator and contamination of the bond area may occur. Furthermore, the presence of rust on the hub surface can worsen the adhesion at the interface, reducing the operating life. In this scenario, a quality control procedure for fault detection to be used at the end of the manufacturing process has been developed. This procedure is based on vibration processing techniques and takes advantage of the results of a lumped parameter model. Indicators based on cyclostationarity can be considered key parameters to be adopted in a monitoring test station at the end of the production line, owing to their non-deterministic characteristics.

  14. Optimization of hybrid laser arc welding of 42CrMo steel to suppress pore formation

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Chen, Genyu; Mao, Shuai; Zhou, Cong; Chen, Fei

    2017-06-01

    The hybrid laser arc welding (HLAW) of 42CrMo quenched and tempered steel was conducted. The effect of the processing parameters, such as the relative positions of the laser and the arc, the shielding gas flow rate, the defocusing distance, the laser power, the wire feed rate and the welding speed, on pore formation was analyzed, and the morphological characteristics of the pores were examined using scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS). The results showed that the majority of the pores were invasive. Fewer pores formed in the laser-leading (LA) welding process than in the arc-leading (AL) welding process. Increasing the shielding gas flow rate could also facilitate the reduction of pores. The laser power and the welding speed were the two key process parameters for reducing pores. The flow of the molten pool, the weld cooling rate and the pore escape rate resulting from different parameters could all affect pore formation. An ideal pore-free weld was obtained with the optimal welding process parameters.

  15. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling for product quality prediction of mineral processing which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role and the establishment of its predictive model is a key issue for the plantwide optimization. For this purpose, a hybrid modeling approach of the mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both the PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using the real plant data and the comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
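    The idea of tuning model parameters by shaping the modeling-error distribution can be illustrated with a toy sketch: a histogram-based estimate of the error entropy is minimized over a one-parameter linear model. The data, model form and grid search below are hypothetical stand-ins for the paper's LSSVM-based hybrid model.

```python
import math
import random

def error_entropy(errors, n_bins=20):
    """Differential-entropy estimate of the modeling error from a histogram."""
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for e in errors:
        counts[min(int((e - lo) / width), n_bins - 1)] += 1
    n = len(errors)
    h = -sum((c / n) * math.log(c / n) for c in counts if c)
    return h + math.log(width)  # shift from bin counts to density scale

# Hypothetical unit-process data: output = 2.5 * input + noise
random.seed(0)
xs = [random.uniform(0.0, 10.0) for _ in range(500)]
ys = [2.5 * x + random.gauss(0.0, 0.3) for x in xs]

# Tune the model slope so the modeling-error entropy is minimized,
# i.e. the error distribution becomes as sharp as possible.
best_h, best_a = min(
    (error_entropy([y - a * x for x, y in zip(xs, ys)]), a)
    for a in (1.0 + 0.05 * i for i in range(61)))
print(round(best_a, 2))  # slope near the true value 2.5
```

    Minimizing the error entropy drives the error distribution toward a narrow peak, which is the intuition behind the paper's minimum-entropy parameter selection.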

  16. Exploring microphysical, radiative, dynamic and thermodynamic processes driving fog and low stratus clouds using ground-based Lidar and Radar measurements

    NASA Astrophysics Data System (ADS)

    Haeffelin, Martial

    2016-04-01

    Radiation fog formation is largely influenced by the chemical composition, size and number concentration of cloud condensation nuclei and by heating/cooling and drying/moistening processes in a shallow mixing layer near the surface. Once a fog water layer is formed, its development and dissipation become predominantly controlled by radiative cooling/heating, turbulent mixing, sedimentation and deposition. Key processes occur in the atmospheric surface layer, directly in contact with the soil and vegetation, and throughout the atmospheric column. Recent publications provide detailed descriptions of these processes for idealized cases using very high-resolution models and proper representation of microphysical processes. Studying these processes in real fog situations requires atmospheric profiling capabilities to monitor the temporal evolution of key parameters at several heights (surface, inside the fog, fog top, free troposphere). This could be done with in-situ sensors flown on tethered balloons or drones during dedicated intensive field campaigns. In addition, Backscatter Lidars, Doppler Lidars, Microwave Radiometers and Cloud Doppler Radars can provide more continuous, yet precise monitoring of key parameters throughout the fog life cycle. The presentation will describe how Backscatter Lidars can be used to study the height and kinetics of aerosol activation into fog droplets. Next we will show the potential of Cloud Doppler Radar measurements to characterize the temporal evolution of droplet size, liquid water content, sedimentation and deposition. Contributions from Doppler Lidars and Microwave Radiometers will be discussed. This presentation will conclude on the potential to use Lidar and Radar remote sensing measurements to support operational fog nowcasting.

  17. Reconstruction method for data protection in telemedicine systems

    NASA Astrophysics Data System (ADS)

    Buldakova, T. I.; Suyatinov, S. I.

    2015-03-01

    This report proposes an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver. Since biosignals are unique to each person, appropriate processing yields the information needed to create cryptographic keys. The processing is based on reconstruction of a mathematical model that generates time series diagnostically equivalent to the original biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained during reconstruction can be used not only for diagnostics but also to protect transmitted data in telemedicine systems.
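    A minimal sketch of the idea, assuming the reconstructed model is summarized by a few numeric coefficients: both ends quantize the coefficients and hash them, so small reconstruction differences still yield identical keys. The parameter values and the quantization precision are illustrative, not from the report.

```python
import hashlib
import hmac

def key_from_model(params, precision=3):
    """Derive a 256-bit symmetric key by hashing quantized model parameters.

    Quantization makes the sender's and receiver's independently
    reconstructed parameters agree despite small numerical differences.
    """
    canonical = ",".join(f"{p:.{precision}f}" for p in params)
    return hashlib.sha256(canonical.encode()).digest()

# Hypothetical reconstructed model coefficients on the two sides; the
# receiver's copy differs only below the quantization threshold.
sensor_params = [0.8121, -1.3402, 2.0014]
receiver_params = [0.81207, -1.34016, 2.00142]

k_tx = key_from_model(sensor_params)
k_rx = key_from_model(receiver_params)
assert hmac.compare_digest(k_tx, k_rx)  # both sides hold the same key
```

    A production scheme would need an error-tolerant key agreement (e.g. fuzzy extractors) rather than plain quantization, but the sketch conveys the reconstruction-to-key pipeline.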

  18. Prediction of Tensile Strength of Friction Stir Weld Joints with Adaptive Neuro-Fuzzy Inference System (ANFIS) and Neural Network

    NASA Technical Reports Server (NTRS)

    Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.

    2015-01-01

    Friction stir welding (FSW) is a solid-state joining process where joint properties depend on the welding process parameters. In the current study three critical process parameters, namely spindle speed (ω), plunge force (Fz), and welding speed (v), are considered key factors in determining the ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. It is observed that all three process parameters have a direct influence on the UTS of the welded joints. Utilizing the experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict the UTS of FSW joints. A total of 1200 models were developed by varying the number of membership functions (MFs), the type of MFs, and the combination of four input variables (ω, v, Fz, EFI) utilizing a MATLAB platform, where EFI denotes an empirical force index derived from the three process parameters. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from the FSW process parameters. By comparing the ANFIS and ANN predictions, it was found that the optimized ANFIS models provide better results than ANN. This newly developed best ANFIS model could be utilized for prediction of the UTS of FSW joints.

  19. Novel secret key generation techniques using memristor devices

    NASA Astrophysics Data System (ADS)

    Abunahla, Heba; Shehada, Dina; Yeun, Chan Yeob; Mohammad, Baker; Jaoude, Maguy Abi

    2016-02-01

    This paper proposes novel secret key generation techniques using memristor devices. The approach depends on using the initial profile of a memristor as a master key. In addition, session keys are generated using the master key and other specified parameters. In contrast to existing memristor-based security approaches, the proposed development is cost effective and power efficient since the operation can be achieved with a single device rather than a crossbar structure. An algorithm is suggested and demonstrated using a physics-based MATLAB model. It is shown that the generated keys can have dynamic size, which provides perfect security. Moreover, the proposed encryption and decryption technique using the memristor-based generated keys outperforms Triple Data Encryption Standard (3DES) and Advanced Encryption Standard (AES) in terms of processing time. This paper is enriched by providing characterization results of a fabricated microscale Al/TiO2/Al memristor prototype in order to prove the concept of the proposed approach and study the impacts of process variations. The work proposed in this paper is a milestone towards System On Chip (SOC) memristor based security.
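    The master-key/session-key scheme can be sketched with an HMAC-based expansion; this is an assumed construction for illustration, not the algorithm proposed in the paper (which derives the master key from the memristor's initial profile rather than from a random number generator).

```python
import hashlib
import hmac
import os

def session_key(master, session_id, length=32):
    """Derive a session key of arbitrary length from the master key.

    HKDF-style expansion with HMAC-SHA256; the variable `length`
    mirrors the dynamically sized keys described in the paper.
    """
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(master, block + session_id + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

# Random stand-in for the digitized initial memristor profile
master_key = os.urandom(32)
k1 = session_key(master_key, b"session-001", length=16)
k2 = session_key(master_key, b"session-002", length=16)
assert k1 != k2  # distinct sessions yield distinct keys
```

    Because each session key is derived rather than stored, compromising one session does not expose the master key or other sessions.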

  20. In-depth analysis and characterization of a dual damascene process with respect to different CD

    NASA Astrophysics Data System (ADS)

    Krause, Gerd; Hofmann, Detlef; Habets, Boris; Buhl, Stefan; Gutsch, Manuela; Lopez-Gomez, Alberto; Kim, Wan-Soo; Thrun, Xaver

    2018-03-01

    In a 200 mm high volume environment, we studied data from a dual damascene process. Dual damascene is a combination of lithography, etch and CMP that is used to create copper lines and contacts in one single step. During these process steps, different metal CDs are measured by different measurement methods. In this study, we analyze the key numbers of the different measurements after different process steps and develop simple models to predict the electrical behavior. In addition, radial profiles of both inline measurement parameters and electrical parameters have been analyzed. A matching method was developed based on inline and electrical data. Finally, correlation analysis for radial signatures is presented that can be used to predict excursions in electrical signatures.
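    The radial-signature matching described above can be sketched as follows: wafer-map values are averaged into concentric rings and the resulting profiles are compared by Pearson correlation. The wafer data, ring count and CD/resistance relationship below are invented for illustration.

```python
import math

def radial_profile(points, n_rings=5, r_max=100.0):
    """Average a wafer-map parameter into concentric radial rings."""
    sums = [0.0] * n_rings
    counts = [0] * n_rings
    for x, y, value in points:
        ring = min(int(math.hypot(x, y) / r_max * n_rings), n_rings - 1)
        sums[ring] += value
        counts[ring] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

def pearson(a, b):
    """Pearson correlation of two equal-length profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Hypothetical wafer data: CD grows toward the wafer edge, and the
# electrical resistance tracks CD, so the radial signatures correlate.
cd = [(x, y, 50 + 0.05 * math.hypot(x, y))
      for x in range(-90, 91, 10) for y in range(-90, 91, 10)
      if math.hypot(x, y) <= 100]
res = [(x, y, 10 + 0.02 * math.hypot(x, y)) for x, y, _ in cd]

r = pearson(radial_profile(cd), radial_profile(res))
print(round(r, 3))  # close to 1.0 for perfectly tracking signatures
```

    A strong radial correlation between an inline parameter and an electrical parameter is what makes the inline signature usable as an excursion predictor.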

  1. Processing parameter optimization for the laser dressing of bronze-bonded diamond wheels

    NASA Astrophysics Data System (ADS)

    Deng, H.; Chen, G. Y.; Zhou, C.; Li, S. C.; Zhang, M. J.

    2014-01-01

    In this paper, a pulsed fiber-laser dressing method for bronze-bonded diamond wheels was studied systematically and comprehensively. The mechanisms for the laser dressing of bronze-bonded diamond wheels were theoretically analyzed, and the key processing parameters that determine the results of laser dressing, including the laser power density, pulse overlap ratio, ablation track line overlap ratio, and number of scanning cycles, were proposed for the first time. Further, the effects of these four key parameters on the oxidation-damaged layer of the material surface, the material removal efficiency, the material surface roughness, and the average protrusion height of the diamond grains were explored and summarized through pulsed laser ablation experiments. Under the current experimental conditions, the ideal values of the laser power density, pulse overlap ratio, ablation track line overlap ratio, and number of scanning cycles were determined to be 4.2 × 10^7 W/cm^2, 30%, 30%, and 16, respectively. Pulsed laser dressing experiments were conducted on bronze-bonded diamond wheels using the optimized processing parameters; next, both the normal and tangential grinding forces produced by the dressed grinding wheel were measured while grinding alumina ceramic materials. The results revealed that the normal and tangential grinding forces produced by the laser-dressed grinding wheel during grinding were smaller than those of grinding wheels dressed using the conventional mechanical method, indicating that the pulsed laser dressing technology provides irreplaceable advantages relative to the conventional mechanical dressing method.

  2. Reflow dynamics of thin patterned viscous films

    NASA Astrophysics Data System (ADS)

    Leveder, T.; Landis, S.; Davoust, L.

    2008-01-01

    This letter presents a study of the viscous smoothening dynamics of a nanopatterned thin film. Ultrathin-film manufacturing processes appear to be a key point of nanotechnology engineering, and numerous studies have recently been conducted to identify the driving parameters of this transient surface motion, focusing on time-scale accuracy. Based on nanomechanical analysis, this letter shows that controlled shape measurements provide much more detailed information about the reflow mechanism. Control of the reflow process of any complex surface shape, or measurement of material parameters such as thin-film viscosity, free-surface energy, or even the Hamaker constant, is therefore possible.

  3. Collision avoidance system cost-benefit analysis : volume I - technical manual

    DOT National Transportation Integrated Search

    1981-09-01

    Collision-avoidance systems under development in the U.S.A., Japan and Germany were evaluated. The performance evaluation showed that the signal processing and the control law of a system were the key parameters that decided the system's capability, ...

  4. Application of lab derived kinetic biodegradation parameters at the field scale

    NASA Astrophysics Data System (ADS)

    Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.

    2003-04-01

    Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field estimated zero- and first-order rates are often not suitable to forecast plume development because they may be an oversimplification of the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scale by applying laboratory derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without having to calibrate the model. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory derived Monod kinetic parameters can adequately describe field scale degradation processes, if all controlling factors are incorporated in the field scale modelling that are not necessarily observed at the lab scale. 
Thus, no separate scale relationships linking the laboratory and the field scale need to be found; accurately incorporating the additional processes, phenomena and characteristics at the larger scale, such as (a) advective and dispersive transport of one or more contaminants, (b) advective and dispersive transport and availability of electron acceptors, (c) mass transfer limitations and (d) spatial heterogeneities, and applying well-defined lab-scale parameters should accurately describe field-scale processes.
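    The role of electron-acceptor availability in Monod kinetics, central to the argument above, can be sketched with a simple batch simulation; the rate constants and stoichiometry below are illustrative values, not the Borden field parameters.

```python
def simulate(c0=10.0, o0=8.0, k_max=2.0, ks=1.0, ko=0.5, gamma=3.0,
             dt=0.01, t_end=5.0):
    """Batch Monod degradation of substrate C limited by oxygen O.

    dC/dt = -k_max * C/(ks + C) * O/(ko + O);  dO/dt = gamma * dC/dt
    (gamma = stoichiometric oxygen demand per unit substrate degraded).
    Forward-Euler integration; illustrative values only.
    """
    c, o = c0, o0
    for _ in range(int(t_end / dt)):
        rate = -k_max * c / (ks + c) * o / (ko + o)
        c = max(c + rate * dt, 0.0)
        o = max(o + gamma * rate * dt, 0.0)
    return c, o

c_end, o_end = simulate()
# 8 mg/L of oxygen can oxidize only 8/3 = 2.67 mg/L of substrate, so
# degradation stalls once the electron acceptor is exhausted:
print(round(c_end, 2), round(o_end, 3))
```

    This is exactly why lab-derived zero- or first-order rates, fitted where oxygen is plentiful, overpredict field-scale degradation: in the field the electron-acceptor budget, not the intrinsic kinetics, often controls the total mass degraded.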

  5. Requirements Document for Development of a Livermore Tomography Tools Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seetho, I. M.

    In this document, we outline an exercise performed at LLNL to evaluate the user-interface deficits of an LLNL-developed CT reconstruction software package, Livermore Tomography Tools (LTT). We observe that a difficult-to-use command-line interface and the lack of support functions combine to create a bottleneck in the CT reconstruction process when input parameters to key functions are not well known. Through the exercise of systems engineering best practices, we generate key performance parameters for an LTT interface refresh, and specify a combination of back-end (“test-mode” functions) and front-end (graphical user interface visualization and command scripting tools) solutions to LTT’s poor user interface that aim to mitigate issues and lower costs associated with CT reconstruction using LTT. Key functional and non-functional requirements and risk mitigation strategies for the solution are outlined and discussed.

  6. Membrane tension: A challenging but universal physical parameter in cell biology.

    PubMed

    Pontes, Bruno; Monzo, Pascale; Gauthier, Nils C

    2017-11-01

    The plasma membrane separates the interior of cells from the outside environment. The membrane tension, defined as the force per unit length acting on a cross-section of membrane, regulates many vital biological processes. In this review, we summarize the first historical findings and the latest advances, showing membrane tension as an important physical parameter in cell biology. We also discuss how this parameter must be better integrated and we propose experimental approaches for key unanswered questions.

  7. White-light Interferometry using a Channeled Spectrum: II. Calibration Methods, Numerical and Experimental Results

    NASA Technical Reports Server (NTRS)

    Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.

    2007-01-01

    In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.

  8. Application of Multi-Parameter Data Visualization by Means of Multidimensional Scaling to Evaluate Possibility of Coal Gasification

    NASA Astrophysics Data System (ADS)

    Jamróz, Dariusz; Niedoba, Tomasz; Surowiak, Agnieszka; Tumidajski, Tadeusz; Szostek, Roman; Gajer, Mirosław

    2017-09-01

    The application of methods drawing upon multi-parameter visualization of data, by transformation of a multidimensional space into a two-dimensional one, makes it possible to show multi-parameter data on a computer screen. Thanks to that, a qualitative analysis of the data can be conducted in the way most natural for human beings, i.e., by the sense of sight. An example of such a method of multi-parameter visualization is multidimensional scaling. This method was used in this paper to present and analyze a set of seven-dimensional data obtained from the Janina Mining Plant and the Wieczorek Coal Mine. It was decided to examine whether the method of multi-parameter data visualization allows the sample space to be divided into areas of varying suitability for the fluidal gasification process. The "Technological applicability card for coals" was used for this purpose [Sobolewski et al., 2012; 2017], in which the key parameters, important parameters and additional ones affecting the gasification process are described.
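    Multidimensional scaling as used above can be sketched with a classical (Torgerson) MDS implementation: the squared-distance matrix is double-centered and the top eigenvectors of the resulting Gram matrix, found here by power iteration, give the 2-D coordinates. The coal-parameter values are invented for illustration, not the Janina/Wieczorek data.

```python
import math
import random

def classical_mds(dist, out_dim=2, iters=300):
    """Classical (Torgerson) MDS: double-center the squared distances,
    then extract the top eigenvectors of the Gram matrix by power
    iteration with deflation (adequate for a small sketch)."""
    n = len(dist)
    d2 = [[dist[i][j] ** 2 for j in range(n)] for i in range(n)]
    rm = [sum(row) / n for row in d2]
    grand = sum(rm) / n
    b = [[-0.5 * (d2[i][j] - rm[i] - rm[j] + grand) for j in range(n)]
         for i in range(n)]
    random.seed(1)
    coords = [[0.0] * out_dim for _ in range(n)]
    for dim in range(out_dim):
        v = [random.gauss(0.0, 1.0) for _ in range(n)]
        lam = 0.0
        for _ in range(iters):
            w = [sum(b[i][j] * v[j] for j in range(n)) for i in range(n)]
            lam = math.sqrt(sum(x * x for x in w))
            if lam < 1e-12:
                break
            v = [x / lam for x in w]
        if lam < 1e-12:
            continue  # remaining eigenvalues are ~0
        for i in range(n):
            coords[i][dim] = math.sqrt(lam) * v[i]
        for i in range(n):  # deflate the component just found
            for j in range(n):
                b[i][j] -= lam * v[i] * v[j]
    return coords

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical coal samples (ash %, sulfur %, moisture %); moisture is
# constant here, so the data is intrinsically two-dimensional.
samples = [[12.0, 0.8, 9.0], [15.0, 0.8, 9.0],
           [12.0, 4.8, 9.0], [15.0, 4.8, 9.0]]
dist = [[euclid(a, b) for b in samples] for a in samples]
coords = classical_mds(dist)
```

    Plotting `coords` would reveal clusters of samples with similar parameter profiles, which is how the applicability areas for gasification are identified visually.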

  9. Microdesigning of Lightweight/High Strength Ceramic Materials

    DTIC Science & Technology

    1989-07-31

    Ceramics, Composite Materials, Colloidal Processing ... to identify key processing parameters that affect the microstructure of the composite material. The second section describes experimental results ... results of the significant theoretical effort made in our group. Theoretical models of particle-particle interaction, particle-polymer interaction

  10. Microwave-Assisted Preparation of Activated Carbon from Eupatorium Adenophorum: Effects of Preparation Parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Song; Zhang, Shengzhou; Zhang, Libo; Xia, Hongying; Peng, Jinhui; Wang, Shixing

    2017-09-01

    Eupatorium adenophorum, a globally invasive weed, was utilized as feedstock for the preparation of activated carbon (AC) via microwave-induced KOH activation. The influence of three vital process parameters, microwave power, activation time and impregnation ratio (IR), on the adsorption capacity and yield of the AC was assessed. The process parameters were optimized using the Design Expert software and were identified to be a microwave power of 700 W, an activation time of 15 min and an IR of 4, with the resultant iodine adsorption number and yield being 2,621 mg/g and 28.25%, respectively. The key parameters that characterize the AC, namely the Brunauer-Emmett-Teller (BET) surface area, total pore volume and average pore diameter, were estimated to be 3,918 m2/g, 2.383 ml/g and 2.43 nm, respectively, under the optimized process conditions. The surface characteristics of the AC were examined by Fourier transform infrared spectroscopy, scanning electron microscopy and transmission electron microscopy.

  11. Effect of process parameters on greenhouse gas generation by wastewater treatment plants.

    PubMed

    Yerushalmi, L; Shahabadi, M Bani; Haghighat, F

    2011-05-01

    The effect of key process parameters on greenhouse gas (GHG) emission by wastewater treatment plants was evaluated, and the governing parameters that exhibited major effects on the overall on- and off-site GHG emissions were identified. This evaluation used aerobic, anaerobic, and hybrid anaerobic/aerobic treatment systems with food processing industry wastewater. The operating temperature of anaerobic sludge digester was identified to have the highest effect on GHG generation in the aerobic treatment system. The total GHG emissions of 2694 kg CO2e/d were increased by 72.5% with the increase of anaerobic sludge digester temperature from 20 to 40 degrees C. The operating temperature of the anaerobic reactor was the dominant controlling parameter in the anaerobic and hybrid treatment systems. Raising the anaerobic reactor's temperature from 25 to 40 degrees C increased the total GHG emissions from 5822 and 6617 kg CO2e/d by 105.6 and 96.5% in the anaerobic and hybrid treatment systems, respectively.

  12. How Does Higher Frequency Monitoring Data Affect the Calibration of a Process-Based Water Quality Model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L.

    2014-12-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. 
Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.

  13. Financial gains and risks in pay-for-performance bonus algorithms.

    PubMed

    Cromwell, Jerry; Drozd, Edward M; Smith, Kevin; Trisolini, Michael

    2007-01-01

    Considerable attention has been given to evidence-based process indicators associated with quality of care, while much less attention has been given to the structure and key parameters of the various pay-for-performance (P4P) bonus and penalty arrangements using such measures. In this article we develop a general model of quality payment arrangements and discuss the advantages and disadvantages of the key parameters. We then conduct simulation analyses of four general P4P payment algorithms by varying seven parameters, including indicator weights, indicator intercorrelation, degree of uncertainty regarding intervention effectiveness, and initial baseline rates. Bonuses averaged over several indicators appear insensitive to weighting, correlation, and the number of indicators. The bonuses are sensitive to disease manager perceptions of intervention effectiveness, facing challenging targets, and the use of actual-to-target quality levels versus rates of improvement over baseline.

  14. Collision avoidance system cost-benefit analysis : volume III - appendices F-M

    DOT National Transportation Integrated Search

    1981-09-01

    Collision-avoidance systems under development in the U.S.A., Japan and Germany were evaluated. The performance evaluation showed that the signal processing and the control law of a system were the key parameters that decided the system's capability, ...

  15. Collision avoidance system cost-benefit analysis : volume II - appendices A-E

    DOT National Transportation Integrated Search

    1981-09-01

    Collision-avoidance systems under development in the U.S.A., Japan and Germany were evaluated. The performance evaluation showed that the signal processing and the control law of a system were the key parameters that decided the system's capability, ...

  16. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    USGS Publications Warehouse

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
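    The Monte-Carlo uncertainty analysis the tool performs can be sketched on a toy model: parameters are sampled, each simulation is scored against the tracer data, and the spread of the best-fitting sets indicates parameter uncertainty. The breakthrough-curve model below is a simple illustrative stand-in, not OTIS.

```python
import math
import random

def btc(t, tau, alpha):
    """Toy breakthrough curve: a pulse arriving at time tau whose spread
    grows with the storage-exchange coefficient alpha (an illustrative
    stand-in for a full transient-storage model such as OTIS)."""
    sigma = 0.5 + 5.0 * alpha
    return math.exp(-((t - tau) ** 2) / (2.0 * sigma ** 2))

random.seed(42)
t_obs = [0.5 * i for i in range(40)]
obs = [btc(t, tau=8.0, alpha=0.2) for t in t_obs]  # synthetic tracer data

# Monte-Carlo sampling of the two parameters, each set scored by RMSE
results = []
for _ in range(2000):
    tau = random.uniform(2.0, 15.0)
    alpha = random.uniform(0.0, 1.0)
    rmse = math.sqrt(sum((btc(t, tau, alpha) - o) ** 2
                         for t, o in zip(t_obs, obs)) / len(t_obs))
    results.append((rmse, tau, alpha))

behavioral = sorted(results)[:100]  # keep the best 5% of parameter sets
tau_lo = min(b[1] for b in behavioral)
tau_hi = max(b[1] for b in behavioral)
print(round(tau_lo, 1), round(tau_hi, 1))  # spread reflects uncertainty
```

    A narrow spread among the behavioral sets indicates a well-constrained parameter; a wide spread is exactly the kind of parameter uncertainty the authors argue should be reported alongside TSM results.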

  17. Improved performance of analog and digital acousto-optic modulation with feedback under profiled beam propagation for secure communication using chaos

    NASA Astrophysics Data System (ADS)

    Almehmadi, Fares S.; Chatterjee, Monish R.

    2014-12-01

    Using intensity feedback, the closed-loop behavior of an acousto-optic hybrid device under profiled beam propagation has been recently shown to exhibit wider chaotic bands potentially leading to an increase in both the dynamic range and sensitivity to key parameters that characterize the encryption. In this work, a detailed examination is carried out vis-à-vis the robustness of the encryption/decryption process relative to parameter mismatch for both analog and pulse code modulation signals, and bit error rate (BER) curves are used to examine the impact of additive white noise. The simulations with profiled input beams are shown to produce a stronger encryption key (i.e., much lower parametric tolerance thresholds) relative to simulations with uniform plane wave input beams. In each case, it is shown that the tolerance for key parameters drops by factors ranging from 10 to 20 times below those for uniform plane wave propagation. Results are shown to be at consistently lower tolerances for secure transmission of analog and digital signals using parameter tolerance measures, as well as BER performance measures for digital signals. These results hold out the promise for considerably greater information transmission security for such a system.
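    The role of BER curves under additive white noise can be illustrated with a minimal Monte Carlo sketch. This is not the acousto-optic hybrid system itself: antipodal signaling in Gaussian noise stands in for the digital (pulse-code-modulated) branch, and the simulated error rates are checked against the textbook Q-function:

    ```python
    import math
    import random

    random.seed(1)

    def q_function(x):
        # Gaussian tail probability Q(x), via the complementary error function
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def simulate_ber(ebn0_db, n_bits=200_000):
        # Antipodal (+1/-1) signaling in additive white Gaussian noise;
        # a stand-in for the digital branch of the hybrid system.
        ebn0 = 10.0 ** (ebn0_db / 10.0)
        sigma = 1.0 / math.sqrt(2.0 * ebn0)
        errors = 0
        for _ in range(n_bits):
            bit = random.choice((-1.0, 1.0))
            received = bit + random.gauss(0.0, sigma)
            if (received >= 0.0) != (bit > 0.0):
                errors += 1
        return errors / n_bits

    results = {}
    for ebn0_db in (0, 4, 8):
        sim = simulate_ber(ebn0_db)
        theory = q_function(math.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0)))
        results[ebn0_db] = (sim, theory)
        print(ebn0_db, "dB:", round(sim, 5), "vs", round(theory, 5))
    ```

    Sweeping the noise level in this way traces out a BER curve of the kind used in the paper to compare matched and mismatched keys.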

  18. Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements

    NASA Technical Reports Server (NTRS)

    Lyatsky, W.; Khazanov, G. V.

    2007-01-01

    Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability is dependent on the prediction method and elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and investigate the opportunity to improve the reliability of the prediction of geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting the geomagnetic activity and some key parameters, such as cross-polar cap voltage and total Joule heating in high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth's magnetosphere, and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.

  19. Does the cation really matter? The effect of modifying an ionic liquid cation on an SN2 process.

    PubMed

    Tanner, Eden E L; Yau, Hon Man; Hawker, Rebecca R; Croft, Anna K; Harper, Jason B

    2013-09-28

    The rate of reaction of a Menschutkin process in a range of ionic liquids with different cations was investigated, with temperature-dependent kinetic data giving access to activation parameters for the process in each solvent. These data, along with molecular dynamics simulations, demonstrate the importance of accessibility of the charged centre on the cation and that the key interactions are of a generalised electrostatic nature.
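    The step from temperature-dependent rate constants to activation parameters is standard Eyring analysis, sketched below with synthetic rate constants generated from assumed values of the activation enthalpy and entropy (not data from the paper):

    ```python
    import math

    R = 8.314              # gas constant, J mol^-1 K^-1
    KB_OVER_H = 2.0837e10  # Boltzmann/Planck ratio k_B/h, s^-1 K^-1

    # Hypothetical rate constants for a Menschutkin-type reaction,
    # generated from assumed activation parameters (not the paper's data).
    dH_true, dS_true = 80_000.0, -50.0   # J/mol and J/(mol K)
    temps = [298.15, 308.15, 318.15, 328.15]
    ks = [KB_OVER_H * T * math.exp(dS_true / R - dH_true / (R * T)) for T in temps]

    # Eyring analysis: ln(k/T) = ln(kB/h) + dS/R - dH/(R*T),
    # so a line fit of ln(k/T) against 1/T yields dH and dS.
    xs = [1.0 / T for T in temps]
    ys = [math.log(k / T) for k, T in zip(ks, temps)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar

    dH = -slope * R                             # activation enthalpy, J/mol
    dS = (intercept - math.log(KB_OVER_H)) * R  # activation entropy, J/(mol K)
    print(round(dH / 1000.0, 2), "kJ/mol,", round(dS, 2), "J/(mol K)")
    ```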

  20. Development of a simulation environment to support intercalibration studies over the Algodones Dunes system

    NASA Astrophysics Data System (ADS)

    Eon, Rehman S.; Gerace, Aaron D.; Montanaro, Matthew; Ambeau, Brittany L.; McCorkel, Joel T.

    2018-01-01

    The ability of sensors to detect changes in the Earth's environment is dependent on retrieving radiometrically consistent and calibrated measurements from its surface. Intercalibration provides consistency among satellite instruments and ensures fidelity of scientific information. Intercalibration is especially important for spaceborne satellites without any on-board calibration, as accuracy of instruments is significantly affected by changes that occur postlaunch. To better understand the key parameters that impact the intercalibration process, this paper describes a simulation environment that was developed to support the primary mission of the Algodones Dunes campaign. Specifically, measurements obtained from the campaign were utilized to create a synthetic landscape to assess the feasibility of using the Algodones Dunes system as an intercalibration site for spaceborne instruments. The impact of two key parameters (differing view-angles and temporal offsets between instruments) on the intercalibration process was assessed. Results of these studies indicate that although the accuracy of intercalibration is sensitive to these parameters, proper knowledge of their impact leads to situations that minimize their effect. This paper concludes with a case study that addresses the feasibility of performing intercalibration on the International Space Station's platform to support NASA's Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission.

  1. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. In generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance its security. The security of the algorithm is evaluated through correlation analysis, information entropy analysis, run-statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
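    The key-stream construction can be illustrated with a deliberately simplified sketch: a logistic map stands in for the fractional-order hyperchaotic Lorenz system, with the secret parameters seeding the map, iterates quantized to bytes, and the plaintext XORed with the resulting stream. The final assertion illustrates key sensitivity:

    ```python
    # Secret "key": initial condition x0 and map parameter r. The logistic
    # map is a stand-in for the paper's fractional-order Lorenz generator;
    # only the overall key-stream scheme is illustrated here.
    def keystream(x0, r, n, burn_in=100):
        x = x0
        for _ in range(burn_in):               # discard the transient
            x = r * x * (1.0 - x)
        stream = []
        for _ in range(n):
            x = r * x * (1.0 - x)
            stream.append(int(x * 256) % 256)  # quantize state to a byte
        return stream

    def xor_cipher(data, x0, r):
        ks = keystream(x0, r, len(data))
        return bytes(b ^ k for b, k in zip(data, ks))

    plain = b"pixel data stand-in"
    cipher = xor_cipher(plain, 0.123456789, 3.99)

    # The exact key recovers the plaintext; a 1e-9 change in x0 does not,
    # illustrating the key sensitivity the paper analyzes.
    assert xor_cipher(cipher, 0.123456789, 3.99) == plain
    assert xor_cipher(cipher, 0.123456790, 3.99) != plain
    print(cipher.hex())
    ```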

  2. Development of AACAP practice parameters for gender nonconformity and gender discordance in children and adolescents.

    PubMed

    Adelson, Stewart L

    2011-10-01

    The American Academy of Child and Adolescent Psychiatry (AACAP) is preparing a publication, Practice Parameter on Gay, Lesbian or Bisexual Sexual Orientation, Gender-Nonconformity, and Gender Discordance in Children and Adolescents. This article discusses the development of the part of the parameter related to gender nonconformity and gender discordance and describes the practice parameter preparation process, rationale, key scientific evidence, and methodology. Also discussed are terminology considerations, related clinical issues and practice skills, and overall organization of information including influences on gender development, gender role behavior, gender nonconformity and gender discordance, and their relationship to the development of sexual orientation.

  3. Modeling and optimization of joint quality for laser transmission joint of thermoplastic using an artificial neural network and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Zhang, Cheng; Li, Pin; Wang, Kai; Hu, Yang; Zhang, Peng; Liu, Huixia

    2012-11-01

    A central composite rotatable experimental design (CCRD) is conducted to design experiments for laser transmission joining of the thermoplastic polycarbonate (PC). An artificial neural network was used to establish the relationships between the laser transmission joining process parameters (laser power, velocity, clamp pressure, and scanning number) and the joint strength and joint seam width. The developed mathematical models are tested by the analysis of variance (ANOVA) method to check their adequacy, and the effects of the process parameters on the responses, as well as the interaction effects of key process parameters on joint quality, are analyzed and discussed. Finally, the desirability function coupled with a genetic algorithm is used to carry out the optimization of the joint strength and joint width. The results show that the predicted results of the optimization are in good agreement with the experimental results, so this study provides an effective method to enhance joint quality.
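    The desirability-plus-genetic-algorithm step can be sketched as follows. A quadratic toy surrogate stands in for the trained neural network, and the bounds, optimum, and GA settings are illustrative rather than the paper's values:

    ```python
    import random

    random.seed(2)

    # Quadratic toy surrogate for the trained ANN: maps (laser power,
    # velocity) to joint strength, peaking at a hypothetical optimum (30, 5).
    def surrogate_strength(power, velocity):
        return 100.0 - (power - 30.0) ** 2 - 4.0 * (velocity - 5.0) ** 2

    def desirability(ind):
        return max(surrogate_strength(*ind), 0.0) / 100.0  # scaled to [0, 1]

    BOUNDS = [(10.0, 50.0), (1.0, 10.0)]

    def random_ind():
        return [random.uniform(lo, hi) for lo, hi in BOUNDS]

    def mutate(ind):
        return [min(hi, max(lo, g + random.gauss(0.0, 0.5)))
                for g, (lo, hi) in zip(ind, BOUNDS)]

    # Simple elitist GA: keep the 10 best, breed 30 children per
    # generation by blend crossover plus Gaussian mutation.
    pop = [random_ind() for _ in range(40)]
    for _ in range(60):
        pop.sort(key=desirability, reverse=True)
        elite = pop[:10]
        children = []
        for _ in range(30):
            a, b = random.sample(elite, 2)
            children.append(mutate([(ga + gb) / 2.0 for ga, gb in zip(a, b)]))
        pop = elite + children

    best = max(pop, key=desirability)
    print([round(g, 2) for g in best], round(desirability(best), 3))
    ```

    In the paper's setting the surrogate would be the trained network and the desirability would combine joint strength and seam width into one objective.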

  4. High density circuit technology, part 3

    NASA Technical Reports Server (NTRS)

    Wade, T. E.

    1982-01-01

    Dry processing - both etching and deposition - and present/future trends in semiconductor technology are discussed. In addition to a description of the basic apparatus, terminology, advantages, glow discharge phenomena, gas-surface chemistries, and key operational parameters for both dry etching and plasma deposition processes, a comprehensive survey of dry processing equipment (via vendor listing) is also included. The following topics are also discussed: fine-line photolithography, low-temperature processing, packaging for dense VLSI die, the role of integrated optics, and VLSI and technology innovations.

  5. Immersion lithography defectivity analysis at DUV inspection wavelength

    NASA Astrophysics Data System (ADS)

    Golan, E.; Meshulach, D.; Raccah, N.; Yeo, J. Ho.; Dassa, O.; Brandl, S.; Schwarz, C.; Pierson, B.; Montgomery, W.

    2007-03-01

    Significant effort has been directed in recent years towards the realization of immersion lithography at 193nm wavelength. Immersion lithography is likely a key enabling technology for the production of critical layers for 45nm and 32nm design rule (DR) devices. In spite of the significant progress in immersion lithography technology, several key technology issues remain, a critical one being immersion lithography process-induced defects. The benefits of the optical resolution and depth of focus made possible by immersion lithography are well understood. Yet, these benefits cannot come at the expense of increased defect counts and decreased production yield. Understanding the impact of the immersion lithography process parameters on wafer defect formation and defect counts, together with the ability to monitor, control and minimize the defect counts down to acceptable levels, is imperative for successful introduction of immersion lithography for production of advanced DR's. In this report, we present experimental results of immersion lithography defectivity analysis focused on topcoat layer thickness parameters and resist bake temperatures. Wafers were exposed on the 1150i-α-immersion scanner and 1200B Scanner (ASML); defect inspection was performed using a DUV inspection tool (UVision TM, Applied Materials). Higher sensitivity was demonstrated at DUV through detection of small defects not detected at the visible wavelength, indicating the potential sensitivity benefits of DUV inspection for this layer. The analysis indicates that certain types of defects are associated with different immersion process parameters. This type of analysis at DUV wavelengths would enable the optimization of immersion lithography processes, thus enabling the qualification of immersion processes for volume production.

  6. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.

  7. A Four-parameter Budyko Equation for Mean Annual Water Balance

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Wang, D.

    2016-12-01

    In this study, a four-parameter Budyko equation for long-term water balance at the watershed scale is derived based on the proportionality relationships of the two-stage partitioning of precipitation. The four-parameter Budyko equation provides a practical solution for balancing model simplicity against representation of the dominant hydrologic processes. Under the four-parameter Budyko framework, the key hydrologic processes related to the lower bound of the Budyko curve are identified: the lower bound corresponds to the situation in which surface runoff, and the initial evaporation that does not compete with base flow generation, are both zero. The derived model is applied to 166 MOPEX watersheds in the United States, and the dominant controlling factors on each parameter are determined. Then, four statistical models are proposed to predict the four model parameters based on the dominant controlling factors, e.g., saturated hydraulic conductivity, fraction of sand, time period between two storms, watershed slope, and Normalized Difference Vegetation Index. This study shows a potential application of the four-parameter Budyko equation to constrain land-surface parameterizations in ungauged watersheds or general circulation models.
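    The abstract does not reproduce the four-parameter equation itself, so the sketch below evaluates the classic one-parameter Fu form of the Budyko curve purely to illustrate how such an equation partitions mean annual precipitation between evaporation and runoff:

    ```python
    # One-parameter Fu/Budyko curve: E/P = 1 + PET/P - (1 + (PET/P)^w)^(1/w).
    # This is a widely used illustrative stand-in, not the paper's
    # four-parameter equation.
    def fu_evaporation_ratio(aridity, w):
        """Mean annual E/P from the aridity index PET/P (Fu's equation)."""
        return 1.0 + aridity - (1.0 + aridity ** w) ** (1.0 / w)

    w = 2.6  # a typical calibrated shape parameter
    results = {}
    for phi in (0.5, 1.0, 2.0, 5.0):
        ep = fu_evaporation_ratio(phi, w)
        results[phi] = ep
        # Long-term water balance: runoff ratio Q/P = 1 - E/P
        print(phi, round(ep, 3), round(1.0 - ep, 3))
    ```

    The evaporation ratio stays below both 1 and the aridity index (the water and energy limits), which is the defining property a four-parameter form refines with additional partitioning parameters.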

  8. Investigation into the influence of build parameters on failure of 3D printed parts

    NASA Astrophysics Data System (ADS)

    Fornasini, Giacomo

    Additive manufacturing, including fused deposition modeling (FDM), is transforming the built world and engineering education. Deep understanding of parts created through FDM technology has lagged behind its adoption in home, work, and academic environments. Properties of parts created from bulk materials through traditional manufacturing are understood well enough to accurately predict their behavior through analytical models. Unfortunately, additive manufacturing (AM) process parameters create anisotropy on a scale that fundamentally affects the part properties. Understanding AM process parameters (implemented by program algorithms called slicers) is necessary to predict part behavior. Investigating the algorithms controlling print parameters revealed stark differences in how slicers generate part layers. In this work, tensile testing experiments, including a full factorial design, determined that three key factors (width, thickness, and infill density) and their interactions significantly affect the tensile properties of 3D printed test samples.

  9. Semi-physical Simulation of the Airborne InSAR based on Rigorous Geometric Model and Real Navigation Data

    NASA Astrophysics Data System (ADS)

    Changyong, Dou; Huadong, Guo; Chunming, Han; yuquan, Liu; Xijuan, Yue; Yinghui, Zhao

    2014-03-01

    Raw signal simulation is a useful tool for the system design, mission planning, processing algorithm testing, and inversion algorithm design of Synthetic Aperture Radar (SAR). Due to the wide and highly frequent variations of the aircraft's trajectory and attitude, and the low accuracy of the data recorded by the Position and Orientation System (POS), it is difficult to quantitatively study the sensitivity of the key parameters of the airborne Interferometric SAR (InSAR) system, i.e., the baseline length and inclination, the absolute phase, and the orientation of the antennas, resulting in challenges for its applications. Furthermore, the imprecise estimation of the installation offset between the Global Positioning System (GPS), Inertial Measurement Unit (IMU) and the InSAR antennas compounds the issue. An airborne InSAR simulation based on the rigorous geometric model and real navigation data is proposed in this paper, providing a way to quantitatively study the key parameters and to evaluate their effect on the applications of airborne InSAR, such as photogrammetric mapping, high-resolution Digital Elevation Model (DEM) generation, and surface deformation measurement by differential InSAR technology. The simulation can also provide reference for the optimal design of the InSAR system and the improvement of InSAR data processing technologies such as motion compensation, imaging, image co-registration, and application parameter retrieval.

  10. Sensitivity of black carbon concentrations and climate impact to aging and scavenging in OsloCTM2-M7

    NASA Astrophysics Data System (ADS)

    Lund, Marianne T.; Berntsen, Terje K.; Samset, Bjørn H.

    2017-05-01

    Accurate representation of black carbon (BC) concentrations in climate models is a key prerequisite for understanding its net climate impact. BC aging and scavenging are treated very differently in current models. Here, we examine the sensitivity of three-dimensional (3-D), temporally resolved BC concentrations to perturbations to individual model processes in the chemistry transport model OsloCTM2-M7. The main goals are to identify processes related to aerosol aging and scavenging where additional observational constraints may most effectively improve model performance, in particular for BC vertical profiles, and to give an indication of how model uncertainties in the BC life cycle propagate into uncertainties in climate impacts. Coupling OsloCTM2 with the microphysical aerosol module M7 allows us to investigate aging processes in more detail than is possible with a simpler bulk parameterization. Here we include, for the first time in this model, a treatment of condensation of nitric acid on BC. Using kernels, we also estimate the range of radiative forcing and global surface temperature responses that may result from perturbations to key tunable parameters in the model. We find that BC concentrations in OsloCTM2-M7 are particularly sensitive to convective scavenging and the inclusion of condensation by nitric acid. The largest changes are found at higher altitudes around the Equator and at low altitudes over the Arctic. Convective scavenging of hydrophobic BC, and the amount of sulfate required for BC aging, are found to be key parameters, potentially reducing bias against HIAPER Pole-to-Pole Observations (HIPPO) flight-based measurements by 60 to 90 %. Even with extensive tuning, however, the total impact on global-mean surface temperature is estimated to be less than 0.04 K. Similar results are found when nitric acid is allowed to condense on the BC aerosols. We conclude, in line with previous studies, that a shorter atmospheric BC lifetime broadly improves the comparison with measurements over the Pacific. However, we also find that the model-measurement discrepancies cannot be uniquely attributed to uncertainties in a single process or parameter. Model development therefore needs to be focused on improvements to individual processes, supported by a broad range of observational and experimental data, rather than tuning of individual, effective parameters such as the global BC lifetime.

  11. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    NASA Astrophysics Data System (ADS)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Earth System Models of Intermediate Complexity (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting, and a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections using the UVic ESCM model in future studies.
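    The Bayesian machinery can be sketched on a toy problem: a one-parameter linear "model" stands in for the UVic ESCM, and random-walk Metropolis sampling recovers the posterior of that parameter from a single noisy observation. Priors, values, and noise levels are all illustrative:

    ```python
    import math
    import random

    random.seed(3)

    # Toy forward model: one parameter (think climate sensitivity) maps
    # linearly to a predicted warming. Values are illustrative only; the
    # real study uses a GP emulator of UVic ESCM output here.
    def forward_model(theta):
        return 0.6 * theta

    obs, obs_sigma = 1.8, 0.3   # synthetic observation and its error

    def log_posterior(theta):
        if not 0.0 < theta < 10.0:       # uniform prior on (0, 10)
            return -math.inf
        resid = obs - forward_model(theta)
        return -0.5 * (resid / obs_sigma) ** 2

    # Random-walk Metropolis sampling of the posterior
    theta, samples = 3.0, []
    lp = log_posterior(theta)
    for i in range(20_000):
        proposal = theta + random.gauss(0.0, 0.5)
        lp_prop = log_posterior(proposal)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = proposal, lp_prop
        if i >= 2_000:                   # discard burn-in
            samples.append(theta)

    mean = sum(samples) / len(samples)
    print(round(mean, 2))
    ```

    With this linear toy model the true posterior is Gaussian with mean 3.0 and standard deviation 0.5, which the chain should approximately recover.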

  12. Q-marker based strategy for CMC research of Chinese medicine: A case study of Panax Notoginseng saponins.

    PubMed

    Zhong, Yi; Zhu, Jieqiang; Yang, Zhenzhong; Shao, Qing; Fan, Xiaohui; Cheng, Yiyu

    2018-01-31

    To ensure pharmaceutical quality, chemistry, manufacturing and control (CMC) research is essential. However, due to the inherent complexity of Chinese medicine (CM), CMC study of CM remains a great challenge for academia, industry, and regulatory agencies. Recently, the quality-marker (Q-marker) concept was proposed for establishing quality standards and quality analysis approaches for Chinese medicine, which sheds light on Chinese medicine's CMC study. Here, the manufacturing process of Panax notoginseng saponins (PNS) is taken as a case study, and the present work establishes a Q-marker based research strategy for CMC of Chinese medicine. The Q-markers of PNS are selected by integrating the chemical profile with pharmacological activities. Then, the key processes of PNS manufacturing are identified by material flow analysis. Furthermore, modeling algorithms are employed to explore the relationship between the Q-markers and the critical process parameters (CPPs) of the key processes. Finally, the CPPs of the key processes are optimized in order to improve process efficiency. Among the 97 identified compounds, notoginsenoside R1 and ginsenosides Rg1, Re, Rb1 and Rd are selected as the Q-markers of PNS. Our analysis of PNS manufacturing shows that the extraction and column chromatography processes are the key processes. With the CPPs of each process as the inputs and the Q-markers' contents as the outputs, two process prediction models are built separately for the extraction and column chromatography processes of Panax notoginseng, both of which possess good prediction ability. Based on the efficiency models we constructed for the extraction and column chromatography processes, the optimal CPPs of both processes are calculated. Our results show that the Q-marker derived CMC research strategy can be applied to analyze the manufacturing processes of Chinese medicine to assure product quality and promote key process efficiency simultaneously. Copyright © 2018 Elsevier GmbH. All rights reserved.

  13. Additive Manufacturing of Fuel Injectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadek Tadros, Dr. Alber Alphonse; Ritter, Dr. George W.; Drews, Charles Donald

    Additive manufacturing (AM), also known as 3D-printing, has been shifting from a novelty prototyping paradigm to a legitimate manufacturing tool capable of creating components for highly complex engineered products. An emerging AM technology for producing metal parts is the laser powder bed fusion (L-PBF) process; however, industry manufacturing specifications and component design practices for L-PBF have not yet been established. Solar Turbines Incorporated (Solar), an industrial gas turbine manufacturer, has been evaluating AM technology for development and production applications with the desire to enable accelerated product development cycle times, overall turbine efficiency improvements, and supply chain flexibility relative to conventional manufacturing processes (casting, brazing, welding). Accordingly, Solar teamed with EWI on a joint two-and-a-half-year project with the goal of developing a production L-PBF AM process capable of consistently producing high-nickel alloy material suitable for high temperature gas turbine engine fuel injector components. The project plan tasks were designed to understand the interaction of the process variables and their combined impact on the resultant AM material quality. The composition of the high-nickel alloy powders selected for this program met the conventional cast Hastelloy X compositional limits and were commercially available in different particle size distributions (PSD) from two suppliers. Solar produced all the test articles and both EWI and Solar shared responsibility for analyzing them. The effects of powder metal input stock, laser parameters, heat treatments, and post-finishing methods were evaluated. This process knowledge was then used to generate tensile, fatigue, and creep material properties data curves suitable for component design activities. The key process controls for ensuring consistent material properties were documented in AM powder and process specifications. The basic components of the project were: • Powder metal input stock: powder characterization, dimensional accuracy, metallurgical characterization, and mechanical properties evaluation. • Process parameters: laser parameter effects, post-printing heat-treatment development, mechanical properties evaluation, and post-finishing technique. • Material design curves: room and elevated temperature tensile, low cycle fatigue, and creep rupture properties curves generated. • AM specifications: key metal powder characteristics, laser parameters, and heat-treatment controls identified.

  14. Multi-party Measurement-Device-Independent Quantum Key Distribution Based on Cluster States

    NASA Astrophysics Data System (ADS)

    Liu, Chuanqi; Zhu, Changhua; Ma, Shuquan; Pei, Changxing

    2018-03-01

    We propose a novel multi-party measurement-device-independent quantum key distribution (MDI-QKD) protocol based on cluster states. A four-photon analyzer which can distinguish all 16 cluster states serves as the measurement device for four-party MDI-QKD. Any two out of four participants can build secure keys after the analyzer obtains successful outputs and the two participants perform post-processing. We derive a security analysis for the protocol, and analyze the key rates under different values of polarization misalignment. The results show that four-party MDI-QKD is feasible over 280 km in the optical fiber channel when the key rate is about 10^-6 with a polarization misalignment parameter of 0.015. Moreover, our work takes an important step toward a quantum communication network.

  15. Scheduling on the basis of the research of dependences among the construction process parameters

    NASA Astrophysics Data System (ADS)

    Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga

    2017-10-01

    The article investigates the dependences among three construction process parameters: the average integrated qualification level of the shift, the number of workers per shift, and the average daily amount of completed work, using correlation coefficients. Basic data for the research of these dependences were collected during the construction of two standard objects, A and B (monolithic houses), over four months of construction (October, November, December, January). The Cobb-Douglas production function yielded correlation coefficients close to 1; the function is simple to use and is well suited to describing the considered dependences. A development function describing the relationship among the considered construction process parameters is derived; it makes it possible to select the optimal quantitative and qualification structure of the brigade link for work during the next period of time, according to a preset amount of work. A function of the optimized amounts of work, reflecting the interrelation of the key construction process parameters, is also developed; its values should be used as the average standard for scheduling the peak (storming) periods of construction.
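    Fitting a Cobb-Douglas production function reduces to linear least squares after taking logarithms, as the sketch below shows. The crew-size and qualification data are synthetic, generated from assumed exponents rather than the observations from objects A and B:

    ```python
    import math

    # Sketch of fitting a Cobb-Douglas production function
    #   W = A * n^alpha * q^beta
    # (daily output W from crew size n and average qualification q).
    # The data are synthetic, generated from assumed exponents.
    A_true, alpha_true, beta_true = 2.0, 0.7, 0.4
    data = [(n, q, A_true * n ** alpha_true * q ** beta_true)
            for n, q in [(4, 3.0), (6, 3.5), (8, 4.0), (10, 4.5)]]

    # Log-linearize: ln W = ln A + alpha ln n + beta ln q, then solve the
    # 3x3 least-squares normal equations by Cramer's rule.
    rows = [(1.0, math.log(n), math.log(q), math.log(w)) for n, q, w in data]
    M = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * r[3] for r in rows) for i in range(3)]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    coef = []
    for k in range(3):
        Mk = [[v[i] if j == k else M[i][j] for j in range(3)] for i in range(3)]
        coef.append(det3(Mk) / d)

    A_fit, alpha_fit, beta_fit = math.exp(coef[0]), coef[1], coef[2]
    print(round(A_fit, 3), round(alpha_fit, 3), round(beta_fit, 3))
    ```

    Because the synthetic data lie exactly on the model, the fit recovers the assumed exponents; with real crew data the same fit yields the estimated elasticities.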

  16. Modeling urbanized watershed flood response changes with distributed hydrological model: key hydrological processes, parameterization and case studies

    NASA Astrophysics Data System (ADS)

    Chen, Y.

    2017-12-01

    Urbanization has been the dominant global development trend of the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings, but it also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but studying this effect quantitatively still faces great challenges. For example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in an urbanized or urbanizing watershed. The Pearl River Delta area has undergone some of the most rapid urbanization observed in China over the past decades, and dozens of highly urbanized watersheds have appeared there. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the flood hydrological processes of highly urbanized watersheds in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into the hydrological processes and informs parameter optimization. On this basis, the model is set up in the Songmushan watershed, where observed hydrological data are available. A model parameter optimization and updating strategy is proposed based on remotely sensed LUC types, which optimizes model parameters with the PSO algorithm and updates them as the LUC types change. The model parameters calibrated in the Songmushan watershed are regionalized to the other Pearl River Delta watersheds based on their LUC types.
    A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta were studied for flood response changes due to urbanization, and the results show that urbanization has a substantial impact on watershed flood responses. The peak flow increased severalfold after urbanization, which is much higher than previously reported.
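
    The abstract states that model parameters are optimized with the PSO algorithm. As a rough illustration of how such a calibration loop works (not the Liuxihe model's actual implementation; the objective, bounds, and hyperparameters below are invented for the sketch), a minimal particle swarm optimizer might look like:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: minimizes `objective` over box `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()           # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy "calibration": recover hypothetical parameter values by minimizing squared error.
target = np.array([0.4, 1.2])
best, err = pso(lambda p: ((p - target) ** 2).sum(), bounds=[(0, 2), (0, 2)])
```

    In a real calibration, `objective` would run the hydrological model with candidate parameters and return an error measure against observed flood hydrographs.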

  17. Global Sensitivity Applied to Dynamic Combined Finite Discrete Element Methods for Fracture Simulation

    NASA Astrophysics Data System (ADS)

    Godinez, H. C.; Rougier, E.; Osthus, D.; Srinivasan, G.

    2017-12-01

    Fracture propagation plays a key role in a number of applications of interest to the scientific community. From dynamic fracture processes such as spall and fragmentation in metals to the detection of gas flow in static fractures in rock and the subsurface, the dynamics of fracture propagation is important to various engineering and scientific disciplines. In this work we apply a global sensitivity analysis to the Hybrid Optimization Software Suite (HOSS), a multi-physics software tool based on the combined finite-discrete element method that is used to describe material deformation and failure (i.e., fracture and fragmentation) under a number of user-prescribed boundary conditions. We explore the sensitivity of HOSS to various model parameters that influence how fractures propagate through a material of interest. These parameters control the softening curve on which the model relies to determine fractures within each element of the mesh, as well as other internal parameters that influence fracture behavior. The sensitivity method we apply is the Fourier Amplitude Sensitivity Test (FAST), a global sensitivity method, which we use to explore how each parameter influences the modeled fractures and to determine the key model parameters with the most impact on the model. We present several sensitivity experiments for different combinations of model parameters and compare against experimental data for verification.
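
    To make the FAST idea concrete, here is a simplified first-order FAST sketch (not the HOSS workflow): each parameter is driven along a search curve at its own integer frequency, and the Fourier energy of the model output at that frequency's harmonics, relative to the total variance, estimates the parameter's first-order sensitivity index. The toy model and frequencies are our own choices.

```python
import numpy as np

def fast_first_order(model, omegas, n=10001, harmonics=4):
    """Simplified Fourier Amplitude Sensitivity Test (first-order indices).
    `omegas`: one integer driving frequency per parameter (chosen to avoid
    overlapping harmonics)."""
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Search-curve sampling maps s onto each parameter in [0, 1].
    X = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
    Y = model(X)
    # Fourier coefficients of the output along the search curve.
    A = lambda w: (2.0 / n) * np.sum(Y * np.cos(w * s))
    B = lambda w: (2.0 / n) * np.sum(Y * np.sin(w * s))
    total_var = np.var(Y)
    Si = []
    for w in omegas:
        d = sum(A(p * w) ** 2 + B(p * w) ** 2 for p in range(1, harmonics + 1)) / 2.0
        Si.append(d / total_var)
    return np.array(Si)

# Toy model: Y = 4*x0 + 0.5*x1, so x0 should dominate the output variance.
Si = fast_first_order(lambda X: 4 * X[0] + 0.5 * X[1], omegas=[11, 35])
```

    For production use, a library such as SALib implements the full FAST sampling and analysis, including proper frequency selection.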

  18. Comparative fiber evaluation of the mesdan aqualab microwave moisture measurement instrument

    USDA-ARS?s Scientific Manuscript database

    Moisture is a key cotton fiber parameter, as it can impact fiber quality and the processing of cotton fiber. The Mesdan Aqualab is a microwave-based fiber moisture measurement instrument for samples of moderate size. A program was implemented to determine the capabilities of the Aqual...

  19. Cotton micronaire measurements by small portable near infrared (nir) analyzers

    USDA-ARS?s Scientific Manuscript database

    A key quality and processing parameter for cotton fiber is micronaire, which is a function of the fiber’s maturity and fineness. Near Infrared (NIR) spectroscopy has previously shown the ability to measure micronaire, primarily in the laboratory and using large, research-grade laboratory NIR instru...

  20. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
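
    The paper's point that dynamic imaging is equivalent to static imaging of a linear light source can be illustrated numerically (this is our own sketch, not the paper's analytical model): integrate a moving Gaussian point spread function over the exposure and take the intensity-weighted centroid, which for uniform motion sits at the midpoint of the smear.

```python
import numpy as np

def smeared_spot(x0, y0, vx, vy, t_exp, sigma, grid=64, steps=200):
    """Numerically integrate a moving Gaussian PSF over the exposure time,
    producing the static image of a line-segment source (illustrative)."""
    yy, xx = np.mgrid[0:grid, 0:grid].astype(float)
    img = np.zeros((grid, grid))
    for t in np.linspace(0, t_exp, steps):
        cx, cy = x0 + vx * t, y0 + vy * t
        img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return img / steps

def centroid(img):
    """Intensity-weighted centroid of an image."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    m = img.sum()
    return (xx * img).sum() / m, (yy * img).sum() / m

# A spot starting at (20, 32) drifting 16 px in x during the exposure;
# the ideal centroid is the segment midpoint (28, 32).
img = smeared_spot(20.0, 32.0, vx=16.0, vy=0.0, t_exp=1.0, sigma=1.5)
cx, cy = centroid(img)
```

    The interesting errors analyzed in the paper arise once pixel noise, quantization, and windowing are added on top of this ideal geometry.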

  1. Analysis of digital communication signals and extraction of parameters

    NASA Astrophysics Data System (ADS)

    Al-Jowder, Anwar

    1994-12-01

    The signal classification performance of four types of electronics support measure (ESM) communications detection systems is compared from the standpoint of the unintended receiver (interceptor). Typical digital communication signals considered include binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), frequency shift keying (FSK), and on-off keying (OOK). The analysis emphasizes the use of available signal processing software. Detection methods compared include broadband energy detection, FFT-based narrowband energy detection, and two correlation methods which employ the fast Fourier transform (FFT). The correlation methods utilize modified time-frequency distributions, one of which is based on the Wigner-Ville distribution (WVD). Gaussian white noise is added to the signal to simulate various signal-to-noise ratios (SNRs).
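
    A minimal sketch of one of the methods mentioned, FFT-based narrowband energy detection, on a simulated BPSK signal in white Gaussian noise (the rates, bandwidth choice, and threshold statistic are our own illustrative choices, not the thesis's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, fc, rb, n = 8000.0, 1000.0, 250.0, 4096   # sample rate, carrier, bit rate, length

# BPSK: random +/-1 bits modulate a carrier with rectangular pulses.
t = np.arange(n) / fs
bits = rng.choice([-1.0, 1.0], size=int(np.ceil(n * rb / fs)))
signal = np.repeat(bits, int(fs / rb))[:n] * np.cos(2 * np.pi * fc * t)
noise = rng.normal(0, 1.0, n)                 # white Gaussian noise

def band_energy(x, f_lo, f_hi):
    """FFT-based narrowband energy detector: fraction of spectral energy
    falling inside the candidate signal band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[(freqs >= f_lo) & (freqs <= f_hi)].sum() / spec.sum()

# Detector statistic with the signal present vs. noise only.
present = band_energy(signal + noise, fc - 2 * rb, fc + 2 * rb)
absent = band_energy(noise, fc - 2 * rb, fc + 2 * rb)
```

    A detection decision would compare the statistic to a threshold set from the noise-only distribution for a desired false-alarm rate.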

  2. Technique for Determination of Rational Boundaries in Combining Construction and Installation Processes Based on Quantitative Estimation of Technological Connections

    NASA Astrophysics Data System (ADS)

    Gusev, E. V.; Mukhametzyanov, Z. R.; Razyapov, R. V.

    2017-11-01

    The problems with existing methods for determining which technologically interlinked construction processes and activities can be combined are considered under the modern conditions of constructing various facilities. The necessity of identifying common parameters that characterize the nature of the interaction of all technology-related construction and installation processes and activities is shown. Research into the technologies of construction and installation processes for buildings and structures was conducted with the goal of determining a common parameter for evaluating the relationship between technologically interconnected processes and construction works. The result of this research was a quantitative evaluation of the interaction of construction and installation processes and activities: the minimum technologically necessary volume of the preceding process that allows one to plan and organize the execution of a subsequent, technologically interconnected process. This quantitative evaluation is used as the basis for calculating the optimum range over which processes and activities can be combined. The calculation method is based on graph theory. The authors applied a generic characterization parameter to reveal the technological links between construction and installation processes, and the proposed technique has adaptive properties that are key to its wide use in forming organizational decisions. The article also describes the practical significance of the developed technique.

  3. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  4. Self-assembly kinetics of microscale components: A parametric evaluation

    NASA Astrophysics Data System (ADS)

    Carballo, Jose M.

    The goal of the present work is to develop and evaluate a parametric model of a basic microscale self-assembly (SA) interaction that provides scaling predictions of process rates as a function of key process variables. At the microscale, assembly by "grasp and release" is generally challenging. Recent research efforts have proposed adapting nanoscale self-assembly processes to the microscale. SA offers the potential for reduced equipment cost and increased throughput by harnessing attractive forces (most commonly capillary) to spontaneously assemble components. However, there are challenges to implementing microscale SA as a commercial process: the existing lack of design tools prevents simple process optimization. Previous efforts have each characterized a specific aspect of the SA process, but existing microscale SA models do not characterize the inter-component interactions. All existing models have simplified the outcome of SA interactions to an experimentally derived value specific to a particular configuration, instead of evaluating that outcome as a function of component-level parameters (such as speed, geometry, bonding energy, and direction). The present study parameterizes the outcome of interactions and evaluates the effect of key parameters, closing the gap between existing microscale SA models and adding a key piece toward a complete design tool for general microscale SA process modeling. First, this work proposes a simple model for the probability of assembly of basic SA interactions, where a basic SA interaction is defined as the event in which a single part arrives at an assembly site. The model describes the probability of assembly as a function of kinetic energy, binding energy, orientation, and incidence angle for the component and the assembly site. Secondly, an experimental SA system was designed and implemented to create individual SA interactions while controlling process parameters independently.
    SA experiments measured the outcome of SA interactions while studying the independent effects of each parameter. As a first step towards a complete scaling model, experiments were performed to evaluate the effects of part geometry and part travel direction under low kinetic energy conditions. Experimental results show minimal dependence of assembly yield on the incidence angle of the parts, and significant effects induced by changes in part geometry. These results indicate that SA could be modeled as an energy-based process because path-dependence effects are small. Assembly probability is linearly related to the orientation probability; the proportionality constant is based on the area fraction of the sites with an amplification factor, which accounts for the ability of capillary forces to align parts with only very small areas of contact when the parts have low kinetic energy. The results provide new insight into SA interactions, and the present study is a key step towards completing a basic model of a general SA process. Moreover, the outcome of this work can complement existing SA process models to create a complete design tool for microscale SA systems. In addition to the SA experiments, Monte Carlo simulations of experimental part-site interactions were conducted. This study confirmed that a major contributor to experimental variation is the stochastic nature of SA interactions combined with the limited sample size of the experiments. Furthermore, the simulations serve as a tool for defining an optimum sampling strategy to minimize uncertainty in future SA experiments.
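
    The stated finding, assembly probability linear in orientation probability with a slope set by the site area fraction times an amplification factor, can be written as a one-line relation. The function and parameter names below are our own labels for the quantities the abstract describes, not the author's notation, and the numbers are invented:

```python
def assembly_probability(p_orientation, area_fraction, amplification, p_max=1.0):
    """Illustrative form of the reported relation: P(assembly) scales linearly
    with the orientation probability, with slope = area fraction of the sites
    times a capillary amplification factor, capped at p_max."""
    return min(p_max, amplification * area_fraction * p_orientation)

# Example: sites cover 10% of the substrate, amplification factor 4,
# parts correctly oriented half the time.
p = assembly_probability(p_orientation=0.5, area_fraction=0.10, amplification=4.0)
```

    The cap at `p_max` reflects that a probability cannot exceed one when the amplified slope is large.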

  5. Excitonic processes at organic heterojunctions

    NASA Astrophysics Data System (ADS)

    He, ShouJie; Lu, ZhengHong

    2018-02-01

    Understanding excitonic processes at organic heterojunctions is crucial for development of organic semiconductor devices. This article reviews recent research on excitonic physics that involve intermolecular charge transfer (CT) excitons, and progress on understanding relationships between various interface energy levels and key parameters governing various competing interface excitonic processes. These interface excitonic processes include radiative exciplex emission, nonradiative recombination, Auger electron emission, and CT exciton dissociation. This article also reviews various device applications involving interface CT excitons, such as organic light-emitting diodes (OLEDs), organic photovoltaic cells, organic rectifying diodes, and ultralow-voltage Auger OLEDs.

  6. Emergency medical services key performance measurement in Asian cities.

    PubMed

    Rahman, Nik Hisamuddin; Tanaka, Hideharu; Shin, Sang Do; Ng, Yih Yng; Piyasuwankul, Thammapad; Lin, Chih-Hao; Ong, Marcus Eng Hock

    2015-01-01

    One of the key principles in the recommended standards is that emergency medical service (EMS) providers should continuously monitor the quality and safety of their services. This requires service providers to implement performance monitoring using appropriate and relevant measures, including key performance indicators. In Asia, EMS systems are at different developmental phases and levels of maturity, which creates difficulty in benchmarking or assessing the quality of EMS performance across the region. An attempt was made to compare an EMS performance index based on structure, process, and outcome analysis. The data were collected through the Pan-Asian Resuscitation Outcome Study (PAROS) from several Asian cities, namely Tokyo, Osaka, Singapore, Bangkok, Kuala Lumpur, Taipei, and Seoul. The included parameters were broadly divided into structure, process, and outcome measurements. The data were collected by the site investigators in each city and keyed into an electronic web-based data form secured by username and password. Generally, there seems to be more uniformity in EMS performance parameters among the more developed EMS systems. The major problem for EMS agencies in the cities of developing countries such as Bangkok and Kuala Lumpur is inadequate or unavailable data pertaining to EMS performance. There is non-uniformity in EMS performance measurement across the Asian cities, which makes EMS performance index comparison and benchmarking difficult. Hopefully, in the future, collaborative efforts such as the PAROS networking group will further enhance standardization in EMS performance reporting across the region.

  7. Influence of processing parameters on pore structure of 3D porous chitosan-alginate polyelectrolyte complex scaffolds.

    PubMed

    Florczyk, Stephen J; Kim, Dae-Joon; Wood, David L; Zhang, Miqin

    2011-09-15

    Fabrication of porous polymeric scaffolds with controlled structure can be challenging. In this study, we investigated the influence of key experimental parameters on the structures and mechanical properties of resultant porous chitosan-alginate (CA) polyelectrolyte complex (PEC) scaffolds, and on proliferation of MG-63 osteoblast-like cells, targeted at bone tissue engineering. We demonstrated that the porous structure is largely affected by the solution viscosity, which can be regulated by the acetic acid and alginate concentrations. We found that the CA PEC solutions with viscosity below 300 Pa.s yielded scaffolds of uniform pore structure and that more neutral pH promoted more complete complexation of chitosan and alginate, yielding stiffer scaffolds. CA PEC scaffolds produced from solutions with viscosities below 300 Pa.s also showed enhanced cell proliferation compared with other samples. By controlling the key experimental parameters identified in this study, CA PEC scaffolds of different structures can be made to suit various tissue engineering applications. Copyright © 2011 Wiley Periodicals, Inc.

  8. A Novel Method for Measuring the Diffusion, Partition and Convective Mass Transfer Coefficients of Formaldehyde and VOC in Building Materials

    PubMed Central

    Xiong, Jianyin; Huang, Shaodan; Zhang, Yinping

    2012-01-01

    The diffusion coefficient (D_m) and the material/air partition coefficient (K) are two key parameters characterizing the sorption behavior of formaldehyde and volatile organic compounds (VOCs) in building materials. By exploiting the sorption process in an airtight chamber, this paper proposes a novel method to measure these two key parameters, as well as the convective mass transfer coefficient (h_m). Compared to traditional methods, it has the following merits: (1) K, D_m and h_m can be obtained simultaneously, which makes the method convenient to use; (2) it is time-saving, as just one sorption process in the airtight chamber is required; (3) the determination of h_m is based on the formaldehyde and VOC concentration data in the test chamber rather than the generally used empirical correlations obtained from the heat and mass transfer analogy, and thus is more accurate, which can be regarded as a significant improvement. The method is applied to determine the three parameters from experimental data in the literature, and good results are obtained, which validates its effectiveness. The new method also provides a potential pathway for measuring the h_m of semi-volatile organic compounds (SVOCs) from that of VOCs. PMID:23145156
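
    The general idea of extracting parameters from a single closed-chamber sorption run can be sketched with a deliberately simplified lumped model (the paper uses a full diffusion model; the first-order decay, units, volumes, and values below are hypothetical): fit the chamber concentration decay, then back out a partition coefficient from a mass balance.

```python
import numpy as np
from scipy.optimize import curve_fit

def chamber_decay(t, c_eq, k):
    """First-order approach to sorption equilibrium in a sealed chamber
    (a lumped simplification, not the paper's diffusion model)."""
    return c_eq + (c0 - c_eq) * np.exp(-k * t)

# Synthetic airtight-chamber data (hypothetical units and values).
c0 = 100.0                                   # initial chamber concentration
t = np.linspace(0, 24, 49)                   # hours
rng = np.random.default_rng(2)
obs = chamber_decay(t, 35.0, 0.30) + rng.normal(0, 0.5, t.size)

# Recover the equilibrium concentration and rate constant from the decay curve.
(c_eq_fit, k_fit), _ = curve_fit(chamber_decay, t, obs, p0=[50.0, 0.1])

# Partition coefficient from a closed-chamber mass balance (illustrative):
v_air, v_material = 0.05, 0.001              # m^3, hypothetical volumes
K = (c0 - c_eq_fit) * v_air / (c_eq_fit * v_material)
```

    In the paper's method the fitted model is the diffusion equation in the material coupled to the chamber air balance, which is what lets D_m and h_m be separated rather than lumped into a single rate constant.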

  9. Final Report: Superconducting Joints Between (RE)Ba 2Cu 3O 7-x Coated Conductors via Electric Field Assisted Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, Justin

    Here we report the results of a project aimed at developing a fully superconducting joint between two REBCO coated conductors using electric field processing (EFP). Due to a reduction in the budget and time period of this contract, we reduced the project scope and focused first on the key scientific issues for forming a strong bond between conductors, and subsequently on improving through-the-joint transport. A modified timeline and task list is shown in Table 1, summarizing accomplishments to date. In the first period, we accomplished initial surface characterization as well as rounds of EFP experiments to begin to understand the processing parameters that produce well-bonded tapes. In the second phase, we explored the effects of two fundamental EFP parameters, voltage and pressure, and the limitations they place on the process. In the third phase, we achieved superconducting joints and established baseline characteristics of both the bonding process and the types of tapes best suited to it. Finally, we investigated some of the kinetics-related parameters that appeared to inhibit joint quality and performance.

  10. Interplay between discharge physics, gas phase chemistry and surface processes in hydrocarbon plasmas

    NASA Astrophysics Data System (ADS)

    Hassouni, Khaled

    2013-09-01

    In this paper we present two examples that illustrate two different contexts for the interplay between plasma-surface interaction processes, discharge physics, and gas phase chemistry in hydrocarbon discharges. In the first example we address the case of diamond deposition processes and illustrate how a detailed investigation of the discharge physics, collisional processes, and transport phenomena in the plasma phase makes it possible to accurately predict the key local parameters, i.e., species densities at the growing substrate, as a function of the macroscopic process parameters, thus allowing for precise control of the diamond deposition process. In the second example, we illustrate how the interaction between a pristine rare-gas discharge and a carbon (graphite) electrode induces a dramatic change in the nature of the discharge, i.e., its composition, ionization kinetics, charge equilibrium, etc., through molecular growth and clustering processes, solid particle formation, and dusty plasma generation. Work done in collaboration with Alix Gicquel, Francois Silva, Armelle Michau, Guillaume Lombardi, Xavier Bonnin, Xavier Duten, CNRS, Universite Paris 13.

  11. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    NASA Astrophysics Data System (ADS)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.
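
    The qualitative findings, colonization by contact with planktonic biomass and degradation controlled by the relative surface area of shrinking particles, can be caricatured in a toy two-state ODE model (this is our own illustration in the spirit of the abstract, not the paper's model; all rate constants are invented, and halving particle size is emulated by doubling the attachment rate):

```python
from scipy.integrate import solve_ivp

def rates(t, y, k_att, k_hyd, c0):
    """Toy model: planktonic biomass (b_liq) colonizes the cellulose surface;
    the attached biofilm (b_film) drives hydrolysis of cellulose (c).
    Relative surface area scales as (c/c0)^(2/3) for shrinking particles."""
    c, b_film, b_liq = y
    area = (max(c, 0.0) / c0) ** (2.0 / 3.0)
    attach = k_att * area * b_liq              # colonization by contact
    return [-k_hyd * b_film * area, attach, -attach]

y0 = [1.0, 0.01, 0.5]                          # cellulose, biofilm, liquid biomass
sol_coarse = solve_ivp(rates, (0, 10), y0, args=(0.2, 0.3, 1.0), max_step=0.1)
# Smaller particles -> more specific surface area, emulated here as higher k_att.
sol_fine = solve_ivp(rates, (0, 10), y0, args=(0.4, 0.3, 1.0), max_step=0.1)

remaining_coarse = sol_coarse.y[0, -1]
remaining_fine = sol_fine.y[0, -1]
```

    Consistent with the abstract's conclusion, the run with greater effective surface area degrades more cellulose over the same interval.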

  12. Formability Analysis of Bamboo Fabric Reinforced Poly (Lactic) Acid Composites

    PubMed Central

    M. R., Nurul Fazita; Jayaraman, Krishnan; Bhattacharyya, Debes

    2016-01-01

    Poly (lactic) acid (PLA) composites have made their way into various applications that may require thermoforming to produce 3D shapes. Wrinkles are common in many forming processes and identification of the forming parameters to prevent them in the useful part of the mechanical component is a key consideration. Better prediction of such defects helps to significantly reduce the time required for a tooling design process. The purpose of the experiment discussed here is to investigate the effects of different test parameters on the occurrence of deformations during sheet forming of double curvature shapes with bamboo fabric reinforced-PLA composites. The results demonstrated that the domes formed using hot tooling conditions were better in quality than those formed using cold tooling conditions. Wrinkles were more profound in the warp direction of the composite domes compared to the weft direction. Grid Strain Analysis (GSA) identifies the regions of severe deformation and provides useful information regarding the optimisation of processing parameters. PMID:28773662

  13. A modal parameter extraction procedure applicable to linear time-invariant dynamic systems

    NASA Technical Reports Server (NTRS)

    Kurdila, A. J.; Craig, R. R., Jr.

    1985-01-01

    Modal analysis has emerged as a valuable tool in many phases of the engineering design process. Complex vibration and acoustic problems in new designs can often be remedied through use of the method. Moreover, the technique has been used to enhance the conceptual understanding of structures by serving to verify analytical models. A new modal parameter estimation procedure is presented. The technique is applicable to linear, time-invariant systems and accommodates multiple input excitations. In order to provide a background for the derivation of the method, some modal parameter extraction procedures currently in use are described. Key features implemented in the new technique are elaborated upon.

  14. Curie-Montgolfiere Planetary Explorers

    NASA Astrophysics Data System (ADS)

    Taylor, Chris Y.; Hansen, Jeremiah

    2007-01-01

    Hot-air balloons, also known as Montgolfiere balloons, powered by heat from radioisotope decay are a potentially useful tool for exploring planetary atmospheres and augmenting the capabilities of other exploration technologies. This paper describes the governing physical equations and identifies the key engineering parameters that drive radioisotope-powered balloon performance. These parameters include envelope strength-to-weight, envelope thermal conductivity, heater power-to-weight, heater temperature, and balloon shape. The design space for these parameters is shown for varying atmospheric compositions to illustrate the performance needed to build functioning ``Curie-Montgolfiere'' balloons for various planetary atmospheres. Methods to ease the process of Curie-Montgolfiere conceptual design and sizing are also introduced.
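
    The core physics behind any hot-gas balloon sizing is the buoyancy balance, which follows from the ideal gas law. A minimal sketch (Earth-air defaults; the envelope volume and temperatures are invented example values, and atmospheric composition enters through the molar mass):

```python
def montgolfiere_lift(volume, t_ambient, t_gas, p_ambient=101325.0,
                      molar_mass=0.02897, g=9.81):
    """Net buoyant lift (N) of a hot-gas balloon from the ideal gas law.
    Swap molar_mass, p_ambient, t_ambient, and g for other atmospheres."""
    R = 8.314                                    # J/(mol K)
    rho_amb = p_ambient * molar_mass / (R * t_ambient)
    rho_gas = p_ambient * molar_mass / (R * t_gas)
    return volume * (rho_amb - rho_gas) * g

# 1000 m^3 envelope, 288 K ambient air, gas heated to 360 K:
lift_n = montgolfiere_lift(1000.0, 288.0, 360.0)
```

    The heater-side constraint couples in separately: the radioisotope power must sustain the gas temperature against heat loss through the envelope, which is where envelope thermal conductivity and heater power-to-weight enter the design space.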

  15. Information processing in dendrites I. Input pattern generalisation.

    PubMed

    Gurney, K N

    2001-10-01

    In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron-the multi-cube unit (MCU)-which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric-the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.
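
    The architectural idea, independent sub-units applying local non-linear Boolean functions whose outputs are summed and thresholded at the soma, can be sketched directly (this is our reading of the abstract; the function and variable names are our own, not the paper's MCU formalism):

```python
from itertools import product

def tlu(weights, theta):
    """Threshold logic unit over Boolean inputs."""
    return lambda x: int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

def mcu(subunit_inputs, subunit_fns, theta):
    """Sketch of a multi-cube-unit-style model: independent sub-units apply
    local non-linear Boolean functions to disjoint input groups, and the
    soma thresholds the sum of the sub-unit outputs."""
    def f(x):
        s = sum(fn(tuple(x[i] for i in idx))
                for idx, fn in zip(subunit_inputs, subunit_fns))
        return int(s >= theta)
    return f

# Two "dendritic" sub-units, each computing AND of its input pair; the cell
# fires if at least one sub-unit is active. The resulting function
# (x0 AND x1) OR (x2 AND x3) is not computable by any single TLU.
and_pair = tlu([1, 1], 2)
unit = mcu([(0, 1), (2, 3)], [and_pair, and_pair], theta=1)
truth_table = {x: unit(x) for x in product((0, 1), repeat=4)}
```

    The example makes the paper's premise concrete: local non-linearity inside sub-units enlarges the class of Boolean functions beyond what a single linear threshold can express.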

  16. Atomic layer deposition for fabrication of HfO2/Al2O3 thin films with high laser-induced damage thresholds.

    PubMed

    Wei, Yaowei; Pan, Feng; Zhang, Qinghua; Ma, Ping

    2015-01-01

    Previous research on the laser damage resistance of thin films deposited by atomic layer deposition (ALD) is rare. In this work, the ALD process for thin film growth was investigated using different process parameters, such as precursor type and pulse duration. The laser-induced damage threshold (LIDT) was measured as a key property of thin films used as laser system components. The causes of film damage were also investigated. The LIDTs of thin films deposited with the improved process parameters reached a higher level than previously measured. Specifically, the LIDT of the Al2O3 thin film reached 40 J/cm(2), and the LIDT of the HfO2/Al2O3 anti-reflection film reached 18 J/cm(2), the highest values reported for ALD single-layer and anti-reflection films. In addition, it was shown that the LIDT could be improved by further tuning the process parameters. All results show that ALD is an effective deposition technique for fabricating thin film components for high-power laser systems.

  17. A simulation to study the feasibility of improving the temporal resolution of LAGEOS geodynamic solutions by using a sequential process noise filter

    NASA Technical Reports Server (NTRS)

    Hartman, Brian Davis

    1995-01-01

    A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. 
A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions are detailed, and conclusions drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
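
    The core contrast in the study, estimating a drifting parameter as a constant versus as correlated process noise, can be shown with a minimal scalar Kalman filter (an illustrative sketch, not the LAGEOS estimator; the drift signal, noise levels, and q/r values are invented):

```python
import numpy as np

def scalar_kalman(obs, r, q, x0=0.0, p0=10.0):
    """Sequential filter for a scalar parameter observed directly.
    q = 0 treats the parameter as a constant; q > 0 models it as a
    random walk (process noise), letting the estimate track drift."""
    x, p, est = x0, p0, []
    for z in obs:
        p = p + q                      # predict: inflate variance by process noise
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # measurement update
        p = (1 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(3)
t = np.arange(400)
truth = 1.0 + 0.5 * np.sin(2 * np.pi * t / 200.0)   # slowly drifting parameter
obs = truth + rng.normal(0, 0.3, t.size)

rmse_static = np.sqrt(np.mean((scalar_kalman(obs, r=0.09, q=0.0) - truth) ** 2))
rmse_noise = np.sqrt(np.mean((scalar_kalman(obs, r=0.09, q=1e-3) - truth) ** 2))
```

    With q = 0 the gain shrinks toward zero and the estimate converges to a constant, so the temporal variation is averaged away; a small process noise keeps the gain open and the estimate follows the drift, mirroring the improvement reported for the geodynamic parameters.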

  18. Parameter regimes for a single sequential quantum repeater

    NASA Astrophysics Data System (ADS)

    Rozpędek, F.; Goodenough, K.; Ribeiro, J.; Kalb, N.; Caprara Vivoli, V.; Reiserer, A.; Hanson, R.; Wehner, S.; Elkouss, D.

    2018-07-01

    Quantum key distribution allows for the generation of a secret key between distant parties connected by a quantum channel such as optical fibre or free space. Unfortunately, the rate of generation of a secret key by direct transmission is fundamentally limited by the distance. This limit can be overcome by the implementation of so-called quantum repeaters. Here, we assess the performance of a specific but very natural setup called a single sequential repeater for quantum key distribution. We offer a fine-grained assessment of the repeater by introducing a series of benchmarks. The benchmarks, which should be surpassed to claim a working repeater, are based on finite-energy considerations, thermal noise and the losses in the setup. In order to boost the performance of the studied repeaters we introduce two methods. The first one corresponds to the concept of a cut-off, which reduces the effect of decoherence during the storage of a quantum state by introducing a maximum storage time. Secondly, we supplement the standard classical post-processing with an advantage distillation procedure. Using these methods, we find realistic parameters for which it is possible to achieve rates greater than each of the benchmarks, guiding the way towards implementing quantum repeaters.
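
    Two quantities behind such benchmarks can be computed in a few lines: the asymptotic BB84 secret-key fraction as a function of the quantum bit error rate, and the secret-key capacity of the pure-loss channel, -log2(1 - eta) (the PLOB bound), which a repeater must surpass. This is a generic sketch, not the paper's full finite-energy analysis; the fibre loss figure and distances are illustrative.

```python
import math

def binary_entropy(p):
    """Binary Shannon entropy h(p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_key_fraction(qber):
    """Asymptotic BB84 secret-key fraction with one-way post-processing:
    r = 1 - 2 h(Q), zero once the QBER is too high."""
    return max(0.0, 1.0 - 2.0 * binary_entropy(qber))

def direct_transmission_bound(length_km, loss_db_per_km=0.2):
    """Secret-key capacity of the pure-loss channel, -log2(1 - eta):
    the direct-transmission benchmark a repeater should beat."""
    eta = 10 ** (-loss_db_per_km * length_km / 10.0)
    return -math.log2(1.0 - eta)

r = bb84_key_fraction(0.05)                   # key fraction at 5% QBER
bound_50 = direct_transmission_bound(50.0)    # bits per channel use
bound_100 = direct_transmission_bound(100.0)
```

    The exponential collapse of the direct-transmission bound with distance is exactly the limitation the sequential repeater, with its cut-off and advantage distillation, is designed to overcome.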

  19. Preliminary comparisons of portable near infrared (nir) instrumentation for laboratory measurements of cotton fiber micronaire

    USDA-ARS?s Scientific Manuscript database

    Micronaire is a key quality and processing parameter for cotton fiber. A program was implemented to determine the capabilities of portable Near Infrared (NIR) instrumentation to monitor cotton fiber micronaire both in the laboratory and in/near the field. Previous evaluations on one NIR unit demon...

  20. Aqueous enzymatic extraction of Moringa oleifera oil.

    PubMed

    Mat Yusoff, Masni; Gordon, Michael H; Ezeh, Onyinye; Niranjan, Keshavan

    2016-11-15

This paper reports on the extraction of Moringa oleifera (MO) oil by using an aqueous enzymatic extraction (AEE) method. The effect of different process parameters on oil recovery was determined by statistical optimization, along with the effect of selected parameters on the formation of its oil-in-water cream emulsions. Within the pre-determined ranges, the use of pH 4.5, a moisture/kernel ratio of 8:1 (w/w), and a 300 strokes/min shaking speed at 40°C for a 1 h incubation time resulted in the highest oil recovery of approximately 70% (g oil/g solvent-extracted oil). These optimized parameters also resulted in a very thin emulsion layer, indicating that only a minute amount of emulsion formed. Zero oil recovery with thick emulsion was observed when the used aqueous phase was re-utilized for another AEE process. The findings suggest that careful selection of AEE parameters is key to high oil recovery with minimum emulsion formation, thereby lowering the load on the de-emulsification step. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Digital Model of Fourier and Fresnel Quantized Holograms

    NASA Astrophysics Data System (ADS)

    Boriskevich, Anatoly A.; Erokhovets, Valery K.; Tkachenko, Vadim V.

Some model schemes of Fourier and Fresnel quantized protective holograms with visual effects are suggested. The condition for an optimum relationship between the quality of the reconstructed images, the coefficient of data reduction for a hologram, and the number of iterations in the hologram reconstruction process has been estimated through computer modeling. A higher protection level is achieved by means of a greater number of both two-dimensional secret keys (more than 2^128) in the form of pseudorandom amplitude and phase encoding matrixes, and one-dimensional encoding key parameters for every image of single-layer or superimposed holograms.

  2. Density of Spray-Formed Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevin M. McHugh; Volker Uhlenwinkel; Nils Ellendr

    2008-06-01

Spray Forming is an advanced materials processing technology that transforms molten metal into a near-net-shape solid by depositing atomized droplets onto a substrate. Depending on the application, the spray-formed material may be used in the as-deposited condition or it may undergo post-deposition processing. Regardless, the density of the as-deposited material is an important issue. Porosity is detrimental because it can significantly reduce strength, toughness, hardness and other properties. While it is not feasible to achieve fully-dense material in the as-deposited state, density greater than 99% of theoretical density is possible if the atomization and impact conditions are optimized. Thermal conditions at the deposit surface and droplet impact angle are key processing parameters that influence the density of the material. This paper examines the factors that contribute to porosity formation during spray forming and illustrates that very high as-deposited density is achieved by optimizing processing parameters.

  3. Assaying Mitochondrial Respiration as an Indicator of Cellular Metabolism and Fitness.

    PubMed

    Smolina, Natalia; Bruton, Joseph; Kostareva, Anna; Sejersen, Thomas

    2017-01-01

    Mitochondrial respiration is the most important generator of cellular energy under most circumstances. It is a process of energy conversion of substrates into ATP. The Seahorse equipment allows measuring oxygen consumption rate (OCR) in living cells and estimates key parameters of mitochondrial respiration in real-time mode. Through use of mitochondrial inhibitors, four key mitochondrial respiration parameters can be measured: basal, ATP production-linked, maximal, and proton leak-linked OCR. This approach requires application of mitochondrial inhibitors-oligomycin to block ATP synthase, FCCP-to make the inner mitochondrial membrane permeable for protons and allow maximum electron flux through the electron transport chain, and rotenone and antimycin A-to inhibit complexes I and III, respectively. This chapter describes the protocol of OCR assessment in the culture of primary myotubes obtained upon satellite cell fusion.
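    The arithmetic behind the four parameters is simple subtraction of plateau OCR readings taken before and after each inhibitor. A minimal sketch, using illustrative values rather than data from the chapter:

```python
def mito_parameters(basal_ocr, post_oligo, post_fccp, post_rot_aa):
    """Derive the four respiration parameters from plateau OCR readings
    (all in pmol O2/min).  post_rot_aa is the non-mitochondrial OCR left
    after rotenone + antimycin A block complexes I and III."""
    non_mito = post_rot_aa
    return {
        "basal": basal_ocr - non_mito,         # basal respiration
        "ATP-linked": basal_ocr - post_oligo,  # portion blocked by oligomycin
        "maximal": post_fccp - non_mito,       # FCCP-uncoupled maximum
        "proton leak": post_oligo - non_mito,  # O2 consumed without making ATP
    }
```

    Spare respiratory capacity, when reported, is the difference between the maximal and basal values.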

  4. Size-related bioconcentration kinetics of hydrophobic chemicals in fish

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sijm, D.T.H.M.; Linde, A. van der

    1994-12-31

Uptake and elimination of hydrophobic chemicals by fish can be regarded as passive diffusive transport processes. Diffusion coefficients, lipid/water partitioning, diffusion pathlengths, concentration gradients and surface exchange areas are key parameters describing this bioconcentration process. In the present study two of these parameters were studied: the influence of lipid/water partitioning was studied by using hydrophobic chemicals of different hydrophobicity, and the surface exchange area by using different sizes of fish. By using one species of fish it was assumed that all other parameters were kept constant. Seven age classes of fish were exposed to a series of hydrophobic chemicals for five days, which was followed by a depuration phase lasting up to 6 months. Bioconcentration parameters, such as uptake and elimination rate constants, and bioconcentration factors were determined. Uptake of the hydrophobic compounds was compared to that of oxygen. Uptake and elimination rates were compared to weight and estimated (gill) exchange areas. The role of weight and its implications for extrapolations of bioconcentration parameters to other species and sizes will be discussed.
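    The uptake and elimination rate constants referred to here are usually summarized in a one-compartment model, in which the diffusion and partitioning parameters collapse into k1 and k2. A minimal sketch (rate constants illustrative, not from the study):

```python
import math

def fish_concentration(t, c_water, k1, k2):
    """One-compartment bioconcentration: dCf/dt = k1*Cw - k2*Cf, Cf(0) = 0.
    k1: uptake rate constant (L/kg/day), k2: elimination rate constant (1/day).
    Closed-form solution for constant water concentration c_water."""
    return c_water * (k1 / k2) * (1.0 - math.exp(-k2 * t))

def kinetic_bcf(k1, k2):
    """Kinetic bioconcentration factor: the steady-state ratio Cf/Cw."""
    return k1 / k2
```

    The depuration half-life is ln(2)/k2, which is why a low elimination rate constant for very hydrophobic chemicals can require depuration phases lasting months, as in the study.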

  5. Phenological Parameters Estimation Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney D.; Ross, Kenton W.; Spruce, Joseph P.; Smoot, James C.; Ryan, Robert E.; Gasser, Gerald E.; Prados, Donald L.; Vaughan, Ronald D.

    2010-01-01

    The Phenological Parameters Estimation Tool (PPET) is a set of algorithms implemented in MATLAB that estimates key vegetative phenological parameters. For a given year, the PPET software package takes in temporally processed vegetation index data (3D spatio-temporal arrays) generated by the time series product tool (TSPT) and outputs spatial grids (2D arrays) of vegetation phenological parameters. As a precursor to PPET, the TSPT uses quality information for each pixel of each date to remove bad or suspect data, and then interpolates and digitally fills data voids in the time series to produce a continuous, smoothed vegetation index product. During processing, the TSPT displays NDVI (Normalized Difference Vegetation Index) time series plots and images from the temporally processed pixels. Both the TSPT and PPET currently use moderate resolution imaging spectroradiometer (MODIS) satellite multispectral data as a default, but each software package is modifiable and could be used with any high-temporal-rate remote sensing data collection system that is capable of producing vegetation indices. Raw MODIS data from the Aqua and Terra satellites is processed using the TSPT to generate a filtered time series data product. The PPET then uses the TSPT output to generate phenological parameters for desired locations. PPET output data tiles are mosaicked into a Conterminous United States (CONUS) data layer using ERDAS IMAGINE, or equivalent software package. Mosaics of the vegetation phenology data products are then reprojected to the desired map projection using ERDAS IMAGINE
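    The two stages described above — gap-filling a vegetation index time series, then extracting phenological parameters from the smoothed curve — can be sketched in a much simplified form. The threshold-crossing rule below is one common style of start/end-of-season estimator, not necessarily the one PPET implements:

```python
def fill_voids(ndvi):
    """Linearly interpolate missing (None) samples in a 1-D NDVI series,
    a simplified stand-in for the TSPT gap-filling step.  Assumes the
    series starts and ends with valid samples."""
    out = list(ndvi)
    n = len(out)
    for i in range(n):
        if out[i] is None:
            lo = i - 1                      # previous valid (already filled) sample
            hi = i
            while hi < n and out[hi] is None:
                hi += 1                     # next valid sample
            frac = (i - lo) / (hi - lo)
            out[i] = out[lo] + frac * (out[hi] - out[lo])
    return out

def season_bounds(ndvi, frac=0.2):
    """Start/end of season: first/last index where NDVI exceeds
    base + frac * seasonal amplitude (an illustrative threshold rule)."""
    base, peak = min(ndvi), max(ndvi)
    thresh = base + frac * (peak - base)
    above = [i for i, v in enumerate(ndvi) if v > thresh]
    return above[0], above[-1]
```

    A real implementation would operate per pixel over the 3D spatio-temporal arrays and return 2D grids of these parameters.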

  6. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, the spatial gradients caused by diffusion have become assessable in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds in contrast to local approximation methods.
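    The structure of such an estimation problem can be sketched in a few lines: a forward model, a log-normal measurement likelihood, and a scan over the parameter. Here a crude 1-D grid search stands in for PDE-constrained optimization, and the forward model is the textbook 1-D point-source diffusion solution rather than the paper's haptotaxis model:

```python
import math

def model(x, t, D):
    """Point-source solution of the 1-D diffusion equation u_t = D u_xx."""
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

def neg_log_lik(D, data, sigma=0.1):
    """Negative log-likelihood under multiplicative log-normal noise:
    log y_i ~ N(log u(x_i, t_i; D), sigma^2), as in the paper's error model."""
    nll = 0.0
    for x, t, y in data:
        r = math.log(y) - math.log(model(x, t, D))
        nll += 0.5 * (r / sigma) ** 2 + math.log(y * sigma * math.sqrt(2 * math.pi))
    return nll

def fit_D(data, grid):
    """Pick the diffusion coefficient minimizing the NLL on a grid; scanning
    neg_log_lik over this grid is also the idea behind a profile likelihood."""
    return min(grid, key=lambda D: neg_log_lik(D, data))
```

    In the full problem, the likelihood is evaluated over whole images and the forward model requires a PDE solve per parameter value, which is what makes efficient optimization and profiling non-trivial.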

  7. Asymmetric cryptography based on wavefront sensing.

    PubMed

    Peng, Xiang; Wei, Hengzheng; Zhang, Peng

    2006-12-15

    A system of asymmetric cryptography based on wavefront sensing (ACWS) is proposed for the first time to our knowledge. One of the most significant features of the asymmetric cryptography is that a trapdoor one-way function is required and constructed by analogy to wavefront sensing, in which the public key may be derived from optical parameters, such as the wavelength or the focal length, while the private key may be obtained from a kind of regular point array. The ciphertext is generated by the encoded wavefront and represented with an irregular array. In such an ACWS system, the encryption key is not identical to the decryption key, which is another important feature of an asymmetric cryptographic system. The processes of asymmetric encryption and decryption are formulized mathematically and demonstrated with a set of numerical experiments.

  8. Encryption for Remote Control via Internet or Intranet

    NASA Technical Reports Server (NTRS)

    Lineberger, Lewis

    2005-01-01

    A data-communication protocol has been devised to enable secure, reliable remote control of processes and equipment via a collision-based network, while using minimal bandwidth and computation. The network could be the Internet or an intranet. Control is made secure by use of both a password and a dynamic key, which is sent transparently to a remote user by the controlled computer (that is, the computer, located at the site of the equipment or process to be controlled, that exerts direct control over the process). The protocol functions in the presence of network latency, overcomes errors caused by missed dynamic keys, and defeats attempts by unauthorized remote users to gain control. The protocol is not suitable for real-time control, but is well suited for applications in which control latencies up to about 0.5 second are acceptable. The encryption scheme involves the use of both a dynamic and a private key, without any additional overhead that would degrade performance. The dynamic key is embedded in the equipment- or process-monitor data packets sent out by the controlled computer: in other words, the dynamic key is a subset of the data in each such data packet. The controlled computer maintains a history of the last 3 to 5 data packets for use in decrypting incoming control commands. In addition, the controlled computer records a private key (password) that is given to the remote computer. The encrypted incoming command is permuted by both the dynamic and private key. A person who records the command data in a given packet for hostile purposes cannot use that packet after the public key expires (typically within 3 seconds). Even a person in possession of an unauthorized copy of the command/remote-display software cannot use that software in the absence of the password. The use of a dynamic key embedded in the outgoing data makes the central-processing unit overhead very small. 
The use of a National Instruments DataSocket™ (or equivalent) protocol or the User Datagram Protocol makes it possible to obtain reasonably short response times: Typical response times in event-driven control, using packets sized ~300 bytes, are <0.2 second for commands issued from locations anywhere on Earth. The protocol requires that control commands represent absolute values of controlled parameters (e.g., a specified temperature), as distinguished from changes in values of controlled parameters (e.g., a specified increment of temperature). Each command is issued three or more times to ensure delivery in crowded networks. The use of absolute-value commands prevents additional (redundant) commands from causing trouble. Because a remote controlling computer receives "talkback" in the form of data packets from the controlled computer, typically within a time interval ≤1 s, the controlling computer can re-issue a command if network failure has occurred. The controlled computer, the process or equipment that it controls, and any human operator(s) at the site of the controlled equipment or process should be equipped with safety measures to prevent damage to equipment or injury to humans. These features could be a combination of software, external hardware, and intervention by the human operator(s). The protocol is not fail-safe, but by adopting these safety measures as part of the protocol, one makes the protocol a robust means of controlling remote processes and equipment by use of typical office computers via intranets and/or the Internet.
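    The core ideas — a dynamic key embedded in outgoing monitor packets, a shared password, and a history of recent keys to tolerate missed packets — can be sketched as follows. The hash-based keystream, XOR permutation, and `SET ` command prefix are illustrative constructions, not the article's exact scheme:

```python
import hashlib

def keystream(dynamic_key: bytes, password: bytes, n: int) -> bytes:
    """Derive n key bytes from the packet-embedded dynamic key plus the
    shared password (an illustrative SHA-256 counter construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(
            dynamic_key + password + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(command: bytes, dynamic_key: bytes, password: bytes) -> bytes:
    """Permute the command with both the dynamic and the private key."""
    ks = keystream(dynamic_key, password, len(command))
    return bytes(c ^ k for c, k in zip(command, ks))

def decrypt_with_history(blob: bytes, history, password: bytes):
    """The controlled computer keeps the last few dynamic keys and tries
    each one, which tolerates a missed packet on the remote side."""
    for dk in reversed(history):           # newest key first
        cmd = encrypt(blob, dk, password)  # XOR is its own inverse
        if cmd.startswith(b"SET "):        # commands carry absolute values
            return cmd
    return None
```

    Because each dynamic key is only in the history for a few packets, a recorded command blob stops decrypting within seconds, which is the replay protection described above.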

  9. Phase-only asymmetric optical cryptosystem based on random modulus decomposition

    NASA Astrophysics Data System (ADS)

    Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan

    2018-06-01

    We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is presented for effectively improving the capacity to resist various attacks, including the attack of iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be used in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by Fresnel transform can increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.

  10. Advanced multivariate data analysis to determine the root cause of trisulfide bond formation in a novel antibody–peptide fusion

    PubMed Central

    Goldrick, Stephen; Holmes, William; Bond, Nicholas J.; Lewis, Gareth; Kuiper, Marcel; Turner, Richard

    2017-01-01

ABSTRACT Product quality heterogeneities, such as a trisulfide bond (TSB) formation, can be influenced by multiple interacting process parameters. Identifying their root cause is a major challenge in biopharmaceutical production. To address this issue, this paper describes the novel application of advanced multivariate data analysis (MVDA) techniques to identify the process parameters influencing TSB formation in a novel recombinant antibody–peptide fusion expressed in mammalian cell culture. The screening dataset was generated with a high‐throughput (HT) micro‐bioreactor system (Ambr™ 15) using a design of experiments (DoE) approach. The complex dataset was first analyzed through the development of a multiple linear regression model focusing solely on the DoE inputs and identified the temperature, pH and initial nutrient feed day as important process parameters influencing this quality attribute. To further scrutinize the dataset, a partial least squares model was subsequently built incorporating both on‐line and off‐line process parameters and enabled accurate predictions of the TSB concentration at harvest. Process parameters identified by the models to promote and suppress TSB formation were implemented on five 7 L bioreactors and the resultant TSB concentrations were comparable to the model predictions. This study demonstrates the ability of MVDA to enable predictions of the key performance drivers influencing TSB formation that are valid also upon scale‐up. Biotechnol. Bioeng. 2017;114: 2222–2234. © 2017 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:28500668
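    The first, screening-stage model is ordinary multiple linear regression on the DoE inputs. A dependency-free sketch of that step via the normal equations is shown below; the factor values and responses are placeholders, not the study's dataset:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mlr(X, y):
    """Least squares for y = b0 + b1*x1 + ... via the normal equations
    (X'X) b = X'y.  In the study's setting, the columns of X would be the
    DoE inputs (e.g. temperature, pH, initial feed day) and y the TSB level."""
    Xd = [[1.0] + row for row in X]   # prepend intercept column
    p = len(Xd[0])
    XtX = [[sum(r[i] * r[j] for r in Xd) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xd, y)) for i in range(p)]
    return solve(XtX, Xty)
```

    The follow-up partial least squares model plays a different role: it handles the many collinear on-line/off-line variables that plain MLR cannot.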

  11. Analysis on pseudo excitation of random vibration for structure of time flight counter

    NASA Astrophysics Data System (ADS)

    Wu, Qiong; Li, Dapeng

    2015-03-01

Traditional computing methods are inefficient for obtaining the key dynamical parameters of complicated structures. The Pseudo Excitation Method (PEM) is an effective method for calculating random vibration. Owing to the complicated, coupled random vibration during rocket or shuttle launch, a new staged white-noise mathematical model is deduced according to the practical launch environment. This model is applied with PEM to calculate the specific structure of a Time of Flight Counter (ToFC). The power spectral density responses and the relevant dynamic characteristic parameters of the ToFC are obtained in terms of the flight acceptance test level. Considering the stiffness of the fixture structure, random vibration experiments are conducted in three directions for comparison with the revised PEM. The experimental results show that the structure can bear the random vibration caused by launch without damage, and the key dynamical parameters of the ToFC are obtained. The comparative results show that the revised PEM agrees with the random vibration experiments in dynamical parameters and responses; the maximum error is within 9%. The sources of error are analyzed to improve the reliability of the calculation. This research provides an effective method for computing the dynamical characteristic parameters of complicated structures during rocket or shuttle launch.

  12. Containerless processing of undercooled melts

    NASA Technical Reports Server (NTRS)

    Perepezko, J. H.

    1993-01-01

The investigation focused on the control of microstructural evolution in Mn-Al, Fe-Ni, Ni-V, and Au-Pb-Sb alloys through the high undercooling levels provided by containerless processing, and provided fundamental new information on the control of nucleation. Solidification analysis was conducted by means of thermal analysis, x-ray diffraction, and metallographic characterization on samples processed in a laboratory scale drop tube system. The Mn-Al alloy system offers a useful model system with the capability of phase separation on an individual particle basis, thus permitting a more complete understanding of the operative kinetics and the key containerless processing variables. This system provided the opportunity of analyzing the nucleation rate as a function of processing conditions and allowed for the quantitative assessment of the relevant processing parameters. These factors are essential in the development of a containerless processing model which has a predictive capability. Similarly, Ni-V is a model system that was used to study duplex partitionless solidification, which is a structure possible only in high-undercooling solidification processes. Nucleation kinetics for the competing bcc and fcc phases were studied to determine how this structure can develop and the conditions under which it may occur. The Fe-Ni alloy system was studied to identify microstructural transitions with controlled variations in sample size and composition during containerless solidification. This work was extended to develop a microstructure map which delineates regimes of structural evolution and provides a unified analysis of experimental observations. The Au-Pb-Sb system was investigated to characterize the thermodynamic properties of the undercooled liquid phase and to characterize the glass transition under a variety of processing conditions.
By analyzing key containerless processing parameters in a ground based drop tube study, a carefully designed flight experiment may be planned to utilize the extended duration microgravity conditions of orbiting spacecraft.

  13. Advance Preparation in Task-Switching: Converging Evidence from Behavioral, Brain Activation, and Model-Based Approaches

    PubMed Central

    Karayanidis, Frini; Jamadar, Sharna; Ruge, Hannes; Phillips, Natalie; Heathcote, Andrew; Forstmann, Birte U.

    2010-01-01

    Recent research has taken advantage of the temporal and spatial resolution of event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI) to identify the time course and neural circuitry of preparatory processes required to switch between different tasks. Here we overview some key findings contributing to understanding strategic processes in advance preparation. Findings from these methodologies are compatible with advance preparation conceptualized as a set of processes activated for both switch and repeat trials, but with substantial variability as a function of individual differences and task requirements. We then highlight new approaches that attempt to capitalize on this variability to link behavior and brain activation patterns. One approach examines correlations among behavioral, ERP and fMRI measures. A second “model-based” approach accounts for differences in preparatory processes by estimating quantitative model parameters that reflect latent psychological processes. We argue that integration of behavioral and neuroscientific methodologies is key to understanding the complex nature of advance preparation in task-switching. PMID:21833196

  14. Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling.

    PubMed

    Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M

    2012-04-01

    Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression of cellulosic enzymes by the filamentous fungus Trichoderma reesei and found excellent agreement with experimental data. The most influential factor was demonstrated to be viscosity and its influence on mass transfer. Not surprisingly, the biological model is also shown to have high influence on the model prediction. At different rates of agitation and aeration as well as headspace pressure, we can predict the energy efficiency of oxygen transfer, a key process parameter for economical production of industrial enzymes. An inverse relationship between the productivity and energy efficiency of the process was found. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency. Copyright © 2011 Wiley Periodicals, Inc.
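    The key quantity here, the energy efficiency of oxygen transfer, reduces to the mass of oxygen transferred per unit of power drawn. A back-of-envelope sketch using the standard kLa driving-force relation is shown below; the numbers are illustrative, not taken from the T. reesei model:

```python
def oxygen_transfer_efficiency(kla, c_star, c_liquid, volume, power_kw):
    """Energy efficiency of oxygen transfer in kg O2 per kWh.

    kla       - volumetric mass transfer coefficient (1/h)
    c_star    - oxygen saturation concentration (kg O2/m^3)
    c_liquid  - dissolved oxygen concentration (kg O2/m^3)
    volume    - broth volume (m^3)
    power_kw  - total power input from agitation + aeration (kW)
    """
    otr = kla * (c_star - c_liquid) * volume  # oxygen transfer rate, kg O2/h
    return otr / power_kw                     # kg O2 per kWh
```

    The inverse productivity/efficiency relationship reported above follows from this ratio: higher agitation raises kLa (and productivity) but raises power draw faster, while higher headspace pressure raises c_star at comparatively low energy cost.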

  15. A systemic study on key parameters affecting nanocomposite coatings on magnesium substrates.

    PubMed

    Johnson, Ian; Wang, Sebo Michelle; Silken, Christine; Liu, Huinan

    2016-05-01

    Nanocomposite coatings offer multiple functions simultaneously to improve the interfacial properties of magnesium (Mg) alloys for skeletal implant applications, e.g., controlling the degradation rate of Mg substrates, improving bone cell functions, and providing drug delivery capability. However, the effective service time of nanocomposite coatings may be limited due to their early delamination from the Mg-based substrates. Therefore, the objective of this study was to address the delamination issue of nanocomposite coatings, improve the coating properties for reducing the degradation of Mg-based substrates, and thus improve their cytocompatibility with bone marrow derived mesenchymal stem cells (BMSCs). The surface conditions of the substrates, polymer component type of the nanocomposite coatings, and post-deposition processing are the key parameters that contribute to the efficacy of the nanocomposite coatings in regulating substrate degradation and bone cell responses. Specifically, the effects of metallic surface versus alkaline heat-treated hydroxide surface of the substrates on coating quality were investigated. For the nanocomposite coatings, nanophase hydroxyapatite (nHA) was dispersed in three types of biodegradable polymers, i.e., poly(lactic-co-glycolic acid) (PLGA), poly(l-lactic acid) (PLLA), or poly(caprolactone) (PCL) to determine which polymer component could provide integrated properties for slowest Mg degradation. The nanocomposite coatings with or without post-deposition processing, i.e., melting, annealing, were compared to determine which processing route improved the properties of the nanocomposite coatings most significantly. The results showed that optimizing the coating processes addressed the delamination issue. 
The melted then annealed nHA/PCL coating on the metallic Mg substrates showed the slowest degradation and the best coating adhesion, among all the combinations of conditions studied; and, it improved the adhesion density of BMSCs. This study elucidated the key parameters for optimizing nanocomposite coatings on Mg-based substrates for skeletal implant applications, and provided rational design guidelines for the nanocomposite coatings on Mg alloys for potential clinical translation of biodegradable Mg-based implants. This manuscript describes the systemic optimization of nanocomposite coatings to control the degradation and bioactivity of magnesium for skeletal implant applications. The key parameters influencing the integrity and functions of the nanocomposite coatings on magnesium were identified, guidelines for the optimization of the coatings were established, and the benefits of coating optimization were demonstrated through reduced magnesium degradation and increased bone marrow derived mesenchymal stem cell (BMSC) adhesion in vitro. The guidelines developed in this manuscript are valuable for the biometal field to improve the design of bioresorbable implants and devices, which will advance the clinical translation of magnesium-based implants. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  16. TWT transmitter fault prediction based on ANFIS

    NASA Astrophysics Data System (ADS)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

Fault prediction is an important component of health management and plays an important role in the reliability assurance of complex electronic equipment. The transmitter is a unit with a high failure rate, and degraded cathode performance of the TWT is a common transmitter fault. In this paper, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), has reference value for the overall health assessment of TWT transmitters.

  17. Modeling and Analysis of Process Parameters for Evaluating Shrinkage Problems During Plastic Injection Molding of a DVD-ROM Cover

    NASA Astrophysics Data System (ADS)

    Öktem, H.

    2012-01-01

    Plastic injection molding plays a key role in the production of high-quality plastic parts. Shrinkage is one of the most significant problems of a plastic part in terms of quality in the plastic injection molding. This article focuses on the study of the modeling and analysis of the effects of process parameters on the shrinkage by evaluating the quality of the plastic part of a DVD-ROM cover made with Acrylonitrile Butadiene Styrene (ABS) polymer material. An effective regression model was developed to determine the mathematical relationship between the process parameters (mold temperature, melt temperature, injection pressure, injection time, and cooling time) and the volumetric shrinkage by utilizing the analysis data. Finite element (FE) analyses designed by Taguchi (L27) orthogonal arrays were run in the Moldflow simulation program. Analysis of variance (ANOVA) was then performed to check the adequacy of the regression model and to determine the effect of the process parameters on the shrinkage. Experiments were conducted to control the accuracy of the regression model with the FE analyses obtained from Moldflow. The results show that the regression model agrees very well with the FE analyses and the experiments. From this, it can be concluded that this study succeeded in modeling the shrinkage problem in our application.
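    A Taguchi-style analysis of such a design starts from a main-effects table: the mean response at each level of each factor, from which the most influential process parameters are read off before fitting the regression model. A small generic sketch (the runs and shrinkage values are illustrative, not the Moldflow results):

```python
def main_effects(runs, response):
    """Mean response at each level of each factor.

    runs[i]     - tuple of factor levels for run i (e.g. one row of an L27 array,
                  with factors such as mold temperature, melt temperature, ...)
    response[i] - measured response for run i (e.g. volumetric shrinkage, %)
    Returns one {level: mean response} dict per factor.
    """
    n_factors = len(runs[0])
    effects = []
    for f in range(n_factors):
        by_level = {}
        for run, y in zip(runs, response):
            by_level.setdefault(run[f], []).append(y)
        effects.append({lv: sum(v) / len(v) for lv, v in by_level.items()})
    return effects
```

    The spread between level means for a factor is the Taguchi-style indicator of its influence; ANOVA then tests whether that spread is significant relative to the residual variation.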

  18. Rotary wave-ejector enhanced pulse detonation engine

    NASA Astrophysics Data System (ADS)

    Nalim, M. R.; Izzy, Z. A.; Akbari, P.

    2012-01-01

    The use of a non-steady ejector based on wave rotor technology is modeled for pulse detonation engine performance improvement and for compatibility with turbomachinery components in hybrid propulsion systems. The rotary wave ejector device integrates a pulse detonation process with an efficient momentum transfer process in specially shaped channels of a single wave-rotor component. In this paper, a quasi-one-dimensional numerical model is developed to help design the basic geometry and operating parameters of the device. The unsteady combustion and flow processes are simulated and compared with a baseline PDE without ejector enhancement. A preliminary performance assessment is presented for the wave ejector configuration, considering the effect of key geometric parameters, which are selected for high specific impulse. It is shown that the rotary wave ejector concept has significant potential for thrust augmentation relative to a basic pulse detonation engine.

  19. Stochastic analysis of multiphase flow in porous media: II. Numerical simulations

    NASA Astrophysics Data System (ADS)

    Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.

    1996-08-01

The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of the numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed good agreement between the two methods over a wide range of log k variability with three different combinations of the input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
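    A spatially correlated log k input process of the kind used in these simulations can be illustrated on a regular 1-D grid: a stationary Gaussian field with exponential covariance is exactly what an AR(1) recursion produces. This is a sketch of the input generation only (the paper's fields are constructed spectrally), with illustrative parameter values:

```python
import math
import random

def logk_field(n, dx, variance, corr_len, seed=0):
    """1-D stationary Gaussian log k field with exponential covariance
    C(h) = variance * exp(-|h|/corr_len), generated as an AR(1) sequence
    with lag-one correlation rho = exp(-dx/corr_len)."""
    rng = random.Random(seed)
    rho = math.exp(-dx / corr_len)
    sd_innov = math.sqrt(variance * (1.0 - rho * rho))
    field = [rng.gauss(0.0, math.sqrt(variance))]   # stationary start
    for _ in range(n - 1):
        field.append(rho * field[-1] + rng.gauss(0.0, sd_innov))
    return field
```

    Feeding such realizations (for log k and, with a second correlated field, for α) through the flow equations and averaging over many realizations is the Monte Carlo counterpart of the closed-form perturbation expressions.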

  20. Securing Digital Audio using Complex Quadratic Map

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria Gunawan, Tjandra; Satria, Yudi

    2018-03-01

    In this digital era, exchanging data is common and easy, so data are vulnerable to attack and manipulation by unauthorized parties. One data type that is vulnerable to attack is digital audio, so a data-securing method is needed that is both robust and fast. One method that meets these criteria is securing the data using a chaos function. The chaos function used in this research is the complex quadratic map (CQM). For certain parameter values, the key stream generated by the CQM passes all 15 NIST tests, which means the key stream generated by this CQM is proven to be random. In addition, samples of encrypted digital audio, when tested using a goodness-of-fit test, are proven to be uniform, so digital audio secured using this method is not vulnerable to frequency-analysis attack. The key space is very large, about 8.1×10^31 possible keys, and the key sensitivity is very small, about 10^-10, so this method is also not vulnerable to brute-force attack. Finally, the processing speed for both the encryption and decryption processes is on average about 450 times faster than the digital audio's duration.
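    The keystream-plus-XOR scheme can be sketched by iterating a quadratic map and quantizing its orbit into bytes. The paper uses the complex quadratic map with validated key parameters; the real-valued map, initial condition, burn-in, and byte quantization below are simplifying assumptions for illustration only.

```python
import numpy as np

def cqm_keystream(length, x0=0.1, c=-1.8, burn_in=200):
    """Keystream from the real quadratic map x -> x*x + c. Here x0 and c
    act as the secret key; these values are illustrative, not the
    parameters validated against the NIST tests in the paper."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = x * x + c
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = x * x + c                 # orbit stays within roughly [-1.93, 1.93]
        out[i] = int((x + 2.0) / 4.0 * 256) % 256
    return out

def xor_cipher(samples, key_x0=0.1):
    """Encrypt or decrypt 8-bit audio samples by XOR with the keystream;
    XOR is its own inverse, so the same call does both directions."""
    ks = cqm_keystream(len(samples), x0=key_x0)
    return np.bitwise_xor(samples, ks)
```

    Because XOR with the same keystream is self-inverse, applying `xor_cipher` twice with the same key recovers the original samples.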

  1. Understanding controls of hydrologic processes across two monolithological catchments using model-data integration

    NASA Astrophysics Data System (ADS)

    Xiao, D.; Shi, Y.; Li, L.

    2016-12-01

    Field measurements are important for understanding the fluxes of water, energy, sediment, and solute in the Critical Zone, but they are expensive in time, money, and labor. This study aims to assess the model predictability of hydrological processes in a watershed using information from another, intensively measured watershed. We compare two watersheds of different lithology using national datasets, field measurements, and a physics-based model, Flux-PIHM. We focus on two monolithological, forested watersheds under the same climate in the Susquehanna Shale Hills CZO in central Pennsylvania: the shale-based Shale Hills (SSH, 0.08 km2) and the sandstone-based Garner Run (GR, 1.34 km2). We first tested the transferability of calibration coefficients from SSH to GR. We found that, without any calibration, the model can successfully predict seasonal average soil moisture and discharge, which shows the advantage of a physics-based model; however, it cannot precisely capture some peaks or the runoff in summer. The model reproduces the GR field data better after calibrating the soil hydrology parameters. In particular, the percentage of sand turns out to be a critical parameter in reproducing the data. With sandstone being the dominant lithology, GR has a much higher sand percentage than SSH (48.02% vs. 29.01%), leading to higher hydraulic conductivity, lower overall water storage capacity, and in general lower soil moisture. This is consistent with area-averaged soil moisture observations using the cosmic-ray soil moisture observing system (COSMOS) at the two sites. This work indicates that some parameters, including evapotranspiration parameters, are transferable due to similar climatic and land cover conditions. However, the key parameters that control soil moisture, including the sand percentage, need to be recalibrated, reflecting the key role of soil hydrological properties.

  2. Identification of sensitive parameters in the modeling of SVOC reemission processes from soil to atmosphere.

    PubMed

    Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc

    2014-09-15

    Semi-volatile organic compounds (SVOCs) are subject to long-range atmospheric transport because of successive transport-deposition-reemission processes. Several experimental datasets available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Thus, coupling soil and atmosphere in integrated models and simulating reemission processes can be essential for estimating the atmospheric concentration of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients, and meteorological conditions. We performed an EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between the air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions showed a lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Concept design theory and model for multi-use space facilities: Analysis of key system design parameters through variance of mission requirements

    NASA Astrophysics Data System (ADS)

    Reynerson, Charles Martin

    This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.

  4. Electrobioremediation of oil spills.

    PubMed

    Daghio, Matteo; Aulenta, Federico; Vaiopoulou, Eleni; Franzetti, Andrea; Arends, Jan B A; Sherry, Angela; Suárez-Suárez, Ana; Head, Ian M; Bestetti, Giuseppina; Rabaey, Korneel

    2017-05-01

    Annually, thousands of oil spills occur across the globe. As a result, petroleum substances and petrochemical compounds are widespread contaminants causing concern due to their toxicity and recalcitrance. Many remediation strategies have been developed using both physicochemical and biological approaches. Biological strategies are most benign, aiming to enhance microbial metabolic activities by supplying limiting inorganic nutrients, electron acceptors or donors, thus stimulating oxidation or reduction of contaminants. A key issue is controlling the supply of electron donors/acceptors. Bioelectrochemical systems (BES) have emerged, in which an electrical current serves as either electron donor or acceptor for oil spill bioremediation. BES are highly controllable and can possibly also serve as biosensors for real time monitoring of the degradation process. Despite being promising, multiple aspects need to be considered to make BES suitable for field applications including system design, electrode materials, operational parameters, mode of action and radius of influence. The microbiological processes, involved in bioelectrochemical contaminant degradation, are currently not fully understood, particularly in relation to electron transfer mechanisms. Especially in sulfate rich environments, the sulfur cycle appears pivotal during hydrocarbon oxidation. This review provides a comprehensive analysis of the research on bioelectrochemical remediation of oil spills and of the key parameters involved in the process. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Controllable superhydrophobic aluminum surfaces with tunable adhesion fabricated by femtosecond laser

    NASA Astrophysics Data System (ADS)

    Song, Yuxin; Wang, Cong; Dong, Xinran; Yin, Kai; Zhang, Fan; Xie, Zheng; Chu, Dongkai; Duan, Ji'an

    2018-06-01

    In this study, a facile and detailed strategy to fabricate superhydrophobic aluminum surfaces with controllable adhesion by femtosecond laser ablation is presented. The influences of key femtosecond laser processing parameters including the scanning speed, laser power and interval on the wetting properties of the laser-ablated surfaces are investigated. It is demonstrated that the adhesion between water and superhydrophobic surface can be effectively tuned from extremely low adhesion to high adhesion by adjusting laser processing parameters. At the same time, the mechanism is discussed for the changes of the wetting behaviors of the laser-ablated surfaces. These superhydrophobic surfaces with tunable adhesion have many potential applications, such as self-cleaning surface, oil-water separation, anti-icing surface and liquid transportation.

  6. Simulation based analysis of laser beam brazing

    NASA Astrophysics Data System (ADS)

    Dobler, Michael; Wiethop, Philipp; Schmid, Daniel; Schmidt, Michael

    2016-03-01

    Laser beam brazing is a well-established joining technology in car body manufacturing with main applications in the joining of divided tailgates and the joining of roof and side panels. A key advantage of laser brazed joints is the seam's visual quality which satisfies highest requirements. However, the laser beam brazing process is very complex and process dynamics are only partially understood. In order to gain deeper knowledge of the laser beam brazing process, to determine optimal process parameters and to test process variants, a transient three-dimensional simulation model of laser beam brazing is developed. This model takes into account energy input, heat transfer as well as fluid and wetting dynamics that lead to the formation of the brazing seam. A validation of the simulation model is performed by metallographic analysis and thermocouple measurements for different parameter sets of the brazing process. These results show that the multi-physical simulation model not only can be used to gain insight into the laser brazing process but also offers the possibility of process optimization in industrial applications. The model's capabilities in determining optimal process parameters are exemplarily shown for the laser power. Small deviations in the energy input can affect the brazing results significantly. Therefore, the simulation model is used to analyze the effect of the lateral laser beam position on the energy input and the resulting brazing seam.

  7. Turbine Engine Testing.

    DTIC Science & Technology

    1981-01-01

    per-rev, ring weighting factor, etc.) and with compression system design. A detailed description of the SAE methodology is provided in Ref. 1...offers insights into the practical application of experimental aeromechanical procedures and establishes the process of valid design assessment, avoiding...considerations given to the total engine system. Design Verification in the Experimental Laboratory Certain key parameters are influencing the design of modern

  8. Minimization of operational impacts on spectrophotometer color measurements for cotton

    USDA-ARS?s Scientific Manuscript database

    A key cotton quality and processing property that is gaining increasing importance is the color of the cotton. Cotton fiber in the U.S. is classified for color using the Uster® High Volume Instrument (HVI), using the parameters Rd and +b. Rd and +b are specific to cotton fiber and are not typical ...

  9. Thermooptics of magnetoactive media: Faraday isolators for high average power lasers

    NASA Astrophysics Data System (ADS)

    Khazanov, E. A.

    2016-09-01

    The Faraday isolator, one of the key high-power laser elements, provides optical isolation between a master oscillator and a power amplifier or between a laser and its target, for example, a gravitational wave detector interferometer. However, the absorbed radiation inevitably heats the magnetoactive medium and leads to thermally induced polarization and phase distortions in the laser beam. This self-action process limits the use of Faraday isolators in high average power lasers. A unique property of magnetoactive medium thermooptics is that parasitic thermal effects arise on the background of circular birefringence rather than in an isotropic medium. Also, even insignificant polarization distortions of the radiation result in a worse isolation ratio, which is the key characteristic of the Faraday isolator. All possible laser beam distortions are analyzed for their deteriorating effect on the Faraday isolator parameters. The mechanisms responsible for and key physical parameters associated with different kinds of distortions are identified and discussed. Methods for compensating and suppressing parasitic thermal effects are described in detail, the published experimental data are systematized, and avenues for further research are discussed based on the results achieved.

  10. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Unlike traditional optimization methods, two extra steps, one that determines parameter sensitivity and another that chooses the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
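    The first of the three steps, determining parameter sensitivity, can be illustrated with one-at-a-time screening against a toy objective. The real skill metric aggregates many model fields and the real parameters belong to cloud and convection schemes; the objective, weights, and step size below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def skill(params):
    """Toy stand-in for the comprehensive evaluation metric: a weighted
    misfit in which the four parameters differ strongly in influence."""
    target = np.array([1.0, 2.0, 3.0, 4.0])
    weights = np.array([5.0, 0.1, 3.0, 0.01])
    return float(np.sum(weights * (params - target) ** 2))

def rank_sensitivity(objective, p0, rel_step=0.1):
    """One-at-a-time screening: perturb each parameter by rel_step and
    rank parameters by the induced change in the objective."""
    base = objective(p0)
    scores = []
    for i in range(len(p0)):
        p = p0.copy()
        p[i] *= 1.0 + rel_step
        scores.append(abs(objective(p) - base))
    return np.argsort(scores)[::-1]   # most sensitive parameter first
```

    Only the top-ranked parameters would then be passed to the downhill simplex stage, which is how the method cuts the number of expensive GCM runs.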

  11. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key quality parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not function well because of the multiphase and nonlinear properties of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm to solve this problem. Based on the proposed GMMD local sample selecting criterion, the phase information is extracted in the local modeling. Using the Gaussian local modeling method within the just-in-time (JIT) learning framework, the nonlinearity of the process is well handled. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
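    The just-in-time idea (select the most similar historical samples at query time, fit a local model, predict, then discard the model) can be sketched as follows. A plain local linear fit stands in for the paper's Gaussian process local model, and plain Euclidean distance stands in for the GMMD selection criterion, so this is an assumption-laden sketch rather than the published algorithm.

```python
import numpy as np

def jit_predict(X_hist, y_hist, x_query, k=20):
    """Just-in-time soft sensor sketch: pick the k historical batches most
    similar to the query, fit a throwaway local linear model, and predict
    the quality variable (e.g., Mooney viscosity). Names are illustrative."""
    d = np.linalg.norm(X_hist - x_query, axis=1)   # similarity criterion
    idx = np.argsort(d)[:k]                        # local sample selection
    Xl = np.hstack([np.ones((k, 1)), X_hist[idx]]) # add an intercept column
    coef, *_ = np.linalg.lstsq(Xl, y_hist[idx], rcond=None)
    return float(coef[0] + x_query @ coef[1:])
```

    Because a fresh local model is built per query, the sensor can track phase changes in the process without one global model having to capture them all.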

  12. Low Velocity Impact Behavior of Basalt Fiber-Reinforced Polymer Composites

    NASA Astrophysics Data System (ADS)

    Shishevan, Farzin Azimpour; Akbulut, Hamid; Mohtadi-Bonab, M. A.

    2017-06-01

    In this research, we studied the low velocity impact response of homogeneous basalt fiber-reinforced polymer (BFRP) composites and then compared the key impact parameters with those of homogeneous carbon fiber-reinforced polymer (CFRP) composites. The BFRPs and CFRPs were fabricated by the vacuum-assisted resin transfer molding (VARTM) method. The fabricated composites comprised 60% fiber and 40% epoxy matrix. The basalt and carbon fibers used as reinforcement materials were woven in a 2/2 twill textile pattern in the structures of the BFRP and CFRP composites. We also utilized the energy profile method to determine penetration and perforation threshold energies. The low velocity impact tests were carried out at impact energies of 30, 60, 80, 100, 120 and 160 J, and the impact response of the BFRPs was investigated through the related force-deflection, force-time, deflection-time and absorbed energy-time graphs. The related key impact parameters, such as maximum contact force, absorbed energy, deflection and contact duration, were compared with those of the CFRPs at the various impact energy levels. As a result, owing to the higher toughness of basalt fibers, a better low velocity impact performance was observed for BFRP than for CFRP. The effects of fabrication parameters, such as the curing process, on the low velocity impact behavior of BFRP were also studied. The results for the newly fabricated materials show that changing the fabrication process and curing conditions improves the impact behavior of BFRPs by up to 13%.

  13. Microbial quantification in activated sludge: the hits and misses.

    PubMed

    Hall, S J; Keller, J; Blackall, L L

    2003-01-01

    Since the implementation of the activated sludge process for treating wastewater, there has been a reliance on chemical and physical parameters to monitor the system. However, in biological nutrient removal (BNR) processes, the microorganisms responsible for some of the transformations should be used to monitor the processes with the overall goal to achieve better treatment performance. The development of in situ identification and rapid quantification techniques for key microorganisms involved in BNR are required to achieve this goal. This study explored the quantification of Nitrospira, a key organism in the oxidation of nitrite to nitrate in BNR. Two molecular genetic microbial quantification techniques were evaluated: real-time polymerase chain reaction (PCR) and fluorescence in situ hybridisation (FISH) followed by digital image analysis. A correlation between the Nitrospira quantitative data and the nitrate production rate, determined in batch tests, was attempted. The disadvantages and advantages of both methods will be discussed.

  14. Two dimensional radial gas flows in atmospheric pressure plasma-enhanced chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Kim, Gwihyun; Park, Seran; Shin, Hyunsu; Song, Seungho; Oh, Hoon-Jung; Ko, Dae Hong; Choi, Jung-Il; Baik, Seung Jae

    2017-12-01

    Atmospheric pressure (AP) operation of plasma-enhanced chemical vapor deposition (PECVD) is one of the promising concepts for high-quality, low-cost processing. Atmospheric plasma discharge requires a narrow-gap configuration, which causes an inherent feature of AP PECVD: two-dimensional radial gas flow, which induces radial variation of mass transport and of substrate temperature. The opposite trends of these variations are the key consideration in the development of a uniform deposition process. Another inherent feature of AP PECVD is confined plasma discharge, from which the volume power density concept is derived as a key parameter for the control of deposition rate. We investigated the deposition rate as a function of volume power density, gas flux, source gas partial pressure, hydrogen partial pressure, plasma source frequency, and substrate temperature, and derived a design guideline for deposition tools and process development in terms of deposition rate and uniformity.

  15. Uncertainty quantification of overpressure buildup through inverse modeling of compaction processes in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni M.; Ruffo, Paolo; Guadagnini, Alberto

    2017-03-01

    This study illustrates a procedure conducive to a preliminary risk analysis of overpressure development in sedimentary basins characterized by alternating depositional events of sandstone and shale layers. The approach rests on two key elements: (1) forward modeling of fluid flow and compaction, and (2) application of a model-complexity reduction technique based on a generalized polynomial chaos expansion (gPCE). The forward model considers a one-dimensional vertical compaction process. The gPCE model is then used in an inverse modeling context to obtain efficient model parameter estimation and uncertainty quantification. The methodology is applied to two field settings considered in previous literature works, i.e. the Venture Field (Scotian Shelf, Canada) and the Navarin Basin (Bering Sea, Alaska, USA), relying on available porosity and pressure information for model calibration. It is found that the best result is obtained when porosity and pressure data are considered jointly in the model calibration procedure. Uncertainty propagation from unknown input parameters to model outputs, such as the vertical pore pressure distribution, is investigated and quantified. This modeling strategy enables one to quantify the relative importance of the key phenomena governing the feedback between sediment compaction and fluid flow processes and driving the buildup of fluid overpressure in stratified sedimentary basins characterized by the presence of low-permeability layers. The results illustrated here (1) allow diagnosis of the critical role played by the parameters of quantitative formulations linking porosity and permeability in compacted shales and (2) provide an explicit and detailed quantification of the effects of their uncertainty in field settings.
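    The gPCE surrogate idea can be sketched in one dimension: expand a model output in probabilists' Hermite polynomials of a standard-normal input and fit the coefficients by regression on a handful of model runs. The toy exponential response below stands in for the forward compaction simulator; the degree, sample count, and response function are all assumptions.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def fit_gpce(model, degree=5, n_samples=200, seed=1):
    """Regression-based 1-D gPCE surrogate using the probabilists' Hermite
    basis for a standard-normal uncertain parameter. Every numerical
    value here is illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_samples)    # samples of the uncertain input
    V = He.hermevander(xi, degree)         # basis (design) matrix
    coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)
    return coef

def model(xi):
    return np.exp(0.3 * xi)                # hypothetical smooth model response

coef = fit_gpce(model)
x = np.linspace(-2.0, 2.0, 50)
err = float(np.max(np.abs(He.hermeval(x, coef) - model(x))))
# coef[0] approximates the output mean; higher coefficients carry the variance
```

    Once fitted, the cheap surrogate replaces the forward model inside the inverse-modeling loop, which is what makes the calibration and uncertainty quantification affordable.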

  16. Thermal - Hydraulic Behavior of Unsaturated Bentonite and Sand-Bentonite Material as Seal for Nuclear Waste Repository: Numerical Simulation of Column Experiments

    NASA Astrophysics Data System (ADS)

    Ballarini, E.; Graupner, B.; Bauer, S.

    2015-12-01

    For deep geological repositories of high-level radioactive waste (HLRW), bentonite and sand-bentonite mixtures are investigated as buffer materials to form a sealing layer. This sealing layer surrounds the canisters and experiences an initial drying due to the heat produced by the HLRW and a subsequent re-saturation with fluid from the host rock. These complex thermal, hydraulic and mechanical processes interact and were investigated in laboratory column experiments using MX-80 clay pellets as well as a mixture of 35% sand and 65% bentonite. The aim of this study is both to understand the individual processes taking place in the buffer materials and to identify the key physical parameters that determine the material behavior under heating and hydrating conditions. To this end, detailed and process-oriented numerical modelling was applied to the experiments, simulating heat transport, multiphase flow and mechanical effects from swelling. For both columns, the same set of parameters was assigned to the experimental set-up (i.e. insulation, heater and hydration system), while the parameters of the buffer material were adapted during model calibration. A good fit between model results and data was achieved for temperature, relative humidity, water intake and swelling pressure, thus explaining the material behavior. The key variables identified by the model are the permeability and relative permeability, the water retention curve and the thermal conductivity of the buffer material. The different hydraulic and thermal behavior of the two buffer materials observed in the laboratory was well reproduced by the numerical model.

  17. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  18. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  19. Variations in embodied energy and carbon emission intensities of construction materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan Omar, Wan-Mohd-Sabki; School of Environmental Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis; Doh, Jeung-Hwan, E-mail: j.doh@griffith.edu.au

    2014-11-15

    Identification of parameter variation allows us to conduct a more detailed life cycle assessment (LCA) of the energy and carbon emissions of materials over their lifecycle. Previous research has demonstrated that hybrid LCA (HLCA) can generally overcome the problems of incompleteness and accuracy in embodied energy (EE) and embodied carbon (EC) emission assessment. Unfortunately, the current interpretation and quantification procedure has not been extensively and empirically studied in a qualitative manner, especially in hybridising between process LCA and I-O LCA. To address this weakness, this study empirically demonstrates the changes in EE and EC intensities caused by variations in key parameters of material production. Using Australia and Malaysia as a case study, the results are compared with previous hybrid models to identify key parameters and issues. The parameters considered in this study are technological changes, energy tariffs, primary energy factors, the disaggregation constant, emission factors, and material price fluctuation. It was found that changes in technological efficiency, energy tariffs and material prices caused significant variations in the model. Finally, the comparison of hybrid models revealed that non-energy-intensive materials greatly influence the variations due to high indirect energy and carbon emissions in the upstream boundary of material production, and as such, any decision related to these materials should be considered carefully. - Highlights: • We investigate the EE and EC intensity variation in Australia and Malaysia. • The influences of parameter variations on the hybrid LCA model were evaluated. • Key significant contributions to the EE and EC intensity variation were identified. • High indirect EE and EC content caused significant variation in hybrid LCA models. • Non-energy-intensive materials caused variation between hybrid LCA models.

  20. Beam engineering for zero conicity cutting and drilling with ultra fast laser (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Letan, Amelie; Mishchik, Konstantin; Audouard, Eric; Hoenninger, Clemens; Mottay, Eric P.

    2017-03-01

    With the development of high-average-power, high-repetition-rate industrial ultrafast lasers, it is now possible to achieve high throughput with femtosecond laser processing, provided that the operating parameters are finely tuned to the application. Femtosecond lasers play a key role in these processes due to their capability for high-quality micro-processing. They are able to drill holes through large thicknesses (up to 1 mm) with arbitrary shapes, such as zero conicity or even inverse taper, and can also perform zero-taper cutting. A clear understanding of all the processing steps necessary to optimize the processing speed is a main challenge for industrial development. Indeed, the laser parameters are not independent of the beam steering devices: pulse energy and repetition rate have to be precisely adjusted to the beam angle with the sample and to the temporal and spatial sequences of pulse superposition. The purpose of the present work is to identify the role of these parameters for high-aspect-ratio drilling and cutting, not only with experimental trials but also with numerical estimations, using a simple engineering model based on the two-temperature description of ultrafast ablation. Assuming a nonlinear logarithmic response of the material to ultrafast pulses, each material can be described by only two adjustable parameters. Simple assumptions allow the effect of beam velocity and non-normal incident beams to be predicted, in order to estimate profile shapes and processing times.
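    A two-parameter logarithmic ablation law of the kind mentioned above is commonly written as depth per pulse d = delta * ln(F / F_th) for fluence F above a threshold F_th. The sketch below uses that generic form with made-up material constants; it is an illustration of the model class, not the authors' engineering model.

```python
import math

def ablation_depth_per_pulse(fluence, f_th, delta):
    """Logarithmic ablation law: depth removed by one ultrafast pulse.
    f_th (threshold fluence) and delta (effective penetration depth) are
    the two adjustable material parameters; units and values here are
    illustrative assumptions."""
    if fluence <= f_th:
        return 0.0                      # below threshold: no material removal
    return delta * math.log(fluence / f_th)

def pulses_to_depth(target_depth, fluence, f_th=0.1, delta=20.0):
    """Rough pulse count to reach target_depth (same length unit as delta)."""
    per_pulse = ablation_depth_per_pulse(fluence, f_th, delta)
    return math.inf if per_pulse == 0.0 else math.ceil(target_depth / per_pulse)
```

    Feeding such a per-pulse law with the local fluence of a moving, possibly tilted beam is what lets processing time and profile shape be estimated before any trial.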

  1. Secure chaotic transmission of electrocardiography signals with acousto-optic modulation under profiled beam propagation.

    PubMed

    Almehmadi, Fares S; Chatterjee, Monish R

    2015-01-10

    Electrocardiography (ECG) signals are used both for medical purposes and for identifying individuals. It is often necessary to encrypt this highly sensitive information before it is transmitted over any channel. A closed-loop acousto-optic hybrid device acting as a chaotic modulator is applied to ECG signals to achieve this encryption. Recently improved modeling of this approach using profiled optical beams has shown it to be very sensitive to the key parameters that characterize the encryption and decryption process, exhibiting its potential for secure transmission of analog and digital signals. Here, encryption and decryption are demonstrated for ECG signals, in both analog and digital versions, illustrating strong encryption without significant distortion. Performance analysis pertinent to both analog and digital transmission of the ECG waveform is also carried out using output signal-to-noise, signal-to-distortion, and bit-error-rate measures relative to the key parameters and the presence of channel noise in the system.

  2. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
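    The active-learning loop described above can be sketched as follows. Everything here is a stand-in: the "simulation" is a toy acceptance test, and a nearest-neighbour score replaces the neural-network surrogate, but the control flow (score the remaining candidates, then evaluate only the most promising batch) is the same idea.

```python
import random

random.seed(0)

# Toy stand-in for the simulation model: a parameter combination is
# "acceptable" when a hidden function lands inside a tolerance band
# (playing the role of matching observed incidence/mortality).
def simulate(params):
    x, y = params
    return abs(x * y - 0.5) < 0.02

candidates = [(random.random(), random.random()) for _ in range(2000)]

# Stand-in for the neural-network surrogate: score a candidate by its
# closeness to the nearest combination already known to be acceptable.
def score(p, accepted):
    if not accepted:
        return 0.0
    return -min((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2 for a in accepted)

pool = list(candidates)
accepted, evaluated = [], 0
# Ten rounds: rank the remaining pool with the surrogate, then run the
# expensive "simulation" only on the 50 most promising combinations.
for _ in range(10):
    batch = sorted(pool, key=lambda p: score(p, accepted), reverse=True)[:50]
    for p in batch:
        pool.remove(p)
        evaluated += 1
        if simulate(p):
            accepted.append(p)
```

    The saving reported in the abstract comes from exactly this asymmetry: only a fraction of the candidate pool (here 500 of 2000) is ever run through the expensive simulator.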

  3. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  4. Measuring UV Curing Parameters of Commercial Photopolymers used in Additive Manufacturing.

    PubMed

    Bennett, Joe

    2017-12-01

    A testing methodology was developed to expose photopolymer resins and measure the cured material to determine two key parameters related to the photopolymerization process: E c (critical energy to initiate polymerization) and D p (penetration depth of curing light). Five commercially available resins were evaluated under exposure from 365 nm and 405 nm light at varying power densities and energies. Three different methods for determining the thickness of the cured resin were evaluated. Caliper measurements, stylus profilometry, and confocal laser scanning microscopy showed similar results for hard materials while caliper measurement of a soft, elastomeric material proved inaccurate. Working curves for the five photopolymers showed unique behavior both within and among the resins as a function of curing light wavelength. E c and D p for the five resins showed variations as large as 10×. Variations of this magnitude, if unknown to the user and not controlled for, will clearly affect printed part quality. This points to the need for a standardized approach for determining and disseminating these, and perhaps, other key parameters.

  5. [Correlation between physical characteristics of sticks and quality of traditional Chinese medicine pills prepared by plastic molded method].

    PubMed

    Wang, Ling; Xian, Jiechen; Hong, Yanlong; Lin, Xiao; Feng, Yi

    2012-05-01

    To quantify the physical characteristics of sticks of traditional Chinese medicine (TCM) honeyed pills prepared by the plastic molded method and the correlation between adhesiveness- and plasticity-related parameters of sticks and the quality of pills, in order to find the major parameters impacting pill quality and their appropriate ranges. Sticks were tested with a texture analyzer for physical characteristic parameters such as hardness and compression behavior, and pills were assessed for quality by visual evaluation. The correlation between the two datasets was determined by stepwise discriminant analysis. The stick physical characteristic parameter l(CD) can accurately depict adhesiveness, with the discriminant equation Y0 - Y1 = 6.415 - 41.594l(CD). When Y0 < Y1, pills scattered well; when Y0 > Y1, pills adhered to each other. The physical characteristic parameters l(CD), l(AC), Ar and Tr can accurately depict the smoothness of pills, with the discriminant equation Z0 - Z1 = -195.318 + 78.79l(AC) - 3258.982Ar + 3437.935Tr. When Z0 < Z1, pills were smooth on the surface; when Z0 > Z1, pills were rough on the surface. The stepwise discriminant analysis shows the clear correlation between the key physical characteristic parameters l(CD), l(AC), Ar and Tr of sticks and the appearance quality of pills, defining the molding process for preparing pills by the plastic molded method and qualifying the ranges of the key physical characteristic parameters characterizing intermediate sticks, in order to provide a theoretical basis for prescription screening and technical parameter adjustment for pills.
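    The two discriminant equations quoted in the abstract can be evaluated directly as go/no-go classifiers; the stick measurement fed in below is invented purely for illustration.

```python
# The two discriminant equations from the abstract, evaluated as given:
#   Y0 - Y1 < 0 -> pills scatter well (acceptable adhesiveness);
#   Z0 - Z1 < 0 -> pills are smooth on the surface.
def adhesiveness_score(l_cd):
    return 6.415 - 41.594 * l_cd

def smoothness_score(l_ac, ar, tr):
    return -195.318 + 78.79 * l_ac - 3258.982 * ar + 3437.935 * tr

# Hypothetical stick measurement (illustrative only): an l(CD) of 0.2
# drives Y0 - Y1 below zero, i.e. the batch would scatter well.
good_batch = adhesiveness_score(0.2) < 0
```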

  6. High pressure rinsing system comparison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. Sertore; M. Fusetti; P. Michelato

    2007-06-01

    High pressure rinsing (HPR) is a key process for the surface preparation of high-field superconducting cavities. A portable apparatus for water jet characterization, based on the momentum transferred between the water jet and a load cell, has been used in different laboratories. This apparatus allows the collection of quantitative parameters that characterize the HPR water jet. In this paper, we present a quantitative comparison of the water jets produced by various nozzles routinely used in different laboratories for the HPR process.
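    The load-cell principle mentioned above reduces, for a jet that is fully stopped on the target, to F = ρQv. The nozzle and flow figures below are illustrative assumptions, not values from the paper.

```python
import math

# For a jet fully stopped by the load cell, conservation of momentum gives
#   F = rho * Q * v,  with jet velocity v = Q / A_nozzle.
def jet_force(flow_l_per_min, nozzle_diameter_mm, rho=1000.0):
    """Force (N) exerted by a water jet on a normal flat target."""
    q = flow_l_per_min / 1000.0 / 60.0                   # volumetric flow, m^3/s
    area = math.pi * (nozzle_diameter_mm / 2000.0) ** 2  # nozzle exit area, m^2
    v = q / area                                         # jet velocity, m/s
    return rho * q * v

f = jet_force(10.0, 0.8)  # hypothetical nozzle: 10 L/min through 0.8 mm
```

    Reading the force off the load cell and inverting this relation is one way such an apparatus can turn a momentum measurement into a comparable jet parameter across nozzles.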

  7. Multiprotein DNA Looping

    NASA Astrophysics Data System (ADS)

    Vilar, Jose M. G.; Saiz, Leonor

    2006-06-01

    DNA looping plays a fundamental role in a wide variety of biological processes, providing the backbone for long range interactions on DNA. Here we develop the first model for DNA looping by an arbitrarily large number of proteins and solve it analytically in the case of identical binding. We uncover a switchlike transition between looped and unlooped phases and identify the key parameters that control this transition. Our results establish the basis for the quantitative understanding of fundamental cellular processes like DNA recombination, gene silencing, and telomere maintenance.

  8. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
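    A minimal numerical example of the parameter non-uniqueness described above: in the model y = (a·b)·x only the product a·b is constrained by the data, so an optimiser can return wildly different (a, b) pairs with identical, even perfect, fit.

```python
# Model: y = (a * b) * x. Ideal, noise-free data constrain only the
# product a*b = 6, so structurally different parameter pairs achieve an
# identical (perfect) fit -- the parameters are non-identifiable.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [6.0 * x for x in xs]

def sse(a, b):
    """Sum of squared errors of the model against the ideal data."""
    return sum((y - a * b * x) ** 2 for x, y in zip(xs, ys))

fit1 = sse(2.0, 3.0)    # a*b = 6
fit2 = sse(12.0, 0.5)   # a*b = 6 with wildly different parameters
```

    This is the "wildly different values for different initialisations" effect in its simplest form: no amount of ideal data distinguishes the two pairs; only reparameterising (estimating the product c = a·b) or fixing one parameter restores uniqueness.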

  9. Experimentally validated computational modeling of organic binder burnout from green ceramic compacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewsuk, K.G.; Cochran, R.J.; Blackwell, B.F.

    The properties and performance of a ceramic component are determined by a combination of the materials from which it was fabricated and how it was processed. Most ceramic components are manufactured by dry pressing a powder/binder system in which the organic binder provides formability and green compact strength. A key step in this manufacturing process is the removal of the binder from the powder compact after pressing. The organic binder is typically removed by a thermal decomposition process in which heating rate, temperature, and time are the key process parameters. Empirical approaches are generally used to design the burnout time-temperature cycle, often resulting in excessive processing times and energy usage, and higher overall manufacturing costs. Ideally, binder burnout should be completed as quickly as possible without damaging the compact, while using a minimum of energy. Process and computational modeling offer one means to achieve this end. The objective of this study is to develop an experimentally validated computer model that can be used to better understand, control, and optimize binder burnout from green ceramic compacts.

  10. Improving security of the ping-pong protocol

    NASA Astrophysics Data System (ADS)

    Zawadzki, Piotr

    2013-01-01

    A security layer for the asymptotically secure ping-pong protocol is proposed and analyzed in the paper. The operation of the improvement exploits inevitable errors introduced by the eavesdropping in the control and message modes. Its role is similar to the privacy amplification algorithms known from the quantum key distribution schemes. Messages are processed in blocks which guarantees that an eavesdropper is faced with a computationally infeasible problem as long as the system parameters are within reasonable limits. The introduced additional information preprocessing does not require quantum memory registers and confidential communication is possible without prior key agreement or some shared secret.

  11. Strategies for Maximizing Successful Drug Substance Technology Transfer Using Engineering, Shake-Down, and Wet Test Runs.

    PubMed

    Abraham, Sushil; Bain, David; Bowers, John; Larivee, Victor; Leira, Francisco; Xie, Jasmina

    2015-01-01

    The technology transfer of biological products is a complex process requiring control of multiple unit operations and parameters to ensure product quality and process performance. To achieve product commercialization, the technology transfer sending unit must successfully transfer knowledge about both the product and the process to the receiving unit. A key strategy for maximizing successful scale-up and transfer efforts is the effective use of engineering and shake-down runs to confirm operational performance and product quality prior to embarking on good manufacturing practice runs such as process performance qualification runs. We discuss the key factors to consider in deciding whether to perform shake-down or engineering runs. We also present industry benchmarking results on how engineering runs are used in drug substance technology transfers, alongside the main themes and best practices that have emerged. Our goal is to provide companies with a framework for ensuring "right first time" technology transfers with effective deployment of resources within increasingly aggressive timeline constraints. © PDA, Inc. 2015.

  12. Advanced multivariate data analysis to determine the root cause of trisulfide bond formation in a novel antibody-peptide fusion.

    PubMed

    Goldrick, Stephen; Holmes, William; Bond, Nicholas J; Lewis, Gareth; Kuiper, Marcel; Turner, Richard; Farid, Suzanne S

    2017-10-01

    Product quality heterogeneities, such as trisulfide bond (TSB) formation, can be influenced by multiple interacting process parameters, and identifying their root cause is a major challenge in biopharmaceutical production. To address this issue, this paper describes the novel application of advanced multivariate data analysis (MVDA) techniques to identify the process parameters influencing TSB formation in a novel recombinant antibody-peptide fusion expressed in mammalian cell culture. The screening dataset was generated with a high-throughput (HT) micro-bioreactor system (Ambr™ 15) using a design of experiments (DoE) approach. The complex dataset was first analyzed through the development of a multiple linear regression model focusing solely on the DoE inputs, which identified temperature, pH and initial nutrient feed day as important process parameters influencing this quality attribute. To further scrutinize the dataset, a partial least squares model was subsequently built incorporating both on-line and off-line process parameters, enabling accurate predictions of the TSB concentration at harvest. Process parameters identified by the models to promote and suppress TSB formation were implemented on five 7 L bioreactors, and the resultant TSB concentrations were comparable to the model predictions. This study demonstrates the ability of MVDA to enable predictions of the key performance drivers influencing TSB formation that remain valid upon scale-up. Biotechnol. Bioeng. 2017;114: 2222-2234. © 2017 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.

  13. Statistical Bayesian method for reliability evaluation based on ADT data

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, the latter being the more popular. However, limitations such as an imprecise solution process and imprecise estimation of the degradation ratio still exist, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with updating and iteration of the estimated values. Third, lifetime and reliability are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
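    A minimal sketch of the Wiener-process step, assuming the common form X(t) = μt + σB(t) (the paper's exact parameterisation and its Bayesian updating are not reproduced): simulate increments, recover the drift, and read off a first-passage pseudo failure time against a hypothetical threshold.

```python
import math
import random
from statistics import mean

random.seed(42)

# Assumed degradation form X(t) = mu*t + sigma*B(t) with B a standard
# Brownian motion; increments over dt are Normal(mu*dt, sigma^2*dt).
mu_true, sigma, dt, n = 0.8, 0.2, 0.1, 2000
increments = [mu_true * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
              for _ in range(n)]

# Drift estimate: for a Wiener process the MLE of mu is the mean
# increment divided by the time step.
mu_hat = mean(increments) / dt

# Pseudo failure time: first passage of the simulated path above a
# (hypothetical) failure threshold.
level, threshold, t_fail = 0.0, 100.0, None
for i, inc in enumerate(increments):
    level += inc
    if level >= threshold:
        t_fail = (i + 1) * dt
        break
```

    In an ADT analysis this inference would be repeated per stress level and the drifts tied together through the acceleration model before extrapolating to normal conditions.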

  14. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. Developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovarian (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.

  15. Advanced control of dissolved oxygen concentration in fed batch cultures during recombinant protein production.

    PubMed

    Kuprijanov, A; Gnoth, S; Simutis, R; Lübbert, A

    2009-02-01

    Design and experimental validation of advanced pO(2) controllers for fermentation processes operated in fed-batch mode are described. In most situations, the presented controllers are able to keep the pO(2) in fermentations for recombinant protein production exactly at the desired value. The controllers are based on the gain-scheduling approach to parameter-adaptive proportional-integral controllers. To cope with the most frequently occurring disturbances, the basic gain-scheduling feedback controller was complemented with a feedforward control component. This feedforward/feedback controller significantly improved pO(2) control. By means of numerical simulations, the controller behavior was tested and its parameters were determined. Validation runs were performed with three Escherichia coli strains producing different recombinant proteins. It is finally shown that the new controller leads to significant improvements in the signal-to-noise ratio of other key process variables and, thus, to higher process quality.
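    The gain-scheduling-plus-feedforward structure can be sketched as below. The schedule, gains, and feed-rate disturbance model are invented for illustration; they are not the authors' tuned controller.

```python
# Gain-scheduled PI with a feedforward term, sketched for pO2 control.
# All gains and the feed-rate disturbance coefficient are hypothetical.
class GainScheduledPI:
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.integral = 0.0

    def gains(self, biomass):
        # Schedule: oxygen demand grows with biomass, so raise the gains
        # as the culture grows (illustrative linear schedule).
        kp = 0.5 * (1.0 + 0.1 * biomass)
        ki = 0.05 * (1.0 + 0.1 * biomass)
        return kp, ki

    def step(self, measured_po2, biomass, feed_rate, dt=1.0):
        kp, ki = self.gains(biomass)
        error = self.setpoint - measured_po2
        self.integral += error * dt
        # Feedforward: pre-compensate the known feed-pulse disturbance
        # instead of waiting for the feedback error to build up.
        feedforward = 2.0 * feed_rate
        return kp * error + ki * self.integral + feedforward

ctrl = GainScheduledPI(setpoint=30.0)
u = ctrl.step(measured_po2=25.0, biomass=10.0, feed_rate=0.5)
```

    The design choice mirrors the abstract: feedback alone reacts only after a feed pulse has already depressed pO(2), while the feedforward term acts at the moment the disturbance is applied.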

  16. Cost of ownership for inspection equipment

    NASA Astrophysics Data System (ADS)

    Dance, Daren L.; Bryson, Phil

    1993-08-01

    Cost of Ownership (CoO) models are increasingly a part of the semiconductor equipment evaluation and selection process. These models enable semiconductor manufacturers and equipment suppliers to quantify a system in terms of dollars per wafer. Because of the complex nature of the semiconductor manufacturing process, there are several key attributes that must be considered in order to accurately reflect the true 'cost of ownership'. While most CoO work to date has been applied to production equipment, the need to understand cost of ownership for inspection and metrology equipment presents unique challenges. Critical parameters such as detection sensitivity as a function of size and type of defect are not included in current CoO models yet are, without question, major factors in the technical evaluation process and life-cycle cost. This paper illustrates the relationship between these parameters, as components of the alpha and beta risk, and cost of ownership.
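    A simplified per-wafer CoO calculation, extended with the alpha/beta-risk terms the paper argues current models omit. All costs and risk probabilities below are hypothetical, and the linear cost attribution for misclassification is an assumption, not a standard model.

```python
# Simplified dollars-per-wafer cost of ownership, extended with
# inspection misclassification risk (all figures are illustrative):
#   alpha risk: good wafers flagged bad (scrap/rework cost);
#   beta risk:  defective wafers passed (downstream yield cost).
def coo_per_wafer(fixed_cost, recurring_cost_per_wafer, wafers,
                  alpha_risk=0.0, beta_risk=0.0,
                  cost_false_alarm=0.0, cost_missed_defect=0.0):
    inspection = fixed_cost / wafers + recurring_cost_per_wafer
    risk = alpha_risk * cost_false_alarm + beta_risk * cost_missed_defect
    return inspection + risk

naive = coo_per_wafer(1_000_000, 0.5, 500_000)
with_risk = coo_per_wafer(1_000_000, 0.5, 500_000,
                          alpha_risk=0.01, beta_risk=0.002,
                          cost_false_alarm=40.0, cost_missed_defect=500.0)
```

    Even small risk probabilities can dominate the acquisition-driven terms, which is the paper's argument for folding detection sensitivity into inspection-tool CoO.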

  17. Modeling a Material's Instantaneous Velocity during Acceleration Driven by a Detonation's Gas-Push Process

    NASA Astrophysics Data System (ADS)

    Backofen, Joseph E.

    2005-07-01

    This paper will describe both the scientific findings and the model developed in order to quantify a material's instantaneous velocity versus position, time, or the expansion ratio of an explosive's gaseous products while its gas pressure is accelerating the material. The formula derived to represent this gas-push process for the second stage of the BRIGS Two-Step Detonation Propulsion Model was found to fit the published experimental data available for twenty explosives very well. When the formula's two key parameters (the ratio Vinitial/Vfinal and ExpansionRatioFinal) were adjusted slightly, from the average values closely describing many explosives to values representing measured data for a particular explosive, the formula's representation of that explosive's gas-push process improved. The time derivative of the velocity formula, representing acceleration and/or pressure, compares favorably to Jones-Wilkins-Lee equation-of-state model calculations performed using published JWL parameters.

  18. Parameter Estimation with Almost No Public Communication for Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-06-01

    One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.
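    The parameter-estimation step for a Gaussian channel can be sketched with a toy estimator, assuming the usual model y = √T·x + noise. This illustrates the conventional estimation that sacrifices disclosed data, not the paper's no-sacrifice MDI result.

```python
import random
from statistics import mean

random.seed(7)

# Assumed Gaussian-channel model for the quadrature data:
#   y = sqrt(T) * x + z,   z ~ Normal(0, noise_sd^2)
# Estimating the transmissivity T from (x, y) pairs is the parameter
# estimation step; a conventional protocol would disclose part of x.
n, T_true, noise_sd = 20000, 0.5, 0.3
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [(T_true ** 0.5) * x + random.gauss(0.0, noise_sd) for x in xs]

# Least-squares slope of y on x estimates sqrt(T); square it for T.
mx, my = mean(xs), mean(ys)
cov = mean(x * y for x, y in zip(xs, ys)) - mx * my
var = mean(x * x for x in xs) - mx * mx
T_hat = (cov / var) ** 2
```

    The finite-size trade-off mentioned above is visible here: the estimator's accuracy scales with the number of disclosed pairs, which in a conventional protocol come straight out of the raw key.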

  19. Multi-parameter comparison of a standardized mixed meal tolerance test in healthy and type 2 diabetic subjects: the PhenFlex challenge.

    PubMed

    Wopereis, Suzan; Stroeve, Johanna H M; Stafleu, Annette; Bakker, Gertruud C M; Burggraaf, Jacobus; van Erk, Marjan J; Pellis, Linette; Boessen, Ruud; Kardinaal, Alwine A F; van Ommen, Ben

    2017-01-01

    A key feature of metabolic health is the ability to adapt upon dietary perturbation. Recently, it was shown that metabolic challenge tests, in combination with new-generation biomarkers, allow the simultaneous quantification of major metabolic health processes. Currently applied challenge tests are largely non-standardized. A systematic review defined an optimal nutritional challenge test, the "PhenFlex test" (PFT). This study aimed to prove that the PFT modulates all relevant processes governing metabolic health, thereby allowing subjects with different metabolic health status to be distinguished. Therefore, 20 healthy and 20 type 2 diabetic (T2D) male subjects were challenged with both the PFT and an oral glucose tolerance test (OGTT). During the 8-h response time course, 132 parameters were quantified that report on 26 metabolic processes distributed over 7 organs (gut, liver, adipose, pancreas, vasculature, muscle, kidney) and systemic stress. In healthy subjects, 110 of the 132 parameters showed a time course response. Patients with T2D showed 18 parameters that were significantly different after overnight fasting compared to healthy subjects, while 58 parameters were different in the post-challenge time course after the PFT. This demonstrates the added value of the PFT in distinguishing subjects with different health status. The OGTT and PFT responses were highly comparable for glucose metabolism, as identical amounts of glucose were present in both challenge tests. Yet the PFT reports on additional processes, including vasculature, systemic stress, and metabolic flexibility. The PFT enables the quantification of all relevant metabolic processes involved in maintaining or regaining homeostasis of metabolic health. Studying both healthy subjects and subjects with impaired metabolic health showed that the PFT revealed new processes underlying health. This study provides the first evidence towards adopting the PFT as a gold standard in nutrition research.

  20. Optimation of Operation System Integration between Main and Feeder Public Transport (Case Study: Trans Jakarta-Kopaja Bus Services)

    NASA Astrophysics Data System (ADS)

    Miharja, M.; Priadi, Y. N.

    2018-05-01

    Promoting better public transport is a key strategy for coping with urban transport problems, which are mostly caused by heavy private vehicle usage. Better public transport service quality not only focuses on one type of public transport mode, but also concerns service integration between modes. Fragmented inter-mode public transport service leads to longer trip chains and average travel times, which would cause it to fail to compete with private vehicles. This paper examines the optimization of operation system integration between Trans Jakarta Bus as the main public transport mode and Kopaja Bus as the feeder public transport service in Jakarta. Using a scoring-interview method combined with standard parameters for operation system integration, this paper identifies the key factors that determine the success of integrating the two public transport operation systems. The study found that some key integration parameters, such as the cancellation of the "system setoran", passengers boarding and alighting at official stop points, and systematic payment, positively contribute to better service integration. However, some parameters, such as the fine system, time and changing point reliability, and information system reliability, are among those that need improvement. These findings are very useful for the authority in setting the right strategy to improve operation system integration between Trans Jakarta and Kopaja Bus services.

  1. A primer of statistical methods for correlating parameters and properties of electrospun poly(L-lactide) scaffolds for tissue engineering--PART 1: design of experiments.

    PubMed

    Seyedmahmoud, Rasoul; Rainer, Alberto; Mozetic, Pamela; Maria Giannitelli, Sara; Trombetta, Marcella; Traversa, Enrico; Licoccia, Silvia; Rinaldi, Antonio

    2015-01-01

    Tissue engineering scaffolds produced by electrospinning are of enormous interest, but a true understanding of the fundamental connection between their outstanding functional properties, architecture, mechanical properties, and the process parameters is still lacking. Fragmentary results from several parametric studies render only partial insights that are hard to compare and generally miss the role of parameter interactions. To bridge this gap, this article (Part 1 of 2) features a case study on poly-L-lactide scaffolds to demonstrate how statistical methods such as design of experiments can quantitatively identify the correlations existing between key scaffold properties and control parameters in a systematic, consistent, and comprehensive manner, disentangling main effects from interactions. The morphological properties (i.e., fiber distribution and porosity) and mechanical properties (Young's modulus) are "charted" as a function of molecular weight (MW) and other electrospinning process parameters (the Xs), considering single effects as well as interactions between the Xs. For the first time, the major role of the MW emerges clearly in controlling all scaffold properties. The correlation between mechanical and morphological properties is also addressed. © 2014 Wiley Periodicals, Inc.

  2. Estimation of the viscosities of liquid binary alloys

    NASA Astrophysics Data System (ADS)

    Wu, Min; Su, Xiang-Yu

    2018-01-01

    As one of the most important physical and chemical properties, viscosity plays a critical role in physics and materials as a key parameter to quantitatively understanding the fluid transport process and reaction kinetics in metallurgical process design. Experimental and theoretical studies on liquid metals are problematic. Today, there are many empirical and semi-empirical models available with which to evaluate the viscosity of liquid metals and alloys. However, the parameter of mixed energy in these models is not easily determined, and most predictive models have been poorly applied. In the present study, a new thermodynamic parameter Δ G is proposed to predict liquid alloy viscosity. The prediction equation depends on basic physical and thermodynamic parameters, namely density, melting temperature, absolute atomic mass, electro-negativity, electron density, molar volume, Pauling radius, and mixing enthalpy. Our results show that the liquid alloy viscosity predicted using the proposed model is closely in line with the experimental values. In addition, if the component radius difference is greater than 0.03 nm at a certain temperature, the atomic size factor has a significant effect on the interaction of the binary liquid metal atoms. The proposed thermodynamic parameter Δ G also facilitates the study of other physical properties of liquid metals.

  3. Body of Knowledge (BOK) for Leadless Quad Flat No-Lead/bottom Termination Components (QFN/BTC) Package Trends and Reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2014-01-01

    Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially unique for radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions of multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because they show lower life than standard leaded package assemblies under thermal cycling exposures. Inspection of hidden solder joints for assuring quality is challenging and is similar to ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judiciously selecting and narrowing the follow-on packages for evaluation and testing, as well as for low-risk insertion in high-reliability applications.

  5. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method for capturing the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking, and tracking of key points. The image acquisition system is currently composed of three synchronized progressive-scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the interior orientation parameters, and the parameters modeling lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet, and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time step and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is then determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. 
The advantage of this tracking process is twofold: it can track natural points without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity, and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean displacement of all the trajectories inside the region. The tracked key points lead to a final result comparable to that of conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
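
    Forward ray intersection as used above can be sketched as a least-squares problem: find the point minimizing the summed squared distances to all camera rays. A minimal self-contained version, with toy rays rather than calibrated camera data:

```python
def triangulate(rays):
    """Least-squares forward ray intersection.

    rays: list of (origin, direction) 3-vectors as tuples; directions
    need not be normalized. Solves
    sum_i (I - d_i d_i^T / |d_i|^2) x = sum_i (I - d_i d_i^T / |d_i|^2) o_i.
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in rays:
        n2 = sum(c * c for c in d)
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j] / n2
                A[i][j] += m
                b[i] += m * o[j]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

# Two rays that intersect at (1, 1, 0):
p = triangulate([((0, 1, 0), (1, 0, 0)),
                 ((1, 0, 0), (0, 1, 0))])
print(p)
```

    With noisy rays that do not meet exactly, the same solve returns the point of closest overall approach, which is the useful case in practice.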

  6. Light Weight Biomorphous Cellular Ceramics from Cellulose Templates

    NASA Technical Reports Server (NTRS)

    Singh, Mrityunjay; Yee, Bo-Moon; Gray, Hugh R. (Technical Monitor)

    2003-01-01

    Biomorphous ceramics are a new class of materials that can be fabricated from cellulose templates derived from natural biopolymers. These biopolymers are abundantly available in nature and are produced by the photosynthesis process. The wood cellulose derived carbon templates have three-dimensional interconnectivity. A wide variety of non-oxide and oxide based ceramics have been fabricated by template conversion using infiltration and reaction-based processes. The cellular anatomy of the cellulose templates plays a key role in determining the processing parameters (pyrolysis, infiltration conditions, etc.) and the resulting ceramic materials. The processing approach, microstructure, and mechanical properties of the biomorphous cellular ceramics (silicon carbide and oxide based) are discussed.

  7. Reaction-mediated entropic effect on phase separation in a binary polymer system

    NASA Astrophysics Data System (ADS)

    Sun, Shujun; Guo, Miaocai; Yi, Xiaosu; Zhang, Zuoguang

    2017-10-01

    We present a computer simulation to study the phase separation behavior induced by polymerization in a binary system comprising polymer chains and reactive monomers. We examined the influence of the interaction parameter between components and of the monomer concentration on the reaction-induced phase separation. The simulation results demonstrate that increasing the interaction parameter (an enthalpic effect) accelerates phase separation, while the entropic effect plays a key role in the process of phase separation. Furthermore, scanning electron microscopy observations show morphologies identical to those found in the theoretical simulation. This study may enrich our comprehension of phase separation in polymer mixtures.
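
    As a hedged aside, the interplay between the interaction parameter (enthalpy) and chain length (entropy) described above is captured in closed form by Flory-Huggins theory; the spinodal condition below is standard textbook theory, not the paper's lattice simulation:

```python
def chi_spinodal(phi, n_chain):
    """Spinodal interaction parameter for a polymer/solvent blend from
    Flory-Huggins theory: chi_s = (1/(N*phi) + 1/(1-phi)) / 2.
    Mixtures with chi above chi_s are unstable and phase-separate
    spontaneously (spinodal decomposition)."""
    return 0.5 * (1.0 / (n_chain * phi) + 1.0 / (1.0 - phi))

# Longer chains (growing during polymerization) lower the entropy of
# mixing, so the spinodal chi drops and separation sets in earlier.
chi_s = chi_spinodal(0.2, n_chain=100)
print(chi_s)
```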

  8. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
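
    TSPROC expresses such operations in its own scripting language; as an illustration only, the kind of calculation involved (seasonal means and flow volumes from a daily series) can be sketched in Python with synthetic data:

```python
from datetime import date, timedelta

# Hypothetical daily streamflow record (m^3/s); in TSPROC these
# operations are written in its scripting language instead.
start = date(2000, 1, 1)
flows = [5.0 + (d % 30) * 0.1 for d in range(366)]  # synthetic data
series = [(start + timedelta(days=d), q) for d, q in enumerate(flows)]

def seasonal_mean(series, months):
    """Mean of values whose timestamps fall in the given months."""
    vals = [q for t, q in series if t.month in months]
    return sum(vals) / len(vals)

def flow_volume(series):
    """Total volume in m^3, assuming one value per day (86400 s)."""
    return sum(q for _, q in series) * 86_400

winter = seasonal_mean(series, {12, 1, 2})
summer = seasonal_mean(series, {6, 7, 8})
print(winter, summer, flow_volume(series))
```

    Statistics like these become observation groups in a PEST objective function, which is how the calibration can be focused on specific hydrograph components.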

  9. Towards Prognostics of Power MOSFETs: Accelerated Aging and Precursors of Failure

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Saxena, Abhinav; Wysocki, Philip; Saha, Sankalita; Goebel, Kai

    2010-01-01

    This paper presents research results dealing with power MOSFETs (metal-oxide-semiconductor field-effect transistors) within the prognostics and health management of electronics. Experimental results are presented for the identification of the on-resistance as a precursor to failure of devices with die-attach degradation as the failure mechanism. Devices are aged under power cycling in order to trigger die-attach damage. In situ measurements of key electrical and thermal parameters are collected throughout the aging process and further used for analysis and computation of the on-resistance parameter. Experimental results show that the devices experience die-attach damage and that the on-resistance captures the degradation process in such a way that it could be used for the development of prognostics algorithms (data-driven or physics-based).
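
    A minimal sketch of the precursor idea with assumed numbers: compute R_DS(on) from in-situ drain-source voltage and current measurements, and flag aging cycles where it drifts above a healthy baseline. The 5% drift criterion is an assumption for illustration, not the paper's threshold:

```python
def on_resistance(v_ds, i_d):
    """Drain-source on-resistance from in-situ measurements [Ohm]."""
    return v_ds / i_d

def degradation_flag(r_on_series, baseline, threshold=0.05):
    """Flag readings where R_on drifts more than `threshold`
    (fractional) above the healthy baseline; the 5% figure here
    is an illustrative assumption."""
    return [r / baseline - 1.0 > threshold for r in r_on_series]

# Synthetic aging trend: die-attach damage raises thermal resistance,
# hence junction temperature and R_DS(on).
baseline = on_resistance(0.10, 2.0)            # 50 mOhm healthy part
trend = [0.050, 0.051, 0.052, 0.055, 0.060]    # Ohm, per aging block
flags = degradation_flag(trend, baseline)
print(flags)
```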

  10. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Unlike traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
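
    The first of the three steps, sensitivity screening, might look like the following sketch; the objective and the parameter names (`entrainment`, `autoconv`) are hypothetical stand-ins for real cloud/convection parameters, and the 1% threshold is an assumption:

```python
def screen_sensitivity(objective, defaults, spans, rel_threshold=0.01):
    """One-at-a-time screening: perturb each parameter across its span
    and keep only those whose effect on the objective exceeds
    `rel_threshold` (fractional; value assumed). The survivors are
    what a downhill simplex (Nelder-Mead) step would then tune."""
    base = objective(defaults)
    sensitive = []
    for name, (lo, hi) in spans.items():
        effects = []
        for value in (lo, hi):
            trial = dict(defaults, **{name: value})
            effects.append(abs(objective(trial) - base))
        if max(effects) / max(abs(base), 1e-12) > rel_threshold:
            sensitive.append(name)
    return sensitive

# Toy 'model skill' objective: depends strongly on the entrainment
# rate, barely on the autoconversion threshold.
def skill(p):
    return (p["entrainment"] - 1.0) ** 2 + 1e-4 * p["autoconv"] + 1.0

params = {"entrainment": 0.5, "autoconv": 2.0}
spans = {"entrainment": (0.1, 1.5), "autoconv": (1.0, 4.0)}
print(screen_sensitivity(skill, params, spans))
```

    Dropping the insensitive parameters before the simplex step is exactly what shrinks the search space and speeds up convergence.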

  11. Optimisation study of a vehicle bumper subsystem with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Farkas, L.; Moens, D.; Donders, S.; Vandepitte, D.

    2012-10-01

    This paper deals with the design and optimisation for crashworthiness of a vehicle bumper subsystem, a key scenario for vehicle component design. Automotive manufacturers and suppliers have to find optimal design solutions for such subsystems that comply with the conflicting requirements of the regulatory bodies regarding functional performance (safety and repairability) and environmental impact (mass). For the bumper design challenge, an integrated methodology for multi-attribute design engineering of mechanical structures is set up. The integrated process captures the various tasks that are usually performed manually, thereby facilitating automated design iterations for optimisation. Subsequently, an optimisation process is applied that takes the effect of parametric uncertainties into account, such that the system-level failure possibility is acceptable. This optimisation process is referred to as possibility-based design optimisation and integrates the fuzzy FE analysis applied for the uncertainty treatment in crash simulations. This process is the counterpart of the reliability-based design optimisation used in a probabilistic context with statistically defined parameters (variabilities).

  12. The effect of thermal processing on microstructure and mechanical properties in a nickel-iron alloy

    NASA Astrophysics Data System (ADS)

    Yang, Ling

    The correlation between processing conditions, the resulting microstructure, and mechanical properties has been of interest in metallurgy for centuries. In this work, we investigated the effect of thermal processing parameters on the microstructure and on mechanical properties key to turbine rotor design, tensile yield strength and crack growth resistance, for the nickel-iron based superalloy Inconel 706. The first step in designing the experiments was to find parameter ranges for thermal processing. The physical metallurgy of superalloys was combined with finite element analysis to estimate variations in thermal histories for a large Alloy 706 forging, and the results were adopted for the design of experiments. Through this systematic study, a correlation was found between the processing parameters and the microstructure. Five different types of grain boundaries were identified by optical metallography, fractography, and transmission electron microscopy, and they were found to be associated with eta precipitation at the grain boundaries. The proportions of boundary types and the eta size, spacing, and angle with respect to the grain boundary were found to depend on the processing parameters. Differences in grain-interior precipitates were also identified and correlated with processing conditions. Further, a strong correlation between microstructure and mechanical properties was identified. The grain boundary precipitates affect time-dependent crack propagation resistance, and different types of boundaries offer different levels of resistance. Grain-interior precipitates were correlated with tensile yield strength. A strong environmental effect on time-dependent crack propagation resistance was also found, and the sensitivity to environmental damage is microstructure dependent. 
Microstructures with eta phase decorating the grain boundaries, obtained through controlled processing parameters, are more resistant to environmental damage through oxygen embrittlement than material without eta phase on the grain boundaries. Efforts were made to explore the mechanisms by which thermal processing improves time-dependent crack propagation resistance; several mechanisms were identified in both the environment-dependent and environment-independent categories, and they were ranked by their contributions to crack propagation.

  13. The Role of Parvalbumin, Sarcoplasmatic Reticulum Calcium Pump Rate, Rates of Cross-Bridge Dynamics, and Ryanodine Receptor Calcium Current on Peripheral Muscle Fatigue: A Simulation Study

    PubMed Central

    Neumann, Verena

    2016-01-01

    A biophysical model of the excitation-contraction pathway, which has previously been validated for slow-twitch and fast-twitch skeletal muscles, is employed to investigate key biophysical processes leading to peripheral muscle fatigue. Special emphasis is placed on investigating how the model's original parameter sets can be interpolated such that realistic behaviour with respect to contraction time and fatigue progression can be obtained for a continuous distribution of the model's parameters across the muscle units, as found for the functional properties of muscles. The parameters are divided into 5 groups describing (i) the sarcoplasmatic reticulum calcium pump rate, (ii) the cross-bridge dynamics rates, (iii) the ryanodine receptor calcium current, (iv) the rates of binding of magnesium and calcium ions to parvalbumin and the corresponding dissociations, and (v) the remaining processes. The simulations reveal that the first two parameter groups are sensitive to contraction time but not fatigue, the third parameter group affects both considered properties, and the fourth parameter group is only sensitive to fatigue progression. Hence, within the scope of the underlying model, further experimental studies should investigate parvalbumin dynamics and the ryanodine receptor calcium current to enhance the understanding of peripheral muscle fatigue. PMID:27980606

  14. Discrete Event Simulation Modeling and Analysis of Key Leader Engagements

    DTIC Science & Technology

    2012-06-01

    to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK, which give probabilities for being corrupt, having key leader... HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component... whether the local Green player has resource critical knowledge by using the parameter pRK. It schedules an EndResourceKnowledgeRequest event, passing

  15. Integrating Materials, Manufacturing, Design and Validation for Sustainability in Future Transport Systems

    NASA Astrophysics Data System (ADS)

    Price, M. A.; Murphy, A.; Butterfield, J.; McCool, R.; Fleck, R.

    2011-05-01

    The predictive methods currently used for material specification, component design and the development of manufacturing processes need to evolve beyond the current 'metal centric' state of the art if advanced composites are to realise their potential in delivering sustainable transport solutions. There are, however, significant technical challenges associated with this process. Deteriorating environmental, political, economic and social conditions across the globe have resulted in unprecedented pressures to improve the operational efficiency of the manufacturing sector generally and to change perceptions regarding the environmental credentials of transport systems in particular. There is a need to apply new technologies and develop new capabilities to ensure commercial sustainability in the face of twenty-first-century economic and climatic conditions as well as transport market demands. A major technology gap exists between design, analysis and manufacturing processes in both the OEMs and the smaller companies that make up the SME-based supply chain. As regulatory requirements align with environmental needs, manufacturers are increasingly responsible for the broader lifecycle aspects of vehicle performance. These include not only manufacture and supply but also disposal and re-use or re-cycling. In order to make advances in the reduction of emissions coupled with improved economic efficiency through the provision of advanced lightweight vehicles, four key challenges are identified as follows: material systems, manufacturing systems, integrated design methods using digital manufacturing tools, and validation systems. This paper presents a project which has been designed to address these four key issues, using at its core a digital framework for the creation and management of key parameters related to the lifecycle performance of thermoplastic composite parts and structures. 
It aims to provide capability for the proposition, definition, evaluation and demonstration of advanced lightweight structures for new-generation vehicles in the context of whole-life performance parameters.

  16. Efficient calculation of higher-order optical waveguide dispersion.

    PubMed

    Mores, J A; Malheiros-Silveira, G N; Fragnito, H L; Hernández-Figueroa, H E

    2010-09-13

    An efficient numerical strategy to compute the higher-order dispersion parameters of optical waveguides is presented. For the first time to our knowledge, a systematic study of the errors involved in the numerical calculation of higher-order dispersions is made, showing that the present strategy can accurately model those parameters. The strategy combines a full-vectorial finite element modal solver and a proper finite difference differentiation algorithm. Its performance has been carefully assessed through the analysis of several key geometries. In addition, the optimization of those higher-order dispersion parameters can also be carried out by coupling a genetic algorithm to the present scheme, as shown here through the design of a photonic crystal fiber suitable for parametric amplification applications.
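
    The finite-difference differentiation step can be illustrated on a synthetic propagation constant with known derivatives. The central-difference stencils below are standard and not necessarily the authors' exact choice; in practice beta(omega) would come from the modal solver, not a formula:

```python
def central_derivatives(beta, h):
    """Second and third derivatives of beta(omega) sampled on a
    uniform grid (spacing h) via central finite differences -- the
    kind of differentiation step paired with a modal solver.
    Returns (beta2, beta3) at the middle of a 5-point stencil."""
    b = beta
    m = len(b) // 2
    beta2 = (b[m - 1] - 2 * b[m] + b[m + 1]) / h**2
    beta3 = (b[m + 2] - 2 * b[m + 1] + 2 * b[m - 1] - b[m - 2]) / (2 * h**3)
    return beta2, beta3

# Check on a cubic with known derivatives: beta2 = 4 + 6*omega, beta3 = 6.
h = 0.01
omega0 = 1.0
grid = [omega0 + k * h for k in (-2, -1, 0, 1, 2)]
beta = [2 * w**2 + w**3 for w in grid]
beta2, beta3 = central_derivatives(beta, h)
print(beta2, beta3)  # ~10.0 and ~6.0
```

    Both stencils are exact on polynomials up to cubic, which is what makes a synthetic test like this a useful error check before applying them to solver output.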

  17. Successes and Challenges in Linking Observations and Modeling of Marine and Terrestrial Cryospheric Processes

    NASA Astrophysics Data System (ADS)

    Herzfeld, U. C.; Hunke, E. C.; Trantow, T.; Greve, R.; McDonald, B.; Wallin, B.

    2014-12-01

    Understanding the state of the cryosphere and its relationship to other components of the Earth system requires both models of geophysical processes and observations of geophysical properties and processes; however, linking observations and models is far from trivial. This paper looks at examples from sea-ice and land-ice model-observation linkages to examine some approaches, challenges and solutions. In a sea-ice example, ice deformation is analyzed as a key process that indicates fundamental changes in the Arctic sea-ice cover. Simulation results from the Los Alamos Sea-Ice Model CICE, which is also the sea-ice component of the Community Earth System Model (CESM), are compared to parameters indicative of deformation as derived from mathematical analysis of remote sensing data. Data include altimeter, micro-ASAR and image data from manned and unmanned aircraft campaigns (NASA OIB and Characterization of Arctic Sea Ice Experiment, CASIE). The key problem in linking data and model results is the derivation of matching parameters on both the model and observation sides. For terrestrial glaciology, we include an example of a surge process in a glacier system and an example of a dynamic ice sheet model for Greenland. To investigate the surge of the Bering Bagley Glacier System, we use numerical forward modeling experiments and, on the data analysis side, a connectionist approach to analyze crevasse provinces. In the Greenland ice sheet example, we look at the influence of ice surface and bed topography, as derived from remote sensing data, on results from a dynamic ice sheet model.

  18. Long-distance measurement-device-independent quantum key distribution with coherent-state superpositions.

    PubMed

    Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B

    2014-09-15

    Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable against various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation; the Chernoff bound provides good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.
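
    For flavor, the simpler Hoeffding-style two-sided bound on an observed rate is sketched below with assumed counts; the Chernoff bound used in the Letter is tighter in the rare-detection regime of long-distance MDI-QKD, but the finite-data bookkeeping is similar:

```python
import math

def hoeffding_interval(k, n, eps):
    """Two-sided Hoeffding confidence interval for a Bernoulli rate:
    the true rate lies in p_hat +/- sqrt(ln(2/eps) / (2n)) except
    with probability at most eps. (A multiplicative Chernoff bound
    tightens this when the rate is small, as in long-distance QKD.)"""
    p_hat = k / n
    half = math.sqrt(math.log(2.0 / eps) / (2.0 * n))
    return max(0.0, p_hat - half), min(1.0, p_hat + half)

# Illustrative numbers: 5e3 detections out of 1e7 pulses, failure
# probability 1e-10 (assumed values, not the Letter's data).
lo, hi = hoeffding_interval(5_000, 10_000_000, 1e-10)
print(lo, hi)
```

    At this detection rate the additive Hoeffding half-width swamps the estimate (the lower bound clips to zero), which is precisely why rate-dependent Chernoff-type bounds matter at long distance.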

  19. A R-Shiny Based Phenology Analysis System and Case Study Using Digital Camera Dataset

    NASA Astrophysics Data System (ADS)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photographs from digital cameras are a useful and abundant data source for phenological analysis. Processing and mining phenological data is still a major challenge: there is no single tool or universal solution for big-data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny based web application for extracting and analyzing vegetation phenological parameters. Its main functions include visualization of the phenological site distribution, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. The long-term observational photography data from the Freemanwood site in 2013 are processed by this system as an example. The results show that: (1) the system is capable of analyzing large data volumes using a distributed framework; (2) combining multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different combinations in particular study areas. For vegetation with a single growth peak, the double-logistic model is suitable for fitting the growth trajectory, while vegetation with multiple growth peaks is better fitted with a spline method.
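
    A sketch of the double-logistic idea: generate a single-peak greenness trajectory and read start/end of season off the 50%-amplitude crossings. The parameter names and values, and the crossing criterion, are illustrative assumptions, not the system's implementation:

```python
import math

def double_logistic(t, base, amp, k1, t_up, k2, t_down):
    """Double-logistic greenness model for a single growth peak;
    t_up/t_down are the rise/fall inflection dates (names assumed)."""
    rise = 1.0 / (1.0 + math.exp(-k1 * (t - t_up)))
    fall = 1.0 / (1.0 + math.exp(-k2 * (t - t_down)))
    return base + amp * (rise - fall)

def crossing_days(days, values, level):
    """Days where the curve crosses `level` (e.g. 50% of amplitude),
    a simple way to read SOS/EOS off the fitted trajectory."""
    hits = []
    for d0, d1, v0, v1 in zip(days, days[1:], values, values[1:]):
        if (v0 - level) * (v1 - level) < 0:
            hits.append(d0 + (level - v0) / (v1 - v0) * (d1 - d0))
    return hits

days = list(range(1, 366))
curve = [double_logistic(t, 0.30, 0.25, 0.15, 120, 0.10, 280) for t in days]
sos, eos = crossing_days(days, curve, 0.30 + 0.125)  # 50% amplitude
print(sos, eos)  # near day 120 and day 280
```

    A multi-peak trajectory would cross the mid level more than twice, which is why a spline is the better fit in that case.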

  20. Airborne Hyperspectral Imaging of Seagrass and Coral Reef

    NASA Astrophysics Data System (ADS)

    Merrill, J.; Pan, Z.; Mewes, T.; Herwitz, S.

    2013-12-01

    This talk presents the project preparation, airborne data collection, data pre-processing and comparative analysis of a series of airborne hyperspectral projects focused on mapping seagrass and coral reef communities in the Florida Keys. As part of a series of large collaborative projects funded by the NASA ROSES program and the Florida Fish and Wildlife Conservation Commission and administered by the NASA UAV Collaborative, airborne hyperspectral datasets were collected over six sites in the Florida Keys in May 2012, October 2012 and May 2013 by Galileo Group, Inc. using a manned Cessna 172 and NASA's SIERRA Unmanned Aerial Vehicle. Precise solar and tidal data were used to calculate airborne collection parameters and develop flight plans designed to optimize data quality. Two independent Visible and Near-Infrared (VNIR) hyperspectral imaging systems covering 400-1000 nm were used to collect imagery over six Areas of Interest (AOIs). Multiple collections were performed over all sites within strict solar windows in the mornings and afternoons. Independently developed pre-processing algorithms were employed to radiometrically correct, synchronize and georectify individual flight lines, which were then combined into color-balanced mosaics for each Area of Interest. The use of two different hyperspectral sensors as well as environmental variations between collections allows for the comparative analysis of data quality as well as the iterative refinement of flight planning and collection parameters.

  1. Determination of key diffusion and partition parameters and their use in migration modelling of benzophenone from low-density polyethylene (LDPE) into different foodstuffs.

    PubMed

    Maia, Joaquim; Rodríguez-Bernaldo de Quirós, Ana; Sendón, Raquel; Cruz, José Manuel; Seiler, Annika; Franz, Roland; Simoneau, Catherine; Castle, Laurence; Driffield, Malcolm; Mercea, Peter; Oldring, Peter; Tosa, Valer; Paseiro, Perfecto

    2016-01-01

    The mass transport process (migration) of a model substance, benzophenone (BZP), from LDPE into selected foodstuffs at three temperatures was studied. A mathematical model based on Fick's Second Law of Diffusion was used to simulate the migration process and a good correlation between experimental and predicted values was found. The acquired results contribute to a better understanding of this phenomenon and the parameters so-derived were incorporated into the migration module of the recently launched FACET tool (Flavourings, Additives and Food Contact Materials Exposure Tool). The migration tests were carried out at different time-temperature conditions, and BZP was extracted from LDPE and analysed by HPLC-DAD. With all data, the parameters for migration modelling (diffusion and partition coefficients) were calculated. Results showed that the diffusion coefficients (within both the polymer and the foodstuff) are greatly affected by the temperature and food's physical state, whereas the partition coefficient was affected significantly only by food characteristics, particularly fat content.
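
    A minimal numerical sketch of migration under Fick's second law, assuming a well-stirred food acting as a perfect sink (a simplification; the real model also carries a partition coefficient), cross-checked against the standard short-time solution. The D, L and t values are illustrative orders of magnitude, not the study's fitted parameters:

```python
import math

def migrated_fraction(D, L, t_end, nx=60, safety=0.4):
    """Explicit finite-difference solution of Fick's second law in a
    polymer film of thickness L [m], one face sealed, the other in
    contact with a well-stirred food acting as a perfect sink.
    Returns the migrated fraction of the initial load at t_end [s]."""
    dx = L / nx
    dt = safety * dx * dx / (2 * D)        # under the explicit stability limit
    steps = int(t_end / dt) + 1
    dt = t_end / steps
    c = [1.0] * (nx + 1)                   # uniform initial concentration
    for _ in range(steps):
        c[nx] = 0.0                        # sink at the food interface
        lam = D * dt / (dx * dx)
        new = c[:]
        for i in range(1, nx):
            new[i] = c[i] + lam * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[0] = new[1]                    # zero-flux (sealed) face
        new[nx] = 0.0
        c = new
    remaining = sum(c) / (nx + 1)
    return 1.0 - remaining

# Short times follow M_t/M_inf ~= 2*sqrt(D*t/pi)/L (sanity check):
D, L, t = 1e-13, 50e-6, 3600.0            # illustrative orders for LDPE
frac = migrated_fraction(D, L, t)
print(frac, 2 * math.sqrt(D * t / math.pi) / L)
```

    Temperature enters such a model through D (and the food's physical state through the boundary condition), which is the structure the FACET migration module parameterizes.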

  2. Parameter-induced uncertainty quantification of a regional N2O and NO3 inventory using the biogeochemical model LandscapeDNDC

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Klatt, Steffen; Kraus, David; Werner, Christian; Ruiz, Ignacio Santa Barbara; Kiese, Ralf; Butterbach-Bahl, Klaus

    2014-05-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional and national scales and are outlined as the most advanced methodology (Tier 3) for national emission inventories in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycles of terrestrial ecosystems such as arable land and grasslands and are thus thought to be widely applicable at various spatial and temporal scales. The high complexity of ecosystem processes mirrored by such models requires a large number of model parameters. Many of these are lumped parameters describing simultaneously the effect of environmental drivers on, e.g., microbial community activity and individual processes. Thus, the precise quantification of true parameter states is often difficult or even impossible. As a result, model uncertainty originates not solely from input uncertainty but is also subject to parameter-induced uncertainty. In this study we quantify regional parameter-induced model uncertainty in nitrous oxide (N2O) emissions and nitrate (NO3) leaching from arable soils of Saxony (Germany) using the biogeochemical model LandscapeDNDC. For this we calculate a regional inventory using a joint parameter distribution for key parameters describing microbial C and N turnover processes as obtained from a Bayesian calibration study. We representatively sampled 400 different parameter vectors from the discrete joint parameter distribution comprising approximately 400,000 parameter combinations and used these to calculate 400 individual realizations of the regional inventory. The spatial domain (represented by 4042 polygons) is set up with spatially explicit soil and climate information and a region-typical 3-year crop rotation consisting of winter wheat, rapeseed, and winter barley. 
The average N2O emission from arable soils in the state of Saxony across all 400 realizations was 1.43 ± 1.25 [kg N / ha] with a median value of 1.05 [kg N / ha]. Using the default IPCC emission factor approach (Tier 1) for direct emissions reveals a higher average N2O emission of 1.51 [kg N / ha] due to fertilizer use. In the regional uncertainty quantification, the 20% likelihood range for N2O emissions is 0.79 - 1.37 [kg N / ha] (50% likelihood: 0.46 - 2.05 [kg N / ha]; 90% likelihood: 0.11 - 4.03 [kg N / ha]). Respective quantities were calculated for nitrate leaching. The method has proven its applicability for quantifying parameter-induced uncertainty of simulated regional greenhouse gas emission and nitrate leaching inventories using process-based biogeochemical models.
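
    Likelihood ranges of this kind are central intervals over the ensemble of realizations; the computation can be sketched as follows, with a toy stand-in for the 400 inventory realizations:

```python
def central_interval(samples, likelihood):
    """Central interval containing `likelihood` of the realizations
    (e.g. 0.5 -> the 25th-75th percentile range), via linear
    interpolation between order statistics."""
    xs = sorted(samples)
    def quantile(q):
        pos = q * (len(xs) - 1)
        lo, frac = int(pos), pos - int(pos)
        return xs[lo] + frac * (xs[min(lo + 1, len(xs) - 1)] - xs[lo])
    tail = (1.0 - likelihood) / 2.0
    return quantile(tail), quantile(1.0 - tail)

# Toy stand-in for 400 regional inventory realizations [kg N / ha]:
emissions = [0.5 + 3.0 * (k / 399.0) ** 2 for k in range(400)]
lo50, hi50 = central_interval(emissions, 0.5)
lo90, hi90 = central_interval(emissions, 0.9)
print(lo50, hi50, lo90, hi90)
```

    Higher likelihood levels give nested, wider intervals, matching the pattern of the 20%, 50% and 90% ranges quoted above.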

  3. Radiation environment study of near space in China area

    NASA Astrophysics Data System (ADS)

    Fan, Dongdong; Chen, Xingfeng; Li, Zhengqiang; Mei, Xiaodong

    2015-10-01

    Aerospace activity has become a research hotspot for the world's major aviation countries. The study of solar radiation is a prerequisite for aerospace activity, but the lack of observations in the near-space layer is a barrier. Based on reanalysis data, key input parameters are determined and simulation experiments are carried out separately to simulate the downward solar radiation and ultraviolet radiation transfer processes of near space over China. Results show that the atmospheric influence on the solar and ultraviolet radiation transfer process has regional characteristics. Because key factors such as ozone are affected by atmospheric action in their density as well as their horizontal and vertical distribution, meteorological data for the stratosphere need to be considered, and near space over China is divided according to its activity features. Simulated results show that solar and ultraviolet radiation varies with time, latitude and ozone density and exhibits complicated variation characteristics.

  4. A perturbation method to the tent map based on Lyapunov exponent and its application

    NASA Astrophysics Data System (ADS)

    Cao, Lv-Chen; Luo, Yu-Ling; Qiu, Sen-Hui; Liu, Jun-Xiu

    2015-10-01

    Perturbation imposed on a chaotic system is an effective way to maintain its chaotic features. A novel parameter perturbation method for the tent map based on the Lyapunov exponent is proposed in this paper. The pseudo-random sequence generated by the tent map is sent to another chaotic function, the Chebyshev map, for post-processing. If the output value of the Chebyshev map falls into a certain range, it is sent back to replace the parameter of the tent map. As a result, the parameter of the tent map keeps changing dynamically. The statistical analysis and experimental results prove that the disturbed tent map has a highly random distribution and achieves the good cryptographic properties of a pseudo-random sequence. It thereby weakens the strong correlation caused by finite precision and effectively compensates for the dynamics degradation of the digital chaotic system. Project supported by the Guangxi Provincial Natural Science Foundation, China (Grant No. 2014GXNSFBA118271), the Research Project of Guangxi University, China (Grant No. ZD2014022), the Fund from Guangxi Provincial Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS14-04), the Fund from the Guangxi Provincial Key Laboratory of Wireless Wideband Communication & Signal Processing, China (Grant No. GXKL0614205), the Education Development Foundation and the Doctoral Research Foundation of Guangxi Normal University, the State Scholarship Fund of China Scholarship Council (Grant No. [2014]3012), and the Innovation Project of Guangxi Graduate Education, China (Grant No. YCSZ2015102).
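    A minimal sketch of the feedback scheme described above: the tent-map output drives a Chebyshev map, and when the Chebyshev value lands in a trigger window it is rescaled to replace the tent-map parameter. The window bounds, rescaling and constants below are illustrative assumptions; the paper selects them via the Lyapunov exponent.

```python
import math

def perturbed_tent_sequence(x0=0.37, p0=0.499, k=4, n=1000):
    """Tent map whose parameter p is dynamically perturbed by the
    post-processing Chebyshev map (trigger window is an assumption)."""
    xs, x, p = [], x0, p0
    for _ in range(n):
        # tent map with parameter p in (0, 1)
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        xs.append(x)
        # feed the tent output through a degree-k Chebyshev map on [-1, 1]
        y = math.cos(k * math.acos(max(-1.0, min(1.0, 2.0 * x - 1.0))))
        # feedback: a value in the trigger window replaces the parameter
        if 0.2 < y < 0.8:
            p = 0.25 + 0.5 * y   # keeps p inside (0.35, 0.65)
    return xs

seq = perturbed_tent_sequence()
print(f"{len(seq)} samples in [{min(seq):.3f}, {max(seq):.3f}]")
```

    Because p keeps moving, the orbit never settles into the short periodic cycles that a fixed-parameter tent map exhibits under finite precision.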

  5. A national-scale analysis of the impacts of drought on water quality in UK rivers

    NASA Astrophysics Data System (ADS)

    Coxon, G.; Howden, N. J. K.; Freer, J. E.; Whitehead, P. G.; Bussi, G.

    2015-12-01

    Impacts of droughts on water quality are difficult to quantify but are essential to manage ecosystems and maintain public water supply. During drought, river water quality is significantly changed by increased residence times, reduced dilution and enhanced biogeochemical processes. However, the severity of the impacts varies between catchments and depends on multiple factors, including the sensitivity of the river to drought conditions, anthropogenic influences in the catchment and different delivery patterns of key nutrient, contaminant and mineral sources. A key constraint is the availability of data for key water quality parameters such that the impacts of drought periods on certain determinands can be identified. We use national-scale water quality monitoring data to investigate the impacts of drought periods on water quality in the United Kingdom (UK). The UK Water Quality Sampling Harmonised Monitoring Scheme (HMS) dataset consists of >200 UK sites with weekly to monthly sampling of many water quality variables over the past 40 years. This covers several major UK droughts, in 1975-1976, 1983-1984, 1989-1992, 1995 and 2003, which differ in severity and in spatial and temporal extent, and hence in their temporal impact on water quality. Several key water quality parameters, including water temperature, nitrate, dissolved organic carbon, orthophosphate, chlorophyll and pesticides, were selected from the database. These were chosen based on their availability for many of the sites, their high sampling resolution and their importance to the drinking water function and ecological status of the river. The water quality time series were then analysed to investigate whether water quality during droughts deviated significantly from non-drought periods, and to examine how the results varied spatially, for different drought periods and for different water quality parameters. 
Our results show that there is no simple conclusion as to the effects of drought on water quality in UK rivers; impacts are diverse both in terms of timing, magnitude and duration. We consider several scenarios in which management interventions may alleviate water quality pressures, and discuss how the many interacting factors need to be better characterised to support detailed mechanistic models to improve our process understanding.
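    One simple way to test whether a determinand deviates during drought, in the spirit of the analysis above, is to compare drought and non-drought samples with a permutation test. The data below are synthetic (the HMS records are not reproduced here), and a permutation test on the mean is only one of several defensible choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic weekly nitrate samples [mg/l] for one site (illustrative only):
# reduced dilution during drought shifts concentrations upward
non_drought = rng.normal(5.0, 1.0, size=300)
drought     = rng.normal(6.2, 1.4, size=60)

def permutation_test(a, b, n_perm=2000, rng=rng):
    """Two-sided permutation test on the difference in sample means."""
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # in-place reshuffle
        diff = abs(pooled[:a.size].mean() - pooled[a.size:].mean())
        if diff >= observed:
            count += 1
    return count / n_perm

p = permutation_test(non_drought, drought)
print(f"mean shift {drought.mean() - non_drought.mean():.2f} mg/l, p ~ {p:.4f}")
```

    Repeating this per site, per drought and per determinand gives the kind of spatially resolved deviation map the study describes.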

  6. A Toolkit to Study Sensitivity of the Geant4 Predictions to the Variations of the Physics Model Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fields, Laura; Genser, Krzysztof; Hatcher, Robert

    Geant4 is the leading detector simulation toolkit used in high energy physics to design detectors and to optimize calibration and reconstruction software. It employs a set of carefully validated physics models to simulate interactions of particles with matter across a wide range of interaction energies. These models, especially the hadronic ones, rely largely on directly measured cross-sections and phenomenological predictions with physically motivated parameters estimated by theoretical calculation or measurement. Because these models are tuned to cover a very wide range of possible simulation tasks, they may not always be optimized for a given process or a given material. This raises several critical questions, e.g. how sensitive Geant4 predictions are to the variations of the model parameters, or what uncertainties are associated with a particular tune of a Geant4 physics model, or a group of models, or how to consistently derive guidance for Geant4 model development and improvement from a wide range of available experimental data. We have designed and implemented a comprehensive, modular, user-friendly software toolkit to study and address such questions. It allows one to easily modify parameters of one or several Geant4 physics models involved in the simulation, and to perform collective analysis of multiple variants of the resulting physics observables of interest and comparison against a variety of corresponding experimental data. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. flexible run-time configurable workflow, comprehensive bookkeeping, easy to expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented and illustrated with results obtained with Geant4 key hadronic models.
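    The workflow the toolkit supports (vary a physics-model parameter, regenerate the observable, score each variant against data) can be caricatured in a few lines. Here `simulate_observable` is a hypothetical stand-in for a Geant4 run and the scoring metric is deliberately crude; the real toolkit handles run-time configuration, bookkeeping and analysis far more thoroughly.

```python
import random

random.seed(4)

def simulate_observable(theta, n=4000):
    """Hypothetical stand-in for a Geant4 run: samples of an observable
    whose location depends on a model parameter theta."""
    return [random.gauss(10.0 * theta, 2.0) for _ in range(n)]

data_mean = 10.0 * 1.05   # pretend measured value (assumption)

# vary the parameter around its default and score each variant
variants = [0.9, 1.0, 1.05, 1.1, 1.2]
scores = {}
for th in variants:
    sample = simulate_observable(th)
    m = sum(sample) / len(sample)
    scores[th] = (m - data_mean) ** 2   # crude goodness-of-fit metric

best = min(scores, key=scores.get)
print(f"best-fitting variant: theta = {best}")
```

    The spread of the observable across variants is exactly the parameter-induced sensitivity the toolkit is built to quantify.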

  7. Research on the Mean Logistic Delay Time of the Development Phase

    NASA Astrophysics Data System (ADS)

    Na, Hou; Yi, Li; Wang, Yi-Gang; Liu, Jun-jie; Bo, Zhang; Lv, Xue-Zhi

    MLDT is a key parameter affecting operational availability (Ao) through equipment design, operation and support management. In the operation process, the questions are how to strengthen support management, lay out support resources rationally, and provide the support resources needed for equipment maintenance, in order to avoid or reduce support delay and keep MLDT within the level that Ao requires. How to coordinate this with the RMS of the equipment is a question in urgent need of a solution.

  8. Subsurface Damage in Polishing Process of Silicon Carbide Ceramic

    PubMed Central

    Gu, Yan; Zhu, Wenhui; Lin, Jieqiong; Lu, Mingming; Kang, Mingshuo

    2018-01-01

    Subsurface damage (SSD) in the polishing process of silicon carbide (SiC) ceramic presents one of the most significant challenges for practical applications. In this study, theoretical models of SSD depth are established on the basis of the material removal mechanism and indentation fracture mechanics in the SiC ceramic polishing process. In addition, three-dimensional (3D) models of single-grit polishing are developed using finite element simulation, and the dynamic effects of different process parameters on SSD depth are analyzed. The results demonstrate that material removal was mainly in brittle mode when the cutting depth was larger than the critical depth of the brittle material. The SSD depth increased as the polishing depth and abrasive grain size increased, and decreased as the polishing speed increased. The experimental results showed good agreement with the theoretical simulation results in terms of SSD depth as a function of polishing depth, spindle speed, and abrasive grain size. This study provides mechanistic insight into the dependence of SSD on key operational parameters in the polishing process of SiC ceramic. PMID:29584694

  9. Novel water-air circulation quenching process for AISI 4140 steel

    NASA Astrophysics Data System (ADS)

    Zheng, Liyun; Zheng, Dawei; Zhao, Lixin; Wang, Lihui; Zhang, Kai

    2013-11-01

    AISI 4140 steel is usually used after quenching and tempering. During the heat treatment process in industrial production there are problems such as quenching cracks with water cooling and low hardness with oil quenching. A water-air circulation quenching process can solve both the quench-cracking problem of water quenching and the drawbacks of oil quenching, which is costly, flammable, unsafe and often insufficient to obtain the required hardness. The control of the water-cooling and air-cooling times is a key factor in the process. This paper focuses on the quenching temperature, water-air cycle time and cycle index to prevent cracking of AISI 4140 steel. The optimum heat treatment parameters to achieve a good match of strength and toughness of AISI 4140 steel were obtained by repeated adjustment of the water-air circulation quenching process parameters. The tensile strength, Charpy impact energy at -10 °C and hardness of the heat-treated AISI 4140 steel after quenching and tempering were approximately 1098 MPa, 67.5 J and 316 HB, respectively.
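    The effect of the key control variables named above (water time, air time, cycle index) can be explored with a lumped Newton-cooling sketch of the alternating water-air cycle. The heat-transfer coefficients and cycle times below are illustrative assumptions, not measured values for AISI 4140.

```python
import math

def quench_temperature(t_aust=850.0, t_water=30.0, t_air=25.0,
                       k_water=0.08, k_air=0.005,
                       water_s=4, air_s=6, cycles=5):
    """Lumped Newton-cooling sketch of water-air circulation quenching:
    the workpiece alternates between a fast-cooling water stage and a
    slow, crack-relieving air stage (all coefficients illustrative)."""
    temps = [t_aust]
    T = t_aust
    for _ in range(cycles):
        for steps, k, t_env in ((water_s, k_water, t_water),
                                (air_s, k_air, t_air)):
            for _ in range(steps):
                T = t_env + (T - t_env) * math.exp(-k)  # one 1 s step
                temps.append(T)
    return temps

curve = quench_temperature()
print(f"{curve[0]:.0f} C -> {curve[-1]:.0f} C after 5 cycles")
```

    Lengthening the water stage steepens the cooling curve (more hardness, more crack risk); lengthening the air stage does the opposite, which is the trade-off the paper tunes experimentally.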

  10. A crunch on thermocompression flip chip bonding

    NASA Astrophysics Data System (ADS)

    Suppiah, Sarveshvaran; Ong, Nestor Rubio; Sauli, Zaliman; Sarukunaselan, Karunavani; Alcain, Jesselyn Barro; Mahmed, Norsuria; Retnasamy, Vithyacharan

    2017-09-01

    This study discusses the evolution and important findings, critical technical challenges, solutions and bonding equipment of flip chip thermocompression bonding (TCB). Bonding force, temperature and time are the key bonding parameters that need to be tuned, based on the research done by others. TCB technology works well with both pre-applied underfill and flux (the latter still under development). Lower throughput coupled with higher processing cost is one example of the challenges in TCB technology. The paper concludes with a brief description of the current equipment used in the thermocompression process.

  11. Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables

    DTIC Science & Technology

    2013-06-01

    18th ICCRTS. Agility can be conceptualized at a number of different levels, for instance at the team level.

  12. Application of a data assimilation method via an ensemble Kalman filter to reactive urea hydrolysis transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juxiu Tong; Bill X. Hu; Hai Huang

    2014-03-01

    With the growing importance of water resources in the world, remediation of anthropogenic contamination involving reactive solute transport becomes ever more important. A good understanding of reactive rate parameters such as kinetic parameters is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reactions and limited available data. To find a method to obtain the reactive rate parameters for reactive urea hydrolysis transport modeling and to predict the chemical concentrations more accurately, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the modeling prediction. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we conclude that the data assimilation method via the EnKF efficiently improves the chemical reactive rate parameters and, at the same time, the solute concentration prediction. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also solved in this study.
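    The heart of the method is using ensemble covariances between an uncertain rate parameter and the simulated concentration to update both when a concentration is observed. A short stochastic-EnKF sketch of that single analysis step follows; the first-order decay model, prior and noise levels are illustrative assumptions, not the paper's urea hydrolysis chemistry, and the constraint step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_std, H):
    """One stochastic EnKF analysis step; ensemble columns are members
    of the augmented state [rate parameter k; concentration c]."""
    n = ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)
    # Kalman gain from ensemble covariances
    P_yy = HA @ HA.T / (n - 1) + obs_std**2 * np.eye(H.shape[0])
    P_xy = A @ HA.T / (n - 1)
    K = P_xy @ np.linalg.inv(P_yy)
    # perturbed observations, one per member
    obs_pert = obs[:, None] + obs_std * rng.standard_normal((H.shape[0], n))
    return ensemble + K @ (obs_pert - HX)

# prior ensemble: uncertain first-order rate k, concentration c = c0*exp(-k*t)
true_k, c0, t = 0.30, 3.0, 5.0
k_prior = rng.normal(0.5, 0.15, 200)          # biased prior on k
c_prior = c0 * np.exp(-k_prior * t)
ens = np.vstack([k_prior, c_prior])

H = np.array([[0.0, 1.0]])                    # only concentration is observed
obs = np.array([c0 * np.exp(-true_k * t)])
post = enkf_update(ens, obs, 0.02, H)
print("prior k:", k_prior.mean(), "posterior k:", post[0].mean())
```

    Because k and c are correlated across the ensemble, observing c alone pulls the unobserved rate parameter toward its true value, which is the calibration mechanism the abstract describes.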

  13. CHAM: weak signals detection through a new multivariate algorithm for process control

    NASA Astrophysics Data System (ADS)

    Bergeret, François; Soual, Carole; Le Gratiet, B.

    2016-10-01

    Derivative technologies based on core CMOS processes are significantly aggressive in terms of design rules and process control requirements. The process control plan is derived from Process Assumption (PA) calculations, which result in design rules based on known process variability capabilities, taking into account enough margin to be safe not only for yield but especially for reliability. Even though process assumptions are calculated with a 4-sigma margin on known process capability, efficient and competitive designs challenge the process, especially for derivative technologies at the 40 and 28 nm nodes. For wafer fab process control, PAs are translated into monovariate control charts (layer1 CD, layer2 CD, layer2-to-layer1 overlay, layer3 CD, etc.) with appropriate specifications and control limits, which together secure the silicon. This has worked fine so far, but such a system is not really sensitive to weak signals coming from interactions of multiple key parameters (high layer2 CD combined with high layer3 CD, for example). CHAM is a software package using an advanced statistical algorithm specifically designed to detect small signals, especially when there are many parameters to control and when the parameters can interact to create yield issues. In this presentation we first present the CHAM algorithm, then a case study on critical dimensions with its results, and we conclude on future work. This partnership between Ippon and STM is part of E450LMDAP, a European project dedicated to metrology and lithography development for future technology nodes, especially 10nm.
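    The weak-signal situation described above, where each parameter stays within its own monovariate limits but the combination is abnormal, is exactly what a multivariate distance such as Hotelling's T² detects. CHAM's actual algorithm is not reproduced here; the sketch below only illustrates the principle, with made-up CD numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

# in-control history: layer2 CD and layer3 CD [nm], correlated in production
cov = [[1.0, 0.6], [0.6, 1.0]]
history = rng.multivariate_normal([40.0, 60.0], cov, size=500)
mu, Sinv = history.mean(axis=0), np.linalg.inv(np.cov(history.T))

def t2(x):
    """Hotelling T^2 distance of one lot from the in-control cloud."""
    d = x - mu
    return float(d @ Sinv @ d)

# each CD is < 2 sigma from its own mean -> no monovariate alarm,
# but the first lot breaks the usual positive correlation pattern
lot     = np.array([41.8, 58.2])   # high layer2 CD with LOW layer3 CD
typical = np.array([41.8, 61.8])   # both high together, as usual

print(t2(lot), t2(typical))
```

    The correlation-breaking lot gets a much larger T² than the equally "extreme" but correlation-consistent one, so a single multivariate chart flags what two monovariate charts miss.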

  14. Wearable Environmental and Physiological Sensing Unit

    NASA Technical Reports Server (NTRS)

    Spremo, Stevan; Ahlman, Jim; Stricker, Ed; Santos, Elmer

    2007-01-01

    The wearable environmental and physiological sensing unit (WEPS) is a prototype of systems to be worn by emergency workers (e.g., firefighters and members of hazardous-material response teams) to increase their level of safety. The WEPS includes sensors that measure a few key physiological and environmental parameters, a microcontroller unit that processes the digitized outputs of the sensors, and a radio transmitter that sends the processed sensor signals to a computer in a mobile command center for monitoring by a supervisor. The monitored parameters serve as real-time indications of the wearer's physical condition and level of activity, and of the degree and type of danger posed by the wearer's environment. The supervisor could use these indications to determine, for example, whether the wearer should withdraw in the face of an increasing hazard or whether the wearer should be rescued.

  15. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin dataset was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. 
However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.

  16. Uncertainty Quantification for CO2-Enhanced Oil Recovery

    NASA Astrophysics Data System (ADS)

    Dai, Z.; Middleton, R.; Bauman, J.; Viswanathan, H.; Fessenden-Rahn, J.; Pawar, R.; Lee, S.

    2013-12-01

    CO2-Enhanced Oil Recovery (EOR) is currently an option for permanently sequestering CO2 in oil reservoirs while increasing oil/gas production economically. In this study we have developed a framework for understanding CO2 storage potential within an EOR-sequestration environment at the Farnsworth Unit of the Anadarko Basin in northern Texas. By coupling an EOR tool, SENSOR (CEI, 2011), with an uncertainty quantification tool, PSUADE (Tong, 2011), we conduct an integrated Monte Carlo simulation of water, oil/gas components and CO2 flow and reactive transport in the heterogeneous Morrow formation to identify the key controlling processes and optimal parameters for CO2 sequestration and EOR. A global sensitivity and response surface analysis is conducted with PSUADE to build numerically the relationship among CO2 injectivity, oil/gas production, reservoir parameters and the distance between injection and production wells. The results indicate that reservoir permeability and porosity are the key parameters controlling the CO2 injection and oil and gas (CH4) recovery rates. The distance between the injection and production wells has a large impact on oil and gas recovery and net CO2 injection rates. The CO2 injectivity increases with increasing reservoir permeability and porosity. The distance between injection and production wells is the key parameter for designing an EOR pattern (such as a five (or nine)-spot pattern). The optimal distance for a five-spot-pattern EOR at this site is estimated from the response surface analysis to be around 400 meters. Next, we are building the machinery into our risk assessment framework CO2-PENS to utilize these response surfaces and evaluate the operational risk of CO2 sequestration and EOR at this site.
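    The response-surface step described above (fit a cheap surrogate to simulator runs, then read the optimum off the surrogate) can be sketched with a quadratic fit of recovery against well spacing. The "simulator" below is a hypothetical parabola-plus-noise stand-in for SENSOR, with its peak placed near the 400 m the study reports.

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical simulator output: oil recovery [%] vs well spacing d [m],
# peaking near 400 m (illustrative stand-in for SENSOR runs)
d = rng.uniform(100, 800, size=60)
recovery = 50 - 0.0004 * (d - 400.0) ** 2 + rng.normal(0, 0.5, size=60)

# quadratic response surface by least squares: r ~ a + b*d + c*d^2
A = np.vstack([np.ones_like(d), d, d**2]).T
a, b, c = np.linalg.lstsq(A, recovery, rcond=None)[0]
d_opt = -b / (2 * c)          # vertex of the fitted parabola
print(f"optimal spacing ~ {d_opt:.0f} m")
```

    The same surrogate can then be evaluated millions of times inside a risk framework such as CO2-PENS at negligible cost compared with rerunning the reservoir simulator.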

  17. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  18. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue for advancing this science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at point scale to a simplified, physically meaningful modeling approach at grid-cell scale. Numerical simulations have the advantage of dealing with a wide range of boundary and initial conditions, compared with field experimentation. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at point scale. The linkages between point-scale parameters and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. 
The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
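    The point-to-grid-cell question above can be made concrete with the Green-Ampt model itself: because cumulative infiltration is nonlinear in the saturated conductivity Ks, averaging F over a heterogeneous Ks field is not the same as evaluating F at the mean Ks. The soil parameters and lognormal heterogeneity below are illustrative assumptions.

```python
import math
import random

def green_ampt_F(Ks, psi, dtheta, t):
    """Cumulative infiltration F [cm] from the implicit Green-Ampt
    relation F = Ks*t + S*ln(1 + F/S) with S = psi*dtheta (ponded case),
    solved by fixed-point iteration (the map is a contraction)."""
    S = psi * dtheta
    F = Ks * t                      # initial guess
    for _ in range(100):
        F = Ks * t + S * math.log(1.0 + F / S)
    return F

# point-scale parameters for a loam-like soil (illustrative values)
psi, dtheta, t = 11.0, 0.3, 2.0     # suction head [cm], moisture deficit, h

# grid-cell response: average F over lognormal heterogeneity in Ks,
# versus F evaluated at the mean Ks (they differ because F is nonlinear)
random.seed(2)
Ks_field = [random.lognormvariate(0.0, 0.5) for _ in range(500)]
F_cell = sum(green_ampt_F(k, psi, dtheta, t) for k in Ks_field) / len(Ks_field)
F_mean_Ks = green_ampt_F(sum(Ks_field) / len(Ks_field), psi, dtheta, t)
print(f"grid-cell mean F = {F_cell:.2f} cm, F(mean Ks) = {F_mean_Ks:.2f} cm")
```

    The gap between the two numbers is exactly the non-linear scale effect that an effective grid-cell parameterization has to absorb.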

  19. Utilizing a one-dimensional multispecies model to simulate the nutrient reduction and biomass structure in two types of H2-based membrane-aeration biofilm reactors (H2-MBfR): model development and parametric analysis.

    PubMed

    Wang, Zuowei; Xia, Siqing; Xu, Xiaoyin; Wang, Chenhui

    2016-02-01

    In this study, a one-dimensional multispecies model (ODMSM) was utilized to simulate NO3(-)-N and ClO4(-) reduction performance in two kinds of H2-based membrane-aeration biofilm reactors (H2-MBfR) under different operating conditions (e.g., NO3(-)-N/ClO4(-) loading rates, H2 partial pressure, etc.). Before the simulation process, we conducted a sensitivity analysis of key parameters that fluctuate under different environmental conditions, and then used the experimental data to calibrate the more sensitive parameters μ1 and μ2 (the maximum specific growth rates of denitrifying bacteria and perchlorate-reducing bacteria) in the two H2-MBfRs; the difference between the values of these two key parameters in the two types of reactors may result from the different carbon sources fed to the reactors. The simulation results for six different operating conditions (four in H2-MBfR 1 and two in H2-MBfR 2) confirm the applicability of the model, and the variation of removal tendencies under different operating conditions could be well simulated. Besides, the rationality of operating parameters (H2 partial pressure, etc.) could be judged, especially under high nutrient loading rates. To a certain degree, the model can provide theoretical guidance for determining the operating parameters under specific conditions in practical applications.

  20. Towards Improving our Understanding on the Retrievals of Key Parameters Characterising Land Surface Interactions from Space: Introduction & First Results from the PREMIER-EO Project

    NASA Astrophysics Data System (ADS)

    Ireland, Gareth; North, Matthew R.; Petropoulos, George P.; Srivastava, Prashant K.; Hodges, Crona

    2015-04-01

    Acquiring accurate information on the spatio-temporal variability of soil moisture content (SM) and evapotranspiration (ET) is of key importance to extend our understanding of the Earth system's physical processes, and is also required in a wide range of multi-disciplinary research studies and applications. The utility and applicability of Earth Observation (EO) technology provide an economically feasible solution to derive continuous spatio-temporal estimates of key parameters characterising land surface interactions, including ET as well as SM. Such information is of key value to practitioners, decision makers and scientists alike. The PREMIER-EO project, recently funded by High Performance Computing Wales (HPCW), is a research initiative directed towards the development of a better understanding of EO technology's present ability to derive operational estimates of surface fluxes and SM. Moreover, the project aims at addressing knowledge gaps related to the operational estimation of such parameters, and thus contributes to ongoing global efforts to enhance the accuracy of those products. In this presentation we introduce the PREMIER-EO project, providing a detailed overview of the research aims and objectives for the 1-year duration of the project's implementation. Subsequently, we present the initial results of the work carried out herein, in particular those related to an all-inclusive and robust evaluation of the accuracy of existing operational products of ET and SM from different ecosystems globally. The research outcomes of this project, once completed, will provide an important contribution towards addressing the knowledge gaps related to the operational estimation of ET and SM. The project's results will also support ongoing global efforts towards the operational development of related products using technologically advanced EO instruments which were launched recently or are planned to be launched in the next 1-2 years. 
Key Words: PREMIER-EO, HPC Wales, Soil Moisture, Evapotranspiration, Earth Observation

  1. Spatio-Temporal Process Simulation of Dam-Break Flood Based on SPH

    NASA Astrophysics Data System (ADS)

    Wang, H.; Ye, F.; Ouyang, S.; Li, Z.

    2018-04-01

    On the basis of introducing the SPH (Smoothed Particle Hydrodynamics) simulation method, solutions are given in this paper for the key research problems, which were the spatial and temporal scales adapted to GIS (Geographical Information System) applications, the boundary condition equations combined with the underlying surface, and the kernel function and parameters applicable to dam-break flood simulation. On this basis, a calculation method for spatio-temporal process emulation of dam-break floods with elaborate particles was proposed, and the spatio-temporal process was dynamically simulated using GIS modelling and visualization. The results show that the method provides more information, objectivity and realism.
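    Central to any SPH formulation is the smoothing kernel mentioned above. A common concrete choice (an assumption here; the paper's tuned kernel and parameters are not reproduced) is the 2-D cubic spline with compact support 2h:

```python
import math

def cubic_spline_kernel(r, h):
    """Standard 2-D cubic spline (M4) SPH kernel W(r, h); the smoothing
    length h sets the spatial resolution of the simulated flood surface."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)     # 2-D normalisation constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0                                  # compact support beyond 2h

h = 0.5
print(cubic_spline_kernel(0.0, h), cubic_spline_kernel(1.1, h))
```

    The compact support is what makes GIS-scale simulation tractable: each flood particle only interacts with neighbours within 2h, so neighbour search, not global summation, dominates the cost.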

  2. Landsat-5 bumper-mode geometric correction

    USGS Publications Warehouse

    Storey, James C.; Choate, Michael J.

    2004-01-01

    The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.

  3. Extending the performance of KrF laser for microlithography by using novel F2 control technology

    NASA Astrophysics Data System (ADS)

    Zambon, Paolo; Gong, Mengxiong; Carlesi, Jason; Padmabandu, Gunasiri G.; Binder, Mike; Swanson, Ken; Das, Palash P.

    2000-07-01

    Exposure tools for 248 nm lithography have reached a level of maturity comparable to those based on i-line. With this increase in maturity, there is a concomitant requirement for greater flexibility from the laser by the process engineers. Usually, these requirements pertain to energy, spectral width and repetition rate. By utilizing a combination of laser parameters, the process engineers are often able to optimize throughput, reduce cost-of-operation or achieve greater process margin. Hitherto, such flexibility of laser operation was possible only via significant changes to various laser modules. During our investigation, we found that the key measure of the laser that impacts the aforementioned parameters is its F2 concentration. By monitoring and controlling its slope efficiency, the laser's F2 concentration may be precisely controlled. Thus a laser may be tuned to operate under specifications as diverse as 7 mJ with Δλ (FWHM) < 0.3 pm, or 10 mJ with Δλ (FWHM) < 0.6 pm, and still meet the host of requirements necessary for lithography. We discuss this new F2 control technique and highlight some laser performance parameters.

  4. Methods to speed up the gain recovery of an SOA

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Wang, Yongjun; Meng, Qingwen; Zhao, Rui

    2008-01-01

    Semiconductor optical amplifiers (SOAs) are employed in all-optical networking and all-optical signal processing because of their strong nonlinearity and high speed. The gain recovery time is the key parameter describing the response speed of an SOA. This article derives the relationship between the gain dynamics and several operating parameters. From a few simple formulas and simulations, several methods to improve the response speed of the SOA follow: lengthening the active region or reducing its cross-sectional area, increasing the injection current, increasing the probe power, and operating with a CW holding beam.
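    The levers listed above can be illustrated with a minimal, textbook-style gain rate equation, not the article's model; the holding-beam power, saturation energy, and time constants below are invented round numbers.

```python
def gain_recovery_time(g0, tau_c, p_hold, e_sat, g_init, dt=1e-12, t_max=2e-9):
    """Time for the gain to recover 90% of the way from a depleted value
    g_init back to steady state, under a minimal textbook rate equation
    (NOT the article's model):
        dg/dt = (g0 - g)/tau_c - g * P_hold / E_sat
    Integrated with explicit Euler steps of size dt."""
    g_ss = g0 / (1.0 + tau_c * p_hold / e_sat)  # steady state with holding beam
    target = g_init + 0.9 * (g_ss - g_init)
    g, t = g_init, 0.0
    while t < t_max and g < target:
        g += dt * ((g0 - g) / tau_c - g * p_hold / e_sat)
        t += dt
    return t

# Illustrative numbers: a CW holding beam (p_hold > 0) shortens recovery
# because stimulated emission adds a second relaxation channel.
t_free = gain_recovery_time(5.0, 200e-12, 0.0, 1.0, 1.0)
t_held = gain_recovery_time(5.0, 200e-12, 2e9, 1.0, 1.0)
```

    With these numbers the holding-beam case recovers noticeably faster, in line with the article's conclusion that a CW holding beam speeds up the SOA response.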

  5. Sensor for the working surface cleanliness definition in vacuum

    NASA Astrophysics Data System (ADS)

    Deulin, E. A.; Mashurov, S. S.; Gatsenko, A. A.

    2016-07-01

    Modern nanotechnology, one of the priority directions of modern science, is impossible to imagine without the use of vacuum systems and technologies. The better the vacuum (the lower the pressure), the “cleaner” the surface obtained, which is very important for nanotechnology. The main goal of this work is the determination of the cleanliness of a surface, i.e. the number of molecular layers of adsorbed gases on the working surface of a product, especially in industry, where the cleanliness of the working surface is a key parameter of the technological process and has a significant influence on the output parameters of the final product.

  6. Measurement and modelization of silica opal optical properties

    NASA Astrophysics Data System (ADS)

    Avoine, Amaury; Hong, Phan Ngoc; Frederich, Hugo; Aregahegn, Kifle; Bénalloul, Paul; Coolen, Laurent; Schwob, Catherine; Thu Nga, Pham; Gallas, Bruno; Maître, Agnès

    2014-03-01

    We present the synthesis process and optical characterization of artificial silica opals. The specular reflection spectra are analyzed and compared to band structure calculations and finite difference time domain (FDTD) simulations. The silica optical index is a key parameter to correctly describe an opal; it is usually not known and is treated as a free parameter. Here we propose a method to infer the silica index, as well as the silica sphere diameter, from the reflection spectra, and we validate it by comparison with two independent infrared methods for the index, and with scanning electron microscopy (SEM) and atomic force microscopy (AFM) measurements for the sphere diameter.
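    A common first-order route from reflection peaks to index and diameter, in the spirit of (but much simpler than) the paper's band-structure fit, is the modified Bragg law for the fcc (111) planes. The grid-search fit, bounds, and filling fraction below are illustrative assumptions.

```python
import numpy as np

def bragg_peak(diameter_nm, n_silica, theta_deg=0.0, fill=0.74):
    """Modified Bragg law for an fcc opal's (111) planes, a standard
    first-order estimate (not the paper's full band-structure model):
        lambda_max = 2 * d111 * sqrt(n_eff^2 - sin^2(theta)),
    with d111 = D * sqrt(2/3) and Maxwell-Garnett-style effective index."""
    d111 = diameter_nm * np.sqrt(2.0 / 3.0)
    n_eff2 = fill * n_silica**2 + (1.0 - fill) * 1.0**2  # air in the voids
    return 2.0 * d111 * np.sqrt(n_eff2 - np.sin(np.radians(theta_deg))**2)

def fit_index_and_diameter(peaks, thetas):
    """Least-squares fit of (n_silica, diameter) to peak positions measured
    at several incidence angles; a brute-force grid search for simplicity."""
    best = None
    for n in np.linspace(1.30, 1.50, 101):
        for d in np.linspace(200.0, 400.0, 101):
            err = sum((bragg_peak(d, n, t) - p) ** 2 for p, t in zip(peaks, thetas))
            if best is None or err < best[0]:
                best = (err, float(n), float(d))
    return best[1], best[2]
```

    Measuring the peak at several angles is what breaks the degeneracy between index and diameter: both enter the normal-incidence peak position, but only the diameter sets the angular dispersion coefficient.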

  7. Optical guidance vidicon test program

    NASA Technical Reports Server (NTRS)

    Eiseman, A. R.; Stanton, R. H.; Voge, C. C.

    1976-01-01

    A laboratory and field test program was conducted to quantify the optical navigation parameters of the Mariner vidicons. A scene simulator and a camera were designed and built for vidicon tests under a wide variety of conditions. Laboratory tests characterized error sources important to the optical navigation process and field tests verified star sensitivity and characterized comet optical guidance parameters. The equipment, tests and data reduction techniques used are described. Key test results are listed. A substantial increase in the understanding of the use of selenium vidicons as detectors for spacecraft optical guidance was achieved, indicating a reduction in residual offset errors by a factor of two to four to the single pixel level.

  8. A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies

    NASA Astrophysics Data System (ADS)

    Kornfeld, A.; Van der Tol, C.; Berry, J. A.

    2014-12-01

    Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and the temperature dependence of these in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments rather than to fit its constants separately. We designed a graphical user interface (GUI)-based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allow researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
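    As a toy version of fitting a photosynthesis subroutine directly to gas-exchange data, the sketch below estimates Vcmax and Rd in the Rubisco-limited leg of a Farquhar-type model. The kinetic constants are generic 25 °C textbook values, not SCOPE's, and the linear-least-squares shortcut works only because this reduced model happens to be linear in the two fitted parameters.

```python
import numpy as np

def fit_vcmax_rd(ci, a_obs, gamma_star=42.75, km=404.9):
    """Fit Vcmax and Rd in the Rubisco-limited Farquhar leg,
        A = Vcmax * (Ci - Gamma*) / (Ci + Km) - Rd,
    by linear least squares. gamma_star and km are illustrative 25 C
    values (umol/mol), not calibrated constants from SCOPE."""
    ci = np.asarray(ci, dtype=float)
    f = (ci - gamma_star) / (ci + km)          # known regressor
    X = np.column_stack([f, -np.ones_like(ci)])  # columns: Vcmax, Rd
    (vcmax, rd), *_ = np.linalg.lstsq(X, np.asarray(a_obs, float), rcond=None)
    return float(vcmax), float(rd)
```

    In the full subroutine the fit is nonlinear (electron-transport limitation, temperature responses), which is exactly why the paper argues for fitting the complete model with a general nonlinear optimizer rather than fitting constants in isolation.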

  9. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
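    The process sensitivity index can be sketched with a toy two-process system: invented stand-ins for recharge and geology, two competing models per process, each with its own random parameter. The double-loop Monte Carlo below is a generic estimator of the variance decomposition, not the paper's implementation, and every model and number is illustrative.

```python
import numpy as np

# Toy stand-ins: two competing models per process, equal prior weights,
# each with one random parameter (all values invented for illustration).
RECHARGE = [lambda p: 0.3 * p, lambda p: 0.2 * p + 0.05]
GEOLOGY = [lambda k: 10.0 ** k, lambda k: 5.0 * k]

def draw_recharge(rng):
    return RECHARGE[rng.integers(2)](rng.uniform(0.5, 1.5))

def draw_geology(rng):
    return GEOLOGY[rng.integers(2)](rng.uniform(0.0, 1.0))

def process_sensitivity(target, n_outer=1000, n_inner=100, seed=1):
    """First-order process sensitivity index. The 'factor' bundles the model
    choice AND the parameters of one process:
        PS = Var( E[Y | process realization] ) / Var(Y),
    estimated by double-loop Monte Carlo. Toy output: Y = recharge * K."""
    rng = np.random.default_rng(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        fixed = draw_recharge(rng) if target == "recharge" else draw_geology(rng)
        ys = []
        for _ in range(n_inner):
            other = draw_geology(rng) if target == "recharge" else draw_recharge(rng)
            ys.append(fixed * other)
        cond_means.append(np.mean(ys))
        all_y.extend(ys)
    return float(np.var(cond_means) / np.var(all_y))
```

    Because variance over the outer loop sweeps both the model choice and its parameters, the index charges each process for its model uncertainty as well as its parametric uncertainty, which is the paper's central point.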

  10. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  11. Residual stress evaluation of components produced via direct metal laser sintering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kemerling, Brandon; Lippold, John C.; Fancher, Christopher M.

    Direct metal laser sintering is an additive manufacturing process which is capable of fabricating three-dimensional components using a laser energy source and metal powder particles. Despite the numerous benefits offered by this technology, the process maturity is low with respect to traditional subtractive manufacturing methods. Relationships between key processing parameters and final part properties are generally lacking and require further development. In this study, residual stresses were evaluated as a function of key process variables. The variables evaluated included laser scan strategy and build plate preheat temperature. Residual stresses were measured experimentally via neutron diffraction and computationally via finite element analysis. Good agreement was shown between the experimental and computational results. Results showed variations in the residual stress profile as a function of laser scan strategy. Compressive stresses were dominant along the build height (z) direction, and tensile stresses were dominant in the x and y directions. Build plate preheating was shown to be an effective method for alleviating residual stress due to the reduction in thermal gradient.

  12. Residual stress evaluation of components produced via direct metal laser sintering

    DOE PAGES

    Kemerling, Brandon; Lippold, John C.; Fancher, Christopher M.; ...

    2018-03-22

    Direct metal laser sintering is an additive manufacturing process which is capable of fabricating three-dimensional components using a laser energy source and metal powder particles. Despite the numerous benefits offered by this technology, the process maturity is low with respect to traditional subtractive manufacturing methods. Relationships between key processing parameters and final part properties are generally lacking and require further development. In this study, residual stresses were evaluated as a function of key process variables. The variables evaluated included laser scan strategy and build plate preheat temperature. Residual stresses were measured experimentally via neutron diffraction and computationally via finite element analysis. Good agreement was shown between the experimental and computational results. Results showed variations in the residual stress profile as a function of laser scan strategy. Compressive stresses were dominant along the build height (z) direction, and tensile stresses were dominant in the x and y directions. Build plate preheating was shown to be an effective method for alleviating residual stress due to the reduction in thermal gradient.

  13. Health policy--why research it and how: health political science.

    PubMed

    de Leeuw, Evelyne; Clavier, Carole; Breton, Eric

    2014-09-23

    The establishment of policy is key to the implementation of actions for health. We review the nature of policy and the definition and directions of health policy. In doing so, we explicitly cast a health political science gaze on setting parameters for researching policy change for health. A brief overview of core theories of the policy process for health promotion is presented, and illustrated with empirical evidence. The key arguments are that (a) policy is not an intervention, but drives intervention development and implementation; (b) understanding policy processes and their pertinent theories is pivotal for the potential to influence policy change; (c) those theories and associated empirical work need to recognise the wicked, multi-level, and incremental nature of elements in the process; and, therefore, (d) the public health, health promotion, and education research toolbox should more explicitly embrace health political science insights. The rigorous application of insights from and theories of the policy process will enhance our understanding of not just how, but also why health policy is structured and implemented the way it is.

  14. Post-cracking characteristics of high performance fiber reinforced cementitious composites

    NASA Astrophysics Data System (ADS)

    Suwannakarn, Supat W.

    The application of high performance fiber reinforced cement composites (HPFRCC) in structural systems depends primarily on the material's tensile response, which is a direct function of fiber and matrix characteristics, the bond between them, and the fiber content or volume fraction. The objective of this dissertation is to evaluate and model the post-cracking behavior of HPFRCC. In particular, it focused on the influential parameters controlling tensile behavior and the variability associated with them. The key parameters considered include: the stress and strain at first cracking, the stress and strain at maximum post-cracking, the shape of the stress-strain or stress-elongation response, the multiple cracking process, the shape of the resistance curve after crack localization, the energy associated with the multiple cracking process, and the stress versus crack opening response of a single crack. Both steel fibers and polymeric fibers, perceived to have the greatest potential for current commercial applications, are considered. The main variables covered include fiber type (Torex, Hooked, PVA, and Spectra) and fiber volume fraction (ranging from 0.75% to 2.0%). An extensive experimental program is carried out using direct tensile tests and stress versus crack-opening-displacement tests on notched tensile prisms. The key experimental results were analysed and modeled using simple prediction equations which, combined with a composite mechanics approach, allowed for predicting schematic simplified stress-strain and stress-displacement response curves for use in structural modeling. The experimental data show that specimens reinforced with Torex fibers perform best, followed by Hooked and Spectra fibers, then PVA fibers. Significant variability in key parameters was observed throughout, suggesting that variability must be studied further.
The new information obtained can be used as input for material models for finite element analysis and can provide greater confidence in using the HPFRC composites in structural applications. It also provides a good foundation to integrate these composites in conventional structural analysis and design.

  15. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
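    A stripped-down version of the idea, with a trivial one-parameter "simulator" standing in for FORMIND: build a Gaussian (parametric) likelihood approximation from repeated stochastic simulations, and place it inside a random-walk Metropolis sampler. All names and numbers below are illustrative.

```python
import numpy as np

def synthetic_loglik(theta, obs_summary, simulate, n_sims=50, rng=None):
    """Parametric likelihood approximation built from stochastic simulator
    runs: fit a Gaussian to the simulated summary statistic and evaluate
    the observed summary under it (a toy version of the paper's approach)."""
    rng = rng or np.random.default_rng()
    sims = np.array([simulate(theta, rng) for _ in range(n_sims)])
    mu, sd = sims.mean(), sims.std() + 1e-9
    return -0.5 * ((obs_summary - mu) / sd) ** 2 - np.log(sd)

def mcmc(obs_summary, simulate, theta0, n_iter=1000, step=0.3, seed=0):
    """Random-walk Metropolis using the simulation-based log-likelihood."""
    rng = np.random.default_rng(seed)
    theta, ll = theta0, synthetic_loglik(theta0, obs_summary, simulate, rng=rng)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = synthetic_loglik(prop, obs_summary, simulate, rng=rng)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

# Toy stand-in for the forest model: the summary statistic of a run is the
# mean of 20 noisy observations generated at parameter theta.
def simulate(theta, rng):
    return rng.normal(theta, 1.0, size=20).mean()

chain = mcmc(1.5, simulate, theta0=0.0)  # observed summary = 1.5
```

    Because the likelihood is itself estimated from simulations, the sampler is noisy; the paper's contribution is showing that this kind of approximation remains workable for a parameter-rich individual-based model.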

  16. RANS computations for identification of 1-D cavitation model parameters: application to full load cavitation vortex rope

    NASA Astrophysics Data System (ADS)

    Alligné, S.; Decaix, J.; Müller, A.; Nicolet, C.; Avellan, F.; Münch, C.

    2017-04-01

    Due to the massive penetration of alternative renewable energies, hydropower is a key energy conversion technology for stabilizing the electrical power network by using hydraulic machines at off design operating conditions. At full load, the axisymmetric cavitation vortex rope developing in Francis turbines acts as an internal source of energy, leading to an instability commonly referred to as self-excited surge. 1-D models are developed to predict this phenomenon and to define the range of safe operating points for a hydropower plant. These models require a calibration of several parameters. The present work aims at identifying these parameters by using CFD results as objective functions for an optimization process. A 2-D Venturi and 3-D Francis turbine are considered.
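    As a cartoon of identifying a 1-D model parameter from CFD output (the paper's model has several coupled parameters and uses a full optimization process; this shows only the idea): if the reduced hydraulic system behaves as an L-C oscillator, the cavitation compliance follows directly from the eigenfrequency observed in CFD. Both the reduction and the numbers are assumptions for illustration.

```python
import math

def cavitation_compliance_from_frequency(f_hz, inductance):
    """Illustrative identification: for an equivalent L-C oscillator,
        f = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / (L * (2*pi*f)^2),
    so a frequency extracted from CFD results fixes the compliance C.
    (Hypothetical reduction, not the paper's calibrated 1-D model.)"""
    return 1.0 / (inductance * (2.0 * math.pi * f_hz) ** 2)
```

    In the paper the CFD results play the role of the "measured" frequency here: they provide the objective functions against which the 1-D parameters are optimized.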

  17. Analog design optimization methodology for ultralow-power circuits using intuitive inversion-level and saturation-level parameters

    NASA Astrophysics Data System (ADS)

    Eimori, Takahisa; Anami, Kenji; Yoshimatsu, Norifumi; Hasebe, Tetsuya; Murakami, Kazuaki

    2014-01-01

    A comprehensive design optimization methodology using intuitive nondimensional parameters of inversion-level and saturation-level is proposed, especially for ultralow-power, low-voltage, and high-performance analog circuits with mixed strong, moderate, and weak inversion metal-oxide-semiconductor transistor (MOST) operations. This methodology is based on the synthesized charge-based MOST model composed of Enz-Krummenacher-Vittoz (EKV) basic concepts and advanced-compact-model (ACM) physics-based equations. The key concept of this methodology is that all circuit and system characteristics are described as some multivariate functions of inversion-level parameters, where the inversion level is used as an independent variable representative of each MOST. The analog circuit design starts from the first step of inversion-level design using universal characteristics expressed by circuit currents and inversion-level parameters without process-dependent parameters, followed by the second step of foundry-process-dependent design and the last step of verification using saturation-level criteria. This methodology also paves the way to an intuitive and comprehensive design approach for many kinds of analog circuit specifications by optimization using inversion-level log-scale diagrams and saturation-level criteria. In this paper, we introduce an example of our design methodology for a two-stage Miller amplifier.
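    One widely used EKV relation of the kind this methodology builds on expresses the transconductance efficiency as a function of the inversion coefficient IC alone, valid continuously from weak to strong inversion. The slope factor n and thermal voltage UT below are typical room-temperature values, not process-specific ones.

```python
import math

def gm_over_id(ic, n=1.3, ut=0.0258):
    """EKV continuous interpolation for transconductance efficiency:
        gm/ID = 1 / (n * UT * (0.5 + sqrt(0.25 + IC))),
    where IC is the inversion coefficient (ID normalized to the specific
    current). IC << 1: weak inversion; IC ~ 1: moderate; IC >> 1: strong.
    n = 1.3 and UT = 25.8 mV are typical room-temperature values."""
    return 1.0 / (n * ut * (0.5 + math.sqrt(0.25 + ic)))
```

    In the weak-inversion limit this saturates at 1/(n·UT), the bipolar-like maximum, which is why ultralow-power designs in this methodology are pushed toward low IC; the price is speed, which the saturation-level criteria then check.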

  18. Biodegradation modelling of a dissolved gasoline plume applying independent laboratory and field parameters

    NASA Astrophysics Data System (ADS)

    Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.

    2000-12-01

    Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of 100s of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature a priori to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
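    A minimal dual-Monod sketch (illustrative round-number constants, not the BIO3D/Borden calibration) shows the key field-scale effect the abstract describes: limited electron-acceptor availability stalls degradation that laboratory-style rate constants alone would predict to run to completion.

```python
def monod_step(c, o, x, dt, mu_max=2.0, ks=0.5, ko=0.1, y=0.5, f_ox=3.0):
    """One explicit Euler step of dual-Monod biodegradation with microbial
    growth and electron-acceptor (oxygen) limitation:
        dC/dt = -(mu_max/Y) * X * C/(Ks + C) * O/(Ko + O)
        dO/dt =  f_ox * dC/dt   (stoichiometric oxygen demand)
        dX/dt = -Y  * dC/dt     (biomass growth)
    All constants are invented for illustration."""
    rate = (mu_max / y) * x * c / (ks + c) * o / (ko + o)
    used = min(dt * rate, c, o / f_ox)  # cannot consume more than is there
    return c - used, o - f_ox * used, x + y * used

def run(c0, o0, x0=0.01, dt=0.01, steps=2000):
    c, o, x = c0, o0, x0
    for _ in range(steps):
        c, o, x = monod_step(c, o, x, dt)
    return c, o, x

# Limited oxygen stalls degradation; ample oxygen lets it run to completion.
c_limited, _, _ = run(10.0, 1.0)
c_ample, _, _ = run(10.0, 100.0)
```

    This is why lab-fitted zero- or first-order rates overpredict field biodegradation: they bake in unlimited electron acceptors and ignore biomass dynamics, both of which the dual-Monod form makes explicit.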

  19. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties

    PubMed Central

    Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Abstract Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field, due to tungsten's intrinsic properties. The key factors for SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% theoretical density) were produced using SLM with the optimized processing parameters. A high density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which can ensure good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to that produced by the conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing. PMID:29707073
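    Two of the quantities in this parameter-design workflow can be stated explicitly. The volumetric energy density formula is the generic SLM design quantity (the specific window for tungsten is the paper's experimental result), and the theoretical density of 19.30 g/cm3 is the standard handbook value implied by the paper's 19.01 g/cm3 = 98.50% figure.

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """Standard SLM parameter-design quantity, E = P / (v * h * t) in J/mm^3:
    laser power over scan speed, hatch spacing, and layer thickness.
    The formula is generic; the usable window for tungsten is the paper's
    experimental result, not computed here."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

def relative_density(measured_g_cm3, theoretical_g_cm3=19.30):
    """Relative density in percent; 19.30 g/cm^3 is tungsten's handbook
    theoretical density."""
    return 100.0 * measured_g_cm3 / theoretical_g_cm3
```

    Designing layer thickness and laser parameters from theory first, then optimizing experimentally, amounts to sweeping this energy density while checking the resulting relative density for balling and porosity.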

  20. Software Computes Tape-Casting Parameters

    NASA Technical Reports Server (NTRS)

    deGroh, Henry C., III

    2003-01-01

    Tcast2 is a FORTRAN computer program that accelerates the setup of a process in which a slurry containing metal particles and a polymeric binder is cast, to a thickness regulated by a doctor blade, onto fibers wound on a rotating drum to make a green precursor of a metal-matrix/fiber composite tape. Before Tcast2, setup parameters were determined by trial and error in time-consuming multiple iterations of the process. In Tcast2, the fiber architecture in the final composite is expressed in terms of the lateral distance between fibers and the thickness-wise distance between fibers in adjacent plies. The lateral distance is controlled via the manner of winding. The interply spacing is controlled via the characteristics of the slurry and the doctor-blade height. When a new combination of fibers and slurry is first cast and dried to a green tape, the shrinkage from the wet to the green condition and a few other key parameters of the green tape are measured. These parameters are provided as input to Tcast2, which uses them to compute the doctor-blade height and fiber spacings needed to obtain the desired fiber architecture and fiber volume fraction in the final composite.
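    Tcast2's actual equations are not given in this abstract, so the sketch below is a hypothetical back-calculation in the same spirit: the wet-cast (doctor-blade) thickness from the measured wet-to-green shrinkage, and the fiber volume fraction from an idealized rectangular fiber array. Both function names and formulas are assumptions for illustration, not Tcast2's internals.

```python
import math

def blade_height(target_ply_thickness, wet_to_green_shrinkage):
    """Hypothetical relation: the wet cast must be thicker than the target
    green ply by the measured shrinkage factor, so
        h_blade = t_green / (1 - shrinkage)."""
    return target_ply_thickness / (1.0 - wet_to_green_shrinkage)

def fiber_volume_fraction(fiber_radius, lateral_spacing, ply_spacing):
    """Vf for an idealized rectangular fiber array: one fiber cross-section
    per (lateral_spacing x ply_spacing) unit cell."""
    return math.pi * fiber_radius**2 / (lateral_spacing * ply_spacing)
```

    This captures the abstract's logic: winding controls the lateral spacing, the slurry and blade height control the interply spacing, and the measured green-tape shrinkage closes the loop between the two.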

  1. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties.

    PubMed

    Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field, due to tungsten's intrinsic properties. The key factors for SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% theoretical density) were produced using SLM with the optimized processing parameters. A high density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which can ensure good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to that produced by the conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing.

  2. Modern methods for the quality management of high-rate melt solidification

    NASA Astrophysics Data System (ADS)

    Vasiliev, V. A.; Odinokov, S. A.; Serov, M. M.

    2016-12-01

    The quality management of high-rate melt solidification requires combined solutions obtained by methods and approaches adapted to the specific situation. A technological audit is recommended to assess the capabilities of the process. Statistical methods are proposed, with a choice of key parameters. Of particular importance are numerical methods, which can be used to perform simulations under multifactor technological conditions and to increase the quality of decisions.

  3. New methodology to baseline and match AME polysilicon etcher using advanced diagnostic tools

    NASA Astrophysics Data System (ADS)

    Poppe, James; Shipman, John; Reinhardt, Barbara E.; Roussel, Myriam; Hedgecock, Raymond; Fonda, Arturo

    1999-09-01

    As process controls tighten in the semiconductor industry, the need to understand the variables that determine system performance becomes more important. For plasma etch systems, process success depends on the control of key parameters such as: vacuum integrity, pressure, gas flows, and RF power. It is imperative to baseline, monitor, and control these variables. This paper presents an overview of the methods and tools used by the Motorola BMC fabrication facility to characterize an Applied Materials polysilicon etcher. Tool performance data obtained from our traditional measurement techniques are limited in their scope and do not provide a complete picture of the ultimate tool performance. Presently the BMC traditional characterization tools provide a snapshot of the static operation of the equipment under test (EUT); however, complete evaluation of the dynamic performance cannot be monitored without the aid of specialized diagnostic equipment. To provide us with a complete system baseline evaluation of the polysilicon etcher, three diagnostic tools were utilized: the Lucas Labs Vacuum Diagnostic System, a Residual Gas Analyzer, and the ENI Voltage/Impedance Probe. The diagnostic methodology used to baseline and match key parameters of qualified production equipment has had an immense impact on other equipment characterization in the facility. It has resulted in reduced cycle time for new equipment introduction as well.

  4. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. 
This code is hosted online at https://github.com/EliseJ/astroABC.
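    The likelihood-free idea behind an ABC sampler can be illustrated with a minimal rejection-sampling sketch (plain Python, not the astroABC API; the toy Gaussian model, distance metric, and tolerance are illustrative assumptions):

```python
import random

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_accept):
    """Likelihood-free ABC: keep parameter draws whose simulated data
    lie within tolerance eps of the observed data under the chosen distance."""
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
# Toy problem: infer the mean of a Gaussian from its sample mean.
observed_mean = 0.5
sim = lambda th: sum(random.gauss(th, 1.0) for _ in range(50)) / 50
posterior = abc_rejection(observed_mean, sim,
                          lambda: random.uniform(-2, 2),   # uniform prior
                          lambda a, b: abs(a - b), eps=0.1, n_accept=200)
est = sum(posterior) / len(posterior)
```

    An SMC sampler such as astroABC replaces the fixed tolerance with an iteratively shrinking schedule and perturbs accepted particles between iterations, which is far more efficient than plain rejection.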

  5. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reductions in computation time with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
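    The core fit-a-dynamic-model-by-global-search task can be sketched as follows (a crude random search on a one-parameter toy kinetic model, standing in for the scatter-search metaheuristic; the model, data, and search bounds are all hypothetical):

```python
import random

def simulate(k, x0=1.0, dt=0.01, steps=200):
    """Forward-Euler integration of the toy kinetic model dx/dt = -k*x."""
    x, traj = x0, []
    for _ in range(steps):
        x += dt * (-k * x)
        traj.append(x)
    return traj

def sse(k, data):
    """Sum-of-squared-errors cost between model trajectory and data."""
    return sum((a - b) ** 2 for a, b in zip(simulate(k), data))

random.seed(0)
true_k = 0.8
data = [x + random.gauss(0, 0.01) for x in simulate(true_k)]

# Crude global search: sample candidate parameters and keep the best.
# saCeSS replaces this with cooperating, self-tuning scatter-search threads.
best_k = min((random.uniform(0.0, 5.0) for _ in range(2000)),
             key=lambda k: sse(k, data))
```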

  6. Advanced Algorithms and High-Performance Testbed for Large-Scale Site Characterization and Subsurface Target Detecting Using Airborne Ground Penetrating SAR

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Collier, James B.; Citak, Ari

    1997-01-01

    A team comprising the US Army Corps of Engineers (Omaha District and Engineering and Support Center, Huntsville), the Jet Propulsion Laboratory (JPL), Stanford Research Institute (SRI), and Montgomery Watson is currently in the process of planning and conducting the largest-ever survey at the Former Buckley Field (60,000 acres), in Colorado, using SRI airborne, ground penetrating, Synthetic Aperture Radar (SAR). The purpose of this survey is the detection of surface and subsurface Unexploded Ordnance (UXO) and, in a broader sense, site characterization for identification of contaminated as well as clear areas. In preparation for such a large-scale survey, JPL has been developing advanced algorithms and a high-performance testbed for processing the massive amount of SAR data expected from this site. Two key requirements of this project are the accuracy (in terms of UXO detection) and speed of SAR data processing. The first key feature of this testbed is a large degree of automation, minimizing the need for human perception in the processing, to achieve an acceptable processing rate of several hundred acres per day. For accurate UXO detection, novel algorithms have been developed and implemented. These algorithms analyze dual-polarized (HH and VV) SAR data. They are based on the correlation of HH and VV SAR data and involve a rather large set of parameters for accurate detection of UXO. For each specific site, this set of parameters can be optimized by using ground truth data (i.e., known surface and subsurface UXOs). In this paper, we discuss these algorithms and their successful application for detection of surface and subsurface anti-tank mines using a data set from Yuma Proving Ground, AZ, acquired by SRI SAR.

  7. Exploiting Auto-Collimation for Real-Time Onboard Monitoring of Space Optical Camera Geometric Parameters

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, H.; Liu, D.; Miu, Y.

    2018-05-01

    Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Using these devices, changes in the geometric parameters are elegantly converted into changes in the spot image positions. The variation of the geometric parameters can be derived by extracting and processing the spot images. An experimental platform is then set up to verify the feasibility and analyze the precision index of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle to real-time onboard monitoring.

  8. Review of concrete biodeterioration in relation to nuclear waste.

    PubMed

    Turick, Charles E; Berry, Christopher J

    2016-01-01

    Storage of radioactive waste in concrete structures is a means of containing wastes and related radionuclides generated from nuclear operations in many countries. Previous efforts related to microbial impacts on concrete structures used to contain radioactive waste showed that microbial activity can play a significant role in the process of concrete degradation and ultimately structural deterioration. This literature review examines the research in this field and is focused on specific parameters that are applicable to modeling and prediction of the fate of concrete structures used to store or dispose of radioactive waste. Rates of concrete biodegradation vary with the environmental conditions, illustrating a need to understand the bioavailability of key compounds involved in microbial activity. Microbial growth requires pH and osmotic pressure to be within a certain range, as well as the availability and abundance of energy sources such as the components involved in sulfur, iron and nitrogen oxidation. Carbon flow and availability are also factors to consider in predicting concrete biodegradation. The microbial contribution to degradation of concrete structures containing radioactive waste is a constant possibility. The rate and degree of concrete biodegradation depend on numerous physical, chemical and biological parameters. Parameters to focus on for modeling activities, and possible options for mitigation that would minimize concrete biodegradation, are discussed and include key conditions that drive microbial activity on concrete surfaces. Copyright © 2015. Published by Elsevier Ltd.

  9. Multiphase porous media modelling: A novel approach to predicting food processing performance.

    PubMed

    Khan, Md Imran H; Joardder, M U H; Kumar, Chandan; Karim, M A

    2018-03-04

    The development of a physics-based model of food processing is essential to improve the quality of processed food and optimize energy consumption. Food materials, particularly plant-based food materials, are complex in nature as they are porous and have hygroscopic properties. A multiphase porous media model for simultaneous heat and mass transfer can provide a realistic understanding of transport processes and thus can help to optimize energy consumption and improve food quality. Although the development of a multiphase porous media model for food processing is a challenging task because of its complexity, many researchers have attempted it. The primary aim of this paper is to present a comprehensive review of the multiphase models available in the literature for different methods of food processing, such as drying, frying, cooking, baking, heating, and roasting. A critical review of the parameters that should be considered for multiphase modelling is presented, including input parameters, material properties, simulation techniques and modelling hypotheses. A discussion on the general trends in outcomes, such as moisture saturation, temperature profile, pressure variation, and evaporation patterns, is also presented. The paper concludes by considering key issues in the existing multiphase models and future directions for the development of multiphase models.

  10. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.

  11. The influence of high shear mixing on ternary dry powder inhaler formulations.

    PubMed

    Hertel, Mats; Schwarz, Eugen; Kobler, Mirjam; Hauptstein, Sabine; Steckel, Hartwig; Scherließ, Regina

    2017-12-20

    The blending process is a key step in the production of dry powder inhaler formulations, but only little is known about the influence of the process parameters. This is especially true for high shear blending of ternary formulations. For this reason, this study aims to investigate the influence of high shear mixing process parameters (mixing time and rotation speed) on the fine particle fraction (FPF) of ternary mixtures using budesonide as a model drug, two different carrier materials and two different mixing orders. Prolonged mixing times and higher rotation speeds led to lower FPFs, possibly due to higher press-on forces acting on the active pharmaceutical ingredient (API). In addition, a clear correlation between the energy consumption of the blender (the energy input into the blend) and the reduction of the FPF could be shown. Furthermore, blending the carrier and the fines before adding the API was found to be favorable. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Analysis of flow field characteristics in IC equipment chamber based on orthogonal design

    NASA Astrophysics Data System (ADS)

    Liu, W. F.; Yang, Y. Y.; Wang, C. N.

    2017-01-01

    This paper aims to study the influence of the configuration of the processing chamber, as a part of IC equipment, on flow field characteristics. Four parameters, including chamber height, chamber diameter, inlet mass flow rate and outlet area, are arranged using the orthogonal design method to study their influence on flow distribution in the processing chamber with the commercial software Fluent. The velocity, pressure and temperature distributions above the holder were analysed respectively. The velocity difference of the gas flow above the holder is defined as the evaluation criterion for the uniformity of the gas flow. The quantitative relationship between the key parameters and the uniformity of the gas flow was found through analysis of the experimental results. According to our study, the chamber height is the most significant factor, followed by the outlet area, chamber diameter and inlet mass flow rate. This research can provide insights into the study and design of the configuration of etchers, plasma enhanced chemical vapor deposition (PECVD) equipment, and other systems with similar configurations and processing conditions.
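    Orthogonal-design studies of this kind typically rank factors by range analysis over an orthogonal array. A minimal sketch with the standard L9(3^4) array and a synthetic response (the dominance of factor 0, mimicking "chamber height most significant", is built in for illustration and is not data from the paper):

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels (coded 0-2).
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

# Synthetic response: coefficients 10, 3, 2, 1 make factor 0 dominant.
response = [10 * a + 3 * b + 2 * c + 1 * d for a, b, c, d in L9]

def factor_range(array, y, factor):
    """Range analysis: the spread of the mean response across the three
    levels of a factor measures that factor's influence."""
    means = [
        sum(yi for row, yi in zip(array, y) if row[factor] == lev) / 3
        for lev in range(3)
    ]
    return max(means) - min(means)

ranges = [factor_range(L9, response, f) for f in range(4)]
ranking = sorted(range(4), key=lambda f: -ranges[f])
```

    Because the array is orthogonal, each factor's level means average out the other factors, so the ranges recover the built-in coefficient ordering directly.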

  13. Numerical study on injection parameters optimization of thin wall and biodegradable polymers parts

    NASA Astrophysics Data System (ADS)

    Santos, C.; Mendes, A.; Carreira, P.; Mateus, A.; Malça, C.

    2017-07-01

    Nowadays, the molds industry is searching for new markets with diversified, added-value products. The concept associated with the production of thin-walled and biodegradable parts, mostly manufactured by the injection process, has assumed particular importance due to environmental and economic factors. The growth of a global consciousness about the harmful effects of conventional polymers on our quality of life, together with the legislation imposed, have become key factors in the choice of a particular product by the consumer. The goal of this work is to provide an integrated solution for the injection of thin-walled parts manufactured from biodegradable materials. This integrated solution includes the design and manufacture of the mold as well as finding the optimum values of the injection parameters in order to make the process effective and competitive. For this, the Moldflow software was used. It was demonstrated that this computational tool provides effective responsiveness and can constitute an important tool in supporting the injection molding of thin-walled and biodegradable parts.

  14. Mass production of bacterial communities adapted to the degradation of volatile organic compounds (TEX).

    PubMed

    Lapertot, Miléna; Seignez, Chantal; Ebrahimi, Sirous; Delorme, Sandrine; Peringer, Paul

    2007-06-01

    This study focuses on the mass cultivation of bacteria adapted to the degradation of a mixture composed of toluene, ethylbenzene, and o-, m- and p-xylenes (TEX). For the cultivation process, the Substrate Pulse Batch (SPB) technique was adapted under well-automated conditions. The key parameters to be monitored (temperature, pH, dissolved oxygen and turbidity) were handled by LabVIEW software. Other parameters, such as biomass, ammonium or residual substrate concentrations, needed offline measurements. The SPB technique was successfully tested experimentally on TEX. The overall behavior of the mixed bacterial population was observed and discussed throughout the cultivation process. Carbon and nitrogen limitations were shown to affect the integrity of the bacterial cells as well as their production of exopolymeric substances (EPS). Average productivity and yield values successfully reached the industrial specifications of 0.45 kg(DW) m(-3) d(-1) and 0.59 g(DW) g(C)(-1), respectively. The accuracy and reproducibility of the obtained results establish the controlled SPB process as a feasible technique.

  15. [Determination of process variable pH in solid-state fermentation by FT-NIR spectroscopy and extreme learning machine (ELM)].

    PubMed

    Liu, Guo-hai; Jiang, Hui; Xiao, Xia-hong; Zhang, Dong-juan; Mei, Cong-li; Ding, Yu-han

    2012-04-01

    Fourier transform near-infrared (FT-NIR) spectroscopy was used to determine pH, one of the key process parameters in the solid-state fermentation of crop straws. First, near-infrared spectra of 140 solid-state fermented product samples were obtained by a near-infrared spectroscopy system in the wavenumber range of 10 000-4 000 cm(-1), and the reference measurements of pH were obtained with a pH meter. Thereafter, an extreme learning machine (ELM) was employed to calibrate the model. In the calibration model, the optimal number of PCs and the optimal number of hidden-layer nodes of the ELM network were determined by cross-validation. Experimental results showed that the optimal ELM model was achieved with a 1040-1 topology, with Rp = 0.9618 and RMSEP = 0.1044 in the prediction set. This research achievement could provide a technological basis for the on-line measurement of process parameters in solid-state fermentation.
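    An ELM regressor of the kind used here reduces to a random, fixed hidden layer followed by a single least-squares solve for the output weights. A minimal NumPy sketch on a toy 1-D regression problem (the data, layer size and weight scale are illustrative, not the paper's spectra-to-pH calibration):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random fixed hidden layer, then one
    least-squares solve for the output weights (no backpropagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=5.0, size=(X.shape[1], n_hidden))
    b = rng.normal(scale=5.0, size=n_hidden)
    H = np.tanh(X @ W + b)                      # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy 1-D calibration problem standing in for the spectra -> pH mapping.
X = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X[:, 0])
model = elm_train(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(model, X) - y) ** 2)))
```

    The one-shot least-squares solve is what makes ELM training much faster than iterative network training, at the cost of sensitivity to the hidden-layer size, which the paper tunes by cross-validation.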

  16. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    NASA Astrophysics Data System (ADS)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, used in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  17. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie

    2014-12-01

    To address the multicollinearity issue and the unequal contribution of vascular parameters to the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemia model mice. Taking vascular volume as the ground-truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of the vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junction, connectivity density, segment number and vascular length, which indicates that these are the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both the ABTs directly using the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to the ABTs directly using the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters are assembled into the sparse PCs with the highest relative importance.
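    The role of PC loadings in ranking collinear predictors can be sketched with ordinary PCA via SVD (a simplification of the sparse PCA plus boosted trees used in the paper; the synthetic "vascular parameters" below are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=n)
# Three synthetic "vascular parameters": two driven by a shared latent
# factor (hence collinear), one pure noise.
params = np.column_stack([
    latent + 0.1 * rng.normal(size=n),        # e.g. "vascular area"
    0.8 * latent + 0.2 * rng.normal(size=n),  # e.g. "vascular length"
    rng.normal(size=n),                       # irrelevant parameter
])
target = latent + 0.05 * rng.normal(size=n)   # "vascular volume" ground truth

# PCA via SVD of the standardized data: the first PC's loadings show
# which parameters carry the dominant shared mode.
Z = (params - params.mean(0)) / params.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
loadings = np.abs(Vt[0])
ranking = np.argsort(-loadings)

# The first PC's scores track the ground-truth parameter closely.
scores = Z @ Vt[0]
corr = abs(float(np.corrcoef(scores, target)[0, 1]))
```

    Sparse PCA additionally drives small loadings to exactly zero, which is what lets the paper's method select a clean subset of key parameters rather than a dense mixture.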

  18. Meta-analysis using Dirichlet process.

    PubMed

    Muthukumarana, Saman; Tiwari, Ram C

    2016-02-01

    This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity, or variation in the underlying effects across studies, while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study-effect parameters have support on a discrete space, enabling borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset from the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods. © The Author(s) 2012.
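    The discreteness of the Dirichlet process, which drives the clustering of study effects, can be illustrated with a truncated stick-breaking construction (the concentration parameter and Gaussian base measure below are toy choices, not the paper's model):

```python
import random

def stick_breaking(alpha, n_atoms, base_draw, rng):
    """Truncated stick-breaking construction of a Dirichlet process:
    weight_k = beta_k * prod_{j<k}(1 - beta_j), beta_k ~ Beta(1, alpha),
    with atoms drawn i.i.d. from the base measure."""
    weights, atoms, remaining = [], [], 1.0
    for _ in range(n_atoms):
        b = rng.betavariate(1, alpha)
        weights.append(remaining * b)
        atoms.append(base_draw(rng))
        remaining *= 1.0 - b
    return weights, atoms

rng = random.Random(42)
weights, atoms = stick_breaking(alpha=2.0, n_atoms=100,
                                base_draw=lambda r: r.gauss(0.0, 1.0), rng=rng)

# Study effects drawn from this discrete measure repeat atoms, which is
# exactly the clustering-across-studies behavior the model exploits.
effects = rng.choices(atoms, weights=weights, k=500)
n_clusters = len(set(effects))
```

    A small concentration parameter alpha concentrates the weights on a few atoms (strong clustering); a large alpha spreads them out, approaching the continuous base measure.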

  19. Solar oxidation and removal of arsenic--Key parameters for continuous flow applications.

    PubMed

    Gill, L W; O'Farrell, C

    2015-12-01

    Solar oxidation to remove arsenic from water has previously been investigated as a batch process. This research has investigated the kinetic parameters for the design of a continuous-flow solar reactor to remove arsenic from contaminated groundwater supplies. Continuous-flow recirculated batch experiments were carried out under artificial UV light to investigate the effect of different parameters on arsenic removal efficiency. Inlet water arsenic concentrations of up to 1000 μg/L were reduced to below 10 μg/L, requiring 12 mg/L iron after receiving 12 kJUV/L radiation. Citrate, however, was somewhat surprisingly found to have a detrimental effect on the removal process in the continuous-flow reactor studies, which is contrary to results found in batch-scale tests. Examining the impact of other typical groundwater quality parameters (phosphate and silica) on the process, due to their competition with arsenic for photooxidation products, revealed a much higher sensitivity to phosphate ions than to silicate. Other results showed no benefit from the addition of TiO2 photocatalyst, but enhanced arsenic removal at higher temperatures up to 40 °C. Overall, these results have indicated the kinetic envelope from which a continuous-flow, single-pass SORAS system could be more confidently designed for a full-scale community groundwater application at village level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. On the Network Convergence Process in RPL over IEEE 802.15.4 Multihop Networks: Improvement and Trade-Offs

    PubMed Central

    Kermajani, Hamidreza; Gomez, Carles

    2014-01-01

    The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs. PMID:25004154
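    One of the key RPL mechanisms governing convergence is the trickle timer (RFC 6206), which paces DIO transmissions by doubling its interval after each quiet round. A minimal sketch of that interval-doubling behavior (the parameter values are illustrative, not a configuration from the paper):

```python
import random

def trickle_intervals(i_min, max_doublings, rounds, rng):
    """Trickle timer (RFC 6206), used by RPL to pace DIO transmissions:
    each round the interval doubles, up to i_min * 2**max_doublings, and a
    transmission is scheduled uniformly in the second half of the interval."""
    interval = i_min
    cap = i_min * 2 ** max_doublings
    fire_times = []
    for _ in range(rounds):
        fire_times.append(interval / 2 + rng.random() * interval / 2)
        interval = min(2 * interval, cap)
    return fire_times

rng = random.Random(0)
fires = trickle_intervals(i_min=0.1, max_doublings=4, rounds=8, rng=rng)
```

    The trade-off the paper studies follows directly from this schedule: a small minimum interval speeds up convergence but increases control traffic, while a large cap quiets a stable network at the cost of slower reaction to change.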

  1. Infrared thermography of welding zones produced by polymer extrusion additive manufacturing

    PubMed Central

    Seppala, Jonathan E.; Migler, Kalman D.

    2016-01-01

    In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded through a rastering nozzle, welded onto neighboring layers and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging - in conjunction with necessary reflection corrections and calibration procedures - to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperatures can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step in the development of strategies to control and model the printing processes and in the ability to develop models that correlate critical part strength with material and processing parameters. PMID:29167755
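    The reported numbers can be sanity-checked with a Newtonian-cooling sketch: an initial cooling rate of ~100 °C/s yields a time above the glass transition of order 1 s (the temperatures below are illustrative ABS-like values, not data from the paper):

```python
import math

# Newtonian cooling: T(t) = T_amb + (T_ext - T_amb) * exp(-t / tau).
T_extrude, T_ambient, T_g = 220.0, 60.0, 105.0  # degrees C (hypothetical)
initial_rate = 100.0                            # ~100 C/s, as reported

# The cooling rate at t = 0 is (T_ext - T_amb) / tau, which fixes tau.
tau = (T_extrude - T_ambient) / initial_rate

# Solve T(t) = T_g for the time the weld stays above the glass transition.
t_above_Tg = tau * math.log((T_extrude - T_ambient) / (T_g - T_ambient))
```

    With these assumed temperatures the model gives roughly 2 s above the glass transition, the same order of magnitude as the ~1 s measured by IR imaging.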

  2. Fermentation of Saccharomyces cerevisiae - Combining kinetic modeling and optimization techniques points out avenues to effective process design.

    PubMed

    Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara

    2018-09-14

    A combined experimental/theoretical approach is presented for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed that explicitly takes into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations. Copyright © 2018 Elsevier Ltd. All rights reserved.
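    The continuous computation of key variables such as biomass can be sketched with a minimal Monod batch-growth model (a far simpler stand-in for the paper's fuller model; all parameter values are hypothetical):

```python
# Minimal Monod batch-growth sketch: biomass X grows on substrate S with
# specific growth rate mu = mu_max * S / (Ks + S). Values are hypothetical.
mu_max, Ks, Yxs = 0.4, 0.5, 0.5     # 1/h, g/L, g biomass per g substrate
X, S, dt, t = 0.1, 20.0, 0.01, 0.0  # biomass g/L, substrate g/L, step h, time h

while S > 0.1:                      # integrate until substrate is exhausted
    mu = mu_max * S / (Ks + S)      # Monod kinetics
    dX = dt * mu * X                # biomass gained this step
    X += dX
    S = max(S - dX / Yxs, 0.0)      # substrate consumed via the yield Yxs
    t += dt
```

    The yield coefficient ties the two balances together: final biomass is approximately the initial biomass plus Yxs times the substrate consumed, which is the kind of mass-balance consistency a calibrated fermentation model must reproduce.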

  3. On the network convergence process in RPL over IEEE 802.15.4 multihop networks: improvement and trade-offs.

    PubMed

    Kermajani, Hamidreza; Gomez, Carles

    2014-07-07

    The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs.

  4. Infrared thermography of welding zones produced by polymer extrusion additive manufacturing.

    PubMed

    Seppala, Jonathan E; Migler, Kalman D

    2016-10-01

    In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded through a rastering nozzle, welded onto neighboring layers and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging - in conjunction with necessary reflection corrections and calibration procedures - to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperatures can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step in the development of strategies to control and model the printing processes and in the ability to develop models that correlate critical part strength with material and processing parameters.

  5. Achieving mask order processing automation, interoperability and standardization based on P10

    NASA Astrophysics Data System (ADS)

    Rodriguez, B.; Filies, O.; Sadran, D.; Tissier, Michel; Albin, D.; Stavroulakis, S.; Voyiatzis, E.

    2007-02-01

    Last year the MUSCLE (Masks through User's Supply Chain: Leadership by Excellence) project was presented; here we report on its progress. A key process in mask supply chain management is the exchange of technical information for ordering masks. This process is large, complex, company-specific and error-prone, and leads to longer cycle times and higher costs due to missing or wrong inputs. Its automation and standardization could produce significant benefits. We need to agree on a standard for mandatory and optional parameters, and also on a common way to describe parameters when ordering. A system was created to improve performance in terms of Key Performance Indicators (KPIs) such as cycle time and cost of production. This tool allows us to evaluate and measure the effect of individual factors, as well as the effect of implementing the improvements of the complete project. Next, a benchmark study and a gap analysis were performed. These studies show the feasibility of standardization, as there is a large overlap in requirements. We see that the SEMI P10 standard needs enhancements. A format supporting the standard is required, and XML offers the ability to describe P10 in a flexible way. Beyond using XML for P10, the semantics of the mask order should also be addressed. A system design and requirements for a reference implementation of a P10-based management system are presented, covering a mechanism for evolution and version management and a design for P10 editing and data validation.

  6. How gamma radiation processing systems are benefiting from the latest advances in information technology

    NASA Astrophysics Data System (ADS)

    Gibson, Wayne H.; Levesque, Daniel

    2000-03-01

    This paper discusses how gamma irradiation plants are putting the latest advances in computer and information technology to use for better process control, cost savings, and strategic advantages. Some irradiator operations are gaining significant benefits by integrating computer technology and robotics with real-time information processing, multi-user databases, and communication networks. The paper reports on several irradiation facilities that are making good use of client/server LANs, user-friendly graphics interfaces, supervisory control and data acquisition (SCADA) systems, distributed I/O with real-time sensor devices, trending analysis, real-time product tracking, dynamic product scheduling, and automated dosimetry reading. These plants are lowering costs by fast and reliable reconciliation of dosimetry data, easier validation to GMP requirements, optimizing production flow, and faster release of sterilized products to market. There is a trend in the manufacturing sector towards total automation using "predictive process control". Real-time verification of process parameters "on-the-run" allows control parameters to be adjusted appropriately, before the process strays out of limits. Applying this technology to the gamma radiation process, control will be based on monitoring the key parameters such as time, and making adjustments during the process to optimize quality and throughput. Dosimetry results will be used as a quality control measurement rather than as a final monitor for the release of the product. Results are correlated with the irradiation process data to quickly and confidently reconcile variations. Ultimately, a parametric process control system utilizing responsive control, feedback and verification will not only increase productivity and process efficiency, but can also result in operating within tighter dose control set points.

  7. Features and selection of vascular access devices.

    PubMed

    Sansivero, Gail Egan

    2010-05-01

    To review venous anatomy and physiology, discuss assessment parameters before vascular access device (VAD) placement, and review VAD options. Journal articles, personal experience. A number of VAD options are available in clinical practice. Access planning should include comprehensive assessment, with attention to patient participation in the planning and selection process. Careful consideration should be given to long-term access needs and preservation of access sites. Oncology nurses are uniquely suited to perform a key role in VAD planning and placement. With knowledge of infusion therapy, anatomy and physiology, device options, and community resources, nurses can be key leaders in preserving vascular access and improving the safety and comfort of infusion therapy. Copyright 2010 Elsevier Inc. All rights reserved.

  8. Image security based on iterative random phase encoding in expanded fractional Fourier transform domains

    NASA Astrophysics Data System (ADS)

    Liu, Zhengjun; Chen, Hang; Blondel, Walter; Shen, Zhenmin; Liu, Shutian

    2018-06-01

A novel image encryption method is proposed using the expanded fractional Fourier transform, which is implemented with a pair of lenses whose centers are displaced from the optical axis. The encryption system is modeled with Fresnel diffraction and phase modulation to calculate the information transmission. An iterative process built on this transform unit is used to hide the secret image. The structural parameters of the set of lenses serve as additional keys. The performance of the encryption method is analyzed theoretically and numerically. The results show that the added keys markedly enhance the security of the algorithm.
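The lens parameters acting as additional keys extend the classic double random phase encoding scheme. A minimal sketch of that underlying principle, using ordinary Fourier transforms in place of the expanded fractional transform described above (function names and seed-based keys are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def encrypt(img, key1, key2):
    """Double random phase encoding: two seeded phase masks act as keys."""
    rng1, rng2 = np.random.default_rng(key1), np.random.default_rng(key2)
    m1 = np.exp(2j * np.pi * rng1.random(img.shape))  # spatial-domain mask
    m2 = np.exp(2j * np.pi * rng2.random(img.shape))  # Fourier-domain mask
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def decrypt(cipher, key1, key2):
    """Invert the encoding; succeeds only with the correct pair of keys."""
    rng1, rng2 = np.random.default_rng(key1), np.random.default_rng(key2)
    m1 = np.exp(2j * np.pi * rng1.random(cipher.shape))
    m2 = np.exp(2j * np.pi * rng2.random(cipher.shape))
    return np.fft.ifft2(np.fft.fft2(cipher) / m2) / m1
```

Decryption with a wrong key yields noise rather than the image, which is the property the added lens-parameter keys strengthen.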

  9. Rapid and solvent-saving liquefaction of woody biomass using microwave-ultrasonic assisted technology.

    PubMed

    Lu, Zexiang; Wu, Zhengguo; Fan, Liwei; Zhang, Hui; Liao, Yiqiang; Zheng, Deyong; Wang, Siqun

    2016-01-01

A novel process to rapidly liquefy sawdust using reduced quantities of solvent was successfully carried out via microwave-ultrasonic assisted technology (MUAT) in a sulphuric acid/polyethylene glycol 400-glycerol catalytic system. The influence of several key parameters on the liquefaction yield was investigated. The results showed that, compared with traditional liquefaction, the introduction of MUAT allowed the solvent dosage to be halved and shortened the liquefaction time from 60 to 20 min. The liquefaction yield reached 91% under the optimal conditions. However, the influence on the yield of some parameters, such as catalyst concentration, was similar to that of traditional liquefaction, indicating that the application of MUAT possibly only intensified heat and mass transfer rather than altering either the degradation mechanism or pathway. The introduction of MUAT as a process intensification technology has good industrial application potential for woody biomass liquefaction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Parameter analysis on the ultrasonic TSV-filling process and electrochemical characters

    NASA Astrophysics Data System (ADS)

    Wang, Fuliang; Ren, Xinyu; Wang, Yan; Zeng, Peng; Zhou, Zhaohua; Xiao, Hongbin; Zhu, Wenhui

    2017-10-01

As one of the key technologies in 3D packaging, through-silicon via (TSV) interconnection technology has recently become a focus of attention. In this paper, an electrodeposition method for TSV filling with the assistance of ultrasound and additives is introduced. Two important parameters, current density and ultrasonic power, are studied for the TSV filling process and the electrochemical properties. It is found that ultrasound can improve the quality of TSV filling and change the TSV-filling mode. The experimental results also indicate that the filling rate increases more significantly with decreasing current density under ultrasonic conditions than under silent conditions. In addition, according to the voltammetry curve, increasing the ultrasonic power significantly increases the current density of cupric reduction and decreases the thickness of the diffusion layer, so the reduction of copper ions is accelerated, resulting in a higher TSV-filling rate.

  11. Optimization of a growth process for as-grown 2D materials-based devices

    NASA Astrophysics Data System (ADS)

    Lindquist, Miles; Khadka, Sudiksha; Aleithan, Shrouq; Blumer, Ari; Wickramasinghe, Thushan; Thorat, Ruhi; Kordesch, Martin; Stinaff, Eric

We will present the effects of varying key parameters of a deterministic growth method for producing self-contacted 2D transition metal dichalcogenides. Chemical vapor deposition is used to grow a film of 2D material nucleated around and seeded from metallic features prepared by photolithography and sputtering on a Si/SiO2 substrate prior to growth. We will focus on a particular method of growing variable MoS2-based device structures. The goal of this work is to arrive at a robust platform for growing a variety of device structures by systematically altering parameters such as the amount of reactants used, the heating of the substrate and oxide powder, and the flow rate of the argon gas used. These results will help advance a comprehensive process for the scalable production of as-grown, complex, 2D materials-based device architectures.

  12. Wavefront attributes in anisotropic media

    NASA Astrophysics Data System (ADS)

    Vanelle, C.; Abakumov, I.; Gajewski, D.

    2018-07-01

    Surface-measured wavefront attributes are the key ingredient to multiparameter methods, which are nowadays standard tools in seismic data processing. However, most operators are restricted to application to isotropic media. Whereas application of an isotropic operator will still lead to satisfactory stack results, further processing steps that interpret isotropic stacking parameters in terms of wavefront attributes will lead to erroneous results if anisotropy is present but not accounted for. In this paper, we derive relationships between the stacking parameters and anisotropic wavefront attributes that allow us to apply the common reflection surface type operator to 3-D media with arbitrary anisotropy for the zero-offset and finite-offset configurations including converted waves. The operator itself is expressed in terms of wavefront attributes that are measured in the acquisition surface, that is, no model assumptions are made. Numerical results confirm that the accuracy of the new anisotropic operator is of the same magnitude as that of its isotropic counterpart.

  13. ICFA Beam Dynamics Newsletter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pikin, A.

    2017-11-21

Electron beam ion source (EBIS) technology has made significant progress since 1968, when this method of producing highly charged ions in a potential trap within an electron beam was proposed by E. Donets. A better understanding of the physical processes in an EBIS, technological advances, and better simulation tools have driven significant progress in the key EBIS parameters: electron beam current and current density, ion trap capacity, and attainable charge states. The scope of EBIS and EBIT applications has also greatly increased. An attempt is made to compile some EBIS engineering problems and solutions, and to demonstrate the present stage of understanding of the processes and of the approaches to building a better EBIS.

  14. Optical asymmetric image encryption using gyrator wavelet transform

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-11-01

In this paper, we propose a new optical information processing tool, termed the gyrator wavelet transform, to secure a fully phase image based on an amplitude- and phase-truncation approach. The gyrator wavelet transform involves four basic parameters: the gyrator transform order, the type and level of the mother wavelet, and the position of different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. This tool has also been applied for simultaneous compression and encryption of an image. The system's performance, its sensitivity to the encryption parameters such as the gyrator transform order, and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. The computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool that can be used in various optical information processing applications, including image encryption and image compression. This tool can also be applied to secure color, multispectral, and three-dimensional images.

  15. Minimization of the hole overcut and cylindricity errors during rotary ultrasonic drilling of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.

    2018-04-01

Titanium and its alloys, e.g. Ti6Al4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant efforts have been dispensed to improve the accuracy of machining processes for Ti6Al4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high quality holes in Ti6Al4V. The study takes into account the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on the key output responses related to the accuracy of the drilled holes, namely the cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on the cylindricity and overcut errors. Later, regression models were developed to find the optimal set of input parameters to minimize the cylindricity and overcut errors.
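The regression-model optimization step can be illustrated with a toy response-surface fit. The single factor and all measured values below are made up for the sketch (not data from the study); it shows fitting a quadratic model of overcut error versus one input parameter and locating the minimizing setting:

```python
import numpy as np

# Hypothetical single-factor illustration: overcut error (um) measured at
# several spindle speeds (rpm); the numbers are invented for this sketch.
speed = np.array([1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
overcut = np.array([42.0, 30.0, 26.0, 29.0, 40.0])

# Fit a quadratic regression model: overcut ~ b0 + b1*s + b2*s^2
coeffs = np.polyfit(speed, overcut, 2)     # returns [b2, b1, b0]

# For a convex parabola the minimizing speed is the vertex -b1 / (2*b2)
best_speed = -coeffs[1] / (2 * coeffs[0])
```

With several factors (speed, power, feed rate, diameter), the same idea generalizes to a multivariate quadratic surface minimized numerically rather than analytically.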

  16. Panorama parking assistant system with improved particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Cheng, Ruzhong; Zhao, Yong; Li, Zhichao; Jiang, Weigang; Wang, Xin'an; Xu, Yong

    2013-10-01

    A panorama parking assistant system (PPAS) for the automotive aftermarket together with a practical improved particle swarm optimization method (IPSO) are proposed in this paper. In the PPAS system, four fisheye cameras are installed in the vehicle with different views, and four channels of video frames captured by the cameras are processed as a 360-deg top-view image around the vehicle. Besides the embedded design of PPAS, the key problem for image distortion correction and mosaicking is the efficiency of parameter optimization in the process of camera calibration. In order to address this problem, an IPSO method is proposed. Compared with other parameter optimization methods, the proposed method allows a certain range of dynamic change for the intrinsic and extrinsic parameters, and can exploit only one reference image to complete all of the optimization; therefore, the efficiency of the whole camera calibration is increased. The PPAS is commercially available, and the IPSO method is a highly practical way to increase the efficiency of the installation and the calibration of PPAS in automobile 4S shops.
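For reference, a minimal sketch of a standard (unimproved) global-best particle swarm optimizer, the baseline that an IPSO variant modifies. All hyperparameter values here are common textbook defaults, not the paper's settings, and the objective is a stand-in for a calibration error function:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing a scalar objective f(x)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pval = np.apply_along_axis(f, 1, x)           # personal best values
    g = pbest[pval.argmin()].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

In a calibration setting, `f` would score reprojection error of the camera parameters against the reference image; here it is left generic.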

  17. Identification of a thermo-elasto-viscoplastic behavior law for the simulation of thermoforming of high impact polystyrene

    NASA Astrophysics Data System (ADS)

    Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.

    2018-05-01

Thermoforming of high impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material, and process variables. Accurate thermoforming simulations are needed in the optimization process, and determining the behavior of the material under thermoforming conditions is one of the key requirements for an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered to be thermo-dependent. The strain-dependence effect is introduced using Prony series. Tensile tests were carried out at different temperatures and strain rates. The material parameters were then identified using the NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine. To compare the numerical results with the experimental ones, the thickness distribution and the bubble shape were investigated.

  18. Manufacturing Methods and Technology Program Automatic In-Process Microcircuit Evaluation.

    DTIC Science & Technology

    1980-10-01

methods of controlling the AIME system are with the computer and associated interface (CPU control), and with controls located on the front panels...Sync and Blanking signals When the AIME system is being operated by the front panel controls, the computer does not influence the system operation. SU...the color video monitor display. The operator controls these parameters by 1) depressing the appropriate key on the keyboard, 2) observing on the

  19. Fault-tolerant composite Householder reflection

    NASA Astrophysics Data System (ADS)

    Torosov, Boyan T.; Kyoseva, Elica; Vitanov, Nikolay V.

    2015-07-01

    We propose a fault-tolerant implementation of the quantum Householder reflection, which is a key operation in various quantum algorithms, quantum-state engineering, generation of arbitrary unitaries, and entanglement characterization. We construct this operation using the modular approach of composite pulses and a relation between the Householder reflection and the quantum phase gate. The proposed implementation is highly insensitive to variations in the experimental parameters, which makes it suitable for high-fidelity quantum information processing.

  20. On-rate based optimization of structure-kinetic relationship--surfing the kinetic map.

    PubMed

    Schoop, Andreas; Dey, Fabian

    2015-10-01

    In the lead discovery process residence time has become an important parameter for the identification and characterization of the most efficacious compounds in vivo. To enable the success of compound optimization by medicinal chemistry toward a desired residence time the understanding of structure-kinetic relationship (SKR) is essential. This article reviews various approaches to monitor SKR and suggests using the on-rate as the key monitoring parameter. The literature is reviewed and examples of compound series with low variability as well as with significant changes in on-rates are highlighted. Furthermore, findings of kinetic on-rate changes are presented and potential underlying rationales are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

Registration of medical images is an essential research topic in medical image processing and applications, and in particular a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, an affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in the medical images. Integrating the ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as the similarity criterion to register the multimodality images. Finally, an immune algorithm is used to search for the registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
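The normalized mutual information criterion can be sketched from a joint intensity histogram. This minimal version uses raw intensities only, omitting the ordinal-feature dimensions the paper adds; the bin count is an illustrative choice:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b), from a joint intensity histogram.

    NMI is 1 for independent images and rises toward 2 as the images'
    intensities become deterministically related (e.g. when aligned).
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)     # marginal distributions

    def entropy(p):
        p = p[p > 0]                              # ignore empty bins
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

A registration loop would maximize this score over the affine transformation parameters applied to the floating image.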

  2. Mapping land water and energy balance relations through conditional sampling of remote sensing estimates of atmospheric forcing and surface states

    NASA Astrophysics Data System (ADS)

    Farhadi, Leila; Entekhabi, Dara; Salvucci, Guido

    2016-04-01

In this study, we develop and apply a mapping estimation capability for key unknown parameters that link the surface water and energy balance equations. The method is applied to the Gourma region in West Africa. The accuracy of the estimation method at the point scale was previously examined using flux tower data. In this study, the capability is scaled to be applicable with remotely sensed data products and hence allow mapping. Parameters of the system are estimated through a process that links atmospheric forcing (precipitation and incident radiation), surface states, and unknown parameters. Based on conditional averaging of land surface temperature and moisture states, respectively, a single objective function is posed that measures moisture- and temperature-dependent errors solely in terms of observed forcings and surface states. This objective function is minimized with respect to the parameters to identify evapotranspiration and drainage models and estimate the water and energy balance flux components. The uncertainty of the estimated parameters (and the associated statistical confidence limits) is obtained through the inverse of the Hessian of the objective function, which is an approximation of the covariance matrix. This calibration-free method is applied to the mesoscale Gourma region in West Africa using multiplatform remote sensing data. The retrievals are verified against tower-flux field site data and the physiographic characteristics of the region. The focus is to find the functional form of the dependence of the evaporative fraction on soil moisture, a key closure function for surface and subsurface heat and moisture dynamics, using remote sensing data.
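The inverse-Hessian step for parameter uncertainty can be illustrated on a toy objective. The linear model, noise level, and data below are hypothetical stand-ins for the study's water/energy balance objective; the point is only that, at the minimum of a negative log-likelihood, the inverse Hessian approximates the parameter covariance matrix:

```python
import numpy as np

def finite_diff_hessian(f, theta, h=1e-5):
    """Central finite-difference Hessian of a scalar objective f at theta."""
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            t = np.array(theta, dtype=float)
            def fp(di, dj):
                t2 = t.copy()
                t2[i] += di
                t2[j] += dj
                return f(t2)
            H[i, j] = (fp(h, h) - fp(h, -h) - fp(-h, h) + fp(-h, -h)) / (4 * h * h)
    return H

# Hypothetical example: negative log-likelihood of a linear model y = a*x + b
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1
y = 2.0 * x + 1.0 + sigma * rng.normal(size=x.size)

def nll(theta):
    a, b = theta
    return 0.5 * np.sum((y - (a * x + b)) ** 2) / sigma**2

# Least-squares estimate, then covariance ~ inverse Hessian at the minimum
A = np.column_stack([x, np.ones_like(x)])
theta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
cov = np.linalg.inv(finite_diff_hessian(nll, theta_hat))
```

The square roots of the diagonal of `cov` give the confidence limits quoted in such analyses.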

  3. Demonstration of emulator-based Bayesian calibration of safety analysis codes: Theory and formulation

    DOE PAGES

    Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert

    2015-05-28

System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP-based process.
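A bare-bones GP emulator, posterior mean only, with fixed kernel hyperparameters, conveys why emulation makes MCMC calibration tractable: the emulator is cheap to evaluate once trained on a handful of code runs. This is a generic sketch, not the FFGP formulation described in the record; the "expensive code" and all hyperparameter values are stand-ins:

```python
import numpy as np

def rbf_kernel(X1, X2, length=0.3, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean of a zero-mean GP conditioned on training runs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    return Ks @ np.linalg.solve(K, y_train)

# Hypothetical "expensive code": a smooth response sampled at a few inputs
code = lambda x: np.sin(2 * np.pi * x)
X_train = np.linspace(0.0, 1.0, 12)     # 12 expensive code runs
y_train = code(X_train)
X_test = np.linspace(0.05, 0.95, 50)    # cheap emulator queries
emulated = gp_posterior_mean(X_train, y_train, X_test)
```

In a calibration workflow, each MCMC proposal would query the emulator instead of the code itself, at a tiny fraction of the cost.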

  4. A systemic approach to explore the flexibility of energy stores at the cellular scale: Examples from muscle cells.

    PubMed

    Taghipoor, Masoomeh; van Milgen, Jaap; Gondret, Florence

    2016-09-07

Variations in energy storage and expenditure are key elements in the adaptation of animals to rapidly changing environments. Because of the multiplicity of metabolic pathways, metabolic crossroads, and interactions between anabolic and catabolic processes within and between different cells, the flexibility of energy stores in animal cells is difficult to describe in simple verbal, textual or graphic terms. We propose a mathematical model to study the influence of internal and external challenges on the dynamic behavior of energy stores and its consequences for cell energy status. The role of the flexibility of energy stores in the energy equilibrium at the cellular level is illustrated through three case studies: variation in eating frequency (i.e., glucose input), level of physical activity (i.e., ATP requirement), and changes in cell characteristics (i.e., maximum capacity of glycogen storage). A sensitivity analysis was performed to highlight the most relevant parameters of the model; model simulations were then performed to illustrate how variation in these key parameters affects the cellular energy balance. According to this analysis, the maximum glycogen accumulation capacity and the homeostatic energy demand are among the most important parameters regulating muscle cell metabolism to ensure its energy equilibrium. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Key Processes of Silicon-On-Glass MEMS Fabrication Technology for Gyroscope Application.

    PubMed

    Ma, Zhibo; Wang, Yinan; Shen, Qiang; Zhang, Han; Guo, Xuetao

    2018-04-17

MEMS fabrication based on the silicon-on-glass (SOG) process requires many steps, including patterning, anodic bonding, deep reactive ion etching (DRIE), and chemical mechanical polishing (CMP). The effects of the process parameters of CMP and DRIE are investigated in this study. The CMP process parameters, such as abrasive size, load pressure, and pH value of the SF1 solution, are examined to optimize the total thickness variation in the structure and the surface quality. The ratio of etching to passivation cycle time and the process pressure are also adjusted to achieve satisfactory performance during DRIE. The process is optimized so that neither notching nor lag effects occur on the fabricated silicon structures. To demonstrate the capability of the modified CMP and DRIE processes, a z-axis micro gyroscope based on the SOG process is fabricated. Initial test results show that the average surface roughness of the silicon is below 1.13 nm and the thickness of the silicon is measured to be 50 μm. All of the structures are well defined, without the footing effect, by the use of the modified DRIE process. The initial performance test results of the resonant frequency for the drive and sense modes are 4.048 and 4.076 kHz, respectively. The demands for this kind of SOG MEMS device can be fulfilled using the optimized process.

  6. The treatment of uncertainties in reactive pollution dispersion models at urban scales.

    PubMed

    Tomlin, A S; Ziehn, T; Goodman, P; Tate, J E; Dixon, N S

    2016-07-18

    The ability to predict NO2 concentrations ([NO2]) within urban street networks is important for the evaluation of strategies to reduce exposure to NO2. However, models aiming to make such predictions involve the coupling of several complex processes: traffic emissions under different levels of congestion; dispersion via turbulent mixing; chemical processes of relevance at the street-scale. Parameterisations of these processes are challenging to quantify with precision. Predictions are therefore subject to uncertainties which should be taken into account when using models within decision making. This paper presents an analysis of mean [NO2] predictions from such a complex modelling system applied to a street canyon within the city of York, UK including the treatment of model uncertainties and their causes. The model system consists of a micro-scale traffic simulation and emissions model, and a Reynolds averaged turbulent flow model coupled to a reactive Lagrangian particle dispersion model. The analysis focuses on the sensitivity of predicted in-street increments of [NO2] at different locations in the street to uncertainties in the model inputs. These include physical characteristics such as background wind direction, temperature and background ozone concentrations; traffic parameters such as overall demand and primary NO2 fraction; as well as model parameterisations such as roughness lengths, turbulent time- and length-scales and chemical reaction rate coefficients. Predicted [NO2] is shown to be relatively robust with respect to model parameterisations, although there are significant sensitivities to the activation energy for the reaction NO + O3 as well as the canyon wall roughness length. Under off-peak traffic conditions, demand is the key traffic parameter. Under peak conditions where the network saturates, road-side [NO2] is relatively insensitive to changes in demand and more sensitive to the primary NO2 fraction. 
The most important physical parameter was found to be the background wind direction. The study highlights the key parameters required for reliable [NO2] estimations suggesting that accurate reference measurements for wind direction should be a critical part of air quality assessments for in-street locations. It also highlights the importance of street scale chemical processes in forming road-side [NO2], particularly for regions of high NOx emissions such as close to traffic queues.
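The sensitivity to the NO + O3 activation energy noted above follows directly from the Arrhenius form k(T) = A·exp(-Ea/RT). The pre-exponential factor and activation energy below are placeholders, not evaluated kinetic data for NO + O3; the sketch only shows how a modest perturbation in Ea propagates into the rate coefficient:

```python
import numpy as np

def arrhenius(T, A, Ea):
    """Arrhenius rate coefficient k(T) = A * exp(-Ea / (R*T))."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    return A * np.exp(-Ea / (R * T))

# Hypothetical values for illustration only (not recommended constants for
# NO + O3): effect of a +5% perturbation in Ea at a typical urban 288 K.
k_base = arrhenius(288.0, 1.0e-12, 11.0e3)
k_pert = arrhenius(288.0, 1.0e-12, 11.0e3 * 1.05)
```

Because Ea enters through the exponent, a few percent of uncertainty in the activation energy shifts k by tens of percent at ambient temperature, which is why this parameter dominates the chemical sensitivities in such models.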

  7. How the twain can meet: Prospect theory and models of heuristics in risky choice.

    PubMed

    Pachur, Thorsten; Suter, Renata S; Hertwig, Ralph

    2017-03-01

Two influential approaches to modeling choice between risky options are algebraic models (which focus on predicting the overt decisions) and models of heuristics (which are also concerned with capturing the underlying cognitive process). Because they rest on fundamentally different assumptions and algorithms, the two approaches are usually treated as antithetical, or even incommensurable. Drawing on cumulative prospect theory (CPT; Tversky & Kahneman, 1992) as the currently most influential instance of a descriptive algebraic model, we demonstrate how the two modeling traditions can be linked. CPT's algebraic functions characterize choices in terms of psychophysical (diminishing sensitivity to probabilities and outcomes) as well as psychological (risk aversion and loss aversion) constructs. Models of heuristics characterize choices as rooted in simple information-processing principles such as lexicographic and limited search. In computer simulations, we estimated CPT's parameters for choices produced by various heuristics. The resulting CPT parameter profiles portray each of the choice-generating heuristics in psychologically meaningful ways, capturing, for instance, differences in how the heuristics process probability information. Furthermore, CPT parameters can reflect a key property of many heuristics, lexicographic search, and track the environment-dependent behavior of heuristics. Finally, we show, in both an empirical and a model recovery study, how CPT parameter profiles can be used to detect the operation of heuristics. We also address the limits of CPT's ability to capture choices produced by heuristics. Our results highlight an untapped potential of CPT as a measurement tool to characterize the information processing underlying risky choice. Copyright © 2017 Elsevier Inc. All rights reserved.
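The CPT functional forms referred to above can be sketched compactly. This version handles gains-only gambles with the median Tversky-Kahneman (1992) parameter values; extending it to mixed gambles requires separate rank-dependent weighting for losses:

```python
import numpy as np

def cpt_value(outcomes, probs, alpha=0.88, lam=2.25, gamma=0.61):
    """CPT valuation of a gains-only gamble (Tversky & Kahneman, 1992 forms)."""
    def w(p):
        # inverse-S probability weighting: overweights small probabilities
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    def v(x):
        # value function: concave for gains, loss-averse (steeper) for losses
        return x**alpha if x >= 0 else -lam * (-x)**alpha

    # rank-dependent decision weights, outcomes sorted from best downward
    order = np.argsort(outcomes)[::-1]
    xs = np.asarray(outcomes, dtype=float)[order]
    ps = np.asarray(probs, dtype=float)[order]
    cum = np.cumsum(ps)
    weights = w(cum) - w(np.concatenate(([0.0], cum[:-1])))
    return sum(weights[i] * v(xs[i]) for i in range(len(xs)))
```

Fitting `alpha`, `lam`, and `gamma` to choices generated by a heuristic yields the "parameter profile" the article uses to characterize that heuristic.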

  8. System-level view of geospace dynamics: Challenges for high-latitude ground-based observations

    NASA Astrophysics Data System (ADS)

    Donovan, E.

    2014-12-01

Increasingly, research programs including GEM, CEDAR, GEMSIS, GO Canada, and others are focusing on how geospace works as a system. Coupling sits at the heart of system-level dynamics. In all cases, coupling is accomplished via fundamental processes such as reconnection and plasma waves, and can be between regions, energy ranges, species, scales, and energy reservoirs. Three views of geospace are required to attack system-level questions. First, we must observe the fundamental processes that accomplish the coupling. This "observatory view" requires in situ measurements by satellite-borne instruments or remote sensing from powerful, well-instrumented ground-based observatories organized around, for example, Incoherent Scatter Radars. Second, we need to see how this coupling is controlled and what it accomplishes. This demands quantitative observations of the system elements that are being coupled. This "multi-scale view" is accomplished by networks of ground-based instruments, and by global imaging from space. Third, if we take geospace as a whole, the system is too complicated, so at the top level we need time series of simple quantities, such as indices, that capture important aspects of the system-level dynamics. This requires a "key parameter view" that is typically provided through indices such as AE and Dst. With the launch of MMS, and ongoing missions such as THEMIS, Cluster, Swarm, RBSP, and ePOP, we are entering a once-in-a-lifetime epoch with a remarkable fleet of satellites probing processes at key regions throughout geospace, so the observatory view is secure. With a few exceptions, our key parameter view provides what we need. The multi-scale view, however, is compromised by space/time scales that are important but under-sampled, combined extent of coverage and resolution that falls short of what we need, and inadequate conjugate observations.
In this talk, I present an overview of what we need for taking system level research to its next level, and how high latitude ground based observations can address these challenges.

  9. Study of gas production from shale reservoirs with multi-stage hydraulic fracturing horizontal well considering multiple transport mechanisms.

    PubMed

    Guo, Chaohua; Wei, Mingzhen; Liu, Hong

    2018-01-01

Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most existing studies use well testing methods, which involve too many unrealistic simplifications and assumptions. Also, no systematic work has been conducted that considers all reasonable transport mechanisms, and there are very few sensitivity studies of uncertain parameters using realistic parameter ranges. Hence, a detailed and systematic study of reservoir simulation with MsFHW is still necessary. In this paper, a dual porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified with available field data from the Barnett Shale. The following mechanisms have been considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGRs' production performance with MsFHW has been conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of the matrix parameters have been reviewed. Sensitivity analyses have been conducted to analyze the effect of the above factors on the production performance of SGRs. 
Through comparison, it can be found that hydraulic fracture parameters are more sensitive compared with reservoir parameters. And reservoirs parameters mainly affect the later production period. However, the hydraulic fracture parameters have a significant effect on gas production from the early period. The results of this study can be used to improve the efficiency of history matching process. Also, it can contribute to the design and optimization of hydraulic fracture treatment design in unconventional SGRs.
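The Langmuir isotherm used above for gas desorption has a simple closed form. Below is a minimal sketch with illustrative, assumed parameter values, not the values calibrated in the study:

```python
def langmuir_adsorbed_volume(p, v_l, p_l):
    """Langmuir isotherm: adsorbed gas content at reservoir pressure p.

    v_l -- Langmuir volume, the maximum adsorbed content (e.g. scf/ton)
    p_l -- Langmuir pressure, the pressure at which half of v_l is adsorbed
    """
    return v_l * p / (p_l + p)

# Illustrative Barnett-like values (assumed for demonstration only):
v = langmuir_adsorbed_volume(p=2000.0, v_l=96.0, p_l=650.0)
```

As the matrix pressure drops during production, the adsorbed content given by this curve falls, releasing gas from the matrix; at p = p_l exactly half of the Langmuir volume remains adsorbed.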

  10. Study of gas production from shale reservoirs with multi-stage hydraulic fracturing horizontal well considering multiple transport mechanisms

    PubMed Central

    Wei, Mingzhen; Liu, Hong

    2018-01-01

    Development of unconventional shale gas reservoirs (SGRs) has been boosted by advances in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted on multi-stage hydraulically fractured SGRs. Most existing studies use well-testing methods, which rely on unrealistic simplifications and assumptions; no systematic work has considered all relevant transport mechanisms, and very few sensitivity studies of uncertain parameters use realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study of MsFHW is still necessary. In this paper, a dual-porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified against available field data from the Barnett Shale. The following mechanisms were considered in the model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. A sensitivity analysis of SGR production performance with MsFHW was conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of the matrix parameters were reviewed, and sensitivity analyses were conducted on the effect of each factor on the production performance of SGRs.
Through this comparison, the hydraulic fracture parameters were found to be more sensitive than the reservoir parameters: reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters have a significant effect on gas production from the early period onward. The results of this study can be used to improve the efficiency of the history matching process and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs. PMID:29320489

  11. Hierarchical Bayesian Model for Combining Geochemical and Geophysical Data for Environmental Applications Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2013-05-01

    Development of a hierarchical Bayesian model to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation, using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency, and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g., ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: (1) a data model, providing links between geochemical and geophysical attributes; (2) a process model, describing the spatial and temporal variability of geochemical properties in the subsurface system; and (3) a parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatiotemporal distribution of ferrous iron, sulfate, and sulfide, together with the associated uncertainty information. The results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.
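The unknown parameters in a hierarchical model like this are typically estimated with Markov chain Monte Carlo. A minimal random-walk Metropolis sketch on a toy one-parameter problem with made-up data, not the paper's three-level model:

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=0):
    """Random-walk Metropolis sampler (minimal sketch)."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy example: infer one geochemical concentration from noisy measurements
data = [2.1, 1.9, 2.3, 2.0]  # hypothetical values, for illustration only

def logpost(mu):
    prior = -mu ** 2 / (2.0 * 10.0 ** 2)                          # N(0, 10) prior
    like = -sum((d - mu) ** 2 for d in data) / (2.0 * 0.2 ** 2)   # sigma = 0.2
    return prior + like

chain = metropolis(logpost, x0=0.0, step=0.2, n=5000)
post_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

In the full hierarchical setting the same accept/reject machinery runs over all three sub-model levels jointly rather than over a single mean.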

  12. Engineering model for ultrafast laser microprocessing

    NASA Astrophysics Data System (ADS)

    Audouard, E.; Mottay, E.

    2016-03-01

    Ultrafast laser micro-machining relies on complex laser-matter interaction processes, leading to virtually athermal laser ablation. The development of industrial ultrafast laser applications benefits from a better understanding of these processes. To this end, a number of sophisticated scientific models have been developed, providing valuable insights into the physics of the interaction. Yet, from an engineering point of view, they are often difficult to use and require a number of adjustable parameters. We present a simple engineering model for ultrafast laser processing, applied in various real-life applications: percussion drilling, line engraving, and non-normal-incidence trepanning. The model requires only two global parameters. Analytical results are derived for single-pulse percussion drilling and single-pass engraving. Simple assumptions make it possible to predict the effect of non-normal incident beams and to obtain key parameters for trepanning drilling. The model is compared to experimental data on stainless steel over a wide range of laser characteristics (pulse duration, repetition rate, pulse energy) and machining conditions (sample or beam speed). Ablation depth and volume ablation rate are modeled for pulse durations from 100 fs to 1 ps. A trepanning time of 5.4 s with a conicity of 0.15° is obtained for a hole of 900 μm depth and 100 μm diameter.
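A common two-parameter form for such an engineering model is the logarithmic ablation law, with an ablation threshold fluence and an effective penetration depth as the only global parameters. The sketch below assumes this form; the paper's exact formulation and parameter values are not reproduced here:

```python
import math

def depth_per_pulse(fluence, f_th, delta):
    """Logarithmic ablation law: depth removed by one pulse.

    f_th  -- ablation threshold fluence (assumed parameter, e.g. J/cm^2)
    delta -- effective energy penetration depth (assumed parameter, e.g. nm)
    """
    if fluence <= f_th:
        return 0.0  # below threshold: no ablation
    return delta * math.log(fluence / f_th)

def percussion_depth(fluence, f_th, delta, n_pulses):
    """First-order percussion-drilling estimate for n identical pulses."""
    return n_pulses * depth_per_pulse(fluence, f_th, delta)
```

Doubling the fluence above threshold adds a fixed increment of delta·ln 2 per pulse, which is why ablation depth grows only logarithmically with pulse energy.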

  13. Quantifying Hydro-biogeochemical Model Sensitivity in Assessment of Climate Change Effect on Hyporheic Zone Processes

    NASA Astrophysics Data System (ADS)

    Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.

    2016-12-01

    The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water with distinct biogeochemical and thermal properties mix and interact with each other. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and the microbial and chemical processes also play important roles in biogeochemical dynamics. Thus, a comprehensive understanding of the biogeochemical processes in the hyporheic zone requires a coupled thermo-hydro-biogeochemical model. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules and parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we (1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, (2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, (3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and (4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty at each level of the hierarchy. The objectives of the research are to (1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and (2) quantify carbon consumption in the hyporheic zone under different climate change scenarios.
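A variance-based global sensitivity analysis of the kind described above attributes output variance to individual inputs. Below is a minimal Saltelli-style Monte Carlo estimator of first-order Sobol indices, demonstrated on a toy surrogate rather than the coupled thermo-hydro-biogeochemical model:

```python
import random

def sobol_first_order(model, dim, n=20000, seed=1):
    """Crude Saltelli-style estimate of first-order Sobol indices for
    inputs uniform on [0, 1] (a sketch, not the study's implementation)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    y_all = yA + yB
    mean = sum(y_all) / (2 * n)
    var = sum((y - mean) ** 2 for y in y_all) / (2 * n)
    s = []
    for i in range(dim):
        # A with column i replaced by the corresponding column of B
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        vi = sum(yb * (yabi - ya)
                 for yb, yabi, ya in zip(yB, yABi, yA)) / n
        s.append(vi / var)
    return s

# Toy surrogate: output driven mainly by the first input (exact S = 0.9, 0.1)
s = sobol_first_order(lambda x: 3.0 * x[0] + x[1], dim=2)
```

Inputs whose first-order index is near zero can be fixed at nominal values with little loss of output variance, which is how such an analysis singles out the key modules and parameters.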

  14. Denitrification in Agricultural Soils: Integrated control and Modelling at various scales (DASIM)

    NASA Astrophysics Data System (ADS)

    Müller, Christoph; Well, Reinhard; Böttcher, Jürgen; Butterbach-Bahl, Klaus; Dannenmann, Michael; Deppe, Marianna; Dittert, Klaus; Dörsch, Peter; Horn, Marcus; Ippisch, Olaf; Mikutta, Robert; Senbayram, Mehmet; Vogel, Hans-Jörg; Wrage-Mönnig, Nicole; Müller, Carsten

    2016-04-01

    The new research unit DASIM brings together the expertise of 11 working groups to study the process of denitrification at unprecedented spatial and temporal resolution. Based on state-of-the-art analytical techniques, our aim is to develop improved denitrification models ranging from the microscale to the field/plot scale. Denitrification, the process of nitrate reduction that allows microbes to sustain respiration under anaerobic conditions, is the key process returning reactive nitrogen as N2 to the atmosphere. Actively denitrifying communities in soil show distinct regulatory phenotypes (DRPs) with characteristic controls on the single reaction steps and end-products. It is unresolved whether DRPs are anchored in the taxonomic composition of denitrifier communities and how environmental conditions shape them. Despite being intensively studied for more than 100 years, denitrification rates and the emissions of its gaseous products still cannot be satisfactorily predicted. While the impact of single environmental parameters is well understood, the complexity of the process itself, with its intricate cellular regulation in response to highly variable factors in the soil matrix, prevents robust prediction of gaseous emissions. Key parameters in soil are pO2, organic matter content and quality, pH, and the microbial community structure, which in turn are affected by soil structure, chemistry, and soil-plant interactions. In the DASIM research unit, we aim at the quantitative prediction of denitrification rates as a function of microscale soil structure, organic matter quality, DRPs, and atmospheric boundary conditions via a combination of state-of-the-art experimental and analytical tools (X-ray μCT, 15N tracing, NanoSIMS, microsensors, advanced flux detection, NMR spectroscopy, and molecular methods including next-generation sequencing of functional gene transcripts). We actively seek collaboration with researchers working in the field of denitrification.

  15. Natural photosystems from an engineer's perspective: length, time, and energy scales of charge and energy transfer.

    PubMed

    Noy, Dror

    2008-01-01

    The vast structural and functional information database of photosynthetic enzymes includes, in addition to detailed kinetic records from decades of research on physical processes and chemical reaction pathways, a variety of high- and medium-resolution crystal structures of key photosynthetic enzymes. Here, it is examined from an engineer's point of view with the long-term goal of reproducing the key features of natural photosystems in novel biological and non-biological solar-energy conversion systems. This survey reveals that the basic physics of the transfer processes, namely the time constraints imposed by the rates of incoming photon flux and the various decay processes, allows for a large degree of tolerance in the engineering parameters. Furthermore, the requirements to guarantee energy and electron transfer rates that yield high efficiency in natural photosystems are largely met by controlling the distance between chromophores and redox cofactors. This underlines a critical challenge for projected de novo designed constructions, that is, the control of the spatial organization of cofactor molecules within a dense array of different cofactors, some well within 1 nm of each other.

  16. The development and operation of the international solar-terrestrial physics central data handling facility

    NASA Technical Reports Server (NTRS)

    Lehtonen, Kenneth

    1994-01-01

    The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) International Solar-Terrestrial Physics (ISTP) Program is committed to the development of a comprehensive, multi-mission ground data system which will support a variety of national and international scientific missions in an effort to study the flow of energy from the sun through the Earth-space environment, known as the geospace. A major component of the ISTP ground data system is an ISTP-dedicated Central Data Handling Facility (CDHF). Acquisition, development, and operation of the ISTP CDHF were delegated by the ISTP Project Office within the Flight Projects Directorate to the Information Processing Division (IPD) within the Mission Operations and Data Systems Directorate (MO&DSD). The ISTP CDHF supports the receipt, storage, and electronic access of the full complement of ISTP Level-zero science data; serves as the linchpin for the centralized processing and long-term storage of all key parameters generated either by the ISTP CDHF itself or received from external, ISTP Program approved sources; and provides the required networking and 'science-friendly' interfaces for the ISTP investigators. Once connected to the ISTP CDHF, investigators can browse the online catalog of key parameters from their remote processing facilities for the immediate electronic receipt of selected key parameters using the NASA Science Internet (NSI), managed by NASA's Ames Research Center. The purpose of this paper is twofold: (1) to describe how the ISTP CDHF was successfully implemented and operated to support initially the Japanese Geomagnetic Tail (GEOTAIL) mission and correlative science investigations, and (2) to describe how the ISTP CDHF has been enhanced to support ongoing as well as future ISTP missions. 
Emphasis will be placed on how various project management approaches were undertaken that proved to be highly effective in delivering an operational ISTP CDHF to the Project on schedule and within budget. Examples to be discussed include: the development of superior teams; the use of Defect Causal Analysis (DCA) concepts to improve the software development process in a pilot Total Quality Management (TQM) initiative; and the implementation of a robust architecture that will be able to support the anticipated growth in the ISTP Program science requirements with only incremental upgrades to the baseline system. Further examples include the use of automated data management software and the implementation of Government and/or industry standards, whenever possible, into the hardware and software development life-cycle. Finally, the paper will also report on several new technologies (for example, the installation of a Fiber Distributed Data Interface (FDDI) network) that were successfully employed.

  17. Study of the Influence of Key Process Parameters on Furfural Production.

    PubMed

    Fele Žilnik, Ljudmila; Grilc, Viktor; Mirt, Ivan; Cerovečki, Željko

    2016-01-01

    The present work reports the influence of key process variables on furfural formation from leached chestnut-wood chips in a pressurized reactor. The effects of temperature, pressure, type and concentration of the catalyst solution, steam flow rate (stripping module), moisture content of the wood particles, and geometric characteristics such as reactor size and type, particle size, and bed height were considered systematically. Only a one-stage process was taken into consideration. Lab-scale and pilot-scale studies were performed. The results of the non-catalysed laboratory experiments were compared with an actual non-catalysed (auto-catalysed) industrial process and with experiments at the pilot scale, the latter giving a 28% higher furfural yield than the others. Application of sulphuric acid as catalyst, in an amount of 0.03-0.05 g (H2SO4, 100%)/g d.m. (dry material), enables higher furfural production at lower steam temperature and pressure in a shorter reaction time. Pilot-scale catalysed experiments revealed very good furfural formation under less severe operating conditions, with a maximum furfural yield of up to 88% of the theoretical value.

  18. Enzyme activities as indicators of quality in organic soils

    NASA Astrophysics Data System (ADS)

    Raigon Jiménez, Mo; Fita, Ana Delores; Rodriguez Burruezo, Adrián

    2016-04-01

    The analytical determination of biochemical parameters, such as soil enzyme activities and parameters related to microbial biomass, is of growing importance as a biological indicator in soil science studies. Metabolic activity in soil is responsible for important processes such as the mineralization and humification of organic matter. These biological reactions affect other key processes involving elements such as carbon, nitrogen, and phosphorus, and all related transformations in the soil microbial biomass. The determination of biochemical parameters is useful in studies of organic soils, where the microbial processes that are key to their conservation can be analyzed through parameters of metabolic activity. The main objective of this work is to apply analytical methodologies for enzyme activities to soil collections with different physicochemical characteristics. Selective sampling was carried out on natural soils, organic farming soils, conventional farming soils, and urban soils. The soils were properly identified and conserved at 4 °C until analysis. The enzyme activities determined were catalase, urease, cellulase, dehydrogenase, and alkaline phosphatase, which together represent a group of biological transformations that occur in the soil environment. The results indicate that, for the natural and agronomic soil collections, the values of the enzymatic activities are within the ranges established for forestry and agricultural soils. Organic soils generally show higher levels of enzymatic activity, regardless of the enzyme involved. In soils near urban areas, activity levels were significantly reduced. Vegetation cover applied to organic soils results in greater enzymatic activity, so the quality of these soils, defined as the ability to maintain their biological productivity, is increased by the use of cover crops, whether sown or spontaneous species.
The practice of legume-based cover cropping could therefore be an ideal choice for the recovery of degraded soils, because these soils showed the highest levels of enzymatic activity.

  19. Recurrent seascape units identify key ecological processes along the western Antarctic Peninsula.

    PubMed

    Bowman, Jeff S; Kavanaugh, Maria T; Doney, Scott C; Ducklow, Hugh W

    2018-04-10

    The western Antarctic Peninsula (WAP) is a bellwether of global climate change and natural laboratory for identifying interactions between climate and ecosystems. The Palmer Long-Term Ecological Research (LTER) project has collected data on key ecological and environmental processes along the WAP since 1993. To better understand how key ecological parameters are changing across space and time, we developed a novel seascape classification approach based on in situ temperature, salinity, chlorophyll a, nitrate + nitrite, phosphate, and silicate. We anticipate that this approach will be broadly applicable to other geographical areas. Through the application of self-organizing maps (SOMs), we identified eight recurrent seascape units (SUs) in these data. These SUs have strong fidelity to known regional water masses but with an additional layer of biogeochemical detail, allowing us to identify multiple distinct nutrient profiles in several water masses. To identify the temporal and spatial distribution of these SUs, we mapped them across the Palmer LTER sampling grid via objective mapping of the original parameters. Analysis of the abundance and distribution of SUs since 1993 suggests two year types characterized by the partitioning of chlorophyll a into SUs with different spatial characteristics. By developing generalized linear models for correlated, time-lagged external drivers, we conclude that early spring sea ice conditions exert a strong influence on the distribution of chlorophyll a and nutrients along the WAP, but not necessarily the total chlorophyll a inventory. Because the distribution and density of phytoplankton biomass can have an impact on biomass transfer to the upper trophic levels, these results highlight anticipated links between the WAP marine ecosystem and climate. © 2018 John Wiley & Sons Ltd.
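The self-organizing map behind the seascape units can be sketched compactly. Below is a minimal 1-D SOM on toy two-feature data; it is an illustration only and does not reproduce the paper's eight-unit map or its objective mapping step:

```python
import math
import random

def train_som(data, n_units, n_iter=3000, seed=0):
    """Minimal 1-D self-organizing map (illustrative sketch only)."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(n_iter):
        x = data[rng.randrange(len(data))]
        frac = t / n_iter
        lr = 0.5 * (1.0 - frac)                            # decaying learning rate
        radius = max(0.5, (n_units / 2.0) * (1.0 - frac))  # decaying neighborhood
        bmu = min(range(n_units),                          # best-matching unit
                  key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
        for i in range(n_units):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
            units[i] = [u + lr * h * (v - u) for u, v in zip(units[i], x)]
    return units

def assign_unit(units, x):
    """Label a water sample with its recurrent unit (index of the BMU)."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))

# Toy data: two distinct water types in a (temperature, salinity)-like space
data = [[0.0, 0.1], [0.1, 0.0], [0.05, 0.05],
        [1.0, 0.9], [0.9, 1.0], [0.95, 0.95]]
units = train_som(data, n_units=2)
```

After training, every new sample is labeled by its best-matching unit, which is how each water sample along the WAP gets assigned to a recurrent seascape unit.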

  20. Developing and implementing an accreditation system for health promoting schools in Northern India: a cross-sectional study.

    PubMed

    Thakur, Jarnail Singh; Sharma, Deepak; Jaswal, Nidhi; Bharti, Bhavneet; Grover, Ashoo; Thind, Paramjyoti

    2014-12-22

    The "Health Promoting School" (HPS) is a holistic and comprehensive approach to integrating health promotion within the community. At the time of conducting this study, there was no organized accreditation system for HPS in India. We therefore developed an accreditation system for HPSs using support from key stakeholders and implemented this system in HPS in Chandigarh territory, India. A desk review was undertaken to review HPS accreditation processes used in other countries. An HPS accreditation manual was drafted after discussions with key stakeholders. Seventeen schools (eight government and nine private) were included in the study. A workshop was held with school principals and teachers and other key stakeholders, during which parameters, domains and an accreditation checklist were discussed and finalized. The process of accreditation of these 17 schools was initiated in 2011 according to the accreditation manual. HPSs were encouraged to undertake activities to increase their accreditation grade and were reassessed in 2013 to monitor progress. Each school was graded on the basis of the accreditation scores obtained. The accreditation manual featured an accreditation checklist, with parameters, scores and domains. It categorized accreditation into four levels: bronze, silver, gold and platinum (each level having its own specific criteria and mandate). In 2011, more than half (52.9%) of the schools belonged to the bronze level and only 23.5% were at the gold level. Improvements were observed upon reassessment after 2 years (2013), with 76.4% of schools at the gold level and only 11.8% at bronze. The HPS accreditation system is feasible in school settings and was well implemented in the schools of Chandigarh. Improvements in accreditation scores between 2011 and 2013 suggest that the system may be effective in increasing levels of health promotion in communities.

  1. Application of jet-shear-layer mixing and effervescent atomization to the development of a low-NO(x) combustor. Ph.D. Thesis - Purdue Univ.

    NASA Technical Reports Server (NTRS)

    Colantonio, Renato Olaf

    1993-01-01

    An investigation was conducted to develop appropriate technologies for a low-NO(x), liquid-fueled combustor. The combustor incorporates an effervescent atomizer used to inject fuel into a premixing duct. Only a fraction of the combustion air is used in the premixing process to avoid autoignition and flashback problems. This fuel-rich mixture is introduced into the remaining combustion air by a rapid jet-shear-layer-mixing process involving radial fuel-air jets impinging on axial air jets in the primary combustion zone. Computational analysis was used to provide a better understanding of the fluid dynamics that occur in jet-shear-layer mixing and to facilitate a parametric analysis appropriate to the design of an optimum low-NO(x) combustor. A number of combustor configurations were studied to assess the key combustor technologies and to validate the modeling code. The results from the experimental testing and computational analysis indicate a low-NO(x) potential for the jet-shear-layer combustor. Key parameters found to affect NO(x) emissions are the primary combustion zone fuel-air ratio, the number of axial and radial jets, the aspect ratio and radial location of the axial air jets, and the radial jet inlet hole diameter. Each of these key parameters exhibits a low-NO(x) point from which an optimized combustor was developed. Using the parametric analysis, NO(x) emissions were reduced by a factor of 3 as compared with the emissions from conventional, liquid-fueled combustors operating at cruise conditions. Further development promises even lower NO(x) with high combustion efficiency.

  2. Hybrid Modeling of Cell Signaling and Transcriptional Reprogramming and Its Application in C. elegans Development.

    PubMed

    Fertig, Elana J; Danilova, Ludmila V; Favorov, Alexander V; Ochs, Michael F

    2011-01-01

    Modeling of signal driven transcriptional reprogramming is critical for understanding of organism development, human disease, and cell biology. Many current modeling techniques discount key features of the biological sub-systems when modeling multiscale, organism-level processes. We present a mechanistic hybrid model, GESSA, which integrates a novel pooled probabilistic Boolean network model of cell signaling and a stochastic simulation of transcription and translation responding to a diffusion model of extracellular signals. We apply the model to simulate the well studied cell fate decision process of the vulval precursor cells (VPCs) in C. elegans, using experimentally derived rate constants wherever possible and shared parameters to avoid overfitting. We demonstrate that GESSA recovers (1) the effects of varying scaffold protein concentration on signal strength, (2) amplification of signals in expression, (3) the relative external ligand concentration in a known geometry, and (4) feedback in biochemical networks. We demonstrate that setting model parameters based on wild-type and LIN-12 loss-of-function mutants in C. elegans leads to correct prediction of a wide variety of mutants including partial penetrance of phenotypes. Moreover, the model is relatively insensitive to parameters, retaining the wild-type phenotype for a wide range of cell signaling rate parameters.
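A probabilistic Boolean network of the kind used in GESSA's signaling layer updates each node by sampling one of several candidate Boolean rules. The sketch below is a generic illustration with made-up nodes and rules, not the VPC network from the paper:

```python
import random

def pbn_step(state, rules, rng):
    """One synchronous update of a probabilistic Boolean network (PBN):
    each node samples one of its candidate Boolean functions (with the
    given probabilities) and applies it to the current state."""
    new_state = {}
    for node, candidates in rules.items():
        fns = [fn for fn, _ in candidates]
        probs = [p for _, p in candidates]
        chosen = rng.choices(fns, weights=probs, k=1)[0]
        new_state[node] = chosen(state)
    return new_state

# Generic three-node toy network (NOT the C. elegans VPC network):
rules = {
    "A": [(lambda s: s["B"], 1.0)],               # A copies B
    "B": [(lambda s: s["A"] and s["C"], 0.7),     # two competing rules for B
          (lambda s: s["A"] or s["C"], 0.3)],
    "C": [(lambda s: not s["A"], 1.0)],           # C negates A
}
rng = random.Random(0)
state = pbn_step({"A": True, "B": False, "C": True}, rules, rng)
```

Pooling in the full model amounts to running many such networks per cell and averaging their outputs, which smooths the stochastic rule choices into graded signal strengths.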

  3. Microwave induced plasma for solid fuels and waste processing: A review on affecting factors and performance criteria.

    PubMed

    Ho, Guan Sem; Faizal, Hasan Mohd; Ani, Farid Nasir

    2017-11-01

    High-temperature thermal plasma has a major drawback: it consumes a large amount of energy. Non-thermal plasma, for instance microwave plasma, uses comparatively less energy and is therefore more attractive for gasification processes. Microwave-induced plasma gasification also offers advantages in terms of simplicity, compactness, light weight, uniform heating, and the ability to operate at atmospheric pressure, which has gained it attention from researchers. The present paper synthesizes the current knowledge on microwave plasma gasification of solid fuels and waste, specifically the influencing parameters and the resulting performance. The review starts with a brief outline of microwave plasma setups in general, followed by the effect of various operating parameters on the resulting output. Operating parameters including fuel characteristics, fuel injection position, microwave power, steam addition, oxygen/fuel ratio, and plasma working gas flow rate are discussed, along with several performance criteria such as resulting syngas composition, efficiency, carbon conversion, and hydrogen production rate. Based on the present review, fuel retention time is found to be the key parameter influencing gasification performance; emphasis on retention time is therefore necessary to improve the performance of microwave plasma gasification of solid fuels and wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.
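Performance criteria such as syngas quality and cold gas efficiency follow directly from the measured gas composition and yield. A sketch using textbook lower heating values and a hypothetical gas yield (the exact LHV constants vary slightly between sources):

```python
# Approximate lower heating values of syngas components, MJ/Nm^3
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}

def syngas_lhv(mole_fractions):
    """LHV of a product gas from its combustible mole fractions."""
    return sum(LHV[gas] * x for gas, x in mole_fractions.items())

def cold_gas_efficiency(gas_nm3, mole_fractions, feed_kg, feed_lhv_mj_kg):
    """Cold gas efficiency: chemical energy in syngas / energy in feedstock."""
    return gas_nm3 * syngas_lhv(mole_fractions) / (feed_kg * feed_lhv_mj_kg)

# Hypothetical run: 1.5 Nm^3 of gas per kg of feed with LHV 18 MJ/kg
cge = cold_gas_efficiency(1.5, {"H2": 0.35, "CO": 0.30, "CH4": 0.05},
                          feed_kg=1.0, feed_lhv_mj_kg=18.0)
```

Carbon conversion is computed analogously as the fraction of feed carbon that leaves in the gas phase; both ratios rise with retention time until the char is fully converted.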

  4. The evolution of concepts for soil erosion modelling

    NASA Astrophysics Data System (ADS)

    Kirkby, Mike

    2013-04-01

    From the earliest models for soil erosion, based on power laws relating sediment discharge or yield to slope length and gradient, the development of the Universal Soil Loss Equation was a natural step, although one that has long continued to hinder the development of better perceptual models for erosion processes. Key stumbling blocks have been:
    1. The failure to go through runoff generation as a key intermediary
    2. The failure to separate hydrological and strength parameters of the soil
    3. The failure to treat sediment transport along a slope as a routing problem
    4. The failure to analyse the nature of the dependence on vegetation
    Key advances have been in these directions (among others):
    1. Improved understanding of the hydrological processes (e.g. infiltration and runoff, sediment entrainment), leading to KINEROS, LISEM, WEPP, and PESERA
    2. Recognition of selective sediment transport (e.g. transport- or supply-limited removal, grain travel distances), leading e.g. to MAHLERAN
    3. Development of models adapted to particular time/space scales
    Some major remaining problems:
    1. Failure to integrate geomorphological and agronomic approaches
    2. Tillage erosion: is erosion loss of sediment or lowering of the centre of mass?
    3. Dynamic change during an event, as rills etc. form
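The USLE discussed above is a pure factor product, which is exactly why it contains no runoff term and no sediment routing. A minimal sketch with illustrative, assumed factor values:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.

    R  -- rainfall erosivity factor
    K  -- soil erodibility factor
    LS -- combined slope length and steepness factor
    C  -- cover-management factor
    P  -- support-practice factor
    Units of A depend on the factor system used (e.g. t/ha/yr).
    """
    return R * K * LS * C * P

# Illustrative (assumed) factor values for a cultivated hillslope:
A = usle_soil_loss(R=1200.0, K=0.03, LS=1.4, C=0.2, P=1.0)
```

Because the form is multiplicative, halving any single factor halves the predicted loss; there is no mechanism for thresholds, runoff generation, or downslope routing, which is the limitation the abstract identifies.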

  5. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low-excitation regime, where photon trajectories can be modeled as realizations of Markov-modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
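BIC-based model selection as described above penalizes each extra free parameter by ln(n). A minimal sketch; the log-likelihood values below are made up for illustration, and an m-state Markov-modulated Poisson process is assumed to have m² free parameters (m intensities plus m(m-1) switching rates):

```python
import math

def bic(log_likelihood, k_params, n_obs):
    """Bayesian information criterion: BIC = k ln(n) - 2 ln(L).
    The model with the lowest BIC is preferred."""
    return k_params * math.log(n_obs) - 2.0 * log_likelihood

def select_model(candidates, n_obs):
    """candidates: list of (name, maximized log-likelihood, free params)."""
    return min(candidates, key=lambda c: bic(c[1], c[2], n_obs))[0]

# Hypothetical fits of 2-, 3-, and 4-state models to 2e3 photons:
best = select_model([("2-state", -5140.0, 4),
                     ("3-state", -5105.0, 9),
                     ("4-state", -5103.0, 16)], n_obs=2000)
```

In this made-up example the four-state model fits marginally better but pays ln(2000) ≈ 7.6 per extra parameter, so the three-state model wins, mirroring how the BIC recovers the true model from short trajectories.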

  6. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses.

    PubMed

    Fuller, Robert William; Wong, Tony E; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.
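The probabilistic-inversion step described above can be sketched as importance sampling: draw parameters from a broad prior and reweight them so that the model output matches an expert-assessed distribution. The surrogate model, prior, and expert distribution below are all toy assumptions, not the paper's AIS model or elicited quantities:

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-(x - mu) ** 2 / (2.0 * sd ** 2)) / (sd * math.sqrt(2.0 * math.pi))

def probabilistic_inversion(model, expert_pdf, prior_sampler, n=50000, seed=2):
    """Importance-sampling sketch of probabilistic inversion: reweight broad
    prior samples of a parameter so the model OUTPUT matches an
    expert-assessed distribution. Returns the reweighted parameter mean."""
    rng = random.Random(seed)
    thetas = [prior_sampler(rng) for _ in range(n)]
    weights = [expert_pdf(model(t)) for t in thetas]
    wsum = sum(weights)
    return sum(w * t for w, t in zip(weights, thetas)) / wsum

# Toy AIS surrogate: sea-level contribution (m) = sensitivity * 3 K of warming
model = lambda a: 3.0 * a
# Hypothetical expert assessment of that contribution: N(0.6 m, 0.15 m)
expert = lambda s: normal_pdf(s, 0.6, 0.15)
mean_a = probabilistic_inversion(model, expert,
                                 prior_sampler=lambda r: r.uniform(0.0, 1.0))
```

The reweighted parameter distribution plays the role of the inferred expert prior, which the paper then confronts with instrumental and paleoclimatic observations in a subsequent Bayesian inversion.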

  7. An experimental study on pseudoelasticity of a NiTi-based damper for civil applications

    NASA Astrophysics Data System (ADS)

    Nespoli, Adelaide; Bassani, Enrico; Della Torre, Davide; Donnini, Riccardo; Villa, Elena; Passaretti, Francesca

    2017-10-01

    In this work, a pseudoelastic damper composed of NiTi wires is tested at 0.5, 1 and 2 Hz for 1000 mechanical cycles. The damping performance was evaluated using three key parameters: the damping capacity, the dissipated energy per cycle and the maximum force. During testing, the temperature of the pseudoelastic elements was also recorded. Results show that the damper maintains bi-directional motion throughout the 1000 cycles, together with its recentering capability. A stabilization process was observed during the first 50 mechanical cycles, after which the key parameters reach stable values; in particular, both the damping capacity and the dissipated energy decrease with frequency. Moreover, the mean temperature of the pseudoelastic elements reaches a stable value during the tests and confirms the different response of the pseudoelastic wires according to the specific length and strain. Finally, interesting thermal effects were observed at 1 and 2 Hz: at these frequencies and at high strains, the maximum force increases while the temperature of the NiTi wire decreases, in apparent contradiction with the Clausius-Clapeyron law.

  8. Nonuniform gyrotropic oscillation of skyrmion in a nanodisk

    NASA Astrophysics Data System (ADS)

    Xuan, Shengjie; Liu, Yan

    2018-04-01

    It was predicted that magnetic skyrmions have potential application in spin nano-oscillators, for which the oscillation frequency is a key parameter. In this paper, we study skyrmion relaxation in a FeGe nanodisk and find that the oscillation frequency depends on the skyrmion position. The relaxation process is associated with variation of the skyrmion diameter. By analyzing the system energy, we conclude that the nonuniform gyrotropic oscillation frequency is due to the change of the skyrmion diameter.

  9. Advanced Microwave Ferrite Research (AMFeR): Phase Two

    DTIC Science & Technology

    2006-12-31

    motion for the single crystal LPE films were a qualitative success, but a complete set of parameters for these films has not yet been achieved. Key...biasing field. In order to address these issues, we investigated and optimized a new LPE flux system to grow high quality thick films and bulk single...self-biased circulators. III. Methodology: BaM thick film and bulk single crystal growth by LPE process BaFe 120 19 flux melt was prepared from a

  10. Inferring the background traffic arrival process in the Internet.

    PubMed

    Hága, Péter; Csabai, István; Vattay, Gábor

    2009-12-01

    Phase transitions have been found in many complex interacting systems. Complex networks are no exception, but there are only a few real systems in which we can directly understand the emergence of this nontrivial behavior from the microscopic view. In this paper, we present the emergence of the phase transition between the congested and uncongested phases of a network link. We demonstrate a method to infer the background traffic arrival process, which is one of the key state parameters of Internet traffic. The traffic arrival process in the Internet has been investigated in several studies since the recognition of its self-similar nature. The statistical properties of the traffic arrival process are very important since they are fundamental in modeling the dynamical behavior. Here, we demonstrate how the widely used packet train technique can be used to determine the main properties of the traffic arrival process. We show that the packet train dispersion is sensitive to the congestion on the network path. We introduce the packet train stretch as an order parameter to describe the phase transition between the congested and uncongested phases of the bottleneck link in the path. We find that the distribution of the background traffic arrival process can be determined from the average packet train dispersion at the critical point of the system.

  11. Design of high productivity antibody capture by protein A chromatography using an integrated experimental and modeling approach.

    PubMed

    Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G

    2012-06-15

    An integrated experimental and modeling approach for the design of high productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required in order to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the best design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5 mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (or approximately 2 weeks). Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Bioreactor process parameter screening utilizing a Plackett-Burman design for a model monoclonal antibody.

    PubMed

    Agarabi, Cyrus D; Schiel, John E; Lute, Scott C; Chavez, Brittany K; Boyne, Michael T; Brorson, Kurt A; Khan, Mansoora; Read, Erik K

    2015-06-01

    Consistent high-quality antibody yield is a key goal for cell culture bioprocessing. This endpoint is typically achieved in commercial settings through product and process engineering of bioreactor parameters during development. When the process is complex and not optimized, small changes in composition and control may yield a finished product of less desirable quality. Therefore, changes proposed to currently validated processes usually require justification and are reported to the US FDA for approval. Recently, design-of-experiments-based approaches have been explored to rapidly and efficiently achieve this goal of optimized yield with a better understanding of product and process variables that affect a product's critical quality attributes. Here, we present a laboratory-scale model culture where we apply a Plackett-Burman screening design to parallel cultures to study the main effects of 11 process variables. This exercise allowed us to determine the relative importance of these variables and identify the most important factors to be further optimized in order to control both desirable and undesirable glycan profiles. We found engineering changes relating to culture temperature and nonessential amino acid supplementation significantly impacted glycan profiles associated with fucosylation, β-galactosylation, and sialylation. All of these are important for monoclonal antibody product quality. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
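
As a sketch of the screening design itself (not of the authors' bioreactor setup), the 12-run Plackett-Burman matrix for 11 two-level factors can be built from a cyclic generator; the `main_effect` helper is a hypothetical name for the usual contrast calculation:

```python
# Standard generating row for the 12-run Plackett-Burman design.
GENERATOR = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    """Build the 12 x 11 design: 11 cyclic shifts of the generator
    plus a closing row of all -1."""
    rows, row = [], list(GENERATOR)
    for _ in range(11):
        rows.append(list(row))
        row = [row[-1]] + row[:-1]  # right cyclic shift
    rows.append([-1] * 11)
    return rows

def main_effect(design, responses, j):
    """Main effect of factor j: mean response at +1 minus mean at -1."""
    hi = [r for row, r in zip(design, responses) if row[j] == +1]
    lo = [r for row, r in zip(design, responses) if row[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

design = plackett_burman_12()
```

Each column is balanced (six +1, six -1) and every pair of columns is orthogonal, which is what allows 11 main effects to be screened in only 12 runs.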

  13. Reverberation index: a novel metric by which to quantify the impact of a scientific entity on a given field.

    PubMed

    Kathleen Bandt, S; Dacey, Ralph G

    2017-09-01

    The authors propose a novel bibliometric index, the reverberation index (r-index), as a comparative assessment tool for use in determining differential reverberation between scientific fields for a given scientific entity. Alternatively, this may allow comparison of 2 similar scientific entities within a single scientific field. This index is calculated using a relatively simple 3-step process. Briefly, Thomson Reuters' Web of Science is used to produce a citation report for a unique search parameter (this may be an author, journal article, or topical key word). From this citation report, a list of citing journals is retrieved, from which a weighted ratio of citation patterns across journals can be calculated. This r-index is then used to compare the reverberation of the original search parameter across different fields of study or wherever a comparison is required. The advantage of this novel tool is its ability to transcend a specific component of the scientific process. This affords application to a diverse range of entities, including an author, a journal article, or a topical key word, for effective comparison of that entity's reverberation within a scientific arena. The authors introduce the context for and applications of the r-index, emphasizing neurosurgical topics and journals for illustration purposes. It should be kept in mind, however, that the r-index is readily applicable across all fields of study.

  14. Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Soni, P. K.; Krishna, C. M.

    2018-04-01

    The machining of Al3030 based composites on Computer Numerical Control (CNC) high speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed in mm/min, depth of cut and step over ratio. Many researchers have carried out research in this area, but very few have also taken step over ratio (radial depth of cut) as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step over ratio, depth of cut and feed rate are the other input variables taken into consideration. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out using a flat end mill of 10 mm diameter. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software and the results are validated by conducting a selected additional set of experiments. Selection of input process parameters to obtain the best machining outputs is the key contribution of this research work.
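
The Taguchi L9 array used above is a standard 9-run layout for up to four three-level factors; the sketch below builds the array and the usual level-mean table for main-effects analysis (the responses fed to it would be the measured flatness, MRR or TWR values):

```python
# Standard Taguchi L9 orthogonal array: 9 runs, 4 columns, 3 levels (0, 1, 2).
# Every pair of columns contains each of the 9 level combinations exactly once.
L9 = [
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
]

def level_means(array, responses, column):
    """Mean response at each level of one factor (main-effects analysis)."""
    means = []
    for level in (0, 1, 2):
        vals = [r for row, r in zip(array, responses) if row[column] == level]
        means.append(sum(vals) / len(vals))
    return means
```

Because the array is orthogonal, each factor's three level means are each averaged over a balanced mix of the other factors' levels, which is what lets 9 runs stand in for the full 3⁴ = 81 combinations.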

  15. Dynamic analysis of process reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadle, L.J.; Lawson, L.O.; Noel, S.D.

    1995-06-01

    The approach and methodology of conducting a dynamic analysis is presented in this poster session in order to describe how this type of analysis can be used to evaluate the operation and control of process reactors. Dynamic analysis of the PyGas{trademark} gasification process is used to illustrate the utility of this approach. PyGas{trademark} is the gasifier being developed for the Gasification Product Improvement Facility (GPIF) by Jacobs-Siffine Engineering and Riley Stoker. In the first step of the analysis, process models are used to calculate the steady-state conditions and associated sensitivities for the process. For the PyGas{trademark} gasifier, the process models are non-linear mechanistic models of the jetting fluidized-bed pyrolyzer and the fixed-bed gasifier. These process sensitivities are key input, in the form of gain parameters or transfer functions, to the dynamic engineering models.

  16. Calculations of key magnetospheric parameters using the isotropic and anisotropic SPSU global MHD code

    NASA Astrophysics Data System (ADS)

    Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor

    2017-04-01

    As it was recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and on the terminator plane, the magnetic field in the magnetotail lobe, the plasma sheet thermal pressure, the cross polar cap electric potential drop, and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained from both empirical models and global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code SPSU-16 can solve both the isotropic and anisotropic MHD equations. In the anisotropic version, we use the modified double-adiabatic equations in which T⊥/T∥ (the ratio of perpendicular to parallel thermal pressure) is bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The validation results for the SPSU-16 code agree well with previously published results of other global codes. Some key parameters coincide in the isotropic and anisotropic MHD simulations, while others differ.

  17. New evaluation parameter for wearable thermoelectric generators

    NASA Astrophysics Data System (ADS)

    Wijethunge, Dimuthu; Kim, Woochul

    2018-04-01

    Wearable devices constitute a key application area for thermoelectric devices. However, owing to new constraints in wearable applications, a few conventional device optimization techniques are not appropriate, and material evaluation parameters such as the figure of merit (zT) and power factor (PF) tend to be inadequate. We illustrated the incompleteness of zT and PF by performing simulations and considering different thermoelectric materials. The results indicate a weak correlation between device performance and zT and PF. In this study, we propose a new evaluation parameter, zT_wearable, which is better suited for wearable applications than conventional zT. Owing to size restrictions, gap-filler-based device optimization is extremely critical in wearable devices. For the cases in which gap fillers are used, expressions for power, effective thermal conductivity (k_eff), and optimum load electrical ratio (m_opt) are derived. According to the new parameters, the thermal conductivity of the material becomes much more critical. The proposed evaluation parameter, zT_wearable, is extremely useful for selecting an appropriate thermoelectric material among various candidates prior to the commencement of the actual design process.

  18. Rheological constraints on ridge formation on Icy Satellites

    NASA Astrophysics Data System (ADS)

    Rudolph, M. L.; Manga, M.

    2010-12-01

    The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, Ḋ = B⟨σ⟩^r (1 − D)^(−k) − αD p/μ, and in the equation relating damage accumulation to volumetric changes, Jρ₀ = δ(1 − D). Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.

  19. Key parameters governing the densification of cubic-Li7La3Zr2O12 Li+ conductors

    NASA Astrophysics Data System (ADS)

    Yi, Eongyu; Wang, Weimin; Kieffer, John; Laine, Richard M.

    2017-06-01

    Cubic-Li7La3Zr2O12 (LLZO) is regarded as one of the most promising solid electrolytes for the construction of inherently safe, next generation all-solid-state Li batteries. Unfortunately, sintering these materials to full density with controlled grain sizes, mechanical and electrochemical properties relies on energy and equipment intensive processes. In this work, we elucidate key parameters dictating LLZO densification by tracing the compositional and structural changes during processing calcined and ball-milled Al3+ doped LLZO powders. We find that the powders undergo ion (Li+/H+) exchange during room temperature processing, such that on heating, the protonated LLZO lattice collapses and crystallizes to its constituent oxides, leading to reaction driven densification at < 1000 °C, prior to sintering of LLZO grains at higher temperatures. It is shown that small particle sizes and protonation cannot be decoupled, and actually aid densification. We conclude that using fully decomposed nanoparticle mixtures, as obtained by liquid-feed flame spray pyrolysis, provides an ideal approach to use high surface and reaction energy to drive densification, resulting in pressureless sintering of Ga3+ doped LLZO thin films (25 μm) at 1130 °C/0.3 h to ideal microstructures (95 ± 1% density, 1.2 ± 0.2 μm average grain size) normally accessible only by pressure-assisted sintering. Such films offer both high ionic conductivity (1.3 ± 0.1 mS cm⁻¹) and record low ionic area specific resistance (2 Ω cm²).

  20. Cathodal Transcranial Direct Current Stimulation (tDCS) to the Right Cerebellar Hemisphere Affects Motor Adaptation During Gait.

    PubMed

    Fernandez, Lara; Albein-Urios, Natalia; Kirkovski, Melissa; McGinley, Jennifer L; Murphy, Anna T; Hyde, Christian; Stokes, Mark A; Rinehart, Nicole J; Enticott, Peter G

    2017-02-01

    The cerebellum appears to play a key role in the development of internal rules that allow fast, predictive adjustments to novel stimuli. This is crucial for adaptive motor processes, such as those involved in walking, where cerebellar dysfunction has been found to increase variability in gait parameters. Motor adaptation is a process that results in a progressive reduction in errors as movements are adjusted to meet demands, and within the cerebellum, this seems to be localised primarily within the right hemisphere. To examine the role of the right cerebellar hemisphere in adaptive gait, cathodal transcranial direct current stimulation (tDCS) was administered to the right cerebellar hemisphere of 14 healthy adults in a randomised, double-blind, crossover study. Adaptation to a series of distinct spatial and temporal templates was assessed across tDCS condition via a pressure-sensitive gait mat (ProtoKinetics Zeno walkway), on which participants walked with an induced 'limp' at a non-preferred pace. Variability was assessed across key spatial-temporal gait parameters. It was hypothesised that cathodal tDCS to the right cerebellar hemisphere would disrupt adaptation to the templates, reflected in a failure to reduce variability following stimulation. In partial support, adaptation was disrupted following tDCS on one of the four spatial-temporal templates used. However, there was no evidence for general effects on either the spatial or temporal domain. This suggests, under specific conditions, a coupling of spatial and temporal processing in the right cerebellar hemisphere and highlights the potential importance of task complexity in cerebellar function.

  1. Comparative Model Evaluation Studies of Biogenic Trace Gas Fluxes in Tropical Forests

    NASA Technical Reports Server (NTRS)

    Potter, C. S.; Peterson, David L. (Technical Monitor)

    1997-01-01

    Simulation modeling can play a number of important roles in large-scale ecosystem studies, including synthesis of patterns and changes in carbon and nutrient cycling dynamics, scaling up to regional estimates, and formulation of testable hypotheses for process studies. Recent comparative studies have shown that ecosystem models of soil trace gas exchange with the atmosphere are evolving into several distinct simulation approaches. Different levels of detail exist among process models in the treatment of physical controls on ecosystem nutrient fluxes and organic substrate transformations leading to gas emissions. These differences arise in part from distinct objectives of scaling and extrapolation. Parameter requirements for initialization, scaling, boundary conditions, and time-series drivers therefore vary among ecosystem simulation models, such that the design of field experiments for integration with modeling should consider a consolidated series of measurements that will satisfy most of the various model requirements. For example, variables that provide information on soil moisture holding capacity, moisture retention characteristics, potential evapotranspiration and drainage rates, and rooting depth appear to be of the first order in model evaluation trials for tropical moist forest ecosystems. The amount and nutrient content of labile organic matter in the soil, based on accurate plant production estimates, are also key parameters that determine emission model response. Based on comparative model results, it is possible to construct a preliminary evaluation matrix along categories of key diagnostic parameters and temporal domains. Nevertheless, as large-scale studies are planned, it is notable that few existing models are designed to simulate transient states of ecosystem change, a feature which will be essential for assessment of anthropogenic disturbance on regional gas budgets and of the effects of long-term climate variability on biosphere-atmosphere exchange.

  2. Rare behavior of growth processes via umbrella sampling of trajectories

    NASA Astrophysics Data System (ADS)

    Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen

    2018-03-01

    We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.

  3. Development and kinetic analysis of cobalt gradient formation in WC-Co composites

    NASA Astrophysics Data System (ADS)

    Guo, Jun

    2011-12-01

    Functionally graded cemented tungsten carbide (FG WC-Co) has been one of the main research directions in the field of WC-Co for decades. Although it has long been recognized that FG WC-Co could outperform conventional homogeneous WC-Co owing to its potentially superior combinations of mechanical properties, until recently there has been a lack of effective and economical methods to make such materials. The lack of the technology has prevented the manufacturing and industrial applications of FG WC-Co from becoming a reality. This dissertation is a comprehensive study of an innovative atmosphere heat treatment process for producing FG WC-Co with a surface cobalt compositional gradient. The process exploits a triple-phase field in the W-C-Co phase diagram among three phases (solid WC, solid Co, and liquid Co) and the dependence of the migration of liquid Co on temperature and carbon content. WC-Co with a graded surface cobalt composition can be achieved by controlling the diffusion of carbon transported from the atmosphere during sintering or during post-sintering heat treatment. The feasibility of the process was validated by the successful preparation of FG WC-Co via both carburization and decarburization processes following conventional liquid phase sintering. A study of the carburization process was undertaken to further understand and quantitatively model this process. The effects of key processing parameters (including heat treating temperature, atmosphere, and time) and key materials variables (involving Co content, WC grain size, and addition of grain growth inhibitors) on the formation of Co gradients were examined. Moreover, a carbon-diffusion controlled kinetic model was developed for simulating the formation of the gradient during the process. The parameters involved in this model were determined by thermodynamic calculations and regression fits of simulation results to experimental data. In summary, this research first demonstrated the principle of the approach.
Second, a model was developed to predict the gradients produced by the carbon-controlled atmosphere heat treatment process, which is useful for manufacturing WC-Co with designed gradients. FG WC-Co materials produced using this method are expected to exhibit superior performance in many applications and to have a profound impact on the manufacturing industries that use tungsten carbide tools.

  4. Elucidation and visualization of solid-state transformation and mixing in a pharmaceutical mini hot melt extrusion process using in-line Raman spectroscopy.

    PubMed

    Van Renterghem, Jeroen; Kumar, Ashish; Vervaet, Chris; Remon, Jean Paul; Nopens, Ingmar; Vander Heyden, Yvan; De Beer, Thomas

    2017-01-30

    Mixing of raw materials (drug+polymer) in the investigated mini pharma melt extruder is achieved by using co-rotating conical twin screws and an internal recirculation channel. In-line Raman spectroscopy was implemented in the barrels, allowing monitoring of the melt during processing. The aim of this study was twofold: to investigate (I) the influence of key process parameters (screw speed - barrel temperature) upon the product solid-state transformation during processing of a sustained release formulation in recirculation mode; (II) the influence of process parameters (screw speed - barrel temperature - recirculation time) upon mixing of a crystalline drug (tracer) in an amorphous polymer carrier by means of residence time distribution (RTD) measurements. The results indicated a faster mixing endpoint with increasing screw speed. Processing a high drug load formulation above the drug melting temperature resulted in the production of amorphous drug whereas processing below the drug melting point produced solid dispersions with partially amorphous/crystalline drug. Furthermore, increasing the screw speed resulted in lower drug crystallinity of the solid dispersion. RTD measurements elucidated the improved mixing capacity when using the recirculation channel. In-line Raman spectroscopy has shown to be an adequate PAT-tool for product solid-state monitoring and elucidation of the mixing behavior during processing in a mini extruder. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Powder properties and compaction parameters that influence punch sticking propensity of pharmaceuticals.

    PubMed

    Paul, Shubhajit; Taylor, Lisa J; Murphy, Brendan; Krzyzaniak, Joseph F; Dawson, Neil; Mullarney, Matthew P; Meenan, Paul; Sun, Changquan Calvin

    2017-04-15

    Punch sticking is a frequently occurring problem that challenges successful tablet manufacturing. A mechanistic understanding of the punch sticking phenomenon facilitates the design of effective strategies to solve punch sticking problems of a drug. The first step in this effort is to identify process parameters and particle properties that can profoundly affect sticking performance. This work was aimed at elucidating the key material properties and compaction parameters that influence punch sticking by statistically analyzing punch sticking data of 24 chemically diverse compounds obtained using a set of tooling with removable upper punch tip. Partial least square (PLS) analysis of the data revealed that particle surface area and tablet tensile strength are the most significant factors attributed to punch sticking. Die-wall pressure, ejection force, and take-off force also correlate with sticking, but to a lesser extent. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time; this constitutes a first step toward automating mapping processes such as geometric correction, orthophoto creation, and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored over different images and parameter values, finding optimal values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
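
The parameter optimization step can be sketched as a plain exhaustive grid search; the parameter names below loosely echo OpenCV's SIFT options but are assumptions, and the scoring function stands in for counting validated tie-point matches:

```python
import itertools

# Hypothetical search grid; names and ranges are illustrative, not the
# five parameters or values used in the paper.
grid = {
    "n_octave_layers": [3, 4, 5],
    "contrast_threshold": [0.02, 0.04, 0.08],
    "edge_threshold": [5, 10, 20],
}

def tune(score, grid):
    """Exhaustive grid search: return the highest-scoring parameter set.

    `score` stands in for a function that runs the detector with the given
    parameters on a validation image pair and counts correct matches.
    """
    names = sorted(grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score
```

Validating the winning parameter set on imagery held out from the search, as the abstract describes, guards against tuning to one image pair.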

  7. TRIP-ID: A tool for a smart and interactive identification of Magic Formula tyre model parameters from experimental data acquired on track or test rig

    NASA Astrophysics Data System (ADS)

    Farroni, Flavio; Lamberti, Raffaele; Mancinelli, Nicolò; Timpone, Francesco

    2018-03-01

    Tyres play a key role in ground vehicle dynamics because they are responsible for traction, braking and cornering. A proper tyre-road interaction model is essential for a useful and reliable vehicle dynamics model. In the last two decades, Pacejka's Magic Formula (MF) has become a standard in the simulation field. This paper presents a tool, called TRIP-ID (Tyre Road Interaction Parameters IDentification), developed to characterize and identify MF micro-parameters with a high degree of accuracy and reliability from experimental data deriving from telemetry or from a test rig. The tool interactively guides the user through the identification process on the basis of strong diagnostic considerations about the experimental data, made evident by the tool itself. A motorsport application of the tool is shown as a case study.

  8. Ultra-porous titanium oxide scaffold with high compressive strength

    PubMed Central

    Tiainen, Hanna; Lyngstadaas, S. Petter; Ellingsen, Jan Eirik

    2010-01-01

    Highly porous and well interconnected titanium dioxide (TiO2) scaffolds with compressive strength above 2.5 MPa were fabricated without compromising the desired pore architectural characteristics, such as high porosity, appropriate pore size, surface-to-volume ratio, and interconnectivity. Processing parameters and pore architectural characteristics were investigated in order to identify the key processing steps and morphological properties that contributed to the enhanced strength of the scaffolds. Cleaning of the TiO2 raw powder removed phosphates but introduced sodium into the powder, which was suggested to decrease the slurry stability. A strong correlation was found between compressive strength and both the number of replication cycles and the solid content of the ceramic slurry. An increase in the solid content resulted in more favourable sponge loading, which was achieved owing to the more suitable rheological properties of the ceramic slurry. The repeated replication process induced only negligible changes in the pore architectural parameters, indicating a reduced flaw size in the scaffold struts. The fabricated TiO2 scaffolds show great promise as load-bearing bone scaffolds for applications where moderate mechanical support is required. PMID:20711636

  9. On the derivation of a simple dynamic model of anaerobic digestion including the evolution of hydrogen.

    PubMed

    Giovannini, Giannina; Sbarciog, Mihaela; Steyer, Jean-Philippe; Chamy, Rolando; Vande Wouwer, Alain

    2018-05-01

    Hydrogen has been found to be an important intermediate during anaerobic digestion (AD) and a key variable for process monitoring, as it gives valuable information about the stability of the reactor. However, simple dynamic models describing the evolution of hydrogen are not commonplace. In this work, such a dynamic model is derived using a systematic data-driven approach, which consists of a principal component analysis to deduce the dimension of the minimal reaction subspace explaining the data, followed by an identification of the kinetic parameters in the least-squares sense. The procedure requires the availability of informative data sets. When the available data do not fulfill this condition, the model can still be built from simulated data, obtained using a detailed model such as ADM1. This dynamic model could be exploited in monitoring and control applications after a re-identification of the parameters using actual process data. As an example, the model is used in the framework of a control strategy, and is also fitted to experimental data from raw industrial wine processing wastewater.
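
    The first stage of such a data-driven derivation, using principal components to deduce the dimension of the minimal reaction subspace, can be sketched as follows. The species, stoichiometry and reaction extents below are hypothetical stand-ins for real AD measurements, not taken from the paper:

```python
import numpy as np

# Hypothetical concentration measurements: rows = time samples,
# columns = species (e.g. substrate, VFA, biomass, dissolved H2).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
# Two underlying reactions with made-up fixed stoichiometry over 4 species
stoich = np.array([[-1.0, 0.8, 0.1, 0.05],
                   [0.0, -1.0, 0.3, 0.10]])
# Reaction extents over time (one saturating, one roughly linear)
extents = np.column_stack([1.0 - np.exp(-0.8 * t), t / 10.0])
C = extents @ stoich + rng.normal(0.0, 1e-3, (t.size, 4))

# PCA on the mean-centred data: the number of singular values above the
# noise floor is the dimension of the minimal reaction subspace.
s = np.linalg.svd(C - C.mean(axis=0), compute_uv=False)
n_reactions = int(np.sum(s > 0.01 * s[0]))
```

    The kinetic parameters of the retained reactions would then be identified in the least-squares sense against the same data, as the abstract describes.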

  10. A low-cost fabrication method for sub-millimeter wave GaAs Schottky diode

    NASA Astrophysics Data System (ADS)

    Jenabi, Sarvenaz; Deslandes, Dominic; Boone, Francois; Charlebois, Serge A.

    2017-10-01

    In this paper, a submillimeter-wave Schottky diode is designed and simulated. The effect of the Schottky layer thickness on the cut-off frequency is studied. A novel microfabrication process is proposed and implemented. The presented microfabrication process avoids electron-beam (e-beam) lithography, which reduces the cost. This process also provides more flexibility in the selection of design parameters and allows a significant reduction in the device parasitic capacitance. A key feature of the process is that the Schottky contact, the air-bridges, and the transmission lines are fabricated in a single lift-off step. The process relies on a planarization method that is suitable for trenches 1-10 μm deep and is tolerant to end-point variations. The fabricated diode is measured and the results are compared with simulations. Very good agreement between simulation and measurement results is observed.

  11. Practical Use of Operation Data in the Process Industry

    NASA Astrophysics Data System (ADS)

    Kano, Manabu

    This paper aims to reveal real problems in the process industry and to introduce recent developments that address them from the viewpoint of the effective use of operation data. Two topics are discussed: virtual sensors and process control. First, in order to clarify the present state and its problems, part of our recent questionnaire survey on process control is quoted. It is emphasized that maintenance is a key issue not only for soft-sensors but also for controllers. Then, new techniques are explained. The first is correlation-based just-in-time modeling (CoJIT), which can achieve higher prediction performance than conventional methods and simplify model maintenance. The second is extended fictitious reference iterative tuning (E-FRIT), which can realize data-driven PID control parameter tuning without process modeling. The great usefulness of these techniques is demonstrated through their industrial applications.
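
    The just-in-time (database-driven) soft-sensor idea behind CoJIT can be sketched generically: build a small local model around each query from the nearest stored samples. Everything below (features, neighbour count, plain distance-based selection) is an illustrative simplification; CoJIT itself additionally selects samples using the correlation structure around the query:

```python
import numpy as np

def jit_predict(X, y, x_query, k=15):
    """Just-in-time soft sensor sketch: fit a local linear model on the
    k stored samples nearest to the query, then predict once and discard
    the model (lazy learning)."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]
    A = np.column_stack([np.ones(k), X[idx]])      # intercept + features
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef[0] + x_query @ coef[1:]

# Demo on a nonlinear process: local linear models track y = x0^2 + x1
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, (500, 2))
y = X[:, 0] ** 2 + X[:, 1]
pred = jit_predict(X, y, np.array([0.3, -0.2]))    # true value is -0.11
```

    Because each prediction rebuilds its model from the current database, updating the database updates the sensor, which is why maintenance is simpler than for a fixed global model.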

  12. A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System

    PubMed Central

    Wu, Xiangjun; Li, Yang; Kurths, Jürgen

    2015-01-01

    Chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the positions of the pixels in the whole image are shuffled. In order to generate the initial conditions and parameters of the two chaotic systems, a 280-bit-long external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are used to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast enough for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. The corresponding results reveal that the proposed image encryption method has good robustness against such image processing operations and geometric attacks. PMID:25826602
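
    The permutation-diffusion structure the paper builds on can be illustrated with a deliberately simplified sketch: a single logistic map stands in for the CML and the fractional-order system, and the division-shuffling stage is omitted. None of this reproduces the paper's actual cipher:

```python
import numpy as np

def logistic_stream(x0, n, mu=3.99):
    """Chaotic keystream from the logistic map (a stand-in here for the
    paper's CML / fractional-order chaotic system)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.3456):
    flat = img.flatten().astype(np.uint8)
    n = flat.size
    # Permutation stage: shuffle pixel positions with a chaos-derived order
    perm = np.argsort(logistic_stream(key, n))
    shuffled = flat[perm]
    # Diffusion stage: XOR with a chaos-derived byte stream
    stream = (logistic_stream(key / 2, n) * 256).astype(np.uint8)
    return (shuffled ^ stream).reshape(img.shape), perm

def decrypt(cipher, perm, key=0.3456):
    flat = cipher.flatten()
    n = flat.size
    stream = (logistic_stream(key / 2, n) * 256).astype(np.uint8)
    shuffled = flat ^ stream           # undo diffusion
    out = np.empty(n, dtype=np.uint8)
    out[perm] = shuffled               # invert the permutation
    return out.reshape(cipher.shape)
```

    Decryption reverses the diffusion XOR and then inverts the permutation; the chaotic maps' sensitivity to the key is what makes a brute-force key search impractical.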

  13. Morphological effects on the selectivity of intramolecular versus intermolecular catalytic reaction on Au nanoparticles.

    PubMed

    Wang, Dan; Sun, Yuanmiao; Sun, Yinghui; Huang, Jing; Liang, Zhiqiang; Li, Shuzhou; Jiang, Lin

    2017-06-14

    It is difficult for metal nanoparticle catalysts to control the selectivity of a catalytic reaction in a simple process. In this work, we obtain active Au nanoparticle catalysts with high selectivity for the hydrogenation reaction of aromatic nitro compounds by simply employing spine-like Au nanoparticles. Density functional theory (DFT) calculations further elucidate that the morphological effect on thermal selectivity control is a key internal parameter modulating the nitro hydrogenation process on the surface of the Au spines. These results show that controlled morphological effects may play an important role in catalytic reactions of noble metal NPs with high selectivity.

  14. High contrast laser marking of alumina

    NASA Astrophysics Data System (ADS)

    Penide, J.; Quintero, F.; Riveiro, A.; Fernández, A.; del Val, J.; Comesaña, R.; Lusquiños, F.; Pou, J.

    2015-05-01

    Alumina serves as the raw material for a broad range of advanced ceramic products. These elements usually need to be identified by characters or symbols printed directly on them. In this sense, laser marking is an efficient, reliable and widely implemented process in industry. However, laser marking of alumina still leads to poor results, since the process is not able to produce a dark mark, yielding low contrast. In this paper, we present an experimental study of the process of marking alumina with three different lasers operating at two wavelengths: 1064 nm (near-infrared) and 532 nm (visible, green radiation). A colorimetric analysis was carried out in order to compare the resulting marks and their contrast. The most suitable laser operating conditions were also defined and are reported here. Moreover, the physical process of marking with NIR lasers is discussed in detail. Field Emission Scanning Electron Microscopy, High Resolution Transmission Electron Microscopy and X-ray Photoelectron Spectroscopy were also employed to analyze the results. Finally, we propose an explanation for the differences in the coloration induced under different atmospheres and laser parameters. We conclude that the atmosphere is the key parameter, with an inert atmosphere being the best choice to produce the darkest marks.

  15. The impact of radiation belts region on top side ionosphere condition during last solar minimum.

    NASA Astrophysics Data System (ADS)

    Rothkaehl, Hanna; Przepiórka, Dorota; Matyjasiak, Barbara

    2014-05-01

    The wave-particle interactions in the radiation belts region are among the key factors in understanding the global physical processes which govern the near-Earth environment. The populations of outer radiation belt electrons increase in response to changes in the solar wind and the interplanetary magnetic field, and decrease as a result of scattering into the loss cone and subsequent absorption by the atmosphere. The most important question in understanding the physical processes in the radiation belts region is estimating the ratio between acceleration and loss processes. This can also be very useful for constructing adequate models adopted in Space Weather programs. Moreover, wave-particle interactions in the inner and outer radiation zones have a significant influence on space plasma properties at ionospheric altitudes. The aim of this presentation is to show the manifestation of the radiation belts region in the topside ionosphere during the last long solar minimum. The presentation of longitudinal and seasonal changes of plasma parameters affected by processes occurring in the radiation belts region has been performed on the basis of DEMETER and COSMIC-3 satellite registrations. This research is partly supported by grant O N517 418440

  16. Phosphatidylcholine Membrane Fusion Is pH-Dependent.

    PubMed

    Akimov, Sergey A; Polynkin, Michael A; Jiménez-Munguía, Irene; Pavlov, Konstantin V; Batishchev, Oleg V

    2018-05-03

    Membrane fusion mediates multiple vital processes in cell life. Specialized proteins mediate the fusion process, and a substantial part of their energy is used for the topological rearrangement of the membrane lipid matrix. Therefore, the elastic parameters of lipid bilayers are of crucial importance for fusion processes and for determination of the energy barriers that have to be crossed for the process to take place. In the case of fusion of enveloped viruses (e.g., influenza) with the endosomal membrane, the interacting membranes are in an acidic environment, which can affect the membrane's mechanical properties. This factor is often neglected in the analysis of virus-induced membrane fusion. In the present work, we demonstrate that even for membranes composed of zwitterionic lipids, changes of the environmental pH in the physiologically relevant range of 4.0 to 7.5 can notably affect the rate of membrane fusion. Using a continuum model, we demonstrated that the key factor defining the height of the energy barrier is the spontaneous curvature of the lipid monolayer. Changes of this parameter are likely to be caused by rearrangements of the polar part of lipid molecules in response to changes of the pH of the aqueous solution bathing the membrane.

  17. Numerical modeling of laser assisted tape winding process

    NASA Astrophysics Data System (ADS)

    Zaami, Amin; Baran, Ismet; Akkerman, Remko

    2017-10-01

    Laser assisted tape winding (LATW) has become an increasingly popular way of producing new thermoplastic products such as ultra-deep-water risers, gas tanks, and structural parts for aerospace applications. Predicting the temperature in LATW has been of great interest, since the temperature at the nip point plays a key role in the mechanical interface performance. Modeling the LATW process involves several challenges, such as the interaction of optics and heat transfer. In the current study, numerical modeling of the optical behavior of laser radiation on circular surfaces is investigated based on a ray tracing and non-specular reflection model. The non-specular reflection is implemented considering the anisotropic reflective behavior of the fiber-reinforced thermoplastic tape, using a bidirectional reflectance distribution function (BRDF). The proposed model includes a three-dimensional circular geometry, in which the effects of reflection from different ranges of the circular surface, as well as the effect of process parameters on the temperature distribution, are studied. The heat transfer model is constructed using a fully implicit method. The effect of process parameters on the nip-point temperature is examined. Furthermore, several laser distributions, including Gaussian and linear, are examined, which have not been considered in the literature up to now.

  18. An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves

    PubMed Central

    Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing

    2014-01-01

    Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes the Julia sets' parameters to generate a random sequence as the initial keys and obtains the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modular arithmetic and a diffusion operation. In this method, only a few parameters are needed for key generation, which greatly reduces the storage space. Moreover, because of the Julia sets' properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has a large key space, good statistical properties, high key sensitivity, and effective resistance to the chosen-plaintext attack. PMID:24404181
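
    The key-generation idea (derive a byte stream from the orbit of the Julia map z -> z^2 + c, so that keys depend sensitively on a handful of stored parameters) can be sketched as below. The constants, the re-seeding guard and the byte-extraction rule are illustrative choices, and the Hilbert-curve scrambling stage is omitted:

```python
import numpy as np

def julia_keys(c, z0, n):
    """Byte sequence from the orbit of z -> z^2 + c. Chaotic sensitivity
    means a tiny change in c or z0 yields a completely different stream."""
    z = z0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        z = z * z + c
        if abs(z) > 2:                 # keep the orbit bounded: re-seed
            z = z / abs(z) * 0.5       # back inside the disc
        out[i] = int((abs(z.real) * 1e6) % 256)
    return out

# Key sensitivity: perturbing c by 1e-6 changes the whole stream
k1 = julia_keys(complex(-0.8, 0.156), complex(0.1, 0.1), 32)
k2 = julia_keys(complex(-0.800001, 0.156), complex(0.1, 0.1), 32)
```

    Only c, z0 and the length need storing, which is the storage advantage the abstract points to; the scrambling of these initial keys along a Hilbert curve is a separate step not shown here.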

  19. A no-key-exchange secure image sharing scheme based on Shamir's three-pass cryptography protocol and the multiple-parameter fractional Fourier transform.

    PubMed

    Lang, Jun

    2012-01-30

    In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
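
    Shamir's three-pass idea (exchange a secret with no prior key distribution, because the two parties' "locks" commute) is classically instantiated with modular exponentiation over a prime field. The sketch below shows that number-theoretic version, not the paper's MPFRFT-based image variant:

```python
import random

# Each party holds a private exponent pair (e, d) with e*d = 1 (mod p-1);
# modular exponentiation commutes, so no keys are exchanged in advance.
p = 2**127 - 1                      # a Mersenne prime

def keypair(p):
    while True:
        e = random.randrange(3, p - 1, 2)
        try:
            d = pow(e, -1, p - 1)   # modular inverse (Python 3.8+)
            return e, d
        except ValueError:
            continue                # e not invertible mod p-1: retry

eA, dA = keypair(p)                 # Alice's private pair
eB, dB = keypair(p)                 # Bob's private pair
m = 123456789                       # the secret (an image block, say)

pass1 = pow(m, eA, p)               # Alice -> Bob:   m^eA
pass2 = pow(pass1, eB, p)           # Bob -> Alice:   m^(eA*eB)
pass3 = pow(pass2, dA, p)           # Alice -> Bob:   m^eB (her lock removed)
recovered = pow(pass3, dB, p)       # Bob removes his lock and reads m
```

    In the paper, the commuting "exponentiations" are replaced by MPFRFT transforms whose fractional orders act as the private keys, but the three-pass message flow is the same.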

  20. An Asymmetric Image Encryption Based on Phase Truncated Hybrid Transform

    NASA Astrophysics Data System (ADS)

    Khurana, Mehak; Singh, Hukum

    2017-09-01

    To enhance the security of the system and to protect it from attackers, this paper proposes a new asymmetric cryptosystem based on a hybrid approach of Phase Truncated Fourier and Discrete Cosine Transform (PTFDCT), which adds nonlinearity by including cube and cube-root operations in the encryption and decryption paths, respectively. In this cryptosystem, random phase masks are used as encryption keys, the phase masks generated after the cube operation in the encryption process are reserved as decryption keys, and the cube-root operation is required to decrypt the image in the decryption process. The cube and cube-root operations introduced in the encryption and decryption paths make the system resistant to standard attacks. The robustness of the proposed cryptosystem has been analysed and verified on the basis of various parameters by simulation in MATLAB 7.9.0 (R2008a). Experimental results are provided to highlight the effectiveness and suitability of the proposed cryptosystem and show that the system is secure.

  1. Manufacture of poly(methyl methacrylate) microspheres using membrane emulsification

    PubMed Central

    Bux, Jaiyana; Manga, Mohamed S.; Hunter, Timothy N.

    2016-01-01

    Accurate control of particle size at relatively narrow polydispersity remains a key challenge in the production of synthetic polymer particles at scale. A cross-flow membrane emulsification (XME) technique was used here in the preparation of poly(methyl methacrylate) microspheres at a 1–10 l h−1 scale, to demonstrate its application for such a manufacturing challenge. XME technology has previously been shown to provide good control over emulsion droplet sizes with careful choice of the operating conditions. We demonstrate here that, for an appropriate formulation, equivalent control can be gained for a precursor emulsion in a batch suspension polymerization process. We report here the influence of key parameters on the emulsification process; we also demonstrate the close correlation in size between the precursor emulsion and the final polymer particles. Two types of polymer particle were produced in this work: a solid microsphere and an oil-filled matrix microcapsule. This article is part of the themed issue ‘Soft interfacial materials: from fundamentals to formulation’. PMID:27298430

  2. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations.
    Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be concluded that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users in defining a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
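
    The dummy-parameter idea can be demonstrated on a toy model with a deliberately crude first-order sensitivity estimator (binned conditional variance, standing in for the Sobol' and PAWN estimators used in the paper). The model, sample sizes and bin count are illustrative:

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Crude first-order index: Var_x(E[y|x]) / Var(y), estimated by
    binning the input. Even for an unused input it returns a small
    non-zero value: the numerical-approximation error."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

rng = np.random.default_rng(3)
n = 20000
X = rng.uniform(0.0, 1.0, (n, 3))               # two real parameters + one dummy
y = 4.0 * X[:, 0] + np.sin(2 * np.pi * X[:, 1])  # dummy column X[:, 2] unused

S = [first_order_index(X[:, j], y) for j in range(3)]
threshold = S[2]          # the dummy's index = the numerical noise level
influential = [j for j in range(2) if S[j] > threshold]
```

    Parameters whose indices exceed the dummy's index are kept for calibration; the rest are within numerical error and can be fixed, exactly the screening logic the abstract describes.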

  3. Process stability and morphology optimization of very thick 4H-SiC epitaxial layers grown by chloride-based CVD

    NASA Astrophysics Data System (ADS)

    Yazdanfar, M.; Stenberg, P.; Booker, I. D.; Ivanov, I. G.; Kordina, O.; Pedersen, H.; Janzén, E.

    2013-10-01

    The development of a chemical vapor deposition (CVD) process for very thick silicon carbide (SiC) epitaxial layers suitable for high power devices is demonstrated by epitaxial growth of 200 μm thick, low doped 4H-SiC layers with excellent morphology at growth rates exceeding 100 μm/h. The process development was done in a hot wall CVD reactor without rotation using both SiCl4 and SiH4+HCl precursor approaches to chloride based growth chemistry. A C/Si ratio <1 and an optimized in-situ etch are shown to be the key parameters to achieve 200 μm thick, low doped epitaxial layers with excellent morphology.

  4. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
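
    The core loop (approximate the likelihood parametrically from repeated stochastic simulations, then place it inside a conventional Metropolis MCMC) can be sketched on a toy problem. The one-parameter Gaussian simulator below merely stands in for FORMIND, and all settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(theta, n=50):
    """Toy stochastic simulator (the role FORMIND plays in the paper)."""
    return rng.normal(theta, 1.0, n)

def synthetic_loglik(theta, obs_summary, reps=30):
    """Parametric likelihood approximation: fit a Gaussian to summary
    statistics of repeated simulations, evaluate the observed summary."""
    sims = np.array([simulate(theta).mean() for _ in range(reps)])
    mu, sd = sims.mean(), sims.std() + 1e-6
    return -0.5 * ((obs_summary - mu) / sd) ** 2 - np.log(sd)

# "Virtual field data" from a known parameter, as in the paper's validation
true_theta = 2.0
obs = simulate(true_theta).mean()

# Plain Metropolis sampler using the simulation-based likelihood
theta, ll = 0.0, synthetic_loglik(0.0, obs)
chain = []
for _ in range(800):
    prop = theta + rng.normal(0.0, 0.3)
    ll_prop = synthetic_loglik(prop, obs)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta)
posterior_mean = np.mean(chain[200:])   # discard burn-in
```

    Retrieving the known parameter from virtual data, as this sketch does, is the same sanity check the authors run before fitting the forest model to real field data.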

  5. Tracer SWIW tests in propped and un-propped fractures: parameter sensitivity issues, revisited

    NASA Astrophysics Data System (ADS)

    Ghergut, Julia; Behrens, Horst; Sauter, Martin

    2017-04-01

    Single-well injection-withdrawal (SWIW) or 'push-then-pull' tracer methods appear attractive for a number of reasons: less uncertainty in design and dimensioning, and lower tracer quantities required than for inter-well tests; stronger tracer signals, enabling easier and cheaper metering, and shorter metering duration required, reaching higher tracer mass recovery than in inter-well tests; and, last but not least, no need for a second well. However, SWIW tracer signal inversion faces a major issue: the 'push-then-pull' design weakens the correlation between tracer residence times and georeservoir transport parameters, inducing insensitivity or ambiguity of tracer signal inversion with respect to some of those georeservoir parameters that are supposed to be the target of tracer tests par excellence: pore velocity, transport-effective porosity, fracture or fissure aperture and spacing or density (where applicable), and fluid/solid or fluid/fluid phase interface density. Hydraulic methods cannot measure the transport-effective values of such parameters, because pressure signals correlate neither with fluid motion, nor with material fluxes through (fluid-rock or fluid-fluid) phase interfaces. The notorious ambiguity impeding parameter inversion from SWIW test signals has nourished several 'modeling attitudes': (i) regard dispersion as the key process encompassing whatever superposition of underlying transport phenomena, and seek a statistical description of flow-path collectives enabling dispersion to be characterized independently of any other transport parameter, as proposed by Gouze et al. (2008), with Hansen et al. (2016) offering a comprehensive analysis of the various ways dispersion model assumptions interfere with parameter inversion from SWIW tests; (ii) regard diffusion as the key process, and seek a large-time, asymptotically advection-independent regime in the measured tracer signals (Haggerty et al. 
2001), enabling a dispersion-independent characterization of multiple-scale diffusion; (iii) attempt to determine both advective and non-advective transport parameters from one and the same conservative-tracer signal (relying on 'third-party' knowledge), or from twin signals of a so-called 'dual' tracer pair, e. g.: using tracers with contrasting reactivity and partitioning behavior to determine residual saturation in depleted oilfields (Tomich et al. 1973), or to determine advective parameters (Ghergut et al. 2014); using early-time signals of conservative and sorptive tracers for propped-fracture characterization (Karmakar et al. 2015); using mid-time signals of conservative tracers for a reservoir-borne inflow profiling in multi-frac systems (Ghergut et al. 2016), etc. The poster describes new uses of type-(iii) techniques for the specific purposes of shale-gas reservoir characterization, productivity monitoring, diagnostics and engineering of 're-frac' treatments, based on parameter sensitivity findings from German BMWi research project "TRENDS" (Federal Ministry for Economic Affairs and Energy, FKZ 0325515) and from the EU-H2020 project "FracRisk" (grant no. 640979).

  6. Evaluation of the biophysical limitations on photosynthesis of four varietals of Brassica rapa

    NASA Astrophysics Data System (ADS)

    Pleban, J. R.; Mackay, D. S.; Aston, T.; Ewers, B.; Weinig, C.

    2014-12-01

    Evaluating the performance of agricultural varietals can support the identification of genotypes that will increase yield and can inform management practices. The biophysical limitations of photosynthesis are amongst the key factors that necessitate evaluation. This study evaluated how four biophysical limitations on photosynthesis (stomatal response to vapor pressure deficit, maximum carboxylation rate by Rubisco (Ac), rate of photosynthetic electron transport (Aj), and triose phosphate use (At)) vary between four Brassica rapa genotypes. Leaf gas exchange data were used in an ecophysiological process model to conduct this evaluation. The Terrestrial Regional Ecosystem Exchange Simulator (TREES) integrates the carbon uptake and utilization rate-limiting factors for plant growth. A Bayesian framework integrated in TREES used net A as the target to estimate the four limiting factors for each genotype. As a first step, the Bayesian framework was used for outlier detection, with data points outside the 95% confidence interval of the model estimation eliminated. Next, parameter estimation facilitated the evaluation of how the limiting factors on A differ between genotypes. Parameters evaluated included maximum carboxylation rate (Vcmax), quantum yield (ϕJ), the ratio between Vcmax and electron transport rate (J), and triose phosphate utilization (TPU). Finally, as triose phosphate utilization has been shown not to play a major role in limiting A in many plants, the inclusion of At in the models was evaluated using the deviance information criterion (DIC). The outlier detection resulted in a narrowing of the estimated parameter distributions, allowing for greater differentiation of genotypes. Results show that genotypes vary in how the limitations shape assimilation. The range in Vcmax, a key parameter in Ac, was 203.2 - 223.9 umol m-2 s-1, while the range in ϕJ, a key parameter in Aj, was 0.463 - 0.497 umol m-2 s-1. The added complexity of the TPU limitation did not improve model performance in the genotypes assessed, based on DIC. By identifying how varietals differ in their biophysical limitations on photosynthesis, genotype selection can be informed for agricultural goals. Further work aims at applying this approach to a fifth limiting factor on photosynthesis, mesophyll conductance.
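
    The limitation structure being compared (net assimilation capped by the minimum of the Rubisco-, electron-transport-, and TPU-limited rates, in the Farquhar tradition) can be written down compactly. All parameter values below are illustrative defaults, not the fitted Brassica rapa values from the study:

```python
def net_assimilation(ci, vcmax=210.0, j=230.0, tpu=12.0,
                     kc=270.0, ko=165.0, o=210.0, gamma_star=37.0, rd=1.0):
    """Farquhar-type limitation structure: net A is the minimum of the
    Rubisco-limited (Ac), electron-transport-limited (Aj) and triose
    phosphate use-limited (At) rates, minus dark respiration rd.
    ci is intercellular CO2 (umol/mol); all constants are illustrative."""
    ac = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))
    aj = j * (ci - gamma_star) / (4.0 * ci + 8.0 * gamma_star)
    at = 3.0 * tpu
    return min(ac, aj, at) - rd

# At low CO2 the Rubisco term caps the rate; at high CO2 the TPU term does
a_low, a_high = net_assimilation(100.0), net_assimilation(1500.0)
```

    Fitting Vcmax, J-related and TPU parameters of such a structure to gas exchange data, inside a Bayesian framework, is what lets the study ask which limitation dominates in each genotype and whether the At term earns its extra complexity.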

  7. Parameter Stability of the Functional–Structural Plant Model GREENLAB as Affected by Variation within Populations, among Seasons and among Growth Stages

    PubMed Central

    Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael

    2007-01-01

    Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). 
Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141

  8. A new approach to identify the sensitivity and importance of physical parameters combination within numerical models using the Lund-Potsdam-Jena (LPJ) model as an example

    NASA Astrophysics Data System (ADS)

    Sun, Guodong; Mu, Mu

    2017-05-01

    An important source of uncertainty, which causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. Therefore, finding the subset of the numerous physical parameters in atmospheric and oceanic models that are relatively more sensitive and important, and reducing the errors in the parameters in this subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameters (CNOP-P) method. The approach provides a framework to ascertain the subset of relatively more sensitive and important physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach in China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in arid and semi-arid regions of China, compared to those in northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate not only that our approach offers a new route to identify relatively more sensitive and important physical parameters, but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.

  9. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    PubMed

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

    This study presents a generic methodology for producing simplified models that provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on global sensitivity analysis to identify the key parameters explaining the variability of systems' life cycle impacts. Simplified models are then built upon these identified key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential. Greenhouse gas (GHG) performance was plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated without undertaking an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them a robust but simple tool for assessing the environmental performance of energy systems.
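The screening step can be sketched in miniature: rank parameters by how much output variance each one induces when varied alone over its range. This one-at-a-time scheme is a simplified stand-in for the paper's global sensitivity analysis, and the embodied-emissions and rated-power values are assumed placeholders, not figures from the study.

```python
import random
import statistics

def ghg_per_kwh(load_factor, lifetime_yr, embodied_tco2=1500.0, rated_kw=2000.0):
    """Illustrative lifecycle GHG intensity (g CO2/kWh): assumed lifecycle
    emissions spread over the electricity produced during the lifetime."""
    lifetime_kwh = rated_kw * 8760.0 * load_factor * lifetime_yr
    return embodied_tco2 * 1e6 / lifetime_kwh

def oat_sensitivity(model, ranges, n=2000, seed=0):
    """One-at-a-time screening: variance of the output when each parameter is
    sampled alone over its range, with the others fixed at their midpoints."""
    rng = random.Random(seed)
    mid = {k: (lo + hi) / 2.0 for k, (lo, hi) in ranges.items()}
    variances = {}
    for k, (lo, hi) in ranges.items():
        args = dict(mid)
        ys = []
        for _ in range(n):
            args[k] = rng.uniform(lo, hi)
            ys.append(model(**args))
        variances[k] = statistics.pvariance(ys)
    return variances

ranges = {"load_factor": (0.2, 0.45), "lifetime_yr": (15.0, 30.0)}
var = oat_sensitivity(ghg_per_kwh, ranges)
ranked = sorted(var, key=var.get, reverse=True)  # most influential first
```

A variance-based method such as Sobol indices would additionally capture parameter interactions, which one-at-a-time screening misses.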

  10. Dynamical investigation and parameter stability region analysis of a flywheel energy storage system in charging mode

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ya; Li, Yong-Li; Chang, Xiao-Yong; Wang, Nan

    2013-09-01

    In this paper, the electromechanical coupling characteristics of a flywheel energy storage system (FESS) with a permanent magnet (PM) brushless direct-current (DC) motor (BLDCM) are studied. Hopf bifurcation theory and nonlinear methods are used to investigate the generation process and mechanism of the coupled dynamic behavior of the average-current-controlled FESS in charging mode. First, the universal nonlinear dynamic model of the FESS based on the BLDCM is derived. Then, for a 0.01 kWh/1.6 kW FESS platform in the Key Laboratory of the Smart Grid at Tianjin University, the phase trajectory of the FESS from a stable state towards chaos is presented using numerical and stroboscopic methods, and all dynamic behaviors of the system in this process are captured. The characteristics of the low-frequency oscillation and the mechanism of the Hopf bifurcation are investigated based on the Routh stability criterion and nonlinear dynamic theory. It is shown that the Hopf bifurcation is directly due to the loss of control over the inductor current, which occurs when the system control parameters exceed certain ranges. This nonlinear coupling process affects the stability of motor operation and the efficiency of energy transfer. We investigate the effects of control-parameter changes on stability, and the stability regions of these parameters, based on the averaged-model approach. Furthermore, the effect of quantization error in the digital control system is considered to refine the stability regions of the control parameters. Finally, these theoretical results are verified through platform experiments.
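Mapping a stability region with the Routh criterion can be illustrated on a third-order system: for a characteristic polynomial s^3 + a2 s^2 + a1 s + a0, the Routh-Hurwitz conditions are a2 > 0, a0 > 0, and a2*a1 > a0. The characteristic polynomial below is a hypothetical function of a control gain k, not the FESS model from the paper.

```python
def cubic_is_stable(a2, a1, a0):
    """Routh-Hurwitz conditions for s^3 + a2*s^2 + a1*s + a0."""
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

def stability_region(char_poly, k_values):
    """Return the control-parameter values for which the system is stable."""
    return [k for k in k_values if cubic_is_stable(*char_poly(k))]

def char_poly(k):
    # Hypothetical closed-loop coefficients as functions of a control gain k
    return (3.0, 2.0 + k, 5.0 * k)

ks = [i * 0.1 for i in range(1, 100)]
stable = stability_region(char_poly, ks)  # here: stable for 0 < k < 3
```

Scanning two control parameters on a grid in the same way yields a 2-D stability region, which is how averaged-model stability boundaries are typically visualized.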

  11. AmapSim: A Structural Whole-plant Simulator Based on Botanical Knowledge and Designed to Host External Functional Models

    PubMed Central

    Barczi, Jean-François; Rey, Hervé; Caraglio, Yves; de Reffye, Philippe; Barthélémy, Daniel; Dong, Qiao Xue; Fourcaud, Thierry

    2008-01-01

    Background and Aims AmapSim is a tool that implements a structural plant growth model based on botanical theory and simulates plant morphogenesis to produce accurate, complex and detailed plant architectures. This software is the result of more than a decade of research and development devoted to plant architecture. New advances in the software development have yielded plug-in external functions that open the simulator up to functional processes. Methods The simulation of plant topology is based on the growth of a set of virtual buds whose activity is modelled using stochastic processes. The geometry of the resulting axes is modelled by simple descriptive functions. The potential growth of each bud is represented by a numerical value called physiological age, which controls the value of each parameter in the model. The set of possible values for physiological age is called the reference axis. In order to mimic morphological and architectural metamorphosis, the physiological age allocated to buds evolves along this reference axis according to an oriented finite state automaton whose occupation and transition law follows a semi-Markovian function. Key Results Simulations were performed on tomato plants to demonstrate how the AmapSim simulator can interface external modules, e.g. a GREENLAB growth model and a radiosity model. Conclusions The algorithmic abilities provided by AmapSim, e.g. the reference axis, enable unified control over plant development parameter values depending on the targeted biological process: affecting the locally pertinent process, i.e. the pertinent parameter(s), while keeping the rest unchanged. This opening up to external functions also broadens the field of applications and allows feedback between plant growth and the physical environment. PMID:17766310

  12. Automatic Adviser on stationary devices status identification and anticipated change

    NASA Astrophysics Data System (ADS)

    Shabelnikov, A. N.; Liabakh, N. N.; Gibner, Ya M.; Pushkarev, E. A.

    2018-05-01

    A task is defined to synthesize an Automatic Adviser that identifies the status of stationary automation devices using an autoregressive model of changes in their key parameters. The applied model type is justified and a monitoring-process algorithm for the research objects is developed. A suite for simulating the monitored objects' status and analyzing prediction results is proposed. The research results are illustrated with the specific example of a hump yard compressor station. The work was supported by the Russian Fundamental Research Fund, project No. 17-20-01040.
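An autoregressive status monitor of this kind can be sketched very compactly: fit an AR(1) model to a device parameter's history and extrapolate it to anticipate the coming change. The compressor-station readings below are invented illustrative numbers, not data from the paper.

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = a * x[t-1] + b (first-order autoregression)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def anticipate(series, horizon=3):
    """Iterate the fitted AR(1) model forward to anticipate the parameter's change."""
    a, b = fit_ar1(series)
    x, preds = series[-1], []
    for _ in range(horizon):
        x = a * x + b
        preds.append(x)
    return preds

# Hypothetical compressor-station parameter drifting upward (illustrative values)
readings = [10.0, 10.5, 11.0, 11.5, 12.0]
forecast = anticipate(readings)  # continues the upward drift
```

An adviser would compare such forecasts against alarm thresholds to flag an anticipated status change before it occurs.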

  13. Design and cost drivers in 2-D braiding

    NASA Technical Reports Server (NTRS)

    Morales, Alberto

    1993-01-01

    Fundamentally, the braiding process is a highly efficient, low-cost method for combining single yarns into circumferential shapes, as evidenced by the number of applications for continuous sleeving. However, it has not yet been fully demonstrated that this braiding approach can drastically reduce the cost of complex-shape structural preforms. Factors such as part geometry, machine design and configuration, materials used, and operating parameters are described as key cost drivers, along with what is needed to minimize their effect on the cost of structural braided preforms.

  14. Manufacturing Technology Support (MATES). Task Order 0021: Air Force Technology and Industrial Base Research and Analysis, Subtask Order 06: Direct Digital Manufacturing

    DTIC Science & Technology

    2011-08-01

    industries and key players providing equipment include Flow and OMAX. The decision tree for waterjet machining is shown in Figure 28. Figure 28...about the melt pool. Process parameters including powder flow , laser power, and scan speed are adjusted accordingly • Multiple materials o BD...project.eu.com/home/home_page_static.jsp o Working with multiple partners; one is Cochlear . Using LMD or SLM to fabricate cochlear implants with 10

  15. Evaluating Ammonium, Nitrate and Sulfate Aerosols in 3-Dimensions

    NASA Technical Reports Server (NTRS)

    Mezuman, Keren; Bauer, Susanne E.; Tsigaridis, Kostas

    2015-01-01

    The effect aerosols have on climate and air quality is a function of their chemical composition, concentration and spatial distribution. These parameters are controlled by emissions; heterogeneous and homogeneous chemistry, where thermodynamics plays a key role; transport, which includes stratospheric-tropospheric exchange; and depositional sinks. In this work we demonstrate the effect of some of these processes on the SO4-NH4-NO3 system using the GISS ModelE2 Global Circulation Model (GCM).

  16. ITO-based evolutionary algorithm to solve traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Dong, Wenyong; Sheng, Kang; Yang, Chuanhua; Yi, Yunfei

    2014-03-01

    In this paper, an ITO algorithm inspired by the ITO stochastic process is proposed for the Traveling Salesman Problem (TSP). Many meta-heuristic methods have been successfully applied to TSP; however, ITO's suitability for TSP still requires demonstration. Starting from the design of the key operators, which include the move operator, the wave operator, etc., an ITO-based method for TSP is presented. The algorithm's performance under different parameter sets and the maintenance of population-diversity information are also studied.
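The notion of a "move operator" for TSP can be made concrete with the classic 2-opt move, which reverses a tour segment; this is a generic illustration of such operators, not the specific operators defined in the ITO algorithm.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_move(tour, i, j):
    """A classic 'move operator': reverse the segment tour[i..j]."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def local_search(tour, dist, iters=500, seed=1):
    """Accept a random 2-opt move whenever it shortens the tour."""
    rng = random.Random(seed)
    best, best_len = tour, tour_length(tour, dist)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = two_opt_move(best, i, j)
        cand_len = tour_length(cand, dist)
        if cand_len < best_len:
            best, best_len = cand, cand_len
    return best, best_len

# Four cities on a unit square; the initial tour [0, 2, 1, 3] crosses itself.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.hypot(px - qx, py - qy) for qx, qy in pts] for px, py in pts]
best_tour, best_len = local_search([0, 2, 1, 3], dist)
```

Meta-heuristics such as ITO layer stochastic move-acceptance schedules and population diversity maintenance on top of elementary operators like this one.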

  17. A practical guide for the fabrication of microfluidic devices using glass and silicon

    PubMed Central

    Iliescu, Ciprian; Taylor, Hayden; Avram, Marioara; Miao, Jianmin; Franssila, Sami

    2012-01-01

    This paper describes the main protocols that are used for fabricating microfluidic devices from glass and silicon. Methods for micropatterning glass and silicon are surveyed, and their limitations are discussed. Bonding methods that can be used for joining these materials are summarized and key process parameters are indicated. The paper also outlines techniques for forming electrical connections between microfluidic devices and external circuits. A framework is proposed for the synthesis of a complete glass/silicon device fabrication flow. PMID:22662101

  18. Design Principles of DNA Enzyme-Based Walkers: Translocation Kinetics and Photoregulation.

    PubMed

    Cha, Tae-Gon; Pan, Jing; Chen, Haorong; Robinson, Heather N; Li, Xiang; Mao, Chengde; Choi, Jong Hyun

    2015-07-29

    Dynamic DNA enzyme-based walkers complete their stepwise movements along a prescribed track through a series of reactions, including hybridization, enzymatic cleavage, and strand displacement; however, their overall translocation kinetics is not well understood. Here, we perform mechanistic studies to elucidate several key parameters that govern the kinetics and processivity of DNA enzyme-based walkers. These parameters include the DNA enzyme core type and structure, the upper and lower recognition arm lengths, and the divalent metal cation species and concentration. A theoretical model is developed within the framework of single-molecule kinetics to describe the overall translocation kinetics as well as each reaction step. A better understanding of the kinetics and design parameters enables us to demonstrate walker movement of nearly 5 μm at an average speed of ∼1 nm s⁻¹. We also show that the translocation kinetics of DNA walkers can be effectively controlled by external light stimuli using photoisomerizable azobenzene moieties. A 2-fold increase in the cleavage reaction is observed when the hairpin stems of the enzyme catalytic cores are opened under UV irradiation. This study provides general design guidelines for constructing highly processive, autonomous DNA walker systems and for regulating their translocation kinetics, which should facilitate the development of functional DNA walkers.
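In the single-molecule picture, a walker's translocation is a sequence of stochastic steps with exponentially distributed waiting times, so the mean speed is roughly step size times stepping rate. The sketch below simulates this; the step size and rate are assumptions chosen only so that the mean speed lands near the reported ∼1 nm/s, not fitted values from the paper.

```python
import random

def walker_displacement(step_nm=7.0, rate_per_s=0.15, t_total_s=3600.0, seed=7):
    """Gillespie-style simulation of a walker taking steps with exponentially
    distributed waiting times; returns the displacement after t_total_s."""
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    while True:
        dt = rng.expovariate(rate_per_s)  # waiting time to the next cleavage/step
        if t + dt > t_total_s:
            return x
        t += dt
        x += step_nm

# Mean speed should approach step_nm * rate_per_s, i.e. about 1.05 nm/s here
speed_nm_per_s = walker_displacement() / 3600.0
```

Rate-changing perturbations (cation concentration, arm length, UV-switched hairpins) would enter such a model simply by rescaling rate_per_s.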

  19. Modeling seasonal variability of carbonate system parameters at the sediment -water interface in the Baltic Sea (Gdansk Deep)

    NASA Astrophysics Data System (ADS)

    Protsenko, Elizaveta; Yakubov, Shamil; Lessin, Gennady; Yakushev, Evgeniy; Sokołowski, Adam

    2017-04-01

    A one-dimensional, fully coupled benthic-pelagic biogeochemical model, BROM (Bottom RedOx Model), was used to simulate the seasonal variability of biogeochemical parameters in the upper sediment, the bottom boundary layer and the water column of the Gdansk Deep of the Baltic Sea. The model represents key biogeochemical processes transforming C, N, P, Si, O, S, Mn and Fe, and the processes of vertical transport in the water column and the sediments. The hydrophysical block of BROM was forced by output calculated with the GETM model (General Estuarine Transport Model). In this study we focused on carbonate system parameters in the Baltic Sea, mainly on their distributions near the sediment-water interface. For validation of BROM we used field data (concentrations of the main nutrients in the water column and in the porewater of the upper sediment) from the Gulf of Gdansk. The model allowed us to simulate the baseline ranges of seasonal variability of pH, alkalinity, TIC and calcite/aragonite saturation, as well as vertical fluxes of carbon, in a region potentially selected for CCS storage. This work was supported by the projects EEA CO2MARINE and STEMM-CCS.

  20. Key parameters design of an aerial target detection system on a space-based platform

    NASA Astrophysics Data System (ADS)

    Zhu, Hanlu; Li, Yejin; Hu, Tingliang; Rao, Peng

    2018-02-01

    To ensure the flight safety of aerial aircraft and avoid the recurrence of aircraft collisions, a multi-information fusion method is proposed to design the key parameters for aircraft target detection on a space-based platform. The key parameters of detection waveband and spatial resolution were determined using the target-background absolute contrast, the target-background relative contrast, and the signal-to-clutter ratio. The study also uses the signal-to-interference ratio to analyze system performance. Key parameters are obtained through simulation of a specific aircraft. The simulation results show that the boundary ground sampling distances are 30 and 35 m in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for detection of most aircraft, and that the most suitable detection wavebands are 3.4 to 4.2 μm and 4.35 to 4.5 μm in the MWIR band, and 9.2 to 9.8 μm in the LWIR band. We also found that the direction of detection has a great impact on detection efficiency, especially in the MWIR bands.

  1. Recent developments in photocatalytic water treatment technology: a review.

    PubMed

    Chong, Meng Nan; Jin, Bo; Chow, Christopher W K; Saint, Chris

    2010-05-01

    In recent years, the semiconductor photocatalytic process has shown great potential as a low-cost, environmentally friendly and sustainable treatment technology, aligning with the "zero waste" scheme in the water/wastewater industry. The ability of this advanced oxidation technology to remove persistent organic compounds and microorganisms from water has been widely demonstrated. At present, the main technical barrier impeding its commercialisation remains the post-treatment recovery of the catalyst particles. This paper reviews recent R&D progress on engineered photocatalysts, photoreactor systems, and the optimization and modelling of photooxidation processes for water treatment. A number of potential and commercial photocatalytic reactor configurations are discussed, in particular photocatalytic membrane reactors. The effects of key photoreactor operating parameters and water quality on photo-process performance, in terms of mineralization and disinfection, are assessed. For the first time, we describe how to use a multi-variable optimization approach to determine the optimum operating parameters so as to enhance process performance and photooxidation efficiency. Both photomineralization and photo-disinfection kinetics, and their modelling as part of the photocatalytic water treatment process, are detailed. A brief discussion of life cycle assessment for retrofitting photocatalytic technology as an alternative waste treatment process is presented. This paper delivers a scientific and technical overview and useful information to scientists and engineers who work in this field.

  2. Biogas Production: Microbiology and Technology.

    PubMed

    Schnürer, Anna

    Biogas, containing energy-rich methane, is produced by microbial decomposition of organic material under anaerobic conditions. Under controlled conditions, this process can be used for the production of energy and of a nutrient-rich residue suitable for use as a fertilising agent. The biogas can be used for the production of heat, electricity or vehicle fuel. Different substrates can be used in the process and, depending on substrate character, various reactor technologies are available. The microbiological process leading to methane production is complex and involves many different types of microorganisms, often operating in close relationships because of the limited amount of energy available for growth. The microbial community structure is shaped by the incoming material, but also by operating parameters such as process temperature. Factors leading to an imbalance in the microbial community can result in process instability or even complete process failure. To ensure stable operation, key parameters such as the levels of degradation intermediates and gas quality are often monitored. Despite the fact that the anaerobic digestion process has long been used for industrial production of biogas, many questions still need to be resolved to achieve optimal management and gas yields and to exploit the great energy and nutrient potential available in waste material. This chapter discusses the different aspects that need to be taken into consideration to achieve optimal degradation and gas production, with particular focus on operation management and microbiology.

  3. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
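The index additivity the scheme relies on (applying transforms of orders a and b equals one transform of order a + b, so decryption can undo all stages at once with order -(a + b)) can be illustrated with a deliberately minimal stand-in: a cyclic shift shares that property, though it has none of the security of the actual multiple-parameter discrete fractional random transform.

```python
def shift_transform(data, order):
    """Toy transform with index additivity: composing orders a then b equals
    a single transform of order a + b (like the fractional transform family,
    but a cyclic shift is only an illustrative stand-in, not secure)."""
    k = order % len(data)
    return data[-k:] + data[:-k] if k else list(data)

def encrypt(data, keys):
    """Apply one transform stage per key parameter."""
    out = data
    for k in keys:
        out = shift_transform(out, k)
    return out

def decrypt(cipher, keys):
    """Index additivity: undo every stage at once with order -(k1 + k2 + ...)."""
    return shift_transform(cipher, -sum(keys))

row = list(b"PLAINTEXT PIXELS")   # one image row as bytes
keys = [3, 5, 11]                 # multiple key parameters
cipher = encrypt(row, keys)
recovered = decrypt(cipher, keys)
```

Key sensitivity follows from the same property: decrypting with any key set whose sum differs from the original leaves a residual transform and fails to recover the plaintext.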

  4. The Effects of Polymer Carrier, Hot Melt Extrusion Process and Downstream Processing Parameters on the Moisture Sorption Properties of Amorphous Solid Dispersions

    PubMed Central

    Feng, Xin; Vo, Anh; Patil, Hemlata; Tiwari, Roshan V.; Alshetaili, Abdullah S.; Pimparade, Manjeet B.; Repka, Michael A.

    2017-01-01

    Objective The aim of this study was to evaluate the effects of the polymer carrier, hot melt extrusion (HME) and downstream processing parameters on the water uptake properties of amorphous solid dispersions. Methods Three polymers and a model drug were used to prepare amorphous solid dispersions using HME technology. The sorption-desorption isotherms of the solid dispersions and their physical mixtures were measured with a Dynamic Vapor Sorption system, and the effects of polymer hydrophobicity, hygroscopicity, molecular weight and the HME process were investigated. FTIR imaging was performed to understand the phase separation driven by moisture. Key findings Solid dispersions based on polymeric carriers with lower hydrophilicity and hygroscopicity and higher molecular weight sorbed less moisture under high-RH conditions. The water uptake of the polymer-drug solid dispersion systems was decreased after HME compared with the physical mixtures, which might be due to decreased surface area and porosity. FTIR imaging indicated that the homogeneity of the drug molecularly dispersed within the polymer matrix changed after exposure to high RH. Conclusion Understanding the effects of formulation and processing on the moisture sorption properties of solid dispersions is essential for the development of drug products with the desired physical and chemical stability. PMID:26589107

  5. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data

    PubMed Central

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key model parameters by incorporating experimental data, whereas a differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing an ABM to mimic the multi-scale immune system with its various phenotypes and cell types, and by using the input and output of the ABM to build a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set, and used the ABM to describe a 3D immune system similar to previous studies that employed DE models. These results indicate that IABMR not only has the potential to simulate the immune system at various scales and across phenotypes and cell types, but can also accurately infer key parameters, as a DE model does. Therefore, this study develops a mechanism for complex-system modelling that can simulate the complicated immune system in detail, like an ABM, and validate the model's reliability and efficiency by fitting experimental data, like a DE model. PMID:26535589
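The estimation loop can be sketched in miniature: run a stochastic agent model at candidate parameter values and pick the value whose output best fits the data. The "ABM" below is a deliberately tiny toy (agents activating with some probability), and the grid scan stands in for the paper's Loess-regression-plus-greedy-search machinery.

```python
import random

def abm_output(rate, n_agents=200, steps=10, seed=0):
    """Toy stochastic 'ABM': each still-inactive agent activates per step with
    probability `rate`; returns the cumulative activated counts over time."""
    rng = random.Random(seed)
    active = 0
    trajectory = []
    for _ in range(steps):
        for _ in range(n_agents - active):
            if rng.random() < rate:
                active += 1
        trajectory.append(active)
    return trajectory

def sse(a, b):
    """Sum of squared errors between two trajectories."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def greedy_fit(data, grid):
    """Pick the grid value whose simulated trajectory best fits the data."""
    return min(grid, key=lambda r: sse(abm_output(r), data))

observed = abm_output(0.3)                      # synthetic 'experimental' data
grid = [i / 100 for i in range(5, 60, 5)]       # candidate activation rates
best_rate = greedy_fit(observed, grid)          # recovers the generating rate
```

In IABMR the regression surrogate replaces the exhaustive reruns, which is what makes parameter estimation tractable for expensive multi-scale ABMs.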

  6. Prospect theory reflects selective allocation of attention.

    PubMed

    Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph

    2018-02-01

    There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Optimization of LC-Orbitrap-HRMS acquisition and MZmine 2 data processing for nontarget screening of environmental samples using design of experiments.

    PubMed

    Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias

    2016-11-01

    Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has rarely been assessed and optimized so far. To fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of the MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples using a design of experiments (DoE) approach, and compared to a manual evaluation. The results indicate that a short CT significantly improves the quality of automatic peak detection, which means that full-scan acquisition without additional MS2 experiments is suggested for nontarget screening. Under optimal parameter settings, MZmine 2 detected 75-100% of the peaks found by manual peak detection at an intensity level of 10^5 in a validation dataset of both spiked and real water samples. Finally, we provide an optimization workflow for MZmine 2 LC-HRMS data processing that is applicable to environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
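The DoE step amounts to evaluating a response (here, peaks detected) over a structured design of factor levels and selecting the best settings. The sketch below uses a full-factorial design; the factor names and the closed-form response surface are hypothetical stand-ins, since the real response comes from running MZmine 2 on the spiked extracts.

```python
import itertools

def peak_count(noise_threshold, min_peak_height):
    """Hypothetical response surface standing in for 'target peaks detected';
    in the study this value would come from an actual MZmine 2 run."""
    return (100
            - 0.004 * (noise_threshold - 1000) ** 2 / 100
            - 0.02 * (min_peak_height - 50) ** 2)

def full_factorial(levels):
    """All combinations of factor levels (a full-factorial design)."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(levels[n] for n in names))]

levels = {"noise_threshold": [500, 1000, 2000],
          "min_peak_height": [25, 50, 100]}
runs = full_factorial(levels)                       # 3 x 3 = 9 runs
best = max(runs, key=lambda r: peak_count(**r))     # best-performing settings
```

Fractional-factorial or response-surface designs reduce the run count when many parameters are screened at once, which is what makes DoE "effort-saving" here.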

  8. Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.

    PubMed

    Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha

    2012-11-30

    Application of photosynthetic microorganisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors such as cell density can be kept constant by turbidostatic control, light intensity and iron content, with their strong impact on productivity, can be optimized. Both are key parameters because of their strong influence on photosynthetic activity. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell density and continuous measurement of O₂ evolution, stress conditions and their effect on photosynthetic productivity can be directly observed. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Development of a General Form CO 2 and Brine Flux Input Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansoor, K.; Sun, Y.; Carroll, S.

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.
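The ROM-plus-Monte-Carlo pattern can be sketched directly: replace the expensive wellbore simulation with a cheap fitted surrogate, then sample the uncertain inputs many times to get flux statistics. The closed-form surrogate and the input distributions below are invented illustrations, not the NRAP ROM or its calibrated parameters.

```python
import random
import statistics

def rom_leakage_flux(perm, pressure):
    """Hypothetical reduced-order model: leakage flux as a cheap closed form,
    imagined as fitted to wellbore simulations (not the actual NRAP ROM)."""
    return 1e-3 * perm * max(pressure - 10.0, 0.0) ** 1.5

def monte_carlo(n=10000, seed=42):
    """Sample uncertain inputs and propagate them through the ROM."""
    rng = random.Random(seed)
    fluxes = []
    for _ in range(n):
        perm = rng.lognormvariate(0.0, 0.5)   # uncertain permeability multiplier
        pressure = rng.uniform(8.0, 20.0)     # uncertain overpressure (MPa, assumed)
        fluxes.append(rom_leakage_flux(perm, pressure))
    # Mean flux and roughly the 95th percentile of the sampled distribution
    return statistics.mean(fluxes), statistics.quantiles(fluxes, n=20)[-1]

mean_flux, p95_flux = monte_carlo()
```

Because each ROM evaluation is microseconds rather than hours, tail quantiles like the 95th percentile become affordable to estimate, which is the point of building the ROM.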

  10. Design of a superconducting 28 GHz ion source magnet for FRIB using a shell-based support structure

    DOE PAGES

    Felice, H.; Rochepault, E.; Hafalia, R.; ...

    2014-12-05

    The Superconducting Magnet Program at the Lawrence Berkeley National Laboratory (LBNL) is completing the design of a 28 GHz NbTi ion source magnet for the Facility for Rare Isotope Beams (FRIB). The design parameters are based on those of the ECR ion source VENUS, in operation at LBNL since 2002, featuring a sextupole-in-solenoids configuration. Whereas most of the magnet components (such as the conductor, magnetic design, and protection scheme) remain very similar to the VENUS magnet components, the support structure of the FRIB ion source uses a different concept. A shell-based support structure using bladders and keys is implemented in the design, allowing fine tuning of the sextupole preload and reversibility of the magnet assembly process. As part of the design work, the conductor insulation scheme, coil fabrication processes and assembly procedures are also explored to optimize performance. We present the main features of the design, emphasizing the integrated design approach used at LBNL to achieve this result.

  11. Transient performance analysis of the master cylinder hydraulic system of a 6.3 MN fineblanking press

    NASA Astrophysics Data System (ADS)

    Yi, Guodong; Li, Jin

    2018-03-01

    The master cylinder hydraulic system is the core component of the fineblanking press that seriously affects the machine performance. A key issue in the design of the master cylinder hydraulic system is dealing with the heavy shock loads in the fineblanking process. In this paper, an equivalent model of the master cylinder hydraulic system is established based on typical process parameters for practical fineblanking; then, the response characteristics of the master cylinder slider to the step changes in the load and control current are analyzed, and lastly, control strategies for the proportional valve are studied based on the impact of the control parameters on the kinetic stability of the slider. The results show that the kinetic stability of the slider is significantly affected by the step change of the control current, while it is slightly affected by the step change of the system load, which can be improved by adjusting the flow rate and opening time of the proportional valve.

  12. Self-mixing interferometry as a diagnostics tool for plasma characteristics in laser microdrilling

    NASA Astrophysics Data System (ADS)

    Colombo, Paolo; Demir, Ali Gökhan; Norgia, Michele; Previtali, Barbara

    2017-05-01

    In this work, self-mixing interferometry (SMI) was used to monitor the optical path difference induced by the ablation plasma and plume. The paper develops the analytical relationships that explain the fringe appearance in SMI during laser microdrilling. The monitoring principle was tested in a large experimental campaign of laser microdrilling on a TiAlN ceramic coating with a low-ns green fibre laser. Key process parameters, namely pulse energy, pulse number and repetition rate, were varied. The effect of side gas on the SMI signal characteristics was analysed. Laser-induced breakdown spectroscopy (LIBS) was used to identify the plasma temperature and electron number density. The SMI signals were correlated with the plume size and its evolution as a function of process parameters, as well as with the electron number density estimated by spectroscopy. In addition to proving the validity of the proposed method, the results offer insight into the micromachining of the ceramic material with low-ns pulses.

  13. Models based on value and probability in health improve shared decision making.

    PubMed

    Ortendahl, Monica

    2008-10-01

    Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probabilities provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated quantities in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, as usually pertains in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially at every step of the decision-making process. Introducing decision-analytic modelling gives a more complete picture of the variables that influence the decisions made by the doctor and the patient. A model revised for the values and probabilities perceived by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.
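    The value-times-probability structure of the subjective expected utility model can be sketched in a few lines. The probabilities and values below are hypothetical estimates a doctor and patient might assign, not data from the paper.

```python
# Minimal subjective-expected-utility comparison of two clinical options.
def expected_utility(outcomes):
    """outcomes: list of (probability, value) pairs; probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Hypothetical estimates: (probability of outcome, value of outcome)
treat   = [(0.7, 0.9), (0.3, 0.2)]   # success vs. side-effect burden
observe = [(0.4, 0.8), (0.6, 0.5)]   # spontaneous recovery vs. delay

eu_treat, eu_observe = expected_utility(treat), expected_utility(observe)
preferred = "treat" if eu_treat > eu_observe else "observe"
```

    Revising any single probability or value estimate, by either the doctor or the patient, changes the expected utilities and possibly the preferred option, which is exactly the shared revision process the abstract describes.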

  14. Removable polytetrafluoroethylene template based epitaxy of ferroelectric copolymer thin films

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Chen, Qiusong; Zhang, Jian; Wang, Hui; Cheng, Qian; Jiang, Yulong; Zhu, Guodong

    2018-04-01

    In recent years ferroelectric polymers have shown great potential in organic and flexible electronics. To meet the high-performance and low-energy-consumption requirements of novel electronic devices and systems, the structural and electrical properties of ferroelectric polymer thin films need to be further optimized. One possible route is epitaxial growth of ferroelectric thin films via removable, highly ordered polytetrafluoroethylene (PTFE) templates. Here two key parameters in the epitaxy process, annealing temperature and applied pressure, are systematically studied and optimized through structural and electrical measurements of ferroelectric copolymer thin films. Experimental results indicate that controlled epitaxial growth is realized with a suitable combination of both parameters: the annealing temperature must exceed the melting point of the ferroelectric copolymer film, and simultaneously a moderate pressure (around 2.0 MPa here) should be applied. Too low a pressure (around 1.0 MPa here) usually causes the epitaxy to fail, while too high a pressure (around 3.0 MPa here) often leaves PTFE template residue on the ferroelectric thin films.

  15. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. 
It is hoped that this study provides new insight into developing more accurate and reliable biological models from limited and low-quality experimental data.
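    The core fitting task described above, minimizing the mismatch between model output and noisy experimental data, can be sketched with a generic perturb-and-accept search. This is a deliberately simple stand-in, not the paper's Swarm-based Chemical Reaction Optimization; the toy decay model and all numbers are hypothetical.

```python
# Generic stochastic parameter fit to noisy data (a simple random-search
# stand-in, NOT the paper's Swarm-based Chemical Reaction Optimization).
import math, random
random.seed(0)

def model(k, t):                       # toy exponential-decay "biological" model
    return math.exp(-k * t)

true_k = 0.8
times = [0.5 * i for i in range(10)]
data = [model(true_k, t) + random.gauss(0, 0.01) for t in times]  # noisy obs

def sse(k):                            # fitness: sum of squared errors
    return sum((model(k, t) - y) ** 2 for t, y in zip(times, data))

best_k, best_err = 1.0, sse(1.0)       # deliberately wrong starting guess
for _ in range(2000):                  # perturb-and-accept local search
    cand = best_k + random.gauss(0, 0.05)
    if cand > 0:
        e = sse(cand)
        if e < best_err:
            best_k, best_err = cand, e
```

    Even with noisy observations, the search recovers a parameter close to the true value, which is the behaviour the statistical validation in the paper quantifies for its more sophisticated optimizer.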

  16. The impact of temporal sampling resolution on parameter inference for biological transport models.

    PubMed

    Harrison, Jonathan U; Baker, Ruth E

    2018-06-25

    Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models; performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. 
Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
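    The effect of temporal sampling resolution can be reproduced with a toy one-dimensional velocity jump process. The sketch below uses the naive estimator that the paper's hidden-state framework improves on: counting observed direction changes recovers the true reorientation rate at fine sampling but is biased downward at coarse sampling, because multiple switches between frames go unseen. All parameter values are hypothetical.

```python
# Simulate a 1-D velocity jump process (speed +/- v, reorientation at a
# Poisson rate) and naively estimate the rate from discretely sampled
# positions.  Coarser sampling misses switches and biases the rate low.
import random

rate, speed, T = 1.0, 1.0, 2000.0      # true switch rate, speed, horizon

def sample_positions(dt):
    """Observe the process every dt seconds; return the positions."""
    random.seed(1)                     # same trajectory for every dt
    t, x, v, positions = 0.0, 0.0, speed, []
    next_obs = 0.0
    while next_obs <= T:
        wait = random.expovariate(rate)        # time to next reorientation
        while next_obs <= t + wait and next_obs <= T:
            positions.append(x + v * (next_obs - t))
            next_obs += dt
        x += v * wait
        t += wait
        v = -v                                  # reorient (reverse direction)
    return positions

def naive_rate(dt):
    pos = sample_positions(dt)
    flips = sum(1 for a, b, c in zip(pos, pos[1:], pos[2:])
                if (b - a) * (c - b) < 0)       # observed direction changes
    return flips / (len(pos) * dt)

fine, coarse = naive_rate(0.01), naive_rate(1.0)   # fine vs. coarse sampling
```

    With a true rate of 1.0, the fine-sampled estimate is close to 1.0 while the coarse-sampled estimate falls well below it, illustrating why the discrete nature of observations introduces errors into parameter estimates.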

  17. A deliberative framework to identify the need for real-life evidence building of new cancer drugs after interim funding decision.

    PubMed

    Leung, Leanne; de Lemos, Mário L; Kovacic, Laurel

    2017-01-01

    Background With the rising cost of new oncology treatments, it is no longer sustainable to base initial drug funding decisions primarily on prospective clinical trials, as their performance in real-life populations is often difficult to determine. In British Columbia, an approach to evidence building is to retrospectively analyse patient outcomes using observational research on an ad hoc basis. Methods The deliberative framework was constructed in three stages: framework design, framework validation and treatment programme characterization, and key informant interviews. Framework design was informed by a literature review and analyses of provincial and national decision-making processes. Treatment programmes funded between 2010 and 2013 were used for framework validation. A selection concordance rate of 80% amongst three reviewers was considered to validate the framework. Key informant interviews were conducted to determine the utility of this deliberative framework. Results A multi-domain deliberative framework with 15 assessment parameters was developed. A selection concordance rate of 84.2% was achieved for content validation of the framework. Nine treatment programmes from five different tumour groups were selected for retrospective outcomes analysis. Five contributory factors to funding uncertainties were identified. Key informants agreed that the framework is a comprehensive tool that targets the key areas involved in the funding decision-making process. Conclusions The oncology-based deliberative framework can be routinely used to assess treatment programmes from the major tumour sites for retrospective outcomes analysis. Key informants indicated this is a value-added tool that will provide insight into the current prospective funding model.

  18. Video encryption using chaotic masks in joint transform correlator

    NASA Astrophysics Data System (ADS)

    Saini, Nirmala; Sinha, Aloka

    2015-03-01

    A real-time optical video encryption technique using a chaotic map has been reported. In the proposed technique, each frame of video is encrypted using two different chaotic random phase masks in the joint transform correlator architecture. The different chaotic random phase masks can be obtained either by using different iteration levels or by using different seed values of the chaotic map. The use of different chaotic random phase masks makes the decryption process very complex for an unauthorized person. Optical, as well as digital, methods can be used for video encryption but the decryption is possible only digitally. To further enhance the security of the system, the key parameters of the chaotic map are encoded using RSA (Rivest-Shamir-Adleman) public key encryption. Numerical simulations are carried out to validate the proposed technique.
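    The chaotic-mask idea can be sketched digitally. Here a logistic map seeded by the key parameters generates a pseudo-random mask that is XORed with a frame for simplicity; the paper instead uses random phase masks in a joint transform correlator and protects the map parameters with RSA. The seed and map parameter values below are arbitrary examples.

```python
# Digital sketch of chaotic-mask encryption: a logistic map seeded by the
# key parameters generates a pseudo-random byte mask, XORed with each frame.
# (The paper uses random *phase* masks in a JTC architecture, not XOR.)
def logistic_mask(seed, r, n):
    """n chaotic bytes from the logistic map x -> r*x*(1-x)."""
    x, out = seed, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

frame = bytes(range(16))                  # stand-in for one video frame
mask = logistic_mask(seed=0.37, r=3.99, n=len(frame))   # keys: seed and r
cipher = bytes(f ^ m for f, m in zip(frame, mask))      # encrypt
recovered = bytes(c ^ m for c, m in zip(cipher, mask))  # decrypt with same keys
```

    Changing either the seed value or the number of iterations yields a completely different mask, which is why different frames can be encrypted with different masks derived from the same chaotic map.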

  19. Effect of Thermal Budget on the Electrical Characterization of Atomic Layer Deposited HfSiO/TiN Gate Stack MOSCAP Structure

    PubMed Central

    Khan, Z. N.; Ahmed, S.; Ali, M.

    2016-01-01

    Metal Oxide Semiconductor (MOS) capacitors (MOSCAP) have been instrumental in making CMOS nano-electronics realized for back-to-back technology nodes. High-k gate stacks including the desirable metal gate processing and its integration into CMOS technology remain an active research area projecting the solution to address the requirements of technology roadmaps. Screening, selection and deposition of high-k gate dielectrics, post-deposition thermal processing, choice of metal gate structure and its post-metal deposition annealing are important parameters to optimize the process and possibly address the energy efficiency of CMOS electronics at nano scales. Atomic layer deposition technique is used throughout this work because of its known deposition kinetics resulting in excellent electrical properties and conformal structure of the device. The dynamics of annealing greatly influence the electrical properties of the gate stack and consequently the reliability of the process as well as manufacturable device. Again, the choice of the annealing technique (migration of thermal flux into the layer), time-temperature cycle and sequence are key parameters influencing the device’s output characteristics. This work presents a careful selection of annealing process parameters to provide sufficient thermal budget to Si MOSCAP with atomic layer deposited HfSiO high-k gate dielectric and TiN gate metal. The post-process annealing temperatures in the range of 600 °C to 1000 °C with rapid dwell time provide a better trade-off between the desirable performance of Capacitance-Voltage hysteresis and the leakage current. The defect dynamics is thought to be responsible for the evolution of electrical characteristics in this Si MOSCAP structure specifically designed to tune the trade-off at low frequency for device application. PMID:27571412

  20. Recent developments in membrane-based separations in biotechnology processes: review.

    PubMed

    Rathore, A S; Shirke, A

    2011-01-01

    Membrane-based separations are among the most ubiquitous unit operations in biotech processes, for several key reasons. First, they can be used for a large variety of applications including clarification, concentration, buffer exchange, purification, and sterilization. Second, they are available in a variety of formats, such as depth filtration, ultrafiltration, diafiltration, nanofiltration, reverse osmosis, and microfiltration. Third, they are simple to operate and are generally robust toward normal variations in feed material and operating parameters. Fourth, membrane-based separations typically require lower capital cost than other processing options. As a result of these advantages, a typical biotech process has anywhere from 10 to 20 membrane-based separation steps. In this article we review the major developments that have occurred on this topic, with a focus on developments in the last 5 years.

  1. Comparative study of UV/TiO2, UV/ZnO and photo-Fenton processes for the organic reactive dye degradation in aqueous solution.

    PubMed

    Peternel, Igor T; Koprivanac, Natalija; Bozić, Ana M Loncarić; Kusić, Hrvoje M

    2007-09-05

    In this study advanced oxidation processes (AOPs), UV/TiO(2), UV/ZnO and photo-Fenton, were applied in order to degrade C.I. Reactive Red 45 (RR45) dye in aqueous solution. The effects of key operating parameters, such as initial pH, catalyst and hydrogen peroxide dosage, as well as the effect of initial dye concentration on decolorization and mineralization extents were studied. The primary objective was to determine the optimal conditions for each of the processes. The influence of added zeolite on the process efficiency was also studied. UV/vis spectrophotometric and total organic carbon (TOC) measurements were performed to determine decolorization and mineralization extents. The photo-Fenton process was found to be the most efficient, with 74.2% TOC removal and complete color removal achieved after a 1 h treatment.

  2. Chromatographic analysis of age-related changes in mucosal serotonin transmission in the murine distal ileum

    PubMed Central

    2012-01-01

    Background In the upper bowel, alterations in motility and absorption of key nutrients have been observed as part of the normal ageing process. Serotonin (5-HT) is a key signalling molecule in the gastrointestinal tract and is known to influence motility; however, little is known of how the ageing process alters 5-HT signalling in the bowel. Results An isocratic chromatographic method was able to detect all 5-HT precursors and metabolites. Using extracellular and intracellular sampling approaches, we were able to monitor all key parameters associated with the transmission process. There was no alteration in the levels of tryptophan and 5-HTP between 3 and 18 month old animals. There was a significant increase in the ratio of 5-HT:5-HTP and an increase in intracellular 5-HT between 3 and 18 month old animals, suggesting an increase in 5-HT synthesis. There was also a significant increase in extracellular 5-HT with age, suggesting increased 5-HT release. There was an age-related decrease in the ratio of intracellular 5-HIAA:extracellular 5-HT, whilst the amount of 5-HIAA did not change with age. In the presence of an increase in extracellular 5-HT, the lack of an age-related change in 5-HIAA is suggestive of a decrease in re-uptake via the serotonin transporter (SERT). Conclusions We have used intracellular and extracellular sampling to provide more insight into alterations in the neurotransmission process of 5-HT during normal ageing. We observed elevated 5-HT synthesis and release and a possible decrease in the activity of SERT. Taken together, these changes lead to increased 5-HT availability, may alter motility and could lead to the changes in absorption observed in the elderly. PMID:22494644

  3. Modeling Gross Primary Production of Agro-Forestry Ecosystems by Assimilation of Satellite-Derived Information in a Process-Based Model

    PubMed Central

    Migliavacca, Mirco; Meroni, Michele; Busetto, Lorenzo; Colombo, Roberto; Zenone, Terenzio; Matteucci, Giorgio; Manca, Giovanni; Seufert, Guenther

    2009-01-01

    In this paper we present results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in Northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) using an inverse modeling approach exploiting eddy covariance and satellite data. We first present a version of BIOME-BGC coupled with the radiative transfer models PROSPECT and SAILH (named PROSAILH-BGC) with the aims of i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and ii) allowing the assimilation of remotely-sensed vegetation index time series, such as MODIS NDVI, into the model. Secondly, we present a two-step model inversion for optimization of model parameters. In the first step, some key ecophysiological parameters were optimized against data collected by an eddy covariance flux tower. In the second step, important information about phenological dates and standing biomass was optimized against MODIS NDVI. The results showed that PROSAILH-BGC allowed simulation of MODIS NDVI with good accuracy and better described the canopy radiation regime. The inverse modeling approach was demonstrated to be useful for the optimization of ecophysiological model parameters, phenological dates and parameters related to the standing biomass, allowing good accuracy of daily and annual GPP predictions. In summary, this study showed that assimilation of eddy covariance and remote sensing data in a process model may provide important information for modeling gross primary production at regional scale. PMID:22399948
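    The inverse-modelling step, choosing the parameter value that minimizes the mismatch between simulated and observed index values, can be sketched with a toy one-parameter seasonal curve. The model, the phenological parameter, and all numbers below are hypothetical stand-ins for BIOME-BGC and MODIS NDVI, used only to show the shape of the optimization.

```python
# Generic sketch of an inverse-modelling step: choose the phenological
# parameter that minimizes the mismatch between simulated and "observed"
# vegetation-index values.  The toy seasonal model is hypothetical.
import math

def simulate_ndvi(peak_day, days):
    """Toy seasonal NDVI curve controlled by one phenological parameter."""
    return [0.2 + 0.6 * math.exp(-((d - peak_day) / 60.0) ** 2) for d in days]

days = list(range(0, 365, 16))            # ~16-day composite dates
observed = simulate_ndvi(190, days)       # pretend satellite observations

def cost(peak_day):                       # sum-of-squares mismatch
    sim = simulate_ndvi(peak_day, days)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

best_peak = min(range(120, 260), key=cost)   # 1-D grid-search inversion
```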

  4. Modeling gross primary production of agro-forestry ecosystems by assimilation of satellite-derived information in a process-based model.

    PubMed

    Migliavacca, Mirco; Meroni, Michele; Busetto, Lorenzo; Colombo, Roberto; Zenone, Terenzio; Matteucci, Giorgio; Manca, Giovanni; Seufert, Guenther

    2009-01-01

    In this paper we present results obtained in the framework of a regional-scale analysis of the carbon budget of poplar plantations in Northern Italy. We explored the ability of the process-based model BIOME-BGC to estimate the gross primary production (GPP) using an inverse modeling approach exploiting eddy covariance and satellite data. We first present a version of BIOME-BGC coupled with the radiative transfer models PROSPECT and SAILH (named PROSAILH-BGC) with the aims of i) improving the BIOME-BGC description of the radiative transfer regime within the canopy and ii) allowing the assimilation of remotely-sensed vegetation index time series, such as MODIS NDVI, into the model. Secondly, we present a two-step model inversion for optimization of model parameters. In the first step, some key ecophysiological parameters were optimized against data collected by an eddy covariance flux tower. In the second step, important information about phenological dates and standing biomass was optimized against MODIS NDVI. The results showed that PROSAILH-BGC allowed simulation of MODIS NDVI with good accuracy and better described the canopy radiation regime. The inverse modeling approach was demonstrated to be useful for the optimization of ecophysiological model parameters, phenological dates and parameters related to the standing biomass, allowing good accuracy of daily and annual GPP predictions. In summary, this study showed that assimilation of eddy covariance and remote sensing data in a process model may provide important information for modeling gross primary production at regional scale.

  5. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2011-12-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
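    The sampling idea behind MVFSA can be sketched with plain simulated annealing: candidate parameter sets are always accepted when they reduce a model-error score, and occasionally accepted when they worsen it, with the tolerance shrinking as the "temperature" cools. The quadratic error surface below is a hypothetical stand-in for actually running WRF and scoring its precipitation bias; none of the numbers come from the study.

```python
# Plain simulated-annealing sketch of importance sampling toward low model
# error (a simplified stand-in for MVFSA; the error surface is hypothetical).
import math, random
random.seed(7)

target = [0.5, 2.0]                   # pretend "optimal" scheme parameters

def skill_error(p):                   # stand-in for a precipitation skill score
    return sum((a - b) ** 2 for a, b in zip(p, target))

p = [0.0, 0.0]                        # initial parameter guess
err, temp = skill_error(p), 1.0
for step in range(5000):
    cand = [x + random.gauss(0, 0.1) for x in p]
    e = skill_error(cand)
    if e < err or random.random() < math.exp(-(e - err) / temp):
        p, err = cand, e              # accept improving / occasional worse move
    temp *= 0.999                     # cool: accept fewer uphill moves
```

    The sampler concentrates in the low-error region of parameter space, which is the behaviour the skill-score-guided MVFSA search exploits at far greater cost per evaluation.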

  6. Uncertainty Quantification and Parameter Tuning: A Case Study of Convective Parameterization Scheme in the WRF Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.

    2012-04-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  7. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2012-03-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e. the North America monsoon region). These results suggest that benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  8. Ethnographic field work in requirements engineering

    NASA Astrophysics Data System (ADS)

    Reddivari, Sandeep; Asaithambi, Asai; Niu, Nan; Wang, Wentao; Xu, Li Da; Cheng, Jing-Ru C.

    2017-01-01

    Requirements engineering (RE) processes have become key to developing and deploying enterprise information systems (EIS) for organisations and corporations in various fields and industrial sectors. Ethnography is a contextual method allowing scientific description of the stakeholders, their needs and their organisational customs. Despite the recognition in the RE literature that ethnography could be helpful, the actual leverage of the method has been limited and ad hoc. To overcome these problems, we report in this paper a systematic mapping study in which the relevant literature is examined. Building on the literature review, we further identify key parameters, their variations and their connections. The improved understanding of the role of ethnography in EIS RE is then presented in a consolidated model, and guidelines on how to apply ethnography are organised by the key factors uncovered. Our study can direct researchers towards a thorough understanding of the role that ethnography plays in EIS RE and, more importantly, help practitioners better integrate contextually rich and ecologically valid methods into their daily practices.

  9. Application of identified sensitive physical parameters in reducing the uncertainty of numerical simulation

    NASA Astrophysics Data System (ADS)

    Sun, Guodong; Mu, Mu

    2016-04-01

    An important source of uncertainty, which then causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. There are many physical parameters in numerical models in the atmospheric and oceanic sciences, and it would cost a great deal to reduce the uncertainties in all of them. Therefore, finding a subset of these parameters that are relatively more sensitive and important, and reducing the errors in the physical parameters in this subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamic global vegetation model was utilized to test the validity of the new approach. The results imply that nonlinear interactions among parameters play a key role in the uncertainty of numerical simulations in arid and semi-arid regions of China compared with those in northern, northeastern and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors in the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identifying relatively more sensitive and important physical parameters but also makes it viable to apply "target observations" to reduce the uncertainties in model parameters.

  10. Approach to design space from retrospective quality data.

    PubMed

    Puñal Peces, Daniel; García-Montoya, Encarna; Manich, Albert; Suñé-Negre, Josep Maria; Pérez-Lozano, Pilar; Miñarro, Montse; Ticó, Josep Ramon

    2016-01-01

    Nowadays, the entire manufacturing process is based on the current GMPs, which emphasize the reproducibility of the process, and companies hold large amounts of recorded data about their processes. The aim of this work was to establish the design space (DS) from retrospective data for a wet compression process. A design of experiments (DoE) with historical data from 4 years of industrial production was carried out, using as experimental factors the results of the previous risk analysis and eight key parameters (quality specifications) that encompassed process and quality control data. Software Statgraphics 5.0 was applied, and the data were processed to obtain eight DS as well as their safe and working ranges. Experience shows that it is possible to determine the DS retrospectively, the greatest difficulty being the handling and processing of large amounts of data; nevertheless, the practical value of this study is considerable, as it yields the DS with minimal investment in experiments, since actual production batch data are processed statistically.
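    The retrospective idea above — fit a response model to existing batch records, then mark the factor region whose predictions stay within specification — can be sketched in a few lines. The batch data, factor names, model form, and specification limits below are all invented for illustration; they are not data from the study, which used Statgraphics and eight quality responses.

```python
# Historical batches: (granulation_time_min, water_pct) -> tablet hardness (N).
# All numbers are invented for illustration, not data from the study.
batches = [
    (2.0, 10.0, 55.0), (2.0, 14.0, 62.0), (4.0, 10.0, 70.0),
    (4.0, 14.0, 78.0), (3.0, 12.0, 66.0), (3.0, 11.0, 63.0),
]

def fit_ols(rows):
    """Least-squares fit of hardness = b0 + b1*time + b2*water via normal equations."""
    xtx = [[0.0] * 3 for _ in range(3)]
    xty = [0.0] * 3
    for t, w, y in rows:
        x = (1.0, t, w)
        for i in range(3):
            xty[i] += x[i] * y
            for j in range(3):
                xtx[i][j] += x[i] * x[j]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    m = [row[:] + [xty[i]] for i, row in enumerate(xtx)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                for cc in range(col, 4):
                    m[r][cc] -= f * m[col][cc]
    return [m[i][3] / m[i][i] for i in range(3)]

b0, b1, b2 = fit_ols(batches)

# Design space: factor settings whose predicted hardness stays in spec (60-75 N).
design_space = [
    (t / 10.0, w / 10.0)
    for t in range(20, 41)            # granulation time 2.0-4.0 min
    for w in range(100, 141)          # water 10.0-14.0 %
    if 60.0 <= b0 + b1 * (t / 10.0) + b2 * (w / 10.0) <= 75.0
]
print(len(design_space), "in-spec factor combinations")
```

    Safe and working ranges would then be obtained by shrinking this region to account for model uncertainty, which the sketch deliberately omits.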

  11. Current status and challenges for automotive battery production technologies

    NASA Astrophysics Data System (ADS)

    Kwade, Arno; Haselrieder, Wolfgang; Leithoff, Ruben; Modlinger, Armin; Dietrich, Franz; Droeder, Klaus

    2018-04-01

    Production technology for automotive lithium-ion battery (LIB) cells and packs has improved considerably in the past five years. However, the transfer of developments in materials, cell design and processes from lab scale to production scale remains a challenge due to the large number of consecutive process steps and the significant impact of material properties, electrode compositions and cell designs on processes. This requires an in-depth understanding of the individual production processes and their interactions, and pilot-scale investigations into process parameter selection and prototype cell production. Furthermore, emerging process concepts must be developed at lab and pilot scale that reduce production costs and improve cell performance. Here, we present an introductory summary of the state-of-the-art production technologies for automotive LIBs. We then discuss the key relationships between process, quality and performance, as well as explore the impact of materials and processes on scale and cost. Finally, future developments and innovations that aim to overcome the main challenges are presented.

  12. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretical model, feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model of the FMU has high accuracy and clear superiority over a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    John J. Gangloff Jr; Shatil Sinha; Suresh G. Advani

    The formation and transport of voids in composite materials remains a key research area in composite manufacturing science. Knowledge of how voids, resin, and fiber reinforcement propagate throughout a composite material continuum from green state to cured state during an automated tape layup process is key to minimizing defects induced by void-initiated stress concentrations under applied loads for a wide variety of composite applications. This paper focuses on modeling resin flow in a deforming fiber tow during an automated process of partially impregnated thermoset prepreg composite material tapes. In this work, a tow unit cell based model has been presented that determines the consolidation and impregnation of a thermoset prepreg tape under an input pressure profile. A parametric study has been performed to characterize the behavior of varying tow speed and compaction forces on the degree of consolidation. Results indicate that increased tow consolidation is achieved with slower tow speeds and higher compaction forces, although the relationship is not linear. The overall modeling of this project is motivated to address optimization of the 'green state' composite properties and processing parameters to reduce or eliminate 'cured state' defects, such as porosity and delamination. This work is partially funded by the Department of Energy under Award number DE-EE0001367.

  14. Industrial applications of high-average power high-peak power nanosecond pulse duration Nd:YAG lasers

    NASA Astrophysics Data System (ADS)

    Harrison, Paul M.; Ellwi, Samir

    2009-02-01

    Within the vast range of laser materials processing applications, every type of successful commercial laser has been driven by a major industrial process. For high average power, high peak power, nanosecond pulse duration Nd:YAG DPSS lasers, the enabling process is high speed surface engineering. This includes applications such as thin film patterning and selective coating removal in markets such as the flat panel display (FPD), solar and automotive industries. Applications such as these tend to require working spots that have a uniform intensity distribution with specific shapes and dimensions, so a range of innovative beam delivery systems has been developed that convert the Gaussian beam shape produced by the laser into a range of rectangular and/or shaped spots, as required by the demands of each project. In this paper the authors will discuss the key parameters of this type of laser and examine why they are important for high speed surface engineering projects, and how they affect the underlying laser-material interaction and the removal mechanism. Several case studies will be considered in the FPD and solar markets, exploring the close link between the application, the key laser characteristics and the beam delivery system that links these together.

  15. Topology-optimization-based design method of flexures for mounting the primary mirror of a large-aperture space telescope.

    PubMed

    Hu, Rui; Liu, Shutian; Li, Quhao

    2017-05-20

    For the development of a large-aperture space telescope, one of the key techniques is the method for designing the flexures for mounting the primary mirror, as the flexures are the key components. In this paper, a topology-optimization-based method for designing flexures is presented. The structural performances of the mirror system under multiple load conditions, including static gravity and thermal loads, as well as the dynamic vibration, are considered. The mirror surface shape error caused by gravity and the thermal effect is treated as the objective function, and the first-order natural frequency of the mirror structural system is taken as the constraint. The pattern repetition constraint is added, which can ensure symmetrical material distribution. The topology optimization model for flexure design is established. The substructuring method is also used to condense the degrees of freedom (DOF) of all the nodes of the mirror system, except for the nodes that are linked to the mounting flexures, to reduce the computation effort during the optimization iteration process. A potential optimized configuration is achieved by solving the optimization model and post-processing. A detailed shape optimization is subsequently conducted to optimize its dimension parameters. Our optimization method deduces new mounting structures that significantly enhance the optical performance of the mirror system compared to the traditional methods, which only focus on the parameters of existing structures. Design results demonstrate the effectiveness of the proposed optimization method.

  16. Automatic rocks detection and classification on high resolution images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Aboudan, A.; Pacifici, A.; Murana, A.; Cannarsa, F.; Ori, G. G.; Dell'Arciprete, I.; Allemand, P.; Grandjean, P.; Portigliotti, S.; Marcer, A.; Lorenzoni, L.

    2013-12-01

    High-resolution images can be used to obtain rock locations and sizes on planetary surfaces. In particular, the rock size-frequency distribution is a key parameter for evaluating surface roughness, investigating the geologic processes that formed the surface, and assessing the hazards related to spacecraft landing. The manual search for rocks in high-resolution images (even for small areas) can be very labor-intensive work. An automatic or semi-automatic algorithm to identify rocks is mandatory to enable further processing such as determining rock presence, size, height (by means of shadows) and spatial distribution over an area of interest. Accurate localization of rock and shadow contours is the key step in rock detection. An approach to contour detection based on morphological operators and statistical thresholding is presented in this work. The identified contours are then fitted using a proper geometric model of the rocks or shadows and used to estimate salient rock parameters (position, size, area, height). The performance of this approach has been evaluated both on images of a Martian analogue area in the Moroccan desert and on HiRISE images. Results have been compared with ground truth obtained by means of manual rock mapping and prove the effectiveness of the algorithm. The rock abundance and rock size-frequency distributions derived from selected HiRISE images have been compared with the results of similar analyses performed for the landing site certification of Mars landers (Viking, Pathfinder, MER, MSL) and with the available thermal data from IRTM and TES.
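    The core of such a pipeline — threshold the image, extract connected bright regions, and tabulate their areas for the size-frequency distribution — can be illustrated on a toy brightness grid. The image and threshold below are invented; a real implementation would derive the threshold statistically and apply morphological operators to HiRISE rasters.

```python
# Toy 'image': brightness values; rocks are bright blobs against dark terrain.
image = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 7, 0],
    [0, 0, 0, 0, 7, 0],
    [0, 0, 0, 0, 0, 0],
]
THRESHOLD = 5  # illustrative; a real pipeline estimates this from image statistics

def detect_rocks(img, thresh):
    """Label 4-connected above-threshold regions and return their pixel areas."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r0 in range(rows):
        for c0 in range(cols):
            if img[r0][c0] > thresh and not seen[r0][c0]:
                # flood-fill one connected component with an explicit stack
                stack, area = [(r0, c0)], 0
                seen[r0][c0] = True
                while stack:
                    r, c = stack.pop()
                    area += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and img[rr][cc] > thresh and not seen[rr][cc]:
                            seen[rr][cc] = True
                            stack.append((rr, cc))
                areas.append(area)
    return sorted(areas, reverse=True)

areas = detect_rocks(image, THRESHOLD)
print(areas)  # pixel areas of detected rocks, input to the size-frequency count
```

    Fitting ellipses to the component contours, and repeating the procedure on shadow pixels to estimate heights, would follow the same region-extraction step.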

  17. Evaluating Vertical Moisture Structure of the Madden-Julian Oscillation in Contemporary GCMs

    NASA Astrophysics Data System (ADS)

    Guan, B.; Jiang, X.; Waliser, D. E.

    2013-12-01

    The Madden-Julian Oscillation (MJO) remains a major challenge in our understanding and modeling of tropical convection and circulation. Many models have trouble realistically simulating key characteristics of the MJO, such as its strength, period, and eastward propagation. For models that do simulate aspects of the MJO, it remains to be understood which parameters and processes are the most critical in determining the quality of the simulations. This study focuses on the vertical structure of moisture in MJO simulations, with the aim of identifying and understanding the relationship between MJO simulation quality and key parameters related to moisture. A series of 20-year simulations conducted by 26 GCMs are analyzed, including four that are coupled to ocean models and two that have a two-dimensional cloud-resolving model embedded (i.e., superparameterized). TRMM precipitation and the ERA-Interim reanalysis are used to evaluate the model simulations. MJO simulation quality is evaluated based on pattern correlations of lead/lag regressions of precipitation - a measure of the model representation of the eastward-propagating MJO convection. Models with the strongest and weakest MJOs (top and bottom quartiles) are compared in terms of differences in moisture content, moisture convergence, moistening rate, and moist static energy. It is found that models with the strongest MJOs have better representations of the observed vertical tilt of moisture. The relative importance of convection, advection, the boundary layer, and large-scale convection/precipitation is discussed in terms of their contribution to the moistening process. The results highlight the overall importance of the vertical moisture structure in MJO simulations. The work contributes to the climatological component of the joint WCRP-WWRP/THORPEX YOTC MJO Task Force and the GEWEX Atmosphere System Study (GASS) global model evaluation project focused on the vertical structure and diabatic processes of the MJO.
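    At its simplest, the lead/lag diagnostic above amounts to correlating a reference series against time-shifted copies of another and reading off the lag of maximum correlation as a propagation signature. A minimal sketch with synthetic sinusoidal "precipitation" series (not TRMM data, and one-dimensional rather than the study's spatial pattern correlations):

```python
import math

def pearson(a, b):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def lag_correlation(x, y, lag):
    """Correlation of x[t] with y[t + lag] over the overlapping samples."""
    if lag >= 0:
        return pearson(x[:len(x) - lag] if lag else x, y[lag:])
    return pearson(x[-lag:], y[:len(y) + lag])

# Synthetic series: y is x delayed by 3 time steps (a toy propagation proxy).
n, period, delay = 200, 20.0, 3
x = [math.sin(2 * math.pi * t / period) for t in range(n)]
y = [math.sin(2 * math.pi * (t - delay) / period) for t in range(n)]

lags = range(-5, 6)
best_lag = max(lags, key=lambda L: lag_correlation(x, y, L))
print(best_lag)  # recovers the imposed 3-step delay
```

    In the study the same logic runs over longitude-time precipitation fields, and the pattern correlation of the resulting regression maps against observations becomes the model skill score.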

  18. Numerical Modelling of Smouldering Combustion as a Remediation Technology for NAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Macphee, S. L.; Pironi, P.; Gerhard, J. I.; Rein, G.

    2009-05-01

    Smouldering combustion of non-aqueous phase liquids (NAPLs) is a novel concept that has significant potential for the remediation of contaminated industrial sites. Many common NAPLs, including coal tar, solvents, oils and petrochemicals are combustible and capable of generating substantial amounts of heat when burned. Smouldering is a flameless form of combustion in which a condensed phase fuel undergoes surface oxidation reactions within a porous matrix. Gerhard et al., 2006 (Eos Trans., 87(52), Fall Meeting Suppl. H24A) presented proof-of-concept experiments demonstrating the successful destruction of NAPLs embedded in a porous medium via smouldering. Pironi et al., 2008 (Eos Trans., 89(53), Fall Meet. Suppl. H34C) presented a series of column experiments illustrating the self-sustaining nature of the NAPL smouldering process and examined its sensitivity to a variety of key system parameters. In this work, a numerical model capable of simulating the propagation of a smouldering front in NAPL-contaminated porous media is presented. The model couples the multiphase flow code DNAPL3D-MT [Gerhard and Grant, 2007] with an analytical model for fire propagation [Richards, 1995]. The fire model is modified in this work for smouldering behaviour; in particular, incorporating a correlation of the velocity of the smouldering front to key parameters such as contaminant type, NAPL saturation, water saturation, porous media type and air injection rate developed from the column experiments. NAPL smouldering simulations are then validated against the column experiments. Furthermore, multidimensional simulations provide insight into scaling up the remediation process and are valuable for evaluating process sensitivity at the scales of in situ pilot and field applications.

  19. Application of Metagenomic Sequencing to Food Safety: Detection of Shiga Toxin-Producing Escherichia coli on Fresh Bagged Spinach

    PubMed Central

    Leonard, Susan R.; Mammel, Mark K.; Lacher, David W.

    2015-01-01

    Culture-independent diagnostics reduce the reliance on traditional (and slower) culture-based methodologies. Here we capitalize on advances in next-generation sequencing (NGS) to apply this approach to food pathogen detection utilizing NGS as an analytical tool. In this study, spiking spinach with Shiga toxin-producing Escherichia coli (STEC) following an established FDA culture-based protocol was used in conjunction with shotgun metagenomic sequencing to determine the limits of detection, sensitivity, and specificity levels and to obtain information on the microbiology of the protocol. We show that an expected level of contamination (∼10 CFU/100 g) could be adequately detected (including key virulence determinants and strain-level specificity) within 8 h of enrichment at a sequencing depth of 10,000,000 reads. We also rationalize the relative benefit of static versus shaking culture conditions and the addition of selected antimicrobial agents, thereby validating the long-standing culture-based parameters behind such protocols. Moreover, the shotgun metagenomic approach was informative regarding the dynamics of microbial communities during the enrichment process, including initial surveys of the microbial loads associated with bagged spinach; the microbes found included key genera such as Pseudomonas, Pantoea, and Exiguobacterium. Collectively, our metagenomic study highlights and considers various parameters required for transitioning to such sequencing-based diagnostics for food safety and the potential to develop better enrichment processes in a high-throughput manner not previously possible. Future studies will investigate new species-specific DNA signature target regimens, rational design of medium components in concert with judicious use of additives, such as antibiotics, and alterations in the sample processing protocol to enhance detection. PMID:26386062

  20. Evaluation of photosynthetic electrons derivation by exogenous redox mediators.

    PubMed

    Longatte, Guillaume; Fu, Han-Yi; Buriez, Olivier; Labbé, Eric; Wollman, Francis-André; Amatore, Christian; Rappaport, Fabrice; Guille-Collignon, Manon; Lemaître, Frédéric

    2015-10-01

    Oxygenic photosynthesis is the complex process that occurs in plants or algae by which the energy from the sun is converted into an electrochemical potential that drives the assimilation of carbon dioxide and the synthesis of carbohydrates. Quinones belong to a family of species commonly found in key processes of living organisms, like photosynthesis or respiration, in which they act as electron transporters. This makes this class of molecules a popular candidate for biofuel cell and bioenergy applications insofar as they can be used as cargo to ship electrons to an electrode immersed in the cellular suspension. Nevertheless, such electron carriers are mostly selected empirically. This is why we report on a method involving fluorescence measurements to estimate the ability of seven different quinones to accept photosynthetic electrons downstream of photosystem II, the first protein complex in the light-dependent reactions of oxygenic photosynthesis. To this end, we use a mutant of Chlamydomonas reinhardtii, a unicellular green alga, impaired in electron transfer downstream of photosystem II and assess the ability of quinones to restore electron flow by fluorescence. In this work, we defined and extracted a "derivation parameter" D that indicates the derivation efficiency of the exogenous quinones investigated. D then allows selecting 2,6-dichlorobenzoquinone, 2,5-dichlorobenzoquinone and p-phenylbenzoquinone as good candidates. More particularly, our investigations suggested that other key parameters, like the partition of quinones between different cellular compartments and their propensity to saturate these compartments, should also be taken into account in the process of selecting exogenous quinones for the purpose of deriving photoelectrons from intact algae. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Review on the Celestial Sphere Positioning of FITS Format Image Based on WCS and Research on General Visualization

    NASA Astrophysics Data System (ADS)

    Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.

    2017-11-01

    Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial system, so studying a general procedure for calculating these parameters is of great significance. By combining the CCD-related parameters of the astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), a star pattern recognition algorithm, and WCS (World Coordinate System) theory, the parameters can be calculated effectively. The CCD parameters determine the scope of the star catalogue, so they can be used to build a reference star catalogue for the celestial region corresponding to the astronomical image; star pattern recognition completes the matching between the astronomical image and the reference catalogue, yielding a table that pairs the CCD plane coordinates of a number of stars with their celestial coordinates; and, according to the chosen projection of the sphere onto the plane, WCS builds the transfer functions between these two coordinate systems, so that the astronomical position of any image pixel can be determined from the table obtained before. FITS images are the mainstream data format for transmitting and analyzing scientific data, but they can only be viewed, edited, and analyzed in professional astronomy software, which limits their use in popular astronomy education. The realization of a general image visualization method is therefore significant. First, the FITS file is converted to a PNG or JPEG image; the coordinate parameters in the FITS header are then converted to metadata in the form of AVM (Astronomy Visualization Metadata), and the metadata is added to the PNG or JPEG header. This method meets amateur astronomers' general needs for viewing and analyzing astronomical images on non-astronomical software platforms. The overall design flow is implemented in Java and was tested with SExtractor, WorldWide Telescope, a picture viewer, and other software.
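    The header-driven coordinate calculation described above reduces to a linear stage (CRPIX reference pixel and CD matrix) followed by a spherical deprojection about CRVAL. A minimal gnomonic (TAN) sketch follows; the header values are invented, and the sign and rotation conventions are simplified relative to the full FITS WCS standard.

```python
import math

# Illustrative header keywords (not taken from a real FITS file)
header = {
    "CRPIX1": 512.0, "CRPIX2": 512.0,      # reference pixel
    "CRVAL1": 150.0, "CRVAL2": 30.0,       # reference sky position (deg)
    "CD1_1": -2.8e-4, "CD1_2": 0.0,        # deg/pixel linear transform
    "CD2_1": 0.0, "CD2_2": 2.8e-4,
}

def pixel_to_sky(h, px, py):
    """Pixel -> (RA, Dec) in degrees via a CD matrix and TAN deprojection."""
    # 1. Linear stage: intermediate world coordinates (degrees -> radians)
    dx, dy = px - h["CRPIX1"], py - h["CRPIX2"]
    xi = math.radians(h["CD1_1"] * dx + h["CD1_2"] * dy)
    eta = math.radians(h["CD2_1"] * dx + h["CD2_2"] * dy)
    # 2. Gnomonic deprojection about the reference point (CRVAL1, CRVAL2)
    a0, d0 = math.radians(h["CRVAL1"]), math.radians(h["CRVAL2"])
    denom = math.cos(d0) - eta * math.sin(d0)
    ra = a0 + math.atan2(xi, denom)
    dec = math.atan(
        (math.sin(d0) + eta * math.cos(d0)) * math.cos(ra - a0) / denom
    )
    return math.degrees(ra), math.degrees(dec)

ra, dec = pixel_to_sky(header, 512.0, 512.0)
print(ra, dec)  # the reference pixel maps back to CRVAL1/CRVAL2
```

    Production code would instead rely on a WCS library that handles all FITS projection codes and distortion terms; the sketch only shows why the handful of header keywords suffices to position every pixel on the sky.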

  2. Development of an in-situ multi-component reinforced Al-based metal matrix composite by direct metal laser sintering technique — Optimization of process parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Subrata Kumar, E-mail: subratagh82@gmail.com; Bandyopadhyay, Kaushik; Saha, Partha

    2014-07-01

    In the present investigation, an in-situ multi-component reinforced aluminum based metal matrix composite was fabricated by the combination of self-propagating high-temperature synthesis and the direct metal laser sintering process. Different mixtures of Al, TiO{sub 2} and B{sub 4}C powders were used to initiate and maintain the self-propagating high-temperature synthesis by laser during the sintering process. It was found from X-ray diffraction analysis and scanning electron microscopy that reinforcements such as Al{sub 2}O{sub 3}, TiC, and TiB{sub 2} were formed in the composite. The scanning electron microscopy revealed the distribution of the reinforcement phases in the composite and their phase identities. The variable parameters such as powder layer thickness, laser power, scanning speed, hatching distance and composition of the powder mixture were optimized for higher density, lower porosity and higher microhardness using the Taguchi method. Experimental investigation shows that the density of the specimen mainly depends upon the hatching distance, composition and layer thickness. On the other hand, hatching distance, layer thickness and laser power are the significant parameters which influence the porosity. The composition, laser power and layer thickness are the key influencing parameters for microhardness. - Highlights: • The reinforcements such as Al{sub 2}O{sub 3}, TiC, and TiB{sub 2} were produced in Al-MMC through SHS. • The density is mainly influenced by the material composition and hatching distance. • Hatching distance is the major influencing parameter on porosity. • The material composition is the significant parameter to enhance the microhardness. • The SEM micrographs reveal the distribution of TiC, TiB{sub 2} and Al{sub 2}O{sub 3} in the composite.
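    Taguchi-style ranking of factor levels, as used above, rests on signal-to-noise ratios; for a response like microhardness the "larger-the-better" form is S/N = -10·log10(mean(1/y²)). A small sketch with invented hardness replicates (the factor names and values are hypothetical, not the study's measurements):

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better S/N ratio in dB: -10*log10(mean(1/y^2))."""
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

# Invented microhardness (HV) replicates for three hypothetical laser-power levels.
levels = {
    "P_low": [92.0, 95.0],
    "P_mid": [118.0, 121.0],
    "P_high": [110.0, 102.0],
}
sn = {name: sn_larger_is_better(vals) for name, vals in levels.items()}
best = max(sn, key=sn.get)
print(sn, best)
```

    Repeating this per factor across an orthogonal array, and comparing the spread of mean S/N between levels, yields the factor ranking (here: composition, laser power, layer thickness for microhardness).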

  3. Enhanced Self Tuning On-Board Real-Time Model (eSTORM) for Aircraft Engine Performance Health Tracking

    NASA Technical Reports Server (NTRS)

    Volponi, Al; Simon, Donald L. (Technical Monitor)

    2008-01-01

    A key technological concept for producing reliable engine diagnostics and prognostics exploits the benefits of fusing sensor data, information, and/or processing algorithms. This report describes the development of a hybrid engine model for a propulsion gas turbine engine, which is the result of fusing two diverse modeling methodologies: a physics-based model approach and an empirical model approach. The report describes the process and methods involved in deriving and implementing a hybrid model configuration for a commercial turbofan engine. Among the intended uses for such a model is to enable real-time, on-board tracking of engine module performance changes and engine parameter synthesis for fault detection and accommodation.

  4. A new supernova light curve modeling program

    NASA Astrophysics Data System (ADS)

    Jäger, Zoltán; Nagy, Andrea P.; Biro, Barna I.; Vinkó, József

    2017-12-01

    Supernovae are extremely energetic explosions that highlight the violent deaths of various types of stars. Studying such cosmic explosions is important for several reasons. Supernovae play a key role in cosmic nucleosynthesis processes, and they are also the anchors of methods for measuring extragalactic distances. Several exotic physical processes take place in the expanding ejecta produced by the explosion. We have developed a fast and simple semi-analytical code to model the light curve of core-collapse supernovae. This allows the determination of their most important basic physical parameters, like the radius of the progenitor star, the mass of the ejected envelope, and the mass of the radioactive nickel synthesized during the explosion, among others.
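    One ingredient of such semi-analytical light-curve models is the radioactive heating from the ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe decay chain. The sketch below evaluates only that instantaneous heating term, using the commonly quoted decay times and specific heating rates; diffusion, recombination, and gamma-ray leakage — the rest of the light-curve physics — are deliberately omitted, and this is not the code described in the entry.

```python
import math

# Commonly quoted decay e-folding times (days) and specific heating rates
# (erg per gram per second) for the 56Ni -> 56Co -> 56Fe chain.
TAU_NI = 8.8        # 56Ni e-folding time, days
TAU_CO = 111.3      # 56Co e-folding time, days
EPS_NI = 3.9e10     # erg/g/s released by 56Ni decay
EPS_CO = 6.8e9      # erg/g/s released by 56Co decay
M_SUN_G = 1.989e33  # grams per solar mass

def decay_luminosity(t_days, m_ni_msun):
    """Instantaneous radioactive heating L(t) in erg/s for a given 56Ni mass."""
    m = m_ni_msun * M_SUN_G
    ni = EPS_NI * math.exp(-t_days / TAU_NI)
    co = EPS_CO * (math.exp(-t_days / TAU_CO) - math.exp(-t_days / TAU_NI))
    return m * (ni + co)

# Heating curve for 0.05 solar masses of 56Ni, sampled every 10 days.
curve = [(t, decay_luminosity(t, 0.05)) for t in range(0, 201, 10)]
for t, lum in curve[:4]:
    print(t, f"{lum:.3e}")
```

    Fitting an observed tail against this heating law is what ties the late-time light curve to the synthesized nickel mass.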

  5. Study of absorption and re-emission processes in a ternary liquid scintillation system

    NASA Astrophysics Data System (ADS)

    Xiao, Hua-Lin; Li, Xiao-Bo; Zheng, Dong; Cao, Jun; Wen, Liang-Jian; Wang, Nai-Yan

    2010-11-01

    Liquid scintillators are widely used as the neutrino target in neutrino experiments. The absorption and emission of the different components of a ternary liquid scintillator (Linear Alkyl Benzene (LAB) as the solvent, 2,5-diphenyloxazole (PPO) as the fluor and p-bis-(o-methylstyryl)-benzene (bis-MSB) as the wavelength shifter) are studied. It is shown that the absorption of this liquid scintillator is dominated by LAB and PPO at wavelengths below 349 nm, while absorption by bis-MSB becomes dominant at wavelengths above 349 nm. The fluorescence quantum yields, which are the key parameters for modeling the absorption and re-emission processes in large liquid scintillation detectors, are measured.

  6. Online analysis and process control in recombinant protein production (review).

    PubMed

    Palmer, Shane M; Kunji, Edmund R S

    2012-01-01

    Online analysis and control is essential for efficient and reproducible bioprocesses. A key factor in real-time control is the ability to measure critical variables rapidly. Online in situ measurements are the preferred option and minimize the potential loss of sterility. The challenge is to provide sensors with a good lifespan that withstand harsh bioprocess conditions, remain stable for the duration of a process without the need for recalibration, and offer a suitable working range. In recent decades, many new techniques have arisen that promise to extend the possibilities of analysis and control, not only by providing new parameters for analysis but also by improving accepted, well-practiced measurements.

  7. Dynamics of Ranking Processes in Complex Systems

    NASA Astrophysics Data System (ADS)

    Blumm, Nicholas; Ghoshal, Gourab; Forró, Zalán; Schich, Maximilian; Bianconi, Ginestra; Bouchaud, Jean-Philippe; Barabási, Albert-László

    2012-09-01

    The world is addicted to ranking: everything, from the reputation of scientists, journals, and universities to purchasing decisions is driven by measured or perceived differences between them. Here, we analyze empirical data capturing real time ranking in a number of systems, helping to identify the universal characteristics of ranking dynamics. We develop a continuum theory that not only predicts the stability of the ranking process, but shows that a noise-induced phase transition is at the heart of the observed differences in ranking regimes. The key parameters of the continuum theory can be explicitly measured from data, allowing us to predict and experimentally document the existence of three phases that govern ranking stability.

  8. Nonisothermal glass molding for the cost-efficient production of precision freeform optics

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Dambon, Olaf; Klocke, Fritz

    2016-07-01

    Glass molding has become a key replication-based technology to satisfy the intensively growing demands for complex precision optics in today's photonics market. However, the state-of-the-art replicative technologies are still limited, mainly because they cannot meet the requirements of mass production. This paper introduces a newly developed nonisothermal glass molding process in which a complex-shaped optic is produced in a very short process cycle. The innovative molding technology promises cost-efficient production because of increased mold lifetime, lower energy consumption, and high throughput from a fast process chain. At this early stage of process development, the research focuses on integrating finite element simulation into the process chain to reduce time and labor-intensive cost. By virtue of numerical modeling, defects including chill ripples and glass sticking in the nonisothermal molding process can be predicted and their consequent effects avoided. In addition, the influences of process parameters and glass preforms on the surface quality, form accuracy, and residual stress are discussed. A series of experiments was carried out to validate the simulation results. The successful modeling therefore provides a systematic strategy for glass preform design, mold compensation, and optimization of the process parameters. In conclusion, the integration of simulation into the entire nonisothermal glass molding process chain will significantly increase manufacturing efficiency as well as reduce the time-to-market for the mass production of complex precision yet low-cost glass optics.

  9. Mathematical support for automated geometry analysis of lathe machining of oblique peakless round-nose tools

    NASA Astrophysics Data System (ADS)

    Filippov, A. V.; Tarasov, S. Yu; Podgornyh, O. A.; Shamarin, N. N.; Filippova, E. O.

    2017-01-01

    The automation of engineering processes requires relevant mathematical support and computer software. Analysis of metal cutting kinematics and tool geometry is a key task at the preproduction stage. This paper focuses on developing a procedure for determining the tool geometry in lathe machining with oblique peakless round-nose tools by means of vector/matrix transformations. Such an approach allows integration into modern mathematical software packages, in contrast to the traditional analytic description, and is therefore very promising for automated control of the preproduction process. A kinematic criterion for applicable tool geometry has been developed from the results of this study, and the effect of tool blade inclination and curvature on the geometry-dependent process parameters was evaluated.
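
    The vector/matrix formulation the abstract describes can be illustrated with a minimal sketch (not the authors' actual procedure): the cutting-edge direction is expressed in machine coordinates by chaining rotation matrices for assumed setting angles, and the effective inclination is read off the transformed vector. The angle values are illustrative.

```python
import numpy as np

def rot_y(a):
    # Rotation matrix about the y-axis by angle a (radians)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    # Rotation matrix about the z-axis by angle a (radians)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical setting angles (illustrative values, not from the paper)
inclination = np.radians(10.0)  # blade inclination, applied about y
approach = np.radians(45.0)     # approach (lead) angle, applied about z

# Cutting-edge direction in tool coordinates, then in machine coordinates
edge_tool = np.array([1.0, 0.0, 0.0])
edge_machine = rot_z(approach) @ rot_y(inclination) @ edge_tool

# Effective inclination: angle between the edge and the base (xy) plane.
# A rotation about z preserves it, so here it equals the set inclination.
eff_inclination = np.degrees(np.arcsin(abs(edge_machine[2])))
```

The same chained-rotation pattern extends to rake and clearance angles; the advantage over closed-form trigonometric identities is that each setting motion is one extra matrix factor.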

  10. Process design and control of a twin screw hot melt extrusion for continuous pharmaceutical tamper-resistant tablet production.

    PubMed

    Baronsky-Probst, J; Möltgen, C-V; Kessler, W; Kessler, R W

    2016-05-25

    Hot melt extrusion (HME) is a well-known process within the plastic and food industries that has been utilized for the past several decades and is increasingly accepted by the pharmaceutical industry for continuous manufacturing. For tamper-resistant formulations of, e.g., opioids, HME is the most efficient production technique. The focus of this study is thus to evaluate the manufacturability of the HME process for tamper-resistant formulations. Parameters such as the specific mechanical energy (SME), as well as the melt pressure and its standard deviation, are important and are discussed in this study. In the first step, the existing process data are analyzed by means of multivariate data analysis. Key critical process parameters such as feed rate, screw speed, and the concentration of the API in the polymers are identified, and critical quality parameters of the tablet are defined. In the second step, a relationship between the critical material, product, and process quality attributes is established by means of Design of Experiments (DoE). The resulting SME and the temperature at the die are essential data points needed to indirectly assess the degradation of the API, which should be minimal. NIR spectroscopy is used to monitor the material during the extrusion process. In contrast to most applications, in which the probe is directly integrated into the die, the optical sensor is integrated into the cooling line of the strands. This reduces probe design and maintenance costs and increases the robustness of the chemometric models. Finally, a process measurement system is installed to monitor and control all of the critical attributes in real time by means of first principles, DoE models, soft-sensor models, and spectroscopic information. Overall, the process is very robust as long as the screw speed is kept low. Copyright © 2015 Elsevier B.V. All rights reserved.
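
    The specific mechanical energy monitored in such studies is, to first order, the drive energy delivered to the melt per unit mass of extruded material. A hedged back-of-the-envelope sketch, with all numbers assumed for illustration (not taken from the study):

```python
# Specific mechanical energy (SME) of a twin-screw extruder: a common
# first-order estimate is drive power delivered to the melt divided by
# mass throughput. All values below are illustrative assumptions.

motor_power_kw = 9.0     # rated drive power (assumed)
torque_fraction = 0.62   # fraction of maximum torque shown by the drive
screw_speed_rpm = 150.0  # actual screw speed
max_speed_rpm = 300.0    # maximum screw speed (assumed)
feed_rate_kg_h = 4.0     # powder feed rate

# Mechanical power actually delivered to the melt
power_kw = motor_power_kw * torque_fraction * screw_speed_rpm / max_speed_rpm

# SME in kWh per kg of extrudate
sme_kwh_per_kg = power_kw / feed_rate_kg_h
```

In a control scheme like the one described, this quantity would be tracked in real time alongside die temperature as an indirect indicator of API stress.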

  11. Vision technology/algorithms for space robotics applications

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar; Defigueiredo, Rui J. P.

    1987-01-01

    Automation and robotics for space applications have been proposed for increased productivity, improved reliability, increased flexibility, and higher safety, as well as for automating time-consuming tasks, increasing the productivity and performance of crew-accomplished tasks, and performing tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. Future vision/sensing is projected to evolve toward the fusion of multisensors ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.

  12. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope.

    PubMed

    Sader, John E; Yousefi, Morteza; Friend, James R

    2014-02-01

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.

  13. Uncertainty in least-squares fits to the thermal noise spectra of nanomechanical resonators with applications to the atomic force microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sader, John E., E-mail: jsader@unimelb.edu.au; Yousefi, Morteza; Friend, James R.

    2014-02-15

    Thermal noise spectra of nanomechanical resonators are used widely to characterize their physical properties. These spectra typically exhibit a Lorentzian response, with additional white noise due to extraneous processes. Least-squares fits of these measurements enable extraction of key parameters of the resonator, including its resonant frequency, quality factor, and stiffness. Here, we present general formulas for the uncertainties in these fit parameters due to sampling noise inherent in all thermal noise spectra. Good agreement with Monte Carlo simulation of synthetic data and measurements of an Atomic Force Microscope (AFM) cantilever is demonstrated. These formulas enable robust interpretation of thermal noise spectra measurements commonly performed in the AFM and adaptive control of fitting procedures with specified tolerances.

  14. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime, where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision, and those for a three-state model with about 20% precision.
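
    The BIC comparison described here can be sketched in a few lines. The maximized log-likelihoods below are hypothetical placeholders for fitted 2-, 3-, and 4-state models; only the BIC bookkeeping itself is real.

```python
import math

# BIC = k*ln(n) - 2*ln(Lhat); the model with the lowest BIC is selected.
def bic(log_likelihood, n_params, n_photons):
    return n_params * math.log(n_photons) - 2.0 * log_likelihood

n_photons = 2000  # length of the photon trajectory (as in the abstract)
candidates = {
    # model: (maximized log-likelihood [hypothetical], free parameters)
    "2-state": (-5210.0, 4),   # 2 intensities + 2 rate constants
    "3-state": (-5185.0, 9),   # 3 intensities + 6 rate constants
    "4-state": (-5181.0, 16),  # 4 intensities + 12 rate constants
}
scores = {m: bic(ll, k, n_photons) for m, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
```

For these illustrative numbers the 4-state model fits slightly better but its extra parameters are penalized by the `k*ln(n)` term, so the 3-state model wins, which is the behavior the abstract reports.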

  15. Selected physical properties of various diesel blends

    NASA Astrophysics Data System (ADS)

    Hlaváčová, Zuzana; Božiková, Monika; Hlaváč, Peter; Regrut, Tomáš; Ardonová, Veronika

    2018-01-01

    The quality determination of biofuels requires identifying their chemical and physical parameters. The key physical parameters are rheological, thermal, and electrical properties. In our study, we investigated samples of diesel blends with rapeseed methyl ester content in the range from 3 to 100%. For these, we measured basic thermophysical properties, including thermal conductivity and thermal diffusivity, using two different transient methods: the hot-wire method and the dynamic plane source method. Every thermophysical parameter was measured 100 times with both methods for all samples. Dynamic viscosity was measured during the heating process over the temperature range 20-80°C, using a digital rotational viscometer (Brookfield DV 2T). Electrical conductivity was measured using a digital conductivity meter (Model 1152) in a temperature range from -5 to 30°C. The highest values of the thermal parameters were reached in the diesel sample with the highest biofuel content. The dynamic viscosity of the samples increased with higher concentrations of the bio-component rapeseed methyl ester, and the electrical conductivity of the blends also increased with rapeseed methyl ester content.
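
    The temperature dependence of dynamic viscosity measured in such heating runs is commonly summarized with an Arrhenius-type model. A hedged sketch with synthetic data (the parameter values are assumptions for illustration, not the study's results):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical viscosity data for one diesel blend (mPa*s vs kelvin),
# generated here from an Arrhenius-type law eta = A * exp(Ea / (R*T)).
T = np.array([293.15, 313.15, 333.15, 353.15])
A_true, Ea_true = 2.0e-3, 18_000.0
eta = A_true * np.exp(Ea_true / (R * T))

# Linearize: ln(eta) = ln(A) + (Ea/R) * (1/T), then fit a straight line
slope, intercept = np.polyfit(1.0 / T, np.log(eta), 1)
Ea_fit = slope * R          # apparent activation energy, J/mol
A_fit = np.exp(intercept)   # pre-exponential factor, mPa*s
```

Fitting the logarithm against 1/T turns the nonlinear problem into ordinary linear least squares, which is the usual way such 20-80°C viscosity sweeps are reduced to two numbers.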

  16. Modelling and multi objective optimization of WEDM of commercially Monel super alloy using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu

    2016-09-01

    In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and non-traditional particle swarm optimization techniques (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters such as pulse on time (TON), pulse off time (TOFF), peak current (IP), and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict the MRR and SR over a wide range of input parameters. The optimization of multiple responses was done to satisfy the priorities of multiple users by using the Taguchi-desirability function method and the particle swarm optimization technique. Analysis of variance (ANOVA) was also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was verified.
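
    The desirability step can be illustrated with Derringer-Suich desirability functions: MRR is larger-the-better, SR is smaller-the-better, and the composite desirability is their geometric mean. The response values and limits below are hypothetical, not the study's settings.

```python
# Derringer-Suich desirability functions: each response is mapped to
# [0, 1], then combined as a geometric mean. All numbers are assumed.

def d_larger_better(y, low, high, weight=1.0):
    # Larger-the-better response (e.g. MRR): 0 below `low`, 1 above `high`
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def d_smaller_better(y, low, high, weight=1.0):
    # Smaller-the-better response (e.g. SR): 1 below `low`, 0 above `high`
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return ((high - y) / (high - low)) ** weight

# Illustrative responses for one machining-parameter setting
mrr, sr = 12.5, 2.1  # mm^3/min and micrometres (assumed values)
d1 = d_larger_better(mrr, low=5.0, high=20.0)
d2 = d_smaller_better(sr, low=1.0, high=4.0)
composite = (d1 * d2) ** 0.5  # overall desirability in [0, 1]
```

An optimizer such as PSO would then search TON, TOFF, IP, and WF for the setting that maximizes this composite score; unequal `weight` exponents encode different customer priorities.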

  17. The Compositional Dependence of the Microstructure and Properties of CMSX-4 Superalloys

    NASA Astrophysics Data System (ADS)

    Yu, Hao; Xu, Wei; Van Der Zwaag, Sybrand

    2018-01-01

    The degradation of creep resistance in Ni-based single-crystal superalloys is essentially ascribed to their microstructural evolution. Yet there is a lack of work that manages to predict (even qualitatively) the effect of alloying element concentrations on the rate of microstructural degradation. In this research, a computational model is presented that connects the rafting kinetics of Ni superalloys to their chemical composition by combining thermodynamic calculations with a modified microstructural model. To simulate the evolution of key microstructural parameters during creep, the isotropic coarsening rate and the γ/γ' misfit stress are defined as composition-related parameters, and the effects of service temperature, time, and applied stress are taken into consideration. Two commercial superalloys, for which the kinetics of the rafting process have been reported, are selected as the reference alloys, and the corresponding microstructural parameters are simulated and compared with experimental observations reported in the literature. The results confirm that our physical model, which does not require any fitting parameters, manages to predict (semiquantitatively) the microstructural parameters for different service conditions, as well as the effects of alloying element concentrations. The model can contribute to the computational design of new Ni-based superalloys.

  18. Will Quantitative Proteomics Redefine Some of the Key Concepts in Skeletal Muscle Physiology?

    PubMed

    Gizak, Agnieszka; Rakus, Dariusz

    2016-01-11

    Molecular and cellular biology methodology is traditionally based on reasoning called "the mechanistic explanation". In practice, this means identifying and selecting correlations between biological processes which result from our manipulation of a biological system. In theory, a successful application of this approach requires precise knowledge of all parameters of the studied system. In practice, however, due to the systems' complexity, this requirement is rarely, if ever, fulfilled. Typically, it is limited to quantitative or semi-quantitative measurements of selected parameters (e.g., concentrations of some metabolites) and a qualitative or semi-quantitative description of changes in the expression or post-translational modifications of selected proteins. A quantitative proteomics approach offers the possibility of quantitatively characterizing the entire proteome of a biological system, in terms of both protein titer and post-translational modifications. This enables not only more accurate testing of novel hypotheses but also provides tools that can be used to verify some of the most fundamental dogmas of modern biology. In this short review, we discuss some of the consequences of using quantitative proteomics to verify several key concepts in skeletal muscle physiology.

  19. Probabilistic inversion of expert assessments to inform projections about Antarctic ice sheet responses

    PubMed Central

    Wong, Tony E.; Keller, Klaus

    2017-01-01

    The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095

  20. Design Considerations For Imaging Charge-Coupled Device (ICCD) Star Sensors

    NASA Astrophysics Data System (ADS)

    McAloon, K. J.

    1981-04-01

    A development program is currently underway to produce a precision star sensor using imaging charge coupled device (ICCD) technology. The effort is the critical component development phase for the Air Force Multi-Mission Attitude Determination and Autonomous Navigation System (MADAN). A number of unique considerations have evolved in designing an arcsecond accuracy sensor around an ICCD detector. Three tiers of performance criteria are involved: at the spacecraft attitude determination system level, at the star sensor level, and at the detector level. Optimum attitude determination system performance involves a tradeoff between Kalman filter iteration time and sensor ICCD integration time. The ICCD star sensor lends itself to the use of a new approach in the functional interface between the attitude determination system and the sensor. At the sensor level image data processing tradeoffs are important for optimum sensor performance. These tradeoffs involve the sensor optic configuration, the optical point spread function (PSF) size and shape, the PSF position locator, and the microprocessor locator algorithm. Performance modelling of the sensor mandates the use of computer simulation programs. Five key performance parameters at the ICCD detector level are defined. ICCD error characteristics have also been isolated to five key parameters.

  1. Numerical Simulation Of Cratering Effects In Adobe

    DTIC Science & Technology

    2013-07-01

    DEVELOPMENT OF MATERIAL PARAMETERS ... PROBLEM SETUP ... PARAMETER ADJUSTMENTS ... GLOSSARY ...dependent yield surface with the Geological Yield Surface (GEO) modeled in CTH using well-characterized adobe. By identifying key parameters that...

  2. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model for full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations for two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  3. Assimilating solar-induced chlorophyll fluorescence into the terrestrial biosphere model BETHY-SCOPE v1.0: model description and information content

    NASA Astrophysics Data System (ADS)

    Norton, Alexander J.; Rayner, Peter J.; Koffi, Ernest N.; Scholze, Marko

    2018-04-01

    The synthesis of model and observational information using data assimilation can improve our understanding of the terrestrial carbon cycle, a key component of the Earth's climate-carbon system. Here we provide a data assimilation framework for combining observations of solar-induced chlorophyll fluorescence (SIF) and a process-based model to improve estimates of terrestrial carbon uptake or gross primary production (GPP). We then quantify and assess the constraint SIF provides on the uncertainty in global GPP through model process parameters in an error propagation study. By incorporating 1 year of SIF observations from the GOSAT satellite, we find that the parametric uncertainty in global annual GPP is reduced by 73 % from ±19.0 to ±5.2 Pg C yr-1. This improvement is achieved through strong constraint of leaf growth processes and weak to moderate constraint of physiological parameters. We also find that the inclusion of uncertainty in shortwave down-radiation forcing has a net-zero effect on uncertainty in GPP when incorporated into the SIF assimilation framework. This study demonstrates the powerful capacity of SIF to reduce uncertainties in process-based model estimates of GPP and the potential for improving our predictive capability of this uncertain carbon flux.

  4. Co-pelletization of sewage sludge and agricultural wastes.

    PubMed

    Yilmaz, Ersel; Wzorek, Małgorzata; Akçay, Selin

    2018-06-15

    This paper concerns the production process and properties of pellets based on biomass wastes. Co-pelletization was performed for sewage sludge from a municipal wastewater treatment plant and other biomass materials such as animal and olive wastes. The aim of the present study was to identify the key factors affecting the conditions of the sewage sludge and agricultural residue co-pelletization processes. The impact of raw material type, pellet length, moisture content, and particle size on the physical properties was investigated, and the technical and technological aspects of co-pelletization were discussed in detail. The physical parameters of the pellets (drop strength, absorbability, and water resistance) were determined, along with energy parameters: lower and higher heating values and the content of ash and volatiles. Results showed the range of raw-material moisture, as well as the ratio of sewage sludge in the pelletized materials, necessary to obtain good-quality biofuels. The analysis of the energetic properties indicated that pellets produced from sewage sludge and other biomass materials can be applied in co-combustion with coal. These biofuels have properties that make them suitable for use in thermal processes and enable their transport and storage. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment

    NASA Astrophysics Data System (ADS)

    Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin

    2017-10-01

    Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware and are therefore not suitable for mobile terminals with limited computing resources. In addition, these public-key algorithms are not resistant to quantum computing. This paper studies the public-key algorithm NTRU, which resists quantum computation, by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability: first, increase the value of the parameter q; second, add an authentication condition during the signature phase that the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of private-key information from the signature value, increases the probability of generating a reasonable signature value, improves the signature rate, and avoids the propagation of invalid signatures in the network, although the scheme imposes certain restrictions on parameter selection.

  6. Engineering trade studies for a quantum key distribution system over a 30  km free-space maritime channel.

    PubMed

    Gariano, John; Neifeld, Mark; Djordjevic, Ivan

    2017-01-20

    Here, we present the engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel for the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used, Eve is performing the intercept resend attack and photon number splitting attack, prior knowledge of Eve's location is known, and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of changing several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate for a constant brightness. Additionally, we will optimize the brightness of the source for each parameter to study the improvement in the secure key rate.

  7. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    The relevance of optimizing the working-process parameters of air-conditioning systems is demonstrated in this work. The research uses simulation modeling: the optimization criteria are considered, the target functions are analyzed, and the key factors of technical and economic optimization are discussed. The optimal solution of the multi-objective problem is found by minimizing a two-component objective vector, built by the Pareto method of linear weighted compromises from the target functions for total capital costs and total operating costs; the problem is solved in the MathCAD environment. The results show that, away from the optimal solutions, the technical and economic parameters of air-conditioning systems deviate considerably from their minimum values, and the deviations grow significantly as the technical parameters move away from the values that are optimal for both capital investment and operating costs. Producing and operating conditioners with parameters that deviate considerably from the optimal values leads to increased material and power costs. The research establishes the boundaries of the region of optimal values for the technical and economic parameters in air-conditioning system design.
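
    The Pareto method of linear weighted compromises reduces the two-target vector to a scalar objective. A minimal sketch with hypothetical cost functions (the abstract does not give the real capital/operating-cost models):

```python
# Weighted-sum scalarization of two competing objectives (total capital
# cost vs. total operating cost) over a single design parameter x.
# Both cost functions are hypothetical stand-ins.

def capital_cost(x):
    return 2.0 + 0.5 * x  # grows with equipment size (assumed)

def operating_cost(x):
    return 8.0 / x        # shrinks with equipment size (assumed)

def scalarized_min(w1, w2, grid):
    # Minimize w1*capital + w2*operating by grid search over the range
    return min(grid, key=lambda x: w1 * capital_cost(x) + w2 * operating_cost(x))

grid = [0.5 + 0.01 * i for i in range(900)]  # design range 0.5 .. 9.49

# Varying the weights traces an approximation of the Pareto front
front = [(capital_cost(x), operating_cost(x))
         for x in (scalarized_min(w, 1.0 - w, grid) for w in (0.2, 0.5, 0.8))]
```

Each weight pair picks one compromise point; sweeping the weights produces the trade-off curve from which a designer selects the acceptable balance of capital and operating costs.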

  8. Combinatorial influence of environmental parameters on transcription factor activity.

    PubMed

    Knijnenburg, T A; Wessels, L F A; Reinders, M J T

    2008-07-01

    Cells receive a wide variety of environmental signals, which are often processed combinatorially to generate specific genetic responses. Changes in transcript levels, as observed across different environmental conditions, can, to a large extent, be attributed to changes in the activity of transcription factors (TFs). However, in unraveling these transcription regulation networks, the actual environmental signals are often not incorporated into the model, simply because they have not been measured. The unquantified heterogeneity of the environmental parameters across microarray experiments frustrates regulatory network inference. We propose an inference algorithm that models the influence of environmental parameters on gene expression. The approach is based on a yeast microarray compendium of chemostat steady-state experiments. Chemostat cultivation enables the accurate control and measurement of many of the key cultivation parameters, such as nutrient concentrations, growth rate and temperature. The observed transcript levels are explained by inferring the activity of TFs in response to combinations of cultivation parameters. The interplay between activated enhancers and repressors that bind a gene promoter determine the possible up- or downregulation of the gene. The model is translated into a linear integer optimization problem. The resulting regulatory network identifies the combinatorial effects of environmental parameters on TF activity and gene expression. The Matlab code is available from the authors upon request. Supplementary data are available at Bioinformatics online.
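
    As a toy illustration of the inference problem (not the paper's linear integer program, which also encodes enhancer/repressor logic): if a gene-TF connectivity matrix is assumed known, TF activities per condition can be recovered from the expression matrix by ordinary least squares.

```python
import numpy as np

# Hedged sketch: expression E (genes x conditions) is modeled as
# connectivity C (genes x TFs) times TF activities A (TFs x conditions).
# With C assumed known, A is estimated by least squares. This continuous
# relaxation only illustrates the data shapes, not the paper's method.

rng = np.random.default_rng(1)
n_genes, n_tfs, n_conditions = 200, 5, 12

C = rng.integers(0, 2, size=(n_genes, n_tfs)).astype(float)  # gene-TF wiring
A_true = rng.normal(size=(n_tfs, n_conditions))              # TF activities
E = C @ A_true + 0.05 * rng.normal(size=(n_genes, n_conditions))  # expression

A_est, *_ = np.linalg.lstsq(C, E, rcond=None)  # recover activities per condition
```

In the paper's setting, the conditions correspond to chemostat cultivations with controlled nutrient, growth-rate, and temperature settings, so the recovered activity profiles can be read against those environmental parameters.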

  9. In-Situ Waviness Characterization of Metal Plates by a Lateral Shearing Interferometric Profilometer

    PubMed Central

    Frade, María; Enguita, José María; Álvarez, Ignacio

    2013-01-01

    Characterizing waviness in sheet metal is a key process for quality control in many industries, such as automotive and home appliance manufacturing. However, there is still no known technique able to work in an automated in-floor inspection system. The literature describes many techniques developed in the last three decades, but most of them are either slow, only able to work in laboratory conditions, need very short (unsafe) working distances, or are only able to estimate certain waviness parameters. In this article we propose the use of a lateral shearing interferometric profilometer, which is able to obtain a 19 mm profile in a single acquisition, with sub-micron precision, in an uncontrolled environment, and from a working distance greater than 90 mm. This system allows direct measurement of all needed waviness parameters even with objects in movement. We describe a series of experiments over several samples of steel plates to validate the sensor and the processing method, and the results are in close agreement with those obtained with a contact stylus device. The sensor is an ideal candidate for on-line or in-machine fast automatic waviness assessment, reducing delays and costs in many metalworking processes. PMID:23584120
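
    A hedged sketch of the downstream computation: a measured profile is split into waviness and roughness with a Gaussian filter (in the spirit of ISO 16610-21), and the arithmetic-mean waviness Wa is computed. The profile and cutoff below are synthetic assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic 19 mm profile (matching the sensor's single-shot length):
# a long-wave waviness component plus short-wave roughness, in mm.
dx = 0.01                   # sampling step, mm
x = np.arange(0.0, 19.0, dx)
profile = (0.004 * np.sin(2 * np.pi * x / 8.0)      # waviness, 8 mm wavelength
           + 0.0005 * np.sin(2 * np.pi * x / 0.1))  # roughness, 0.1 mm wavelength

# Gaussian profile filter: sigma chosen so the cutoff wavelength is
# transmitted at 50% (standard Gaussian-filter relation); cutoff assumed.
cutoff = 2.5  # mm
sigma = cutoff * np.sqrt(np.log(2) / (2 * np.pi**2)) / dx  # in samples
waviness = gaussian_filter1d(profile, sigma)
roughness = profile - waviness

# Arithmetic-mean waviness of the mean-centred waviness profile
Wa = np.mean(np.abs(waviness - waviness.mean()))
```

The same separation yields the other waviness parameters (Wt, Wq, ...) from the `waviness` array, which is what an in-line sensor like the one above would report per acquisition.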

  10. Cutting Zone Temperature Identification During Machining of Nickel Alloy Inconel 718

    NASA Astrophysics Data System (ADS)

    Czán, Andrej; Daniš, Igor; Holubják, Jozef; Zaušková, Lucia; Czánová, Tatiana; Mikloš, Matej; Martikáň, Pavol

    2017-12-01

    The quality of a machined surface is affected by the quality of the cutting process, which is influenced by many parameters. The cutting temperature is one of the most important parameters affecting tool life and the quality of machined surfaces. Its identification and determination is a key objective in specialized machining processes such as dry machining of hard-to-machine materials. It is well known that the maximum temperature occurs on the tool rake face in the vicinity of the cutting edge. A moderate cutting-edge temperature and a low thermal shock reduce tool wear, and a low temperature gradient in the machined sublayer reduces the risk of high tensile residual stresses. The thermocouple method was used to measure the temperature directly in the cutting zone. An original thermocouple was specially developed for measuring the temperature in the cutting zone and in the surface and subsurface layers of the machined surface. This paper deals with the identification of temperature and temperature gradients during dry peripheral milling of Inconel 718. The measurements were used to identify the temperature gradients and to reconstruct the thermal distribution in the cutting zone under various cutting conditions.

  11. Frequency of Tropical Ocean Deep Convection and Global Warming

    NASA Astrophysics Data System (ADS)

    Aumann, H. H.; Behrangi, A.; Ruzmaikin, A.

    2017-12-01

    The average of 36 CMIP5 models predicts about 3 K of warming and a 4.7% increase in precipitation for the tropical oceans with a doubling of CO2 by the end of this century. For this scenario we evaluate the increase in the frequency of Deep Convective Clouds (DCC) in the tropical oceans. We select only DCC which reach or penetrate the tropopause in the 15 km AIRS footprint. The evaluation is based on Probability Distribution Functions (PDFs) of the current temperatures of the tropical oceans, those predicted by the mean of the CMIP5 models, and the PDF of the DCC process. The PDF of the DCC process is derived from the Atmospheric Infrared Sounder (AIRS) between the years 2003 and 2016. During this time, variability due to ENSO years provided a 1 K peak-to-peak change in the mean tropical SST. The key parameter is the SST associated with the onset of the DCC process. This parameter shifts only 0.5 K for each 1 K of warming of the oceans. As a result, the frequency of DCC is expected to increase by about 50% above the current frequency by the end of this century.
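
    The PDF argument can be made concrete with assumed numbers (the Gaussian SST parameters and onset threshold below are illustrative placeholders, not the AIRS-derived values): warming shifts the SST distribution by the full amount but shifts the DCC onset by only half, so more of the distribution ends up above the onset.

```python
from scipy.stats import norm

# Illustrative Gaussian SST PDF and DCC onset threshold (all assumed)
mu, sigma = 300.0, 5.0  # present-day SST mean and spread, K
onset = 305.0           # SST at DCC onset, K
warming = 3.0           # end-of-century ocean warming, K

# DCC frequency ~ probability that SST exceeds the onset threshold.
# The onset shifts only 0.5 K per 1 K of warming (the key observation).
f_now = norm.sf(onset, mu, sigma)
f_future = norm.sf(onset + 0.5 * warming, mu + warming, sigma)

ratio = f_future / f_now  # > 1: DCC becomes more frequent
```

The size of the increase depends entirely on the assumed PDF width and threshold; the point of the sketch is only the mechanism, i.e. that a half-speed onset shift guarantees `ratio > 1`.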

  12. In-situ waviness characterization of metal plates by a lateral shearing interferometric profilometer.

    PubMed

    Frade, María; Enguita, José María; Alvarez, Ignacio

    2013-04-12

    Characterizing waviness in sheet metal is a key process for quality control in many industries, such as automotive and home appliance manufacturing. However, there is still no known technique able to work in an automated in-floor inspection system. The literature describes many techniques developed in the last three decades, but most of them are either slow, only able to work in laboratory conditions, need very short (unsafe) working distances, or are only able to estimate certain waviness parameters. In this article we propose the use of a lateral shearing interferometric profilometer, which is able to obtain a 19 mm profile in a single acquisition, with sub-micron precision, in an uncontrolled environment, and from a working distance greater than 90 mm. This system allows direct measurement of all needed waviness parameters even with objects in movement. We describe a series of experiments over several samples of steel plates to validate the sensor and the processing method, and the results are in close agreement with those obtained with a contact stylus device. The sensor is an ideal candidate for on-line or in-machine fast automatic waviness assessment, reducing delays and costs in many metalworking processes.

  13. Fluid management in roll-to-roll nanoimprint lithography

    NASA Astrophysics Data System (ADS)

    Jain, A.; Bonnecaze, R. T.

    2013-06-01

    The key process parameters of UV roll-to-roll nanoimprint lithography are identified from an analysis of the fluid, curing, and peeling dynamics. The process includes merging of droplets of imprint material, curing of the imprint material from a viscous liquid to an elastic solid resist, and pattern replication and detachment of the resist from the template. The times and distances on the web or rigid substrate over which these processes occur are determined as a function of the physical properties of the uncured liquid, the cured solid, and the roller configuration. The upper convected Maxwell equation is used to model the viscoelastic liquid and to calculate the force on the substrate and the torque on the roller. The available exposure time is found to be the rate-limiting parameter, and it is O(√(Rh0)/u0), where R is the radius of the roller, h0 is the minimum gap between the roller and web, and u0 is the velocity of the web. The residual layer thickness of the resist should be larger than the gap between the roller and the substrate to ensure complete feature filling and optimal pattern replication. For lower residual layer thicknesses, the droplets may not merge to form a continuous film for pattern transfer.
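
    The exposure-time scaling can be evaluated directly; the roller radius, gap, and web speed below are invented illustrative values, not figures from the paper.

```python
import math

# Invented illustrative roll-to-roll parameters (not from the paper).
R = 0.15       # roller radius [m]
h0 = 100e-9    # minimum roller-web gap [m]
u0 = 0.1       # web speed [m/s]

# The available UV exposure time scales as sqrt(R * h0) / u0.
t_exposure = math.sqrt(R * h0) / u0
print(f"available exposure time ~ {t_exposure * 1e3:.2f} ms")
```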

  14. Simulating industrial plasma reactors - A fresh perspective

    NASA Astrophysics Data System (ADS)

    Mohr, Sebastian; Rahimi, Sara; Tennyson, Jonathan; Ansell, Oliver; Patel, Jash

    2016-09-01

    A key goal of the presented research project PowerBase is to produce new integration schemes which enable the manufacturability of 3D integrated power smart systems with high precision TSV etched features. The necessary high aspect ratio etch is performed via the BOSCH process. Investigations in industrial research often use trial-and-improvement experimental methods. Simulations provide an alternative way to study the influence of external parameters on the final product, whilst also giving insights into the physical processes. This presentation investigates the process of simulating an industrial ICP reactor used over high power (up to 2x5 kW) and pressure (up to 200 mTorr) ranges, analysing the specific procedures to achieve a compromise between physical correctness and computational speed, while testing commonly made assumptions. This includes, for example, the effect of different physical models and the inclusion of different gas phase and surface reactions with the aim of accurately predicting the dependence of surface rates and profiles on external parameters in SF6 and C4F8 discharges. This project has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under Grant Agreement No. 662133 PowerBase.

  15. Modelling and interpreting the isotopic composition of water vapour in convective updrafts

    NASA Astrophysics Data System (ADS)

    Bolot, M.; Legras, B.; Moyer, E. J.

    2012-08-01

    The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener-Bergeron-Findeisen process). As all of these processes are related to updraft strength, droplet size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.

  16. Modelling and interpreting the isotopic composition of water vapour in convective updrafts

    NASA Astrophysics Data System (ADS)

    Bolot, M.; Legras, B.; Moyer, E. J.

    2013-08-01

    The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed-phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters, including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener-Bergeron-Findeisen process). As all of these processes are related to updraft strength, particle size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.

  17. A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.

    PubMed

    Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy

    2015-06-20

    The present study is focused on a thorough analysis of cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and a selected quality attribute of the final product. Pellet quality was expressed by shape, using the aspect ratio value. The data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain a deeper understanding of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates the search for the optimal process conditions necessary to achieve ideally spherical pellets with good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
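
    As a minimal sketch of the tree-regression idea (not the authors' actual algorithm or data), the code below finds the single best threshold split of one formulation variable against pellet aspect ratio by minimizing the summed squared error, which is the core step a regression tree repeats recursively. The speeds and aspect ratios are invented.

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on feature x that minimizes the summed
    squared error of a one-level regression-tree split."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (None, np.inf)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue  # identical values cannot be separated
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[1]:
            best = ((xs[i - 1] + xs[i]) / 2.0, sse)
    return best

# Toy (invented) data: spheronization speed [rpm] vs pellet aspect ratio.
speed = np.array([200.0, 300.0, 400.0, 700.0, 800.0, 900.0])
aspect_ratio = np.array([1.45, 1.40, 1.42, 1.10, 1.12, 1.08])
threshold, sse = best_split(speed, aspect_ratio)
print(threshold)  # separates low-speed from high-speed formulations
```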

  18. Development Of Simulation Model For Fluid Catalytic Cracking

    NASA Astrophysics Data System (ADS)

    Ghosh, Sobhan

    2010-10-01

    Fluid Catalytic Cracking (FCC) is the most widely used secondary conversion process in the refining industry for producing gasoline, olefins, and middle distillate from heavier petroleum fractions. There are more than 500 units in the world, with a total processing capacity of about 17 to 20% of the crude capacity. FCC catalyst is the most heavily consumed catalyst in the process industry. On one hand, FCC is quite flexible with respect to its ability to process a wide variety of crudes with a flexible product yield pattern; on the other hand, the interdependence of the major operating parameters makes the process extremely complex. An operating unit is self-balancing, and some fluctuations in the independent parameters are automatically adjusted by changing the temperatures and flow rates at different sections. However, a good simulation model is very useful to the refiner to get the best out of the process: to select the best catalyst, and to cope with day-to-day changes in feed quality and in the demands for different products from the FCC unit. In addition, a good model is of great help in designing the process units and peripherals. A simple empirical model is often adequate to monitor day-to-day operations, but it is of no use in handling other problems such as catalyst selection or design/modification of the plant. For this, a rigorous kinetics-based model is required. Considering the complexity of the process, with a large number of chemical species undergoing many parallel and consecutive reactions, it is virtually impossible to develop a simulation model based purely on kinetic parameters. The most common approach is to settle for a semi-empirical model. We shall take up the key issues in developing an FCC model and the contribution of such models to the optimum operation of the plant.
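
    To illustrate what a lumped, semi-empirical kinetic model looks like in its simplest form (a generic sketch, not the model developed in the talk), the code below integrates an invented 3-lump cracking scheme (gas oil → gasoline → light gas) with forward Euler; all rate constants are illustrative.

```python
# Forward-Euler integration of an invented 3-lump cracking scheme:
# gas oil -> gasoline -> light gas; rate constants are illustrative only.
k1, k2 = 0.3, 0.05                  # cracking / overcracking rate constants [1/s]
y_go, y_gl, y_gas = 1.0, 0.0, 0.0   # mass fractions of the three lumps
dt, t_end = 0.01, 5.0               # time step and riser residence time [s]

for _ in range(int(t_end / dt)):
    r1 = k1 * y_go ** 2             # gas-oil cracking, commonly taken ~2nd order
    r2 = k2 * y_gl                  # gasoline overcracking, 1st order
    y_go += dt * (-r1)
    y_gl += dt * (r1 - r2)
    y_gas += dt * r2

print(f"gas oil {y_go:.2f}, gasoline {y_gl:.2f}, light gas {y_gas:.2f}")
```

    Mass is conserved by construction (the three increments sum to zero), a useful sanity check for any lumped scheme.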

  19. Stability of Detached Solidification

    NASA Technical Reports Server (NTRS)

    Mazuruk, K.; Volz, M. P.; Croell, A.

    2009-01-01

    Bridgman crystal growth can be conducted in the so-called "detached" solidification regime, where the growing crystal is detached from the crucible wall. A small gap between the growing crystal and the crucible wall, of the order of 100 micrometers or less, can be maintained during the process. A meniscus is formed at the bottom of the melt between the crystal and crucible wall. Under proper conditions, growth can proceed without collapsing the meniscus. The meniscus shape plays a key role in stabilizing the process. Thermal and other process parameters can also affect the geometrical steady-state stability conditions of solidification. The dynamic stability theory of the shaped crystal growth process has been developed by Tatarchenko. It consists of finding a simplified autonomous set of differential equations for the radius, height, and possibly other process parameters. The problem then reduces to analyzing a system of first order linear differential equations for stability. Here we apply a modified version of this theory for a particular case of detached solidification. Approximate analytical formulas as well as accurate numerical values for the capillary stability coefficients are presented. They display an unexpected singularity as a function of pressure differential. A novel approach to study the thermal field effects on the crystal shape stability has been proposed. In essence, it rectifies the unphysical assumption of the model that utilizes a perturbation of the crystal radius along the axis as being instantaneous. It consists of introducing time delay effects into the mathematical description and leads, in general, to stability over a broader parameter range. We believe that this novel treatment can be advantageously implemented in stability analyses of other crystal growth techniques such as Czochralski and float zone methods.

  20. Application of a simplified mathematical model to estimate the effect of forced aeration on composting in a closed system.

    PubMed

    Bari, Quazi H; Koenig, Albert

    2012-11-01

    The aeration rate is a key process control parameter in the forced aeration composting process because it greatly affects physico-chemical parameters such as temperature and moisture content, and indirectly influences the biological degradation rate. In this study, the effect of a constant airflow rate on the vertical temperature distribution and organic waste degradation in the composting mass is analyzed using a previously developed mathematical model of the composting process. The model was applied to analyze the effect of two ambient conditions (hot and cold) and four airflow rates (1.5, 3.0, 4.5, and 6.0 m(3) m(-2) h(-1)) on the temperature distribution and organic waste degradation in a given waste mixture. The typical waste mixture had 59% moisture content and 96% volatile solids; however, the proportions could be varied as required. The results suggested that the model can be used efficiently to analyze composting under variable ambient and operating conditions. A lower airflow rate of around 1.5-3.0 m(3) m(-2) h(-1) was found to be suitable for cold ambient conditions, while a higher airflow rate of around 4.5-6.0 m(3) m(-2) h(-1) was preferable for hot ambient conditions. The model is applied in a flexible way that allows any input parameter to be changed within a realistic range. It can be widely used for conceptual process design, studies of the effect of ambient conditions, optimization studies in existing composting plants, and process control. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Solar Fenton and solar TiO2 catalytic treatment of ofloxacin in secondary treated effluents: evaluation of operational and kinetic parameters.

    PubMed

    Michael, I; Hapeshi, E; Michael, C; Fatta-Kassinos, D

    2010-10-01

    Two different technical approaches based on advanced oxidation processes (AOPs), solar Fenton homogeneous photocatalysis (hv/Fe(2+)/H(2)O(2)) and heterogeneous photocatalysis with titanium dioxide (TiO(2)) suspensions were studied for the chemical degradation of the fluoroquinolone ofloxacin in secondary treated effluents. A bench-scale solar simulator in combination with an appropriate photochemical batch reactor was used to evaluate and select the optimal oxidation conditions of ofloxacin spiked in secondary treated domestic effluents. The concentration profile of the examined substrate during degradation was determined by UV/Vis spectrophotometry. Mineralization was monitored by measuring the dissolved organic carbon (DOC). The concentrations of Fe(2+) and H(2)O(2) were the key factors for the solar Fenton process, while the most important parameter of the heterogeneous photocatalysis was proved to be the catalyst loading. Kinetic analyses indicated that the photodegradation of ofloxacin can be described by a pseudo-first-order reaction. The rate constant (k) for the solar Fenton process was determined at different Fe(2+) and H(2)O(2) concentrations whereas the Langmuir-Hinshelwood (LH) kinetic expression was used to assess the kinetics of the heterogeneous photocatalytic process. The conversion of ofloxacin depends on several parameters based on the various experimental conditions, which were investigated. A Daphnia magna bioassay was used to evaluate the potential toxicity of the parent compound and its photo-oxidation by-products in different stages of oxidation. In the present study solar Fenton has been demonstrated to be more effective than the solar TiO(2) process, yielding complete degradation of the examined substrate and DOC reduction of about 50% in 30 min of the photocatalytic treatment. Copyright © 2010 Elsevier Ltd. All rights reserved.
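
    The pseudo-first-order kinetic analysis used in the study above can be sketched as follows; the rate constant, initial concentration, and sampling times are invented for illustration, not the study's measured values.

```python
import numpy as np

# Pseudo-first-order degradation sketch; k_true, C0 and the sampling
# times are invented, not the study's measurements.
k_true, C0 = 0.12, 10.0          # rate constant [1/min], initial concentration
t = np.linspace(0.0, 30.0, 7)    # irradiation times [min]
C = C0 * np.exp(-k_true * t)     # simulated ofloxacin concentrations

# For pseudo-first-order kinetics, ln(C0/C) = k t, so the rate
# constant is the slope of ln(C0/C) against t.
k_fit = np.polyfit(t, np.log(C0 / C), 1)[0]
print(f"fitted pseudo-first-order rate constant k = {k_fit:.3f} 1/min")
```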

  2. Diagnostics for a waste processing plasma arc furnace (invited) (abstract)

    NASA Astrophysics Data System (ADS)

    Woskov, P. P.

    1995-01-01

    Maintaining the quality of our environment has become an important goal of society. As part of this goal, new technologies are being sought to clean up hazardous waste sites and to treat ongoing waste streams. A 1 MW pilot scale dc graphite electrode plasma arc furnace (Mark II) has been constructed at MIT under a joint program among Pacific Northwest Laboratory (PNL), MIT, and Electro-Pyrolysis, Inc. (EPI) for the remediation of buried wastes in the DOE complex. A key part of this program is the development of new and improved diagnostics to study, monitor, and control the entire waste remediation process for the optimization of this technology and to safeguard the environment. Continuous, real time diagnostics are needed for a variety of waste process parameters. These parameters include internal furnace temperatures, slag fill levels, trace metals content in the off-gas stream, off-gas molecular content, feed and slag characterization, and off-gas particulate size, density, and velocity distributions. Diagnostics are currently being tested at MIT for the first three parameters. An active millimeter-wave radiometer with a novel, rotatable graphite waveguide/mirror antenna system has been implemented on Mark II for the measurement of surface emission and emissivity, which can be used to determine internal furnace temperatures and fill levels. A microwave torch plasma is being evaluated for use as an excitation source in the furnace off-gas stream for continuous atomic emission spectroscopy of trace metals. These diagnostics should find applicability not only to waste remediation, but also to other high temperature processes such as incinerators, power plants, and steel plants.

  3. A correlative study on data from pork carcass and processed meat (Bauernspeck) for automatic estimation of chemical parameters by means of near-infrared spectroscopy.

    PubMed

    Boschetti, Lucio; Ottavian, Matteo; Facco, Pierantonio; Barolo, Massimiliano; Serva, Lorenzo; Balzan, Stefania; Novelli, Enrico

    2013-11-01

    The use of near-infrared spectroscopy (NIRS) is proposed in this study for the characterization of the quality parameters of a smoked and dry-cured meat product known as Bauernspeck (originally from Northern Italy), as well as of some technological traits of the pork carcass used for its manufacturing. In particular, NIRS is shown to successfully estimate several key quality parameters (including water activity, moisture, dry matter, ash and protein content), suggesting its suitability for real time application in replacement of expensive and time consuming chemical analysis. Furthermore, a correlative approach based on canonical correlation analysis was used to investigate the spectral regions that are mostly correlated to the characteristics of interest. The identification of these regions, which can be linked to the absorbance of the main functional chemical groups, is intended to provide a better understanding of the chemical structure of the substrate under investigation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Assessment of water quality monitoring for the optimal sensor placement in lake Yahuarcocha using pattern recognition techniques and geographical information systems.

    PubMed

    Jácome, Gabriel; Valarezo, Carla; Yoo, Changkyoo

    2018-03-30

    Pollution and the eutrophication process are increasing in lake Yahuarcocha and constant water quality monitoring is essential for a better understanding of the patterns occurring in this ecosystem. In this study, key sensor locations were determined using spatial and temporal analyses combined with geographical information systems (GIS) to assess the influence of weather features, anthropogenic activities, and other non-point pollution sources. A water quality monitoring network was established to obtain data on 14 physicochemical and microbiological parameters at each of seven sample sites over a period of 13 months. A spatial and temporal statistical approach using pattern recognition techniques, such as cluster analysis (CA) and discriminant analysis (DA), was employed to classify and identify the most important water quality parameters in the lake. The original monitoring network was reduced to four optimal sensor locations based on a fuzzy overlay of the interpolations of concentration variations of the most important parameters.
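
    A minimal sketch of the clustering step (not the study's actual CA/DA procedure or site coordinates): a deterministic k-means pass groups seven invented site locations into four clusters, mirroring the reduction of the monitoring network to four sensor locations.

```python
import numpy as np

# Invented 2-D coordinates for 7 monitoring sites (not the lake's real sites).
sites = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 5.0], [5.0, 0.2],
                  [0.2, 0.1], [5.1, 4.9], [5.2, 0.0]])

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic initialization (first k points)."""
    centroids = points[:k].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)          # assign each site to nearest centroid
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(sites, 4)
print(labels)  # one cluster -> one candidate sensor location
```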

  5. Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.

    PubMed

    Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L

    2018-01-01

    Biochemical methane potential (BMP) testing is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, the technique is characterised by long testing times (from 20 to >100 days), which is not suitable for waste utilities, consulting companies or plant operators whose decision-making processes cannot be held up for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of BMP first-order model parameters, i.e. methane yield (B0) and kinetic rate constant (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k ≤ 0.1 d(-1)) have minimum testing times of ≥15 days, (ii) moderately biodegradable substrates (0.1
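
    A sketch of early parameter estimation under the first-order model B(t) = B0(1 - exp(-kt)), using only the first 15 days of synthetic, noise-free data; the substrate values and the simple grid-search fit are illustrative assumptions, not the authors' sensitivity-function method.

```python
import numpy as np

def first_order(t, B0, k):
    """First-order BMP model: cumulative methane yield at time t."""
    return B0 * (1.0 - np.exp(-k * t))

# Synthetic, noise-free "early" BMP data (invented substrate values).
B0_true, k_true = 350.0, 0.08        # yield [mL CH4/g VS], rate [1/d]
t = np.arange(0.0, 16.0)             # only the first 15 days of the test
B = first_order(t, B0_true, k_true)

# Simple grid search over B0; for each candidate, 1 - B/B0 = exp(-k t),
# so k follows from a log-linear least-squares fit.
best = (np.inf, None, None)
for B0 in np.linspace(300.0, 400.0, 2001):
    k = -np.polyfit(t, np.log(1.0 - B / B0), 1)[0]
    resid = np.sum((first_order(t, B0, k) - B) ** 2)
    if resid < best[0]:
        best = (resid, B0, k)
_, B0_est, k_est = best
print(f"estimated B0 = {B0_est:.1f} mL/g VS, k = {k_est:.3f} 1/d")
```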

  6. Model reduction for experimental thermal characterization of a holding furnace

    NASA Astrophysics Data System (ADS)

    Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane

    2017-09-01

    Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor in the manufacturing of these parts. Defining the structure of a reduced heat transfer model, with experimental identification through an estimation of its parameters, is required here. Internal sensor outputs, together with this model, can be used for assessing the thermal state of the furnace through an inverse approach, for better control. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite elements code. The internal induction heat source as well as the transient radiative transfer inside the furnace are calculated through this detailed model. A reduced lumped body model has been constructed to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been made using a Levenberg-Marquardt least squares minimization algorithm, using two synthetic temperature signals with a further validation test.
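
    A minimal sketch of Levenberg-Marquardt estimation for a reduced lumped thermal model: the furnace is replaced here by an invented one-node, single-exponential response T(t) = T_inf + (T0 - T_inf)exp(-t/tau), and all values are illustrative.

```python
import numpy as np

# One-node lumped thermal model fitted by a small Levenberg-Marquardt loop.
T0 = 20.0                                # initial temperature [deg C] (assumed)
t = np.linspace(0.0, 600.0, 30)          # sampling times [s]

def model(p, t):
    T_inf, tau = p
    return T_inf + (T0 - T_inf) * np.exp(-t / tau)

def jacobian(p, t):
    T_inf, tau = p
    e = np.exp(-t / tau)
    # Partial derivatives w.r.t. T_inf and tau.
    return np.column_stack([1.0 - e, (T0 - T_inf) * e * t / tau**2])

T_meas = model((850.0, 120.0), t)        # synthetic "sensor" data

p, lam = np.array([500.0, 60.0]), 1e-3   # initial guess, damping factor
for _ in range(50):
    r = T_meas - model(p, t)
    J = jacobian(p, t)
    A = J.T @ J + lam * np.eye(2)        # damped normal equations
    step = np.linalg.solve(A, J.T @ r)
    if np.sum((T_meas - model(p + step, t)) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5     # accept step, relax damping
    else:
        lam *= 10.0                      # reject step, increase damping

print(f"T_inf = {p[0]:.1f}, tau = {p[1]:.1f}")
```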

  7. Understanding controls on redox processes in floodplain sediments of the Upper Colorado River Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noël, Vincent; Boye, Kristin; Kukkadapu, Ravi K.

    River floodplains, heavily used for water supplies, housing, agriculture, mining, and industry, may have their water quality jeopardized by native or exogenous metals. Redox processes mediate the accumulation and release of these species in groundwater. Understanding the physicochemical, hydrological, and biogeochemical controls on the distribution and variability of redox conditions is therefore critical to developing conceptual and numerical models of contaminant transport within floodplains. The distribution and intensity of redox activity at the Rifle, CO, site within the Upper Colorado River Basin (UCRB) are believed to be controlled by textural and compositional heterogeneities. Regionally, the UCRB is impacted by former uranium and vanadium ore processing, resulting in contamination by U, Mo, V, As, Se, and Mn. Floodplains throughout the UCRB share sediment and groundwater characteristics, making redox activity regionally important to metal and radionuclide mobility. In this study, Fe and S speciation were used to track the distribution and stability of redox processes in sediment cores from three floodplain sites covering a 250 km range in the central portion of the UCRB. The results of the present study support the hypothesis that Fe(III)- and sulfate-reducing sediments are regionally important in the UCRB. The presence of organic carbon together with pore saturation were the key requirements for reducing conditions, dominated by sulfate reduction. Sediment texture moderated the response of the system to external forcing, such as oxidant infusion, making fine-grained sediments resistant to change in comparison to coarser-grained sediments. Exposure to O2 and NO3- mediates the reactivity and longevity of freshly precipitated sulfides, creating the potential for release of sequestered radionuclides and metals. The physical and chemical parameters of the reducing zones evidenced in this study are thus thought to be key controls on the dynamic exchange of contaminants with surrounding aquifers.

  8. What works for wellbeing in culture and sport? Report of a DELPHI process to support coproduction and establish principles and parameters of an evidence review.

    PubMed

    Daykin, Norma; Mansfield, Louise; Payne, Annette; Kay, Tess; Meads, Catherine; D'Innocenzo, Giorgia; Burnett, Adele; Dolan, Paul; Julier, Guy; Longworth, Louise; Tomlinson, Alan; Testoni, Stefano; Victor, Christina

    2017-09-01

    There is a growing recognition of the ways in which culture and sport can contribute to wellbeing. A strong evidence base is needed to support innovative service development and a 3-year research programme is being undertaken to capture best evidence of wellbeing impacts and outcomes of cultural and sporting activities in order to inform UK policy and practice. This article provides an overview of methods and findings from an initial coproduction process with key stakeholders that sought to explore and agree principles and parameters of the evidence review for culture, sport and wellbeing (CSW). A two-stage DELPHI process was conducted with a purposeful sample of 57 stakeholders between August and December 2015. Participants were drawn from a range of culture and sport organisations and included commissioners and managers, policy makers, representatives of service delivery organisations (SDOs) and scholars. The DELPHI 1 questionnaire was developed from extensive consultation in July and August 2015. It explored definitions of wellbeing, the role of evidence, quality assessment, and the culture and sport populations, settings and interventions that are most likely to deliver wellbeing outcomes. Following further consultation, the results, presented as a series of ranked statements, were sent back to participants (DELPHI 2), which allowed them to reflect on and, if they wished, express agreement or disagreement with the emerging consensus. A total of 40 stakeholders (70.2%) responded to the DELPHI questionnaires. DELPHI 1 mapped areas of agreement and disagreement, confirmed in DELPHI 2. The exercise drew together the key priorities for the CSW evidence review. The DELPHI process, in combination with face-to-face deliberation, enabled stakeholders to engage in complex discussion and express nuanced priorities while also allowing the group to come to an overall consensus and agree outcomes.
The results will inform the CSW evidence review programme until its completion in March 2018.

  9. What works for wellbeing in culture and sport? Report of a DELPHI process to support coproduction and establish principles and parameters of an evidence review

    PubMed Central

    Daykin, Norma; Mansfield, Louise; Payne, Annette; Kay, Tess; Meads, Catherine; D’Innocenzo, Giorgia; Burnett, Adele; Dolan, Paul; Julier, Guy; Longworth, Louise; Tomlinson, Alan; Testoni, Stefano; Victor, Christina

    2016-01-01

    Aims: There is a growing recognition of the ways in which culture and sport can contribute to wellbeing. A strong evidence base is needed to support innovative service development and a 3-year research programme is being undertaken to capture best evidence of wellbeing impacts and outcomes of cultural and sporting activities in order to inform UK policy and practice. This article provides an overview of methods and findings from an initial coproduction process with key stakeholders that sought to explore and agree principles and parameters of the evidence review for culture, sport and wellbeing (CSW). Methods: A two-stage DELPHI process was conducted with a purposeful sample of 57 stakeholders between August and December 2015. Participants were drawn from a range of culture and sport organisations and included commissioners and managers, policy makers, representatives of service delivery organisations (SDOs) and scholars. The DELPHI 1 questionnaire was developed from extensive consultation in July and August 2015. It explored definitions of wellbeing, the role of evidence, quality assessment, and the culture and sport populations, settings and interventions that are most likely to deliver wellbeing outcomes. Following further consultation, the results, presented as a series of ranked statements, were sent back to participants (DELPHI 2), which allowed them to reflect on and, if they wished, express agreement or disagreement with the emerging consensus. Results: A total of 40 stakeholders (70.2%) responded to the DELPHI questionnaires. DELPHI 1 mapped areas of agreement and disagreement, confirmed in DELPHI 2. The exercise drew together the key priorities for the CSW evidence review. Conclusion: The DELPHI process, in combination with face-to-face deliberation, enabled stakeholders to engage in complex discussion and express nuanced priorities while also allowing the group to come to an overall consensus and agree outcomes.
The results will inform the CSW evidence review programme until its completion in March 2018. PMID:27789779

  10. How important is vehicle safety in the new vehicle purchase process?

    PubMed

    Koppel, Sjaanie; Charlton, Judith; Fildes, Brian; Fitzharris, Michael

    2008-05-01

    Whilst there has been a significant increase in the amount of consumer interest in the safety performance of privately owned vehicles, the role that it plays in consumers' purchase decisions is poorly understood. The aims of the current study were to determine: how important vehicle safety is in the new vehicle purchase process; what importance consumers place on safety options/features relative to other convenience and comfort features, and how consumers conceptualise vehicle safety. In addition, the study aimed to investigate the key parameters associated with ranking 'vehicle safety' as the most important consideration in the new vehicle purchase. Participants recruited in Sweden and Spain completed a questionnaire about their new vehicle purchase. The findings from the questionnaire indicated that participants ranked safety-related factors (e.g., EuroNCAP (or other) safety ratings) as more important in the new vehicle purchase process than other vehicle factors (e.g., price, reliability etc.). Similarly, participants ranked safety-related features (e.g., advanced braking systems, front passenger airbags etc.) as more important than non-safety-related features (e.g., route navigation systems, air-conditioning etc.). Consistent with previous research, most participants equated vehicle safety with the presence of specific vehicle safety features or technologies rather than vehicle crash safety/test results or crashworthiness. The key parameters associated with ranking 'vehicle safety' as the most important consideration in the new vehicle purchase were: use of EuroNCAP, gender and education level, age, drivers' concern about crash involvement, first vehicle purchase, annual driving distance, person for whom the vehicle was purchased, and traffic infringement history. 
The findings from this study are important for policy makers, manufacturers and other stakeholders to assist in setting priorities with regard to the promotion and publicity of vehicle safety features for particular consumer groups (such as younger consumers) in order to increase their knowledge regarding vehicle safety and to encourage them to place highest priority on safety in the new vehicle purchase process.

  11. Numerical and laboratory simulation of fault motion and earthquake occurrence

    NASA Technical Reports Server (NTRS)

    Cohen, S. C.

    1978-01-01

    Simple linear rheologies were used, with elastic forces driving the main events and viscoelastic forces being important for aftershock and creep occurrence. Friction and its dependence on velocity, stress, and displacement also play a key role in determining how, when, and where fault motion occurs. The discussion of the qualitative behavior of the simulators focuses on the manner in which energy is stored in the system and released by the unstable and stable sliding processes. The numerical results emphasize the statistics of earthquake occurrence and the correlations among source parameters.
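    The stick-slip interplay of elastic loading and friction described above can be sketched with a minimal single spring-slider model. All parameters below (stiffness, loading rate, friction levels, the random strength heterogeneity) are assumed illustrative values, not taken from the paper's simulators:

```python
import random

def stick_slip(n_steps, k=1.0, v_load=1e-3, f_dynamic=0.6, seed=1):
    """Single slider-block fault sketch (assumed parameters): slow elastic
    loading stores energy in the spring; when the spring force exceeds a
    randomly drawn static friction threshold, the block slips until the
    force drops to the dynamic friction level, releasing the stored energy."""
    random.seed(seed)
    x_load, x_block, events = 0.0, 0.0, []
    f_static = random.uniform(0.8, 1.2)       # heterogeneous fault strength
    for _ in range(n_steps):
        x_load += v_load                      # slow tectonic loading
        force = k * (x_load - x_block)
        if force >= f_static:
            slip = (force - f_dynamic) / k    # stress drop sets slip size
            x_block += slip
            events.append(slip)
            f_static = random.uniform(0.8, 1.2)
    return events

events = stick_slip(100_000)
print(f"{len(events)} events, mean slip {sum(events) / len(events):.2f}")
```

    Even this toy version reproduces the qualitative picture in the abstract: almost all of the slowly accumulated displacement is released in discrete events whose size statistics follow the friction heterogeneity.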

  12. [Benchmarking of university trauma centers in Germany. Research and teaching].

    PubMed

    Gebhard, F; Raschke, M; Ruchholtz, S; Meffert, R; Marzi, I; Pohlemann, T; Südkamp, N; Josten, C; Zwipp, H

    2011-07-01

    Benchmarking is a well-established business process and is now used in research as well. The aim of the present study is to elucidate key figures of German university trauma departments regarding research and teaching. The data set is based upon the monthly reports provided by the administration of each university. The study shows that only well-known parameters such as fund-raising and impact factors can be used to benchmark university-based trauma centers. The German federal system does not allow a nationwide benchmarking.

  13. Bench-scale research in biomass liquefaction in support of the Albany, Oregon experimental facility

    NASA Astrophysics Data System (ADS)

    Elliott, D. C.

    1981-03-01

    The liquefaction of solid materials (wood, newsprint, animal manure) by heating to produce useful liquid fuels was investigated. Highlights of the work performed include: (1) catalyst mechanism studies; (2) analytical reports on TR8 and TR9 product oils; (3) liquid chromatography/mass spectroscopy analysis of wood oil; (4) batch conversion tests on bottom material; (5) vapor pressure studies; and (6) product evaluation. The work confirmed the key process parameters and the effects of varying operating conditions in support of the biomass liquefaction effort.

  14. Formulation strategies for optimizing the morphology of polymeric bulk heterojunction organic solar cells: a brief review

    NASA Astrophysics Data System (ADS)

    Vongsaysy, Uyxing; Bassani, Dario M.; Servant, Laurent; Pavageau, Bertrand; Wantz, Guillaume; Aziz, Hany

    2014-01-01

    Polymeric bulk heterojunction (BHJ) organic solar cells represent one of the most promising technologies for renewable energy with a low fabrication cost. Control over BHJ morphology is one of the key factors in obtaining high-efficiency devices. This review focuses on formulation strategies for optimizing the BHJ morphology. We address how solvent choice and the introduction of processing additives affect the morphology. We also review a number of recent studies concerning prediction methods that utilize the Hansen solubility parameters to develop efficient solvent systems.

  15. CO2 laser ranging systems study

    NASA Technical Reports Server (NTRS)

    Filippi, C. A.

    1975-01-01

    The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.

  16. Pressurization and expulsion of cryogenic liquids: Generic requirements for a low gravity experiment

    NASA Technical Reports Server (NTRS)

    Vandresar, Neil T.; Stochl, Robert J.

    1991-01-01

    Requirements are presented for an experiment designed to obtain data for the pressurization and expulsion of a cryogenic supply tank in a low gravity environment. These requirements are of a generic nature and applicable to any cryogenic fluid of interest, condensible or non-condensible pressurants, and various low gravity test platforms such as the Space Shuttle or a free-flyer. Background information, the thermophysical process, preliminary analytical modeling, and experimental requirements are discussed. Key parameters, measurements, hardware requirements, procedures, a test matrix, and data analysis are outlined.

  17. Location specific solidification microstructure control in electron beam melting of Ti-6Al-4V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narra, Sneha P.; Cunningham, Ross; Beuth, Jack

    Relationships between prior beta grain size in solidified Ti-6Al-4V and melting process parameters in the Electron Beam Melting (EBM) process are investigated. Samples are built by varying a machine-dependent proprietary speed function to cover the process space. Optical microscopy is used to measure prior beta grain widths and assess the number of prior beta grains present in a melt pool in the raster region of the build. Despite the complicated evolution of beta grain sizes, the beta grain width scales with melt pool width. The resulting understanding of the relationship between primary machine variables and prior beta grain widths is a key step toward enabling the location specific control of as-built microstructure in the EBM process. Control of grain width in separate specimens and within a single specimen is demonstrated.

  18. The change of steel surface chemistry regarding oxygen partial pressure and dew point

    NASA Astrophysics Data System (ADS)

    Norden, Martin; Blumenau, Marc; Wuttke, Thiemo; Peters, Klaus-Josef

    2013-04-01

    By investigating the surface state of a Ti-IF, a TiNb-IF, and a MnCr-DP steel after several series of intercritical annealing, the impact of the annealing gas composition on the selective oxidation process is discussed. On the basis of the presented results, it can be concluded that the general oxygen partial pressure in the annealing furnace, which results from the equilibrium reaction of water and hydrogen, is not the main driving force for the selective oxidation process. It is shown that the amount of gases adsorbed at the strip surface, and the effective oxygen partial pressure resulting from these adsorbed gases, which depends mainly on the water content of the annealing furnace, drives the selective oxidation processes occurring during intercritical annealing. It is therefore concluded that, for industrial applications, the dew point must be the key parameter for process control.
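    The link between dew point and furnace oxygen potential can be sketched from the H2 + 1/2 O2 <-> H2O equilibrium. The Magnus constants and the linear dG0(T) fit below are textbook-style approximations (assumptions, not values from the paper); real process control should use tabulated thermochemical data:

```python
import math

def p_h2o_bar(dew_point_c):
    """Water-vapour partial pressure [bar] from the dew point via the
    Magnus approximation over water (assumed constants, ~-45..60 degC)."""
    return 6.112e-3 * math.exp(17.62 * dew_point_c / (243.12 + dew_point_c))

def p_o2_bar(dew_point_c, T_k, p_h2_bar=0.05):
    """Equilibrium oxygen partial pressure [bar] set by H2 + 1/2 O2 <-> H2O
    in an N2-5%H2 annealing gas.  The linear dG0(T) fit is an assumption."""
    dG0 = -246_000.0 + 54.8 * T_k            # J/mol, approximate fit
    K = math.exp(-dG0 / (8.314 * T_k))       # K = pH2O / (pH2 * sqrt(pO2))
    return (p_h2o_bar(dew_point_c) / (K * p_h2_bar)) ** 2

# Lowering the dew point strongly lowers the furnace oxygen potential:
for dp in (-10.0, -40.0):
    print(f"dew point {dp:+.0f} degC at an 800 degC anneal -> "
          f"pO2 ~ {p_o2_bar(dp, 1073.15):.1e} bar")
```

    The quadratic dependence on the water partial pressure is why a modest dew-point reduction shifts the equilibrium oxygen partial pressure by orders of magnitude.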

  19. Process simulation of ethanol production from biomass gasification and syngas fermentation.

    PubMed

    Pardo-Planas, Oscar; Atiyeh, Hasan K; Phillips, John R; Aichele, Clint P; Mohammad, Sayeed

    2017-12-01

    The hybrid gasification-syngas fermentation platform can produce more bioethanol than the biochemical conversion technology by utilizing all biomass components. Syngas fermentation operates at mild temperatures and pressures and avoids expensive pretreatment processes and enzymes. This study presents a new process simulation model, developed with Aspen Plus®, of a biorefinery based on a hybrid conversion technology for the production of anhydrous ethanol using 1200 tons per day (wet basis) of switchgrass. The simulation model consists of three modules: gasification, fermentation, and product recovery. The results revealed a potential production of about 36.5 million gallons of anhydrous ethanol per year. Sensitivity analyses were also performed to investigate the effects of the gasification and fermentation parameters that are key to the development of an efficient process in terms of energy conservation and ethanol production. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. RNA-SeQC: RNA-seq metrics for quality control and process optimization.

    PubMed

    DeLuca, David S; Levin, Joshua Z; Sivachenko, Andrey; Fennell, Timothy; Nazaire, Marc-Danie; Williams, Chris; Reich, Michael; Winckler, Wendy; Getz, Gad

    2012-06-01

    RNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis. See www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool.
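    Two of the listed metrics are easy to define concretely. The toy functions below illustrate duplication rate and GC content on raw read strings; they are an illustration of the metrics' definitions, not RNA-SeQC's implementation (which works on aligned BAM records):

```python
from collections import Counter

def duplication_rate(reads):
    """Fraction of reads whose sequence exactly duplicates an earlier
    read -- a crude proxy for library duplication."""
    counts = Counter(reads)
    return sum(c - 1 for c in counts.values()) / len(reads)

def gc_content(reads):
    """Overall GC fraction across all bases in the read set."""
    bases = "".join(reads)
    return (bases.count("G") + bases.count("C")) / len(bases)

reads = ["ACGTACGT", "ACGTACGT", "GGGCCCAT", "ATATATAT"]
print(f"duplication rate = {duplication_rate(reads):.2f}")  # -> 0.25
print(f"GC content       = {gc_content(reads):.4f}")        # -> 0.4375
```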

  1. Single-molecule comparison of DNA Pol I activity with native and analog nucleotides

    NASA Astrophysics Data System (ADS)

    Gul, Osman; Olsen, Tivoli; Choi, Yongki; Corso, Brad; Weiss, Gregory; Collins, Philip

    2014-03-01

    DNA polymerases are critical enzymes for DNA replication, and because of their complex catalytic cycle they are excellent targets for investigation by single-molecule experimental techniques. Recently, we studied the Klenow fragment (KF) of DNA polymerase I using a label-free, electronic technique involving single KF molecules attached to carbon nanotube transistors. The electronic technique allowed long-duration monitoring of a single KF molecule while processing thousands of template strands. Processivity of up to 42 nucleotide bases was directly observed, and statistical analysis of the recordings determined key kinetic parameters for the enzyme's open and closed conformations. Subsequently, we have used the same technique to compare the incorporation of canonical nucleotides like dATP to analogs like 1-thio-2'-dATP. The analog had almost no affect on duration of the closed conformation, during which the nucleotide is incorporated. On the other hand, the analog increased the rate-limiting duration of the open conformation by almost 40%. We propose that the thiolated analog interferes with KF's recognition and binding, two key steps that determine its ensemble turnover rate.

  2. ASSESSMENT OF HOUSEHOLD CARBON FOOTPRINT REDUCTION POTENTIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, Klaas Jan; Homan, Greg; Brown, Rich

    2009-04-15

    The term ?household carbon footprint? refers to the total annual carbon emissions associated with household consumption of energy, goods, and services. In this project, Lawrence Berkeley National Laboratory developed a carbon footprint modeling framework that characterizes the key underlying technologies and processes that contribute to household carbon footprints in California and the United States. The approach breaks down the carbon footprint by 35 different household fuel end uses and 32 different supply chain fuel end uses. This level of end use detail allows energy and policy analysts to better understand the underlying technologies and processes contributing to the carbon footprintmore » of California households. The modeling framework was applied to estimate the annual home energy and supply chain carbon footprints of a prototypical California household. A preliminary assessment of parameter uncertainty associated with key model input data was also conducted. To illustrate the policy-relevance of this modeling framework, a case study was conducted that analyzed the achievable carbon footprint reductions associated with the adoption of energy efficient household and supply chain technologies.« less

  3. Three-Dimensional Imaging of the Mouse Organ of Corti Cytoarchitecture for Mechanical Modeling

    NASA Astrophysics Data System (ADS)

    Puria, Sunil; Hartman, Byron; Kim, Jichul; Oghalai, John S.; Ricci, Anthony J.; Liberman, M. Charles

    2011-11-01

    Cochlear models typically use continuous anatomical descriptions and homogenized parameters based on two-dimensional images for describing the organ of Corti. To produce refined models based more closely on the actual cochlear cytoarchitecture, three-dimensional morphometric parameters of key mechanical structures are required. Towards this goal, we developed and compared three different imaging methods: (1) A fixed cochlear whole-mount preparation using the fluorescent dye Cellmask®, which is a molecule taken up by cell membranes and clearly delineates Deiters' cells, outer hair cells, and the phalangeal process, imaged using confocal microscopy; (2) An in situ fixed preparation with hair cells labeled using anti-prestin and supporting structures labeled using phalloidin, imaged using two-photon microscopy; and (3) A membrane-tomato (mT) mouse with fluorescent proteins expressed in all cell membranes, which enables two-photon imaging of an in situ live preparation with excellent visualization of the organ of Corti. Morphometric parameters including lengths, diameters, and angles, were extracted from 3D cellular surface reconstructions of the resulting images. Preliminary results indicate that the length of the phalangeal processes decreases from the first (inner most) to third (outer most) row of outer hair cells, and that their length also likely varies from base to apex and across species.

  4. Validation of systems biology derived molecular markers of renal donor organ status associated with long term allograft function.

    PubMed

    Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert

    2018-05-03

    Donor organ quality affects long-term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status was created, based on transcriptomics data and molecular features reported in the scientific literature to be associated with chronic allograft nephropathy. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model based solely on the clinical parameters donor age, donor gender, and recipient gender explained 17% of the variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B, representing key molecular processes of the constructed renal donor organ status molecular model, significantly improved model performance when added to the clinical parameters (p-value = 0.0007), explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to the prediction of post-transplant renal function when added to the clinical parameters donor age and gender.
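    The incremental explained-variance comparison can be illustrated with synthetic data. Only the cohort size (n = 76) comes from the abstract; the covariates, effect sizes, and noise level below are invented, and the tiny least-squares solver is a generic sketch:

```python
import random
import statistics

def ols_r2(X, y):
    """R^2 of ordinary least squares with intercept, solved via the
    normal equations and Gaussian elimination (fine for few predictors)."""
    n, p = len(y), len(X[0])
    A = [[1.0] + row for row in X]            # prepend intercept column
    m = p + 1
    G = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(m)]
         for r in range(m)]                   # G = A^T A
    b = [sum(A[i][r] * y[i] for i in range(n)) for r in range(m)]
    for col in range(m):                      # elimination w/ pivoting
        piv = max(range(col, m), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = G[r][col] / G[col][col]
            for c in range(col, m):
                G[r][c] -= f * G[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):            # back-substitution
        beta[r] = (b[r] - sum(G[r][c] * beta[c]
                              for c in range(r + 1, m))) / G[r][r]
    resid = [y[i] - sum(A[i][c] * beta[c] for c in range(m))
             for i in range(n)]
    return 1.0 - statistics.pvariance(resid) / statistics.pvariance(y)

random.seed(0)
n = 76                                        # cohort size from the abstract
age = [random.uniform(20, 70) for _ in range(n)]
egf = [random.gauss(0, 1) for _ in range(n)]  # stand-in marker expression
egfr = [90 - 0.5 * a + 6.0 * m + random.gauss(0, 8)
        for a, m in zip(age, egf)]            # assumed ground truth

r2_clin = ols_r2([[a] for a in age], egfr)
r2_full = ols_r2([[a, m] for a, m in zip(age, egf)], egfr)
print(f"clinical-only R2 = {r2_clin:.2f}, clinical+marker R2 = {r2_full:.2f}")
```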

  5. Universal Responses of Cyclic-Oxidation Models Studied

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2003-01-01

    Oxidation is an important degradation process for materials operating in the high-temperature air or oxygen environments typical of jet turbine or rocket engines. Reaction of the combustion gases with the component material forms surface layer scales during these oxidative exposures. Typically, the instantaneous rate of reaction is inversely proportional to the existing scale thickness, giving rise to parabolic kinetics. However, more realistic applications entail periodic startup and shutdown. Some scale spallation may occur upon cooling, resulting in loss of the protective diffusion barrier provided by a fully intact scale. Upon reheating, the component will experience accelerated oxidation due to this spallation. Cyclic-oxidation testing has, therefore, been a mainstay of characterization and performance ranking for high-temperature materials. Models simulate this process by calculating how a scale spalls upon cooling and regrows upon heating (refs. 1 to 3). Recently released NASA software (COSP for Windows) allows researchers to specify a uniform layer or discrete segments of spallation (ref. 4). Families of model curves exhibit consistent regularity and trends with input parameters, and characteristic features have been empirically described in terms of these parameters. Although much insight has been gained from experimental and model curves, no equation has been derived that can describe this behavior explicitly as functions of the key oxidation parameters.
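    The spall-and-regrow bookkeeping such models perform can be sketched in a few lines. The rate constant, spall fraction, and oxide oxygen fraction below are assumed illustrative values, not COSP defaults; the sketch only reproduces the characteristic weight-gain-then-loss shape of cyclic-oxidation curves:

```python
import math

def cyclic_oxidation(n_cycles, kp=1e-3, spall_frac=0.02, dt=1.0,
                     f_oxygen=0.3):
    """Minimal COSP-style uniform-spallation sketch (assumed parameters).

    Per cycle: the retained scale W regrows parabolically for dt hours,
    then a fixed fraction spalls on cooldown.  The specimen weight change
    is oxygen gained in the retained scale minus metal lost in spalls."""
    W, spalled_metal, history = 0.0, 0.0, []
    for _ in range(n_cycles):
        W = math.sqrt(W * W + kp * dt)          # parabolic regrowth
        spall = spall_frac * W                  # uniform spallation
        W -= spall
        spalled_metal += (1.0 - f_oxygen) * spall
        history.append(f_oxygen * W - spalled_metal)
    return history

curve = cyclic_oxidation(500)
print(f"peak gain at cycle {curve.index(max(curve))}, "
      f"final weight change {curve[-1]:.3f}")
```

    Early cycles gain weight (oxygen uptake dominates), the scale approaches a steady thickness where growth balances spallation, and the curve then turns over into steady metal loss.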

  6. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2016-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
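    The break-even logic can be sketched with the Breguet range equation. The aircraft numbers, drive power, and the assumed 10% lift-to-drag benefit from the turboelectric layout are illustrative placeholders, not the paper's values:

```python
import math

def breguet_range(V, tsfc, LD, W_i, W_f):
    """Breguet range for a jet aircraft (consistent units throughout)."""
    return (V / tsfc) * LD * math.log(W_i / W_f)

# Illustrative base-aircraft numbers only -- not taken from the paper.
V, TSFC, LD = 230.0, 1.6e-5, 17.0     # speed, fuel burn, lift-to-drag
W_I, W_F = 80_000.0, 60_000.0         # initial / final mass (ratio matters)
BASE = breguet_range(V, TSFC, LD, W_I, W_F)

def break_even_eta(specific_power, P_drive=20e6, ld_benefit=1.10):
    """Smallest electric-drive efficiency that preserves the base range,
    assuming the turboelectric layout buys a 10% L/D benefit (assumption)
    while adding drive mass P/specific_power to both weight terms."""
    m = P_drive / specific_power
    for eta in (e / 1000 for e in range(800, 1001)):
        r = breguet_range(V, TSFC / eta, LD * ld_benefit, W_I + m, W_F + m)
        if r >= BASE:
            return eta
    return None  # not achievable below 100% drive efficiency

for sp in (3000.0, 8000.0, 15000.0):
    print(f"{sp:>7.0f} W/kg -> break-even drive efficiency {break_even_eta(sp)}")
```

    The scan makes the paper's trade-off concrete: a heavier (low specific power) drive eats into the log weight-ratio term, so a higher efficiency is needed to break even on range.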

  7. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2015-01-01

    The purpose of this presentation is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.

  8. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2015-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.

  9. The water retention curve and relative permeability for gas production from hydrate-bearing sediments: pore-network model simulation

    NASA Astrophysics Data System (ADS)

    Mahabadi, Nariman; Dai, Sheng; Seol, Yongkoo; Sup Yun, Tae; Jang, Jaewon

    2016-08-01

    The water retention curve and relative permeability are critical for predicting gas and water production from hydrate-bearing sediments. However, values for the key parameters that characterize gas and water flows during hydrate dissociation have not been identified, due to experimental challenges. This study utilizes the combined techniques of micro-focus X-ray computed tomography (CT) and pore-network model simulation to identify proper values for those key parameters, such as gas entry pressure, residual water saturation, and curve-fitting values. Hydrates with various saturations and morphologies are realized in a pore network extracted from micron-resolution CT images of sediments recovered from the hydrate deposit at the Mallik site, and the processes of gas invasion, hydrate dissociation, gas expansion, and gas and water permeability are then simulated. Results show that greater hydrate saturation in sediments leads to higher gas entry pressure, higher residual water saturation, and a steeper water retention curve. An increase in hydrate saturation decreases gas permeability but has marginal effects on water permeability in sediments with uniformly distributed hydrate. Hydrate morphology has a more significant impact than hydrate saturation on relative permeability. Sediments with heterogeneously distributed hydrate tend to exhibit lower residual water saturation and higher gas and water permeability. In this sense, the Brooks-Corey model, which uses two fitting parameters individually for gas and water permeability, properly captures the effect of hydrate saturation and morphology on gas and water flows in hydrate-bearing sediments.
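    The two-exponent Brooks-Corey forms referred to above can be sketched as follows; the residual saturation and exponents are assumed placeholder values, not the fitted ones from the study:

```python
def effective_saturation(Sw, Swr=0.20):
    """Normalised water saturation above residual (Swr is an assumed value)."""
    return min(max((Sw - Swr) / (1.0 - Swr), 0.0), 1.0)

def krw(Sw, Swr=0.20, nw=4.0):
    """Brooks-Corey water relative permeability with its own fitting exponent."""
    return effective_saturation(Sw, Swr) ** nw

def krg(Sw, Swr=0.20, ng=2.0):
    """Brooks-Corey gas relative permeability with a separate exponent,
    mirroring the two-parameter fit described in the abstract."""
    Se = effective_saturation(Sw, Swr)
    return (1.0 - Se) ** 2 * (1.0 - Se ** ng)

for Sw in (0.2, 0.5, 0.8, 1.0):
    print(f"Sw={Sw:.1f}: krw={krw(Sw):.3f}, krg={krg(Sw):.3f}")
```

    Fitting `nw` and `ng` separately is what lets a single saturation function capture the asymmetric effect of hydrate saturation and morphology on the two phases.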

  10. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPEs), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones, and seismogenic nodes. 
    Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error can therefore be a factor of 2, intrinsic to the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.

  11. Management of physical health in patients with schizophrenia: practical recommendations.

    PubMed

    Heald, A; Montejo, A L; Millar, H; De Hert, M; McCrae, J; Correll, C U

    2010-06-01

    Improved physical health care is a pressing need for patients with schizophrenia. It can be achieved by means of a multidisciplinary team led by the psychiatrist. Key priorities should include: selection of antipsychotic therapy with a low risk of weight gain and metabolic adverse effects; routine assessment, recording and longitudinal tracking of key physical health parameters, ideally by electronic spreadsheets; and intervention to control CVD risk following the same principles as for the general population. A few simple tools to assess and record key physical parameters, combined with lifestyle intervention and pharmacological treatment as indicated, could significantly improve physical outcomes. Effective implementation of strategies to optimise physical health parameters in patients with severe enduring mental illness requires engagement and communication between psychiatrists and primary care in most health settings. Copyright (c) 2010 Elsevier Masson SAS. All rights reserved.

  12. Channel-parameter estimation for satellite-to-submarine continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Xie, Cailang; Huang, Peng; Li, Jiawei; Zhang, Ling; Huang, Duan; Zeng, Guihua

    2018-05-01

    This paper deals with channel-parameter estimation for continuous-variable quantum key distribution (CV-QKD) over a satellite-to-submarine link. In particular, we focus on the channel transmittances and the excess noise, which are affected by atmospheric turbulence, surface roughness, the zenith angle of the satellite, wind speed, submarine depth, etc. The estimation method is based on the proposed algorithms and is applied to low-Earth orbits using a Monte Carlo approach. For light at 550 nm with a repetition frequency of 1 MHz, the effects of the estimated parameters on the performance of the CV-QKD system are assessed in simulation by comparing the secret key bit rate in the daytime and at night. Our results show the feasibility of satellite-to-submarine CV-QKD, providing an unconditionally secure approach to achieving global networks for underwater communications.
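    The basic CV-QKD parameter-estimation step can be sketched with the standard Gaussian channel model; this is a generic estimator on simulated quadrature data (the channel values and modulation variance are assumptions), not the paper's satellite-specific algorithm:

```python
import math
import random
import statistics

def estimate_channel(x, y, eta=1.0):
    """Estimate transmittance T and excess noise xi (shot-noise units)
    from Gaussian-modulated quadrature pairs, assuming the standard
    CV-QKD channel model y = sqrt(eta*T)*x + z with var(z) = 1 + xi."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    vx, vy = statistics.variance(x), statistics.variance(y)
    t = cov / vx                       # estimates sqrt(eta * T)
    T = t * t / eta
    xi = vy - t * t * vx - 1.0         # noise beyond the shot-noise unit
    return T, xi

random.seed(7)
T_true, xi_true = 0.05, 0.02           # assumed strongly attenuating link
x = [random.gauss(0.0, 2.0) for _ in range(50_000)]
y = [math.sqrt(T_true) * a + random.gauss(0.0, math.sqrt(1.0 + xi_true))
     for a in x]
T_hat, xi_hat = estimate_channel(x, y)
print(f"T ~ {T_hat:.4f} (true 0.05), xi ~ {xi_hat:.4f} (true 0.02)")
```

    In a satellite-to-submarine setting the same estimator would be run per block, with the Monte Carlo channel model supplying the fluctuating transmittance samples.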

  13. Boltzmann Energy-based Image Analysis Demonstrates that Extracellular Domain Size Differences Explain Protein Segregation at Immune Synapses

    PubMed Central

    Burroughs, Nigel J.; Köhler, Karsten; Miloserdov, Vladimir; Dustin, Michael L.; van der Merwe, P. Anton; Davis, Daniel M.

    2011-01-01

    Immune synapses formed by T and NK cells both show segregation of the integrin ICAM1 from other proteins such as CD2 (T cell) or KIR (NK cell). However, the mechanism by which these proteins segregate remains unclear; one key hypothesis is a redistribution based on protein size. Simulations of this mechanism qualitatively reproduce observed segregation patterns, but only in certain parameter regimes. Verifying that these parameter constraints in fact hold has not been possible to date, as this requires a quantitative coupling of theory to experimental data. Here, we address this challenge, developing a new methodology for analysing and quantifying image data and integrating it with biophysical models. Specifically, we fit a binding kinetics model to two-colour fluorescence data for cytoskeleton-independent synapses (2D and 3D) and test whether the observed inverse correlation between fluorophores conforms to size-dependent exclusion, and further, whether patterned states are predicted when model parameters are estimated on individual synapses. All synapses analysed satisfy these conditions, demonstrating that the mechanisms of protein redistribution have identifiable signatures in their spatial patterns. We conclude that the energy processes implicit in protein-size-based segregation can drive the patterning observed in individual synapses, at least for the specific examples tested, such that no additional processes need to be invoked. This implies that biophysical processes within the membrane interface have a crucial impact on cell:cell communication and cell signalling, governing protein interactions and protein aggregation. PMID:21829338
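    The size-dependent exclusion hypothesis can be illustrated with a two-bond Boltzmann weighting. The bond lengths are rough literature-scale values and the harmonic stiffness is an assumed placeholder, so this is a cartoon of the energy argument rather than the paper's fitted kinetics model:

```python
import math

def boltzmann_weight(gap_nm, bond_nm, kappa=2.0):
    """Relative weight exp(-E/kT) for a bond of natural length bond_nm in
    a membrane gap of width gap_nm; harmonic stretch/compression energy
    in kT units with an assumed stiffness kappa (illustrative only)."""
    return math.exp(-0.5 * kappa * (gap_nm - bond_nm) ** 2)

def short_bond_fraction(gap_nm, short_nm=15.0, long_nm=40.0):
    """Equilibrium fraction of short (CD2-like) vs long (ICAM1-like)
    bonds at a given local membrane separation."""
    ws = boltzmann_weight(gap_nm, short_nm)
    wl = boltzmann_weight(gap_nm, long_nm)
    return ws / (ws + wl)

for gap in (15.0, 27.0, 40.0):
    print(f"gap {gap:.0f} nm -> short-bond fraction "
          f"{short_bond_fraction(gap):.3f}")
```

    The sharp switch in occupancy as the gap crosses the midpoint between the two natural lengths is the energetic signature behind size-based segregation patterns.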

  14. A global resource allocation strategy governs growth transition kinetics of Escherichia coli

    PubMed Central

    Erickson, David W; Schink, Severin J.; Patsalo, Vadim; Williamson, James R.; Gerland, Ulrich; Hwa, Terence

    2018-01-01

    A grand challenge of systems biology is to predict the kinetic responses of living systems to perturbations starting from the underlying molecular interactions. Changes in the nutrient environment have long been used to study regulation and adaptation phenomena in microorganisms [1–3] and they remain a topic of active investigation [4–11]. Although much is known about the molecular interactions that govern the regulation of key metabolic processes in response to applied perturbations [12–17], they are insufficiently quantified for predictive bottom-up modelling. Here we develop a top-down approach, expanding the recently established coarse-grained proteome allocation models [15,18–20] from steady-state growth into the kinetic regime. Using only qualitative knowledge of the underlying regulatory processes and imposing the condition of flux balance, we derive a quantitative model of bacterial growth transitions that is independent of inaccessible kinetic parameters. The resulting flux-controlled regulation model accurately predicts the time course of gene expression and biomass accumulation in response to carbon upshifts and downshifts (for example, diauxic shifts) without adjustable parameters. As predicted by the model and validated by quantitative proteomics, cells exhibit suboptimal recovery kinetics in response to nutrient shifts owing to a rigid strategy of protein synthesis allocation, which is not directed towards alleviating specific metabolic bottlenecks. Our approach does not rely on kinetic parameters, and therefore points to a theoretical framework for describing a broad range of such kinetic processes without detailed knowledge of the underlying biochemical reactions. PMID:29072300

  15. r.avaflow v1, an advanced open-source computational framework for the propagation and interaction of two-phase mass flows

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Fischer, Jan-Thomas; Krenn, Julia; Pudasaini, Shiva P.

    2017-02-01

    r.avaflow represents an innovative open-source computational tool for routing rapid mass flows, avalanches, or process chains from a defined release area down an arbitrary topography to a deposition area. In contrast to most existing computational tools, r.avaflow (i) employs a two-phase, interacting solid and fluid mixture model (Pudasaini, 2012); (ii) is suitable for modelling more or less complex process chains and interactions; (iii) explicitly considers both entrainment and stopping with deposition, i.e. the change of the basal topography; (iv) allows for the definition of multiple release masses and/or hydrographs; and (v) provides built-in functionalities for validation, parameter optimization, and sensitivity analysis. r.avaflow is freely available as a raster module of the GRASS GIS software, employing the programming languages Python and C along with the statistical software R. We exemplify the functionalities of r.avaflow by means of two sets of computational experiments: (1) generic process chains consisting of bulk mass and hydrograph release into a reservoir, with entrainment of the dam and impact downstream; (2) the prehistoric Acheron rock avalanche, New Zealand. The simulation results are generally plausible for (1) and, after the optimization of two key parameters, reasonably in line with the corresponding observations for (2). However, we identify some potential to enhance the analytic and numerical concepts. Further, thorough parameter studies will be necessary in order to make r.avaflow fit for reliable forward simulations of possible future mass flow events.

  16. A generalized multi-dimensional mathematical model for charging and discharging processes in a supercapacitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A

A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume averaging process, is a widely used concept for multiphase flow equations ([28], [36]) and is analogous to the porous media theory typically employed for electrochemical systems ([22], [39], [12]). The formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for the explicit interfacial boundary conditions previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing upscaled parameters from lower length-scale simulations and experiments. Due to irregular geometric configurations, including the porous electrode, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrate how this new formulation can be applied to non-planar supercapacitors.
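The simple sensitivity analysis mentioned at the end can be illustrated with a one-at-a-time perturbation sketch. Here the full electrochemical model is replaced by an idealized RC charging law (an assumption made purely for the demo, not the paper's formulation), and we compute the elasticity of the 95% charge time with respect to each parameter:

```python
import math

def t95(R, C):
    # time for V(t) = V0*(1 - exp(-t/(R*C))) to reach 0.95*V0 in an ideal
    # RC model: t95 = R*C*ln(20)
    return R * C * math.log(20.0)

base = dict(R=0.5, C=100.0)  # ohms, farads (illustrative values)

elasticities = {}
for name in ("R", "C"):
    p = dict(base)
    p[name] *= 1.01  # +1% one-at-a-time perturbation
    # normalized sensitivity: relative output change per relative input change
    elasticities[name] = (t95(**p) / t95(**base) - 1.0) / 0.01

print(elasticities)  # both ≈1.0: t95 scales linearly with R and with C
```

For the actual 3D model the same loop would wrap a full charge/discharge simulation, but normalized sensitivities computed this way remain directly comparable across parameters with different units.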

  17. Comparison of complex and parsimonious model structures by means of a modular hydrological model concept

    NASA Astrophysics Data System (ADS)

    Holzmann, Hubert; Massmann, Carolina

    2015-04-01

A multitude of hydrological model types has been developed during the past decades. Most of them use a fixed design to describe the variable hydrological processes, assumed to be representative for the whole range of spatial and temporal scales. This assumption is questionable, as it is evident that runoff formation is driven by dominant processes which can vary among different basins. Furthermore, model application and the interpretation of results are limited by the data available to identify the particular sub-processes, since most models are calibrated and validated only with discharge data. It can therefore be hypothesized that simpler model designs, focusing only on the dominant processes, can achieve comparable results with the benefit of fewer parameters. In the current contribution a modular model concept is introduced, which allows the inclusion or exclusion of hydrological sub-processes depending on the catchment characteristics and data availability. Key elements of the process modules refer to (1) storage effects (interception, soil), (2) transfer processes (routing), (3) threshold processes (percolation, saturation overland flow) and (4) split processes (rainfall excess). Based on hydro-meteorological observations in an experimental catchment in the Slovak region of the Carpathian mountains, a comparison of several model realizations with different degrees of complexity is discussed. A special focus is given to model parameter sensitivity, estimated by a Markov chain Monte Carlo approach. Furthermore, the identification of dominant processes by means of Sobol's method is introduced. It could be shown that a flexible model design - and even the simple concept - can reach performance comparable and equivalent to that of the standard model type (HBV-type).
The main benefit of the modular concept is the individual adaptation of the model structure with respect to data and process availability and the option for parsimonious model design.
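The variance-based idea behind Sobol's method can be sketched without any hydrological machinery: a first-order index is the fraction of output variance explained by one input alone, estimable by binning the conditional mean E[y|x_i]. The toy additive "runoff" model below is an assumption for the demo (its analytic indices are 0.9 and 0.1), not one of the study's model realizations:

```python
import random
import statistics

random.seed(0)

def model(x1, x2):
    # additive toy model: Var(E[y|x1])/Var(y) = 9/10, so S1 = 0.9, S2 = 0.1
    return 3.0 * x1 + 1.0 * x2

n, bins = 100_000, 50
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]
y = [model(a, b) for a, b in zip(x1, x2)]
var_y = statistics.pvariance(y)

def first_order(x):
    # estimate Var(E[y|x]) by averaging y within equal-width bins of x
    groups = [[] for _ in range(bins)]
    for xi, yi in zip(x, y):
        groups[min(int(xi * bins), bins - 1)].append(yi)
    cond_means = [statistics.fmean(g) for g in groups if g]
    return statistics.pvariance(cond_means) / var_y

s1, s2 = first_order(x1), first_order(x2)
print(round(s1, 2), round(s2, 2))  # ≈0.9 and ≈0.1
```

Dedicated packages such as SALib implement the proper Saltelli sampling and also return total-order indices, which matter once parameter interactions are present.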

  18. Autotrophic denitrification supported by biotite dissolution in crystalline aquifers: (2) transient mixing and denitrification dynamic during long-term pumping.

    PubMed

    Roques, Clément; Aquilina, Luc; Boisson, Alexandre; Vergnaud-Ayraud, Virginie; Labasque, Thierry; Longuevergne, Laurent; Laurencelle, Marc; Dufresne, Alexis; de Dreuzy, Jean-Raynald; Pauwels, Hélène; Bour, Olivier

    2018-04-01

We investigated the mixing and dynamics of denitrification processes induced by long-term pumping in the crystalline aquifer of Ploemeur (Brittany, France). Hydrological and geochemical parameters have been continuously recorded over 15 boreholes in 5 km² over a 25-year period. This extensive spatial and temporal monitoring of conservative as well as reactive compounds is a key opportunity to identify aquifer-scale transport and reactive processes in crystalline aquifers. Time series of the conservative elements recorded at the pumped well were used to determine mixing fractions from different compartments of the aquifer, on the basis of a Principal Component Analysis coupled with an end-member mixing analysis. We show that pumping induces a thorough reorganization of fluxes known as capture, favoring infiltration and vertical fluxes in the recharge zone and upwelling of deep and distant water at long time scales. These mixing fractions were then used to quantify the extent of denitrification linked to pumping. Based on the results from batch experiments described in a companion paper, our computations revealed that i) autotrophic denitrification processes are dominant in this context where carbon sources are limited, ii) nitrate reduction does not come only from the oxidation of pyrite, as classically described in previous studies analyzing denitrification in similar contexts, and iii) biotite plays a critical role in sustaining the nitrate reduction process. Nitrate reduction, sulfate production and fluoride release ratios all support the hypothesis that biotite acts as a key electron donor in this context. The batch-to-site similarities support biotite availability and the role of bacterial communities as key controls of nitrate removal in such crystalline aquifers.
However, the long term data monitoring also indicates that mixing and reactive processes evolve extremely slowly at the scale of the decade. Copyright © 2017 Elsevier B.V. All rights reserved.
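End-member mixing analysis of the kind described above reduces, in its simplest form, to solving for mixing fractions that reproduce the conservative tracer concentrations while summing to one. A minimal sketch with three hypothetical end-members (the tracer values and the sample below are illustrative, not the Ploemeur data):

```python
import numpy as np

# rows: conservative tracers (e.g. Cl, Na, SiO2, mg/L); columns: end-members
# (e.g. recharge water, shallow groundwater, deep upwelling) - all illustrative
E = np.array([[20.0, 80.0, 150.0],
              [15.0, 60.0, 120.0],
              [ 5.0, 12.0,  25.0]])
sample = E @ np.array([0.5, 0.3, 0.2])  # synthetic mixed sample for the demo

# augment the system with the mass-balance constraint sum(f) = 1,
# weighted heavily so the least-squares solution honors it
A = np.vstack([E, 1e3 * np.ones(3)])
b = np.append(sample, 1e3)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(f, 3))  # ≈ [0.5, 0.3, 0.2]
```

Real applications add more tracers than end-members (making the system overdetermined) and typically run a PCA first to check how many end-members the data actually support.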

  19. Parameter as a Switch Between Dynamical States of a Network in Population Decoding.

    PubMed

    Yu, Jiali; Mao, Hua; Yi, Zhang

    2017-04-01

Population coding is a method of representing stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes given the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role in convergence. To address this problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.

  20. Imaging on a Shoestring: Cost-Effective Technologies for Probing Vadose Zone Transport Processes

    NASA Astrophysics Data System (ADS)

    Corkhill, C.; Bridge, J. W.; Barns, G.; Fraser, R.; Romero-Gonzalez, M.; Wilson, R.; Banwart, S.

    2010-12-01

    Key barriers to the widespread uptake of imaging technology for high spatial resolution monitoring of porous media systems are cost and accessibility. X-ray tomography, magnetic resonance imaging (MRI), gamma and neutron radiography require highly specialised equipment, controlled laboratory environments and/or access to large synchrotron facilities. Here we present results from visible light, fluorescence and autoradiographic imaging techniques developed at low cost and applied in standard analytical laboratories, adapted where necessary at minimal capital expense. UV-visible time lapse fluorescence imaging (UV-vis TLFI) in a transparent thin bed chamber enabled microspheres labelled with fluorescent dye and a conservative fluorophore solute (disodium fluorescein) to be measured simultaneously in saturated, partially-saturated and actively draining quartz sand to elucidate empirical values for colloid transport and deposition parameters distributed throughout the flow field, independently of theoretical approximations. Key results include the first experimental quantification of the effects of ionic strength and air-water interfacial area on colloid deposition above a capillary fringe, and the first direct observations of particle mobilisation and redeposition by moving saturation gradients during drainage. UV-vis imaging was also used to study biodegradation and reactive transport in a variety of saturated conditions, applying fluorescence as a probe for oxygen and nitrate concentration gradients, pH, solute transport parameters, reduction of uranium, and mapping of two-dimensional flow fields around a model dipole flow borehole system to validate numerical models. 
Costs are low: LED excitation sources (< US$50), flow chambers (US$200) and detectors (although a complete scientific-grade CCD set-up costs around US$8000, robust datasets can be obtained using a commercial digital SLR camera) mean that set-ups can be flexible to meet changing experimental requirements. The critical limitations of UV-vis fluorescence imaging are the need for reliable fluorescent probes suited to the experimental objective, and the reliance on thin-bed (2D) transparent porous media. Autoradiographic techniques address some of these limitations, permitting imaging of key biogeochemical processes in opaque media using radioactive probes, without the need for specialised radiation sources. We present initial calibration data for the use of autoradiography to monitor transport parameters for radionuclides (99-technetium), and a novel application of a radioactive salt tracer as a probe for pore water content, in model porous media systems.

  1. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  2. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  3. In-situ XRD and EDS method study on the oxidation behaviour of Ni-Cu sulphide ore.

    PubMed

    Li, Guangshi; Cheng, Hongwei; Xiong, Xiaolu; Lu, Xionggang; Xu, Cong; Lu, Changyuan; Zou, Xingli; Xu, Qian

    2017-06-12

    The oxidation mechanism of sulfides is the key issue during the sulphide-metallurgy process. In this study, the phase transformation and element migration were clearly demonstrated by in-situ laboratory-based X-ray diffraction (XRD) and energy-dispersive X-ray spectroscopy (EDS), respectively. The reaction sequence and a four-step oxidation mechanism were proposed and identified. The elemental distribution demonstrated that at a low temperature, the Fe atoms diffused outward and the Ni/Cu atoms migrated toward the inner core, whereas the opposite diffusion processes were observed at a higher temperature. Importantly, the unique visual presentation of the oxidation behaviour provided by the combination of in-situ XRD and EDS might be useful for optimising the process parameters to improve the Ni/Cu extraction efficiency during Ni-Cu sulphide metallurgy.

  4. Cryogenic Etching of High Aspect Ratio 400 nm Pitch Silicon Gratings.

    PubMed

    Miao, Houxun; Chen, Lei; Mirzaeimoghri, Mona; Kasica, Richard; Wen, Han

    2016-10-01

The cryogenic process and the Bosch process are two widely used processes for reactive ion etching of high-aspect-ratio silicon structures. This paper focuses on the cryogenic deep etching of 400 nm pitch silicon gratings with various etching mask materials including polymer, Cr, SiO2 and Cr-on-polymer. Undercut is found to be the key factor limiting the achievable aspect ratio for the direct hard masks of Cr and SiO2, while etch selectivity is the limiting factor for the polymer mask. The Cr-on-polymer mask provides the same high selectivity as Cr and reduces the excessive undercut introduced by direct hard masks. By optimizing the etching parameters, we etched a 400 nm pitch grating to ≈10.6 μm depth, corresponding to an aspect ratio of ≈53.

  5. Study of poly(L-lactide) microparticles based on supercritical CO2.

    PubMed

    Chen, Ai-Zheng; Pu, Xi-Ming; Kang, Yun-Qing; Liao, Li; Yao, Ya-Dong; Yin, Guang-Fu

    2007-12-01

Poly(L-lactide) (PLLA) microparticles were prepared using a supercritical anti-solvent process. The effects of several key factors on surface morphology, particle size and particle size distribution were investigated. These factors included initial droplet size, saturation ratio of the PLLA solution, pressure, temperature, concentration of the organic solution, flow rate of the solution and molecular weight of PLLA. The results indicated that the saturation ratio of the PLLA solution, the concentration of the organic solution and the flow rate of the solution played important roles in the properties of the products. Various microparticles, with mean particle sizes ranging from 0.64 to 6.64 µm, could be prepared by adjusting the operational parameters. Fine microparticles were obtained using the solution-enhanced dispersion by supercritical fluids (SEDS) process with a dichloromethane/acetone mixture as the solvent.

  6. Resilience of Key Biological Parameters of the Senegalese Flat Sardinella to Overfishing and Climate Change.

    PubMed

    Ba, Kamarel; Thiaw, Modou; Lazar, Najih; Sarr, Alassane; Brochier, Timothée; Ndiaye, Ismaïla; Faye, Alioune; Sadio, Oumar; Panfili, Jacques; Thiaw, Omar Thiom; Brehmer, Patrice

    2016-01-01

The stock of the Senegalese flat sardinella, Sardinella maderensis, is highly exploited in Senegal, West Africa. Its growth and reproduction parameters are key biological indicators for improving fisheries management. This study reviewed these parameters using landing data from small-scale fisheries in Senegal and literature information dating back more than 25 years. Age was estimated using length-frequency data to calculate growth parameters and assess the growth performance index. With global climate change there has been an increase in the average sea surface temperature along the Senegalese coast, but the length-weight parameters, sex ratio, size at first sexual maturity, period of reproduction and condition factor of S. maderensis have not changed significantly. These parameters have hardly changed despite high exploitation and fluctuations in the environmental conditions that affect the early development phases of small pelagic fish in West Africa. This lack of plasticity with respect to the biological parameters studied should be considered when drawing up relevant fishery management plans.
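Growth comparisons of this kind commonly rest on the von Bertalanffy growth function and the growth performance index phi' = log10(K) + 2·log10(L∞). A short sketch with illustrative parameter values (not the study's estimates):

```python
import math

def growth_performance(K, Linf):
    # phi-prime index, used to compare growth across stocks and studies
    return math.log10(K) + 2.0 * math.log10(Linf)

def vbgf_length(t, K, Linf, t0=0.0):
    # von Bertalanffy growth function: length at age t (years)
    return Linf * (1.0 - math.exp(-K * (t - t0)))

K, Linf = 0.6, 30.0  # growth rate (yr^-1) and asymptotic length (cm), hypothetical
print(round(growth_performance(K, Linf), 2))  # → 2.73
print(round(vbgf_length(2.0, K, Linf), 1))    # → 21.0 cm at age 2
```

Because phi' is roughly invariant within a species, estimates from new length-frequency data can be checked against historical literature values this way.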

  7. Influence of key processing parameters and seeding density effects of microencapsulated chondrocytes fabricated using electrohydrodynamic spraying.

    PubMed

    Gansau, Jennifer; Kelly, Lara; Buckley, Conor

    2018-06-11

Cell delivery and leakage during injection remains a challenge for cell-based intervertebral disc regeneration strategies. Cellular microencapsulation may offer a promising approach to overcome these limitations by providing a protective niche during intradiscal injection. Electrohydrodynamic spraying (EHDS) is a versatile one-step approach for microencapsulation of cells using a high voltage electric field. The primary objective of this work was to characterise key processing parameters such as applied voltage (0, 5, 10 or 15 kV), emitter needle gauge (21, 26 or 30G), alginate concentration (1, 2 or 3%) and flow rate (50, 100, 250 or 500 µl/min) to regulate the morphology of alginate microcapsules and subsequent cell viability when altering these parameters. The effect of initial cell seeding density (5, 10 and 20×10⁶ cells/ml) on subsequent matrix accumulation of microencapsulated articular chondrocytes was also evaluated. Results showed that increasing alginate concentration, and thus viscosity, increased overall microcapsule size but also shifted the geometry towards ellipsoidal-shaped gels. Altering the electric field strength and needle diameter regulated microcapsule size, with smaller diameters obtained at higher voltage and with smaller needles. Needle size did not appear to affect cell viability when operating with lower alginate concentrations (1% and 2%), although higher concentrations (3%), and thus higher viscosity hydrogels, resulted in diminished viability with decreasing needle diameter. Increasing cell density resulted in decreased cell viability and a concomitant decrease in DNA content, perhaps due to competing nutrient demands as a result of more closely packed cells. However, higher cell densities resulted in increased levels of accumulated extracellular matrix.
Overall, this work highlights the potential of EHDS as a controllable and versatile approach to fabricate microcapsules for injectable delivery, which can be used in a variety of applications such as drug development or cell therapies. © 2018 IOP Publishing Ltd.

  8. Blasting Damage Predictions by Numerical Modeling in Siahbishe Pumped Storage Powerhouse

    NASA Astrophysics Data System (ADS)

    Eslami, Majid; Goshtasbi, Kamran

    2018-04-01

One of the popular methods of underground and surface excavation is blasting. In this method of excavation, the loading resulting from blasting can be affected by different geo-mechanical and structural parameters of the rock mass. Several factors cause disturbance in underground structures, among them the explosion, vibration, and stress impulses produced by neighbouring blasting. In investigating the blasting mechanism one should address the processes which expand with time and cause seismic events. To protect adjoining structures against probable damage or destruction, it is very important to model the blasting process prior to any actual operation. The present study demonstrates the potential of numerical methods in predicting the specified parameters in order to prevent probable destruction. For this purpose the blasting process was modeled, according to its natural implementation, in one of the tunnels of the Siahbishe dam with the 3DEC and AUTODYN 3D codes: 3DEC was used for modeling the blasting environment and the blast holes, and AUTODYN 3D for modeling the explosion process in the blast hole. In this procedure the output of AUTODYN 3D, which results from modeling the blast hole and takes the form of stress waves, is entered into 3DEC. For analyzing the damage caused by the blasting operation, the key parameter of Peak Particle Velocity (PPV) was used. Finally, the numerical modeling results were compared with the data recorded by seismographs installed along the tunnel. The results indicated that 3DEC and AUTODYN 3D are appropriate for analyzing such problems. By means of these two software packages one can analyze explosion processes prior to their implementation and closely estimate the resulting damage.
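Peak Particle Velocity is commonly related to charge weight and distance through the empirical scaled-distance attenuation law PPV = K·(D/√Q)^(−b), fitted to seismograph records. A minimal sketch, with hypothetical site constants K and b (these must be fitted per site and are not taken from the study):

```python
import math

def ppv_mm_s(distance_m, charge_kg, K=1140.0, b=1.6):
    # square-root scaled distance D/sqrt(Q), then power-law attenuation;
    # K and b are site-specific constants (hypothetical values here)
    scaled_distance = distance_m / math.sqrt(charge_kg)
    return K * scaled_distance ** (-b)

# predicted PPV (mm/s) 50 m from a 100 kg charge
print(round(ppv_mm_s(50.0, 100.0), 1))
```

Fitting K and b to measured PPV-versus-scaled-distance pairs (e.g. by linear regression in log-log space) is the usual complement to, or sanity check on, full numerical models like the 3DEC/AUTODYN coupling described above.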

  9. First-principles modeling of laser-matter interaction and plasma dynamics in nanosecond pulsed laser shock processing

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongyang; Nian, Qiong; Doumanidis, Charalabos C.; Liao, Yiliang

    2018-02-01

Nanosecond pulsed laser shock processing (LSP) techniques, including laser shock peening, laser peen forming, and laser shock imprinting, have been employed for widespread industrial applications. In these processes, the main beneficial characteristic is the laser-induced shockwave with a high pressure (on the order of GPa), which leads to plastic deformation at an ultrahigh strain rate (10⁵-10⁶ s⁻¹) on the surface of the target material. Although LSP processes have been extensively studied experimentally, few efforts have been devoted to elucidating the underlying process mechanisms through a physics-based process model. In particular, the development of a first-principles model is critical for process optimization and novel process design. This work introduces such a theoretical model for a fundamental understanding of process mechanisms in LSP. Emphasis is placed on the laser-matter interaction and plasma dynamics. The model is found to offer capabilities in predicting key parameters including electron and ion temperatures, plasma state variables (temperature, density, and pressure), and the propagation of the laser shockwave. The modeling results were validated by experimental data.
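For a quick order-of-magnitude cross-check of peak pressures in confined-regime LSP, practitioners often use the classical closed-form estimate of Fabbro et al., P[GPa] = 0.01·√(α/(2α+3))·√Z·√I₀, with Z the reduced acoustic impedance. This is a standard analytical model, not the first-principles plasma model described in the record, and the impedance and intensity values below are illustrative:

```python
import math

def peak_pressure_gpa(I0_gw_cm2, Z1, Z2, alpha=0.25):
    # Z1, Z2: acoustic impedances of target and confining medium (g cm^-2 s^-1);
    # alpha: fraction of internal energy devoted to thermal energy (typical ~0.25)
    Z = 2.0 / (1.0 / Z1 + 1.0 / Z2)  # reduced impedance of the interface
    return (0.01 * math.sqrt(alpha / (2.0 * alpha + 3.0))
            * math.sqrt(Z) * math.sqrt(I0_gw_cm2))

# steel target (Z1 ≈ 3.6e6) under water confinement (Z2 ≈ 1.65e5) at 5 GW/cm^2
print(round(peak_pressure_gpa(5.0, 3.6e6, 1.65e5), 2))  # a few GPa
```

The √I₀ scaling explains why doubling the laser intensity raises the shock pressure by only about 40%, a useful rule of thumb when sizing experiments.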

  10. Increasing molecular weight parameters of a helical polymer through polymerization in a chiral solvent.

    PubMed

    Holder, Simon J; Achilleos, Mariliz; Jones, Richard G

    2006-09-27

In this communication we demonstrate that polymerization in a chiral solvent can affect the molecular weight distribution of the product by perturbing the balance of the P and M helical screw senses of the growing chains. Specifically, for the Wurtz-type synthesis of polymethylphenylsilane (PMPS) in either (R)- or (S)-limonene, the weight-average molecular weight of the products (average Mw = 80 000) was twice that of PMPS synthesized in (R/S)-limonene (average Mw = 39 200). Perturbation of the helical segmentation along the polymer chains leads to a reduction in the rate of occurrence of a key termination step. This is the first time that a chiral solvent has been demonstrated to have such an effect on a polymerization process, affecting molecular weight parameters rather than tacticity.

  11. Molecular parameters of head and neck cancer metastasis

    PubMed Central

    Bhave, Sanjay L.; Teknos, Theodoros N.; Pan, Quintin; James, Arthur G.; Solove, Richard J.

    2011-01-01

    Metastasis remains a major cause of mortality in patients with head and neck squamous cell carcinoma (HNSCC). HNSCC patients with metastatic disease have extremely poor prognosis with survival rate of less than a year. Metastasis is an intricate sequential process which requires a discrete population of tumor cells to possess the capacity to intravasate from the primary tumor into systemic circulation, survive in circulation, extravasate at a distant site, and proliferate in a foreign hostile environment. Literature has accumulated to provide mechanistic insight into several signal transduction pathways, receptor tyrosine kinases (RTKs), signal transducer and activator of transcription 3 (Stat3), Rho GTPases, protein kinase Cε (PKCε), and nuclear factor-κB (NF-κB), that are involved in mediating a metastatic tumor cell phenotype in HNSCC. Here we highlight accrued information regarding the key molecular parameters of HNSCC metastasis. PMID:22077153

  12. Effects of Nano Additives in engine emission Characteristics using Blends of Lemon Balm oil with Diesel

    NASA Astrophysics Data System (ADS)

    Senthil kumar, J.; Ganesan, S.; Sivasaravanan, S.; Padmanabhan, S.; Krishnan, L.; Aniruthan, V. C.

    2017-05-01

Economic growth in developing countries has led to an enormous increase in energy demand. In India, energy demand is increasing at a rate of 6.5% every year, and about 70% of the country's crude oil demand is met by imports. Energy security has therefore become a key issue for the country. Biodiesel, an eco-friendly and renewable alternative to diesel, has been attracting the attention of researchers all over the world. The main aim of this paper is to evaluate engine parameters using blends of pure lemon balm oil with diesel. Nano additives are also used as a catalyst with the biofuel blends to enhance the emission characteristics for gases such as CO2, NOx, CO and UHC at various levels of engine process parameters.

  13. MIXI: Mobile Intelligent X-Ray Inspection System

    NASA Astrophysics Data System (ADS)

    Arodzero, Anatoli; Boucher, Salime; Kutsaev, Sergey V.; Ziskin, Vitaliy

    2017-07-01

    A novel, low-dose Mobile Intelligent X-ray Inspection (MIXI) concept is being developed at RadiaBeam Technologies. The MIXI concept relies on a linac-based, adaptive, ramped energy source of short X-ray packets of pulses, a new type of fast X-ray detector, rapid processing of detector signals for intelligent control of the linac, and advanced radiography image processing. The key parameters for this system include: better than 3 mm line pair resolution; penetration greater than 320 mm of steel equivalent; scan speed with 100% image sampling rate of up to 15 km/h; and material discrimination over a range of thicknesses up to 200 mm of steel equivalent. Its minimal radiation dose, size and weight allow MIXI to be placed on a lightweight truck chassis.

  14. Climate Change Tower Integrated Project (CCT-IP) A scientific platform to investigate processes at the surface and in the low troposphere

    NASA Astrophysics Data System (ADS)

    Vitale, Vito; Udisti, Roberto

    2010-05-01

V. Vitale, R. Udisti, A. Viola, S. Argentini, M. Nardino, C. Lanconelli, M. Mazzola, T. Georgiadis, R. Salvatori, A. Ianniello, C. Turetta, C. Barbante, F. Spataro, M. Valt, F. Cairo, L. Diliberto, S. Becagli, R. Sparapani, R. Casacchia. To improve parameterization and reduce uncertainties in climate models, experimental measurements are needed to deepen knowledge of the complex physico-chemical processes that characterize the Arctic troposphere and the air-sea-land interaction. The Svalbard Islands, located at the northernmost margin of the southern warm current of the Atlantic Ocean, lie in an ideal position to study the combined effects of climate change affecting the atmosphere, as well as the ocean and land. Furthermore, Ny-Ålesund represents a unique site, where international cooperation among countries allows the monitoring of a large number of key parameters of the Arctic physical and chemical systems. Based on these remarks, since 2008 the CNR Earth and Environment Department has sustained and funded the Climate Change Tower Integrated Project (CCT-IP) in the Kongsfjorden area, aiming to set up a scientific platform at the Italian station "Dirigibile Italia" in Ny-Ålesund. This platform will complement research activities provided by other national (MIUR-PRIN07) and international research programs. In the framework of this project, it was planned to obtain a comprehensive data set of physical and chemical atmospheric parameters, useful to determine all components of the energy budget at the surface, their temporal variations, and the role played by different processes involving air, aerosol, snow, ice and land (permafrost and vegetation). The key element of this platform is the new 32 m high Amundsen-Nobile Climate Change Tower (CCT), which will allow in-depth investigation of the energy budget, the atmospheric boundary layer dynamics, and the exchange fluxes (heat, momentum, chemical substances) at the surface.
A first set of instruments to measure the radiation balance, surface albedo, the vertical profile of meteorological parameters and the heat flux at the air-snow interface was installed in September 2009. The on-site measurements run continuously, and the data are sent to Italy via an internet connection and stored in a comprehensive database. A six-month intensive field campaign will start in March 2010 to measure the physical characteristics and chemical composition of the aerosol and snow, the downward and upward mass fluxes of aerosols and gaseous substances, and short-lived pollutants (SLPs). These measurements will improve our knowledge of the processes controlling the sources, transport and atmospheric transformation of chemical compounds in snow and aerosol, useful as environmental and climatic markers, and will highlight the importance of local surface processes with respect to large-scale transport processes.

  15. Optimal Cytoplasmic Transport in Viral Infections

    PubMed Central

    D'Orsogna, Maria R.; Chou, Tom

    2009-01-01

    For many viruses, the ability to infect eukaryotic cells depends on their transport through the cytoplasm and across the nuclear membrane of the host cell. During this journey, viral contents are biochemically processed into complexes capable of both nuclear penetration and genomic integration. We develop a stochastic model of viral entry that incorporates all relevant aspects of transport, including convection along microtubules, biochemical conversion, degradation, and nuclear entry. Analysis of the nuclear infection probabilities in terms of the transport velocity, degradation, and biochemical conversion rates shows how certain values of key parameters can maximize the nuclear entry probability of the viral material. The existence of such “optimal” infection scenarios depends on the details of the biochemical conversion process and implies potentially counterintuitive effects in viral infection, suggesting new avenues for antiviral treatment. Such optimal parameter values provide a plausible transport-based explanation of the action of restriction factors and of experimentally observed optimal capsid stability. Finally, we propose a new interpretation of how genetic mutations unrelated to the mechanism of drug action may nonetheless confer novel types of overall drug resistance. PMID:20046829
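The existence of an optimal conversion rate can be reproduced with a very small Monte Carlo sketch of the transport picture above: a particle in transit must convert en route to be able to enter the nucleus, but the converted form degrades faster, so intermediate conversion rates win. All rates and the transit time below are illustrative, not the paper's fitted values:

```python
import random

random.seed(1)

def entry_probability(k, transit=1.0, deg_unconverted=0.1,
                      deg_converted=2.0, trials=20000):
    # k: biochemical conversion rate; degradation rates differ between the
    # unconverted and converted forms (converted degrades faster here)
    success = 0
    for _ in range(trials):
        t_convert = random.expovariate(k)
        if t_convert > transit:
            continue  # arrived unconverted: cannot penetrate the nucleus
        # must survive the unconverted phase, then the converted phase
        alive = (random.expovariate(deg_unconverted) > t_convert and
                 random.expovariate(deg_converted) > transit - t_convert)
        success += alive
    return success / trials

probs = {k: entry_probability(k) for k in (0.2, 2.0, 20.0)}
print({k: round(p, 3) for k, p in probs.items()})  # peaked at the middle rate
```

Too slow a conversion rate means most particles arrive unconverted; too fast a rate maximizes exposure to the faster degradation channel, which is the intuition behind the "optimal capsid stability" observation.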

  16. A sensitivity analysis of a surface energy balance model to LAI (Leaf Area Index)

    NASA Astrophysics Data System (ADS)

    Maltese, A.; Cannarozzo, M.; Capodici, F.; La Loggia, G.; Santangelo, T.

    2008-10-01

The LAI is a key parameter in hydrological processes, especially in physically based distributed models. It is a critical ecosystem attribute, since physiological processes such as photosynthesis, transpiration and evaporation depend on it. The diffusion of water vapor, momentum, heat and light through the canopy is regulated by the distribution and density of the leaves, branches, twigs and stems. The LAI influences the sensible heat flux H in single-source surface energy balance models through the calculation of the roughness length and of the displacement height. The aerodynamic resistance between the soil and the within-canopy source height is a function of the LAI through the roughness length. In this research, a sensitivity analysis of some of the most important parameters of surface energy balance models to the time variation of the LAI was carried out, in order to take into account the effects of LAI variation over the phenological period. Finally, empirical relationships retrieved between field spectroradiometric data and LAI measured in the field with a light-sensitive instrument are presented for a cereal field.
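As a sketch of how LAI can enter H through the roughness length and displacement height (not necessarily the parameterization used in this study), the Choudhury and Monteith (1988) LAI-dependent relations can be combined with the neutral log wind profile; the canopy height, drag coefficient and soil roughness values below are assumptions.

```python
import math

KARMAN = 0.41  # von Karman constant

def roughness_params(lai, h, cd=0.2, z0_soil=0.01):
    """LAI-dependent displacement height d and roughness length z0 [m]
    (after Choudhury & Monteith, 1988); h is canopy height [m], cd a
    mean leaf drag coefficient, z0_soil the bare-soil roughness [m]."""
    x = cd * lai
    d = 1.1 * h * math.log(1.0 + x ** 0.25)
    if x < 0.2:
        z0 = z0_soil + 0.3 * h * math.sqrt(x)   # sparse canopy
    else:
        z0 = 0.3 * h * (1.0 - d / h)            # dense canopy
    return d, z0

def aerodynamic_resistance(u, z, d, z0):
    """Neutral-stability aerodynamic resistance [s m^-1] from the log
    wind profile: wind speed u [m s^-1] at measurement height z [m]."""
    return math.log((z - d) / z0) ** 2 / (KARMAN ** 2 * u)
```

Evaluating these functions over a range of LAI values (e.g. 0.5 to 3 for a growing cereal canopy) is one way to propagate the phenological LAI variation into the aerodynamic resistance, and hence into H.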

  17. On the mutual relationship between conceptual models and datasets in geophysical monitoring of volcanic systems

    NASA Astrophysics Data System (ADS)

    Neuberg, J. W.; Thomas, M.; Pascal, K.; Karl, S.

    2012-04-01

Geophysical datasets are essential for guiding short-term forecasting of volcanic activity in particular. Key parameters are derived from these datasets and interpreted in different ways; however, the biggest impact on the interpretation is determined not by the range of parameters but by the parameterisation and the underlying conceptual model of the volcanic process. On the other hand, the increasing number of sophisticated geophysical models needs to be constrained by monitoring data, to transform a merely numerical exercise into a useful forecasting tool. We utilise datasets from the "big three", seismology, deformation and gas emissions, to gain insight into the mutual relationship between conceptual models and constraining data. We show that, for example, the same seismic dataset can be interpreted with respect to a wide variety of different models with very different implications for forecasting. In turn, different data processing procedures lead to different outcomes even though they are based on the same conceptual model. Unsurprisingly, the most reliable interpretation will be achieved by employing multi-disciplinary models with overlapping constraints.

  18. 3D Simulation of Multiple Simultaneous Hydraulic Fractures with Different Initial Lengths in Rock

    NASA Astrophysics Data System (ADS)

    Tang, X.; Rayudu, N. M.; Singh, G.

    2017-12-01

Hydraulic fracturing is a widely used technique for extracting shale gas. During this process, fractures with various initial lengths are induced in the rock mass by hydraulic pressure. Understanding the mechanism of propagation and interaction between these induced hydraulic cracks is critical for optimizing the fracking process. In this work, numerical results are presented investigating the effect of in-situ parameters and fluid properties on the growth and interaction of multiple simultaneous hydraulic fractures. A fully coupled 3D fracture simulator, TOUGH-GFEM, is used to simulate the effect of several vital parameters, including in-situ stress, initial fracture length, fracture spacing, fluid viscosity and flow rate, on the growth of induced hydraulic fractures. The TOUGH-GFEM simulator is based on the 3D finite volume method (FVM) and the partition of unity element method (PUM). The displacement correlation method (DCM) is used to calculate multi-mode (Mode I, II, III) stress intensity factors, and the maximum principal stress criterion is used for crack propagation. Key words: hydraulic fracturing, TOUGH, partition of unity element method, displacement correlation method, 3D fracturing simulator
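The displacement correlation method mentioned above extracts stress intensity factors from computed near-tip displacements. A minimal Mode-I sketch, assuming the standard LEFM near-tip crack-opening field rather than the simulator's actual implementation:

```python
import math

def k1_dcm(delta_v, r, E, nu, plane_strain=True):
    """Mode-I stress intensity factor from the crack-opening
    displacement delta_v measured at a small distance r behind the
    crack tip (displacement correlation against the LEFM near-tip
    field: delta_v = K_I * (kappa + 1) / mu * sqrt(r / (2*pi)))."""
    mu = E / (2.0 * (1.0 + nu))                       # shear modulus
    kappa = 3.0 - 4.0 * nu if plane_strain else (3.0 - nu) / (1.0 + nu)
    return mu * delta_v / (kappa + 1.0) * math.sqrt(2.0 * math.pi / r)
```

In a PUM/FVM code the same correlation would be applied to nodal displacement jumps on the crack faces, with analogous expressions (using the sliding and tearing displacement components) for the Mode II and Mode III factors.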

  19. Key process parameters involved in the treatment of olive mill wastewater by membrane bioreactor.

    PubMed

    Jaouad, Y; Villain-Gambier, M; Mandi, L; Marrot, B; Ouazzani, N

    2018-04-18

Olive Mill Wastewater (OMWW) biodegradation in an external ceramic membrane bioreactor (MBR) was investigated, with a starting acclimation step using an Ultrafiltration (UF) membrane (150 kDa) and no sludge discharge in order to develop a specific biomass adapted to OMWW biodegradation. After the acclimation step, the UF membrane was replaced by a Microfiltration (MF) membrane (0.1 µm). The Sludge Retention Time (SRT) was set at around 25 days and the Food to Microorganisms ratio (F/M) was fixed at 0.2 kg COD kg MLVSS^-1 d^-1. At steady state, removal of the main phenolic compounds (hydroxytyrosol and tyrosol) and of Chemical Oxygen Demand (COD) was successfully achieved (95% for both). Considered a predominant fouling factor, but never before quantified in MBR-treated OMWW, the Soluble Microbial Products (SMP) protein, polysaccharide and humic substance concentrations were determined (80, 110 and 360 mg L^-1 respectively). At the same time, fouling was easily managed thanks to the favourable hydraulic conditions of the external ceramic MBR. Therefore, OMWW can be efficiently and durably treated by an MF MBR process under adapted operating parameters.
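The two operating parameters fixed in this study, F/M and SRT, follow from standard mass-balance definitions. A short sketch of those definitions (the flow, volume and concentration values in the usage below are hypothetical, chosen only to reproduce the reported set points):

```python
def food_to_microorganism(q_in, cod_in, volume, mlvss):
    """F/M ratio [kg COD kg MLVSS^-1 d^-1]: daily influent COD load
    divided by the biomass inventory. q_in [m3 d^-1], cod_in and
    mlvss [kg m^-3], volume [m3]."""
    return q_in * cod_in / (volume * mlvss)

def sludge_retention_time(volume, mlvss, q_waste, mlvss_waste):
    """SRT [d]: biomass inventory divided by the daily biomass
    wastage (membrane permeate assumed solids-free, as in an MBR)."""
    return volume * mlvss / (q_waste * mlvss_waste)
```

For example, a hypothetical 10 m3 reactor fed 1 m3 d^-1 at 2 kg COD m^-3 with 1 kg MLVSS m^-3 gives F/M = 0.2 kg COD kg MLVSS^-1 d^-1, and wasting 0.4 m3 d^-1 of mixed liquor from that reactor gives an SRT of 25 d, matching the set points above.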

  20. ISRU System Model Tool: From Excavation to Oxygen Production

    NASA Technical Reports Server (NTRS)

    Santiago-Maldonado, Edgardo; Linne, Diane L.

    2007-01-01

In the late 1980s, conceptual designs for an in situ oxygen production plant were documented in a study by Eagle Engineering [1]. In the "Summary of Findings" of this study, it is clearly pointed out that "reported process mass and power estimates lack a consistent basis to allow comparison." The study goes on to say: "A study to produce a set of process mass, power, and volume requirements on a consistent basis is recommended." Today, approximately twenty years later, as humans plan to return to the moon and venture beyond, the need for flexible, up-to-date models of the oxygen extraction and production process has become even clearer. Multiple processes for the production of oxygen from lunar regolith are being investigated by NASA, academia, and industry. Three processes that have shown technical merit are molten regolith electrolysis, hydrogen reduction, and carbothermal reduction. These processes have been selected by NASA as the basis for the development of the ISRU System Model Tool (ISMT). In working to develop up-to-date system models for these processes, NASA hopes to accomplish the following: (1) help in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the excavation and oxygen production processes, and (4) provide estimates of energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters. Also, as confidence and high fidelity are achieved with each component's model, new techniques and processes can be introduced and analyzed at a fraction of the cost of traditional hardware development and test approaches. A first-generation ISRU System Model Tool has been used to provide inputs to the Lunar Architecture Team studies.
