Sample records for process parameters needed

  1. Process Development of Porcelain Ceramic Material with Binder Jetting Process for Dental Applications

    NASA Astrophysics Data System (ADS)

    Miyanaji, Hadi; Zhang, Shanshan; Lassell, Austin; Zandinejad, Amirali; Yang, Li

    2016-03-01

    Custom ceramic structures hold significant potential in many applications, such as dentistry and aerospace, where extreme environments are present. Specifically, highly customized geometries with adequate performance are needed for various dental prosthesis applications. This paper demonstrates the development of process and post-process parameters for a dental porcelain ceramic material using binder jetting additive manufacturing (AM). Process parameters such as binder amount, drying power level, drying time and powder spread speed were studied experimentally for their effect on the geometrical and mechanical characteristics of green parts. In addition, the effects of sintering and printing parameters on the quality of the densified ceramic structures were also investigated experimentally. The results provide insights into the process-property relationships of the binder jetting AM process, and the challenges that need to be further characterized for the successful adoption of binder jetting in high-quality ceramic fabrication are discussed.

  2. Nondimensional parameter for conformal grinding: combining machine and process parameters

    NASA Astrophysics Data System (ADS)

    Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.

    1999-11-01

    Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper, a simple model which permits combination of machine stiffness and process parameters into a single non-dimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.
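
    The abstract's central idea, keeping a stiffness-related parameter above a critical value so that tool deflection does not corrupt the programmed removal depth, can be illustrated with a very simple load-deflection sketch. This is my own illustration (a Hooke's-law deflection with an assumed proportionality between load and volumetric removal rate), not the non-dimensional parameter derived in the paper.

    ```python
    # Illustrative only: assumes grinding load ~ k_p * (volumetric removal rate) and
    # tool deflection = load / stiffness. Units and coefficient values are made up.
    def removal_error(feed_mm_s, width_mm, depth_mm, stiffness_N_per_mm, k_p=2.0):
        load = k_p * feed_mm_s * width_mm * depth_mm   # assumed load model [N]
        return load / stiffness_N_per_mm               # deflection = depth not removed [mm]

    def max_feed_for_tolerance(tol_mm, width_mm, depth_mm, stiffness_N_per_mm, k_p=2.0):
        """Largest feed rate that keeps the removal error within tolerance."""
        return tol_mm * stiffness_N_per_mm / (k_p * width_mm * depth_mm)

    # Example: adjust the permissible feed rate as the local cut width changes.
    for width in (1.0, 2.0, 4.0):
        print(width, max_feed_for_tolerance(0.001, width, 0.05, 5000.0))
    ```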

  3. Optimization of injection molding process parameters for a plastic cell phone housing component

    NASA Astrophysics Data System (ADS)

    Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya

    2016-11-01

    Injection molding is one of the most widely used processes for producing thin-walled plastic parts. However, setting optimal process parameters is difficult, as poor settings can produce defects such as shrinkage in the molded part. This study aims to determine optimum injection molding process parameters that reduce the shrinkage defect in a plastic cell phone cover. With the machine settings currently in use, the process produced shrinkage, with part length and width falling below the specified limits. Thus, further experiments were needed to identify optimum process parameters that keep the length and width close to their targets with minimal variation. Mold temperature, injection pressure and screw rotation speed were used as the process parameters in this research. Response Surface Methodology (RSM) was applied to find the optimal molding process parameters, and the major factors influencing the responses were identified using analysis of variance (ANOVA). Verification runs showed that the shrinkage defect can be minimized with the optimal settings found by RSM.
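
    As a rough illustration of the response-surface step described above, a second-order surface can be fitted to designed-experiment results and then minimized to estimate the optimal settings. The sketch below is not taken from the paper; the factor ranges and shrinkage values are invented placeholders.

    ```python
    # Hedged sketch: fit a quadratic response surface for shrinkage as a function of
    # mold temperature, injection pressure and screw rotation speed, then search for
    # the settings that minimize the predicted shrinkage. Data are illustrative only.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from scipy.optimize import minimize

    X = np.array([[60, 80, 100], [60, 80, 150], [60, 100, 100], [60, 100, 150],
                  [80, 80, 100], [80, 80, 150], [80, 100, 100], [80, 100, 150],
                  [70, 90, 125], [70, 90, 125]])                       # factor settings
    y = np.array([0.42, 0.38, 0.36, 0.33, 0.35, 0.31, 0.30, 0.27,
                  0.28, 0.29])                                          # measured shrinkage (%)

    poly = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(poly.fit_transform(X), y)

    def predicted_shrinkage(x):
        return model.predict(poly.transform(x.reshape(1, -1)))[0]

    res = minimize(predicted_shrinkage, x0=[70, 90, 125],
                   bounds=[(60, 80), (80, 100), (100, 150)])
    print("optimal settings:", res.x, "predicted shrinkage:", res.fun)
    ```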

  4. Focusing the research agenda for simulation training visual system requirements

    NASA Astrophysics Data System (ADS)

    Lloyd, Charles J.

    2014-06-01

    Advances in the capabilities of display-related technologies with potential uses in simulation training devices continue to occur at a rapid pace. Simultaneously, ongoing reductions in defense spending stimulate the services to push a higher proportion of training into ground-based simulators to reduce their operational costs. These two trends result in increased customer expectations and desires for more capable training devices, while the money available for these devices is decreasing. Thus, there exists an increasing need to improve the efficiency of the acquisition process and to increase the probability that users get the training devices they need at the lowest practical cost. In support of this need, the IDEAS program was initiated in 2010 with the goal of improving display system requirements, which have been associated with unmet user needs and expectations and with disrupted acquisitions. This paper describes a process of identifying, rating, and selecting the design parameters that should receive research attention. Analyses of existing requirements documents reveal that between 40 and 50 specific design parameters (i.e., resolution, contrast, luminance, field of view, frame rate, etc.) are typically called out for the acquisition of a simulation training display system. Obviously no research effort can address the effects of this many parameters. Thus, we developed a defensible strategy for focusing limited R&D resources on a fraction of these parameters. This strategy encompasses six criteria to identify the parameters most worthy of research attention. Examples based on display design parameters recommended by stakeholders are provided.

  5. DOE Program on Seismic Characterization for Regions of Interest to CTBT Monitoring

    DTIC Science & Technology

    1995-08-14

    processing of the monitoring network data). While developing and testing the corrections and other parameters needed by the automated processing systems...the secondary network. Parameters tabulated in the knowledge base must be appropriate for routine automated processing of network data, and must also...operation of the PNDC, as well as to results of investigations of "special events" (i.e., those events that fail to locate or discriminate during automated

  6. Effects of the Deslagging Process on some Physicochemical Parameters of Honey

    PubMed Central

    Ranjbar, Ali Mohammad; Sadeghpour, Omid; Khanavi, Mahnaz; Shams Ardekani, Mohammad Reza; Moloudian, Hamid; Hajimahmoodi, Mannan

    2015-01-01

    Some physicochemical parameters of honey have been introduced by the International Honey Commission to evaluate its quality and origin, but processes such as heating and filtering can affect these parameters. In traditional Iranian medicine, the deslagging process involves boiling honey in an equal volume of water and removing the slag formed during the process. The aim of this study was to determine the effects of the deslagging process on the parameters of color intensity, diastase activity, electrical conductivity, pH, free acidity, refractive index, hydroxymethylfurfural (HMF), proline and water content according to the International Honey Commission (IHC) standards. The results showed that deslagged honey was significantly different from control honey in terms of color intensity, pH, diastase number, HMF and proline content. It can be concluded that new standards are needed to regulate deslagged honey. PMID:25901175

  7. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
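
    For example, the rate constant of an nth-order reaction dC/dt = -kC^n can be estimated by fitting a numerical solution of the ODE to concentration-time data. The sketch below (synthetic data, n = 2 assumed) shows one way to do this in Python; it illustrates the general idea rather than reproducing the article's classroom examples.

    ```python
    # Minimal sketch of estimating the rate constant k in dC/dt = -k * C**n
    # from concentration-time data (data here are synthetic, n = 2 assumed).
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    t_data = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    c_data = np.array([1.00, 0.66, 0.50, 0.33, 0.20])
    n, c0 = 2, 1.0

    def model(t, k):
        sol = solve_ivp(lambda t, c: -k * c**n, (0, t.max()), [c0],
                        t_eval=t, rtol=1e-8)
        return sol.y[0]

    k_hat, cov = curve_fit(model, t_data, c_data, p0=[0.1])
    print("estimated k:", k_hat[0], "+/-", np.sqrt(cov[0, 0]))
    ```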

  8. The Use of Logistics in the Quality Parameters Control System of Material Flow

    ERIC Educational Resources Information Center

    Karpova, Natalia P.; Toymentseva, Irina A.; Shvetsova, Elena V.; Chichkina, Vera D.; Chubarkova, Elena V.

    2016-01-01

    The relevance of the research problem stems from the need to justify the use of logistics methodologies in controlling the quality parameters of material flows. The goal of the article is to develop theoretical principles and practical recommendations for logistics-based control of the quality parameters of material flows. A leading…

  9. How does higher frequency monitoring data affect the calibration of a process-based water quality model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah; Helliwell, Rachel

    2015-04-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low frequency timeseries of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed which capture system variability, in particular nutrient dynamics during high flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needs during calibration.
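
    The 95% credible intervals quoted above (26 vs. 6 μg/l) are the kind of summary that comes directly from the retained MCMC samples. A minimal sketch of that calculation is shown below; the array of posterior-predictive TDP samples is a synthetic placeholder, not INCA-P or MCMC-DREAM output.

    ```python
    # Sketch: width of the 95% credible interval of simulated TDP, computed from
    # posterior-predictive samples (one row per retained MCMC sample, one column per day).
    import numpy as np

    rng = np.random.default_rng(0)
    tdp_samples = rng.lognormal(mean=np.log(20), sigma=0.15, size=(5000, 365))  # placeholder

    lower, upper = np.percentile(tdp_samples, [2.5, 97.5], axis=0)
    ci_width = upper - lower                       # ug/l, per day
    print("median 95% CI width: %.1f ug/l" % np.median(ci_width))
    ```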

  10. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    NASA Astrophysics Data System (ADS)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. One application of modern technology is CNC machining, and one of the machining processes that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to find machining parameters that minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize the processing time and environmental impact of the CNC turning process, yielding optimal values of the decision variables cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
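
    A hedged sketch of how such a model can be posed is given below: machining time for straight turning is taken as t = πDL/(1000·v·f), the environmental-impact term is assumed proportional to machining energy converted with an eco-indicator 99 factor, and the two objectives are combined with a simple weighted sum. The coefficients, bounds and weighting are illustrative and are not the paper's formulation.

    ```python
    # Hedged weighted-sum sketch of the two objectives (time and environmental impact)
    # over the decision variables cutting speed v [m/min] and feed rate f [mm/rev].
    import numpy as np
    from scipy.optimize import minimize

    D, L = 40.0, 120.0          # workpiece diameter and length [mm] (illustrative)
    eco_per_kWh = 26.0          # assumed eco-indicator 99 points per kWh (placeholder)
    power_kW = 2.5              # assumed average spindle power while cutting

    def objectives(x):
        v, f = x
        t_min = (np.pi * D * L) / (1000.0 * v * f)   # machining time [min]
        impact = eco_per_kWh * power_kW * t_min / 60.0
        return t_min, impact

    def weighted_sum(x, w=0.5):
        t, e = objectives(x)
        return w * t + (1 - w) * e

    res = minimize(weighted_sum, x0=[150.0, 0.2],
                   bounds=[(60.0, 250.0), (0.05, 0.4)])
    print("v, f =", res.x, " time, impact =", objectives(res.x))
    ```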

  11. Parameter extraction with neural networks

    NASA Astrophysics Data System (ADS)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This need is particularly pressing because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs with desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.
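
    The forward-then-inverse use of the network described above can be sketched as follows: train a regressor that maps process inputs to measured outputs, then search the input space for settings whose predicted outputs match a target. The training data, parameter names and network size below are placeholders, not the authors' lithography data.

    ```python
    # Sketch of the forward/inverse idea: learn inputs -> outputs, then optimize the
    # inputs so the predicted output hits a desired value. Data are synthetic placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    X = rng.uniform([0.5, 20.0], [2.0, 120.0], size=(500, 2))            # e.g. dose, bake time
    y = 0.4 * X[:, 0] + 0.002 * X[:, 1] + 0.05 * rng.normal(size=500)    # e.g. measured CD

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                       random_state=0).fit(X, y)

    target_cd = 0.55
    res = minimize(lambda x: (net.predict(x.reshape(1, -1))[0] - target_cd) ** 2,
                   x0=[1.0, 60.0], bounds=[(0.5, 2.0), (20.0, 120.0)])
    print("extracted input parameters:", res.x)
    ```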

  12. Process analytical technologies (PAT) in freeze-drying of parenteral products.

    PubMed

    Patel, Sajal Manubhai; Pikal, Michael

    2009-01-01

    Quality by Design (QbD) aims at assuring quality through proper design and control, utilizing appropriate Process Analytical Technologies (PAT) to monitor critical process parameters during processing and ensure that the product meets the desired quality attributes. This review provides a comprehensive list of process monitoring devices that can be used to monitor critical process parameters and focuses on a critical review of the viability of the PAT schemes proposed. R&D needs in PAT for freeze-drying are also addressed, with particular emphasis on batch techniques that can be used on all dryers independent of dryer scale.

  13. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapo-transpiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
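
    The plumbing of such a framework reduces to wrapping a model run in an objective function and handing it to an optimizer or sampler. The sketch below uses a toy one-parameter linear-reservoir stand-in for a PRMS run and the downhill simplex (Nelder-Mead) method; a real application would write PRMS input files and read back simulated streamflow instead.

    ```python
    # Sketch of plugging a hydrologic model into an optimizer. `run_model` is a toy
    # single-parameter stand-in for a real PRMS simulation.
    import numpy as np
    from scipy.optimize import minimize

    rain = np.maximum(np.random.default_rng(2).normal(2.0, 3.0, 200), 0.0)

    def run_model(params):
        k, = params                       # storage coefficient (toy parameter)
        storage, q = 0.0, []
        for p in rain:
            storage += p
            out = k * storage
            storage -= out
            q.append(out)
        return np.array(q)

    obs = run_model([0.35]) + np.random.default_rng(3).normal(0, 0.05, 200)

    def neg_nash_sutcliffe(params):
        sim = run_model(params)
        return np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2) - 1.0

    result = minimize(neg_nash_sutcliffe, x0=[0.1], method="Nelder-Mead")  # downhill simplex
    print("calibrated storage coefficient:", result.x)
    ```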

  14. Overview of Characterization Techniques for High Speed Crystal Growth

    NASA Technical Reports Server (NTRS)

    Ravi, K. V.

    1984-01-01

    Features of characterization requirements for crystals, devices and completed products are discussed. Key parameters of interest in semiconductor processing are presented. Characterization as it applies to process control, diagnostics and research needs is discussed with appropriate examples.

  15. Machine processing of ERTS and ground truth data

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Peacock, K.

    1973-01-01

    The author has identified the following significant results. Results achieved by ERTS-Atmospheric Experiment PR303, whose objective is to establish a radiometric calibration technique, are reported. This technique, which determines and removes solar and atmospheric parameters that degrade the radiometric fidelity of ERTS-1 data, transforms the ERTS-1 sensor radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in machine processing ERTS-1 computer-compatible tapes and atmospheric parameters to obtain target reflectance are reviewed.

  16. Process of prototyping coronary stents from biodegradable Fe-Mn alloys.

    PubMed

    Hermawan, Hendra; Mantovani, Diego

    2013-11-01

    Biodegradable stents are considered to be a recent innovation, and their feasibility and applicability have been proven in recent years. Research in this area has focused on materials development and biological studies, rather than on how to transform the developed biodegradable materials into the stent itself. Currently available stent technology, the laser cutting-based process, might be adapted to fabricate biodegradable stents. In this work, the fabrication, characterization and testing of biodegradable Fe-Mn stents are described. A standard process for fabricating and testing stainless steel 316L stents was referred to. The influence of process parameters on the physical, metallurgical and mechanical properties of the stents, and the quality of the produced stents, were investigated. It was found that some steps of the standard process such as laser cutting can be directly applied, but changes to parameters are needed for annealing, and alternatives are needed to replace electropolishing. Copyright © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  17. Taguchi experimental design to determine the taste quality characteristic of candied carrot

    NASA Astrophysics Data System (ADS)

    Ekawati, Y.; Hapsari, A. A.

    2018-03-01

    Robust parameter design is used to design products that are robust to noise factors, so that product performance stays on target and delivers better quality. In the process of designing and developing the innovative candied carrot product, robust parameter design was carried out using the Taguchi method to determine an optimal quality design. The optimal quality design is based on the process and the composition of product ingredients that are in accordance with consumer needs and requirements. According to the identification of consumer needs in the previous research, the quality dimensions that need to be assessed are the taste and texture of the product; the quality dimension assessed in this research is limited to taste. Organoleptic testing was used for this assessment, specifically hedonic testing, in which assessments are based on consumer preferences. The data processing uses mean and signal-to-noise ratio calculations and optimal level setting to determine the optimal process and composition of product ingredients. The optimal settings were checked with confirmation experiments to verify that the proposed product matches consumer needs and requirements. The result of this research is the identification of the factors that affect product taste and of the optimal product quality according to the Taguchi method.
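
    The mean and signal-to-noise calculation mentioned above can be sketched as follows for a "larger-the-better" response such as a hedonic taste score, where S/N = -10·log10(mean(1/y²)) for each run. The scores and array layout are invented for illustration.

    ```python
    # Sketch of the Taguchi mean / signal-to-noise step for a larger-the-better
    # response (hedonic taste scores). Scores and the run layout are illustrative.
    import numpy as np

    # rows = runs of an orthogonal array, columns = repeated panelist scores
    scores = np.array([[5.2, 5.8, 5.5],
                       [6.1, 6.4, 6.0],
                       [4.8, 5.1, 5.0],
                       [6.8, 6.5, 6.9]])

    means = scores.mean(axis=1)
    sn_larger_is_better = -10.0 * np.log10(np.mean(1.0 / scores**2, axis=1))

    best_run = int(np.argmax(sn_larger_is_better))
    print("mean scores:", means)
    print("S/N ratios :", sn_larger_is_better, "-> best run:", best_run + 1)
    ```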

  18. Soil Erosion as a stochastic process

    NASA Astrophysics Data System (ADS)

    Casper, Markus C.

    2015-04-01

    The main tools to provide estimates of the risk and amount of erosion are different types of soil erosion models: on the one hand, there are empirically based model concepts; on the other hand, there are more physically based or process-based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales, and they do not account for many driving factors that are in the scope of scenario-related analyses. In addition, the physically based models contain important empirical parts, and hence the demand for universality and transferability is not met. As a common feature, we find that all models rely on parameters and input variables which are, to a certain extent, spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining these homogenized (average) model concepts of soils and soil-related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. Soil infiltrability and the erosion resistance (also called "critical shear stress" or "critical stream power") in particular are the most important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate-specific parameter, but in reality the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g. based on grain size distribution). Consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of our current models, soil erosion models are needed that are able to use stochastic variables and parameter distributions directly. There are only a few minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion process is described, namely aggregate detachment and aggregate transport by flowing water. The concept is highly simplified; for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g. as probability density functions); in the best case they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects and erosion rate, which deliver information on the spatial and temporal structure of soil and surface properties and processes.

  19. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need for answering this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
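
    A common concrete form of the deconvolution step is truncated-SVD deconvolution of the tissue curve with the arterial input function, where the truncation threshold plays the role of the regularization strength discussed above. The sketch below uses synthetic curves and is a generic illustration, not the authors' cascaded-systems framework.

    ```python
    # Generic truncated-SVD deconvolution of a tissue time-attenuation curve with an
    # arterial input function (AIF) to recover the flow-scaled residue function.
    # All curves are synthetic; the truncation level acts as the regularization strength.
    import numpy as np

    dt = 1.0                                    # frame interval [s]
    t = np.arange(0, 60, dt)
    aif = np.exp(-((t - 15) / 4.0) ** 2)        # synthetic arterial input function
    residue = np.exp(-t / 8.0)                  # true residue function
    tissue = 0.6 * dt * np.convolve(aif, residue)[: len(t)]   # flow scale = 0.6

    # Lower-triangular convolution matrix A so that tissue = A @ (flow * residue)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0
                        for j in range(len(t))] for i in range(len(t))])

    U, s, Vt = np.linalg.svd(A)
    thresh = 0.1 * s.max()                      # truncation (regularization) level
    s_inv = np.where(s > thresh, 1.0 / s, 0.0)
    flow_scaled_R = Vt.T @ (s_inv * (U.T @ tissue))

    print("estimated flow-like scale factor:", flow_scaled_R.max())
    ```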

  20. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact of the CNC turning process. Cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. A numerical example is given to show the implementation of the model, which is solved using OptQuest in the Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize the production cost and the environmental impact.

  1. Active chatter suppression with displacement-only measurement in turning process

    NASA Astrophysics Data System (ADS)

    Ma, Haifeng; Wu, Jianhua; Yang, Liuqing; Xiong, Zhenhua

    2017-08-01

    Regenerative chatter is a major hindrance to achieving high quality and high production rates in machining processes. Various active controllers have been proposed to mitigate chatter. However, most existing controllers were developed on the basis of multi-state feedback of the system, and state observers were usually needed. Moreover, model parameters of the machining process (mass, damping and stiffness) were required by existing active controllers. In this study, an active sliding mode controller, which employs a dynamic output feedback sliding surface for the unmatched condition and an adaptive law for disturbance estimation, is designed, analyzed, and validated for chatter suppression in the turning process. Only displacement measurement is required by this approach; other sensors and state observers are not needed. Moreover, it facilitates rapid implementation, since the designed controller is established without using model parameters of the turning process. Theoretical analysis, numerical simulations and experiments on a computer numerical control (CNC) lathe are presented. They show that chatter can be substantially attenuated and the chatter-free region can be significantly expanded with the presented method.

  2. Display device for indicating the value of a parameter in a process plant

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  3. Analysis of the shrinkage at the thick plate part using response surface methodology

    NASA Astrophysics Data System (ADS)

    Hatta, N. M.; Azlan, M. Z.; Shayfull, Z.; Roselina, S.; Nasir, S. M.

    2017-09-01

    Injection moulding is a well-known manufacturing process, especially for producing plastic products. To ensure final product quality, many precautions must be taken, such as setting the process parameters correctly at the initial stage of the process. If these parameters are set up wrongly, defects may occur, and one of the most common defects in the injection moulding process is shrinkage. To overcome this problem, the parameter settings need to be optimally adjusted, and this paper focuses on analysing shrinkage in a thick plate part by optimising the process parameters with the help of Response Surface Methodology (RSM) and ANOVA analysis. In the previous study, the parameter that stood out in minimising shrinkage of the moulded part was packing pressure. Therefore, with reference to the previous literature, packing pressure was selected as a parameter setting for this study, together with three other parameters: melt temperature, cooling time and mould temperature. The process was analysed through simulation with the Autodesk Moldflow Insight (AMI) software, and the material used for the moulded part was Acrylonitrile Butadiene Styrene (ABS). The analysis found that the shrinkage can be minimised and that the significant parameters are packing pressure, mould temperature and melt temperature.

  4. V2S: Voice to Sign Language Translation System for Malaysian Deaf People

    NASA Astrophysics Data System (ADS)

    Mean Foong, Oi; Low, Tang Jung; La, Wai Wan

    The process of learning and understanding sign language may be cumbersome to some; therefore, this paper proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on some generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech with the stored templates, and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
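
    The template-matching flow described above can be sketched in toy form: store one spectral-parameter template per vocabulary word, then assign an input utterance to the closest template. A practical system would use MFCC features and dynamic time warping rather than the fixed-length spectra and Euclidean distance used here; all signals below are random placeholders.

    ```python
    # Toy sketch of template-based recognition: each vocabulary word has a stored
    # spectral-parameter template, and an utterance is assigned to the nearest one.
    import numpy as np

    def spectral_template(signal, n_bins=64):
        """Very crude spectral parameter set: magnitude spectrum reduced to n_bins."""
        spec = np.abs(np.fft.rfft(signal))
        return np.interp(np.linspace(0, len(spec) - 1, n_bins),
                         np.arange(len(spec)), spec)

    rng = np.random.default_rng(4)
    templates = {word: spectral_template(rng.normal(size=8000))
                 for word in ["hello", "thank", "you"]}          # "trained" templates

    utterance = rng.normal(size=8000)                            # incoming speech frame
    query = spectral_template(utterance)
    best = min(templates, key=lambda w: np.linalg.norm(templates[w] - query))
    print("recognized word:", best, "-> play the sign-language video for", best)
    ```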

  5. The design and development of transonic multistage compressors

    NASA Technical Reports Server (NTRS)

    Ball, C. L.; Steinke, R. J.; Newman, F. A.

    1988-01-01

    The development of the transonic multistage compressor is reviewed. Changing trends in design and performance parameters are noted. These changes are related to advances in compressor aerodynamics, computational fluid mechanics and other enabling technologies. The parameters normally given to the designer and those that need to be established during the design process are identified. Criteria and procedures used in the selection of these parameters are presented. The selection of tip speed, aerodynamic loading, flowpath geometry, incidence and deviation angles, blade/vane geometry, blade/vane solidity, stage reaction, aerodynamic blockage, inlet flow per unit annulus area, stage/overall velocity ratio, and aerodynamic losses are considered. Trends in these parameters both spanwise and axially through the machine are highlighted. The effects of flow mixing and methods for accounting for the mixing in the design process are discussed.

  6. Catchment process affecting drinking water quality, including the significance of rainfall events, using factor analysis and event mean concentrations.

    PubMed

    Cinque, Kathy; Jayasuriya, Niranjali

    2010-12-01

    To ensure the protection of drinking water, an understanding of the catchment processes which can affect water quality is important, as it enables targeted catchment management actions to be implemented. In this study, factor analysis (FA) and the comparison of event mean concentrations (EMCs) with baseline values were the techniques used to assess the relationships between water quality parameters and to link those parameters to processes within an agricultural drinking water catchment. FA found that 55% of the variance in the water quality data could be explained by the first factor, which was dominated by parameters usually associated with erosion. Inclusion of pathogenic indicators in an additional FA showed that Enterococcus and Clostridium perfringens (C. perfringens) were also related to the erosion factor. Analysis of the EMCs found that most parameters were significantly higher during periods of rainfall runoff. This study shows that the most dominant processes in an agricultural catchment are surface runoff and erosion. It also shows that it is these processes which mobilise pathogenic indicators and are therefore most likely to influence the transport of pathogens. Catchment management efforts need to focus on reducing the effect of these processes on water quality.
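
    The two techniques named above are straightforward to compute once monitoring data are in hand: an event mean concentration is the flow-weighted average concentration over a runoff event, EMC = sum(C_i * Q_i) / sum(Q_i), and factor analysis can be run on a samples-by-parameters matrix. The sketch below uses placeholder numbers, not the study's data.

    ```python
    # Sketch of the two analyses named in the abstract: an event mean concentration
    # and a factor analysis of a (samples x parameters) water-quality matrix.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Event mean concentration: EMC = sum(C_i * Q_i) / sum(Q_i)
    flow = np.array([0.8, 2.5, 6.0, 4.2, 1.5])          # discharge during the event
    conc = np.array([12.0, 45.0, 90.0, 60.0, 20.0])     # e.g. suspended solids
    emc = np.sum(conc * flow) / np.sum(flow)
    print("EMC:", emc)

    # Factor analysis of several water-quality parameters (placeholder matrix)
    rng = np.random.default_rng(5)
    wq = rng.normal(size=(120, 6))                      # columns: turbidity, TSS, E. coli, ...
    fa = FactorAnalysis(n_components=2).fit(wq)
    print("loadings of factor 1:", fa.components_[0])
    ```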

  7. Parameterization of spectra

    NASA Technical Reports Server (NTRS)

    Cornish, C. R.

    1983-01-01

    Following reception and analog-to-digital (A/D) conversion, atmospheric radar backscatter echoes need to be processed so as to obtain the desired information about atmospheric processes and to eliminate or minimize contaminating contributions from other sources. Various signal processing techniques have been implemented at mesosphere-stratosphere-troposphere (MST) radar facilities to estimate parameters of interest from received spectra. Such estimation techniques need to be both accurate and sufficiently efficient to be within the capabilities of the particular data-processing system. The various techniques used to parameterize the spectra of received signals are reviewed herein. Noise estimation, electromagnetic interference, data smoothing, correlation, and the Doppler effect are among the specific points addressed.
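
    One standard way to parameterize a received Doppler spectrum is through its low-order moments, total power, mean Doppler velocity and spectral width, after subtracting an estimated noise floor. The sketch below illustrates that moment method on a synthetic spectrum; it is one estimator of the kind reviewed, not a summary of the paper.

    ```python
    # Sketch of moment-based parameterization of a Doppler spectrum: subtract a crude
    # noise-floor estimate, then compute power, mean Doppler velocity and spectral width.
    import numpy as np

    v = np.linspace(-30, 30, 256)                                  # velocity bins [m/s]
    spectrum = 5.0 * np.exp(-0.5 * ((v - 4.0) / 2.5) ** 2) + 0.2   # signal + noise floor

    noise = np.median(spectrum)                 # crude noise-floor estimate
    s = np.clip(spectrum - noise, 0.0, None)

    power = np.sum(s)
    mean_velocity = np.sum(v * s) / power
    width = np.sqrt(np.sum((v - mean_velocity) ** 2 * s) / power)
    print("power:", power, "mean velocity:", mean_velocity, "width:", width)
    ```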

  8. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    USGS Publications Warehouse

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
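
    The core of the Monte Carlo approach is simple to sketch: draw many parameter sets from prior ranges, run the transport model for each, score the simulated tracer curve against observations, and examine the spread of the acceptable parameter values. The stand-in model below is a toy surrogate, not the OTIS equations, and all numbers are placeholders.

    ```python
    # Sketch of Monte-Carlo parameter sampling around a transport model: sample parameter
    # sets from uniform ranges, score each against observed tracer concentrations, and
    # inspect the best-scoring ("behavioural") sets. `run_tsm` is a toy stand-in for OTIS.
    import numpy as np

    rng = np.random.default_rng(6)
    t = np.linspace(0, 10, 200)

    def run_tsm(dispersion, storage_area):
        """Toy surrogate for a transient-storage simulation (not the OTIS equations)."""
        return np.exp(-((t - 3 - storage_area) ** 2) / (2 * dispersion))

    observed = run_tsm(1.2, 0.8) + rng.normal(0, 0.02, t.size)

    n = 5000
    disp = rng.uniform(0.1, 5.0, n)
    area = rng.uniform(0.0, 2.0, n)
    rmse = np.array([np.sqrt(np.mean((observed - run_tsm(d, a)) ** 2))
                     for d, a in zip(disp, area)])

    best = rmse.argsort()[: int(0.01 * n)]            # top 1% of parameter sets
    print("dispersion range of behavioural sets:", disp[best].min(), disp[best].max())
    ```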

  9. Stages of physical dependence in New Zealand smokers: Prevalence and correlates.

    PubMed

    Walton, Darren; Newcombe, Rhiannon; Li, Judy; Tu, Danny; DiFranza, Joseph R

    2016-12-01

    Physically dependent smokers experience symptoms of wanting, craving or needing to smoke when too much time has passed since the last cigarette. There is interest in whether wanting, craving and needing represent variations in the intensity of a single physiological parameter or whether multiple physiological processes may be involved in the developmental progression of physical dependence. Our aim was to determine how a population of cigarette smokers is distributed across the wanting, craving and needing stages of physical dependence. A nationwide survey of 2594 New Zealanders aged 15years and over was conducted in 2014. The stage of physical dependence was assessed using the Levels of Physical Dependence measure. Ordinal logistic regression analysis was used to assess relations between physical dependence and other variables. Among 590 current smokers (weighted 16.2% of the sample), 22.3% had no physical dependence, 23.5% were in the Wanting stage, 14.4% in the Craving stage, and 39.8% in the Needing stage. The stage of physical dependence was predicted by daily cigarette consumption, and the time to first cigarette, but not by age, gender, ethnicity or socioeconomic status. Fewer individuals were in the craving stage than either the wanting or needing stages. The resulting inverted U-shaped curve with concentrations at either extreme is difficult to explain as a variation of a single biological parameter. The data support an interpretation that progression through the stages of wanting, craving and needing may involve more than one physiological process. Physical dependence to tobacco develops through a characteristic sequence of wanting, craving and needing which correspond to changes in addiction pathways in the brain. It is important to neuroscience research to determine if the development of physical dependence involves changes in a single brain process, or multiple processes. Our data suggests that more than one physiologic process is involved in the progression of physical dependence. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Assessment Processes to Increase the Burden of Existing Buildings Using BIM

    NASA Astrophysics Data System (ADS)

    Szeląg, Romuald

    2017-10-01

    The reconstruction of buildings is often associated with the need to adapt them to increased loads. When access to the archived project documentation is restricted, technical solutions are needed that can establish the technical parameters of such structures in a fairly short time. The spread of BIM in the design process can also be used effectively to survey existing facilities before work to strengthen them or to adapt them to increased load requirements. The survey and macroscopic data obtained are then used in numerical processing aimed at developing a numerical model that reflects the actual parameters of the existing structure, which in turn allows a better look at the object and at how the future strengthening will be carried out. This article identifies possibilities for the use of BIM in surveying buildings and structures and indicates the data that need to be obtained during the preliminary work. The model-based solutions introduced enable multi-criteria analysis when choosing the most favourable solution in terms of cost or time expenditure during construction. Building a numerical model of the object in this way allows every inventoried solution to be verified step by step by an authorized person, and enables changes to be tracked when deviations are found relative to the parameters established at the initial stage. In the event of significant deviations, the completed calculations can be rapidly revised and alternative solutions presented. Software using BIM technology is increasingly available, and knowledge of how to implement such solutions will in a short time become the standard for most buildings and engineering structures. The use of modern solutions based on the described processes is discussed using the example of an industrial facility where new equipment had to be installed and the facility adapted to its technical parameters.

  11. Investigating Effects of Fused-Deposition Modeling (FDM) Processing Parameters on Flexural Properties of ULTEM 9085 using Designed Experiment.

    PubMed

    Gebisa, Aboma Wagari; Lemu, Hirpa G

    2018-03-27

    Fused-deposition modeling (FDM), one of the additive manufacturing (AM) technologies, is an advanced digital manufacturing technique that produces parts by heating, extruding and depositing filaments of thermoplastic polymers. The properties of FDM-produced parts apparently depend on the processing parameters. These processing parameters have conflicting advantages that need to be investigated. This article focuses on an investigation into the effect of these parameters on the flexural properties of FDM-produced parts. The investigation is carried out on high-performance ULTEM 9085 material, as this material is relatively new and has potential application in the aerospace, military and automotive industries. Five parameters: air gap, raster width, raster angle, contour number, and contour width, with a full factorial design of the experiment, are considered for the investigation. From the investigation, it is revealed that raster angle and raster width have the greatest effect on the flexural properties of the material. The optimal levels of the process parameters achieved are: air gap of 0.000 mm, raster width of 0.7814 mm, raster angle of 0°, contour number of 5, and contour width of 0.7814 mm, leading to a flexural strength of 127 MPa, a flexural modulus of 2400 MPa, and 0.081 flexural strain.
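
    A full factorial design over the five parameters listed above is easy to enumerate, and main effects on flexural strength can then be estimated by averaging the responses at each factor level. The sketch below uses two illustrative levels per factor and random placeholder strengths, so neither the levels nor the computed effects correspond to the study's results.

    ```python
    # Sketch of a two-level full factorial over the five FDM parameters named in the
    # abstract, with a simple main-effect estimate per factor. Responses are placeholders.
    import itertools
    import numpy as np

    levels = {
        "air_gap":        [0.000, 0.020],    # mm
        "raster_width":   [0.4572, 0.7814],  # mm
        "raster_angle":   [0, 45],           # degrees
        "contour_number": [1, 5],
        "contour_width":  [0.4572, 0.7814],  # mm
    }
    runs = list(itertools.product(*levels.values()))     # 2**5 = 32 runs

    rng = np.random.default_rng(7)
    strength = rng.normal(110, 10, len(runs))            # placeholder flexural strengths [MPa]

    design = np.array(runs)
    for i, name in enumerate(levels):
        low, high = levels[name]
        effect = strength[design[:, i] == high].mean() - strength[design[:, i] == low].mean()
        print(f"main effect of {name}: {effect:+.1f} MPa")
    ```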

  12. Investigating Effects of Fused-Deposition Modeling (FDM) Processing Parameters on Flexural Properties of ULTEM 9085 using Designed Experiment

    PubMed Central

    Gebisa, Aboma Wagari

    2018-01-01

    Fused-deposition modeling (FDM), one of the additive manufacturing (AM) technologies, is an advanced digital manufacturing technique that produces parts by heating, extruding and depositing filaments of thermoplastic polymers. The properties of FDM-produced parts apparently depend on the processing parameters. These processing parameters have conflicting advantages that need to be investigated. This article focuses on an investigation into the effect of these parameters on the flexural properties of FDM-produced parts. The investigation is carried out on high-performance ULTEM 9085 material, as this material is relatively new and has potential application in the aerospace, military and automotive industries. Five parameters: air gap, raster width, raster angle, contour number, and contour width, with a full factorial design of the experiment, are considered for the investigation. From the investigation, it is revealed that raster angle and raster width have the greatest effect on the flexural properties of the material. The optimal levels of the process parameters achieved are: air gap of 0.000 mm, raster width of 0.7814 mm, raster angle of 0°, contour number of 5, and contour width of 0.7814 mm, leading to a flexural strength of 127 MPa, a flexural modulus of 2400 MPa, and 0.081 flexural strain. PMID:29584674

  13. Systems for monitoring and digitally recording water-quality parameters

    USGS Publications Warehouse

    Smoot, George F.; Blakey, James F.

    1966-01-01

    Digital recording of water-quality parameters is a link in the automated data collection and processing system of the U.S. Geological Survey. The monitoring and digital recording systems adopted by the Geological Survey, while punching all measurements on a standard paper tape, provide a choice of compatible components to construct a system to meet specific physical problems and data needs. As many as 10 parameters can be recorded by an instrument, with the only limiting criterion being that measurements are expressed as electrical signals.

  14. MODFLOW-2000, the U.S. Geological Survey Modular Ground-Water Model -Documentation of the Hydrogeologic-Unit Flow (HUF) Package

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.

    2000-01-01

    This report documents the Hydrogeologic-Unit Flow (HUF) Package for the groundwater modeling computer program MODFLOW-2000. The HUF Package is an alternative internal flow package that allows the vertical geometry of the system hydrogeology to be defined explicitly within the model using hydrogeologic units that can be different than the definition of the model layers. The HUF Package works with all the processes of MODFLOW-2000. For the Ground-Water Flow Process, the HUF Package calculates effective hydraulic properties for the model layers based on the hydraulic properties of the hydrogeologic units, which are defined by the user using parameters. The hydraulic properties are used to calculate the conductance coefficients and other terms needed to solve the ground-water flow equation. The sensitivity of the model to the parameters defined within the HUF Package input file can be calculated using the Sensitivity Process, using observations defined with the Observation Process. Optimal values of the parameters can be estimated by using the Parameter-Estimation Process. The HUF Package is nearly identical to the Layer-Property Flow (LPF) Package, the major difference being the definition of the vertical geometry of the system hydrogeology. Use of the HUF Package is illustrated in two test cases, which also serve to verify the performance of the package by showing that the Parameter-Estimation Process produces the true parameter values when exact observations are used.
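
    The key step described above, deriving effective model-layer properties from the hydrogeologic units that intersect a layer, amounts to thickness-weighted averaging: arithmetic for horizontal hydraulic conductivity and harmonic for vertical. The sketch below shows that averaging for a single cell with illustrative values; it is not the HUF Package source code, which handles many additional cases.

    ```python
    # Sketch of thickness-weighted averaging of hydrogeologic-unit properties within one
    # model layer: arithmetic mean for horizontal K, harmonic mean for vertical K.
    import numpy as np

    unit_thickness = np.array([3.0, 1.5, 5.5])     # thickness of each unit inside the layer [m]
    kh = np.array([1e-4, 5e-6, 2e-5])              # horizontal K of each unit [m/s]
    kv = np.array([1e-5, 1e-7, 5e-6])              # vertical K of each unit [m/s]

    layer_thickness = unit_thickness.sum()
    kh_eff = np.sum(kh * unit_thickness) / layer_thickness      # arithmetic (parallel) mean
    kv_eff = layer_thickness / np.sum(unit_thickness / kv)      # harmonic (series) mean
    print("effective Kh:", kh_eff, "effective Kv:", kv_eff)
    ```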

  15. Experimental and numerical study on optimization of the single point incremental forming of AINSI 304L stainless steel sheet

    NASA Astrophysics Data System (ADS)

    Saidi, B.; Giraud-Moreau, L.; Cherouat, A.; Nasri, R.

    2017-09-01

    AISI 304L stainless steel sheets are commonly formed into a variety of shapes for applications in the industrial, architectural, transportation and automotive fields; they are also used for the manufacture of denture bases. In the field of dentistry, there is a need for personalized devices that are custom made for the patient, and the single point incremental forming process is highly promising for manufacturing denture bases. Single point incremental forming (ISF) is an emerging process based on the use of a spherical tool that is moved along a CNC-controlled tool path. One of the major advantages of this process is the ability to program several punch trajectories on the same machine in order to obtain different shapes. Several applications of this process exist in the medical field for the manufacturing of personalized titanium prostheses (cranial plates, knee prostheses...) owing to the need to customize the product to each patient. The objective of this paper is to study the incremental forming of AISI 304L stainless steel sheets for future applications in the dentistry field. During the incremental forming process, considerable forces can occur. Controlling the forming force is particularly important to ensure the safe use of the CNC milling machine and to preserve the tooling and machinery. In this paper, the effect of four different process parameters on the maximum force is studied. The proposed approach consists in using an experimental design based on experimental results. An analysis of variance (ANOVA) was conducted to find the input parameters that minimize the maximum forming force. A numerical simulation of the incremental forming process was performed with the optimal input process parameters, and the numerical results are compared with the experimental ones.

  16. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.

  17. Pulsed Electromagnetic Acceleration of Plasmas

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Cassibry, Jason T.; Markusic, Tom E.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    A major shift in paradigm in driving pulsed plasma thrusters is necessary if the original goal of accelerating a plasma sheet efficiently to high velocities as a plasma "slug" is to be realized. Firstly, the plasma interior needs to be highly collisional so that it can be dammed by the plasma edge layer not (upstream) adjacent to the driving 'vacuum' magnetic field. Secondly, the plasma edge layer needs to be strongly magnetized so that its Hall parameter is of the order of unity in this region to ensure excellent coupling of the Lorentz force to the plasma. Thirdly, to prevent and/or suppress the occurrence of secondary arcs or restrike behind the plasma, the region behind the plasma needs to be collisionless and extremely magnetized, with a sufficiently large Hall parameter. This places a vacuum requirement on the bore conditions prior to the shot. These requirements are quantified in the paper and lead to the introduction of three new design parameters corresponding to these three plasma requirements. The first parameter, labeled in the paper as gamma (sub 1), pertains to the permissible ratio of the diffusive excursion of the plasma during the course of the acceleration to the plasma longitudinal dimension. The second parameter is the required Hall parameter of the edge plasma region, and the third parameter is the required Hall parameter of the region behind the plasma. Experimental research is required to quantify the values of these design parameters. Based upon fundamental theory of the transport processes in plasma, some theoretical guidance on the choice of these parameters is provided to help design the necessary experiments to acquire these data.

  18. A "total parameter estimation" method in the varification of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    Conventionally, hydrological models are used for runoff or flood forecasting, and hence model parameters are commonly estimated from discharge measurements at the catchment outlet. With the advancement of hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in hydrology. However, the assessment of distributed hydrological models and the determination of their parameters still rely on runoff and, occasionally, groundwater level measurements. In many countries, including China, it is essential to understand the local and regional water cycle: we need not only to simulate the runoff generation process for flood forecasting in wet areas, but also to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As distributed hydrological models can simulate the physical processes within a catchment, they can give a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and its accuracy is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is concentrated in the rainy season from June to August each year; during the other months, many of the perennial rivers within the basin dry up. Thus, simulating runoff alone does not make full use of a distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models against the various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River Basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and provides a new way of thinking in hydrological science.

  19. Numerical simulation of heat transfer and phase change during freezing of potatoes with different shapes at the presence or absence of ultrasound irradiation

    NASA Astrophysics Data System (ADS)

    Kiani, Hossein; Sun, Da-Wen

    2018-03-01

    As novel processes such as ultrasound-assisted heat transfer emerge, new models and simulations are needed to describe them. In this paper, a numerical model was developed to study the freezing process of potatoes. Different thermal conductivity models were investigated, and the effect of sonication on convective heat transfer in a fluid-to-particle heat transfer system was evaluated. Potato spheres and sticks were the geometries studied, and the effect of different processing parameters on the results was examined. The numerical model successfully predicted the ultrasound-assisted freezing of various shapes in comparison with experimental data. The model was sensitive to variations in the processing parameters (sound intensity, duty cycle, shape, etc.) and could accurately simulate the freezing process. Among the thermal conductivity correlations studied, the de Vries and Maxwell models gave the closest estimations, while the maximum temperature difference was obtained for the series equation, which underestimated the thermal conductivity. Both numerical and experimental data confirmed that an optimum combination of intensity and duty cycle is needed to reduce the freezing time, since increasing the intensity simultaneously increased the convective heat transfer rate and the acoustically induced heating rate, which act against each other.
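
    A minimal sketch of the kind of freezing calculation described, assuming 1-D transient conduction in a sphere with a convective surface boundary and latent heat handled through an apparent heat capacity over a narrow mushy range. All property values, the mushy-range width and the heat transfer coefficient are placeholder assumptions; any sonication effect would enter through the surface coefficient `h`.

```python
import numpy as np

# Placeholder thermophysical properties for a potato-like sphere (assumed values)
k, rho, cp = 0.5, 1050.0, 3600.0          # W/m/K, kg/m3, J/kg/K
L_f, T_f, dT_mush = 250e3, -1.0, 1.0      # latent heat (J/kg), freezing point, mushy range (K)
h, T_inf, R = 300.0, -30.0, 0.015         # surface coefficient (W/m2/K), coolant temp, radius (m)

N = 50
r = np.linspace(0.0, R, N)
dr = r[1] - r[0]
T = np.full(N, 20.0)                      # initial temperature, degC

def cp_app(Tv):
    """Apparent heat capacity: latent heat smeared over the mushy range."""
    return np.where(np.abs(Tv - T_f) <= dT_mush / 2, cp + L_f / dT_mush, cp)

dt = 0.2 * rho * cp * dr**2 / k           # conservative explicit time step
for _ in range(int(3600 / dt)):           # simulate one hour of freezing
    alpha = k / (rho * cp_app(T))
    Tn = T.copy()
    # interior nodes: dT/dt = alpha * (d2T/dr2 + (2/r) dT/dr)
    Tn[1:-1] = T[1:-1] + dt * alpha[1:-1] * (
        (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
        + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2 * dr)
    )
    Tn[0] = Tn[1]                                           # symmetry at the centre
    Tn[-1] = (k / dr * Tn[-2] + h * T_inf) / (k / dr + h)   # convective surface balance
    T = Tn
print("centre temperature after 1 h: %.1f degC" % T[0])
```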

  20. How Does Higher Frequency Monitoring Data Affect the Calibration of a Process-Based Water Quality Model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L.

    2014-12-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.

  1. Processing Parameters Optimization for Material Deposition Efficiency in Laser Metal Deposited Titanium Alloy

    NASA Astrophysics Data System (ADS)

    Mahamood, Rasheedat M.; Akinlabi, Esther T.

    2016-03-01

    Ti6Al4V is an important titanium alloy used in many applications such as aerospace, petrochemicals and medicine. Its excellent corrosion resistance, high strength-to-weight ratio and retention of properties at high temperature make it favoured in most applications. The high cost of titanium and its alloys, however, makes their use prohibitive in some applications. Ti6Al4V can be cladded onto a less expensive material such as steel, thereby reducing cost while providing excellent properties. Laser Metal Deposition (LMD), an additive manufacturing process, is capable of producing complex parts directly from the 3-D CAD model of the part and can also handle multiple materials. Processing parameters play an important role in the LMD process, and in order to achieve the desired results at minimum cost, the processing parameters need to be properly controlled. This paper investigates the role of the processing parameters laser power, scanning speed, powder flow rate and gas flow rate on the material utilization efficiency in laser metal deposited Ti6Al4V. A two-level full factorial design of experiment was used in this investigation to identify the most significant processing parameters as well as the interactions among them. Four process parameters were used, each with an upper and a lower setting, resulting in a combination of sixteen experiments. The laser power settings were 1.8 and 3 kW, the scanning speed 0.05 and 0.1 m/s, the powder flow rate 2 and 4 g/min and the gas flow rate 2 and 4 l/min. The experiments were designed and analyzed using the Design Expert 8 software, which was used to generate the optimized process parameters: a laser power of 3.2 kW, a scanning speed of 0.06 m/s, a powder flow rate of 2 g/min and a gas flow rate of 3 l/min.
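
    The sixteen runs of such a two-level, four-factor full factorial design can be enumerated directly; the short sketch below does so using the factor levels reported in the abstract (the dictionary keys are only illustrative labels).

```python
from itertools import product

# Factors and their low/high settings as reported in the study
factors = {
    "laser_power_kW":    (1.8, 3.0),
    "scan_speed_m_s":    (0.05, 0.1),
    "powder_rate_g_min": (2, 4),
    "gas_rate_l_min":    (2, 4),
}

# Full 2^4 factorial: every combination of low/high levels -> 16 runs
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)
```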

  2. An Improved Method to Control the Critical Parameters of a Multivariable Control System

    NASA Astrophysics Data System (ADS)

    Subha Hency Jims, P.; Dharmalingam, S.; Wessley, G. Jims John

    2017-10-01

    The role of control systems is to cope with process deficiencies and the undesirable effects of external disturbances. Most multivariable processes are highly interactive and complex in nature. Aircraft systems, modern power plants, refineries and robotic systems are a few such complex systems that involve numerous critical parameters that need to be monitored and controlled. Control of these important parameters is not only tedious and cumbersome but also crucial from environmental, safety and quality perspectives. In this paper, one such multivariable system, namely a utility boiler, has been considered. A modern power plant is a complex arrangement of pipework and machinery with numerous interacting control loops and support systems. In this paper, the calculation of controller parameters based on classical tuning concepts is presented. The controller parameters thus obtained were employed to control the critical parameters of a boiler during fuel-switching disturbances. The proposed method can be applied to control critical parameters such as the elevator, aileron, rudder, elevator trim, rudder and aileron trim, and flap control systems of aircraft.
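
    The abstract does not state which classical tuning rule is applied; as one common example, the closed-loop Ziegler-Nichols rules compute PID settings from an experimentally determined ultimate gain and ultimate period, as sketched below (the numerical values in the usage line are hypothetical).

```python
def ziegler_nichols(Ku, Tu):
    """Closed-loop Ziegler-Nichols rules: Ku is the ultimate gain at which the
    loop sustains oscillation under proportional-only control, Tu the period."""
    return {
        "P":   {"Kp": 0.50 * Ku},
        "PI":  {"Kp": 0.45 * Ku, "Ti": Tu / 1.2},
        "PID": {"Kp": 0.60 * Ku, "Ti": Tu / 2.0, "Td": Tu / 8.0},
    }

# Hypothetical ultimate gain and period for one boiler control loop
print(ziegler_nichols(Ku=4.0, Tu=120.0)["PID"])
```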

  3. Modeling of Processing-Induced Pore Morphology in an Additively-Manufactured Ti-6Al-4V Alloy

    PubMed Central

    Kabir, Mohammad Rizviul; Richter, Henning

    2017-01-01

    A selective laser melting (SLM)-based, additively-manufactured Ti-6Al-4V alloy is prone to the accumulation of undesirable defects during layer-by-layer material build-up. Defects in the form of complex-shaped pores are one of the critical issues that need to be considered during the processing of this alloy. Depending on the process parameters, pores with concave or convex boundaries may occur. To exploit the full potential of additively-manufactured Ti-6Al-4V, the interdependency between the process parameters, pore morphology, and resultant mechanical properties, needs to be understood. By incorporating morphological details into numerical models for micromechanical analyses, an in-depth understanding of how these pores interact with the Ti-6Al-4V microstructure can be gained. However, available models for pore analysis lack a realistic description of both the Ti-6Al-4V grain microstructure, and the pore geometry. To overcome this, we propose a comprehensive approach for modeling and discretizing pores with complex geometry, situated in a polycrystalline microstructure. In this approach, the polycrystalline microstructure is modeled by means of Voronoi tessellations, and the complex pore geometry is approximated by strategically combining overlapping spheres of varied sizes. The proposed approach provides an elegant way to model the microstructure of SLM-processed Ti-6Al-4V containing pores or crack-like voids, and makes it possible to investigate the relationship between process parameters, pore morphology, and resultant mechanical properties in a finite-element-based simulation framework. PMID:28772504
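
    To illustrate the "overlapping spheres" idea in isolation, the sketch below represents one complex-shaped pore as a union of spheres and estimates its volume by Monte Carlo sampling. The sphere centres and radii are hypothetical, and this is not the authors' meshing workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# A complex-shaped pore approximated as a union of overlapping spheres:
# each row is (x, y, z, radius) in micrometres (hypothetical values).
spheres = np.array([
    [0.0, 0.0, 0.0, 5.0],
    [4.0, 1.0, 0.0, 3.0],
    [7.0, 2.0, 1.0, 2.0],
])

def inside_pore(points):
    """True where a point lies inside at least one sphere of the union."""
    d2 = ((points[:, None, :] - spheres[None, :, :3]) ** 2).sum(axis=2)
    return (d2 <= spheres[None, :, 3] ** 2).any(axis=1)

# Monte Carlo estimate of the pore volume inside a bounding box
pad = spheres[:, 3].max()
lo, hi = spheres[:, :3].min(0) - pad, spheres[:, :3].max(0) + pad
samples = rng.uniform(lo, hi, size=(200_000, 3))
volume = inside_pore(samples).mean() * np.prod(hi - lo)
print(f"approximate pore volume: {volume:.1f} um^3")
```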

  4. Modeling of Processing-Induced Pore Morphology in an Additively-Manufactured Ti-6Al-4V Alloy.

    PubMed

    Kabir, Mohammad Rizviul; Richter, Henning

    2017-02-08

    A selective laser melting (SLM)-based, additively-manufactured Ti-6Al-4V alloy is prone to the accumulation of undesirable defects during layer-by-layer material build-up. Defects in the form of complex-shaped pores are one of the critical issues that need to be considered during the processing of this alloy. Depending on the process parameters, pores with concave or convex boundaries may occur. To exploit the full potential of additively-manufactured Ti-6Al-4V, the interdependency between the process parameters, pore morphology, and resultant mechanical properties, needs to be understood. By incorporating morphological details into numerical models for micromechanical analyses, an in-depth understanding of how these pores interact with the Ti-6Al-4V microstructure can be gained. However, available models for pore analysis lack a realistic description of both the Ti-6Al-4V grain microstructure, and the pore geometry. To overcome this, we propose a comprehensive approach for modeling and discretizing pores with complex geometry, situated in a polycrystalline microstructure. In this approach, the polycrystalline microstructure is modeled by means of Voronoi tessellations, and the complex pore geometry is approximated by strategically combining overlapping spheres of varied sizes. The proposed approach provides an elegant way to model the microstructure of SLM-processed Ti-6Al-4V containing pores or crack-like voids, and makes it possible to investigate the relationship between process parameters, pore morphology, and resultant mechanical properties in a finite-element-based simulation framework.

  5. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method does not need additional equipment or devices, can continuously update the calibration parameters and, compared with the two-step estimation method, compensates the geomagnetic sensor error better.
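
    For context, a common way to obtain a nine-parameter starting point for this kind of calibration (a hard-iron offset plus soft-iron scaling and skew) is a linear least-squares fit of the raw readings to a general quadric; the sketch below shows only that step and is not the paper's HHT/Newton pipeline. The file name and data layout are assumptions.

```python
import numpy as np

def fit_ellipsoid(m):
    """Linear least-squares fit of raw magnetometer samples m (N x 3) to the
    quadric a*x^2 + b*y^2 + c*z^2 + d*xy + e*xz + f*yz + g*x + h*y + i*z = 1,
    i.e. nine parameters describing offset (hard iron) and scale/skew (soft iron)."""
    x, y, z = m[:, 0], m[:, 1], m[:, 2]
    D = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z])
    coeffs, *_ = np.linalg.lstsq(D, np.ones(len(m)), rcond=None)
    return coeffs  # used here as a starting point for iterative refinement

# Hypothetical usage: raw readings collected while rotating the sensor
# raw = np.loadtxt("mag_samples.txt")   # N x 3 array
# p0 = fit_ellipsoid(raw)
```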

  6. Future heavy duty trucking engine requirements

    NASA Technical Reports Server (NTRS)

    Strawhorn, L. W.; Suski, V. A.

    1985-01-01

    Developers of advanced heavy duty diesel engines are engaged in probing the opportunities presented by new materials and techniques. This process is technology driven, but there is neither assurance that the eventual users of the engines so developed will be comfortable with them nor, indeed, that those consumers will continue to exist in either the same form, or numbers as they do today. To ensure maximum payoff of research dollars, the equipment development process must consider user needs. This study defines motor carrier concerns, cost tolerances, and the engine parameters which match the future projected industry needs. The approach taken to do that is to be explained and the results presented. The material to be given comes basically from a survey of motor carrier fleets. It provides indications of the role of heavy duty vehicles in the 1998 period and their desired maintenance and engine performance parameters.

  7. Continuous welding of unidirectional fiber reinforced thermoplastic tape material

    NASA Astrophysics Data System (ADS)

    Schledjewski, Ralf

    2017-10-01

    Continuous welding techniques such as thermoplastic tape placement with in situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, several important processing issues still need to be solved before it becomes a viable economic process. Intensive process analysis and optimization have been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality, and the effects of material imperfections or process parameter variations are well known. Using this knowledge to control the process based on online process monitoring and corresponding adaptation of the process parameters, however, is still challenging. Solving inverse problems and using methods for automated code generation that allow fast implementation of algorithms on target hardware are required. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques and how to use them for a model-based process control system are presented.

  8. Steps Towards Industrialization of Cu–III–VI2Thin‐Film Solar Cells:Linking Materials/Device Designs to Process Design For Non‐stoichiometric Photovoltaic Materials

    PubMed Central

    Chang, Hsueh‐Hsin; Sharma, Poonam; Letha, Arya Jagadhamma; Shao, Lexi; Zhang, Yafei; Tseng, Bae‐Heng

    2016-01-01

    The concept of in-line sputtering and selenization has become an industrial standard for Cu–III–VI2 solar cell fabrication, but it is still very difficult to control and predict the optical and electrical parameters, which are closely related to the chemical composition distribution of the thin film. The present review article addresses the material design, device design and process design using parameters closely related to the chemical composition. Variations in composition lead to changes in the Poisson, current and continuity equations governing the device design. To make the device design more realistic and meaningful, we need to build a model that relates the opto-electrical properties to the chemical composition. The material parameters as well as the device structural parameters are loaded into the process simulation to give a complete set of process control parameters. The neutral defect concentrations of non-stoichiometric CuMSe2 (M = In and Ga) have been calculated under specific atomic chemical potential conditions using this methodology. The optical and electrical properties have also been investigated for the development of a full-function analytical solar cell simulator. The future prospects for the development of copper–indium–gallium–selenide thin-film solar cells are also discussed. PMID:27840790

  9. Steps Towards Industrialization of Cu-III-VI2Thin-Film Solar Cells:Linking Materials/Device Designs to Process Design For Non-stoichiometric Photovoltaic Materials.

    PubMed

    Hwang, Huey-Liang; Chang, Hsueh-Hsin; Sharma, Poonam; Letha, Arya Jagadhamma; Shao, Lexi; Zhang, Yafei; Tseng, Bae-Heng

    2016-10-01

    The concept of in-line sputtering and selenization has become an industrial standard for Cu-III-VI2 solar cell fabrication, but it is still very difficult to control and predict the optical and electrical parameters, which are closely related to the chemical composition distribution of the thin film. The present review article addresses the material design, device design and process design using parameters closely related to the chemical composition. Variations in composition lead to changes in the Poisson, current and continuity equations governing the device design. To make the device design more realistic and meaningful, we need to build a model that relates the opto-electrical properties to the chemical composition. The material parameters as well as the device structural parameters are loaded into the process simulation to give a complete set of process control parameters. The neutral defect concentrations of non-stoichiometric CuMSe2 (M = In and Ga) have been calculated under specific atomic chemical potential conditions using this methodology. The optical and electrical properties have also been investigated for the development of a full-function analytical solar cell simulator. The future prospects for the development of copper-indium-gallium-selenide thin-film solar cells are also discussed.

  10. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    NASA Astrophysics Data System (ADS)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    Radio Frequency Identification (RFID) systems have multiple benefits that can improve the operational efficiency of an organization. The advantages are the ability to record data systematically and quickly, reduced human and system errors, and automatic, efficient updating of the database. Often, several readers are needed to cover an installation, which makes the RFID system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works perfectly. The planning process is also an optimization and power adjustment process, because the coordinates of each RFID reader have to be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, runs quickly in simulation, and is easy to use and very practical. However, the PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may degrade performance and yield poorer PSO optimization results. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the number of iterations. In addition, the study recommends the most suitable settings for both parameters, namely 200 iterations and a swarm size of 800. Finally, the results of this study will enable PSO to operate more efficiently in optimizing RFID network planning.
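
    A minimal, generic PSO such as the sketch below makes the two parameters in question (swarm size and iteration count) explicit. The quadratic toy objective stands in for the actual RFID tag coverage cost, and all other settings are illustrative defaults rather than values from the study.

```python
import numpy as np

def pso(objective, dim, n_particles=800, n_iter=200, w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Minimal particle swarm optimiser (minimisation)."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))            # positions
    v = np.zeros_like(x)                                    # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()                  # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy objective standing in for the RFID coverage cost (replace with the real one):
best, best_cost = pso(lambda p: float(np.sum(p**2)), dim=4)
print(best, best_cost)
```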

  11. Understanding overlay signatures using machine learning on non-lithography context information

    NASA Astrophysics Data System (ADS)

    Overcast, Marshall; Mellegaard, Corey; Daniel, David; Habets, Boris; Erley, Georg; Guhlemann, Steffen; Thrun, Xaver; Buhl, Stefan; Tottewitz, Steven

    2018-03-01

    Overlay errors between two layers can be caused by non-lithography processes. While these errors can be compensated by the run-to-run system, such process and tool signatures are not always stable. In order to monitor the impact of non-lithography context on overlay at regular intervals, a systematic approach is needed. Using various machine learning techniques, significant context parameters that relate to deviating overlay signatures are automatically identified. Once the most influential context parameters are found, a run-to-run simulation is performed to see how much improvement can be obtained. The resulting analysis shows good potential for reducing the influence of hidden context parameters on overlay performance. Non-lithographic contexts are significant contributors, and their automatic detection and classification will enable the overlay roadmap, given the corresponding control capabilities.

  12. The impact of temporal sampling resolution on parameter inference for biological transport models.

    PubMed

    Harrison, Jonathan U; Baker, Ruth E

    2018-06-25

    Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models; performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
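
    A small sketch of the kind of data considered, assuming a 1-D run-and-tumble velocity jump process with Poissonian direction reversals and Gaussian observation noise; all rates, the noise level and the sampling intervals are hypothetical and serve only to show how coarser sampling thins the observed track.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_vjp(T=100.0, dt=0.01, speed=1.0, turn_rate=0.5, sigma_obs=0.05):
    """1-D velocity jump process: constant speed with direction reversals as a
    Poisson process of rate `turn_rate`; observations carry Gaussian noise."""
    n = int(T / dt)
    x, v = 0.0, speed
    path = np.empty(n)
    for i in range(n):
        if rng.random() < turn_rate * dt:   # reorientation event
            v = -v
        x += v * dt
        path[i] = x
    return path + rng.normal(0.0, sigma_obs, n)

track = simulate_vjp()
# Observing the same track at coarser temporal resolutions (every k-th frame):
for k in (1, 10, 100):
    obs = track[::k]
    print(f"sampling interval {k * 0.01:.2f} s -> {obs.size} observations")
```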

  13. IN718 Additive Manufacturing Properties and Influences

    NASA Technical Reports Server (NTRS)

    Lambert, Dennis M.

    2015-01-01

    The results of tensile, fracture, and fatigue testing of IN718 coupons produced using the selective laser melting (SLM) additive manufacturing technique are presented. The data have been "sanitized" to remove the numerical values, although certain references to material standards are provided. This document provides some knowledge of the effect of variation of controlled build parameters used in the SLM process, a snapshot of the capabilities of SLM in industry at present, and shares some of the lessons learned along the way. For the build parameter characterization, the parameters were varied over a range that was centered about the machine manufacturer's recommended value, and in each case they were varied individually, although some co-variance of those parameters would be expected. Tensile, fracture, and high-cycle fatigue properties equivalent to wrought IN718 are achievable with SLM-produced IN718. Build and post-build processes need to be determined and then controlled to established limits to accomplish this. It is recommended that a multi-variable evaluation, e.g., design-of experiment (DOE), of the build parameters be performed to better evaluate the co-variance of the parameters.

  14. IN718 Additive Manufacturing Properties and Influences

    NASA Technical Reports Server (NTRS)

    Lambert, Dennis M.

    2015-01-01

    The results of tensile, fracture, and fatigue testing of IN718 coupons produced using the selective laser melting (SLM) additive manufacturing technique are presented. The data have been "generalized" to remove the numerical values, although certain references to material standards are provided. This document provides some knowledge of the effect of variation of controlled build parameters used in the SLM process, a snapshot of the capabilities of SLM in industry at present, and shares some of the lessons learned along the way. For the build parameter characterization, the parameters were varied over a range about the machine manufacturer's recommended value, and in each case they were varied individually, although some co-variance of those parameters would be expected. Tensile, fracture, and high-cycle fatigue properties equivalent to wrought IN718 are achievable with SLM-produced IN718. Build and post-build processes need to be determined and then controlled to established limits to accomplish this. It is recommended that a multi-variable evaluation, e.g., design-of-experiment (DOE), of the build parameters be performed to better evaluate the co-variance of the parameters.

  15. Single droplet drying step characterization in microsphere preparation.

    PubMed

    Al Zaitone, Belal; Lamprecht, Alf

    2013-05-01

    Spray drying processes are difficult to characterize since the process parameters are not directly accessible. Acoustic levitation was used to investigate microencapsulation by spray drying on a single droplet, facilitating analysis of droplet behavior during drying. Process parameters were simulated on a poly(lactide-co-glycolide)/ethyl acetate combination for microencapsulation. The results allowed the influence of process parameters such as temperature (0-40°C), polymer concentration (5-400 mg/ml), and droplet size (0.5-1.37 μl) on the drying time and drying kinetics as well as the particle morphology to be quantified. The drying of polymer solutions at a temperature of 21°C and a concentration of 5 mg/ml shows that the dimensionless particle diameter (Dp/D0) approaches 0.25 and the particle needs 350 s to dry. At 400 mg/ml, Dp/D0 = 0.8, the drying time increases by one order of magnitude, and a hollow particle is formed. The study demonstrates the benefit of using the acoustic levitator as a lab-scale method to characterize and study microparticle formation. This method can be considered a helpful tool to mimic the full-scale spray drying process by providing identical operating parameters such as air velocity, temperature, and variable droplet sizes. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Overview of Icing Physics Relevant to Scaling

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2005-01-01

    An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes and the associated similarity parameters that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter and the freezing fraction. Other parameters dealing with the leading edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated, a set of equations is produced that can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be gained by observing whether ice shapes change, and the extent of the change, when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.

  17. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    PubMed

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system viz. the Advanced Microscale Bioreactor (ambr15(TM) ), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15(TM) system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.

  18. Biogas Production: Microbiology and Technology.

    PubMed

    Schnürer, Anna

    Biogas, containing energy-rich methane, is produced by microbial decomposition of organic material under anaerobic conditions. Under controlled conditions, this process can be used for the production of energy and a nutrient-rich residue suitable for use as a fertilising agent. The biogas can be used for production of heat, electricity or vehicle fuel. Different substrates can be used in the process and, depending on substrate character, various reactor technologies are available. The microbiological process leading to methane production is complex and involves many different types of microorganisms, often operating in close relationships because of the limited amount of energy available for growth. The microbial community structure is shaped by the incoming material, but also by operating parameters such as process temperature. Factors leading to an imbalance in the microbial community can result in process instability or even complete process failure. To ensure stable operation, different key parameters, such as levels of degradation intermediates and gas quality, are often monitored. Despite the fact that the anaerobic digestion process has long been used for industrial production of biogas, many questions need still to be resolved to achieve optimal management and gas yields and to exploit the great energy and nutrient potential available in waste material. This chapter discusses the different aspects that need to be taken into consideration to achieve optimal degradation and gas production, with particular focus on operation management and microbiology.

  19. Computer-Aided Process Model For Carbon/Phenolic Materials

    NASA Technical Reports Server (NTRS)

    Letson, Mischell A.; Bunker, Robert C.

    1996-01-01

    Computer program implements thermochemical model of processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving fabrication of rocket-engine-nozzle parts, also used to optimize fabrication of other structural components, and material-property parameters changed to apply to other materials. Reduces costs by reducing amount of laboratory trial and error needed to optimize curing processes and to predict properties of cured parts.

  20. Hydraulic parameters in eroding rills and their influence on detachment processes

    NASA Astrophysics Data System (ADS)

    Wirtz, Stefan; Seeger, Manuel; Zell, Andreas; Wagner, Christian; Wengel, René; Ries, Johannes B.

    2010-05-01

    In many experiments, in the laboratory as well as in the field, correlations between the detachment rate and different hydraulic parameters are calculated. The parameters used are water depth, runoff, shear stress, unit length shear force, stream power, and the Reynolds and Froude numbers. The investigations show inconsistent, even contradictory, results. In most soil erosion models, such as the WEPP model, shear stress is used to predict soil detachment rates, yet in none of the WEPP datasets did shear stress show the best correlation with the detachment rate. In this poster we present the results of several rill experiments in Andalusia from 2008 and 2009. With the method used, it is possible to measure the factors needed to calculate the above parameters. Water depth is measured by an ultrasonic sensor, and runoff values are calculated by combining flow velocity and flow diameter. The wetted perimeter, flow diameter and hydraulic radius can be calculated from the measured rill cross-sections and the measured water levels. The sample density values needed for the calculation of shear stress, unit length shear force and stream power take the sediment concentration and the grain density into account. The viscosity of the samples was measured with a rheometer. The results of these measurements show a very high linear correlation (R² = 0.92) between sediment concentration and dynamic viscosity. Viscosity thus seems to be an important factor, but it appears only in the Reynolds number equation and is neglected in the other equations. Yet the viscosity increases with increasing sediment concentration, and hence its influence also increases; the value of 1, negligible in multiplications, holds only for clear water. The correlations of shear stress, unit length shear force and stream power (on the x-axis) with the detachment rate (on the ordinate) show that there is no single parameter that always displays the best correlation with the detachment rate. The best-correlating parameter does not simply change from one experiment to another; it changes from one measuring point to another. Different processes in rill erosion are responsible for the changing correlations. In some cases none of the parameters shows an acceptable correlation with soil detachment, because these factors describe fluvial processes. Our experiments show that the main sediment production in the rills is caused not by fluvial processes but by bank failure and by knickpoint and headcut retreat, and these processes are gravitational rather than fluvial. Another sediment-producing process is the abrupt spill-over of plunge pools, a process that is neither really fluvial nor really gravitational. In some experiments, the highest sediment concentrations were measured at the slowly flowing waterfront that only transports loose material. None of these processes is considered in soil erosion models. Hence, hydraulic parameters alone are not sufficient to predict detachment rates; they cover the fluvial incision of the rill bottom, but the main sediment sources are not satisfactorily represented in the model equations.
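
    For reference, the hydraulic parameters named above are conventionally defined as follows (standard textbook forms, not formulas quoted from the poster):

```latex
\begin{aligned}
\tau   &= \rho\, g\, R\, S          && \text{bed shear stress ($R$: hydraulic radius, $S$: slope)}\\
\Gamma &= \rho\, g\, A\, S          && \text{unit length shear force ($A$: flow cross-section)}\\
\Omega &= \rho\, g\, Q\, S          && \text{stream power ($Q$: discharge)}\\
Re     &= \frac{\rho\, v\, R}{\mu}  && \text{Reynolds number ($\mu$: dynamic viscosity)}\\
Fr     &= \frac{v}{\sqrt{g\, h}}    && \text{Froude number ($h$: water depth)}
\end{aligned}
```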

  1. Prediction of composites behavior undergoing an ATP process through data-mining

    NASA Astrophysics Data System (ADS)

    Martin, Clara Argerich; Collado, Angel Leon; Pinillo, Rubén Ibañez; Barasinski, Anaïs; Abisset-Chavanne, Emmanuelle; Chinesta, Francisco

    2018-05-01

    The need to characterize composite surfaces for distinct mechanical or physical processes leads to different ways of evaluating the state of the surface. During many manufacturing processes deformation occurs, hindering composite classification for fabrication processes. In this work we focus on the challenge of identifying the surface behavior a priori in order to optimize manufacturing. We propose and validate the curvature of the surface as a reliable parameter and develop a tool that allows the prediction of the surface behavior.

  2. WE-G-204-01: BEST IN PHYSICS (IMAGING): Effect of Image Processing Parameters On Nodule Detectability in Chest Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, K; Lu, Z; MacMahon, H

    Purpose: To investigate the effect of varying system image processing parameters on lung nodule detectability in digital radiography. Methods: An anthropomorphic chest phantom was imaged in the posterior-anterior position using a GE Discovery XR656 digital radiography system. To simulate lung nodules, a polystyrene board with 6.35mm diameter PMMA spheres was placed adjacent to the phantom (into the x-ray path). Due to magnification, the projected simulated nodules had a diameter in the radiographs of approximately 7.5 mm. The images were processed using one of GE’s default chest settings (Factory3) and reprocessed by varying the “Edge” and “Tissue Contrast” processing parameters, which were the two user-configurable parameters for a single edge and contrast enhancement algorithm. For each parameter setting, the nodule signals were calculated by subtracting the chest-only image from the image with simulated nodules. Twenty nodule signals were averaged, Gaussian filtered, and radially averaged in order to generate an approximately noiseless signal. For each processing parameter setting, this noise-free signal and 180 background samples from across the lung were used to estimate ideal observer performance in a signal-known-exactly detection task. Performance was estimated using a channelized Hotelling observer with 10 Laguerre-Gauss channel functions. Results: The “Edge” and “Tissue Contrast” parameters each had an effect on the detectability as calculated by the model observer. The CHO-estimated signal detectability ranged from 2.36 to 2.93 and was highest for “Edge” = 4 and “Tissue Contrast” = −0.15. In general, detectability tended to decrease as “Edge” was increased and as “Tissue Contrast” was increased. A human observer study should be performed to validate the relation to human detection performance. Conclusion: Image processing parameters can affect lung nodule detection performance in radiography. While validation with a human observer study is needed, model observer detectability for common tasks could provide a means for optimizing image processing parameters.
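
    A compact sketch of the detectability calculation described, with Laguerre-Gauss channels followed by a channelised Hotelling observer. The grid size, channel width `a` and the input names are assumptions for illustration, not values from the study.

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

def lg_channels(npix, n_channels=10, a=20.0):
    """2-D Laguerre-Gauss channel functions on an npix x npix grid,
    returned as an (npix*npix, n_channels) matrix."""
    y, x = np.mgrid[:npix, :npix] - (npix - 1) / 2.0
    r2 = x**2 + y**2
    U = np.empty((npix * npix, n_channels))
    for n in range(n_channels):
        Ln = Laguerre.basis(n)(2.0 * np.pi * r2 / a**2)
        U[:, n] = ((np.sqrt(2.0) / a) * np.exp(-np.pi * r2 / a**2) * Ln).ravel()
    return U

def cho_detectability(signal, backgrounds, U):
    """Signal-known-exactly d' of a channelised Hotelling observer.

    signal:      noise-free signal image (npix x npix)
    backgrounds: stack of background-only samples (N x npix x npix)
    """
    v_bg = backgrounds.reshape(len(backgrounds), -1) @ U   # N x n_channels
    dv = U.T @ signal.ravel()                              # mean signal in channel space
    S = np.cov(v_bg, rowvar=False)                         # channel covariance of backgrounds
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

# Hypothetical usage on 64 x 64 regions of interest:
# d_prime = cho_detectability(mean_nodule_signal, lung_background_rois, lg_channels(64))
```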

  3. Modeling of feed-forward control using the partial least squares regression method in the tablet compression process.

    PubMed

    Hattori, Yusuke; Otsuka, Makoto

    2017-05-30

    In the pharmaceutical industry, the implementation of continuous manufacturing has been widely promoted in lieu of the traditional batch manufacturing approach. More specifically, in recent years the innovative concept of feed-forward control has been introduced in relation to process analytical technology. In the present study, we successfully developed a feed-forward control model for the tablet compression process by integrating data obtained from near-infrared (NIR) spectra and the physical properties of granules. In the pharmaceutical industry, batch manufacturing routinely allows the preparation of granules with the desired properties through manual control of the process parameters; continuous manufacturing, on the other hand, demands the automatic determination of these process parameters. Here, we proposed the development of a control model using the partial least squares regression (PLSR) method. The most significant feature of this method is the use of a dataset integrating both the NIR spectra and the physical properties of the granules. Using our model, we determined that the properties of the products, such as tablet weight and thickness, need to be included as independent variables in the PLSR analysis in order to predict unknown process parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
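
    A minimal sketch of this kind of PLSR model, assuming scikit-learn's PLSRegression and randomly generated placeholder arrays in place of the real NIR spectra, granule properties and targets; the array shapes, variable names and component count are illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for real measurements:
#   nir      - NIR spectra of granule batches          (n_batches x n_wavelengths)
#   physical - granule physical properties (density, size, flowability, ...)
#   targets  - tablet properties / process settings used for feed-forward control
rng = np.random.default_rng(0)
nir = rng.normal(size=(40, 200))
physical = rng.normal(size=(40, 4))
targets = rng.normal(size=(40, 3))

X = np.hstack([nir, physical])                 # the integrated dataset
pls = PLSRegression(n_components=5, scale=True)
print(cross_val_score(pls, X, targets, cv=5, scoring="r2"))

pls.fit(X, targets)
# Feed-forward use: predict the settings/qualities for a new granule batch
new_batch = np.hstack([rng.normal(size=(1, 200)), rng.normal(size=(1, 4))])
print(pls.predict(new_batch))
```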

  4. Wastewater treatment using hybrid treatment schemes based on cavitation and Fenton chemistry: a review.

    PubMed

    Bagal, Manisha V; Gogate, Parag R

    2014-01-01

    Advanced oxidation processes such as cavitation and Fenton chemistry have shown considerable promise for wastewater treatment applications due to the ease of operation and simple reactor design. In this review, hybrid methods based on cavitation coupled with Fenton process for the treatment of wastewater have been discussed. The basics of individual processes (Acoustic cavitation, Hydrodynamic cavitation, Fenton chemistry) have been discussed initially highlighting the need for combined processes. The different types of reactors used for the combined processes have been discussed with some recommendations for large scale operation. The effects of important operating parameters such as solution temperature, initial pH, initial pollutant concentration and Fenton's reagent dosage have been discussed with guidelines for selection of optimum parameters. The optimization of power density is necessary for ultrasonic processes (US) and combined processes (US/Fenton) whereas the inlet pressure needs to be optimized in the case of Hydrodynamic cavitation (HC) based processes. An overview of different pollutants degraded under optimized conditions using HC/Fenton and US/Fenton process with comparison with individual processes have been presented. It has been observed that the main mechanism for the synergy of the combined process depends on the generation of additional hydroxyl radicals and its proper utilization for the degradation of the pollutant, which is strongly dependent on the loading of hydrogen peroxide. Overall, efficient wastewater treatment with high degree of energy efficiency can be achieved using combined process operating under optimized conditions, as compared to the individual process. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Computational Electrocardiography: Revisiting Holter ECG Monitoring.

    PubMed

    Deserno, Thomas M; Marx, Nikolaus

    2016-08-05

    Since 1942, when Goldberger introduced 12-lead electrocardiography (ECG), this diagnostic method has not changed. After 70 years of technologic developments, we revisit Holter ECG from recording to understanding. A fundamental change is foreseen towards "computational ECG" (CECG), where continuous monitoring produces big data volumes that are impossible to inspect conventionally and require efficient computational methods. We draw parallels between CECG and computational biology, in particular with respect to computed tomography, computed radiology, and computed photography. From that, we identify the technology and methodology needed for CECG. Real-time transformation of raw data into meaningful parameters that are tracked over time will allow prediction of serious events, such as sudden cardiac death. Evolved from Holter's technology, portable smartphones with Bluetooth-connected textile-embedded sensors will capture noisy raw data (recording), process meaningful parameters over time (analysis), and transfer them to cloud services for sharing (handling), predicting serious events, and alarming (understanding). To make this happen, the following fields need more research: i) signal processing, ii) cycle decomposition, iii) cycle normalization, iv) cycle modeling, v) clinical parameter computation, vi) physiological modeling, and vii) event prediction. We shall start immediately developing methodology for CECG analysis and understanding.

  6. Advanced non-contrasted computed tomography post-processing by CT-Calculometry (CT-CM) outperforms established predictors for the outcome of shock wave lithotripsy.

    PubMed

    Langenauer, J; Betschart, P; Hechelhammer, L; Güsewell, S; Schmid, H P; Engeler, D S; Abt, D; Zumstein, V

    2018-05-29

    To evaluate the predictive value of advanced non-contrasted computed tomography (NCCT) post-processing using novel CT-calculometry (CT-CM) parameters compared to established predictors of success of shock wave lithotripsy (SWL) for urinary calculi. NCCT post-processing was retrospectively performed in 312 patients suffering from upper tract urinary calculi who were treated by SWL. Established predictors such as skin to stone distance, body mass index, stone diameter or mean stone attenuation values were assessed. Precise stone size and shape metrics, 3-D greyscale measurements and homogeneity parameters such as skewness and kurtosis, were analysed using CT-CM. Predictive values for SWL outcome were analysed using logistic regression and receiver operating characteristics (ROC) statistics. Overall success rate (stone disintegration and no re-intervention needed) of SWL was 59% (184 patients). CT-CM metrics mainly outperformed established predictors. According to ROC analyses, stone volume and surface area performed better than established stone diameter, mean 3D attenuation value was a stronger predictor than established mean attenuation value, and parameters skewness and kurtosis performed better than recently emerged variation coefficient of stone density. Moreover, prediction of SWL outcome with 80% probability to be correct would be possible in a clearly higher number of patients (up to fivefold) using CT-CM-derived parameters. Advanced NCCT post-processing by CT-CM provides novel parameters that seem to outperform established predictors of SWL response. Implementation of these parameters into clinical routine might reduce SWL failure rates.

  7. Making the purchase decision: factors other than price.

    PubMed

    Lyons, D M

    1992-05-01

    Taking price out of the limelight and concentrating on customer relations, mutual respect, and build-in/buy-in; involving the user; developing communication and evaluation processes; and being process oriented to attain the results needed require commitment on the part of administration and materiel management. There must be a commitment of time to develop the process, commitment of resources to work through the process, and a commitment of support to enhance the process. With those three parameters in place, price will no longer be the only factor in the purchasing decision.

  8. Seeing the unseen: Complete volcano deformation fields by recursive filtering of satellite radar interferograms

    NASA Astrophysics Data System (ADS)

    Gonzalez, Pablo J.

    2017-04-01

    Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing approach demands the prescribed or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is the interferometric phase filtering. There are a large number of phase filtering methods; however, the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter basically needs two parameters: the size of the filter window and a parameter indicating the filter smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter by deriving it from the local interferometric coherence level, but it still requires the dimension of the filtering window to be specified. Optimal filtered phase quality usually requires careful selection of those parameters. Therefore, there is a strong need to develop automatic filtering methods suited to automatic processing while maximizing filtered phase quality. Here, in this paper, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter is based upon the modified Goldstein filter [Baran et al., 2003]. This filtering method improves the quality of the interferograms by performing a recursive iteration using variable (cascade) kernel sizes, and improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission. Here, I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (~100-200 m) and variable temporal baselines of 70 and 190 days over variably vegetated volcanoes (Mt. Etna, Hawaii and Nyragongo-Nyamulagira). The differential phase of those examples shows intense localized volcano deformation and also vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high, and effectively suppressing phase noise in smoothly varying phase regions. Finally, this method also has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M.P., Kampes, B.M., Perski, Z., Lilly, P., (2003) A modification to the Goldstein radar interferogram filter. IEEE Transactions on Geoscience and Remote Sensing, vol. 41, No. 9., doi:10.1109/TGRS.2003.817212 Goldstein, R.M., Werner, C.L. (1998) Radar interferogram filtering for geophysical applications, Geophysical Research Letters, vol. 25, No. 21, 4035-4038, doi:10.1029/1998GL900033
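
    For orientation, the baseline Goldstein patch filter that the recursive method builds on can be written in a few lines: the patch spectrum is weighted by its smoothed magnitude raised to the power alpha. The sketch below is that baseline only (the patch size, alpha and smoothing window are assumptions), not the recursive adaptive filter proposed here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def goldstein_patch(z, alpha=0.8, smooth=3):
    """Baseline Goldstein filter for one complex interferogram patch `z`
    (e.g. 32 x 32): weight the spectrum by its smoothed magnitude ** alpha."""
    Z = np.fft.fft2(z)
    H = uniform_filter(np.abs(Z), size=smooth)   # smoothed spectral magnitude
    H = (H / H.max()) ** alpha                   # normalised spectral weight
    return np.fft.ifft2(Z * H)

# In practice the full interferogram is processed in overlapping patches and
# reassembled with a tapered window; only the per-patch step is shown here.
```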

  9. Method of Individual Forecasting of Technical State of Logging Machines

    NASA Astrophysics Data System (ADS)

    Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.

    2018-03-01

    Development of a model that evaluates the possibility of failure requires knowledge of the regularities governing changes in the technical condition parameters of machines in use. To study these regularities, the need arose to develop stochastic models that take into account the physical essence of the destruction processes of the machines' structural elements, the technology of their production, their degradation, the stochastic properties of the technical state parameters, and the conditions and modes of operation.

  10. Electrokinetic remediation prefield test methods

    NASA Technical Reports Server (NTRS)

    Hodko, Dalibor (Inventor)

    2000-01-01

    Methods for determining the parameters critical in designing an electrokinetic soil remediation process, including electrode well spacing, operating current/voltage, electroosmotic flow rate, electrode well wall design, and the amount of buffering or neutralizing solution needed in the electrode wells at operating conditions, are disclosed. These methods are preferably performed prior to initiating a full-scale electrokinetic remediation process in order to obtain efficient remediation of the contaminants.

  11. MeMoVolc report on classification and dynamics of volcanic explosive eruptions

    NASA Astrophysics Data System (ADS)

    Bonadonna, C.; Cioni, R.; Costa, A.; Druitt, T.; Phillips, J.; Pioli, L.; Andronico, D.; Harris, A.; Scollo, S.; Bachmann, O.; Bagheri, G.; Biass, S.; Brogi, F.; Cashman, K.; Dominguez, L.; Dürig, T.; Galland, O.; Giordano, G.; Gudmundsson, M.; Hort, M.; Höskuldsson, A.; Houghton, B.; Komorowski, J. C.; Küppers, U.; Lacanna, G.; Le Pennec, J. L.; Macedonio, G.; Manga, M.; Manzella, I.; Vitturi, M. de'Michieli; Neri, A.; Pistolesi, M.; Polacci, M.; Ripepe, M.; Rossi, E.; Scheu, B.; Sulpizio, R.; Tripoli, B.; Valade, S.; Valentine, G.; Vidal, C.; Wallenstein, N.

    2016-11-01

    Classifications of volcanic eruptions were first introduced in the early twentieth century mostly based on qualitative observations of eruptive activity, and over time, they have gradually been developed to incorporate more quantitative descriptions of the eruptive products from both deposits and observations of active volcanoes. Progress in physical volcanology, and increased capability in monitoring, measuring and modelling of explosive eruptions, has highlighted shortcomings in the way we classify eruptions and triggered a debate around the need for eruption classification and the advantages and disadvantages of existing classification schemes. Here, we (i) review and assess existing classification schemes, focussing on subaerial eruptions; (ii) summarize the fundamental processes that drive and parameters that characterize explosive volcanism; (iii) identify and prioritize the main research that will improve the understanding, characterization and classification of volcanic eruptions and (iv) provide a roadmap for producing a rational and comprehensive classification scheme. In particular, classification schemes need to be objective-driven and simple enough to permit scientific exchange and promote transfer of knowledge beyond the scientific community. Schemes should be comprehensive and encompass a variety of products, eruptive styles and processes, including for example, lava flows, pyroclastic density currents, gas emissions and cinder cone or caldera formation. Open questions, processes and parameters that need to be addressed and better characterized in order to develop more comprehensive classification schemes and to advance our understanding of volcanic eruptions include conduit processes and dynamics, abrupt transitions in eruption regime, unsteadiness, eruption energy and energy balance.

  12. Effective Parameters in Axial Injection Suspension Plasma Spray Process of Alumina-Zirconia Ceramics

    NASA Astrophysics Data System (ADS)

    Tarasi, F.; Medraj, M.; Dolatabadi, A.; Oberste-Berghaus, J.; Moreau, C.

    2008-12-01

    Suspension plasma spray (SPS) is a novel process for producing nano-structured coatings with metastable phases using significantly smaller particles as compared to conventional thermal spraying. Considering the complexity of the system there is an extensive need to better understand the relationship between plasma spray conditions and resulting coating microstructure and defects. In this study, an alumina/8 wt.% yttria-stabilized zirconia was deposited by axial injection SPS process. The effects of principal deposition parameters on the microstructural features are evaluated using the Taguchi design of experiment. The microstructural features include microcracks, porosities, and deposition rate. To better understand the role of the spray parameters, in-flight particle characteristics, i.e., temperature and velocity were also measured. The role of the porosity in this multicomponent structure is studied as well. The results indicate that thermal diffusivity of the coatings, an important property for potential thermal barrier applications, is barely affected by the changes in porosity content.

  13. HIGH-SHEAR GRANULATION PROCESS: INFLUENCE OF PROCESSING PARAMETERS ON CRITICAL QUALITY ATTRIBUTES OF ACETAMINOPHEN GRANULES AND TABLETS USING DESIGN OF EXPERIMENT APPROACH.

    PubMed

    Fayed, Mohamed H; Abdel-Rahman, Sayed I; Alanazi, Fars K; Ahmed, Mahrous O; Tawfeek, Hesham M; Al-Shedfat, Ramadan I

    2017-01-01

    Application of quality by design (QbD) to the high-shear granulation process is critical and requires an understanding of the correlation between the granulation process parameters and the properties of the intermediate (granules) and the corresponding final product (tablets). The present work examined the influence of water amount (X1) and wet massing time (X2) as independent process variables on the critical quality attributes of granules and corresponding tablets using a design of experiment (DoE) technique. A two-factor, three-level (3^2) full factorial design was performed; each variable was investigated at three levels to characterize its strength and interactions. The dried granules were analyzed for size distribution, density and flow pattern. Additionally, the produced tablets were investigated for weight uniformity, crushing strength, friability, percent capping, disintegration time and drug dissolution. A statistically significant impact (p < 0.05) of water amount was identified for granule growth, percent fines, distribution width and flow behavior. Granule density and compressibility were found to be significantly influenced (p < 0.05) by both operating conditions. Water amount also had a significant effect (p < 0.05) on tablet weight uniformity, friability and percent capping. Moreover, tablet disintegration time and drug dissolution appeared to be significantly influenced (p < 0.05) by the two process variables. The relationship of process parameters with the critical quality attributes of the granules and the final tablet product was thus identified and correlated. Ultimately, a judicious selection of process parameters in the high-shear granulation process will allow products of the desired quality to be produced.
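
    A minimal sketch of how such a two-factor, three-level full factorial layout can be generated and fitted with a quadratic response-surface model; the coded levels and response values below are invented for illustration and are not the study's data.

```python
# Sketch of a 3^2 full factorial layout (two factors, three levels), assuming
# hypothetical coded levels; the factor roles follow the abstract (water amount X1,
# wet massing time X2) but the response values are invented for illustration.
import itertools
import numpy as np

levels = [-1, 0, 1]                                   # coded low / centre / high
design = list(itertools.product(levels, repeat=2))    # 9 runs of (X1, X2)

# Hypothetical measured response (e.g. granule mean size, um) for each run.
y = np.array([102, 118, 131, 110, 128, 146, 115, 137, 160], dtype=float)

# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 by least squares.
X = np.array([[1, x1, x2, x1 * x2, x1 ** 2, x2 ** 2] for x1, x2 in design], dtype=float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "b1", "b2", "b12", "b11", "b22"], coef.round(2))))
```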

  14. The time-lapse AVO difference inversion for changes in reservoir parameters

    NASA Astrophysics Data System (ADS)

    Longxiao, Zhi; Hanming, Gu; Yan, Li

    2016-12-01

    The result of conventional time-lapse seismic processing is the amplitude difference of the post-stack seismic data. Although stack processing can improve the signal-to-noise ratio (SNR) of seismic data, it also causes a considerable loss of important information about the amplitude changes and only yields a qualitative interpretation. To predict changes in reservoir fluid more precisely and accurately, we also need quantitative information about the reservoir. To achieve this aim, we develop a method of time-lapse AVO (amplitude versus offset) difference inversion. For the inversion of reservoir changes in elastic parameters, we apply the Gardner equation as a constraint and convert the three-parameter inversion of elastic parameter changes into a two-parameter inversion to make the inversion more stable. For the inversion of variations in the reservoir parameters, we infer the relation between the difference of the reflection coefficient and variations in the reservoir parameters, and then invert reservoir parameter changes directly. The results of theoretical modeling computations and a practical application show that our method can estimate the relative variations in reservoir density, P-wave and S-wave velocity, calculate reservoir changes in water saturation and effective pressure accurately, and thus provide a reference for the rational exploitation of the reservoir.
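
    A small illustration of the parameter-reduction step described above: the commonly quoted Gardner relation (density proportional to Vp^0.25) ties the relative density change to the relative P-wave velocity change, so a three-parameter AVO inversion can be reduced to two parameters. The exponent is the textbook value, not one taken from this paper.

```python
# Minimal sketch of folding density changes into velocity changes via the Gardner
# relation, reducing (dVp/Vp, dVs/Vs, drho/rho) to two free parameters.
GARDNER_B = 0.25   # rho ~ a * Vp**b  ->  drho/rho ≈ b * dVp/Vp (assumed textbook value)

def reduced_reflectivity_terms(dvp_vp, dvs_vs):
    """Return (dVp/Vp, dVs/Vs, drho/rho) with density tied to Vp through Gardner."""
    drho_rho = GARDNER_B * dvp_vp
    return dvp_vp, dvs_vs, drho_rho

print(reduced_reflectivity_terms(0.04, -0.06))
```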

  15. PCC properties to support w/c determination for durability.

    DOT National Transportation Integrated Search

    2012-10-01

    The fresh concrete water-cement ratio (w/c) determination tool is urgently needed for use in the QC/QA process at the job site. Various techniques have been used in the past to determine this parameter. However, many of these techniques can be co...

  16. HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.

    PubMed

    Schroeder, M J Mark J; Perreault, Bill; Ewert, D L Daniel L; Koenig, S C Steven C

    2004-07-01

    A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ASCII or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework within which Good Laboratory Practice (GLP) compliance can be pursued. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study-specific analysis files according to their specific needs. Copyright 2003 Elsevier Ltd.
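
    The HEART package itself is written in Matlab; the following Python sketch only illustrates the general idea of beat-to-beat parameter extraction (systolic, diastolic, mean pressure and heart rate per beat) on a synthetic pressure waveform.

```python
# Illustrative beat-to-beat parameter extraction on a synthetic pressure signal;
# this is not the HEART code, only a sketch of the general approach.
import numpy as np
from scipy.signal import find_peaks

fs = 250.0                                           # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
pressure = 100 + 20 * np.sin(2 * np.pi * 1.2 * t)    # synthetic ~72 bpm waveform, mmHg

peaks, _ = find_peaks(pressure, distance=int(0.4 * fs))   # systolic peaks
for i in range(len(peaks) - 1):
    beat = pressure[peaks[i]:peaks[i + 1]]
    hr = 60.0 * fs / (peaks[i + 1] - peaks[i])
    print(f"beat {i}: systolic={beat.max():.1f} diastolic={beat.min():.1f} "
          f"mean={beat.mean():.1f} mmHg, HR={hr:.1f} bpm")
```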

  17. Laser Peening Process and Its Impact on Materials Properties in Comparison with Shot Peening and Ultrasonic Impact Peening

    PubMed Central

    Gujba, Abdullahi K.; Medraj, Mamoun

    2014-01-01

    The laser shock peening (LSP) process using a Q-switched pulsed laser beam for surface modification has been reviewed. The development of the LSP technique and its numerous advantages over the conventional shot peening (SP) such as better surface finish, higher depths of residual stress and uniform distribution of intensity were discussed. Similar comparison with ultrasonic impact peening (UIP)/ultrasonic shot peening (USP) was incorporated, when possible. The generation of shock waves, processing parameters, and characterization of LSP treated specimens were described. Special attention was given to the influence of LSP process parameters on residual stress profiles, material properties and structures. Based on the studies so far, more fundamental understanding is still needed when selecting optimized LSP processing parameters and substrate conditions. A summary of the parametric studies of LSP on different materials has been presented. Furthermore, enhancements in the surface micro and nanohardness, elastic modulus, tensile yield strength and refinement of microstructure which translates to increased fatigue life, fretting fatigue life, stress corrosion cracking (SCC) and corrosion resistance were addressed. However, research gaps related to the inconsistencies in the literature were identified. Current status, developments and challenges of the LSP technique were discussed. PMID:28788284

  18. Parametric Optimization Of Gas Metal Arc Welding Process By Using Grey Based Taguchi Method On Aisi 409 Ferritic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Ghosh, Nabendu; Kumar, Pradip; Nandi, Goutam

    2016-10-01

    Welding input process parameters play a very significant role in determining the quality of the welded joint. Only by properly controlling every element of the process can product quality be controlled. For better quality in MIG welding of ferritic stainless steel AISI 409, precise control of the process parameters, parametric optimization of the process parameters, prediction and control of the desired responses (quality indices), etc., together with continued and elaborate experiments, analysis and modeling, are needed. A knowledge base may thus be generated which can be utilized by practicing engineers and technicians to produce good-quality welds more precisely, reliably and predictively. In the present work, an X-ray radiographic test has been conducted in order to detect surface and sub-surface defects of weld specimens made of ferritic stainless steel. The quality of the weld has been evaluated in terms of yield strength, ultimate tensile strength and percentage elongation of the welded specimens. The observed data have been interpreted, discussed and analyzed by considering ultimate tensile strength, yield strength and percentage elongation, combined with use of the Grey-Taguchi methodology.
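
    A hedged sketch of the grey relational analysis step of a Grey-Taguchi study: larger-the-better normalization, grey relational coefficients with a distinguishing coefficient of 0.5, and a grey relational grade per experimental run. The response values are invented and do not reproduce the paper's measurements.

```python
# Grey relational analysis sketch (larger-the-better responses), with made-up data.
import numpy as np

# rows = experimental runs, columns = responses (UTS, yield strength, % elongation)
responses = np.array([
    [412.0, 285.0, 22.0],
    [431.0, 301.0, 25.0],
    [398.0, 270.0, 19.0],
])

norm = (responses - responses.min(axis=0)) / (responses.max(axis=0) - responses.min(axis=0))
delta = 1.0 - norm                        # deviation from the ideal (reference) sequence
zeta = 0.5                                # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                  # grey relational grade per run
print("grades:", grade.round(3))          # the run with the highest grade is preferred
```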

  19. Validation study and routine control monitoring of moist heat sterilization procedures.

    PubMed

    Shintani, Hideharu

    2012-06-01

    The proposed approach to validation of steam sterilization in autoclaves follows the basic life-cycle concepts applicable to all validation programs: understand the function of the sterilization process, develop and understand the cycles used to carry out the process, and define a suitable test or series of tests to confirm that the function of the process is suitably ensured by the structure provided. Sterilization of product, and of components and parts that come into direct contact with sterilized product, is the most critical of pharmaceutical processes. Consequently, this process requires a most rigorous and detailed approach to validation. An understanding of the process requires a basic understanding of microbial death, the parameters that facilitate that death, the accepted definition of sterility, and the relationship between the definition and the sterilization parameters. Autoclaves and support systems need to be designed, installed, and qualified in a manner that ensures their continued reliability. Lastly, the test program must be complete and definitive. In this paper, in addition to the validation study, the documentation of IQ, OQ and PQ is described concretely.
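
    One standard quantity used when qualifying moist heat cycles is the accumulated lethality F0; the sketch below uses the textbook formula (z = 10 C, reference temperature 121.1 C) on a synthetic temperature trace, and is not taken from this paper.

```python
# Textbook F0 (equivalent lethality) calculation for a moist heat cycle; the
# temperature readings are synthetic and serve only to show the arithmetic.
import numpy as np

z = 10.0          # z-value, degrees C, conventional for moist heat
t_ref = 121.1     # reference temperature, degrees C
dt_min = 0.5      # probe sampling interval, minutes

# Hypothetical chamber temperature readings during a cycle (degrees C).
temps = np.array([100, 110, 118, 121, 122, 122, 121, 115, 105], dtype=float)

f0 = np.sum(10.0 ** ((temps - t_ref) / z)) * dt_min
print(f"accumulated F0 ≈ {f0:.2f} minutes")
```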

  20. New Ultrasonic Controller and Characterization System for Low Temperature Drying Process Intensification

    NASA Astrophysics Data System (ADS)

    Andrés, R. R.; Blanco, A.; Acosta, V. M.; Riera, E.; Martínez, I.; Pinto, A.

    Process intensification constitutes a highly interesting and promising industrial area. It aims to modify conventional processes or develop new technologies in order to reduce energy needs, increase yields and improve product quality. It has been demonstrated by this research group (CSIC) that power ultrasound has great potential in food drying processes. The effects associated with the application of power ultrasound can enhance heat and mass transfer and may constitute a route to process intensification. The objective of this work has been the design and development of a new ultrasonic system for the power characterization of piezoelectric plate transducers, covering the excitation, monitoring, analysis, control and characterization of their nonlinear response. For this purpose, the system proposes a new, efficient and economical approach that separates the effects of the different process parameters and variables, such as excitation, medium and transducer parameters (voltage, current, frequency, impedance, vibration velocity, acoustic pressure and temperature), by observing the electrical, mechanical, acoustical and thermal behavior and controlling the vibrational state.

  1. Effect of Laser Power and Gas Flow Rate on Properties of Directed Energy Deposition of Titanium Alloy

    NASA Astrophysics Data System (ADS)

    Mahamood, Rasheedat M.

    2018-03-01

    Laser metal deposition (LMD) belongs to the directed energy deposition class of additive manufacturing processes. It is an important manufacturing technology with significant potential, especially for the automobile and aerospace industries. The laser metal deposition process is fairly new, and it is very sensitive to the processing parameters, with a high level of interaction among these parameters. The surface finish of a part produced using the laser metal deposition process depends on the processing parameters. Also, the economy of the LMD process depends largely on steps taken to eliminate or reduce the need for secondary finishing operations. In this study, the influence of laser power and gas flow rate on the microstructure, microhardness and surface finish produced during the laser metal deposition of Ti6Al4V was investigated. The laser power was varied between 1.8 kW and 3.0 kW, while the gas flow rate was varied between 2 l/min and 4 l/min. The microstructure was studied under an optical microscope, the microhardness was studied using a Metkon microhardness indenter, and the surface roughness was studied using a Jenoptik stylus surface analyzer. The results showed that a better surface finish was produced at a laser power of 3.0 kW and a gas flow rate of 4 l/min.

  2. Status quo and future research challenges on organic food quality determination with focus on laboratory methods.

    PubMed

    Kahl, Johannes; Bodroza-Solarov, Marija; Busscher, Nicolaas; Hajslova, Jana; Kneifel, Wolfgang; Kokornaczyk, Maria Olga; van Ruth, Saskia; Schulzova, Vera; Stolz, Peter

    2014-10-01

    Organic food quality determination needs multi-dimensional evaluation tools. The main focus is on authentication as an analytical verification of the certification process. New fingerprinting approaches such as ultra-performance liquid chromatography-mass spectrometry, gas chromatography-mass spectrometry, direct analysis in real time-high-resolution mass spectrometry, as well as crystallization with and without the presence of additives, seem to be promising methods in terms of time of analysis and detection of organic-system-related parameters. For further methodological development, a system approach is recommended, which also takes into account food structure aspects. Furthermore, the authentication of processed organic samples needs more attention, since most organic food is complex and processed. © 2013 Society of Chemical Industry.

  3. Contributions to optimization of storage and transporting industrial goods

    NASA Astrophysics Data System (ADS)

    Babanatsas, T.; Babanatis Merce, R. M.; Glăvan, D. O.; Glăvan, A.

    2018-01-01

    Optimization of the storage and transport of industrial goods in a factory, whether from a constructive, functional or technological point of view, is a determining parameter in programming the manufacturing process, the performance of the whole process being determined by the correlation between those two factors (optimization and programming of the process). It is imperative to take into consideration each type of production program (range), to restrict as much as possible the area being used and to minimize execution times, all in order to satisfy the clients' needs, and to try to classify those needs so that a global software tool (with general rules) can be defined that fulfils each client's needs.

  4. The Effects of Operational Parameters on a Mono-wire Cutting System: Efficiency in Marble Processing

    NASA Astrophysics Data System (ADS)

    Yilmazkaya, Emre; Ozcelik, Yilmaz

    2016-02-01

    Mono-wire block cutting machines that cut with a diamond wire can be used for squaring natural stone blocks and the slab-cutting process. The efficient use of these machines reduces operating costs by ensuring less diamond wire wear and longer wire life at high speeds. The high investment costs of these machines will lead to their efficient use and reduce production costs by increasing plant efficiency. Therefore, there is a need to investigate the cutting performance parameters of mono-wire cutting machines in terms of rock properties and operating parameters. This study aims to investigate the effects of the wire rotational speed (peripheral speed) and wire descending speed (cutting speed), which are the operating parameters of a mono-wire cutting machine, on unit wear and unit energy, which are the performance parameters in mono-wire cutting. By using the obtained results, cuttability charts for each natural stone were created on the basis of unit wear and unit energy values, cutting optimizations were performed, and the relationships between some physical and mechanical properties of rocks and the optimum cutting parameters obtained as a result of the optimization were investigated.

  5. Hubert: Software for efficient analysis of in-situ nuclear forward scattering experiments

    NASA Astrophysics Data System (ADS)

    Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel

    2016-10-01

    Combination of short data acquisition time and local investigation of a solid state through hyperfine parameters makes nuclear forward scattering (NFS) a unique experimental technique for investigation of fast processes. However, the total number of acquired NFS time spectra may be very high. Therefore an efficient way of the data evaluation is needed. In this paper we report the development of Hubert software package as a response to the rapidly developing field of in-situ NFS experiments. Hubert offers several useful features for data files processing and could significantly shorten the evaluation time by using a simple connection between the neighboring time spectra through their input and output parameter values.

  6. Video segmentation and camera motion characterization using compressed data

    NASA Astrophysics Data System (ADS)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques directly process and analyze MPEG-1 motion vectors, without the need for video decompression. Experimental results are reported for a database of news video clips.
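
    A minimal least-squares sketch of the second task, assuming a simple zoom-plus-translation camera model (u = pan + zoom*x, v = tilt + zoom*y) fitted to block motion vectors; the paper's exact camera model may differ.

```python
# Least-squares estimation of pan/tilt/zoom from synthetic block motion vectors,
# assuming a simplified zoom-plus-translation model (not the paper's formulation).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)               # block centre coordinates (normalized)
y = rng.uniform(-1, 1, 200)
true_pan, true_tilt, true_zoom = 0.8, -0.3, 0.05
u = true_pan + true_zoom * x + 0.01 * rng.standard_normal(200)   # motion vector components
v = true_tilt + true_zoom * y + 0.01 * rng.standard_normal(200)

# Stack both component equations and solve for (pan, tilt, zoom) in one system.
A = np.block([
    [np.ones_like(x)[:, None], np.zeros_like(x)[:, None], x[:, None]],
    [np.zeros_like(y)[:, None], np.ones_like(y)[:, None], y[:, None]],
])
b = np.concatenate([u, v])
(pan, tilt, zoom), *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"pan={pan:.3f} tilt={tilt:.3f} zoom={zoom:.3f}")
```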

  7. Wiener-Hammerstein system identification - an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Naitali, Abdessamad; Giri, Fouad

    2016-01-01

    The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous as no interaction is needed with the user during the optimum search process. The performances of the proposed method will be illustrated and compared to alternative methods using a well-established WH benchmark.

  8. A technique for correcting ERTS data for solar and atmospheric effects

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Peacock, K.; Shah, N. J.

    1974-01-01

    A technique is described by which ERTS investigators can obtain and utilize solar and atmospheric parameters to transform spacecraft radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument (RPMI) and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in processing ERTS CCTs to correct for atmospheric parameters to obtain imagery are reviewed. Examples are given which demonstrate the nature and magnitude of atmospheric effects on computer classification programs.

  9. Pilot-Configurable Information on a Display Unit

    NASA Technical Reports Server (NTRS)

    Bell, Charles Frederick (Inventor); Ametsitsi, Julian (Inventor); Che, Tan Nhat (Inventor); Shafaat, Syed Tahir (Inventor)

    2017-01-01

    A small, thin display unit that can be installed in the flight deck for displaying only flight-crew-selected tactical information needed for the task at hand. The flight crew can select the tactical information to be displayed by means of any conventional user interface. Whenever the flight crew selects tactical information for display, the system processes the request, including periodically retrieving measured current values or computing current values for the requested tactical parameters and returning those current tactical parameter values to the display unit for display.

  10. Development and evaluation of a dimensionless mechanistic pan coating model for the prediction of coated tablet appearance.

    PubMed

    Niblett, Daniel; Porter, Stuart; Reynolds, Gavin; Morgan, Tomos; Greenamoyer, Jennifer; Hach, Ronald; Sido, Stephanie; Karan, Kapish; Gabbott, Ian

    2017-08-07

    A mathematical, mechanistic tablet film-coating model has been developed for pharmaceutical pan coating systems based on the mechanisms of atomisation, tablet bed movement and droplet drying with the main purpose of predicting tablet appearance quality. Two dimensionless quantities were used to characterise the product properties and operating parameters: the dimensionless Spray Flux (relating to area coverage of the spray droplets) and the Niblett Number (relating to the time available for drying of coating droplets). The Niblett Number is the ratio between the time a droplet needs to dry under given thermodynamic conditions and the time available for the droplet while on the surface of the tablet bed. The time available for drying on the tablet bed surface is critical for appearance quality. These two dimensionless quantities were used to select process parameters for a set of 22 coating experiments, performed over a wide range of multivariate process parameters. The dimensionless Regime Map created can be used to visualise the effect of interacting process parameters on overall tablet appearance quality and defects such as picking and logo bridging. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Processing of strong-motion accelerograms: Needs, options and consequences

    USGS Publications Warehouse

    Boore, D.M.; Bommer, J.J.

    2005-01-01

    Recordings from strong-motion accelerographs are of fundamental importance in earthquake engineering, forming the basis for all characterizations of ground shaking employed for seismic design. The recordings, particularly those from analog instruments, invariably contain noise that can mask and distort the ground-motion signal at both high and low frequencies. For any application of recorded accelerograms in engineering seismology or earthquake engineering, it is important to identify the presence of this noise in the digitized time-history and its influence on the parameters that are to be derived from the records. If the parameters of interest are affected by noise then appropriate processing needs to be applied to the records, although it must be accepted from the outset that it is generally not possible to recover the actual ground motion over a wide range of frequencies. There are many schemes available for processing strong-motion data and it is important to be aware of the merits and pitfalls associated with each option. Equally important is to appreciate the effects of the procedures on the records in order to avoid errors in the interpretation and use of the results. Options for processing strong-motion accelerograms are presented, discussed and evaluated from the perspective of engineering application. ?? 2004 Elsevier Ltd. All rights reserved.
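
    As one illustration of a common processing choice, the sketch below applies a zero-phase Butterworth band-pass filter to a synthetic acceleration record; the corner frequencies are assumptions for illustration, not recommendations from the paper.

```python
# Zero-phase Butterworth band-pass filtering of a stand-in accelerogram; corner
# frequencies and the synthetic record are assumptions, not values from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0                                  # samples per second
t = np.arange(0, 40, 1 / fs)
accel = np.random.default_rng(1).standard_normal(t.size)   # stand-in acceleration record

low, high = 0.1, 25.0                       # Hz, assumed usable band
nyq = fs / 2.0
b, a = butter(4, [low / nyq, high / nyq], btype="bandpass")
accel_filt = filtfilt(b, a, accel)          # zero-phase (acausal) filtering

print(accel_filt[:5])
```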

  12. FIRST ORDER KINETIC GAS GENERATION MODEL PARAMETERS FOR WET LANDFILLS

    EPA Science Inventory

    Landfill gas is produced as a result of a sequence of physical, chemical, and biological processes occurring within an anaerobic landfill. Landfill operators, energy recovery project owners, regulators, and energy users need to be able to project the volume of gas produced and re...
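
    The kind of first-order kinetic model referred to here (LandGEM-style) can be sketched as a simple exponential decay of gas generation from a mass of waste; the parameter values below are typical illustrations for a wet landfill and are not taken from the EPA record.

```python
# First-order-decay sketch of landfill gas generation from a single waste placement;
# k and L0 are illustrative assumptions, not the EPA-recommended values.
import numpy as np

k = 0.3        # first-order decay rate, 1/yr (wet landfills decay faster than dry)
L0 = 96.0      # methane generation potential, m^3 CH4 per Mg of waste
mass = 50000.0 # waste placed in a single year, Mg

years = np.arange(0, 31)
q_ch4 = k * L0 * mass * np.exp(-k * years)   # m^3 CH4 per year from that placement
print(q_ch4[:5].round(0))
```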

  13. Disease management programs for type 2 diabetes in Germany: a systematic literature review evaluating effectiveness.

    PubMed

    Fuchs, Sabine; Henschke, Cornelia; Blümel, Miriam; Busse, Reinhard

    2014-06-27

    Disease management programs (DMPs) are intended to improve the care of persons with chronic diseases. Despite numerous studies there is no unequivocal evidence about the effectiveness of DMPs in Germany. We conducted a systematic literature review in the MEDLINE, EMBASE, Cochrane Library, and CCMed databases. Our analysis included all controlled studies in which patients with type 2 diabetes enrolled in a DMP were compared to type 2 diabetes patients receiving routine care with respect to process, outcome, and economic parameters. The 9 studies included in the analysis were highly divergent with respect to their characteristics and the process and outcome parameters studied in each. No study had data beyond the year 2008. In 3 publications, the DMP patients had a lower mortality than the control patients (2.3%, 11.3%, and 7.17% versus 4.7%, 14.4%, and 14.72%). In 2 publications, DMP participation was found to be associated with a mean survival time of 1044.94 (± 189.87) days, as against 985.02 (± 264.68) in the control group. No consistent effect was seen with respect to morbidity, quality of life, or economic parameters. 7 publications from 5 studies revealed positive effects on process parameters for DMP participants. The observed beneficial trends with respect to mortality and survival time, as well as improvements in process parameters, indicate that DMPs can, in fact, improve the care of patients with diabetes. Further evaluation is needed, because some changes in outcome parameters (an important indicator of the quality of care) may only be observable over a longer period of time.

  14. Melt-Pool Temperature and Size Measurement During Direct Laser Sintering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    List, III, Frederick Alyious; Dinwiddie, Ralph Barton; Carver, Keith

    2017-08-01

    Additive manufacturing has demonstrated the ability to fabricate complex geometries and components not possible with conventional casting and machining. In many cases, industry has demonstrated the ability to fabricate complex geometries with improved efficiency and performance. However, qualification and certification of processes is challenging, leaving companies to focus on certification of material through design-allowable based approaches. This significantly reduces the business case for additive manufacturing. Therefore, real-time monitoring of the melt pool can be used to detect the development of flaws, such as porosity or un-sintered powder, and aid in the certification process. Characteristics of the melt pool in the Direct Laser Sintering (DLS) process are also of great interest to modelers who are developing the simulation models needed to improve and perfect the DLS process. Such models could provide a means to rapidly develop the optimum processing parameters for new alloy powders and optimize processing parameters for specific part geometries. Stratonics' ThermaViz system will be integrated with the Renishaw DLS system in order to demonstrate its ability to measure melt pool size, shape and temperature. These results will be compared with data from an existing IR camera to determine the best approach for the determination of these critical parameters.

  15. Factors Affecting Bacterial Inactivation during High Hydrostatic Pressure Processing of Foods: A Review.

    PubMed

    Syed, Qamar-Abbas; Buffa, Martin; Guamis, Buenaventura; Saldo, Jordi

    2016-01-01

    Although high hydrostatic pressure (HHP) technology has been gaining popularity in the food industry over the last two decades, intensive research is still needed to explore the missing information. Bacterial inactivation in food using HHP applications can be enhanced by gaining deeper insight into the process. Some of these aspects have already been studied in detail (such as pressure, time and temperature), while others still need to be investigated in more detail (such as pH and the rates of compression and decompression). Selection of process parameters is mainly dependent on the type of matrix and the target bacteria. This review provides comprehensive information about the variety of aspects that can determine the bacterial inactivation potential of the HHP process, indicating the fields of future research on this subject, including pH shifts of the pressure-treated samples and critical limits of compression and decompression rates to improve the efficacy of the process.

  16. A quality by design approach to scale-up of high-shear wet granulation process.

    PubMed

    Pandey, Preetanshu; Badawy, Sherif

    2016-01-01

    High-shear wet granulation is a complex process that in turn makes scale-up a challenging task. Scale-up of high-shear wet granulation process has been studied extensively in the past with various different methodologies being proposed in the literature. This review article discusses existing scale-up principles and categorizes the various approaches into two main scale-up strategies - parameter-based and attribute-based. With the advent of quality by design (QbD) principle in drug product development process, an increased emphasis toward the latter approach may be needed to ensure product robustness. In practice, a combination of both scale-up strategies is often utilized. In a QbD paradigm, there is also a need for an increased fundamental and mechanistic understanding of the process. This can be achieved either by increased experimentation that comes at higher costs, or by using modeling techniques, that are also discussed as part of this review.

  17. Modelling aspects regarding the control in 13C isotope separation column

    NASA Astrophysics Data System (ADS)

    Boca, M. L.

    2016-08-01

    Carbon represents the fourth most abundant chemical element in the world, having two stable isotopes and one radioactive isotope. The 13C isotope, with a natural abundance of 1.1%, plays an important role in numerous applications, such as the study of changes in human metabolism, molecular structure studies, non-invasive respiratory tests, Alzheimer tests, and the effects of air pollution and global warming on plants [9]. A manufacturing control system manages the internal logistics in a production system and determines the routings of product instances, the assignment of workers and components, and the starting of processes on not-yet-finished product instances. Manufacturing control does not control the manufacturing processes themselves, but has to cope with the consequences of the processing results (e.g. the routing of products to a repair station). In this research, several UML (Unified Modelling Language) diagrams were developed for modelling the 13C isotope separation column and implemented in the StarUML program. Because the separation is a critical process that needs good control and supervision, the critical parameters in the column, temperature and pressure, were controlled using PLCs (programmable logic controllers), and graphical analyses were performed to detect critical situations that can affect the separation process. The main parameters that need to be controlled are: -The liquid nitrogen (N2) level in the condenser. -The electrical power supplied to the boiler. -The vacuum pressure.

  18. Indicator system for a process plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  19. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in the field studies.

  20. Curie-Montgolfiere Planetary Explorers

    NASA Astrophysics Data System (ADS)

    Taylor, Chris Y.; Hansen, Jeremiah

    2007-01-01

    Hot-air balloons, also known as Montgolfiere balloons, powered by heat from radioisotope decay are a potentially useful tool for exploring planetary atmospheres and augmenting the capabilities of other exploration technologies. This paper describes the physical equations and identifies the key engineering parameters that drive radioisotope-powered balloon performance. These parameters include envelope strength-to-weight, envelope thermal conductivity, heater power-to-weight, heater temperature, and balloon shape. The design space for these parameters is shown for varying atmospheric compositions to illustrate the performance needed to build functioning ``Curie-Montgolfiere'' balloons for various planetary atmospheres. Methods to ease the process of Curie-Montgolfiere conceptual design and sizing are also introduced.
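
    A back-of-the-envelope sketch of the buoyancy balance behind such a balloon, using the ideal gas law for the ambient and heated gas densities; the atmospheric values are rough Titan-like assumptions, not figures from the paper.

```python
# Buoyancy balance for a hot-gas balloon via the ideal gas law; all values below
# are illustrative assumptions, not numbers from the paper.
R = 8.314          # J/(mol K), universal gas constant
g = 1.35           # m/s^2, surface gravity (assumed, roughly Titan-like)
p = 1.5e5          # Pa, ambient pressure (assumed)
M = 0.028          # kg/mol, N2-dominated atmosphere (assumed)
T_amb, T_hot = 94.0, 110.0     # K, ambient vs. heated interior gas temperature

rho_amb = p * M / (R * T_amb)  # ideal-gas densities
rho_hot = p * M / (R * T_hot)

volume = 100.0                 # m^3 envelope volume (assumed)
lift_mass = volume * (rho_amb - rho_hot)   # supportable mass, kg
lift_force = lift_mass * g                 # buoyant force, N
print(f"net lift ≈ {lift_mass:.1f} kg ({lift_force:.0f} N) at dT = {T_hot - T_amb:.0f} K")
```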

  1. Optimal design study of high efficiency indium phosphide space solar cells

    NASA Technical Reports Server (NTRS)

    Jain, Raj K.; Flood, Dennis J.

    1990-01-01

    Recently, indium phosphide solar cells have achieved beginning-of-life AM0 efficiencies in excess of 19 pct. at 25 C. The high efficiency prospects along with superb radiation tolerance make indium phosphide a leading material for space power requirements. To achieve cost effectiveness, practical cell efficiencies have to be raised to near theoretical limits and thin film indium phosphide cells need to be developed. The optimal design study of high efficiency indium phosphide solar cells for space power applications using the PC-1D computer program is described. It is shown that cells with efficiencies over 22 pct. AM0 at 25 C could be fabricated by achieving proper material and process parameters. It is observed that further improvements in cell material and process parameters could lead to experimental cell efficiencies near theoretical limits. The effect of various emitter and base parameters on cell performance was studied.

  2. In-Situ Waviness Characterization of Metal Plates by a Lateral Shearing Interferometric Profilometer

    PubMed Central

    Frade, María; Enguita, José María; Álvarez, Ignacio

    2013-01-01

    Characterizing waviness in sheet metal is a key process for quality control in many industries, such as automotive and home appliance manufacturing. However, there is still no known technique able to work in an automated in-floor inspection system. The literature describes many techniques developed in the last three decades, but most of them are either slow, only able to work in laboratory conditions, need very short (unsafe) working distances, or are only able to estimate certain waviness parameters. In this article we propose the use of a lateral shearing interferometric profilometer, which is able to obtain a 19 mm profile in a single acquisition, with sub-micron precision, in an uncontrolled environment, and from a working distance greater than 90 mm. This system allows direct measurement of all needed waviness parameters even with objects in movement. We describe a series of experiments over several samples of steel plates to validate the sensor and the processing method, and the results are in close agreement with those obtained with a contact stylus device. The sensor is an ideal candidate for on-line or in-machine fast automatic waviness assessment, reducing delays and costs in many metalworking processes. PMID:23584120

  4. A generalized multi-dimensional mathematical model for charging and discharging processes in a supercapacitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A

    A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume averaging process, is a widely used concept for multiphase flow equations ([28] [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22] [39] [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing any upscaled parameters from lower length-scale simulations and experiments. Due to the irregular geometric configurations, including the porous electrode, the charge transport and subsequent performance characteristics of the supercapacitor can be readily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrate how this new formulation can be applied to non-planar supercapacitors.

  5. Tunneling from the past horizon

    NASA Astrophysics Data System (ADS)

    Kang, Subeom; Yeom, Dong-han

    2018-04-01

    We investigate a tunneling and emission process of a thin-shell from a Schwarzschild black hole, where the shell was initially located beyond the Einstein-Rosen bridge and finally appears at the right side of the Penrose diagram. In order to obtain such a solution, we should assume that the areal radius of the black hole horizon increases after the tunneling. Hence, there is a parameter range such that the tunneling rate is exponentially enhanced, rather than suppressed. We may have two interpretations regarding this. First, such a tunneling process from the past horizon is improbable by physical reasons; second, such a tunneling is possible in principle, but in order to obtain a stable Einstein-Rosen bridge, one needs to restrict the parameter spaces. If such a process is allowed, this can be a nonperturbative contribution to Einstein-Rosen bridges as well as eternal black holes.

  6. Automated system for generation of soil moisture products for agricultural drought assessment

    NASA Astrophysics Data System (ADS)

    Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices and models are being used globally for drought forecasting / early warning and for monitoring drought for its prevalence, persistence and severity. Since drought is a complex phenomenon, a large number of parameters/indices need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources like space-based data, ground data and collateral data in short intervals of time, where there may be limitations in terms of processing power, availability of domain expertise, and specialized models and tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz. soil moisture. The soil water balance bucket model is widely used to arrive at soil moisture products and is popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open source libraries for the best possible automation, to fulfill the need for a standard procedure for preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for generation of soil moisture products, facilitating users to concentrate on further enhancements and on the implementation of these parameters in related areas of research, without re-discovering the established models. The emphasis of the architecture is mainly on available open source libraries for GIS and raster IO operations for different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further, the system is automated to the extent of user-free operation if required, with inbuilt chain processing for daily generation of products at specified intervals. The operational software has inbuilt capabilities to automatically download requisite input parameters like rainfall and potential evapotranspiration (PET) from the respective servers. It can import file formats like .grd, .hdf, .img and generic binary, perform geometric correction and re-project the files to the native projection system. The software takes into account the weather, crop and soil parameters to run the designed soil water balance model. The software also has additional features like time compositing of outputs to generate weekly and fortnightly profiles for further analysis. Other tools to generate "Area Favorable for Crop Sowing" using the daily soil moisture, with a highly customizable parameter interface, are provided. A whole-India analysis now takes a mere 20 seconds for generation of soil moisture products, which would normally take one hour per day using commercial software.
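
    A minimal single-layer bucket water balance of the kind the system automates, with a hypothetical water-holding capacity and daily rainfall/PET series; the operational model's actual crop and soil handling is richer.

```python
# Single-bucket soil water balance sketch; capacity, initial storage and the daily
# rainfall/PET inputs are illustrative assumptions.
def bucket_soil_moisture(rain, pet, capacity=150.0, sm0=60.0):
    """Return daily soil moisture (mm) for daily rainfall and PET inputs (mm)."""
    sm, out = sm0, []
    for r, e in zip(rain, pet):
        sm += r
        runoff = max(0.0, sm - capacity)      # excess water leaves the bucket
        sm -= runoff
        aet = min(e, sm)                      # actual ET limited by available water
        sm -= aet
        out.append(sm)
    return out

print(bucket_soil_moisture(rain=[0, 12, 35, 0, 0, 5], pet=[4, 4, 3, 5, 5, 4]))
```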

  7. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in the "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
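
    A toy sketch of evaluating a Gaussian-copula log-likelihood for a vector of model errors, assuming standard normal marginals and a fixed AR(1) correlation structure; the paper's approach of learning the error distribution from past forecasts is not reproduced here.

```python
# Gaussian-copula log-likelihood sketch with assumed marginals and correlation;
# a stand-in for illustration, not the authors' implementation.
import numpy as np
from scipy.stats import norm

def gaussian_copula_loglik(errors, rho=0.6):
    """Log-likelihood of errors under an AR(1)-correlated Gaussian copula."""
    n = errors.size
    u = norm.cdf(errors)                    # marginal CDF -> uniforms (std normal assumed)
    z = norm.ppf(u)                         # normal scores
    R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # AR(1) correlation
    _, logdet = np.linalg.slogdet(R)
    quad = z @ np.linalg.solve(R, z) - z @ z
    copula_term = -0.5 * (logdet + quad)    # log copula density
    marginal_term = norm.logpdf(errors).sum()
    return copula_term + marginal_term

errs = np.array([0.3, 0.5, 0.1, -0.2, -0.4])
print(gaussian_copula_loglik(errs))
```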

  8. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
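
    A bare-bones one-at-a-time elementary-effects screening on a toy function, to illustrate the kind of qualitative screening compared in the study; the trajectory count, step size and test function are arbitrary choices rather than the study's settings.

```python
# Simplified radial one-at-a-time elementary-effects screening (Morris-style) on a
# toy model; not the PSUADE implementation, only an illustration of the idea.
import numpy as np

rng = np.random.default_rng(7)

def model(x):                       # toy model: parameters 0 and 2 matter, 1 does not
    return 3 * x[0] + 0.01 * x[1] + 2 * x[2] ** 2

n_params, n_traj, delta = 3, 20, 0.25
effects = [[] for _ in range(n_params)]
for _ in range(n_traj):
    x = rng.uniform(0, 1 - delta, n_params)
    y0 = model(x)
    for i in range(n_params):
        x_step = x.copy()
        x_step[i] += delta
        effects[i].append((model(x_step) - y0) / delta)

mu_star = [np.mean(np.abs(e)) for e in effects]      # mean absolute elementary effect
print("mu* per parameter:", np.round(mu_star, 3))
```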

  9. Automated optical testing of LWIR objective lenses using focal plane array sensors

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik; Domagalski, Christian; Peter, Frank; Heinisch, Josef; Dumitrescu, Eugen

    2012-10-01

    The image quality of today's state-of-the-art IR objective lenses is constantly improving while at the same time the market for thermography and vision grows strongly. Because of increasing demands on the quality of IR optics and increasing production volumes, the standards for image quality testing increase and tests need to be performed in a shorter time. Most high-precision MTF testing equipment for the IR spectral bands in use today relies on the scanning slit method that scans a 1D detector over a pattern in the image generated by the lens under test, followed by image analysis to extract performance parameters. The disadvantages of this approach are that it is relatively slow, it requires highly trained operators for aligning the sample, and the number of parameters that can be extracted is limited. In this paper we present lessons learned from the R and D process on using focal plane array (FPA) sensors for testing of long-wave IR (LWIR, 8-12 μm) optics. Factors that need to be taken into account when switching from scanning slit to FPAs are, e.g.: the thermal background from the environment, the low scene contrast in the LWIR, the need for advanced image processing algorithms to pre-process camera images for analysis, and camera artifacts. Finally, we discuss 2 measurement systems for LWIR lens characterization that we recently developed with different target applications: 1) A fully automated system suitable for production testing and metrology that uses uncooled microbolometer cameras to automatically measure MTF (on-axis and at several off-axis positions) and parameters like EFL, FFL, autofocus curves, image plane tilt, etc. for LWIR objectives with an EFL between 1 and 12 mm. The measurement cycle time for one sample is typically between 6 and 8 s. 2) A high-precision research-grade system using again an uncooled LWIR camera as detector, that is very simple to align and operate. A wide range of lens parameters (MTF, EFL, astigmatism, distortion, etc.) can be easily and accurately measured with this system.

  10. Electroactive Biofilms: Current Status and Future Research Needs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borole, Abhijeet P; Reguera, Gemma; Ringeisen, Bradley

    2011-01-01

    Electroactive biofilms generated by electrochemically active microorganisms have many potential applications in bioenergy and chemicals production. This review assesses the effects of microbiological and process parameters on the enrichment of such biofilms and critically evaluates the current knowledge of the mechanisms of extracellular electron transfer in BES systems. First, we discuss the role of biofilm-forming microorganisms versus planktonic microorganisms. Physical, chemical and electrochemical parameters which dictate the enrichment and subsequent performance of the biofilms are discussed. Potential-dependent biological parameters, including biofilm growth rate, specific electron transfer rate and others, and their relationship to BES system performance are assessed. A review of the mechanisms of electron transfer in BES systems is included, followed by a discussion of the biofilm and its exopolymeric components and their electrical conductivity. A discussion of electroactive biofilms in biocathodes is also included. Finally, we identify the research needs for further development of electroactive biofilms to enable commercial applications.

  11. Laboratory Studies on Surface Sampling of Bacillus anthracis Contamination: Summary, Gaps, and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca

    2011-11-28

    This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, the (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed and recommendations are given for future studies.

  12. Thermophilic versus Mesophilic Anaerobic Digestion of Sewage Sludge: A Comparative Review

    PubMed Central

    Gebreeyessus, Getachew D.; Jenicek, Pavel

    2016-01-01

    During advanced biological wastewater treatment, a huge amount of sludge is produced as a by-product of the treatment process. Hence, reuse and recovery of resources and energy from the sludge is a big technological challenge. The processing of sludge produced by Wastewater Treatment Plants (WWTPs) is a massive task, which takes up a large part of the overall operational costs. In this regard, anaerobic digestion (AD) of sewage sludge continues to be an attractive option to produce biogas that could contribute to reducing wastewater management costs and foster the sustainability of those WWTPs. At the same time, AD reduces sludge amounts, which again contributes to the reduction of sludge disposal costs. However, sludge volume minimization remains a challenge; thus, improvement of dewatering efficiency is an inevitable part of WWTP operation. As a result, AD parameters can have a significant impact on sludge properties. One of the most important operational parameters influencing the AD process is temperature. Consequently, the thermophilic and the mesophilic modes of sludge AD have been compared for their pros and cons by many researchers. However, most comparisons are focused on biogas yield, process speed and stability. Regarding biogas yield, thermophilic sludge AD is preferred over mesophilic AD because of its faster biochemical reaction rate. Equally important, but not studied sufficiently until now, is the influence of temperature on the digestate quality, which is expressed mainly by the sludge dewaterability and the reject water quality (chemical oxygen demand, ammonia nitrogen, and pH). In the field of comparing thermophilic and mesophilic digestion processes, unfortunately, only a few and often inconclusive studies have been published so far. Hence, recommendations for optimized technologies have not yet been made. The review presented provides a comparison of existing sludge AD technologies and the gaps that need to be filled so as to optimize the connection between the two systems. In addition, many other relevant AD process parameters, including sludge rheology, which need to be addressed, are also reviewed and presented. PMID:28952577

  13. Procedures and results related to the direct determination of gravity anomalies from satellite and terrestrial gravity data

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1974-01-01

    The equations needed for the incorporation of gravity anomalies as unknown parameters in an orbit determination program are described. These equations were implemented in the Geodyn computer program which was used to process optical satellite observations. The arc-dependent parameter unknowns, 184 unknown 15 deg anomalies, and the coordinates of 7 tracking stations were considered. Up to 39 arcs (5 to 7 days) involving 10 different satellites were processed. An anomaly solution from the satellite data and a combination solution with 15 deg terrestrial anomalies were made. The limited data samples indicate that the method works. The 15 deg anomalies from various solutions and the potential coefficients implied by the different solutions are reported.

  14. Simulation of the detonation process of an ammonium nitrate based emulsion explosive using the lee-tarver reactive flow model

    NASA Astrophysics Data System (ADS)

    Ribeiro, José B.; Silva, Cristóvão; Mendes, Ricardo; Plaksin, I.; Campos, Jose

    2012-03-01

    The use of emulsion explosives [EEx] for processing materials (compaction, welding and forming) requires the ability to perform detailed simulations of their detonation process [DP]. Detailed numerical simulations of the DP of this kind of explosive, characterized by a finite reaction zone thickness, are thought to be suitably performed using the Lee-Tarver reactive flow model. In this work a real-coded genetic algorithm methodology was used to estimate the 15 parameters of the reaction rate equation [RRE] of that model for a particular EEx. This methodology allows, in a single optimization procedure, using only one experimental result and without the need for any starting solution, to search for the 15 parameters of the RRE that fit the numerical results to the experimental ones. Mass averaging and the Plate-Gap Model have been used for the determination of the shock data used in the unreacted explosive JWL EoS assessment, and the thermochemical code THOR retrieved the data used in the detonation products JWL EoS assessment. The obtained parameters allow a reasonable description of the experimental data.
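
    The following is a minimal, illustrative sketch (in Python) of the kind of real-coded genetic algorithm described above; the hydrocode call, parameter bounds and fitness function are placeholders, not the authors' actual implementation.

      # Minimal sketch (not the authors' code): a real-coded GA that searches the 15
      # reaction-rate-equation parameters by minimizing the misfit between a simulated
      # and a measured pressure history. run_hydrocode() is a hypothetical stand-in
      # for the Lee-Tarver simulation.
      import numpy as np

      rng = np.random.default_rng(0)
      N_PARAMS = 15
      LOWER = np.zeros(N_PARAMS)          # assumed parameter bounds (placeholders)
      UPPER = np.ones(N_PARAMS)

      def run_hydrocode(params):
          # Hypothetical: returns a simulated pressure trace for a parameter vector.
          t = np.linspace(0.0, 1.0, 200)
          return np.sum(params) * np.exp(-t)

      P_EXPERIMENT = run_hydrocode(0.5 * (LOWER + UPPER))   # placeholder "measured" trace

      def fitness(params):
          # Root-mean-square misfit between simulation and experiment (to be minimized).
          return np.sqrt(np.mean((run_hydrocode(params) - P_EXPERIMENT) ** 2))

      def ga(pop_size=40, generations=100, cx_alpha=0.5, mut_sigma=0.05):
          pop = rng.uniform(LOWER, UPPER, size=(pop_size, N_PARAMS))
          for _ in range(generations):
              scores = np.array([fitness(p) for p in pop])
              new_pop = [pop[np.argmin(scores)]]             # elitism: keep the best
              while len(new_pop) < pop_size:
                  # tournament selection of two parents
                  i = rng.integers(pop_size, size=2)
                  j = rng.integers(pop_size, size=2)
                  p1 = pop[i[np.argmin(scores[i])]]
                  p2 = pop[j[np.argmin(scores[j])]]
                  # blend (BLX-alpha) crossover followed by Gaussian mutation
                  lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
                  span = hi - lo
                  child = rng.uniform(lo - cx_alpha * span, hi + cx_alpha * span)
                  child += rng.normal(0.0, mut_sigma, N_PARAMS)
                  new_pop.append(np.clip(child, LOWER, UPPER))
              pop = np.array(new_pop)
          scores = np.array([fitness(p) for p in pop])
          return pop[np.argmin(scores)], scores.min()

      best_params, best_misfit = ga()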

  15. An invertebrate embryologist's guide to routine processing of confocal images.

    PubMed

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
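
    As an illustration of two of the routine tasks mentioned (channel merging and background suppression), the following is a minimal NumPy sketch; the colour assignment and percentile threshold are assumptions, not the author's prescribed settings.

      # Minimal sketch (assumed workflow, not the author's protocol): false-colour merge
      # of two confocal channels and a simple uniform background subtraction.
      import numpy as np

      def _normalise(channel):
          channel = channel.astype(float)
          return (channel - channel.min()) / (channel.max() - channel.min() + 1e-12)

      def merge_channels(ch_green, ch_magenta):
          # False-colour merge: green vs. magenta is a common colour-blind-safe pairing.
          g, m = _normalise(ch_green), _normalise(ch_magenta)
          rgb = np.zeros(g.shape + (3,))
          rgb[..., 0] = m       # red   <- magenta channel
          rgb[..., 1] = g       # green <- green channel
          rgb[..., 2] = m       # blue  <- magenta channel
          return rgb

      def subtract_background(img, percentile=5.0):
          # Uniform background suppression: subtract a constant estimated from a low
          # percentile, so relative intensities remain comparable between images.
          bg = np.percentile(img, percentile)
          return np.clip(img.astype(float) - bg, 0.0, None)

      # Example on synthetic data:
      merged = merge_channels(np.random.rand(64, 64), np.random.rand(64, 64))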

  16. Event-driven simulation of the state institution activity for the service provision based on business processes

    NASA Astrophysics Data System (ADS)

    Kataev, M. Yu.; Loseva, N. V.; Mitsel, A. A.; Bulysheva, L. A.; Kozlov, S. V.

    2017-01-01

    The paper presents a business-process-based approach to the assessment and control of the state of a state institution, the Social Insurance Fund. The paper describes the use of business processes as items with clear, measurable parameters that need to be determined, controlled and changed for management purposes. An example of one of the institution's business processes is given, showing how management tasks can be solved. The authors demonstrate the possibility of applying the mathematical apparatus of simulation modeling to solving such management tasks.

  17. Evaluation of methods for application of epitaxial layers of superconductor and buffer layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-06-01

    The recent achievements in a number of laboratories of critical currents in excess of 1.0x10{sup 6} amp/cm{sup 2} at 77K in YBCO deposited over suitably textured buffer/substrate composites have stimulated interest in the potential applications of coated conductors at high temperatures and high magnetic fields. As of today, two different approaches for obtaining the textured substrates have been identified. These are: Los Alamos National Laboratory's (LANL) ion-beam assisted deposition called IBAD, to obtain a highly textured yttria-stabilized zirconia (YSZ) buffer on nickel alloy strips, and Oak Ridge National Laboratory's (ORNL) rolling assisted, bi-axial texturized substrate option called RABiTS. Similarly, based on the published literature, the available options to form High Temperature Superconductor (HTS) films on metallic, semi-metallic or ceramic substrates can be divided into: physical methods, and non-physical or chemical methods. Under these two major groups, the schemes being proposed consist of: - Sputtering - Electron-Beam Evaporation - Flash Evaporation - Molecular Beam Epitaxy - Laser Ablation - Electrophoresis - Chemical Vapor Deposition (Including Metal-Organic Chemical Vapor Deposition) - Sol-Gel - Metal-Organic Decomposition - Electrodeposition, and - Aerosol/Spray Pyrolysis. In general, a spool-to-spool or reel-to-reel type of continuous manufacturing scheme developed out of any of the above techniques would consist of: - Preparation of Substrate Material - Preparation and Application of the Buffer Layer(s) - Preparation and Application of the HTS Material and Required Post-Annealing, and - Preparation and Application of the External Protective Layer. These operations would be affected by various process parameters which can be classified into: Chemistry and Material Related Parameters; and Engineering and Environmental Based Parameters. Thus, one can see that for successful development of the coated conductors manufacturing process, an extensive review of the available options was necessary. Under the U.S. Department of Energy's (DOE) sponsorship, the University of Tennessee Space Institute (UTSI) was given the responsibility of performing this review. In UTSI's efforts to review the available options, Oak Ridge National Laboratory (ORNL), especially Mr. Robert Hawsey and Dr. M. Paranthaman, provided very valuable guidance and technical assistance. This report describes the review carried out by the UTSI staff, students and faculty members. It also provides the approach being used to develop the cost information as well as the major operational parameters/variables that will have to be monitored and the relevant control systems. In particular, the report includes: - Process Flow Schemes and Involved Operations - Multi-Attribute Analysis Carried out for Objective and Subjective Criteria - Manufacturing Parameters to Process 6,000 km/year of Quality Coated Conductor Material - Metal Organics (MOD), Sol-Gel, and E-Beam as the Leading Candidates, and Technical Concerns/Issues that Need to be Resolved to Develop a Commercially Viable Option Out of Each of Them - Process Control Needs for Various Schemes - Approach/Methodology for Developing Cost of Coated Conductors. This report also includes generic areas in which additional research and development work is needed. In general, it is our feeling that the science and chemistry that are being developed in the coated conductor wire program now need proper engineering assistance/viewpoints to develop leading options into a viable commercial process.

  18. The Research and Implementation of Vehicle Bluetooth Hands-free Devices Key Parameters Downloading Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-bo; Wang, Zhi-xue; Li, Jian-xin; Ma, Jian-hui; Li, Yang; Li, Yan-qiang

    In order to make the Bluetooth function easy to realize and to allow information to be tracked effectively during production, vehicle Bluetooth hands-free devices need to have key parameters downloaded to them, such as the Bluetooth address, the CVC license and the base plate number. The aim is therefore to find a simple and effective method for downloading these parameters to each vehicle Bluetooth hands-free device, and for controlling and recording the use of the parameters. In this paper, by means of a Bluetooth Serial Peripheral Interface programmer device, the parallel port is switched to SPI. The first step of the download is to simulate SPI with the parallel port: the SPI function is performed by operating the parallel port in accordance with the SPI timing. The next step is to implement the SPI data transceiver functions according to the selected programming parameters. With the new method, parameter downloading is fast and accurate and fully meets the production requirements of vehicle Bluetooth hands-free devices. It has played a large role in the production line.
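
    A minimal sketch of the bit-banged SPI idea described above is given below in Python; the pin assignments and the write_pin/read_pin helpers are hypothetical stand-ins for the actual parallel-port driver.

      # Minimal sketch (hypothetical pin mapping): bit-banging SPI mode 0 by toggling
      # parallel-port data lines. write_pin()/read_pin() stand in for whatever
      # parallel-port access layer is actually used.
      SCLK, MOSI, CS = 0, 1, 2     # assumed data-line assignments
      MISO = 0                     # assumed status-line bit

      def write_pin(line, value):   # placeholder for the real parallel-port output call
          pass

      def read_pin(line):           # placeholder for the real parallel-port input call
          return 0

      def spi_transfer_byte(tx_byte):
          rx_byte = 0
          for bit in range(7, -1, -1):                 # MSB first
              write_pin(MOSI, (tx_byte >> bit) & 1)    # put data bit on MOSI
              write_pin(SCLK, 1)                       # clock high: slave samples MOSI
              rx_byte = (rx_byte << 1) | read_pin(MISO)
              write_pin(SCLK, 0)                       # clock low: slave shifts next bit
          return rx_byte

      def download_parameters(payload):
          # Assert chip-select, shift the parameter bytes (e.g. Bluetooth address,
          # licence key, board number), then release the bus.
          write_pin(CS, 0)
          echoed = bytes(spi_transfer_byte(b) for b in payload)
          write_pin(CS, 1)
          return echoed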

  19. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we make an abstraction of the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction between the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.
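
    For illustration only, the following Python sketch shows what a device-independent scan-job definition of the kind described above might look like; all field names and units are hypothetical, not the paper's actual interface.

      # Minimal sketch (hypothetical field names): a scan-job definition expressed in
      # absolute units so that it carries the same meaning for any scanner; a driver
      # would translate it into device-specific settings.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class ScanJob:
          x_mm: float               # origin and size of the scan area in millimetres
          y_mm: float
          width_mm: float
          height_mm: float
          resolution_dpi: int       # requested optical resolution
          bits_per_channel: int     # e.g. 8 or 16
          color_space: str          # e.g. "RGB" or "Gray"
          icc_profile: Optional[str] = None   # characterization profile to attach

      job = ScanJob(x_mm=0, y_mm=0, width_mm=210, height_mm=297,
                    resolution_dpi=300, bits_per_channel=8, color_space="RGB")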

  20. 40 CFR 1065.905 - General provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for a particular type of engine. Before using this subpart's procedures for field testing, read the...? (9) Which engine and ambient parameters do I need to measure? (10) How do I process the data recorded... a gravimetric balance for PM, weigh PM samples according to §§ 1065.590 and 1065.595. (7) Use the...

  1. Kohonen Self-Organizing Feature Maps as a Means to Benchmark College and University Websites

    ERIC Educational Resources Information Center

    Cooper, Cameron; Burns, Andrew

    2007-01-01

    Websites for colleges and universities have become the primary means for students to obtain information in the college search process. Consequently, institutions of higher education should target their websites toward prospective and current students' needs, interests, and tastes. Numerous parameters must be determined in creating a school website…

  2. Space debris mitigation - engineering strategies

    NASA Astrophysics Data System (ADS)

    Taylor, E.; Hammond, M.

    The problem of space debris pollution is acknowledged to be of growing concern by space agencies, leading to recent activities in the field of space debris mitigation. A review of the current (and near-future) mitigation guidelines, handbooks, standards and licensing procedures has identified a number of areas where further work is required. In order for space debris mitigation to be implemented in spacecraft manufacture and operation, the authors suggest that debris-related criteria need to become design parameters (following the same process as applied to reliability and radiation). To meet these parameters, spacecraft manufacturers and operators will need processes (supported by design tools and databases and implementation standards). A particular aspect of debris mitigation, as compared with conventional requirements (e.g. radiation and reliability) is the current and near-future national and international regulatory framework and associated liability aspects. A framework for these implementation standards is presented, in addition to results of in-house research and development on design tools and databases (including collision avoidance in GTO and SSTO and evaluation of failure criteria on composite and aluminium structures).

  3. A discrimination-association model for decomposing component processes of the implicit association test.

    PubMed

    Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale

    2013-06-01

    A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
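
    For illustration only, the following Python sketch simulates a single IAT trial as a race between independent Poisson counters; the rates and thresholds are arbitrary example values, not estimates from the model.

      # Minimal sketch (illustrative, not the authors' estimation code): one IAT trial
      # simulated as a race between Poisson counters, one per label category. Each
      # counter accrues evidence at its own rate; a response is produced when a counter
      # reaches its termination criterion.
      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_trial(rates, thresholds):
          # rates[k]: accrual rate of process k; thresholds[k]: information needed
          # before responding. With exponential inter-arrival times, the time to reach
          # the threshold is a Gamma variate.
          finish_times = rng.gamma(shape=thresholds, scale=1.0 / np.asarray(rates))
          winner = int(np.argmin(finish_times))          # responding category
          return winner, float(finish_times[winner])     # (response, reaction time)

      # Example: four label categories with different accrual rates, common criterion of 8.
      resp, rt = simulate_trial(rates=[3.0, 2.0, 2.5, 1.5], thresholds=[8, 8, 8, 8])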

  4. ARTICLES: Thermohydrodynamic models of the interaction of pulse-periodic radiation with matter

    NASA Astrophysics Data System (ADS)

    Arutyunyan, R. V.; Baranov, V. Yu; Bol'shov, Leonid A.; Malyuta, D. D.; Mezhevov, V. S.; Pis'mennyĭ, V. D.

    1987-02-01

    Experimental and theoretical investigations were made of the processes of drilling and deep melting of metals by pulsed and pulse-periodic laser radiation. Direct photography of the surface revealed molten metal splashing due to interaction with single CO2 laser pulses. A proposed thermohydrodynamic model was used to account for the experimental results and to calculate the optimal parameters of pulse-periodic radiation needed for deep melting. The melt splashing processes were simulated numerically.

  5. Continuous Odour Measurement with Chemosensor Systems

    NASA Astrophysics Data System (ADS)

    Boeker, Peter; Haas, T.; Diekmann, B.; Lammer, P. Schulze

    2009-05-01

    Continuous odour measurement is a challenging task for chemosensor systems. Firstly, a long-term, stable measurement mode must be guaranteed in order to preserve the validity of the time-consuming and expensive olfactometric calibration data. Secondly, a method is needed to deal with the incoming sensor data: the continuous online detection of signal patterns, the correlated gas emission and the assigned odour data is essential for continuous odour measurement. Thirdly, there is a severe danger of over-fitting in the odour calibration process because of the high measurement uncertainty of olfactometry. In this contribution we present a technical solution for continuous measurements comprising a hybrid QMB sensor array and electrochemical cells. A set of software tools enables efficient data processing and calibration and computes the calibration parameters. The internal software of the measurement system's microcontroller processes the calibration parameters online for the output of the desired odour information.

  6. Evaluation of the traffic parameters in a metropolitan area by fusing visual perceptions and CNN processing of webcam images.

    PubMed

    Faro, Alberto; Giordano, Daniela; Spampinato, Concetto

    2008-06-01

    This paper proposes a traffic monitoring architecture based on a high-speed communication network whose nodes are equipped with fuzzy processors and cellular neural network (CNN) embedded systems. It implements a real-time mobility information system where visual human perceptions sent by people working in the area and video sequences of traffic taken from webcams are jointly processed to evaluate the fundamental traffic parameters for every street of a metropolitan area. This paper presents the whole methodology for data collection and analysis and compares the accuracy and the processing time of the proposed soft computing techniques with other existing algorithms. Moreover, this paper discusses when and why it is recommended to fuse the visual perceptions of the traffic with the automated measurements taken from the webcams to compute the maximum traveling time that is likely needed to reach any destination in the traffic network.

  7. Prediction of multi performance characteristics of wire EDM process using grey ANFIS

    NASA Astrophysics Data System (ADS)

    Kumanan, Somasundaram; Nair, Anish

    2017-09-01

    Super alloys are used to fabricate components in ultra-supercritical power plants. These hard-to-machine materials are processed using non-traditional machining methods like wire-cut electrical discharge machining, which needs careful attention. This paper details the multi-performance optimization of the wire EDM process using grey ANFIS. Experiments are designed to establish the performance characteristics of wire EDM such as surface roughness, material removal rate, wire wear rate and geometric tolerances. The control parameters are pulse-on time, pulse-off time, current, voltage, flushing pressure, wire tension, table feed and wire speed. Grey relational analysis is employed to optimise the multiple objectives. Analysis of variance of the grey grades is used to identify the critical parameters. A regression model is developed and used to generate datasets for the training of the proposed adaptive neuro-fuzzy inference system. The developed prediction model is tested for its prediction ability.
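
    As an illustration of the grey relational analysis step, the following Python sketch computes grey relational grades from a small response matrix; the example data and equal weighting are assumptions, not values from the study.

      # Minimal sketch (assumed data layout): grey relational grades from a matrix of
      # measured responses, one row per experiment and one column per characteristic.
      import numpy as np

      def grey_relational_grade(responses, larger_is_better, zeta=0.5, weights=None):
          x = np.asarray(responses, dtype=float)
          # Normalise each response to [0, 1]; direction depends on whether larger or
          # smaller values are better for that characteristic.
          norm = np.empty_like(x)
          for j in range(x.shape[1]):
              col = x[:, j]
              if larger_is_better[j]:
                  norm[:, j] = (col - col.min()) / (col.max() - col.min())
              else:
                  norm[:, j] = (col.max() - col) / (col.max() - col.min())
          delta = 1.0 - norm                                    # deviation from the ideal sequence
          coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
          if weights is None:
              weights = np.full(x.shape[1], 1.0 / x.shape[1])
          return coeff @ weights                                # grade per experiment

      # Example: MRR (larger-the-better) and Ra (smaller-the-better) for three runs.
      grades = grey_relational_grade([[4.1, 2.3], [5.0, 2.9], [3.6, 1.8]],
                                     larger_is_better=[True, False])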

  8. Cost of ownership for inspection equipment

    NASA Astrophysics Data System (ADS)

    Dance, Daren L.; Bryson, Phil

    1993-08-01

    Cost of Ownership (CoO) models are increasingly a part of the semiconductor equipment evaluation and selection process. These models enable semiconductor manufacturers and equipment suppliers to quantify a system in terms of dollars per wafer. Because of the complex nature of the semiconductor manufacturing process, there are several key attributes that must be considered in order to accurately reflect the true 'cost of ownership'. While most CoO work to date has been applied to production equipment, the need to understand cost of ownership for inspection and metrology equipment presents unique challenges. Critical parameters such as detection sensitivity as a function of size and type of defect are not included in current CoO models yet are, without question, major factors in the technical evaluation process and life-cycle cost. This paper illustrates the relationship between these parameters, as components of the alpha and beta risk, and cost of ownership.
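
    For illustration only, the following Python sketch computes a generic textbook-style dollars-per-wafer figure; it is not the CoO model discussed in the paper and omits the alpha/beta inspection-risk terms the paper argues should be added.

      # Minimal sketch (generic formula, assumed inputs): cost of ownership expressed as
      # dollars per good wafer from fixed, recurring and yield components over the
      # equipment life.
      def cost_per_wafer(fixed_cost, recurring_cost_per_year, years,
                         wafers_per_hour, hours_per_year, utilization, yield_fraction):
          good_wafers = wafers_per_hour * hours_per_year * utilization * yield_fraction * years
          total_cost = fixed_cost + recurring_cost_per_year * years
          return total_cost / good_wafers

      # Example with invented numbers:
      print(cost_per_wafer(fixed_cost=2.0e6, recurring_cost_per_year=3.0e5, years=5,
                           wafers_per_hour=40, hours_per_year=6000, utilization=0.8,
                           yield_fraction=0.98))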

  9. Automation and quality assurance of the production cycle

    NASA Astrophysics Data System (ADS)

    Hajdu, L.; Didenko, L.; Lauret, J.

    2010-04-01

    Processing datasets on the order of tens of terabytes is an onerous task, faced by production coordinators everywhere. Users solicit data productions and, especially for simulation data, the vast amount of parameters (and sometime incomplete requests) point at the need for a tracking, control and archiving all requests made so a coordinated handling could be made by the production team. With the advent of grid computing the parallel processing power has increased but traceability has also become increasing problematic due to the heterogeneous nature of Grids. Any one of a number of components may fail invalidating the job or execution flow in various stages of completion and re-submission of a few of the multitude of jobs (keeping the entire dataset production consistency) a difficult and tedious process. From the definition of the workflow to its execution, there is a strong need for validation, tracking, monitoring and reporting of problems. To ease the process of requesting production workflow, STAR has implemented several components addressing the full workflow consistency. A Web based online submission request module, implemented using Drupal's Content Management System API, enforces ahead that all parameters are described in advance in a uniform fashion. Upon submission, all jobs are independently tracked and (sometime experiment-specific) discrepancies are detected and recorded providing detailed information on where/how/when the job failed. Aggregate information on success and failure are also provided in near real-time.

  10. Indicator system for advanced nuclear plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  11. Console for a nuclear control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  12. Alarm system for a nuclear control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1994-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  13. Method of installing a control room console in a nuclear power plant

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1994-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  14. Advanced nuclear plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  15. Advanced nuclear plant control room complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  16. Climsat rationale

    NASA Technical Reports Server (NTRS)

    Hansen, James

    1993-01-01

    We summarize reasons for the Climsat proposition; we also stress the need for certain climate monitoring other than that supplied by Climsat, especially solar irradiance, and we stress the complementarity of Climsat monitoring to plans for detailed EOS measurements. Existing and planned observations will not provide measurements of most climate forcing and feedback parameters with the accuracy needed to measure plausible decadal changes. Stratospheric water vapor and aerosol requirements are not met, for example, even though the present SAGE II instrument on the ERBS spacecraft measures those two parameters accurately, because ERBS is not expected to last more than a few years and it does not provide global coverage. We stress the imminence of a potential data gap even of those parameters, such as solar irradiance and stratospheric aerosols, for which monitoring capability has been proven and currently is in place. We find that most of the missing global climate forcings and feedbacks can be measured by three small instruments, which would need to be deployed on two spacecraft to obtain adequate sampling and global coverage. The monitoring must be maintained continuously for at least two decades. Such continuity can be attained by replacing a satellite after it fails, the functioning satellite providing calibration transfer to the new satellite. Certain complementary monitoring data are also needed, including solar monitoring from space, in order to fully meet requirements for monitoring all the climate forcings and feedbacks. The complementary data needs are discussed toward the end of this section. We summarize the proposed Climsat measurements and compare the expected accuracies to those which are needed to analyze changes of the global thermal energy cycle on decadal time scales. We stress the need to get broader participation of the scientific community in the monitoring and analysis activity. Finally, we discuss related climate process and diagnostic measurements.

  17. Dynamic modeling the composting process of the mixture of poultry manure and wheat straw.

    PubMed

    Petric, Ivan; Mustafić, Nesib

    2015-09-15

    Due to lack of understanding of the complex nature of the composting process, there is a need to provide a valuable tool that can help to improve the prediction of the process performance but also its optimization. Therefore, the main objective of this study is to develop a comprehensive mathematical model of the composting process based on microbial kinetics. The model incorporates two different microbial populations that metabolize the organic matter in two different substrates. The model was validated by comparison of the model and experimental data obtained from the composting process of the mixture of poultry manure and wheat straw. Comparison of simulation results and experimental data for five dynamic state variables (organic matter conversion, oxygen concentration, carbon dioxide concentration, substrate temperature and moisture content) showed that the model has very good predictions of the process performance. According to simulation results, the optimum values for air flow rate and ambient air temperature are 0.43 l min(-1) kg(-1)OM and 28 °C, respectively. On the basis of sensitivity analysis, the maximum organic matter conversion is the most sensitive among the three objective functions. Among the twelve examined parameters, μmax,1 is the most influencing parameter and X1 is the least influencing parameter. Copyright © 2015 Elsevier Ltd. All rights reserved.
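
    A minimal sketch of the kind of two-population Monod model described above is given below in Python; all rate constants and initial conditions are placeholders, not the calibrated values reported in the paper.

      # Minimal sketch (illustrative only): two microbial populations degrading two
      # substrates with Monod-type kinetics, integrated with SciPy. State vector:
      # biomass X1, X2 and degradable organic matter S1, S2.
      import numpy as np
      from scipy.integrate import solve_ivp

      MU_MAX = np.array([0.3, 0.15])   # maximum specific growth rates, 1/h (assumed)
      KS     = np.array([5.0, 10.0])   # half-saturation constants, g/kg (assumed)
      YIELD  = np.array([0.4, 0.4])    # biomass yield per unit substrate (assumed)
      DECAY  = np.array([0.01, 0.01])  # endogenous decay, 1/h (assumed)

      def rhs(t, y):
          X = y[0:2]
          S = y[2:4]
          mu = MU_MAX * S / (KS + S)            # Monod growth on each substrate
          dX = mu * X - DECAY * X
          dS = -(mu / YIELD) * X                # substrate consumed for growth
          return np.concatenate([dX, dS])

      sol = solve_ivp(rhs, t_span=(0.0, 500.0), y0=[0.1, 0.1, 40.0, 60.0], max_step=1.0)
      organic_matter_conversion = 1.0 - sol.y[2:4].sum(axis=0) / 100.0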

  18. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets

    PubMed Central

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-01-01

    The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have been addressed to investigate the discovery process through analytical and simulation models, according to the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801

  19. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets.

    PubMed

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-03-03

    The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have been addressed to investigate the discovery process through analytical and simulation models, according to the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage.

  20. Optimum processing parameters for the fabrication of twill flax fabric-reinforced polypropylene (PP) composites

    NASA Astrophysics Data System (ADS)

    Zuhudi, Nurul Zuhairah Mahmud; Minhat, Mulia; Shamsuddin, Mohd Hafizi; Isa, Mohd Dali; Nur, Nurhayati Mohd

    2017-12-01

    In recent years, natural fabric thermoplastic composites such as flax have received much attention due to their attractive capabilities for structural applications. It is crucial to study the processing of flax fabric materials in order to achieve good quality and cost-effectiveness in fibre-reinforced composites. Though flax has been widely utilized for several years in composite applications due to its high strength and abundance in nature, much work has concentrated on short flax fibre and very little has focused on flax fabric. The flax fabric is expected to give higher strength performance due to its structure, but its processing needs to be optimised. Flax fabric composites were fabricated using compression moulding due to its simplicity, good surface finish and relatively low cost in terms of labour and production. Further, the impregnation of the polymer into the fabric is easier in this process. As the fabric weave structure contributes to the impregnation quality, which leads to the overall performance, the processing parameters of consolidation, i.e. pressure, time, and weight fraction of fabric, were optimized using the Taguchi method. This optimization enhances the consolidation quality of the composite and improves its mechanical properties; three main tests were conducted, i.e. tensile, flexural and impact tests. It is observed that the processing parameters significantly affect the consolidation and quality of the composite.
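
    For illustration, the following Python sketch shows the usual Taguchi signal-to-noise evaluation for a larger-the-better characteristic; the design matrix and strength values are invented examples, not the study's data.

      # Minimal sketch (assumed orthogonal-array layout): Taguchi S/N ratios for a
      # larger-the-better response and selection of the best level of each factor
      # from the mean S/N per level.
      import numpy as np

      def sn_larger_is_better(replicates):
          y = np.asarray(replicates, dtype=float)          # one row per run
          return -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

      def best_levels(design, sn):
          design = np.asarray(design)                      # runs x factors, level codes
          best = []
          for f in range(design.shape[1]):
              levels = np.unique(design[:, f])
              means = [sn[design[:, f] == lv].mean() for lv in levels]
              best.append(levels[int(np.argmax(means))])   # highest mean S/N wins
          return best

      # Example: 4 runs, 2 factors at 2 levels, 2 replicate strength measurements per run.
      design = [[1, 1], [1, 2], [2, 1], [2, 2]]
      sn = sn_larger_is_better([[310, 305], [322, 318], [298, 301], [330, 327]])
      print(best_levels(design, sn))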

  1. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  2. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps cut down on time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
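
    As an illustration of the sampling step, the following Python sketch draws Latin hypercube samples of normally distributed parameter uncertainties with SciPy; the chosen parameters, means and standard deviations are assumptions, not the paper's values.

      # Minimal sketch (illustrative parameters): Latin hypercube samples of normally
      # distributed vehicle-parameter uncertainties, e.g. to feed response-surface
      # evaluations in a robustness study of this kind.
      import numpy as np
      from scipy.stats import qmc, norm

      means  = np.array([1500.0, 2.7])    # e.g. vehicle mass [kg], wheelbase [m] (assumed)
      stdevs = np.array([50.0, 0.02])

      sampler = qmc.LatinHypercube(d=2, seed=0)
      u = sampler.random(n=100)                       # uniform samples in the unit hypercube
      samples = norm.ppf(u) * stdevs + means          # map to the normal uncertainty distributions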

  3. Wall Shear Stress Distribution in a Patient-Specific Cerebral Aneurysm Model using Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya

    2016-11-01

    We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases using the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters computationally very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
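
    A minimal sketch of the snapshot POD step is given below in Python; the SVD-based formulation and energy criterion are a common variant and not necessarily the authors' exact implementation.

      # Minimal sketch (not the authors' code): method-of-snapshots POD via the SVD.
      # Columns of the snapshot matrix are WSS (or velocity) fields saved at successive
      # time steps of the training CFD run; a handful of leading modes spans the
      # reduced-order basis.
      import numpy as np

      def pod_basis(snapshots, energy=0.99):
          # snapshots: (n_points, n_snapshots) matrix of flow-field samples.
          mean = snapshots.mean(axis=1, keepdims=True)
          U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
          cumulative = np.cumsum(s**2) / np.sum(s**2)
          r = int(np.searchsorted(cumulative, energy)) + 1   # modes capturing `energy`
          return mean, U[:, :r]                              # mean field and reduced basis

      def project(field, mean, basis):
          return basis.T @ (field - mean.ravel())            # modal coefficients of a new field

      def reconstruct(coeffs, mean, basis):
          return mean.ravel() + basis @ coeffs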

  4. Comparison between reverse Brayton and Kapitza based LNG boil-off gas reliquefaction system using exergy analysis

    NASA Astrophysics Data System (ADS)

    Kochunni, Sarun Kumar; Chowdhury, Kanchan

    2017-02-01

    LNG boil-off gas (BOG) reliquefaction systems in LNG carrier ships use refrigeration devices based on reverse Brayton, Claude, Kapitza (modified Claude) or cascade cycles. Some of these refrigeration devices use nitrogen as the refrigerant, and hence nitrogen storage vessels or nitrogen generators need to be installed in LNG carrier ships, which consumes space and adds weight to the carrier. In the present work, a new configuration based on the Kapitza liquefaction cycle, which uses BOG itself as the working fluid, is proposed and compared with a reverse Brayton cycle (RBC) in terms of heat exchanger sizes and compressor operating parameters. Exergy analysis is done after steady-state simulation with Aspen Hysys 8.6®, and the comparison between the RBC and Kapitza systems may help designers choose a reliquefaction system with appropriate process parameters and equipment sizes. With an exergetic efficiency comparable to that of an RBC, a Kapitza system needs only a BOG compressor, without any need for nitrogen gas.

  5. Optimized cutting and forming parameters for a robust collar drawing process for hot-rolled complex-phase steels

    NASA Astrophysics Data System (ADS)

    Kovacs, S.; Beier, T.; Woestmann, S.

    2017-09-01

    The demands on materials for automotive applications are steadily increasing. For chassis components, the trend is towards thinner and higher strength materials for weight and cost reduction. In view of attainable strengths of up to 1200 MPa for hot rolled materials, certain aspects need to be analysed and evaluated in advance in the development process using these materials. Collars in particular, for example in control arms, have been in focus for part and process design. Issues concerning edge and surface cracks are observed due to improper geometry and process layout. The hole expansion capability of the chosen material grade has direct influence on the achievable collar height. In general, shear cutting reduces the residual formability of blank edges and the hole expansion capability. In this paper, using the example of the complex phase steel CP-W® 800 of thyssenkrupp, it is shown how a suitable geometry of a collar and optimum shear cutting parameters can be chosen.

  6. Strategic planning for the International Space Station

    NASA Technical Reports Server (NTRS)

    Griner, Carolyn S.

    1990-01-01

    The concept for utilization and operations planning for the International Space Station Freedom was developed in a NASA Space Station Operations Task Force in 1986. Since that time the concept has been further refined to definitize the process and products required to integrate the needs of the international user community with the operational capabilities of the Station in its evolving configuration. The keystone to the process is the development of individual plans by the partners, with the parameters and formats common to the degree that electronic communications techniques can be effectively utilized, while maintaining the proper level and location of configuration control. The integration, evaluation, and verification of the integrated plan, called the Consolidated Operations and Utilization Plan (COUP), is being tested in a multilateral environment to prove out the parameters, interfaces, and process details necessary to produce the first COUP for Space Station in 1991. This paper will describe the concept, process, and the status of the multilateral test case.

  7. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus their effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data, and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
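
    For illustration, the following Python sketch distributes independent calibration runs with multiprocessing; run_model and the parameter grid are hypothetical stand-ins for the actual model executable and parameter generator.

      # Minimal sketch (assumed interfaces): farming out independent model-calibration
      # runs with multiprocessing. run_model() stands in for whatever executes the
      # natural-resources model for one parameter set and returns its summary statistics.
      import itertools
      from multiprocessing import Pool

      def run_model(params):
          # Hypothetical objective: e.g. a combined error in magnitude and timing of
          # the stream-flow peak for this parameter set.
          infiltration, roughness = params
          return {"params": params, "objective": abs(infiltration - 0.3) + abs(roughness - 0.05)}

      def calibrate():
          # Simple grid of candidate parameter sets; a "smart" generator could replace this.
          grid = itertools.product([0.1, 0.2, 0.3, 0.4], [0.03, 0.05, 0.07])
          with Pool() as pool:
              results = pool.map(run_model, grid)
          return min(results, key=lambda r: r["objective"])

      if __name__ == "__main__":
          print(calibrate())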

  8. Functional identification of spike-processing neural circuits.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2014-02-01

    We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.

  9. Fast estimation of space-robots inertia parameters: A modular mathematical formulation

    NASA Astrophysics Data System (ADS)

    Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2016-10-01

    This work aims to propose a new technique that considerably helps enhance the time and precision needed to identify the "Inertia Parameters" (IPs) of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal" or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process could play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations associated with the IPs into a "Modular Set" of matrices instead of a single matrix representing the overall system dynamics. The devised Modular Matrix Set then facilitates the estimation process. It provides a conjugate linear model in the mass and inertia terms. The new formulation is, therefore, well suited for "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center of mass location becomes important. Extensive case studies reveal that the estimation time is drastically reduced, which in turn paves the way to acquire better results.
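
    As an illustration of the recursive estimation mentioned above, the following Python sketch implements a generic recursive least squares update for a model that is linear in the inertia parameters; it does not reproduce the paper's modular matrix formulation.

      # Minimal sketch (generic RLS, not the paper's modular matrices): for a model
      # y_k = A_k @ theta, where A_k is a regressor built from measured accelerations
      # and velocities and y_k holds the measured forces/torques.
      import numpy as np

      class RLS:
          def __init__(self, n_params, forgetting=1.0, p0=1e3):
              self.theta = np.zeros(n_params)            # current inertia-parameter estimate
              self.P = np.eye(n_params) * p0             # estimate covariance
              self.lam = forgetting

          def update(self, A_k, y_k):
              # A_k: (n_meas, n_params) regressor, y_k: (n_meas,) measurement vector.
              S = self.lam * np.eye(len(y_k)) + A_k @ self.P @ A_k.T
              K = self.P @ A_k.T @ np.linalg.inv(S)      # gain
              self.theta = self.theta + K @ (y_k - A_k @ self.theta)
              self.P = (self.P - K @ A_k @ self.P) / self.lam
              return self.theta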

  10. Yet one more dwell time algorithm

    NASA Astrophysics Data System (ADS)

    Haberl, Alexander; Rascher, Rolf

    2017-06-01

    The current demand for ever more powerful and efficient microprocessors, e.g. for deep learning, has led to an ongoing trend of reducing the feature size of integrated circuits. These processors are patterned with EUV lithography, which enables 7 nm chips [1]. Producing mirrors which satisfy the needed requirements is a challenging task. Not only increasing requirements on the imaging properties, but also new lens shapes, such as aspheres or lenses with free-form surfaces, require innovative production processes. These lenses need the new deterministic sub-aperture polishing methods that have been established in the past few years. These polishing methods are characterized by an empirically determined TIF and local stock removal. One such deterministic polishing method is ion beam figuring (IBF). The beam profile of the ion beam is adjusted to a nearly ideal Gaussian shape by various parameters. With the known removal function, a dwell time profile can be generated for each measured error profile. Such a profile is always generated pixel-accurately from the predetermined error profile, with the aim of minimizing the existing surface structures up to the cut-off frequency of the tool used [2]. The processing success of a correction-polishing run depends decisively on the accuracy of the previously computed dwell-time profile, so the algorithm used to calculate the dwell time has to reflect reality accurately. Furthermore, the machine operator should have no influence on the dwell-time calculation; consequently, there must not be any parameters that influence the calculation result. Lastly, it should take a minimum of machining time to achieve a minimum of remaining error structures. Unfortunately, current dwell time algorithms are divergent, user-dependent, tend to create high processing times and need several parameters to be set. This paper describes a realistic, convergent and user-independent dwell time algorithm. Typical processing times are reduced to between about 80 % and 50 % of those of conventional algorithms (Lucy-Richardson, Van Cittert, …) used in established machines. To verify its effectiveness, a plane surface was machined on an IBF machine.
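
    For illustration, the following one-dimensional Python sketch computes a dwell-time profile by iterative deconvolution (a Van Cittert-type iteration with a non-negativity clip); it represents one of the conventional approaches the paper compares against, not the new algorithm it proposes.

      # Minimal sketch (1-D illustration): dwell time from iterative deconvolution of the
      # measured error profile with the tool influence function (TIF).
      import numpy as np

      def dwell_time_van_cittert(error, tif, removal_rate=1.0, n_iter=200, relax=0.5):
          # error: desired material removal along the scan path; tif: removal footprint.
          tif = tif / (tif.sum() * removal_rate)                    # removal per unit dwell
          dwell = np.zeros_like(error)
          for _ in range(n_iter):
              predicted = np.convolve(dwell, tif, mode="same")      # removal from current dwell map
              dwell = np.clip(dwell + relax * (error - predicted), 0.0, None)
          return dwell

      # Example: Gaussian tool influence function and a synthetic error profile.
      x = np.linspace(-1, 1, 201)
      tif = np.exp(-(x / 0.1) ** 2)
      error = 0.5 + 0.3 * np.cos(3 * np.pi * x)
      dwell = dwell_time_van_cittert(error, tif)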

  11. Magnetorheological finishing: a perfect solution to nanofinishing requirements

    NASA Astrophysics Data System (ADS)

    Sidpara, Ajay

    2014-09-01

    Finishing of optics for different applications is the most important as well as the most difficult step in meeting the specification of the optics. Conventional grinding or other polishing processes are not able to reduce surface roughness beyond a certain limit due to the high forces acting on the workpiece, embedded abrasive particles, limited control over the process, etc. The magnetorheological finishing (MRF) process provides a new, efficient, and innovative way to finish optical materials as well as many metals to their desired level of accuracy. This paper provides an overview of the MRF process for different applications, important process parameters, the requirements on the magnetorheological fluid with respect to the workpiece material, and some areas that need to be explored for extending the application of the MRF process.

  12. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  13. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  14. Continuous processing and the applications of online tools in pharmaceutical product manufacture: developments and examples.

    PubMed

    Ooi, Shing Ming; Sarkar, Srimanta; van Varenbergh, Griet; Schoeters, Kris; Heng, Paul Wan Sia

    2013-04-01

    Continuous processing and production in pharmaceutical manufacturing has received increased attention in recent years mainly due to the industries' pressing needs for more efficient, cost-effective processes and production, as well as regulatory facilitation. To achieve optimum product quality, the traditional trial-and-error method for the optimization of different process and formulation parameters is expensive and time consuming. Real-time evaluation and the control of product quality using an online process analyzer in continuous processing can provide high-quality production with very high-throughput at low unit cost. This review focuses on continuous processing and the application of different real-time monitoring tools used in the pharmaceutical industry for continuous processing from powder to tablets.

  15. Processing of Copper Zinc Tin Sulfide Nanocrystal Dispersions for Thin Film Solar Cells

    NASA Astrophysics Data System (ADS)

    Williams, Bryce Arthur

    A scalable and inexpensive renewable energy source is needed to meet the expected increase in electricity demand throughout the developed and developing world in the next 15 years without contributing further to global warming through CO2 emissions. Photovoltaics may meet this need, but current technologies are less than ideal, requiring complex manufacturing processes and/or the use of toxic, rare-earth materials. Copper zinc tin sulfide (Cu2ZnSnS4, CZTS) solar cells offer a true "green" alternative based upon non-toxic and abundant elements. Solution-based processes utilizing CZTS nanocrystal dispersions followed by high-temperature annealing have received significant research attention due to their compatibility with traditional roll-to-roll coating processes. In this work, CZTS nanocrystal (5-35 nm diameter) dispersions were utilized as a production pathway to form solar absorber layers. Aerosol-based coating methods (aerosol jet printing and ultrasonic spray coating) were optimized for the formation of dense, crack-free CZTS nanocrystal coatings. The primary variables determining coating morphology within the aerosol-coating parameter space were investigated. It was found that the liquid content of the aerosol droplets at the time of substrate impingement plays a critical role. Evaporation of the liquid from the aerosol droplets during coating was altered through changes to the coating parameters as well as to the CZTS nanocrystal dispersions. In addition, factors influencing the conversion of CZTS nanocrystal coatings into dense, large-grained polycrystalline films suitable for solar cell development during thermal annealing were studied. Nanocrystal size, carbon content, sodium uptake, and sulfur pressure were found to play pivotal roles in film microstructure evolution. The effects of these parameters on film morphology, grain growth rates, and chemical makeup were analyzed from electron microscopy images as well as compositional analysis techniques. From these results, a deeper understanding of the interplay between the numerous annealing variables was achieved and improved annealing processes were developed.

  16. System-level view of geospace dynamics: Challenges for high-latitude ground-based observations

    NASA Astrophysics Data System (ADS)

    Donovan, E.

    2014-12-01

    Increasingly, research programs including GEM, CEDAR, GEMSIS, GO Canada, and others are focusing on how geospace works as a system. Coupling sits at the heart of system level dynamics. In all cases, coupling is accomplished via fundamental processes such as reconnection and plasma waves, and can be between regions, energy ranges, species, scales, and energy reservoirs. Three views of geospace are required to attack system level questions. First, we must observe the fundamental processes that accomplish the coupling. This "observatory view" requires in situ measurements by satellite-borne instruments or remote sensing from powerful well-instrumented ground-based observatories organized around, for example, Incoherent Scatter Radars. Second, we need to see how this coupling is controlled and what it accomplishes. This demands quantitative observations of the system elements that are being coupled. This "multi-scale view" is accomplished by networks of ground-based instruments, and by global imaging from space. Third, if we take geospace as a whole, the system is too complicated, so at the top level we need time series of simple quantities such as indices that capture important aspects of the system level dynamics. This requires a "key parameter view" that is typically provided through indices such as AE and Dst. With the launch of MMS, and ongoing missions such as THEMIS, Cluster, Swarm, RBSP, and ePOP, we are entering a once-in-a-lifetime epoch with a remarkable fleet of satellites probing processes in key regions throughout geospace, so the observatory view is secure. With a few exceptions, our key parameter view provides what we need. The multi-scale view, however, is compromised by space/time scales that are important but under-sampled, combined extent of coverage and resolution that falls short of what we need, and inadequate conjugate observations. In this talk, I present an overview of what we need for taking system level research to its next level, and how high-latitude ground-based observations can address these challenges.

  17. Experimental Investigation – Magnetic Assisted Electro Discharge Machining

    NASA Astrophysics Data System (ADS)

    Kesava Reddy, Chirra; Manzoor Hussain, M.; Satyanarayana, S.; Krishna, M. V. S. Murali

    2018-04-01

    Emerging technologies need advanced machined parts with high strength, temperature resistance and fatigue life, produced at low cost and with good surface quality, to fit various industrial applications. Electro discharge machining is one of the most extensively used processes for manufacturing advanced parts that cannot be machined with high precision and accuracy by traditional machines. Machining of DIN 17350-1.2080 (high-carbon, high-chromium steel) using electro discharge machining is discussed in this paper. In the present investigation, an effort is made to use a permanent magnet at various positions near the spark zone to improve the quality of the machined surface. Taguchi methodology is used to obtain the optimal choice of each machining parameter, such as peak current, pulse duration, gap voltage and servo reference voltage. Process parameters have a significant influence on machining characteristics and surface finish. An improvement in surface finish is observed when the process parameters are set at their optimum condition under the influence of a magnetic field at various positions.
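    For readers unfamiliar with the Taguchi analysis mentioned above, the sketch below computes a smaller-is-better signal-to-noise ratio per run and averages it by factor level; the L9-style design and roughness values are made up for illustration and are not the study's data.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-is-better signal-to-noise ratio for replicate responses y."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# illustrative L9-style design: rows = runs, columns = factors, entries = level (1..3)
design = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
                   [2, 1, 2], [2, 2, 3], [2, 3, 1],
                   [3, 1, 3], [3, 2, 1], [3, 3, 2]])
roughness = [[0.82, 0.85], [0.74, 0.78], [0.91, 0.88],   # replicate Ra values per run (made up)
             [0.66, 0.70], [0.73, 0.75], [0.80, 0.79],
             [0.71, 0.69], [0.77, 0.81], [0.64, 0.68]]

sn = np.array([sn_smaller_is_better(r) for r in roughness])
for factor in range(design.shape[1]):
    means = [sn[design[:, factor] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"factor {factor}: mean S/N per level = {np.round(means, 2)}")
```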

  18. Using Bayesian regression to test hypotheses about relationships between parameters and covariates in cognitive models.

    PubMed

    Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan

    2018-06-01

    Quantitative models that represent different cognitive variables in terms of model parameters are an important tool in the advancement of cognitive science. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
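    A minimal illustration of the idea, assuming a single covariate, a known noise standard deviation, and a conjugate normal prior on the regression slope: the Bayes factor for "no relationship" can then be read off as the Savage-Dickey density ratio at slope = 0. The framework in the paper is hierarchical and far more general; everything below is a simplified sketch.

```python
import numpy as np
from scipy import stats

def slope_bayes_factor(x, y, sigma=1.0, prior_sd=1.0):
    """Savage-Dickey Bayes factor BF01 for H0: slope = 0 in y = a + b*x + noise.

    Assumes a known noise sd `sigma` and a zero-mean normal prior on b with sd `prior_sd`
    (intercept handled by centering). Purely illustrative.
    """
    x = x - x.mean()
    y = y - y.mean()
    post_prec = 1.0 / prior_sd**2 + np.sum(x**2) / sigma**2   # conjugate normal posterior for b
    post_mean = (np.sum(x * y) / sigma**2) / post_prec
    post_sd = np.sqrt(1.0 / post_prec)
    bf01 = stats.norm.pdf(0.0, post_mean, post_sd) / stats.norm.pdf(0.0, 0.0, prior_sd)
    return bf01  # BF01 > 1 favors "no relationship", BF01 < 1 favors a relationship

# e.g. relating an estimated model parameter to a physiological covariate (synthetic data)
rng = np.random.default_rng(0)
cov = rng.normal(size=40)
param = 0.4 * cov + rng.normal(scale=1.0, size=40)
print(slope_bayes_factor(cov, param))
```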

  19. Correcting Inadequate Model Snow Process Descriptions Dramatically Improves Mountain Hydrology Simulations

    NASA Astrophysics Data System (ADS)

    Pomeroy, J. W.; Fang, X.

    2014-12-01

    The vast effort in hydrology devoted to parameter calibration as a means to improve model performance assumes that the models concerned are not fundamentally wrong. By focussing on finding optimal parameter sets and ascribing poor model performance to parameter or data uncertainty, these efforts may fail to consider the need to improve models with more intelligent descriptions of hydrological processes. To test this hypothesis, a flexible physically based hydrological model including a full suite of snow hydrology processes as well as warm season, hillslope and groundwater hydrology was applied to Marmot Creek Research Basin, Canadian Rocky Mountains, where excellent driving meteorology and basin biophysical descriptions exist. Model parameters were set from values found in the basin or from similar environments; no parameters were calibrated. The model was tested against snow surveys and streamflow observations. The model used algorithms that describe snow redistribution, sublimation and forest canopy effects on snowmelt and evaporative processes that are rarely implemented in hydrological models. To investigate the contribution of these processes to model predictive capability, the model was "falsified" by deleting parameterisations for forest canopy snow mass and energy, blowing snow, intercepted rain evaporation, and sublimation. Model falsification by ignoring forest canopy processes contributed to a large increase in SWE errors for forested portions of the research basin, with RMSE increasing from 19 to 55 mm and mean bias (MB) increasing from 0.004 to 0.62. In the alpine tundra portion, removing blowing snow processes resulted in an increase in model SWE MB from 0.04 to 2.55 on north-facing slopes and from -0.006 to -0.48 on south-facing slopes. Eliminating these algorithms degraded streamflow prediction, with the Nash-Sutcliffe efficiency dropping from 0.58 to 0.22 and MB increasing from 0.01 to 0.09. These results show dramatic model improvements by including snow redistribution and melt processes associated with wind transport and forest canopies. As most hydrological models do not currently include these processes, it is suggested that modellers first improve the realism of model structures before trying to optimise what are inherently inadequate simulations of hydrology.
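    The skill scores reported above (RMSE, mean bias, Nash-Sutcliffe efficiency) can be computed directly from paired simulated and observed series; a small sketch follows, with the normalized-bias convention an assumption since the abstract does not define MB explicitly.

```python
import numpy as np

def skill_scores(sim, obs):
    """RMSE, mean bias, and Nash-Sutcliffe efficiency for paired simulated/observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    mb = np.mean(sim - obs) / np.mean(obs)             # one common normalized-bias convention
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, mb, nse
```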

  20. A feasibility study on age-related factors of wrist pulse using principal component analysis.

    PubMed

    Jang-Han Bae; Young Ju Jeon; Sanghun Lee; Jaeuk U Kim

    2016-08-01

    Various analysis methods for examining wrist pulse characteristics are needed for accurate pulse diagnosis. In this feasibility study, principal component analysis (PCA) was performed to observe age-related factors of the wrist pulse from various analysis parameters. Forty subjects in their 20s and 40s participated, and their wrist pulse signal and respiration signal were acquired with a pulse tonometric device. After pre-processing of the signals, twenty analysis parameters which have been regarded as values reflecting pulse characteristics were calculated, and PCA was performed. As a result, we could reduce the complex parameters to a lower dimension, and age-related factors of the wrist pulse were observed by combining new analysis parameters derived from PCA. These results demonstrate that PCA can be a useful tool for analyzing the wrist pulse signal.
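    A minimal sketch of the analysis pipeline described: standardize the subjects-by-parameters matrix and project it onto its principal components. The array shapes and random stand-in data are assumptions; only the PCA mechanics reflect the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# X: subjects x pulse parameters (random stand-in data: 40 subjects, 20 parameters)
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 20))

Z = StandardScaler().fit_transform(X)      # standardize each parameter
pca = PCA(n_components=5).fit(Z)
scores = pca.transform(Z)                  # subject scores on the principal components

print(pca.explained_variance_ratio_)       # variance captured by each component
print(pca.components_[0])                  # loadings (parameter weights) of the first component
```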

  1. Enhancing model prediction reliability through improved soil representation and constrained model auto calibration - A paired watershed study

    USDA-ARS?s Scientific Manuscript database

    Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...

  2. Additive Manufacturing in Production: A Study Case Applying Technical Requirements

    NASA Astrophysics Data System (ADS)

    Ituarte, Iñigo Flores; Coatanea, Eric; Salmi, Mika; Tuomi, Jukka; Partanen, Jouni

    Additive manufacturing (AM) is expanding manufacturing capabilities. However, the quality of AM-produced parts depends on a number of machine, geometry and process parameters. The variability of these parameters affects manufacturing drastically, and therefore standardized processes and harmonized methodologies need to be developed to characterize the technology for end-use applications and enable it for manufacturing. This research proposes a composite methodology integrating Taguchi Design of Experiments, multi-objective optimization and statistical process control to optimize the manufacturing process and fulfil multiple requirements imposed on an arbitrary geometry. The proposed methodology aims to characterize AM technology depending upon manufacturing process variables as well as to perform a comparative assessment of three AM technologies (Selective Laser Sintering, Laser Stereolithography and Polyjet). Results indicate that only one machine, laser-based Stereolithography, could simultaneously fulfil the macro- and micro-level geometrical requirements, but mechanical properties were not at the required level. Future research will study a single AM system at a time to characterize AM machine technical capabilities and stimulate pre-normative initiatives of the technology for end-use applications.

  3. Waveform inversion for orthorhombic anisotropy with P waves: feasibility and resolution

    NASA Astrophysics Data System (ADS)

    Kazei, Vladimir; Alkhalifah, Tariq

    2018-05-01

    Various parametrizations have been suggested to simplify inversions of first arrivals, or P waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P waves. These parameters are different from the six parameters needed to describe the kinematics of P waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios and data bandwidths allows us to quantify the resolution of different parametrizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic and orthorhombic) in hierarchical parametrization is the best choice. Hierarchical parametrization reduces the trade-off between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parametrization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parametrizations can be used to ascertain the set of parameters that can be resolved.
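    The resolution argument above can be caricatured as counting the singular values of a (noise-scaled) sensitivity matrix that rise above a threshold set by the signal-to-noise ratio. The sketch below does exactly that on a synthetic matrix; the matrix and threshold rule are illustrative assumptions, not the paper's spectral sensitivities.

```python
import numpy as np

def resolvable_parameters(J, snr):
    """Count parameter combinations resolvable from a sensitivity matrix J at a given SNR.

    J   : (n_data, n_params) linearized sensitivities (illustrative)
    snr : signal-to-noise ratio; singular values below s_max/snr are treated as unresolved
    """
    s = np.linalg.svd(J, compute_uv=False)
    return int(np.sum(s > s.max() / snr)), s

rng = np.random.default_rng(0)
# synthetic 9-parameter sensitivity matrix with strongly decaying importance
J = rng.normal(size=(200, 9)) @ np.diag([5, 4, 3, 2, 1, 0.5, 0.1, 0.05, 0.01])
n, s = resolvable_parameters(J, snr=50)
print(n, np.round(s, 2))
```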

  4. Integration of Mahalanobis-Taguchi system and traditional cost accounting for remanufacturing crankshaft

    NASA Astrophysics Data System (ADS)

    Abu, M. Y.; Norizan, N. S.; Rahman, M. S. Abd

    2018-04-01

    Remanufacturing is a sustainability strategy that restores an end-of-life product to as-new performance, with a warranty equal to or better than that of the original product. To realize the advantages of this strategy, every process must be optimized to reach the ultimate goal and to reduce the waste generated. The aim of this work is to evaluate the criticality of the parameters of an end-of-life crankshaft based on Taguchi's orthogonal array, and then to estimate the cost using traditional cost accounting with the critical parameters taken into account. By implementing the optimization, the remanufacturer achieves lower cost and less waste during production, with a higher potential for profit. The Mahalanobis-Taguchi System proved to be a powerful optimization method for revealing the criticality of parameters. When the method was applied to the MAN engine model, 5 of the 6 crankpins were found to be critical and required grinding, while no changes were needed for the Caterpillar engine model. Meanwhile, the cost per unit for the MAN engine model changed from MYR 1401.29 to MYR 1251.29, while the Caterpillar engine model showed no change because the criticality of its parameters did not change. Therefore, by integrating optimization and costing throughout the remanufacturing process, better decisions can be made once the potential profit is known. The significance of the results lies in promoting sustainability by reducing the re-melting of damaged parts and ensuring consistent benefit from returned cores.
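    The Mahalanobis-Taguchi System screens samples by their Mahalanobis distance from a reference ("normal") group before Taguchi-style parameter evaluation. A minimal distance computation is sketched below; the reference and suspect data are invented stand-ins, not the crankshaft measurements.

```python
import numpy as np

def mahalanobis_distances(reference, samples):
    """Squared Mahalanobis distance of each sample from the reference-group distribution."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    diff = samples - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(2)
healthy = rng.normal(size=(30, 6))          # e.g. 6 crankpin measurements on good cores (made up)
suspect = rng.normal(loc=0.8, size=(5, 6))  # end-of-life cores to be screened (made up)
print(mahalanobis_distances(healthy, suspect))
```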

  5. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
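    For orientation, each ESMDA iteration applies a Kalman-type ensemble update in which the observation-error covariance is inflated by a factor alpha, with the reciprocals of the alphas over all iterations summing to one. The sketch below shows such an update step; the ensemble shapes and diagonal error covariance are assumptions for illustration.

```python
import numpy as np

def esmda_update(params, preds, obs, obs_err_sd, alpha, rng):
    """One ES-MDA iteration on an ensemble of parameter samples.

    params     : (Ne, Np) ensemble of parameter samples
    preds      : (Ne, Nd) model predictions for each ensemble member
    obs        : (Nd,) observations
    obs_err_sd : (Nd,) observation error standard deviations
    alpha      : inflation factor for this iteration (sum of 1/alpha over iterations = 1)
    """
    Ne = params.shape[0]
    dp = params - params.mean(axis=0)
    dd = preds - preds.mean(axis=0)
    C_md = dp.T @ dd / (Ne - 1)                         # parameter-data cross covariance
    C_dd = dd.T @ dd / (Ne - 1)                         # data covariance
    C_e = np.diag(obs_err_sd ** 2)
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)        # ES-MDA gain
    perturbed = obs + np.sqrt(alpha) * obs_err_sd * rng.normal(size=preds.shape)
    return params + (perturbed - preds) @ K.T

# typical use: run Na iterations with constant inflation, e.g. alpha = Na in each of them
```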

  6. Systematic procedure for designing processes with multiple environmental objectives.

    PubMed

    Kim, Ki-Joo; Smith, Raymond L

    2005-04-01

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.

  7. Modeling precursor diffusion and reaction of atomic layer deposition in porous structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keuter, Thomas, E-mail: t.keuter@fz-juelich.de; Menzler, Norbert Heribert; Mauer, Georg

    2015-01-01

    Atomic layer deposition (ALD) is a technique for depositing thin films of materials with precise thickness control and uniformity using the self-limitation of the underlying reactions. Usually, it is difficult to predict the result of the ALD process for given external parameters, e.g., the precursor exposure time or the size of the precursor molecules. Therefore, a deeper insight into ALD by modeling the process is needed to improve process control and to achieve more economical coatings. In this paper, a detailed, microscopic approach based on the model developed by Yanguas-Gil and Elam is presented and additionally compared with the experiment. Precursor diffusion and second-order reaction kinetics are combined to identify the influence of the porous substrate's microstructural parameters and the influence of precursor properties on the coating. The thickness of the deposited film is calculated for different depths inside the porous structure in relation to the precursor exposure time, the precursor vapor pressure, and other parameters. Good agreement with experimental results was obtained for ALD zirconium dioxide (ZrO2) films using the precursors tetrakis(ethylmethylamido)zirconium and O2. The derivation can be adjusted to describe other features of ALD processes, e.g., precursor and reactive site losses, different growth modes, pore size reduction, and surface diffusion.
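    A drastically simplified, dimensionless version of such a diffusion-reaction model is sketched below: precursor diffuses down a 1-D pore and is consumed by a self-limiting (Langmuir-type) wall reaction, yielding a coverage profile versus depth for a given exposure time. All parameter values and the explicit finite-difference scheme are illustrative assumptions, not the model or values used in the paper.

```python
import numpy as np

def ald_coverage(nx=100, D=1.0, k=50.0, c_inlet=1.0, t_end=0.5):
    """Simplified 1-D precursor diffusion along a pore (dimensionless units) with a
    self-limiting Langmuir-type wall reaction. Returns fractional coverage vs depth.
    All parameter values are illustrative, not taken from the cited work."""
    dx = 1.0 / (nx - 1)
    dt = 0.2 * dx * dx / D                     # explicit-scheme stability limit
    c = np.zeros(nx)                           # precursor concentration along the pore
    theta = np.zeros(nx)                       # fractional surface coverage
    for _ in range(int(t_end / dt)):
        c[0] = c_inlet                         # open pore mouth held at the feed concentration
        lap = np.zeros(nx)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        lap[-1] = 2 * (c[-2] - c[-1]) / dx**2  # no-flux condition at the closed pore bottom
        rate = k * c * (1.0 - theta)           # self-limiting adsorption
        c += dt * (D * lap - rate)
        theta = np.minimum(theta + dt * rate, 1.0)
    return theta

print(np.round(ald_coverage()[::10], 2))       # coverage profile down the pore
```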

  8. Research on cylinder processes of gasoline homogenous charge compression ignition (HCCI) engine

    NASA Astrophysics Data System (ADS)

    Cofaru, Corneliu

    2017-10-01

    This paper describes the development of an HCCI engine starting from a spark-ignition engine platform. The test engine was a single-cylinder, four-stroke engine fitted with a carburetor. The results of experimental research on this version were used as a baseline for the next phase of the work. After that, the engine was modified to an HCCI configuration: the carburetor was replaced by a direct fuel injection system in order to control precisely the fuel mass per cycle, taking into account the measured intake air mass. To ensure that the air-fuel mixture auto-ignites, the compression ratio was increased from 9.7 to 11.5. The combustion process in the HCCI regime is governed by the chemical kinetics of the mixture of air and fuel, re-inducted or trapped exhaust gases, and fresh charge. To modify the quantity of trapped burnt gases, the gas exchange system was changed from fixed timing to variable valve timing. To analyze the processes taking place in the HCCI engine and to synthesize a control system, a model of the system which takes into account the engine configuration and operational parameters is needed. The cylinder processes were simulated on a virtual model. The experimental work focused on determining the parameters which control the combustion timing of the HCCI engine so as to obtain the best energetic and ecological parameters.

  9. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  10. Real-time control data wrangling for development of mathematical control models of technological processes

    NASA Astrophysics Data System (ADS)

    Vasilyeva, N. V.; Koteleva, N. I.; Fedorova, E. R.

    2018-05-01

    The relevance of the research is due to the need to stabilize the composition of the melting products of copper-nickel sulfide raw materials in the Vanyukov furnace. The goal of this research is to identify the most suitable methods for the aggregation of the real time data for the development of a mathematical model for control of the technological process of melting copper-nickel sulfide raw materials in the Vanyukov furnace. Statistical methods of analyzing the historical data of the real technological object and the correlation analysis of process parameters are described. Factors that exert the greatest influence on the main output parameter (copper content in matte) and ensure the physical-chemical transformations are revealed. An approach to the processing of the real time data for the development of a mathematical model for control of the melting process is proposed. The stages of processing the real time information are considered. The adopted methodology for the aggregation of data suitable for the development of a control model for the technological process of melting copper-nickel sulfide raw materials in the Vanyukov furnace allows us to interpret the obtained results for their further practical application.

  11. Chemometrics-based process analytical technology (PAT) tools: applications and adaptation in pharmaceutical and biopharmaceutical industries.

    PubMed

    Challa, Shruthi; Potumarthi, Ravichandra

    2013-01-01

    Process analytical technology (PAT) is used to monitor and control critical process parameters in raw materials and in-process products to maintain the critical quality attributes and build quality into the product. Process analytical technology can be successfully implemented in the pharmaceutical and biopharmaceutical industries not only to impart quality into the products but also to prevent out-of-specification results and improve productivity. PAT implementation eliminates the drawbacks of traditional methods, which involve excessive sampling, and facilitates rapid testing through direct sampling without any destruction of the sample. However, to successfully adapt PAT tools to the pharmaceutical and biopharmaceutical environment, a thorough understanding of the process is needed, along with mathematical and statistical tools to analyze the large multidimensional spectral data generated by PAT tools. Chemometrics is a chemical discipline which incorporates both statistical and mathematical methods to obtain and analyze relevant information from PAT spectral tools. Applications of commonly used PAT tools in combination with appropriate chemometric methods, along with their advantages and working principles, are discussed. Finally, the systematic application of PAT tools in the biopharmaceutical environment to control critical process parameters for achieving product quality is diagrammatically represented.
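    Partial least squares (PLS) regression is one of the chemometric methods commonly paired with spectral PAT tools. The sketch below calibrates a PLS model on synthetic spectra standing in for, e.g., NIR measurements of a critical quality attribute; the data and the choice of five latent variables are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# synthetic stand-in: 80 spectra x 200 wavelengths, response = content of interest
rng = np.random.default_rng(3)
spectra = rng.normal(size=(80, 200))
content = spectra[:, 40] * 2.0 + spectra[:, 120] + rng.normal(scale=0.1, size=80)

pls = PLSRegression(n_components=5)
print(cross_val_score(pls, spectra, content, cv=5, scoring="r2"))  # calibration quality
pls.fit(spectra, content)
predicted = pls.predict(spectra[:5])        # in-line prediction for new spectra
```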

  12. Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, Jayson F.; Dirks, James A.

    2008-08-29

    EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run the EnergyPlus simulation depending on the size of the building. To manually create these files is a time consuming process that would not be practical when trying to create input files for thousands of buildings needed to simulate national building energy performance. To streamline the process needed to create the input files for EnergyPlus, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine while the second method carries out all of the preprocessing on the Linux cluster by using an in-house built utility called Generalized Parametrics (GPARM). A comma delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using a Linux utility called “make”, the idf files can then be automatically run through the Linux cluster and the desired data from each building can be aggregated into one table to be analyzed. Creating a large number of EnergyPlus input files results in the ability to batch simulate building energy performance and scale the result to national energy consumption estimates.

  13. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    PubMed

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.

  14. Friction Stir Welding of Curved Plates

    NASA Technical Reports Server (NTRS)

    Sanchez, Nestor

    1999-01-01

    Friction stir welding (FSW) is a remarkable technology for making butt and lap joints in aluminum alloys. The process operates by passing a rotating tool between two closely butted plates. This process generates heat, and the heated material is stirred from both sides of the plates to generate a high-quality weld. This technique has a very broad field of application for NASA. In particular, NASA is interested in using this welding process to manufacture tanks and curved elements. Therefore, this research has been oriented to the study of the FSW of curved plates. The study has covered a number of topics that are important to model development and to uncovering the physical processes involved in the welding itself. The materials used for the experimental welds were as close to each other as we could possibly find, aluminum 5454-0 and 5456-0, with properties listed at http://matweb.com. The application of FSW to curved plates needs to consider the behavior observed in this study. There is going to be a larger force in the normal direction (Fz) as the curvature of the plate increases. A particular model needs to be derived for each material and thickness. A more complete study should also include parameters such as spin rate, tool velocity, and power used. The force in the direction of motion (Fx) needs to be reconsidered to establish its variability with respect to other parameters such as velocity, thickness, etc. It seems that the curvature does not play a role in this case. Variations in temperature were found with respect to the curvature. However, these changes seem to be smaller than the effect on Fz. The temperatures were all below the melting point. We understand now that FSW produces a three-dimensional flow of material during the weld. This flow needs to be studied in more detail to see in which directions the flow of material is stronger. It could be possible to model the flow using a two-dimensional model in the particular directions where the flow moves faster. More experimental information is required to enrich the knowledge about FSW and, from this point, to derive useful mathematical formulas to optimize the process and the design of the machines that will perform it. More experiments and experimental equipment are required to uncover the mathematics of the process.

  15. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
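    The core of the weighted least-squares estimation described above can be illustrated with a single damped Gauss-Newton step using forward-difference sensitivities, since a generic process model may not supply analytical ones. This is a generic sketch, not UCODE_2005's actual implementation; the toy exponential model, damping option and perturbation size are assumptions.

```python
import numpy as np

def gauss_newton_step(model, params, obs, weights, delta=1e-4, damping=0.0):
    """One damped Gauss-Newton update minimizing S(p) = sum_i w_i (obs_i - sim_i(p))^2.

    model   : callable returning simulated equivalents for a parameter vector
    weights : observation weights (e.g. 1 / variance)
    Sensitivities are approximated by forward-difference perturbation.
    """
    sim = model(params)
    r = obs - sim
    J = np.empty((len(obs), len(params)))
    for j in range(len(params)):               # forward-difference sensitivities
        p = params.copy()
        p[j] += delta * max(abs(p[j]), 1.0)
        J[:, j] = (model(p) - sim) / (p[j] - params[j])
    W = np.diag(weights)
    A = J.T @ W @ J + damping * np.eye(len(params))
    step = np.linalg.solve(A, J.T @ W @ r)
    return params + step, float(r @ W @ r)     # updated parameters and current objective

# e.g. calibrate a toy two-parameter model y = a * exp(-b * t) against noisy data
t = np.linspace(0, 5, 30)
obs = 3.0 * np.exp(-0.7 * t) + np.random.default_rng(4).normal(scale=0.05, size=t.size)
p = np.array([2.0, 0.5])                       # reasonable starting guess
for _ in range(8):
    p, ssq = gauss_newton_step(lambda q: q[0] * np.exp(-q[1] * t), p, obs, np.ones_like(t))
print(np.round(p, 3), round(ssq, 4))
```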

  16. Consensus statement with recommendations on active surveillance inclusion criteria and definition of progression in men with localized prostate cancer: the critical role of the pathologist.

    PubMed

    Montironi, Rodolfo; Hammond, Elizabeth H; Lin, Daniel W; Gore, John L; Srigley, John R; Samaratunga, Hema; Egevad, Lars; Rubin, Mark A; Nacey, John; Klotz, Laurence; Sandler, Howard; Zietman, Anthony L; Holden, Stuart; Humphrey, Peter A; Evans, Andrew J; Delahunt, Brett; McKenney, Jesse K; Berney, Daniel; Wheeler, Thomas M; Chinnaiyan, Arul; True, Lawrence; Knudsen, Beatrice; Epstein, Jonathan I; Amin, Mahul B

    2014-12-01

    Active surveillance (AS) is an important management option for men with low-risk, clinically localized prostate cancer. The clinical parameters for patient selection and definition of progression for AS protocols are evolving as data from several large cohorts matures. Vital to this process is the critical role pathologic parameters play in identifying appropriate candidates for AS. These findings need to be reproducible and consistently reported by surgical pathologists. This report highlights the importance of accurate pathology reporting as a critical component of these protocols.

  17. Non-electrical-power temperature-time integrating sensor for RFID based on microfluidics

    NASA Astrophysics Data System (ADS)

    Schneider, Mike; Hoffmann, Martin

    2011-06-01

    The integration of RFID tags into packages offers the opportunity to combine the logistic advantages of the technology with monitoring different parameters from inside the package at the same time. An essential demand for enhanced product safety, especially in the pharmaceutical or food industry, is the monitoring of the time-temperature integral. Thus, completely passive time-temperature integrators (TTI) requiring no battery, microprocessor, or data-logging devices are developed. A TTI representing the sterilization process inside an autoclave system is a demanding challenge: a temperature of at least 120 °C has to be maintained over 45 minutes to ensure that no unwanted organism remains. With increasing temperature, the viscosity of the fluid changes and thus the speed of the fluid inside the channel increases. The filled length of the channel represents the time-temperature integral affecting the system. Measurements as well as simulations allow conclusions to be drawn about the influence of the geometrical parameters of the system and provide the possibility of adaptation. Thus a completely passive sensor element for monitoring an integral parameter, dispensing with any external electrical power supply and data-processing technology, is demonstrated. Furthermore, it is shown how to adjust the specific TTI parameters of the sensor to different applications and needs by modifying the geometrical parameters of the system.

  18. Design and implementation of sensor systems for control of a closed-loop life support system

    NASA Technical Reports Server (NTRS)

    Alnwick, Leslie; Clark, Amy; Debs, Patricia; Franczek, Chris; Good, Tom; Rodrigues, Pedro

    1989-01-01

    The sensing and controlling needs for a Closed-Loop Life Support System (CLLSS) were investigated. The sensing needs were identified in five particular areas and the requirements were defined for workable sensors. The specific areas of interest were atmosphere and temperature, nutrient delivery, plant health, plant propagation and support, and solids processing. The investigation of atmosphere and temperature control focused on the temperature distribution within the growth chamber as well as the possibility for sensing other parameters such as gas concentration, pressure, and humidity. The sensing needs were studied for monitoring the solution level in a porous membrane material along with the requirements for measuring the mass flow rate in the delivery system. The causes and symptoms of plant disease were examined and the various techniques for sensing these health indicators were explored. The study of sensing needs for plant propagation and support focused on monitoring seed viability and measuring seed moisture content as well as defining the requirements for drying and storing the seeds. The areas of harvesting, food processing, and resource recycling, were covered with a main focus on the sensing possibilities for regulating the recycling process.

  19. Simulation of the detonation process of an ammonium nitrate based emulsion explosive using the Lee-Tarver reactive flow model

    NASA Astrophysics Data System (ADS)

    Ribeiro, Jose; Silva, Cristovao; Mendes, Ricardo; Plaksin, Igor; Campos, Jose

    2011-06-01

    The use of emulsion explosives [EEx] for processing materials (compaction, welding and forming) requires the ability to perform detailed simulations of their detonation process [DP]. Detailed numerical simulations of the DP of this kind of explosive, characterized by a finite reaction zone thickness, are thought to be suitably performed using the Lee-Tarver reactive flow model. In this work a real-coded genetic algorithm methodology was used to estimate the 15 parameters of the reaction rate equation [RRE] of that model for a particular EEx. This methodology allows, in a single optimization procedure, using only one experimental result and without the need for any starting solution, the 15 parameters of the RRE that fit the numerical results to the experimental ones to be found. Mass averaging and the Plate-Gap Model were used to determine the shock data used in the unreacted-explosive JWL EoS assessment, and the thermochemical code THOR provided the data used in the detonation-products JWL EoS assessment. The obtained parameters allow a good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.

  20. Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Oostdyk, Rebecca; Perotti, Jose

    2009-01-01

    When setting out to model and/or simulate a complex mechanical or electrical system, a modeler is faced with a vast array of tools, software, equations, algorithms and techniques that may individually or in concert aid in the development of the model. Mature requirements and a well understood purpose for the model may considerably shrink the field of possible tools and algorithms that will suit the modeling solution. Is the model intended to be used in an offline fashion or in real-time? On what platform does it need to execute? How long will the model be allowed to run before it outputs the desired parameters? What resolution is desired? Do the parameters need to be qualitative or quantitative? Is it more important to capture the physics or the function of the system in the model? Does the model need to produce simulated data? All these questions and more will drive the selection of the appropriate tools and algorithms, but the modeler must be diligent to bear in mind the final application throughout the modeling process to ensure the model meets its requirements without needless iterations of the design. The purpose of this paper is to describe the considerations and techniques used in the process of creating a functional fault model of a liquid hydrogen (LH2) system that will be used in a real-time environment to automatically detect and isolate failures.

  1. Clinical Parameters and Tools for Home-Based Assessment of Parkinson's Disease: Results from a Delphi study.

    PubMed

    Ferreira, Joaquim J; Santos, Ana T; Domingos, Josefa; Matthews, Helen; Isaacs, Tom; Duffen, Joy; Al-Jawad, Ahmed; Larsen, Frank; Artur Serrano, J; Weber, Peter; Thoms, Andrea; Sollinger, Stefan; Graessner, Holm; Maetzler, Walter

    2015-01-01

    Parkinson's disease (PD) is a neurodegenerative disorder with fluctuating symptoms. To aid the development of a system to evaluate people with PD (PwP) at home (the SENSE-PARK system), there was a need to define parameters and tools to be applied in the assessment of 6 domains: gait, bradykinesia/hypokinesia, tremor, sleep, balance and cognition. The aim was to identify relevant parameters and assessment tools for the 6 domains from the perspective of PwP, caregivers and movement disorders specialists. A 2-round Delphi study was conducted to select a core set of parameters and assessment tools to be applied. This process included PwP, caregivers and movement disorders specialists. Two hundred and thirty-three PwP, caregivers and physicians completed the first-round questionnaire, and 50 completed the second. The results allowed the identification of parameters and assessment tools to be added to the SENSE-PARK system. The most consensual parameters were: Falls and Near Falls; Capability to Perform Activities of Daily Living; Interference with Activities of Daily Living; Capability to Process Tasks; and Capability to Recall and Retrieve Information. The most cited assessment strategies included Walkers; the Evaluation of Performance Doing Fine Motor Movements; Capability to Eat; Assessment of Sleep Quality; Identification of Circumstances and Triggers for Loss of Balance; and Memory Assessment. An agreed set of measuring parameters, tests, tools and devices was achieved to be part of a system to evaluate PwP at home. A pattern of different perspectives was identified for each stakeholder.

  2. Seismological Signature of Chemical Differentiation of Earth's Upper Mantle

    NASA Astrophysics Data System (ADS)

    Matsukage, K. N.; Nishihara, Y.; Karato, S.

    2004-12-01

    Chemical differentiation from a primitive rock (such as pyrolite) to harzburgite due to partial melting and melt extraction is one of the most important mechanisms causing chemical heterogeneity in Earth's upper mantle. In this study, we investigate the seismic signature of chemical differentiation, which helps in mapping chemical heterogeneity in the upper mantle. The relation between chemical differentiation and its seismological signature is not straightforward because a large number of unknown parameters are involved, while the seismological observations provide only a few parameters (e.g., VP, VS, QP). Therefore it is critical to identify a small number of parameters by which the gross trend of chemical evolution can be described. The variation in major element composition in natural samples reflects complicated processes that include not only partial melting but also other complex processes (e.g., metasomatism, influx melting). We investigate the seismic velocities of hypothetical but well-defined simple chemical differentiation processes (e.g., partial melting at various pressures, addition of Si-rich melt or fluid), which cover the chemical variation of natural mantle peridotites from various tectonic settings (mid-ocean ridge, island arc and continent). The seismic velocities of the peridotites were calculated up to 13 GPa and 1730 K. We obtained two major conclusions. First, the variations in the seismic velocities of upper mantle peridotites can be interpreted in terms of a few distinct parameters. For one class of peridotites, formed by simple partial melting (e.g. mid-ocean ridge peridotites), seismic velocities can be described in terms of one parameter, namely Mg# (=Mg/(Mg+Fe) atomic ratio). In contrast, for some of the peridotites in the continental (cratonic) environment with high silica content and high Mg#, at least two parameters (such as Mg# and Opx#, the volume fraction of orthopyroxene) are needed to characterize their seismic velocities. Second, there is a jump in seismic velocity at 300 km in harzburgite that is caused by the orthorhombic (opx) to high-pressure monoclinic phase transition in MgSiO3 pyroxene. If opx-rich harzburgite (the maximum opx content in continental harzburgite is ~45 vol%) exists at around 300 km, the maximum velocity jump would be 2.5 % for VS and 0.9 % for VP. This phase transition would correspond to the seismological discontinuity around 300 km (the X-discontinuity).

  3. Development of analysis technique to predict the material behavior of blowing agent

    NASA Astrophysics Data System (ADS)

    Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo

    2014-11-01

    In order to numerically simulate the foaming behavior of a mastic sealer containing a blowing agent, foaming and driving-force models are needed that incorporate the foaming characteristics. In addition, an elastic stress model is required to represent the material behavior of the co-existing liquid phase and cured polymer. It is important to determine thermal properties such as thermal conductivity and specific heat, because the foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to describe the foaming process and the material behavior during and after the process. To obtain the material parameters of each model, the following experiments and corresponding numerical simulations were performed: a thermal test, a simple shear test and a foaming test. Error functions are defined as the differences between the experimental measurements and the numerical simulation results, and the parameters are then determined by minimizing these error functions. To ensure the validity of the obtained parameters, a confirmation simulation for each model is conducted by applying the determined parameters. Cross-verification is performed by measuring the foaming/shrinkage force; the results of the cross-verification tended to follow the experimental results. Interestingly, it was possible to estimate the micro-deformation occurring in the automobile roof surface by applying the proposed model to the oven-process analysis. The application of the developed analysis technique will contribute to designs with minimized micro-deformation.

  4. Probabilistic modeling of percutaneous absorption for risk-based exposure assessments and transdermal drug delivery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Clifford Kuofei

    Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
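    A probabilistic analysis of this kind reduces to sampling the uncertain inputs, pushing them through the flux model, and ranking the inputs by their influence on the output. The sketch below uses a toy single-layer flux expression and Spearman rank correlation in place of the report's three-phase model and stepwise regression; the distributions and parameters are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 2000

# illustrative uncertain inputs (log-uniform stand-ins, not the report's distributions)
diffusivity = 10 ** rng.uniform(-9, -7, n)     # m^2/s
partition = 10 ** rng.uniform(-1, 2, n)        # dimensionless partition coefficient
thickness = rng.uniform(10e-6, 30e-6, n)       # m

# toy steady-state flux through a single membrane layer (illustrative)
flux = diffusivity * partition / thickness

# rank-correlation sensitivity of the flux to each input
for name, x in [("diffusivity", diffusivity), ("partition", partition), ("thickness", thickness)]:
    rho, _ = stats.spearmanr(x, flux)
    print(f"{name:12s} Spearman rho = {rho:+.2f}")
```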

  5. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  6. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE PAGES

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...

    2017-07-11

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  7. Reactive extraction at liquid-liquid systems

    NASA Astrophysics Data System (ADS)

    Wieszczycka, Karolina

    2018-01-01

    The chapter summarizes the state of knowledge about metal transport in two-phase systems. The first part of this review focuses on the distribution law and the determination of the main factors in classical solvent extraction (solubility and polarity of the solute, as well as inter- and intramolecular interactions). The next part of the chapter is devoted to reactive solvent extraction and molecular modeling, which requires knowledge of the type of extractants, complexation mechanisms, metal ion speciation and oxidation during complex formation, and other parameters that enable understanding of the extraction process. The kinetic data needed for proper modeling, simulation and design of the processes required for critical separations are also discussed. Extraction in liquid-solid systems using solvent-impregnated resins is partially identical to the corresponding solvent extraction, and therefore this subject is also presented in all aspects of the separation process (equilibrium, mechanism, kinetics).

  8. Collisional excitation of CO by H2O - An astrophysicist's guide to obtaining rate constants from coherent anti-Stokes Raman line shape data

    NASA Technical Reports Server (NTRS)

    Green, Sheldon

    1993-01-01

    Rate constants for excitation of CO by collisions with H2O are needed to understand recent observations of comet spectra. These collision rates are closely related to spectral line shape parameters, especially those for Raman Q-branch spectra. Because such spectra have become quite important for thermometry applications, much effort has been invested in understanding this process. Although it is not generally possible to extract state-to-state rate constants directly from the data, as there are too many unknowns, if the matrix of state-to-state rates can be expressed in terms of a rate-law model which depends only on rotational quantum numbers plus a few parameters, the parameters can be determined from the data; this has been done with some success for many systems, especially those relevant to combustion processes. Although such an analysis has not yet been done for CO-H2O, this system is expected to behave similarly to N2-H2O, which has been well studied; modifications of parameters for the latter system are suggested which should provide a reasonable description of rate constants for the former.

  9. Systematic development of technical textiles

    NASA Astrophysics Data System (ADS)

    Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.

    2016-07-01

    Technical textiles are used in various fields of application, ranging from small-scale (e.g. medical applications) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time consuming due to multiple interacting parameters. These interacting parameters are related to the production process and also result from the textile structure and the material used. A large number of iteration steps is necessary to adjust the process parameters and finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected. The aim of the method is to support a development team in the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.

  10. Software Computes Tape-Casting Parameters

    NASA Technical Reports Server (NTRS)

    deGroh, Henry C., III

    2003-01-01

    Tcast2 is a FORTRAN computer program that accelerates the setup of a process in which a slurry containing metal particles and a polymeric binder is cast, to a thickness regulated by a doctor blade, onto fibers wound on a rotating drum to make a green precursor of a metal-matrix/fiber composite tape. Before Tcast2, setup parameters were determined by trial and error in time-consuming multiple iterations of the process. In Tcast2, the fiber architecture in the final composite is expressed in terms of the lateral distance between fibers and the thickness-wise distance between fibers in adjacent plies. The lateral distance is controlled via the manner of winding. The interply spacing is controlled via the characteristics of the slurry and the doctor-blade height. When a new combination of fibers and slurry is first cast and dried to a green tape, the shrinkage from the wet to the green condition and a few other key parameters of the green tape are measured. These parameters are provided as input to Tcast2, which uses them to compute the doctor-blade height and fiber spacings needed to obtain the desired fiber architecture and fiber volume fraction in the final composite.

  11. One-step patterning of double tone high contrast and high refractive index inorganic spin-on resist

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zanchetta, E.; Della Giustina, G.; Brusatin, G.

    2014-09-14

    A direct one-step and low-temperature micro-fabrication process, enabling the realization of large-area, fully inorganic TiO₂ micro-patterns from a spin-on resist, is presented. High refractive index structures (up to 2 at 632 nm) have been obtained by mask-assisted UV lithography without the need for transfer processes, exploiting the photocatalytic properties of titania. A distinctive feature not shared by any of the known available resists, and one that boosts the material's versatility, is that the system behaves either as a positive or as a negative tone resist, depending on the process parameters and on the development chemistry. In order to explain the resist's double tone behavior, a deep comprehension of the optimization of the lithographic process parameters and of the evolution of the resist chemistry and structure during the lithographic process, generally uncommon in the literature, is reported. Another striking property of the presented resist is that the negative tone shows a high contrast of up to 19, allowing structures with resolution down to 2 μm to be obtained. The presented process and material permit the direct fabrication of different titania geometries of great importance for solar cells, photocatalysis, and photonic crystal applications.

  12. Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments

    NASA Astrophysics Data System (ADS)

    Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel

    2017-03-01

    This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the model suitability crosscheck option of applying the procedure in ascending and descending directions through the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
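
    A hedged sketch of the sequential fitting idea: the fitted parameters of one time spectrum seed the fit of the next, and the procedure can be run in ascending and descending order as a cross-check. The exponential decay model below is a hypothetical stand-in for a real NFS evaluation.

    ```python
    # Sequential fitting sketch: previous fit parameters seed the next fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, amplitude, rate):
        # Stand-in model; a real NFS evaluation uses a dedicated theory code.
        return amplitude * np.exp(-rate * t)

    def fit_series(datasets, t, p0, reverse=False):
        order = list(reversed(datasets)) if reverse else datasets
        params, results = p0, []
        for counts in order:
            params, _ = curve_fit(model, t, counts, p0=params)
            results.append(params)
        return results[::-1] if reverse else results

    t = np.linspace(0.1, 150.0, 200)                      # ns
    datasets = [model(t, 1000.0, 0.02 + 0.001 * i) +
                np.random.normal(0, 5, t.size) for i in range(5)]

    ascending  = fit_series(datasets, t, p0=[900.0, 0.02])
    descending = fit_series(datasets, t, p0=ascending[-1], reverse=True)
    ```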

  13. Auto-tuning for NMR probe using LabVIEW

    NASA Astrophysics Data System (ADS)

    Quen, Carmen; Pham, Stephanie; Bernal, Oscar

    2014-03-01

    The typical manual NMR-tuning method is not suitable for broadband spectra spanning linewidths of several megahertz. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission line reflections, to name a few. We present a design of an auto-tuning system using the graphical programming language LabVIEW to minimize these problems. The program uses a simplified model of the NMR probe conditions near perfect tuning to mimic the tuning process and predict the position of the capacitor shafts needed to achieve the desired impedance. The tuning capacitors of the probe are controlled by stepper motors through a LabVIEW/computer interface. Our program calculates the effective capacitance needed to tune the probe and provides controlling parameters to advance the motors in the right direction. The impedance reading of a network analyzer can be used to correct the model parameters in real time for feedback control.
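
    A rough illustration of the capacitance estimate behind such an auto-tuner, assuming a simple LC resonance model f = 1/(2π√(LC)); the coil inductance and motor step calibration are invented values, and the real LabVIEW program models the probe impedance in more detail.

    ```python
    # Hedged sketch: estimate the tuning capacitance for a target frequency
    # and convert the capacitance change into stepper-motor steps.
    import math

    def tuning_capacitance(f_target_hz, inductance_h):
        # From f = 1 / (2*pi*sqrt(L*C))  ->  C = 1 / ((2*pi*f)^2 * L)
        return 1.0 / ((2.0 * math.pi * f_target_hz) ** 2 * inductance_h)

    def motor_steps(c_target_f, c_current_f, farads_per_step=5e-15):
        # farads_per_step is an assumed calibration of the capacitor drive.
        return round((c_target_f - c_current_f) / farads_per_step)

    L_coil = 150e-9                                   # H, assumed coil inductance
    c_needed = tuning_capacitance(75e6, L_coil)       # tune to 75 MHz
    print(f"C needed: {c_needed * 1e12:.2f} pF, "
          f"steps: {motor_steps(c_needed, 20e-12)}")
    ```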

  14. Identification of arteries and veins in cerebral angiography fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Andra Tache, Irina

    2017-11-01

    In the present study, a new method for tagging pixels into artery and vein classes from temporal cerebral angiography is presented. This need comes from the neurosurgeon, who evaluates the fluoroscopic angiography and the magnetic resonance images of the brain in order to locate the fistula in patients who suffer from arterio-venous malformation. The method includes the elimination of the background pixels from a previous segmentation and the generation of the time-intensity curves for each remaining pixel. The latter undergo signal processing in order to extract the characteristic parameters needed for applying the k-means clustering algorithm. Some of the parameters are: the phase and the maximum amplitude extracted from the Fourier transform, the standard deviation and the mean value. The tagged classes are represented as images, which are then re-classified by an expert into artery and vein pixels.
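
    An illustrative sketch (not the authors' code) of the described pipeline: build a per-pixel feature vector from each time-intensity curve and cluster the vectors with k-means into two classes.

    ```python
    # Feature extraction from time-intensity curves followed by k-means.
    import numpy as np
    from sklearn.cluster import KMeans

    def tic_features(curve):
        spectrum = np.fft.rfft(curve)
        k = np.argmax(np.abs(spectrum[1:])) + 1        # dominant non-DC component
        return [np.angle(spectrum[k]),                 # phase from the Fourier transform
                np.max(curve),                         # maximum amplitude
                np.std(curve),                         # standard deviation
                np.mean(curve)]                        # mean value

    # curves: one time-intensity curve per foreground pixel (synthetic here)
    rng = np.random.default_rng(0)
    curves = rng.random((500, 60))
    features = np.array([tic_features(c) for c in curves])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    ```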

  15. Swarm intelligence application for optimization of CO2 diffusivity in polystyrene-b-polybutadiene-b-polystyrene (SEBS) foaming

    NASA Astrophysics Data System (ADS)

    Sharudin, Rahida Wati; Ajib, Norshawalina Muhamad; Yusoff, Marina; Ahmad, Mohd Aizad

    2017-12-01

    Thermoplastic elastomer SEBS foams were prepared using carbon dioxide (CO2) as the blowing agent, a process classified as a physical foaming method. During the foaming process, the diffusivity of CO2 needs to be controlled, since it is one of the parameters that affect the final cellular structure of the foam. Conventionally, the rate of CO2 diffusion is measured experimentally using a highly sensitive device called a magnetic suspension balance (MSB). However, this expensive MSB machine is not easily available, and measurement of CO2 diffusivity is quite complicated as well as a time-consuming process. To overcome these limitations, a computational method was introduced. Particle Swarm Optimization (PSO) is a swarm intelligence technique that serves as a useful optimization tool for many nonlinear problems. A PSO model was developed for predicting the optimum foaming temperature and the CO2 diffusion rate in SEBS foam. Results obtained by the PSO model were compared with experimental results for CO2 diffusivity at various foaming temperatures. It is shown that the predicted optimum foaming temperature of 154.6 °C did not represent the best temperature for foaming, as the cellular structure of SEBS foamed at that temperature consisted of pores with unstable dimensions and the structure was not clearly discernible due to foam shrinkage. The predictions did not agree well with the experimental results when CO2 diffusivity was the only parameter considered in the PSO model, because it is not the only factor affecting foam shrinkage. The PSO model needs to be modified to include CO2 solubility and the rigidity of SEBS as additional parameters in order to obtain the optimum temperature for SEBS foaming, so that stable SEBS foam can be prepared.
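
    A minimal particle swarm optimization sketch in the spirit described above; the objective relating foaming temperature to a diffusivity misfit is a hypothetical placeholder, not the authors' model.

    ```python
    # Basic PSO over a single decision variable (foaming temperature).
    import numpy as np

    def objective(temperature_c):
        """Assumed misfit between predicted and measured CO2 diffusivity."""
        return (temperature_c - 140.0) ** 2 + 10.0 * np.sin(0.3 * temperature_c)

    def pso(obj, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, n_particles)
        v = np.zeros(n_particles)
        pbest, pbest_val = x.copy(), np.array([obj(xi) for xi in x])
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(iters):
            r1, r2 = rng.random(n_particles), rng.random(n_particles)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([obj(xi) for xi in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)]
        return gbest

    print("optimum foaming temperature (toy model):", pso(objective, 100.0, 180.0))
    ```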

  16. Refrigeration generation using expander-generator units

    NASA Astrophysics Data System (ADS)

    Klimenko, A. V.; Agababov, V. S.; Koryagin, A. V.; Baidakova, Yu. O.

    2016-05-01

    The problems of using the expander-generator unit (EGU) to generate refrigeration along with electricity are considered. It is shown that, at the temperature levels of the refrigeration flows obtainable with the EGU, one can provide refrigeration supply to different consumers: ventilation and air conditioning plants as well as industrial refrigerators and freezers. An analysis of the influence of process parameters on the cooling power of the EGU, which depends on the parameters of the gas expansion process in the expander and the temperatures of the cooled medium, was carried out. A schematic diagram of a refrigeration generation plant based on the EGU is presented. The features and advantages of the EGU for refrigeration generation compared with vapor-compression and absorption thermotransformers are shown, namely: there is no need to use energy generated by burning fuel to operate the EGU; the heat delivered to the gas from the flow being cooled is put to beneficial use in equipment operating on gas; and energy is produced along with refrigeration, which makes it possible to create trigeneration plants based on the EGU without additional power equipment. It is shown that the temperature levels of the refrigeration flows that can be obtained by using the EGU at existing technological decompression stations of transported gas allow refrigeration supply to various consumers. The refrigeration capacity of an expander-generator unit depends not only on the parameters of the gas expansion process in the expander (flow rate, temperatures and pressures at the inlet and outlet) but is also determined by the temperature needed by a consumer and the initial temperature of the refrigerant flow being cooled. The conclusion is that expander-generator units can be used to create trigeneration plants both at major power plants and in small-scale energy facilities.

  17. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  18. High performance dielectric materials development

    NASA Astrophysics Data System (ADS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-09-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  19. Can Item Analysis of MCQs Accomplish the Need of a Proper Assessment Strategy for Curriculum Improvement in Medical Education?

    ERIC Educational Resources Information Center

    Pawade, Yogesh R.; Diwase, Dipti S.

    2016-01-01

    Item analysis of Multiple Choice Questions (MCQs) is the process of collecting, summarizing and utilizing information from students' responses to evaluate the quality of test items. Difficulty Index (p-value), Discrimination Index (DI) and Distractor Efficiency (DE) are the parameters which help to evaluate the quality of MCQs used in an…
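
    A minimal sketch of the three item-analysis parameters named above, using conventional definitions assumed here (the abstract gives no formulas): the difficulty index as the fraction of correct responses, the discrimination index as the difference between upper- and lower-group proportions, and distractor efficiency as the share of functional distractors.

    ```python
    # Item analysis of one MCQ item from student responses.
    import numpy as np

    def item_analysis(correct, scores, chosen=None, key=0, n_options=4, frac=0.27):
        correct = np.asarray(correct, dtype=float)      # 1 = right, 0 = wrong
        p = correct.mean()                              # Difficulty Index (p-value)

        n_grp = max(1, int(frac * len(scores)))         # top/bottom 27% by total score
        order = np.argsort(scores)
        lower, upper = correct[order[:n_grp]], correct[order[-n_grp:]]
        di = upper.mean() - lower.mean()                # Discrimination Index

        de = None
        if chosen is not None:                          # Distractor Efficiency
            counts = np.bincount(chosen, minlength=n_options)
            distractors = np.delete(counts, key)        # drop the keyed answer
            functional = (distractors >= 0.05 * len(chosen)).sum()
            de = functional / len(distractors)
        return p, di, de

    p, di, de = item_analysis(correct=[1, 0, 1, 1, 0, 1],
                              scores=[40, 22, 35, 38, 18, 30],
                              chosen=[2, 1, 2, 2, 0, 2], key=2)
    print(p, di, de)
    ```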

  20. Software for Acoustic Rendering

    NASA Technical Reports Server (NTRS)

    Miller, Joel D.

    2003-01-01

    SLAB is a software system that can be run on a personal computer to simulate an acoustic environment in real time. SLAB was developed to enable computational experimentation in which one can exert low-level control over a variety of signal-processing parameters, related to spatialization, for conducting psychoacoustic studies. Among the parameters that can be manipulated are the number and position of reflections, the fidelity (that is, the number of taps in finite-impulse-response filters), the system latency, and the update rate of the filters. Another goal in the development of SLAB was to provide an inexpensive means of dynamic synthesis of virtual audio over headphones, without need for special-purpose signal-processing hardware. SLAB has a modular, object-oriented design that affords the flexibility and extensibility needed to accommodate a variety of computational experiments and signal-flow structures. SLAB's spatial renderer has a fixed signal-flow architecture corresponding to a set of parallel signal paths from each source to a listener. This fixed architecture can be regarded as a compromise that optimizes efficiency at the expense of complete flexibility. Such a compromise is necessary, given the design goal of enabling computational psychoacoustic experimentation on inexpensive personal computers.

  1. Solid state light engines for bioanalytical instruments and biomedical devices

    NASA Astrophysics Data System (ADS)

    Jaffe, Claudia B.; Jaffe, Steven M.

    2010-02-01

    Lighting subsystems to drive 21st century bioanalysis and biomedical diagnostics face stringent requirements. Industry-wide demands for speed, accuracy and portability mean illumination must be intense as well as spectrally pure, switchable, stable, durable and inexpensive. Ideally, a common lighting solution could serve these needs for numerous research and clinical applications. While this is a noble objective, the current technologies of arc lamps, lasers, LEDs and, most recently, light pipes have intrinsic spectral and angular traits that make a common solution untenable. Clearly a hybrid solution is required to serve the varied needs of the life sciences. Any solution begins with a critical understanding of the instrument architecture and specifications for illumination regarding power, illumination area, illumination and emission wavelengths and numerical aperture. Optimizing signal to noise requires careful optimization of these parameters within the additional constraints of instrument footprint and cost. Often the illumination design process is confined to maximizing signal to noise without the ability to adjust any of the above parameters. A hybrid solution leverages the best of the existing lighting technologies. This paper will review the design process for this highly constrained, but typical, optical optimization scenario for numerous bioanalytical instruments and biomedical devices.

  2. Emergent Aerospace Designs Using Negotiating Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Deshmukh, Abhijit; Middelkoop, Timothy; Krothapalli, Anjaneyulu; Smith, Charles

    2000-01-01

    This paper presents a distributed design methodology where designs emerge as a result of the negotiations between different stakeholders in the process, such as cost, performance, reliability, etc. The proposed methodology uses autonomous agents to represent design decision makers. Each agent influences specific design parameters in order to maximize its utility. Since the design parameters depend on the aggregate demand of all the agents in the system, design agents need to negotiate with others in the market economy in order to reach an acceptable utility value. This paper addresses several interesting research issues related to distributed design architectures. First, we present a flexible framework which facilitates decomposition of the design problem. Second, we present an overview of a market mechanism for generating acceptable design configurations. Finally, we integrate learning mechanisms into the design process to reduce the computational overhead.

  3. Analysis of Generator Oscillation Characteristics Based on Multiple Synchronized Phasor Measurements

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Takuhei; Yoshimoto, Masamichi; Mitani, Yasunori; Saeki, Osamu; Tsuji, Kiichiro

    In recent years, there has been considerable interest in on-line measurement, such as the observation of power system dynamics and the evaluation of machine parameters. On-line methods are particularly attractive since the machine's service need not be interrupted and parameter estimation is performed by processing measurements obtained during the normal operation of the machine. The authors placed PMUs (Phasor Measurement Units), connected to 100 V outlets, at several universities in the 60 Hz power system and examined oscillation characteristics in the power system. The PMUs are synchronized based on the global positioning system (GPS) and the measured data are transmitted via the Internet. This paper describes an application of PMUs to generator oscillation analysis. The purpose of this paper is to show methods for processing the phase difference and to estimate the damping coefficient and natural angular frequency from the phase difference at steady state.

  4. Selecting the Parameters of the Orientation Engine for a Technological Spacecraft

    NASA Astrophysics Data System (ADS)

    Belousov, A. I.; Sedelnikov, A. V.

    2018-01-01

    This work provides a solution to the issue of providing favorable conditions for carrying out gravitationally sensitive technological processes on board a spacecraft. It is noted that an important role is played by the optimal choice of the orientation system of the spacecraft and of the main parameters of the propulsion system as the most important executive organ of the system of orientation and control of the orbital motion of the spacecraft. The advantages and disadvantages of two different orientation systems are considered. One of them assumes periodic impulsive firing of low-thrust liquid rocket engines, the other is based on the continuous operation of the executing elements. A conclusion is drawn on the need to take into account the composition of gravitationally sensitive processes when choosing the orientation system of the spacecraft.

  5. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    PubMed Central

    Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki

    2015-01-01

    This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of space-variant point-spread-function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented in the form of a combination of multiple FIR filters, which guarantees the fast image restoration without the need of iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in the sense of both objective and subjective performance measures. The proposed algorithm can be employed to a wide area of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760

  6. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16-channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA including PowerPC440 processors we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.

  7. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1995-01-01

    When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index (CI)' is developed as a quantitative indicator of whether the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with the Fortran code 'Sequitor'.
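
    A hedged sketch of the underlying idea for the Gaussian case: a normalized likelihood ratio compares "one common mean and variance for base plus new data" against separately fitted parameters, and a sharp drop of the index flags a change. This follows the general recipe, not the paper's exact formulae.

    ```python
    # Change-index style comparison for a Gaussian series (illustrative only).
    import numpy as np
    from scipy.stats import norm

    def gaussian_loglik(x, mu, sigma):
        return np.sum(norm.logpdf(x, loc=mu, scale=sigma))

    def change_index(base, new):
        both = np.concatenate([base, new])
        ll_common = gaussian_loglik(both, both.mean(), both.std(ddof=1))
        ll_split = (gaussian_loglik(base, base.mean(), base.std(ddof=1)) +
                    gaussian_loglik(new, new.mean(), new.std(ddof=1)))
        # normalize per observation so different sample sizes stay comparable
        return np.exp((ll_common - ll_split) / both.size)

    rng = np.random.default_rng(3)
    base = rng.normal(0.0, 1.0, 60)
    print(change_index(base, rng.normal(0.0, 1.0, 10)))   # near 1: compatible
    print(change_index(base, rng.normal(2.5, 1.0, 10)))   # drops: change signalled
    ```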

  8. Identification of the dominant hydrological process and appropriate model structure of a karst catchment through stepwise simplification of a complex conceptual model

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Wu, Jichun; Jiang, Guanghui; Kang, Zhiqiang

    2017-05-01

    Conceptual models often suffer from the over-parameterization problem due to the limited data available for calibration. This leads to parameter non-uniqueness and equifinality, which may introduce considerable uncertainty into the simulation results. How to find the appropriate model structure supported by the available data is still a big challenge in hydrological research. In this paper, we adopt a multi-model framework to identify the dominant hydrological process and the appropriate model structure of a karst spring located in Guilin city, China. For this catchment, the spring discharge is the only data available for model calibration. The framework starts with a relatively complex conceptual model based on the perception of the catchment, and this complex model is then simplified into several different models by gradually removing model components. A multi-objective approach is used to compare the performance of these different models, and regional sensitivity analysis (RSA) is used to investigate parameter identifiability. The results show that this karst spring is mainly controlled by two different hydrological processes, one of which is threshold-driven, which is consistent with the fieldwork investigation. However, the appropriate model structure for simulating the discharge of this spring is much simpler than the actual aquifer structure and the hydrological process understanding from the fieldwork investigation. A simple linear reservoir with two different outlets is enough to simulate this spring discharge. The detailed runoff process in the catchment is not needed in the conceptual model to simulate the spring discharge. A more complex model would need additional data to avoid serious deterioration of model predictions.
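
    A short sketch of the simple structure the study converges on: a single linear reservoir drained by two outlets, one of which activates above a storage threshold. Parameter values and the recharge series are illustrative assumptions.

    ```python
    # Linear reservoir with two outlets, one threshold-activated.
    import numpy as np

    def linear_reservoir_two_outlets(recharge, k1=0.05, k2=0.4, threshold=30.0,
                                     storage0=10.0, dt=1.0):
        storage, discharge = storage0, []
        for r in recharge:
            q1 = k1 * storage                               # slow (diffuse) outlet
            q2 = k2 * max(storage - threshold, 0.0)         # fast outlet, active
            storage += (r - q1 - q2) * dt                   # only above threshold
            discharge.append(q1 + q2)
        return np.array(discharge)

    recharge = np.concatenate([np.zeros(20), np.full(5, 15.0), np.zeros(50)])
    q = linear_reservoir_two_outlets(recharge)
    ```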

  9. Classification of high-resolution multi-swath hyperspectral data using Landsat 8 surface reflectance data as a calibration target and a novel histogram based unsupervised classification technique to determine natural classes from biophysically relevant fit parameters

    NASA Astrophysics Data System (ADS)

    McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.

    2016-12-01

    Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques based on radiometrically-calibrated data that cluster based on biophysical similarity rather than simply spectral similarity are needed. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets, using the Landsat surface reflectance data product as a calibration target, was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectra of each pixel using a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for the determination of natural splits into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
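
    A hedged sketch of the compression step: each pixel's 80-band reflectance spectrum is fitted by least squares to a small set of basis functions, and the nine fit coefficients replace the raw bands as clustering features. The Gaussian bases used here are assumptions; the paper's biophysically motivated bases are not reproduced.

    ```python
    # Least-squares fit of each spectrum to 9 basis functions (80 bands -> 9 numbers).
    import numpy as np

    wavelengths = np.linspace(400.0, 1000.0, 80)            # nm, 80 bands
    centers = np.linspace(450.0, 950.0, 9)                  # 9 assumed basis centers
    basis = np.exp(-0.5 * ((wavelengths[:, None] - centers) / 60.0) ** 2)

    def compress(spectra):
        """Fit every spectrum at once; returns an (n_pixels, 9) parameter array."""
        coeffs, *_ = np.linalg.lstsq(basis, spectra.T, rcond=None)
        return coeffs.T

    rng = np.random.default_rng(1)
    spectra = rng.random((1000, 80))                        # stand-in for pixel spectra
    params = compress(spectra)

    # Histograms of each parameter can then be inspected for natural splits.
    hist, edges = np.histogram(params[:, 0], bins=50)
    ```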

  10. Hand-Eye Calibration in Visually-Guided Robot Grinding.

    PubMed

    Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping

    2016-11-01

    Visually-guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in a closed solution. The proposed approach is economic and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified using a calibration experiment and a blade grinding experiment. The code used in this approach is attached in the Appendix.

  11. A simple hyperbolic model for communication in parallel processing environments

    NASA Technical Reports Server (NTRS)

    Stoica, Ion; Sultan, Florin; Keyes, David

    1994-01-01

    We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.

  12. Modelling and analysis of solar cell efficiency distributions

    NASA Astrophysics Data System (ADS)

    Wasmer, Sven; Greulich, Johannes

    2017-08-01

    We present an approach to model the distribution of solar cell efficiencies achieved in production lines based on numerical simulations, metamodeling and Monte Carlo simulations. We validate our methodology using the example of an industrially feasible p-type multicrystalline silicon "passivated emitter and rear cell" process. Applying the metamodel, we investigate the impact of each input parameter on the distribution of cell efficiencies in a variance-based sensitivity analysis, identifying the parameters and processes that need to be improved and controlled most accurately. We show that if these could be optimized, the mean cell efficiency of our examined cell process would increase from 17.62% ± 0.41% to 18.48% ± 0.09%. As the method relies on advanced characterization and simulation techniques, we furthermore introduce a simplification that enhances applicability by requiring only two common measurements of finished cells. The presented approaches can be especially helpful for ramping up production, but can also be applied to enhance established manufacturing.
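
    An illustrative sketch of the Monte Carlo step: input parameters are drawn from assumed production distributions and pushed through a surrogate of cell efficiency to obtain the line's efficiency distribution. The metamodel below is purely hypothetical.

    ```python
    # Monte Carlo propagation of process-parameter scatter through a surrogate.
    import numpy as np

    def efficiency_metamodel(emitter_rsheet, bulk_lifetime, finger_width):
        """Stand-in surrogate; a real metamodel would be fitted to device simulations."""
        return (18.5
                - 0.004 * (emitter_rsheet - 90.0) ** 2 * 1e-2
                + 0.5 * np.log10(bulk_lifetime / 100.0)
                - 5.0 * (finger_width - 40e-4))

    rng = np.random.default_rng(42)
    n = 100_000
    rsheet   = rng.normal(90.0, 5.0, n)                 # ohm/sq
    lifetime = rng.lognormal(np.log(150.0), 0.3, n)     # microseconds
    fingers  = rng.normal(40e-4, 2e-4, n)               # cm

    eta = efficiency_metamodel(rsheet, lifetime, fingers)
    print(f"mean efficiency {eta.mean():.2f}% +/- {eta.std():.2f}%")
    ```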

  13. Anticipatory control: A software retrofit for current plant controllers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parthasarathy, S.; Parlos, A.G.; Atiya, A.F.

    1993-01-01

    The design and simulated testing of an artificial neural network (ANN)-based self-adapting controller for complex process systems are presented in this paper. The proposed controller employs concepts based on anticipatory systems, which have been widely used in the petroleum and chemical industries, and they are slowly finding their way into the power industry. In particular, model predictive control (MPC) is used for the systematic adaptation of the controller parameters to achieve desirable plant performance over the entire operating envelope. The versatile anticipatory control algorithm developed in this study is projected to enhance plant performance and lend robustness to drifts in plant parameters and to modeling uncertainties. This novel technique of integrating recurrent ANNs with a conventional controller structure appears capable of controlling complex, nonlinear, and nonminimum phase process systems. The direct, on-line adaptive control algorithm presented in this paper considers the plant response over a finite time horizon, diminishing the need for manual control or process interruption for controller gain tuning.

  14. ON THE DEGREE OF CONVERSION AND COEFFICIENT OF THERMAL EXPANSION OF A SINGLE FIBER COMPOSITE USING A FBG SENSOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, M.; Botsis, J.; Coric, D.

    2008-08-28

    The increasing need to extend lifetimes in high-technology fields, such as space and aerospace, rail transport and naval systems, requires quality enhancement of composite materials, both from a processing standpoint and in the sense of resistance to service conditions. It is well accepted that the final quality of composite materials and structures is strongly influenced by processing parameters like curing and post-curing temperatures, rates of heating and cooling, applied vacuum, etc. To optimize manufacturing cycles, the evolution of residual strains due to chemical shrinkage and other physical parameters of the constituent materials must be characterized in situ. Such knowledge can lead to a sensible reduction in defects and to improved physical and mechanical properties of final products. In this context, continuous monitoring of the strain distribution developed during processing is important in understanding and retrieving component and material characteristics such as local strain gradients, degree of curing, coefficient of thermal expansion, moisture absorption, etc.

  15. Investigation of Springback Associated with Composite Material Component Fabrication (MSFC Center Director's Discretionary Fund Final Report, Project 94-09)

    NASA Technical Reports Server (NTRS)

    Benzie, M. A.

    1998-01-01

    The objective of this research project was to examine processing and design parameters in the fabrication of composite components to obtain a better understanding of, and attempt to minimize, springback associated with composite materials. To accomplish this, both processing and design parameters were included in a Taguchi-designed experiment. Composite angled panels were fabricated by hand layup techniques, and the fabricated panels were inspected for springback effects. This experiment yielded several significant results. The confirmation experiment validated the reproducibility of the factorial effects, accounted for the recognized error, and established the experiment as reliable. The material used in the design of tooling needs to be a major consideration when fabricating composite components, as expected. The factors dealing with resin flow, however, raise several potentially serious material and design questions. These questions must be dealt with up front in order to minimize springback: viscosity of the resin, vacuum bagging of the part for cure, and the curing method selected. These factors directly affect design, material selection, and processing methods.

  16. Mass production of bacterial communities adapted to the degradation of volatile organic compounds (TEX).

    PubMed

    Lapertot, Miléna; Seignez, Chantal; Ebrahimi, Sirous; Delorme, Sandrine; Peringer, Paul

    2007-06-01

    This study focuses on the mass cultivation of bacteria adapted to the degradation of a mixture composed of toluene, ethylbenzene, and o-, m- and p-xylenes (TEX). For the cultivation process, the Substrate Pulse Batch (SPB) technique was adapted under well-automated conditions. The key parameters to be monitored were handled by LabVIEW software, including temperature, pH, dissolved oxygen and turbidity. Other parameters, such as biomass, ammonium or residual substrate concentrations, needed offline measurements. The SPB technique has been successfully tested experimentally on TEX. The overall behavior of the mixed bacterial population was observed and discussed along the cultivation process. Carbon and nitrogen limitations were shown to affect the integrity of the bacterial cells as well as their production of exopolymeric substances (EPS). Average productivity and yield values successfully reached the industrial specifications, which were 0.45 kg(DW) m(-3) d(-1) and 0.59 g(DW) g(C)(-1), respectively. The accuracy and reproducibility of the obtained results establish the controlled SPB process as a feasible technique.

  17. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    NASA Astrophysics Data System (ADS)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness and recording of the UVA image, and in some cases formal probability of detection (POD) studies, were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  18. Technical Approach for Determining Key Parameters Needed for Modeling the Performance of Cast Stone for the Integrated Disposal Facility Performance Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yabusaki, Steven B.; Serne, R. Jeffrey; Rockhold, Mark L.

    2015-03-30

    Washington River Protection Solutions (WRPS) and its contractors at Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) are conducting a development program to develop / refine the cementitious waste form for the wastes treated at the ETF and to provide the data needed to support the IDF PA. This technical approach document is intended to provide guidance to the cementitious waste form development program with respect to the waste form characterization and testing information needed to support the IDF PA. At the time of the preparation of this technical approach document, the IDF PA effort is just getting started and the approach to analyze the performance of the cementitious waste form has not been determined. Therefore, this document looks at a number of different approaches for evaluating the waste form performance and describes the testing needed to provide data for each approach. Though the approach addresses a cementitious secondary aqueous waste form, it is applicable to other waste forms such as Cast Stone for supplemental immobilization of Hanford LAW. The performance of Cast Stone as a physical and chemical barrier to the release of contaminants of concern (COCs) from solidification of Hanford liquid low activity waste (LAW) and secondary wastes processed through the Effluent Treatment Facility (ETF) is of critical importance to the Hanford Integrated Disposal Facility (IDF) total system performance assessment (TSPA). The effectiveness of cementitious waste forms as a barrier to COC release is expected to evolve with time. PA modeling must therefore anticipate and address processes, properties, and conditions that alter the physical and chemical controls on COC transport in the cementitious waste forms over time. Most organizations responsible for disposal facility operation and their regulators support an iterative hierarchical safety/performance assessment approach with a general philosophy that modeling provides the critical link between the short-term understanding from laboratory and field tests, and the prediction of repository performance over repository time frames and scales. One common recommendation is that experiments be designed to permit the appropriate scaling in the models. There is a large contrast in the physical and chemical properties between the Cast Stone waste package and the IDF backfill and surrounding sediments. Cast Stone exhibits low permeability, high tortuosity, low carbonate, high pH, and low Eh whereas the backfill and native sediments have high permeability, low tortuosity, high carbonate, circumneutral pH, and high Eh. These contrasts have important implications for flow, transport, and reactions across the Cast Stone – backfill interface. Over time with transport across the interface and subsequent reactions, the sharp geochemical contrast will blur and there will be a range of spatially-distributed conditions. In general, COC mobility and transport will be sensitive to these geochemical variations, which also include physical changes in porosity and permeability from mineral reactions. Therefore, PA modeling must address processes, properties, and conditions that alter the physical and chemical controls on COC transport in the cementitious waste forms over time.
    Section 2 of this document reviews past Hanford PAs and SRS Saltstone PAs, which to date have mostly relied on the lumped parameter COC release conceptual models for TSPA predictions, and provides some details on the chosen values for the lumped parameters. Section 3 provides more details on the hierarchical modeling strategy and the processes and mechanisms that control COC release. Section 4 summarizes and lists the key parameters for which numerical values are needed to perform PAs. Section 5 provides brief summaries of the methods used to measure the needed parameters and references for more details.

  19. GaAlAs/GaAs Solar Cell Process Study

    NASA Technical Reports Server (NTRS)

    Almgren, D. W.; Csigi, K. I.

    1980-01-01

    Available information on liquid phase, vapor phase (including chemical vapor deposition) and molecular beam epitaxy growth procedures that could be used to fabricate single crystal, heteroface (AlGa)As/GaAs solar cells for space applications is summarized. A comparison of the basic cost elements of the epitaxy growth processes shows that the current infinite-melt LPE process has the lower cost per cell for an annual production rate of 10,000 cells. The metal organic chemical vapor deposition (MO-CVD) process has the potential for low-cost production of solar cells, but there is currently significant uncertainty in process yield, i.e., the fraction of active material in the input gas stream that ends up in the cell. Additional work is needed to optimize and document the process parameters for the MO-CVD process.

  20. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Though the defects are reasonably minimized by the Taguchi method, a genetic algorithm technique is applied to the optimized parameters obtained by the Taguchi method in order to achieve zero defects during the processes.
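
    A brief sketch of the Taguchi signal-to-noise computation commonly used in such optimizations (the paper's specific responses and orthogonal array are not reproduced): for a smaller-the-better defect count, S/N = -10·log10(mean(y²)), and the factor level with the highest mean S/N is selected.

    ```python
    # Smaller-the-better S/N ratio per factor level (illustrative numbers).
    import numpy as np

    def sn_smaller_the_better(y):
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # defect measurements for three levels of one factor
    trials = {"level 1": [4.0, 5.0, 6.0],
              "level 2": [2.0, 3.0, 2.5],
              "level 3": [7.0, 6.5, 8.0]}

    sn = {level: sn_smaller_the_better(y) for level, y in trials.items()}
    best = max(sn, key=sn.get)
    print(sn, "-> choose", best)
    ```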

  1. Architectural setup for online monitoring and control of process parameters in robot-based ISF

    NASA Astrophysics Data System (ADS)

    Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2017-10-01

    This article describes new developments in an incremental, robot-based sheet metal forming process (Roboforming) for the production of sheet metal components for small lot sizes and prototypes. The dieless kinematic-based generation of the shape is implemented by means of two industrial robots, which are interconnected to a cooperating robot system. Compared to other incremental sheet forming (ISF) machines, this system offers high geometrical design flexibility without the need of any part-dependent tools. However, the industrial application of ISF is still limited by certain constraints, e.g. the low geometrical accuracy. Responding to these constraints, the authors introduce a new architectural setup extending the current one by a superordinate process control. This sophisticated control consists of two modules, i.e. the compensation of the two industrial robots' low structural stiffness as well as a combined force/torque control. It is assumed that this contribution will lead to future research and development projects in which the authors will thoroughly investigate ISF process parameters influencing the geometric accuracy of the forming results.

  2. Robust parameter design for automatically controlled systems and nanostructure synthesis

    NASA Astrophysics Data System (ADS)

    Dasgupta, Tirthankar

    2007-12-01

    This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, in large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte Carlo simulations. The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is proposed for exploring the best process conditions for the synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.

  3. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
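
    A hedged sketch of the second step under stated assumptions: candidate subsets of the sensitivity-ranked parameters are re-estimated while the rest stay at their nominal values, and the subsets are compared with AIC (for least-squares calibration, AIC = n·ln(RSS/n) + 2k); the subset with the lowest AIC is kept. The model and data below are placeholders, not the oilseed rape FSPM.

    ```python
    # AIC-based selection of which parameters to re-estimate during recalibration.
    import numpy as np
    from scipy.optimize import least_squares

    def model(x, params):                     # toy stand-in for the FSPM output
        a, b, c = params
        return a * np.exp(-b * x) + c

    x = np.linspace(0.0, 10.0, 50)
    observed = model(x, [2.0, 0.5, 1.0]) + np.random.normal(0, 0.05, x.size)
    nominal = np.array([1.5, 0.4, 0.8])       # values from the original calibration
    ranked = [1, 0, 2]                        # parameters ordered by sensitivity

    def aic_for_subset(free_idx):
        def residuals(free):
            p = nominal.copy()
            p[list(free_idx)] = free          # only the selected parameters vary
            return model(x, p) - observed
        fit = least_squares(residuals, nominal[list(free_idx)])
        rss = np.sum(fit.fun ** 2)
        return x.size * np.log(rss / x.size) + 2 * len(free_idx)

    subsets = [tuple(ranked[:k]) for k in range(1, len(ranked) + 1)]
    best = min(subsets, key=aic_for_subset)
    print("re-estimate parameters:", best)
    ```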

  4. A comprehensive numerical analysis of background phase correction with V-SHARP.

    PubMed

    Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand

    2017-04-01

    Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm^-1, and for maps calculated with a Fourier domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm^-1. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
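    The following is a simplified, single-radius SHARP-like sketch (V-SHARP would repeat it for decreasing kernel radii and combine the results). The array handling, masking and threshold value are illustrative assumptions, not the exact implementation evaluated in the study:

```python
import numpy as np

def sphere_kernel(shape, radius, voxel_size=1.0):
    """Normalized spherical averaging kernel, centred at the array origin
    (wrap-around layout suitable for FFT-based convolution)."""
    grids = np.meshgrid(*[np.fft.fftfreq(n, d=1.0 / n) for n in shape],
                        indexing="ij")
    r2 = sum((g * voxel_size) ** 2 for g in grids)
    k = (r2 <= radius ** 2).astype(float)
    return k / k.sum()

def sharp(phase, mask, radius=6.0, threshold=0.05):
    """Single-radius SHARP sketch: spherical-mean filtering followed by a
    thresholded (regularized) deconvolution of (delta - rho)."""
    rho = sphere_kernel(phase.shape, radius)
    F_rho = np.fft.fftn(rho)
    F_delta_minus_rho = 1.0 - F_rho
    # Forward step: subtract the spherical mean of the masked phase.
    conv = phase - np.real(np.fft.ifftn(np.fft.fftn(phase * mask) * F_rho))
    conv *= mask
    # Regularized deconvolution: invert only well-conditioned frequencies.
    F_conv = np.fft.fftn(conv)
    inv = np.where(np.abs(F_delta_minus_rho) > threshold,
                   F_conv / F_delta_minus_rho, 0.0)
    local = np.real(np.fft.ifftn(inv))
    return local * mask   # in practice an eroded mask avoids edge artefacts
```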

  5. Performance Evaluation and Parameter Identification on DROID III

    NASA Technical Reports Server (NTRS)

    Plumb, Julianna J.

    2011-01-01

    The DROID III project consisted of two main parts. The former, performance evaluation, focused on the performance characteristics of the aircraft, such as lift-to-drag ratio, thrust required for level flight, and rate of climb. The latter, parameter identification, focused on finding the aerodynamic coefficients for the aircraft using a system that creates a mathematical model to match the flight data of doublet maneuvers and the aircraft's response. Both portions of the project called for flight testing, and that data is now available on account of this project. The conclusion of the project is that the performance evaluation data is well within desired standards but could be improved with a thrust model, and that the parameter identification still needs more data processing but seems to produce reasonable results thus far.

  6. Method for Predicting and Optimizing System Parameters for Electrospinning System

    NASA Technical Reports Server (NTRS)

    Wincheski, Russell A. (Inventor)

    2011-01-01

    An electrospinning system using a spinneret and a counter electrode is first operated for a fixed amount of time at known system and operational parameters to generate a fiber mat having a measured fiber mat width associated therewith. Next, acceleration of the fiberizable material at the spinneret is modeled to determine values of mass, drag, and surface tension associated with the fiberizable material at the spinneret output. The model is then applied in an inversion process to generate predicted values of an electric charge at the spinneret output and an electric field between the spinneret and electrode required to fabricate a selected fiber mat design. The electric charge and electric field are indicative of design values for system and operational parameters needed to fabricate the selected fiber mat design.

  7. Classical nucleation theory of homogeneous freezing of water: thermodynamic and kinetic parameters.

    PubMed

    Ickes, Luisa; Welti, André; Hoose, Corinna; Lohmann, Ulrike

    2015-02-28

    The probability of homogeneous ice nucleation under a set of ambient conditions can be described by nucleation rates using the theoretical framework of Classical Nucleation Theory (CNT). This framework consists of kinetic and thermodynamic parameters, of which three are not well defined (namely the interfacial tension between ice and water, the activation energy and the prefactor), so that any CNT-based parameterization of homogeneous ice formation is less well constrained than desired for modeling applications. Different approaches to estimate the thermodynamic and kinetic parameters of CNT are reviewed in this paper and the sensitivity of the calculated nucleation rate to the choice of parameters is investigated. We show that nucleation rates are very sensitive to this choice. The sensitivity is governed by one parameter, the interfacial tension between ice and water, which determines the energetic barrier of the nucleation process. The calculated nucleation rate can differ by more than 25 orders of magnitude depending on the choice of parameterization for this parameter. The second most important parameter is the activation energy of the nucleation process; it can lead to a variation of 16 orders of magnitude. By estimating the nucleation rate from a collection of droplet freezing experiments from the literature, the dependence of these two parameters on temperature is narrowed down. It can be seen that the temperature behavior of these two parameters assumed in the literature does not, in most cases, match the nucleation rates predicted from the fit. Moreover, a comparison of all possible combinations of theoretical parameterizations of the two dominant free parameters shows that one combination fits the fitted nucleation rates best: a description of the interfacial tension coming from a molecular model [Reinhardt and Doye, J. Chem. Phys., 2013, 139, 096102] in combination with the activation energy derived from self-diffusion measurements [Zobrist et al., J. Phys. Chem. C, 2007, 111, 2149]. However, some fundamental understanding of the processes is still missing, and further research might help to tackle this problem. The most important questions that need to be answered to constrain CNT are raised in this study.
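    For reference, one common textbook form of the CNT nucleation rate discussed above (the record does not specify which exact parameterization is adopted, so this is a hedged sketch) is

```latex
J(T) \;=\; A(T)\,
  \exp\!\left(-\frac{\Delta F_{\mathrm{act}}(T)}{kT}\right)
  \exp\!\left(-\frac{\Delta G^{*}(T)}{kT}\right),
\qquad
\Delta G^{*} \;=\; \frac{16\pi\,\sigma_{iw}^{3}\,v_{\mathrm{ice}}^{2}}
                        {3\,\bigl(\Delta\mu\bigr)^{2}},
```

    where A is the kinetic prefactor, ΔF_act the activation (diffusion) energy, σ_iw the ice-water interfacial tension, v_ice the volume per molecule in ice, and Δμ the chemical-potential difference between supercooled water and ice; the strong sensitivity to σ_iw arises from its third power in the nucleation barrier.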

  8. A crunch on thermocompression flip chip bonding

    NASA Astrophysics Data System (ADS)

    Suppiah, Sarveshvaran; Ong, Nestor Rubio; Sauli, Zaliman; Sarukunaselan, Karunavani; Alcain, Jesselyn Barro; Mahmed, Norsuria; Retnasamy, Vithyacharan

    2017-09-01

    This study discusses the evolution, important findings, critical technical challenges, solutions and bonding equipment of flip chip thermocompression bonding (TCB). The bonding force, temperature and time are the key bonding parameters that need to be tuned, based on research done by others. TCB technology works well with both pre-applied underfill and flux (still under development). Lower throughput coupled with higher processing costs is an example of the challenges in TCB technology. The paper concludes with a brief description of the current equipment used in the thermocompression process.

  9. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    NASA Astrophysics Data System (ADS)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist, ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case for the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model; here we attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and to taking into account the dependence between the uncertainties of the rainfall model and of the hydrological calibration. Indeed, the uncertainties on the flow quantiles are of the same order of magnitude as those associated with the use of a statistical law with two parameters (here the generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here the generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
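    A minimal sketch of the bootstrap step for the single hydrological parameter is shown below. The calibrate and flood_quantile functions are hypothetical placeholders standing in for the SHYREG calibration and quantile estimation; only the resampling logic is the point:

```python
import numpy as np

rng = np.random.default_rng(42)

def calibrate(flows):
    """Hypothetical calibration of the single hydrological parameter
    against a record of observed flows (placeholder: the sample mean)."""
    return flows.mean()

def flood_quantile(theta, return_period=100.0):
    """Hypothetical mapping from the calibrated parameter to a flood
    quantile (placeholder relation, for illustration only)."""
    return theta * np.log(return_period)

obs = rng.gamma(shape=2.0, scale=30.0, size=40)   # synthetic flow record
estimates = []
for _ in range(1000):
    resampled = rng.choice(obs, size=obs.size, replace=True)  # bootstrap sample
    estimates.append(flood_quantile(calibrate(resampled)))
low, high = np.percentile(estimates, [2.5, 97.5])
print(f"95% bootstrap interval for Q100: [{low:.1f}, {high:.1f}]")
```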

  10. ABM Drag_Pass Report Generator

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Gladden, Roy; Khanampornpan, Teerapat

    2008-01-01

    dragREPORT software was developed in parallel with abmREPORT, which is described in the preceding article, and both programs were built on the capabilities created during that process. This tool generates a drag_pass report that summarizes vital information from the MRO aerobraking drag_pass build process, to both facilitate sequence reviews and provide a high-level summarization of the sequence for mission management. The script extracts information from the ENV, SSF, FRF, SCMFmax, and OPTG files, presenting it in a single, easy-to-check report providing the majority of parameters needed for cross-check and verification as part of the sequence review process. Prior to dragREPORT, all the needed information was spread across a number of different files, each in a different format. This software is a Perl script that extracts vital summarization information and build-process details from a number of source files into a single, concise report format used to aid the MPST sequence review process and to provide a high-level summarization of the sequence for mission management reference. This software could be adapted for future aerobraking missions to provide similar reports, review and summarization information.

  11. Modelling of dynamic contact length in rail grinding process

    NASA Astrophysics Data System (ADS)

    Zhi, Shaodan; Li, Jianyong; Zarembski, A. M.

    2014-09-01

    Rails endure frequent dynamic loads from passing trains while supporting them and guiding their wheels. The accumulated stress concentrations cause plastic deformation of the rail, generating corrugations, contact fatigue cracks and other defects, and leading to dangerous conditions and even derailment risks. Rail grinding technology was therefore developed, in which rotating grinding stones are pressed onto the rail to remove defects. Such grinding work has been directed by experience rather than scientific guidance, lacking flexible and systematic operating methods. With a grinding control unit holding the grinding stones, the rail grinding process has the characteristics not only of surface grinding but also of a running railway vehicle. First of all, it is important to analyze the contact length between the grinding stone and the rail, because the contact length is a critical parameter for measuring the grinding capability of the stones. Moreover, models of the railway vehicle unit coupled with the grinding stone are needed to represent the rail grinding car. Therefore, a theoretical model for the contact length is developed based on geometrical analysis, and the calculation models are improved by considering the grinding car's dynamic behavior during the grinding process, as sketched by the baseline relation below. Eventually, results are obtained from the models by taking both the operation parameters and the structure parameters into the calculation, which are suitable for revealing the rail grinding process by combining the grinding mechanism and the railway vehicle system.
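    As a point of reference (a standard static baseline from the grinding literature, not the dynamic model developed in this record), the geometric contact length between a grinding stone and a flat surface is often approximated by

```latex
l_c \;\approx\; \sqrt{a_e \, d_{\mathrm{eq}}},
```

    where a_e is the depth of cut and d_eq the equivalent stone (wheel) diameter; the contribution here is to extend such geometric estimates with the grinding car's dynamic behavior.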

  12. Preliminary Result of Earthquake Source Parameters the Mw 3.4 at 23:22:47 IWST, August 21, 2004, Centre Java, Indonesia Based on MERAMEX Project

    NASA Astrophysics Data System (ADS)

    Laksono, Y. A.; Brotopuspito, K. S.; Suryanto, W.; Widodo; Wardah, R. A.; Rudianto, I.

    2018-03-01

    In order to study the subsurface structure at the Merapi Lawu anomaly (MLA) using forward modelling or full waveform inversion, good earthquake source parameters are needed. The best source parameters come from seismograms with a high signal-to-noise ratio (SNR). Besides that, the source must be near the MLA location, and the stations used for the estimation must be outside the MLA in order to avoid the anomaly. At first, the seismograms were processed with the SEISAN v10 software using a few stations from the MERAMEX project. After finding a hypocentre that matched the criteria, we fine-tuned the source parameters using more stations. Based on seismograms from 21 stations, the following source parameters were obtained: the event occurred on August 21, 2004, at 23:22:47 Indonesia western standard time (IWST), with epicentre coordinates 7.80°S, 101.34°E, hypocentre depth 47.3 km, dominant frequency f0 = 3.0 Hz and earthquake magnitude Mw = 3.4.

  13. Matching experimental and three dimensional numerical models for structural vibration problems with uncertainties

    NASA Astrophysics Data System (ADS)

    Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.

    2018-03-01

    A simulation model that examines the dynamic behavior of real structures needs to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then treated as random variables in the finite element model to explore the effects of uncertainty on the quality of the model outputs, i.e. the natural frequencies. The accuracy of the output predictions from the model is compared with the experimental results. To this end, non-contact experimental modal analysis is conducted to identify the natural frequencies of the samples. The results show good agreement with the experimental data. Furthermore, it is demonstrated that geometrical uncertainties have more influence on the natural frequencies than material parameters, although the material uncertainties are about two times higher than the geometrical uncertainties. This gives valuable insight for improving the finite element model, given the various parameter ranges required in a modeling process involving uncertainty.

  14. Temperature determination using pyrometry

    DOEpatents

    Breiland, William G.; Gurary, Alexander I.; Boguslavskiy, Vadim

    2002-01-01

    A method for determining the temperature of a surface upon which a coating is grown using optical pyrometry by correcting Kirchhoff's law for errors in the emissivity or reflectance measurements associated with the growth of the coating and subsequent changes in the surface thermal emission and heat transfer characteristics. By a calibration process that can be carried out in situ in the chamber where the coating process occurs, an error calibration parameter can be determined that allows more precise determination of the temperature of the surface using optical pyrometry systems. The calibration process needs only to be carried out when the physical characteristics of the coating chamber change.

  15. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature T_s and chamber pressure P_c. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front T_i and the sublimation rate ṁ_sub. T_s was identified as the most influential parameter on both T_i and ṁ_sub, followed by P_c and the dried product mass transfer resistance α_Rp for T_i and ṁ_sub, respectively. The GSA findings were experimentally validated for ṁ_sub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
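    A minimal sketch of the regression-based GSA (standardized regression coefficients) is shown below; the primary_drying_model function and the parameter ranges are hypothetical placeholders for the validated mechanistic model used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def primary_drying_model(Ts, Pc, Rp):
    """Hypothetical stand-in for the mechanistic primary-drying model;
    returns a sublimation-rate-like output (arbitrary units)."""
    return 0.9 * Ts - 0.3 * Pc - 0.4 * Rp

n = 2000
X = np.column_stack([
    rng.uniform(-40.0, 0.0, n),   # shelf temperature T_s (deg C)
    rng.uniform(5.0, 30.0, n),    # chamber pressure P_c (Pa)
    rng.uniform(0.5, 2.0, n),     # dried product resistance (scaled)
])
Y = primary_drying_model(X[:, 0], X[:, 1], X[:, 2])

# Regression-based GSA: standardized regression coefficients (SRC).
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, Y, rcond=None)[0][1:]
src = beta * X.std(axis=0) / Y.std()
for name, s in zip(["T_s", "P_c", "R_p"], src):
    print(f"{name}: SRC = {s:+.3f}")
```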

  16. Fractional Brownian motion time-changed by gamma and inverse gamma process

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Wyłomańska, A.; Połoczański, R.; Sundar, S.

    2017-02-01

    Many real time series exhibit behavior characteristic of long-range dependent data. Additionally, these time series often have constant time periods and characteristics similar to Gaussian processes, although they are not Gaussian. There is therefore a need to consider new classes of systems to model these kinds of empirical behavior. Motivated by this fact, in this paper we analyze two processes which exhibit the long-range dependence property and have additional interesting characteristics which may be observed in real phenomena. Both of them are constructed as the superposition of fractional Brownian motion (FBM) and another process. In the first case the internal process, which plays the role of time, is the gamma process, while in the second case the internal process is its inverse. We present their main properties in detail, paying particular attention to the long-range dependence property. Moreover, we show how to simulate these processes and estimate their parameters. We propose a novel method based on the rescaled modified cumulative distribution function for estimating the parameters of the second considered process. This method is very useful in the description of rounded data, like waiting times of subordinated processes delayed by inverse subordinators. By using the Monte Carlo method we show the effectiveness of the proposed estimation procedures. Finally, we present applications of the proposed models to real time series.
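    A minimal simulation sketch of the first construction (FBM subordinated by a gamma process) is given below; the Hurst exponent, time step and gamma-increment parameters are arbitrary illustrative choices, and the second construction would instead use the inverse (first-passage times) of the gamma process as the internal time:

```python
import numpy as np

rng = np.random.default_rng(7)

def fbm_at_times(times, H):
    """Sample fractional Brownian motion at the given increasing, positive
    times via a Cholesky factorization of its covariance matrix."""
    t = np.asarray(times, dtype=float)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    cov += 1e-12 * np.eye(len(t))             # numerical jitter
    return np.linalg.cholesky(cov) @ rng.standard_normal(len(t))

# Gamma subordinator with unit mean rate: increments ~ Gamma(nu*dt, 1/nu).
n, dt, H, nu = 500, 0.01, 0.7, 50.0
T = np.cumsum(rng.gamma(shape=nu * dt, scale=1.0 / nu, size=n))
X = fbm_at_times(T, H)        # FBM time-changed by the gamma process
```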

  17. Integrating Materials, Manufacturing, Design and Validation for Sustainability in Future Transport Systems

    NASA Astrophysics Data System (ADS)

    Price, M. A.; Murphy, A.; Butterfield, J.; McCool, R.; Fleck, R.

    2011-05-01

    The predictive methods currently used for material specification, component design and the development of manufacturing processes need to evolve beyond the current `metal centric' state of the art if advanced composites are to realise their potential in delivering sustainable transport solutions. There are, however, significant technical challenges associated with this process. Deteriorating environmental, political, economic and social conditions across the globe have resulted in unprecedented pressures to improve the operational efficiency of the manufacturing sector generally and to change perceptions regarding the environmental credentials of transport systems in particular. There is a need to apply new technologies and develop new capabilities to ensure commercial sustainability in the face of twenty-first century economic and climatic conditions as well as transport market demands. A major technology gap exists between design, analysis and manufacturing processes in both the OEMs and the smaller companies that make up the SME-based supply chain. As regulatory requirements align with environmental needs, manufacturers are increasingly responsible for the broader lifecycle aspects of vehicle performance. These include not only manufacture and supply but also disposal and re-use or re-cycling. In order to make advances in the reduction of emissions coupled with improved economic efficiency through the provision of advanced lightweight vehicles, four key challenges are identified: material systems, manufacturing systems, integrated design methods using digital manufacturing tools, and validation systems. This paper presents a project which has been designed to address these four key issues, using at its core a digital framework for the creation and management of key parameters related to the lifecycle performance of thermoplastic composite parts and structures. It aims to provide capability for the proposition, definition, evaluation and demonstration of advanced lightweight structures for new generation vehicles in the context of whole-life performance parameters.

  18. Margin of Safety Definition and Examples Used in Safety Basis Documents and the USQ Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaulieu, R. A.

    The Nuclear Safety Management final rule, 10 CFR 830, provides an undefined term, margin of safety (MOS). Safe harbors listed in 10 CFR 830, Table 2, such as DOE-STD-3009, use but do not define the term. This lack of definition has created the need for a definition. This paper provides a definition of MOS and documents examples of MOS as applied in a U.S. Department of Energy (DOE) approved safety basis for an existing nuclear facility. If we understand what MOS looks like regarding Technical Safety Requirements (TSR) parameters, then it helps us compare against other parameters that do not involve a MOS. This paper also documents parameters that are not MOS. These criteria could be used to determine if an MOS exists in safety basis documents. This paper helps DOE, including the National Nuclear Security Administration (NNSA) and its contractors responsible for the safety basis, improve safety basis documents and the unreviewed safety question (USQ) process with respect to MOS.

  19. Topology Synthesis of Structures Using Parameter Relaxation and Geometric Refinement

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.

    2007-01-01

    Typically, structural topology optimization problems undergo relaxation of certain design parameters to allow the existence of intermediate variable optimum topologies. Relaxation permits the use of a variety of gradient-based search techniques and has been shown to guarantee the existence of optimal solutions and eliminate mesh dependencies. This Technical Publication (TP) will demonstrate the application of relaxation to a control point discretization of the design workspace for the structural topology optimization process. The control point parameterization with subdivision has been offered as an alternative to the traditional method of discretized finite element design domain. The principle of relaxation demonstrates the increased utility of the control point parameterization. One of the significant results of the relaxation process offered in this TP is that direct manufacturability of the optimized design will be maintained without the need for designer intervention or translation. In addition, it will be shown that relaxation of certain parameters may extend the range of problems that can be addressed; e.g., in permitting limited out-of-plane motion to be included in a path generation problem.

  20. Multi Response Optimization of Process Parameters Using Grey Relational Analysis for Turning of Al-6061

    NASA Astrophysics Data System (ADS)

    Deepak, Doreswamy; Beedu, Rajendra

    2017-08-01

    Al-6061 is one of the most widely used materials in product manufacturing. The major qualities of aluminium are reasonably good strength, corrosion resistance and thermal conductivity, which have made it a suitable material for various applications. While manufacturing these products, companies strive to reduce production cost by increasing the material removal rate (MRR); meanwhile, surface quality needs to be maintained at an acceptable level. This paper aims at finding a compromise between the high-MRR and low-surface-roughness requirements by applying Grey Relational Analysis (GRA). The article presents the selection of controllable parameters such as longitudinal feed, cutting speed and depth of cut to arrive at optimum values of MRR and surface roughness (Ra). The process parameters for the experiments were selected based on Taguchi's L9 array with two replications. Grey relational analysis, being well suited to multi-response optimization, was adopted for the optimization. The results show that feed rate is the most significant factor influencing MRR and surface finish.
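    A minimal sketch of the GRA computation is shown below with hypothetical L9 response values (not the measured data of this study): responses are normalized (larger-the-better for MRR, smaller-the-better for Ra), grey relational coefficients are computed with the usual distinguishing coefficient ζ = 0.5, and the run with the highest grey relational grade is selected:

```python
import numpy as np

# Hypothetical L9 responses: MRR (larger-the-better) and Ra (smaller-the-better).
mrr = np.array([12.1, 15.3, 18.2, 14.0, 17.5, 20.1, 16.4, 19.0, 22.3])
ra  = np.array([1.8, 2.1, 2.6, 1.6, 2.0, 2.4, 1.5, 1.9, 2.2])

def normalize(x, larger_is_better):
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_relational_coefficient(z, zeta=0.5):
    delta = np.abs(1.0 - z)               # deviation from the ideal sequence
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

Z = np.column_stack([normalize(mrr, True), normalize(ra, False)])
xi = np.column_stack([grey_relational_coefficient(Z[:, j]) for j in range(Z.shape[1])])
grade = xi.mean(axis=1)                   # grey relational grade per run
print("Best run (1-indexed):", int(np.argmax(grade)) + 1)
```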

  1. Identification of a thermo-elasto-viscoplastic behavior law for the simulation of thermoforming of high impact polystyrene

    NASA Astrophysics Data System (ADS)

    Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.

    2018-05-01

    Thermoforming of high impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material and process variables. Accurate thermoforming simulations are needed in the optimization process, and determining the behavior of the material under thermoforming conditions is one of the key requirements for an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered temperature-dependent, and the strain-rate (time) dependence is introduced using a Prony series. Tensile tests were carried out at different temperatures and strain rates, and the material parameters were then identified using an NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine. To compare the numerical results with the experimental ones, the thickness distribution and the bubble shape were investigated.
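    For reference, a common way to write the Prony-series part of such a law (a generic viscoelastic relaxation modulus, not necessarily the exact form identified in this work) is

```latex
E(t,\theta) \;=\; E_{\infty}(\theta) \;+\; \sum_{i=1}^{N} E_{i}(\theta)\, e^{-t/\tau_{i}(\theta)},
```

    where the long-term modulus E_∞, the Prony amplitudes E_i and the relaxation times τ_i are treated as temperature-dependent and identified from the tensile tests.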

  2. Reinventing The Design Process: Teams and Models

    NASA Technical Reports Server (NTRS)

    Wall, Stephen D.

    1999-01-01

    The future of space mission design will be dramatically different from the past. Formerly, performance-driven paradigms emphasized data return, with cost and schedule being secondary issues. Now and in the future, costs are capped and schedules are fixed; these two variables must be treated as independent in the design process. Accordingly, JPL has redesigned its design process. At the conceptual level, design times have been reduced by properly defining the required design depth, improving the linkages between tools, and managing team dynamics. In implementation-phase design, system requirements will be held in crosscutting models, linked to subsystem design tools through a central database that captures the design and supplies needed configuration management and control. Mission goals will then be captured in timelining software that drives the models, testing their capability to execute the goals. Metrics are used to measure and control both processes and to ensure that design parameters converge through the design process within schedule constraints. This methodology manages margins controlled by acceptable risk levels. Thus, teams can evolve risk tolerance (and cost) as they would any engineering parameter. This new approach allows more design freedom for a longer time, which tends to encourage revolutionary and unexpected improvements in design.

  3. Informing soil models using pedotransfer functions: challenges and perspectives

    NASA Astrophysics Data System (ADS)

    Pachepsky, Yakov; Romano, Nunzio

    2015-04-01

    Pedotransfer functions (PTFs) are empirical relationships between parameters of soil models and more easily obtainable data on soil properties. PTFs have become an indispensable tool in modeling soil processes. As alternative methods to direct measurements, they bridge the data we have and the data we need by using soil survey and monitoring data to enable modeling for real-world applications. Pedotransfer is extensively used in soil models addressing the most pressing environmental issues. The following is an attempt to provoke a discussion by listing current issues that are faced by PTF development. 1. As more intricate biogeochemical processes are being modeled, development of PTFs for parameters of those processes becomes essential. 2. Since the equations to express PTF relationships are essentially unknown, there has been a trend to employ highly nonlinear equations, e.g. neural networks, which in theory are flexible enough to simulate any dependence. This, however, comes with the penalty of a large number of coefficients that are difficult to estimate reliably. A preliminary classification applied to PTF inputs and PTF development for each of the resulting groups may provide simple, transparent, and more reliable pedotransfer equations. 3. The multiplicity of models, i.e. the presence of several models producing the same output variables, is commonly found in soil modeling, and is a typical feature in the PTF research field. However, PTF intercomparisons are lagging behind PTF development. This is aggravated by the fact that coefficients of PTFs based on machine-learning methods are usually not reported. 4. The existence of PTFs is the result of some soil processes. Using models of those processes to generate PTFs and, more generally, developing physics-based PTFs remain to be explored. 5. Estimating the variability of soil model parameters becomes increasingly important as the newer modeling technologies, such as data assimilation, ensemble modeling, and model abstraction, become progressively more popular. The variability PTFs rely on the spatio-temporal dynamics of soil variables, and that opens new sources of PTF inputs stemming from technology advances such as monitoring networks, remote and proximal sensing, and omics. 6. Burgeoning PTF development has not so far affected several persisting regional knowledge gaps. Remarkably little effort has so far been put into PTF development for saline soils, calcareous and gypsiferous soils, peat soils, paddy soils, soils with well expressed shrink-swell behavior, and soils affected by freeze-thaw cycles. 7. Soils from tropical regions are quite often considered as a pseudo-entity for which a single PTF can be applied. This assumption will not be needed as more regional data are accumulated and analyzed. 8. Other advances in regional PTFs will be possible due to the presence of large databases on region-specific useful PTF inputs such as moisture equivalent, laser diffractometry data, or soil specific surface. 9. Most flux models in soils, be it water, solutes, gas, or heat, involve parameters that are scale-dependent. Including scale dependencies in PTFs will be critical to improve PTF usability. 10. Another scale-related matter is pedotransfer for coarse-scale soil modeling, for example, in weather or climate models. Soil hydraulic parameters in these models cannot be measured and the efficiency of the pedotransfer can be evaluated only in terms of its utility.
There is a pressing need to determine combinations of pedotransfer and upscaling procedures that can lead to the derivation of suitable coarse-scale soil model parameters. 11. The spatial coarse scale often assumes a coarse temporal support, and that may lead to including in PTFs other environmental variables such as topographic, weather, and management attributes. 12. Some PTF inputs are time- or space-dependent, and yet little is known about whether the spatial or temporal structure of PTF outputs is properly predicted from such inputs. 13. Further exploration is needed to use PTFs as a source of hypotheses on, and insights into, relationships between soil processes and soil composition as well as between soil structure and soil functioning. PTFs are empirical relationships and their accuracy outside the database used for their development is essentially unknown. Therefore, they should never be considered as an ultimate source of parameters in soil modeling; rather, they strive to provide a balance between accuracy and availability. The primary role of PTFs is to assist in modeling for screening and comparative purposes, establishing ranges and/or probability distributions of model parameters, and creating realistic synthetic soil datasets and scenarios. Developing and improving PTFs will remain the mainstream way of packaging data and knowledge for applications of soil modeling.

  4. Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herawati, Ida, E-mail: ida.herawati@students.itb.ac.id; Winardhi, Sonny; Priyono, Awali

    Anisotropy analysis has become an important step in the processing and interpretation of seismic data. One of the most important tasks in anisotropy analysis is anisotropy parameter estimation, which can be done using well data, core data or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis; however, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation using seismic data is needed to obtain wide coverage of the anisotropy of a particular layer. In an anisotropic reservoir, analysis of anisotropy parameters also helps us better understand the reservoir characteristics. Anisotropy parameters, especially ε, are related to rock property and lithology determination. The current research aims to estimate anisotropy parameters from seismic data and integrate well data, with a case study in a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, extensive study from different disciplines is needed to understand the reservoir. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with seismic response, it is necessary to build forward models of organic-rich shale. This paper focuses on studying the relationship between reservoir properties such as clay content, porosity and total organic content and anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as solid background or solid inclusion or both. The forward modeling results show that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties such as acoustic impedance and Vp/Vs are also presented.

  5. Is there a `universal' dynamic zero-parameter hydrological model? Evaluation of a dynamic Budyko model in US and India

    NASA Astrophysics Data System (ADS)

    Patnaik, S.; Biswal, B.; Sharma, V. C.

    2017-12-01

    River flow varies greatly in space and time, and the single biggest challenge for hydrologists and ecologists around the world is the fact that most rivers are either ungauged or poorly gauged. Although it is relatively easy to predict the long-term average flow of a river using the `universal' zero-parameter Budyko model, lack of data hinders short-term flow prediction at ungauged locations using traditional hydrological models, as they require observed flow data for model calibration. Flow prediction in ungauged basins thus requires a dynamic `zero-parameter' hydrological model. One way to achieve this is to regionalize a dynamic hydrological model's parameters; however, a zero-parameter dynamic hydrological model based on regionalization is not `universal'. An alternative attempt was made recently to develop a zero-parameter dynamic model by defining an instantaneous dryness index as a function of antecedent rainfall and solar energy inputs with the help of a decay function, and using the original Budyko function. The model was tested first in 63 US catchments and later in 50 Indian catchments; the median Nash-Sutcliffe efficiency (NSE) was found to be close to 0.4 in both cases. Although improvements need to be incorporated before the model can be used for reliable prediction, the main aim of this study was rather to understand hydrological processes. The overall results seem to suggest that the dynamic zero-parameter Budyko model is `universal'; in other words, natural catchments around the world are strikingly similar to each other in the way they respond to hydrologic inputs. We thus need to focus more on utilizing catchment similarities in hydrological modelling instead of over-parameterizing our models.
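    For reference, the original Budyko curve referred to above relates the long-term evaporation ratio to the aridity index; in one common formulation (quoted here as background, not as the instantaneous model of this study),

```latex
\frac{E}{P} \;=\; \left\{ \frac{E_{p}}{P}\,
      \tanh\!\left(\frac{P}{E_{p}}\right)
      \left[\,1 - \exp\!\left(-\frac{E_{p}}{P}\right)\right] \right\}^{1/2},
```

    where E is actual evaporation, P precipitation and E_p potential evaporation; the dynamic model replaces the long-term aridity index E_p/P with an instantaneous dryness index built from antecedent rainfall and solar energy inputs.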

  6. Investigation of polymer derived ceramics cantilevers for application of high speed atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Wu, Chia-Yun

    High speed Atomic Force Microscopy (AFM) has a wide variety of applications ranging from nanomanufacturing to biophysics. In order to achieve higher scanning speeds in certain AFM modes, cantilevers with high resonant frequencies are needed; therefore, the goal of this research is to investigate using polymer derived ceramics for making high resonant frequency AFM cantilevers with complex cross sections. The polymer derived ceramic studied is silicon carbide. Polymer derived ceramics offer a potentially more economical fabrication approach for MEMS due to their relatively low processing temperatures and ease of complex shape design. Photolithography was used to make the desired cantilever shapes with micron-scale dimensions, followed by a wet etching process to release the cantilevers from the substrates. The whole manufacturing process borrows well-developed techniques from the semiconductor industry, and as such this project could also offer the opportunity to reduce the fabrication cost of AFM cantilevers and MEMS in general. The characteristics of silicon carbide made from the precursor polymer SMP-10 (Starfire Systems) were studied. In order to produce high-quality silicon carbide cantilevers, where the major concern is defects, proper process parameters needed to be determined. Films of polymer derived ceramics often have defects due to shrinkage during the conversion process, so control of defects was a central issue in this study. A second, related concern was preventing oxidation: the polymer derived ceramic chosen is easily oxidized during processing, and establishing an oxygen-free environment throughout the process was a significant challenge. The optimization of the photolithography and wet etching parameters was the final and central goal of the project; well-established microfabrication techniques were modified for use in making the cantilevers. The techniques developed here open a path to the fabrication of cantilevers with unconventional cross sections.

  7. The resilience and functional role of moss in boreal and arctic ecosystems.

    PubMed

    Turetsky, M R; Bond-Lamberty, B; Euskirchen, E; Talbot, J; Frolking, S; McGuire, A D; Tuittila, E-S

    2012-10-01

    Mosses in northern ecosystems are ubiquitous components of plant communities, and strongly influence nutrient, carbon and water cycling. We use literature review, synthesis and model simulations to explore the role of mosses in ecological stability and resilience. Moss community responses to disturbance showed all possible responses (increases, decreases, no change) within most disturbance categories. Simulations from two process-based models suggest that northern ecosystems would need to experience extreme perturbation before mosses were eliminated. But simulations with two other models suggest that loss of moss will reduce soil carbon accumulation primarily by influencing decomposition rates and soil nitrogen availability. It seems clear that mosses need to be incorporated into models as one or more plant functional types, but more empirical work is needed to determine how to best aggregate species. We highlight several issues that have not been adequately explored in moss communities, such as functional redundancy and singularity, relationships between response and effect traits, and parameter vs conceptual uncertainty in models. Mosses play an important role in several ecosystem processes that play out over centuries - permafrost formation and thaw, peat accumulation, development of microtopography - and there is a need for studies that increase our understanding of slow, long-term dynamical processes. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.

  8. The resilience and functional role of moss in boreal and arctic ecosystems

    USGS Publications Warehouse

    Turetsky, M.; Bond-Lamberty, B.; Euskirchen, E.S.; Talbot, J. J.; Frolking, S.; McGuire, A.D.; Tuittila, E.S.

    2012-01-01

    Mosses in northern ecosystems are ubiquitous components of plant communities, and strongly influence nutrient, carbon and water cycling. We use literature review, synthesis and model simulations to explore the role of mosses in ecological stability and resilience. Moss community responses to disturbance showed all possible responses (increases, decreases, no change) within most disturbance categories. Simulations from two process-based models suggest that northern ecosystems would need to experience extreme perturbation before mosses were eliminated. But simulations with two other models suggest that loss of moss will reduce soil carbon accumulation primarily by influencing decomposition rates and soil nitrogen availability. It seems clear that mosses need to be incorporated into models as one or more plant functional types, but more empirical work is needed to determine how to best aggregate species. We highlight several issues that have not been adequately explored in moss communities, such as functional redundancy and singularity, relationships between response and effect traits, and parameter vs conceptual uncertainty in models. Mosses play an important role in several ecosystem processes that play out over centuries – permafrost formation and thaw, peat accumulation, development of microtopography – and there is a need for studies that increase our understanding of slow, long-term dynamical processes.

  9. Image processing for IMRT QA dosimetry.

    PubMed

    Zaini, Mehran R; Forest, Gary J; Loshek, David D

    2005-01-01

    We have automated the determination of the placement location of the dosimetry ion chamber within intensity-modulated radiotherapy (IMRT) fields, as part of streamlining the entire IMRT quality assurance process. This paper describes the mathematical image-processing techniques used to arrive at the appropriate measurement locations within the planar dose maps of the IMRT fields. A specific spot within the found region is identified based on its flatness, radiation magnitude, location, area, and the avoidance of the interleaf spaces. The techniques used include applying a Laplacian, dilation, erosion, region identification, and measurement point selection based on three parameters: the size of the erosion operator, the gradient, and the importance of the area of a region versus its magnitude. These three parameters are adjustable by the user; however, the first one requires tweaking only on extremely rare occasions, the gradient requires rare adjustments, and the last parameter needs occasional fine-tuning. The algorithm has been tested in over 50 cases. In about 5% of cases, the algorithm does not find a measurement point because of extremely steep and narrow regions within the fluence maps. In such cases, our code allows manual selection of a point, which is also difficult, since the fluence map does not lend itself to an appropriate measurement point selection.
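    The sketch below illustrates the general idea (flat, high-dose region extraction by gradient thresholding, erosion, labeling and scoring); the specific thresholds, the scoring rule and the helper name are assumptions for illustration, not the authors' exact algorithm or parameter values:

```python
import numpy as np
from scipy import ndimage

def pick_measurement_point(dose, erosion_size=5, grad_factor=1.0, area_weight=0.5):
    """Pick a flat, high-dose location in a planar dose map (illustrative sketch)."""
    gy, gx = np.gradient(dose)
    grad = np.hypot(gx, gy)
    # Candidate pixels: high dose and locally flat (low gradient).
    candidates = (dose > 0.5 * dose.max()) & (grad < grad_factor * np.median(grad))
    # Erode to stay away from steep or narrow regions (and interleaf gaps).
    candidates = ndimage.binary_erosion(candidates,
                                        np.ones((erosion_size, erosion_size)))
    labels, n = ndimage.label(candidates)
    if n == 0:
        return None                        # fall back to manual selection
    scores = []
    for lab in range(1, n + 1):
        region = labels == lab
        # Trade off region area against mean dose magnitude.
        scores.append(area_weight * region.sum()
                      + (1 - area_weight) * dose[region].mean())
    best = int(np.argmax(scores)) + 1
    cy, cx = ndimage.center_of_mass(dose, labels, best)
    return int(round(cy)), int(round(cx))
```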

  10. Performance improvement of microbial fuel cell (MFC) using suitable electrode and Bioengineered organisms: A review

    PubMed Central

    Choudhury, Payel; Prasad Uday, Uma Shankar; Bandyopadhyay, Tarun Kanti; Ray, Rup Narayan

    2017-01-01

    ABSTRACT There is an urgent need to find environmentally friendly and sustainable technologies for alternative energy due to the rapid depletion of fossil fuels and increasing industrialization. Microbial fuel cells (MFCs) have operational and functional advantages over current technologies for energy generation from organic matter, as they directly convert substrate into electricity at ambient temperature. However, MFCs are still unsuitable for high energy demands due to practical limitations. The overall performance of an MFC depends on the microorganism, appropriate electrode materials, suitable MFC designs, and optimized process parameters, which would accelerate commercialization of this technology in the near future. In this review, we put forth the recent developments in microorganisms and electrode materials that are critical for bioelectricity generation. This gives a comprehensive insight into the characteristics, options, modifications, and evaluations of these parameters and their effects on the process development of MFCs. PMID:28453385

  11. Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.

    PubMed

    Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew

    2017-08-10

    When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters like the window size often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon fluxes, and the method needs to be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixels from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear coefficient response, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for limited-size extended sources. A hardware implementation of the proposed estimator is discussed.
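    A minimal batch version of the windowed center-of-gravity computation is shown below; a true stream-processing implementation would accumulate the same sums pixel by pixel as data arrive from the detector, and the window handling here is a simplified assumption rather than the proposed floating-window estimator itself:

```python
import numpy as np

def cog_centroid(frame, window_center, half_size, background=0.0):
    """Center-of-gravity centroid inside a window around a reference pixel."""
    cy, cx = window_center
    y0, y1 = max(cy - half_size, 0), min(cy + half_size + 1, frame.shape[0])
    x0, x1 = max(cx - half_size, 0), min(cx + half_size + 1, frame.shape[1])
    w = np.clip(frame[y0:y1, x0:x1] - background, 0.0, None)
    total = w.sum()
    if total == 0:
        return float(cy), float(cx)        # no signal: return the reference pixel
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return float((w * ys).sum() / total), float((w * xs).sum() / total)
```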

  12. Mesoscale Polymer Dissolution Probed by Raman Spectroscopy and Molecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Tsun-Mei; Xantheas, Sotiris S.; Vasdekis, Andreas E.

    2016-10-13

    The diffusion of various solvents into a polystyrene (PS) matrix was probed experimentally by monitoring the temporal profiles of the Raman spectra and theoretically from molecular dynamics (MD) simulations of the binary system. The simulation results assist in providing a fundamental, molecular-level connection between the mixing/dissolution processes and the difference Δδ = δ_solvent − δ_PS in the values of the Hildebrand parameter (δ) between the two components of the binary systems: solvents having values of δ similar to that of PS (small Δδ) exhibit fast diffusion into the polymer matrix, whereas the diffusion slows down considerably when the δ values are different (large Δδ). To this end, the Hildebrand parameter was identified as a useful descriptor that governs the process of mixing in polymer-solvent binary systems. The experiments also provide insight into further refinements of the models specific to non-Fickian diffusion phenomena that need to be used in the simulations.
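    For reference, the Hildebrand solubility parameter used above is defined from the cohesive energy density (a standard definition, not specific to this study):

```latex
\delta \;=\; \sqrt{\frac{\Delta H_{\mathrm{vap}} - RT}{V_{m}}},
\qquad
\Delta\delta \;=\; \delta_{\mathrm{solvent}} - \delta_{\mathrm{PS}},
```

    where ΔH_vap is the molar enthalpy of vaporization and V_m the molar volume; small Δδ indicates thermodynamic compatibility and, as observed here, faster solvent diffusion into the PS matrix.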

  13. Performance improvement of microbial fuel cell (MFC) using suitable electrode and Bioengineered organisms: A review.

    PubMed

    Choudhury, Payel; Prasad Uday, Uma Shankar; Bandyopadhyay, Tarun Kanti; Ray, Rup Narayan; Bhunia, Biswanath

    2017-09-03

    There is an urgent need to find environmentally friendly and sustainable technologies for alternative energy due to the rapid depletion of fossil fuels and increasing industrialization. Microbial fuel cells (MFCs) have operational and functional advantages over current technologies for energy generation from organic matter, as they directly convert substrate into electricity at ambient temperature. However, MFCs are still unsuitable for high energy demands due to practical limitations. The overall performance of an MFC depends on the microorganism, appropriate electrode materials, suitable MFC designs, and optimized process parameters, which would accelerate commercialization of this technology in the near future. In this review, we put forth the recent developments in microorganisms and electrode materials that are critical for bioelectricity generation. This gives a comprehensive insight into the characteristics, options, modifications, and evaluations of these parameters and their effects on the process development of MFCs.

  14. Modeling Subsurface Behavior at the System Level: Considerations and a Path Forward

    NASA Astrophysics Data System (ADS)

    Geesey, G.

    2005-12-01

    The subsurface is an obscure but essential resource for life on Earth. It is an important region for carbon production and sequestration, a source and reservoir for energy, minerals and metals and potable water. There is a growing need to better understand subsurface processes that control the exploitation and security of these resources. Our best models often fail to predict these processes at the field scale because of limited understanding of 1) the processes and the controlling parameters, 2) how processes are coupled at the field scale, 3) geological heterogeneities that control hydrological, geochemical and microbiological processes at the field scale, and 4) lack of data sets to calibrate and validate numerical models. There is a need for experimental data obtained at scales larger than those obtained at the laboratory bench that take into account the influence of hydrodynamics, geochemical reactions including complexation and chelation/adsorption/precipitation/ion exchange/oxidation-reduction/colloid formation and dissolution, and reactions of microbial origin. Furthermore, the coupling of each of these processes and reactions needs to be evaluated experimentally at a scale that produces data that can be used to calibrate numerical models so that they accurately describe field scale system behavior. Establishing the relevant experimental scale for collection of data from coupled processes remains a challenge and will likely be process-dependent and involve iterations of experimentation and data collection at different intermediate scales until the models calibrated with the appropriate data sets achieve an acceptable level of performance. Assuming that the geophysicists will soon develop technologies to define geological heterogeneities over a wide range of scales in the subsurface, geochemists need to continue to develop techniques to remotely measure abiotic reactions, while geomicrobiologists need to continue their development of complementary technologies to remotely measure microbial community parameters that define their key functions at a scale that accurately reflects their role in large scale subsurface system behavior. The practical questions that geomicrobiologists must answer in the short term are: 1) What is known about the activities of the dominant microbial populations or those of their closest relatives? 2) Which of these activities is likely to dominate under in situ conditions? In the process of answering these questions, researchers will obtain answers to questions of a more fundamental nature such as 1) How deep does "active" life extend below the surface of the seafloor and terrestrial subsurface? 2) How are electrons exchanged between microbial cells and solid phase minerals? 3) What is the metabolic state and mechanism of survival of "inactive" life forms in the subsurface? 4) What can genomes of life forms trapped in geological material tell us about evolution of life that current methods cannot? The subsurface represents a challenging environment to understand and model. As the need to understand subsurface processes increases and the technologies to characterize them become available, modeling subsurface behavior will approach the level of sophistication of models used today to predict the behavior of other large scale systems such as the oceans.

  15. Differential Geometry Applied To Least-Square Error Surface Approximations

    NASA Astrophysics Data System (ADS)

    Bolle, Ruud M.; Sabbah, Daniel

    1987-08-01

    This paper focuses on the extraction of the parameters of individual surfaces from noisy depth maps. The basis for this is least-square error polynomial approximations to the range data and the curvature properties that can be computed from these approximations. The curvature properties are derived using the invariants of the Weingarten map evaluated at the origin of local coordinate systems centered at the range points. The Weingarten map is a well-known concept in differential geometry; a brief treatment of the differential geometry pertinent to surface curvature is given. We use the curvature properties for extracting certain surface parameters from the approximations. We then show that curvature properties alone are not enough to obtain all the parameters of the surfaces; higher-order properties (information about the change of curvature) are needed to obtain full parametric descriptions. This surface parameter estimation problem arises in the design of a vision system to recognize 3D objects whose surfaces are composed of planar patches and patches of quadrics of revolution (quadrics that are surfaces of revolution). A significant portion of man-made objects can be modeled using these surfaces. The actual process of recognition and parameter extraction is framed as a set of stacked parameter space transforms. The transforms are "stacked" in the sense that any one transform computes only a partial geometric description that forms the input to the next transform. Those who are interested in the organization and control of the recognition and parameter extraction process are referred to [Sabbah86]; this paper briefly touches upon the organization, but concentrates mainly on the geometrical aspects of the parameter extraction.
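    As background for the curvature computation (a standard Monge-patch result, which may differ in detail from the paper's exact parameterization): if the local least-square fit near a range point, expressed in a tangent coordinate system, is z ≈ a x² + b x y + c y², then the invariants of the Weingarten map at the origin give

```latex
K \;=\; 4ac - b^{2}, \qquad H \;=\; a + c, \qquad
\kappa_{1,2} \;=\; H \pm \sqrt{H^{2} - K},
```

    i.e. the Gaussian curvature, the mean curvature and the principal curvatures, from which surface types (planar, cylindrical, spherical, and so on) and some of their parameters can be classified.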

  16. Modelling of the combustion velocity in UIT-85 on sustainable alternative gas fuel

    NASA Astrophysics Data System (ADS)

    Smolenskaya, N. M.; Korneev, N. V.

    2017-05-01

    The flame propagation velocity is one of the determining parameters characterizing the intensity of the combustion process in the cylinder of an engine with spark ignition. Tightening toxicity and efficiency requirements for internal combustion engines contribute to a gradual transition to sustainable alternative fuels, which include mixtures of natural gas with hydrogen. Currently, studies of the conditions and regularities of combustion of this fuel to improve the efficiency of its application are carried out in many countries. Therefore, this work is devoted to modelling the average propagation velocities of the flame front of natural gas with hydrogen additions of up to 15% by weight of the fuel, and to determining the possibility of assessing the heat release characteristics from the average velocities of flame front propagation in the primary and secondary phases of combustion. Experimental studies conducted on the single-cylinder universal installation UIT-85 showed a relationship between the heat release characteristics and the parameters of flame front propagation. Based on the analysis of experimental data, empirical dependences were obtained for the average velocities of flame front propagation in the first and main phases of combustion, taking into account changes in various operating parameters of the spark-ignition engine. The obtained results allow the heat release characteristics to be determined and the impact of hydrogen addition on the natural gas combustion process to be assessed, which is needed to identify ways of improving combustion efficiency, including under changed throttling parameters.

  17. Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)

    DOE PAGES

    Atchley, Adam L.; Painter, Scott L.; Harp, Dylan R.; ...

    2015-09-01

    Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. However, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth system models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth system models challenge validation and parameterization of hydrothermal models. A recently developed surface–subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurements to achieve the goals of constructing a process-rich model based on plausible parameters and to identify fine-scale controls of ALT in ice-wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations is used to evaluate and parameterize the different model processes necessary to simulate freeze–thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g., troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models have the potential to provide reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.

  18. Development of a Real Time Sparse Non-Negative Matrix Factorization Module for Cochlear Implants by Using xPC Target

    PubMed Central

    Hu, Hongmei; Krasoulis, Agamemnon; Lutman, Mark; Bleeck, Stefan

    2013-01-01

    Cochlear implants (CIs) require efficient speech processing to maximize information transmission to the brain, especially in noise. A novel CI processing strategy was proposed in our previous studies, in which sparsity-constrained non-negative matrix factorization (NMF) was applied to the envelope matrix in order to improve CI performance in noisy environments. It was shown that the algorithm needs to be adaptive, rather than fixed, in order to adjust to acoustical conditions and individual characteristics. Here, we explore the benefit of a system that allows the user to adjust the signal processing in real time according to their individual listening needs and their individual hearing capabilities. In this system, which is based on MATLAB®, SIMULINK® and the xPC Target™ environment, the input/output (I/O) boards are interfaced between the SIMULINK blocks and the CI stimulation system, such that the output can be controlled successfully in the manner of a hardware-in-the-loop (HIL) simulation, hence offering a convenient way to implement a real time signal processing module that does not require any low level language. The sparsity constraint parameter of the algorithm was adapted online subjectively during an experiment with normal-hearing subjects and noise-vocoded speech simulation. Results show that subjects chose different parameter values according to their own intelligibility preferences, indicating that adaptive real time algorithms are beneficial to fully explore subjective preferences. We conclude that adaptive real time systems are beneficial for the experimental design, and such systems allow one to conduct psychophysical experiments with high ecological validity. PMID:24129021
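
    As a rough illustration of the factorization step described above, the sketch below implements a generic L1-penalized multiplicative-update NMF applied to a non-negative "envelope matrix"; it is not necessarily the exact sparsity-constrained algorithm of the cited study, and all names, dimensions, and the penalty form are assumptions for illustration.

    ```python
    import numpy as np

    def sparse_nmf(V, rank, sparsity=0.1, n_iter=200, eps=1e-9):
        """Factorize a non-negative matrix V (channels x time frames) as V ~ W @ H
        using multiplicative updates with an L1 penalty on H (a generic sparse-NMF variant)."""
        rng = np.random.default_rng(0)
        n, m = V.shape
        W = rng.random((n, rank)) + eps
        H = rng.random((rank, m)) + eps
        for _ in range(n_iter):
            # Update H: the L1 sparsity penalty enters the denominator.
            H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)
            # Update W with the standard (unpenalized) multiplicative rule.
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Toy usage: a 22-channel envelope matrix over 500 frames, factorized at rank 8.
    V = np.abs(np.random.default_rng(1).normal(size=(22, 500)))
    W, H = sparse_nmf(V, rank=8, sparsity=0.05)
    reconstructed = W @ H   # sparser approximation of the envelope matrix
    ```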

  19. Development of a real time sparse non-negative matrix factorization module for cochlear implants by using xPC target.

    PubMed

    Hu, Hongmei; Krasoulis, Agamemnon; Lutman, Mark; Bleeck, Stefan

    2013-10-14

    Cochlear implants (CIs) require efficient speech processing to maximize information transmission to the brain, especially in noise. A novel CI processing strategy was proposed in our previous studies, in which sparsity-constrained non-negative matrix factorization (NMF) was applied to the envelope matrix in order to improve CI performance in noisy environments. It was shown that the algorithm needs to be adaptive, rather than fixed, in order to adjust to acoustical conditions and individual characteristics. Here, we explore the benefit of a system that allows the user to adjust the signal processing in real time according to their individual listening needs and their individual hearing capabilities. In this system, which is based on MATLAB®, SIMULINK® and the xPC Target™ environment, the input/output (I/O) boards are interfaced between the SIMULINK blocks and the CI stimulation system, such that the output can be controlled successfully in the manner of a hardware-in-the-loop (HIL) simulation, hence offering a convenient way to implement a real time signal processing module that does not require any low level language. The sparsity constraint parameter of the algorithm was adapted online subjectively during an experiment with normal-hearing subjects and noise-vocoded speech simulation. Results show that subjects chose different parameter values according to their own intelligibility preferences, indicating that adaptive real time algorithms are beneficial to fully explore subjective preferences. We conclude that adaptive real time systems are beneficial for the experimental design, and such systems allow one to conduct psychophysical experiments with high ecological validity.

  20. Parameter Stability of the Functional–Structural Plant Model GREENLAB as Affected by Variation within Populations, among Seasons and among Growth Stages

    PubMed Central

    Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael

    2007-01-01

    Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141

  1. Developing a CD-CBM Anticipatory Approach for Cavitation - Defining a Model Descriptor Consistent Between Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, G.O.; Dress, W.B.; Kercel, S.W.

    1999-05-10

    A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomena, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector (Φ), or model vector. This model-based descriptor encodes the specific information that describes the phenomena and its dynamics and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomena. The second is the parameter list associated with the functional model. The third is a figure of merit, a single number between [0,1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not content. The extended process feature vector is formulated as follows: Φ => [<model>, {parameter list}, confidence factor]. (1) For this study, the model that characterized cavitation was a chirped, exponentially decaying sinusoid. Using the parameters defined by this model, the parameter list included frequency, decay, and chirp rate. Based on this, the process feature vector has the form: Φ => [<chirped decaying sinusoid>, {frequency = a, decay = b, chirp rate = c}, cf = 0.80]. (2) In this experiment a reversible catastrophe was examined. The reason for this is that the same catastrophe could be repeated to ensure the statistical significance of the data.
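
    For concreteness, the sketch below shows one plausible reading of the descriptor named in the abstract: a chirped, exponentially decaying sinusoid parameterized by frequency, decay, and chirp rate, packaged in the [model, parameter list, confidence factor] structure. The specific functional form, symbol names, and numerical values are assumptions for illustration, not the report's exact formulation.

    ```python
    import numpy as np

    def chirped_decaying_sinusoid(t, freq, decay, chirp_rate, amplitude=1.0):
        """One plausible form of the cavitation descriptor model: an exponentially
        decaying sinusoid whose instantaneous frequency increases linearly (a chirp)."""
        return amplitude * np.exp(-decay * t) * np.sin(
            2 * np.pi * (freq * t + 0.5 * chirp_rate * t**2))

    # Extended process feature vector: [model, {parameter list}, confidence factor].
    feature_vector = {
        "model": chirped_decaying_sinusoid,
        "parameters": {"freq": 12.0e3, "decay": 450.0, "chirp_rate": 2.0e6},  # illustrative values
        "confidence": 0.80,
    }

    # Evaluate the descriptor model over a short time window.
    t = np.linspace(0.0, 5e-3, 2000)
    signal = feature_vector["model"](t, **feature_vector["parameters"])
    ```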

  2. What a Difference a Parameter Makes: a Psychophysical Comparison of Random Dot Motion Algorithms

    PubMed Central

    Pilly, Praveen K.; Seitz, Aaron R.

    2009-01-01

    Random dot motion (RDM) displays have emerged as one of the standard stimulus types employed in psychophysical and physiological studies of motion processing. RDMs are convenient because it is straightforward to manipulate the relative motion energy for a given motion direction in addition to stimulus parameters such as the speed, contrast, duration, density, aperture, etc. However, as widely as RDMs are employed so do they vary in their details of implementation. As a result, it is often difficult to make direct comparisons across studies employing different RDM algorithms and parameters. Here, we systematically measure the ability of human subjects to estimate motion direction for four commonly used RDM algorithms under a range of parameters in order to understand how these different algorithms compare in their perceptibility. We find that parametric and algorithmic differences can produce dramatically different performances. These effects, while surprising, can be understood in relationship to pertinent neurophysiological data regarding spatiotemporal displacement tuning properties of cells in area MT and how the tuning function changes with stimulus contrast and retinal eccentricity. These data help give a baseline by which different RDM algorithms can be compared, demonstrate a need for clearly reporting RDM details in the methods of papers, and also pose new constraints and challenges to models of motion direction processing. PMID:19336240
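
    To illustrate the kind of algorithmic choice being compared, the sketch below implements one common RDM variant (a percent-coherence display in which a random subset of dots is displaced in the signal direction each frame while the remaining dots are replotted at random positions). It is a generic illustration, not any of the four specific algorithms studied here, and all names and parameter values are placeholders.

    ```python
    import numpy as np

    def rdm_frames(n_dots=100, n_frames=60, coherence=0.5, step=0.02, direction=0.0, seed=0):
        """Generate dot positions for a simple percent-coherence random dot motion display."""
        rng = np.random.default_rng(seed)
        xy = rng.random((n_dots, 2))                 # positions in a unit-square aperture
        dx = step * np.array([np.cos(direction), np.sin(direction)])
        frames = []
        for _ in range(n_frames):
            signal = rng.random(n_dots) < coherence  # which dots carry the motion signal this frame
            xy[signal] = (xy[signal] + dx) % 1.0     # coherent displacement, wrapped at the edges
            xy[~signal] = rng.random(((~signal).sum(), 2))  # noise dots replotted at random
            frames.append(xy.copy())
        return np.stack(frames)

    frames = rdm_frames(coherence=0.25)   # 60 frames of a 25%-coherence display
    ```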

  3. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models and lead to unrealistic results. Therefore, to overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. In the flood forecasting process, errors propagated from the rainfall-runoff model are considered the main source of uncertainty in the forecasting model. Hence, to counteract the existing errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of existing errors (timing, shape and volume), the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study to evaluate whether its parameters can be used in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.

  4. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas.

    PubMed

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-11-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red-green-blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%-50% and 70%-40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE).

  5. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas

    PubMed Central

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-01-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red–green–blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%–50% and 70%–40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE). PMID:27809293

  6. The effect of orientation difference in fused deposition modeling of ABS polymer on the processing time, dimension accuracy, and strength

    NASA Astrophysics Data System (ADS)

    Tanoto, Yopi Y.; Anggono, Juliana; Siahaan, Ian H.; Budiman, Wesley

    2017-01-01

    There are several parameters that must be set before manufacturing a product using 3D printing. These parameters include the deposition orientation of the product, the type of material, the fill form, the fill density, and others. The finished product of 3D printing has several responses that can be observed, measured, and tested. Some of those responses are the processing time, the dimensions of the end product, its surface roughness and its mechanical properties, i.e. its yield strength, ultimate tensile strength, and impact resistance. This research was conducted to study the relationship between the process parameters of a 3D printing machine using fused deposition modeling (FDM) technology and the generated responses. The material used was ABS plastic, which is commonly used in industry. Understanding the relationship between the parameters and the responses means the resulting product can be manufactured to meet user needs. Three different orientations for depositing the ABS polymer, named XY (first orientation), YX (second orientation), and ZX (third orientation), were studied. Processing time, dimensional accuracy, and product strength were the responses that were measured and tested. The study reports that printing in the third orientation was the fastest, with a processing time of 2432 seconds, followed by the first and second orientations with processing times of 2688 and 2780 seconds, respectively. Dimensional accuracy was also measured from the width and the length of the gauge area of the printed tensile test specimens in comparison with the dimensions required by ASTM 638-02. It was found that the smallest difference was in the thickness dimension, i.e. 0.1 mm thicker than required by the standard for the sample printed in the second orientation. The smallest deviation from the standard in the width dimension was measured for a sample printed in the first orientation (0.13 mm). As for the length dimension, the closest dimension to the standard resulted from the third-orientation product, i.e. 0.2 mm. Tensile tests done on all the specimens produced with those three orientations show that the highest tensile strength was obtained in the sample from the second orientation, i.e. 7.66 MPa, followed by the first and third orientation products, i.e. 6.8 MPa and 3.31 MPa, respectively.

  7. Formulation of chitosan-TPP-pDNA nanocapsules for gene therapy applications

    NASA Astrophysics Data System (ADS)

    Gaspar, V. M.; Sousa, F.; Queiroz, J. A.; Correia, I. J.

    2011-01-01

    The encapsulation of DNA inside nanoparticles meant for gene delivery applications is a challenging process where several parameters need to be modulated in order to design nanocapsules with specific tailored characteristics. The purpose of this study was to investigate and improve the formulation parameters of plasmid DNA (pDNA) loaded in chitosan nanocapsules using tripolyphosphate (TPP) as polyanionic crosslinker. Nanocapsule morphology and encapsulation efficiency were analyzed as a function of chitosan degree of deacetylation and chitosan-TPP ratio. The manipulation of these parameters influenced not only the particle size but also the encapsulation and release of pDNA. Consequently the transfection efficiency of the nanoparticulated systems was also enhanced with the optimization of the particle characteristics. Overall, the differently formulated nanoparticulated systems possess singular properties that can be employed according to the desired gene delivery application.

  8. An inverse problem for a mathematical model of aquaponic agriculture

    NASA Astrophysics Data System (ADS)

    Bobak, Carly; Kunze, Herb

    2017-01-01

    Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist which aim to study the system processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
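
    To illustrate the manufactured-data inverse-problem idea in miniature, the sketch below defines a hypothetical three-state fish/nutrient/plant ODE system (not the authors' model), generates noisy synthetic observations from known parameters, and recovers the parameters by ordinary nonlinear least squares rather than the collage-theorem approach used by the authors. All equations, names, and values are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Hypothetical toy system: fish biomass F feeds a nutrient pool N, which supports plant biomass P.
    def aquaponic_rhs(t, y, r, a, b):
        F, N, P = y
        dF = r * F * (1.0 - F / 10.0)      # logistic fish growth
        dN = a * F - b * N * P             # nutrient excretion minus plant uptake
        dP = 0.5 * b * N * P               # plant growth proportional to uptake
        return [dF, dN, dP]

    def simulate(params, t_obs, y0=(1.0, 0.5, 0.2)):
        sol = solve_ivp(aquaponic_rhs, (t_obs[0], t_obs[-1]), y0, t_eval=t_obs, args=tuple(params))
        return sol.y

    # Manufacture noisy "observations" of fish and plant biomass from known parameters,
    # then recover the parameters by nonlinear least squares.
    true_params = (0.4, 0.3, 0.6)
    t_obs = np.linspace(0.0, 30.0, 40)
    rng = np.random.default_rng(0)
    data = simulate(true_params, t_obs)[[0, 2], :] + rng.normal(scale=0.02, size=(2, 40))

    residual = lambda p: (simulate(p, t_obs)[[0, 2], :] - data).ravel()
    fit = least_squares(residual, x0=[0.2, 0.2, 0.2], bounds=(0.0, 1.0))
    print(fit.x)   # should recover values close to true_params
    ```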

  9. Multiple electron processes of He and Ne by proton impact

    NASA Astrophysics Data System (ADS)

    Terekhin, Pavel Nikolaevich; Montenegro, Pablo; Quinto, Michele; Monti, Juan; Fojon, Omar; Rivarola, Roberto

    2016-05-01

    A detailed investigation of multiple electron processes (single and multiple ionization, single capture, transfer-ionization) of He and Ne is presented for proton impact at intermediate and high collision energies. Exclusive absolute cross sections for these processes have been obtained by calculation of transition probabilities in the independent electron and independent event models as a function of impact parameter in the framework of the continuum distorted wave-eikonal initial state theory. A binomial analysis is employed to calculate exclusive probabilities. The comparison with available theoretical and experimental results shows that exclusive probabilities are needed for a reliable description of the experimental data. The developed approach can be used for obtaining the input database for modeling multiple electron processes of charged particles passing through matter.

  10. Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.

    2002-05-01

    Pacific Northwest National Laboratory is in the process of development and implementation of an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first order second moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
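
    The "propagation of uncertainty" step can be pictured with a minimal Monte Carlo sketch: uncertain input parameters are sampled from their assumed pdfs, each sample is pushed through a predictive model, and the ensemble of predictions characterizes the output uncertainty (e.g., as a CCDF). This is a generic illustration, not the Hanford site-wide model; the stand-in model, parameter names, and distributions are all hypothetical.

    ```python
    import numpy as np

    # Hypothetical predictive model: peak concentration at a receptor as a function of
    # hydraulic conductivity K [m/d] and porosity n (a stand-in for the groundwater model).
    def predicted_peak_concentration(K, n):
        return 50.0 * np.exp(-0.002 * K / n)

    rng = np.random.default_rng(42)
    n_samples = 10_000

    # Characterize parameter uncertainty with assumed pdfs (lognormal K, uniform porosity).
    K = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n_samples)
    n = rng.uniform(0.15, 0.35, size=n_samples)

    # Propagate through the model and summarize the prediction uncertainty.
    conc = predicted_peak_concentration(K, n)
    levels = np.linspace(conc.min(), conc.max(), 50)
    ccdf = [(conc > c).mean() for c in levels]   # complementary CDF of the prediction
    print(f"mean = {conc.mean():.1f}, 95th percentile = {np.percentile(conc, 95):.1f}")
    ```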

  11. Bioprocess development workflow: Transferable physiological knowledge instead of technological correlations.

    PubMed

    Reichelt, Wieland N; Haas, Florian; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    Microbial bioprocesses need to be designed to be transferable from lab scale to production scale as well as between setups. Although substantial effort is invested to control technological parameters, usually the only true constant parameter is the actual producer of the product: the cell. Hence, instead of solely controlling technological process parameters, the focus should be increasingly laid on physiological parameters. This contribution aims at illustrating a workflow of data life cycle management with special focus on physiology. Information processing condenses the data into physiological variables, while information mining condenses the variables further into physiological descriptors. This basis facilitates data analysis for a physiological explanation for observed phenomena in productivity. Targeting transferability, we demonstrate this workflow using an industrially relevant Escherichia coli process for recombinant protein production and substantiate the following three points: (1) The postinduction phase is independent in terms of productivity and physiology from the preinduction variables specific growth rate and biomass at induction. (2) The specific substrate uptake rate during induction phase was found to significantly impact the maximum specific product titer. (3) The time point of maximum specific titer can be predicted by an easy accessible physiological variable: while the maximum specific titers were reached at different time points (19.8 ± 7.6 h), those maxima were reached all within a very narrow window of cumulatively consumed substrate dSn (3.1 ± 0.3 g/g). Concluding, this contribution provides a workflow on how to gain a physiological view on the process and illustrates potential benefits. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:261-270, 2017. © 2016 American Institute of Chemical Engineers.

  12. Modern methods for the quality management of high-rate melt solidification

    NASA Astrophysics Data System (ADS)

    Vasiliev, V. A.; Odinokov, S. A.; Serov, M. M.

    2016-12-01

    The quality management of high-rate melt solidification requires a combined solution obtained by methods and approaches adapted to the situation at hand. A technological audit is recommended to assess the capabilities of the process. Statistical methods are proposed, along with the choice of key parameters. Numerical methods, which can be used to perform simulations under multifactor technological conditions and to improve the quality of decisions, are of particular importance.

  13. Demonstration of UXO-PenDepth for the Estimation of Projectile Penetration Depth

    DTIC Science & Technology

    2010-08-01

    Effects (JTCG/ME) in August 2001. The accreditation process included verification and validation (V&V) by a subject matter expert (SME) other than... Within UXO-PenDepth, there are three sets of input parameters that are required: impact conditions (Fig. 1a), penetrator properties, and target... properties. The impact conditions that need to be defined are projectile orientation and impact velocity. The algorithm has been evaluated against

  14. Contaminant Attenuation and Transport Characterization of 200-DV-1 Operable Unit Sediment Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Truex, Michael J.; Szecsody, James E.; Qafoku, Nikolla

    2017-05-15

    A laboratory study was conducted to quantify contaminant attenuation processes and associated contaminant transport parameters that are needed to evaluate transport of contaminants through the vadose zone to the groundwater. The laboratory study information, in conjunction with transport analyses, can be used as input to evaluate the feasibility of Monitored Natural Attenuation and other remedies for the 200-DV-1 Operable Unit at the Hanford Site.

  15. High Cycle Fatigue Science and Technology Program 1999 Annual Report

    DTIC Science & Technology

    2000-01-01

    [Figure labels: confining medium, focused laser beam, paint (ablation medium), traveling shock waves.] A repetitive pattern of laser pulses results in an area of deep ... includes an improved beam delivery system, a more robust beam monitoring configuration, and a more robust processing chamber. Lessons learned will be... impacted specimens. Additional work is needed to better understand the effect of this parameter and technique. Fractography showed that some of the

  16. Electron-Impact-Ionization and Electron-Attachment Cross Sections of Radicals Important in Transient Gaseous Discharges.

    DTIC Science & Technology

    1988-02-05

    for understanding the microscopic processes of electrical discharges and for designing gaseous discharge switches. High power gaseous discharge switches... half-maximum) energy resolution. The electron gun and ion extraction were of the same design as that of Srivastava at the Jet Propulsion Laboratory. Ions... photons. The observed current switching can be applied to the design of discharge switches. Electron transport parameters are needed for the

  17. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.

  18. Nimbus 7 Coastal Zone Color Scanner (CZCS). Level 1 data product users' guide

    NASA Technical Reports Server (NTRS)

    Williams, S. P.; Szajna, E. F.; Hovis, W. A.

    1985-01-01

    The coastal zone color scanner (CZCS) is a scanning multispectral radiometer designed specifically for the remote sensing of Ocean Color parameters from an Earth orbiting space platform. A technical manual which is intended for users of NIMBUS 7 CZCS Level 1 data products is presented. It contains information needed by investigators and data processing personnel to operate on the data using digital computers and related equipment.

  19. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization (ISO) guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value assignment processes, with a reduced number of measurements and reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
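
    The relative sizes quoted above can be related with the standard root-sum-of-squares combination of independent uncertainty components (a textbook GUM-style relation, used here only as an illustration under the assumption that the two contributions are independent, not as the paper's own calculation):

    ```latex
    u_{\mathrm{assigned}} = \sqrt{u_{\mathrm{CRM470}}^{2} + u_{\mathrm{transfer}}^{2}}
                    \approx \sqrt{(3.7\%)^{2} + (0.8\%)^{2}} \approx 3.8\%
    ```

    So the value-transfer process contributes little to the overall assigned-value uncertainty, consistent with the authors' conclusion.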

  20. Research and development of smart wearable health applications: the challenge ahead.

    PubMed

    Lymberis, Andreas

    2004-01-01

    Continuous monitoring of physiological and physical parameters is necessary for the assessment and management of personal health status. It can significantly contribute to the reduction of healthcare costs by avoiding unnecessary hospitalisations and ensuring that those who need urgent care get it sooner. In conjunction with cost-effective telemedicine platforms, ubiquitous health monitoring can significantly contribute to the enhancement of disease prevention and early diagnosis, disease management, treatment and home rehabilitation. The latest developments in the areas of micro- and nanotechnologies, information processing and wireless communication today offer the possibility of minimally invasive (or non-invasive) biomedical measurement as well as wearable sensing, processing and data communication. Although the systems are being developed to satisfy specific user needs, a number of common critical issues have to be tackled to achieve reliable and acceptable smart wearable health applications, e.g. biomedical sensors, user interface, clinical validation, data security and confidentiality, scenarios of use, decision support, user acceptance and business models. Major technological achievements have been realised in the last few years. Cutting-edge development combining functional clothing and integrated electronics opens a new research area and possibilities for body sensing and communicating health parameters. This paper reviews the current status of research and development on smart wearable health systems and applications and discusses the outstanding issues and future challenges.

  1. Nonstationarities in Catchment Response According to Basin and Rainfall Characteristics: Application to Korean Watershed

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; Kim, Jin-Guk; Jung, Il-Won

    2015-04-01

    It must be acknowledged that the application of rainfall-runoff models to simulate rainfall-runoff processes is successful in gauged watersheds. However, there still remain some issues that need to be further discussed. In particular, the quantitative representation of nonstationarity in basin response (e.g. concentration time, storage coefficient and roughness), along with its treatment in ungauged watersheds, needs to be studied. In this regard, this study aims to investigate nonstationarity in basin response so as to potentially provide useful information for simulating runoff processes in ungauged watersheds. For this purpose, the HEC-1 rainfall-runoff model was mainly utilized. In addition, this study combined the HEC-1 model with a Bayesian statistical model to estimate the uncertainty of the parameters; the combined model is called Bayesian HEC-1 (BHEC-1). The proposed rainfall-runoff model is applied to various catchments and various rainfall patterns to understand nonstationarities in catchment response. The nonstationarity in catchment response and the possible regionalization of the parameters for ungauged watersheds are further discussed. KEYWORDS: Nonstationarity, Catchment response, Uncertainty, Bayesian. Acknowledgement: This research was supported by a Grant (13SCIPA01) from the Smart Civil Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korea government and the Korea Agency for Infrastructure Technology Advancement (KAIA).

  2. Modeling High-Impact Weather and Climate: Lessons From a Tropical Cyclone Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Done, James; Holland, Greg; Bruyere, Cindy

    2013-10-19

    Although the societal impact of a weather event increases with the rarity of the event, our current ability to assess extreme events and their impacts is limited not only by rarity but also by current model fidelity and a lack of understanding of the underlying physical processes. This challenge is driving fresh approaches to assess high-impact weather and climate. Recent lessons learned in modeling high-impact weather and climate are presented using the case of tropical cyclones as an illustrative example. Through examples using the Nested Regional Climate Model to dynamically downscale large-scale climate data, the need to treat bias in the driving data is illustrated. Domain size, location, and resolution are also shown to be critical and should be guided by the need to: include relevant regional climate physical processes; resolve key impact parameters; and accurately simulate the response to changes in external forcing. The notion of sufficient model resolution is introduced together with the added value in combining dynamical and statistical assessments to fill out the parent distribution of high-impact parameters. Finally, through the example of a tropical cyclone damage index, direct impact assessments are presented as powerful tools that distill complex datasets into concise statements on likely impact, and as highly effective communication devices.

  3. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long term. In industrial countries the number of such contaminated sites is so high that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the design of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups where we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL-contaminated sites.

  4. Gaze-controlled, computer-assisted communication in Intensive Care Unit: "speaking through the eyes".

    PubMed

    Maringelli, F; Brienza, N; Scorrano, F; Grasso, F; Gregoretti, C

    2013-02-01

    The aim of this study was to test the hypothesis that a gaze-controlled communication system (eye tracker, ET) can improve communication processes between completely dysarthric ICU patients and the hospital staff, in three main domains: 1) basic communication processes (i.e., fundamental needs, desires, and wishes); 2) the ability of the medical staff to understand the clinical condition of the patient; and 3) the level of frustration experienced by patients, nurses and physicians. Fifteen fully conscious medical and surgical patients, 8 physicians, and 15 nurses were included in the study. The experimental procedure was composed of three phases: in phase 1 all groups completed the preintervention questionnaire; in phase 2 the ET was introduced and tested as a communication device; in phase 3 all groups completed the postintervention questionnaire. Patients' preintervention questionnaires showed remarkable communication deficits, without any group effect. Answers of physicians and nurses were similar to those of the patients. Postintervention questionnaires showed in all groups a remarkable and statistically significant improvement in different communication domains, as well as a remarkable decrease in anxiety and dysphoric thoughts. Improvement was also reported by physicians and nurses in their ability to understand patients' clinical conditions. Our results show an improvement in the quality of the examined parameters. Better communication processes seem also to lead to improvements in several psychological parameters, namely anxiety and drop-out depression perceived by both patients and medical staff. Further controlled studies are needed to define the role of ET in the ICU.

  5. Achieving mask order processing automation, interoperability and standardization based on P10

    NASA Astrophysics Data System (ADS)

    Rodriguez, B.; Filies, O.; Sadran, D.; Tissier, Michel; Albin, D.; Stavroulakis, S.; Voyiatzis, E.

    2007-02-01

    Last year the MUSCLE (Masks through User's Supply Chain: Leadership by Excellence) project was presented; here we report on its progress. A key process in mask supply chain management is the exchange of technical information for ordering masks. This process is large, complex, company specific and error prone, and leads to longer cycle times and higher costs due to missing or wrong inputs. Its automation and standardization could produce significant benefits. We need to agree on the standard for mandatory and optional parameters, and also on a common way to describe parameters when ordering. A system was created to improve the performance in terms of Key Performance Indicators (KPIs) such as cycle time and cost of production. This tool allows us to evaluate and measure the effect of factors, as well as the effect of implementing the improvements of the complete project. Next, a benchmark study and a gap analysis were performed. These studies show the feasibility of standardization, as there is a large overlap in requirements. We see that the SEMI P10 standard needs enhancements. A format supporting the standard is required, and XML offers the ability to describe P10 in a flexible way. Beyond using XML for P10, the semantics of the mask order should also be addressed. A system design and requirements for a reference implementation of a P10-based management system are presented, covering a mechanism for evolution and version management and a design for P10 editing and data validation.

  6. The need for sustained and integrated high-resolution mapping of dynamic coastal environments

    USGS Publications Warehouse

    Stockdon, Hilary F.; Lillycrop, Jeff W.; Howd, Peter A.; Wozencraft, Jennifer M.

    2007-01-01

    The evolution of the United States' coastal zone in response to both human activities and natural processes is dynamic. Coastal resource and population protection requires a detailed understanding of the physical setting as well as of the processes driving change. Sustained coastal area mapping allows change to be documented and baseline conditions to be established, as well as future behavior to be predicted in conjunction with physical process models. Hyperspectral imagers and airborne lidars, as well as other recent mapping technology advances, allow rapid collection of national-scale land use information and high-resolution elevation data. Coastal hazard risk evaluation depends critically on these rich data sets. Coastal elevation data, for example, are a fundamental storm surge model parameter for predicting flooding locations, and land use maps are a foundation for identifying the most vulnerable populations and resources. A wealth of information for the study of physical change processes, coastal resource and community management and protection, and coastal area hazard vulnerability determination is available through a comprehensive national coastal mapping plan designed to take advantage of recent progress in mapping technology and data collection, management, and distribution.

  7. SARTools: A DESeq2- and EdgeR-Based R Pipeline for Comprehensive Differential Analysis of RNA-Seq Data.

    PubMed

    Varet, Hugo; Brillet-Guéguen, Loraine; Coppée, Jean-Yves; Dillies, Marie-Agnès

    2016-01-01

    Several R packages exist for the detection of differentially expressed genes from RNA-Seq data. The analysis process includes three main steps, namely normalization, dispersion estimation and test for differential expression. Quality control steps along this process are recommended but not mandatory, and failing to check the characteristics of the dataset may lead to spurious results. In addition, normalization methods and statistical models are not exchangeable across the packages without adequate transformations the users are often not aware of. Thus, dedicated analysis pipelines are needed to include systematic quality control steps and prevent errors from misusing the proposed methods. SARTools is an R pipeline for differential analysis of RNA-Seq count data. It can handle designs involving two or more conditions of a single biological factor with or without a blocking factor (such as a batch effect or a sample pairing). It is based on DESeq2 and edgeR and is composed of an R package and two R script templates (for DESeq2 and edgeR respectively). Tuning a small number of parameters and executing one of the R scripts, users have access to the full results of the analysis, including lists of differentially expressed genes and a HTML report that (i) displays diagnostic plots for quality control and model hypotheses checking and (ii) keeps track of the whole analysis process, parameter values and versions of the R packages used. SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters. It gives access to the main parameters of DESeq2 and edgeR and prevents untrained users from misusing some functionalities of both packages. By keeping track of all the parameters of the analysis process it fits the requirements of reproducible research.

  8. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology

    PubMed Central

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen

    2013-01-01

    Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415

  9. Degradation of imidacloprid using combined advanced oxidation processes based on hydrodynamic cavitation.

    PubMed

    Patil, Pankaj N; Bote, Sayli D; Gogate, Parag R

    2014-09-01

    The harmful effects of wastewaters containing pesticides or insecticides on human and aquatic life create the need to effectively treat the wastewater streams containing these contaminants. In the present work, hydrodynamic cavitation reactors have been applied for the degradation of imidacloprid, with process intensification studies based on different additives and combination with other similar processes. The effect of different operating parameters, viz. concentration (20-60 ppm), pressure (1-8 bar), temperature (34 °C, 39 °C and 42 °C) and initial pH (2.5-8.3), has been investigated initially using an orifice plate as the cavitating device. It has been observed that 23.85% degradation of imidacloprid is obtained at the optimized set of operating parameters. The efficacy of different process intensifying approaches based on the use of hydrogen peroxide (20-80 ppm), Fenton's reagent (H2O2:FeSO4 ratio as 1:1, 1:2, 2:1, 2:2, 4:1 and 4:2), the advanced Fenton process (H2O2:iron powder ratio as 1:1, 2:1 and 4:1) and the combination of Na2S2O8 and FeSO4 (FeSO4:Na2S2O8 ratio as 1:1, 1:2, 1:3 and 1:4) on the extent of degradation has been investigated. It was observed that near complete degradation of imidacloprid was achieved in all the cases at optimized values of the process intensifying parameters. The time required for complete degradation of imidacloprid for the approach based on hydrogen peroxide was 120 min, whereas for the Fenton and advanced Fenton processes the required time was only 60 min. To check the effectiveness of hydrodynamic cavitation with different cavitating devices, a few experiments were also performed with a slit venturi as the cavitating device at the already optimized values of the parameters. The present work has conclusively established that combined processes based on hydrodynamic cavitation can be effectively used for complete degradation of imidacloprid. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Compression Molding of Composite of Recycled HDPE and Recycled Tire Particles

    NASA Technical Reports Server (NTRS)

    Liu, Ping; Waskom, Tommy L.; Chen, Zhengyu; Li, Yanze; Peng, Linda

    1996-01-01

    Plastic and rubber recycling is an effective means of reducing solid waste to the environment and preserving natural resources. A project aimed at developing a new composite material from recycled high density polyethylene (HDPE) and recycled rubber is currently being conducted at Eastern Illinois University. The recycled plastic pellets with recycled rubber particles are extruded into HDPE/rubber composite strands. The strands can be further cut into pellets that can be used to fabricate other material forms or products. This experiment was inspired by the above-mentioned research activity. In order to measure the Durometer hardness of the extruded composite, a specimen with relatively large dimensions was needed. Thus, compression molding was used to form a cylindrical specimen of 1 in. diameter and 1 in. thickness. The initial poor quality of the molded specimen prompted a need to optimize processing parameters such as temperature, holding time, and pressure. Design of experiment (DOE) was used to obtain the optimum combination of the parameters.

  11. Application of PBPK modelling in drug discovery and development at Pfizer.

    PubMed

    Jones, Hannah M; Dickins, Maurice; Youdim, Kuresh; Gosset, James R; Attkins, Neil J; Hay, Tanya L; Gurrell, Ian K; Logan, Y Raj; Bungay, Peter J; Jones, Barry C; Gardner, Iain B

    2012-01-01

    Early prediction of human pharmacokinetics (PK) and drug-drug interactions (DDI) in drug discovery and development allows for more informed decision making. Physiologically based pharmacokinetic (PBPK) modelling can be used to answer a number of questions throughout the process of drug discovery and development and is thus becoming a very popular tool. PBPK models provide the opportunity to integrate key input parameters from different sources to not only estimate PK parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. Using examples from the literature and our own company, we have shown how PBPK techniques can be utilized through the stages of drug discovery and development to increase efficiency, reduce the need for animal studies, replace clinical trials and to increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising, however, some limitations need to be addressed to realize its application and utility more broadly.

  12. NASA's Laboratory Astrophysics Workshop: Opening Remarks

    NASA Technical Reports Server (NTRS)

    Hasan, Hashima

    2002-01-01

    The Astronomy and Physics Division at NASA Headquarters has an active and vibrant program in Laboratory Astrophysics. The objective of the program is to provide the spectroscopic data required by observers to analyze data from NASA space astronomy missions. The program also supports theoretical investigations that provide spectroscopic parameters which cannot be obtained in the laboratory, simulations of the space environment to understand the formation of certain molecules, dust grains and ices, and the production of critically compiled databases of spectroscopic parameters. NASA annually solicits proposals and utilizes the peer review process to select meritorious investigations for funding. As the mission of NASA evolves, new missions are launched, and old ones are terminated, the Laboratory Astrophysics program needs to evolve accordingly. Consequently, it is advantageous for NASA and the astronomical community to periodically conduct a dialog to assess the status of the program. This Workshop provides a forum for producers and users of laboratory data to get together and understand each other's needs and limitations. A multi-wavelength approach enables a cross-fertilization of ideas across wavelength bands.

  13. Effects of design parameters and puff topography on heating coil temperature and mainstream aerosols in electronic cigarettes

    NASA Astrophysics Data System (ADS)

    Zhao, Tongke; Shu, Shi; Guo, Qiuju; Zhu, Yifang

    2016-06-01

    Emissions from electronic cigarettes (ECs) may contribute to both indoor and outdoor air pollution, and the number of users is increasing rapidly. ECs operate based on the evaporation of e-liquid by a high-temperature heating coil. Both puff topography and design parameters can affect this evaporation process. In this study, both mainstream aerosols and heating coil temperature were measured concurrently to study the effects of design parameters and puff topography. The heating coil temperatures and mainstream aerosols varied over a wide range across different brands and within the same brand. The peak heating coil temperature and the count median diameter (CMD) of EC aerosols increased with a longer puff duration and a lower puff flow rate. The particle number concentration was positively associated with the puff duration and puff flow rate. These results provide a better understanding of how EC emissions are affected by design parameters and puff topography and emphasize the urgent need to better regulate EC products.

  14. Quantifying ligand effects in high-oxidation-state metal catalysis

    NASA Astrophysics Data System (ADS)

    Billow, Brennan S.; McDaniel, Tanner J.; Odom, Aaron L.

    2017-09-01

    Catalysis by high-valent metals such as titanium(IV) impacts our lives daily through reactions like olefin polymerization. In any catalysis, optimization involves a careful choice of not just the metal but also the ancillary ligands. Because these choices dramatically impact the electronic structure of the system and, in turn, catalyst performance, new tools for catalyst development are needed. Understanding ancillary ligand effects is arguably one of the most critical aspects of catalyst optimization and, while parameters for phosphines have been used for decades with low-valent systems, a comparable system does not exist for high-valent metals. A new electronic parameter for ligand donation, derived from experiments on a high-valent chromium species, is now available. Here, we show that the new parameters enable quantitative determination of ancillary ligand effects on catalysis rate and, in some cases, even provide mechanistic information. Analysing reactions in this way can be used to design better catalyst architectures and paves the way for the use of such parameters in a host of high-valent processes.

  15. In Situ Roughness Measurements for the Solar Cell Industry Using an Atomic Force Microscope

    PubMed Central

    González-Jorge, Higinio; Alvarez-Valado, Victor; Valencia, Jose Luis; Torres, Soledad

    2010-01-01

    Areal roughness parameters always need to be under control in the thin film solar cell industry because of their close relationship with the electrical efficiency of the cells. In this work, these parameters are evaluated for measurements carried out in a typical fabrication area for this industry. Measurements are made using a portable atomic force microscope on the CNC diamond cutting machine where an initial sample of transparent conductive oxide is cut into four pieces. The method is validated by comparing the parameters obtained in this process with those obtained in the laboratory under optimal conditions. Areal roughness parameters and Fourier spectral analysis of the data show good compatibility and open up the possibility of using this type of measurement instrument for in situ quality control. This procedure yields a sample for evaluation without destroying any of the transparent conductive oxide; in this way 100% of the production can be tested, thus improving measurement time and production rate.

  17. Theory and simulations of covariance mapping in multiple dimensions for data analysis in high-event-rate experiments

    NASA Astrophysics Data System (ADS)

    Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.

    2014-05-01

    Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on a simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
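
    For readers unfamiliar with the idea, the pairwise partial covariance correction gives a compact picture of how a fluctuating parameter's influence can be removed; the sketch below uses the conventional two-variable form with synthetic data, not the threefold formulation defined in the paper.

        # Minimal sketch of pairwise partial covariance: remove false
        # correlations induced by a fluctuating parameter I (e.g. shot-to-shot
        # FEL intensity), assuming the signals scale linearly with I. This
        # illustrates the general idea only; the paper's threefold extension
        # is not reproduced here.
        import numpy as np

        def partial_covariance(x, y, i):
            """cov(X, Y) corrected for the common dependence on I."""
            cov_xy = np.cov(x, y)[0, 1]
            cov_xi = np.cov(x, i)[0, 1]
            cov_iy = np.cov(i, y)[0, 1]
            var_i = np.var(i, ddof=1)
            return cov_xy - cov_xi * cov_iy / var_i

        # Synthetic example: two signals that both scale with a fluctuating
        # intensity but are otherwise uncorrelated; the partial covariance
        # should be near zero.
        rng = np.random.default_rng(0)
        intensity = rng.normal(1.0, 0.2, 10_000)
        x = 3.0 * intensity + rng.normal(0, 0.1, intensity.size)
        y = 5.0 * intensity + rng.normal(0, 0.1, intensity.size)

        print("raw covariance:    ", np.cov(x, y)[0, 1])
        print("partial covariance:", partial_covariance(x, y, intensity))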

  18. Biomolecular Force Field Parameterization via Atoms-in-Molecule Electron Density Partitioning.

    PubMed

    Cole, Daniel J; Vilseck, Jonah Z; Tirado-Rives, Julian; Payne, Mike C; Jorgensen, William L

    2016-05-10

    Molecular mechanics force fields, which are commonly used in biomolecular modeling and computer-aided drug design, typically treat nonbonded interactions using a limited library of empirical parameters that are developed for small molecules. This approach does not account for polarization in larger molecules or proteins, and the parametrization process is labor-intensive. Using linear-scaling density functional theory and atoms-in-molecule electron density partitioning, environment-specific charges and Lennard-Jones parameters are derived directly from quantum mechanical calculations for use in biomolecular modeling of organic and biomolecular systems. The proposed methods significantly reduce the number of empirical parameters needed to construct molecular mechanics force fields, naturally include polarization effects in charge and Lennard-Jones parameters, and scale well to systems comprised of thousands of atoms, including proteins. The feasibility and benefits of this approach are demonstrated by computing free energies of hydration, properties of pure liquids, and the relative binding free energies of indole and benzofuran to the L99A mutant of T4 lysozyme.

  19. OPC modeling by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.

    2005-05-01

    Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. For model-based OPC, a lithographic model to predict critical dimensions after lithographic processing is needed. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist model) and discrete (kernel number) sets, traditional numerical optimization methods may have difficulty handling the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. In this way, good regression results were obtained with different sets of optical proximity effect data.
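
    To make the genetic-algorithm regression idea concrete, the sketch below fits a toy model with one continuous and one discrete parameter; it only illustrates the general mechanism (selection, crossover, mutation over mixed genes) and does not reproduce the lithographic models or the fitness function of the study.

        # Generic sketch of regressing a model with mixed continuous/discrete
        # parameters using a genetic algorithm. The toy "model" and all GA
        # settings are illustrative assumptions.
        import random

        random.seed(1)

        # Toy target: y = a * x + n, with a continuous and n a small integer.
        TRUE_A, TRUE_N = 2.5, 3
        DATA = [(x, TRUE_A * x + TRUE_N) for x in range(10)]

        def error(ind):
            a, n = ind
            return sum((a * x + n - y) ** 2 for x, y in DATA)

        def random_individual():
            return [random.uniform(0.0, 5.0), random.randint(1, 6)]

        def mutate(ind):
            a, n = ind
            if random.random() < 0.5:
                a += random.gauss(0.0, 0.2)                     # continuous gene
            else:
                n = max(1, min(6, n + random.choice((-1, 1))))  # discrete gene
            return [a, n]

        def crossover(p1, p2):
            return [random.choice(genes) for genes in zip(p1, p2)]

        population = [random_individual() for _ in range(30)]
        for _ in range(100):                       # generations
            population.sort(key=error)
            parents = population[:10]              # truncation selection
            population = parents + [
                mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)
            ]

        best = min(population, key=error)
        print("best fit:", best, "error:", round(error(best), 4))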

  20. A learning flight control system for the F8-DFBW aircraft. [Digital Fly-By-Wire

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Mekel, R.; Nachmias, S.

    1978-01-01

    This report contains a complete description of a learning control system designed for the F8-DFBW aircraft. The system is parameter-adaptive with the additional feature that it 'learns' the variation of the control system gains needed over the flight envelope. It, thus, generates and modifies its gain schedule when suitable data are available. The report emphasizes the novel learning features of the system: the forms of representation of the flight envelope and the process by which identified parameters are used to modify the gain schedule. It contains data taken during piloted real-time 6 degree-of-freedom simulations that were used to develop and evaluate the system.

  1. Measurement of atomic Stark parameters of many Mn I and Fe I spectral lines using GMAW process

    NASA Astrophysics Data System (ADS)

    Zielinska, S.; Pellerin, S.; Dzierzega, K.; Valensi, F.; Musiol, K.; Briand, F.

    2010-11-01

    The particular character of the welding arc working in pure argon, whose emission spectrum consists of many spectral lines strongly broadened by the Stark effect, has allowed measurement, sometimes for the first time, of the Stark parameters of 15 Mn I and 10 Fe I atomic spectral lines, and determination of the temperature dependence of the Stark broadening of the 542.4 nm atomic iron line, normalized to an electron density Ne = 10^23 m^-3. These results show that the special properties of the MIG plasma may be useful in this domain because the composition of the wire electrode may be easily adapted to the needs of an experiment.

  2. Testing and Performance Analysis of the Multichannel Error Correction Code Decoder

    NASA Technical Reports Server (NTRS)

    Soni, Nitin J.

    1996-01-01

    This report provides the test results and performance analysis of the multichannel error correction code decoder (MED) system for a regenerative satellite with asynchronous, frequency-division multiple access (FDMA) uplink channels. It discusses the system performance relative to various critical parameters: the coding length, data pattern, unique word value, unique word threshold, and adjacent-channel interference. Testing was performed under laboratory conditions and used a computer control interface with specifically developed control software to vary these parameters. Needed technologies - the high-speed Bose Chaudhuri-Hocquenghem (BCH) codec from Harris Corporation and the TRW multichannel demultiplexer/demodulator (MCDD) - were fully integrated into the mesh very small aperture terminal (VSAT) onboard processing architecture and were demonstrated.

  3. Noncoherent sampling technique for communications parameter estimations

    NASA Technical Reports Server (NTRS)

    Su, Y. T.; Choi, H. J.

    1985-01-01

    This paper presents a method of noncoherent demodulation of the PSK signal for signal distortion analysis at the RF interface. The received RF signal is downconverted and noncoherently sampled for further off-line processing. Any mismatch in phase and frequency is then compensated for in software using estimation techniques to extract the baseband waveform, which is needed in measuring various signal parameters. In this way, various kinds of modulated signals can be treated uniformly, independent of modulation format, and additional distortions introduced by the receiver or the hardware measurement instruments can thus be eliminated. Quantization errors incurred by digital sampling and the ensuing software manipulations are analyzed, and related numerical results are also presented.

  4. Faraday Rotation Measurement with the SMAP Radiometer

    NASA Technical Reports Server (NTRS)

    Le Vine, D. M.; Abraham, S.

    2016-01-01

    Faraday rotation is an issue that needs to be taken into account in remote sensing of parameters such as soil moisture and ocean salinity at L-band. This is especially important for SMAP because Faraday rotation varies with azimuth around the conical scan. SMAP retrieves Faraday rotation in situ using the ratio of the third and second Stokes parameters, a procedure that was demonstrated successfully by Aquarius. This manuscript reports the performance of this algorithm on SMAP. Over ocean the process works reasonably well and results compare favorably with expected values. But over land, the inhomogeneous nature of the scene results in much noisier, and in some cases unreliable estimates of Faraday rotation.
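
    The ratio-based retrieval mentioned above can be sketched roughly as follows; the half-arctangent relation between the rotation angle and the third and second Stokes parameters is the commonly cited form, and the numbers, signs, and calibration details here are illustrative assumptions rather than SMAP's actual processing chain.

        # Hedged sketch of a ratio-based Faraday rotation estimate from
        # modified Stokes parameters. Conventions and calibration terms used
        # by the actual SMAP algorithm are not reproduced here.
        import numpy as np

        def faraday_rotation_deg(t2_kelvin, t3_kelvin):
            """Rotation angle (degrees) from T2 = Tv - Th and the third
            Stokes parameter T3, both in kelvin."""
            return 0.5 * np.degrees(np.arctan2(t3_kelvin, t2_kelvin))

        # Illustrative values only (not SMAP data): an ocean-like scene.
        print(faraday_rotation_deg(t2_kelvin=40.0, t3_kelvin=1.5))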

  5. Post-processing of global model output to forecast point rainfall

    NASA Astrophysics Data System (ADS)

    Hewson, Tim; Pillosu, Fatima

    2016-04-01

    ECMWF (the European Centre for Medium range Weather Forecasts) has recently embarked upon a new project to post-process gridbox rainfall forecasts from its ensemble prediction system, to provide probabilistic forecasts of point rainfall. The new post-processing strategy relies on understanding how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. We use a number of simple global model parameters, such as the convective rainfall fraction, to anticipate the sub-grid variability, and then post-process each ensemble forecast into a pdf (probability density function) for a point-rainfall total. The final forecast will comprise the sum of the different pdfs from all ensemble members. The post-processing is essentially a re-calibration exercise, which needs only rainfall totals from standard global reporting stations (and forecasts) to train it. High density observations are not needed. This presentation will describe results from the initial 'proof of concept' study, which has been remarkably successful. Reference will also be made to other useful outcomes of the work, such as gaining insights into systematic model biases in different synoptic settings. The special case of orographic rainfall will also be discussed. Work ongoing this year will also be described. This involves further investigations of which model parameters can provide predictive skill, and will then move on to development of an operational system for predicting point rainfall across the globe. The main practical benefit of this system will be a greatly improved capacity to predict extreme point rainfall, and thereby provide early warnings, for the whole world, of flash flood potential for lead times that extend beyond day 5. This will be incorporated into the suite of products output by GLOFAS (the GLObal Flood Awareness System) which is hosted at ECMWF. As such this work offers a very cost-effective approach to satisfying user needs right around the world. This field has hitherto relied on using very expensive high-resolution ensembles; by their very nature these can only run over small regions, and only for lead times up to about 2 days.
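
    The pdf-combination step can be illustrated very roughly as follows; the gamma distribution family and the assumed link between convective fraction and sub-grid spread are choices made for the sketch, not ECMWF's actual calibration scheme.

        # Hedged sketch: turn each ensemble member's gridbox rainfall into a
        # point-rainfall pdf whose spread grows with an assumed sub-grid
        # variability factor, then pool the member pdfs with equal weights.
        import numpy as np
        from scipy import stats

        def member_pdf(gridbox_total_mm, convective_fraction):
            # Assumption: more convective rain -> wider point-rainfall pdf.
            cv = 0.3 + 0.7 * convective_fraction      # coefficient of variation
            shape = 1.0 / cv**2
            scale = gridbox_total_mm / shape
            return stats.gamma(a=shape, scale=scale)

        def pooled_exceedance_probability(members, threshold_mm):
            """members: list of (gridbox_total_mm, convective_fraction)."""
            probs = [member_pdf(t, cf).sf(threshold_mm) for t, cf in members]
            return float(np.mean(probs))              # equal-weight pooling

        ensemble = [(8.0, 0.9), (5.0, 0.4), (12.0, 0.8), (6.0, 0.2)]
        print("P(point rainfall > 20 mm) ~=",
              round(pooled_exceedance_probability(ensemble, 20.0), 3))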

  6. Co-extrusion of semi-finished aluminium-steel compounds

    NASA Astrophysics Data System (ADS)

    Thürer, S. E.; Uhe, J.; Golovko, O.; Bonk, C.; Bouguecha, A.; Klose, C.; Behrens, B.-A.; Maier, H. J.

    2017-10-01

    The combination of light metals and steels allows for new lightweight components with wear-resistant functional surfaces. Within the Collaborative Research Centre 1153, novel process chains are being developed for the manufacture of such hybrid components. Here, the production process of a hybrid bearing bushing made of the aluminium alloy EN AW-6082 and the case-hardened steel 20MnCr5 is developed. Hybrid semi-finished products are an attractive alternative to conventional ones resulting from bulk forming processes in which the individual components are joined after the forming step. The hybrid semi-finished products were manufactured using a lateral angular co-extrusion (LACE) process, and the bearing bushings are subsequently produced by die forging. In the present study, a tool concept for the LACE process is described which makes the continuous joining of a steel rod with an aluminium tube possible. During the LACE process, the rod is fed into the extrusion die at an angle of approximately 90°. Metallographic analysis of the hybrid profile showed that the mechanical bonding between the different materials begins about 75 mm after the edge of the aluminium sheath. In order to improve the bonding strength, the steel rod is to be preheated during extrusion. Systematic investigations using a dilatometer, considering the maximum possible co-extrusion process parameters, were carried out. The variable parameters for the dilatometer experiments were determined by numerical simulation. In order to form a bond between the materials, the oxide layer needs to be disrupted during the co-extrusion process. In an attempt to better understand this effect, a modified sample geometry with chamfered steel was developed for the dilatometer experiments. The influence of the process parameters on the formation of the intermetallic phase at the interface was analysed by scanning electron microscopy and X-ray diffraction.

  7. Quantifying Groundwater Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Poeter, E.; Foglia, L.

    2007-12-01

    Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.

  8. NASA Satellite Monitoring of Water Clarity in Mobile Bay for Nutrient Criteria Development

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir; Holekamp, Kara; Spiering, Bruce A.

    2009-01-01

    This project has demonstrated the feasibility of deriving, from daily MODIS measurements, time series of water clarity parameters that provide coverage of a specific location or area of interest on 30-50% of days. Time series derived for estuarine and coastal waters display much higher variability than time series of ecological parameters (such as vegetation indices) derived for land areas, and the temporal filtering often applied in terrestrial studies cannot be used effectively in ocean color processing. IOP-based algorithms for retrieval of the diffuse light attenuation coefficient and TSS concentration perform well for the Mobile Bay environment: only a minor adjustment was needed in the TSS algorithm, despite the generally recognized dependence of such algorithms on local conditions. The current IOP-based algorithm for retrieval of chlorophyll a concentration has not performed as well: a more reliable algorithm is needed, which may be based on IOPs at additional wavelengths or on remote sensing reflectance from multiple spectral bands. The CDOM algorithm also needs improvement to provide better separation between the effects of gilvin (gelbstoff) and detritus. Identification or development of such an algorithm requires more data from in situ measurements of CDOM concentration in Gulf of Mexico coastal waters (an ongoing collaboration with the EPA Gulf Ecology Division).

  9. Assessment of Spatial Transferability of Process-Based Hydrological Model Parameters in Two Neighboring Catchments in the Himalayan Region

    NASA Astrophysics Data System (ADS)

    Nepal, S.

    2016-12-01

    The spatial transferability of the model parameters of the process-oriented distributed J2000 hydrological model was investigated in two glaciated sub-catchments of the Koshi river basin in eastern Nepal. The basins had a high degree of similarity with respect to their static landscape features. The model was first calibrated (1986-1991) and validated (1992-1997) in the Dudh Koshi sub-catchment. The calibrated and validated model parameters were then transferred to the nearby Tamor catchment (2001-2009). A sensitivity and uncertainty analysis was carried out for both sub-catchments to discover the sensitivity range of the parameters in the two catchments. The model represented the overall hydrograph well in both sub-catchments, including baseflow and medium-range flows (rising and recession limbs). The efficiency according to both the Nash-Sutcliffe criterion and the coefficient of determination was above 0.84 in both cases. The sensitivity analysis showed that the same parameter was most sensitive for Nash-Sutcliffe (ENS) and Log Nash-Sutcliffe (LNS) efficiencies in both catchments. There were some differences in sensitivity to ENS and LNS for moderately and weakly sensitive parameters, although the majority (13 out of 16 for ENS and 16 out of 16 for LNS) had a sensitivity response in a similar range. The generalized likelihood uncertainty estimation (GLUE) results suggest that most of the time the observed runoff lies within the parameter uncertainty range, although occasionally values fall outside it, especially during flood peaks and more often in the Tamor. This may be due to limited input data resulting from the small number of precipitation stations and the lack of representative stations in high-altitude areas, as well as to model structural uncertainty. The results indicate that transfer of the J2000 parameters to a neighboring catchment in the Himalayan region with similar physiographic landscape characteristics is viable. This indicates the possibility of applying the process-based J2000 model to ungauged catchments in the Himalayan region, which could provide important insights into hydrological system dynamics and much needed information to support water resources planning and management.
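
    As a quick reference, the two efficiency measures quoted above can be computed as in the sketch below; the discharge values in the example are invented and the log variant's small offset is an assumption to avoid log of zero.

        # Minimal sketch of the Nash-Sutcliffe efficiency (ENS) and its
        # log-transformed variant (LNS), which weights low flows more heavily.
        import numpy as np

        def nash_sutcliffe(observed, simulated):
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
                (observed - observed.mean()) ** 2
            )

        def log_nash_sutcliffe(observed, simulated, eps=1e-6):
            # eps guards against log(0) on zero-flow days (assumed convention).
            return nash_sutcliffe(np.log(np.asarray(observed) + eps),
                                  np.log(np.asarray(simulated) + eps))

        obs = [12.0, 15.0, 30.0, 80.0, 42.0, 20.0]   # example daily discharges
        sim = [11.0, 16.0, 28.0, 74.0, 45.0, 22.0]
        print("ENS:", round(nash_sutcliffe(obs, sim), 3))
        print("LNS:", round(log_nash_sutcliffe(obs, sim), 3))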

  10. Automated data acquisition technology development:Automated modeling and control development

    NASA Technical Reports Server (NTRS)

    Romine, Peter L.

    1995-01-01

    This report documents the completion of, and improvements made to, the software developed for automated data acquisition and automated modeling and control development on the Texas Micro rack-mounted PCs. This research was initiated because a need was identified by the Metal Processing Branch of NASA Marshall Space Flight Center for a mobile data acquisition and data analysis system, customized for welding measurement and calibration. Several hardware configurations were evaluated and a PC-based system was chosen. The Welding Measurement System (WMS) is a dedicated instrument strictly for data acquisition and data analysis. In addition to the data acquisition functions described in this report, the WMS also supports many functions associated with process control. The hardware and software requirements for an automated acquisition system for welding process parameters, welding equipment checkout, and welding process modeling were determined in 1992. From these recommendations, NASA purchased the necessary hardware and software. The new welding acquisition system is designed to collect welding parameter data and perform analysis to determine the voltage versus current arc-length relationship for VPPA welding. Once the results of this analysis are obtained, they can then be used to develop a RAIL function to control welding startup and shutdown without torch crashing.

  11. Protonation linked equilibria and apparent affinity constants: the thermodynamic profile of the alpha-chymotrypsin-proflavin interaction.

    PubMed

    Bruylants, Gilles; Wintjens, René; Looze, Yvan; Redfield, Christina; Bartik, Kristin

    2007-12-01

    Protonation/deprotonation equilibria are frequently linked to binding processes involving proteins. The presence of these thermodynamically linked equilibria affects the observable thermodynamic parameters of the interaction (K_obs, ΔH°_obs). In order to elucidate the energetic factors that govern these binding processes, a complete thermodynamic characterisation of each intrinsic equilibrium linked to the complexation event is needed and should furthermore be correlated with structural information. We present here a detailed study, using NMR and ITC, of the interaction between alpha-chymotrypsin and one of its competitive inhibitors, proflavin. By performing proflavin titrations of the enzyme at different pH values, we were able to highlight by NMR the effect of complexation of the inhibitor on the ionisable residues of the catalytic triad of the enzyme. Using ITC we determined the intrinsic thermodynamic parameters of the different equilibria linked to the binding process. The possible driving forces of the interaction between alpha-chymotrypsin and proflavin are discussed in the light of the experimental data and on the basis of a model of the complex. This study emphasises the complementarity between ITC and NMR for the study of binding processes involving protonation/deprotonation equilibria.

  12. A process evaluation of implementing a vocational enablement protocol for employees with hearing difficulties in clinical practice.

    PubMed

    Gussenhoven, Arjenne H M; Singh, Amika S; Goverts, S Theo; van Til, Marten; Anema, Johannes R; Kramer, Sophia E

    2015-08-01

    A multidisciplinary vocational rehabilitation programme, the Vocational Enablement Protocol (VEP), was developed to address the specific needs of employees with hearing difficulties. In the current study we evaluated the process of implementing the VEP in audiologic care among employees with hearing impairment. In conjunction with a randomized controlled trial, we collected and analysed data on seven process parameters: recruitment, reach, fidelity, dose delivered, dose received and implemented, satisfaction, and perceived benefit. Sixty-six employees with hearing impairment participated in the VEP. The multidisciplinary team providing the VEP comprised six professionals, who performed the VEP according to the protocol. Of the recommendations delivered by the professionals, 31% were perceived as implemented by the employees. The compliance rate was highest for hearing-aid uptake (51%). Both employees and professionals were highly satisfied with the VEP, and participants reported good perceived benefit from it. From a process evaluation perspective, our results indicate that the VEP could be a useful intervention for employees with hearing difficulties. Implementation research in the audiological setting should be encouraged in order to provide further insight into the parameters facilitating or hindering successful implementation of an intervention and to improve its quality and efficacy.

  13. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range by a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of the uncertainties pertinent to the model parameters used in computer codes.

  14. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to carry out lunar surface sampling and to return the samples to the Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by the PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images, from which the lunar terrain can be reconstructed by photogrammetry. The installation parameters of the PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. The observation program and the specific methods for solving the installation parameters are then introduced. The accuracy of the parameter solution is analyzed using observations obtained in the PCAM scientific validation experiment, which is used to test the PCAM detection process, ground data processing methods, product quality, and so on. The analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images to within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications.

  15. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    NASA Astrophysics Data System (ADS)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods of the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the crop. The height of the plant, the number of stems, and the color of the leaves are well-known parameters indicating rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a labor-saving method for rice growth diagnosis has been proposed which is based on the vegetation cover rate of rice. The vegetation cover rate is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, when the calculation of the vegetation cover rate depends on automatic binarization alone, the computed cover rate may decrease even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed which is based on the automatic binarization process and also refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, respectively, and the cover rates from both methods were compared with reference values obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed method.
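
    The binarization-based cover-rate calculation can be illustrated generically; the excess-green index and fixed threshold below are stand-ins for the automatic binarization used in the paper, not its actual algorithm.

        # Rough sketch: classify each pixel of a nadir photograph as plant or
        # non-plant and report the plant fraction. The greenness index and
        # threshold are illustrative assumptions.
        import numpy as np

        def vegetation_cover_rate(rgb_image, threshold=20.0):
            """rgb_image: (H, W, 3) uint8 array; returns plant-pixel fraction."""
            rgb = rgb_image.astype(float)
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            excess_green = 2.0 * g - r - b          # simple greenness index
            plant_mask = excess_green > threshold   # binarization step
            return plant_mask.mean()

        # Tiny synthetic image: left half "green", right half "soil".
        img = np.zeros((10, 10, 3), dtype=np.uint8)
        img[:, :5] = (40, 160, 40)
        img[:, 5:] = (120, 100, 80)
        print("cover rate:", vegetation_cover_rate(img))   # -> 0.5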

  16. Influence of signal processing strategy in auditory abilities.

    PubMed

    Melo, Tatiana Mendes de; Bevilacqua, Maria Cecília; Costa, Orozimbo Alves; Moret, Adriane Lima Mortari

    2013-01-01

    The signal processing strategy is a parameter that may influence the auditory performance of cochlear implant users, and it is important to optimize this parameter to provide better speech perception, especially in difficult listening situations. The aim was to evaluate auditory performance using two different signal processing strategies. This was a prospective study with 11 prelingually deafened children with open-set speech recognition. A within-subjects design was used to compare performance with standard HiRes and HiRes 120 at three different time points. During test sessions, each subject's performance was evaluated by warble-tone sound-field thresholds and by speech perception testing in quiet and in noise. In quiet, children S1, S4, S5, and S7 showed better performance with the HiRes 120 strategy, and children S2, S9, and S11 showed better performance with the HiRes strategy. In noise, it was likewise observed that some children performed better using the HiRes 120 strategy and others with HiRes. Not all children presented the same pattern of response to the different strategies used in this study, which reinforces the need to look carefully at optimizing cochlear implant clinical programming.

  17. Modern information and telecommunication technologies in educational process as the element of ongoing personnel training for high-tech Russian industry

    NASA Astrophysics Data System (ADS)

    Matyatina, A. N.; Isaev, A. A.; Samovarschikov, Y. V.

    2017-01-01

    In the current work, the issues of staffing high-tech sectors of Russian industry are considered in the context of global geopolitical instability; a comparative analysis of the age structure of domestic companies against leading Western industrial organizations was conducted, and "growth points" of human resources development were defined. For the purpose of implementing information and telecommunication technologies in the educational process, an analysis of the normative and legal documents regulating the requirements for the electronic educational environment and distance learning technologies is presented. The basic models of distance learning technologies and remote resources are used as part of the teaching materials. Taking into account the specifics and requirements of industrial enterprises, a set of tools and a methodology for e-learning based on the identified needs of the industrial sector are offered. The proposed model is built on a one-parameter model spanning a three-tier learning path, kindergarten - secondary - higher (professional) education, where the lifecycle of the parameter is the list of industrial enterprises' demands on the educational process.

  18. Warpage Characteristics and Process Development of Through Silicon Via-Less Interconnection Technology.

    PubMed

    Shen, Wen-Wei; Lin, Yu-Min; Wu, Sheng-Tsai; Lee, Chia-Hsin; Huang, Shin-Yi; Chang, Hsiang-Hung; Chang, Tao-Chih; Chen, Kuan-Neng

    2018-08-01

    In this study, through silicon via (TSV)-less interconnection using the fan-out wafer-level packaging (FO-WLP) technology and a novel redistribution layer (RDL)-first wafer level packaging are investigated. Since warpage of the molded wafer is a critical issue and needs to be controlled for process integration, an evaluation of the warpage of a 12-inch wafer using finite element analysis (FEA) at various parameters is presented. The related parameters include geometric dimensions (such as chip size, chip number, chip thickness, and mold thickness), material selection, and structure optimization. The effect of glass carriers with various coefficients of thermal expansion (CTE) is also discussed. Chips are bonded onto a 12-inch reconstituted wafer, which includes 2 RDL layers, 3 passivation layers, and micro bumps, followed by an epoxy molding compound process. Furthermore, an optical surface inspector is adopted to measure the surface profile, and the results are compared with those from simulation. In order to examine the quality of the TSV-less interconnection structure, electrical measurements are conducted and the respective results are presented.

  19. Automatic EEG spike detection.

    PubMed

    Harner, Richard

    2009-10-01

    Since the 1970s, advances in science and technology during each succeeding decade have renewed the expectation of efficient, reliable automatic epileptiform spike detection (AESD). But even when reinforced with better, faster tools, clinically reliable unsupervised spike detection remains beyond our reach. Expert-selected spike parameters were the first and are still the most widely used basis for AESD. Thresholds for amplitude, duration, sharpness, rise-time, fall-time, after-coming slow waves, background frequency, and more have been used. It is still unclear which of these wave parameters are essential, beyond peak-to-peak amplitude and duration. Wavelet parameters are well suited to AESD but need to be combined with other parameters to achieve the desired levels of spike detection efficiency. Artificial Neural Network (ANN) and expert-system methods may have reached peak efficiency. Support Vector Machine (SVM) technology focuses on outliers rather than centroids of spike and nonspike data clusters and should improve AESD efficiency. An exemplary spike/nonspike database is suggested as a tool for assessing parameters and methods for AESD and is available in CSV or Matlab formats from the author at brainvue@gmail.com. Exploratory Data Analysis (EDA) is presented as a graphic method for finding better spike parameters and for the step-wise evaluation of the spike detection process.
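
    A toy version of the threshold-based parameter approach (using only amplitude and duration, two of the parameters listed above) might look as follows; the thresholds, sampling rate, and synthetic trace are arbitrary illustrative choices, not clinically validated settings.

        # Toy sketch of threshold-based spike candidate detection: keep runs of
        # samples that deviate from the median baseline by more than amp_uv and
        # whose length falls within dur_ms. Illustrative values only.
        import numpy as np

        def detect_spike_candidates(eeg_uv, fs_hz, amp_uv=60.0, dur_ms=(20.0, 70.0)):
            """Return (start, end) sample indices of candidate spike runs."""
            x = np.asarray(eeg_uv, dtype=float)
            above = np.abs(x - np.median(x)) > amp_uv
            padded = np.r_[False, above, False]
            edges = np.flatnonzero(np.diff(padded.astype(int)))
            runs = edges.reshape(-1, 2)                    # start/end per run
            lo, hi = (d * fs_hz / 1000.0 for d in dur_ms)  # limits in samples
            return [(int(s), int(e)) for s, e in runs if lo <= (e - s) <= hi]

        # Synthetic one-second trace with a single 40 ms, 100 microvolt bump.
        fs = 500
        signal = np.zeros(fs)
        signal[250:270] = 100.0
        print(detect_spike_candidates(signal, fs))         # -> [(250, 270)]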

  20. The Origin of Fracture in the I-ECAP of AZ31B Magnesium Alloy

    NASA Astrophysics Data System (ADS)

    Gzyl, Michal; Rosochowski, Andrzej; Boczkal, Sonia; Qarni, Muhammad Jawad

    2015-11-01

    Magnesium alloys are very promising materials for weight-saving structural applications due to their low density compared to other metals and alloys currently used. However, they usually suffer from limited formability at room temperature and low strength. In order to overcome these issues, severe plastic deformation (SPD) processes can be utilized to improve mechanical properties, but the processing parameters need to be selected with care to avoid fracture, which is often observed for these alloys during forming. In the current work, the AZ31B magnesium alloy was subjected to SPD by incremental equal-channel angular pressing (I-ECAP) at temperatures varying from 398 K to 525 K (125 °C to 250 °C) to determine the window of allowable processing parameters. The effects of initial grain size and billet rotation scheme on the occurrence of fracture during I-ECAP were investigated. The initial grain size ranged from 1.5 to 40 µm and the I-ECAP routes tested were A, BC, and C. Microstructures of the processed billets were characterized before and after I-ECAP. It was found that a fine-grained and homogeneous microstructure was required to avoid fracture at low temperatures. Strain localization arising from stress relaxation within recrystallized regions, namely twins and fine-grained zones, was shown to be responsible for the generation of microcracks. Based on the I-ECAP experiments and available literature data for ECAP, a power law relating the initial grain size to the processing conditions, described by a Zener-Hollomon parameter, has been proposed. Finally, processing by various routes at 473 K (200 °C) revealed that route A was less prone to fracture than routes BC and C.
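
    For reference, the Zener-Hollomon parameter mentioned above combines strain rate and temperature as Z = strain_rate * exp(Q/(R*T)); the activation energy in the sketch below is a typical literature value assumed for illustration, not one reported in the paper.

        # Sketch of the Zener-Hollomon parameter used to characterise hot/warm
        # working conditions. The activation energy is an assumed literature
        # value for Mg alloys, used here for illustration only.
        import math

        R = 8.314            # J/(mol K), universal gas constant
        Q_MG = 135_000.0     # J/mol, assumed activation energy for AZ31-type alloys

        def zener_hollomon(strain_rate_per_s, temperature_k, q=Q_MG):
            return strain_rate_per_s * math.exp(q / (R * temperature_k))

        # Example: the same nominal strain rate at two I-ECAP temperatures.
        for temp_c in (125, 250):
            z = zener_hollomon(strain_rate_per_s=0.1, temperature_k=temp_c + 273.15)
            print(f"T = {temp_c:3d} C  ->  Z = {z:.3e} 1/s")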

  1. Study on effect of tool electrodes on surface finish during electrical discharge machining of Nitinol

    NASA Astrophysics Data System (ADS)

    Sahu, Anshuman Kumar; Chatterjee, Suman; Nayak, Praveen Kumar; Sankar Mahapatra, Siba

    2018-03-01

    Electrical discharge machining (EDM) is a non-traditional machining process which is widely used in machining of difficult-to-machine materials. The EDM process can produce complex and intricately shaped components made of difficult-to-machine materials, and is widely applied in the aerospace, biomedical, and die and mold making industries. To meet the required applications, the EDMed components need to possess high accuracy and excellent surface finish. In this work, the EDM process is performed using Nitinol as the workpiece material and AlSiMg prepared by selective laser sintering (SLS) as a tool electrode, along with conventional copper and graphite electrodes. SLS is a rapid prototyping (RP) method to produce complex metallic parts by additive manufacturing (AM). Experiments have been carried out varying different process parameters such as open circuit voltage (V), discharge current (Ip), duty cycle (τ), pulse-on-time (Ton) and tool material. Surface roughness parameters such as average roughness (Ra), maximum height of the profile (Rt), and average height of the profile (Rz) are measured using a surface roughness measuring instrument (Talysurf). To reduce the number of experiments, a design of experiments (DOE) approach, Taguchi's L27 orthogonal array, was chosen. The surface properties of the EDM specimen are optimized by the desirability function approach and the best parametric setting is reported for the EDM process. The type of tool is the most significant parameter, followed by the interaction of tool type and duty cycle, duty cycle, discharge current, and voltage. A better surface finish of the EDMed specimen can be obtained with low values of voltage (V), discharge current (Ip), duty cycle (τ), and pulse-on-time (Ton), along with the use of the AlSiMg RP electrode.
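
    The desirability-function approach referred to above maps each response onto a 0-1 scale and combines the individual desirabilities via a geometric mean; the sketch uses the standard smaller-the-better form with invented roughness values and limits, not the study's actual data.

        # Hedged sketch of the desirability-function idea for multi-response
        # optimisation of roughness values (smaller is better). Targets, upper
        # limits, and responses are illustrative placeholders.
        import math

        def desirability_smaller_is_better(y, target, upper, weight=1.0):
            if y <= target:
                return 1.0
            if y >= upper:
                return 0.0
            return ((upper - y) / (upper - target)) ** weight

        def overall_desirability(values, targets, uppers):
            d = [desirability_smaller_is_better(y, t, u)
                 for y, t, u in zip(values, targets, uppers)]
            return math.prod(d) ** (1.0 / len(d))   # geometric mean

        # Illustrative Ra, Rz, Rt responses (micrometres) for one setting.
        responses = [1.8, 9.5, 12.0]
        targets   = [1.0, 6.0, 8.0]
        uppers    = [4.0, 15.0, 20.0]
        print("overall desirability:",
              round(overall_desirability(responses, targets, uppers), 3))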

  2. Analyzing the effect of cutting parameters on surface roughness and tool wear when machining nickel based hastelloy - 276

    NASA Astrophysics Data System (ADS)

    Khidhir, Basim A.; Mohamed, Bashir

    2011-02-01

    Machining parameters have an important effect on tool wear and surface finish; manufacturers therefore need to obtain optimal operating parameters with a minimum set of experiments, and with minimal simulation, in order to reduce machining set-up costs. Cutting speed is one of the most important cutting parameters to evaluate: on the one hand it most clearly influences tool life, tool stability, and cutting process quality, and on the other hand it controls production flow. Due to more demanding manufacturing systems, the requirements for reliable technological information have increased. Reliable analysis of cutting must consider the cutting zone (the tip insert-workpiece-chip system), where the mechanics of cutting are very complicated: the chip is formed in the shear plane (at the entrance to the shear zone) and is shaped in the sliding plane. The temperature contributions in the primary shear, chamfer, and sticking and sliding zones are expressed as functions of the unknown shear angle on the rake face and of the temperature-modified flow stress in each zone. The experiments were carried out on a CNC lathe, and surface finish and tool tip wear were measured in process. Reasonable agreement is observed for turning with a high depth of cut. The results of this research help to guide the design of new cutting tool materials and studies on the evaluation of machining parameters to further advance the productivity of machining the nickel-based alloy Hastelloy-276.

  3. Investigations of the surface activation of thermoplastic polymers by atmospheric pressure plasma treatment with a stationary plasma jet

    NASA Astrophysics Data System (ADS)

    Moritzer, Elmar; Nordmeyer, Timo; Leister, Christian; Schmidt, Martin Andreas; Grishin, Artur; Knospe, Alexander

    2016-03-01

    The production of high-quality thermoplastic parts often requires an additional process step after the injection molding stage, such as a coating, bonding, or 2K injection molding process. A commonly used process to improve bond strength is atmospheric pressure plasma treatment. A variety of applications are realized with the aid of CNC systems; although they ensure excellent reproducibility, they make it difficult to implement inline applications. This paper therefore examines the possibility of surface treatment using a stationary plasma jet. However, before it is possible to integrate this technology into a production process, preliminary trials need to be carried out to establish which factors influence the process. Experimental tests were performed using a special test set-up, enabling geometric and plasma-specific parameters to be identified. These results can help with the practical integration of this technology into existing production processes.

  4. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  5. Model-based high-throughput design of ion exchange protein chromatography.

    PubMed

    Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo

    2016-08-12

    This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate it has the ability to strongly accelerate the early phases of process development.

  6. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α } with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  7. An open, object-based modeling approach for simulating subsurface heterogeneity

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.
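
    As a rough illustration of the object-based idea (and explicitly not the HYVR API itself), the toy sketch below drops trough-like half-ellipsoids into a regular 3-D grid and assigns each element its own hydraulic conductivity; all geometry and conductivity values are assumptions made for the example.

```python
import numpy as np

# Toy object-based simulator: place half-ellipsoidal "trough" elements in a 3-D
# grid and give each one its own hydraulic conductivity (hypothetical values).
nx, ny, nz = 120, 80, 40
dx = dy = dz = 0.5                                    # grid spacing [m] (assumed)
K = np.full((nx, ny, nz), 1e-5)                       # background conductivity [m/s]
x, y, z = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dy,
                      np.arange(nz) * dz, indexing="ij")

rng = np.random.default_rng(42)
for _ in range(60):                                   # number of depositional elements (assumed)
    cx, cy = rng.uniform(0, nx * dx), rng.uniform(0, ny * dy)
    cz = rng.uniform(0.5 * nz * dz, nz * dz)          # elements concentrated near the top
    a, b, c = rng.uniform(5, 15), rng.uniform(3, 8), rng.uniform(0.5, 2.0)
    inside = ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 + ((z - cz) / c) ** 2 <= 1.0
    inside &= z <= cz                                 # keep the lower half: a trough-like body
    K[inside] = 10 ** rng.normal(-3.5, 0.5)           # lognormal K for this element

print("K field ready for flow/transport simulation:", K.shape, K.min(), K.max())
```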

  8. Control of Chemical Effects in the Separation Process of a Differential Mobility / Mass Spectrometer System

    PubMed Central

    Schneider, Bradley B.; Coy, Stephen L.; Krylov, Evgeny V.; Nazarov, Erkinjon G.

    2013-01-01

    Differential mobility spectrometry (DMS) separates ions on the basis of the difference in their migration rates under high versus low electric fields. Several models describing the physical nature of this field mobility dependence have been proposed, but emerging as a dominant effect is the clusterization model, sometimes referred to as the dynamic cluster-decluster model. DMS resolution and peak capacity are strongly influenced by the addition of modifiers, which results in the formation and dissociation of clusters. This process increases selectivity due to the unique chemical interactions that occur between an ion and neutral gas phase molecules. It is thus imperative to bring the parameters influencing the chemical interactions under control and find ways to exploit them in order to improve the analytical utility of the device. In this paper we describe three important areas that need consideration in order to stabilize and capitalize on the chemical processes that dominate a DMS separation. The first involves means of controlling the dynamic equilibrium of the clustering reactions with high concentrations of specific reagents. The second area involves a means to deal with the unwanted heterogeneous cluster ion populations emitted from the electrospray ionization process that degrade resolution and sensitivity. The third involves fine control of the parameters that affect the fundamental collision processes: temperature and pressure. PMID:20065515

  9. Mechanism for Plasma Etching of Shallow Trench Isolation Features in an Inductively Coupled Plasma

    NASA Astrophysics Data System (ADS)

    Agarwal, Ankur; Rauf, Shahid; He, Jim; Choi, Jinhan; Collins, Ken

    2011-10-01

    Plasma etching for microelectronics fabrication is facing extreme challenges as processes are developed for advanced technology nodes. As device sizes shrink, control of shallow trench isolation (STI) features becomes more important in both logic and memory devices. Halogen-based inductively coupled plasmas in a pressure range of 20-60 mTorr are typically used to etch STI features. The need for improved performance and shorter development cycles is placing greater emphasis on understanding the underlying mechanisms to meet process specifications. In this work, a surface mechanism for the STI etch process is discussed that couples a fundamental plasma model to experimental etch process measurements. This model utilizes ion/neutral fluxes and energy distributions calculated using the Hybrid Plasma Equipment Model. Experiments are for blanket Si wafers in a Cl2/HBr/O2/N2 plasma over a range of pressures, bias powers, and flow rates of feedstock gases. We found that kinetic treatment of electron transport was critical to achieve good agreement with experiments. The calibrated plasma model is then coupled to a string-based feature scale model to quantify the effect of varying process parameters on the etch profile. We found that the operating parameters strongly influence critical dimensions but have only a subtle impact on the etch depths.

  10. Multiphysics modeling of selective laser sintering/melting

    NASA Astrophysics Data System (ADS)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net shape parts with complicated geometries. In SLS/SLM, parts are built up layer by layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three-dimensional, reduced-order, coupled discrete element-finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon cooling are calculated using the finite difference method. Different case studies are performed and general trends can be seen. This work concludes by discussing future extensions of this model and the need for a multi-scale approach to achieve comprehensive part-level models of the SLS/SLM process.
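
    A minimal sketch of the laser-heating part of such a model, reduced to an explicit one-dimensional finite-difference conduction calculation through a powder layer; the thermal properties, layer thickness, and absorbed laser flux are assumed values, not calibrated SLS/SLM data.

```python
import numpy as np

# Explicit 1-D finite-difference heating of a powder layer with a laser flux at
# the top surface and a substrate held at ambient temperature at the bottom.
k, rho, cp = 0.3, 4000.0, 600.0        # conductivity [W/m/K], density [kg/m^3], heat capacity [J/kg/K] (assumed)
alpha = k / (rho * cp)                 # thermal diffusivity [m^2/s]
L, n = 100e-6, 101                     # layer thickness [m], number of grid points
dz = L / (n - 1)
dt = 0.4 * dz**2 / alpha               # below the explicit stability limit dz^2 / (2 alpha)
q_laser = 1e6                          # absorbed surface flux [W/m^2] (assumed)
T = np.full(n, 300.0)                  # initial temperature [K]

for _ in range(2000):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q_laser * dz / k   # flux boundary condition at the irradiated surface
    Tn[-1] = 300.0                     # substrate boundary
    T = Tn

print(f"Peak surface temperature after heating: {T[0]:.0f} K")
```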

  11. Estimation of environment-related properties of chemicals for design of sustainable processes: development of group-contribution+ (GC+) property models and uncertainty analysis.

    PubMed

    Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul

    2012-11-26

    The aim of this work is to develop a group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based set of property models to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine parameters of property models and an uncertainty analysis step to establish statistical information about the quality of parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and atom connectivity index method have been considered. In total, 22 environment-related properties, which include the fathead minnow 96-h LC50, Daphnia magna 48-h LC50, oral rat LD50, aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, emission to urban air (carcinogenic and noncarcinogenic), emission to continental rural air (carcinogenic and noncarcinogenic), emission to continental fresh water (carcinogenic and noncarcinogenic), emission to continental seawater (carcinogenic and noncarcinogenic), emission to continental natural soil (carcinogenic and noncarcinogenic), and emission to continental agricultural soil (carcinogenic and noncarcinogenic) have been modeled and analyzed. The application of the developed property models for the estimation of environment-related properties and uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of environment-related properties needed to perform process synthesis, design, and analysis of sustainable chemical processes and allow one to evaluate the effect of uncertainties of estimated property values on the calculated performance of processes, giving useful insights into the quality and reliability of the design of sustainable processes.
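
    The uncertainty bookkeeping described above follows standard linear least-squares statistics; the sketch below shows, with synthetic group-count data rather than the EPA/USEtox data sets, how a parameter covariance matrix and a 95% confidence interval for a predicted property can be computed.

```python
import numpy as np
from scipy import stats

# Linear group-contribution fit: property = X @ theta, where X holds group counts.
rng = np.random.default_rng(1)
n_chem, n_groups = 200, 8
X = rng.integers(0, 4, size=(n_chem, n_groups)).astype(float)   # group occurrence counts
theta_true = rng.normal(0, 1, n_groups)
y = X @ theta_true + rng.normal(0, 0.3, n_chem)                 # "measured" property values

theta = np.linalg.lstsq(X, y, rcond=None)[0]
dof = n_chem - n_groups
sigma2 = np.sum((y - X @ theta) ** 2) / dof                     # residual variance
cov_theta = sigma2 * np.linalg.inv(X.T @ X)                     # parameter covariance

x_new = np.array([1, 0, 2, 1, 0, 0, 3, 1], dtype=float)         # group counts of a new chemical
y_hat = x_new @ theta
se = np.sqrt(x_new @ cov_theta @ x_new)
t_val = stats.t.ppf(0.975, dof)
print(f"predicted property: {y_hat:.2f} +/- {t_val * se:.2f} (95% CI)")
```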

  12. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    NASA Astrophysics Data System (ADS)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited by the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was Floating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  13. Convolutional Dictionary Learning: Acceleration and Convergence

    NASA Astrophysics Data System (ADS)

    Chun, Il Yong; Fessler, Jeffrey A.

    2018-04-01

    Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To mitigate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithms handling large datasets, due to its lower memory requirement and the absence of polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.

  14. Process design and control of a twin screw hot melt extrusion for continuous pharmaceutical tamper-resistant tablet production.

    PubMed

    Baronsky-Probst, J; Möltgen, C-V; Kessler, W; Kessler, R W

    2016-05-25

    Hot melt extrusion (HME) is a well-known process within the plastic and food industries that has been utilized for the past several decades and is increasingly accepted by the pharmaceutical industry for continuous manufacturing. For tamper-resistant formulations (e.g., of opioids), HME is the most efficient production technique. The focus of this study is thus to evaluate the manufacturability of the HME process for tamper-resistant formulations. Parameters such as the specific mechanical energy (SME), as well as the melt pressure and its standard deviation, are important and will be discussed in this study. In the first step, the existing process data are analyzed by means of multivariate data analysis. Key critical process parameters such as feed rate, screw speed, and the concentration of the API in the polymers are identified, and critical quality parameters of the tablet are defined. In the second step, a relationship between the critical material, product and process quality attributes is established by means of Design of Experiments (DoEs). The resulting SME and the temperature at the die are essential data points needed to indirectly qualify the degradation of the API, which should be minimal. NIR-spectroscopy is used to monitor the material during the extrusion process. In contrast to most applications in which the probe is directly integrated into the die, the optical sensor is integrated into the cooling line of the strands. This saves costs in probe design and maintenance and increases the robustness of the chemometric models. Finally, a process measurement system is installed to monitor and control all of the critical attributes in real-time by means of first principles, DoE models, soft sensor models, and spectroscopic information. Overall, the process is very robust as long as the screw speed is kept low. Copyright © 2015 Elsevier B.V. All rights reserved.
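
    For orientation, the specific mechanical energy follows from the usual definition SME = shaft power / throughput = 2*pi*N*tau / m_dot; the operating values in the sketch below are illustrative, not figures from the cited study.

```python
import math

# Specific mechanical energy of an extruder from screw speed, torque and throughput.
N = 150.0 / 60.0       # screw speed: 150 rpm -> rev/s (assumed)
tau = 40.0             # net shaft torque [N*m] (assumed)
m_dot = 2.0 / 3600.0   # throughput: 2 kg/h -> kg/s (assumed)

sme_j_per_kg = 2.0 * math.pi * N * tau / m_dot
print(f"SME = {sme_j_per_kg / 1000:.0f} kJ/kg = {sme_j_per_kg / 3.6e6:.2f} kWh/kg")
```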

  15. Structural properties of templated Ge quantum dot arrays: impact of growth and pre-pattern parameters

    NASA Astrophysics Data System (ADS)

    Tempeler, J.; Danylyuk, S.; Brose, S.; Loosen, P.; Juschkin, L.

    2018-07-01

    In this study we analyze the impact of process and growth parameters on the structural properties of germanium (Ge) quantum dot (QD) arrays. The arrays were deposited by molecular-beam epitaxy on pre-patterned silicon (Si) substrates. Periodic arrays of pits with diameters between 120 and 20 nm and pitches ranging from 200 nm down to 40 nm were etched into the substrate prior to growth. The structural perfection of the two-dimensional QD arrays was evaluated based on SEM images. The impact of two processing steps on the directed self-assembly of Ge QD arrays is investigated. First, a thin Si buffer layer grown on a pre-patterned substrate reshapes the pre-pattern pits and determines the nucleation and initial shape of the QDs. Subsequently, the deposition parameters of the Ge define the overall shape and uniformity of the QDs. In particular, the growth temperature and the deposition rate are relevant and need to be optimized according to the design of the pre-pattern. Applying this knowledge, we are able to fabricate regular arrays of pyramid shaped QDs with dot densities up to 7.2 × 10^10 cm^-2.

  16. Obtaining short-fiber orientation model parameters using non-lubricated squeeze flow

    NASA Astrophysics Data System (ADS)

    Lambert, Gregory; Wapperom, Peter; Baird, Donald

    2017-12-01

    Accurate models of fiber orientation dynamics during the processing of polymer-fiber composites are needed for the design work behind important automobile parts. All of the existing models utilize empirical parameters, but a standard method for obtaining them independently of processing does not exist. This study considers non-lubricated squeeze flow through a rectangular channel as a solution. A two-dimensional finite element method simulation of the kinematics and fiber orientation evolution along the centerline of a sample is developed as a first step toward a fully three-dimensional simulation. The model is used to fit orientation data measured in a short-fiber-reinforced polymer composite after squeezing. Fiber orientation model parameters obtained in this study do not agree well with those obtained for the same material during startup of simple shear. This is attributed to the vastly different rates at which fibers orient during shearing and extensional flows. A stress model is also used to fit experimental closure-force data. Although the model can be tuned to the correct magnitude of the closure force, it does not fully recreate the transient behavior, which is attributed to the lack of any consideration of fiber-fiber interactions.

  17. Optimization of kinetic parameters for the degradation of plasmid DNA in rat plasma

    NASA Astrophysics Data System (ADS)

    Chaudhry, Q. A.

    2014-12-01

    Biotechnology is a rapidly growing area of research in the pharmaceutical sciences, and the pharmacokinetics of plasmid DNA (pDNA) is an important topic within it. It has been observed that the process of gene delivery faces many obstacles in transporting pDNA to its target sites. The topoforms of pDNA are termed supercoiled (S-C), open circular (O-C) and linear (L); a kinetic model for them is presented in this paper. The kinetic model gives rise to a system of ordinary differential equations (ODEs), the exact solution of which has been found. The kinetic parameters responsible for the degradation of the supercoiled form and the formation of the open circular and linear topoforms are of great significance not only in vitro but also for the modeling of further processes, and therefore need to be addressed in detail. For this purpose, global optimization techniques have been adopted to find the optimal parameters for the model. The results of the model, using the optimal parameters, were compared against the measured data and show good agreement.
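
    A minimal sketch of one common way to pose such a model, assuming sequential first-order degradation SC -> OC -> L -> fragments and fitting the rate constants by least squares; the rate constants and "measured" fractions below are synthetic, not the rat plasma data of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k1, k2, k3):
    sc, oc, lin = y                                   # supercoiled, open circular, linear fractions
    return [-k1 * sc, k1 * sc - k2 * oc, k2 * oc - k3 * lin]

def simulate(k, t):
    sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0, 0.0], args=tuple(k), t_eval=t)
    return sol.y

t_data = np.linspace(0, 60, 13)                       # minutes
y_data = simulate([0.12, 0.05, 0.02], t_data)         # synthetic "measurements"
y_data += np.random.default_rng(0).normal(0, 0.01, y_data.shape)

def residuals(k):
    return (simulate(k, t_data) - y_data).ravel()

fit = least_squares(residuals, x0=[0.05, 0.05, 0.05], bounds=(0, 1))
print("estimated rate constants k1, k2, k3:", fit.x.round(3))
```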

  18. Structural properties of templated Ge quantum dot arrays: impact of growth and pre-pattern parameters.

    PubMed

    Tempeler, J; Danylyuk, S; Brose, S; Loosen, P; Juschkin, L

    2018-07-06

    In this study we analyze the impact of process and growth parameters on the structural properties of germanium (Ge) quantum dot (QD) arrays. The arrays were deposited by molecular-beam epitaxy on pre-patterned silicon (Si) substrates. Periodic arrays of pits with diameters between 120 and 20 nm and pitches ranging from 200 nm down to 40 nm were etched into the substrate prior to growth. The structural perfection of the two-dimensional QD arrays was evaluated based on SEM images. The impact of two processing steps on the directed self-assembly of Ge QD arrays is investigated. First, a thin Si buffer layer grown on a pre-patterned substrate reshapes the pre-pattern pits and determines the nucleation and initial shape of the QDs. Subsequently, the deposition parameters of the Ge define the overall shape and uniformity of the QDs. In particular, the growth temperature and the deposition rate are relevant and need to be optimized according to the design of the pre-pattern. Applying this knowledge, we are able to fabricate regular arrays of pyramid shaped QDs with dot densities up to 7.2 × 10^10 cm^-2.

  19. Global parameter optimization of a Mather-type plasma focus in the framework of the Gratton-Vargas two-dimensional snowplow model

    NASA Astrophysics Data System (ADS)

    Auluck, S. K. H.

    2014-12-01

    Dense plasma focus (DPF) is known to produce highly energetic ions, electrons and plasma environment which can be used for breeding short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool that can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited (Auluck 2013 Phys. Plasmas 20 112501) Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for dynamic inductance of a Mather-type plasma focus fitted to thousands of automated computations, which enables the construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a four-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over eight decades in capacitor bank energy. The optimized geometry of plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to capacitor bank inductance.

  20. The hierarchical expert tuning of PID controllers using tools of soft computing.

    PubMed

    Karray, F; Gueaieb, W; Al-Sharhan, S

    2002-01-01

    We present soft computing-based results pertaining to the hierarchical tuning process of PID controllers located within the control loop of a class of nonlinear systems. The results are compared with PID controllers implemented either in a stand-alone scheme or as part of a conventional gain-scheduling structure. This work is motivated by the increasing need in industry to design highly reliable and efficient controllers for dealing with the regulation and tracking capabilities of complex processes characterized by nonlinearities and possibly time-varying parameters. The soft computing-based controllers proposed are hybrid in nature in that they integrate, within a well-defined hierarchical structure, the benefits of hard algorithmic controllers with those having supervisory capabilities. The controllers proposed also have the distinct features of learning and auto-tuning without the need for tedious and computationally extensive online system identification schemes.
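
    For reference, a minimal discrete PID loop of the kind being tuned in such hierarchical schemes is sketched below against a first-order plant; the gains and plant time constant are illustrative choices, not results from the paper.

```python
def make_pid(kp, ki, kd, dt):
    """Return a simple discrete PID step function with internal state."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(setpoint, measurement):
        err = setpoint - measurement
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return step

dt, y = 0.01, 0.0
pid = make_pid(kp=2.0, ki=1.0, kd=0.05, dt=dt)
for _ in range(1000):                 # first-order plant: dy/dt = (-y + u) / T, with T = 0.5 s
    u = pid(1.0, y)
    y += dt * (-y + u) / 0.5
print(f"plant output after 10 s of regulation: {y:.3f}")
```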

  1. Everlasting Dark Printing on Alumina by Laser

    NASA Astrophysics Data System (ADS)

    Penide, J.; Quintero, F.; Arias-González, F.; Fernández, A.; del Val, J.; Comesaña, R.; Riveiro, A.; Lusquiños, F.; Pou, J.

    Marks or prints are needed on almost every material, mainly for decorative or identification purposes. Although alumina is widely employed in many different industries, the need for printing directly on its surface remains a complex problem. In this sense, lasers have largely demonstrated their capacity to mark almost every material, including ceramics, but producing dark permanent marks on alumina is still an open challenge. In this work we present the results of a comprehensive experimental analysis of the process of marking alumina by laser. Four different laser sources were used in this study: a fiber laser (1075 nm) and three diode-pumped Nd:YVO4 lasers emitting at near-infrared (1064 nm), visible (532 nm) and ultraviolet (355 nm) wavelengths, respectively. The results obtained with the four lasers were compared and the physical processes involved were explained in detail. Colorimetric analyses allowed us to identify the optimal parameters and conditions to produce everlasting, high-contrast marks on alumina.

  2. Experimental Identification and Characterization of Multirotor UAV Propulsion

    NASA Astrophysics Data System (ADS)

    Kotarski, Denis; Krznar, Matija; Piljek, Petar; Simunic, Nikola

    2017-07-01

    In this paper, an experimental procedure for the identification and characterization of multirotor Unmanned Aerial Vehicle (UAV) propulsion is presented. The propulsion configuration needs to be defined precisely in order to achieve the required flight performance. Based on an accurate dynamic model and empirical measurements of the physical parameters of multirotor propulsion, it is possible to design diverse configurations with different characteristics for various purposes. As a case study, we investigated design considerations for a micro indoor multirotor which is suitable for control algorithm implementation in a structured environment. It consists of an open-source autopilot, sensors for indoor flight, off-the-shelf propulsion components, and a frame. A series of experiments was conducted to show the parameter identification process and the procedure for propulsion analysis and characterization. Additionally, we explore battery performance in terms of mass and specific energy. Experimental results show the identified and estimated propulsion parameters through which blade element theory is verified.
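
    A small sketch of the kind of static-thrust identification this involves, fitting T = k_T * omega^2 to bench measurements by least squares; the measurement table and vehicle mass below are synthetic, not data from the cited work.

```python
import numpy as np

omega = np.array([200.0, 300.0, 400.0, 500.0, 600.0])   # motor speed [rad/s]
thrust = np.array([0.8, 1.9, 3.3, 5.2, 7.4])            # measured thrust [N] (synthetic)

k_T = np.sum(thrust * omega**2) / np.sum(omega**4)      # least-squares slope through the origin
hover_omega = np.sqrt(1.2 * 9.81 / 4.0 / k_T)           # hypothetical 1.2 kg quadrotor
print(f"thrust coefficient k_T = {k_T:.3e} N*s^2")
print(f"predicted per-rotor hover speed: {hover_omega:.0f} rad/s")
```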

  3. Design of a gap tunable flux qubit with FastHenry

    NASA Astrophysics Data System (ADS)

    Akhtar, Naheed; Zheng, Yarui; Nazir, Mudassar; Wu, Yulin; Deng, Hui; Zheng, Dongning; Zhu, Xiaobo

    2016-12-01

    In the preparation of superconducting qubits, circuit design is a vital step because the parameters and layout of the circuit not only determine the way we address the qubits, but also strongly affect the qubit coherence properties. One of the most important circuit parameters, which needs to be carefully designed, is the mutual inductance among different parts of a superconducting circuit. In this paper we demonstrate how to design a gap-tunable flux qubit by layout design and inductance extraction using the fast field solver FastHenry. The energy spectrum of the gap-tunable flux qubit shows that the measured parameters are close to the design values. Project supported by the National Natural Science Foundation of China (Grant Nos. 11374344, 11404386, and 91321208), the National Basic Research Program of China (Grant No. 2014CB921401), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB07010300).

  4. Benchmarking in Thoracic Surgery. Third Edition.

    PubMed

    Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás

    2016-04-01

    Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. To analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization, registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery), and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed. Copyright © 2015 SEPAR. Published by Elsevier Espana. All rights reserved.

  5. Biomachining: metal etching via microorganisms.

    PubMed

    Díaz-Tena, Estíbaliz; Barona, Astrid; Gallastegui, Gorka; Rodríguez, Adrián; López de Lacalle, L Norberto; Elías, Ana

    2017-05-01

    The use of microorganisms to remove metal from a workpiece is known as biological machining or biomachining, and it has gained in both importance and scientific relevance over the past decade. In contrast to mechanical methods, the use of readily available microorganisms consumes little energy, and no thermal damage is caused during biomachining. The performance of this sustainable process is assessed by the material removal rate, and certain parameters have to be controlled to manufacture the machined part with the desired surface finish. Although the variety of suitable microorganisms is limited, cell concentration or density plays an important role in the process. The temperature needs to be controlled to maintain microorganism activity at its optimum, and a suitable shaking rate provides efficient contact between the workpiece and the biological medium. The system's tolerance to sharp changes in pH is quite limited, and in many cases an acid medium has to be maintained for effective performance. This process is highly dependent on the type of metal being removed. Consequently, the operating parameters need to be determined on a case-by-case basis. The biomachining time is another variable with a direct impact on the removal rate. This biological technique can be used for machining simple and complex shapes, such as series of linear, circular, and square micropatterns on different metal surfaces. The optimal biomachining process should be fast enough to ensure high production, a smooth and homogeneous surface finish and, in sum, a high-quality piece. As a result of the high global demand for micro-components, biomachining provides an effective and sustainable alternative. However, its industrial-scale implementation is still pending.

  6. Development of low-stress Iridium coatings for astronomical x-ray mirrors

    NASA Astrophysics Data System (ADS)

    Döhring, Thorsten; Probst, Anne-Catherine; Stollenwerk, Manfred; Wen, Mingwu; Proserpio, Laura

    2016-07-01

    Previously used mirror technologies are not suitable for the challenging needs of future X-ray telescopes, and the required high-precision mirror manufacturing is therefore triggering new technical developments around the world. Some aspects of X-ray mirror production are studied within the interdisciplinary project INTRAAST, a German acronym for "industry transfer of astronomical mirror technologies". The project is embedded in a cooperation between Aschaffenburg University of Applied Sciences and the Max-Planck-Institute for extraterrestrial Physics. One important task is the development of low-stress Iridium coatings for X-ray mirrors based on slumped thin glass substrates. The surface figure of the glass substrates is measured before and after the coating process by optical methods. Correlating the surface shape deformation to the parameters of coating deposition, here especially to the Argon sputtering pressure, allows for an optimization of the process. The sputtering parameters also have an influence on the coating layer density and on the micro-roughness of the coatings, influencing their X-ray reflection properties. Unfortunately, the optimum coating process parameters appear to be contradictory: low Argon pressure results in better micro-roughness and higher density, whereas higher pressure leads to lower coating stress. Therefore, additional measures such as intermediate coating layers and temperature treatment will be considered for further optimization. The technical approach for the low-stress Iridium coating development, the experimental equipment, and the first experimental results obtained are presented in this paper.

  7. On the interpretation of weight vectors of linear models in multivariate neuroimaging.

    PubMed

    Haufe, Stefan; Meinecke, Frank; Görgen, Kai; Dähne, Sven; Haynes, John-Dylan; Blankertz, Benjamin; Bießmann, Felix

    2014-02-15

    The increase in spatiotemporal resolution of neuroimaging devices is accompanied by a trend towards more powerful multivariate analysis methods. Often it is desired to interpret the outcome of these methods with respect to the cognitive processes under study. Here we discuss which methods allow for such interpretations, and provide guidelines for choosing an appropriate analysis for a given experimental goal: For a surgeon who needs to decide where to remove brain tissue it is most important to determine the origin of cognitive functions and associated neural processes. In contrast, when communicating with paralyzed or comatose patients via brain-computer interfaces, it is most important to accurately extract the neural processes specific to a certain mental state. These equally important but complementary objectives require different analysis methods. Determining the origin of neural processes in time or space from the parameters of a data-driven model requires what we call a forward model of the data; such a model explains how the measured data was generated from the neural sources. Examples are general linear models (GLMs). Methods for the extraction of neural information from data can be considered as backward models, as they attempt to reverse the data generating process. Examples are multivariate classifiers. Here we demonstrate that the parameters of forward models are neurophysiologically interpretable in the sense that significant nonzero weights are only observed at channels the activity of which is related to the brain process under study. In contrast, the interpretation of backward model parameters can lead to wrong conclusions regarding the spatial or temporal origin of the neural signals of interest, since significant nonzero weights may also be observed at channels the activity of which is statistically independent of the brain process under study. As a remedy for the linear case, we propose a procedure for transforming backward models into forward models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness for an often encountered problem and provides a theoretical basis for conducting better interpretable multivariate neuroimaging analyses. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
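
    The linear backward-to-forward transformation described above can be written as a = cov(X) w / var(w^T x) for a single extraction filter w; the simulated signal/distractor example below is only meant to illustrate the interpretation issue, not to reproduce the paper's analyses.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
s = rng.normal(size=n)                           # latent signal of interest
d = rng.normal(size=n)                           # distractor shared by two channels
X = np.column_stack([s + d, d, rng.normal(size=n), rng.normal(size=n)])
X += 0.2 * rng.normal(size=X.shape)              # sensor noise

w = np.linalg.lstsq(X, s, rcond=None)[0]         # backward model: extraction filter
s_hat = X @ w
a = np.cov(X, rowvar=False) @ w / np.var(s_hat)  # corresponding forward (activation) pattern

print("filter w :", np.round(w, 2))   # channel 1 gets a large weight although it carries no signal
print("pattern a:", np.round(a, 2))   # only channel 0, which contains the signal, stands out
```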

  8. Magnetic damping of thermocapillary convection in the floating-zone growth of semiconductor crystals

    NASA Astrophysics Data System (ADS)

    Morthland, Timothy Edward

    The floating zone is one process used to grow high purity semiconductor single crystals. In the floating-zone process, a liquid bridge of molten semiconductor, or melt, is held by surface tension between the upper, melting polycrystalline feed rod and the lower, solidifying single crystal. A perfect crystal would require a quiescent melt with pure diffusion of dopants during the entire period needed to grow the crystal. However, temperature variations along the free surface of the melt lead to gradients of the temperature-dependent surface tension, driving a strong and unsteady flow in the melt, commonly labeled thermocapillary or Marangoni convection. For small temperature differences along the free surface, unsteady thermocapillary convection occurs, disrupting the diffusion-controlled solidification and creating undesirable dopant concentration variations in the semiconductor single crystal. Since molten semiconductors are good electrical conductors, an externally applied, steady magnetic field can eliminate the unsteadiness in the melt and can reduce the magnitude of the residual steady motion. Crystal growers hope that a strong enough magnetic field will lead to diffusion-controlled solidification, but the magnetic field strengths needed to damp the unsteady thermocapillary convection as a function of floating-zone process parameters are unknown. This research has been conducted in the area of the magnetic damping of thermocapillary convection in floating zones. Both steady and unsteady flows have been investigated. Due to the added complexities in solving Maxwell's equations in these magnetohydrodynamic problems and due to the thin boundary layers in these flows, a direct numerical simulation of the fluid and heat transfer in the floating zone is virtually impossible, and it is certainly impossible to run enough simulations to search for neutral stability as a function of magnetic field strength over the entire parameter space. To circumvent these difficulties, we have used matched asymptotic expansions, linear stability theory and numerics to characterize these flows. Some fundamental aspects of the heat transfer and fluid mechanics in these magnetohydrodynamic flows are elucidated, in addition to the calculation of the magnetic field strengths required to damp unsteady thermocapillary convection as a function of process parameters.

  9. Higher Plants in life support systems: design of a model and plant experimental compartment

    NASA Astrophysics Data System (ADS)

    Hezard, Pauline; Farges, Berangere; Sasidharan L, Swathy; Dussap, Claude-Gilles

    The development of closed ecological life support systems (CELSS) requires full control and efficient engineering to fulfill the common objectives of water and oxygen regeneration, CO2 elimination and food production. Most of the proposed CELSS contain higher plants, for which a growth chamber and a control system are needed. Inside the compartment, the development of higher plants must be understood and modeled in order to be able to design and control the compartment as a function of operating variables. Plant behavior must be analyzed at different sub-process scales: (i) architecture and morphology describe the plant shape and lead to the morphological parameters (leaf area, stem length, number of meristems, etc.) characteristic of life cycle stages; (ii) physiology and metabolism of the different organs make it possible to assess the plant composition depending on the plant input and output rates (oxygen, carbon dioxide, water and nutrients); (iii) finally, the physical processes are light interception, gas exchange, sap conduction and root uptake: they control the available energy from photosynthesis and the input and output rates. These three sub-processes are modeled as a system of equations using environmental and plant parameters such as light intensity, temperature, pressure, humidity, CO2 and oxygen partial pressures, nutrient solution composition, total leaf surface and leaf area index, chlorophyll content, stomatal conductance, water potential, organ biomass distribution and composition, etc. The most challenging issue is to develop a comprehensive and operative mathematical model that assembles these different sub-processes in a unique framework. In order to assess the parameters for testing a model, a polyvalent growth chamber is necessary. It should provide a controlled environment in order to test and understand the physiological response and determine the control strategy. The final aim of this model is to achieve environmental control of plant behavior; this requires extended knowledge of the plant response to environmental variations, which in turn needs a large number of experiments that would be easier to perform in a high-throughput system.

  10. Using evolutionary algorithms for fitting high-dimensional models to neuronal data.

    PubMed

    Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley

    2012-04-01

    In the study of neuroscience, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior in this particular application to the gradient following methods. This is likely to be the case in many further complex systems, as are often found in neuroscience.
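
    To make the contrast concrete, the sketch below runs a bare-bones (mu, lambda) evolution strategy on a small multimodal least-squares problem, the kind of landscape where gradient following stalls in local minima; the toy sinusoid model stands in for, and is much simpler than, the 9-parameter neuron model.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 200)
true = np.array([1.5, 9.0, 0.3])                       # amplitude, angular frequency, offset
data = true[0] * np.sin(true[1] * t) + true[2] + rng.normal(0, 0.05, t.size)

def loss(p):
    return np.mean((p[0] * np.sin(p[1] * t) + p[2] - data) ** 2)

mu, lam, sigma = 5, 30, 0.5                            # parents, offspring, mutation step
parents = rng.uniform(-1, 10, size=(mu, 3))
for _ in range(200):
    children = parents[rng.integers(0, mu, lam)] + rng.normal(0, sigma, (lam, 3))
    fitness = np.array([loss(c) for c in children])
    parents = children[np.argsort(fitness)[:mu]]       # truncation selection
    sigma *= 0.99                                      # slowly anneal the mutation step

best = parents[0]
print("best parameters:", best.round(2), "loss:", round(loss(best), 4))
```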

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondy, Lisa Ann; Rao, Rekha Ranjana; Shelden, Bion

    We are developing computational models to elucidate the expansion and dynamic filling process of a polyurethane foam, PMDI. The polyurethane of interest is chemically blown, where carbon dioxide is produced via the reaction of water, the blowing agent, and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. Here we detail the experiments needed to populate a processing model and provide parameters for the model based on these experiments. The model entails solving the conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions, following a simplified mathematical formalism that decouples these two reactions. Parameters for the polymerization kinetics model are reported based on infrared spectrophotometry. Parameters describing the gas generating reaction are reported based on measurements of volume, temperature and pressure evolution with time. A foam rheology model is proposed and parameters determined through steady-shear and oscillatory tests. Heat of reaction and heat capacity are determined through differential scanning calorimetry. Thermal conductivity of the foam as a function of density is measured using a transient method based on the theory of the transient plane source technique. Finally, density variations of the resulting solid foam in several simple geometries are directly measured by sectioning and sampling mass, as well as through x-ray computed tomography. These density measurements will be useful for model validation once the complete model is implemented in an engineering code.

  12. Analysis of residual stress state in sheet metal parts processed by single point incremental forming

    NASA Astrophysics Data System (ADS)

    Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.

    2018-05-01

    The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as the fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper, the significance of kinematic hardening effects on the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of 18% between the residual stress amplitudes in the clamped and unclamped conditions reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.

  13. Modelling health care processes for eliciting user requirements: a way to link a quality paradigm and clinical information system design.

    PubMed

    Staccini, P; Joubert, M; Quaranta, J F; Fieschi, D; Fieschi, M

    2000-01-01

    Hospital information systems have to support quality improvement objectives. The design issues of a health care information system can be classified into three categories: 1) time-oriented and event-labelled storage of patient data; 2) contextual support of decision-making; 3) capabilities for modular upgrading. The elicitation of the requirements has to meet users' needs in relation to both the quality (efficacy, safety) and the monitoring of all health care activities (traceability). Information analysts need methods to conceptualize clinical information systems that provide actors with individual benefits and guide behavioural changes. A methodology is proposed to elicit and structure users' requirements using a process-oriented analysis, and it is applied to the field of blood transfusion. An object-oriented data model of a process has been defined in order to identify its main components: activity, sub-process, resources, constraints, guidelines, parameters and indicators. Although some aspects of activity, such as "where", "what else", and "why", are poorly represented by the data model alone, this method of requirement elicitation fits the dynamic of data input for the process to be traced. A hierarchical representation of hospital activities has to be found for this approach to be generalised within the organisation, for the processes to be interrelated, and for their characteristics to be shared.

  14. Modeling the Effects of Coolant Application in Friction Stir Processing on Material Microstructure Using 3D CFD Analysis

    NASA Astrophysics Data System (ADS)

    Aljoaba, Sharif; Dillon, Oscar; Khraisheh, Marwan; Jawahir, I. S.

    2012-07-01

    The ability to generate nano-sized grains is one of the advantages of friction stir processing (FSP). However, the high temperatures generated during the stirring process within the processing zone stimulate the grains to grow after recrystallization. Therefore, maintaining the small grains becomes a critical issue when using FSP. In reported studies, coolants are applied to the fixture and/or the processed material in order to reduce the temperature and, hence, grain growth. Most of the reported data in the literature concerning cooling techniques are experimental. We have seen no reports that attempt to predict these quantities when using coolants while the material is undergoing FSP. Therefore, there is a need to develop a model that predicts the resulting grain size when using coolants, which is an important step toward designing the material microstructure. In this study, two three-dimensional computational fluid dynamics (CFD) models are reported which simulate FSP with and without coolant application using the STAR CCM+ commercial CFD software. In the model with coolant application the fixture (backing plate) is modeled, whereas it is not in the other model. User-defined subroutines were incorporated in the software and implemented to investigate the effects of changing process parameters on temperature, strain rate and material velocity fields in, and around, the processed nugget. In addition, a correlation between these parameters and the Zener-Hollomon parameter used in materials science was developed to predict the grain size distribution. Different stirring conditions were incorporated in this study to investigate their effects on material flow and microstructural modification. A comparison of the results obtained by using each of the models on the processed microstructure is also presented for the case of the Mg AZ31B-O alloy. The predicted results are also compared with the available experimental data and generally show good agreement.
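
    The grain-size correlation mentioned here is typically written as d = a * Z^(-b) with the Zener-Hollomon parameter Z = strain_rate * exp(Q / (R * T)); the sketch below evaluates it with placeholder constants that are not fitted values for AZ31B.

```python
import math

R = 8.314              # gas constant [J/mol/K]
Q = 135e3              # deformation activation energy [J/mol] (assumed)
strain_rate = 50.0     # effective strain rate in the stir zone [1/s] (assumed)
T = 700.0              # local temperature [K] (assumed)

Z = strain_rate * math.exp(Q / (R * T))
a, b = 1.0e3, 0.27     # hypothetical correlation constants, d in micrometres
d = a * Z ** (-b)
print(f"Z = {Z:.3e} 1/s, predicted recrystallized grain size ~ {d:.2f} um")
```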

  15. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and by model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many processes.

  16. Classification of hydrological parameter sensitivity and evaluation of parameter transferability across 431 US MOPEX basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi

    The Community Land Model (CLM) represents physical, chemical, and biological processes of terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving the Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Koppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
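
    A compact sketch of the classification step, assuming a basin-by-parameter sensitivity matrix is already available: principal components compress the correlated sensitivities and an EM-fitted Gaussian mixture assigns each basin to a class. The matrix below is random placeholder data, not the MOPEX results, and the component counts are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
S = rng.random((431, 20))                      # 431 basins x 20 parameter sensitivities (placeholder)

scores = PCA(n_components=5).fit_transform(S)  # compress correlated sensitivity indices
gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
labels = gmm.predict(scores)
print("basins per sensitivity class:", np.bincount(labels))
```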

  17. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical performance measure (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacturing of winding products.

  18. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical performance (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for manufacturing winding products. PMID:29385048

  19. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood type estimating function that is optimal within a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.

  20. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for the data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
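
    The brute-force strategy the abstract contrasts with, repeatedly solving the PDE under candidate parameter values, can be sketched as follows for a one-dimensional heat equation with an unknown diffusion coefficient. The solver, grid, and noise level are illustrative assumptions, not the article's LIDAR setup.

```python
import numpy as np

def solve_heat(D, u0, dx, dt, n_steps):
    """Explicit finite-difference solver for u_t = D * u_xx with fixed ends."""
    u = u0.copy()
    r = D * dt / dx**2          # stability requires r <= 0.5
    for _ in range(n_steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Synthetic "observations": solve with a true D and add measurement noise.
x = np.linspace(0, 1, 51)
u0 = np.sin(np.pi * x)
dx, dt, n_steps = x[1] - x[0], 1e-4, 500
true_D = 0.1
rng = np.random.default_rng(1)
obs = solve_heat(true_D, u0, dx, dt, n_steps) + rng.normal(0, 0.01, x.size)

# Grid search over candidate parameter values: every candidate needs a full
# numerical solve, which is the computational burden the paper's basis-expansion
# (parameter cascading / Bayesian) methods are designed to avoid.
candidates = np.linspace(0.01, 0.3, 60)
sse = [np.sum((solve_heat(D, u0, dx, dt, n_steps) - obs) ** 2) for D in candidates]
print("estimated D:", candidates[int(np.argmin(sse))])
```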

  1. No Future in the Past? The role of initial topography on landform evolution model predictions

    NASA Astrophysics Data System (ADS)

    Hancock, G. R.; Coulthard, T. J.; Lowry, J.

    2014-12-01

    Our understanding of earth surface processes is based on long-term empirical understanding and short-term field measurements as well as numerical models. In particular, numerical landscape evolution models (LEMs) have been developed with the capability to capture surface processes (erosion and deposition), tectonics, and near-surface or critical zone processes (i.e. pedogenesis). These models have a range of applications, from understanding both surface and whole-of-landscape dynamics through to more applied situations such as degraded site rehabilitation. LEMs are now at a stage of development where, if calibrated, they can provide some level of reliability. However, these models are largely calibrated with parameters determined from present surface conditions, which are the product of much longer-term geology-soil-climate-vegetation interactions. Here, we assess the effect of the initial landscape dimensions and associated error, as well as parameterisation, for a potential post-mining landform design. The results demonstrate that subtle surface changes in the initial DEM as well as parameterisation can have a large impact on landscape behaviour, erosion depth and sediment discharge. For example, the predicted sediment output from LEMs is shown to be highly variable even with very subtle changes in initial surface conditions. This has two important implications: decadal-timescale field data are needed to (a) better parameterise models and (b) evaluate their predictions. We question how a LEM using parameters derived from field plots can be employed to examine long-term landscape evolution. We then examine the potential range of outcomes based on estimated temporal parameter change, and discuss the need for more detailed and rigorous field data for calibration and validation of these models.

  2. Desorption kinetics of hydrophobic organic chemicals from sediment to water: a review of data and models.

    PubMed

    Birdwell, Justin; Cook, Robert L; Thibodeaux, Louis J

    2007-03-01

    Resuspension of contaminated sediment can lead to the release of toxic compounds to surface waters where they are more bioavailable and mobile. Because the timeframe of particle resettling during such events is shorter than that needed to reach equilibrium, a kinetic approach is required for modeling the release process. Due to the current inability of common theoretical approaches to predict site-specific release rates, empirical algorithms incorporating the phenomenological assumption of biphasic, or fast and slow, release dominate the descriptions of nonpolar organic chemical release in the literature. Two first-order rate constants and one fraction are sufficient to characterize practically all of the data sets studied. These rate constants were compared to theoretical model parameters and functionalities, including chemical properties of the contaminants and physical properties of the sorbents, to determine if the trends incorporated into the hindered diffusion model are consistent with the parameters used in curve fitting. The results did not correspond to the parameter dependence of the hindered diffusion model. No trend in desorption rate constants, for either fast or slow release, was observed to be dependent on K_OC or aqueous solubility for six and seven orders of magnitude, respectively. The same was observed for aqueous diffusivity and sediment fraction organic carbon. The distribution of kinetic rate constant values was approximately log-normal, ranging from 0.1 to 50 d^-1 for the fast release (average approximately 5 d^-1) and 0.0001 to 0.1 d^-1 for the slow release (average approximately 0.03 d^-1). The implications of these findings with regard to laboratory studies, theoretical desorption process mechanisms, and water quality modeling needs are presented and discussed.
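
    The biphasic release model summarized above (two first-order rate constants and one fraction) can be written as f(t) = F*(1 - exp(-k_fast*t)) + (1 - F)*(1 - exp(-k_slow*t)) and fitted by nonlinear least squares. The sketch below is a minimal example with invented data; the initial guesses simply echo the average rate constants reported in the review.

```python
import numpy as np
from scipy.optimize import curve_fit

# Biphasic first-order release: fraction F desorbs with fast rate k_fast,
# the remainder with slow rate k_slow (rates in 1/day, as in the review).
def released(t, F, k_fast, k_slow):
    return F * (1 - np.exp(-k_fast * t)) + (1 - F) * (1 - np.exp(-k_slow * t))

# Hypothetical desorption time series (fraction released vs. days).
t = np.array([0.1, 0.25, 0.5, 1, 2, 5, 10, 20, 40, 80])
y = np.array([0.18, 0.3, 0.42, 0.55, 0.62, 0.66, 0.68, 0.71, 0.74, 0.78])

p0 = [0.6, 5.0, 0.03]            # initial guesses near the reported averages
popt, _ = curve_fit(released, t, y, p0=p0, bounds=([0, 0, 0], [1, 100, 1]))
F, k_fast, k_slow = popt
print(f"F = {F:.2f}, k_fast = {k_fast:.2f} 1/d, k_slow = {k_slow:.3f} 1/d")
```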

  3. A GPU-Accelerated Parameter Interpolation Thermodynamic Integration Free Energy Method.

    PubMed

    Giese, Timothy J; York, Darrin M

    2018-03-13

    There has been a resurgence of interest in free energy methods motivated by the performance enhancements offered by molecular dynamics (MD) software written for specialized hardware, such as graphics processing units (GPUs). In this work, we exploit the properties of a parameter-interpolated thermodynamic integration (PI-TI) method to connect states by their molecular mechanical (MM) parameter values. This pathway is shown to be better behaved for Mg2+ → Ca2+ transformations than traditional linear alchemical pathways (with and without soft-core potentials). The PI-TI method has the practical advantage that no modification of the MD code is required to propagate the dynamics, and unlike with linear alchemical mixing, only one electrostatic evaluation is needed (e.g., single call to particle-mesh Ewald) leading to better performance. In the case of AMBER, this enables all the performance benefits of GPU-acceleration to be realized, in addition to unlocking the full spectrum of features available within the MD software, such as Hamiltonian replica exchange (HREM). The TI derivative evaluation can be accomplished efficiently in a post-processing step by reanalyzing the statistically independent trajectory frames in parallel for high throughput. We also show how one can evaluate the particle mesh Ewald contribution to the TI derivative evaluation without needing to perform two reciprocal space calculations. We apply the PI-TI method with HREM on GPUs in AMBER to predict pKa values in double stranded RNA molecules and make comparison with experiments. Convergence to under 0.25 units for these systems required 100 ns or more of sampling per window and coupling of windows with HREM. We find that MM charges derived from ab initio QM/MM fragment calculations improve the agreement between calculation and experimental results.
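
    For orientation, the core TI estimate behind methods like this is Delta G = integral from 0 to 1 of <dU/dlambda> dlambda, evaluated numerically from per-window averages of the derivative. The sketch below assumes invented window averages; in practice these would come from reanalyzing the trajectory frames as described above.

```python
import numpy as np

# Thermodynamic integration: Delta G = integral over lambda of <dU/dlambda>.
# Each window's mean derivative would come from an MD trajectory reanalyzed
# in a post-processing step; the values below are invented for illustration.
lam = np.linspace(0.0, 1.0, 11)                      # lambda windows
mean_dU_dlam = np.array([12.1, 10.4, 8.9, 7.1, 5.0,  # <dU/dlambda> per window
                         2.8, 0.5, -1.9, -4.6, -7.2, -10.3])  # kcal/mol

# Trapezoidal quadrature over the windows.
delta_G = np.sum(0.5 * (mean_dU_dlam[1:] + mean_dU_dlam[:-1]) * np.diff(lam))
print(f"Delta G (TI, trapezoid) = {delta_G:.2f} kcal/mol")
```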

  4. Progress in remote sensing of global land surface heat fluxes and evaporations with a turbulent heat exchange parameterization method

    NASA Astrophysics Data System (ADS)

    Chen, Xuelong; Su, Bob

    2017-04-01

    Remote sensing has provided us an opportunity to observe the Earth's land surface at a much higher resolution than any GCM simulation. Due to the scarcity of information on land surface physical parameters, up-to-date GCMs still have large uncertainties in coupled land surface process modeling. One critical issue is the large number of parameters used in their land surface models. Thus remote sensing of land surface spectral information can be used to provide information on these parameters or be assimilated to decrease model uncertainties. Satellite imagers observe the Earth's land surface in optical, thermal and microwave bands. Some basic land surface states (land surface temperature, canopy height, canopy leaf area index, soil moisture, etc.) have been produced with remote sensing techniques, which already help scientists understand Earth land-atmosphere interaction more precisely. However, there are some challenges when applying remote sensing variables to calculate global land-air heat and water exchange fluxes. Firstly, a global turbulent exchange parameterization scheme needs to be developed and verified, especially for global momentum and heat roughness length calculation with remote sensing information. Secondly, a compromise is needed to overcome the spatial-temporal gaps in remote sensing variables and make remote-sensing-based land surface fluxes applicable for GCM model verification or comparison. A flux network data library (more than 200 flux towers) was collected to verify the designed method. Important progress in remote sensing of global land fluxes and evaporation will be presented and its benefits for GCM models will also be discussed. Some in-situ studies on the Tibetan Plateau and problems of land surface process simulation will also be discussed.

  5. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    NASA Astrophysics Data System (ADS)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single GPU systems to show that real time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the GPU version achieved speedups of more than 40× compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.

  6. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is mostly performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing amount of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to be visualised, an objective function, and a time period of interest. Based on this information, a two-dimensional parameter response surface is created, which shows a scatter plot of the parameter combinations and assigns a colour scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the colour mapping of the points: the slider provides a threshold to exclude non-behavioural parameter sets, and the colour scale is only attributed to the remaining parameter sets. By interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
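
    A minimal, self-contained sketch of the described two-parameter response surface is given below, assuming hypothetical parameter names (k_soil, n_manning) and a synthetic RMSE objective in place of loaded simulation results; the threshold variable stands in for the notebook's slider.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Stand-in for loaded simulation results: sampled parameter sets and an
# objective function value (e.g. RMSE on discharge) for each simulation.
n = 500
params = {"k_soil": rng.uniform(0.1, 2.0, n), "n_manning": rng.uniform(0.01, 0.1, n)}
rmse = (1.0 + 0.8 * (params["k_soil"] - 1.2) ** 2
        + 30 * (params["n_manning"] - 0.04) ** 2
        + rng.normal(0, 0.05, n))

threshold = 1.2          # the value the notebook exposes as an interactive slider
behavioural = rmse <= threshold

# Two-dimensional "response surface": scatter of the two selected parameters,
# coloured by goodness of fit for behavioural sets, grey for the rest.
plt.scatter(params["k_soil"][~behavioural], params["n_manning"][~behavioural],
            c="lightgrey", s=10, label="non-behavioural")
sc = plt.scatter(params["k_soil"][behavioural], params["n_manning"][behavioural],
                 c=rmse[behavioural], s=15, cmap="viridis_r", label="behavioural")
plt.colorbar(sc, label="RMSE")
plt.xlabel("k_soil"); plt.ylabel("n_manning"); plt.legend()
plt.show()
```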

  7. An approach to and web-based tool for infectious disease outbreak intervention analysis

    NASA Astrophysics Data System (ADS)

    Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid; Deshpande, Alina

    2017-04-01

    Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and the public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we present a Susceptible-Infectious-Recovered (SIR) model modified to include control measures, which accepts parameter ranges rather than parameter point estimates and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.
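
    A minimal sketch of such a control-modified SIR model is shown below, assuming a simple intervention that reduces the transmission rate after a start time and sweeping one parameter over a range rather than a point estimate; the parameter values are illustrative, not those used in the web tool.

```python
import numpy as np
from scipy.integrate import odeint

# SIR model with a control measure that reduces the transmission rate beta
# by a factor (1 - control) once the intervention starts.
def sir(y, t, beta, gamma, control, t_start):
    S, I, R = y
    b = beta * (1 - control) if t >= t_start else beta
    dS = -b * S * I
    dI = b * S * I - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

t = np.linspace(0, 160, 161)
y0 = [0.999, 0.001, 0.0]          # initial susceptible/infectious/recovered fractions

# Run the model over a parameter range (here beta) instead of a single value,
# mirroring the paper's use of parameter intervals.
for beta in np.linspace(0.25, 0.45, 5):
    S, I, R = odeint(sir, y0, t, args=(beta, 0.1, 0.5, 40.0)).T
    print(f"beta={beta:.2f}  peak infectious fraction={I.max():.3f}")
```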

  8. Direct approach for bioprocess optimization in a continuous flat-bed photobioreactor system.

    PubMed

    Kwon, Jong-Hee; Rögner, Matthias; Rexroth, Sascha

    2012-11-30

    Application of photosynthetic micro-organisms, such as cyanobacteria and green algae, for carbon-neutral energy production raises the need for cost-efficient photobiological processes. Optimization of these processes requires permanent control of many independent and mutually dependent parameters, for which a continuous cultivation approach has significant advantages. As central factors like cell density can be kept constant by turbidostatic control, light intensity and iron content, with their strong impact on productivity, can be optimized. Both are key parameters due to the strong dependence of photosynthetic activity on them. Here we introduce an engineered low-cost 5 L flat-plate photobioreactor in combination with a simple and efficient optimization procedure for continuous photo-cultivation of microalgae. Based on direct determination of the growth rate at constant cell densities and the continuous measurement of O₂ evolution, stress conditions and their effect on the photosynthetic productivity can be directly observed. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Radiation levels and image quality in patients undergoing chest X-ray examinations

    NASA Astrophysics Data System (ADS)

    de Oliveira, Paulo Márcio Campos; do Carmo Santana, Priscila; de Sousa Lacerda, Marco Aurélio; da Silva, Teógenes Augusto

    2017-11-01

    Patient dose monitoring for different radiographic procedures has been used as a parameter to evaluate the performance of radiology services; skin entrance absorbed dose values for each type of examination were internationally established and recommended, aiming at patient protection. In this work, a methodology for dose evaluation was applied to three diagnostic services: one with conventional film processing and two with digital computerized radiography processing techniques. The x-ray beam parameters were selected and "doses" (specifically the entrance surface and incident air kerma) were evaluated based on images approved according to European criteria during postero-anterior (PA) and lateral (LAT) incidences. Data were collected from 200 patients, covering 200 PA and 100 LAT incidences. Results showed that dose distributions in the three diagnostic services were very different; the best relation between dose and image quality was found in the institution with chemical film processing. This work contributed to disseminating the radiation protection culture by emphasizing the need for continuous dose reduction without losing diagnostic image quality.

  10. Mechanical Characteristics of SiC Coating Layer in TRISO Fuel Particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. Hosemann; J. N. Martos; D. Frazer

    2013-11-01

    Tristructural isotropic (TRISO) particles are considered as advanced fuel forms for a variety of fission platforms. While these fuel structures have been tested and deployed in reactors, the mechanical properties of these structures as a function of production parameters need to be investigated in order to ensure their reliability during service. Nanoindentation techniques, indentation crack testing, and half sphere crush testing were utilized in order to evaluate the integrity of the SiC coating layer that is meant to prevent fission product release in the coated particle fuel form. The results are complimented by scanning electron microscopy (SEM) of the grainmore » structure that is subject to change as a function of processing parameters and can alter the mechanical properties such as hardness, elastic modulus, fracture toughness and fracture strength. Through utilization of these advanced techniques, subtle differences in mechanical properties that can be important for in-pile fuel performance can be distinguished and optimized in iteration with processing science of coated fuel particle production.« less

  11. Initial planetary base construction techniques and machine implementation

    NASA Technical Reports Server (NTRS)

    Crockford, William W.

    1987-01-01

    Conceptual designs of (1) initial planetary base structures, and (2) an unmanned machine to perform the construction of these structures using materials local to the planet are presented. Rock melting is suggested as a possible technique to be used by the machine in fabricating roads, platforms, and interlocking bricks. Identification of problem areas in machine design and materials processing is accomplished. The feasibility of the designs is contingent upon favorable results of an analysis of the engineering behavior of the product materials. The analysis requires knowledge of several parameters for solution of the constitutive equations of the theory of elasticity. An initial collection of these parameters is presented which helps to define research needed to perform a realistic feasibility study. A qualitative approach to estimating power and mass lift requirements for the proposed machine is used which employs specifications of currently available equipment. An initial, unmanned mission scenario is discussed with emphasis on identifying uncompleted tasks and suggesting design considerations for vehicles and primitive structures which use the products of the machine processing.

  12. Building model analysis applications with the Joint Universal Parameter IdenTification and Evaluation of Reliability (JUPITER) API

    USGS Publications Warehouse

    Banta, E.R.; Hill, M.C.; Poeter, E.; Doherty, J.E.; Babendreier, J.

    2008-01-01

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input and output conventions allow application users to access various applications and the analysis methods they embody with a minimum of time and effort. Process models simulate, for example, physical, chemical, and (or) biological systems of interest using phenomenological, theoretical, or heuristic approaches. The types of model analyses supported by the JUPITER API include, but are not limited to, sensitivity analysis, data needs assessment, calibration, uncertainty analysis, model discrimination, and optimization. The advantages provided by the JUPITER API for users and programmers allow for rapid programming and testing of new ideas. Application-specific coding can be in languages other than the Fortran-90 of the API. This article briefly describes the capabilities and utility of the JUPITER API, lists existing applications, and uses UCODE_2005 as an example.

  13. The Model of Gas Supply Capacity Simulation In Regional Energy Security Framework: Policy Studies PT. X Cirebon Area

    NASA Astrophysics Data System (ADS)

    Nuryadin; Ronny Rahman Nitibaskara, Tb; Herdiansyah, Herdis; Sari, Ravita

    2017-10-01

    Energy needs are increasing every year. The unavailability of energy will cause economic losses and weaken energy security. To ensure the availability of gas supply in the future, planning is crucially needed. Therefore, a systems approach is necessary so that the process of gas distribution runs properly. In this research, the system dynamics method is used to measure how much supply capacity planning is needed until 2050, with demand parameters for the industrial, household and commercial sectors. The model shows that by 2031 PT. X Cirebon area is unable to meet the needs of gas customers in the Cirebon region, and under the business-as-usual scenario the gas fulfillment ratio is maintained only until 2027. With the implementation of the national energy policy, namely the use of NRE (new and renewable energy) as a government intervention in the model, PT. X Cirebon area is still able to supply the gas needs of its customers up to 2035.
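
    A toy stock-and-flow sketch in the spirit of the described system dynamics model is given below; the capacity, sector shares, and growth rates are invented for illustration and are not the values used in the PT. X study.

```python
# Minimal stock-and-flow sketch of gas supply capacity vs. growing sector demand.
supply_capacity = 120.0      # hypothetical deliverable capacity for the service area
demand = 80.0                # current total demand (industrial + household + commercial)
growth = {"industrial": 0.05, "household": 0.03, "commercial": 0.04}   # annual growth rates
shares = {"industrial": 0.6, "household": 0.25, "commercial": 0.15}    # current demand shares

sectors = {k: demand * shares[k] for k in shares}
for year in range(2018, 2051):
    # Each sector's demand grows at its own rate; supply capacity stays fixed.
    sectors = {k: v * (1 + growth[k]) for k, v in sectors.items()}
    total = sum(sectors.values())
    ratio = supply_capacity / total          # gas fulfillment ratio
    if ratio < 1.0:
        print(f"Supply can no longer cover demand in {year} (ratio {ratio:.2f})")
        break
```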

  14. The Role of Anchor Stations in the Validation of Earth Observation Satellite Data and Products. The Valencia and the Alacant Anchor Stations

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Geraldo Ferreira, A.; Saleh-Contell, Kauzar

    Space technology provides humanity and science with a revolutionary global view of the Earth through the acquisition of Earth Observation satellite data. Satellites capture information over different spatial and temporal scales and assist in understanding natural climate processes and in detecting and explaining climate change. Accurate Earth Observation data is needed to describe climate processes by improving the parameterisations of different climate elements. Algorithms to produce geophysical parameters from raw satellite observations should go through selection processes or participate in inter-comparison programmes to ensure performance reliability. Geophysical parameter datasets, obtained from satellite observations, should pass a quality control before they are accepted in global databases for impact, diagnostic or sensitivity studies. Calibration and Validation, or simply "Cal/Val", is the activity that endeavours to ensure that remote sensing products are highly consistent and reproducible. This is an evolving scientific activity that is becoming increasingly important as more long-term studies on global change are undertaken, and new satellite missions are launched. Calibration is the process of quantitatively defining the system responses to known, controlled signal inputs. Validation refers to the process of assessing, by independent means, the quality of the data products derived from the system outputs. These definitions are generally accepted and most often used in the remote sensing context to refer specifically and respectively to sensor radiometric calibration and geophysical parameter validation. Anchor Stations are carefully selected locations at which instruments measure quantities that are needed to run, calibrate or validate models and algorithms. These are needed to quantitatively evaluate satellite data and convert it into geophysical information. The instruments collect measurements of basic quantities over a long timescale. Measurements are made of meteorological and hydrological background data, and of quantities not readily assessed at operational stations. Anchor Stations also offer infrastructure to undertake validation experiments. These are more detailed measurements over shorter intensive observation periods. The Valencia Anchor Station is showing its capabilities and conditions as a reference validation site in the framework of low spatial resolution remote sensing missions such as CERES, GERB and SMOS. The Alacant Anchor Station is a reference site in studies on the interactions between desertification and climate. This paper presents the activities so far carried out at both Anchor Stations, the precise and detailed ground and aircraft experiments carefully designed to develop a specific methodology to validate low spatial resolution satellite data and products, and the knowledge exchange currently being exercised between the University of Valencia, Spain, and FUNCEME, Brazil, in common objectives of mutual interest.

  15. Improving Information Exchange in the Chicken Processing Sector Using Standardised Data Lists

    NASA Astrophysics Data System (ADS)

    Donnelly, Kathryn Anne-Marie; van der Roest, Joop; Höskuldsson, Stefán Torfi; Olsen, Petter; Karlsen, Kine Mari

    Research has shown that to improve electronic communication between companies, universal standardised data lists are necessary. In food supply chains in particular there is an increased need to exchange data in the wake of food safety incidents. Food supply chain companies already record numerous measurements, properties and parameters. These records are necessary for legal reasons, labelling, traceability, profiling desirable characteristics, showing compliance and for meeting customer requirements. Universal standards for name and content of each of these data elements would improve information exchange between buyers, sellers, authorities, consumers and other interested parties. A case study, carried out for the chicken sector, attempted to identify the most relevant parameters including which of these were already communicated to external bodies.

  16. Nonholonomic Hamiltonian Method for Molecular Dynamics Simulations of Reacting Shocks

    NASA Astrophysics Data System (ADS)

    Fahrenthold, Eric; Bass, Joseph

    2015-06-01

    Conventional molecular dynamics simulations of reacting shocks employ a holonomic Hamiltonian formulation: the breaking and forming of covalent bonds is described by potential functions. In general these potential functions: (a) are algebraically complex, (b) must satisfy strict smoothness requirements, and (c) contain many fitted parameters. In recent research the authors have developed a new nonholonomic formulation of reacting molecular dynamics. In this formulation bond orders are determined by rate equations and the bonding-debonding process need not be described by differentiable functions. This simplifies the representation of complex chemistry and reduces the number of fitted model parameters. Example applications of the method show molecular-level shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
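
    As a generic illustration of driving a bond order with a rate equation rather than a smooth potential, the sketch below integrates a simple first-order formation/breaking law; the rate law and constants are assumptions for illustration and are not the authors' actual formulation.

```python
import numpy as np

# Generic illustration of evolving a bond order b in [0, 1] by a rate equation
# instead of a smooth bond potential: db/dt = k_form * (1 - b) - k_break * b.
# The rate constants here are arbitrary; a real model would make them functions
# of the local thermodynamic state (e.g. temperature, pressure behind the shock).
def integrate_bond_order(b0, k_form, k_break, dt, n_steps):
    b = b0
    history = [b]
    for _ in range(n_steps):
        b += dt * (k_form * (1.0 - b) - k_break * b)   # explicit Euler step
        b = min(max(b, 0.0), 1.0)                      # bond order stays in [0, 1]
        history.append(b)
    return np.array(history)

print(integrate_bond_order(b0=1.0, k_form=0.1, k_break=2.0, dt=0.01, n_steps=5))
```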

  17. Image quality enhancement for skin cancer optical diagnostics

    NASA Astrophysics Data System (ADS)

    Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey

    2017-12-01

    The research presents image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed. The problems with the greatest impact in this area are analyzed in terms of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since it is often not possible to prevent illumination problems, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show improved diagnostic results after using the proposed filter. Moreover, the filter does not reduce diagnostic quality for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
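
    One common way to implement such a correction, which may or may not match the authors' exact filter, is to estimate the slowly varying illumination with a strong Gaussian low-pass and divide it out; the sketch below uses a synthetic image and an assumed sigma.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(image, sigma=50.0):
    """Divide out a low-frequency illumination estimate from a grayscale image.

    The illumination field is approximated by a strong Gaussian low-pass of the
    image; sigma controls which spatial frequencies are treated as illumination.
    """
    img = image.astype(np.float64)
    illumination = gaussian_filter(img, sigma=sigma)
    corrected = img / np.maximum(illumination, 1e-6)     # avoid divide-by-zero
    # Rescale to the original mean brightness for display/diagnostics.
    return corrected * img.mean() / corrected.mean()

# Synthetic example: a flat "skin" image with a bright illumination gradient.
rng = np.random.default_rng(3)
base = 0.5 + 0.02 * rng.standard_normal((256, 256))
gradient = np.linspace(0.6, 1.4, 256)[None, :]           # uneven lighting
corrected = correct_illumination(base * gradient)
print("std before:", (base * gradient).std(), "std after:", corrected.std())
```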

  18. Systems Analyze Water Quality in Real Time

    NASA Technical Reports Server (NTRS)

    2010-01-01

    A water analyzer developed under Small Business Innovation Research (SBIR) contracts with Kennedy Space Center now monitors treatment processes at water and wastewater facilities around the world. Originally designed to provide real-time detection of nutrient levels in hydroponic solutions for growing plants in space, the ChemScan analyzer, produced by ASA Analytics Inc., of Waukesha, Wisconsin, utilizes spectrometry and chemometric algorithms to automatically analyze multiple parameters in the water treatment process with little need for maintenance, calibration, or operator intervention. The company has experienced a compound annual growth rate of 40 percent over its 15-year history as a direct result of the technology's success.

  19. Balancing consumer protection and scientific integrity in the face of uncertainty: the example of gluten-free foods.

    PubMed

    McCabe, Margaret Sova

    2010-01-01

    In 2009, gluten-free foods were not only "hot" in the marketplace, several countries, including the United States, continued efforts to define gluten-free and appropriate labeling parameters. The regulatory process illuminates how difficult regulations based on safe scientific thresholds can be for regulators, manufacturers and consumers. This article analyzes the gluten-free regulatory landscape, challenges to defining a safe gluten threshold, and how consumers might need more label information beyond the term "gluten-free." The article includes an overview of international gluten-free regulations, the Food and Drug Administration (FDA) rulemaking process, and issues for consumers.

  20. Application of twin screw extrusion to the manufacture of cocrystals: scale-up of AMG 517-sorbic acid cocrystal production.

    PubMed

    Daurio, Dominick; Nagapudi, Karthik; Li, Lan; Quan, Peter; Nunez, Fernando-Alvarez

    2014-01-01

    The application of twin screw extrusion (TSE) in the scale-up of cocrystal production was investigated by using AMG 517-sorbic acid as a model system. Extrusion parameters that influenced conversion to the cocrystal such as temperature, feed rate and screw speed were investigated. Extent of conversion to the cocrystal was found to have a strong dependence on temperature and a moderate dependence on feed rate and screw speed. Cocrystals made by the TSE process were found to have superior mechanical properties than solution grown cocrystals. Additionally, moving to a TSE process eliminated the need for solvent.

  1. Bringing scientific rigor to community-developed programs in Hong Kong.

    PubMed

    Fabrizio, Cecilia S; Hirschmann, Malia R; Lam, Tai Hing; Cheung, Teresa; Pang, Irene; Chan, Sophia; Stewart, Sunita M

    2012-12-31

    This paper describes efforts to generate evidence for community-developed programs to enhance family relationships in the Chinese culture of Hong Kong, within the framework of community-based participatory research (CBPR). The CBPR framework was applied to help maximize the development of the intervention and the public health impact of the studies, while enhancing the capabilities of the social service sector partners. Four academic-community research teams explored the process of designing and implementing randomized controlled trials in the community. In addition to the expected cultural barriers between teams of academics and community practitioners, with their different outlooks, concerns and languages, the team navigated issues in utilizing the principles of CBPR unique to this Chinese culture. Eventually the team developed tools for adaptation, such as an emphasis on building the relationship while respecting role delineation and an iterative process of defining the non-negotiable parameters of research design while maintaining scientific rigor. Lessons learned include the risk of underemphasizing the size of the operational and skills shift between usual agency practices and research studies, the importance of minimizing non-negotiable parameters in implementing rigorous research designs in the community, and the need to view community capacity enhancement as a long term process. The four pilot studies under the FAMILY Project demonstrated that nuanced design adaptations, such as wait list controls and shorter assessments, better served the needs of the community and led to the successful development and vigorous evaluation of a series of preventive, family-oriented interventions in the Chinese culture of Hong Kong.

  2. Inside out: a neuro-behavioral signature of free recall dynamics.

    PubMed

    Shapira-Lichter, Irit; Vakil, Eli; Glikmann-Johnston, Yifat; Siman-Tov, Tali; Caspi, Dan; Paran, Daphna; Hendler, Talma

    2012-07-01

    Free recall (FR) is a ubiquitous internally-driven retrieval operation that crucially affects our day-to-day life. The neural correlates of FR, however, are not sufficiently understood, partly due to the methodological challenges presented by its emerging property and endogenic nature. Using fMRI and performance measures, the neuro-behavioral correlates of FR were studied in 33 healthy participants who repeatedly encoded and retrieved word-lists. Retrieval was determined either overtly via verbal output (Experiment 1) or covertly via motor responses (Experiment 2). Brain activation during FR was characterized by two types of performance-based parametric analyses of retrieval changes over time. First was the elongation in inter response time (IRT) assumed to represent the prolongation of memory search over time, as increased effort was needed. Using a derivative of this parameter in whole brain analysis revealed the default mode network (DMN): longer IRT within FR blocks correlated with less deactivation of the DMN, representing its greater recruitment. Second was the increased number of words retrieved in repeated encoding-recall cycles, assumed to represent the learning process. Using this parameter in whole brain analysis revealed increased deactivation in the DMN (i.e., less recruitment). Together our results demonstrate the naturally occurring dynamics in the recruitment of the DMN during utilization of internally generated processes during FR. The contrasting effects of increased and decreased recruitment of the DMN following dynamics in memory search and learning, respectively, supports the idea that with learning FR is less dependent on neural operations of internally-generated processes such as those initially needed for memory search. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. On the use of PGD for optimal control applied to automated fibre placement

    NASA Astrophysics Data System (ADS)

    Bur, N.; Joyot, P.

    2017-10-01

    Automated Fibre Placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity it involves many complexities related to the necessity of melting the thermoplastic at the tape-substrate interface, ensuring consolidation, which requires the diffusion of molecules, and controlling the build-up of residual stresses responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since that would require a plethora of trials/errors or numerical simulations, because many parameters are involved in the characterisation of the material and the process. Using reduced-order modelling, such as the so-called Proper Generalised Decomposition (PGD) method, allows the construction of multi-parametric solutions taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power knowing the temperature field by particularizing an abacus, we propose here an approach based on optimal control: we solve by PGD a dual problem derived from the heat equation and optimality criteria. To circumvent numerical issues due to an ill-conditioned system, we propose an algorithm based on Uzawa's method. In that way, we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we get both the temperature field and the required heat flux to reach a parametric optimal temperature on a given zone.

  4. Handling Input and Output for COAMPS

    NASA Technical Reports Server (NTRS)

    Fitzpatrick, Patrick; Tran, Nam; Li, Yongzuo; Anantharaj, Valentine

    2007-01-01

    Two suites of software have been developed to handle the input and output of the Coupled Ocean Atmosphere Prediction System (COAMPS), which is a regional atmospheric model developed by the Navy for simulating and predicting weather. Typically, the initial and boundary conditions for COAMPS are provided by a flat-file representation of the Navy's global model. Additional algorithms are needed for running the COAMPS software using global models. One of the present suites satisfies this need for running COAMPS using the Global Forecast System (GFS) model of the National Oceanic and Atmospheric Administration. The first step in running COAMPS, downloading GFS data from an Internet file-transfer-protocol (FTP) server computer of the National Centers for Environmental Prediction (NCEP), is performed by one of the programs (SSC-00273) in this suite. The GFS data, which are in gridded binary (GRIB) format, are then converted to a COAMPS-compatible format by another program in the suite (SSC-00278). Once a forecast is complete, still another program in the suite (SSC-00274) sends the output data to a different server computer. The second suite of software (SSC-00275) addresses the need to ingest up-to-date land-use-and-land-cover (LULC) data into COAMPS for use in specifying typical climatological values of such surface parameters as albedo, aerodynamic roughness, and ground wetness. This suite includes (1) a program to process LULC data derived from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Terra and Aqua satellites, (2) programs to derive new climatological parameters for the 17-land-use-category MODIS data; and (3) a modified version of a FORTRAN subroutine to be used by COAMPS. The MODIS data files are processed to reformat them into a compressed American Standard Code for Information Interchange (ASCII) format used by COAMPS for efficient processing.

  5. Applying Item Response Theory methods to design a learning progression-based science assessment

    NASA Astrophysics Data System (ADS)

    Chen, Jing

    Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1) how to use items in different formats to classify students into levels on the learning progression, (2) how to design a test to give good information about students' progress through the learning progression of a particular construct and (3) what characteristics of test items support their use for assessing students' levels. Data used for this study were collected from 1500 elementary and secondary school students during 2009--2010. The written assessment was developed in several formats such as the Constructed Response (CR) items, Ordered Multiple Choice (OMC) and Multiple True or False (MTF) items. The following are the main findings from this study. The OMC, MTF and CR items might measure different components of the construct. A single construct explained most of the variance in students' performances. However, additional dimensions in terms of item format can explain a certain amount of the variance in student performance. So additional dimensions need to be considered when we want to capture the differences in students' performances on different types of items targeting the understanding of the same underlying progression. Items in each item format need to be improved in certain ways to classify students more accurately into the learning progression levels. This study establishes some general steps that can be followed to design other learning progression-based tests as well. For example, first, the boundaries between levels on the IRT scale can be defined by using the means of the item thresholds across a set of good items. Second, items in multiple formats can be selected to achieve the information criterion at all the defined boundaries. This ensures the accuracy of the classification. Third, when item threshold parameters vary a bit, the scoring rubrics and the items need to be reviewed to make the threshold parameters similar across items. This is because one important design criterion of the learning progression-based items is that ideally, a student should be at the same level across items, which means that the item threshold parameters (d1, d2 and d3) should be similar across items. To design a learning progression-based science assessment, we need to understand whether the assessment measures a single construct or several constructs and how items are associated with the constructs being measured. Results from dimension analyses indicate that items of different carbon transforming processes measure different aspects of the carbon cycle construct. However, items of different practices assess the same construct. In general, there are high correlations among different processes or practices. It is not clear whether the strong correlations are due to the inherent links among these process/practice dimensions or due to the fact that the student sample does not show much variation in these process/practice dimensions. Future data are needed to examine the dimensionalities in terms of process/practice in detail. Finally, based on item characteristics analysis, recommendations are made to write more discriminative CR items and better OMC and MTF options. Item writers can follow these recommendations to write better learning progression-based items.
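
    The boundary-setting step described above (level boundaries as the means of item thresholds, then classification of ability estimates) can be sketched as follows; the threshold values and ability estimates are invented for illustration.

```python
import numpy as np

# Hypothetical item threshold parameters (d1, d2, d3) on the IRT (logit) scale
# for a handful of items; in the study these would come from a fitted model of
# the OMC/MTF/CR items.
item_thresholds = np.array([
    [-1.2, 0.1, 1.3],
    [-0.9, 0.3, 1.6],
    [-1.4, -0.1, 1.1],
    [-1.0, 0.2, 1.4],
])

# Level boundaries defined as the mean of each threshold across the item set.
boundaries = item_thresholds.mean(axis=0)

def classify(theta, boundaries):
    """Map an ability estimate theta to a learning-progression level (1..4)."""
    return int(np.searchsorted(boundaries, theta)) + 1

for theta in [-1.5, -0.3, 0.8, 2.0]:
    print(f"theta = {theta:+.1f} -> level {classify(theta, boundaries)}")
```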

  6. Diagnostics for a waste processing plasma arc furnace (invited) (abstract)

    NASA Astrophysics Data System (ADS)

    Woskov, P. P.

    1995-01-01

    Maintaining the quality of our environment has become an important goal of society. As part of this goal new technologies are being sought to clean up hazardous waste sites and to treat ongoing waste streams. A 1 MW pilot scale dc graphite electrode plasma arc furnace (Mark II) has been constructed at MIT under a joint program among Pacific Northwest Laboratory (PNL), MIT, and Electro-Pyrolysis, Inc. (EPI) for the remediation of buried wastes in the DOE complex. A key part of this program is the development of new and improved diagnostics to study, monitor, and control the entire waste remediation process for the optimization of this technology and to safeguard the environment. Continuous, real time diagnostics are needed for a variety of the waste process parameters. These parameters include internal furnace temperatures, slag fill levels, trace metals content in the off-gas stream, off-gas molecular content, feed and slag characterization, and off-gas particulate size, density, and velocity distributions. Diagnostics are currently being tested at MIT for the first three parameters. An active millimeter-wave radiometer with a novel, rotatable graphite waveguide/mirror antenna system has been implemented on Mark II for the measurement of surface emission and emissivity which can be used to determine internal furnace temperatures and fill levels. A microwave torch plasma is being evaluated for use as an excitation source in the furnace off-gas stream for continuous atomic emission spectroscopy of trace metals. These diagnostics should find applicability not only to waste remediation, but also to other high temperature processes such as incinerators, power plants, and steel plants.

  7. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    PubMed

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    The determination in a sample of the activity concentration of a specific radionuclide by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to experimental calibration make it advisable to have alternative methods for FEPE determination, such as simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the detector characterization parameters through a computational procedure that could be reproduced at a standard research lab. This method consists in the determination of the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.
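
    A simplified version of the described optimization loop is sketched below, with scipy's differential evolution standing in for the evolutionary algorithm and a smooth placeholder function standing in for the Monte Carlo photon-transport model; the parameter names, bounds, and efficiency model are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

energies = np.array([60.0, 122.0, 662.0, 1173.0, 1332.0])   # keV

# Placeholder forward model: efficiency as a smooth function of crystal radius
# and length (cm). In the real procedure this would be a Monte Carlo transport
# run for the candidate detector geometry.
def simulated_fepe(params, E):
    radius, length = params
    volume = np.pi * radius**2 * length
    return (1.0 - np.exp(-0.004 * volume)) * (E / 100.0) ** -0.6

# Reference efficiencies, e.g. measured with calibrated point sources.
rng = np.random.default_rng(4)
reference = simulated_fepe([3.1, 5.4], energies) * (1 + 0.01 * rng.standard_normal(5))

def misfit(params):
    return np.sum((simulated_fepe(params, energies) - reference) ** 2)

# Evolutionary search for the geometric parameters that reproduce the references.
result = differential_evolution(misfit, bounds=[(2.0, 4.5), (3.0, 8.0)], seed=0)
print("recovered radius, length:", result.x)
```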

  8. Analysis of Zinc Oxide Thin Films Synthesized by Sol-Gel via Spin Coating

    NASA Astrophysics Data System (ADS)

    Wolgamott, Jon Carl

    Transparent conductive oxides are gaining an increasingly important role in optoelectronic devices such as solar cells. Doped zinc oxide is a candidate as a low cost and nontoxic alternative to tin doped indium oxide. Lab results have shown that both n-type and p-type zinc oxide can be created on a small scale. This can allow zinc oxide to be used as either an electrode or a buffer layer to increase efficiency and protect the active layer in solar cells. Sol-gel synthesis is emerging as a low temperature, low cost, and resource efficient alternative for producing transparent conducting oxides such as zinc oxide. For sol-gel derived zinc oxide thin films to reach their potential, research in this topic must continue to optimize the known processing parameters and expand to new parameters to tighten control and create novel processing techniques that improve performance. The processing parameters of drying and annealing temperatures as well as cooling rate were analyzed to determine their effect on the structure of the prepared zinc oxide thin films. Preliminary tests were also done to modify the sol-gel process to include silver as a dopant to produce a p-type thin film. The results from this work show that the pre- and post-heating temperatures as well as the cooling rate each play their own unique role in the crystallization of the film. Results from silver doping show that more work needs to be done to create a sol-gel derived p-type zinc oxide thin film.

  9. Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC)

    NASA Astrophysics Data System (ADS)

    Dethloff, Klaus; Rex, Markus; Shupe, Matthew

    2016-04-01

    The Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) is an international initiative under the International Arctic Science Committee (IASC) umbrella that aims to improve numerical model representations of sea ice, weather, and climate processes through coupled system observations and modeling activities that link the central Arctic atmosphere, sea ice, ocean, and the ecosystem. Observations of many critical parameters such as cloud properties, surface energy fluxes, atmospheric aerosols, small-scale sea-ice and oceanic processes, biological feedbacks with the sea ice and ocean, and others have never been made in the central Arctic in all seasons, and certainly not in a coupled system fashion. The primary objective of MOSAiC is to develop a better understanding of these important coupled-system processes so they can be more accurately represented in regional- and global-scale weather- and climate models. Such enhancements will contribute to improved modeling of global climate and weather, and Arctic sea-ice predictive capabilities. The MOSAiC observations are an important opportunity to gather the high quality and comprehensive observations needed to improve numerical modeling of critical, scale-dependent processes impacting Arctic predictability given diminished sea ice coverage and increased model complexity. Model improvements are needed to understand the effects of a changing Arctic on mid-latitude weather and climate. MOSAiC is specifically designed to provide the multi-parameter, coordinated observations needed to improve sub-grid scale model parameterizations especially with respect to thinner ice conditions. To facilitate, evaluate, and develop the needed model improvements, MOSAiC will employ a hierarchy of modeling approaches ranging from process model studies, to regional climate model intercomparisons, to operational forecasts and assimilation of real-time observations. Model evaluations prior to the field program will be used to identify specific gaps and parameterization needs. Preliminary modeling and operational forecasting will also be necessary to directly guide field planning and optimal implementation of field resources, and to support the safety of the project. The MOSAiC Observatory will be deployed in, and drift with, the Arctic sea-ice pack for at least a full annual cycle, starting in fall 2019 and ending in autumn 2020. Initial plans are for the drift to start in the newly forming autumn sea-ice in, or near, the East Siberian Sea. The specific location will be selected to allow for the observatory to follow the Transpolar Drift towards the North Pole and on to the Fram Strait. IASC has adopted MOSAiC as a key international activity, the German Alfred Wegener Institute has made the huge contribution of the icebreaker Polarstern to serve as the central drifting observatory for this year long endeavor, and the US Department of Energy has committed a comprehensive atmospheric measurement suite. Many other nations and agencies have expressed interest in participation and in gaining access to this unprecedented observational dataset. International coordination is needed to support this groundbreaking endeavor.

  10. Understanding Climate Uncertainty with an Ocean Focus

    NASA Astrophysics Data System (ADS)

    Tokmakian, R. T.

    2009-12-01

    Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth's climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today's models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and methods to quantify the reliability of a simulation. If quantitative uncertainty questions of the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007]. The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in ocean circulation due to parameter specification will be described, and early results using the ocean/ice components of the CCSM climate model in a designed experiment framework will be shown. Cox, P. and D. Stephenson, Climate Change: A Changing Climate for Prediction, 2007, Science 317 (5835), 207, DOI: 10.1126/science.1145956. Rougier, J. C., 2007: Probabilistic Inference for Future Climate Using an Ensemble of Climate Model Evaluations, Climatic Change, 81, 247-264. Smith L., 2002, What might we learn from climate forecasts? Proc. Nat'l Academy of Sciences, Vol. 99, suppl. 1, 2487-2492 doi:10.1073/pnas.012580599.
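
    As a concrete illustration of the designed-experiment idea described above, the sketch below builds a small Latin hypercube design over a few ocean-model parameters and scales it to physical ranges; the parameter names and ranges are hypothetical stand-ins, not values from the CCSM experiments.

    ```python
    import numpy as np

    # Hypothetical ocean-model parameters and plausible ranges (illustrative only)
    param_ranges = {
        "vertical_mixing_coeff": (0.1, 1.0),        # cm^2/s
        "isopycnal_diffusivity": (300.0, 1200.0),   # m^2/s
        "ice_albedo": (0.50, 0.70),
    }

    def latin_hypercube(n_samples, n_dims, rng):
        """Simple Latin hypercube sample in the unit cube."""
        # One stratified sample per interval, then shuffle the strata per dimension
        cut = np.linspace(0.0, 1.0, n_samples + 1)
        u = rng.uniform(size=(n_samples, n_dims))
        samples = cut[:n_samples, None] + u * (1.0 / n_samples)
        for d in range(n_dims):
            rng.shuffle(samples[:, d])
        return samples

    rng = np.random.default_rng(0)
    unit = latin_hypercube(n_samples=20, n_dims=len(param_ranges), rng=rng)

    # Scale the unit-cube design to the physical parameter ranges
    lo = np.array([r[0] for r in param_ranges.values()])
    hi = np.array([r[1] for r in param_ranges.values()])
    design = lo + unit * (hi - lo)

    for run_id, row in enumerate(design):
        # each row would define the parameter settings for one ensemble member
        print(run_id, dict(zip(param_ranges.keys(), np.round(row, 3))))
    ```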

  11. Analysis and improved design considerations for airborne pulse Doppler radar signal processing in the detection of hazardous windshear

    NASA Technical Reports Server (NTRS)

    Lee, Jonggil

    1990-01-01

    High resolution windspeed profile measurements are needed to provide reliable detection of hazardous low altitude windshear with an airborne pulse Doppler radar. The system phase noise in a Doppler weather radar may degrade the spectrum moment estimation quality and the clutter cancellation capability which are important in windshear detection. Also the bias due to weather return Doppler spectrum skewness may cause large errors in pulse pair spectral parameter estimates. These effects are analyzed for the improvement of an airborne Doppler weather radar signal processing design. A method is presented for the direct measurement of windspeed gradient using low pulse repetition frequency (PRF) radar. This spatial gradient is essential in obtaining the windshear hazard index. As an alternative, the modified Prony method is suggested as a spectrum mode estimator for both the clutter and weather signal. Estimation of Doppler spectrum modes may provide the desired windshear hazard information without the need of any preliminary processing requirement such as clutter filtering. The results obtained by processing a NASA simulation model output support consideration of mode identification as one component of a windshear detection algorithm.
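
    The pulse-pair spectral parameter estimates discussed above are built from the lag-one autocorrelation of the received samples; the sketch below is a generic textbook pulse-pair estimator applied to synthetic I/Q data, not the report's airborne signal-processing design.

    ```python
    import numpy as np

    def pulse_pair_moments(iq, prt, wavelength):
        """Estimate mean Doppler velocity and spectrum width from complex I/Q samples.

        iq         : 1-D array of complex samples from one range gate
        prt        : pulse repetition time [s]
        wavelength : radar wavelength [m]
        """
        # Lag-0 and lag-1 autocorrelation estimates
        r0 = np.mean(np.abs(iq) ** 2)
        r1 = np.mean(iq[1:] * np.conj(iq[:-1]))

        scale = wavelength / (4.0 * np.pi * prt)
        velocity = scale * np.angle(r1)                       # mean radial velocity
        width = scale * np.sqrt(2.0 * max(np.log(r0 / np.abs(r1)), 0.0))  # spectrum width
        return velocity, width

    # Synthetic example: a scatterer at +8 m/s plus a little noise
    prt, wavelength = 1e-3, 0.1
    t = np.arange(64) * prt
    rng = np.random.default_rng(1)
    iq = np.exp(1j * 4 * np.pi * 8.0 * t / wavelength) + 0.1 * (
        rng.standard_normal(64) + 1j * rng.standard_normal(64))
    print(pulse_pair_moments(iq, prt, wavelength))
    ```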

  12. Process wastewater treatability study for Westinghouse fluidized-bed coal gasification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winton, S.L.; Buvinger, B.J.; Evans, J.M.

    1983-11-01

    In the development of a synthetic fuels facility, water usage and wastewater treatment are major areas of concern. Coal gasification processes generally produce relatively large volumes of gas condensates. These wastewaters are typically composed of a variety of suspended and dissolved organic and inorganic solids and dissolved gaseous contaminants. Fluidized-bed coal gasification (FBG) processes are no exception to this rule. The Department of Energy's Morgantown Energy Technology Center (METC), the Gas Research Institute (GRI), and the Environmental Protection Agency (EPA/IERLRTP) recognized the need for a FBG treatment program to provide process design data for FBG wastewaters during the environmental, health, and safety characterization of the Westinghouse Process Development Unit (PDU). In response to this need, METC developed conceptual designs and a program plan to obtain process design and performance data for treating wastewater from commercial-scale Westinghouse-based synfuels plants. As a result of this plan, METC, GRI, and EPA entered into a joint program to develop performance data, design parameters, conceptual designs, and cost estimates for treating wastewaters from a FBG plant. Wastewater from the Westinghouse PDU consists of process quench and gas cooling condensates which are similar to those produced by other FBG processes such as U-Gas, and entrained-bed gasification processes such as Texaco. Therefore, wastewater from this facility was selected as the basis for this study. This paper outlines the current program for developing process design and cost data for the treatment of these wastewaters.

  13. Mathematical Modeling of Ammonia Electro-Oxidation on Polycrystalline Pt Deposited Electrodes

    NASA Astrophysics Data System (ADS)

    Diaz Aldana, Luis A.

    The ammonia electrolysis process has been proposed as a feasible way for electrochemical generation of fuel grade hydrogen (H2). Ammonia is identified as one of the most suitable energy carriers due to its high hydrogen density, and its safe and efficient distribution chain. Moreover, the fact that this process can be applied even at low ammonia concentration feedstock opens its application to wastewater treatment along with H2 co-generation. In the ammonia electrolysis process, ammonia is electro-oxidized on the anode side to produce N2 while H2 is evolved from water reduction at the cathode. A thermodynamic energy requirement of just five percent of the energy used in hydrogen production from water electrolysis is expected for ammonia electrolysis. However, the absence of a complete understanding of the reaction mechanism and kinetics involved in ammonia electro-oxidation has not yet allowed the full commercialization of this process. For that reason, a kinetic model that can be trusted in the design and scale up of the ammonia electrolyzer needs to be developed. This research focused on the elucidation of the reaction mechanism and kinetic parameters for ammonia electro-oxidation. The definition of the most relevant elementary reaction steps was obtained through the parallel analysis of experimental data and the development of a mathematical model of ammonia electro-oxidation in a well-defined hydrodynamic system, such as the rotating disk electrode (RDE). Ammonia electro-oxidation to N2 as the final product was concluded to be a slow surface-confined process where parallel reactions leading to the deactivation of the catalyst are present. Through the development of this work it was possible to define a reaction mechanism and values for the kinetic parameters for ammonia electro-oxidation that allow an accurate representation of the experimental observations on an RDE system. Additionally, the validity of the reaction mechanism and kinetic parameters was supplemented by means of process scale up, performance evaluation, and hydrodynamic analysis in a flow cell electrolyzer. An adequate simulation of the flow electrolyzer performance was accomplished using the obtained kinetic parameters.

  14. Experimental Investigation and Optimization of Response Variables in WEDM of Inconel - 718

    NASA Astrophysics Data System (ADS)

    Karidkar, S. S.; Dabade, U. A.

    2016-02-01

    Effective utilisation of Wire Electrical Discharge Machining (WEDM) technology is a challenge for modern manufacturing industries. New materials with higher strengths and capabilities are continually being developed to fulfil customer needs. Inconel 718 is one such material, extensively used in aerospace applications, such as gas turbines, rocket motors, and spacecraft, as well as in nuclear reactors and pumps. This paper deals with the experimental investigation of optimal machining parameters in WEDM for surface roughness, kerf width and dimensional deviation using design of experiments (DoE) based on the Taguchi methodology with an L9 orthogonal array. With the peak current kept constant at 70 A, the effects of the other process parameters on the above response variables were analysed. The experimental results obtained were statistically analysed using Minitab-16 software. Analysis of Variance (ANOVA) shows pulse on time as the most influential parameter followed by wire tension, whereas spark gap set voltage is observed to be a non-influential parameter. The multi-objective optimization technique Grey Relational Analysis (GRA) identifies the optimal machining parameters as a pulse on time of 108 machine units, a spark gap set voltage of 50 V and a wire tension of 12 gm for the response variables considered in the experimental analysis.
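
    The Grey Relational Analysis step used for the multi-objective optimization reduces to a few array operations: normalize each response (smaller-the-better here), compute grey relational coefficients against the ideal sequence, and rank runs by their mean grade. The sketch below uses made-up responses for nine runs, not the measured WEDM data.

    ```python
    import numpy as np

    # Hypothetical responses for 9 runs: [surface roughness, kerf width, dimensional deviation]
    responses = np.array([
        [2.1, 0.32, 0.040], [1.8, 0.30, 0.035], [2.4, 0.34, 0.050],
        [1.6, 0.29, 0.030], [2.0, 0.31, 0.042], [2.2, 0.33, 0.045],
        [1.7, 0.28, 0.033], [1.9, 0.30, 0.038], [2.3, 0.35, 0.048],
    ])

    # Smaller-the-better normalization to [0, 1]
    norm = (responses.max(axis=0) - responses) / (responses.max(axis=0) - responses.min(axis=0))

    # Grey relational coefficients (distinguishing coefficient zeta = 0.5)
    zeta = 0.5
    delta = np.abs(1.0 - norm)                        # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    # Grey relational grade = mean coefficient across responses; higher is better
    grade = coeff.mean(axis=1)
    print("best run:", int(np.argmax(grade)) + 1, "grades:", np.round(grade, 3))
    ```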

  15. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
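
    For reference, the plain (global, Gaussian) stochastic ensemble Kalman filter analysis step that the localization and GMM clustering strategies modify can be written in a few lines; the sketch below updates a vector of unknowns from a handful of observations and does not include the localization, clustering, block-updating or dimension-reduction elements contributed by the study.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
        """Stochastic EnKF analysis step.

        ensemble     : (n_state, n_members) array of parameter realizations
        obs          : (n_obs,) observation vector
        obs_operator : (n_obs, n_state) linearized observation operator H
        obs_err_std  : observation error standard deviation
        """
        n_state, n_members = ensemble.shape
        A = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
        HA = obs_operator @ A                                  # predicted-observation anomalies

        P_xy = A @ HA.T / (n_members - 1)                      # cross covariance
        P_yy = HA @ HA.T / (n_members - 1) + obs_err_std**2 * np.eye(len(obs))
        K = P_xy @ np.linalg.inv(P_yy)                         # Kalman gain

        # Perturb observations for each member (stochastic EnKF)
        obs_pert = obs[:, None] + obs_err_std * rng.standard_normal((len(obs), n_members))
        innovation = obs_pert - obs_operator @ ensemble
        return ensemble + K @ innovation

    rng = np.random.default_rng(0)
    ens = rng.normal(size=(100, 50))                           # 100 unknowns, 50 members
    H = np.zeros((5, 100)); H[np.arange(5), [3, 20, 41, 67, 88]] = 1.0
    y = rng.normal(size=5)
    updated = enkf_update(ens, y, H, obs_err_std=0.1, rng=rng)
    ```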

  16. Application of all relevant feature selection for failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.

    2015-07-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge of the physical components of the climate; nevertheless, they contain several parameters that are too weakly constrained by observations and can potentially lead to a crash of the simulation. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to a crash of the simulation, and hence which processes described by these parameters need refined analysis. In the current study we reanalyse the dataset used in this research using a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for predicting crashes are indeed strongly relevant, three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence on both the accuracy of predictions and the relative importance of variables; hence only a cross-validated approach can deliver robust estimates of performance and variable relevance.
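
    The point about split-to-split variance in variable importance can be illustrated with any cross-validated importance measure; the sketch below uses random-forest importances over several train/validation splits of synthetic data, as a stand-in for (not a reimplementation of) the all-relevant feature selection procedure used in the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(0)
    n, p = 500, 8
    X = rng.normal(size=(n, p))
    # Only the first three variables actually drive the outcome (synthetic stand-in)
    y = (X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

    importances = []
    for train_idx, _ in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
        model = RandomForestClassifier(n_estimators=300, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        importances.append(model.feature_importances_)

    importances = np.array(importances)
    # Spread across folds shows how importance rankings vary with the data split
    print("mean:", np.round(importances.mean(axis=0), 3))
    print("std :", np.round(importances.std(axis=0), 3))
    ```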

  17. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against the observations in order to obtain good modeling performance. A parameter calibration tool built specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool, PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates that are specifically for WRF-Hydro model calibration and uncertainty analysis. Here we will present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.

  18. Exploiting mAb structure characteristics for a directed QbD implementation in early process development.

    PubMed

    Karlberg, Micael; von Stosch, Moritz; Glassey, Jarka

    2018-03-07

    In today's biopharmaceutical industries, developing and producing a new monoclonal antibody takes years before it can be launched commercially. The reasons lie in the complexity of monoclonal antibodies and the need for high product quality to ensure clinical safety, which has a significant impact on the process development time. Frameworks such as quality by design are becoming widely used by the pharmaceutical industries as they introduce a systematic approach for building quality into the product. However, full implementation of quality by design has still not been achieved, mainly because of limited risk assessment of product properties as well as the large number of process factors affecting product quality that need to be investigated during process development. This has introduced a need for better methods and tools that can be used for early risk assessment and prediction of critical product properties and process factors to enhance process development and reduce costs. In this review, we investigate how the quantitative structure-activity relationships framework can be applied to an existing process development framework such as quality by design in order to increase product understanding based on the protein structure of monoclonal antibodies. Compared to quality by design, where the effects of process parameters on the drug product are explored, quantitative structure-activity relationships give a reversed perspective, investigating how the protein structure can affect performance in different unit operations. This provides valuable information that can be used during the early process development of new drug products where limited process understanding is available. Thus, the quantitative structure-activity relationships methodology is explored and explained in detail, and we investigate the means of directly linking the structural properties of monoclonal antibodies to process data. The resulting information, used as a decision tool, can help to enhance the risk assessment to better aid process development and thereby overcome some of the limitations and challenges present in QbD implementation today.

  19. Stent manufacturing using cobalt chromium molybdenum (CoCrMo) by selective laser melting technology

    NASA Astrophysics Data System (ADS)

    Omar, Mohd Asnawi; Baharudin, BT-HT; Sulaiman, S.

    2017-12-01

    This paper reviews the capabilities of additive manufacturing (AM) technology and its use for cobalt superalloy stent fabrication, looking at the dimensional accuracy and mechanical properties of the stent. The current conventional route involves many process steps, which affect the supply chain, costing, and post-processing. By switching to AM, the number of production steps can be minimized and the stent can be customized according to the patient's needs. The proposed methodology is well suited to this, as surgeons need an accurately sized stent during implantation. It can also reduce time-to-market delivery from days to a matter of hours. The stent model was taken from a third-party vendor and flow optimization was carried out using Materialise Magics™ software. Using an SLM125™ printer, printing parameters such as Energy Density (DE), Laser Power (PL), Scanning Speed (SS) and Hatching Distance (DH) were used to print the stent. The properties of the finished product, such as strength, surface finish and orientation, were investigated.
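
    The listed printing parameters are commonly combined into a single volumetric energy density figure; one frequently used form, which also involves the layer thickness, is shown below (the expression and the numbers are illustrative assumptions, since the abstract does not state which relation was applied).

    ```python
    def volumetric_energy_density(laser_power_w, scan_speed_mm_s, hatch_mm, layer_mm):
        """Commonly used SLM volumetric energy density E = P / (v * h * t), in J/mm^3."""
        return laser_power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

    # Illustrative values only (not the study's actual SLM125 settings)
    print(volumetric_energy_density(laser_power_w=100.0, scan_speed_mm_s=600.0,
                                    hatch_mm=0.08, layer_mm=0.03))  # ~69 J/mm^3
    ```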

  20. Fast Printing and In-Situ Morphology Observation of Organic Photovoltaics using Slot-Die Coating

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Ferdous, Sunzida; Wang, Cheng; Hexamer, Alexander; Russell, Thomas; Cheng Wang Collaboration; Thomas Russell Team

    2014-03-01

    The solvent-processibility of polymer semiconductors is a key advantage for the fabrication of large area, organic bulk-heterojunction (BHJ) photovoltaic devices. Most reported power conversion efficiencies (PCE) are based on small active areas, fabricated by spin-coating technique. In general, this does not reflect device fabrication in an industrial setting. To realize commercial viability, devices need to be fabricated in a roll-to-roll fashion. The evolution of the morphology associated with different processing parameters, like solvent choice, concentration and temperature, needs to be understood and controlled. We developed a mini slot-die coater, to fabricate BHJ devices using various low band gap polymers mixed with phenyl-C71-butyric acid methyl ester (PCBM). Solvent choice, processing additives, coating rate and coating temperatures were used to control the final morphology. Efficiencies comparable to lab-setting spin-coated devices are obtained. The evolution of the morphology was monitored by in situ scattering measurements, detecting the onset of the polymer chain packing in solution that led to the formation of a fibrillar network in the film.

  1. Clustering analysis of moving target signatures

    NASA Astrophysics Data System (ADS)

    Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto

    2010-04-01

    Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the cluster algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the image processing, pixel labeling procedure. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
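
    A common way to realize the knee-point idea is to sweep the number of clusters, record the within-cluster dispersion, and take the knee as the point farthest from the chord joining the ends of that curve; the sketch below applies this heuristic with k-means to synthetic detections and is only an illustrative stand-in for the authors' KP and RPF algorithms.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def knee_point(k_values, scores):
        """Index of the 'knee': point with maximum distance from the chord
        connecting the first and last points of the (k, score) curve."""
        k = np.asarray(k_values, dtype=float)
        s = np.asarray(scores, dtype=float)
        # Normalize both axes so the geometry is scale-independent
        k = (k - k.min()) / (k.max() - k.min())
        s = (s - s.min()) / (s.max() - s.min())
        p1, p2 = np.array([k[0], s[0]]), np.array([k[-1], s[-1]])
        chord = (p2 - p1) / np.linalg.norm(p2 - p1)
        dists = []
        for ki, si in zip(k, s):
            v = np.array([ki, si]) - p1
            dists.append(np.linalg.norm(v - np.dot(v, chord) * chord))  # perpendicular distance
        return int(np.argmax(dists))

    # Synthetic target detections around three movers
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(c, 0.3, size=(40, 2)) for c in ([0, 0], [4, 1], [2, 5])])

    ks = list(range(1, 9))
    inertia = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(pts).inertia_ for k in ks]
    print("estimated number of clusters:", ks[knee_point(ks, inertia)])
    ```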

  2. Individual Learning Route as a Way of Highly Qualified Specialists Training for Extraction of Solid Commercial Minerals Enterprises

    NASA Astrophysics Data System (ADS)

    Oschepkova, Elena; Vasinskaya, Irina; Sockoluck, Irina

    2017-11-01

    In view of the changing educational paradigm (the adoption of a two-tier system of higher education comprising undergraduate and graduate programs), a need arises to use modern learning and information and communications technologies, putting into practice learner-centered approaches in the training of highly qualified specialists for enterprises that extract and process solid commercial minerals. Given unstable market demand and a changeable institutional environment on the one hand, and the need to balance workload, supply conditions and product quality as mining and geological parameters change on the other, mining enterprises have to introduce and develop integrated management of product, information and logistic flows under a unified management system. One of the main limitations holding back this development at Russian mining enterprises is a lack of staff competence at all levels of logistics management. Under present-day conditions, such enterprises need highly qualified specialists who can conduct self-directed research and develop new, and improve existing, technologies for organizing, planning and managing the technical operation and commercial exploitation of transport and processing facilities based on logistics. A learner-centered approach and individualization of the learning process necessitate the design of an individual learning route (ILR), which can help students realize their professional potential in line with the requirements placed on specialists by these enterprises.

  3. Integrated processes for expansion and differentiation of human pluripotent stem cells in suspended microcarriers cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Alan Tin-Lun, E-mail: alan_lam@bti.a-star.edu.sg; Chen, Allen Kuan-Liang; Ting, Sherwin Qi-Peng

    Current methods for human pluripotent stem cell (hPSC) expansion and differentiation can be limited in scalability and costly (due to their labor-intensive nature). This can limit their use in cell therapy, drug screening and toxicity assays. One of the approaches that can overcome these limitations is microcarrier (MC) based cultures, in which cells are expanded as cell/MC aggregates and then directly differentiated as embryoid bodies (EBs) in the same agitated reactor. This integrated process can be scaled up and eliminates the need for some of the culture manipulation used in common monolayer and EB cultures. This review describes the principles of such a microcarrier-based integrated hPSC expansion and differentiation process, and the parameters that can affect its efficiency (such as MC type and extracellular matrix protein coatings, cell/MC aggregate size, and agitation). Finally, examples of integrated processes for generating cardiomyocytes (CM) and neural progenitor cells (NPC), as well as challenges to be solved, are described. - Highlights: • Expansion of hPSC on microcarriers. • Differentiation of hPSC on microcarriers. • Parameters that can affect the expansion and differentiation of hPSC on microcarriers. • Integration of expansion and differentiation of hPSC on microcarriers in one unit operation.

  4. Microstructure based procedure for process parameter control in rolling of aluminum thin foils

    NASA Astrophysics Data System (ADS)

    Johannes, Kronsteiner; Kabliman, Evgeniya; Klimek, Philipp-Christoph

    2018-05-01

    In the present work, a microstructure-based procedure is used for a numerical prediction of strength properties for Al-Mg-Sc thin foils during a hot rolling process. For this purpose, the following techniques were developed and implemented. First, a toolkit for numerical analysis of experimental stress-strain curves obtained during hot compression testing in a deformation dilatometer was developed. The implemented techniques allow for correction of the temperature increase in the samples due to adiabatic heating and for determination of the yield strength needed to separate the elastic and plastic deformation regimes during numerical simulation of multi-pass hot rolling. At the next step, an asymmetric Hot Rolling Simulator (adjustable table inlet/outlet height as well as separate roll infeed) was developed in order to match the exact processing conditions of a semi-industrial rolling procedure. At each element of the finite element mesh the total strength is calculated by an in-house flow stress model based on the evolution of the mean dislocation density. The strength values obtained by numerical modelling were found to be in reasonable agreement with the results of tensile tests on thin Al-Mg-Sc foils. Thus, the proposed simulation procedure might allow the processing parameters to be optimized with respect to the microstructure development.
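
    The in-house flow stress model itself is not given in the abstract, but dislocation-density-based models of this type typically couple a Kocks-Mecking evolution law with a Taylor-type strength relation; the sketch below integrates such a generic law over a strain path, with all coefficients hypothetical and chosen only for illustration.

    ```python
    import numpy as np

    # Generic Kocks-Mecking / Taylor hardening sketch (all coefficients hypothetical)
    alpha, M, G, b = 0.3, 3.06, 26e3, 2.86e-7      # G in MPa, b in mm
    sigma0 = 40.0                                   # lattice friction stress, MPa
    k1, k2 = 1.0e5, 12.0                            # storage and recovery coefficients

    def flow_stress(strain, rho0=1e6, n_steps=2000):
        """Integrate d(rho)/d(eps) = k1*sqrt(rho) - k2*rho and return sigma(eps) in MPa."""
        eps = np.linspace(0.0, strain, n_steps)
        d_eps = eps[1] - eps[0]
        rho = rho0                                  # dislocation density, mm^-2
        for _ in eps[1:]:
            rho += (k1 * np.sqrt(rho) - k2 * rho) * d_eps
        return sigma0 + alpha * M * G * b * np.sqrt(rho)   # Taylor-type strength

    for e in (0.05, 0.2, 0.5):
        print(f"eps = {e:.2f}: sigma ~ {flow_stress(e):.0f} MPa")
    ```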

  5. Shortcuts to adiabatic passage for the generation of a maximal Bell state and W state in an atom–cavity system

    NASA Astrophysics Data System (ADS)

    Lu, Mei; Chen, Qing-Qin

    2018-05-01

    We propose an efficient scheme to generate maximally entangled states between two three-level atoms in a cavity quantum electrodynamics (QED) system based on shortcuts to adiabatic passage. In the accelerated scheme, there is no need to design a time-varying coupling coefficient for the cavity. We only need to carefully design time-dependent lasers to drive the system into the desired entangled states. By controlling the detuning between the cavity mode and the lasers, we deduce an explicit analytical formula for this quantum information processing task. The lasers do not need to distinguish which atom is to be affected; therefore the experimental implementation is simpler. The method is also generalized to generate a W state. Moreover, the accelerated scheme can be extended to a multi-body system, and an analytical solution in a higher-dimensional system can be achieved. The influence of decoherence and of parameter variations is discussed by numerical simulation. The results show that the maximally entangled states can be prepared in a short time with high fidelity and are robust against both parameter fluctuations and dissipation. Our study enriches the physics and applications of multi-particle quantum entanglement preparation via shortcuts to adiabatic passage in cavity QED.

  6. Sensorimotor Training in Virtual Reality: A Review

    PubMed Central

    Adamovich, Sergei V.; Fluet, Gerard G.; Tunik, Eugene; Merians, Alma S.

    2010-01-01

    Recent experimental evidence suggests that rapid advancement of virtual reality (VR) technologies has great potential for the development of novel strategies for sensorimotor training in neurorehabilitation. We discuss what the adaptive and engaging virtual environments can provide for massive and intensive sensorimotor stimulation needed to induce brain reorganization. Second, discrepancies between the veridical and virtual feedback can be introduced in VR to facilitate activation of targeted brain networks, which in turn can potentially speed up the recovery process. Here we review the existing experimental evidence regarding the beneficial effects of training in virtual environments on the recovery of function in the areas of gait, upper extremity function and balance, in various patient populations. We also discuss possible mechanisms underlying these effects. We feel that future research in the area of virtual rehabilitation should follow several important paths. Imaging studies to evaluate the effects of sensory manipulation on brain activation patterns and the effect of various training parameters on long term changes in brain function are needed to guide future clinical inquiry. Larger clinical studies are also needed to establish the efficacy of sensorimotor rehabilitation using VR approaches in various clinical populations and most importantly, to identify VR training parameters that are associated with optimal transfer into real-world functional improvements. PMID:19713617

  7. Parameter Balancing in Kinetic Models of Cell Metabolism†

    PubMed Central

    2010-01-01

    Kinetic modeling of metabolic pathways has become a major field of systems biology. It combines structural information about metabolic pathways with quantitative enzymatic rate laws. Some of the kinetic constants needed for a model could be collected from ever-growing literature and public web resources, but they are often incomplete, incompatible, or simply not available. We address this lack of information by parameter balancing, a method to complete given sets of kinetic constants. Based on Bayesian parameter estimation, it exploits the thermodynamic dependencies among different biochemical quantities to guess realistic model parameters from available kinetic data. Our algorithm accounts for varying measurement conditions in the input data (pH value and temperature). It can process kinetic constants and state-dependent quantities such as metabolite concentrations or chemical potentials, and uses prior distributions and data augmentation to keep the estimated quantities within plausible ranges. An online service and free software for parameter balancing with models provided in SBML format (Systems Biology Markup Language) is accessible at www.semanticsbml.org. We demonstrate its practical use with a small model of the phosphofructokinase reaction and discuss its possible applications and limitations. In the future, parameter balancing could become an important routine step in the kinetic modeling of large metabolic networks. PMID:21038890

  8. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling oblique ionograms plays an important role in obtaining the ionospheric structure at the midpoint of an oblique sounding path. The paper proposes an automatic scaling method to extract the trace and parameters of an oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and the Es layer, and include maximum observation frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing techniques, and echo characteristics to determine the best-fit values of seven parameters and the initial values of the three QP-model parameters, which set up the search spaces needed as input to the HGA. The HGA then searches the three parameters' best-fit values within their search spaces based on the fitness between the synthesized trace and the real trace. In order to verify the performance of the method, 240 oblique ionograms are scaled and their results are compared with manual scaling results and the inversion results of the corresponding vertical ionograms. The comparison results show that the scaling results are accurate or at least adequate 60-90% of the time.
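
    The quasi-parabolic layer used to synthesize the trace has a closed-form electron density profile; the sketch below implements the standard QP expression with illustrative layer parameters (not values retrieved from the ionograms), which is the kind of forward model the HGA fitness evaluation relies on.

    ```python
    import numpy as np

    R_E = 6371.0  # Earth radius, km

    def qp_density(r, foF2=8.0, hm=300.0, ym=100.0):
        """Quasi-parabolic electron density profile N_e(r) in electrons/m^3.

        r        : geocentric radius [km]
        hm, ym   : layer peak height and semi-thickness [km]
        Standard QP form: N = Nm * (1 - ((r - rm)/ym)^2 * (rb/r)^2) inside the layer,
        usually applied between the layer base rb and the peak rm (bottomside).
        """
        Nm = 1.24e10 * foF2**2          # peak density from the critical frequency (MHz)
        rm = R_E + hm                   # radius of the layer peak
        rb = rm - ym                    # radius of the layer base
        shape = 1.0 - ((r - rm) / ym) ** 2 * (rb / r) ** 2
        return np.where(shape > 0.0, Nm * shape, 0.0)

    heights = np.arange(200.0, 301.0, 25.0)          # bottomside heights, km
    print(np.round(qp_density(R_E + heights) / 1e11, 3))  # units of 1e11 m^-3
    ```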

  9. High-volume image quality assessment systems: tuning performance with an interactive data visualization tool

    NASA Astrophysics Data System (ADS)

    Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael

    1999-03-01

    Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom-engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing the users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) any arbitrary parameter values they chose, or (3) a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable as to the strengths and limitations of an automated quality inspection system.

  10. Developing a quality by design approach to model tablet dissolution testing: an industrial case study.

    PubMed

    Yekpe, Ketsia; Abatzoglou, Nicolas; Bataille, Bernard; Gosselin, Ryan; Sharkawi, Tahmer; Simard, Jean-Sébastien; Cournoyer, Antoine

    2018-07-01

    This study applied the concept of Quality by Design (QbD) to tablet dissolution. Its goal was to propose a quality control strategy to model dissolution testing of solid oral dose products according to International Conference on Harmonization guidelines. The methodology involved the following three steps: (1) a risk analysis to identify the material- and process-related parameters impacting the critical quality attributes of dissolution testing, (2) an experimental design to evaluate the influence of design factors (attributes and parameters selected by risk analysis) on dissolution testing, and (3) an investigation of the relationship between design factors and dissolution profiles. Results show that (a) in the case studied, the two parameters impacting dissolution kinetics are active pharmaceutical ingredient particle size distributions and tablet hardness and (b) these two parameters could be monitored with PAT tools to predict dissolution profiles. Moreover, based on the results obtained, modeling dissolution is possible. The practicality and effectiveness of the QbD approach were demonstrated through this industrial case study. Implementing such an approach systematically in industrial pharmaceutical production would reduce the need for tablet dissolution testing.

  11. Molecular dynamics study of combustion reactions in supercritical environment. Part 1: Carbon dioxide and water force field parameters refitting and critical isotherms of binary mixtures

    DOE PAGES

    Masunov, Artem E.; Atlanov, Arseniy Alekseyevich; Vasu, Subith S.

    2016-10-04

    The oxy-fuel combustion process is expected to drastically increase energy efficiency and enable easy carbon sequestration. In this technology the combustion products (carbon dioxide and water) are used to control the temperature, and nitrogen is excluded from the combustion chamber so that nitrogen oxide pollutants do not form. Therefore, in oxycombustion the carbon dioxide and water are present in large concentrations in their transcritical state, and may play an important role in kinetics. Computational chemistry methods may assist in understanding these effects, and Molecular Dynamics with the ReaxFF force field seems to be a suitable tool for such a study. Here we investigate the applicability of ReaxFF to describe the critical phenomena in carbon dioxide and water and find that several nonbonding parameters need adjustment. We report a new parameter set capable of reproducing the critical temperatures and pressures. Furthermore, the critical isotherms of CO2/H2O binary mixtures are computationally studied here for the first time and their critical parameters are reported.

  13. Life Prediction/Reliability Data of Glass-Ceramic Material Determined for Radome Applications

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Gyekenyesi, John P.

    2002-01-01

    Brittle ceramic materials are candidates for a variety of structural applications over a wide range of temperatures. However, the process of slow crack growth, occurring in any loading configuration, limits the service life of structural components. Therefore, it is important to accurately determine the slow crack growth parameters required for component life prediction using an appropriate test methodology. This test methodology should also be useful in determining the influence of component processing and composition variables on the slow crack growth behavior of newly developed or existing materials, thereby allowing the component processing and composition to be tailored and optimized to specific needs. Through the American Society for Testing and Materials (ASTM), the authors recently developed two test methods to determine the life prediction parameters of ceramics. The two test standards, ASTM C 1368 for room temperature and ASTM C 1465 for elevated temperatures, were published in the 2001 Annual Book of ASTM Standards, Vol. 15.01. Briefly, the test method employs constant stress-rate (or dynamic fatigue) testing to determine flexural strengths as a function of the applied stress rate. The merit of this test method lies in its simplicity: strengths are measured in a routine manner in flexure at four or more applied stress rates with an appropriate number of test specimens at each applied stress rate. The slow crack growth parameters necessary for life prediction are then determined from a simple relationship between the strength and the applied stress rate. Extensive life prediction testing was conducted at the NASA Glenn Research Center using the developed ASTM C 1368 test method to determine the life prediction parameters of a glass-ceramic material that the Navy will use for radome applications.
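
    The "simple relationship between the strength and the applied stress rate" is, in the usual constant stress-rate formulation, a straight line in log-log coordinates whose slope yields the slow crack growth exponent; in the commonly quoted textbook form (the notation here is the general convention, not necessarily the standards' exact symbols):

    ```latex
    \log \sigma_f \;=\; \frac{1}{n+1}\,\log \dot{\sigma} \;+\; \log D
    ```

    Here σ_f is the flexural strength, σ̇ the applied stress rate, n the slow crack growth exponent, and D a constant that lumps together the inert strength and crack-growth coefficients; n and D are obtained from a linear regression of log strength on log stress rate.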

  14. Accounting for Parameter Uncertainty in Complex Atmospheric Models, With an Application to Greenhouse Gas Emissions Evaluation

    NASA Astrophysics Data System (ADS)

    Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.

    2016-12-01

    In order to understand the underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on resulting estimates of unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often infeasible to run the large number of training runs usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
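
    A statistical emulator of the kind referred to above is trained on a modest number of forward runs and then predicts, with uncertainty, at parameter settings that were never run; the sketch below fits a Gaussian-process emulator to a cheap stand-in function of two parameters rather than to NAME itself.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Stand-in for an expensive simulator output as a function of two parameters
    def simulator(theta):
        return np.sin(3.0 * theta[:, 0]) + 0.5 * theta[:, 1] ** 2

    rng = np.random.default_rng(0)
    theta_train = rng.uniform(0.0, 1.0, size=(25, 2))      # 25 training runs
    y_train = simulator(theta_train)

    kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
    emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(theta_train, y_train)

    # The emulator now predicts (with uncertainty) at parameter settings never run
    theta_new = rng.uniform(0.0, 1.0, size=(5, 2))
    mean, std = emulator.predict(theta_new, return_std=True)
    print("emulated:", np.round(mean, 3), "+/-", np.round(std, 3))
    print("true    :", np.round(simulator(theta_new), 3))
    ```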

  15. Machining of Molybdenum by EDM-EP and EDC Processes

    NASA Astrophysics Data System (ADS)

    Wu, K. L.; Chen, H. J.; Lee, H. M.; Lo, J. S.

    2017-12-01

    Molybdenum metal (Mo) can be machined with conventional tools and equipment; however, its refractory nature makes it prone to chipping when machined. In this study, the nonconventional processes of electrical discharge machining (EDM) and electro-polishing (EP) have been conducted to investigate the machining of Mo metal and the fabrication of a Mo grid. Satisfactory surface quality was obtained using appropriate EDM parameters of Ip ≦ 3A and Ton < 80μs at a constant pulse interval of 100μs. Finishing of the Mo metal was accomplished by selecting appropriate EP parameters, such as an electrolyte flow rate of 0.42 m/s at an EP voltage of 50 V and a flush time of 20 s, to remove the recast layer and craters on the surface of the Mo metal. The surface roughness of machined Mo metal can be improved from Ra of 0.93μm (Rmax = 8.51μm) to 0.23μm (Rmax = 1.48μm). The machined Mo metal surface, when used as a grid component in an electron gun, needs to be modified by coating it with materials of high work function, such as silicon carbide (SiC). The main purpose of this study is to explore the electrical discharge coating (EDC) process for coating the SiC layer on EDMed Mo metal. Experimental results showed that appropriate parameters of Ip = 5A and Ton = 50μs at Toff = 10μs produce a deposit about 60μm thick. The major phase of the deposit on the machined Mo surface was SiC ceramic, while minor phases included MoSi2 and/or SiO2 with free Si present, due to improper discharge parameters and the use of silicone oil as the dielectric fluid.

  16. Strong Motion Seismograph Based On MEMS Accelerometer

    NASA Astrophysics Data System (ADS)

    Teng, Y.; Hu, X.

    2013-12-01

    The MEMS strong motion seismograph we developed uses a modular approach in the design of its software and hardware, so it can fit various needs in different application situations. The hardware of the instrument is composed of a MEMS accelerometer, a control processor system, a data-storage system, a wired real-time data transmission system over an IP network, a wireless data transmission module using 3G broadband, a GPS calibration module, and a power supply system with a large-volume lithium battery. The seismograph's sensor is a three-axis MEMS accelerometer with 14-bit high-resolution digital output. Its noise level is only about 99μg/√Hz, its full scale is dynamically selectable from ×2g to ×8g, and its output data rates range from 1.56Hz to 800Hz. Its maximum current consumption is merely 165μA, and the device is small enough to come in a 3mm×3mm×1mm QFN package. Furthermore, there is access to both low-pass filtered and high-pass filtered data, which minimizes the data analysis required for earthquake signal detection, so data post-processing can be simplified. The control processing system adopts a 32-bit, low-power embedded ARM9 processor (S3C2440) and is based on the Linux operating system. The processor operates at 400MHz. The control system's main memory is 64MB of SDRAM with 256MB of flash memory; an external high-capacity SD card can easily be added. The system can therefore meet the requirements for data acquisition, data processing, data transmission, data storage, and so on. Both the wired and wireless networks support remote real-time monitoring, data transmission, system maintenance, status monitoring and software updates. Linux was embedded and a multi-layer design concept was used. The code, including the sensor hardware driver, data acquisition, earthquake triggering and so on, was written in the middle layer. The hardware driver consists of an IIC-bus interface driver, an IO driver and an asynchronous notification driver. The application program layer mainly includes an earthquake parameter module, a local database management module, a data transmission module, remote monitoring, an FTP service and so on, and is implemented as a multi-threaded process. The whole strong motion seismograph is encapsulated in a small aluminum box of size 80mm×120mm×55mm, and the internal battery can work continuously for more than 24 hours. The MEMS accelerograph uses a modular design for both its software and hardware, has a remote software update function, and can meet the following needs: a) Automatic pick-up of earthquake events, saving the data in wave-event files and hourly files; it may be used for monitoring strong earthquakes, explosions, and bridge and building health. b) Automatic calculation of earthquake parameters and transfer of those parameters over the 3G wireless broadband network. This kind of seismograph is low cost and easy to install; units can be concentrated in urban regions or areas needing special care, so a ground-motion-parameter quick-report sensor network can be set up for when a large earthquake breaks out, and a high-resolution shake map can then be easily produced for emergency rescue needs. c) By loading P-wave detection program modules, it can be used for earthquake early warning for large earthquakes. d) A high-density seismic monitoring network with remote control and modern intelligent earthquake sensors can easily be constructed.

  17. Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.

    PubMed

    Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A

    2017-04-15

    Optimization of a multivariate calibration process has been undertaken for a Visible-Near Infrared (400-1100 nm) sensor system, applied in the monitoring of the fermentation process of the cider produced in the Basque Country (Spain). The main parameters that were monitored included alcoholic proof, l-lactic acid content, glucose+fructose and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable pre-processing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens uncertainty test, interval Partial Least Squares regression (iPLS) and a Genetic Algorithm (GA). This procedure arises from the need to improve the calibration models' prediction ability for cider monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.
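
    Of the techniques listed, the interval PLS idea can be sketched compactly: score contiguous wavelength windows by cross-validated PLS performance and keep the best window. The example below does this on synthetic spectra and a synthetic reference value, and is illustrative only, not the sensor system's actual calibration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 80, 200
    X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)   # smooth, spectrum-like
    # Synthetic reference value (e.g. alcoholic proof) driven by two wavelength regions
    y = 2.0 * X[:, 120] - 1.0 * X[:, 45] + rng.normal(scale=0.5, size=n_samples)

    # Interval PLS in miniature: score contiguous wavelength windows by cross-validated R^2
    n_intervals, best = 10, (None, -np.inf)
    for i in range(n_intervals):
        cols = slice(i * n_wavelengths // n_intervals, (i + 1) * n_wavelengths // n_intervals)
        score = cross_val_score(PLSRegression(n_components=3), X[:, cols], y, cv=5).mean()
        if score > best[1]:
            best = (cols, score)

    print("selected wavelength window:", best[0], "CV R^2:", round(best[1], 3))
    ```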

  18. Final technical report. In-situ FT-IR monitoring of a black liquor recovery boiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James Markham; Joseph Cosgrove; David Marran

    1999-05-31

    This project developed and tested advanced Fourier transform infrared (FT-IR) instruments for process monitoring of black liquor recovery boilers. The state-of-the-art FT-IR instruments successfully operated in the harsh environment of a black liquor recovery boiler and provided a wealth of real-time process information. Concentrations of multiple gas species were simultaneously monitored in-situ across the combustion flow of the boiler and extractively at the stack. Sensitivity to changes of particulate fume and carryover levels in the process flow was also demonstrated. Boiler set-up and operation is a complex balance of conditions that influence the chemical and physical processes in the combustion flow. Operating parameters include black liquor flow rate, liquor temperature, nozzle pressure, primary air, secondary air, tertiary air, boiler excess oxygen and others. The in-process information provided by the FT-IR monitors can be used as a boiler control tool since species indicative of combustion efficiency (carbon monoxide, methane) and pollutant emissions (sulfur dioxide, hydrochloric acid and fume) were monitored in real-time and observed to fluctuate as operating conditions were varied. A high priority need of the U.S. industrial boiler market is improved measurement and control technology. The sensor technology demonstrated in this project is applicable to this industry need.

  19. Impact of the interaction of material production and mechanical processing on the magnetic properties of non-oriented electrical steel

    NASA Astrophysics Data System (ADS)

    Leuning, Nora; Steentjes, Simon; Stöcker, Anett; Kawalla, Rudolf; Wei, Xuefei; Dierdorf, Jens; Hirt, Gerhard; Roggenbuck, Stefan; Korte-Kerzel, Sandra; Weiss, Hannes A.; Volk, Wolfram; Hameyer, Kay

    2018-04-01

    Thin laminations of non-grain oriented (NO) electrical steels form the magnetic core of rotating electrical machines. The magnetic properties of these laminations are therefore key elements for the efficiency of electric drives and need to be fully utilized. Ideally, high magnetization and low losses are realized over the entire polarization and frequency spectrum at reasonable production and processing costs. However, such an ideal material does not exist, and thus the achievable magnetic properties need to be deduced from the respective application requirements. Parameters of the electrical steel such as lamination thickness, microstructure and texture affect the magnetic properties as well as their polarization and frequency dependence. These structural features represent possibilities to actively alter the magnetic properties, e.g., the magnetization curve, magnetic loss or frequency dependence. This paper studies the influence of production and processing on the resulting magnetic properties of a 2.4 wt% Si electrical steel. The aim is to close the gap between the influence of production on the material properties and the resulting effect on the magnetization curves and losses at different frequencies, with a strong focus on the interdependencies between production and mechanical processing. The material production is realized on an experimental processing route that comprises the steps of hot rolling, cold rolling, annealing and punching.

  20. Parameter optimization of electrochemical machining process using black hole algorithm

    NASA Astrophysics Data System (ADS)

    Singh, Dinesh; Shukla, Rajkamal

    2017-12-01

    Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered and its performance is evaluated using the black hole algorithm (BHA). BHA is based on the fundamental idea of black hole theory and has fewer operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the considered process, using a single objective at a time. The results obtained using BHA are found to be better than the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
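
    The black hole algorithm itself is simple to state: candidate solutions (stars) move toward the current best solution (the black hole), and any star that crosses the event-horizon radius is replaced by a new random star. The sketch below minimizes a toy two-variable cost function standing in for an ECM response model; it is a generic BHA illustration, not the parameter settings or response models of the study.

    ```python
    import numpy as np

    def black_hole_optimize(cost, bounds, n_stars=30, n_iter=200, seed=0):
        """Minimal black hole algorithm sketch (single-objective minimization)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        stars = rng.uniform(lo, hi, size=(n_stars, len(lo)))
        fitness = np.array([cost(s) for s in stars])

        for _ in range(n_iter):
            bh = np.argmin(fitness)                   # best star becomes the black hole
            # Move every star toward the black hole (the black hole itself stays put)
            stars += rng.uniform(size=(n_stars, 1)) * (stars[bh] - stars)
            stars = np.clip(stars, lo, hi)
            fitness = np.array([cost(s) for s in stars])
            bh = np.argmin(fitness)
            # Event horizon: re-initialize stars that get too close to the black hole
            radius = fitness[bh] / (fitness.sum() + 1e-12)
            too_close = np.linalg.norm(stars - stars[bh], axis=1) < radius
            too_close[bh] = False
            stars[too_close] = rng.uniform(lo, hi, size=(too_close.sum(), len(lo)))
            fitness[too_close] = [cost(s) for s in stars[too_close]]
        return stars[np.argmin(fitness)], fitness.min()

    # Toy stand-in for an ECM response model: minimize "overcut" over two settings
    toy_cost = lambda x: (x[0] - 12.0) ** 2 + (x[1] - 0.6) ** 2
    best_x, best_f = black_hole_optimize(toy_cost, np.array([[5.0, 20.0], [0.1, 1.0]]))
    print(np.round(best_x, 3), round(best_f, 5))
    ```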

  1. Standard Reference Specimens in Quality Control of Engineering Surfaces

    PubMed Central

    Song, J. F.; Vorburger, T. V.

    1991-01-01

    In the quality control of engineering surfaces, we aim to understand and maintain a good relationship between the manufacturing process and surface function. This is achieved by controlling the surface texture. The control process involves: 1) learning the functional parameters and their control values through controlled experiments or through a long history of production and use; 2) maintaining high accuracy and reproducibility with measurements not only of roughness calibration specimens but also of real engineering parts. In this paper, the characteristics, utilizations, and limitations of different classes of precision roughness calibration specimens are described. A measuring procedure of engineering surfaces, based on the calibration procedure of roughness specimens at NIST, is proposed. This procedure involves utilization of check specimens with waveform, wavelength, and other roughness parameters similar to functioning engineering surfaces. These check specimens would be certified under standardized reference measuring conditions, or by a reference instrument, and could be used for overall checking of the measuring procedure and for maintaining accuracy and agreement in engineering surface measurement. The concept of “surface texture design” is also suggested, which involves designing the engineering surface texture, the manufacturing process, and the quality control procedure to meet the optimal functional needs. PMID:28184115
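
    The roughness parameters certified on such reference specimens are defined directly on the measured profile; for example, the arithmetic mean roughness Ra and the RMS roughness Rq are simple averages of the height deviations about the mean line, as the sketch below illustrates on a synthetic profile.

    ```python
    import numpy as np

    def roughness_parameters(profile_um):
        """Ra and Rq of a height profile (micrometres), referenced to the mean line."""
        z = np.asarray(profile_um, dtype=float)
        z = z - z.mean()                    # reference to the mean line
        ra = np.mean(np.abs(z))             # arithmetic mean deviation
        rq = np.sqrt(np.mean(z ** 2))       # root-mean-square deviation
        return ra, rq

    # Synthetic sinusoidal profile with a 2 um amplitude plus fine texture
    x = np.linspace(0.0, 4.0, 2000)         # mm along the trace
    profile = 2.0 * np.sin(2 * np.pi * x / 0.8) + 0.2 * np.sin(2 * np.pi * x / 0.05)
    ra, rq = roughness_parameters(profile)
    print(f"Ra = {ra:.3f} um, Rq = {rq:.3f} um")
    ```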

  2. Biomaterials innovation for next generation ex vivo immune tissue engineering.

    PubMed

    Singh, Ankur

    2017-06-01

    Primary and secondary lymphoid organs are tissues that facilitate differentiation of B and T cells, leading to the induction of adaptive immune responses. These organs are present in the body from birth and are also recognized as locations where self-reactive B and T cells can be eliminated during the natural selection process. Many insights into the mechanisms that control the process of immune cell development and maturation in response to infection come from the analysis of various gene-deficient mice that lack some or all hallmark features of lymphoid tissues. The complexity of such animal models limits our ability to modulate the parameters that control the process of immune cell development, differentiation, and immunomodulation. Engineering functional, living immune tissues using biomaterials can grant researchers the ability to reproduce immunological events with tunable parameters for more rapid development of immunotherapeutics, cell-based therapy, and enhancing our understanding of fundamental biology as well as improving efforts in regenerative medicine. Here the author provides his review and perspective on the bioengineering of primary and secondary lymphoid tissues, and biomaterials innovation needed for the construction of these immune organs in tissue culture plates and on-chip. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Deepak Condenser Model (DeCoM)

    NASA Technical Reports Server (NTRS)

    Patel, Deepak

    2013-01-01

    Development of DeCoM comes from the requirement of analyzing the performance of a condenser. A component of a loop heat pipe (LHP), the condenser, is interfaced with the radiator in order to reject heat. DeCoM simulates the condenser with certain input parameters. Systems Improved Numerical Differencing Analyzer (SINDA), a thermal analysis software package, calculates the adjoining component temperatures based on the DeCoM parameters and the interface temperatures to the radiator. Application of DeCoM is (at the time of this reporting) restricted to small-scale analysis, without the need for in-depth LHP component integrations. DeCoM was developed to simulate the LHP condenser efficiently and with the least complexity; it is a single-condenser, single-pass simulator for analyzing condenser behavior. The analysis is based on the interactions between the condenser fluid, the wall, and the interface between the wall and the radiator. DeCoM is based on conservation of energy, two-phase equations, and flow equations. For two-phase flow, the Lockhart-Martinelli correlation has been used to calculate the convection value between fluid and wall. Software such as SINDA (for thermal analysis) and Thermal Desktop (for modeling) is required. DeCoM can be implemented as a condenser in a thermal model, and the code is open enough to be followed and edited to user-specific needs. DeCoM requires no license and is open-source code. Advantages of DeCoM include time dependency, reliability, and the ability for the user to view the code process and edit it to their needs.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belley, M; Schmidt, M; Knutson, N

    Purpose: Physics second-checks for external beam radiation therapy are performed, in part, to verify that the machine parameters in the Record-and-Verify (R&V) system that will ultimately be sent to the LINAC exactly match the values initially calculated by the Treatment Planning System (TPS). While performing the second-check, a large portion of the physicists’ time is spent navigating and arranging display windows to locate and compare the relevant numerical values (MLC position, collimator rotation, field size, MU, etc.). Here, we describe the development of a software tool that guides the physicist by aggregating and succinctly displaying machine parameter data relevant to the physics second-check process. Methods: A data retrieval software tool was developed using Python to aggregate data and generate a list of machine parameters that are commonly verified during the physics second-check process. This software tool imported values from (i) the TPS RT Plan DICOM file and (ii) the MOSAIQ (R&V) Structured Query Language (SQL) database. The machine parameters aggregated for this study included: MLC positions, X&Y jaw positions, collimator rotation, gantry rotation, MU, dose rate, wedges and accessories, cumulative dose, energy, machine name, couch angle, and more. Results: A GUI interface was developed to generate a side-by-side display of the aggregated machine parameter values for each field and present it to the physicist for direct visual comparison. This software tool was tested for 3D conformal, static IMRT, sliding window IMRT, and VMAT treatment plans. Conclusion: This software tool facilitated the data collection process needed in order for the physicist to conduct a second-check, thus yielding an optimized second-check workflow that was both more user-friendly and time-efficient. Utilizing this software tool, the physicist was able to spend less time searching through the TPS PDF plan document and the R&V system and focus the second-check efforts on assessing the patient-specific plan quality.
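
    A minimal sketch of the TPS side of such a tool is given below, assuming the pydicom package and standard DICOM RT Plan attributes; the field selection is illustrative only, and the MOSAIQ SQL query side of the comparison is omitted.

```python
import pydicom

def rtplan_beam_summary(rtplan_path):
    """Extract commonly second-checked machine parameters from an RT Plan DICOM file."""
    plan = pydicom.dcmread(rtplan_path)

    # Planned meterset (MU) per beam lives in the fraction group, keyed by beam number
    mu_by_beam = {
        rb.ReferencedBeamNumber: float(rb.BeamMeterset)
        for fg in plan.FractionGroupSequence
        for rb in fg.ReferencedBeamSequence
    }

    summary = []
    for beam in plan.BeamSequence:
        cp0 = beam.ControlPointSequence[0]          # first control point holds the setup values
        jaws = {
            bld.RTBeamLimitingDeviceType: [float(p) for p in bld.LeafJawPositions]
            for bld in cp0.BeamLimitingDevicePositionSequence
        }
        summary.append({
            "beam_name": beam.BeamName,
            "energy_MV": float(cp0.NominalBeamEnergy),
            "gantry_deg": float(cp0.GantryAngle),
            "collimator_deg": float(cp0.BeamLimitingDeviceAngle),
            "couch_deg": float(cp0.PatientSupportAngle),
            "jaws_mm": {k: v for k, v in jaws.items() if k in ("X", "Y", "ASYMX", "ASYMY")},
            "MU": mu_by_beam.get(beam.BeamNumber),
        })
    return summary

# Each entry could then be displayed side by side with the corresponding R&V record.
```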

  5. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods for multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stable and unstable ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
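
    As an illustration of the response-surface approach described above, the sketch below fits a quadratic polynomial model to hypothetical designed-experiment data and evaluates a single-parameter sensitivity as a finite-difference derivative of the fitted model; the data, coefficients and coded parameter ranges are placeholders, not the study's results.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(1)

# Hypothetical designed-experiment data (coded units in [-1, 1]):
# columns are temperature, tension, pressure, velocity; y is interlaminar shear strength (MPa)
X = rng.uniform(-1, 1, size=(30, 4))
y = (40 + 3 * X[:, 0] - 2 * X[:, 0] ** 2 + 1.5 * X[:, 1]
     - X[:, 2] ** 2 + 0.5 * X[:, 3] + rng.normal(0, 0.3, 30))

def quadratic_design_matrix(X):
    """Full quadratic response-surface terms: 1, x_i, x_i*x_j (i <= j)."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)          # least-squares fit of the empirical model

def predict(x):
    return quadratic_design_matrix(np.atleast_2d(x)) @ beta

def single_parameter_sensitivity(x0, k, h=1e-3):
    """Central-difference sensitivity of the response to parameter k around point x0."""
    xp, xm = np.array(x0, float), np.array(x0, float)
    xp[k] += h
    xm[k] -= h
    return (predict(xp)[0] - predict(xm)[0]) / (2 * h)

x0 = np.zeros(4)                                       # centre of the design space
print([round(single_parameter_sensitivity(x0, k), 2) for k in range(4)])
```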

  6. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
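
    The trade-off the abstract describes can be illustrated with a toy simulation, sketched below under deliberately simplified assumptions (a scalar parameter tracked with a constant-gain update and Gaussian observations, not the authors' point-process derivation): larger learning rates converge faster but settle at a larger steady-state error, smaller ones do the opposite.

```python
import numpy as np

def track_parameter(learning_rate, true_value=1.0, noise_std=0.5, n_steps=2000, seed=0):
    """Constant-gain adaptive estimate of a scalar parameter from noisy observations."""
    rng = np.random.default_rng(seed)
    estimate, history = 0.0, []
    for _ in range(n_steps):
        observation = true_value + rng.normal(0.0, noise_std)
        estimate += learning_rate * (observation - estimate)   # gradient-style update
        history.append(estimate)
    return np.array(history)

for lr in (0.5, 0.05, 0.005):
    error = track_parameter(lr) - 1.0
    within = np.abs(error) < 0.1
    convergence = int(np.argmax(within)) if within.any() else -1      # first step within 0.1
    steady_state_rmse = np.sqrt(np.mean(error[-500:] ** 2))
    print(f"lr={lr:0.3f}  steps to converge: {convergence:5d}  steady-state RMSE: {steady_state_rmse:0.3f}")
```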

  7. LCE: leaf carbon exchange data set for tropical, temperate, and boreal species of North and Central America.

    PubMed

    Smith, Nicholas G; Dukes, Jeffrey S

    2017-11-01

    Leaf canopy carbon exchange processes, such as photosynthesis and respiration, are substantial components of the global carbon cycle. Climate models base their simulations of photosynthesis and respiration on an empirical understanding of the underlying biochemical processes, and the responses of those processes to environmental drivers. As such, data spanning large spatial scales are needed to evaluate and parameterize these models. Here, we present data on four important biochemical parameters defining leaf carbon exchange processes from 626 individuals of 98 species at 12 North and Central American sites spanning ~53° of latitude. The four parameters are the maximum rate of Rubisco carboxylation (Vcmax), the maximum rate of electron transport for the regeneration of ribulose-1,5-bisphosphate (Jmax), the maximum rate of phosphoenolpyruvate carboxylase carboxylation (Vpmax), and leaf dark respiration (Rd). The raw net photosynthesis by intercellular CO2 (A/Ci) data used to calculate Vcmax, Jmax, and Vpmax rates are also presented. Data were gathered on the same leaf of each individual (one leaf per individual), allowing for the examination of each parameter relative to others. Additionally, the data set contains a number of covariates for the plants measured. Covariate data include (1) leaf-level traits (leaf mass, leaf area, leaf nitrogen and carbon content, predawn leaf water potential), (2) plant-level traits (plant height for herbaceous individuals and diameter at breast height for trees), (3) soil moisture at the time of measurement, (4) air temperature from nearby weather stations for the day of measurement and each of the 90 d prior to measurement, and (5) climate data (growing season mean temperature, precipitation, photosynthetically active radiation, vapor pressure deficit, and aridity index). We hope that the data will be useful for obtaining greater understanding of the abiotic and biotic determinants of these important biochemical parameters and for evaluating and improving large-scale models of leaf carbon exchange. © 2017 by the Ecological Society of America.

  8. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.

  9. Computational time reduction for sequential batch solutions in GNSS precise point positioning technique

    NASA Astrophysics Data System (ADS)

    Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando

    2017-08-01

    Precise point positioning (PPP) is a well-established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver or a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if both are well implemented. However, if a sequential solution is needed (that is, not only the final coordinates but also those at previous GNSS epochs, as in convergence studies), obtaining it by batch adjustment becomes a very time-consuming task owing to the matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed at the previous epoch to solve the current epoch. Filter implementations, however, need extra consideration of user dynamics and parameter state variations between observation epochs, with appropriate stochastic updates of the parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computational time of 45%.
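
    The paper's specific algorithm is not reproduced here, but a generic way to obtain epoch-by-epoch batch solutions cheaply is to accumulate the normal equations and solve only the small normal system at each epoch instead of refactoring the full stacked design matrix. The sketch below illustrates that principle on a synthetic least-squares problem; the model, dimensions and noise level are placeholders.

```python
import numpy as np

def sequential_batch_solutions(design_blocks, obs_blocks):
    """Yield the cumulative least-squares solution after each epoch.

    design_blocks[k] is the (m_k x n) design matrix of epoch k and obs_blocks[k] the
    corresponding observation vector. Accumulating the normal equations means each new
    epoch only adds an (n x n) update and one small solve, instead of re-processing the
    whole stacked system.
    """
    n = design_blocks[0].shape[1]
    N = np.zeros((n, n))      # accumulated normal matrix  sum_k A_k^T A_k
    b = np.zeros(n)           # accumulated right-hand side sum_k A_k^T y_k
    for A_k, y_k in zip(design_blocks, obs_blocks):
        N += A_k.T @ A_k
        b += A_k.T @ y_k
        yield np.linalg.solve(N, b)   # batch solution using all epochs up to k

# Tiny synthetic example with 3 unknown parameters and 50 epochs of 8 observations each
rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0, 0.5])
A = [rng.normal(size=(8, 3)) for _ in range(50)]
y = [a @ truth + rng.normal(scale=0.01, size=8) for a in A]
solutions = list(sequential_batch_solutions(A, y))
print(solutions[0], solutions[-1])    # early vs converged estimates
```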

  10. Evaluation of the orthodontic treatment need in a paediatric sample from Southern Italy and its importance among paediatricians for improving oral health in pediatric dentistry

    PubMed Central

    Ierardo, Gaetano; Corridore, Denise; Di Carlo, Gabriele; Di Giorgio, Gianni; Leonardi, Emanuele; Campus, Guglielmo-Giuseppe; Vozza, Iole; Polimeni, Antonella; Bossù, Maurizio

    2017-01-01

    Background: Data from epidemiological studies investigating the prevalence and severity of malocclusions in children are of great relevance to public health programs aimed at orthodontic prevention. Previous epidemiological studies focused mainly on the adolescent age group and reported a prevalence of malocclusion with high variability, ranging from 32% to 93%. The aim of our study was to assess the need for orthodontic treatment in a paediatric sample from Southern Italy in order to improve awareness among paediatricians of oral health preventive strategies in pediatric dentistry. Material and Methods: The study used the IOTN-DHC index to evaluate the need for orthodontic treatment for several malocclusions (overjet, reverse overjet, overbite, openbite, crossbite) in a sample of 579 children in the 2-9 years age range. Results: The most frequently altered occlusal parameter was the overbite (prevalence: 24.5%), while the occlusal anomaly that most frequently presented a need for orthodontic treatment was the crossbite (8.8%). The overall prevalence of need for orthodontic treatment was 19.3%, while 49% of the sample showed one or more altered occlusal parameters. No statistically significant difference was found between males and females. Conclusions: Results from this study support the idea that the establishment of a malocclusion is a gradual process starting at an early age. Effective orthodontic prevention programs should therefore include preschool children, and paediatricians should be made aware of the importance of an early first dental visit. Key words: Orthodontic treatment, malocclusion, oral health, pediatric dentistry. PMID:28936290

  11. Boltzmann Energy-based Image Analysis Demonstrates that Extracellular Domain Size Differences Explain Protein Segregation at Immune Synapses

    PubMed Central

    Burroughs, Nigel J.; Köhler, Karsten; Miloserdov, Vladimir; Dustin, Michael L.; van der Merwe, P. Anton; Davis, Daniel M.

    2011-01-01

    Immune synapses formed by T and NK cells both show segregation of the integrin ICAM1 from other proteins such as CD2 (T cell) or KIR (NK cell). However, the mechanism by which these proteins segregate remains unclear; one key hypothesis is a redistribution based on protein size. Simulations of this mechanism qualitatively reproduce observed segregation patterns, but only in certain parameter regimes. Verifying that these parameter constraints in fact hold has not been possible to date, as this requires a quantitative coupling of theory to experimental data. Here, we address this challenge, developing a new methodology for analysing and quantifying image data and its integration with biophysical models. Specifically, we fit a binding kinetics model to two-colour fluorescence data for cytoskeleton-independent synapses (2D and 3D) and test whether the observed inverse correlation between fluorophores conforms to size-dependent exclusion and, further, whether patterned states are predicted when model parameters are estimated on individual synapses. All synapses analysed satisfy these conditions, demonstrating that the mechanisms of protein redistribution have identifiable signatures in their spatial patterns. We conclude that energy processes implicit in protein-size-based segregation can drive the patterning observed in individual synapses, at least for the specific examples tested, such that no additional processes need to be invoked. This implies that biophysical processes within the membrane interface have a crucial impact on cell-cell communication and cell signalling, governing protein interactions and protein aggregation. PMID:21829338

  12. Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana

    2017-12-01

    Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when efficient use is targeted. One of the most important issues of the design process is the optimisation of the individual laminae and of the laminated structure as a whole. In order to do that, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, and the gaps between adjacent tows and the height of the neat matrix, which are continuous variables. This work is one of the first attempts at using a Genetic Algorithm (GA) to optimise the geometrical parameters of satin-reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function which can analyse and compare the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.

  13. Identification of the Quality Spot Welding used Non Destructive Test-Ultrasonic Testing: (Effect of Welding Time)

    NASA Astrophysics Data System (ADS)

    Sifa, A.; Endramawan, T.; Badruzzaman

    2017-03-01

    Resistance Spot Welding (RSW) is frequently used in manufacturing processes, especially in the automotive industry [4][5][6][7]. Several parameters influence the spot welding process. To determine the quality of a weld, it needs to be tested, either destructively or non-destructively. In this study, the quality of the weld nugget was identified experimentally using the Non-Destructive Test (NDT) method of Ultrasonic Testing (UT), in which the quality of the welding is identified from the thickness of the worksheet after welding. The same worksheet material was used throughout, with single plates of 1 mm thickness, whereas standard Ultrasonic Testing (UT) propagation capability is limited to thicknesses > 3 mm [1]. The welding process parameters were a welding time varied between 1 and 10 s and a welding current setting of 8 kV; visually, the Heat Affected Zone (HAZ) shows different results depending on the welding time. UT was performed with a 4 MHz probe of 10 mm diameter, a range of 100, and oil as the couplant. The identification technique uses the 6 dB drop method, with a sound velocity of 2267 m/s in Fe. The results show that the welding time affects the size of the HAZ, and that even the shortest welding time of 1 s produced a joint that could be identified through NDT-UT.

  14. Augmenting epidemiological models with point-of-care diagnostics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading, therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
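
    The calibration idea is easy to sketch. Below is a minimal, hypothetical version using SciPy's dual_annealing to fit the transmission and recovery rates of a basic SIR model to an observed infection curve; the synthetic "observed" data, parameter bounds and the omitted zip-code aggregation are placeholders rather than the authors' setup.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import dual_annealing

def sir_infected(params, t_grid, n_pop=10_000, i0=10):
    """Integrate a basic SIR model and return the infected trajectory I(t)."""
    beta, gamma = params
    def rhs(t, y):
        s, i, r = y
        return [-beta * s * i / n_pop, beta * s * i / n_pop - gamma * i, gamma * i]
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), [n_pop - i0, i0, 0.0],
                    t_eval=t_grid, rtol=1e-6)
    return sol.y[1]

# Synthetic "POC counts": an SIR curve generated with known rates plus noise
t = np.arange(0, 120)
rng = np.random.default_rng(0)
observed = sir_infected((0.30, 0.10), t) + rng.normal(0, 20, size=t.size)

def misfit(params):
    return np.mean((sir_infected(params, t) - observed) ** 2)

# Simulated-annealing style global search within empirical parameter bounds
result = dual_annealing(misfit, bounds=[(0.05, 1.0), (0.02, 0.5)], seed=1, maxiter=200)
print("calibrated beta, gamma:", result.x)     # should land near (0.30, 0.10)
```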

  15. Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2012-04-01

    Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.

  16. Augmenting epidemiological models with point-of-care diagnostics data

    DOE PAGES

    Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.; ...

    2016-04-20

    Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading, therefore further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.

  17. Verification of Bayesian Clustering in Travel Behaviour Research – First Step to Macroanalysis of Travel Behaviour

    NASA Astrophysics Data System (ADS)

    Satra, P.; Carsky, J.

    2018-04-01

    Our research looks at travel behaviour from a macroscopic view, taking one municipality as the basic unit. The travel behaviour of one municipality as a whole becomes one piece of data in the research of travel behaviour of a larger area, perhaps a country. Data pre-processing is used to cluster the municipalities into groups which show similarities in their travel behaviour. Such groups can then be studied for the reasons behind their prevailing pattern of travel behaviour without any distortion caused by municipalities with a different pattern. This paper deals with the actual settings of the clustering process, which is based on Bayesian statistics, particularly the mixture model. Optimizing the setting parameters based on the correlation of pointer model parameters and the relative number of data in clusters is helpful, but not a fully reliable method. Thus, a method for graphic representation of the clusters needs to be developed in order to check their quality. Training the setting parameters in 2D has proven to be beneficial, because it allows visual control of the produced clusters. The clustering is better applied to separate groups of municipalities, where only identical transport modes compete with each other.
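
    A generic mixture-model clustering of municipalities can be sketched with scikit-learn's Bayesian Gaussian mixture; the two travel-behaviour features and the synthetic municipality data below are made up, and this is not the specific model or settings used in the study.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Hypothetical per-municipality features, e.g. car mode share and trips per person per day
car_share = np.concatenate([rng.normal(0.65, 0.05, 150), rng.normal(0.40, 0.06, 100)])
trip_rate = np.concatenate([rng.normal(2.4, 0.20, 150), rng.normal(3.1, 0.25, 100)])
X = np.column_stack([car_share, trip_rate])

# Dirichlet-process-style mixture: set an upper bound on components, unused ones get tiny weights
model = BayesianGaussianMixture(n_components=8,
                                weight_concentration_prior_type="dirichlet_process",
                                random_state=0).fit(X)

labels = model.predict(X)
for k in np.unique(labels):
    print(f"cluster {k}: {np.sum(labels == k):3d} municipalities, "
          f"weight {model.weights_[k]:.2f}, mean {model.means_[k].round(2)}")
```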

  18. Integration of Energy Consumption and CO2 Emissions into the DES Tool with Lean Thinking

    NASA Astrophysics Data System (ADS)

    Nujoom, Reda; Wang, Qian

    2018-01-01

    Products are often made by accomplishing a number of manufacturing processes on a sequential flow line, also known as a manufacturing system. Traditionally, design or evaluation of a manufacturing system involves determining or analysing the system performance by adjusting system parameters relating to system capacity, material processing time, material handling and transportation, and shop-floor layout. Environment-related parameters, however, are either not considered or are treated as separate issues. In the past decade, there has been growing concern about environmental protection, and governments almost all over the world have enforced rules and regulations to promote energy saving and reduce carbon dioxide (CO2) emissions in the manufacturing industry. Development of a sustainable manufacturing system therefore requires designers not merely to apply traditional methods of improving system efficiency and productivity but also to examine the environmental issues arising in production within the developed manufacturing system. Most researchers, however, have focused on operational systems, which do not incorporate the effect of environmental factors that may also affect the system performance. This paper presents research work aiming to address these issues in the design and evaluation of sustainable manufacturing systems by incorporating parameters of energy consumption and CO2 emissions into a DES (discrete event simulation) tool.

  19. On the Transformation Behavior of NiTi Shape-Memory Alloy Produced by SLM

    NASA Astrophysics Data System (ADS)

    Speirs, Mathew; Wang, X.; Van Baelen, S.; Ahadi, A.; Dadbakhsh, S.; Kruth, J.-P.; Van Humbeeck, J.

    2016-12-01

    Selective laser melting (SLM) has been applied as a production technique for nickel titanium (NiTi) parts. In this study, the scanning parameters and atmosphere control used during production were varied to assess the effects on the final component transformation criteria. Two production runs were completed: one in a high-oxygen (1800 ppm O2) and one in a low-oxygen (220 ppm O2) environment. Further solution treatment was applied to analyze precipitation effects. It was found that the transformation temperature varies greatly even at identical energy densities, highlighting the need for further in-depth investigations. In this respect, it was observed that oxidation was the dominating factor, which increased with higher laser power adapted to higher scanning velocity. Once the atmospheric oxygen content was lowered from 1800 to about 220 ppm, a much smaller variation of transformation temperatures was obtained. In addition to oxidation, other contributing factors, such as nickel depletion (via evaporation during processing) as well as thermal stresses and textures, are further discussed and/or postulated. These results demonstrate the importance of processing and material conditions such as O2 content, powder composition, and laser scanning parameters. These parameters should be precisely controlled to reach the desired transformation criteria for functional components made by SLM.

  20. Next Generation Ship-Borne ASW-System: An Exemplary Exertion of Methodologies and Tools Applied According to the German Military Acquisition Guidelines

    DTIC Science & Technology

    2013-06-01

    as well as the evaluation of product parameters, operational test and functional limits. The product will be handed over to the designated ... which results in a system design that can be tested, produced, and fielded to satisfy the need. The concept development phase enables us to determine ... specifications that can be tested or verified. The requirements presented earlier are the minimum necessary to allow the design process to find

  1. Design and cost drivers in 2-D braiding

    NASA Technical Reports Server (NTRS)

    Morales, Alberto

    1993-01-01

    Fundamentally, the braiding process is a highly efficient, low-cost method for combining single yarns into circumferential shapes, as evidenced by the number of applications for continuous sleeving. However, this braiding approach has yet to fully demonstrate that it can drastically reduce the cost of complex-shape structural preforms. Factors such as part geometry, machine design and configuration, materials used, and operating parameters are described as key cost drivers, along with what is needed to minimize their effect in elevating the cost of structural braided preforms.

  2. Microstructure-failure mode correlations in braided composites

    NASA Technical Reports Server (NTRS)

    Filatovs, G. J.; Sadler, Robert L.; El-Shiekh, Aly

    1992-01-01

    Explication of the fracture processes of braided composites is needed for modeling their behavior. Described is a systematic exploration of the relationship between microstructure, loading mode, and micro-failure mechanisms in carbon/epoxy braided composites. The study involved compression and fracture toughness tests and optical and scanning electron fractography, including dynamic in-situ testing. Principal failure mechanisms of tow sliding, buckling, and unstable crack growth are correlated to microstructural parameters and loading modes; these are used for defining those microstructural conditions which are strength limiting.

  3. Concerning modeling of double-stage water evaporation cooling

    NASA Astrophysics Data System (ADS)

    Shatskiy, V. P.; Fedulova, L. I.; Gridneva, I. V.

    2018-03-01

    The need to set technical norms for production, as well as acceptable microclimate parameters such as temperature and humidity at the workplace, remains urgent. The use of particular cooling units should be economically sound, and construction, assembly, operational, technological, and environmental requirements should be taken into account. Water evaporation coolers are simple to maintain, environmentally friendly, and quite cheap, but the development of the most efficient solutions requires mathematical modeling of the heat and mass transfer processes that take place in them.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  5. Nimbus 7 Coastal Zone Color Scanner (CZCS). Level 2 data product users' guide

    NASA Technical Reports Server (NTRS)

    Williams, S. P.; Szajna, E. F.; Hovis, W. A.

    1985-01-01

    The coastal zone color scanner (CZCS) is a scanning multispectral radiometer designed for the remote sensing of ocean color parameters from an earth-orbiting space platform. A technical manual was designed for users of NIMBUS 7 CZCS Level 2 data products. It describes how the Level 1 data were processed to obtain the Level 2 (derived) product and contains the information needed to operate on the data using digital computers and related equipment.

  6. Anaerobic Membrane Bioreactors for Treatment of Wastewater at Contingency Locations

    DTIC Science & Technology

    2009-05-01

    Presentation extract (only fragments are recoverable): the outline covers the need for wastewater treatment at contingency locations, what a membrane bioreactor (MBR) is, the anaerobic process, and existing conditions in Nasiriyah, Iraq (USAID, 2009). Reported parameters for a submerged anaerobic MBR (AnMBR) include a flux of 15 L/m2-h, a pressure of 15-50 kPa, and an energy demand of 3-7.3 kWh/m3 (Liao, 2006).

  7. ITO-based evolutionary algorithm to solve traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Dong, Wenyong; Sheng, Kang; Yang, Chuanhua; Yi, Yunfei

    2014-03-01

    In this paper, an ITO algorithm inspired by the Itô stochastic process is proposed for the Traveling Salesman Problem (TSP). Many meta-heuristic methods have so far been successfully applied to the TSP; however, as a member of this family, ITO still needs to be demonstrated on the TSP. Starting from the design of the key operators, which include the move operator, the wave operator, etc., a method based on ITO for the TSP is presented; moreover, the performance of the ITO algorithm under different parameter sets and the maintenance of population diversity information are also studied.
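
    The abstract does not spell out the operators, so as a generic illustration of the kind of move operator such TSP metaheuristics rely on, the sketch below implements a 2-opt segment reversal inside a simple stochastic improvement loop; the ITO-specific drift and wave machinery is not reproduced, and the random instance is a placeholder.

```python
import numpy as np

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_move(tour, i, k):
    """Reverse the segment tour[i:k+1]; a classic local move for TSP metaheuristics."""
    new_tour = tour.copy()
    new_tour[i:k + 1] = tour[i:k + 1][::-1]
    return new_tour

# Hypothetical random instance: 20 cities on the unit square
rng = np.random.default_rng(0)
pts = rng.random((20, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

tour = list(range(20))
best = tour_length(tour, dist)
for _ in range(5000):                         # simple stochastic improvement loop
    i, k = sorted(rng.integers(1, 20, size=2))
    if i == k:
        continue
    candidate = two_opt_move(tour, i, k)
    cand_len = tour_length(candidate, dist)
    if cand_len < best:                       # a full metaheuristic would also accept some worse moves
        tour, best = candidate, cand_len
print("tour length after local search:", round(best, 3))
```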

  8. Rock mechanics issues in completion and stimulation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    Rock mechanics parameters such as the in situ stresses, elastic properties, failure characteristics, and poro-elastic response are important to most completion and stimulation operations. Perforating, hydraulic fracturing, wellbore stability, and sand production are examples of technologies that are largely controlled by the rock mechanics of the process. While much research has been performed in these areas, there has been insufficient application of that research by industry. In addition, there are new research needs that must be addressed for technology advancement.

  9. ISRU System Model Tool: From Excavation to Oxygen Production

    NASA Technical Reports Server (NTRS)

    Santiago-Maldonado, Edgardo; Linne, Diane L.

    2007-01-01

    In the late 80's, conceptual designs for an in situ oxygen production plant were documented in a study by Eagle Engineering [1]. In the "Summary of Findings" of this study, it is clearly pointed out that: "reported process mass and power estimates lack a consistent basis to allow comparison." The study goes on to say: "A study to produce a set of process mass, power, and volume requirements on a consistent basis is recommended." Today, approximately twenty years later, as humans plan to return to the moon and venture beyond, the need for flexible up-to-date models of the oxygen extraction production process has become even more clear. Multiple processes for the production of oxygen from lunar regolith are being investigated by NASA, academia, and industry. Three processes that have shown technical merit are molten regolith electrolysis, hydrogen reduction, and carbothermal reduction. These processes have been selected by NASA as the basis for the development of the ISRU System Model Tool (ISMT). In working to develop up-to-date system models for these processes NASA hopes to accomplish the following: (1) help in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the excavation and oxygen production processes, and (4) provide estimates on energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters. Also, as confidence and high fidelity is achieved with each component's model, new techniques and processes can be introduced and analyzed at a fraction of the cost of traditional hardware development and test approaches. A first generation ISRU System Model Tool has been used to provide inputs to the Lunar Architecture Team studies.

  10. A modified dynamical model of drying process of polymer blend solution coated on a flat substrate

    NASA Astrophysics Data System (ADS)

    Kagami, Hiroyuki

    2008-05-01

    We have proposed and modified a model of the drying process of a polymer solution coated on a flat substrate for flat polymer film fabrication. Numerical simulation of the model reproduces, for example, a typical thickness profile of the polymer film formed after drying. We have clarified the dependence of the distribution of polymer molecules on a flat substrate on various parameters based on analysis of the numerical simulations, and we then derived nonlinear equations of the drying process from the dynamical model and reported those results. The subject of the above studies was limited to solutions having one kind of solute, although the model could essentially deal with solutions having several kinds of solutes. Nowadays, however, a discussion of the drying process of solutions having several kinds of solutes is needed, because such processes appear in many industrial settings. Polymer blend solution is one instance, and a typical resist consists of a few kinds of polymers. We therefore introduced a dynamical model of the drying process of a polymer blend solution coated on a flat substrate, together with results of numerical simulations of that model. That model, however, was the simplest one. In this study, we modify the above dynamical model of the drying process of a polymer blend solution by adding effects in which some parameters change with time as functions of some variables. We then consider the essence of the drying process of polymer blend solutions through comparison between results of numerical simulations of the modified model and those of the former model.

  11. Validating a large geophysical data set: Experiences with satellite-derived cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Knighton, James E.; Pursch, Andrew; Granger-Gallegos, Stephanie

    1992-01-01

    We are validating the global cloud parameters derived from the satellite-borne HIRS2 and MSU atmospheric sounding instrument measurements, and are using the analysis of these data as one prototype for studying large geophysical data sets in general. The HIRS2/MSU data set contains a total of 40 physical parameters, filling 25 MB/day; raw HIRS2/MSU data are available for a period exceeding 10 years. Validation involves developing a quantitative sense for the physical meaning of the derived parameters over the range of environmental conditions sampled. This is accomplished by comparing the spatial and temporal distributions of the derived quantities with similar measurements made using other techniques, and with model results. The data handling needed for this work is possible only with the help of a suite of interactive graphical and numerical analysis tools. Level 3 (gridded) data is the common form in which large data sets of this type are distributed for scientific analysis. We find that Level 3 data is inadequate for the data comparisons required for validation. Level 2 data (individual measurements in geophysical units) is needed. A sampling problem arises when individual measurements, which are not uniformly distributed in space or time, are used for the comparisons. Standard 'interpolation' methods involve fitting the measurements for each data set to surfaces, which are then compared. We are experimenting with formal criteria for selecting geographical regions, based upon the spatial frequency and variability of measurements, that allow us to quantify the uncertainty due to sampling. As part of this project, we are also dealing with ways to keep track of constraints placed on the output by assumptions made in the computer code. The need to work with Level 2 data introduces a number of other data handling issues, such as accessing data files across machine types, meeting large data storage requirements, accessing other validated data sets, processing speed and throughput for interactive graphical work, and problems relating to graphical interfaces.

  12. Atmospheric new particle formation at the research station Melpitz, Germany: connection with gaseous precursors and meteorological parameters

    NASA Astrophysics Data System (ADS)

    Größ, Johannes; Hamed, Amar; Sonntag, André; Spindler, Gerald; Elina Manninen, Hanna; Nieminen, Tuomo; Kulmala, Markku; Hõrrak, Urmas; Plass-Dülmer, Christian; Wiedensohler, Alfred; Birmili, Wolfram

    2018-02-01

    This paper revisits the atmospheric new particle formation (NPF) process in the polluted Central European troposphere, focusing on the connection with gas-phase precursors and meteorological parameters. Observations were made at the research station Melpitz (former East Germany) between 2008 and 2011 involving a neutral cluster and air ion spectrometer (NAIS). Particle formation events were classified by a new automated method based on the convolution integral of particle number concentration in the diameter interval 2-20 nm. To study the relevance of gaseous sulfuric acid as a precursor for nucleation, a proxy was derived on the basis of direct measurements during a 1-month campaign in May 2008. As a major result, the number concentration of freshly produced particles correlated significantly with the concentration of sulfur dioxide as the main precursor of sulfuric acid. The condensation sink, a factor potentially inhibiting NPF events, played a subordinate role only. The same held for experimentally determined ammonia concentrations. The analysis of meteorological parameters confirmed the absolute need for solar radiation to induce NPF events and demonstrated the presence of significant turbulence during those events. Due to its tight correlation with solar radiation, however, an independent effect of turbulence for NPF could not be established. Based on the diurnal evolution of aerosol, gas-phase, and meteorological parameters near the ground, we further conclude that the particle formation process is likely to start in elevated parts of the boundary layer rather than near ground level.

  13. Working memory consolidation: insights from studies on attention and working memory.

    PubMed

    Ricker, Timothy J; Nieuwenstein, Mark R; Bayliss, Donna M; Barrouillet, Pierre

    2018-04-10

    Working memory, the system that maintains a limited set of representations for immediate use in cognition, is a central part of human cognition. Three processes have recently been proposed to govern information storage in working memory: consolidation, refreshing, and removal. Here, we discuss in detail the theoretical construct of working memory consolidation, a process critical to the creation of a stable working memory representation. We present a brief overview of the research that indicated the need for a construct such as working memory consolidation and the subsequent research that has helped to define the parameters of the construct. We then move on to explicitly state the points of agreement as to what processes are involved in working memory consolidation. © 2018 New York Academy of Sciences.

  14. Numerical modeling of materials processing applications of a pulsed cold cathode electron gun

    NASA Astrophysics Data System (ADS)

    Etcheverry, J. I.; Martínez, O. E.; Mingolo, N.

    1998-04-01

    A numerical study of the application of a pulsed cold cathode electron gun to materials processing is performed. A simple semiempirical model of the discharge is used, together with backscattering and energy deposition profiles obtained by a Monte Carlo technique, in order to evaluate the energy source term inside the material. The numerical computation of the heat equation with the calculated source term is performed in order to obtain useful information on melting and vaporization thresholds, melted radius and depth, and on the dependence of these variables on processing parameters such as operating pressure, initial voltage of the discharge and cathode-sample distance. Numerical results for stainless steel are presented, which demonstrate the need for several modifications of the experimental design in order to achieve a better efficiency.

  15. Analyzing Strategic Business Rules through Simulation Modeling

    NASA Astrophysics Data System (ADS)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility since it allows business process to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT to collaborate. In this paper, we propose the utilization of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed to help find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to help business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.

  16. Effect of Bearing Housings on Centrifugal Pump Rotor Dynamics

    NASA Astrophysics Data System (ADS)

    Yashchenko, A. S.; Rudenko, A. A.; Simonovskiy, V. I.; Kozlov, O. M.

    2017-08-01

    The article deals with the effect of a bearing housing on the rotor dynamics of a barrel-casing centrifugal boiler feed pump. The calculation of the rotor model including the bearing housing has been performed by the method of initial parameters. The calculation of a rotor solid model including the bearing housing has been performed by the finite element method. Results of both calculations highlight the need to include bearing housings in dynamic analyses of the pump rotor. The calculation performed by modern software packages is a more time-consuming process; at the same time, it is the preferred one owing to the graphic editor employed for creating the numerical model. When many variants of design parameters need to be examined, programs for beam modeling should be used.

  17. Classification framework for partially observed dynamical systems

    NASA Astrophysics Data System (ADS)

    Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira

    2017-04-01

    We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much more simple than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.

  18. General rigid motion correction for computed tomography imaging based on locally linear embedding

    NASA Astrophysics Data System (ADS)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can damage the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.

  19. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all the information available, such as that of the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we do a consciously much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
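
    To make the role of these parameters concrete, here is a minimal, generic real-coded GA for unconstrained function minimization in which the population size, crossover probability and mutation probability appear explicitly; the values shown are placeholders of the kind a preprocessor would be expected to choose, not ones prescribed by the paper.

```python
import numpy as np

def genetic_minimize(objective, bounds, pop_size=40, p_crossover=0.9,
                     p_mutation=0.1, n_generations=100, seed=0):
    """Minimal real-coded GA; a preprocessor would supply pop_size, p_crossover, p_mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))

    for _ in range(n_generations):
        fitness = np.apply_along_axis(objective, 1, pop)

        # Tournament selection of parents
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fitness[a] < fitness[b], a, b)]

        # Arithmetic crossover between consecutive parents
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_crossover:
                w = rng.random()
                children[i] = w * parents[i] + (1 - w) * parents[i + 1]
                children[i + 1] = (1 - w) * parents[i] + w * parents[i + 1]

        # Gaussian mutation and elitism (carry over the best individual of this generation)
        mutate = rng.random(children.shape) < p_mutation
        children = np.clip(children + mutate * rng.normal(0, 0.1 * (hi - lo), children.shape), lo, hi)
        children[0] = pop[np.argmin(fitness)]
        pop = children

    fitness = np.apply_along_axis(objective, 1, pop)
    return pop[np.argmin(fitness)], fitness.min()

# Example: minimize the sphere function in 5 dimensions
best_x, best_f = genetic_minimize(lambda x: np.sum(x ** 2), [(-5, 5)] * 5)
print(best_x.round(3), best_f)
```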

  20. Accounting for human neurocognitive function in the design and evaluation of 360 degree situational awareness display systems

    NASA Astrophysics Data System (ADS)

    Metcalfe, Jason S.; Mikulski, Thomas; Dittman, Scott

    2011-06-01

    The current state and trajectory of development for display technologies supporting information acquisition, analysis and dissemination lends a broad informational infrastructure to operators of complex systems. The amount of information available threatens to outstrip the perceptual-cognitive capacities of operators, thus limiting their ability to effectively interact with targeted technologies. Therefore, a critical step in designing complex display systems is to find an appropriate match between capabilities, operational needs, and human ability to utilize complex information. The present work examines a set of evaluation parameters that were developed to facilitate the design of systems to support a specific military need; that is, the capacity to support the achievement and maintenance of real-time 360° situational awareness (SA) across a range of complex military environments. The focal point of this evaluation is on the reciprocity native to advanced engineering and human factors practices, with a specific emphasis on aligning the operator-system-environment fit. That is, the objective is to assess parameters for evaluation of 360° SA display systems that are suitable for military operations in tactical platforms across a broad range of current and potential operational environments. The approach is centered on five "families" of parameters, including vehicle sensors, data transmission, in-vehicle displays, intelligent automation, and neuroergonomic considerations. Parameters are examined under the assumption that displays designed to conform to natural neurocognitive processing will enhance and stabilize Soldier-system performance and, ultimately, unleash the human's potential to actively achieve and maintain the awareness necessary to enhance lethality and survivability within modern and future operational contexts.

  1. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
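
    As a minimal sketch of the screening idea (not the published mHM implementation), the code below computes Morris-style elementary effects with roughly ten model runs per parameter and flags the parameters with the largest mean absolute effect as informative; the toy model, the number of trajectories and the cut-off are assumptions.

```python
import numpy as np

def elementary_effects(model, n_params, n_traj=10, delta=0.1, seed=0):
    """Morris-style elementary effects as a cheap screening proxy.

    Cost is n_traj * (n_params + 1) model runs, i.e. roughly 10x the number
    of parameters for the default settings, in line with the abstract.
    """
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        y0 = model(x)
        for j in rng.permutation(n_params):          # one-at-a-time perturbations
            x_new = x.copy()
            x_new[j] += delta
            y1 = model(x_new)
            effects[t, j] = (y1 - y0) / delta
            x, y0 = x_new, y1
    return np.abs(effects).mean(axis=0)               # mean absolute effect per parameter

# toy "model": only the first three of ten parameters matter
model = lambda x: 5 * x[0] + 3 * x[1] ** 2 + x[2]
mu = elementary_effects(model, n_params=10)
informative = np.where(mu > 0.1 * mu.max())[0]        # crude cut-off for screening
print("informative parameters:", informative)
```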

  2. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  3. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    PubMed

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Also, parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable and that better results were obtained with the removal of these parameters from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters that need to be estimated. Further, PIA improved the model results, showing it to be an important step to take. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
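
    A minimal sketch of one common identifiability screen (not necessarily the PIA variant used in the paper): build a finite-difference sensitivity matrix, rank parameters by the magnitude of their sensitivity columns, and report a collinearity index for the set; the toy model and thresholds are assumptions.

```python
import numpy as np

def sensitivity_matrix(model, theta, eps=1e-4):
    """Finite-difference sensitivities of the model outputs w.r.t. each parameter."""
    y0 = np.asarray(model(theta))
    S = np.zeros((y0.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += eps * max(abs(theta[j]), 1.0)
        S[:, j] = (np.asarray(model(tp)) - y0) / (tp[j] - theta[j])
    return S

def rank_identifiability(S):
    """Rank parameters by the norm of their sensitivity columns and report the
    collinearity index of the full set (large values -> poorly identifiable)."""
    norms = np.linalg.norm(S, axis=0)
    S_unit = S / np.where(norms > 0, norms, 1.0)
    sing = np.linalg.svd(S_unit, compute_uv=False)
    collinearity = 1.0 / sing[-1] if sing[-1] > 0 else np.inf
    return np.argsort(norms)[::-1], collinearity

# toy model with one nearly non-influential parameter (theta[2])
model = lambda th: th[0] * np.exp(-th[1] * np.linspace(0, 1, 20)) + 1e-6 * th[2]
order, gamma = rank_identifiability(sensitivity_matrix(model, np.array([2.0, 1.5, 0.3])))
print("parameters, most to least identifiable:", order, "collinearity index:", gamma)
```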

  4. Techniques and Protocols for Dispersing Nanoparticle Powders in Aqueous Media-Is there a Rationale for Harmonization?

    PubMed

    Hartmann, Nanna B; Jensen, Keld Alstrup; Baun, Anders; Rasmussen, Kirsten; Rauscher, Hubert; Tantra, Ratna; Cupi, Denisa; Gilliland, Douglas; Pianella, Francesca; Riego Sintes, Juan M

    2015-01-01

    Selecting appropriate ways of bringing engineered nanoparticles (ENP) into aqueous dispersion is a main obstacle for testing, and thus for understanding and evaluating, their potential adverse effects to the environment and human health. Using different methods to prepare (stock) dispersions of the same ENP may be a source of variation in the toxicity measured. Harmonization and standardization of dispersion methods applied in mammalian and ecotoxicity testing are needed to ensure a comparable data quality and to minimize test artifacts produced by modifications of ENP during the dispersion preparation process. Such harmonization and standardization will also enhance comparability among tests, labs, and studies on different types of ENP. The scope of this review was to critically discuss the essential parameters in dispersion protocols for ENP. The parameters are identified from individual scientific studies and from consensus reached in larger scale research projects and international organizations. A step-wise approach is proposed to develop tailored dispersion protocols for ecotoxicological and mammalian toxicological testing of ENP. The recommendations of this analysis may serve as a guide to researchers, companies, and regulators when selecting, developing, and evaluating the appropriateness of dispersion methods applied in mammalian and ecotoxicity testing. However, additional experimentation is needed to further document the protocol parameters and investigate to what extent different stock dispersion methods affect ecotoxicological and mammalian toxicological responses of ENP.

  5. Integrated identification and control for nanosatellites reclaiming failed satellite

    NASA Astrophysics Data System (ADS)

    Han, Nan; Luo, Jianjun; Ma, Weihua; Yuan, Jianping

    2018-05-01

    Using nanosatellites to reclaim a failed satellite requires the nanosatellites to attach to its surface and take over its attitude control function. This is challenging, since parameters including the inertia matrix of the combined spacecraft and the relative attitude information of the attached nanosatellites with respect to the given body-fixed frame of the failed satellite are all unknown after the attachment. Besides, if the total control capacity needs to be increased during the reclaiming process by new nanosatellites, real-time parameter updating will be necessary. For these reasons, an integrated identification and control method is proposed in this paper, which enables real-time parameter identification and attitude takeover control to be conducted concurrently. Identification of the inertia matrix of the combined spacecraft and the relative attitude information of the attached nanosatellites are both considered. To guarantee sufficient excitation for the identification of the inertia matrix, a modified identification equation is established by filtering out sample points leading to ill-conditioned identification, and the identification performance of the inertia matrix is improved. Based on the real-time estimated inertia matrix, an attitude takeover controller is designed, and the stability of the controller is analysed using the Lyapunov method. The commanded control torques are allocated to each nanosatellite while satisfying the control saturation constraint using the Quadratic Programming (QP) method. Numerical simulations are carried out to demonstrate the feasibility and effectiveness of the proposed integrated identification and control method.
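
    The allocation step lends itself to a compact illustration. The sketch below (an assumption-laden stand-in, not the paper's algorithm) distributes a commanded body torque over several attached nanosatellites as a bound-constrained least-squares problem, which is the QP form mentioned in the abstract; the mapping matrix and torque limits are hypothetical.

```python
import numpy as np
from scipy.optimize import lsq_linear

def allocate_torque(D, tau_cmd, tau_max):
    """Distribute a commanded body torque over attached nanosatellite actuators.

    Solves the bound-constrained least-squares problem
        min ||D u - tau_cmd||^2   s.t.  |u_i| <= tau_max,
    where D (3 x n) maps individual actuator torques u to the combined body torque.
    """
    res = lsq_linear(D, tau_cmd, bounds=(-tau_max, tau_max))
    return res.x

# hypothetical example: four attached nanosatellites with fixed torque directions
D = np.array([[1.0, 0.0, 0.7, -0.2],
              [0.0, 1.0, 0.1,  0.9],
              [0.3, 0.2, 0.7,  0.4]])
u = allocate_torque(D, tau_cmd=np.array([0.02, -0.01, 0.015]), tau_max=0.01)
print("actuator torques:", u, "achieved body torque:", D @ u)
```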

  6. The Effect of Stochastically Varying Creep Parameters on Residual Stresses in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Mital, Subodh K.; Bednarcyk, Brett A.; Arnold, Steven M.

    2015-01-01

    Constituent properties, along with volume fraction, have a first order effect on the microscale fields within a composite material and influence the macroscopic response. Therefore, there is a need to assess the significance of stochastic variation in the constituent properties of composites at the higher scales. The effect of variability in the parameters controlling the time-dependent behavior, in a unidirectional SCS-6 SiC fiber-reinforced RBSN matrix composite lamina, on the residual stresses induced during processing is investigated numerically. The generalized method of cells micromechanics theory is utilized to model the ceramic matrix composite lamina using a repeating unit cell. The primary creep phases of the constituents are approximated using a Norton-Bailey, steady state, power law creep model. The effect of residual stresses on the proportional limit stress and strain to failure of the composite is demonstrated. Monte Carlo simulations were conducted using a normal distribution for the power law parameters and the resulting residual stress distributions were predicted.
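
    As a hedged sketch of the stochastic ingredient described above (not the generalized method of cells micromechanics itself), the snippet samples normally distributed power-law creep parameters and propagates them through a Norton-type steady-state creep law; the nominal values and scatter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def creep_strain_rate(sigma, A, n):
    """Steady-state power-law (Norton-type) creep: eps_dot = A * sigma**n."""
    return A * sigma ** n

# hypothetical nominal parameters and coefficients of variation (illustrative only)
A_mean, n_mean = 1.0e-20, 4.0
A_samples = rng.normal(A_mean, 0.10 * A_mean, size=10_000)   # 10 % scatter on A
n_samples = rng.normal(n_mean, 0.05 * n_mean, size=10_000)   # 5 % scatter on n

sigma = 150.0                                                 # applied stress, MPa
rates = creep_strain_rate(sigma, A_samples, n_samples)
print(f"median rate {np.median(rates):.3e} 1/s, "
      f"95th percentile {np.percentile(rates, 95):.3e} 1/s")
```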

  7. Measurements of Cuspal Slope Inclination Angles in Palaeoanthropological Applications

    NASA Astrophysics Data System (ADS)

    Gaboutchian, A. V.; Knyaz, V. A.; Leybova, N. A.

    2017-05-01

    Tooth crown morphological features, studied in palaeoanthropology, provide valuable information about human evolution and the development of civilization. Tooth crown morphology represents biological and historical data of high taxonomical value, as it characterizes genetically conditioned tooth relief features averse to substantial changes under environmental factors during lifetime. Palaeoanthropological studies are still based mainly on descriptive techniques and manual measurements of a limited number of morphological parameters. Feature evaluation and measurement result analysis are expert-based. Development of new methods and techniques in 3D imaging creates a background that provides better value in palaeoanthropological data processing, analysis and distribution. The goals of the presented research are to propose new features for automated odontometry and to explore their applicability to palaeoanthropological studies. A technique for automated measurement of given morphological tooth parameters needed for anthropological study is developed. It is based on using an original photogrammetric system as a teeth 3D model acquisition device and on a set of algorithms for estimation of the given tooth parameters.

  8. The Microscope Space Mission and the In-Orbit Calibration Plan for its Instrument

    NASA Astrophysics Data System (ADS)

    Levy, Agnès; Touboul, Pierre; Rodrigues, Manuel; Hardy, Émilie; Métris, Gilles; Robert, Alain

    2015-01-01

    The MICROSCOPE space mission aims at testing the Equivalence Principle (EP) with an accuracy of 10^-15. This principle is one of the bases of the General Relativity theory; it states the equivalence between gravitational and inertial mass. The test is based on the precise measurement of a gravitational signal by a differential electrostatic accelerometer which includes two cylindrical test masses made of different materials. The accelerometers constitute the payload accommodated on board a drag-free micro-satellite which is controlled to be either inertial or rotating about the normal to the orbital plane. The acceleration estimates used for the EP test are disturbed by the instrument's physical parameters and by the instrument environment conditions on board the satellite. These parameters are partially measured with ground tests or during the integration of the instrument in the satellite (alignment). Nevertheless, the ground evaluations are not sufficient with respect to the EP test accuracy objectives. An in-orbit calibration is therefore needed to characterize them finely. The calibration process for each parameter has been defined.

  9. Neuromorphic vision sensors and preprocessors in system applications

    NASA Astrophysics Data System (ADS)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  10. The Mathematical Modeling and Computer Simulation of Electrochemical Micromachining Using Ultrashort Pulses

    NASA Astrophysics Data System (ADS)

    Kozak, J.; Gulbinowicz, D.; Gulbinowicz, Z.

    2009-05-01

    The need for complex and accurate three-dimensional (3-D) microcomponents is increasing rapidly for many industrial and consumer products. The electrochemical machining process (ECM) has the potential of generating desired crack-free and stress-free surfaces of microcomponents. This paper reports a study of pulse electrochemical micromachining (PECMM) using ultrashort (nanosecond) pulses for generating complex 3-D microstructures of high accuracy. A mathematical model of the microshaping process, taking into consideration unsteady phenomena in the electrical double layer, has been developed. Software for computer simulation of PECMM has been developed, and the effects of machining parameters on anodic localization and the final shape of the machined surface are presented.

  11. A new method of cardiographic image segmentation based on grammar

    NASA Astrophysics Data System (ADS)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" will be projected to the entity "image" to perform structural analysis and parsing of the image. We will show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardio-graphic image processing.

  12. Online analysis and process control in recombinant protein production (review).

    PubMed

    Palmer, Shane M; Kunji, Edmund R S

    2012-01-01

    Online analysis and control are essential for efficient and reproducible bioprocesses. A key factor in real-time control is the ability to measure critical variables rapidly. Online in situ measurements are the preferred option and minimize the potential loss of sterility. The challenge is to provide sensors with a good lifespan that withstand harsh bioprocess conditions, remain stable for the duration of a process without the need for recalibration, and offer a suitable working range. In recent decades, many new techniques have arisen that promise to extend the possibilities of analysis and control, not only by providing new parameters for analysis but also by improving accepted, well-practiced measurements.

  13. Development, Characterization, and Resultant Properties of a Carbon, Boron, and Chromium Ternary Diffusion System

    NASA Astrophysics Data System (ADS)

    Domec, Brennan S.

    In today's industry, engineering materials are continuously pushed to the limits. Often, the application only demands high-specification properties in a narrowly-defined region of the material, such as the outermost surface. This, in combination with the economic benefits, makes case hardening an attractive solution to meet industry demands. While case hardening has been in use for decades, applications demanding high hardness, deep case depth, and high corrosion resistance are often under-served by this process. Instead, new solutions are required. The goal of this study is to develop and characterize a new borochromizing process applied to a pre-carburized AISI 8620 alloy steel. The process was successfully developed using a combination of computational simulations, calculations, and experimental testing. Process kinetics were studied by fitting case depth measurement data to Fick's Second Law of Diffusion and an Arrhenius equation. Results indicate that the kinetics of the co-diffusion method are unaffected by the addition of chromium to the powder pack. The results also show that significant structural degradation of the case occurs when chromizing is applied sequentially to an existing boronized case. The amount of degradation is proportional to the chromizing parameters. Microstructural evolution was studied using metallographic methods, simulation and computational calculations, and analytical techniques. While the co-diffusion process failed to enrich the substrate with chromium, significant enrichment is obtained with the sequential diffusion process. The amount of enrichment is directly proportional to the chromizing parameters with higher parameters resulting in more enrichment. The case consists of M7C3 and M23C6 carbides nearest the surface, minor amounts of CrB, and a balance of M2B. Corrosion resistance was measured with salt spray and electrochemical methods. These methods confirm the benefit of surface enrichment by chromium in the sequential diffusion method with corrosion resistance increasing directly with chromium concentration. The results also confirm the deleterious effect of surface-breaking case defects and the need to reduce or eliminate them. The best combination of microstructural integrity, mean surface hardness, effective case depth, and corrosion resistance is obtained in samples sequentially boronized and chromized at 870°C for 6hrs. Additional work is required to further optimize process parameters and case properties.
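
    The kinetics fit described above can be illustrated compactly. The sketch below (illustrative values only, not the study's data) fits hypothetical case-depth measurements to parabolic growth per Fick's second law and extracts an Arrhenius activation energy from the temperature dependence.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# hypothetical case-depth measurements: temperature (K), time (s), case depth (m)
T = np.array([1123.0, 1143.0, 1173.0, 1143.0])
t = np.array([4.0, 4.0, 4.0, 6.0]) * 3600.0
d = np.array([62e-6, 78e-6, 105e-6, 95e-6])

# parabolic growth d^2 = k(T) * t  (Fick's second law, thin-case approximation)
k = d ** 2 / t

# Arrhenius form k = k0 * exp(-Q / (R T))  ->  ln k = ln k0 - Q/(R T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Q, k0 = -slope * R, np.exp(intercept)
print(f"activation energy Q ≈ {Q / 1000:.0f} kJ/mol, pre-exponential k0 ≈ {k0:.2e} m^2/s")
```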

  14. Development of a thermally-assisted piercing (TAP) process for introducing holes into thermoplastic composites

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas W. A.

    Composite parts can be manufactured to near-net shape with minimum wastage of material; however, there is almost always a need for further machining. The most common post-manufacture machining operations for composite materials are to create holes for assembly. This thesis presents and discusses a thermally-assisted piercing process that can be used as a technique for introducing holes into thermoplastic composites. The thermally-assisted piercing process heats up, and locally melts, thermoplastic composites to allow material to be displaced around a hole, rather than cut out from the structure. This investigation was concerned with how the variation of piercing process parameters (such as the size of the heated area, the temperature of the laminate prior to piercing and the geometry of the piercing spike) changed the material microstructure within carbon fibre/Polyetheretherketone (PEEK) laminates. The variation of process parameters was found to significantly affect the formation of resin-rich regions, voids and the fibre volume fraction in the material surrounding the hole. Mechanical testing (using open-hole tension, open-hole compression, plain-pin bearing and bolted bearing tests) showed that the microstructural features created during piercing had a significant influence on the resulting mechanical performance of specimens. By optimising the process parameters, strength improvements of up to 11% and 21% were found for pierced specimens when compared with drilled specimens for open-hole tension and compression loading, respectively. For plain-pin and bolted bearing tests, maximum strengths of 77% and 85%, respectively, were achieved when compared with drilled holes. Improvements in first failure force (by 10%) and in the stress at 4% hole elongation (by 18%), however, were measured for the bolted bearing tests when compared to drilled specimens. The overall performance of pierced specimens in an industrially relevant application ultimately depends on the properties required for that specific scenario. The results within this thesis show that the piercing technique could be used as a direct replacement for drilling depending on the application.

  15. Transcranial electric stimulation for the investigation of speech perception and comprehension

    PubMed Central

    Zoefel, Benedikt; Davis, Matthew H.

    2017-01-01

    ABSTRACT Transcranial electric stimulation (tES), comprising transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), involves applying weak electrical current to the scalp, which can be used to modulate membrane potentials and thereby modify neural activity. Critically, behavioural or perceptual consequences of this modulation provide evidence for a causal role of neural activity in the stimulated brain region for the observed outcome. We present tES as a tool for the investigation of which neural responses are necessary for successful speech perception and comprehension. We summarise existing studies, along with challenges that need to be overcome, potential solutions, and future directions. We conclude that, although standardised stimulation parameters still need to be established, tES is a promising tool for revealing the neural basis of speech processing. Future research can use this method to explore the causal role of brain regions and neural processes for the perception and comprehension of speech. PMID:28670598

  16. Fabrication and Characterization of High Strength Al-Cu Alloys Processed Using Laser Beam Melting in Metal Powder Bed

    NASA Astrophysics Data System (ADS)

    Ahuja, Bhrigu; Karg, Michael; Nagulin, Konstantin Yu.; Schmidt, Michael

    This paper illustrates the fabrication and characterization of high-strength aluminium-copper alloys processed using the Laser Beam Melting process. The Al-Cu alloys EN AW-2219 and EN AW-2618 are classified as wrought alloys, and 2618 is typically considered difficult to weld. Laser Beam Melting (LBM), from the family of Additive Manufacturing processes, has the unique ability to form fully dense, complex 3D geometries from micro-sized metallic powder in a layer-by-layer fabrication methodology. The LBM process can most closely be associated with the conventional laser welding process, but has significant differences in terms of the typical laser intensities and scan speeds used. Due to the use of high intensities and fast scan speeds, the process induces extremely high heating and cooling rates. This gives it a unique physical attribute, and therefore its ability to process high-strength Al-Cu alloys needs to be investigated. Experiments conducted during the investigation relate the induced energy density, controlled by varying process parameters, to the achieved relative densities of the fabricated 3D structures.
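
    The induced energy density mentioned above is commonly summarised in the LBM literature by the volumetric energy density E = P / (v · h · t), i.e. laser power divided by the product of scan speed, hatch spacing and layer thickness; whether the authors used exactly this definition is an assumption. The sketch below evaluates it for a purely hypothetical parameter set.

```python
def volumetric_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """Commonly used LBM/SLM volumetric energy density E = P / (v * h * t) in J/mm^3."""
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# hypothetical parameter set for an Al-Cu alloy build (values are illustrative)
E = volumetric_energy_density(power_w=300.0, scan_speed_mm_s=1300.0,
                              hatch_mm=0.10, layer_mm=0.03)
print(f"energy density ≈ {E:.0f} J/mm^3")
```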

  17. Development of an intelligent system for cooling rate and fill control in GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Einerson, C.J.; Smartt, H.B.; Johnson, J.A.

    1992-09-01

    A control strategy for gas metal arc welding (GMAW) is developed in which the welding system detects certain existing conditions and adjusts the process in accordance with pre-specified rules. This strategy is used to control the reinforcement and weld bead centerline cooling rate during welding. Relationships between heat and mass transfer rates to the base metal and the required electrode speed and welding speed for specific open circuit voltages are taught to an artificial neural network. Control rules are programmed into a fuzzy logic system. Traditional control of the GMAW process is based on the use of explicit welding procedures detailing allowable parameter ranges on a pass-by-pass basis for a given weld. The present work is an exploration of a completely different approach to welding control. In this work the objectives are to produce welds having desired weld bead reinforcements while maintaining the weld bead centerline cooling rate at preselected values. The need for this specific control is related to fabrication requirements for specific types of pressure vessels. The control strategy involves measuring the weld joint transverse cross-sectional area ahead of the welding torch and the weld bead centerline cooling rate behind the weld pool, both by means of video (2), calculating the required process parameters necessary to obtain the needed heat and mass transfer rates (in appropriate dimensions) by means of an artificial neural network, and controlling the heat transfer rate by means of a fuzzy logic controller (3). The result is a welding machine that senses the welding conditions and responds to those conditions on the basis of logical rules, as opposed to producing a weld based on a specific procedure.

  18. Development of an intelligent system for cooling rate and fill control in GMAW. [Gas Metal Arc Welding (GMAW)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Einerson, C.J.; Smartt, H.B.; Johnson, J.A.

    1992-01-01

    A control strategy for gas metal arc welding (GMAW) is developed in which the welding system detects certain existing conditions and adjusts the process in accordance with pre-specified rules. This strategy is used to control the reinforcement and weld bead centerline cooling rate during welding. Relationships between heat and mass transfer rates to the base metal and the required electrode speed and welding speed for specific open circuit voltages are taught to an artificial neural network. Control rules are programmed into a fuzzy logic system. Traditional control of the GMAW process is based on the use of explicit welding procedures detailing allowable parameter ranges on a pass-by-pass basis for a given weld. The present work is an exploration of a completely different approach to welding control. In this work the objectives are to produce welds having desired weld bead reinforcements while maintaining the weld bead centerline cooling rate at preselected values. The need for this specific control is related to fabrication requirements for specific types of pressure vessels. The control strategy involves measuring the weld joint transverse cross-sectional area ahead of the welding torch and the weld bead centerline cooling rate behind the weld pool, both by means of video (2), calculating the required process parameters necessary to obtain the needed heat and mass transfer rates (in appropriate dimensions) by means of an artificial neural network, and controlling the heat transfer rate by means of a fuzzy logic controller (3). The result is a welding machine that senses the welding conditions and responds to those conditions on the basis of logical rules, as opposed to producing a weld based on a specific procedure.

  19. Sensor selection and chemo-sensory optimization: toward an adaptable chemo-sensory system.

    PubMed

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite the tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to "adapt" in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve.

  20. Sensor Selection and Chemo-Sensory Optimization: Toward an Adaptable Chemo-Sensory System

    PubMed Central

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite the tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to “adapt” in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve. PMID:22319492

  1. Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions

    NASA Astrophysics Data System (ADS)

    Vermeulen, Petrus

    2017-04-01

    A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value, the maximum magnitude m_max up to which the Gutenberg-Richter law applies, and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations; they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has a functional form analogous to the frequency-magnitude law, described by parameters γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion a_max (analogous to m_max). Originally, the approach could be applied only to simple GMPEs; recently, however, the method was extended to incorporate more complex forms of GMPEs. With regard to the parameter m_max, there are numerous methods of estimation, none of which is accepted as the standard one, and there is much controversy surrounding this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude m_min from which the catalogue is complete becomes important. Thus, the parameter m_min is also considered a parameter to be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used - and, for that matter, which ones are not - is in order.
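
    For a concrete flavour of the parameter-estimation step, the sketch below applies the standard Aki maximum-likelihood b-value estimator (with Utsu's binning correction) to a synthetic catalogue and derives a mean rate λ above the completeness magnitude; the catalogue, completeness level and observation window are assumptions, not data from the study.

```python
import numpy as np

def aki_b_value(magnitudes, m_min, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965, with Utsu's binning correction).

    Only events with M >= m_min (the completeness magnitude) should be passed in.
    """
    m = np.asarray(magnitudes)
    m = m[m >= m_min]
    b = np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))
    return b, m.size

# synthetic catalogue drawn from a Gutenberg-Richter law with b = 1.0
rng = np.random.default_rng(0)
mags = 3.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=2000)
b_hat, n = aki_b_value(mags, m_min=3.0)
lam = n / 50.0                      # mean annual rate for a hypothetical 50-year catalogue
print(f"b ≈ {b_hat:.2f} from {n} events, lambda ≈ {lam:.1f} events/yr above M3")
```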

  2. Beyond Roughness: Maximum-Likelihood Estimation of Topographic "Structure" on Venus and Elsewhere in the Solar System

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Eggers, G. L.; Lewis, K. W.; Olhede, S. C.

    2015-12-01

    What numbers "capture" topography? If stationary, white, and Gaussian: mean and variance. But "whiteness" is strong; we are led to a "baseline" over which to compute means and variances. We then have subscribed to topography as a correlated process, and to the estimation (noisy, afftected by edge effects) of the parameters of a spatial or spectral covariance function. What if the covariance function or the point process itself aren't Gaussian? What if the region under study isn't regularly shaped or sampled? How can results from differently sized patches be compared robustly? We present a spectral-domain "Whittle" maximum-likelihood procedure that circumvents these difficulties and answers the above questions. The key is the Matern form, whose parameters (variance, range, differentiability) define the shape of the covariance function (Gaussian, exponential, ..., are all special cases). We treat edge effects in simulation and in estimation. Data tapering allows for the irregular regions. We determine the estimation variance of all parameters. And the "best" estimate may not be "good enough": we test whether the "model" itself warrants rejection. We illustrate our methodology on geologically mapped patches of Venus. Surprisingly few numbers capture planetary topography. We derive them, with uncertainty bounds, we simulate "new" realizations of patches that look to the geologists exactly as if they were derived from similar processes. Our approach holds in 1, 2, and 3 spatial dimensions, and generalizes to multiple variables, e.g. when topography and gravity are being considered jointly (perhaps linked by flexural rigidity, erosion, or other surface and sub-surface modifying processes). Our results have widespread implications for the study of planetary topography in the Solar System, and are interpreted in the light of trying to derive "process" from "parameters", the end goal to assign likely formation histories for the patches under consideration. Our results should also be relevant for whomever needed to perform spatial interpolation or out-of-sample extension (e.g. kriging), machine learning and feature detection, on geological data. We present procedural details but focus on high-level results that have real-world implications for the study of Venus, Earth, other planets, and moons.

  3. An overview of mesoscale aerosol processes, comparisons, and validation studies from DRAGON networks

    NASA Astrophysics Data System (ADS)

    Holben, Brent N.; Kim, Jhoon; Sano, Itaru; Mukai, Sonoyo; Eck, Thomas F.; Giles, David M.; Schafer, Joel S.; Sinyuk, Aliaksandr; Slutsker, Ilya; Smirnov, Alexander; Sorokin, Mikhail; Anderson, Bruce E.; Che, Huizheng; Choi, Myungje; Crawford, James H.; Ferrare, Richard A.; Garay, Michael J.; Jeong, Ukkyo; Kim, Mijin; Kim, Woogyung; Knox, Nichola; Li, Zhengqiang; Lim, Hwee S.; Liu, Yang; Maring, Hal; Nakata, Makiko; Pickering, Kenneth E.; Piketh, Stuart; Redemann, Jens; Reid, Jeffrey S.; Salinas, Santo; Seo, Sora; Tan, Fuyi; Tripathi, Sachchida N.; Toon, Owen B.; Xiao, Qingyang

    2018-01-01

    Over the past 24 years, the AErosol RObotic NETwork (AERONET) program has provided highly accurate remote-sensing characterization of aerosol optical and physical properties for an increasingly extensive geographic distribution including all continents and many oceanic island and coastal sites. The measurements and retrievals from the AERONET global network have addressed satellite and model validation needs very well, but there have been challenges in making comparisons to similar parameters from in situ surface and airborne measurements. Additionally, with improved spatial and temporal satellite remote sensing of aerosols, there is a need for higher spatial-resolution ground-based remote-sensing networks. An effort to address these needs resulted in a number of field campaign networks called Distributed Regional Aerosol Gridded Observation Networks (DRAGONs) that were designed to provide a database for in situ and remote-sensing comparison and analysis of local to mesoscale variability in aerosol properties. This paper describes the DRAGON deployments that will continue to contribute to the growing body of research related to meso- and microscale aerosol features and processes. The research presented in this special issue illustrates the diversity of topics that has resulted from the application of data from these networks.

  4. Autocorrelated residuals in inverse modelling of soil hydrological processes: a reason for concern or something that can safely be ignored?

    NASA Astrophysics Data System (ADS)

    Scharnagl, Benedikt; Durner, Wolfgang

    2013-04-01

    Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than estimates based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider if autocorrelation in the residuals was explicitly accounted for, and also the optimal parameter values were slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
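
    As a hedged sketch of the residual treatment (not the authors' full Bayesian machinery), the code below evaluates a Gaussian log-likelihood after whitening the residuals with an AR(2) filter, conditioning on the first two points; the synthetic series and coefficients are assumptions.

```python
import numpy as np

def ar2_log_likelihood(residuals, phi1, phi2, sigma):
    """Gaussian log-likelihood of model residuals under an AR(2) error model.

    e_t = phi1 * e_{t-1} + phi2 * e_{t-2} + w_t,   w_t ~ N(0, sigma^2)
    The first two residuals are conditioned on (exact initial-state terms omitted).
    """
    e = np.asarray(residuals)
    w = e[2:] - phi1 * e[1:-1] - phi2 * e[:-2]          # whitened innovations
    n = w.size
    return -0.5 * n * np.log(2.0 * np.pi * sigma ** 2) - 0.5 * np.sum(w ** 2) / sigma ** 2

# synthetic AR(2) residual series for illustration
rng = np.random.default_rng(3)
e = np.zeros(500)
for t in range(2, 500):
    e[t] = 0.8 * e[t - 1] - 0.2 * e[t - 2] + rng.normal(0.0, 0.1)

print("logL at the generating parameters:", ar2_log_likelihood(e, 0.8, -0.2, 0.1))
print("logL assuming independent residuals:", ar2_log_likelihood(e, 0.0, 0.0, 0.1))
```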

  5. Particle behavior and char burnout mechanisms under pressurized combustion conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, C.M.; Spliethoff, H.; Hein, K.R.G.

    Combined cycle systems with coal-fired gas turbines promise the highest cycle efficiencies for this fuel. Pressurized pulverized coal combustion, in particular, yields high cycle efficiencies due to the high flue gas temperatures possible. The main problem, however, is to ensure a flue gas clean enough to meet the high gas turbine standards with a dirty fuel like coal. On the one hand, a profound knowledge of the basic chemical and physical processes during fuel conversion under elevated pressures is required, whereas on the other hand suitable hot gas cleaning systems need to be developed. The objective of this work was to provide experimental data to enable a detailed description of pressurized coal combustion processes. A series of experiments was performed with two German hvb coals, Ensdorf and Goettelborn, and one German brown coal, Garzweiler, using a semi-technical scale pressurized entrained flow reactor. The parameters varied in the experiments were pressure, gas temperature and bulk gas oxygen concentration. A two-color pyrometer was used for in-situ determination of particle surface temperatures and particle sizes. The flue gas composition was measured and solid residue samples were taken and subsequently analyzed. The char burnout reaction rates were determined by varying the parameters pressure, gas temperature and initial oxygen concentration. Variation of residence time was achieved by taking the samples at different points along the reaction zone. The most influential parameters on char burnout reaction rates were found to be oxygen partial pressure and fuel volatile content. With increasing pressure the burnout reactions are accelerated and are mostly controlled by product desorption and pore diffusion as the limiting processes. The char burnout process is enhanced by a higher fuel volatile content.

  6. Toward more realistic projections of soil carbon dynamics by Earth system models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yiqi; Ahlström, Anders; Allison, Steven D.

    Soil carbon (C) is a critical component of Earth system models (ESMs) and its diverse representations are a major source of the large spread across models in the terrestrial C sink from the 3rd to 5th assessment reports of the Intergovernmental Panel on Climate Change (IPCC). Improving soil C projections is of a high priority for Earth system modeling in the future IPCC and other assessments. To achieve this goal, we suggest that (1) model structures should reflect real-world processes, (2) parameters should be calibrated to match model outputs with observations, and (3) external forcing variables should accurately prescribe the environmental conditions that soils experience. Firstly, most soil C cycle models simulate C input from litter production and C release through decomposition. The latter process has traditionally been represented by 1st-order decay functions, regulated primarily by temperature, moisture, litter quality, and soil texture. While this formulation captures macroscopic SOC dynamics well, better understanding is needed of the underlying mechanisms as related to microbial processes, depth-dependent environmental controls, and other processes that strongly affect soil C dynamics. Secondly, incomplete use of observations in model parameterization is a major cause of bias in soil C projections from ESMs. Optimal parameter calibration with both pool- and flux-based datasets through data assimilation is among the highest priorities for near-term research to reduce biases among ESMs. Thirdly, external variables are represented inconsistently among ESMs, leading to differences in modeled soil C dynamics. We recommend the implementation of traceability analyses to identify how external variables and model parameterizations influence SOC dynamics in different ESMs. Overall, projections of the terrestrial C sink can be substantially improved when reliable datasets are available to select the most representative model structure, constrain parameters, and prescribe forcing fields.

  7. Computer algorithm for analyzing and processing borehole strainmeter data

    USGS Publications Warehouse

    Langbein, John O.

    2010-01-01

    The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the use of simplifying assumptions that data are uncorrelated can lead to incorrect results and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent with these data. The technique described here uses least squares but employs data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected, tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
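
    As a hedged illustration of the estimation approach (many parameters fitted simultaneously under a temporally correlated data covariance), the sketch below runs generalized least squares with an exponential covariance as a stand-in noise model; the design terms, correlation time and synthetic data are assumptions rather than the program's actual error model.

```python
import numpy as np

def gls_fit(t, y, design, corr_time):
    """Generalized least squares with an exponential temporal covariance.

    design(t) returns the design matrix (rows: epochs; columns: offset, rate,
    tidal/pressure terms, ...); corr_time sets the e-folding time of the
    first-order Gauss-Markov noise model used here as a stand-in.
    """
    A = design(t)
    C = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_time)   # data covariance (unit variance)
    Ci = np.linalg.inv(C)
    cov_params = np.linalg.inv(A.T @ Ci @ A)
    params = cov_params @ (A.T @ Ci @ y)
    return params, np.sqrt(np.diag(cov_params))                # estimates and standard errors

# toy example: offset + rate + annual sinusoid in temporally correlated noise
t = np.arange(0.0, 365.0, 1.0)
design = lambda t: np.column_stack([np.ones_like(t), t, np.sin(2 * np.pi * t / 365.25)])
rng = np.random.default_rng(7)
noise = rng.multivariate_normal(np.zeros(t.size),
                                np.exp(-np.abs(t[:, None] - t[None, :]) / 10.0))
y = 0.5 + 0.01 * t + 2.0 * np.sin(2 * np.pi * t / 365.25) + noise
p, se = gls_fit(t, y, design, corr_time=10.0)
print("estimates:", p, "standard errors:", se)
```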

  8. Simulation of spatial and temporal properties of aftershocks by means of the fiber bundle model

    NASA Astrophysics Data System (ADS)

    Monterrubio-Velasco, Marisol; Zúñiga, F. R.; Márquez-Ramírez, Victor Hugo; Figueroa-Soto, Angel

    2017-11-01

    The rupture process of any heterogeneous material constitutes a complex physical problem. Earthquake aftershocks show temporal and spatial behaviors which are a consequence of the heterogeneous stress distribution and multiple rupturing following the main shock. This process is difficult to model deterministically due to the number of parameters and physical conditions, which are largely unknown. In order to shed light on the minimum requirements for the generation of aftershock clusters, in this study we simulate the main features of such a complex process by means of a fiber bundle (FB) type model. The FB model has been widely used to analyze the fracture process in heterogeneous materials. It is a simple but powerful tool that allows modeling the main characteristics of a medium such as the brittle shallow crust of the Earth. In this work, we incorporate spatial properties, such as the Coulomb stress change pattern, which help simulate observed characteristics of aftershock sequences. In particular, we introduce a parameter (P) that controls the probability of the spatial distribution of initial loads. Also, we use a "conservation" parameter (π), which accounts for the load dissipation of the system, and demonstrate its influence on the simulated spatio-temporal patterns. Based on numerical results, we find that P has to be in the range 0.06 < P < 0.30, whilst π needs to be limited to a very narrow range (0.60 < π < 0.66) in order to reproduce aftershock pattern characteristics that resemble those of observed sequences. This means that the system requires a small difference in the spatial distribution of initial stress, and a very particular fraction of load transfer, in order to generate realistic aftershocks.
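
    A minimal, assumption-laden sketch of one fiber-bundle step is given below: a fraction π of a failed fibre's load is passed to its nearest intact neighbours and the rest is dissipated, while P seeds the initial loads spatially. The strength distribution, neighbourhood rule and all values are illustrative, not the authors' model.

```python
import numpy as np

def fiber_bundle_step(strength, load, pi_conserve):
    """One sweep of a local-load-sharing fiber bundle model.

    Fibres whose load exceeds their strength break; a fraction pi_conserve of
    their load is transferred to the two nearest surviving neighbours, the rest
    is dissipated (the 'conservation' parameter of the abstract).
    """
    broken = load > strength
    for i in np.flatnonzero(broken):
        transfer = pi_conserve * load[i]
        alive = np.flatnonzero(~broken & (strength > 0))
        if alive.size:
            nearest = alive[np.argsort(np.abs(alive - i))[:2]]
            load[nearest] += transfer / nearest.size
        load[i], strength[i] = 0.0, 0.0               # fibre removed
    return strength, load, int(broken.sum())

# P controls how initial loads are seeded in space (sparse vs. dense seeding)
rng = np.random.default_rng(11)
n, P = 200, 0.15
strength = rng.uniform(0.5, 1.0, n)
load = np.where(rng.random(n) < P, rng.uniform(0.4, 0.9, n), 0.1)
for step in range(5):
    strength, load, nbroken = fiber_bundle_step(strength, load, pi_conserve=0.63)
    print(f"step {step}: {nbroken} fibres failed")
```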

  9. Experimental research and numerical optimisation of multi-point sheet metal forming implementation using a solid elastic cushion system

    NASA Astrophysics Data System (ADS)

    Tolipov, A. A.; Elghawail, A.; Shushing, S.; Pham, D.; Essa, K.

    2017-09-01

    There is a growing demand for flexible manufacturing techniques that meet rapid changes in customer needs. A finite element analysis numerical optimisation technique was used to optimise the multi-point sheet forming process. Multi-point forming (MPF) is a flexible sheet metal forming technique in which the same tool can be readily changed to produce different parts. The process suffers from geometrical defects such as wrinkling and dimpling, which have been found to be the cause of the major surface quality problems. This study investigated the influence of parameters such as the elastic cushion hardness, blank holder force, coefficient of friction, cushion thickness and radius of curvature on the quality of parts formed in a flexible multi-point stamping die. For those reasons, a multi-point stamping process using a blank holder was carried out in order to study wrinkling, dimpling, thickness variation and forming force. The aim was to determine the optimum values of these parameters. Finite element modelling (FEM) was employed to simulate the multi-point forming of hemispherical shapes. Using the response surface method, the effects of process parameters on wrinkling, maximum deviation from the target shape and thickness variation were investigated. The results show that the best performance is obtained with an elastic cushion of appropriate thickness made of polyurethane with a hardness of Shore A90. It has also been found that the application of lubrication can improve the shape accuracy of the formed workpiece. These results were compared with the numerical simulation results of the multi-point forming of hemispherical shapes using a blank holder, and it was found that a realistic cushion hardness reduces wrinkling and the maximum deviation.

  10. Optical sectioning microscopy using two-frame structured illumination and Hilbert-Huang data processing

    NASA Astrophysics Data System (ADS)

    Trusiak, M.; Patorski, K.; Tkaczyk, T.

    2014-12-01

    We propose a fast, simple and experimentally robust method for reconstructing background-rejected optically-sectioned microscopic images using a two-shot structured illumination approach. The innovative data demodulation technique requires two grid-illumination images mutually phase shifted by π (half a grid period), but the precise phase displacement value is not critical. Upon subtraction of the two frames, an input pattern with increased grid modulation is computed. The proposed demodulation procedure comprises: (1) two-dimensional data processing based on the enhanced fast empirical mode decomposition (EFEMD) method for object spatial frequency selection (noise reduction and bias term removal), and (2) calculating a high-contrast optically-sectioned image using the two-dimensional spiral Hilbert transform (HS). The effectiveness of the proposed algorithm is compared with the results obtained for the same input data using conventional structured-illumination (SIM) and HiLo microscopy methods. The input data were collected while studying highly scattering tissue samples in reflectance mode. In comparison with the conventional three-frame SIM technique, we need one frame fewer, and no stringent requirement on the exact phase shift between recorded frames is imposed. The outcome of the HiLo algorithm depends strongly on a set of parameters chosen manually by the operator (cut-off frequencies for low-pass and high-pass filtering and the η parameter value for optically-sectioned image reconstruction), whereas the proposed method is parameter-free. Moreover, the very short processing time required to efficiently demodulate the input pattern makes the proposed method suitable for real-time in-vivo studies. The current implementation completes full processing in 0.25 s using a medium-class PC (Intel i7 2.1 GHz processor and 8 GB RAM). A simple modification, extracting only the first two BIMFs with a fixed filter window size, reduces the computing time to 0.11 s (8 frames/s).
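
    A stripped-down sketch of the two-shot demodulation idea is given below: subtract the two π-shifted grid frames to remove the widefield bias, then recover the fringe envelope with a spiral-phase (vortex) Hilbert transform. The EFEMD pre-filtering step described in the abstract is omitted, and the synthetic frames are assumptions, so this is only a rough stand-in for the published pipeline.

```python
import numpy as np

def two_frame_sectioning(frame_0, frame_pi):
    """Two-shot optical sectioning via frame subtraction and spiral-phase demodulation."""
    diff = frame_0.astype(float) - frame_pi.astype(float)   # bias-free, doubled grid modulation
    ny, nx = diff.shape
    fx, fy = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))
    r = np.hypot(fx, fy)
    spiral = np.where(r > 0, (fx + 1j * fy) / np.where(r > 0, r, 1.0), 0.0)
    quad = np.fft.ifft2(np.fft.fft2(diff) * spiral)          # quadrature component
    return np.sqrt(diff ** 2 + np.abs(quad) ** 2)            # optically-sectioned envelope

# synthetic test: a Gaussian "in-focus" object modulated by a vertical grid
y, x = np.mgrid[0:256, 0:256]
obj = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 40.0 ** 2))
grid = np.cos(2 * np.pi * x / 16.0)
section = two_frame_sectioning(obj * (1 + grid), obj * (1 - grid))
print("peak of sectioned image:", section.max())
```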

  11. Sigma metrics as a tool for evaluating the performance of internal quality control in a clinical chemistry laboratory.

    PubMed

    Kumar, B Vinodh; Mohan, Thuthi

    2018-01-01

    Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a Sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are the IQC coefficient of variation percentage (CV%) and the External Quality Assurance Scheme (EQAS) bias percentage (Bias%) for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as in level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes below the 6 sigma level, the quality goal index (QGI) was <0.8, indicating imprecision as the area requiring improvement, except for cholesterol, whose QGI of >1.2 indicated inaccuracy. This study shows that the sigma metric is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
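
    The two quantities on which this kind of evaluation rests are commonly computed as Sigma = (TEa% - Bias%) / CV% and QGI = Bias% / (1.5 × CV%), with QGI < 0.8 pointing to imprecision and QGI > 1.2 to inaccuracy. A minimal sketch is given below; the allowable total error, bias and CV values are illustrative, not the study's data.

        # Sketch of sigma-metric and quality goal index (QGI) calculations (Python).
        # TEa, bias and CV values below are illustrative, not the study's data.
        def sigma_metric(tea_pct, bias_pct, cv_pct):
            """Sigma = (TEa% - Bias%) / CV%"""
            return (tea_pct - bias_pct) / cv_pct

        def quality_goal_index(bias_pct, cv_pct):
            """QGI = Bias% / (1.5 * CV%); <0.8 -> imprecision, >1.2 -> inaccuracy."""
            return bias_pct / (1.5 * cv_pct)

        analytes = {
            # analyte: (allowable total error %, EQAS bias %, IQC CV %)
            "cholesterol":  (9.0, 4.1, 2.3),
            "urea":         (9.0, 2.0, 3.5),
            "triglyceride": (25.0, 3.0, 3.4),
        }
        for name, (tea, bias, cv) in analytes.items():
            s = sigma_metric(tea, bias, cv)
            q = quality_goal_index(bias, cv)
            problem = "imprecision" if q < 0.8 else "inaccuracy" if q > 1.2 else "both"
            print(f"{name}: sigma={s:.1f}, QGI={q:.2f} ({problem if s < 6 else 'acceptable'})")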

  12. Toward more realistic projections of soil carbon dynamics by Earth system models

    USGS Publications Warehouse

    Luo, Y.; Ahlström, Anders; Allison, Steven D.; Batjes, Niels H.; Brovkin, V.; Carvalhais, Nuno; Chappell, Adrian; Ciais, Philippe; Davidson, Eric A.; Finzi, Adrien; Georgiou, Katerina; Guenet, Bertrand; Hararuk, Oleksandra; Harden, Jennifer; He, Yujie; Hopkins, Francesca; Jiang, L.; Koven, Charles; Jackson, Robert B.; Jones, Chris D.; Lara, M.; Liang, J.; McGuire, A. David; Parton, William; Peng, Changhui; Randerson, J.; Salazar, Alejandro; Sierra, Carlos A.; Smith, Matthew J.; Tian, Hanqin; Todd-Brown, Katherine E. O.; Torn, Margaret S.; van Groenigen, Kees Jan; Wang, Ying; West, Tristram O.; Wei, Yaxing; Wieder, William R.; Xia, Jianyang; Xu, Xia; Xu, Xiaofeng; Zhou, T.

    2016-01-01

    Soil carbon (C) is a critical component of Earth system models (ESMs), and its diverse representations are a major source of the large spread across models in the terrestrial C sink from the third to fifth assessment reports of the Intergovernmental Panel on Climate Change (IPCC). Improving soil C projections is a high priority for Earth system modeling in future IPCC and other assessments. To achieve this goal, we suggest that (1) model structures should reflect real-world processes, (2) parameters should be calibrated to match model outputs with observations, and (3) external forcing variables should accurately prescribe the environmental conditions that soils experience. First, most soil C cycle models simulate C input from litter production and C release through decomposition. The latter process has traditionally been represented by first-order decay functions, regulated primarily by temperature, moisture, litter quality, and soil texture. While this formulation captures macroscopic soil organic C (SOC) dynamics well, a better understanding is needed of the underlying mechanisms as related to microbial processes, depth-dependent environmental controls, and other processes that strongly affect soil C dynamics. Second, incomplete use of observations in model parameterization is a major cause of bias in soil C projections from ESMs. Optimal parameter calibration with both pool- and flux-based data sets through data assimilation is among the highest priorities for near-term research to reduce biases among ESMs. Third, external variables are represented inconsistently among ESMs, leading to differences in modeled soil C dynamics. We recommend the implementation of traceability analyses to identify how external variables and model parameterizations influence SOC dynamics in different ESMs. Overall, projections of the terrestrial C sink can be substantially improved when reliable data sets are available to select the most representative model structure, constrain parameters, and prescribe forcing fields.
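
    As a point of reference for the first-order formulation mentioned above, a minimal one-pool soil carbon model can be written as dC/dt = I - k f(T) g(W) C, where I is litter input and f and g are temperature and moisture modifiers. The sketch below uses an assumed Q10 temperature response and a linear moisture scalar with illustrative parameter values; it is not any particular ESM's scheme.

        # Minimal one-pool, first-order soil carbon model: dC/dt = I - k * f(T) * g(W) * C (Python).
        # Parameter values are illustrative only.
        import numpy as np

        def q10_scalar(temp_c, q10=2.0, ref_c=15.0):
            """Temperature modifier based on a Q10 response."""
            return q10 ** ((temp_c - ref_c) / 10.0)

        def moisture_scalar(w):
            """Simple linear moisture modifier, w = relative soil moisture in [0, 1]."""
            return np.clip(w, 0.0, 1.0)

        def run(years=200, dt=1.0, c0=5.0, litter_in=0.3, k=0.05, temp_c=12.0, w=0.6):
            """Integrate the pool with explicit Euler steps; returns the carbon trajectory (kg C m-2)."""
            c = np.empty(int(years / dt) + 1)
            c[0] = c0
            for t in range(1, c.size):
                decomp = k * q10_scalar(temp_c) * moisture_scalar(w) * c[t - 1]
                c[t] = c[t - 1] + dt * (litter_in - decomp)
            return c

        traj = run()
        print("steady-state estimate (kg C m-2):", traj[-1])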

  13. Calculation of electromagnetic force in electromagnetic forming process of metal sheet

    NASA Astrophysics Data System (ADS)

    Xu, Da; Liu, Xuesong; Fang, Kun; Fang, Hongyuan

    2010-06-01

    Electromagnetic forming (EMF) is a forming process that relies on the inductive electromagnetic force to deform a metallic workpiece at high speed. Calculation of the electromagnetic force is essential to understanding the EMF process. However, accurate calculation requires a complex numerical solution, in which the coupling between the electromagnetic process and the deformation of the workpiece needs to be considered. In this paper, an appropriate formula has been developed to calculate the electromagnetic force in the metal workpiece in the sheet EMF process. The effects of the geometric size of the coil, the material properties, and the parameters of the discharge circuit on the electromagnetic force are taken into consideration. Through the formula, the electromagnetic force at different times and at different positions on the workpiece can be predicted. The calculated electromagnetic force and magnetic field are in good agreement with numerical and experimental results. The accurate prediction of the electromagnetic force provides insight into the physical process of EMF and a powerful tool to design optimum EMF systems.
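
    The underlying relation is the Lorentz body-force density f = J × B exerted on the workpiece by the induced eddy currents in the field of the coil. The short sketch below evaluates this relation for assumed values of the eddy-current density and flux density at one point and one instant; it is not the analytical formula developed in the paper.

        # Generic sketch of the Lorentz body-force density f = J x B (Python).
        # The eddy-current density and flux density values are assumed for illustration;
        # this is not the formula developed in the paper.
        import numpy as np

        def lorentz_force_density(j, b):
            """f = J x B (N/m^3) for eddy-current density J (A/m^2) and flux density B (T)."""
            return np.cross(j, b)

        # Illustrative values at one point of the sheet at one instant of the discharge
        j = np.array([0.0, 2.0e8, 0.0])   # azimuthal eddy-current density, A/m^2
        b = np.array([5.0, 0.0, 2.0])     # radial and axial field components, T
        print("force density vector (N/m^3):", lorentz_force_density(j, b))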

  14. Mass-number and excitation-energy dependence of the spin cutoff parameter

    DOE PAGES

    Grimes, S. M.; Voinov, A. V.; Massey, T. N.

    2016-07-12

    Here, the spin cutoff parameter determining the nuclear level density spin distribution ρ(J) is defined through the spin projection as ⟨J_z²⟩^(1/2) or, equivalently for spherical nuclei, (⟨J(J+1)⟩/3)^(1/2). It is needed to divide the total level density into levels as a function of J. To obtain the total level density at the neutron binding energy from the s-wave resonance count, the spin cutoff parameter is also needed. The spin cutoff parameter has been calculated as a function of excitation energy and mass with a superconducting Hamiltonian. Calculations have been compared with two commonly used semiempirical formulas. A need for further measurements is also observed. Some complications for deformed nuclei are discussed. The quality of spin cutoff parameter data derived from isomeric ratio measurements is examined.
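
    For reference, the spin cutoff parameter σ enters the standard spin distribution f(J) = ((2J+1)/(2σ²)) exp(-(J+1/2)²/(2σ²)), which divides a total level density into spins. The sketch below applies this textbook distribution with illustrative numbers; it is not the superconducting-Hamiltonian calculation reported here.

        # Sketch of dividing a total level density into spins with the standard
        # spin distribution f(J) = ((2J+1)/(2*sigma**2)) * exp(-(J+1/2)**2 / (2*sigma**2)) (Python).
        # rho_total and sigma values are illustrative.
        import numpy as np

        def spin_fraction(j, sigma):
            return (2 * j + 1) / (2 * sigma**2) * np.exp(-(j + 0.5)**2 / (2 * sigma**2))

        def level_density_by_spin(rho_total, sigma, j_values):
            return {j: rho_total * spin_fraction(j, sigma) for j in j_values}

        # Illustrative total level density at the neutron binding energy, sigma ~ 4
        for j, rho_j in level_density_by_spin(rho_total=1.0e5, sigma=4.0, j_values=range(0, 9)).items():
            print(f"J={j}: rho_J ~ {rho_j:.3g} MeV^-1")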

  15. Challenges in Special Steel Making

    NASA Astrophysics Data System (ADS)

    Balachandran, G.

    2018-02-01

    Special bar quality [SBQ] steel is a long steel product for which an assured quality is delivered by the steel mill to its customer. The bars have enhanced tolerance to higher stress applications and are demanded for specialised component making. SBQ bars are sought by component-making processing units for operations such as closed-die hot forging, hot extrusion, cold forging, machining, heat treatment and welding. The final component quality achieved by these secondary processing units depends on the quality maintained at the steel maker's end along with the quality maintained at the fabricator's end. Thus, quality control is ensured at every unit process stage. The various market segments catered to by the SBQ steel segment are ever growing and are reviewed. Steel mills need adequate infrastructure and technological capability to make these higher-quality steels. Some of the critical stages in processing SBQ and the critical quality maintenance parameters at the steel mill are brought out.

  16. Optimizing electrocoagulation process using experimental design for COD removal from unsanitary landfill leachate.

    PubMed

    Ogedey, Aysenur; Tanyol, Mehtap

    2017-12-01

    Leachate is one of the most difficult wastewaters to treat due to its complex content and high pollutant load. Since it cannot be treated with a single process, a pre-treatment step is needed. In the present study, a batch electrocoagulation reactor containing aluminum and iron electrodes was used to reduce the chemical oxygen demand (COD) of landfill leachate (Tunceli, Turkey). Optimization of COD elimination was carried out with response surface methodology to describe the interaction effects of four main independent process parameters (current density, inter-electrode distance, pH and time of electrolysis). The optimum current density, inter-electrode distance, pH and time of electrolysis for maximum COD removal (43%) were found to be 19.42 mA/m², 0.96 cm, 7.23 and 67.64 min, respectively. The results show that the electrocoagulation process can be used as a pre-treatment step for leachate.
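
    Once a response-surface model of COD removal has been fitted over the four factors, the reported optimum corresponds to the constrained maximum of that model. A minimal sketch of this last step is given below; the quadratic coefficients are placeholders, not the fitted model from the study.

        # Sketch: locating the optimum of a fitted response-surface model for COD removal
        # over coded current density, electrode distance, pH and electrolysis time (Python).
        # The coefficients are illustrative placeholders, not the study's fitted model.
        import numpy as np
        from scipy.optimize import minimize

        def cod_removal(x):
            """Quadratic response surface in coded variables x (4-vector), percent removal."""
            linear = np.array([2.0, -1.0, 0.5, 3.0])
            quad   = np.array([-4.0, -2.0, -3.0, -5.0])
            return 40.0 + linear @ x + quad @ (x**2)

        # Maximize removal = minimize its negative over the coded range [-1, 1] for each factor
        res = minimize(lambda x: -cod_removal(x), x0=np.zeros(4),
                       bounds=[(-1, 1)] * 4, method="L-BFGS-B")
        print("optimum (coded units):", res.x, "predicted removal (%):", -res.fun)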

  17. Understanding controls of hydrologic processes across two monolithological catchments using model-data integration

    NASA Astrophysics Data System (ADS)

    Xiao, D.; Shi, Y.; Li, L.

    2016-12-01

    Field measurements are important for understanding the fluxes of water, energy, sediment, and solute in the Critical Zone; however, they are expensive in time, money, and labor. This study aims to assess the model predictability of hydrological processes in a watershed using information from another intensively measured watershed. We compare two watersheds of different lithology using national datasets, field measurements, and the physics-based model Flux-PIHM. We focus on two monolithological, forested watersheds under the same climate in the Shale Hills Susquehanna CZO in central Pennsylvania: the shale-based Shale Hills (SSH, 0.08 km²) and the sandstone-based Garner Run (GR, 1.34 km²). We first tested the transferability of calibration coefficients from SSH to GR. We found that without any calibration the model can successfully predict seasonal average soil moisture and discharge, which shows the advantage of a physics-based model; however, it cannot precisely capture some peaks or the runoff in summer. The model reproduces the GR field data better after calibrating the soil hydrology parameters. In particular, the percentage of sand turns out to be a critical parameter in reproducing the data. With sandstone being the dominant lithology, GR has a much higher sand percentage than SSH (48.02% vs. 29.01%), leading to higher hydraulic conductivity, lower overall water storage capacity, and in general lower soil moisture. This is consistent with area-averaged soil moisture observations using the cosmic-ray soil moisture observing system (COSMOS) at the two sites. This work indicates that some parameters, including evapotranspiration parameters, are transferable due to similar climatic and land cover conditions. However, the key parameters that control soil moisture, including the sand percentage, need to be recalibrated, reflecting the key role of soil hydrological properties.

  18. Real-time parameter optimization based on neural network for smart injection molding

    NASA Astrophysics Data System (ADS)

    Lee, H.; Liau, Y.; Ryu, K.

    2018-03-01

    The manufacturing industry has been facing several challenges, including sustainability, performance, and quality of production. Manufacturers attempt to enhance their competitiveness by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing process level. The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics, and medical devices. The injection molding process involves a mixture of discrete and continuous variables. In order to optimize quality, the variables generated in the injection molding process must be considered. Furthermore, optimal parameter setting is time-consuming, since process parameters cannot easily be corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, compared with the pre-production parameter optimization of typical studies.
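
    The general idea, training a neural-network surrogate that maps process parameters to a quality response and then searching the surrogate for the best setting, can be sketched as follows. The parameter names, synthetic data and network size are assumptions for illustration, not the authors' model or data.

        # Sketch: neural-network surrogate of quality vs. process parameters, then a
        # search for the setting with the best predicted quality (Python).
        # All data are synthetic; this is not the authors' network or data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # Columns: melt temperature (C), injection pressure (MPa), cooling time (s)
        X = rng.uniform([200, 50, 5], [280, 120, 30], size=(300, 3))
        # Synthetic "quality deviation" to be minimized (e.g. dimensional error)
        y = ((X[:, 0] - 240)**2 / 400 + (X[:, 1] - 90)**2 / 100
             + (X[:, 2] - 15)**2 / 25 + rng.normal(0, 0.2, 300))

        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0).fit(X, y)

        # Candidate search over the admissible window; in-process use would re-run this
        # whenever new mold, machine, and response data arrive.
        cand = rng.uniform([200, 50, 5], [280, 120, 30], size=(5000, 3))
        best = cand[np.argmin(model.predict(cand))]
        print("suggested setting (T, P, t_cool):", best)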

  19. Modeling and parameter identification of the simultaneous saccharification-fermentation process for ethanol production.

    PubMed

    Ochoa, Silvia; Yoo, Ahrim; Repke, Jens-Uwe; Wozny, Günter; Yang, Dae Ryook

    2007-01-01

    Despite the many environmental advantages of using alcohol as a fuel, there are still serious questions about its economic feasibility when compared with oil-based fuels. The bioethanol industry needs to be more competitive, and therefore all stages of its production process must be simple, inexpensive, efficient, and "easy" to control. In recent years, there have been significant improvements in process design, such as in the purification technologies for ethanol dehydration (molecular sieves, pressure swing adsorption, pervaporation, etc.) and in genetic modifications of microbial strains. However, a lot of research effort is still required in optimization and control, where the first step is the development of suitable models of the process, which can be used as a simulated plant, as a soft sensor, or as part of the control algorithm. Thus, toward developing good, reliable, and simple but highly predictive models that can be used in the future for optimization and process control applications, in this paper an unstructured model and a cybernetic model are proposed and compared for the simultaneous saccharification-fermentation (SSF) process for the production of ethanol from starch by a recombinant Saccharomyces cerevisiae strain. The proposed cybernetic model is a new one that considers the degradation of starch not only into glucose but also into dextrins (reducing sugars) and takes into account the intracellular reactions occurring inside the cells, giving a more detailed description of the process. Furthermore, an identification procedure based on the Metropolis Monte Carlo optimization method coupled with a sensitivity analysis is proposed for estimating the model parameters, employing experimental data reported in the literature.
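
    The Metropolis Monte Carlo identification step can be sketched as a random-walk search that always accepts parameter proposals which lower the sum of squared errors and occasionally accepts worse ones. The growth model, data and step sizes below are synthetic stand-ins, not the SSF model or the data used in the paper.

        # Sketch of Metropolis Monte Carlo parameter identification for a kinetic model (Python).
        # The Monod-type model and "experimental" data are synthetic stand-ins.
        import numpy as np

        rng = np.random.default_rng(1)

        def model(t, mu_max, k_s, s0=50.0, x0=0.5):
            """Small explicit-Euler Monod growth model; returns biomass over time."""
            x, s = x0, s0
            out = []
            for _ in t:
                mu = mu_max * s / (k_s + s)
                x, s = x + 0.1 * mu * x, max(s - 0.1 * 2.0 * mu * x, 0.0)
                out.append(x)
            return np.array(out)

        t = np.arange(0, 100)
        data = model(t, 0.30, 8.0) + rng.normal(0, 0.2, t.size)   # synthetic measurements

        def sse(theta):
            return np.sum((model(t, *theta) - data)**2)

        theta = np.array([0.1, 20.0])          # initial guess (mu_max, Ks)
        cost = sse(theta)
        for _ in range(5000):
            prop = theta + rng.normal(0, [0.01, 0.5])
            if (prop > 0).all():
                c = sse(prop)
                # Metropolis acceptance: always take improvements, sometimes take worse trials
                if c < cost or rng.random() < np.exp((cost - c) / 1.0):
                    theta, cost = prop, c
        print("identified parameters (mu_max, Ks):", theta)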

  20. Viscoelastic processing and characterization of high-performance polymeric composite systems

    NASA Astrophysics Data System (ADS)

    Buehler, Frederic Ulysse

    2000-10-01

    Fiber-reinforced composites, a combination of reinforcing fiber and resin matrix, offer many advantages over traditional materials, and have therefore found wide application in the aerospace and sporting goods industries. Among the advantages that composite materials offer, the most often cited are weight saving, high modulus, high strength-to-weight ratio, corrosion resistance, and fatigue resistance. As much as their attributes are desirable, composites are difficult to process due to their heterogeneous, anisotropic, and viscoelastic nature. It is therefore not surprising that the interrelationship between structure, property, and process is not fully understood. Consequently, the major purpose of this research work was to investigate this interrelationship and ways to scale it to utilization. First, four prepreg materials, which performed differently in the manufacturing of composite parts but were supposedly identical, were characterized. The property variations found among these prepregs in terms of tack and frictional resistance underscored the need for improved understanding of the prepregging process. Therefore, the influence of the processing parameters on final prepreg quality was investigated, leading to the definition of more adequate process descriptors. Additionally, one of the characterization techniques used in this work, temperature-modulated differential scanning calorimetry, was examined in depth with the development of a mathematical model. This model, which enabled the exploration of the relationship between user parameters, sample thermophysical properties, and final results, was then compared to literature data. Collectively, this work explored and identified the key connectors between process, structure, and property as they relate to the manufacturing, design, and performance of composite materials.
