Sample records for specific process parameters

  1. An Advanced User Interface Approach for Complex Parameter Study Process Specification in the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  2. Fermentation process using specific oxygen uptake rates as a process control

    DOEpatents

    Van Hoek, Pim; Aristidou, Aristos; Rush, Brian J.

    2016-08-30

    The specific oxygen uptake rate (OUR) is used as a process control parameter in fermentation processes. OUR is determined during at least the production phase of a fermentation process, and process parameters are adjusted to maintain the OUR within desired ranges. The invention is particularly applicable when the fermentation is conducted using a microorganism having a natural PDC pathway that has been disrupted so that it no longer functions. Microorganisms of this sort often produce poorly under strictly anaerobic conditions. Microaeration controlled by monitoring OUR allows the performance of the microorganism to be optimized.
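
    As a rough illustration of the control idea in this record, a proportional adjustment of the aeration rate can hold the measured OUR inside a target band; the band, gain, and units below are invented for this sketch and are not values from the patent.

    ```python
    # Minimal sketch of OUR-band microaeration control (illustrative values only).
    def control_aeration(our_measured, aeration_rate,
                         our_low=2.0, our_high=4.0,  # assumed band, mmol O2/(g*h)
                         gain=0.1, min_rate=0.0, max_rate=1.0):
        """Proportionally nudge aeration (vvm) to keep OUR in [our_low, our_high]."""
        if our_measured < our_low:
            aeration_rate += gain * (our_low - our_measured)
        elif our_measured > our_high:
            aeration_rate -= gain * (our_measured - our_high)
        return min(max(aeration_rate, min_rate), max_rate)

    # Example: OUR has drifted below the band, so aeration is stepped up.
    print(control_aeration(our_measured=1.5, aeration_rate=0.2))  # -> 0.25
    ```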

  3. Fermentation process using specific oxygen uptake rates as a process control

    DOEpatents

    Van Hoek, Pim [Minnetonka, MN]; Aristidou, Aristos [Maple Grove, MN]; Rush, Brian [Minneapolis, MN]

    2011-05-10

    The specific oxygen uptake rate (OUR) is used as a process control parameter in fermentation processes. OUR is determined during at least the production phase of a fermentation process, and process parameters are adjusted to maintain the OUR within desired ranges. The invention is particularly applicable when the fermentation is conducted using a microorganism having a natural PDC pathway that has been disrupted so that it no longer functions. Microorganisms of this sort often produce poorly under strictly anaerobic conditions. Microaeration controlled by monitoring OUR allows the performance of the microorganism to be optimized.

  4. Fermentation process using specific oxygen uptake rates as a process control

    DOEpatents

    Van Hoek, Pim [Minnetonka, MN]; Aristidou, Aristos [Maple Grove, MN]; Rush, Brian [Minneapolis, MN]

    2007-06-19

    The specific oxygen uptake rate (OUR) is used as a process control parameter in fermentation processes. OUR is determined during at least the production phase of a fermentation process, and process parameters are adjusted to maintain the OUR within desired ranges. The invention is particularly applicable when the fermentation is conducted using a microorganism having a natural PDC pathway that has been disrupted so that it no longer functions. Microorganisms of this sort often produce poorly under strictly anaerobic conditions. Microaeration controlled by monitoring OUR allows the performance of the microorganism to be optimized.

  5. Fermentation process using specific oxygen uptake rates as a process control

    DOEpatents

    Van Hoek, Pim; Aristidou, Aristos; Rush, Brian

    2014-09-09

    The specific oxygen uptake rate (OUR) is used as a process control parameter in fermentation processes. OUR is determined during at least the production phase of a fermentation process, and process parameters are adjusted to maintain the OUR within desired ranges. The invention is particularly applicable when the fermentation is conducted using a microorganism having a natural PDC pathway that has been disrupted so that it no longer functions. Microorganisms of this sort often produce poorly under strictly anaerobic conditions. Microaeration controlled by monitoring OUR allows the performance of the microorganism to be optimized.

  6. Modalities of Thinking: State and Trait Effects on Cross-Frequency Functional Independent Brain Networks.

    PubMed

    Milz, Patricia; Pascual-Marqui, Roberto D; Lehmann, Dietrich; Faber, Pascal L

    2016-05-01

    Functional states of the brain are constituted by the temporally attuned activity of spatially distributed neural networks. Such networks can be identified by independent component analysis (ICA) applied to frequency-dependent source-localized EEG data. This methodology allows the identification of networks at high temporal resolution in frequency bands of established location-specific physiological functions. EEG measurements are sensitive to neural activity changes in cortical areas of modality-specific processing. We tested effects of modality-specific processing on functional brain networks. Phasic modality-specific processing was induced via tasks (state effects) and tonic processing was assessed via modality-specific person parameters (trait effects). Modality-specific person parameters and 64-channel EEG were obtained from 70 male, right-handed students. Person parameters were obtained using cognitive style questionnaires, cognitive tests, and thinking modality self-reports. EEG was recorded during four conditions: spatial visualization, object visualization, verbalization, and resting. Twelve cross-frequency networks were extracted from source-localized EEG across six frequency bands using ICA. RMANOVAs, Pearson correlations, and path modelling examined effects of tasks and person parameters on networks. Results identified distinct state- and trait-dependent functional networks. State-dependent networks were characterized by decreased, trait-dependent networks by increased alpha activity in sub-regions of modality-specific pathways. Pathways of competing modalities showed opposing alpha changes. State- and trait-dependent alpha were associated with inhibitory and automated processing, respectively. Antagonistic alpha modulations in areas of competing modalities likely prevent intruding effects of modality-irrelevant processing. Considerable research suggested alpha modulations related to modality-specific states and traits. This study identified the distinct electrophysiological cortical frequency-dependent networks within which they operate.
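
    A minimal sketch of the decomposition pipeline described here (band-filtering followed by ICA), using randomly generated stand-in data; the sampling rate, alpha-band edges, channel count, and the use of scikit-learn's FastICA are assumptions for illustration, not the study's exact settings.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    fs = 256                                      # sampling rate (Hz), assumed
    eeg = rng.standard_normal((64, 30 * fs))      # 64 channels, 30 s of toy data

    # Band-pass into the alpha band (8-12 Hz) before decomposition.
    b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
    alpha = filtfilt(b, a, eeg, axis=1)

    # ICA expects (samples, features); extract 12 component time courses.
    ica = FastICA(n_components=12, random_state=0, max_iter=500)
    sources = ica.fit_transform(alpha.T)          # shape: (samples, 12)
    print(sources.shape)
    ```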

  7. Multi-surface topography targeted plateau honing for the processing of cylinder liner surfaces of automotive engines

    NASA Astrophysics Data System (ADS)

    Lawrence, K. Deepak; Ramamoorthy, B.

    2016-03-01

    Cylinder bores of automotive engines are 'engineered' surfaces that are processed using a multi-stage honing process to generate multiple layers of micro geometry meeting the different functional requirements of the piston assembly system. The final processed surfaces should comply with several surface topographic specifications that are relevant to good tribological performance of the engine. Selecting the process parameters in three stages of honing to obtain multiple surface topographic characteristics simultaneously within the specification tolerance is an important module of process planning and often poses a challenging task for process engineers. This paper presents a strategy combining robust process design and gray-relational analysis to evolve the operating levels of honing process parameters in the rough, finish and plateau honing stages, targeted at meeting multiple surface topographic specifications on the final running surface of the cylinder bores. Honing experiments were conducted in three stages, namely rough, finish and plateau honing, on cast iron cylinder liners by varying four honing process parameters: rotational speed, oscillatory speed, pressure and honing time. Abbott-Firestone curve based functional parameters (Rk, Rpk, Rvk, Mr1 and Mr2) coupled with mean roughness depth (Rz, DIN/ISO) and honing angle were measured and identified as the surface quality performance targets to be achieved. The experimental results show that the proposed approach is effective in generating cylinder liner surfaces that simultaneously meet the explicit surface topographic specifications currently practiced by the industry.
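
    The gray-relational step of such a strategy can be sketched as follows; the trial responses and the smaller-is-better treatment of every target are invented for illustration and do not reproduce the paper's measurements.

    ```python
    import numpy as np

    # Rows: honing trials; columns: responses Rk, Rpk, Rvk (invented values),
    # all treated here as "smaller is better".
    y = np.array([[1.8, 0.45, 1.2],
                  [1.5, 0.50, 1.0],
                  [2.1, 0.40, 1.4]])

    # Normalize each response to [0, 1] in smaller-is-better form.
    norm = (y.max(axis=0) - y) / (y.max(axis=0) - y.min(axis=0))

    # Grey relational coefficients against the ideal sequence (all ones).
    zeta = 0.5                                  # distinguishing coefficient
    delta = np.abs(1.0 - norm)
    gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    grade = gamma.mean(axis=1)                  # grey relational grade per trial
    print("best trial:", grade.argmax(), grade.round(3))
    ```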

  8. Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.; Kachare, A. H.

    1981-01-01

    The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.
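
    A toy one-at-a-time sensitivity analysis in the spirit of this record; the add-on price model and every number below are invented stand-ins, not the report's data.

    ```python
    # Perturb each parameter by +10% and report the relative price change.
    base = dict(equipment=0.30, labor=0.25, materials=0.20, utilities=0.10,
                yield_=0.90, duty_cycle=0.85)       # illustrative values

    def addon_price(p):
        cost = p["equipment"] + p["labor"] + p["materials"] + p["utilities"]
        return cost / (p["yield_"] * p["duty_cycle"])   # losses inflate price

    for name in base:
        hi = dict(base, **{name: base[name] * 1.10})
        change = (addon_price(hi) - addon_price(base)) / addon_price(base)
        print(f"{name:10s} +10% -> price {change:+.1%}")
    ```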

  9. Kinematical and mechanical aspects of wafer slicing

    NASA Technical Reports Server (NTRS)

    Werner, P. G.

    1982-01-01

    Some recently achieved results concerning the technological fundamentals of slurry sawing are presented. The specific material removal process and the related kinematic and geometric contact conditions between workpiece and saw blade are described. The result of a functional description of the slurry sawing process is presented, expressing the main process criteria, such as infeed per stroke, specific removal rate, specific tool wear, and vertical stroke intensity, in terms of the dominating process parameters, such as stroke length, width of workpiece, stroke frequency, specific cutting force and slurry specification.

  10. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.
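
    A hedged sketch of the patent's training-set idea: a parametric model (an assumed Lorentzian peak here) generates spectra for known input parameters, and a small neural network learns the inverse mapping from spectra back to the model input parameter. The model form, network size, and use of scikit-learn are illustrative assumptions, not the patented method.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    f = np.linspace(0.0, 10.0, 200)                 # frequency axis (arbitrary)

    def spectrum(f0, width=0.5):
        """Toy resonance peak (Lorentzian) standing in for the system model."""
        return 1.0 / (1.0 + ((f - f0) / width) ** 2)

    # Training set: model input parameter f0 -> "measurable" spectra.
    f0_train = rng.uniform(2.0, 8.0, 500)
    X = np.array([spectrum(f0) for f0 in f0_train])
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    net.fit(X, f0_train)

    # A measured spectrum from the "actual" system -> inferred condition.
    print(net.predict(spectrum(5.3)[None, :]))      # close to [5.3]
    ```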

  11. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.

  12. How certain are the process parameterizations in our models?

    NASA Astrophysics Data System (ADS)

    Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard

    2016-04-01

    Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including system architecture (structure), process parameterization and parameters, inherit a high level of approximation and simplification. In a conventional model building exercise the parameter values are the only elements of a model which can vary, while the rest of the modeling elements are often fixed a priori and therefore not subjected to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, thereby enabling these models to reproduce the desired observation. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model given its system architecture and data, while no or little assumption has been made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than would have been decided otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.

  13. Effect of processing parameters on surface finish for fused deposition machinable wax patterns

    NASA Technical Reports Server (NTRS)

    Roberts, F. E., III

    1995-01-01

    This report presents a study on the effect of material processing parameters used in layer-by-layer material construction on the surface finish of a model to be used as an investment casting pattern. The data presented relate specifically to fused deposition modeling using a machinable wax.

  14. The specificity of the effects of stimulant medication on classroom learning-related measures of cognitive processing for attention deficit disorder children.

    PubMed

    Balthazor, M J; Wagner, R K; Pelham, W E

    1991-02-01

    There appear to be beneficial effects of stimulant medication on daily classroom measures of cognitive functioning for Attention Deficit Disorder (ADD) children, but the specificity and origin of such effects are unclear. Consistent with previous results, 0.3 mg/kg methylphenidate improved ADD children's performance on a classroom reading comprehension measure. Using the Posner letter-matching task and four additional measures of phonological processing, we attempted to isolate the effects of methylphenidate to parameter estimates of (a) selective attention, (b) the basic cognitive process of retrieving name codes from permanent memory, and (c) a constant term that represented nonspecific aspects of information processing. Responses to the letter-matching stimuli were faster and more accurate with medication compared to placebo. The improvement in performance was isolated to the parameter estimate that reflected nonspecific aspects of information processing. A lack of medication effect on the other measures of phonological processing supported the Posner task findings in indicating that methylphenidate appears to exert beneficial effects on academic processing through general rather than specific aspects of information processing.

  15. A review of pharmaceutical extrusion: critical process parameters and scaling-up.

    PubMed

    Thiry, J; Krier, F; Evrard, B

    2015-02-01

    Hot melt extrusion has been a widely used process in the pharmaceutical area for three decades. In this field, it is important to optimize the formulation in order to meet specific requirements. However, the process parameters of the extruder should be investigated as thoroughly as the formulation, since they have a major impact on the final product characteristics. Moreover, a design space should be defined in order to obtain the expected product within the defined limits. This gives some freedom to operate as long as the processing parameters stay within the limits of the design space. Those limits can be investigated by randomly varying the process parameters, but it is recommended to use design of experiments. An examination of the literature is reported in this review to summarize the impact of variation of the process parameters on the final product properties. Indeed, the homogeneity of the mixing, the state of the drug (crystalline or amorphous), the dissolution rate, and the residence time can be influenced by variations in the process parameters. In particular, the impact of the following process parameters on the final product has been reviewed: temperature, screw design, screw speed and feeding.
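
    A design space is usually explored with a designed experiment rather than purely random variation, as the review recommends; a minimal full-factorial sketch over the reviewed parameter types (levels invented, not from any cited study) might look like this.

    ```python
    from itertools import product

    # Two-level full factorial over extrusion parameters (illustrative levels).
    levels = {
        "temperature_C": (140, 180),
        "screw_speed_rpm": (100, 300),
        "feed_rate_kg_h": (0.5, 1.5),
    }
    runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
    for i, run in enumerate(runs, 1):
        print(f"run {i}: {run}")    # 2**3 = 8 runs covering the design corners
    ```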

  16. Bioprocess development workflow: Transferable physiological knowledge instead of technological correlations.

    PubMed

    Reichelt, Wieland N; Haas, Florian; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    Microbial bioprocesses need to be designed to be transferable from lab scale to production scale as well as between setups. Although substantial effort is invested to control technological parameters, usually the only true constant parameter is the actual producer of the product: the cell. Hence, instead of solely controlling technological process parameters, the focus should increasingly be placed on physiological parameters. This contribution aims at illustrating a workflow of data life cycle management with special focus on physiology. Information processing condenses the data into physiological variables, while information mining condenses the variables further into physiological descriptors. This basis facilitates data analysis for a physiological explanation of observed phenomena in productivity. Targeting transferability, we demonstrate this workflow using an industrially relevant Escherichia coli process for recombinant protein production and substantiate the following three points: (1) The postinduction phase is independent, in terms of productivity and physiology, from the preinduction variables specific growth rate and biomass at induction. (2) The specific substrate uptake rate during the induction phase was found to significantly impact the maximum specific product titer. (3) The time point of maximum specific titer can be predicted by an easily accessible physiological variable: while the maximum specific titers were reached at different time points (19.8 ± 7.6 h), those maxima were all reached within a very narrow window of cumulatively consumed substrate dSn (3.1 ± 0.3 g/g). Concluding, this contribution provides a workflow for gaining a physiological view of the process and illustrates its potential benefits. (Biotechnol. Prog. 33:261-270, 2017)
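
    The physiological variables named in this abstract are straightforward to compute from measured trajectories; a sketch with invented fed-batch data, assuming qS = (dS/dt)/X and dSn as cumulatively consumed substrate per biomass.

    ```python
    import numpy as np

    # Invented fed-batch trajectories: time (h), biomass X (g/L), and
    # cumulative substrate consumed S (g/L).
    t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
    X = np.array([5.0, 7.0, 9.5, 12.0, 14.0])
    S = np.array([0.0, 6.0, 13.0, 21.0, 30.0])

    # Specific substrate uptake rate qS = (dS/dt) / X, g substrate/(g biomass*h).
    qS = np.gradient(S, t) / X
    # Cumulative consumed substrate per biomass, as used for the dSn predictor.
    dSn = S / X
    print(qS.round(3), dSn.round(3))
    ```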

  17. Reason, emotion and decision-making: risk and reward computation with feeling.

    PubMed

    Quartz, Steven R

    2009-05-01

    Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown which specific parameters of uncertain decisions the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and their correspondence to emotional and rational processes. Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of emotional and cognitive contributions to risky choice.

  18. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    NASA Astrophysics Data System (ADS)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high temperature thermal cutting process employed for cutting extensively high strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This research work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in manufacturing industries. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach. Three levels were selected for each process parameter, and a clockwise cut direction was followed in all experiments. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA reveals the effect of each input process parameter on the linear dimension along the X axis, and the results yield the optimal setting of process parameter values for that dimension. The investigation clearly shows that a specific range of the input process parameters achieves improved machinability.
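
    An ANOM-style main-effects calculation for a coded L16-like layout can be sketched as below; the level coding and response values are invented and do not reproduce the paper's measurements.

    ```python
    import numpy as np

    # Columns: arc voltage, standoff, cutting speed (coded levels 0-3),
    # response (mm deviation from nominal); all values invented.
    runs = np.array([
        [0, 0, 0, 0.42], [0, 1, 1, 0.38], [0, 2, 2, 0.35], [0, 3, 3, 0.40],
        [1, 0, 1, 0.37], [1, 1, 0, 0.36], [1, 2, 3, 0.33], [1, 3, 2, 0.34],
        [2, 0, 2, 0.31], [2, 1, 3, 0.30], [2, 2, 0, 0.32], [2, 3, 1, 0.29],
        [3, 0, 3, 0.36], [3, 1, 2, 0.33], [3, 2, 1, 0.35], [3, 3, 0, 0.37],
    ])
    for j, name in enumerate(["arc_voltage", "standoff", "cutting_speed"]):
        means = [runs[runs[:, j] == lvl, 3].mean() for lvl in range(4)]
        best = int(np.argmin(means))        # smaller deviation is better
        print(f"{name}: level means {np.round(means, 3)}, best level {best}")
    ```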

  19. End-of-fabrication CMOS process monitor

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Allen, R. A.; Blaes, B. R.; Hannaman, D. J.; Lieneweg, U.; Lin, Y.-S.; Sayah, H. R.

    1990-01-01

    A set of test 'modules' for verifying the quality of a complementary metal oxide semiconductor (CMOS) process at the end of the wafer fabrication is documented. By electrical testing of specific structures, over thirty parameters are collected characterizing interconnects, dielectrics, contacts, transistors, and inverters. Each test module contains a specification of its purpose, the layout of the test structure, the test procedures, the data reduction algorithms, and exemplary results obtained from 3-, 2-, or 1.6-micrometer CMOS/bulk processes. The document is intended to establish standard process qualification procedures for Application Specific Integrated Circuits (ASIC's).

  20. ASRM test report: Autoclave cure process development

    NASA Technical Reports Server (NTRS)

    Nachbar, D. L.; Mitchell, Suzanne

    1992-01-01

    ASRM insulated segments will be autoclave cured following insulation pre-form installation and strip wind operations. Following competitive bidding, Aerojet ASRM Division (AAD) Purchase Order 100142 was awarded to American Fuel Cell and Coated Fabrics Company, Inc. (Amfuel), Magnolia, AR, for subcontracted insulation autoclave cure process development. Autoclave cure process development test requirements were included in Task 3 of TM05514, Manufacturing Process Development Specification for Integrated Insulation Characterization and Stripwind Process Development. The test objective was to establish autoclave cure process parameters for ASRM insulated segments. Six tasks were completed to: (1) evaluate cure parameters that control acceptable vulcanization of ASRM Kevlar-filled EPDM insulation material; (2) identify first and second order impact parameters on the autoclave cure process; and (3) evaluate insulation material flow-out characteristics to support pre-form configuration design.

  1. Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Anderson, B. J.; Justus, C. G.; Batts, G. W.

    2001-01-01

    Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the process of selecting the parameters for input into extreme hot and cold thermal analyses and design specifications. In doing so, greatly improved values for the cold case OLR for high inclination orbits were derived. Thermal parameters for satellites in low, medium, and high inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information as to the interpretation and application of the information and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.

  2. INDIVIDUAL DIFFERENCES IN TASK-SPECIFIC PAIRED ASSOCIATES LEARNING IN OLDER ADULTS: THE ROLE OF PROCESSING SPEED AND WORKING MEMORY

    PubMed Central

    Kurtz, Tanja; Mogle, Jacqueline; Sliwinski, Martin J.; Hofer, Scott M.

    2013-01-01

    Background: The role of processing speed and working memory was investigated in terms of individual differences in task-specific paired associates learning in a sample of older adults. Task-specific learning, as distinct from content-oriented item-specific learning, refers to gains in performance due to repeated practice on a learning task in which the to-be-learned material changes over trials. Methods: Learning trajectories were modeled within an intensive repeated-measures design based on participants obtained from an opt-in internet-based sampling service (mean age = 65.3, SD = 4.81). Participants completed an eight-item paired associates task daily over a seven-day period. Results: A three-parameter hyperbolic model (i.e., initial level, learning rate, and asymptotic performance) best described the learning trajectories. After controlling for age-related effects, both higher working memory and higher processing speed had a positive effect on all three learning parameters. Conclusion: These results emphasize the role of cognitive abilities for individual differences in task-specific learning in older adults. PMID:24151913
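
    Fitting a three-parameter hyperbolic learning curve of the kind described can be sketched with scipy; the exact functional form and the data points are illustrative assumptions, not the study's model or measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Performance rises from an initial level toward an asymptote with practice.
    def hyperbolic(trial, initial, rate, asymptote):
        return initial + (asymptote - initial) * trial / (trial + rate)

    trials = np.arange(1, 8)                                # seven daily sessions
    score = np.array([2.1, 3.4, 4.2, 4.8, 5.1, 5.4, 5.5])   # invented data
    params, _ = curve_fit(hyperbolic, trials, score, p0=[2.0, 2.0, 6.0])
    print(np.round(params, 2))    # [initial level, learning rate, asymptote]
    ```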

  3. An alternative respiratory sounds classification system utilizing artificial neural networks.

    PubMed

    Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen

    2015-01-01

    Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics, such as the non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system and returned better performance parameters: its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. The proposed method is an efficient, fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function for feature extraction in such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.
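
    The autocorrelation-based feature extraction stage might look like this sketch; the stand-in "sounds", sampling rate, and lag choices are assumptions, and real lung sounds would of course be recorded signals.

    ```python
    import numpy as np

    def autocorr_features(x, lags=(1, 2, 5, 10)):
        """Normalized autocorrelation at a few lags as a compact feature vector."""
        x = x - x.mean()
        denom = np.dot(x, x)
        return np.array([np.dot(x[:-k], x[k:]) / denom for k in lags])

    rng = np.random.default_rng(9)
    crackle = rng.standard_normal(4000)                  # noise-like toy sound
    wheeze = np.sin(2 * np.pi * 400 * np.arange(4000) / 8000)  # tonal toy sound
    # Tonal signals keep high autocorrelation at lags; noise does not.
    print(autocorr_features(crackle).round(2), autocorr_features(wheeze).round(2))
    ```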

  4. Individual differences in emotion processing: how similar are diffusion model parameters across tasks?

    PubMed

    Mueller, Christina J; White, Corey N; Kuchinke, Lars

    2017-11-27

    The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral and fear. Main effects of emotion as well as the stability of emerging response style patterns, as evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category compared to stimuli of a different emotion category. Results showed that the emotion effects of the tasks differed, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations regarding diffusion model parameters, including response styles, non-decision times and information accumulation.
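
    A bare-bones drift-diffusion simulation shows how the drift rate produces the "processing advantage" pattern of faster, more accurate responses; the threshold, non-decision time, and noise level below are conventional illustrative values, not the study's parameter estimates.

    ```python
    import numpy as np

    def simulate_ddm(drift, threshold=1.0, ndt=0.3, dt=0.001, sigma=1.0,
                     n=500, seed=6):
        """Simulate a drift-diffusion process; return mean RT (s) and accuracy."""
        rng = np.random.default_rng(seed)
        rts, correct = [], []
        for _ in range(n):
            x, t = 0.0, 0.0
            while abs(x) < threshold:
                x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
            rts.append(t + ndt)              # add non-decision time
            correct.append(x > 0)            # upper boundary = correct response
        return np.mean(rts), np.mean(correct)

    # Higher drift (a "processing advantage") -> faster, more accurate responses.
    for v in (1.0, 2.0):
        rt, acc = simulate_ddm(v)
        print(f"drift {v}: mean RT {rt:.2f} s, accuracy {acc:.2f}")
    ```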

  5. Geophysical study of the structure and processes of the continental convergence zones: Alpine-Himalayan Belt

    NASA Technical Reports Server (NTRS)

    Toksoz, M. Nafi; Molnar, Peter

    1988-01-01

    The occurrence of intracontinental deformation, and the processes and physical parameters that control the rates and styles of deformation, were examined. Studies addressing specific mechanical aspects of deformation were reviewed, and the deformation and structure of specific areas were studied with consideration of the strength of the material and the gravitational effect.

  6. Sensitivity analysis of the add-on price estimate for the silicon web growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1981-01-01

    The web growth process, a silicon-sheet technology option developed for the flat plate solar array (FSA) project, was examined. Base case data for the technical and cost parameters for the technical and commercial readiness phase of the FSA project are projected. The process add-on price is analyzed using the base case data for cost parameters such as equipment, space, direct labor, materials and utilities, and production parameters such as growth rate and run length, with a computer program developed specifically to do the sensitivity analysis with improved price estimation. Silicon price, sheet thickness and cell efficiency are also discussed.

  7. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model, such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis, such as Sobol indexes, require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially, to approximately 25, for all three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening also identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
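
    A compact sketch of Elementary Effects screening on a toy model; the trajectory scheme is simplified to radial one-at-a-time steps in the unit cube, and the model and all values are illustrative.

    ```python
    import numpy as np

    def elementary_effects(model, k, r=20, delta=0.1, seed=0):
        """Mean |EE| per parameter over r random base points (Morris-style)."""
        rng = np.random.default_rng(seed)
        mu_star = np.zeros(k)
        for _ in range(r):
            x = rng.uniform(0, 1 - delta, k)
            fx = model(x)
            for i in range(k):
                xp = x.copy()
                xp[i] += delta
                mu_star[i] += abs(model(xp) - fx) / delta
        return mu_star / r

    # Toy response: two influential parameters, one near-inert "hidden" one.
    model = lambda x: 3 * x[0] + x[1] ** 2 + 0.01 * x[2]
    print(np.round(elementary_effects(model, k=3), 2))  # third one screens out
    ```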

  8. BAIAP2 is related to emotional modulation of human memory strength.

    PubMed

    Luksys, Gediminas; Ackermann, Sandra; Coynel, David; Fastenrath, Matthias; Gschwind, Leo; Heck, Angela; Rasch, Bjoern; Spalek, Klara; Vogler, Christian; Papassotiropoulos, Andreas; de Quervain, Dominique

    2014-01-01

    Memory performance is the result of many distinct mental processes, such as memory encoding, forgetting, and modulation of memory strength by emotional arousal. These processes, which are subserved by partly distinct molecular profiles, are not always amenable to direct observation. Therefore, computational models can be used to make inferences about specific mental processes and to study their genetic underpinnings. Here we combined a computational model-based analysis of memory-related processes with high density genetic information derived from a genome-wide study in healthy young adults. After identifying the best-fitting model for a verbal memory task and estimating the best-fitting individual cognitive parameters, we found a common variant in the gene encoding the brain-specific angiogenesis inhibitor 1-associated protein 2 (BAIAP2) that was related to the model parameter reflecting modulation of verbal memory strength by negative valence. We also observed an association between the same genetic variant and a similar emotional modulation phenotype in a different population performing a picture memory task. Furthermore, using functional neuroimaging we found robust genotype-dependent differences in activity of the parahippocampal cortex that were specifically related to successful memory encoding of negative versus neutral information. Finally, we analyzed cortical gene expression data of 193 deceased subjects and detected significant BAIAP2 genotype-dependent differences in BAIAP2 mRNA levels. Our findings suggest that model-based dissociation of specific cognitive parameters can improve the understanding of genetic underpinnings of human learning and memory.

  9. Review & Peer Review of “Parameters for Properly Designed and Operated Flares” Documents

    EPA Pesticide Factsheets

    This page contains two 2012 memoranda on the review of EPA's parameters for properly designed and operated flares. One details the process of peer review, and the other provides background information and specific charge questions to the panel.

  10. Multirate state and parameter estimation in an antibiotic fermentation with delayed measurements.

    PubMed

    Gudi, R D; Shah, S L; Gray, M R

    1994-12-01

    This article discusses issues related to estimation and monitoring of fermentation processes that exhibit endogenous metabolism and time-varying maintenance activity. Such culture-related activities hamper the use of traditional, software sensor-based algorithms, such as the extended Kalman filter (EKF). In the approach presented here, the individual effects of the endogenous decay and the true maintenance processes have been lumped to represent a modified maintenance coefficient, m(c). Model equations that relate measurable process outputs, such as the carbon dioxide evolution rate (CER) and biomass, to the observable process parameters (such as the net specific growth rate and the modified maintenance coefficient) are proposed. These model equations are used in an estimator that can formally accommodate delayed, infrequent measurements of the culture states (such as the biomass) as well as frequent, culture-related secondary measurements (such as the CER). The resulting multirate software sensor-based estimation strategy is used to monitor biomass profiles as well as profiles of critical fermentation parameters, such as the specific growth rate, for a fed-batch fermentation of Streptomyces clavuligerus.
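
    The multirate estimation idea can be miniaturized to a scalar Kalman filter that tracks a slowly varying specific growth rate from frequent CER-like measurements; the measurement model, noise levels, and random-walk assumption are illustrative simplifications, not the article's estimator.

    ```python
    import numpy as np

    # Scalar Kalman filter tracking specific growth rate mu from measurements
    # y = mu * X + noise (all values invented for illustration).
    rng = np.random.default_rng(3)
    dt, q, r = 0.1, 1e-4, 0.05 ** 2     # step (h), process/measurement noise
    mu_true, X = 0.25, 1.0
    mu_hat, P = 0.1, 1.0                # initial estimate and its variance

    for _ in range(200):
        X *= np.exp(mu_true * dt)       # the "actual" culture
        y = mu_true * X + rng.normal(0, 0.05)
        P += q                          # predict (random-walk model for mu)
        H = X                           # measurement sensitivity dy/dmu
        K = P * H / (H * P * H + r)     # Kalman gain
        mu_hat += K * (y - mu_hat * X)  # update with the innovation
        P *= (1 - K * H)
    print(round(mu_hat, 3))             # close to 0.25
    ```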

  11. Process Development of Porcelain Ceramic Material with Binder Jetting Process for Dental Applications

    NASA Astrophysics Data System (ADS)

    Miyanaji, Hadi; Zhang, Shanshan; Lassell, Austin; Zandinejad, Amirali; Yang, Li

    2016-03-01

    Custom ceramic structures possess significant potential in many applications, such as dentistry and aerospace, where extreme environments are present. Specifically, highly customized geometries with adequate performance are needed for various dental prostheses applications. This paper demonstrates the development of process and post-process parameters for a dental porcelain ceramic material using binder jetting additive manufacturing (AM). Various process parameters, such as binder amount, drying power level, drying time and powder spread speed, were studied experimentally for their effect on the geometrical and mechanical characteristics of green parts. In addition, the effects of sintering and printing parameters on the quality of the densified ceramic structures were also investigated experimentally. The results provide insights into the process-property relationships for the binder jetting AM process, and some of the challenges of the process that need to be further characterized for the successful adoption of binder jetting technology in high quality ceramic fabrication are discussed.

  12. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  13. Mining manufacturing data for discovery of high productivity process characteristics.

    PubMed

    Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou

    2010-06-01

    Modern manufacturing facilities for bioproducts are highly automated, with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource for comprehending the complex characteristics of bioprocesses and enhancing production robustness. Cell culture process data from 108 'trains' comprising production as well as inoculum bioreactors from Genentech's manufacturing facility were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum-margin support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large scale bioprocesses.
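
    A kernel-based regression of a performance parameter on summarized process parameters can be sketched with scikit-learn's support vector regression; the data, feature count, and train/test split below are invented stand-ins for the archived process data.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)
    # 108 runs x 5 summarized process parameters (invented stand-in features).
    X = rng.standard_normal((108, 5))
    titer = 2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 108)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(X[:80], titer[:80])                    # train on early runs
    print(round(model.score(X[80:], titer[80:]), 2)) # R^2 on held-out runs
    ```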

  14. HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.

    PubMed

    Schroeder, Mark J.; Perreault, Bill; Ewert, Daniel L.; Koenig, Steven C.

    2004-07-01

    A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ASCII or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework toward which Good Laboratory Practice (GLP) compliance can be obtained. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study-specific analysis files according to their specific needs.
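
    A beat-to-beat segmentation step of the kind HEART automates can be sketched in Python (the original package is Matlab); the synthetic pressure waveform and the peak-detection settings are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    fs = 250                                       # Hz, assumed sampling rate
    t = np.arange(0, 10, 1 / fs)
    # Synthetic arterial pressure: 75 bpm oscillation around 93 mmHg.
    pressure = 93 + 27 * np.sin(2 * np.pi * 1.25 * t)

    peaks, _ = find_peaks(pressure, distance=fs // 2)  # systolic peaks >=0.5 s apart
    beats = np.split(pressure, peaks)[1:-1]            # one segment per full beat
    for i, beat in enumerate(beats[:3], 1):
        print(f"beat {i}: systolic {beat.max():.0f}, diastolic {beat.min():.0f}, "
              f"mean {beat.mean():.0f} mmHg")
    ```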

  15. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
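
    The clustering behavior induced by a Dirichlet process prior can be sketched via its Chinese restaurant process representation; the concentration parameter and the rate draws below are illustrative, not the paper's hyperprior settings.

    ```python
    import numpy as np

    def crp_cluster(n, alpha=1.0, seed=8):
        """Chinese-restaurant-process draw: cluster labels for n calibrated nodes."""
        rng = np.random.default_rng(seed)
        labels = [0]
        for i in range(1, n):
            counts = np.bincount(labels)
            probs = np.append(counts, alpha) / (i + alpha)  # old tables vs. new
            labels.append(int(rng.choice(len(probs), p=probs)))
        return labels

    # Nodes sharing a label share one exponential rate parameter for their
    # calibration densities, instead of each node getting a fixed rate.
    labels = crp_cluster(8)
    rates = {k: np.random.default_rng(k).gamma(2.0, 1.0) for k in set(labels)}
    print(labels, {k: round(v, 2) for k, v in rates.items()})
    ```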

  16. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies have a significant impact in investigating process variation, which is important in achieving product quality characteristics. Capability indices measure the inherent variability of a process and thus help to improve process performance radically. The main objective of this paper is to understand the capability of the process to produce within specification at a soft drinks processing unit bottling premier brands marketed in India. A few selected critical parameters in soft drinks processing were considered for this study: concentration of gas volume, concentration of brix, and crown torque. Relevant statistical parameters were assessed from a process capability indices perspective: short term capability and long term capability. For the assessment we used real time data from a soft drinks bottling company located in the state of Chhattisgarh, India. The research output suggests reasons for variations in the process, validated using ANOVA; it also predicts the Taguchi loss function and estimates the associated waste in monetary terms, which the organization can use to improve its process parameters. This research work has substantially benefitted the organization in understanding the variation of the selected critical parameters for achieving zero rejection.
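
    The short-term capability indices mentioned here reduce to simple formulas; a sketch with invented measurements and assumed specification limits follows.

    ```python
    import numpy as np

    # Capability for one critical parameter (all numbers invented).
    rng = np.random.default_rng(5)
    x = rng.normal(3.55, 0.05, 200)          # measured values
    lsl, usl = 3.40, 3.70                    # assumed specification limits

    cp = (usl - lsl) / (6 * x.std(ddof=1))                           # potential
    cpk = min(usl - x.mean(), x.mean() - lsl) / (3 * x.std(ddof=1))  # actual
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # Cpk < Cp if process is off-center
    ```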

  17. A holistic approach towards defined product attributes by Maillard-type food processing.

    PubMed

    Davidek, Tomas; Illmann, Silke; Rytz, Andreas; Blank, Imre

    2013-07-01

    A fractional factorial experimental design was used to quantify the impact of process and recipe parameters on selected product attributes of extruded products (colour, viscosity, acrylamide, and the flavour marker 4-hydroxy-2,5-dimethyl-3(2H)-furanone, HDMF). The study has shown that recipe parameters (lysine, phosphate) can be used to modulate the HDMF level without changing the specific mechanical energy (SME) and consequently the texture of the product, while processing parameters (temperature, moisture) impact both HDMF and SME in parallel. Similarly, several parameters, including phosphate level, temperature and moisture, simultaneously impact both HDMF and acrylamide formation, while pH and addition of lysine showed different trends. Therefore, the latter two options can be used to mitigate acrylamide without a negative impact on flavour. Such a holistic approach has been shown as a powerful tool to optimize various product attributes upon food processing.

  18. VIP: A knowledge-based design aid for the engineering of space systems

    NASA Technical Reports Server (NTRS)

    Lewis, Steven M.; Bellman, Kirstie L.

    1990-01-01

    The Vehicles Implementation Project (VIP), a knowledge-based design aid for the engineering of space systems is described. VIP combines qualitative knowledge in the form of rules, quantitative knowledge in the form of equations, and other mathematical modeling tools. The system allows users rapidly to develop and experiment with models of spacecraft system designs. As information becomes available to the system, appropriate equations are solved symbolically and the results are displayed. Users may browse through the system, observing dependencies and the effects of altering specific parameters. The system can also suggest approaches to the derivation of specific parameter values. In addition to providing a tool for the development of specific designs, VIP aims at increasing the user's understanding of the design process. Users may rapidly examine the sensitivity of a given parameter to others in the system and perform tradeoffs or optimizations of specific parameters. A second major goal of VIP is to integrate the existing corporate knowledge base of models and rules into a central, symbolic form.

  19. Process-based soil erodibility estimation for empirical water erosion models

    USDA-ARS?s Scientific Manuscript database

    A variety of modeling technologies exist for water erosion prediction each with specific parameters. It is of interest to scrutinize parameters of a particular model from the point of their compatibility with dataset of other models. In this research, functional relationships between soil erodibilit...

  20. Scatterometry-based metrology for SAQP pitch walking using virtual reference

    NASA Astrophysics Data System (ADS)

    Kagalwala, Taher; Vaid, Alok; Mahendrakar, Sridhar; Lenahan, Michael; Fang, Fang; Isbester, Paul; Shifrin, Michael; Etzioni, Yoav; Cepler, Aron; Yellai, Naren; Dasari, Prasad; Bozdog, Cornel

    2016-03-01

    Advanced technology nodes, 10nm and beyond, employing multi-patterning techniques for pitch reduction pose new process and metrology challenges in maintaining consistent positioning of structural features. The Self-Aligned Quadruple Patterning (SAQP) process is used to create the fins in FinFET devices with pitch values well below optical lithography limits. The SAQP process bears the compounding effects of successive Reactive Ion Etch (RIE) and spacer depositions. These processes induce a shift in the pitch value from one fin compared to another neighboring fin, known as pitch walking. Pitch walking affects device performance as well as later processes, which work on the assumption that there is consistent spacing between fins. In SAQP there are 3 pitch walking parameters of interest, each linked to specific process steps in the flow. These pitch walking parameters are difficult to discriminate at a specific process step by a singular evaluation technique or even with reference metrology such as Transmission Electron Microscopy (TEM). In this paper we utilize a virtual reference to generate a scatterometry model to measure pitch walk for the SAQP process flow.

  1. Measuring self-aligned quadruple patterning pitch walking with scatterometry-based metrology utilizing virtual reference

    NASA Astrophysics Data System (ADS)

    Kagalwala, Taher; Vaid, Alok; Mahendrakar, Sridhar; Lenahan, Michael; Fang, Fang; Isbester, Paul; Shifrin, Michael; Etzioni, Yoav; Cepler, Aron; Yellai, Naren; Dasari, Prasad; Bozdog, Cornel

    2016-10-01

    Advanced technology nodes, 10 nm and beyond, employing multipatterning techniques for pitch reduction pose new process and metrology challenges in maintaining consistent positioning of structural features. A self-aligned quadruple patterning (SAQP) process is used to create the fins in FinFET devices with pitch values well below optical lithography limits. The SAQP process bears the compounding effects from successive reactive ion etch and spacer depositions. These processes induce a shift in the pitch value from one fin compared to another neighboring fin. This is known as pitch walking. Pitch walking affects device performance as well as later processes, which work on an assumption that there is consistent spacing between fins. In SAQP, there are three pitch walking parameters of interest, each linked to specific process steps in the flow. These pitch walking parameters are difficult to discriminate at a specific process step by singular evaluation technique or even with reference metrology, such as transmission electron microscopy. We will utilize a virtual reference to generate a scatterometry model to measure pitch walk for SAQP process flow.

  2. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and the post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; and statistics produced by the post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which performs numerical calculations efficiently and is fully compatible with the newer Fortran 95; the code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
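
    As an editorial illustration of the weighted least-squares objective and the modified Gauss-Newton iteration described above, the following is a minimal sketch, not MODFLOW-2000 code: the `simulate` and `jacobian` callables stand in for the forward model and its sensitivity-equation output, and the Marquardt-style damping is one common variant of the "modified" step:

        import numpy as np

        def gauss_newton(simulate, jacobian, y_obs, w, b0, tol=1e-6, max_iter=25):
            """Minimize S(b) = sum_i w_i * (y_obs_i - simulate(b)_i)**2
            with a damped (Marquardt-modified) Gauss-Newton iteration."""
            b = np.asarray(b0, dtype=float)
            W = np.diag(w)                    # observation weight matrix
            mu = 1e-3                         # damping parameter
            for _ in range(max_iter):
                r = y_obs - simulate(b)       # residual vector
                X = jacobian(b)               # sensitivities dy/db, shape (n_obs, n_par)
                A = X.T @ W @ X               # normal-equations matrix
                g = X.T @ W @ r               # weighted-residual right-hand side
                # Damping stabilizes the step when A is ill-conditioned.
                step = np.linalg.solve(A + mu * np.diag(np.diag(A)), g)
                b = b + step
                if np.max(np.abs(step) / np.maximum(np.abs(b), 1e-12)) < tol:
                    break
            return b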

  3. Parameter Stability of the Functional–Structural Plant Model GREENLAB as Affected by Variation within Populations, among Seasons and among Growth Stages

    PubMed Central

    Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael

    2007-01-01

    Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing the parameters' coefficient of variation (CV) with that of the phenotype observations for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141
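
    The stability criterion used in the study, comparing the CV of optimized parameters against that of the phenotype, reduces to a simple computation; the numbers below are hypothetical and serve only to illustrate the comparison:

        import numpy as np

        def cv(x):
            """Coefficient of variation: sample standard deviation over mean."""
            x = np.asarray(x, dtype=float)
            return np.std(x, ddof=1) / np.mean(x)

        # Hypothetical values: one GREENLAB parameter optimized on replicate
        # plants, and a phenotype trait measured on the same plants.
        param_estimates = [0.42, 0.45, 0.40, 0.44, 0.43]
        phenotype_obs = [118.0, 131.0, 104.0, 127.0, 122.0]

        # The model "explains" variability when the parameter CV is clearly
        # smaller than the phenotype CV.
        print(f"parameter CV {cv(param_estimates):.3f} vs phenotype CV {cv(phenotype_obs):.3f}")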

  4. Computational Simulation of Containment Influence on Defect Generation During Growth of GeSi

    NASA Technical Reports Server (NTRS)

    Motakef, Shariar; Yesilyurt, S.; Vujisic, L.

    2001-01-01

    This report contains results of theoretical work in conjunction with the NASA RDGS program. It is specifically focused on factors controlling the stability of detachment and the sensitivity of the detachment process to the processing and geometric parameters of the crystal growth process.

  5. Computer Optimization of Biodegradable Nanoparticles Fabricated by Dispersion Polymerization.

    PubMed

    Akala, Emmanuel O; Adesina, Simeon; Ogunwuyi, Oluwaseun

    2015-12-22

    Quality by design (QbD) in the pharmaceutical industry involves designing and developing drug formulations and manufacturing processes which ensure predefined drug product specifications. QbD helps to understand how process and formulation variables affect product characteristics and allows subsequent optimization of these variables vis-à-vis final specifications. Statistical design of experiments (DoE) identifies important parameters in a pharmaceutical dosage form design and then optimizes the parameters with respect to certain specifications. DoE establishes in mathematical form the relationships between critical process parameters, critical material attributes, and critical quality attributes. We focused on the fabrication of biodegradable nanoparticles by dispersion polymerization. Aided by statistical software, a d-optimal mixture design was used to vary the components (crosslinker, initiator, stabilizer, and macromonomers) to obtain twenty PLLA-based and thirty poly-ɛ-caprolactone-based nanoparticle formulations. Scheffe polynomial models were generated to predict particle size (nm), zeta potential, and yield (%) as functions of the composition of the formulations. Simultaneous optimization of the response variables returned component combinations that (1) minimize nanoparticle size; (2) maximize the surface negative zeta potential; and (3) maximize percent yield, making nanoparticle fabrication an economic proposition.
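
    A Scheffe polynomial for mixtures has no intercept and models blending as linear plus pairwise interaction terms. Below is a minimal sketch of fitting such a model, with invented mixture fractions and response values (the real study used a d-optimal design, which this random sample does not reproduce):

        import numpy as np
        from itertools import combinations

        def scheffe_quadratic(X):
            """Model matrix for y = sum_i b_i*x_i + sum_{i<j} b_ij*x_i*x_j,
            where each row of X holds mixture fractions summing to 1."""
            cols = [X[:, i] for i in range(X.shape[1])]
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
            return np.column_stack(cols)

        rng = np.random.default_rng(0)
        # Hypothetical fractions: crosslinker, initiator, stabilizer, macromonomer.
        X = rng.dirichlet([2.0, 2.0, 2.0, 2.0], size=20)
        y = 150 + 40 * X[:, 0] - 60 * X[:, 2] + rng.normal(0, 5, 20)  # invented sizes (nm)

        coef, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)
        x_new = np.array([[0.10, 0.05, 0.25, 0.60]])    # candidate formulation
        print(scheffe_quadratic(x_new) @ coef)          # predicted particle size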

  6. Microstructural Influence on Mechanical Properties in Plasma Microwelding of Ti6Al4V Alloy

    NASA Astrophysics Data System (ADS)

    Baruah, M.; Bag, S.

    2016-11-01

    The complexity of joining Ti6Al4V alloy increases as sheet thickness decreases. The present work emphasizes microplasma arc welding (MPAW) of 500-μm-thick Ti6Al4V alloy in butt joint configuration. Using controlled and regulated arc current, the MPAW process is specifically designed for joining thin sheet components over a wide range of process parameters. The weld quality is assessed by carefully controlling the process parameters and by reducing the formation of oxides. The combined effect of welding speed and current on the weld joint properties is evaluated for joining of Ti6Al4V alloy. The macro- and microstructural characterizations of the weldment by optical microscopy, as well as the analysis of mechanical properties by microtensile and microhardness tests, have been performed. The weld joint quality is affected by a specifically designed fixture that controls oxidation of the joint and introduces a high cooling rate. Hence, the solidified microstructure of the welded specimen influences the mechanical properties of the joint. The butt joint of titanium alloy produced by MPAW at optimal process parameters is of very high quality, without internal defects and with minimum residual distortion.

  7. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
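
    One of the parameterization devices the paper examines, the Karhunen-Loève transformation, amounts to projecting a high-dimensional parameter field onto the leading eigenvectors of its prior covariance so that calibration adjusts a few coefficients rather than every cell. A hedged sketch for a 1-D field with an assumed exponential covariance (not the paper's model):

        import numpy as np

        def kl_basis(C, n_modes):
            """Leading scaled eigenvectors of a prior covariance matrix C."""
            vals, vecs = np.linalg.eigh(C)
            order = np.argsort(vals)[::-1][:n_modes]
            return vecs[:, order] * np.sqrt(vals[order])

        x = np.linspace(0.0, 1.0, 200)                      # 200 model cells
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)  # assumed prior covariance
        B = kl_basis(C, n_modes=10)                         # 200 parameters -> 10 coefficients
        field = B @ np.random.default_rng(1).normal(size=10)  # one random realization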

  8. Case studies on the physical-chemical parameters' variation during three different purification approaches destined to treat wastewaters from food industry.

    PubMed

    Ghimpusan, Marieta; Nechifor, Gheorghe; Nechifor, Aurelia-Cristina; Dima, Stefan-Ovidiu; Passeri, Piero

    2017-12-01

    The paper presents a set of three interconnected case studies on the depuration of food processing wastewaters by using aeration & ozonation and two types of hollow-fiber membrane bioreactor (MBR) approaches. A secondary, broader objective is to draw a clearer frame of the variation of physical-chemical parameters during the purification of food-industry wastewaters under different operating modes, with the aim of improving the management of the water purification process. Chemical oxygen demand (COD), pH, mixed liquor suspended solids (MLSS), total nitrogen, specific nitrogen (NH₄⁺, NO₂⁻, NO₃⁻), total phosphorus, and total surfactants were the measured parameters, and their influence was discussed in order to establish the operating mode that best achieves the required purification performance. The integrated air-ozone aeration process applied in the second operating mode led to a COD decrease of up to 90%, compared with only 75% obtained in a conventional biological activated sludge process. The combined MBR and ozonation purification process produced an additional COD decrease of 10-15% and brought the total surfactant values into compliance with the specific legislation.

  9. The functional significance of EEG microstates – Associations with modalities of thinking.

    PubMed

    Milz, P; Faber, P L; Lehmann, D; Koenig, T; Kochi, K; Pascual-Marqui, R D

    2016-01-15

    The momentary, global functional state of the brain is reflected by its electric field configuration. Cluster analytical approaches consistently extracted four head-surface brain electric field configurations that optimally explain the variance of their changes across time in spontaneous EEG recordings. These four configurations are referred to as EEG microstate classes A, B, C, and D and have been associated with verbal/phonological, visual, subjective interoceptive-autonomic processing, and attention reorientation, respectively. The present study tested these associations via an intra-individual and inter-individual analysis approach. The intra-individual approach tested the effect of task-induced increased modality-specific processing on EEG microstate parameters. The inter-individual approach tested the effect of personal modality-specific parameters on EEG microstate parameters. We obtained multichannel EEG from 61 healthy, right-handed, male students during four eyes-closed conditions: object-visualization, spatial-visualization, verbalization (6 runs each), and resting (7 runs). After each run, we assessed participants' degrees of object-visual, spatial-visual, and verbal thinking using subjective reports. Before and after the recording, we assessed modality-specific cognitive abilities and styles using nine cognitive tests and two questionnaires. The EEG of all participants, conditions, and runs was clustered into four classes of EEG microstates (A, B, C, and D). RMANOVAs, ANOVAs and post-hoc paired t-tests compared microstate parameters between conditions. TANOVAs compared microstate class topographies between conditions. Differences were localized using eLORETA. Pearson correlations assessed interrelationships between personal modality-specific parameters and EEG microstate parameters during no-task resting. As hypothesized, verbal as opposed to visual conditions consistently affected the duration, occurrence, and coverage of microstate classes A and B. Contrary to associations suggested by previous reports, parameters were increased for class A during visualization and for class B during verbalization. In line with previous reports, microstate D parameters were increased during no-task resting compared to the three internal, goal-directed tasks. Topographic differences between conditions included particular sub-regions of components of the metabolic default mode network. Modality-specific personal parameters did not consistently correlate with microstate parameters, except verbal cognitive style, which correlated negatively with microstate class A duration and positively with class C occurrence. This is the first study that aimed to induce EEG microstate class parameter changes based on their hypothesized functional significance. Beyond the associations of microstate classes A and B with visual and verbal processing, respectively, our results suggest that a finely-tuned interplay between all four EEG microstate classes is necessary for the continuous formation of visual and verbal thoughts. Our results point to the possibility that the EEG microstate classes may represent the head-surface measured activity of intra-cortical sources primarily exhibiting inhibitory functions. However, additional studies are needed to verify and elaborate on this hypothesis.
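
    Microstate analysis clusters momentary EEG topographies into a small number of template maps while ignoring polarity. The following is a simplified, polarity-invariant variant of the modified k-means typically used; published pipelines include further steps (such as GFP-peak selection) that are omitted here:

        import numpy as np

        def microstate_cluster(V, k=4, n_iter=50, seed=0):
            """V: (n_samples, n_channels) EEG maps. Returns k template maps
            and per-sample labels, treating V and -V as equivalent."""
            rng = np.random.default_rng(seed)
            V = V / np.linalg.norm(V, axis=1, keepdims=True)
            T = V[rng.choice(len(V), size=k, replace=False)]   # initial templates
            labels = np.zeros(len(V), dtype=int)
            for _ in range(n_iter):
                labels = np.argmax(np.abs(V @ T.T), axis=1)    # polarity-ignoring fit
                for j in range(k):
                    Vj = V[labels == j]
                    if len(Vj):
                        # First right singular vector = polarity-invariant mean map.
                        T[j] = np.linalg.svd(Vj, full_matrices=False)[2][0]
            return T, labels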

  10. Additive Manufacturing of Fuel Injectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadek Tadros, Dr. Alber Alphonse; Ritter, Dr. George W.; Drews, Charles Donald

    Additive manufacturing (AM), also known as 3D-printing, has been shifting from a novelty prototyping paradigm to a legitimate manufacturing tool capable of creating components for highly complex engineered products. An emerging AM technology for producing metal parts is the laser powder bed fusion (L-PBF) process; however, industry manufacturing specifications and component design practices for L-PBF have not yet been established. Solar Turbines Incorporated (Solar), an industrial gas turbine manufacturer, has been evaluating AM technology for development and production applications with the desire to enable accelerated product development cycle times, overall turbine efficiency improvements, and supply chain flexibility relative to conventional manufacturing processes (casting, brazing, welding). Accordingly, Solar teamed with EWI on a joint two-and-a-half-year project with the goal of developing a production L-PBF AM process capable of consistently producing high-nickel alloy material suitable for high temperature gas turbine engine fuel injector components. The project plan tasks were designed to understand the interaction of the process variables and their combined impact on the resultant AM material quality. The composition of the high-nickel alloy powders selected for this program met the conventional cast Hastelloy X compositional limits, and the powders were commercially available in different particle size distributions (PSD) from two suppliers. Solar produced all the test articles, and EWI and Solar shared responsibility for analyzing them. The effects of powder metal input stock, laser parameters, heat treatments, and post-finishing methods were evaluated. This process knowledge was then used to generate tensile, fatigue, and creep material properties data curves suitable for component design activities. The key process controls for ensuring consistent material properties were documented in AM powder and process specifications. The basic components of the project were:
    • Powder metal input stock: powder characterization, dimensional accuracy, metallurgical characterization, and mechanical properties evaluation.
    • Process parameters: laser parameter effects, post-printing heat-treatment development, mechanical properties evaluation, and post-finishing technique.
    • Material design curves: room and elevated temperature tensile, low cycle fatigue, and creep rupture properties curves.
    • AM specifications: key metal powder characteristics, laser parameters, and heat-treatment controls identified.

  11. The potential of multiparametric MRI of the breast

    PubMed Central

    Pinker, Katja; Helbich, Thomas H

    2017-01-01

    MRI is an essential tool in breast imaging, with multiple established indications. Dynamic contrast-enhanced MRI (DCE-MRI) is the backbone of any breast MRI protocol and has an excellent sensitivity and good specificity for breast cancer diagnosis. DCE-MRI provides high-resolution morphological information, as well as some functional information about neoangiogenesis as a tumour-specific feature. To overcome limitations in specificity, several other functional MRI parameters have been investigated and the application of these combined parameters is defined as multiparametric MRI (mpMRI) of the breast. MpMRI of the breast can be performed at different field strengths (1.5–7 T) and includes both established (diffusion-weighted imaging, MR spectroscopic imaging) and novel MRI parameters (sodium imaging, chemical exchange saturation transfer imaging, blood oxygen level-dependent MRI), as well as hybrid imaging with positron emission tomography (PET)/MRI and different radiotracers. Available data suggest that multiparametric imaging using different functional MRI and PET parameters can provide detailed information about the underlying oncogenic processes of cancer development and progression and can provide additional specificity. This article will review the current and emerging functional parameters for mpMRI of the breast for improved diagnostic accuracy in breast cancer. PMID:27805423

  12. Parameter Resetting in Second Language Acquisition. University Research Institute Final Project Report, 1987-88.

    ERIC Educational Resources Information Center

    Phinney-Liapis, Marianne

    Analyses of the Null Subject Parameter (NSP) suggest that several factors may influence the resetting process in second language acquisition, such as specific "trigger" data, awareness of agreement as a part of inflection (INFL), and stylistic rules such as subject postposing and anaphoric reference. Four tests were…

  13. Multiobjective Sensitivity Analysis Of Sediment And Nitrogen Processes With A Watershed Model

    EPA Science Inventory

    This paper presents a computational analysis for evaluating critical non-point-source sediment and nutrient (specifically nitrogen) processes and management actions at the watershed scale. In the analysis, model parameters that bear key uncertainties were presumed to reflect the ...

  14. EVALUATION OF BIOMASS REACTIVITY IN HYDROGASIFICATION FOR THE HYNOL PROCESS

    EPA Science Inventory

    The report gives results of an evaluation of the reactivity of poplar wood in hydrogasification under the operating conditions specific for the Hynol process, using a thermobalance reactor. Parameters affecting gasification behavior (e.g., gas velocity, particle size, system pres...

  15. Investigation on Effect of Material Hardness in High Speed CNC End Milling Process.

    PubMed

    Dhandapani, N V; Thangarasu, V S; Sureshkannan, G

    2015-01-01

    This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear in high speed CNC end milling of various ferrous and nonferrous materials. The challenge of making material-specific decisions on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of coating for the cutting tool, for the required quality and quantity of production, is addressed. Generally, decisions made by the operator on the shop floor are based on values suggested by the tool manufacturer or on trial and error. This paper describes the effect of various parameters on the surface roughness characteristics of the precision machined part. The suggested prediction method is based on experimental analyses of parameters under different combinations of input conditions, which would benefit industry by standardizing high speed CNC end milling processes. The results provide a basis for selecting parameters that yield better surface roughness values, as predicted by the case study results.

  16. Influence of Thrust Level on the Architecture and Optimal Working Process Parameters of a Small-scale Turbojet for UAV

    NASA Astrophysics Data System (ADS)

    Kuz'michev, V. S.; Filinov, E. P.; Ostapyuk, Ya A.

    2018-01-01

    This article describes how the thrust level influences the turbojet architecture (the types of turbomachines that provide maximum efficiency) and its working process parameters (turbine inlet temperature (TIT) and overall pressure ratio (OPR)). Functional gasdynamic and strength constraints were included, and the total mass of fuel and engine required for the mission and the specific fuel consumption (SFC) were used as optimization criteria. Radial and axial turbines and compressors were considered. The results show that as the engine thrust decreases, the optimal values of the working process parameters decrease too, and the regions of compromise shrink. Optimal engine architectures and values of working process parameters are suggested for turbojets with thrust varying from 100 N to 100 kN. The results also show that for thrust below 25 kN the engine scale factor should be taken into account, as low flow rates begin to substantially influence the efficiency of engine elements.

  17. Investigation on Effect of Material Hardness in High Speed CNC End Milling Process

    PubMed Central

    Dhandapani, N. V.; Thangarasu, V. S.; Sureshkannan, G.

    2015-01-01

    This research paper analyzes the effects of material properties on surface roughness, material removal rate, and tool wear in high speed CNC end milling of various ferrous and nonferrous materials. The challenge of making material-specific decisions on the process parameters of spindle speed, feed rate, depth of cut, coolant flow rate, cutting tool material, and type of coating for the cutting tool, for the required quality and quantity of production, is addressed. Generally, decisions made by the operator on the shop floor are based on values suggested by the tool manufacturer or on trial and error. This paper describes the effect of various parameters on the surface roughness characteristics of the precision machined part. The suggested prediction method is based on experimental analyses of parameters under different combinations of input conditions, which would benefit industry by standardizing high speed CNC end milling processes. The results provide a basis for selecting parameters that yield better surface roughness values, as predicted by the case study results. PMID:26881267

  18. CHAM: weak signals detection through a new multivariate algorithm for process control

    NASA Astrophysics Data System (ADS)

    Bergeret, François; Soual, Carole; Le Gratiet, B.

    2016-10-01

    Derivative technologies based on core CMOS processes are significantly aggressive in terms of design rules and process control requirements. The process control plan is derived from Process Assumption (PA) calculations, which result in design rules based on known process variability capabilities, taking into account enough margin to be safe not only for yield but especially for reliability. Even though process assumptions are calculated with a 4-sigma margin on known process capability, efficient and competitive designs challenge the process, especially for derivative technologies at the 40 and 28 nm nodes. For wafer-fab process control, PAs are broken down into monovariate control charts (layer 1 CD, layer 2 CD, layer 2-to-layer 1 overlay, layer 3 CD, etc.) with appropriate specifications and control limits, which together secure the silicon. This works well so far, but such a system is not very sensitive to weak signals coming from interactions of multiple key parameters (a high layer 2 CD combined with a high layer 3 CD, for example). CHAM is a software package using an advanced statistical algorithm specifically designed to detect small signals, especially when there are many parameters to control and when the parameters can interact to create yield issues. In this presentation we first present the CHAM algorithm, then a case study on critical dimensions with its results, and we conclude with future work. This partnership between Ippon and STM is part of E450LMDAP, a European project dedicated to metrology and lithography development for future technology nodes, especially 10 nm.

  19. Support-vector-machines-based multidimensional signal classification for fetal activity characterization

    NASA Astrophysics Data System (ADS)

    Ribes, S.; Voicu, I.; Girault, J. M.; Fournier, M.; Perrotin, F.; Tranquart, F.; Kouamé, D.

    2011-03-01

    Electronic fetal monitoring may be required during the whole pregnancy to closely monitor specific fetal and maternal disorders. Currently used methods suffer from many limitations and are not sufficient to evaluate fetal asphyxia. Fetal activity parameters such as movements, heart rate, and associated parameters are essential indicators of fetal well-being, and no current device gives a simultaneous and sufficient estimation of all these parameters. For this purpose, we built a multi-transducer, multi-gate Doppler system and developed dedicated signal processing techniques for fetal activity parameter extraction, in order to investigate fetal asphyxia or well-being through fetal activity parameters. This paper shows the preliminary feasibility of separating normal and compromised fetuses using our system. A data set consisting of two groups of fetal signals (normal and compromised) was established and provided by physicians. From the estimated parameters, an instantaneous Manning-like score, referred to as the ultrasonic score, was introduced and used together with movements, heart rate, and associated parameters in a classification process using the Support Vector Machines (SVM) method. The influence of the fetal activity parameters and the performance of the SVM were evaluated by computing sensitivity, specificity, percentage of support vectors, and total classification accuracy. We showed the ability to separate the data into two sets, normal fetuses and compromised fetuses, and obtained excellent agreement with the clinical classification performed by physicians.
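
    A hedged sketch of the classification step: an SVM separating two groups of fetal-activity feature vectors, with sensitivity, specificity, and the fraction of support vectors computed afterwards. The features and group statistics are invented, and scikit-learn is an assumed tool, not one named by the paper:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(0)
        # Invented features: [movement index, heart rate (bpm), ultrasonic score].
        X_norm = rng.normal([10.0, 140.0, 8.0], [2.0, 8.0, 1.0], size=(40, 3))
        X_comp = rng.normal([4.0, 120.0, 4.0], [2.0, 8.0, 1.0], size=(40, 3))
        X = np.vstack([X_norm, X_comp])
        y = np.array([0] * 40 + [1] * 40)    # 0 = normal, 1 = compromised

        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()
        print("sensitivity", tp / (tp + fn))
        print("specificity", tn / (tn + fp))
        print("support-vector fraction", clf.n_support_.sum() / len(X))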

  20. Wall Shear Stress Distribution in a Patient-Specific Cerebral Aneurysm Model using Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya

    2016-11-01

    We construct a reduced-order model (ROM) to study wall shear stress (WSS) distributions in image-based patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using a Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters chosen to cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases from the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
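
    The method of snapshots reduces to a singular value decomposition of mean-centered solution fields. A minimal sketch follows; the snapshot matrix would come from the training CFD run, and the variable names are illustrative:

        import numpy as np

        def pod_basis(snapshots, energy=0.99):
            """Columns of `snapshots` are flattened fields (e.g., WSS on the
            aneurysm wall) at successive instants. Returns the mean field and
            the leading modes capturing the requested energy fraction."""
            mean = snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
            captured = np.cumsum(s**2) / np.sum(s**2)
            r = int(np.searchsorted(captured, energy)) + 1
            return mean, U[:, :r]

        # Given mean, Phi = pod_basis(S), a new field is approximated by its
        # projection: wss_hat = mean[:, 0] + Phi @ (Phi.T @ (wss - mean[:, 0]))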

  1. Determination of rational parameters for process of grinding materials pre-crushed by pressure in ball mill

    NASA Astrophysics Data System (ADS)

    Romanovich, A. A.; Romanovich, L. G.; Chekhovskoy, E. I.

    2018-03-01

    The article presents the results of experimental studies on the grinding, in a ball mill equipped with energy-exchange devices, of clinker preliminarily crushed by pressure in press roller mills. The authors studied the influence of the loading coefficients for the grinding bodies of the first and second mill chambers, their lengths, angles of inclination, and the mutual location of the energy-exchange devices (the ellipse segment and the double-acting blade) on the output parameters of the grinding process (productivity, drive power consumption, and specific energy consumption). The best results of the disaggregation and grinding process, judged by the minimum specific energy consumption in the grinding of clinker with an anisotropic texture after force deformation between the rolls of a press roller shredder, are achieved at a particular angle of ellipse-segment inclination, length of the first chamber, and coefficients of loading the chambers with grinding bodies.

  2. Refinement of determination of critical thresholds of stress-strain behaviour by using AE data: potential for evaluation of durability of natural stone

    NASA Astrophysics Data System (ADS)

    Prikryl, Richard; Lokajíček, Tomáš

    2017-04-01

    According to previous studies, evaluation of the stress-strain behaviour (in uniaxial compression) of various rocks appears to be an effective tool for predicting the resistance of natural stone to some physical weathering processes. Precise determination of the critical thresholds, specifically 'crack initiation' and 'crack damage', is a fundamental issue in this approach. In contrast to the 'crack damage' stress/strain threshold, which can easily be read from the deflection point on the volumetric curve, detection of 'crack initiation' is much more difficult. Besides the previously proposed mathematical processing of the axial stress-strain curve, recording of acoustic emission (AE) data and their processing provide a direct measure of various stress/strain thresholds, specifically of 'crack initiation'. This specific parameter is required for the subsequent computation of energetic parameters (mechanical work) that can be stored by a material without the formation of new defects (microcracks) under the acting stress. Based on our experimental data, this mechanical work appears proportional to the resistance of a material to the formation of mode I (tensile) cracks, which are responsible for the destruction of the subsurface below the exposed faces of natural stone.

  3. Design and performance study of an orthopaedic surgery robotized module for automatic bone drilling.

    PubMed

    Boiadjiev, George; Kastelov, Rumen; Boiadjiev, Tony; Kotev, Vladimir; Delchev, Kamen; Zagurski, Kazimir; Vitkov, Vladimir

    2013-12-01

    Many orthopaedic operations involve drilling and tapping before the insertion of screws into a bone. This drilling is usually performed manually, which introduces many problems: attaining a specific drilling accuracy, preventing blood vessels from breaking, and minimizing drill oscillations that would widen the hole. Bone overheating is the most important problem. To avoid such problems and reduce the subjective factor, automated drilling is recommended. Because numerous parameters influence the drilling process, this study examined several experimental methods concerning the experimental identification of technical drilling parameters, including the bone resistance force and the temperature in the drilling process. During drilling, the following parameters were monitored: time, linear velocity, angular velocity, resistance force, penetration depth, and temperature. Specific drilling effects were revealed during the experiments. The accuracy was improved at the starting point of the drilling, and the error for the entire process was less than 0.2 mm. The temperature deviations were kept within tolerable limits. The results of various experiments with different drilling velocities, drill bit diameters, and penetration depths are presented in tables, as well as the curves of the resistance force and temperature with respect to time. Real-time digital indications of the progress of the drilling process are shown. Automatic bone drilling could entirely solve the problems that usually arise during manual drilling. An experimental setup was designed to identify bone drilling parameters such as the resistance force arising from variable bone density, appropriate mechanical drilling torque, linear speed of the drill, and electromechanical characteristics of the motors, drives, and corresponding controllers. Automatic drilling guarantees greater safety for the patient. Moreover, the robot presented is user-friendly because robot tasks are simple to set, and process data are collected in real time.

  4. Invariant polarimetric contrast parameters of coherent light.

    PubMed

    Réfrégier, Philippe; Goudail, François

    2002-06-01

    Many applications use active coherent illumination and analyze the variation of the polarization state of optical signals. However, as a result of the use of coherent light, these signals are generally strongly perturbed by speckle noise. This is the case, for example, for active polarimetric imaging systems that are useful for enhancing contrast between different elements in a scene. We propose a rigorous definition of the minimal set of parameters that characterize the difference between two coherent and partially polarized states. Two states of partially polarized light are a priori defined by eight parameters, for example, their two Stokes vectors. We demonstrate that the processing performance for such signal processing tasks as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by two scalar functions of these eight parameters. These two scalar functions are the invariant parameters that define the polarimetric contrast between two polarized states of coherent light. Different polarization configurations with the same invariant contrast parameters will necessarily lead to the same performance for a given task, which is a desirable quality for a rigorous contrast measure. The definition of these polarimetric contrast parameters simplifies the analysis and the specification of processing techniques for coherent polarimetric signals.

  5. Application of parameter estimation to aircraft stability and control: The output-error approach

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.; Iliff, Kenneth W.

    1986-01-01

    The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.

  6. B827 Chemical Synthesis Project - Industrial Control System Integration - Statement of Work & Specification with Attachments 1-14

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wade, F. E.

    The Chemical Synthesis Pilot Process at the Lawrence Livermore National Laboratory (LLNL) Site 300 827 Complex will be used to synthesize small quantities of material to support research and development. The project will modernize and increase current capabilities for chemical synthesis at LLNL. The primary objective of this project is the conversion of a non-automated, hands-on process to a remote-operation process, while providing enhanced batch process step control, stored recipe-specific parameter sets, process variable visibility, monitoring, alarm and warning handling, and comprehensive batch record data logging. This Statement of Work and Specification provides the industrial-grade process control requirements for the chemical synthesis batching control system, hereafter referred to as the "Control System", to be delivered by the System Integrator.

  7. Detecting the Extent of Cellular Decomposition after Sub-Eutectoid Annealing in Rolled UMo Foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kautz, Elizabeth J.; Jana, Saumyadeep; Devaraj, Arun

    2017-07-31

    This report presents an automated image processing approach to quantifying microstructure image data, specifically the extent of eutectoid (cellular) decomposition in rolled U-10Mo foils. An image processing approach is used to describe microstructure image data quantitatively, in order to relate microstructure to processing parameters (time, temperature, deformation).
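
    A minimal sketch of the kind of quantification described: segment the micrograph with a global threshold and report the area fraction of the decomposed phase. This assumes scikit-image and that the decomposed regions are the darker phase; the file name is illustrative only:

        from skimage.io import imread
        from skimage.filters import threshold_otsu

        img = imread("umo_micrograph.tif", as_gray=True)   # hypothetical image file
        t = threshold_otsu(img)                            # global intensity threshold
        decomposed = img < t                               # mask of the darker phase
        print(f"extent of cellular decomposition: {100 * decomposed.mean():.1f}%")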

  8. Optimal nonlinear information processing capacity in delay-based reservoir computers

    NASA Astrophysics Data System (ADS)

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-09-01

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of the associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.

  9. Optimal nonlinear information processing capacity in delay-based reservoir computers.

    PubMed

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-09-11

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of the associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature.

  10. Optimal nonlinear information processing capacity in delay-based reservoir computers

    PubMed Central

    Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo

    2015-01-01

    Reservoir computing is a recently introduced brain-inspired machine learning paradigm capable of excellent performance in the processing of empirical data. We focus on a particular kind of time-delay-based reservoir computer that has been physically implemented using optical and electronic systems and has shown unprecedented data processing rates. Reservoir computing is well known for the ease of the associated training scheme but also for the problematic sensitivity of its performance to architecture parameters. This article addresses the reservoir design problem, which remains the biggest challenge in the applicability of this information processing scheme. More specifically, we use the information available regarding the optimal reservoir working regimes to construct a functional link between the reservoir parameters and its performance. This function is used to explore various properties of the device and to choose the optimal reservoir architecture, thus replacing the tedious and time-consuming parameter scans used so far in the literature. PMID:26358528

  11. An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography

    NASA Technical Reports Server (NTRS)

    Ivancic, Will (Technical Monitor); Eddy, Wesley M.

    2005-01-01

    Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
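
    The hazard can be seen in the textbook affine doubling formula, whose slope divides by 2y: for a point of order two (y = 0) the quotient is undefined, and 2P is the point at infinity. A toy illustration over a small prime field follows (not a secure curve; the guard against y = 0 is exactly what a careless implementation omits):

        def double_point(P, a, p):
            """Double P = (x, y) on y^2 = x^3 + a*x + b over GF(p).
            Returns None for the point at infinity. Requires Python 3.8+
            for the modular inverse via pow(..., -1, p)."""
            x, y = P
            if y % p == 0:
                return None                       # 2P is the point at infinity
            lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
            x3 = (lam * lam - 2 * x) % p
            return (x3, (lam * (x - x3) - y) % p)

        # Curve y^2 = x^3 + 4 over GF(11): (6, 0) satisfies 6**3 + 4 = 0 (mod 11).
        print(double_point((6, 0), 0, 11))   # None: dividing by 2y would fail here
        print(double_point((0, 2), 0, 11))   # (0, 9), an ordinary doubling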

  12. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the extent to which the fixed values restrict the model's agility during parameter estimation. We found 139 hard-coded values across all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. Forty-two standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a sub-set of, for example, only soil parameters thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
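
    A Sobol' analysis of the kind reported can be sketched with the SALib package (a tooling assumption, not what the abstract names); the three parameters and the toy model standing in for Noah-MP are invented:

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["soil_surface_resistance", "snow_albedo", "leaf_cn"],
            "bounds": [[1.0, 100.0], [0.4, 0.9], [15.0, 60.0]],
        }
        X = saltelli.sample(problem, 1024)        # Saltelli cross-sampling scheme
        # Stand-in scalar output (e.g., mean latent heat over a simulation).
        Y = np.sqrt(X[:, 0]) + 3.0 * X[:, 1] + 0.1 * X[:, 1] * X[:, 2]
        Si = sobol.analyze(problem, Y)
        print(Si["S1"])    # first-order indices
        print(Si["ST"])    # total-order indices, including interactions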

  13. Utilisation of chip thickness models in grinding

    NASA Astrophysics Data System (ADS)

    Singleton, Roger

    Grinding is now a well established process utilised for both stock removal and finish applications. Although significant research is performed in this field, grinding still experiences problems with burn and high forces, which can lead to poor quality components and damage to equipment. This generally occurs when the process deviates from its safe working conditions. In milling, chip thickness parameters are utilised to predict and maintain process outputs, leading to improved control of the process. This thesis furthers knowledge of the relationship between chip thickness and the grinding process outputs to provide an increased predictive and maintenance modelling capability. Machining trials were undertaken using different chip thickness parameters to understand how these affect the process outputs. The chip thickness parameters were maintained at different grinding wheel diameters for a constant productivity process to determine the impact of chip thickness at a constant material removal rate. Additional testing using a modified pin-on-disc test rig was performed to provide further information on process variables. The different chip thickness parameters provide control of different process outputs in the grinding process. These relationships can be described using contact layer theory and heat flux partitioning. The contact layer is defined as the immediate layer beneath the contact arc at the wheel-workpiece interface. The size of the layer governs the force experienced during the process. The rate of contact layer removal directly impacts the net power required from the system. It was also found that the specific grinding energy of a process is more dependent on the productivity of the grinding process than on the value of chip thickness. Changes in chip thickness at constant material removal rate result in microscale changes in the rate of contact layer removal when compared to changes in process productivity. This is a significant piece of information in relation to specific grinding energy, where conventional theory states that it is primarily dependent on chip thickness.

  14. Development of a parameter optimization technique for the design of automatic control systems

    NASA Technical Reports Server (NTRS)

    Whitaker, P. H.

    1977-01-01

    Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.

  15. Making Ternary Quantum Dots From Single-Source Precursors

    NASA Technical Reports Server (NTRS)

    Bailey, Sheila; Banger, Kulbinder; Castro, Stephanie; Hepp, Aloysius

    2007-01-01

    A process has been devised for making ternary (specifically, CuInS2) nanocrystals for use as quantum dots (QDs) in a contemplated next generation of high-efficiency solar photovoltaic cells. The process parameters can be chosen to tailor the sizes (and, thus, the absorption and emission spectra) of the QDs.

  16. Planning for Higher Education.

    ERIC Educational Resources Information Center

    Lindstrom, Caj-Gunnar

    1984-01-01

    Decision processes for strategic planning for higher education institutions are outlined using these parameters: institutional goals and power structure, organizational climate, leadership attitudes, specific problem type, and problem-solving conditions and alternatives. (MSE)

  17. Theoretical performance of liquid hydrogen and liquid fluorine as a rocket propellant

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Huff, Vearl N

    1953-01-01

    Theoretical values of performance parameters for liquid hydrogen and liquid fluorine as a rocket propellant were calculated on the assumption of equilibrium composition during the expansion process for a wide range of fuel-oxidant and expansion ratios. The parameters included were specific impulse, combustion-chamber temperature, nozzle-exit temperature, equilibrium composition, mean molecular weight, characteristic velocity, coefficient of thrust, ratio of nozzle-exit area to throat area, specific heat at constant pressure, coefficient of viscosity, and coefficient of thermal conductivity. The maximum value of specific impulse was 364.6 pound-seconds per pound for a chamber pressure of 300 pounds per square inch absolute (20.41 atm) and an exit pressure of 1 atmosphere.
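
    Several of the tabulated quantities are linked by standard nozzle relations; in particular, specific impulse combines the characteristic velocity and the thrust coefficient (with $F$ the thrust, $\dot m$ the propellant mass flow, $g_0$ standard gravitational acceleration, $p_c$ chamber pressure, and $A_t$ throat area):

        \[
          I_{\mathrm{sp}} \;=\; \frac{F}{\dot m \, g_0}
                         \;=\; \frac{c^{*} \, C_F}{g_0},
          \qquad
          c^{*} \;=\; \frac{p_c A_t}{\dot m}
        \]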

  18. Theoretical performance of liquid ammonia and liquid fluorine as a rocket propellant

    NASA Technical Reports Server (NTRS)

    Gordon, Sanford; Huff, Vearl N

    1953-01-01

    Theoretical values of performance parameters for liquid ammonia and liquid fluorine as a rocket propellant were calculated on the assumption of equilibrium composition during the expansion process for a wide range of fuel-oxidant and expansion ratios. The parameters included were specific impulse, combustion chamber temperature, nozzle-exit temperature, equilibrium composition, mean molecular weight, characteristic velocity, coefficient of thrust, ratio of nozzle-exit area to throat area, specific heat at constant pressure, coefficient of viscosity, and coefficient of thermal conductivity. The maximum value of specific impulse was 311.5 pound-seconds per pound for a chamber pressure of 300 pounds per square inch absolute (20.41 atm) and an exit pressure of 1 atmosphere.

  19. Design of a Data Catalogue for Perdigão-2017 Field Experiment: Establishing the Relevant Parameters, Post-Processing Techniques and Users Access

    NASA Astrophysics Data System (ADS)

    Palma, J. L.; Belo-Pereira, M.; Leo, L. S.; Fernando, J.; Wildmann, N.; Gerz, T.; Rodrigues, C. V.; Lopes, A. S.; Lopes, J. C.

    2017-12-01

    Perdigão is the largest of a series of wind-mapping studies embedded in the ongoing NEWA (New European Wind Atlas) Project. The intensive observational period of the Perdigão field experiment resulted in an unprecedented volume of data, covering several wind conditions over 46 consecutive days between May and June 2017. For researchers looking into specific events, it is time-consuming to scrutinise the datasets for appropriate conditions; the task becomes harder still if the parameters of interest were not measured directly and must instead be computed from the raw datasets. This work presents the e-Science platform developed by the University of Porto for the Perdigão dataset. The platform will assist the scientists of Perdigão and the larger scientific community in extracting the datasets associated with specific flow regimes of interest, as well as automatically performing post-processing/filtering operations internally in the platform. We illustrate the flow regime categories identified in Perdigão based on several parameters, such as weather type classification and cloud characteristics, as well as stability regime indicators (Brunt-Väisälä frequency, Scorer parameter, potential temperature inversion heights, dimensionless Richardson and Froude numbers) and wind regime indicators. Examples of some of the post-processing techniques available in the e-Science platform, such as the Savitzky-Golay low-pass filtering technique, are also presented.

  20. An Introduction to Data Analysis in Asteroseismology

    NASA Astrophysics Data System (ADS)

    Campante, Tiago L.

    A practical guide is presented to some of the main data analysis concepts and techniques currently employed in the asteroseismic study of stars exhibiting solar-like oscillations. The subjects of digital signal processing and spectral analysis are introduced first; these concern the acquisition of continuous physical signals to be subsequently digitally analyzed. A number of specific concepts and techniques relevant to asteroseismology are then presented as we follow the typical workflow of the data analysis process, namely, the extraction of global asteroseismic parameters and individual mode parameters (also known as peak-bagging) from the oscillation spectrum.

  1. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
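
    The quasi-Monte Carlo sampling step described, drawing a Sobol sequence over uniform prior ranges, can be sketched with SciPy's qmc module (a tooling assumption; the four parameter ranges below are invented placeholders for the PFT-dependent parameters named above):

        import numpy as np
        from scipy.stats import qmc

        lower = np.array([4.0, 0.01, 20.0, 0.05])   # hypothetical prior lower bounds
        upper = np.array([9.0, 0.04, 40.0, 0.20])   # hypothetical prior upper bounds

        sampler = qmc.Sobol(d=4, scramble=True, seed=42)
        unit = sampler.random_base2(m=10)            # 2**10 = 1024 quasi-random points
        samples = qmc.scale(unit, lower, upper)      # mapped onto the prior ranges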

  2. Artificial neuron-glia networks learning approach based on cooperative coevolution.

    PubMed

    Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B

    2015-06-01

    Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their performance advantage over the traditional approach, i.e. without artificial astrocytes, has already been demonstrated on classification problems. However, the corresponding learning algorithms developed so far depend strongly on a set of glial parameters which are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be done to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem-dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process and makes it possible to test any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than previous ANGNs and competitive results with ANN approaches.
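
    A minimal sketch of the cooperative-coevolution idea: two populations, one of connection weights and one of glial parameters, each scored against the best collaborator from the other, with a toy fitness function standing in for the actual ANGN training and evaluation:

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(weights, glia):
          # Placeholder for training/evaluating an ANGN on a classification
          # task; a toy quadratic so the sketch runs standalone
          return -np.sum(weights**2) - np.sum((glia - 0.5)**2)

      pop_w = rng.normal(size=(20, 8))     # candidate connection weights
      pop_g = rng.uniform(size=(20, 3))    # candidate glial parameters
      best_w, best_g = pop_w[0], pop_g[0]

      for gen in range(50):
          # Score each weight candidate with the best glial collaborator...
          sw = np.array([fitness(w, best_g) for w in pop_w])
          best_w = pop_w[sw.argmax()]
          # ...and each glial candidate with the best weight collaborator
          sg = np.array([fitness(best_w, g) for g in pop_g])
          best_g = pop_g[sg.argmax()]
          # Truncation selection plus Gaussian mutation in both populations
          pop_w = np.vstack([pop_w[sw.argsort()[-10:]],
                             pop_w[sw.argsort()[-10:]] + rng.normal(0, .1, (10, 8))])
          pop_g = np.vstack([pop_g[sg.argsort()[-10:]],
                             pop_g[sg.argsort()[-10:]] + rng.normal(0, .05, (10, 3))])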

  3. Computer model for economic study of unbleached kraft paperboard production

    Treesearch

    Peter J. Ince

    1984-01-01

    Unbleached kraft paperboard is produced from wood fiber in an industrial papermaking process. A highly specific and detailed model of the process is presented. The model is also presented as a working computer program. A user of the computer program will provide data on physical parameters of the process and on prices of material inputs and outputs. The program is then...

  4. Monitoring the injured brain: registered, patient specific atlas models to improve accuracy of recovered brain saturation values

    NASA Astrophysics Data System (ADS)

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid

    2015-07-01

    The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high-density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject-specific reconstruction models. This study assesses the use of registered atlas models for situations where subject-specific models are not available. Data simulated from subject-specific models were reconstructed using the eight registered atlas models, implementing a regional (layered) parameter recovery in NIRFAST. A three-region recovery based on the atlas model yielded recovered brain saturation values accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer-thickness mismatch propagated through the reconstruction process, decreasing the parameter accuracy.

  5. Ablation dynamics - from absorption to heat accumulation/ultra-fast laser matter interaction

    NASA Astrophysics Data System (ADS)

    Kramer, Thorsten; Remund, Stefan; Jäggi, Beat; Schmid, Marc; Neuenschwander, Beat

    2018-05-01

    Ultra-short laser radiation is used in manifold industrial applications today. Although state-of-the-art laser sources provide an average power of 10-100 W at repetition rates of up to several megahertz, most applications do not benefit from it. On the one hand, the processing speed is limited to some hundred millimeters per second by the dynamics of mechanical axes or galvanometric scanners. On the other hand, high repetition rates require consideration of new physical effects such as heat accumulation and shielding that might reduce the process efficiency. For ablation processes, process efficiency can be expressed by the specific removal rate, i.e. the ablated volume per unit time and average power. The analysis of the specific removal rate for different laser parameters, such as average power, repetition rate or pulse duration, and process parameters, such as scanning speed or material, can be used to find the best operating point for microprocessing applications. Analytical models and molecular dynamics simulations based on the so-called two-temperature model reveal the causes of these limiting physical effects. The findings of the models and simulations can be used to optimize processing strategies.

  6. Systems for monitoring and digitally recording water-quality parameters

    USGS Publications Warehouse

    Smoot, George F.; Blakey, James F.

    1966-01-01

    Digital recording of water-quality parameters is a link in the automated data collection and processing system of the U.S. Geological Survey. The monitoring and digital recording systems adopted by the Geological Survey, while punching all measurements on a standard paper tape, provide a choice of compatible components to construct a system to meet specific physical problems and data needs. As many as 10 parameters can be recorded by an instrument, with the only limiting criterion being that measurements are expressed as electrical signals.

  7. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm, respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, outperforming the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
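
    A minimal sketch of this optimization loop, written here with scikit-optimize's gp_minimize as one possible tool (not necessarily the authors' implementation); the objective below is a toy surrogate for the cross-validated MAE a real pipeline would return after resampling, smoothing and training:

      from skopt import gp_minimize

      def cv_mae(params):
          voxel, fwhm = params
          # Placeholder: a real objective would resample and smooth the scans
          # with these values, train the SVM, and return cross-validated MAE
          return (voxel - 3.7)**2 + 0.1*(fwhm - 3.7)**2

      res = gp_minimize(cv_mae,
                        [(0.5, 8.0),           # voxel size (mm) search range
                         (0.0, 12.0)],         # smoothing kernel FWHM (mm)
                        n_calls=30, random_state=0)
      print(res.x, res.fun)                    # best (voxel, fwhm) and score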

  8. Technical variables in high-throughput miRNA expression profiling: much work remains to be done.

    PubMed

    Nelson, Peter T; Wang, Wang-Xia; Wilfred, Bernard R; Tang, Guiliang

    2008-11-01

    MicroRNA (miRNA) gene expression profiling has provided important insights into plant and animal biology. However, there has not been ample published work about pitfalls associated with technical parameters in miRNA gene expression profiling. One source of pertinent information about technical variables in gene expression profiling is the separate and better-established literature on mRNA expression profiling. However, many aspects of miRNA biochemistry are unique. For example, the cellular processing and compartmentation of miRNAs, the differential stability of specific miRNAs, and aspects of global miRNA expression regulation require specific consideration. Additional possible sources of systematic bias in miRNA expression studies include the differential impact of pre-analytical variables, substrate specificity of nucleic acid processing enzymes used in labeling and amplification, and issues regarding new miRNA discovery and annotation. We conclude that greater focus on technical parameters is required to bolster the validity, reliability, and cultural credibility of miRNA gene expression profiling studies.

  9. Computer-Assisted Sperm Analysis (CASA) parameters and their evolution during preparation as predictors of pregnancy in intrauterine insemination with frozen-thawed donor semen cycles.

    PubMed

    Fréour, Thomas; Jean, Miguel; Mirallié, Sophie; Dubourdieu, Sophie; Barrière, Paul

    2010-04-01

    To study the potential of CASA parameters in frozen-thawed donor semen, before and after preparation on a silica gradient, as predictors of pregnancy in IUI with donor semen cycles. CASA parameters were measured in thawed donor semen before and after preparation on a silica gradient in 132 couples undergoing 168 IUI cycles with donor semen. The evolution of these parameters throughout this process was calculated. The relationship with cycle outcome was then studied. The clinical pregnancy rate was 18.4% per cycle. CASA parameters on donor semen before or after preparation were not significantly different between the pregnancy and failure groups. However, the amplitude of lateral head displacement (ALH) of spermatozoa improved in all cycles where pregnancy occurred, thus predicting pregnancy with a sensitivity of 100% and a specificity of 20%. Even though CASA parameters do not seem to predict pregnancy in IUI with donor semen cycles, their evolution during the preparation process should be evaluated, especially ALH. However, the link between ALH improvement during the preparation process and pregnancy remains to be explored. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  10. Long term pavement performance computed parameter : frost penetration

    DOT National Transportation Integrated Search

    2008-11-01

    As the pavement design process moves toward mechanistic-empirical techniques, knowledge of seasonal changes in pavement structural characteristics becomes critical. Specifically, frost penetration information is necessary for determining the effect o...

  11. Simulation Modeling of Software Development Processes

    NASA Technical Reports Server (NTRS)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  12. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the meta-optimization task. Thus, the approach proposed in this work can be called “meta-metaheuristic”. A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.

  13. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan

    2016-09-01

    Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code though, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.

  14. An improved probit method for assessment of domino effect to chemical process equipment caused by overpressure.

    PubMed

    Mingguang, Zhang; Juncheng, Jiang

    2008-10-30

    Overpressure is one important cause of domino effects in accidents involving chemical process equipment. Damage probability and the relative threshold value are two necessary parameters in QRA of this phenomenon. Some simple models had been proposed based on scarce data or oversimplified assumptions. Hence, more data about damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and damage degree of equipment was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements over present models were evidenced through comparison with other models in the literature, taking into account parameters such as consistency between models and data and depth of quantitativeness in QRA.
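
    For orientation, probit models in QRA commonly take the form Y = k1 + k2 ln(dose), with the damage probability recovered as Phi(Y - 5); the sketch below uses that generic form with invented coefficients, not the ones fitted in this work:

      import numpy as np
      from scipy.stats import norm

      # Generic QRA probit: Y = k1 + k2*ln(dose); P(damage) = Phi(Y - 5).
      # Coefficients are invented for illustration, not the fitted ones.
      k1, k2 = -18.0, 2.5                      # hypothetical, overpressure in Pa

      def damage_probability(overpressure_pa):
          Y = k1 + k2*np.log(overpressure_pa)
          return norm.cdf(Y - 5.0)

      print(damage_probability(3.0e4))         # e.g. a 30 kPa overpressure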

  15. Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Negro Maggio, Valentina; Iocchi, Luca

    2015-02-01

    Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.

  16. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipating out-of-specification (OOS) events, identifying critical process parameters, and taking risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
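
    A minimal sketch of the underlying idea: propagating parameter variation through stacked unit operations by Monte Carlo and reading off an OOS probability against a specification limit. The process steps, distributions, interaction and limit are all invented for illustration:

      import numpy as np

      rng = np.random.default_rng(1)
      N = 100_000                              # Monte Carlo runs

      # Hypothetical two-step process: upstream titer, then capture yield
      titer = rng.normal(5.0, 0.4, N)          # g/L, step-1 PP variation
      cap_yield = rng.normal(0.85, 0.03, N)    # step-2 PP variation
      # Toy interaction between unit operations: very high titer slightly
      # overloads the capture column
      cap_yield -= 0.02*np.clip(titer - 5.5, 0.0, None)

      purity = 95.0 + 2.0*cap_yield + rng.normal(0, 0.3, N)   # a CQA, %
      p_oos = np.mean(purity < 96.0)           # invented specification limit
      print(f"predicted OOS probability: {p_oos:.3%}")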

  17. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
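
    A minimal bootstrap SISR (particle filter) sketch on a toy state-space population model: propagate, weight by the observation likelihood, resample. Parameter estimation with kernel smoothing, as used in the study, would add parameter particles on top of this, and all model settings here are invented:

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy model: N_t = N_{t-1}*exp(r + noise), counts observed with
      # Poisson error; SISR estimates the latent population size N_t
      T, P = 25, 5000
      r, q = 0.05, 0.1
      N_true = 100*np.exp(np.cumsum(rng.normal(r, q, T)))
      y = rng.poisson(N_true)                  # simulated count data

      particles = rng.uniform(50, 200, P)      # prior on initial size
      est = []
      for t in range(T):
          particles *= np.exp(rng.normal(r, q, P))      # propagate
          logw = y[t]*np.log(particles) - particles     # Poisson log-lik
          w = np.exp(logw - logw.max()); w /= w.sum()   # (unnormalized)
          particles = particles[rng.choice(P, P, p=w)]  # resample
          est.append(particles.mean())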

  18. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    NASA Astrophysics Data System (ADS)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transport by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed both at the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for the development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies and the results of the estimation pave the way for a regional or global-scale Bayesian calibration of wetland emission models.
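
    A minimal sketch of an adaptive Metropolis step of the kind used for such calibrations, scaling the proposal covariance by 2.38^2/d from the chain history (after Haario et al.); the log-posterior is a placeholder for the wetland model's actual data misfit:

      import numpy as np

      rng = np.random.default_rng(3)

      def log_post(theta):
          # Placeholder log-posterior; a real one would run the CH4 model
          # and compare against the flux measurements
          return -0.5*np.sum((theta - np.array([1.0, 0.3]))**2 / 0.2**2)

      n, d = 20_000, 2
      chain = np.empty((n, d)); chain[0] = [0.0, 0.0]
      cov = 0.1*np.eye(d)
      for i in range(1, n):
          if i > 1000 and i % 500 == 0:        # adapt to chain history
              cov = 2.38**2/d*np.cov(chain[:i].T) + 1e-8*np.eye(d)
          prop = rng.multivariate_normal(chain[i-1], cov)
          if np.log(rng.uniform()) < log_post(prop) - log_post(chain[i-1]):
              chain[i] = prop                  # accept
          else:
              chain[i] = chain[i-1]            # reject, repeat last state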

  19. Neural networks with fuzzy Petri nets for modeling a machining process

    NASA Astrophysics Data System (ADS)

    Hanna, Moheb M.

    1998-03-01

    The paper presents an intelligent architecture based on a feedforward neural network with fuzzy Petri nets for modeling product quality in a CNC machining center. It discusses how the proposed architecture can be used for modeling, monitoring and controlling a product quality specification such as surface roughness. The surface roughness represents the output quality specification of parts manufactured by a CNC machining center as a result of a milling process. The neural network approach employed selected input parameters defined by the machine operator via the CNC code. The fuzzy Petri nets approach utilized the exact input milling parameters, such as spindle speed, feed rate, tool diameter and coolant (off/on), which can be obtained via the machine or a sensor system. The aim of the proposed architecture is to model the demanded quality of surface roughness as high, medium or low.

  20. Rapid permeation measurement system for the production control of monolayer and multilayer films

    NASA Astrophysics Data System (ADS)

    Botos, J.; Müller, K.; Heidemeyer, P.; Kretschmer, K.; Bastian, M.; Hochrein, T.

    2014-05-01

    Plastics have been used for packaging films for a long time. Until now the development of new formulations for film applications, including process optimization, has been a time-consuming and cost-intensive process for gases like oxygen (O2) or carbon dioxide (CO2). By using helium (He), the permeation measurement can be accelerated from hours or days to a few minutes. To this end, a manometric measuring system for tests according to ISO 15105-1 is coupled with a mass spectrometer to determine the helium flow rate and to calculate the helium permeation rate. Due to the accelerated determination, the permeation quality of monolayer and multilayer films can be measured at-line. Such a system can be used to predict, for example, the helium permeation rate of filled polymer films. Defined quality limits for the permeation rate can be specified, as can prompt corrections of process parameters if the results do not meet the specification. This method for process control was tested on a pilot line with a co-rotating twin-screw extruder for monolayer films. Selected process parameters were varied iteratively without changing the material formulation to obtain the best process parameter set and thus the lowest permeation rate. Beyond that, the influence of different parameters on the helium permeation rate was examined on monolayer films. The results were evaluated conventionally as well as with artificial neural networks in order to determine the non-linear correlation between all process parameters.

  1. Solid-Liquid and Liquid-Liquid Mixing Laboratory for Chemical Engineering Undergraduates

    ERIC Educational Resources Information Center

    Pour, Sanaz Barar; Norca, Gregory Benoit; Fradette, Louis; Legros, Robert; Tanguy, Philippe A.

    2007-01-01

    Solid-liquid and liquid-liquid mixing experiments have been developed to provide students with a practical experience on suspension and emulsification processes. The laboratory focuses on the characterization of the process efficiency, specifically the influence of the main operating parameters and the effect of the impeller type. (Contains 2…

  2. Quality Control Analysis of Selected Aspects of Programs Administered by the Bureau of Student Financial Assistance. Task 1 and Quality Control Sample; Error-Prone Modeling Analysis Plan.

    ERIC Educational Resources Information Center

    Saavedra, Pedro; And Others

    Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…

  3. Fluid density and concentration measurement using noninvasive in situ ultrasonic resonance interferometry

    DOEpatents

    Pope, Noah G.; Veirs, Douglas K.; Claytor, Thomas N.

    1994-01-01

    The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure.

  4. Fluid density and concentration measurement using noninvasive in situ ultrasonic resonance interferometry

    DOEpatents

    Pope, N.G.; Veirs, D.K.; Claytor, T.N.

    1994-10-25

    The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure. 7 figs.
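
    A minimal sketch of the signal chain the patent describes: a resonance response whose peak spacing tracks the fluid, an FFT of that spectrum, and a peak pick that would then be passed through a calibration curve. The frequencies and spacing are invented:

      import numpy as np

      # Idealized resonance response: a comb of peaks whose frequency
      # spacing is set by the fluid (all numbers invented)
      f = np.linspace(50e3, 500e3, 8192)       # swept excitation, Hz
      spacing = 7.5e3                          # Hz between fluid resonances
      resp = 1 + 0.5*np.cos(2*np.pi*f/spacing)

      fft_vals = np.abs(np.fft.rfft(resp - resp.mean()))
      lags = np.fft.rfftfreq(f.size, d=f[1] - f[0])   # "time-like" axis, s
      peak_lag = lags[fft_vals.argmax()]       # dominant periodicity
      print(1/peak_lag)                        # recovers the ~7.5 kHz spacing;
      # a calibration curve would map this peak to specific gravity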

  5. Invariant polarimetric contrast parameters of light with Gaussian fluctuations in three dimensions.

    PubMed

    Réfrégier, Philippe; Roche, Muriel; Goudail, François

    2006-01-01

    We propose a rigorous definition of the minimal set of parameters that characterize the difference between two partially polarized states of light whose electric fields vary in three dimensions with Gaussian fluctuations. Although two such states are a priori defined by eighteen parameters, we demonstrate that the performance of processing tasks such as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by three scalar functions of these parameters. These functions define a "polarimetric contrast" that simplifies the analysis and the specification of processing techniques on polarimetric signals and images. This result can also be used to analyze the definition of the degree of polarization of a three-dimensional state of light with Gaussian fluctuations, by comparing it, with respect to its polarimetric contrast parameters, with totally depolarized light. We show that these contrast parameters are a simple function of the degrees of polarization previously proposed by Barakat [Opt. Acta 30, 1171 (1983)] and Setälä et al. [Phys. Rev. Lett. 88, 123902 (2002)]. Finally, we analyze the dimension of the set of contrast parameters in different particular situations.

  6. SU-C-BRD-03: Analysis of Accelerator Generated Text Logs for Preemptive Maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Able, CM; Baydush, AH; Nguyen, C

    2014-06-15

    Purpose: To develop a model to analyze medical accelerator generated parameter and performance data that will provide an early warning of performance degradation and impending component failure. Methods: A robust 6 MV VMAT quality assurance treatment delivery was used to test the constancy of accelerator performance. The generated text log files were decoded and analyzed using statistical process control (SPC) methodology. The text file data is a single snapshot of energy-specific and overall system parameters. A total of 36 system parameters were monitored, including RF generation, electron gun control, energy control, beam uniformity control, DC voltage generation, and cooling systems. The parameters were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and the parameter/system specification. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: the value of 1 standard deviation from the mean operating parameter of 483 TB systems, a small fraction (≤ 5%) of the operating range, or a fraction of the minor fault deviation. Results: There were 34 parameters in which synthetic errors were introduced. There were 2 parameters (radial position steering coil and positive 24 V DC) in which the errors did not exceed the limit of the I/MR chart. The I chart limit was exceeded for all of the remaining parameters (94.2%). The MR chart limit was exceeded in 29 of the 32 parameters (85.3%) in which the I chart limit was exceeded. Conclusion: Statistical process control I/MR evaluation of text log file parameters may be effective in providing an early warning of performance degradation or component failure for digital medical accelerator systems. Research is supported by Varian Medical Systems, Inc.
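
    The I/MR limit computation itself is compact; a minimal sketch with the standard SPC constants for subgroup size 2 (d2 = 1.128, D4 = 3.267) and an invented log-file parameter series:

      import numpy as np

      def imr_limits(x):
          # Individuals and moving-range chart limits at 3 sigma
          mr = np.abs(np.diff(x))
          sigma = mr.mean() / 1.128            # d2 = 1.128 for n = 2
          i_lims = (x.mean() - 3*sigma, x.mean() + 3*sigma)
          mr_lims = (0.0, 3.267*mr.mean())     # D4 = 3.267 for n = 2
          return i_lims, mr_lims

      # Invented daily snapshots of one logged accelerator parameter
      x = np.random.default_rng(4).normal(1.02, 0.004, 60)
      (i_lo, i_hi), (_, mr_hi) = imr_limits(x)
      alarms = np.where((x < i_lo) | (x > i_hi))[0]   # out-of-limit days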

  7. Friction spinning - Twist phenomena and the capability of influencing them

    NASA Astrophysics Data System (ADS)

    Lossen, Benjamin; Homberg, Werner

    2016-10-01

    The friction spinning process belongs to the incremental forming techniques. The process consists of process elements from both metal spinning and friction welding. The selective combination of process elements from these two processes results in the integration of friction sub-processes into a spinning process. This implies self-induced heat generation, with the possibility of manufacturing functionally graded parts from tubes and sheets. Compared with conventional spinning processes, this in-process heat treatment permits the extension of existing forming limits and also the production of more complex geometries. Furthermore, the defined adjustment of part properties like strength, grain size/orientation and surface conditions can be achieved through appropriate process parameter settings and consequently by setting a specific temperature profile in combination with the degree of deformation. The results presented from tube forming start with an investigation into the twist phenomena arising in flange processing. In this way, the influence of the main parameters, such as rotation speed, feed rate, forming paths and tool friction surface, and their effects on temperature, forces and finally the twist behavior are analyzed. Following this, the significant correlations with the parameters and a new process strategy are set out in order to show the possibility of achieving a defined grain texture orientation.

  8. Description, characteristics and testing of the NASA airborne radar

    NASA Technical Reports Server (NTRS)

    Jones, W. R.; Altiz, O.; Schaffner, P.; Schrader, J. H.; Blume, H. J. C.

    1991-01-01

    Presented here is a description of a coherent radar scatterometer and its associated signal processing hardware, which have been specifically designed to detect microbursts and record their radar characteristics. Radar parameters, signal processing techniques and detection algorithms, all under computer control, combine to sense and process reflectivity, clutter, and microburst data. Also presented is the system's high-density, high-data-rate recording system. This digital system is capable of recording many minutes of the in-phase and quadrature components and corresponding receiver gains of the scattered returns for selected spatial regions, as well as other aircraft and hardware related parameters of interest for post-flight analysis. Information is given in viewgraph form.

  9. Retrofitting activated sludge systems to intermittent aeration for nitrogen removal.

    PubMed

    Hanhan, O; Artan, N; Orhon, D

    2002-01-01

    The paper provides the basis and the conceptual approach of applying process kinetics and modelling to the design of alternating activated sludge systems for retrofitting existing activated sludge plants to intermittent aeration for nitrogen removal. It shows the significant role of the two specific parameters, namely, the aerated fraction and the cycle time ratio on process performance through model simulations and proposes a way to incorporate them into a design procedure using process stoichiometry and mass balance. It illustrates the effect of these parameters, together with the sludge age, in establishing the balance between the denitrification potential and the available nitrogen created in the anoxic/aerobic sequences of system operation.

  10. Analytical and regression models of glass rod drawing process

    NASA Astrophysics Data System (ADS)

    Alekseeva, L. B.

    2018-03-01

    The process of drawing glass rods (light guides) is being studied. The parameters of the process affecting the quality of the light guide have been determined. To solve the problem, mathematical models based on general equations of continuum mechanics are used. The conditions for the stable flow of the drawing process have been found, which are determined by the stability of the motion of the glass mass in the formation zone to small uncontrolled perturbations. The sensitivity of the formation zone to perturbations of the drawing speed and viscosity is estimated. Experimental models of the drawing process, based on the regression analysis methods, have been obtained. These models make it possible to customize a specific production process to obtain light guides of the required quality. They allow one to find the optimum combination of process parameters in the chosen area and to determine the required accuracy of maintaining them at a specified level.

  11. Modulation of CD4(+) T cell-dependent specific cytotoxic CD8(+) T cells differentiation and proliferation by the timing of increase in the pathogen load.

    PubMed

    Tzelepis, Fanny; Persechini, Pedro M; Rodrigues, Mauricio M

    2007-04-25

    Following infection with viruses, bacteria or protozoan parasites, naïve antigen-specific CD8(+) T cells undergo a process of differentiation and proliferation to generate effector cells. Recent evidence suggests that the timing of generation of specific effector CD8(+) T cells varies widely according to different pathogens. We hypothesized that the timing of increase in the pathogen load could be a critical parameter governing this process. Using increasing doses of the protozoan parasite Trypanosoma cruzi to infect C57BL/6 mice, we observed a significant acceleration in the timing of parasitemia without an increase in mouse susceptibility. In contrast, in CD8-deficient mice, we observed an inverse relationship between the parasite inoculum and the timing of death. These results suggest that in normal mice CD8(+) T cells became protective earlier, following the accelerated development of parasitemia. The evaluation of specific cytotoxic responses in vivo to three distinct epitopes revealed that increasing the parasite inoculum hastened the expansion of specific CD8(+) cytotoxic T cells following infection. The differentiation and expansion of T. cruzi-specific CD8(+) cytotoxic T cells is in fact dependent on parasite multiplication, as radiation-attenuated parasites were unable to activate these cells. We also observed that, in contrast to most pathogens, the activation process of T. cruzi-specific CD8(+) cytotoxic T cells was dependent on MHC class II-restricted CD4(+) T cells. Our results are compatible with our initial hypothesis that the timing of increase in the pathogen load can be a critical parameter governing the kinetics of CD4(+) T cell-dependent expansion of pathogen-specific CD8(+) cytotoxic T cells.

  12. Process to evaluate hematological parameters that reflex to manual differential cell counts in a pediatric institution.

    PubMed

    Guarner, Jeannette; Atuan, Maria Ana; Nix, Barbara; Mishak, Christopher; Vejjajiva, Connie; Curtis, Cheri; Park, Sunita; Mullins, Richard

    2010-01-01

    Each institution sets specific parameters obtained by automated hematology analyzers to trigger manual counts. We designed a process to decrease the number of manual differential cell counts without impacting patient care. We selected new criteria that prompt manual counts and studied the impact these changes had over 2 days of work and in samples of patients with newly diagnosed leukemia, sickle cell disease, and the presence of a left shift. By using fewer parameters and expanding our ranges we decreased the number of manual counts by 20%. The parameters that prompted manual counts most frequently were the presence of blast flags and nucleated red blood cells, 2 parameters that were not changed. The parameters that accounted for a decrease in the number of manual counts were the white blood cell count and large unstained cells. Eight of 32 patients with newly diagnosed leukemia did not show blast flags; however, other parameters triggered manual counts. In 47 patients with sickle cell disease, nucleated red cells and red cell variability prompted manual review. Bands were observed in 18% of the specimens and 4% would not have been counted manually with the new criteria; for the latter the mean band count was 2.6%. The process we followed to evaluate hematological parameters that reflex to manual differential cell counts increased efficiency without compromising patient care in our hospital system.

  13. Optimum surface roughness prediction for titanium alloy by adopting response surface methodology

    NASA Astrophysics Data System (ADS)

    Yang, Aimin; Han, Yang; Pan, Yuhang; Xing, Hongwei; Li, Jinze

    Titanium alloy has been widely applied in industrial engineering products due to its advantages of great corrosion resistance and high specific strength. This paper investigated the processing parameters for finish turning of titanium alloy TC11. Firstly, a three-factor central composite design of experiments, considering the cutting speed, feed rate and depth of cut, is conducted on titanium alloy TC11 and the corresponding surface roughness values are obtained. Then a mathematical model is constructed by response surface methodology to fit the relationship between the process parameters and the surface roughness. The prediction accuracy was verified by one-way ANOVA. Finally, the contour lines of the surface roughness under different combinations of process parameters are obtained and used for optimum surface roughness prediction. Verification experiments demonstrated that the material removal rate (MRR) at the obtained optimum can be significantly improved without sacrificing the surface roughness.
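
    A minimal sketch of the response-surface step: fitting a full quadratic model to central-composite-design data by least squares and predicting roughness at a new parameter combination. The design points and Ra values are invented, and in practice the factors are usually coded to [-1, 1] before fitting:

      import numpy as np

      # Invented central composite design: v (m/min), f (mm/rev), ap (mm)
      X = np.array([[60,.08,.50],[90,.08,.50],[60,.12,.50],[90,.12,.50],
                    [60,.08,1.0],[90,.08,1.0],[60,.12,1.0],[90,.12,1.0],
                    [50,.10,.75],[100,.10,.75],[75,.06,.75],[75,.14,.75],
                    [75,.10,.33],[75,.10,1.17],[75,.10,.75]])
      Ra = np.array([.82,.71,1.15,.98,.88,.76,1.22,1.04,
                     .95,.78,.62,1.35,.85,1.02,.90])   # invented responses

      def design(X):                           # full quadratic RSM model
          v, f, ap = X.T
          return np.column_stack([np.ones(len(X)), v, f, ap,
                                  v*f, v*ap, f*ap, v**2, f**2, ap**2])

      beta, *_ = np.linalg.lstsq(design(X), Ra, rcond=None)
      ra_hat = design(np.array([[80, .09, .6]])) @ beta  # predict new point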

  14. 40 CFR 270.24 - Specific part B information requirements for process vents.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...

  15. 40 CFR 270.24 - Specific part B information requirements for process vents.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...

  16. 40 CFR 270.24 - Specific part B information requirements for process vents.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...

  17. 40 CFR 270.24 - Specific part B information requirements for process vents.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... emission reductions must be made using operating parameter values (e.g., temperatures, flow rates, or..., schematics, and piping and instrumentation diagrams based on the appropriate sections of “APTI Course 415...

  18. Numerical modeling of heat-transfer and the influence of process parameters on tailoring the grain morphology of IN718 in electron beam additive manufacturing

    DOE PAGES

    Raghavan, Narendran; Dehoff, Ryan; Pannala, Sreekanth; ...

    2016-04-26

    The fabrication of 3-D parts from CAD models by additive manufacturing (AM) is a disruptive technology that is transforming the metal manufacturing industry. The correlation between solidification microstructure and mechanical properties has been well understood in the casting and welding processes over the years. This paper focuses on extending these principles to additive manufacturing to understand the transient phenomena of repeated melting and solidification during the electron beam powder melting process and to achieve site-specific microstructure control within a fabricated component. In this paper, we have developed a novel melt scan strategy for electron beam melting of nickel-base superalloy (Inconel 718) and also analyzed 3-D heat transfer conditions using a parallel numerical solidification code (Truchas) developed at Los Alamos National Laboratory. The spatial and temporal variations of the temperature gradient (G) and growth velocity (R) at the liquid-solid interface of the melt pool were calculated as a function of electron beam parameters. By manipulating the relative number of voxels that lie in the columnar or equiaxed region, the crystallographic texture of the components can be controlled to an extent. The analysis of the parameters provided optimum processing conditions that will result in a columnar-to-equiaxed transition (CET) during solidification. Furthermore, the results from the numerical simulations were validated by experimental processing and characterization, thereby proving the potential of the additive manufacturing process to achieve site-specific crystallographic texture control within a fabricated component.
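
    A minimal sketch of how such G-R fields might be classified once the heat-transfer solution is in hand, comparing each voxel against a columnar/equiaxed boundary of the Hunt type, G = K*R^n; the voxel values, boundary constant and exponent here are all hypothetical, not the paper's:

      import numpy as np

      # Illustrative voxel values of temperature gradient G (K/m) and
      # solidification growth rate R (m/s) from a heat-transfer solution
      G = np.array([1.0e6, 5.0e5, 2.0e5, 1.0e5])
      R = np.array([0.05, 0.10, 0.30, 0.50])

      K, n = 2.0e6, 0.3                        # hypothetical alloy constants
      equiaxed = G < K * R**n                  # Hunt-style boundary G = K*R^n
      frac_equiaxed = equiaxed.mean()          # lever for texture control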

  19. Development of a Premium Quality Plasma-derived IVIg (IQYMUNE®) Utilizing the Principles of Quality by Design-A Worked-through Case Study.

    PubMed

    Paolantonacci, Philippe; Appourchaux, Philippe; Claudel, Béatrice; Ollivier, Monique; Dennett, Richard; Siret, Laurent

    2018-01-01

    Polyvalent human normal immunoglobulins for intravenous use (IVIg), indicated for rare and often severe diseases, are complex plasma-derived protein preparations. A quality by design approach has been used to develop the Laboratoire Français du Fractionnement et des Biotechnologies new-generation IVIg, targeting a high level of purity to generate an enhanced safety profile while maintaining a high level of efficacy. A modular approach of quality by design was implemented consisting of five consecutive steps to cover all the stages from the product design to the final product control strategy. A well-defined target product profile was translated into 27 product quality attributes that formed the basis of the process design. In parallel, a product risk analysis was conducted and identified 19 critical quality attributes among the product quality attributes. Process risk analysis was carried out to establish the links between process parameters and critical quality attributes. Twelve critical steps were identified, and for each of these steps a risk mitigation plan was established. Among the different process risk mitigation exercises, five process robustness studies were conducted at qualified small scale with a design of experiment approach. For each process step, critical process parameters were identified and, for each critical process parameter, proven acceptable ranges were established. The quality risk management and risk mitigation outputs, including verification of proven acceptable ranges, were used to design the process verification exercise at industrial scale. Finally, the control strategy was established using a mix, or hybrid, of the traditional approach plus elements of the quality by design enhanced approach, as illustrated, to more robustly assign material and process controls and in order to securely meet product specifications. The advantages of this quality by design approach were improved process knowledge for industrial design and process validation and a clear justification of the process and product specifications as a basis for control strategy and future comparability exercises. © PDA, Inc. 2018.

  20. Simulation of the microwave heating of a thin multilayered composite material: A parameter analysis

    NASA Astrophysics Data System (ADS)

    Tertrais, Hermine; Barasinski, Anaïs; Chinesta, Francisco

    2018-05-01

    Microwave (MW) technology relies on volumetric heating. Thermal energy is transferred to the material, which can absorb it at specific frequencies. The complex physics involved in this process is far from being understood, which is why a simulation tool has been developed to solve the electromagnetic and thermal equations in as complex a material as a multilayered composite part. The code is based on the in-plane-out-of-plane separated representation within the Proper Generalized Decomposition framework. To improve knowledge of the process, a parameter study is carried out in this paper.

  1. Steps Towards Industrialization of Cu–III–VI2 Thin-Film Solar Cells: Linking Materials/Device Designs to Process Design For Non-stoichiometric Photovoltaic Materials

    PubMed Central

    Chang, Hsueh‐Hsin; Sharma, Poonam; Letha, Arya Jagadhamma; Shao, Lexi; Zhang, Yafei; Tseng, Bae‐Heng

    2016-01-01

    The concept of in-line sputtering and selenization has become the industrial standard for Cu–III–VI2 solar cell fabrication, but it is still very difficult to control and predict the optical and electrical parameters, which are closely related to the chemical composition distribution of the thin film. The present review article addresses material design, device design and process design using parameters closely related to the chemical compositions. Variations in composition lead to changes in the Poisson equation, current equation, and continuity equation governing the device design. To make the device design more realistic and meaningful, we need to build a model that relates the opto-electrical properties to the chemical composition. The material parameters as well as device structural parameters are loaded into the process simulation to give a complete set of process control parameters. The neutral defect concentrations of non-stoichiometric CuMSe2 (M = In and Ga) have been calculated under specific atomic chemical potential conditions using this methodology. The optical and electrical properties have also been investigated for the development of a full-function analytical solar cell simulator. The future prospects regarding the development of copper–indium–gallium–selenide thin-film solar cells have also been discussed. PMID:27840790

  2. Steps Towards Industrialization of Cu-III-VI2 Thin-Film Solar Cells: Linking Materials/Device Designs to Process Design For Non-stoichiometric Photovoltaic Materials.

    PubMed

    Hwang, Huey-Liang; Chang, Hsueh-Hsin; Sharma, Poonam; Letha, Arya Jagadhamma; Shao, Lexi; Zhang, Yafei; Tseng, Bae-Heng

    2016-10-01

    The concept of in-line sputtering and selenization has become the industrial standard for Cu-III-VI2 solar cell fabrication, but it is still very difficult to control and predict the optical and electrical parameters, which are closely related to the chemical composition distribution of the thin film. The present review article addresses material design, device design and process design using parameters closely related to the chemical compositions. Variations in composition lead to changes in the Poisson equation, current equation, and continuity equation governing the device design. To make the device design more realistic and meaningful, we need to build a model that relates the opto-electrical properties to the chemical composition. The material parameters as well as device structural parameters are loaded into the process simulation to give a complete set of process control parameters. The neutral defect concentrations of non-stoichiometric CuMSe2 (M = In and Ga) have been calculated under specific atomic chemical potential conditions using this methodology. The optical and electrical properties have also been investigated for the development of a full-function analytical solar cell simulator. The future prospects regarding the development of copper-indium-gallium-selenide thin-film solar cells have also been discussed.

  3. Developing a CD-CBM Anticipatory Approach for Cavitation - Defining a Model Descriptor Consistent Between Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allgood, G.O.; Dress, W.B.; Kercel, S.W.

    1999-05-10

    A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomena, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector (Φ), or model vector. This model-based descriptor encodes the specific information that describes the phenomenon and its dynamics and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomenon. The second is the parameter list associated with the functional model. The third is a figure of merit, a single number between [0,1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not content. The extended process feature vector is formulated as follows: Φ => [<model>, {parameter list}, confidence factor]. (1) For this study, the model that characterized cavitation was a chirped, exponentially decaying sinusoid. Using the parameters defined by this model, the parameter list included frequency, decay, and chirp rate. Based on this, the process feature vector has the form: Φ => [<chirped decaying sinusoid>, {ω = a, δ = b, γ = c}, cf = 0.80]. (2) In this experiment a reversible catastrophe was examined. The reason for this is that the same catastrophe could be repeated to ensure the statistical significance of the data.
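
    A minimal sketch of fitting the chirped, exponentially decaying sinusoid descriptor to a measured record with SciPy; the record is synthesized, the crude confidence proxy at the end merely stands in for the paper's [0,1] figure of merit, and chirp fits of this kind need initial guesses reasonably close to the truth:

      import numpy as np
      from scipy.optimize import curve_fit

      def chirp_model(t, A, delta, f0, c, phi):
          # Chirped, exponentially decaying sinusoid: amplitude A, decay
          # delta, start frequency f0, chirp rate c, phase phi
          return A*np.exp(-delta*t)*np.sin(2*np.pi*(f0 + c*t)*t + phi)

      t = np.linspace(0.0, 0.01, 2000)         # a 10 ms record (synthetic)
      y = chirp_model(t, 1.0, 400.0, 5.0e3, 2.0e5, 0.3)
      y += np.random.default_rng(5).normal(0, 0.05, t.size)

      p0 = [0.8, 300.0, 4.8e3, 1.5e5, 0.0]     # guesses near the truth
      popt, pcov = curve_fit(chirp_model, t, y, p0=p0)
      cf = 1.0/(1.0 + np.sqrt(np.diag(pcov)).mean())  # crude stand-in for
      # the confidence factor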

  4. Two-degree-of-freedom fractional order-PID controllers design for fractional order processes with dead-time.

    PubMed

    Li, Mingjie; Zhou, Ping; Zhao, Zhicheng; Zhang, Jinggang

    2016-03-01

    Recently, fractional order (FO) processes with dead-time have attracted more and more attention from researchers in the control field, but the FO-PID controller design techniques available for FO processes with dead-time suffer from a lack of direct systematic approaches. In this paper, a simple design and parameter tuning approach for a two-degree-of-freedom (2-DOF) FO-PID controller based on internal model control (IMC) is proposed for FO processes with dead-time; conventional one-degree-of-freedom control exhibits the shortcoming of coupling robustness and dynamic response performance. 2-DOF control can overcome this weakness, meaning that it decouples robustness and dynamic performance from each other. The adjustable parameter η2 of the FO-PID controller is directly related to the robustness of the closed-loop system, and an analytical expression is given relating the maximum sensitivity specification Ms and the parameter η2. In addition, according to the dynamic performance requirements of the practical system, the parameter η1 can also be selected easily. By approximating the dead-time term of the process model with a first-order Padé or Taylor series, expressions for the 2-DOF FO-PID controller parameters are derived for three classes of FO processes with dead-time. Moreover, compared with other methods, the proposed method is simple and easy to implement. Finally, simulation results are given to illustrate the effectiveness of this method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
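
    For reference, the two dead-time approximations named above take these standard first-order forms (standard control-theory facts, not this paper's specific derivation): the Taylor series replaces e^(-Ls) with 1 - Ls, while the first-order Padé approximation replaces it with the rational term

      e^(-Ls) ≈ (1 - Ls/2) / (1 + Ls/2),

    which, once substituted into the IMC design, leaves a rational loop transfer function from which PID-type controller parameters can be read off.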

  5. Parametric study of two planar high power flexible solar array concepts

    NASA Technical Reports Server (NTRS)

    Garba, J. A.; Kudija, D. A.; Zeldin, B.; Costogue, E. N.

    1978-01-01

    The design parameters examined were: frequency, aspect ratio, packaging constraints, and array blanket flatness. Specific power-to-mass ratios for both solar arrays as a function of array frequency and array width were developed and plotted. Summaries of the baseline design data, developed equations, the computer program operation, plots of the parameters, and the process for using the information as a design manual are presented.

  6. United States Air Force Summer Research Program -- 1993 Summer Research Program Final Reports. Volume 11. Arnold Engineering Development Center, Frank J. Seiler Research Laboratory, Wilford Hall Medical Center

    DTIC Science & Technology

    1993-01-01

    external parameters such as airflow, temperature, pressure, etc, are measured. Turbine Engine testing generates massive volumes of data at very high...a form that describes the signal flow graph topology as well as specific parameters of the processing blocks in the diagram. On multiprocessor...provides an interface to the symbolic builder and control functions such that parameters may be set during the build operation that will affect the

  7. Castor Oil: Properties, Uses, and Optimization of Processing Parameters in Commercial Production

    PubMed Central

    Patel, Vinay R.; Dumancas, Gerard G.; Kasi Viswanath, Lakshmi C.; Maples, Randall; Subong, Bryan John J.

    2016-01-01

    Castor oil, produced from castor beans, has long been considered to be of important commercial value primarily for the manufacturing of soaps, lubricants, and coatings, among others. Global castor oil production is concentrated primarily in a small geographic region of Gujarat in Western India. This region is favorable due to its labor-intensive cultivation method and subtropical climate conditions. Entrepreneurs and castor processors in the United States and South America also cultivate castor beans but are faced with the challenge of achieving high castor oil production efficiency, as well as obtaining the desired oil quality. In this manuscript, we provide a detailed analysis of novel processing methods involved in castor oil production. We discuss novel processing methods by explaining specific processing parameters involved in castor oil production. PMID:27656091

  8. Castor Oil: Properties, Uses, and Optimization of Processing Parameters in Commercial Production.

    PubMed

    Patel, Vinay R; Dumancas, Gerard G; Kasi Viswanath, Lakshmi C; Maples, Randall; Subong, Bryan John J

    2016-01-01

    Castor oil, produced from castor beans, has long been considered to be of important commercial value primarily for the manufacturing of soaps, lubricants, and coatings, among others. Global castor oil production is concentrated primarily in a small geographic region of Gujarat in Western India. This region is favorable due to its labor-intensive cultivation method and subtropical climate conditions. Entrepreneurs and castor processors in the United States and South America also cultivate castor beans but are faced with the challenge of achieving high castor oil production efficiency, as well as obtaining the desired oil quality. In this manuscript, we provide a detailed analysis of novel processing methods involved in castor oil production. We discuss novel processing methods by explaining specific processing parameters involved in castor oil production.

  9. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors decide their treatment from the aspect of mechanical analysis, this work built a computer-assisted optimization system for treatment of femoral neck fracture oriented to clinical application. The whole system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of bone, parametric modeling of the fracture face, parametric modeling of the fixation screw and fixation position, and input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing operation. The post-processing module included extraction and display of batch processing results, image generation from batch processing, optimization program operation, and optimal result display. The system implemented the whole workflow from input of fracture parameters to output of the optimal fixation plan according to a specific patient's real fracture parameters and the optimization rules, which demonstrated the effectiveness of the system. Meanwhile, the system had a friendly interface and simple operation, and its functionality can be extended quickly by modifying individual modules.

  10. Methodology for the systems engineering process. Volume 3: Operational availability

    NASA Technical Reports Server (NTRS)

    Nelson, J. H.

    1972-01-01

    A detailed description and explanation of the operational availability parameter is presented. The fundamental mathematical basis for operational availability is developed, and its relationship to a system's overall performance effectiveness is illustrated within the context of identifying specific availability requirements. Thus, in attempting to provide a general methodology for treating both hypothetical and existing availability requirements, the concept of an availability state, in conjunction with the more conventional probability-time capability, is investigated. In this respect, emphasis is focused upon a balanced analytical and pragmatic treatment of operational availability within the system design process. For example, several applications of operational availability to typical aerospace systems are presented, encompassing the techniques of Monte Carlo simulation, system performance availability trade-off studies, analytical modeling of specific scenarios, as well as the determination of launch-on-time probabilities. Finally, an extensive bibliography is provided to indicate further levels of depth and detail of the operational availability parameter.

  11. Robust functional regression model for marginal mean and subject-specific inferences.

    PubMed

    Cao, Chunzheng; Shi, Jian Qing; Lee, Youngjo

    2017-01-01

    We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student t-process. We propose efficient algorithms in estimating parameters for the marginal mean inferences and in predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals (PIs) for conditional mean curves. Numerical studies show that the proposed model provides a robust approach against data contamination or distribution misspecification, and the proposed PIs maintain the nominal confidence levels. A real data application is presented as an illustrative example.

  12. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  13. A novel membrane-based process to isolate peroxidase from horseradish roots: optimization of operating parameters.

    PubMed

    Liu, Jianguo; Yang, Bo; Chen, Changzhen

    2013-02-01

    The optimization of operating parameters for the isolation of peroxidase from horseradish (Armoracia rusticana) roots with ultrafiltration (UF) technology was systemically studied. The effects of UF operating conditions on the transmission of proteins were quantified using the parameter scanning UF. These conditions included solution pH, ionic strength, stirring speed and permeate flux. Under optimized conditions, the purity of horseradish peroxidase (HRP) obtained was greater than 84 % after a two-stage UF process and the recovery of HRP from the feedstock was close to 90 %. The resulting peroxidase product was then analysed by isoelectric focusing, SDS-PAGE and circular dichroism, to confirm its isoelectric point, molecular weight and molecular secondary structure. The effects of calcium ion on HRP specific activities were also experimentally determined.

  14. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivities for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing, and the fixed-abrasive slicing technique (FAST) are presented. Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life), and utilities, and production parameters such as slicing rate, slices per centimeter, and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
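
    The IPEG coefficients themselves are not reproduced in this record, so the sketch below only illustrates the one-at-a-time sensitivity idea with a placeholder price function: each cost or production parameter is perturbed by 10% and the relative change in the add-on price is recorded.

```python
def addon_price(p):
    """Placeholder stand-in for the IPEG price equation; the actual
    IPEG coefficients are not given in this record."""
    fixed = p["equipment"] + p["space"] + p["labor"] + p["materials"] + p["utilities"]
    throughput = p["slicing_rate"] * p["slices_per_cm"] * p["yield"]
    return fixed / throughput

base = {"equipment": 100.0, "space": 20.0, "labor": 60.0, "materials": 40.0,
        "utilities": 10.0, "slicing_rate": 1.0, "slices_per_cm": 20.0, "yield": 0.9}

p0 = addon_price(base)
for name in base:                     # one-at-a-time +10% perturbation
    p = dict(base, **{name: base[name] * 1.1})
    print(f"{name:15s} price sensitivity: {(addon_price(p) - p0) / p0:+.2%}")
```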

  15. Multi-step high-throughput conjugation platform for the development of antibody-drug conjugates.

    PubMed

    Andris, Sebastian; Wendeler, Michaela; Wang, Xiangyang; Hubbuch, Jürgen

    2018-07-20

    Antibody-drug conjugates (ADCs) form a rapidly growing class of biopharmaceuticals which attracts a lot of attention throughout the industry due to its high potential for cancer therapy. They combine the specificity of a monoclonal antibody (mAb) and the cell-killing capacity of highly cytotoxic small molecule drugs. Site-specific conjugation approaches involve a multi-step process for covalent linkage of antibody and drug via a linker. Despite the range of parameters that have to be investigated, high-throughput methods are scarcely used so far in ADC development. In this work an automated high-throughput platform for a site-specific multi-step conjugation process on a liquid-handling station is presented by use of a model conjugation system. A high-throughput solid-phase buffer exchange was successfully incorporated for reagent removal by utilization of a batch cation exchange step. To ensure accurate screening of conjugation parameters, an intermediate UV/Vis-based concentration determination was established including feedback to the process. For conjugate characterization, a high-throughput compatible reversed-phase chromatography method with a runtime of 7 min and no sample preparation was developed. Two case studies illustrate the efficient use for mapping the operating space of a conjugation process. Due to the degree of automation and parallelization, the platform is capable of significantly reducing process development efforts and material demands and of shortening development timelines for antibody-drug conjugates. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Efficiency of the Inertia Friction Welding Process and Its Dependence on Process Parameters

    NASA Astrophysics Data System (ADS)

    Senkov, O. N.; Mahaffey, D. W.; Tung, D. J.; Zhang, W.; Semiatin, S. L.

    2017-07-01

    It has been widely assumed, but never proven, that the efficiency of the inertia friction welding (IFW) process is independent of process parameters and is relatively high, i.e., 70 to 95 pct. In the present work, the effect of IFW parameters on process efficiency was established. For this purpose, a series of IFW trials was conducted for the solid-state joining of two dissimilar nickel-base superalloys (LSHR and Mar-M247) using various combinations of initial kinetic energy (i.e., the total weld energy, E_o), initial flywheel angular velocity (ω_o), flywheel moment of inertia (I), and axial compression force (P). The kinetics of the conversion of the welding energy to heating of the faying sample surfaces (i.e., the sample energy) vs parasitic losses to the welding machine itself were determined by measuring the friction torque on the sample surfaces (M_S) and in the machine bearings (M_M). It was found that the rotating parts of the welding machine can consume a significant fraction of the total energy. Specifically, the parasitic losses ranged from 28 to 80 pct of the total weld energy. The losses increased (and the corresponding IFW process efficiency decreased) as P increased (at constant I and E_o), I decreased (at constant P and E_o), and E_o (or ω_o) increased (at constant P and I). The results of this work thus provide guidelines for selecting process parameters which minimize energy losses and increase process efficiency during IFW.
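
    From the quantities named here, the process efficiency can be estimated as the energy dissipated at the faying surfaces (the time integral of M_S times angular velocity) divided by the initial flywheel kinetic energy E_o = 0.5·I·ω_o². A sketch with synthetic traces; the variable names and numbers are illustrative only:

```python
import numpy as np

def ifw_efficiency(t, omega, M_S, I):
    """Fraction of the total weld energy E_o = 0.5*I*omega0**2 delivered
    to the sample surfaces, estimated from measured traces:
    t     -- time [s], omega -- flywheel angular velocity [rad/s],
    M_S   -- friction torque on the sample surfaces [N*m],
    I     -- flywheel moment of inertia [kg*m^2]."""
    E0 = 0.5 * I * omega[0] ** 2
    E_sample = np.trapz(M_S * omega, t)   # energy dissipated at the weld
    return E_sample / E0

# Synthetic spin-down traces, for illustration only
t = np.linspace(0.0, 4.0, 400)
omega = 300.0 * np.exp(-t)                # rad/s
M_S = 50.0 * np.ones_like(t)              # N*m
print(f"process efficiency: {ifw_efficiency(t, omega, M_S, I=5.0):.2f}")
```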

  17. Using Noise and Fluctuations for In Situ Measurements of Nitrogen Diffusion Depth.

    PubMed

    Samoila, Cornel; Ursutiu, Doru; Schleer, Walter-Harald; Jinga, Vlad; Nascov, Victor

    2016-10-05

    In manufacturing processes involving diffusion (of C, N, S, etc.), the evolution of the layer depth is of the utmost importance: the success of the entire process depends on this parameter. Currently, nitriding is typically either calibrated using a "post process" method or controlled via indirect measurements (H2, O2, H2O + CO2). In the absence of "in situ" monitoring, any variation in the process parameters (gas concentration, temperature, steel composition, distance between sensors and furnace chamber) can cause expensive process inefficiency or failure. Indirect measurements can prevent process failure, but uncertainties and complications may arise in the relationship between the measured parameters and the actual diffusion process. In this paper, a method based on noise and fluctuation measurements is proposed that offers direct control of the layer depth evolution because the parameters of interest are measured in direct contact with the nitrided steel (represented by the active electrode). The paper addresses two related sets of experiments. The first set of experiments consisted of laboratory tests on nitrided samples using Barkhausen noise and yielded a linear relationship between the frequency exponent in the Hooge equation and the nitriding time. For the second set, a specific sensor based on conductivity noise (at the nitriding temperature) was built for shop-floor experiments. Although two different types of noise were measured in these two sets of experiments, the use of the frequency exponent to monitor the process evolution remained valid.
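
    The frequency exponent in question is the slope α of a 1/f^α noise spectrum. A minimal sketch of how such an exponent is commonly estimated, assuming a digitized noise signal (synthetic here), is a log-log fit to the Welch power spectral density:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 10_000.0                                    # sampling rate [Hz]
x = np.cumsum(rng.standard_normal(2**16))        # brown-ish noise, alpha ~ 2

f, psd = welch(x, fs=fs, nperseg=4096)
mask = (f > 1.0) & (f < 1000.0)                  # fit band away from DC/Nyquist

# S(f) ~ C / f**alpha  =>  log S = log C - alpha * log f
slope, logC = np.polyfit(np.log(f[mask]), np.log(psd[mask]), 1)
print(f"frequency exponent alpha = {-slope:.2f}")
```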

  18. Impact of process parameters on the breakage kinetics of poorly water-soluble drugs during wet stirred media milling: a microhydrodynamic view.

    PubMed

    Afolabi, Afolawemi; Akinlabi, Olakemi; Bilgili, Ecevit

    2014-01-23

    Wet stirred media milling has proven to be a robust process for producing nanoparticle suspensions of poorly water-soluble drugs. As the process is expensive and energy-intensive, it is important to study the breakage kinetics, which determines the cycle time and production rate for a desired fineness. Although the impact of process parameters on the properties of final product suspensions has been investigated, scant information is available regarding their impact on the breakage kinetics. Here, we elucidate the impact of stirrer speed, bead concentration, and drug loading on the breakage kinetics via a microhydrodynamic model for the bead-bead collisions. Suspensions of griseofulvin, a model poorly water-soluble drug, were prepared in the presence of two stabilizers: hydroxypropyl cellulose and sodium dodecyl sulfate. Laser diffraction, scanning electron microscopy, and rheometry were used to characterize them. Various microhydrodynamic parameters, including a newly defined milling intensity factor, were calculated. An increase in either the stirrer speed or the bead concentration led to an increase in the specific energy and the milling intensity factor, consequently faster breakage. On the other hand, an increase in the drug loading led to a decrease in these parameters and consequently slower breakage. While all microhydrodynamic parameters provided significant physical insight, only the milling intensity factor was capable of explaining the influence of all parameters directly through its strong correlation with the process time constant. Besides guiding process optimization, the analysis rationalizes the preparation of a single high drug-loaded batch (20% or higher) instead of multiple dilute batches. Copyright © 2013 Elsevier B.V. All rights reserved.
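
    The process time constant mentioned above suggests first-order breakage kinetics toward a limiting fineness. A hedged sketch (hypothetical data and an assumed functional form, not the authors' model) fits such a time constant to median-size measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def breakage(t, d_lim, d0, tau):
    """Assumed first-order approach of the median size to a limiting
    value d_lim with process time constant tau."""
    return d_lim + (d0 - d_lim) * np.exp(-t / tau)

t = np.array([0, 15, 30, 60, 120, 240], dtype=float)   # milling time [min]
d50 = np.array([8.0, 3.1, 1.4, 0.55, 0.28, 0.20])      # median size [um], hypothetical

p, _ = curve_fit(breakage, t, d50, p0=(0.15, 8.0, 30.0))
print(f"limiting size {p[0]:.2f} um, time constant {p[2]:.1f} min")
```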

  19. Post-growth process for flexible CdS/CdTe thin film solar cells with high specific power.

    PubMed

    Cho, Eunwoo; Kang, Yoonmook; Kim, Donghwan; Kim, Jihyun

    2016-05-16

    We demonstrated a flexible CdS/CdTe thin film solar cell with a high specific power of approximately 254 W/kg. A flexible and ultra-lightweight CdS/CdTe cell treated with a pre-NP etch process exhibited a high conversion efficiency of 13.56% in the superstrate configuration. Morphological, structural and optical changes of CdS/CdTe thin films were characterized when the pre-NP etch step was incorporated into the conventional post-deposition process. The improvement of photovoltaic parameters can be attributed to the removal of the oxide and the formation of a Te-rich layer, which benefit the activation process. The pre-NP etched cell maintained its flexibility and performance under repeated tensile strain of 0.13%. Our method can pave the way for manufacturing flexible CdS/CdTe thin film solar cells with high specific power for mobile and aerospace applications.

  20. Standardization of domestic frying processes by an engineering approach.

    PubMed

    Franke, K; Strijowski, U

    2011-05-01

    An approach was developed to enable better standardization of domestic frying of potato products. For this purpose, 5 domestic fryers differing in heating power and oil capacity were used. A well-defined frying process using a highly standardized model product and a broad range of frying conditions was carried out in these fryers, and the development of browning, an important quality parameter, was measured. Product-to-oil ratio, oil temperature, and frying time were varied. Quite different color changes were measured in the different fryers although the same frying process parameters were applied. The specific energy consumption for water evaporation (spECWE) during frying, related to product amount, was determined for all frying processes to define an engineering parameter characterizing the frying process. A quasi-linear regression approach was applied to calculate this parameter from the frying process settings and fryer properties. The high significance of the regression coefficients and a coefficient of determination close to unity confirmed the suitability of this approach. Based on this regression equation, curves for standard frying conditions (SFC curves) were calculated which describe the frying conditions required to obtain the same level of spECWE in the different domestic fryers. Comparison of browning results from the different fryers operated at conditions near the SFC curves confirmed the applicability of the approach. © 2011 Institute of Food Technologists®
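
    A minimal sketch of the spECWE idea under stated assumptions: divide the electrical energy drawn during frying by the mass of water evaporated, then regress the result on the frying settings. The records below are hypothetical:

```python
import numpy as np

# Hypothetical records: [oil temperature degC, frying time s, product/oil ratio]
X = np.array([[160, 180, 0.05],
              [175, 240, 0.10],
              [190, 300, 0.15],
              [175, 300, 0.05],
              [190, 180, 0.10]], dtype=float)
E_el = np.array([210.0, 340.0, 520.0, 380.0, 330.0])     # kJ drawn during frying
m_evap = np.array([0.020, 0.035, 0.055, 0.042, 0.033])   # kg water evaporated

spECWE = E_el / m_evap        # kJ per kg of water evaporated

# Quasi-linear regression of spECWE on the frying settings (with intercept)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, spECWE, rcond=None)
print("intercept and coefficients:", coef)
```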

  1. Upscaling from research watersheds: an essential stage of trustworthy general-purpose hydrologic model building

    NASA Astrophysics Data System (ADS)

    McNamara, J. P.; Semenova, O.; Restrepo, P. J.

    2011-12-01

    Highly instrumented research watersheds provide excellent opportunities for investigating hydrologic processes. A danger, however, is that the processes observed at a particular research watershed are too specific to that watershed and not representative even of the larger watershed that contains it. Models developed from those partial observations may therefore not be suitable for general hydrologic use. Demonstrating the upscaling of hydrologic processes from research watersheds to larger watersheds is thus essential to validate concepts and test model structure. The Hydrograph model has been developed as a general-purpose, process-based, distributed hydrologic system. In its applications and further development we evaluate the scaling of model concepts and parameters in a wide range of hydrologic landscapes. All models, either lumped or distributed, are based on a discretization concept. It is common practice to discretize watersheds into so-called hydrologic units or hydrologic landscapes possessing assumed homogeneous hydrologic functioning. If a model structure is fixed, differences in hydrologic functioning (differences in hydrologic landscapes) should be reflected by specific sets of model parameters. Research watersheds provide the possibility of combining processes, in reasonable detail, into typical hydrologic concepts such as the hydrologic units, hydrologic forms, and runoff formation complexes of the Hydrograph model. By upscaling we therefore imply not the upscaling of a single process but the upscaling of such unified hydrologic functioning. The simulation of runoff processes for the Dry Creek research watershed, Idaho, USA (27 km2) was undertaken using the Hydrograph model. The information on the watershed was provided by Boise State University and included a GIS database of watershed characteristics and a detailed hydrometeorological observational dataset. The model provided good simulation results in terms of runoff and variable states of soil and snow over the simulation period 2000-2009. The parameters of the model were hand-adjusted based on rational sense, observational data, and available understanding of the underlying processes. For the first run, some processes, such as the impact of riparian vegetation on runoff and streamflow/groundwater interaction, were handled in a conceptual way. It was shown that the use of the Hydrograph model, which requires a modest amount of parameter calibration, may also serve as a quality control for observations. Based on the parameter values obtained and the process understanding gained at the research watershed, the model was applied to larger watersheds located in a similar environment: the Boise River at South Fork (1660 km2) and Twin Springs (2155 km2). The evaluation of the results of this upscaling will be presented.

  2. Simulating Exposure Concentrations of Engineered Nanomaterials in Surface Water Systems: Release of WASP8

    NASA Astrophysics Data System (ADS)

    Knightes, C. D.; Bouchard, D.; Zepp, R. G.; Henderson, W. M.; Han, Y.; Hsieh, H. S.; Avant, B. K.; Acrey, B.; Spear, J.

    2017-12-01

    The unique properties of engineered nanomaterials have led to their increased production and potential release into the environment. Currently available environmental fate models developed for traditional contaminants are limited in their ability to simulate nanomaterials' environmental behavior. This is due to an incomplete understanding and representation of the processes governing nanomaterial distribution in the environment and to scarce empirical data quantifying the interaction of nanomaterials with environmental surfaces. The well-known Water Quality Analysis Simulation Program (WASP) was updated to incorporate nanomaterial-specific processes, specifically hetero-aggregation with particulate matter. In parallel with this effort, laboratory studies were used to quantify the parameter values necessary for the governing processes in surface waters. This presentation will discuss the recent developments in the new architecture for WASP8 and the newly constructed Advanced Toxicant Module. The module includes advanced algorithms for an increased number of state variables: chemicals, solids, dissolved organic matter, pathogens, temperature, and salinity. This presentation will focus specifically on the incorporation of nanomaterials, with applications to the fate and transport of hypothetical releases of Multi-Walled Carbon Nanotubes (MWCNT) and Graphene Oxide (GO) into the headwaters of a southeastern US coastal plains river. While this presentation focuses on nanomaterials, the Advanced Toxicant Module can also simulate metals and organic contaminants.

  3. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
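
    For orientation, the sketch below shows the unconstrained one-dimensional Wang-Landau iteration that the proposed scheme generalizes, applied to a small periodic Ising chain; it is a toy version, not the constrained multi-order-parameter method itself:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12                                     # spins in a periodic Ising chain

s = rng.choice([-1, 1], size=N)
E = -int(np.sum(s * np.roll(s, 1)))        # E = -sum_i s_i * s_{i+1}
lng, hist = {}, {}                         # running ln g(E) and visit counts
f = 1.0                                    # ln of the modification factor

while f > 1e-6:
    for _ in range(10_000):
        i = rng.integers(N)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        # Wang-Landau acceptance: min(1, g(E)/g(E_new))
        if np.log(rng.random()) < lng.get(E, 0.0) - lng.get(E + dE, 0.0):
            s[i] = -s[i]
            E += dE
        lng[E] = lng.get(E, 0.0) + f       # refine the density of states
        hist[E] = hist.get(E, 0) + 1
    counts = np.array(list(hist.values()))
    if counts.min() > 0.8 * counts.mean(): # histogram "flat": halve f
        f /= 2.0
        hist = {}

rel = {k: round(v - min(lng.values()), 2) for k, v in sorted(lng.items())}
print(rel)   # relative ln g(E); the ferromagnetic ground states sit at E = -N
```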

  4. Individual differences in emotion word processing: A diffusion model analysis.

    PubMed

    Mueller, Christina J; Kuchinke, Lars

    2016-06-01

    The exploratory study investigated individual differences in implicit processing of emotional words in a lexical decision task. A processing advantage for positive words was observed, and differences between happy and fear-related words in response times were predicted by individual differences in specific variables of emotion processing: Whereas more pronounced goal-directed behavior was related to a specific slowdown in processing of fear-related words, the rate of spontaneous eye blinks (indexing brain dopamine levels) was associated with a processing advantage of happy words. Estimating diffusion model parameters revealed that the drift rate (rate of information accumulation) captures unique variance of processing differences between happy and fear-related words, with highest drift rates observed for happy words. Overall emotion recognition ability predicted individual differences in drift rates between happy and fear-related words. The findings emphasize that a significant amount of variance in emotion processing is explained by individual differences in behavioral data.

  5. Detecting Anomalies in Process Control Networks

    NASA Astrophysics Data System (ADS)

    Rrushi, Julian; Kang, Kyoung-Don

    This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
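
    A minimal sketch of the inspection step, with an empirical probability mass function standing in for the paper's logistic-regression estimate: values written by normal traffic define the PMF, and a packet writing an improbable value is flagged.

```python
from collections import Counter

def build_pmf(training_values):
    """Estimate a probability mass function for a control-system variable
    from values observed during normal operation."""
    counts = Counter(training_values)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def inspect(pmf, written_value, threshold=0.01):
    """Flag the packet as anomalous if the value it writes is improbable."""
    return "anomalous" if pmf.get(written_value, 0.0) < threshold else "normal"

# Setpoint values seen in normal traffic (illustrative)
normal_traffic = [50] * 400 + [55] * 300 + [60] * 295 + [75] * 5
pmf = build_pmf(normal_traffic)
print(inspect(pmf, 55))      # normal
print(inspect(pmf, 9000))    # anomalous
```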

  6. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
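
    The Latin hypercube step described above can be sketched with scipy's quasi-Monte Carlo module; the parameter names and bounds below are illustrative placeholders, not the study's actual ranges:

```python
from scipy.stats import qmc

# Illustrative bounds: [allocation fraction, max photosynthesis, drainage coef]
lower = [0.1, 5.0, 0.001]
upper = [0.9, 50.0, 0.1]

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=100)              # 100 points in [0, 1)^3
param_sets = qmc.scale(unit_samples, lower, upper)
print(param_sets[:3])                             # first three parameter sets
```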

  7. Systems Analysis of the Hydrogen Transition with HyTrans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leiby, Paul Newsome; Greene, David L; Bowman, David Charles

    2007-01-01

    The U.S. Federal government is carefully considering the merits and long-term prospects of hydrogen-fueled vehicles. NAS (1) has called for the careful application of systems analysis tools to structure the complex assessment required. Others, raising cautionary notes, question whether a consistent and plausible transition to hydrogen light-duty vehicles can be identified (2) and whether that transition would, on balance, be environmentally preferred. Modeling the market transition to hydrogen-powered vehicles is an inherently complex process, encompassing hydrogen production, delivery and retailing, vehicle manufacturing, and vehicle choice and use. We describe the integration of key technological and market factors in a dynamic transition model, HyTrans. The usefulness of HyTrans and its predictions depends on three key factors: (1) the validity of the economic theories that underpin the model, (2) the authenticity with which the key processes are represented, and (3) the accuracy of specific parameter values used in the process representations. This paper summarizes the theoretical basis of HyTrans, and highlights the implications of key parameter specifications with sensitivity analysis.

  8. Two-stage anaerobic digestion of sugar beet silage: The effect of the pH-value on process parameters and process efficiency.

    PubMed

    Kumanowska, Elzbieta; Uruñuela Saldaña, Mariana; Zielonka, Simon; Oechsner, Hans

    2017-12-01

    The study investigated the influence of the target pH-values 4.5, 5, 5.5 and 6 in the acidification reactor on process parameters, such as the substrate-specific methane yield and the intermediates, in the two-stage anaerobic digestion of sugar beet silage. The total specific methane yield (Nl kgCOD-1 d-1) increased with an increase in the pH (pH 4.5: 140.58±70.08, pH 5: 181.21±55.71, pH 5.5: 218.32±51.01, pH 6: 256.47±28.78). The pH-value also had an effect on the dominant intermediate in the hydrolysate. At the pH-value of 4.5, almost no acidification and microbial activity was observed. At pH 5 and 5.5, butyric acid production dominated, accompanied by H2 production. At pH 6, acetic acid was the main product. The absence of H2 and the highest specific methane yield make pH 6 favorable from a practical standpoint. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Wave-front propagation in a discrete model of excitable media

    NASA Astrophysics Data System (ADS)

    Feldman, A. B.; Chernyak, Y. B.; Cohen, R. J.

    1998-06-01

    We generalize our recent discrete cellular automata (CA) model of excitable media [Y. B. Chernyak, A. B. Feldman, and R. J. Cohen, Phys. Rev. E 55, 3215 (1997)] to incorporate the effects of inhibitory processes on the propagation of the excitation wave front. In the common two-variable reaction-diffusion (RD) models of excitable media, the inhibitory process is described by the v "controller" variable responsible for the restoration of the equilibrium state following excitation. In myocardial tissue, the inhibitory effects are mainly due to the inactivation of the fast sodium current. We represent inhibition using a physical model in which the "source" contribution of excited elements to the excitation of their neighbors decreases with time as a simple function with a single adjustable parameter (a rate constant). We sought specific solutions of the CA state transition equations and obtained (both analytically and numerically) the dependence of the wave-front speed c on the four model parameters and the wave-front curvature κ. By requiring that the major characteristics of c(κ) in our CA model coincide with those obtained from solutions of a specific RD model, we find a unique set of CA parameter values for a given excitable medium. The basic structure of our CA solutions is remarkably similar to that found in typical RD systems (similar behavior is observed when the analogous model parameters are varied). Most notably, the "turn-on" of the inhibitory process is accompanied by the appearance of a solution branch of slow-speed, unstable waves. Additionally, when κ is small, we obtain a family of "eikonal" relations c(κ) that are suitable for the kinematic analysis of traveling waves in the CA medium. We compared the solutions of the CA equations to CA simulations for the case of plane waves and circular (target) waves and found excellent agreement. We then studied a spiral wave using the CA model adjusted to a specific RD system and found good correspondence between the shapes of the RD and CA spiral arms in the region away from the tip where kinematic theory applies. Our analysis suggests that only four physical parameters control the behavior of wave fronts in excitable media.

  10. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For the measurement, tests are conducted in the laboratory, which can take many hours to give the final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image analysis based morphological parameters and the correlation of these parameters with regard to monitoring and prediction of activated sludge are discussed. Hence it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  11. Predicting environmental fate parameters with infrared spectroscopy.

    EPA Science Inventory

    One of the principal uncertainties associated with risk assessments of organic chemicals in the environment is the lack of chemical-specific values that quantify the many processes determining the chemical's transport and transformation. Because it is not feasible to measure the ...

  12. Influence of processing conditions on apparent viscosity and system parameters during extrusion of distiller's dried grains-based snacks.

    PubMed

    Singha, Poonam; Muthukumarappan, Kasiviswanathan; Krishnan, Padmanaban

    2018-01-01

    A combination of different levels of distillers dried grains processed for food application (FDDG), garbanzo flour, and corn grits was chosen as a source of high-protein and high-fiber extruded snacks. A four-factor central composite rotatable design was adopted to study the effect of FDDG level, moisture content of the blends, extrusion temperature, and screw speed on the apparent viscosity, mass flow rate (MFR), torque, and specific mechanical energy (SME) during the extrusion process. With an increase in the extrusion temperature from 100 to 140°C, apparent viscosity, specific mechanical energy, and torque decreased. An increase in FDDG level resulted in an increase in apparent viscosity, SME, and torque. FDDG had no significant effect (p > .5) on mass flow rate. SME also increased with an increase in the screw speed, which could be due to the higher shear rates at higher screw speeds. Screw speed and moisture content had a significant negative effect (p < .05) on the torque. The apparent viscosity of the dough inside the extruder and the system parameters were affected by the processing conditions. This study will be useful for control of the extrusion process of blends containing these ingredients for the development of high-protein, high-fiber extruded snacks.

  13. Downstream processing from hot-melt extrusion towards tablets: A quality by design approach.

    PubMed

    Grymonpré, W; Bostijn, N; Herck, S Van; Verstraete, G; Vanhoorne, V; Nuhn, L; Rombouts, P; Beer, T De; Remon, J P; Vervaet, C

    2017-10-05

    Since the concept of continuous processing is gaining momentum in pharmaceutical manufacturing, a thorough understanding of how process and formulation parameters can impact the critical quality attributes (CQA) of the end product is more than ever required. This study was designed to screen the influence of process parameters and drug load during HME on both extrudate properties and tableting behaviour of an amorphous solid dispersion formulation using a quality-by-design (QbD) approach. A full factorial experimental design with 19 experiments was used to evaluate the effect of several process variables (barrel temperature: 160-200°C, screw speed: 50-200 rpm, throughput: 0.2-0.5 kg/h) and drug load (0-20%) as a formulation parameter on the hot-melt extrusion (HME) process and the extrudate and tablet quality of Soluplus®-Celecoxib amorphous solid dispersions. A prominent impact of the formulation parameter on the CQA of the extrudates (i.e. solid state properties, moisture content, particle size distribution) and tablets (i.e. tabletability, compactibility, fragmentary behaviour, elastic recovery) was discovered. The resistance of the polymer matrix to thermo-mechanical stress during HME was confirmed throughout the experimental design space. In addition, the suitability of Raman spectroscopy as a verification method for the active pharmaceutical ingredient (API) concentration in solid dispersions was evaluated. Incorporation of the Raman spectroscopy data in a PLS model enabled API quantification in the extrudate powders, with none of the DOE experiments resulting in extrudates with a CEL content deviating >3% from the label claim. This research paper emphasizes that HME is a robust process throughout the experimental design space for obtaining amorphous glassy solutions and for tableting such formulations, since only minimal impact of the process parameters was detected on the extrudate and tablet properties. However, the quality of extrudates and tablets can be optimized by adjusting specific formulation parameters (e.g. drug load). Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Using Noise and Fluctuations for In Situ Measurements of Nitrogen Diffusion Depth

    PubMed Central

    Samoila, Cornel; Ursutiu, Doru; Schleer, Walter-Harald; Jinga, Vlad; Nascov, Victor

    2016-01-01

    In manufacturing processes involving diffusion (of C, N, S, etc.), the evolution of the layer depth is of the utmost importance: the success of the entire process depends on this parameter. Currently, nitriding is typically either calibrated using a “post process” method or controlled via indirect measurements (H2, O2, H2O + CO2). In the absence of “in situ” monitoring, any variation in the process parameters (gas concentration, temperature, steel composition, distance between sensors and furnace chamber) can cause expensive process inefficiency or failure. Indirect measurements can prevent process failure, but uncertainties and complications may arise in the relationship between the measured parameters and the actual diffusion process. In this paper, a method based on noise and fluctuation measurements is proposed that offers direct control of the layer depth evolution because the parameters of interest are measured in direct contact with the nitrided steel (represented by the active electrode). The paper addresses two related sets of experiments. The first set of experiments consisted of laboratory tests on nitrided samples using Barkhausen noise and yielded a linear relationship between the frequency exponent in the Hooge equation and the nitriding time. For the second set, a specific sensor based on conductivity noise (at the nitriding temperature) was built for shop-floor experiments. Although two different types of noise were measured in these two sets of experiments, the use of the frequency exponent to monitor the process evolution remained valid. PMID:28773941

  15. Correlation between product purity and process parameters for the synthesis of Cu2ZnSnS4 nanoparticles using microwave irradiation

    NASA Astrophysics Data System (ADS)

    Ahmad, R.; Nicholson, K. S.; Nawaz, Q.; Peukert, W.; Distaso, M.

    2017-07-01

    Kesterites (Cu2ZnSn(S,Se)4) have emerged as a favourable photovoltaic material, leading to solar cell efficiencies as high as 12.7%. The development of sustainable roll-to-roll printing processes that make use of Cu2ZnSnS4 (CZTS) nanoparticle inks requires the proper design of synthetic approaches and an understanding of the relation between process parameters and product purity. In the current paper, we develop this relationship by calculating a specific energy factor. A microwave-assisted synthetic method that operates at atmospheric pressure and makes use of eco-friendly solvents is established. Four solvents, i.e. ethylene glycol (EG), diethylene glycol (di-EG), triethylene glycol (tri-EG) and tetraethylene glycol (tet-EG), are compared, and the temperature during the reaction is assessed by two different methods. In particular, two by-products have been identified, i.e. Cu2-xS and a hexagonal phase. We show that the variation of reaction parameters such as irradiation power, type of solvent and precursor concentration influences the nanoparticle sizes (from 12 to 6 nm) and also the temperature-time profile of the reaction, which, in turn, can be related to the phase purity of the CZTS nanoparticles. The results suggest that the product purity scales with the specific energy factor, providing a useful tool for the rational design of high-quality CZTS nanoparticles.

  16. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  17. Relaxation Time Distribution (RTD) of Spectral Induced Polarization (SIP) data from environmental studies

    NASA Astrophysics Data System (ADS)

    Ntarlagiannis, D.; Ustra, A.; Slater, L. D.; Zhang, C.; Mendonça, C. A.

    2015-12-01

    In this work we present an alternative formulation of the Debye Decomposition (DD) of complex conductivity spectra, with a new set of parameters that are directly related to the continuous Debye relaxation model. The procedure determines the relaxation time distribution (RTD) and two frequency-independent parameters that modulate the induced polarization spectra. The distribution of relaxation times quantifies the contribution of each distinct relaxation process, which can in turn be associated with specific polarization processes and characterized in terms of electrochemical and interfacial parameters as derived from mechanistic models. Synthetic tests show that the procedure can successfully fit spectral induced polarization (SIP) data and accurately recover the RTD. The procedure was applied to different data sets, focusing on environmental applications. We focus on data of sand-clay mixtures artificially contaminated with toluene, and crude oil-contaminated sands experiencing biodegradation. The results identify characteristic relaxation times that can be associated with distinct polarization processes resulting from either the contaminant itself or transformations associated with biodegradation. The inversion results provide information regarding the relative strength and dominant relaxation time of these polarization processes.
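
    A common way to implement such a decomposition, sketched here under assumed conventions rather than the authors' exact formulation, is to fix a grid of relaxation times τ_k and solve a non-negative least-squares problem for the weights m_k in ρ(ω) = ρ0[1 − Σ m_k(1 − 1/(1 + iωτ_k))]:

```python
import numpy as np
from scipy.optimize import nnls

def debye_rtd(omega, rho, tau_grid, rho0):
    """Recover the relaxation-time distribution m_k on a fixed tau grid
    by non-negative least squares on the complex resistivity spectrum."""
    # Kernel of one Debye relaxation: 1 - 1/(1 + i*omega*tau)
    K = 1.0 - 1.0 / (1.0 + 1j * np.outer(omega, tau_grid))
    # rho/rho0 = 1 - K @ m  =>  stack real and imaginary parts and solve
    A = np.vstack([K.real, K.imag])
    b = np.concatenate([1.0 - (rho / rho0).real, -(rho / rho0).imag])
    m, _ = nnls(A, b)
    return m

# Synthetic spectrum with two relaxations (illustrative values)
omega = np.logspace(-2, 4, 60)
tau_grid = np.logspace(-5, 2, 40)
true = 1 - 0.10 * (1 - 1/(1 + 1j*omega*1e-1)) - 0.05 * (1 - 1/(1 + 1j*omega*1e-3))
m = debye_rtd(omega, 100.0 * true, tau_grid, rho0=100.0)
print("recovered total chargeability:", m.sum())
```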

  18. Atomic layer deposition for fabrication of HfO2/Al2O3 thin films with high laser-induced damage thresholds.

    PubMed

    Wei, Yaowei; Pan, Feng; Zhang, Qinghua; Ma, Ping

    2015-01-01

    Previous research on the laser damage resistance of thin films deposited by atomic layer deposition (ALD) is rare. In this work, the ALD process for thin film generation was investigated using different process parameters such as precursor type and pulse duration. The laser-induced damage threshold (LIDT) was measured as a key property for thin films used as laser system components. Reasons for film damage were also investigated. The LIDTs for thin films deposited with improved process parameters reached a higher level than previously measured. Specifically, the LIDT of the Al2O3 thin film reached 40 J/cm(2). The LIDT of the HfO2/Al2O3 anti-reflection film reached 18 J/cm(2), the highest value reported for ALD single-layer and anti-reflection films. In addition, it was shown that the LIDT could be improved by further altering the process parameters. All results show that ALD is an effective film deposition technique for fabrication of thin film components for high-power laser systems.

  19. Predictive process simulation of cryogenic implants for leading edge transistor design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh

    2012-11-06

    Two cryogenic implant TCAD-modules have been developed: (i) A continuum-based compact model targeted towards a TCAD production environment calibrated against an extensive data-set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data-set. (ii) A Kinetic Monte Carlo (kMC) model including the full time dependence of ion-exposure that a particular spot on the wafer experiences, as well as the resulting temperature vs. time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time-structure of the beam for the amorphization process: Assuming an average dose-rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.

  20. Parameter extraction with neural networks

    NASA Astrophysics Data System (ADS)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs with desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.
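
    A minimal sketch of this forward-then-reverse use, with scikit-learn's MLPRegressor standing in for the authors' network and a synthetic process in place of real lithography data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def process(x):
    """Synthetic stand-in process: outputs depend nonlinearly on two inputs."""
    return np.column_stack([np.sin(x[:, 0]) + x[:, 1] ** 2,
                            x[:, 0] * x[:, 1]])

X = rng.uniform(-1, 1, size=(500, 2))   # process input parameters
Y = process(X)                          # observed process outputs

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
nn.fit(X, Y)                            # learn the forward input->output map

# "Reverse" use: search for inputs whose predicted outputs match a target
target = np.array([0.5, 0.2])
obj = lambda x: np.sum((nn.predict(x.reshape(1, -1))[0] - target) ** 2)
res = minimize(obj, x0=np.zeros(2), method="Nelder-Mead")
print("extracted input parameters:", res.x)
```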

  1. Identification of sensitive parameters in the modeling of SVOC reemission processes from soil to atmosphere.

    PubMed

    Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc

    2014-09-15

    Semi-volatile organic compounds (SVOCs) are subject to long-range atmospheric transport because of successive transport-deposition-reemission processes. Several experimental datasets available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentrations of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted at the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed an EFAST sensitivity analysis for four chemicals (benzo[a]pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions showed a lower variability with respect to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
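
    A sketch of an EFAST-style analysis using the SALib package as a stand-in for the authors' implementation; the parameter names, bounds, and reemission model below are placeholders:

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

# Illustrative reemission drivers; the bounds are placeholders
problem = {
    "num_vars": 3,
    "names": ["log_Koa", "soil_organic_matter", "water_content_fc"],
    "bounds": [[6.0, 12.0], [0.01, 0.10], [0.10, 0.40]],
}

def reemission_model(x):
    """Placeholder for the soil-air exchange model; not the authors' code."""
    return 10 ** (-x[:, 0] / 4) * (1 - x[:, 1]) * (1 - x[:, 2])

X = fast_sampler.sample(problem, 1000)   # eFAST sample design
Y = reemission_model(X)
Si = fast.analyze(problem, Y)            # first-order and total-order indices
print(dict(zip(problem["names"], Si["S1"])))
```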

  2. Comments on an Analytical Thermal Agglomeration for Problems with Surface Growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, N. E.

    2017-03-22

    Up until Dec 2016, the thermal agglomeration procedure was largely heuristic and, as such, difficult to define. The lack of predictability became problematic, and the current notes represent the first real attempt to systematize the specification of the agglomerated process parameters.

  3. Genetic algorithm applied to a Soil-Vegetation-Atmosphere system: Sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk

    2010-05-01

    Numerical models are of great help for predicting water fluxes in the vadose zone and, more specifically, in Soil-Vegetation-Atmosphere (SVA) systems. For such simulations, robust models and representative soil hydraulic parameters are required. Calibration of unsaturated hydraulic properties is known to be a difficult optimization problem due to the high non-linearity of the water flow equations. Therefore, robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are very well suited for such complex parameter optimization problems. Additionally, GAs offer the opportunity to assess the confidence in the hydraulic parameter estimations, because of the large number of model realizations. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the Campine region in the north of Belgium. Throughfall, other meteorological data, and water contents at several soil depths were recorded during one year at a daily time step in two lysimeters. The water table level, which varies between 95 and 170 cm, was recorded at 0.5-hour intervals. The leaf area index was also measured at selected times during the year in order to evaluate the energy reaching the soil and to deduce the potential evaporation. Based on the profile description, five soil layers were distinguished in the podzol. Two models were used for simulating water fluxes: (i) a mechanistic model, HYDRUS-1D, which solves the Richards equation, and (ii) a compartmental model, which treats the soil profile as a bucket into which water flows until its maximum capacity is reached. A global sensitivity analysis (Morris' one-at-a-time sensitivity analysis) was run prior to the calibration in order to check the sensitivity within the chosen parameter search space. For the inversion procedure a genetic algorithm (GA) was used, with specific features such as elitism, a roulette-wheel selection operator, and island theory implemented (see the sketch after this abstract). Optimization was based on the water content measurements recorded at several depths. Ten scenarios were elaborated and applied to the two lysimeters in order to investigate the impact of the conceptual model, in terms of process description (mechanistic or compartmental) and geometry (number of horizons in the profile description), on the calibration accuracy. Calibration leads to good agreement with the measured water contents. The most critical factors for improving the goodness of fit are the number of horizons and the type of process description. The best fits were found for a mechanistic model with five horizons, resulting in absolute differences between observed and simulated water contents of less than 0.02 cm3 cm-3 on average. Parameter estimate analysis shows that layer thicknesses are poorly constrained, whereas the hydraulic parameters are much better defined.
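
    A minimal GA sketch showing the two named features, elitism and roulette-wheel selection, on a toy calibration problem. The 'model', observations, bounds, and settings are illustrative stand-ins, not HYDRUS-1D:

        import numpy as np

        rng = np.random.default_rng(1)
        obs_t = np.linspace(0, 1, 50)

        def model(theta_s, alpha):                 # toy water-content model
            return theta_s * np.exp(-alpha * obs_t)

        obs = model(0.30, 4.0) + rng.normal(0, 0.005, obs_t.size)  # synthetic data

        def fitness(pop):                          # higher is better (inverse RMSE)
            err = [np.sqrt(np.mean((model(*ind) - obs) ** 2)) for ind in pop]
            return 1.0 / (1e-9 + np.array(err))

        lo, hi = [0.1, 0.5], [0.6, 8.0]
        pop = rng.uniform(lo, hi, size=(40, 2))
        for gen in range(100):
            f = fitness(pop)
            elite = pop[np.argmax(f)].copy()       # elitism: keep the best individual
            p = f / f.sum()                        # roulette-wheel selection probabilities
            parents = pop[rng.choice(len(pop), size=len(pop), p=p)]
            children = 0.5 * (parents + parents[rng.permutation(len(pop))])  # crossover
            children += rng.normal(0, 0.02, children.shape)                  # mutation
            children[0] = elite
            pop = np.clip(children, lo, hi)
        print("estimated parameters:", pop[np.argmax(fitness(pop))])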

  4. Statistical post-processing of seasonal multi-model forecasts: Why is it so hard to beat the multi-model mean?

    NASA Astrophysics Data System (ADS)

    Siegert, Stefan

    2017-04-01

    Initialised climate forecasts on seasonal time scales, run several months or even years ahead, are now an integral part of the battery of products offered by climate services world-wide. The availability of seasonal climate forecasts from various modeling centres gives rise to multi-model ensemble forecasts. Post-processing such seasonal-to-decadal multi-model forecasts is challenging 1) because the cross-correlation structure between multiple models and observations can be complicated, 2) because the amount of training data to fit the post-processing parameters is very limited, and 3) because the forecast skill of numerical models tends to be low on seasonal time scales. In this talk I will review new statistical post-processing frameworks for multi-model ensembles. I will focus particularly on Bayesian hierarchical modelling approaches, which are flexible enough to capture commonly made assumptions about collective and model-specific biases of multi-model ensembles. Despite the advances in statistical methodology, it turns out to be very difficult to out-perform the simplest post-processing method, which just recalibrates the multi-model ensemble mean by linear regression. I will discuss reasons for this, which are closely linked to the specific characteristics of seasonal multi-model forecasts. I explore possible directions for improvements, for example using informative priors on the post-processing parameters, and jointly modelling forecasts and observations.
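
    The hard-to-beat benchmark mentioned above is easy to state concretely: regress observations on the multi-model ensemble mean over the hindcast period and apply the fitted line to new forecasts. A minimal sketch with synthetic data (all numbers illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        n_years, n_models = 30, 5
        truth = rng.normal(0, 1, n_years)                          # observed anomalies
        ens = truth * 0.4 + rng.normal(0, 1, (n_models, n_years))  # biased, noisy models
        mme = ens.mean(axis=0)                                     # multi-model mean

        a, b = np.polyfit(mme, truth, 1)                           # regression coefficients
        forecast = a * mme + b                                     # recalibrated forecast
        print(f"calibrated forecast = {a:.2f} * MME + {b:.2f}")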

  5. Pressure filtration of ceramic pastes. 4: Treatment of experimental data

    NASA Technical Reports Server (NTRS)

    Torrecillas, A. S.; Polo, J. F.; Perez, A. A.

    1984-01-01

    The use of a data-processing method based on the algorithm proposed by Kalman, and its application to the filtration process at constant pressure, is described, together with the advantages of this method. The technique is compared to the least-squares method. The procedure allows precise adjustment of the parameters of the equation directly related to the specific resistance of the cake.
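
    A plausible reading of this approach, sketched under stated assumptions and not necessarily the authors' exact algorithm: for constant-pressure filtration the classic linearization t/V = K*V + B holds, with the slope K proportional to the specific cake resistance, and a Kalman-type recursive least-squares update estimates (K, B) as the data stream in. The data, noise level, and coefficients here are synthetic:

        import numpy as np

        rng = np.random.default_rng(3)
        V = np.linspace(0.1, 2.0, 40)                      # filtrate volume
        tv = 3.0 * V + 0.5 + rng.normal(0, 0.05, V.size)   # noisy t/V measurements

        theta = np.zeros(2)                                # state: [K, B]
        P = np.eye(2) * 1e3                                # state covariance
        R = 0.05 ** 2                                      # measurement noise variance
        for v, y in zip(V, tv):
            H = np.array([v, 1.0])                         # measurement model: y = H @ theta
            S = H @ P @ H + R
            Kg = P @ H / S                                 # Kalman gain
            theta = theta + Kg * (y - H @ theta)           # update estimate
            P = P - np.outer(Kg, H) @ P                    # update covariance

        print("recursive:", theta, " batch:", np.polyfit(V, tv, 1))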

  6. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
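
    A minimal pool-based active-learning sketch of the idea: a classifier is trained on the parameter combinations evaluated so far, and each new batch is chosen where the classifier is most uncertain. The 'simulate' function, acceptance rule, and batch sizes are toy stand-ins, not the UWBCS:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(4)
        pool = rng.uniform(0, 1, size=(10000, 3))          # candidate parameter combos

        def simulate(x):                                   # expensive model (toy here)
            return float(np.linalg.norm(x - 0.5) < 0.15)   # 1 = matches observed data

        idx = list(rng.choice(len(pool), 50, replace=False))  # initial random design
        y = [simulate(pool[i]) for i in idx]
        clf = RandomForestClassifier(n_estimators=100, random_state=0)

        for _ in range(20):                                # active-learning iterations
            clf.fit(pool[idx], y)
            prob = clf.predict_proba(pool)[:, -1]
            order = np.argsort(np.abs(prob - 0.5))         # most uncertain first
            seen = set(idx)
            new = [int(i) for i in order if i not in seen][:25]
            idx += new
            y += [simulate(pool[i]) for i in new]

        print(f"simulated {len(idx)} of {len(pool)} combinations;"
              f" accepted: {int(sum(y))}")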

  7. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  8. Estimation of fundamental kinetic parameters of polyhydroxybutyrate fermentation process of Azohydromonas australica using statistical approach of media optimization.

    PubMed

    Gahlawat, Geeta; Srivastava, Ashok K

    2012-11-01

    Polyhydroxybutyrate (PHB) is a biodegradable and biocompatible thermoplastic with many interesting applications in medicine, food packaging, and tissue engineering materials. The present study deals with the enhanced production of PHB by Azohydromonas australica using sucrose and the estimation of fundamental kinetic parameters of the PHB fermentation process. Preliminary culture growth inhibition studies were followed by statistical optimization of the medium recipe using response surface methodology to increase PHB production. Later on, batch cultivation in a 7-L bioreactor was attempted using the optimum concentrations of medium components (process variables) obtained from the statistical design to identify the batch growth and product kinetics parameters of the PHB fermentation. A. australica exhibited a maximum biomass and PHB concentration of 8.71 and 6.24 g/L, respectively, in the bioreactor, with an overall PHB production rate of 0.75 g/h. Bioreactor cultivation studies demonstrated that the specific biomass and PHB yields on sucrose were 0.37 and 0.29 g/g, respectively. The kinetic parameters obtained in the present investigation will be used in the development of a batch kinetic mathematical model for PHB production, which will serve as a launching pad for further process optimization studies, e.g., the design of several bioreactor cultivation strategies to further enhance biopolymer production.

  9. Modeling of microstructure evolution in direct metal laser sintering: A phase field approach

    NASA Astrophysics Data System (ADS)

    Nandy, Jyotirmoy; Sarangi, Hrushikesh; Sahoo, Seshadev

    2017-02-01

    Direct Metal Laser Sintering (DMLS) is a new technology in the field of additive manufacturing, which builds metal parts in a layer-by-layer fashion directly from the powder bed. The process occurs within a very short time period with a rapid solidification rate. Slight variations in the process parameters may cause enormous changes in the final built parts. The physical and mechanical properties of the final built parts depend on the solidification rate, which directly affects the microstructure of the material. Thus, the evolution of the microstructure plays a vital role in process parameter optimization. Nowadays, the increase in computational power allows for direct simulations of microstructures during materials processing for specific manufacturing conditions. In this study, modeling of the microstructure evolution of Al-Si-10Mg powder in the DMLS process was carried out using a phase field approach. A MATLAB code was developed to solve the set of phase field equations, where the simulation parameters include temperature gradient, laser scan speed and laser power. The effects of the temperature gradient on microstructure evolution were studied; it was found that, with increasing temperature gradient, the dendritic tip grows at a faster rate.

  10. Effects of Processing Parameters on Surface Roughness of Additive Manufactured Ti-6Al-4V via Electron Beam Melting

    PubMed Central

    Sin, Wai Jack; Nai, Mui Ling Sharon; Wei, Jun

    2017-01-01

    As one of the powder bed fusion additive manufacturing technologies, electron beam melting (EBM) is gaining more and more attention due to its near-net-shape production capacity with low residual stress and good mechanical properties. These characteristics also allow EBM-built parts to be used as produced, without post-processing. However, the as-built rough surface has a detrimental influence on the mechanical properties of metallic alloys. Understanding the effects of processing parameters on the part’s surface roughness therefore becomes critical. This paper has focused on varying the processing parameters of two types of contouring scanning strategies in EBM, namely multispot and non-multispot. The results suggest that the beam current and speed function are the most significant processing parameters for the non-multispot contouring scanning strategy, while for the multispot contouring scanning strategy, the number of spots, spot time, and spot overlap have greater effects than focus offset and beam current. Improved surface roughness was obtained with both contouring scanning strategies. Furthermore, under the optimized conditions, the non-multispot contouring scanning strategy gives a lower surface roughness value but poorer geometrical accuracy than the multispot counterpart. These findings could be used as a guideline for selecting the contouring type used for specific industrial parts that are built using EBM. PMID:28937638

  11. Processing parameters associated with scale-up of balloon film production

    NASA Technical Reports Server (NTRS)

    Simpson, D. M.; Harrison, I. R.

    1993-01-01

    A method is set forth for assessing strain-rate profiles that can be used to develop a scale-up theory for blown-film extrusion. Strain rates are evaluated by placing four ink dots on the stalk of an extruded bubble to follow the displacements of the dots as a function of time. The instantaneous Hencky strain is obtained with the displacement data and plotted for analysis. Specific attention is given to potential sources of error in the distance measurements and corrections for these complex bubble geometries. The method is shown to be effective for deriving strain-rate data related to different processing parameters for the production of balloon film. The strain rates can be compared to frostline height, blow-up ratio, and take-up ratio to optimize these processing variables.
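
    For reference, the instantaneous Hencky (true) strain computed from the dot separations is the standard definition below (not a formula quoted from the report): for two dots with initial separation l_0 and separation l(t) at time t,

        \varepsilon_H(t) = \ln\frac{l(t)}{l_0},
        \qquad
        \dot{\varepsilon}(t) = \frac{d}{dt}\ln l(t) = \frac{\dot{l}(t)}{l(t)}.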

  12. Material Stream Strategy for Lithium and Inorganics (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safarik, Douglas Joseph; Dunn, Paul Stanton; Korzekwa, Deniece Rochelle

    Design Agency Responsibilities: Manufacturing Support to meet Stockpile Stewardship goals for maintaining the nuclear stockpile through experimental and predictive modeling capability. Development and maintenance of Manufacturing Science expertise to assess material specifications and performance boundaries, and their relationship to processing parameters. Production Engineering Evaluations with competence in design requirements, material specifications, and manufacturing controls. Maintenance and enhancement of Aging Science expertise to support Stockpile Stewardship predictive science capability.

  13. GPU-based RFA simulation for minimally invasive cancer treatment of liver tumours.

    PubMed

    Mariappan, Panchatcharam; Weir, Phil; Flanagan, Ronan; Voglreiter, Philip; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Busse, Harald; Futterer, Jurgen; Portugaller, Horst Rupert; Sequeiros, Roberto Blanco; Kolesnik, Marina

    2017-01-01

    Radiofrequency ablation (RFA) is one of the most popular and well-standardized minimally invasive cancer treatments (MICT) for liver tumours, employed where surgical resection has been contraindicated. Less-experienced interventional radiologists (IRs) require an appropriate planning tool for the treatment to help avoid incomplete treatment and so reduce the tumour recurrence risk. Although a few tools are available to predict the ablation lesion geometry, the process is computationally expensive. Also, in our implementation, a few patient-specific parameters are used to improve the accuracy of the lesion prediction. Advanced heterogeneous computing using personal computers, incorporating the graphics processing unit (GPU) and the central processing unit (CPU), is proposed to predict the ablation lesion geometry. The most recent GPU technology is used to accelerate the finite element approximation of Pennes' bioheat equation and a three-state cell model. Patient-specific input parameters are used in the bioheat model to improve the accuracy of the predicted lesion. A fast GPU-based RFA solver is developed to predict the lesion by doing most of the computational tasks in the GPU, while reserving the CPU for concurrent tasks such as lesion extraction based on the heat deposition at each finite element node. The solver takes less than 3 min for a treatment duration of 26 min. When the model receives patient-specific input parameters, the deviation between the real and predicted lesion is below 3 mm. A multi-centre retrospective study indicates that the fast RFA solver is capable of providing the IR with the predicted lesion in the short time period before the intervention begins, when the patient has been clinically prepared for the treatment.
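
    For context, the Pennes bioheat equation referenced above is, in its standard general form (not the paper's specific finite element discretization),

        \rho c \frac{\partial T}{\partial t}
        = \nabla \cdot (k \nabla T) + \rho_b c_b \omega_b (T_a - T) + Q,

    where T is the tissue temperature; \rho, c and k are the tissue density, specific heat and thermal conductivity; the middle term models perfusion by blood (subscript b) at arterial temperature T_a with perfusion rate \omega_b; and Q collects the radiofrequency heat source and metabolic heating.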

  14. Quantifying the sensitivity of feedstock properties and process conditions on hydrochar yield, carbon content, and energy content.

    PubMed

    Li, Liang; Wang, Yiying; Xu, Jiting; Flora, Joseph R V; Hoque, Shamia; Berge, Nicole D

    2018-08-01

    Hydrothermal carbonization (HTC) is a wet, low temperature thermal conversion process that continues to gain attention for the generation of hydrochar. The importance of specific process conditions and feedstock properties on hydrochar characteristics is not well understood. To evaluate this, linear and non-linear models were developed to describe hydrochar characteristics based on data collected from HTC-related literature. A Sobol analysis was subsequently conducted to identify parameters that most influence hydrochar characteristics. Results from this analysis indicate that for each investigated hydrochar property, the model fit and predictive capability associated with the random forest models is superior to both the linear and regression tree models. Based on results from the Sobol analysis, the feedstock properties and process conditions most influential on hydrochar yield, carbon content, and energy content were identified. In addition, a variational process parameter sensitivity analysis was conducted to determine how feedstock property importance changes with process conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
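
    A minimal sketch in the spirit of the modeling step above: fit a random-forest regressor of hydrochar yield on feedstock and process variables and rank their influence by permutation importance. The data are synthetic stand-ins, and a full Sobol analysis, as used in the paper, would instead sample the fitted model with a dedicated scheme (e.g., via the SALib package):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(5)
        n = 400
        X = np.column_stack([
            rng.uniform(180, 300, n),      # reaction temperature (C)
            rng.uniform(0.5, 24, n),       # reaction time (h)
            rng.uniform(40, 60, n),        # feedstock carbon content (%)
        ])
        yield_pct = 90 - 0.2 * (X[:, 0] - 180) - 0.3 * X[:, 1] + rng.normal(0, 2, n)

        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, yield_pct)
        imp = permutation_importance(rf, X, yield_pct, n_repeats=10, random_state=0)
        for name, m in zip(["temperature", "time", "carbon"], imp.importances_mean):
            print(f"{name}: {m:.3f}")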

  15. Argon-oxygen atmospheric pressure plasma treatment on carbon fiber reinforced polymer for improved bonding

    NASA Astrophysics Data System (ADS)

    Chartosias, Marios

    Acceptance of Carbon Fiber Reinforced Polymer (CFRP) structures requires a robust surface preparation method with improved process controls capable of ensuring high bond quality. Surface preparation in a production clean room environment prior to applying adhesive for bonding would minimize the risk of contamination and reduce cost. Plasma treatment is a robust surface preparation process capable of being applied in a production clean room environment, with process parameters that are easily controlled and documented. Repeatable and consistent processing is enabled through the development of a process parameter window using techniques such as Design of Experiments (DOE) tailored to specific adhesive and substrate bonding applications. Insight from the respective plasma treatment Original Equipment Manufacturers (OEMs), together with screening tests, separated critical process factors from non-factors and set the associated factor levels prior to execution of the DOE. Results from mode I Double Cantilever Beam (DCB) testing per the ASTM D 5528 [1] standard and DOE statistical analysis software are used to produce a regression model and determine appropriate optimum settings for each factor.
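
    A minimal sketch of the DOE workflow just described, under stated assumptions: a two-level full-factorial design in coded units over hypothetical plasma factors, a simulated DCB-like response, and a least-squares fit of the intercept and main effects:

        import itertools
        import numpy as np

        levels = {"power": (-1, 1), "speed": (-1, 1), "passes": (-1, 1)}  # coded units
        design = np.array(list(itertools.product(*levels.values())))

        rng = np.random.default_rng(6)
        # Simulated mode-I fracture-toughness response (stand-in for DCB test data)
        response = (1.0 + 0.30 * design[:, 0] - 0.15 * design[:, 1]
                    + rng.normal(0, 0.02, len(design)))

        A = np.column_stack([np.ones(len(design)), design])  # intercept + main effects
        coef, *_ = np.linalg.lstsq(A, response, rcond=None)
        print("intercept and main effects:", np.round(coef, 3))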

  16. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
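
    A toy example makes the first listed effect concrete: in the model y = a*b*x only the product a*b is identifiable, so different optimizer initialisations return very different (a, b) pairs with identical fits (illustrative sketch, not from the presentation):

        import numpy as np
        from scipy.optimize import least_squares

        x = np.linspace(0, 1, 20)
        y = 2.0 * x                                 # data generated with a*b = 2

        def resid(p):                               # residuals of y = a*b*x
            return p[0] * p[1] * x - y

        for start in ([1.0, 1.0], [10.0, 10.0], [0.1, 5.0]):
            a, b = least_squares(resid, start).x
            print(f"start={start} -> a={a:.3f}, b={b:.3f}, a*b={a*b:.3f}")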

  17. A Knowledge Database on Thermal Control in Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Hirasawa, Shigeki; Satoh, Isao

    A prototype version of a knowledge database on thermal control in manufacturing processes, specifically molding, semiconductor manufacturing, and micro-scale manufacturing, has been developed. The knowledge database has search functions for technical data, evaluated benchmark data, academic papers, and patents. The database also displays trends and future roadmaps for research topics, and it has quick-calculation functions for basic design. This paper summarizes present research topics and future research on thermal control in manufacturing engineering as collated into the knowledge database. In the molding process, the initial mold and melt temperatures are very important parameters. Thermal control is also related to many semiconductor processes, where the main parameter is temperature variation across wafers; accurate in-situ temperature measurement of wafers is important. In addition, many technologies are being developed to manufacture micro-structures. Accordingly, the knowledge database will help further advance these technologies.

  18. The Effect of Gravity on the Combustion Synthesis of Porous Biomaterials

    NASA Technical Reports Server (NTRS)

    Castillo, M.; Zhang, X.; Moore, J. J.; Schowengerdt, F. D.; Ayers, R. A.

    2003-01-01

    Production of highly porous composite materials by traditional materials processing is limited by difficult processing techniques. This work investigates the use of self-propagating high-temperature (combustion) synthesis (SHS) to create porous tricalcium phosphate (Ca3(PO4)2), TiB-Ti, and NiTi in low and microgravity. Combustion synthesis provides the ability to use set processing parameters to engineer the required porous structure suitable for bone repair or replacement. The processing parameters include green density, particle size, gasifying agents, composition, and gravity. The advantage of the TiB-Ti system is the high level of porosity achieved together with a modulus that can be controlled by both composition (TiB-Ti) and porosity. At the same time, NiTi exhibits shape memory properties. SHS of biomaterials allows the engineering of the required porosity, coupled with resorption properties and specific mechanical properties, into the composite materials to allow for a better biomaterial.

  19. Development of high strength, high temperature ceramics

    NASA Technical Reports Server (NTRS)

    Hall, W. B.

    1982-01-01

    Improvements to the high-pressure turbopumps, both fuel and oxidizer, in the Space Shuttle main engine were considered. The operation of these pumps is limited by temperature restrictions on the metallic components used in them. Ceramic materials that retain strength at high temperatures and appear to be promising candidates for use as turbine blades and impellers are discussed. These high-strength materials are sensitive to many related processing parameters such as impurities, sintering aids, reaction aids, particle size, processing temperature, and post-thermal treatment. The specific objectives of the study were to: (1) identify and define the processing parameters that affect the properties of Si3N4 ceramic materials, (2) design and assemble the equipment required for processing high-strength ceramics, (3) design and assemble the test apparatus for evaluating the high-temperature properties of Si3N4, and (4) conduct a research program of manufacturing and evaluating Si3N4 materials as applicable to rocket engine applications.

  20. Laser Direct Metal Deposition of 2024 Al Alloy: Trace Geometry Prediction via Machine Learning.

    PubMed

    Caiazzo, Fabrizia; Caggiano, Alessandra

    2018-03-19

    Laser direct metal deposition is an advanced additive manufacturing technology suitably applicable in maintenance, repair, and overhaul of high-cost products, allowing for minimal distortion of the workpiece, reduced heat affected zones, and superior surface quality. Special interest is growing for the repair and coating of 2024 aluminum alloy parts, extensively utilized for a wide range of applications in the automotive, military, and aerospace sectors due to its excellent plasticity, corrosion resistance, electric conductivity, and strength-to-weight ratio. A critical issue in the laser direct metal deposition process is related to the geometrical parameters of the cross-section of the deposited metal trace that should be controlled to meet the part specifications. In this research, a machine learning approach based on artificial neural networks is developed to find the correlation between the laser metal deposition process parameters and the output geometrical parameters of the deposited metal trace produced by laser direct metal deposition on 5-mm-thick 2024 aluminum alloy plates. The results show that the neural network-based machine learning paradigm is able to accurately estimate the appropriate process parameters required to obtain a specified geometry for the deposited metal trace.

  1. Laser Direct Metal Deposition of 2024 Al Alloy: Trace Geometry Prediction via Machine Learning

    PubMed Central

    2018-01-01

    Laser direct metal deposition is an advanced additive manufacturing technology suitably applicable in maintenance, repair, and overhaul of high-cost products, allowing for minimal distortion of the workpiece, reduced heat affected zones, and superior surface quality. Special interest is growing for the repair and coating of 2024 aluminum alloy parts, extensively utilized for a wide range of applications in the automotive, military, and aerospace sectors due to its excellent plasticity, corrosion resistance, electric conductivity, and strength-to-weight ratio. A critical issue in the laser direct metal deposition process is related to the geometrical parameters of the cross-section of the deposited metal trace that should be controlled to meet the part specifications. In this research, a machine learning approach based on artificial neural networks is developed to find the correlation between the laser metal deposition process parameters and the output geometrical parameters of the deposited metal trace produced by laser direct metal deposition on 5-mm-thick 2024 aluminum alloy plates. The results show that the neural network-based machine learning paradigm is able to accurately estimate the appropriate process parameters required to obtain a specified geometry for the deposited metal trace. PMID:29562682

  2. Low cost solar array project. Task 1: Silicon material, gaseous melt replenishment system

    NASA Technical Reports Server (NTRS)

    Jewett, D. N.; Bates, H. E.; Hill, D. M.

    1979-01-01

    A system to combine silicon formation, by hydrogen reduction of trichlorosilane, with the capability to replenish a crystal growth system is described. A variety of process parameters were estimated to allow sizing and specification of gas-handling system components.

  3. RNA quality in fresh-frozen gastrointestinal tumor specimens-experiences from the tumor and healthy tissue bank TU Dresden.

    PubMed

    Zeugner, Silke; Mayr, Thomas; Zietz, Christian; Aust, Daniela E; Baretton, Gustavo B

    2015-01-01

    The term "pre-analytics" summarizes all procedures concerned with specimen collection or processing as well as logistical aspects like transport or storage of tissue specimens. All or these variables as well as tissue-specific characteristics affect sample quality. While certain parameters like warm ischemia or tissue-specific characteristics cannot be changed, other parameters can be assessed and optimized. The aim of this study was to determine RNA quality by assessing the RIN values of specimens from different organs and to assess the influence of vacuum preservation. Samples from the GI tract, in general, appear to have lower RNA quality when compared to samples from other organ sites. This may be due to the digestive enzymes or bacterial colonization. Processing time in pathology does not significantly influence RNA quality. Tissue preservation with a vacuum sealer leads to preserved RNA quality over an extended period of time and offers a feasible alternative to minimize the influence of transport time into pathology.

  4. Eye Tracking and Pupillometry are Indicators of Dissociable Latent Decision Processes

    PubMed Central

    Cavanagh, James F.; Wiecki, Thomas V.; Kochar, Angad; Frank, Michael J.

    2014-01-01

    Can you predict what someone is going to do just by watching them? This is certainly difficult: it would require a clear mapping between observable indicators and unobservable cognitive states. In this report we demonstrate how this is possible by monitoring eye gaze and pupil dilation, which predict dissociable biases during decision making. We quantified decision making using the Drift Diffusion Model (DDM), which provides an algorithmic account of how evidence accumulation and response caution contribute to decisions through separate latent parameters of drift rate and decision threshold, respectively. We used a hierarchical Bayesian estimation approach to assess the single trial influence of observable physiological signals on these latent DDM parameters. Increased eye gaze dwell time specifically predicted an increased drift rate toward the fixated option, irrespective of the value of the option. In contrast, greater pupil dilation specifically predicted an increase in decision threshold during difficult decisions. These findings suggest that eye tracking and pupillometry reflect the operations of dissociated latent decision processes. PMID:24548281
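
    A minimal simulation of the DDM decision process described above, using an Euler-Maruyama discretization; the drift, threshold, and noise values are illustrative, not the paper's fitted parameters:

        import numpy as np

        rng = np.random.default_rng(7)

        def ddm_trial(v=1.0, a=2.0, dt=0.001, sigma=1.0):
            # evidence accumulates from midway until it crosses a boundary
            x, t = a / 2.0, 0.0
            while 0.0 < x < a:
                x += v * dt + sigma * np.sqrt(dt) * rng.normal()
                t += dt
            return (1 if x >= a else 0), t          # (choice, reaction time)

        trials = [ddm_trial() for _ in range(2000)]
        choices, rts = map(np.array, zip(*trials))
        print(f"P(upper)={choices.mean():.2f}, mean RT={rts.mean():.3f}s")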

  5. Melt-Pool Temperature and Size Measurement During Direct Laser Sintering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    List, III, Frederick Alyious; Dinwiddie, Ralph Barton; Carver, Keith

    2017-08-01

    Additive manufacturing has demonstrated the ability to fabricate complex geometries and components not possible with conventional casting and machining. In many cases, industry has demonstrated the ability to fabricate complex geometries with improved efficiency and performance. However, qualification and certification of processes is challenging, leaving companies to focus on certification of material through design-allowable-based approaches. This significantly reduces the business case for additive manufacturing. Therefore, real-time monitoring of the melt pool can be used to detect the development of flaws, such as porosity or un-sintered powder, and aid in the certification process. Characteristics of the melt pool in the Direct Laser Sintering (DLS) process are also of great interest to modelers who are developing the simulation models needed to improve and perfect the DLS process. Such models could provide a means to rapidly develop the optimum processing parameters for new alloy powders and optimize processing parameters for specific part geometries. Stratonics’ ThermaViz system will be integrated with the Renishaw DLS system in order to demonstrate its ability to measure melt pool size, shape and temperature. These results will be compared with data from an existing IR camera to determine the best approach for the determination of these critical parameters.

  6. Use of in-die powder densification parameters in the implementation of process analytical technologies for tablet production on industrial scale.

    PubMed

    Cespi, Marco; Perinelli, Diego R; Casettari, Luca; Bonacucina, Giulia; Caporicci, Giuseppe; Rendina, Filippo; Palmieri, Giovanni F

    2014-12-30

    The use of process analytical technologies (PAT) to ensure final product quality is by now a well-established practice in the pharmaceutical industry. To date, most of the efforts in this field have focused on the development of analytical methods using spectroscopic techniques (i.e., NIR, Raman, etc.). This work evaluated the possibility of using parameters derived from the processing of in-line raw compaction data (the forces and displacements of the punches) as a PAT tool for controlling the tableting process. To reach this goal, two commercially available formulations were used, changing their quantitative composition and compressing them on a fully instrumented rotary pressing machine. The Heckel yield pressure and the compaction energies, together with tablet hardness and compaction pressure, were selected and evaluated as discriminating parameters in all the prepared formulations. The apparent yield pressure, as shown in the results obtained, has the necessary sensitivity to be effectively included in a PAT strategy to monitor the tableting process. Additional investigations were performed to understand the critical factors and mechanisms behind this parameter's performance and the associated implications. Specifically, it was found that the efficiency of the apparent yield pressure depends on the nominal drug content, the drug densification mechanism and the error in pycnometric density. In this study, the potential of using parameters derived from the raw compaction data has been demonstrated to be an attractive alternative and complementary method to the well-established spectroscopic techniques for monitoring and controlling the tableting process. The compaction data monitoring method is also easy to set up and very cost-effective. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions

    NASA Astrophysics Data System (ADS)

    Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia

    2018-03-01

    Results of various investigations show a relationship between flotation parameters and gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and the analysis of this distribution is useful for the mathematical description of the flotation process. The flotation process depends on many variable factors, mainly events such as the collision of a single particle with a gas bubble, adhesion of the particle to the bubble surface, and detachment. These events are characterized by randomness, so it is only possible to speak of the probability of occurrence of each of them; these probabilities directly affect the speed of the process and thus the flotation rate constant. The probability of bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, air consumption, degree of pulp aeration, energy dissipation and average feed particle size. Appropriate identification and description of the parameters of gas bubble dispersion helps to complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas phase dispersion, via the size distribution of air bubbles in a flotation chamber, under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods in the Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.

  8. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advance in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool built specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC under the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates that are specific to WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivities and uncertainties are analyzed using the customized PEST tool we developed.

  9. The Naval Research Laboratory’s Ongoing Implementation of the Open Geospatial Consortium’s Catalogue Services Specification

    DTIC Science & Technology

    2010-06-01

    then forwarded to Tomcat for processing. Tomcat receives these requests and sends them to the NRL-created CSW servlet (a servlet is a Java-based...server-side program) running inside it. The CSW servlet identifies which HTTP method is being used and whether KVP or XML is being used to send the...request data. Once the CSW servlet identifies the parameter passing scheme it can extract the parameters from the request. It then identifies and

  10. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on Voyager 2 measurement distribution.

  11. Recent Advances in Near-Net-Shape Fabrication of Al-Li Alloy 2195 for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Wagner, John; Domack, Marcia; Hoffman, Eric

    2007-01-01

    Recent applications in launch vehicles use 2195 processed to Super Lightweight Tank specifications. Potential benefits exist by tailoring heat treatment and other processing parameters to the application. Assess the potential benefits and advocate application of Al-Li near-net-shape technologies for other launch vehicle structural components. Work with manufacturing and material producers to optimize Al-Li ingot shape and size for enhanced near-net-shape processing. Examine time dependent properties of 2195 critical for reusable applications.

  12. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
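
    A minimal numerical illustration of the computational point, with arbitrary placeholder dimension, covariance, and limits: once the high-dimensional integrations are contained in a multivariate normal CDF, standard routines can evaluate it directly (here via SciPy):

        import numpy as np
        from scipy.stats import multivariate_normal

        d = 10
        mean = np.zeros(d)
        cov = 0.5 * np.eye(d) + 0.5          # equicorrelated covariance matrix
        upper = np.ones(d)                   # upper integration limits

        p = multivariate_normal(mean=mean, cov=cov).cdf(upper)
        print(f"{d}-dimensional normal CDF at 1: {p:.4f}")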

  13. Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.

    2011-12-01

    We present a support system for the detection of changes in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies lasting for long periods of time, as ophthalmologic pathologies usually do. We propose a flow of preprocessing, processing and postprocessing applied to a set of images selected from a public database and presenting pathological progression. A test interface was developed to select the images to be compared, apply the different methods developed, and display the results. We measured the system performance in terms of sensitivity, specificity and computation time, obtaining good results: higher than 84% for the first two parameters, with processing times lower than 3 seconds for 512x512-pixel images. For the specific case of detection of changes associated with bleeding, the system responds with sensitivity and specificity over 98%.

  14. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to these processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  15. Aerodynamic method for obtaining the soil water retention curve

    NASA Astrophysics Data System (ADS)

    Alekseev, V. V.; Maksimov, I. I.

    2013-07-01

    A new method for the rapid plotting of the soil water retention curve (SWRC) has been proposed that considers the soil water as an environment limited by the soil solid phase on one side and by the soil air on the other side. Both contact surfaces have surface energies, which play the main role in water retention. The use of an idealized soil model with consideration for the nonequilibrium thermodynamic laws and the aerodynamic similarity principles allows us to estimate the volumetric specific surface areas of soils and, using the proposed pedotransfer function (PTF), to plot the SWRC. The volumetric specific surface area of the solid phase, the porosity, and the specific free surface energy at the water-air interface are used as the SWRC parameters. Devices for measuring the parameters are briefly described. The differences between the proposed PTF and the experimental data have been analyzed using the statistical processing of the data.

  16. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    A process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on net primary production (NPP) of dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: Canopy light extinction coefficient (k), Canopy average specific leaf area (SLA), New stem C : New leaf C (SC:LC), Maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), All-sided to projected leaf area ratio and Canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in the field studies.

  17. Selected algorithms for measurement data processing in impulse-radar-based system for monitoring of human movements

    NASA Astrophysics Data System (ADS)

    Miękina, Andrzej; Wagner, Jakub; Mazurek, Paweł; Morawski, Roman Z.

    2016-11-01

    The importance of research on new technologies that could be employed in care services for elderly and disabled persons is highlighted. The advantages of impulse-radar sensors, when applied to non-intrusive monitoring of such persons in their home environment, are indicated. Selected algorithms for measurement data preprocessing - viz. the algorithms for clutter suppression and echo parameter estimation, as well as for estimation of the two-dimensional position of a monitored person - are proposed. The capability of an impulse-radar-based system to provide application-specific parameters, viz. parameters characterising the patient's health condition, is also demonstrated.

  18. Spectral Induced Polarization approaches to characterize reactive transport parameters and processes

    NASA Astrophysics Data System (ADS)

    Schmutz, M.; Franceschi, M.; Revil, A.; Peruzzo, L.; Maury, T.; Vaudelet, P.; Ghorbani, A.; Hubbard, S. S.

    2017-12-01

    For almost a decade, geophysical methods have been explored for their potential to characterize reactive transport parameters and processes relevant to hydrogeology, contaminant remediation, and oil and gas applications. Spectral Induced Polarization (SIP) methods show particular promise in this endeavour, given the sensitivity of the SIP signature to the electrical double layer properties of geological materials and the critical role of the electrical double layer in reactive transport processes such as adsorption. In this presentation, we discuss results from several recent studies that have been performed to quantify the value of SIP parameters for characterizing reactive transport parameters. The advances have been realized by performing experimental studies and interpreting their responses using theoretical and numerical approaches. We describe a series of controlled experimental studies performed to quantify the SIP responses to variations in grain size and specific surface area, pore fluid geochemistry, and other factors. We also model chemical reactions at the fluid/matrix interface linked to part of our experimental data set. For some examples, both geochemical modelling and measurements are integrated into a physico-chemically based SIP model. Our studies indicate both the potential of and the opportunity for using SIP to estimate reactive transport parameters. For samples with well-sorted granulometry, we find that grain size (as well as permeability, for some specific examples) can be estimated using SIP. We show that SIP is sensitive to physico-chemical conditions at the fluid/mineral interface, including different dissolved ions in the pore fluid (Na+, Cu2+, Zn2+, Pb2+), due to their different adsorption behavior. We also show the relevance of our approach for characterizing the fluid/matrix interaction for various organic contents (wetting and non-wetting oils). Finally, we discuss early efforts to jointly interpret SIP and other information for improved estimation, approaches to use SIP information to constrain mechanistic flow and transport models, and the potential to apply some of these approaches at the field scale.

  19. Fatigue Crack Growth Database for Damage Tolerance Analysis

    NASA Technical Reports Server (NTRS)

    Forman, R. G.; Shivakumar, V.; Cardinal, J. W.; Williams, L. C.; McKeighan, P. C.

    2005-01-01

    The objective of this project was to begin the process of developing a fatigue crack growth database (FCGD) of metallic materials for use in damage tolerance analysis of aircraft structure. For this initial effort, crack growth rate data in the NASGRO (Registered trademark) database, the United States Air Force Damage Tolerant Design Handbook, and other publicly available sources were examined and used to develop a database that characterizes crack growth behavior for specific applications (materials). The focus of this effort was on materials for general commercial aircraft applications, including large transport airplanes, small transport commuter airplanes, general aviation airplanes, and rotorcraft. The end products of this project are the FCGD software and this report. The specific goal of this effort was to present fatigue crack growth data in three usable formats: (1) NASGRO equation parameters, (2) Walker equation parameters, and (3) tabular data points. The development of this FCGD will begin the process of developing a consistent set of standard fatigue crack growth material properties. It is envisioned that the end product of the process will be a general repository for credible and well-documented fracture properties that may be used as a default standard in damage tolerance analyses.
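
    For orientation, format (2) refers to the Walker relation, which, in its textbook form (not parameter values from this database), generalizes the Paris law da/dN = C(\Delta K)^n by folding the stress ratio R into an effective stress-intensity range:

        \frac{da}{dN} = C\left[\frac{\Delta K}{(1-R)^{1-\gamma}}\right]^{n},

    where \gamma is the Walker exponent; \gamma = 1 recovers the R-independent Paris law.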

  20. Rotary wave-ejector enhanced pulse detonation engine

    NASA Astrophysics Data System (ADS)

    Nalim, M. R.; Izzy, Z. A.; Akbari, P.

    2012-01-01

    The use of a non-steady ejector based on wave rotor technology is modeled for pulse detonation engine performance improvement and for compatibility with turbomachinery components in hybrid propulsion systems. The rotary wave ejector device integrates a pulse detonation process with an efficient momentum transfer process in specially shaped channels of a single wave-rotor component. In this paper, a quasi-one-dimensional numerical model is developed to help design the basic geometry and operating parameters of the device. The unsteady combustion and flow processes are simulated and compared with a baseline PDE without ejector enhancement. A preliminary performance assessment is presented for the wave ejector configuration, considering the effect of key geometric parameters, which are selected for high specific impulse. It is shown that the rotary wave ejector concept has significant potential for thrust augmentation relative to a basic pulse detonation engine.

  1. 3D Printing Optical Engine for Controlling Material Microstructure

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Chin; Chang, Kuang-Po; Wu, Ping-Han; Wu, Chih-Hsien; Lin, Ching-Chih; Chuang, Chuan-Sheng; Lin, De-Yau; Liu, Sung-Ho; Horng, Ji-Bin; Tsau, Fang-Hei

    Controlling the cooling rate of an alloy during melting and resolidification is the most commonly used method for varying the material microstructure and, consequently, the resulting properties. However, the cooling rate of a selective laser melting (SLM) production run is restricted by a preset optimal parameter set for a dense, good-quality product, so the headroom for locally manipulating material properties within a process is marginal. In this study, we introduce an Optical Engine for locally controlling material microstructure in an SLM process. It provides an innovative method to control and adjust the thermal history of the solidification process to obtain the desired material microstructure and consequently to drastically improve quality. Process parameters are selected locally for specific material requirements according to the designed characteristics, using thermodynamic principles of the solidification process. The technique uses complex laser beam shaping with adaptive irradiation profiles to permit local control of material characteristics as desired. This technology could be useful for industrial applications in the medical implant, aerospace and automobile industries.

  2. Boltzmann Energy-based Image Analysis Demonstrates that Extracellular Domain Size Differences Explain Protein Segregation at Immune Synapses

    PubMed Central

    Burroughs, Nigel J.; Köhler, Karsten; Miloserdov, Vladimir; Dustin, Michael L.; van der Merwe, P. Anton; Davis, Daniel M.

    2011-01-01

    Immune synapses formed by T and NK cells both show segregation of the integrin ICAM1 from other proteins such as CD2 (T cell) or KIR (NK cell). However, the mechanism by which these proteins segregate remains unclear; one key hypothesis is a redistribution based on protein size. Simulations of this mechanism qualitatively reproduce observed segregation patterns, but only in certain parameter regimes. Verifying that these parameter constraints in fact hold has not been possible to date, as this requires a quantitative coupling of theory to experimental data. Here, we address this challenge, developing a new methodology for analysing and quantifying image data and its integration with biophysical models. Specifically, we fit a binding kinetics model to two-colour fluorescence data for cytoskeleton-independent synapses (2D and 3D) and test whether the observed inverse correlation between fluorophores conforms to size-dependent exclusion and, further, whether patterned states are predicted when model parameters are estimated on individual synapses. All synapses analysed satisfy these conditions, demonstrating that the mechanisms of protein redistribution have identifiable signatures in their spatial patterns. We conclude that energy processes implicit in protein-size-based segregation can drive the patterning observed in individual synapses, at least for the specific examples tested, such that no additional processes need to be invoked. This implies that biophysical processes within the membrane interface have a crucial impact on cell-cell communication and cell signalling, governing protein interactions and protein aggregation. PMID:21829338

  3. Automated Processing of Dynamic Contrast-Enhanced MRI: Correlation of Advanced Pharmacokinetic Metrics with Tumor Grade in Pediatric Brain Tumors.

    PubMed

    Vajapeyam, S; Stamoulis, C; Ricci, K; Kieran, M; Poussaint, T Young

    2017-01-01

    Pharmacokinetic parameters from dynamic contrast-enhanced MR imaging have proved useful for differentiating brain tumor grades in adults. In this study, we retrospectively reviewed dynamic contrast-enhanced perfusion data from children with newly diagnosed brain tumors and analyzed the pharmacokinetic parameters correlating with tumor grade. Dynamic contrast-enhanced MR imaging data from 38 patients were analyzed by using commercially available software. Subjects were categorized into 2 groups based on pathologic analyses consisting of low-grade (World Health Organization I and II) and high-grade (World Health Organization III and IV) tumors. Pharmacokinetic parameters were compared between the 2 groups by using linear regression models. For parameters that were statistically distinct between the 2 groups, sensitivity and specificity were also estimated. Eighteen tumors were classified as low-grade, and 20, as high-grade. The transfer constant from the blood plasma into the extracellular extravascular space (Ktrans), the rate constant from the extracellular extravascular space back into blood plasma (Kep), and the extracellular extravascular volume fraction (Ve) were all significantly correlated with tumor grade; high-grade tumors showed higher Ktrans, higher Kep, and lower Ve. Although all 3 parameters had high specificity (range, 82%-100%), Kep had the highest specificity for both grades. Optimal sensitivity was achieved for Ve, with a combined sensitivity of 76% (compared with 71% for Ktrans and Kep). Pharmacokinetic parameters derived from dynamic contrast-enhanced MR imaging can effectively discriminate low- and high-grade pediatric brain tumors. © 2017 by American Journal of Neuroradiology.

  4. A technique for automatically extracting useful field of view and central field of view images.

    PubMed

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies, by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, then the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view (UFOV) and central field of view (CFOV) images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
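
    The extraction step lends itself to a compact implementation. The sketch below is a Python stand-in for the paper's MATLAB routine, assuming a simple intensity threshold for the border and the common NEMA convention of a CFOV covering the central 75% of the UFOV; neither detail is taken from the paper.

      import numpy as np

      def extract_fov(flood, threshold_frac=0.1, cfov_frac=0.75):
          """Return (ufov, cfov) sub-images of a 2-D flood-source array."""
          mask = flood > threshold_frac * flood.max()      # drop empty border pixels
          rows = np.flatnonzero(mask.any(axis=1))
          cols = np.flatnonzero(mask.any(axis=0))
          ufov = flood[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
          h, w = ufov.shape                                # CFOV: central 75% of UFOV
          dh, dw = int(h * (1 - cfov_frac) / 2), int(w * (1 - cfov_frac) / 2)
          cfov = ufov[dh:h - dh, dw:w - dw]
          return ufov, cfov

      ufov, cfov = extract_fov(np.random.poisson(100, size=(256, 256)).astype(float))
      print(ufov.shape, cfov.shape)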

  5. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
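
    A minimal sketch of the kind of per-image check such a workflow automates is shown below; the SNR definition, ROI placement, and CSV layout are illustrative assumptions, not details of the authors' system.

      import csv, datetime
      import numpy as np

      def phantom_snr(image):
          """Illustrative SNR: mean of a central signal ROI over the standard
          deviation of a corner background ROI (ROI choices are assumptions)."""
          h, w = image.shape
          signal = image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean()
          noise = image[:h // 8, :w // 8].std()
          return signal / noise

      def append_qa_record(image, scanner_id, path="daily_qa.csv"):
          """Append today's result so a web front end can plot the time series."""
          with open(path, "a", newline="") as f:
              csv.writer(f).writerow(
                  [datetime.date.today().isoformat(), scanner_id, f"{phantom_snr(image):.2f}"]
              )

      append_qa_record(np.random.rand(256, 256) * 100 + 500, scanner_id="MRI-1")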

  6. The Rangeland Hydrology and Erosion Model: A dynamic approach for predicting soil loss on rangelands

    USDA-ARS?s Scientific Manuscript database

    In this study we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed agains...

  7. Analysis of multispectral scanner (MSS) and Thematic Mapper (TM) performance (pre-launch and post-launch)

    NASA Technical Reports Server (NTRS)

    Barker, J. L.

    1983-01-01

    Tables and graphs show the results of the spectral, radiometric, and geometric characterization of LANDSAT 4 sensors associated with imagery and of the imagery associated with sensors and processing. Specifications for the various parameters are compared with the preflight and flight values.

  8. Vapor Hydrogen Peroxide as Alternative to Dry Heat Microbial Reduction

    NASA Technical Reports Server (NTRS)

    Cash, Howard A.; Kern, Roger G.; Chung, Shirley Y.; Koukol, Robert C.; Barengoltz, Jack B.

    2006-01-01

    The Jet Propulsion Laboratory, in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal is to include this technique, with an appropriate specification, in NPG 8020.12C as a low-temperature complementary technique to the dry heat sterilization process. A series of experiments was conducted in vacuum to determine VHP process parameters that provided significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. With this knowledge of D values, sensible margins can be applied in a planetary protection specification. The outcome of this study was an optimization of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D value may be imposed, a process humidity range for which the worst-case D value may be imposed, and robustness to selected spacecraft material substrates.

  9. Statistical process control for electron beam monitoring.

    PubMed

    López-Tarjuelo, Juan; Luquero-Llopis, Naika; García-Mollá, Rafael; Quirós-Higueras, Juan David; Bouché-Babiloni, Ana; Juan-Senabre, Xavier Jordi; de Marco-Blancas, Noelia; Ferrer-Albiach, Carlos; Santos-Serra, Agustín

    2015-07-01

    To assess the electron beam monitoring statistical process control (SPC) in linear accelerator (linac) daily quality control. We present a long-term record of our measurements and evaluate which SPC-led conditions are feasible for maintaining control. We retrieved our linac beam calibration, symmetry, and flatness daily records for all electron beam energies from January 2008 to December 2013, and retrospectively studied how SPC could have been applied and which of its features could be used in the future. A set of adjustment interventions designed to maintain these parameters under control was also simulated. All phase I data was under control. The dose plots were characterized by rising trends followed by steep drops caused by our attempts to re-center the linac beam calibration. Where flatness and symmetry trends were detected they were less-well defined. The process capability ratios ranged from 1.6 to 9.3 at a 2% specification level. Simulated interventions ranged from 2% to 34% of the total number of measurement sessions. We also noted that if prospective SPC had been applied it would have met quality control specifications. SPC can be used to assess the inherent variability of our electron beam monitoring system. It can also indicate whether a process is capable of maintaining electron parameters under control with respect to established specifications by using a daily checking device, but this is not practical unless a method to establish direct feedback from the device to the linac can be devised. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
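
    For illustration, the core SPC computations (an individuals chart with moving-range-based limits and a simple capability ratio) fit in a few lines; the simulated dose readings and the ±2% specification below are placeholders, not the paper's data.

      import numpy as np

      def individuals_chart_limits(x):
          """Center line and 3-sigma limits from the average moving range (d2 = 1.128)."""
          sigma = np.abs(np.diff(x)).mean() / 1.128
          return x.mean(), x.mean() - 3 * sigma, x.mean() + 3 * sigma

      def capability_ratio(x, lsl, usl):
          """Cp-style ratio: specification width over 6-sigma process width."""
          sigma = np.abs(np.diff(x)).mean() / 1.128
          return (usl - lsl) / (6 * sigma)

      dose = 100 + np.random.normal(0, 0.3, size=200)      # simulated daily readings (%)
      cl, lcl, ucl = individuals_chart_limits(dose)
      print(f"CL={cl:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
      print(f"Cp at +/-2% spec: {capability_ratio(dose, 98, 102):.1f}")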

  10. Distinct neural markers of TVA-based visual processing speed and short-term storage capacity parameters.

    PubMed

    Wiegand, Iris; Töllner, Thomas; Habekost, Thomas; Dyrholm, Mads; Müller, Hermann J; Finke, Kathrin

    2014-08-01

    An individual's visual attentional capacity is characterized by 2 central processing resources, visual perceptual processing speed and visual short-term memory (vSTM) storage capacity. Based on Bundesen's theory of visual attention (TVA), independent estimates of these parameters can be obtained from mathematical modeling of performance in a whole report task. The framework's neural interpretation (NTVA) further suggests distinct brain mechanisms underlying these 2 functions. Using an interindividual difference approach, the present study was designed to establish the respective ERP correlates of both parameters. Participants with higher compared to participants with lower processing speed were found to show significantly reduced visual N1 responses, indicative of higher efficiency in early visual processing. By contrast, for participants with higher relative to lower vSTM storage capacity, contralateral delay activity over visual areas was enhanced while overall nonlateralized delay activity was reduced, indicating that holding (the maximum number of) items in vSTM relies on topographically specific sustained activation within the visual system. Taken together, our findings show that the 2 main aspects of visual attentional capacity are reflected in separable neurophysiological markers, validating a central assumption of NTVA. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Near-infrared spectroscopy (NIRS) for a real time monitoring of the biogas process.

    PubMed

    Stockl, Andrea; Lichti, Fabian

    2018-01-01

    In this research project, near-infrared spectroscopy (NIRS) was applied to monitor the content of specific process parameters in anaerobic digestion. A laboratory-scale biogas digester was constantly fed every four hours with maize and grass silage to keep a base load with an organic loading rate (OLR) of 2.5 kg oDM/(m³·d). Daily impact loads with shredded wheat up to an OLR of 8 kg oDM/(m³·d) were added in order to generate peaks in the parameters tested. The developed calibration models are capable of showing changes in process parameters such as volatile fatty acids (VFA), propionic acid, total inorganic carbon (TIC), and the ratio of the volatile fatty acids to the carbonate buffer (VFA/TIC). Based on the calibration of the models for VFA and TIC, the values could be predicted with an R² of 0.94 and 0.97, respectively. Moreover, the residual prediction deviations were 4.0 and 6.0 for VFA and TIC, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
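
    Calibrations of this kind are commonly built with partial least squares (PLS) regression; the sketch below assumes PLS and synthetic spectra purely for illustration, since the excerpt does not state the authors' regression method.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      spectra = rng.normal(size=(120, 400))              # 120 samples x 400 wavelengths
      vfa = spectra[:, 50:60].sum(axis=1) + 0.1 * rng.normal(size=120)  # toy target

      pls = PLSRegression(n_components=8)
      pred = cross_val_predict(pls, spectra, vfa, cv=10).ravel()

      r2 = 1 - np.sum((vfa - pred) ** 2) / np.sum((vfa - vfa.mean()) ** 2)
      rpd = vfa.std() / np.sqrt(np.mean((vfa - pred) ** 2))  # residual prediction deviation
      print(f"R^2 = {r2:.2f}, RPD = {rpd:.1f}")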

  12. Radionuclide transfer in marine coastal ecosystems, a modelling study using metabolic processes and site data.

    PubMed

    Konovalenko, L; Bradshaw, C; Kumblad, L; Kautsky, U

    2014-07-01

    This study implements new site-specific data and an improved process-based transport model for 26 elements (Ac, Ag, Am, Ca, Cl, Cm, Cs, Ho, I, Nb, Ni, Np, Pa, Pb, Pd, Po, Pu, Ra, Se, Sm, Sn, Sr, Tc, Th, U, Zr), and validates model predictions with site measurements and literature data. The model was applied in the safety assessment of a planned nuclear waste repository in Forsmark, Öregrundsgrepen (Baltic Sea). Radionuclide transport models are central in radiological risk assessments to predict radionuclide concentrations in biota and doses to humans. Usually concentration ratios (CRs), the ratio of the measured radionuclide concentration in an organism to the concentration in water, drive such models. However, CRs vary with space and time, and CR estimates for many organisms are lacking. In the model used in this study, radionuclides were assumed to follow the circulation of organic matter in the ecosystem, regulated by radionuclide-specific mechanisms and the metabolic rates of the organisms. Most input parameters were represented by log-normally distributed probability density functions (PDFs) to account for parameter uncertainty. Generally, modelled CRs for grazers, benthos, zooplankton, and fish for the 26 elements were in good agreement with site-specific measurements. The uncertainty was reduced when the model was parameterized with site data, and modelled CRs were most similar to measured values for particle-reactive elements and for primary consumers. This study clearly demonstrated that it is necessary to validate models with more than just a few elements (e.g., Cs, Sr) in order to make them robust. The use of PDFs as input parameters, rather than averages or best estimates, enabled the estimation of the probable range of modelled CR values for the organism groups, an improvement over models that only estimate means. Using a mechanistic model that is constrained by ecological processes enables (i) the evaluation of the relative importance of food and water uptake pathways and of processes such as assimilation and excretion, (ii) the possibility to extrapolate within element groups (a common requirement in many risk assessments when initial model parameters are scarce), and (iii) predictions of radionuclide uptake in the ecosystem after changes in ecosystem structure or environmental conditions. These features are important for the long-term (>1000 year) risk assessments that need to be considered for a deep nuclear waste repository. Copyright © 2013. Published by Elsevier Ltd.
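
    The log-normal parameter PDFs lend themselves to straightforward Monte Carlo propagation. The sketch below illustrates the idea with a toy steady-state uptake/excretion model; the model and its parameter values are invented for illustration and are not the paper's transport model.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Hypothetical log-normally distributed inputs (median and log-space sigma):
      uptake = rng.lognormal(mean=np.log(0.8), sigma=0.4, size=n)      # uptake rate
      excretion = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=n)  # excretion rate

      # Toy steady-state concentration ratio: CR = uptake / excretion.
      cr = uptake / excretion

      low, median, high = np.percentile(cr, [2.5, 50, 97.5])
      print(f"CR median {median:.1f}, 95% interval [{low:.1f}, {high:.1f}]")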

  13. Considerations for setting the specifications of vaccines.

    PubMed

    Minor, Philip

    2012-05-01

    The specifications of vaccines are determined by the particular product and its method of manufacture, which raise issues unique to the vaccine in question. However, the general principles are shared, including the need to have sufficient active material to immunize a very high proportion of recipients, an acceptable level of safety, which may require specific testing or may come from the production process, and an acceptable low level of contamination with unwanted materials, which may include infectious agents or materials used in production. These principles apply to the earliest smallpox vaccines and the most recent recombinant vaccines, such as those against HPV. Manufacturing development includes more precise definitions of the product through improved tests and tighter control of the process parameters. Good manufacturing practice plays a major role, which is likely to increase in importance in assuring product quality almost independent of end-product specifications.

  14. Fracture toughness of ultrashort pulse-bonded fused silica

    NASA Astrophysics Data System (ADS)

    Richter, S.; Naumann, F.; Zimmermann, F.; Tünnermann, A.; Nolte, S.

    2016-02-01

    We determined the bond interface strength of ultrashort pulse laser-welded fused silica for different processing parameters. To this end, we used a high repetition rate ultrashort pulse laser system to inscribe parallel welding lines with a specific V-shaped design into optically contacted fused silica samples. Afterward, we applied a micro-chevron test to measure the fracture toughness and surface energy of the laser-inscribed welding seams. We analyzed the influence of different processing parameters such as laser repetition rate and line separation on the fracture toughness and fracture surface energy. When welding the entire surface, a fracture toughness of 0.71 MPa·m^(1/2), about 90% of that of the pristine bulk material (≈0.8 MPa·m^(1/2)), is obtained.

  15. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper contributes to finding optimal PID controller parameters using particle swarm optimization (PSO), the genetic algorithm (GA), and the simulated annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned. Two different fitness functions, integral time absolute error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems, a coupled tank system and a DC motor. Finally, a comparative study has been done between the algorithms based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
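
    A minimal sketch of PSO-based PID tuning with an ITAE cost is shown below, using a generic first-order plant; the plant, step setpoint, and PSO settings are invented for illustration and are not the paper's benchmark systems.

      import numpy as np

      def itae_cost(gains, tau=2.0, dt=0.01, t_end=10.0):
          """Simulate a unit-step response of a PID loop around dx/dt = (-x + u)/tau."""
          kp, ki, kd = gains
          x = integral = prev_err = 0.0
          cost = 0.0
          for k in range(int(t_end / dt)):
              t = k * dt
              err = 1.0 - x
              integral += err * dt
              deriv = (err - prev_err) / dt
              u = kp * err + ki * integral + kd * deriv
              x += dt * (-x + u) / tau
              if not np.isfinite(x):
                  return np.inf                      # unstable gains: penalize heavily
              prev_err = err
              cost += t * abs(err) * dt              # ITAE integrand
          return cost

      rng = np.random.default_rng(1)
      n_particles, n_iters = 20, 50
      pos = rng.uniform(0.0, 5.0, size=(n_particles, 3))   # (Kp, Ki, Kd)
      vel = np.zeros_like(pos)
      pbest, pbest_cost = pos.copy(), np.array([itae_cost(p) for p in pos])
      gbest = pbest[pbest_cost.argmin()].copy()

      for _ in range(n_iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, 0.0, 10.0)
          cost = np.array([itae_cost(p) for p in pos])
          improved = cost < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
          gbest = pbest[pbest_cost.argmin()].copy()

      print("best (Kp, Ki, Kd):", np.round(gbest, 3), "ITAE:", round(pbest_cost.min(), 4))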

  16. Capacitance of a highly ordered array of nanocapacitors: Model and microscopy

    NASA Astrophysics Data System (ADS)

    Cortés, A.; Celedón, C.; Ulloa, P.; Kepaptsoglou, D.; Häberle, P.

    2011-11-01

    This manuscript briefly describes the process used to build an ordered porous array in an anodic aluminum oxide (AAO) membrane, filled with multiwall carbon nanotubes (MWCNTs). The MWCNTs were grown directly inside the membrane through chemical vapor deposition (CVD). The role of the CNTs is to provide narrow metal electrodes in contact with a dielectric surface barrier, hence forming a capacitor. This procedure allows the construction of an array of 10^10 parallel nano-spherical capacitors/cm². A central part of this contribution is the use of physical parameters obtained from processing transmission electron microscopy (TEM) images to predict the specific capacitance of the AAO arrays. Electrical parameters were obtained by solving Laplace's equation through finite element methods (FEMs).

  17. AIRS Maps from Space Processing Software

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
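
    A minimal sketch of the core re-projection step, binning swath samples into a quarter-degree global grid from which daily products can be averaged (the function and array names are illustrative, not the package's actual code):

      import numpy as np

      def grid_swath(lat, lon, values, res=0.25):
          """Accumulate swath samples into a (720, 1440) quarter-degree global grid."""
          nlat, nlon = int(180 / res), int(360 / res)
          total = np.zeros((nlat, nlon))
          count = np.zeros((nlat, nlon))
          i = np.clip(((lat + 90.0) / res).astype(int), 0, nlat - 1)
          j = np.clip(((lon + 180.0) / res).astype(int), 0, nlon - 1)
          np.add.at(total, (i, j), values)
          np.add.at(count, (i, j), 1)
          with np.errstate(invalid="ignore"):
              return np.where(count > 0, total / count, np.nan)

      # Daily grids produced this way can be stacked and averaged with np.nanmean.
      grid = grid_swath(np.random.uniform(-90, 90, 10_000),
                        np.random.uniform(-180, 180, 10_000),
                        np.random.normal(280, 10, 10_000))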

  18. Stratification based on reproductive state reveals contrasting patterns of age-related variation in demographic parameters in the kittiwake

    USGS Publications Warehouse

    Cam, E.; Monnat, J.-Y.

    2000-01-01

    Heterogeneity in individual quality can be a major obstacle when interpreting age-specific variation in life-history traits. Heterogeneity is likely to lead to within-generation selection, and patterns observed at the population level may result from the combination of hidden patterns specific to subpopulations. Population-level patterns are not relevant to hypotheses concerning the evolution of age-specific reproductive strategies if they differ from patterns at the individual level. We addressed the influence of age and a variable used as a surrogate of quality (yearly reproductive state) on survival and breeding probability in the kittiwake. We found evidence of an effect of age and quality on both demographic parameters. Patterns observed in breeders are consistent with the selection hypothesis, which predicts age-related increases in survival and in traits positively correlated with survival. Our results also reveal unexpected age effects specific to subgroups: the influence of age on survival and future breeding probability is not the same in nonbreeders and breeders. These patterns are observed in higher-quality breeding habitats, where the influence of extrinsic factors on breeding state is the weakest. Moreover, there is slight evidence of an influence of sex on breeding probability (not on survival), but the same overall pattern is observed in both sexes. Our results support the hypothesis that age-related variation in demographic parameters observed at the population level is partly shaped by heterogeneity among individuals. They also suggest processes specific to subpopulations. Recent theoretical developments lay emphasis on integrating sources of heterogeneity into optimization models to account for apparently 'sub-optimal' empirical patterns. Incorporation of sources of heterogeneity is also the key to investigating age-related reproductive strategies in heterogeneous populations. Thwarting 'heterogeneity's ruses' has become a major challenge for detecting and understanding natural processes, and for a constructive confrontation between empirical and theoretical studies.

  19. Study of Material Consolidation at Higher Throughput Parameters in Selective Laser Melting of Inconel 718

    NASA Technical Reports Server (NTRS)

    Prater, Tracie

    2016-01-01

    Selective Laser Melting (SLM) is a powder bed fusion additive manufacturing process used increasingly in the aerospace industry to reduce the cost, weight, and fabrication time for complex propulsion components. SLM stands poised to revolutionize propulsion manufacturing, but there are a number of technical questions that must be addressed in order to achieve rapid, efficient fabrication and ensure adequate performance of parts manufactured using this process in safety-critical flight applications. Previous optimization studies for SLM using the Concept Laser M1 and M2 machines at NASA Marshall Space Flight Center have centered on machine default parameters. The objective of this work is to characterize the impact of higher throughput parameters (a previously unexplored region of the manufacturing operating envelope for this application) on material consolidation. In phase I of this work, density blocks were analyzed to explore the relationship between build parameters (laser power, scan speed, hatch spacing, and layer thickness) and material consolidation (assessed in terms of as-built density and porosity). Phase II additionally considers the impact of post-processing, specifically hot isostatic pressing and heat treatment, as well as deposition pattern on material consolidation in the same higher energy parameter regime considered in the phase I work. Density and microstructure represent the "first-gate" metrics for determining the adequacy of the SLM process in this parameter range and, as a critical initial indicator of material quality, will factor into a follow-on DOE that assesses the impact of these parameters on mechanical properties. This work will contribute to creating a knowledge base (understanding material behavior in all ranges of the AM equipment operating envelope) that is critical to transitioning AM from the custom low rate production sphere it currently occupies to the world of mass high rate production, where parts are fabricated at a rapid rate with confidence that they will meet or exceed all stringent functional requirements for spaceflight hardware. These studies will also provide important data on the sensitivity of material consolidation to process parameters that will inform the design and development of future flight articles using SLM.
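
    These four build parameters are often collapsed into a single volumetric energy density figure when screening SLM parameter sets; a small worked example of that standard metric follows (the numbers are illustrative, not the study's settings).

      def volumetric_energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
          """E = P / (v * h * t), in J/mm^3, a common SLM screening metric."""
          return power_w / (speed_mm_s * hatch_mm * layer_mm)

      # Illustrative values: 370 W laser, 1400 mm/s scan speed,
      # 0.13 mm hatch spacing, 0.045 mm layer thickness.
      e = volumetric_energy_density(370, 1400, 0.13, 0.045)
      print(f"{e:.1f} J/mm^3")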

  20. An Attempt of Formalizing the Selection Parameters for Settlements Generalization in Small-Scales

    NASA Astrophysics Data System (ADS)

    Karsznia, Izabela

    2014-12-01

    The paper covers one of the most important problems concerning context-sensitive settlement selection for small-scale maps. So far, no formal parameters for small-scale settlement generalization have been specified, hence the problem is an important and innovative challenge. It is also crucial from the practical point of view, as it is necessary to develop appropriate generalization algorithms for the General Geographic Objects Database, an essential Spatial Data Infrastructure component in Poland. The author proposes and verifies quantitative generalization parameters for the settlement selection process in small-scale maps. The selection of settlements was carried out in two research areas, Lower Silesia and Łódź Province. Based on the conducted analysis, appropriate context-sensitive settlement selection parameters have been defined. Particular effort has been made to develop a methodology of quantitative settlement selection which would be useful in automation processes and would make it possible to keep the specifics of generalized objects unchanged.

  1. Physics of Canopy Boundary Layer Resistance for Better Quantification of Sensitivity of Deforestation Scenarios

    NASA Astrophysics Data System (ADS)

    Ragi, K. B.; Patel, R.

    2015-12-01

    A great deal of research has focused on deforestation scenarios in the tropical rainforests. Though these efforts are useful for understanding the rainforests' response to climate, a systematic understanding of the uncertainties in the representation of vegetation-related physical processes, gained through sensitivity studies, is an essential precursor to understanding the real role of vegetation in changing the climate. It is understood that dense vegetation fluxes energy and moisture to the atmosphere. But how much a specific process, or group of processes, in the surface conditions of a specific area contributes to the fluxes of energy, moisture, and tracers is unknown, owing to a lack of process sensitivity studies, and uncertain, owing to malfunctioning process representations. In this presentation, we identify, through process sensitivity studies, a faulty parameterization that distorts energy and moisture fluxes to the atmosphere. The model we employed is the Common Land Model 2014, and the area we chose is the Congolese rainforest. Through sensitivity studies in land surface models, we have discovered a flaw in the leaf boundary layer resistance (LBLR), especially in dense forest regions. The LBLR is over-parameterized with a constant heat transfer coefficient, a characteristic leaf dimension, and the friction velocity, yet it remains too crude because it overlooks the significant and complex physics of turbulence and of the canopy roughness boundary layer needed to represent it realistically. Our sensitivity results show the deficiency of this process, and we have formulated a canopy boundary layer resistance, in place of the LBLR, that depends on variables such as LAI, roughness length, and vegetation temperature, using appropriate thermo-fluid dynamical principles. We are running sensitivity experiments with the new formulations to set parameter values where data are not yet available. This effort should lead to better physics for land-use change studies and motivates the retrieval of new parameters, such as leaf mass per area and the specific heat capacity of vegetation, from satellite and field experiments.

  2. Analysis of Air Traffic Track Data with the AutoBayes Synthesis System

    NASA Technical Reports Server (NTRS)

    Schumann, Johann Martin Philip; Cate, Karen; Lee, Alan G.

    2010-01-01

    The Next Generation Air Traffic System (NGATS) is aiming to provide substantial computer support for air traffic controllers. Algorithms for the accurate prediction of aircraft movements are of central importance for such software systems, but trajectory prediction has to work reliably in the presence of unknown parameters and uncertainties. We are using the AutoBayes program synthesis system to generate customized data analysis algorithms that process large sets of aircraft radar track data in order to estimate parameters and uncertainties. In this paper, we present how the tasks of finding structure in track data, estimating important parameters in climb trajectories, and detecting continuous descent approaches can be accomplished with compact task-specific AutoBayes specifications. We present an overview of the AutoBayes architecture and describe how its schema-based approach generates customized analysis algorithms, documented C/C++ code, and detailed mathematical derivations. Results of experiments with actual air traffic control data are discussed.

  3. Approach to quantify human dermal skin aging using multiphoton laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Puschmann, Stefan; Rahn, Christian-Dennis; Wenck, Horst; Gallinat, Stefan; Fischer, Frank

    2012-03-01

    Extracellular skin structures in human skin are impaired during intrinsic and extrinsic aging. Assessment of these dermal changes is conducted by subjective clinical evaluation and histological and molecular analysis. We aimed to develop a new parameter for the noninvasive quantitative determination of dermal skin alterations utilizing the high-resolution three-dimensional multiphoton laser scanning microscopy (MPLSM) technique. To quantify structural differences between chronically sun-exposed and sun-protected human skin, the respective collagen-specific second harmonic generation and the elastin-specific autofluorescence signals were recorded in young and elderly volunteers using the MPLSM technique. After image processing, the elastin-to-collagen ratio (ELCOR) was calculated. Results show that the ELCOR parameter of volar forearm skin significantly increases with age. For elderly volunteers, the ELCOR value calculated for the chronically sun-exposed temple area is significantly augmented compared to the sun-protected upper arm area. Based on the MPLSM technology, we introduce the ELCOR parameter as a new means to quantify accurately age-associated alterations in the extracellular matrix.
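
    A minimal sketch of how such a ratio parameter can be computed from two co-registered channel images is shown below; the thresholding scheme is an assumption for illustration, not the authors' processing pipeline.

      import numpy as np

      def elcor(elastin_af, collagen_shg, thresh_frac=0.1):
          """Elastin-to-collagen ratio from two co-registered channel images:
          sum of above-threshold elastin autofluorescence over the same for SHG."""
          e = elastin_af[elastin_af > thresh_frac * elastin_af.max()].sum()
          c = collagen_shg[collagen_shg > thresh_frac * collagen_shg.max()].sum()
          return e / c

      print(elcor(np.random.rand(512, 512), np.random.rand(512, 512)))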

  4. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
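
    A minimal sketch of the parameter-ranking stage using off-the-shelf components follows; scikit-learn's SequentialFeatureSelector and KNeighborsClassifier stand in for the tool's own implementation, and the data are synthetic.

      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 10))                 # 10 Monte Carlo design parameters
      y = ((X[:, 2] > 0.5) & (X[:, 7] < -0.2)).astype(int)  # failures driven by 2 and 7

      knn = KNeighborsClassifier(n_neighbors=5)
      sfs = SequentialFeatureSelector(knn, n_features_to_select=2, cv=5)
      sfs.fit(X, y)
      print("parameters implicated in the failure:", np.flatnonzero(sfs.get_support()))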

  5. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    NASA Astrophysics Data System (ADS)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend in automation of the modern manufacturing industry, human intervention in routine, repetitive, and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of the optimal cutting tool and process parameters for metal cutting applications, using artificial intelligence techniques. Generally, the selection of the appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting tool expert, based on his knowledge base or an extensive search through a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks and tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic, and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using MathWorks MATLAB Version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for selection of the appropriate cutting tool and optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life, and tool cost.

  6. Respiration and enzymatic activities as indicators of stabilization of sewage sludge composting.

    PubMed

    Nikaeen, Mahnaz; Nafez, Amir Hossein; Bina, Bijan; Nabavi, BiBi Fatemeh; Hassanzadeh, Akbar

    2015-05-01

    The objective of this work was to study the evolution of physico-chemical and microbial parameters in the composting process of sewage sludge (SS) with pruning wastes (PW), in order to compare these parameters with respect to their applicability in the evaluation of organic matter (OM) stabilization. To evaluate the composting process and organic matter stability, different microbial activities were compared during composting of anaerobically digested SS at two volumetric ratios, 1:1 and 3:1 of PW:SS, and with two aeration techniques, aerated static piles (ASP) and turned windrows (TW). Dehydrogenase activity, fluorescein diacetate hydrolysis, and specific oxygen uptake rate (SOUR) were used as microbial activity indices. These indices were compared with traditional parameters, including temperature, pH, moisture content, organic matter, and C/N ratio. The results showed that the TW method and the 3:1 (PW:SS) proportion were superior to the ASP method and the 1:1 proportion, since the former accelerate the composting process by catalyzing OM stabilization. Enzymatic activities and SOUR, which reflect microbial activity, correlated well with temperature fluctuations. Based on these results, it appears that SOUR and the enzymatic activities are useful parameters for monitoring the stabilization of SS compost. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets

    PubMed Central

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-01-01

    The purpose of this paper is to evaluate, from a real perspective, the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models based on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and correspond both to regular patterns and to variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801

  8. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets.

    PubMed

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-03-03

    The purpose of this paper is to evaluate, from a real perspective, the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models based on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and correspond both to regular patterns and to variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage.

  9. Quantified Event Automata: Towards Expressive and Efficient Runtime Monitors

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Falcone, Ylies; Havelund, Klaus; Reger, Giles; Rydeheard, David

    2012-01-01

    Runtime verification is the process of checking a property on a trace of events produced by the execution of a computational system. Runtime verification techniques have recently focused on parametric specifications where events take data values as parameters. These techniques exist on a spectrum inhabited by both efficient and expressive techniques. These characteristics are usually shown to be conflicting: in state-of-the-art solutions, efficiency is obtained at the cost of loss of expressiveness and vice versa. To seek a solution to this conflict, we explore a new point on the spectrum by defining an alternative runtime verification approach. We introduce a new formalism for concisely capturing expressive specifications with parameters. Our technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.

  10. Lunar ash flow with heat transfer.

    NASA Technical Reports Server (NTRS)

    Pai, S. I.; Hsieh, T.; O'Keefe, J. A.

    1972-01-01

    The most important heat-transfer process in the ash flow under consideration is heat convection. Besides the four important nondimensional parameters of isothermal ash flow (Pai et al., 1972), we have three additional important nondimensional parameters: the ratio of specific heats of the gas, the ratio of the specific heat of the solid particles to that of the gas, and the Prandtl number. We reexamine the one-dimensional steady ash flow discussed by Pai et al. (1972) by including the effects of heat transfer. Numerical results for the pressure, temperature, and density of the gas, the velocities of gas and solid particles, and the volume fraction of solid particles as functions of altitude are presented for various values of the Jeffreys number, initial velocity ratio, and two different gas species (steam and hydrogen).

  11. Constraints on the Dynamical Environments of Supermassive Black-Hole Binaries Using Pulsar-Timing Arrays.

    PubMed

    Taylor, Stephen R; Simon, Joseph; Sampson, Laura

    2017-05-05

    We introduce a technique for gravitational-wave analysis, where Gaussian process regression is used to emulate the strain spectrum of a stochastic background by training on population-synthesis simulations. This leads to direct Bayesian inference on astrophysical parameters. For pulsar timing arrays specifically, we interpolate over the parameter space of supermassive black-hole binary environments, including three-body stellar scattering, and evolving orbital eccentricity. We illustrate our approach on mock data, and assess the prospects for inference with data similar to the NANOGrav 9-yr data release.
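
    A minimal sketch of the emulation idea follows, training a Gaussian process on a handful of simulated spectra and predicting at new parameter values; the one-parameter toy model and scikit-learn regressor are stand-ins for the paper's population-synthesis pipeline.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)

      # Toy training set: environment parameter -> log characteristic strain in one
      # frequency bin (in reality this comes from population-synthesis simulations).
      theta = rng.uniform(0.0, 1.0, size=(30, 1))          # e.g., stellar density proxy
      log_hc = -15.0 + 0.5 * np.sin(3 * theta[:, 0]) + 0.02 * rng.normal(size=30)

      gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=0.2),
                                    alpha=1e-4, normalize_y=True).fit(theta, log_hc)

      theta_new = np.array([[0.3], [0.8]])
      mean, std = gp.predict(theta_new, return_std=True)
      print(np.round(mean, 3), np.round(std, 3))           # emulated spectrum + error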

  12. CR softcopy display presets based on optimum visualization of specific findings

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Gould, Robert G.; Webb, W. R.

    1999-07-01

    The purpose of this research is to assess the utility of providing presets for computed radiography (CR) softcopy display based not on window/level settings, but on image processing optimized for visualization of specific findings, pathologies, etc. Clinical chest images are acquired using an Agfa ADC 70 CR scanner and transferred over the PACS network to an image processing station which has the capability to perform multiscale contrast equalization. The optimal image processing settings per finding are developed in conjunction with a thoracic radiologist by manipulating the multiscale image contrast amplification algorithm parameters. Softcopy display of images processed with finding-specific settings is compared with the standard default image presentation for fifty cases of each category. Comparison is scored using a five-point scale, with positive one and two denoting that the standard presentation is preferred over the finding-specific presets, negative one and two denoting that the finding-specific preset is preferred over the standard presentation, and zero denoting no difference. Presets have been developed for pneumothorax, and clinical cases are currently being collected in preparation for formal clinical trials. Subjective assessments indicate a preference for the optimized-preset presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.

  13. Low cost composite manufacturing utilizing intelligent pultrusion and resin transfer molding (IPRTM)

    NASA Astrophysics Data System (ADS)

    Bradley, James E.; Wysocki, Tadeusz S., Jr.

    1993-02-01

    This article describes an innovative method for the economical manufacturing of large, intricately shaped tubular composite parts. Proprietary intelligent process control techniques are combined with standard pultrusion and RTM methodologies to provide high part throughput, performance, and quality while substantially reducing scrap, rework costs, and labor requirements. On-line process monitoring and control is achieved through a smart tooling interface consisting of modular zone tiles installed on part-specific die assemblies. Real-time archiving of process run parameters provides enhanced SPC and SQC capabilities.

  14. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  15. An extensive analysis of various texture feature extractors to detect Diabetes Mellitus using facial specific regions.

    PubMed

    Shu, Ting; Zhang, Bob; Yan Tang, Yuan

    2017-04-01

    Researchers have recently discovered that Diabetes Mellitus can be detected through a non-invasive computerized method. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from facial specific regions at detecting Diabetes Mellitus, using eight texture extractors. The eight methods are from four texture feature families: (1) the statistical texture feature family: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern; (2) the structural texture feature family: Voronoi Tessellation; (3) the signal processing based texture feature family: Gaussian, Steerable, and Gabor filters; and (4) the model based texture feature family: Markov Random Field. In order to determine the most appropriate extractor with optimal parameter(s), various parameter settings of each extractor are experimented with. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) are used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number=256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% by using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
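
    A minimal sketch of the shape of the winning pipeline (a 256-bin gray-scale histogram feature fed to an SVM under 10-fold cross-validation) is shown below, using synthetic stand-in images rather than the paper's dataset.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)

      def histogram_feature(img, bins=256):
          """Normalized gray-scale histogram of an 8-bit image patch."""
          h, _ = np.histogram(img, bins=bins, range=(0, 256))
          return h / h.sum()

      # Synthetic stand-ins for facial-region patches of the two classes.
      healthy = rng.normal(120, 30, size=(231, 64, 64)).clip(0, 255)
      diabetic = rng.normal(135, 25, size=(284, 64, 64)).clip(0, 255)
      X = np.array([histogram_feature(im) for im in np.concatenate([healthy, diabetic])])
      y = np.array([0] * 231 + [1] * 284)

      scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
      print(f"10-fold accuracy: {scores.mean():.3f}")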

  16. Optical characteristics and parameters of gas-discharge plasma in a mixture of mercury dibromide vapor with neon

    NASA Astrophysics Data System (ADS)

    Malinina, A. A.; Malinin, A. N.

    2013-12-01

    Results are presented from studies of the optical characteristics and parameters of plasma of a dielectric barrier discharge in a mixture of mercury dibromide vapor with neon, the working medium of a non-coaxial exciplex gas-discharge emitter. The electron energy distribution function, the transport characteristics, the specific power losses for electron processes, the electron density and temperature, and the rate constants for the processes of elastic and inelastic electron scattering by the working mixture components are determined as functions of the reduced electric field. The rate constant of the process leading to the formation of exciplex mercury monobromide molecules is found to be 1.6 × 10⁻¹⁴ m³/s for a reduced electric field of E/N = 15 Td, at which the maximum emission intensity in the blue-green spectral region (λmax = 502 nm) was observed in this experiment.

  17. Process Parameter Evaluation and Optimization for Advanced Material Development Final Report CRADA No. TC-1234-96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrubesh, L.; McGann, T. W.

    This project was established as a three-year collaboration to produce and characterize silica aerogels prepared by a Rapid Supercritical Extraction (RSCE) process to meet BNA, Inc. application requirements. The objectives of this project were to study the parameters necessary to produce optimized aerogel parts with narrowly specified properties and to establish the range and limits of the process for producing such aerogels. The project also included development of new aerogel materials useful for high-temperature applications. The results of the project were expected to set the conditions necessary to produce quantities of aerogels having particular specifications such as size, shape, density, and mechanical strength. BNA, Inc. terminated the project on April 7, 1999, 10 months prior to the anticipated completion date, due to termination of corporate funding for the project. The technical accomplishments achieved are outlined in Paragraph C below.

  18. Economics of food irradiation

    NASA Astrophysics Data System (ADS)

    Kunstadt, Peter; Steeves, Colyn; Beaulieu, Daniel

    1993-07-01

    The number of products being radiation processed worldwide is constantly increasing and today includes such diverse items as medical disposables, fruits and vegetables, spices, meats, seafoods and waste products. This range of products to be processed has resulted in a wide range of irradiator designs and capital and operating cost requirements. This paper discusses the economics of low dose food irradiation applications and the effects of various parameters on unit processing costs. It provides a model for calculating specific unit processing costs by correlating known capital costs with annual operating costs and annual throughputs. It is intended to provide the reader with a general knowledge of how unit processing costs are derived.
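
    The described costing model can be captured in a few lines; in the sketch below, capital is annualized with the standard capital recovery factor, and every numeric figure is a placeholder for illustration, not a value from the paper.

      def unit_processing_cost(capital, interest, life_years, annual_operating, annual_throughput):
          """Unit cost = (annualized capital + annual operating) / annual throughput.
          Capital is annualized with the standard capital recovery factor (CRF)."""
          crf = interest * (1 + interest) ** life_years / ((1 + interest) ** life_years - 1)
          return (capital * crf + annual_operating) / annual_throughput

      # Placeholder figures: $8M irradiator, 8% interest, 20-year life,
      # $0.9M/yr operating cost, 50,000 t/yr throughput.
      print(f"${unit_processing_cost(8e6, 0.08, 20, 0.9e6, 50_000):.2f} per tonne")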

  19. Rational Design of Molecular Gelator - Solvent Systems Guided by Solubility Parameters

    NASA Astrophysics Data System (ADS)

    Lan, Yaqi

    Self-assembled architectures, such as molecular gels, have attracted wide interest among chemists, physicists, and engineers during the past decade. However, the mechanism behind self-assembly remains largely unknown, and no capability exists to predict a priori whether a small molecule will gelate a specific solvent or not. The process of self-assembly in molecular gels is intricate and must balance parameters influencing solubility against the contrasting forces that govern epitaxial growth into axially symmetric elongated aggregates. Although the gelator-gelator interactions are of paramount importance in understanding gelation, the solvent-gelator specific (i.e., H-bonding) and nonspecific (dipole-dipole, dipole-induced, and instantaneous dipole-induced forces) intermolecular interactions are equally important. Solvent properties mediate the self-assembly of molecular gelators into their self-assembled fibrillar networks. Herein, solubility parameters of solvents, ranging from partition coefficients (logP), to Henry's law constants (HLC), to solvatochromic ET(30) parameters, to Kamlet-Taft parameters (β, α, and π*), to Hansen solubility parameters (δp, δd, δh), etc., are correlated with the gelation ability of numerous classes of molecular gelators. Advanced solvent clustering techniques have led to the development of a priori tools that can identify the solvents that will be gelled and not gelled by molecular gelators. These tools will greatly aid in the development of novel gelators without solely relying on serendipitous discoveries.

  20. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

    In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, the technique is applied to modeled dust data: vertically integrated dust concentrations are used to derive wind information, and the results can be compared to the wind vector fields that served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
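
    A minimal sketch of the structure tensor computation at the heart of this technique follows, in its 2-D single-image variant for estimating local orientation; the full spatio-temporal estimator used for motion adds a time axis.

      import numpy as np
      from scipy import ndimage

      def structure_tensor_orientation(img, sigma=2.0):
          """Local orientation from the smoothed 2-D structure tensor J = <grad I grad I^T>."""
          ix = ndimage.sobel(img, axis=1, mode="reflect")
          iy = ndimage.sobel(img, axis=0, mode="reflect")
          jxx = ndimage.gaussian_filter(ix * ix, sigma)
          jxy = ndimage.gaussian_filter(ix * iy, sigma)
          jyy = ndimage.gaussian_filter(iy * iy, sigma)
          # Orientation of the dominant eigenvector, per pixel.
          return 0.5 * np.arctan2(2 * jxy, jxx - jyy)

      theta = structure_tensor_orientation(np.random.rand(128, 128))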

  1. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics, which motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which offers the possibility to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
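
    A minimal sketch of generating normally distributed parameter samples by Latin hypercube sampling follows; scipy's qmc module stands in for the optimal-LHS tool used in the paper, and the vehicle parameter means and standard deviations are placeholders.

      import numpy as np
      from scipy.stats import norm, qmc

      # Three uncertain vehicle parameters: mass, yaw inertia, rear cornering stiffness.
      mean = np.array([1500.0, 2500.0, 80_000.0])
      std = np.array([75.0, 150.0, 8_000.0])

      sampler = qmc.LatinHypercube(d=3, seed=0)
      u = sampler.random(n=100)                  # stratified samples on the unit cube
      samples = norm.ppf(u) * std + mean         # map to the normal distributions

      print(samples.mean(axis=0).round(1))       # close to the specified means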

  2. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell has been designed and tested to deliver high capacity at a C/1.5 discharge rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 °C capacity of 113.9 Ah is the highest yet achieved at a discharge rate this high in the 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters, performance, and future test plans are described.

  3. The Methodology of Calculation of Cutting Forces When Machining Composite Materials

    NASA Astrophysics Data System (ADS)

    Rychkov, D. A.; Yanyushkin, A. S.

    2016-08-01

    Cutting of composite materials has specific features and differs from the machining of metals; in particular, it is characterized by intense wear of the cutting tool. An important criterion in selecting process parameters for composite machining is the magnitude of the cutting forces, which depends on many factors and is usually determined experimentally, which is not always practical. This study develops a method for determining the cutting forces when machining composite materials and compares the calculated values with measured ones. The methodology for calculating cutting forces takes into account the specific features and degree of wear of the cutting tool, the strength properties of the processed material, and the cutting conditions. Experimental studies were conducted by milling fiberglass with a cutter equipped with VK3M hard-metal inserts. The discrepancy between the calculated and the measured values of the cutting force does not exceed 10%.

  4. Location specific solidification microstructure control in electron beam melting of Ti-6Al-4V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narra, Sneha P.; Cunningham, Ross; Beuth, Jack

    Relationships between prior beta grain size in solidified Ti-6Al-4V and melting process parameters in the Electron Beam Melting (EBM) process are investigated. Samples are built by varying a machine-dependent proprietary speed function to cover the process space. Optical microscopy is used to measure prior beta grain widths and assess the number of prior beta grains present in a melt pool in the raster region of the build. Despite the complicated evolution of beta grain sizes, the beta grain width scales with melt pool width. The resulting understanding of the relationship between primary machine variables and prior beta grain widths is a key step toward enabling the location specific control of as-built microstructure in the EBM process. Control of grain width in separate specimens and within a single specimen is demonstrated.

  5. A review of biocompatible metal injection moulding process parameters for biomedical applications.

    PubMed

    Hamidi, M F F A; Harun, W S W; Samykano, M; Ghani, S A C; Ghazalli, Z; Ahmad, F; Sulong, A B

    2017-09-01

    Biocompatible metals have been revolutionizing the biomedical field, predominantly in human implant applications, where these metals are widely used as substitutes for, or to restore the function of, degenerated tissues or organs. Powder metallurgy techniques, specifically the metal injection moulding (MIM) process, have been employed for the fabrication of controlled porous structures used for dental and orthopaedic surgical implants. The porous metal implant allows bony tissue ingrowth on the implant surface, thereby enhancing fixation and recovery. This paper elaborates a systematic classification of various biocompatible metals from the perspective of the MIM process as used in the medical industries. Three biocompatible metals are reviewed: stainless steels, cobalt alloys, and titanium alloys. The applications of MIM technology in biomedicine are discussed thoroughly, focusing primarily on the MIM process setting parameters. This paper should be of value to investigators interested in the state of the art of metal powder metallurgy, particularly MIM technology for biocompatible metal implant design and development. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Manufacturing Methods and Technology (MM&T) program. 10.6 micrometer carbon dioxide TEA (Transversely Excited Atmospheric) lasers

    NASA Astrophysics Data System (ADS)

    Luck, C. F.

    1983-06-01

    This report documents the efforts of Raytheon Company to conduct a manufacturing methods and technology (MM&T) program for 10.6 micrometer carbon dioxide TEA lasers. A set of laser parameters is given and a conforming tube design is described. Results of thermal and mechanical stress analyses are detailed along with a procedure for assembling and testing the laser tube. Also provided are purchase specifications for optics and process specifications for some of the essential operations.

  7. Development of a corn and soybean labeling procedure for use with profile parameter classification

    NASA Technical Reports Server (NTRS)

    Magness, E. R. (Principal Investigator)

    1982-01-01

    Some essential processes for the development of a green-number-based logic for identifying (labeling) crops in LANDSAT imagery are documented. The supporting data and subsequent conclusions that resulted from development of a specific labeling logic for corn and soybean crops in the United States are recorded.

  8. Parameters or Cues?

    ERIC Educational Resources Information Center

    MacWhinney, Brian

    2004-01-01

    Truscott and Sharwood Smith (henceforth T&SS) attempt to show how second language acquisition can occur without any learning. In their APT model, change depends only on the tuning of innate principles through the normal course of processing of L2. There are some features of their model that I find attractive. Specifically, their acceptance of the…

  9. Canopy gap dynamics of second-growth red spruce-northern hardwood stands in West Virginia

    Treesearch

    James S. Rentch; Thomas M. Schuler; Gregory J. Nowacki; Nathan R. Beane; W. Mark Ford

    2010-01-01

    Forest restoration requires an understanding of the natural disturbance regime of the target community and estimates of the historic range of variability of ecosystem components (composition, structure, and disturbance processes). Management prescriptions that support specific restoration activities should be consistent with these parameters. In this study, we describe...

  10. Dream controller

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L; Wang, Qiang; Chow, Andrew J

    2013-11-26

    A method and apparatus for intelligently controlling continuous process variables. A Dream Controller comprises an Intelligent Engine mechanism and a number of Model-Free Adaptive (MFA) controllers, each of which is suitable to control a process with specific behaviors. The Intelligent Engine can automatically select the appropriate MFA controller and its parameters so that the Dream Controller can be easily used by people with limited control experience and those who do not have the time to commission, tune, and maintain automatic controllers.
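
    The patented MFA internals are not reproduced here, but the selection idea can be sketched as a rule that maps crude features of an observed process response to a controller variant; all names, thresholds, and features below are hypothetical.

    ```python
    import numpy as np

    class IntelligentEngine:
        """Toy selector: inspect a recorded step response and pick a controller
        variant (not the patented MFA selection logic)."""
        def __init__(self, controllers):
            self.controllers = controllers            # behavior label -> controller

        def classify(self, t, y):
            final = y[-1]
            overshoot = y.max() > 1.05 * final        # crude behavior features
            i63 = int(np.argmax(y >= 0.63 * final))   # time to 63% of final value
            return ("oscillatory" if overshoot
                    else "sluggish" if t[i63] > 10.0 else "well-damped")

        def select(self, t, y):
            return self.controllers[self.classify(t, y)]

    engine = IntelligentEngine({"oscillatory": "MFA-A", "sluggish": "MFA-B",
                                "well-damped": "MFA-C"})
    t = np.linspace(0.0, 30.0, 301)
    y = 1.0 - np.exp(-t / 12.0)                       # slow first-order response
    print(engine.select(t, y))                        # -> "MFA-B"
    ```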

  11. Experimental investigation of orbitally shaken bioreactor hydrodynamics

    NASA Astrophysics Data System (ADS)

    Reclari, Martino; Dreyer, Matthieu; Farhat, Mohamed

    2010-11-01

    The growing interest in the use of orbitally shaken bioreactors for mammalian cells cultivation raises challenging hydrodynamic issues. Optimizations of mixing and oxygenation, as well as similarity relations between different culture scales are still lacking. In the present study, we investigated the relation between the shape of the free surface, the mixing process and the velocity fields, using specific image processing of high speed visualization and Laser Doppler velocimetry. Moreover, similarity parameters were identified for scale-up purposes.

  12. Quantitative Förster resonance energy transfer analysis for kinetic determinations of SUMO-specific protease.

    PubMed

    Liu, Yan; Song, Yang; Madahar, Vipul; Liao, Jiayu

    2012-03-01

    Förster resonance energy transfer (FRET) technology has been widely used in biological and biomedical research, and it is a very powerful tool for elucidating protein interactions in either dynamic or steady state. SUMOylation (the process of SUMO [small ubiquitin-like modifier] conjugation to substrates) is an important posttranslational protein modification with critical roles in multiple biological processes. Conjugating SUMO to substrates requires an enzymatic cascade. Sentrin/SUMO-specific proteases (SENPs) act as an endopeptidase to process the pre-SUMO or as an isopeptidase to deconjugate SUMO from its substrate. To fully understand the roles of SENPs in the SUMOylation cycle, it is critical to understand their kinetics. Here, we report a novel development of a quantitative FRET-based protease assay for SENP1 kinetic parameter determination. The assay is based on the quantitative analysis of the FRET signal from the total fluorescent signal at acceptor emission wavelength, which consists of three components: donor (CyPet-SUMO1) emission, acceptor (YPet) emission, and FRET signal during the digestion process. Subsequently, we developed novel theoretical and experimental procedures to determine the kinetic parameters, k(cat), K(M), and catalytic efficiency (k(cat)/K(M)) of catalytic domain SENP1 toward pre-SUMO1. Importantly, the general principles of this quantitative FRET-based protease kinetic determination can be applied to other proteases. Copyright © 2011 Elsevier Inc. All rights reserved.
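
    The kinetic-parameter step reduces to fitting the Michaelis-Menten rate law to FRET-derived initial rates. A minimal sketch, assuming an enzyme concentration and illustrative rate data (not the paper's measurements):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    E0 = 5e-9                                              # enzyme concentration [M] (assumed)
    S = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0]) * 1e-6   # substrate (pre-SUMO1) [M]
    v0 = np.array([0.9, 2.0, 3.4, 5.1, 7.0, 8.0]) * 1e-9   # initial rates [M/s] (illustrative)

    def mm(S, kcat, Km):                                   # Michaelis-Menten rate law
        return kcat * E0 * S / (Km + S)

    (kcat, Km), cov = curve_fit(mm, S, v0, p0=[2.0, 1e-6])
    print(f"kcat = {kcat:.2f} 1/s, KM = {Km:.2e} M, "
          f"kcat/KM = {kcat / Km:.2e} 1/(M*s)")            # catalytic efficiency
    ```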

  13. Microorganisms in Fermented Apple Beverages: Current Knowledge and Future Directions

    PubMed Central

    Le Guellec, Rozenn; Schlusselhuber, Margot; Laplace, Jean-Marie; Cretenet, Marina

    2017-01-01

    Production of fermented apple beverages is spread all around the world with specificities in each country. ‘French ciders’ refer to fermented apple juice mainly produced in the northwest of France and often associated with short periods of consumption. Research articles on this kind of product are scarce compared to wine, especially on phenomena associated with microbial activities. The wine fermentation microbiome and its dynamics, organoleptic improvement for healthy and pleasant products and development of starters are now widely studied. Even if both beverages seem close in terms of microbiome and process (with both alcoholic and malolactic fermentations), the inherent properties of the raw materials and different production and environmental parameters make research on the specificities of apple fermentation beverages worthwhile. This review summarizes current knowledge on the cider microbial ecosystem, associated activities and the influence of process parameters. In addition, available data on cider quality and safety is reviewed. Finally, we focus on the future role of lactic acid bacteria and yeasts in the development of even better or new beverages made from apples. PMID:28757560

  14. Optimization of commercial scale photonuclear production of radioisotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bindu, K. C.; Harmon, Frank; Starovoitova, Valeriia N.

    2013-04-19

    Photonuclear production of radioisotopes driven by bremsstrahlung photons using a linear electron accelerator in a suitable energy range is a promising method for producing radioisotopes. The photonuclear production method is capable of making radioisotopes more conveniently, cheaply and with much less radioactive waste compared to existing methods. Historically, photonuclear reactions have not been exploited for isotope production because of the low specific activity that is generally associated with this production process, although the technique is well known to be capable of producing large quantities of certain radioisotopes. We describe an optimization technique for a set of parameters to maximize the specific activity of the final product. This set includes the electron beam energy and current, the end station design (an integrated converter and target as well as cooling system), the purity of materials used, and the activation time. These parameters are mutually dependent and thus their optimization is not trivial. ⁶⁷Cu photonuclear production via the ⁶⁸Zn(γ,p)⁶⁷Cu reaction was used as an example of such an optimization process.
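
    One piece of this optimization, the activation-time trade-off, is well captured by the standard saturation-activation law A(t) = R(1 − e^(−λt)): irradiating much beyond a few half-lives buys little extra specific activity. A small sketch, with an assumed per-gram production rate:

    ```python
    import numpy as np

    half_life_h = 61.8                        # Cu-67 half-life [hours]
    lam = np.log(2.0) / half_life_h
    R_per_g = 1.0e9                           # assumed saturation production rate [Bq/g]

    t = np.array([31.0, 61.8, 123.6, 247.2])  # candidate irradiation times [h]
    sa = R_per_g * (1.0 - np.exp(-lam * t))   # specific activity grows toward saturation
    for ti, si in zip(t, sa):
        print(f"t = {ti:6.1f} h -> {si / R_per_g:5.1%} of saturation")
    ```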

  15. Microorganisms in Fermented Apple Beverages: Current Knowledge and Future Directions.

    PubMed

    Cousin, Fabien J; Le Guellec, Rozenn; Schlusselhuber, Margot; Dalmasso, Marion; Laplace, Jean-Marie; Cretenet, Marina

    2017-07-25

    Production of fermented apple beverages is spread all around the world with specificities in each country. 'French ciders' refer to fermented apple juice mainly produced in the northwest of France and often associated with short periods of consumption. Research articles on this kind of product are scarce compared to wine, especially on phenomena associated with microbial activities. The wine fermentation microbiome and its dynamics, organoleptic improvement for healthy and pleasant products and development of starters are now widely studied. Even if both beverages seem close in terms of microbiome and process (with both alcoholic and malolactic fermentations), the inherent properties of the raw materials and different production and environmental parameters make research on the specificities of apple fermentation beverages worthwhile. This review summarizes current knowledge on the cider microbial ecosystem, associated activities and the influence of process parameters. In addition, available data on cider quality and safety is reviewed. Finally, we focus on the future role of lactic acid bacteria and yeasts in the development of even better or new beverages made from apples.

  16. Eye tracking and pupillometry are indicators of dissociable latent decision processes.

    PubMed

    Cavanagh, James F; Wiecki, Thomas V; Kochar, Angad; Frank, Michael J

    2014-08-01

    Can you predict what people are going to do just by watching them? This is certainly difficult: it would require a clear mapping between observable indicators and unobservable cognitive states. In this report, we demonstrate how this is possible by monitoring eye gaze and pupil dilation, which predict dissociable biases during decision making. We quantified decision making using the drift diffusion model (DDM), which provides an algorithmic account of how evidence accumulation and response caution contribute to decisions through separate latent parameters of drift rate and decision threshold, respectively. We used a hierarchical Bayesian estimation approach to assess the single trial influence of observable physiological signals on these latent DDM parameters. Increased eye gaze dwell time specifically predicted an increased drift rate toward the fixated option, irrespective of the value of the option. In contrast, greater pupil dilation specifically predicted an increase in decision threshold during difficult decisions. These findings suggest that eye tracking and pupillometry reflect the operations of dissociated latent decision processes. PsycINFO Database Record (c) 2014 APA, all rights reserved.
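
    A minimal sketch of the DDM forward model referenced above: evidence accumulates with a given drift rate until it hits a decision threshold. The parameter values are illustrative, and the paper's hierarchical Bayesian estimation layer is not reproduced.

    ```python
    import numpy as np

    def ddm_trial(drift, threshold, dt=0.001, noise=1.0, rng=None):
        """One drift-diffusion trial: evidence starts at 0 and accumulates until
        it crosses +threshold (upper choice) or -threshold (lower choice)."""
        rng = rng or np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return x > 0, t

    rng = np.random.default_rng(0)
    # gaze raises the drift rate; pupil dilation raises the threshold (illustrative)
    trials = [ddm_trial(drift=0.8, threshold=1.2, rng=rng) for _ in range(500)]
    choices, rts = map(np.array, zip(*trials))
    print(f"P(upper) = {choices.mean():.2f}, mean RT = {rts.mean():.2f} s")
    ```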

  17. Correlation Analysis of Reactivity in the Photo- and Electro-Reduction of Cobalt(III) Complexes in Binary Organic Solvent/Water Mixtures

    NASA Astrophysics Data System (ADS)

    Sivaraj, Kumarasamy; Elango, Kuppanagounder P.

    2008-08-01

    The photo- and electro-reduction of a series of cobalt(III) complexes of the type cis-β - [Co(trien)(RC6H4NH2)Cl]Cl2 with R = H, p-OMe, p-OEt, p-Me, p-Et, p-F, and m-Me has been studied in binary propan-2-ol/water mixtures. The redox potential (E1/2) and photo-reduction quantum yield (ΦCo(II)) data were correlated with solvent and structural parameters with the aim to shed some light on the mechanism of these reactions. The correlation of E1/2 and ΦCo(II) with macroscopic solvent parameters, viz. relative permittivity, indicated that the reactivity is influenced by both specific and non-specific solute-solvent interactions. The Kamlet-Taft solvatochromic comparison method was used to separate and quantify these effects: An increase in the percentage of organic cosolvent in the medium enhances both reduction processes, and there exists a good linear correlation between E1/2 and ΦCo(II), suggesting a similar solvation of the participants in these redox processes.
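
    The Kamlet-Taft solvatochromic comparison method amounts to a multiple linear regression of the observable on the solvent's alpha, beta, and pi* parameters, whose fitted coefficients separate the specific and non-specific contributions. A sketch with invented (not the study's) data:

    ```python
    import numpy as np

    # Illustrative regression: E_1/2 ~ c0 + a*alpha + b*beta + s*pi_star,
    # values standing in for a series of propan-2-ol/water mixtures
    alpha   = np.array([1.17, 1.12, 1.06, 1.01, 0.96, 0.92])  # H-bond donor acidity
    beta    = np.array([0.47, 0.54, 0.61, 0.68, 0.74, 0.80])  # H-bond acceptor basicity
    pi_star = np.array([1.09, 1.04, 0.99, 0.94, 0.90, 0.86])  # dipolarity/polarizability
    E_half  = np.array([-0.182, -0.168, -0.155, -0.141, -0.130, -0.118])  # [V]

    X = np.column_stack([np.ones_like(alpha), alpha, beta, pi_star])
    (c0, a, b, s), *_ = np.linalg.lstsq(X, E_half, rcond=None)
    print(f"E1/2 = {c0:.3f} + {a:.3f}*alpha + {b:.3f}*beta + {s:.3f}*pi*")
    ```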

  18. A Telemetry Browser Built with Java Components

    NASA Astrophysics Data System (ADS)

    Poupart, E.

    In the context of CNES balloon scientific campaigns and the telemetry survey field, a generic telemetry processing product, called TelemetryBrowser in the following, was developed by reusing COTS components, most of them Java components. Connection between those components relies on a software architecture based on parameter producers and parameter consumers: producers transmit parameter values to the consumers that have registered with them. All of those producers and consumers can be spread over the network thanks to Corba, and run on every kind of workstation thanks to Java. This gives a very powerful means of adapting to constraints such as network bandwidth or workstation processing power and memory. It is also very useful for displaying and correlating, at the same time, information coming from multiple and various sources. An important point of this architecture is that the coupling between parameter producers and parameter consumers is reduced to a minimum and that transmission of information on the network is asynchronous. So, if a parameter consumer goes down or runs slowly, there is no consequence for the other consumers, because producers do not wait for a consumer to finish its data processing before sending data to the other consumers. Another interesting point is that parameter producers, also called TelemetryServers in the following, are generated nearly automatically from a telemetry description using the Flavor component. Keywords: Java components, Corba, distributed application, OpenORB, software reuse, COTS, Internet, Flavor. (Flavor, the Formal Language for Audio-Visual Object Representation, is an object-oriented media representation language developed at Columbia University; designed as an extension of Java and C++, it simplifies the development of applications with a significant media-processing component by providing bitstream representation semantics; see flavor.sourceforge.net. OpenORB provides a Java implementation of the OMG Corba 2.4.2 specification; see openorb.sourceforge.net.)
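
    The decoupled producer/consumer behavior described above (per-consumer queues, asynchronous hand-off, a slow consumer never stalling the others) can be sketched in a few lines. This is a generic illustration, not the TelemetryBrowser code, and it runs in-process rather than over Corba.

    ```python
    import queue
    import threading
    import time

    class ParameterProducer:
        """Pushes parameter values to registered consumers without waiting on them:
        each consumer gets its own queue and thread, so a slow or dead consumer
        cannot stall the producer or the other consumers."""
        def __init__(self):
            self.queues = []

        def register(self, consumer):
            q = queue.Queue()
            threading.Thread(target=consumer.run, args=(q,), daemon=True).start()
            self.queues.append(q)

        def publish(self, name, value):
            for q in self.queues:
                q.put((name, value))          # non-blocking hand-off

    class PlotConsumer:
        def run(self, q):
            while True:
                name, value = q.get()
                print(f"plot {name} = {value}")   # stand-in for a display component

    producer = ParameterProducer()
    producer.register(PlotConsumer())
    producer.publish("battery_voltage", 27.3)
    time.sleep(0.2)                           # let the daemon consumer drain its queue
    ```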

  19. Study of Material Densification of In718 in the Higher Throughput Parameter Regime

    NASA Technical Reports Server (NTRS)

    Cordner, Samuel

    2016-01-01

    Selective Laser Melting (SLM) is a powder bed fusion additive manufacturing process used increasingly in the aerospace industry to reduce the cost, weight, and fabrication time of complex propulsion components. Previous optimization studies for SLM using the Concept Laser M1 and M2 machines at NASA Marshall Space Flight Center have centered on machine default parameters. The objective of this project is to characterize how heat treatment affects density and porosity from a microscopic point of view. This is performed using higher-throughput parameters (a previously unexplored region of the manufacturing operating envelope for this application) and examining their effect on material consolidation. Density blocks were analyzed to explore the relationship between build parameters (laser power, scan speed, and hatch spacing) and material consolidation (assessed in terms of density and porosity). The study also considers the impact of post-processing, specifically hot isostatic pressing and heat treatment, as well as deposition pattern, on material consolidation in the higher-energy parameter regime. Metallurgical evaluation of specimens will also be presented. This work will contribute to creating a knowledge base (understanding material behavior in all ranges of the AM equipment operating envelope) that is critical to transitioning AM from the custom low-rate production sphere it currently occupies to the world of high-rate mass production, where parts are fabricated at a rapid rate with confidence that they will meet or exceed all stringent functional requirements for spaceflight hardware. These studies will also provide important data on the sensitivity of material consolidation to process parameters that will inform the design and development of future flight articles using SLM.

  20. Advanced optic fabrication using ultrafast laser radiation

    NASA Astrophysics Data System (ADS)

    Taylor, Lauren L.; Qiao, Jun; Qiao, Jie

    2016-03-01

    Advanced fabrication and finishing techniques are desired for freeform optics and integrated photonics. Methods including grinding, polishing and magnetorheological finishing used for final figuring and polishing of such optics are time consuming, expensive, and may be unsuitable for complex surface features while common photonics fabrication techniques often limit devices to planar geometries. Laser processing has been investigated as an alternative method for optic forming, surface polishing, structure writing, and welding, as direct tuning of laser parameters and flexible beam delivery are advantageous for complex freeform or photonics elements and material-specific processing. Continuous wave and pulsed laser radiation down to the nanosecond regime have been implemented to achieve nanoscale surface finishes through localized material melting, but the temporal extent of the laser-material interaction often results in the formation of a sub-surface heat affected zone. The temporal brevity of ultrafast laser radiation can allow for the direct vaporization of rough surface asperities with minimal melting, offering the potential for smooth, final surface quality with negligible heat affected material. High intensities achieved in focused ultrafast laser radiation can easily induce phase changes in the bulk of materials for processing applications. We have experimentally tested the effectiveness of ultrafast laser radiation as an alternative laser source for surface processing of monocrystalline silicon. Simulation of material heating associated with ultrafast laser-material interaction has been performed and used to investigate optimized processing parameters including repetition rate. The parameter optimization process and results of experimental processing will be presented.

  1. Bayesian Analysis of Non-Gaussian Long-Range Dependent Processes

    NASA Astrophysics Data System (ADS)

    Graves, T.; Franzke, C.; Gramacy, R. B.; Watkins, N. W.

    2012-12-01

    Recent studies have strongly suggested that surface temperatures exhibit long-range dependence (LRD). The presence of LRD would hamper the identification of deterministic trends and the quantification of their significance. It is well established that LRD processes exhibit stochastic trends over rather long periods of time. Thus, accurate methods for discriminating between physical processes that possess long memory and those that do not are an important adjunct to climate modeling. We have used Markov Chain Monte Carlo algorithms to perform a Bayesian analysis of Auto-Regressive Fractionally-Integrated Moving-Average (ARFIMA) processes, which are capable of modeling LRD. Our principal aim is to obtain inference about the long memory parameter, d, with secondary interest in the scale and location parameters. We have developed a reversible-jump method enabling us to integrate over different model forms for the short memory component. We initially assume Gaussianity, and have tested the method on both synthetic and physical time series such as the Central England Temperature. Many physical processes, for example the Faraday time series from Antarctica, are highly non-Gaussian. We have therefore extended this work by weakening the Gaussianity assumption; specifically, we assume a symmetric α-stable distribution for the innovations. Such processes provide good, flexible, initial models for non-Gaussian processes with long memory. We will present a study of the dependence of the posterior variance of the memory parameter d on the length of the time series considered. This will be compared with equivalent error diagnostics for other measures of d.
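
    A minimal sketch of two non-Bayesian building blocks involved here: simulating an ARFIMA(0, d, 0) series from its moving-average expansion, and recovering d with a simple log-periodogram (Geweke-Porter-Hudak) regression. The full reversible-jump MCMC machinery of the study is not reproduced.

    ```python
    import numpy as np

    def farima0d0(n, d, rng):
        """Simulate ARFIMA(0, d, 0) by expanding (1-B)^(-d) into MA weights."""
        psi = np.ones(n)
        for k in range(1, n):
            psi[k] = psi[k - 1] * (k - 1 + d) / k
        eps = rng.standard_normal(2 * n)
        return np.convolve(eps, psi)[n:2 * n]        # drop the burn-in segment

    def gph_estimate(x, frac=0.5):
        """GPH log-periodogram regression: the slope on -log(4 sin^2(w/2)) is d."""
        n = len(x)
        m = int(n ** frac)                           # number of low frequencies used
        w = 2 * np.pi * np.arange(1, m + 1) / n
        I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
        X = np.column_stack([np.ones(m), -np.log(4 * np.sin(w / 2) ** 2)])
        return np.linalg.lstsq(X, np.log(I), rcond=None)[0][1]

    rng = np.random.default_rng(42)
    x = farima0d0(4096, d=0.3, rng=rng)
    print(f"estimated d = {gph_estimate(x):.2f}")    # should be near 0.3
    ```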

  2. Development of an intelligent system for cooling rate and fill control in GMAW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Einerson, C.J.; Smartt, H.B.; Johnson, J.A.

    1992-09-01

    A control strategy for gas metal arc welding (GMAW) is developed in which the welding system detects certain existing conditions and adjusts the process in accordance with pre-specified rules. This strategy is used to control the reinforcement and weld bead centerline cooling rate during welding. Relationships between heat and mass transfer rates to the base metal and the required electrode speed and welding speed for specific open circuit voltages are taught to an artificial neural network. Control rules are programmed into a fuzzy logic system. Traditional control of the GMAW process is based on the use of explicit welding procedures detailing allowable parameter ranges on a pass-by-pass basis for a given weld. The present work is an exploration of a completely different approach to welding control. In this work the objectives are to produce welds having desired weld bead reinforcements while maintaining the weld bead centerline cooling rate at preselected values. The need for this specific control is related to fabrication requirements for specific types of pressure vessels. The control strategy involves measuring the weld joint transverse cross-sectional area ahead of the welding torch and the weld bead centerline cooling rate behind the weld pool, both by means of video (2), calculating the required process parameters necessary to obtain the needed heat and mass transfer rates (in appropriate dimensions) by means of an artificial neural network, and controlling the heat transfer rate by means of a fuzzy logic controller (3). The result is a welding machine that senses the welding conditions and responds to those conditions on the basis of logical rules, as opposed to producing a weld based on a specific procedure.
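
    A rough sketch of the neural-network step described above: a small regression network maps desired transfer quantities to welding parameters. The feature names, stand-in training relations, and network size are assumptions; a real system would be trained on measured welds.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # Illustrative inputs: (heat input [kJ/mm], joint cross-section [mm^2]);
    # outputs: (wire-feed speed [m/min], travel speed [mm/s])
    X = rng.uniform([0.5, 10.0], [2.0, 40.0], size=(200, 2))
    wfs = 0.4 * X[:, 1] + 2.0                     # stand-in mass-balance relation
    ts = 8.0 * X[:, 1] / (X[:, 0] * 60.0)         # stand-in energy-balance relation
    Y = np.column_stack([wfs, ts])

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                       random_state=0).fit(X, Y)
    print(net.predict([[1.2, 25.0]]))             # suggested (wire feed, travel speed)
    ```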

  3. Development of an intelligent system for cooling rate and fill control in GMAW. [Gas Metal Arc Welding (GMAW)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Einerson, C.J.; Smartt, H.B.; Johnson, J.A.

    1992-01-01

    A control strategy for gas metal arc welding (GMAW) is developed in which the welding system detects certain existing conditions and adjusts the process in accordance with pre-specified rules. This strategy is used to control the reinforcement and weld bead centerline cooling rate during welding. Relationships between heat and mass transfer rates to the base metal and the required electrode speed and welding speed for specific open circuit voltages are taught to an artificial neural network. Control rules are programmed into a fuzzy logic system. Traditional control of the GMAW process is based on the use of explicit welding procedures detailing allowable parameter ranges on a pass-by-pass basis for a given weld. The present work is an exploration of a completely different approach to welding control. In this work the objectives are to produce welds having desired weld bead reinforcements while maintaining the weld bead centerline cooling rate at preselected values. The need for this specific control is related to fabrication requirements for specific types of pressure vessels. The control strategy involves measuring the weld joint transverse cross-sectional area ahead of the welding torch and the weld bead centerline cooling rate behind the weld pool, both by means of video (2), calculating the required process parameters necessary to obtain the needed heat and mass transfer rates (in appropriate dimensions) by means of an artificial neural network, and controlling the heat transfer rate by means of a fuzzy logic controller (3). The result is a welding machine that senses the welding conditions and responds to those conditions on the basis of logical rules, as opposed to producing a weld based on a specific procedure.

  4. Magnetorheological finishing: a perfect solution to nanofinishing requirements

    NASA Astrophysics Data System (ADS)

    Sidpara, Ajay

    2014-09-01

    Finishing of optics for different applications is the most important as well as the most difficult step in meeting the specifications of an optic. Conventional grinding and other polishing processes are not able to reduce surface roughness beyond a certain limit due to the high forces acting on the workpiece, embedded abrasive particles, limited control over the process, etc. The magnetorheological finishing (MRF) process provides a new, efficient, and innovative way to finish optical materials as well as many metals to their desired level of accuracy. This paper provides an overview of the MRF process for different applications, the important process parameters, the requirements on the magnetorheological fluid with respect to the workpiece material, and some areas that need to be explored to extend the applications of the MRF process.

  5. Airbreathing engine selection criteria for SSTO propulsion system

    NASA Astrophysics Data System (ADS)

    Ohkami, Yoshiaki; Maita, Masataka

    1995-02-01

    This paper presents airbreathing engine selection criteria to be applied to the propulsion system of a Single Stage To Orbit (SSTO) vehicle. To establish the criteria, a relation among three major parameters, i.e., delta-V capability, weight penalty, and effective specific impulse of the engine subsystem, is derived and compared with these parameters for the LH2/LOX rocket engine. The effective specific impulse is a function of the engine I(sub sp) and the vehicle thrust-to-drag ratio, which is approximated by a function of the vehicle velocity. The weight penalty includes the engine dry weight and the cooling subsystem weight. The delta-V capability is defined by the velocity region starting from the minimum operating velocity up to the maximum velocity. The vehicle feasibility is investigated in terms of the structural and propellant weights, which requires an iteration process adjusting the system parameters. The system parameters are computed by iteration based on the Newton-Raphson method. It has been concluded that performance in the higher velocity region is extremely important, so the airbreathing engines are required to operate beyond the velocity equivalent to the rocket engine exhaust velocity (approximately 4500 m/s).
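
    The mass-closure iteration mentioned above can be illustrated with a one-variable Newton-Raphson solve: gross mass must equal payload plus structure plus the propellant demanded by the rocket equation at the effective specific impulse. All numbers below are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    g0, dV = 9.81, 9000.0                  # effective delta-V to orbit [m/s] (assumed)
    isp_eff = 600.0                        # mission-averaged effective Isp [s] (assumed)
    f_struct, m_payload = 0.18, 10_000.0   # structural fraction, payload mass [kg]

    def residual(m0):
        m_prop = m0 * (1.0 - np.exp(-dV / (g0 * isp_eff)))   # rocket equation
        return m0 - (m_payload + f_struct * m0 + m_prop)     # mass closure

    m0 = 200_000.0                         # initial guess for gross mass [kg]
    for _ in range(20):                    # Newton-Raphson with a numerical derivative
        h = 1.0
        m0 -= residual(m0) * h / (residual(m0 + h) - residual(m0))
    print(f"gross mass ~ {m0:,.0f} kg")
    ```

    Raising isp_eff or lowering f_struct shrinks the closed gross mass rapidly, which is the quantitative form of the abstract's conclusion that high-velocity performance dominates the trade.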

  6. Application of dielectric constant measurement in microwave sludge disintegration and wastewater purification processes.

    PubMed

    Kovács, Petra Veszelovszki; Lemmer, Balázs; Keszthelyi-Szabó, Gábor; Hodúr, Cecilia; Beszédes, Sándor

    2018-05-01

    It has been verified numerous times that microwave radiation can be advantageous as a pre-treatment for enhanced disintegration of sludge. Very few data related to the dielectric parameters of wastewater of different origins are available; therefore, the objective of our work was to measure the dielectric constant of municipal and meat industrial wastewater during a continuous-flow microwave process. Determination of the dielectric constant and its change during wastewater and sludge processing makes it possible to decide on the applicability of dielectric measurements for detecting the organic matter removal efficiency of a wastewater purification process or the disintegration degree of sludge. Regression models were developed relating the dielectric constant to temperature, total solids (TS) content, and microwave-specific process parameters. Our results verified that in the case of municipal wastewater sludge, the TS content has a significant effect on the dielectric constant and disintegration degree (DD), as does the temperature. The dielectric constant has a decreasing tendency with increasing temperature for wastewater sludge of low TS content, but an adverse effect was found for samples with high TS and organic matter contents. The DD of meat processing wastewater sludge was influenced significantly by the volumetric flow rate and power level, as process parameters of continuous-flow microwave pre-treatments. It can be concluded that the disintegration process of food industry sludge can be detected by dielectric constant measurements. For technical purposes, the applicability of dielectric measurements was also tested in the purification process of municipal wastewater. Determination of dielectric behaviour was a sensitive method for detecting the purification degree of municipal wastewater.

  7. Improved Anomaly Detection using Integrated Supervised and Unsupervised Processing

    NASA Astrophysics Data System (ADS)

    Hunt, B.; Sheppard, D. G.; Wetterer, C. J.

    There are two broad signal-processing technologies applicable to space object feature identification using nonresolved imagery: supervised processing analyzes a large set of data for common characteristics that can then be used to identify, transform, and extract information from new data taken of the same given class (e.g., a support vector machine); unsupervised processing utilizes detailed physics-based models that generate comparison data that can then be used to estimate parameters presumed to be governed by the same models (e.g., estimation filters). Both processes have been used in non-resolved space object identification and yield similar results, yet they arrive at those results through vastly different processes. The goal of integrating the two is to achieve even greater performance by building on this process diversity. Specifically, both supervised and unsupervised processing will jointly operate on the analysis of brightness (radiometric flux intensity) measurements reflected by space objects and observed by a ground station to determine whether a particular day conforms to a nominal operating mode (as determined from a training set) or exhibits anomalous behavior where a particular parameter (e.g., attitude, solar panel articulation angle) has changed in some way. It is demonstrated in a variety of different scenarios that the integrated process achieves greater performance than either of the separate processes alone.
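
    A toy version of the fusion idea: a one-class classifier trained on nominal light-curve features (the supervised-style branch) combined with a residual against a physics-model prediction (the unsupervised branch), fused by a simple OR rule. Features, thresholds, and data are invented for illustration.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(1)
    # Illustrative per-night features: e.g., mean brightness and variability
    train = rng.normal([1.0, 0.20], [0.05, 0.02], size=(300, 2))   # nominal nights
    test = np.vstack([rng.normal([1.0, 0.20], [0.05, 0.02], size=(20, 2)),
                      rng.normal([1.3, 0.35], [0.05, 0.02], size=(5, 2))])  # anomalies

    # Supervised-style branch: one-class SVM trained on the nominal set
    svm_score = OneClassSVM(nu=0.05, gamma="scale").fit(train).decision_function(test)

    # Model-based branch: normalized residual against a stand-in physics prediction
    model_pred = np.array([1.0, 0.20])
    resid = np.linalg.norm((test - model_pred) / train.std(axis=0), axis=1)

    anomalous = (svm_score < 0) | (resid > 4.0)    # flag if either branch objects
    print(anomalous)
    ```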

  8. Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f

    NASA Astrophysics Data System (ADS)

    Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi

    2018-03-01

    We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h of various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the MS-bar scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate the dependence on the MS-bar renormalization scale and on the choice of the renormalization scheme in detail. We find that it is crucial to be precise with these issues in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h → 4f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and ready for application.

  9. Panaceas, uncertainty, and the robust control framework in sustainability science

    PubMed Central

    Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan

    2007-01-01

    A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574
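
    The fishery example can be sketched directly: tune the constant-effort policy to a nominal growth rate, then evaluate it under a perturbed one. The parameter values below are illustrative, not those of the article.

    ```python
    import numpy as np

    # Gordon-Schaefer surplus-production model: dx/dt = r*x*(1 - x/K) - q*E*x
    r, K, q = 0.4, 1.0, 1.0            # growth rate, capacity, catchability (normalized)
    price, cost = 1.0, 0.25            # per unit harvest / per unit effort

    def steady_profit(E, r):
        x = K * (1.0 - q * E / r)      # equilibrium stock under constant effort E
        return price * q * E * max(x, 0.0) - cost * E

    E_grid = np.linspace(0.0, r / q, 200)
    E_opt = E_grid[np.argmax([steady_profit(e, 0.4) for e in E_grid])]  # tuned to r = 0.4
    for r_true in (0.4, 0.32):         # 20% error in the growth rate
        print(f"r = {r_true}: realized profit {steady_profit(E_opt, r_true):.4f}")
    ```

    The drop in realized profit under the perturbed growth rate is the performance-robustness trade-off in miniature: the policy that is optimal at the nominal parameter loses value quickly as the parameter drifts.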

  10. On the identifiability of inertia parameters of planar Multi-Body Space Systems

    NASA Astrophysics Data System (ADS)

    Nabavi-Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2018-04-01

    This work describes a new formulation to study the identifiability characteristics of Serially Linked Multi-body Space Systems (SLMBSS). The process exploits the so-called Lagrange formulation to develop a form of the equations of motion that is linear with respect to the system's inertia parameters (IPs). Having developed a specific form of the regressor matrix, we aim to expedite the identification process. The new approach allows analytical as well as numerical identification and identifiability analysis for different SLMBSS configurations. Moreover, explicit forms of the SLMBSS's identifiable parameters are derived by analyzing the identifiability characteristics of the robot. We further show that any SLMBSS designed with variable-configuration joints allows all IPs to be identified by comparing two successive identification outcomes. This feature paves the way for designing a new class of SLMBSS for which accurate identification of all IPs is at hand. Different case studies reveal that the proposed formulation provides fast and accurate results, as required by space applications. Further studies might be necessary for cases where the planar-body assumption becomes inaccurate.

  11. The practical use of simplicity in developing ground water models

    USGS Publications Warehouse

    Hill, M.C.

    2006-01-01

    The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.

  12. Morphology engineering - Osmolality and its effect on Aspergillus niger morphology and productivity

    PubMed Central

    2011-01-01

    Background The filamentous fungus Aspergillus niger is a widely used strain in a broad range of industrial processes from the food to the pharmaceutical industry. One of the most intriguing and often uncontrollable characteristics of this filamentous organism is its complex morphology, ranging from dense spherical pellets to viscous mycelia depending on culture conditions. Optimal productivity correlates strongly with a specific morphological form, thus making high demands on process control. Results In about 50 2-L stirred-tank cultivations, the influence of osmolality on A. niger morphology and productivity was investigated. The specific productivity of the fructofuranosidase-producing strain A. niger SKAn 1015 could be increased notably, around eighteen-fold from 0.5 to 9 U mg⁻¹ h⁻¹, by raising the culture broth osmolality through addition of sodium chloride. The specific productivity of the glucoamylase-producing strain A. niger AB1.13 could be elevated using the same procedure. An optimal producing osmolality was shown to exist, well above the standard osmolality, at about 3.2 osmol kg⁻¹ depending on the strain. Fungal morphology in all cultivations was examined by microscope and characterized by digital image analysis. Particle shape parameters were combined into a dimensionless Morphology number, which enabled a comprehensive characterization of fungal morphology that correlates closely with productivity. A novel method for determination of germination time in submerged cultivations by laser diffraction, introduced in this study, revealed a decelerated germination process with increasing osmolality. Conclusions Through the introduction of the versatile Morphology number, this study provides the means for a desirable characterization of fungal morphology and demonstrates its relation to productivity. Furthermore, osmolality as a fairly new parameter in process engineering is introduced and found to affect fungal morphology and productivity. Osmolality might provide an auspicious and reliable approach to increasing productivity in industrial processes. Because of the predictable behavior fungal morphology showed in dependence on osmolality, a customization of morphology to process needs seems feasible. PMID:21801352

  13. Genome-Wide QTL Mapping for Wheat Processing Quality Parameters in a Gaocheng 8901/Zhoumai 16 Recombinant Inbred Line Population.

    PubMed

    Jin, Hui; Wen, Weie; Liu, Jindong; Zhai, Shengnan; Zhang, Yan; Yan, Jun; Liu, Zhiyong; Xia, Xianchun; He, Zhonghu

    2016-01-01

    Dough rheological and starch pasting properties play an important role in determining processing quality in bread wheat (Triticum aestivum L.). In the present study, a recombinant inbred line (RIL) population derived from a Gaocheng 8901/Zhoumai 16 cross grown in three environments was used to identify quantitative trait loci (QTLs) for dough rheological and starch pasting properties evaluated by Mixograph, Rapid Visco-Analyzer (RVA), and Mixolab parameters using the wheat 90 and 660 K single nucleotide polymorphism (SNP) chip assays. A high-density linkage map constructed with 46,961 polymorphic SNP markers from the wheat 90 and 660 K SNP assays spanned a total length of 4121 cM, with an average chromosome length of 196.2 cM and marker density of 0.09 cM/marker; 6596 new SNP markers were anchored to the bread wheat linkage map, with 1046 and 5550 markers from the 90 and 660 K SNP assays, respectively. Composite interval mapping identified 119 additive QTLs on 20 chromosomes except 4D; among them, 15 accounted for more than 10% of the phenotypic variation across two or three environments. Twelve QTLs for Mixograph parameters, 17 for RVA parameters and 55 for Mixolab parameters were new. Eleven QTL clusters were identified. The closely linked SNP markers can be used in marker-assisted wheat breeding in combination with the Kompetitive Allele Specific PCR (KASP) technique for improvement of processing quality in bread wheat.

  14. Genome-Wide QTL Mapping for Wheat Processing Quality Parameters in a Gaocheng 8901/Zhoumai 16 Recombinant Inbred Line Population

    PubMed Central

    Jin, Hui; Wen, Weie; Liu, Jindong; Zhai, Shengnan; Zhang, Yan; Yan, Jun; Liu, Zhiyong; Xia, Xianchun; He, Zhonghu

    2016-01-01

    Dough rheological and starch pasting properties play an important role in determining processing quality in bread wheat (Triticum aestivum L.). In the present study, a recombinant inbred line (RIL) population derived from a Gaocheng 8901/Zhoumai 16 cross grown in three environments was used to identify quantitative trait loci (QTLs) for dough rheological and starch pasting properties evaluated by Mixograph, Rapid Visco-Analyzer (RVA), and Mixolab parameters using the wheat 90 and 660 K single nucleotide polymorphism (SNP) chip assays. A high-density linkage map constructed with 46,961 polymorphic SNP markers from the wheat 90 and 660 K SNP assays spanned a total length of 4121 cM, with an average chromosome length of 196.2 cM and marker density of 0.09 cM/marker; 6596 new SNP markers were anchored to the bread wheat linkage map, with 1046 and 5550 markers from the 90 and 660 K SNP assays, respectively. Composite interval mapping identified 119 additive QTLs on 20 chromosomes except 4D; among them, 15 accounted for more than 10% of the phenotypic variation across two or three environments. Twelve QTLs for Mixograph parameters, 17 for RVA parameters and 55 for Mixolab parameters were new. Eleven QTL clusters were identified. The closely linked SNP markers can be used in marker-assisted wheat breeding in combination with the Kompetitive Allele Specific PCR (KASP) technique for improvement of processing quality in bread wheat. PMID:27486464

  15. Tailoring of processing parameters for sintering microsphere-based scaffolds with dense-phase carbon dioxide

    PubMed Central

    Jeon, Ju Hyeong; Bhamidipati, Manjari; Sridharan, BanuPriya; Scurto, Aaron M.; Berkland, Cory J.; Detamore, Michael S.

    2015-01-01

    Microsphere-based polymeric tissue-engineered scaffolds offer the advantage of shape-specific constructs with excellent spatiotemporal control and interconnected porous structures. The use of these highly versatile scaffolds requires a method to sinter the discrete microspheres together into a cohesive network, typically with the use of heat or organic solvents. We previously introduced subcritical CO2 as a sintering method for microsphere-based scaffolds; here we further explored the effect of processing parameters. Gaseous or subcritical CO2 was used for making the scaffolds, and various pressures, ratios of lactic acid to glycolic acid in poly(lactic acid-co-glycolic acid), and amounts of NaCl particles were explored. By changing these parameters, scaffolds with different mechanical properties and morphologies were prepared. The preferred range of applied subcritical CO2 was 15–25 bar. Scaffolds prepared at 25 bar with lower lactic acid ratios and without NaCl particles had a higher stiffness, while the constructs made at 15 bar, lower glycolic acid content, and with salt granules had lower elastic moduli. Human umbilical cord mesenchymal stromal cells (hUCMSCs) seeded on the scaffolds demonstrated that cells penetrate the scaffolds and remain viable. Overall, the study demonstrated the dependence of the optimal CO2 sintering parameters on the polymer and conditions, and identified desirable CO2 processing parameters to employ in the sintering of microsphere-based scaffolds as a more benign alternative to heat-sintering or solvent-based sintering methods. PMID:23115065

  16. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    NASA Astrophysics Data System (ADS)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model's capability for simulating and predicting water resources.
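
    The MCMC ingredient of the method can be illustrated with a bare random-walk Metropolis sampler over a stand-in model; the model, prior bounds, and data below are placeholders, not the watershed model of the study.

    ```python
    import numpy as np

    def log_post(theta, data):
        """Illustrative log-posterior: Gaussian misfit of a stand-in linear model
        with a flat prior on a bounded box (stand-in for a hydrological model)."""
        if np.any(theta < 0.0) or np.any(theta > 100.0):
            return -np.inf
        sim = theta[0] + theta[1] * np.arange(len(data))
        return -0.5 * np.sum((data - sim) ** 2)

    rng = np.random.default_rng(7)
    data = 2.0 + 0.5 * np.arange(50) + rng.standard_normal(50)
    theta, chain = np.array([1.0, 1.0]), []
    lp = log_post(theta, data)
    for _ in range(5000):                          # random-walk Metropolis updates
        prop = theta + rng.normal(0.0, 0.05, size=2)
        lp_prop = log_post(prop, data)
        if np.log(rng.random()) < lp_prop - lp:    # accept with the Metropolis ratio
            theta, lp = prop, lp_prop
        chain.append(theta)
    post = np.array(chain[1000:])                  # discard burn-in
    print(post.mean(axis=0), post.std(axis=0))     # posterior means and uncertainties
    ```

    The factorial-analysis layer of the method then treats such posterior samples at several levels per parameter to separate individual from interactive effects.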

  17. Effects of process parameters on properties of porous foams formed by laser-assisted melting of steel powder (AISI P21)/foaming agent (ZrH2) mixture

    NASA Astrophysics Data System (ADS)

    Seo, Ja-Ye; Lee, Ki-Yong; Shim, Do-Sik

    2018-01-01

    This paper describes the fabrication of lightweight metal foams using the directed energy deposition (DED) method. DED is a highly flexible additive manufacturing process wherein a metal powder mixed with a foaming agent is sprayed while a high-power laser is used to simultaneously melt the powder mixture into layered metal foams. In this study, a mixture of a carbon steel material (P21 powder) and a widely used foaming agent, ZrH2, is used to fabricate metal foams. The effects of various process parameters, such as the laser power, powder feed rate, powder gas flow rate, and scanning speed, on the deposition characteristics (porosity, pore size, and pore distribution) are investigated. The synthesized metal foams exhibit porosities of 10% or lower, and a mean pore area of 7 × 10⁵ μm². It is observed that the degree of foaming increases in proportion to the laser power to a certain extent. The results also show that the powder feed rate has the most pronounced effect on the porosity of the metal foams, while the powder gas flow rate is the most suitable parameter for adjusting the size of the pores formed within the foams. Further, the scanning speed, which determines the amounts of energy and powder delivered, has a significant effect on the height of the deposits as well as on the properties of the foams. Thus, during the DED process for fabricating metal foams, the pore size and distribution and hence the foam porosity can be tailored by varying the individual process parameters. These findings should be useful as reference data for the design of processes for fabricating porous metallic materials that meet the specific requirements for specialized parts.

  18. Verification and Validation of Residual Stresses in Bi-Material Composite Rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy

    Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials' coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials' curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time and cost prohibitive. As an alternative to physical measurement, it is possible for computational tools to be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simplistic method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the SIERRA/SolidMechanics code developed by Sandia National Laboratories. Concurrent with the model development, two simple, bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, are fabricated and the residual stresses are quantified through the measurement of deformation. Then, in the process of validating the developed modeling approach with the experimental residual stress data, manufacturing process simulations of the two simple structures are developed and undergo a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations' final results show adequate agreement with the experimental measurements, indicating the validity of a simple modeling approach, as well as a necessity for the inclusion of material parameter uncertainty in the final residual stress predictions.

  19. Multi-Mission Automated Task Invocation Subsystem

    NASA Technical Reports Server (NTRS)

    Cheng, Cecilia S.; Patel, Rajesh R.; Sayfi, Elias M.; Lee, Hyun H.

    2009-01-01

    Multi-Mission Automated Task Invocation Subsystem (MATIS) is software that establishes a distributed data-processing framework for automated generation of instrument data products from a spacecraft mission. Each mission may set up a set of MATIS servers for processing its data products. MATIS embodies lessons learned in experience with prior instrument- data-product-generation software. MATIS is an event-driven workflow manager that interprets project-specific, user-defined rules for managing processes. It executes programs in response to specific events under specific conditions according to the rules. Because requirements of different missions are too diverse to be satisfied by one program, MATIS accommodates plug-in programs. MATIS is flexible in that users can control such processing parameters as how many pipelines to run and on which computing machines to run them. MATIS has a fail-safe capability. At each step, MATIS captures and retains pertinent information needed to complete the step and start the next step. In the event of a restart, this information is retrieved so that processing can be resumed appropriately. At this writing, it is planned to develop a graphical user interface (GUI) for monitoring and controlling a product generation engine in MATIS. The GUI would enable users to schedule multiple processes and manage the data products produced in the processes. Although MATIS was initially designed for instrument data product generation,
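
    A toy sketch of event-driven rule dispatch of the kind described (execute programs in response to specific events under specific conditions); the rule structure and all names are hypothetical, not MATIS's actual interface.

    ```python
    # Hypothetical rule table: each rule binds an event name, a condition, and an action.
    rules = [
        {"event": "file_arrived",
         "when": lambda ctx: ctx["type"] == "raw_instrument",
         "action": lambda ctx: print(f"launch calibrate {ctx['path']}")},
        {"event": "step_done",
         "when": lambda ctx: ctx["step"] == "calibrate",
         "action": lambda ctx: print(f"launch map-project {ctx['path']}")},
    ]

    def dispatch(event, ctx):
        """Run every rule registered for this event whose condition holds."""
        for rule in rules:
            if rule["event"] == event and rule["when"](ctx):
                rule["action"](ctx)

    dispatch("file_arrived", {"type": "raw_instrument", "path": "/data/img001.dat"})
    ```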

  20. Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.; Doherty, J.

    2011-12-01

    Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface-water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km² area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm³/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface-water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.
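    As a rough illustration of the null-space ideas above, the sketch below uses an SVD of a made-up observation Jacobian to split parameter space into solution and null spaces and projects a prediction sensitivity vector onto the null space; the matrices, rank threshold, and prior standard deviation are all hypothetical, not values from the Biscayne model.

    ```python
    import numpy as np

    # Minimal sketch of linear (first-order, second-moment) predictive uncertainty
    # analysis: the SVD of the observation Jacobian splits parameter space into a
    # solution space and a null space; the null-space projection of the prediction
    # sensitivity gives uncertainty that calibration cannot reduce.
    rng = np.random.default_rng(0)
    J = rng.normal(size=(50, 12)) @ rng.normal(size=(12, 20))  # rank-deficient Jacobian
    y = rng.normal(size=20)                                    # prediction sensitivities

    U, s, Vt = np.linalg.svd(J)
    k = int(np.sum(s > 1e-8 * s[0]))   # solution-space dimensionality
    V2 = Vt[k:].T                      # null-space basis vectors

    sigma_p = 1.0                      # prior parameter standard deviation (assumed)
    var_null = sigma_p**2 * float(np.sum((V2.T @ y)**2))
    print(f"solution-space dim = {k}, null-space prediction variance = {var_null:.3f}")
    ```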

  1. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    USGS Publications Warehouse

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
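    TSPROC itself is driven by its own scripting language; purely as an illustration of the kinds of operations described above (flow volumes, arithmetic transformation, seasonal statistics), here is a rough pandas analogue on a synthetic daily streamflow series. The series and statistics are invented, not TSPROC output.

    ```python
    import numpy as np
    import pandas as pd

    # Illustrative pandas sketch of TSPROC-style time-series processing
    # (this is not TSPROC itself). Synthetic daily streamflow in m^3/s.
    idx = pd.date_range("2000-01-01", "2002-12-31", freq="D")
    flow = pd.Series(np.random.default_rng(1).lognormal(2.0, 0.8, len(idx)),
                     index=idx, name="flow_m3s")

    annual_volume = (flow * 86400.0).resample("YS").sum()    # flow volume, m^3/year
    seasonal_mean = flow.groupby(flow.index.quarter).mean()  # seasonal statistic
    log_flow = np.log10(flow)                                # basic transformation
    print(annual_volume.round(0))
    print(seasonal_mean.round(2))
    ```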

  2. Using Bayesian regression to test hypotheses about relationships between parameters and covariates in cognitive models.

    PubMed

    Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan

    2018-06-01

    Important tools in the advancement of cognitive science are quantitative models that represent different cognitive variables in terms of model parameters. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
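    A minimal sketch of the idea of quantifying evidence for a parameter-covariate relationship is shown below, using the BIC approximation to the Bayes factor rather than the paper's full hierarchical framework; the simulated covariate and parameter values are invented for illustration.

    ```python
    import numpy as np

    # Sketch: approximate Bayes factor for "model parameter ~ covariate" via the
    # BIC approximation, comparing an intercept-only regression to one with a
    # slope. This is a stand-in for, not a reproduction of, the paper's method.
    rng = np.random.default_rng(2)
    covariate = rng.normal(size=40)                 # e.g., a physiological measure
    param = 0.5 * covariate + rng.normal(size=40)   # estimated model parameters

    n = len(param)
    rss1 = np.sum((param - param.mean())**2)                     # intercept-only
    slope, intercept = np.polyfit(covariate, param, 1)
    rss2 = np.sum((param - (slope * covariate + intercept))**2)  # with covariate

    bic1 = n * np.log(rss1 / n) + 1 * np.log(n)
    bic2 = n * np.log(rss2 / n) + 2 * np.log(n)
    bf10 = np.exp((bic1 - bic2) / 2)   # evidence for a relationship
    print(f"BF10 approx. {bf10:.2f}")
    ```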

  3. Paleo-reconstruction: Using multiple biomarker parameters

    NASA Astrophysics Data System (ADS)

    Chen, Zhengzheng

    Advanced technologies have played essential roles in the development of molecular organic geochemistry. In this thesis, we have developed several new techniques and explored their applications, alone and with previous techniques, to paleo-reconstruction. First, we developed a protocol to separate biomarker fractions for accurate measurement in compound-specific isotope analysis. This protocol involves a combination of zeolite adduction and HPLC separation. Second, an integrated study of traditional biomarker parameters, diamondoids, and compound-specific biomarker isotopes differentiated oil groups from Saudi Arabia. Specifically, the Cretaceous-reservoired oils were divided into three groups and the Jurassic-reservoired oils were divided into two groups. Third, biomarker acids provide an alternative way to characterize biodegradation. Oils from the San Joaquin Valley, U.S.A., and oils from the Mediterranean display drastically different acid profiles. These differences in biomarker acids probably reflect different processes of biodegradation. Fourth, by analyzing biomarker distributions in the organic-rich rocks recording the onset of the Late Ordovician extinction, we propose that changes in salinity associated with eustatic sea-level fall contributed, at least locally, to the extinction of graptolite species.

  4. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.
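    As a toy illustration of output-error maximum likelihood estimation in the spirit of MMLE3 (which handles general bilinear systems with state and measurement noise), the sketch below fits two parameters of a simulated first-order system by minimizing a concentrated negative log-likelihood; the system, noise level, and starting values are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Output-error maximum likelihood sketch on a simulated first-order system
    # x' = a*x + b*u with measurement noise; not the MMLE3 formulation itself.
    rng = np.random.default_rng(3)
    dt, n = 0.05, 200
    u = np.sin(0.5 * np.arange(n) * dt)
    a_true, b_true = -1.5, 2.0

    def simulate(a, b):
        x = np.zeros(n)
        for k in range(n - 1):                 # forward-Euler state propagation
            x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
        return x

    z = simulate(a_true, b_true) + 0.05 * rng.normal(size=n)  # noisy measurements

    def neg_log_like(theta):
        resid = z - simulate(*theta)
        return 0.5 * n * np.log(np.sum(resid**2))  # noise variance concentrated out

    est = minimize(neg_log_like, x0=[-1.0, 1.0], method="Nelder-Mead")
    print("a, b estimates:", est.x)
    ```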

  5. A Range Finding Protocol to Support Design for Transcriptomics Experimentation: Examples of In-Vitro and In-Vivo Murine UV Exposure

    PubMed Central

    van Oostrom, Conny T.; Jonker, Martijs J.; de Jong, Mark; Dekker, Rob J.; Rauwerda, Han; Ensink, Wim A.; de Vries, Annemieke; Breit, Timo M.

    2014-01-01

    In transcriptomics research, design for experimentation by carefully considering biological, technological, practical and statistical aspects is very important, because the experimental design space is essentially limitless. Usually, the ranges of variable biological parameters of the design space are based on common practices and in turn on phenotypic endpoints. However, specific sub-cellular processes might only be partially reflected by phenotypic endpoints or outside the associated parameter range. Here, we provide a generic protocol for range finding in design for transcriptomics experimentation based on small-scale gene-expression experiments to help in the search for the right location in the design space by analyzing the activity of already known genes of relevant molecular mechanisms. Two examples illustrate the applicability: in-vitro UV-C exposure of mouse embryonic fibroblasts and in-vivo UV-B exposure of mouse skin. Our pragmatic approach is based on: framing a specific biological question and associated gene-set, performing a wide-ranged experiment without replication, eliminating potentially non-relevant genes, and determining the experimental ‘sweet spot’ by gene-set enrichment plus dose-response correlation analysis. Examination of many cellular processes that are related to UV response, such as DNA repair and cell-cycle arrest, revealed that basically each cellular (sub-) process is active at its own specific spot(s) in the experimental design space. Hence, the use of range finding, based on an affordable protocol like this, enables researchers to conveniently identify the ‘sweet spot’ for their cellular process of interest in an experimental design space and might have far-reaching implications for experimental standardization. PMID:24823911

  6. Selection of site specific vibration equation by using analytic hierarchy process in a quarry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalayci, Ulku, E-mail: ukalayci@istanbul.edu.tr; Ozer, Umit, E-mail: uozer@istanbul.edu.tr

    This paper presents a new approach for the selection of the most accurate SSVA (Site Specific Vibration Attenuation) equation for blasting processes in a quarry located near settlements in Istanbul, Turkey. In this context, the SSVA equations obtained from the same study area in the literature were considered in terms of the distance between the shot points and buildings and the amount of explosive charge. For this purpose, the forecasting capabilities of 11 different SSVA equations obtained from the study area over the past 12 years were investigated under newly designated conditions, using 102 vibration records from the study area as test data. In this study, AHP (Analytic Hierarchy Process) was selected as the analysis method in order to determine the most accurate equation among the 11 SSVA equations, and parameters such as year, distance, charge, and r² of the equations were used as criteria for the AHP. Finally, the most appropriate equation was selected among the existing ones, and the process of selecting according to different target criteria was presented. Furthermore, it was noted that the forecasting results of the selected equation are more accurate than those formed using the test results. - Highlights: • The optimum Site Specific Vibration Attenuation equation for blasting in a quarry located near settlements was determined. • It is indicated that SSVA equations changing over the years do not always give accurate estimates under changing conditions. • Selection of the blast-induced SSVA equation was made using AHP. • The equation selection method was highlighted based on parameters such as charge, distance, and quarry geometry changes (year).

  7. An investigation on co-axial water-jet assisted fiber laser cutting of metal sheets

    NASA Astrophysics Data System (ADS)

    Madhukar, Yuvraj K.; Mullick, Suvradip; Nath, Ashish K.

    2016-02-01

    Water assisted laser cutting has received significant attention in recent times, promising many advantages over conventional gas assisted laser cutting. A comparative study between co-axial water-jet and gas-jet assisted laser cutting of thin sheets of mild steel (MS) and titanium (Ti) by fiber laser is presented. A fiber laser (1.07 μm wavelength) was utilised because of its low absorption in water. The cut quality was evaluated in terms of average kerf, projected dross height, heat affected zone (HAZ) and cut surface roughness. It was observed that a broad range of process parameters could produce consistent cut quality in MS. However, oxygen assisted cutting could produce better quality only with optimised parameters at high laser power and high cutting speed. In Ti cutting, the water-jet assisted laser cutting performed better over the entire range of process parameters compared with gas assisted cutting. The specific energy, defined as the amount of laser energy required to remove a unit volume of material, was found to be higher for the water-jet assisted laser cutting process. This is mainly due to various losses associated with water assisted laser processing, such as absorption of laser energy in water and scattering at the interaction zone.
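    The specific energy figure of merit defined above is straightforward to compute; a sketch with purely illustrative numbers follows (the power, speed, sheet thickness, and kerf width are not values from the study).

    ```python
    # Specific energy as defined in the abstract: laser energy per unit volume
    # of material removed. All inputs below are illustrative placeholders.
    def specific_energy(power_w, speed_mm_s, thickness_mm, kerf_mm):
        """Returns J/mm^3 = W / (mm/s * mm * mm)."""
        removal_rate = speed_mm_s * thickness_mm * kerf_mm  # mm^3/s removed
        return power_w / removal_rate

    # Hypothetical cut: 1 kW fiber laser, 20 mm/s, 1 mm sheet, 0.3 mm kerf.
    print(specific_energy(1000.0, 20.0, 1.0, 0.3), "J/mm^3")
    ```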

  8. Optimization of Biomathematical Model Predictions for Cognitive Performance Impairment in Individuals: Accounting for Unknown Traits and Uncertain States in Homeostatic and Circadian Processes

    PubMed Central

    Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.

    2007-01-01

    Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
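    For orientation, the additive two-process structure that the Bayesian forecasting procedure is wrapped around can be sketched as below; the trait parameters (build-up rate, circadian amplitude, basal level) and the phase are illustrative defaults rather than the study's fitted values, and wake is assumed to begin at clock time zero.

    ```python
    import numpy as np

    # Illustrative sketch of an additive two-process performance model: a
    # saturating homeostatic pressure plus a sinusoidal circadian component.
    # Parameter values are made up, not the subject-specific estimates.
    def impairment(t_h, buildup_rate=18.2, amplitude=1.0, phase_h=16.8, basal=0.0):
        s = 1.0 - np.exp(-t_h / buildup_rate)                         # homeostatic process
        c = amplitude * np.cos(2.0 * np.pi * (t_h - phase_h) / 24.0)  # circadian process
        return basal + s + c                                          # higher = more impaired

    print(np.round(impairment(np.arange(0.0, 48.0, 6.0)), 2))
    ```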

  9. Experimental Study of Heat Transfer Performance of Polysilicon Slurry Drying Process

    NASA Astrophysics Data System (ADS)

    Wang, Xiaojing; Ma, Dongyun; Liu, Yaqian; Wang, Zhimin; Yan, Yangyang; Li, Yuankui

    2016-12-01

    In recent years, the growth of the solar energy photovoltaic industry has greatly promoted the development of polysilicon. However, there has been little research into the slurry by-products of polysilicon production. In this paper, the thermal performance of polysilicon slurry was studied in an industrial drying process with a twin-screw horizontal intermittent dryer. By dividing the drying process into several subunits, the parameters of each unit could be regarded as constant in that period. The time-dependent changes in parameters including temperature, specific heat, and evaporation enthalpy were plotted. An equation for the change in the heat transfer coefficient over time was calculated based on heat transfer equations. The concept of a distribution coefficient was introduced to reflect the influence of stirring on the heat transfer area. The distribution coefficient ranged from 1.2 to 1.7 and was obtained with the fluid simulation software FLUENT, which simplified the calculation of the heat transfer area during the drying process. These experimental data can be used to guide the study of polysilicon slurry drying and optimize the design of dryers for industrial processes.
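    A rough sketch of the per-subunit heat balance implied above: latent heat of evaporated solvent plus sensible heating of the slurry, divided by exposure time, wall area, and wall-to-slurry temperature difference. All input values are illustrative placeholders, not measurements from the study.

    ```python
    # Per-subunit heat-balance sketch for backing out a heat transfer coefficient
    # during drying. The effective area could further be scaled by the paper's
    # 1.2-1.7 distribution coefficient; values below are invented.
    def heat_transfer_coeff(m_evap_kg, h_fg_kj_per_kg, m_slurry_kg, cp_kj_per_kgk,
                            dT_slurry_k, area_m2, dT_wall_k, dt_s):
        q_kj = m_evap_kg * h_fg_kj_per_kg + m_slurry_kg * cp_kj_per_kgk * dT_slurry_k
        return 1000.0 * (q_kj / dt_s) / (area_m2 * dT_wall_k)   # W/(m^2*K)

    # e.g. 2 kg evaporated and a 5 K slurry temperature rise over a 600 s subunit:
    print(round(heat_transfer_coeff(2.0, 850.0, 40.0, 1.9, 5.0, 1.2, 60.0, 600.0), 1))
    ```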

  10. URPD: a specific product primer design tool

    PubMed Central

    2012-01-01

    Background Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present currently available software that is aimed not at analyzing large sequences but at providing a rather straightforward way of visualizing the primer design process for infrequent users. Findings URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparison of practical gel electrophoresis to virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. Conclusions URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high-throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experimental PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/. PMID:22713312

  11. URPD: a specific product primer design tool.

    PubMed

    Chuang, Li-Yeh; Cheng, Yu-Huei; Yang, Cheng-Hong

    2012-06-19

    Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present currently available software that is aimed not at analyzing large sequences but at providing a rather straightforward way of visualizing the primer design process for infrequent users. URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparison of practical gel electrophoresis to virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high-throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experimental PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/.

  12. Chemometrics-based process analytical technology (PAT) tools: applications and adaptation in pharmaceutical and biopharmaceutical industries.

    PubMed

    Challa, Shruthi; Potumarthi, Ravichandra

    2013-01-01

    Process analytical technology (PAT) is used to monitor and control critical process parameters in raw materials and in-process products to maintain the critical quality attributes and build quality into the product. Process analytical technology can be successfully implemented in pharmaceutical and biopharmaceutical industries not only to impart quality into the products but also to prevent out-of-specification results and improve productivity. PAT implementation eliminates the drawbacks of traditional methods, which involve excessive sampling, and facilitates rapid testing through direct sampling without any destruction of the sample. However, to successfully adapt PAT tools to the pharmaceutical and biopharmaceutical environment, a thorough understanding of the process is needed, along with mathematical and statistical tools to analyze the large multidimensional spectral data generated by PAT tools. Chemometrics is a chemical discipline which incorporates both statistical and mathematical methods to obtain and analyze relevant information from PAT spectral tools. Applications of commonly used PAT tools in combination with appropriate chemometric methods, along with their advantages and working principles, are discussed. Finally, the systematic application of PAT tools in the biopharmaceutical environment to control critical process parameters for achieving product quality is diagrammatically represented.

  13. Quantitative phase-digital holographic microscopy: a new imaging modality to identify original cellular biomarkers of diseases

    NASA Astrophysics Data System (ADS)

    Marquet, P.; Rothenfusser, K.; Rappaz, B.; Depeursinge, C.; Jourdain, P.; Magistretti, P. J.

    2016-03-01

    Quantitative phase microscopy (QPM) has recently emerged as a powerful label-free technique in the field of living cell imaging, allowing cell structure and dynamics to be measured non-invasively with nanometric axial sensitivity. Since the phase retardation of a light wave transmitted through the observed cells, namely the quantitative phase signal (QPS), is sensitive to both cellular thickness and the intracellular refractive index related to the cellular content, its accurate analysis allows various cell parameters to be derived and specific cell processes to be monitored, which are very likely to identify new cell biomarkers. Specifically, quantitative phase-digital holographic microscopy (QP-DHM), thanks to its numerical flexibility facilitating parallelization and automation processes, represents an appealing imaging modality both to identify original cellular biomarkers of diseases and to explore the underlying pathophysiological processes.

  14. VPPA weld model evaluation

    NASA Technical Reports Server (NTRS)

    Mccutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.

    1992-01-01

    NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970s, but the physics of the process have never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr., (NASA) has developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments were used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.

  15. Display device for indicating the value of a parameter in a process plant

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for the operator to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man-machine interface, and the plant specific algorithms to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  16. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although the Taguchi method reasonably minimizes the defects, a genetic algorithm is then applied to the Taguchi-optimized parameters in order to achieve zero defects during the processes.
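    As a small illustration of the Taguchi step, the smaller-the-better signal-to-noise ratio that is typically maximized when minimizing defects can be computed as below; the replicate defect counts are invented.

    ```python
    import numpy as np

    # Taguchi smaller-the-better signal-to-noise ratio; higher S/N indicates a
    # better parameter setting. Replicate data below are illustrative only.
    def sn_smaller_is_better(y):
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y**2))

    # Defect counts from three replicate runs at two hypothetical settings:
    print(sn_smaller_is_better([4, 5, 3]))
    print(sn_smaller_is_better([1, 2, 1]))
    ```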

  17. Optimization of High-Throughput Sequencing Kinetics for determining enzymatic rate constants of thousands of RNA substrates

    PubMed Central

    Niland, Courtney N.; Jankowsky, Eckhard; Harris, Michael E.

    2016-01-01

    Quantification of the specificity of RNA binding proteins and RNA processing enzymes is essential to understanding their fundamental roles in biological processes. High Throughput Sequencing Kinetics (HTS-Kin) uses high throughput sequencing and internal competition kinetics to simultaneously monitor the processing rate constants of thousands of substrates by RNA processing enzymes. This technique has provided unprecedented insight into the substrate specificity of the tRNA processing endonuclease ribonuclease P. Here, we investigate the accuracy and robustness of measurements associated with each step of the HTS-Kin procedure. We examine the effect of substrate concentration on the observed rate constant, determine the optimal kinetic parameters, and provide guidelines for reducing error in amplification of the substrate population. Importantly, we find that high-throughput sequencing and experimental reproducibility contribute their own sources of error, and these are the main sources of imprecision in the quantified results when otherwise optimized guidelines are followed. PMID:27296633
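    For first-order competing substrates, the internal-competition principle behind HTS-Kin reduces to the relation k_i/k_ref = ln(f_i)/ln(f_ref), where f is the fraction of a substrate remaining. A sketch with invented read counts:

    ```python
    import numpy as np

    # Internal-competition kinetics sketch: relative rate constants from the
    # fraction of each substrate remaining at one time point, referenced to a
    # chosen substrate. Substrate names and counts are hypothetical.
    counts_t0 = {"AAG": 1000, "AGG": 1200, "ACG": 900}  # reads before reaction
    counts_t1 = {"AAG": 400,  "AGG": 800,  "ACG": 700}  # reads after reaction

    frac = {s: counts_t1[s] / counts_t0[s] for s in counts_t0}
    k_ref = np.log(frac["AAG"])
    k_rel = {s: np.log(f) / k_ref for s, f in frac.items()}
    print(k_rel)   # k(substrate) / k(AAG)
    ```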

  18. Parallel optimization of signal detection in active magnetospheric signal injection experiments

    NASA Astrophysics Data System (ADS)

    Gowanlock, Michael; Li, Justin D.; Rude, Cody M.; Pankratius, Victor

    2018-05-01

    Signal detection and extraction requires substantial manual parameter tuning at different stages in the processing pipeline. Time-series data depends on domain-specific signal properties, necessitating unique parameter selection for a given problem. The large potential search space makes this parameter selection process time-consuming and subject to variability. We introduce a technique to search and prune such parameter search spaces in parallel and select parameters for time series filters using breadth- and depth-first search strategies to increase the likelihood of detecting signals of interest in the field of magnetospheric physics. We focus on studying geomagnetic activity in the extremely and very low frequency ranges (ELF/VLF) using ELF/VLF transmissions from Siple Station, Antarctica, received at Québec, Canada. Our technique successfully detects amplified transmissions and achieves substantial speedup performance gains as compared to an exhaustive parameter search. We present examples where our algorithmic approach reduces the search from hundreds of seconds down to less than 1 s, with a ranked signal detection in the top 99th percentile, thus making it valuable for real-time monitoring. We also present empirical performance models quantifying the trade-off between the quality of signal recovered and the algorithm response time required for signal extraction. In the future, improved signal extraction in scenarios like the Siple experiment will enable better real-time diagnostics of conditions of the Earth's magnetosphere for monitoring space weather activity.
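    A toy version of the breadth-then-depth pruning strategy might look like the following, with a stand-in scoring function in place of the paper's detection metric and hypothetical filter parameters:

    ```python
    import numpy as np
    from itertools import product

    # Pruned parameter sweep sketch: score a coarse grid first (breadth step),
    # keep only the top-ranked branches, then refine locally (depth step).
    def score(bandpass_hz, threshold):      # stand-in for a detection metric
        return -abs(bandpass_hz - 2.5) - abs(threshold - 0.3)

    coarse = list(product(np.linspace(1, 5, 5), np.linspace(0.1, 0.9, 5)))
    ranked = sorted(coarse, key=lambda p: score(*p), reverse=True)
    best_region = ranked[:3]                # prune to the most promising branches

    refined = []
    for b, t in best_region:                # refine around each surviving branch
        refined += list(product(np.linspace(b - 0.5, b + 0.5, 5),
                                np.linspace(t - 0.1, t + 0.1, 5)))
    print(max(refined, key=lambda p: score(*p)))
    ```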

  19. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.

  20. Array Automated Assembly Task Low Cost Silicon Solar Array Project, Phase 2

    NASA Technical Reports Server (NTRS)

    Rhee, S. S.; Jones, G. T.; Allison, K. L.

    1978-01-01

    Progress in the development of solar cells and module process steps for low-cost solar arrays is reported. Specific topics covered include: (1) a system to automatically measure solar cell electrical performance parameters; (2) automation of wafer surface preparation, printing, and plating; (3) laser inspection of mechanical defects of solar cells; and (4) a silicon antireflection coating system. Two solar cell process steps, laser trimming and holing automation and spray-on dopant junction formation, are described.

  1. The Role of Evolutive Elastic Properties in the Performance of a Sheet Formed Spring Applied in Multimedia Car Industry

    NASA Astrophysics Data System (ADS)

    Faria, J.; Silva, J.; Bernardo, P.; Araújo, M.; Alves, J. L.

    2016-08-01

    The manufacturing process and the behaviour of a spring manufactured from an aluminium sheet are described and investigated in this work, considering the specifications for the in-service conditions. The spring is intended to be applied in the car multimedia industry to replace bolted connections. Among other aspects, the roles of the constitutive parameters and the hypothesis of elastic properties that evolve with plastic work are investigated, both in the multistep forming process and under working conditions.

  2. Problems of the joint operation of the automated process control system (APCS) and its information security system, and their solutions.

    NASA Astrophysics Data System (ADS)

    Arakelyan, E. K.; Andryushin, A. V.; Mezin, S. V.; Kosoy, A. A.; Kalinina, Ya V.; Khokhlov, I. S.

    2017-11-01

    A principle is proposed for the interaction of the APCS technological protection systems with the information security system in the case of incorrect execution of a technological protection algorithm: the correctness of the operation of the technological protection is checked in each specific situation using the functional relationship between the monitored parameters. A methodology for assessing the economic feasibility of developing and implementing the information security system is also presented.

  3. Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamrick, Todd

    2011-01-01

    Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
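    The conventional MSE equation referred to above is commonly written in Teale's form, MSE = WOB/A + 120·π·N·T/(A·ROP) in field units; a sketch with illustrative inputs follows (the parameter values are invented, not data from the thesis).

    ```python
    import math

    # Teale's mechanical specific energy in field units: WOB in lbf, torque in
    # ft-lb, rotary speed in rpm, ROP in ft/hr, bit diameter in inches; result
    # in psi (energy per unit volume). Inputs below are illustrative only.
    def mse_psi(wob_lbf, torque_ftlb, rpm, rop_ft_hr, bit_diam_in):
        area_in2 = math.pi * bit_diam_in**2 / 4.0
        rotary = 120.0 * math.pi * rpm * torque_ftlb / (area_in2 * rop_ft_hr)
        return wob_lbf / area_in2 + rotary

    print(mse_psi(wob_lbf=25000, torque_ftlb=5000, rpm=120, rop_ft_hr=60,
                  bit_diam_in=8.5))
    ```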

  4. Analog design optimization methodology for ultralow-power circuits using intuitive inversion-level and saturation-level parameters

    NASA Astrophysics Data System (ADS)

    Eimori, Takahisa; Anami, Kenji; Yoshimatsu, Norifumi; Hasebe, Tetsuya; Murakami, Kazuaki

    2014-01-01

    A comprehensive design optimization methodology using intuitive nondimensional parameters of inversion-level and saturation-level is proposed, especially for ultralow-power, low-voltage, and high-performance analog circuits with mixed strong, moderate, and weak inversion metal-oxide-semiconductor transistor (MOST) operations. This methodology is based on the synthesized charge-based MOST model composed of Enz-Krummenacher-Vittoz (EKV) basic concepts and advanced-compact-model (ACM) physics-based equations. The key concept of this methodology is that all circuit and system characteristics are described as some multivariate functions of inversion-level parameters, where the inversion level is used as an independent variable representative of each MOST. The analog circuit design starts from the first step of inversion-level design using universal characteristics expressed by circuit currents and inversion-level parameters without process-dependent parameters, followed by the second step of foundry-process-dependent design and the last step of verification using saturation-level criteria. This methodology also paves the way to an intuitive and comprehensive design approach for many kinds of analog circuit specifications by optimization using inversion-level log-scale diagrams and saturation-level criteria. In this paper, we introduce an example of our design methodology for a two-stage Miller amplifier.

  5. Non-electrical-power temperature-time integrating sensor for RFID based on microfluidics

    NASA Astrophysics Data System (ADS)

    Schneider, Mike; Hoffmann, Martin

    2011-06-01

    The integration of RFID tags into packages offers the opportunity to combine the logistic advantages of the technology with monitoring different parameters from inside the package at the same time. An essential demand for enhanced product safety, especially in the pharmaceutical and food industries, is the monitoring of the time-temperature integral. Thus, completely passive time-temperature integrators (TTI) requiring no battery, microprocessor, or data-logging devices are developed. A TTI representing the sterilization process inside an autoclave system is a demanding challenge: a temperature of at least 120 °C has to be maintained over 45 minutes to assure that no unwanted organism remains. As the temperature increases, the viscosity of the fluid changes and thus the speed of the fluid inside the channel increases. The filled length of the channel represents the time-temperature integral affecting the system. Measurements as well as simulations allow conclusions to be drawn about the influence of the geometrical parameters of the system and provide the possibility of adaptation. Thus a completely passive sensor element for monitoring an integral parameter, with no external electrical power supply or data-processing technology, is demonstrated. Furthermore, it is shown how to adjust the specific TTI parameters of the sensor to different applications and needs by modifying the geometrical parameters of the system.

  6. Algorithms and Results of Eye Tissues Differentiation Based on RF Ultrasound

    PubMed Central

    Jurkonis, R.; Janušauskas, A.; Marozas, V.; Jegelevičius, D.; Daukantas, S.; Patašius, M.; Paunksnis, A.; Lukoševičius, A.

    2012-01-01

    Algorithms and software were developed for the analysis of B-scan ultrasonic signals acquired from a commercial diagnostic ultrasound system. The algorithms process raw ultrasonic signals in the backscattered spectrum domain, which is obtained using two time-frequency methods: short-time Fourier and Hilbert-Huang transformations. The signals from selected regions of eye tissues are characterized by the parameters: B-scan envelope amplitude, approximated spectral slope, approximated spectral intercept, mean instantaneous frequency, mean instantaneous bandwidth, and parameters of the Nakagami distribution characterizing the Hilbert-Huang transformation output. The backscattered ultrasound signal parameters characterizing intraocular and orbit tissues were processed by a decision tree data mining algorithm. The pilot trial proved that the applied methods are able to correctly classify signals from corpus vitreum blood, extraocular muscle, and orbit tissues. In 26 cases of ocular tissue classification, one error occurred when tissues were classified into the classes of corpus vitreum blood, extraocular muscle, and orbit tissue. In this pilot classification, the spectral intercept and the Nakagami parameter for the instantaneous frequency distribution of the 1st intrinsic mode function were found to be specific to corpus vitreum blood, orbit, and extraocular muscle tissues. We conclude that ultrasound data should be further collected in a clinical database to establish the background for a decision support system for noninvasive ocular tissue differentiation. PMID:22654643

  7. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: (1) basic image statistics, (2) gray-level co-occurrence matrix (GLCM), (3) gray-level run-length matrix (GLRLM), and (4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  8. Using implicit association tests in age-heterogeneous samples: The importance of cognitive abilities and quad model processes.

    PubMed

    Wrzus, Cornelia; Egloff, Boris; Riediger, Michaela

    2017-08-01

    Implicit association tests (IATs) are increasingly used to indirectly assess people's traits, attitudes, or other characteristics. In addition to measuring traits or attitudes, IAT scores also reflect differences in cognitive abilities because scores are based on reaction times (RTs) and errors. As cognitive abilities change with age, questions arise concerning the usage and interpretation of IATs for people of different ages. To address these questions, the current study examined how cognitive abilities and cognitive processes (i.e., quad model parameters) contribute to IAT results in a large age-heterogeneous sample. Participants (N = 549; 51% female) in an age-stratified sample (range = 12-88 years) completed different IATs and 2 tasks to assess cognitive processing speed and verbal ability. From the IAT data, D2-scores were computed based on RTs, and quad process parameters (activation of associations, overcoming bias, detection, guessing) were estimated from individual error rates. IAT scores and all quad processes except guessing varied substantially with age. Quad processes AC and D predicted D2-scores of the content-specific IAT. Importantly, the effects of cognitive abilities and quad processes on IAT scores were not significantly moderated by participants' age. These findings suggest that IATs seem suitable for age-heterogeneous studies from adolescence to old age when IATs are constructed and analyzed appropriately, for example with D-scores and process parameters. We offer further insight into how D-scoring controls for method effects in IATs and what IAT scores capture in addition to implicit representations of characteristics. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Parameterization of spectra

    NASA Technical Reports Server (NTRS)

    Cornish, C. R.

    1983-01-01

    Following reception and analog-to-digital (A/D) conversion, atmospheric radar backscatter echoes need to be processed so as to obtain desired information about atmospheric processes and to eliminate or minimize contaminating contributions from other sources. Various signal processing techniques have been implemented at mesosphere-stratosphere-troposphere (MST) radar facilities to estimate parameters of interest from received spectra. Such estimation techniques need to be both accurate and sufficiently efficient to be within the capabilities of the particular data-processing system. The various techniques used to parameterize the spectra of received signals are reviewed herein. Noise estimation, electromagnetic interference, data smoothing, correlation, and the Doppler effect are among the specific points addressed.
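    As an illustration of spectral parameterization, the standard moment estimates (noise floor, signal power, mean Doppler shift, spectral width) can be computed from a synthetic spectrum as below; the spectrum shape and noise model are invented.

    ```python
    import numpy as np

    # Moment-based parameterization of a Doppler spectrum: noise-floor estimate,
    # then zeroth, first, and second moments of the noise-subtracted spectrum.
    rng = np.random.default_rng(4)
    f = np.linspace(-50, 50, 256)   # Doppler frequency bins, Hz (synthetic)
    spec = 1.0 + 8.0 * np.exp(-(f - 10.0)**2 / 18.0) + 0.2 * rng.random(256)

    noise = np.median(spec)                              # simple noise-floor estimate
    s = np.clip(spec - noise, 0.0, None)                 # noise-subtracted spectrum
    p0 = s.sum()                                         # 0th moment: signal power
    f_mean = (f * s).sum() / p0                          # 1st moment: mean shift
    width = np.sqrt(((f - f_mean)**2 * s).sum() / p0)    # 2nd moment: spectral width
    print(f"power={p0:.1f}, mean={f_mean:.2f} Hz, width={width:.2f} Hz")
    ```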

  10. Comparative Analysis on Nonlinear Models for Ron Gasoline Blending Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Aguilera, R. Carreño; Yu, Wen; Rodríguez, J. C. Tovar; Mosqueda, M. Elena Acevedo; Ortiz, M. Patiño; Juarez, J. J. Medel; Bautista, D. Pacheco

    The blending process, being inherently nonlinear, is difficult to model, since it may change significantly depending on the components and the process variables of each refinery. Different components can be blended depending on the existing stock, and since the chemical characteristics of each component change dynamically, all components are blended until the specification in the different properties required by the customer is reached. One of the most relevant properties is the octane number, which is difficult to control in line (without component storage). Since each refinery process is quite different, a generic gasoline blending model is not useful for in-line blending in a specific process. This paper presents a mathematical gasoline blending model for a given process, described in state space as a basic description of the blending process. The objective is to adjust the parameters so that the blending model tracks a signal along its trajectory, using both an extreme learning machine and a nonlinear autoregressive moving average (NARMA) neural network method, so that a comparative study can be developed.

  11. Landsat-5 bumper-mode geometric correction

    USGS Publications Warehouse

    Storey, James C.; Choate, Michael J.

    2004-01-01

    The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.

  12. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler

    PubMed Central

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-01-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
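    The gravimetric adjustment step can be illustrated as a simple calibration-curve fit: dispensed volume (balance mass divided by liquid density) is regressed against the commanded volume, and the fit is inverted to correct future commands. The readings below are invented, not TECAN data.

    ```python
    import numpy as np

    # Gravimetric calibration sketch: fit dispensed volume against commanded
    # volume, then invert the linear fit to correct future pipetting commands.
    target_ul = np.array([10, 50, 100, 300, 600, 900], dtype=float)
    mass_mg = np.array([9.6, 48.9, 98.0, 296.1, 592.8, 889.5])  # balance readings
    density_mg_ul = 0.998                                       # water near 21 C

    dispensed_ul = mass_mg / density_mg_ul
    slope, offset = np.polyfit(target_ul, dispensed_ul, 1)      # calibration curve
    corrected_cmd = (target_ul - offset) / slope                # adjusted commands
    print(f"slope={slope:.4f}, offset={offset:.2f} uL")
    ```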

  13. Extending the performance of KrF laser for microlithography by using novel F2 control technology

    NASA Astrophysics Data System (ADS)

    Zambon, Paolo; Gong, Mengxiong; Carlesi, Jason; Padmabandu, Gunasiri G.; Binder, Mike; Swanson, Ken; Das, Palash P.

    2000-07-01

    Exposure tools for 248nm lithography have reached a level of maturity comparable to those based on i-line. With this increase in maturity, there is a concomitant requirement for greater flexibility from the laser by the process engineers. Usually, these requirements pertain to energy, spectral width and repetition rate. By utilizing a combination of laser parameters, the process engineers are often able to optimize throughput, reduce cost-of-operation or achieve greater process margin. Hitherto, such flexibility of laser operation was possible only via significant changes to various laser modules. During our investigation, we found that the key measure of the laser that impacts the aforementioned parameters is its F2 concentration. By monitoring and controlling its slope efficiency, the laser's F2 concentration may be precisely controlled. Thus a laser may tune to operate under specifications as diverse as 7 mJ, Δλ(FWHM) < 0.3 pm and 10 mJ, Δλ(FWHM) < 0.6 pm and still meet the host of requirements necessary for lithography. We discuss this new F2 control technique and highlight some laser performance parameters.

  14. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy

    PubMed Central

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós

    2014-01-01

    Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813

  15. Image-based information, communication, and retrieval

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A. L.

    1980-01-01

    The IBIS/VICAR system combines video image processing and information management. Flexible programs require the user to supply only parameters specific to a particular application. Special-purpose input/output routines transfer image data with reduced memory requirements. New application programs are easily incorporated. The program is written in FORTRAN IV, Assembler, and OS JCL for batch execution and has been implemented on the IBM 360.

  16. 40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR part 60... the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...

  17. 40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR part 60... the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...

  18. Minimization of operational impacts on spectrophotometer color measurements for cotton

    USDA-ARS?s Scientific Manuscript database

    A key cotton quality and processing property that is gaining increasing importance is the color of the cotton. Cotton fiber in the U.S. is classified for color using the Uster® High Volume Instrument (HVI), using the parameters Rd and +b. Rd and +b are specific to cotton fiber and are not typical ...

  19. Chronic alcoholism: insights from neurophysiology.

    PubMed

    Campanella, S; Petit, G; Maurage, P; Kornreich, C; Verbanck, P; Noël, X

    2009-01-01

    Increasing knowledge of the anatomical structures and cellular processes underlying psychiatric disorders may help bridge the gap between clinical signs and basic physiological processes. Accordingly, considerable insight has been gained in recent years into a common psychiatric condition, i.e., chronic alcoholism. We reviewed various physiological parameters that are altered in chronic alcoholic patients compared to healthy individuals--continuous electroencephalogram, oculomotor measures, cognitive event-related potentials and event-related oscillations--to identify links between these physiological parameters, altered cognitive processes and specific clinical symptoms. Alcoholic patients display: (1) high beta and theta power in the resting electroencephalogram, suggesting hyperarousal of their central nervous system; (2) abnormalities in smooth pursuit eye movements, in saccadic inhibition during antisaccade tasks, and in prepulse inhibition, suggesting disturbed attention modulation and abnormal patterns of prefrontal activation that may stem from the same prefrontal "inhibitory" cortical dysfunction; (3) decreased amplitude for cognitive event-related potentials situated along the continuum of information-processing, suggesting that alcoholism is associated with neurophysiological deficits at the level of the sensory cortex and not only disturbances involving associative cortices and limbic structures; and (4) decreased theta, gamma and delta oscillations, suggesting cognitive disinhibition at a functional level. The heterogeneity of alcoholic disorders in terms of symptomatology, course and outcome is the result of various pathophysiological processes that physiological parameters may help to define. These alterations may be related to precise cognitive processes that could be easily monitored neurophysiologically in order to create more homogeneous subgroups of alcoholic individuals.

  20. Indigenous lunar construction materials

    NASA Technical Reports Server (NTRS)

    Rogers, Wayne; Sture, Stein

    1991-01-01

    The objectives are the following: to investigate the feasibility of the use of local lunar resources for construction of a lunar base structure; to develop a material processing method and integrate the method with design and construction of a pressurized habitation structure; to estimate specifications of the support equipment necessary for material processing and construction; and to provide parameters for systems models of lunar base construction, supply, and operations. The topics are presented in viewgraph form and include the following: comparison of various lunar structures; guidelines for material processing methods; cast lunar regolith; examples of cast basalt components; cast regolith process; processing equipment; mechanical properties of cast basalt; material properties and structural design; and future work.

  1. Vapor hydrogen peroxide as alternative to dry heat microbial reduction

    NASA Astrophysics Data System (ADS)

    Chung, S.; Kern, R.; Koukol, R.; Barengoltz, J.; Cash, H.

    2008-09-01

    The Jet Propulsion Laboratory (JPL), in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal was to include this technique, with an appropriate specification, in NASA Procedural Requirements 8020.12 as a low-temperature technique complementary to the dry heat sterilization process. The VHP process is widely used by the medical industry to sterilize surgical instruments and biomedical devices, but high doses of VHP may degrade the performance of flight hardware or compromise material compatibility. The goal of this study was to determine the minimum VHP process conditions for microbial reduction levels acceptable for planetary protection. Experiments were conducted by the STERIS Corporation, under contract to JPL, to evaluate the effectiveness of vapor hydrogen peroxide for the inactivation of the standard spore challenge, Geobacillus stearothermophilus. VHP process parameters were determined that provide significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. In addition to the obvious process parameters of interest (hydrogen peroxide concentration, number of injection cycles, and exposure duration), the investigation also considered the possible effect on lethality of environmental parameters: temperature, absolute humidity, and material substrate. This study delineated a range of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D-value may be imposed, a process humidity range for which the worst-case D-value may be imposed, and the dependence on selected spacecraft material substrates. The derivation of D-values from the lethality data permitted conservative planetary protection recommendations.
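
    The D-value cited above is the decimal reduction time: the exposure needed for a one-log (90%) drop in viable spores. Assuming the usual log-linear inactivation kinetics, it can be estimated from survivor counts by a least-squares fit of log10(survivors) against exposure time; the sketch below uses hypothetical survivor data, not the STERIS measurements.

      import numpy as np

      def d_value(times_min, survivors):
          # Fit log10(survivors) vs. time; the D-value is -1/slope (min per log reduction)
          slope, _ = np.polyfit(times_min, np.log10(survivors), 1)
          return -1.0 / slope

      # Hypothetical G. stearothermophilus survivor counts (CFU) vs. VHP exposure (min)
      times = np.array([0.0, 5.0, 10.0, 15.0])
      cfu = np.array([1.0e6, 3.2e4, 1.0e3, 3.0e1])
      print(f"Estimated D-value: {d_value(times, cfu):.2f} min")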

  2. Work-Facilitating Information Visualization Techniques for Complex Wastewater Systems

    NASA Astrophysics Data System (ADS)

    Ebert, Achim; Einsfeld, Katja

    The design and the operation of urban drainage systems and wastewater treatment plants (WWTP) have become increasingly complex. This complexity is due to increased requirements concerning process technology as well as technical, environmental, economical, and occupational safety aspects. The plant operator has access not only to some timeworn files and measured parameters but also to numerous on-line and off-line parameters that characterize the current state of the plant in detail. Moreover, expert databases and specific support pages of plant manufacturers are accessible through the World Wide Web. Thus, the operator is overwhelmed with predominantly unstructured data.

  3. A MULTISCALE FRAMEWORK FOR THE STOCHASTIC ASSIMILATION AND MODELING OF UNCERTAINTY ASSOCIATED WITH NCF COMPOSITE MATERIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin

    A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, has been used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, software for composite materials, has been used for the upscaling process. The framework is demonstrated using non-crimp fabric composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.

  4. Customizing vacuum fluctuations for enhanced entanglement creation

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2018-07-01

    This paper connects the creation of entanglement through a cavity-enhanced decay rate with practical design parameters such as cavity dimension and cavity mirror reflectivity. The influence of specific physical parameters on cavity-enhanced emission, in relation to entanglement creation, is clarified. It is found that entanglement increases as the size of the cavity decreases or the reflectivity of the cavity mirrors increases. Additionally, the negative effect of individual qubit decoherence on the entanglement is discussed. These results can be used to design or choose a practical system for implementing entanglement between two qubits for quantum computation and information processing.

  5. Development of analysis technique to predict the material behavior of blowing agent

    NASA Astrophysics Data System (ADS)

    Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo

    2014-11-01

    In order to numerically simulate the foaming behavior of mastic sealer containing a blowing agent, foaming and driving-force models that incorporate the foaming characteristics are needed. An elastic stress model is also required to represent the material behavior of the co-existing liquid and cured-polymer phases. It is important to determine thermal properties such as thermal conductivity and specific heat because foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to explain the foaming process and the material behavior during and after the process. To obtain the material parameters in each model, the following experiments and corresponding numerical simulations are performed: a thermal test, a simple shear test and a foaming test. Error functions are defined as the differences between the experimental measurements and the numerical simulation results, and the parameters are determined by minimizing these error functions. To ensure the validity of the obtained parameters, a confirmation simulation for each model is conducted by applying the determined parameters. Cross-verification is performed by measuring the foaming/shrinkage force; the cross-verification results tended to follow the experimental results. Interestingly, it was possible to estimate the micro-deformation occurring in an automobile roof surface by applying the proposed model to an oven-process analysis. The application of the developed analysis technique will contribute to designs with minimized micro-deformation.
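
    The fitting strategy described (define an error function between measurement and simulation, then minimize it) can be illustrated with a toy first-order foaming-height model; the model form, data and parameter names below are hypothetical, not those of the paper.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical measurements: foam height (mm) vs. time (s)
      t_meas = np.array([0.0, 30.0, 60.0, 120.0, 240.0])
      h_meas = np.array([2.0, 4.1, 5.6, 7.0, 7.8])

      def foam_height(t, h_max, k):
          # Toy first-order foaming model with a 2 mm initial bead height (assumption)
          return 2.0 + h_max * (1.0 - np.exp(-k * t))

      def error_fn(params):
          # Sum of squared differences between experiment and model prediction
          h_max, k = params
          return np.sum((foam_height(t_meas, h_max, k) - h_meas) ** 2)

      res = minimize(error_fn, x0=[5.0, 0.01], method="Nelder-Mead")
      print("fitted h_max, k:", res.x, "residual:", res.fun)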

  6. Discharge runaway in high power impulse magnetron sputtering of carbon: the effect of gas pressure, composition and target peak voltage

    NASA Astrophysics Data System (ADS)

    Vitelaru, Catalin; Aijaz, Asim; Constantina Parau, Anca; Kiss, Adrian Emil; Sobetkii, Arcadie; Kubart, Tomas

    2018-04-01

    Pressure- and target-voltage-driven discharge runaway from low to high discharge current density regimes in high power impulse magnetron sputtering of carbon is investigated. The main purpose is to provide meaningful insight into the discharge dynamics, with the ultimate goal of establishing a correlation between discharge properties and process parameters to control the film growth. This is achieved by examining a wide range of pressures (2–20 mTorr) and target voltages (700–850 V) and measuring the ion saturation current density at the substrate position. We show that the minimum plasma impedance is an important parameter for identifying the discharge transition as well as for establishing a stable operating condition. Using the formalism of the generalized recycling model, we introduce a new parameter, the 'recycling ratio', to quantify process gas recycling for specific process conditions. The model takes into account the ion flux to the target, the amount of gas available, and the amount of gas required for sustaining the discharge. We show that this parameter describes the relation between the gas recycling and the discharge current density. As a test case, we discuss the pressure- and voltage-driven transitions by changing the gas composition when adding Ne into the discharge. We propose that standard Ar HiPIMS discharges operated with significant gas recycling do not require Ne to increase the carbon ionization.

  7. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    USGS Publications Warehouse

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.

    2015-01-01

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  8. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.

    2015-03-15

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  9. The topographic development and areal parametric characterization of a stratified surface polished by mass finishing

    NASA Astrophysics Data System (ADS)

    Walton, Karl; Blunt, Liam; Fleming, Leigh

    2015-09-01

    Mass finishing is amongst the most widely used finishing processes in modern manufacturing, in applications from deburring to edge radiusing and polishing. Processing objectives are varied, ranging from the cosmetic to the functionally critical. One such critical application is the hydraulically smooth polishing of aero engine component gas-washed surfaces. In this, and many other applications, the drive to improve process control and finish tolerance is ever present. Considering its widespread use, mass finishing has seen limited research activity, particularly with respect to surface characterization. The objectives of the current paper are to: characterise the mass finished stratified surface and its development process using areal surface parameters; provide guidance on the optimal parameters and sampling method to characterise this surface type for a given application; and detail the spatial variation in surface topography due to coupon edge shadowing. Blasted and peened square plate coupons in titanium alloy are wet (vibro) mass finished iteratively with increasing duration. Measurement fields are precisely relocated between iterations by fixturing and an image superimposition alignment technique. Surface topography development is detailed with 'log of process duration' plots of the 'areal parameters for scale-limited stratified functional surfaces' (the Sk family). Characteristic features of the Smr2 plot are seen to map out the processing of peak, core and dale regions in turn. These surface process regions also become apparent in the 'log of process duration' plot for Sq, where the lower core and dale regions are well modelled by logarithmic functions. Surface finish (Ra or Sa) as a function of mass finishing duration is currently predicted with an exponential model; this model is shown to be limited for the current surface type at a critical range of surface finishes. Statistical analysis provides a group of areal parameters, including Vvc, Sq, and Sdq, showing optimal discrimination for a specific range of surface finish outcomes. As a consequence of edge shadowing, surface segregation is suggested for characterization purposes.
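
    The logarithmic behaviour noted for Sq in the core and dale regions suggests a fit of the form Sq = a + b·ln(t); a minimal sketch with hypothetical roughness values follows.

      import numpy as np

      # Hypothetical Sq (µm) at increasing mass-finishing durations (min)
      t = np.array([15.0, 30.0, 60.0, 120.0, 240.0])
      sq = np.array([2.8, 2.3, 1.9, 1.5, 1.1])

      # Least-squares fit of the logarithmic model Sq = a + b*ln(t)
      b, a = np.polyfit(np.log(t), sq, 1)
      print(f"Sq ≈ {a:.2f} {b:+.3f}·ln(t)")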

  10. Parametric Study and Multi-Criteria Optimization in Laser Cladding by a High Power Direct Diode Laser

    NASA Astrophysics Data System (ADS)

    Farahmand, Parisa; Kovacevic, Radovan

    2014-12-01

    In laser cladding, the performance of the deposited layers subjected to severe working conditions (e.g., wear and high temperature conditions) depends on the mechanical properties, the metallurgical bond to the substrate, and the percentage of dilution. The clad geometry and mechanical characteristics of the deposited layer are influenced greatly by the type of laser used as a heat source and the process parameters used. Nowadays, the quality of coatings fabricated by laser cladding and the efficiency of this process have improved thanks to the development of high-power diode lasers, with power up to 10 kW. In this study, laser cladding by a high power direct diode laser (HPDDL) as a new heat source was investigated in detail. High alloy tool steel (AISI H13) feedstock was deposited on mild steel (ASTM A36) by an HPDDL with up to 8 kW of laser power and a newly designed lateral feeding nozzle. The influences of the main process parameters (laser power, powder flow rate, and scanning speed) on the clad-bead geometry (specifically layer height and depth of the heat affected zone) and clad microhardness were studied. Multiple regression analysis was used to develop analytical models for the desired output properties according to the input process parameters. Analysis of variance was applied to check the accuracy of the developed models. The response surface methodology (RSM) and a desirability function were used for multi-criteria optimization of the cladding process. In order to investigate the effect of process parameters on the molten pool evolution, in-situ monitoring was utilized. Finally, the validation results for optimized process conditions show that the predicted results were in good agreement with measured values. The multi-criteria optimization makes it possible to acquire an efficient process for a combination of clad geometrical and mechanical characteristics control.
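
    The regression-plus-desirability workflow can be sketched as follows: fit a second-order response surface to the measured responses, then score candidate settings with a desirability function. The runs, coefficients and desirability limits below are hypothetical, and only two of the three process parameters are included to keep the sketch short.

      import numpy as np

      # Hypothetical cladding runs: laser power (kW), scan speed (mm/s) -> clad height (mm)
      P = np.array([4.0, 4.0, 6.0, 6.0, 8.0, 8.0, 5.0, 7.0])
      v = np.array([5.0, 10.0, 5.0, 10.0, 5.0, 10.0, 7.5, 7.5])
      h = np.array([1.9, 1.1, 2.6, 1.5, 3.2, 1.9, 1.8, 2.3])

      # Second-order model: h = b0 + b1*P + b2*v + b3*P^2 + b4*v^2 + b5*P*v
      X = np.column_stack([np.ones_like(P), P, v, P**2, v**2, P * v])
      beta, *_ = np.linalg.lstsq(X, h, rcond=None)

      def predict(p, s):
          return np.array([1.0, p, s, p**2, s**2, p * s]) @ beta

      def desirability(y, lo=1.5, hi=2.5):
          # Target-is-best desirability: 1 at the mid-target, 0 outside [lo, hi]
          target = 0.5 * (lo + hi)
          return 0.0 if (y < lo or y > hi) else 1.0 - abs(y - target) / (target - lo)

      y = predict(6.0, 7.5)
      print(f"predicted clad height: {y:.2f} mm, desirability: {desirability(y):.2f}")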

  11. Characterization of breast lesion using T1-perfusion magnetic resonance imaging: Qualitative vs. quantitative analysis.

    PubMed

    Thakran, S; Gupta, P K; Kabra, V; Saha, I; Jain, P; Gupta, R K; Singh, A

    2018-06-14

    The objective of this study was to quantify hemodynamic parameters using first-pass analysis of T1-perfusion magnetic resonance imaging (MRI) data of the human breast and to compare these parameters with the existing tracer kinetic parameters and with semi-quantitative and qualitative T1-perfusion analysis in terms of lesion characterization. MRI of the breast was performed in 50 women (mean age: 44±11 [SD] years; range: 26–75 years) with a total of 15 benign and 35 malignant breast lesions. After pre-processing, the T1-perfusion MRI data were analyzed using a qualitative approach by two radiologists (visual inspection of the kinetic curve into types I, II or III), a semi-quantitative approach (characterization of kinetic curve types using empirical parameters), the generalized tracer kinetic model (tracer kinetic parameters) and first-pass analysis (hemodynamic parameters). The chi-squared test, t-test, one-way analysis of variance (ANOVA) with Bonferroni post-hoc test and receiver operating characteristic (ROC) curves were used for statistical analysis. All quantitative parameters except leakage volume (Ve), as well as qualitative curves (types I and III) and semi-quantitative curves (types I and III), provided significant differences (P<0.05) between benign and malignant lesions. Kinetic parameters, particularly the volume transfer coefficient (Ktrans), provided a significant difference (P<0.05) between all grades except grade II vs. III. The hemodynamic parameter relative leakage-corrected breast blood volume (rBBVcorr) provided a statistically significant difference (P<0.05) between all grades. It also provided the highest sensitivity and specificity among all parameters in differentiating between different grades of malignant breast lesions. Quantitative parameters, particularly rBBVcorr and Ktrans, provided similar sensitivity and specificity in differentiating benign from malignant breast lesions for this cohort. Moreover, rBBVcorr provided better differentiation between different grades of malignant breast lesions among all the parameters. Copyright © 2018. Published by Elsevier Masson SAS.

  12. Minimizing energy dissipation of matrix multiplication kernel on Virtex-II

    NASA Astrophysics Data System (ADS)

    Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook

    2002-07-01

    In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total (system-wide) energy dissipation. Then, we explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.

  13. Consistent Long-Time Series of GPS Satellite Antenna Phase Center Corrections

    NASA Astrophysics Data System (ADS)

    Steigenberger, P.; Schmid, R.; Rothacher, M.

    2004-12-01

    The current IGS processing strategy disregards satellite antenna phase center variations (pcvs) depending on the nadir angle and applies block-specific phase center offsets only. However, the transition from relative to absolute receiver antenna corrections presently under discussion necessitates the consideration of satellite antenna pcvs. Moreover, studies of several groups have shown that the offsets are not homogeneous within a satellite block. Manufacturer specifications seem to confirm this assumption. In order to get best possible antenna corrections, consistent ten-year time series (1994-2004) of satellite-specific pcvs and offsets were generated. This challenging effort became possible as part of the reprocessing of a global GPS network currently performed by the Technical Universities of Munich and Dresden. The data of about 160 stations since the official start of the IGS in 1994 have been reprocessed, as today's GPS time series are mostly inhomogeneous and inconsistent due to continuous improvements in the processing strategies and modeling of global GPS solutions. An analysis of the signals contained in the time series of the phase center offsets demonstrates amplitudes on the decimeter level, at least one order of magnitude worse than the desired accuracy. The periods partly arise from the GPS orbit configuration, as the orientation of the orbit planes with regard to the inertial system repeats after about 350 days due to the rotation of the ascending nodes. In addition, the rms values of the X- and Y-offsets show a high correlation with the angle between the orbit plane and the direction to the sun. The time series of the pcvs mainly point at the correlation with the global terrestrial scale. Solutions with relative and absolute phase center corrections, with block- and satellite-specific satellite antenna corrections demonstrate the effect of this parameter group on other global GPS parameters such as the terrestrial scale, station velocities, the geocenter position or the tropospheric delays. Thus, deeper insight into the so-called `Bermuda triangle' of several highly correlated parameters is given.

  14. A Time-Space Domain Information Fusion Method for Specific Emitter Identification Based on Dempster-Shafer Evidence Theory.

    PubMed

    Jiang, Wen; Cao, Ying; Yang, Lin; He, Zichang

    2017-08-28

    Specific emitter identification plays an important role in contemporary military affairs. However, most of the existing specific emitter identification methods have not taken into account the processing of uncertain information. Therefore, this paper proposes a time-space domain information fusion method based on Dempster-Shafer evidence theory, which has the ability to deal with uncertain information in the process of specific emitter identification. In this paper, radars each generate a group of evidence based on the information they obtain, and our main task is to fuse the multiple groups of evidence to get a reasonable result. Within the framework of a recursive centralized fusion model, the proposed method incorporates a correlation coefficient, which measures the relevance between bodies of evidence, and a quantum mechanical approach, which is based on the parameters of the radar itself. The simulation results of an illustrative example demonstrate that the proposed method can effectively deal with uncertain information and reach a reasonable recognition result.
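
    The core fusion step rests on Dempster's rule of combination. A minimal sketch for two bodies of evidence over a hypothetical frame of emitter types follows; the paper's correlation-coefficient weighting and quantum mechanical step are not reproduced here.

      from itertools import product

      def dempster_combine(m1, m2):
          # Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule;
          # 'conflict' is the mass K assigned to contradictory intersections.
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb
          if conflict >= 1.0:
              raise ValueError("total conflict; Dempster's rule is undefined")
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      # Hypothetical evidence from two radars about emitters {E1, E2, E3}
      theta = frozenset({"E1", "E2", "E3"})
      m1 = {frozenset({"E1"}): 0.6, frozenset({"E2"}): 0.3, theta: 0.1}
      m2 = {frozenset({"E1"}): 0.5, frozenset({"E3"}): 0.3, theta: 0.2}
      print(dempster_combine(m1, m2))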

  15. Age and gender specific biokinetic model for strontium in humans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shagina, N. B.; Tolstykh, E. I.; Degteva, M. O.

    A biokinetic model for strontium in humans is necessary for quantification of internal doses due to strontium radioisotopes. The ICRP-recommended biokinetic model for strontium has limitations for use in a population study, because it is not gender specific and does not cover all age ranges. The extensive Techa River data set on 90Sr in humans (tens of thousands of measurements) is a unique source of data on long-term strontium retention for men and women of all ages at intake. These, as well as published data, were used for evaluation of age- and gender-specific parameters for a new compartment biokinetic model for strontium (Sr-AGe model). The Sr-AGe model has a similar structure to the ICRP model for the alkaline earth elements. The following parameters were mainly reevaluated: gastro-intestinal absorption and parameters related to the processes of bone formation and resorption defining calcium and strontium transfers in skeletal compartments. The Sr-AGe model satisfactorily describes available data sets on strontium retention for different kinds of intake (dietary and intravenous) at different ages (0–80 years old) and demonstrates good agreement with data sets for different ethnic groups. The Sr-AGe model can be used for dose assessment in epidemiological studies of general population exposed to ingested strontium radioisotopes.

  16. Bimanual cross-talk during reaching movements is primarily related to response selection, not the specification of motor parameters

    NASA Technical Reports Server (NTRS)

    Hazeltine, Eliot; Diedrichsen, Joern; Kennerley, Steven W.; Ivry, Richard B.

    2003-01-01

    Simultaneous reaching movements made with the two hands can show a considerable increase in reaction time (RT) when they differ in terms of direction or extent, compared to when the movements involve the same direction and extent. This cost has been attributed to cross-talk in the specification of the motor parameters for the two hands. However, a recent study [Diedrichsen, Hazeltine, Kennerley, & Ivry, (2001). Psychological Science, 12, 493-498] indicates that when reaching movements are cued by the onset of the target endpoint, no compatibility effects are observed. To determine why directly cued movements are immune from interference, we varied the stimulus onset asynchrony for the two movements and used different combinations of directly cued and symbolically cued movements. In two experiments, compatibility effects were only observed when both movements were symbolically cued. No difference was found between compatible and incompatible movements when both movements were directly cued or when one was directly cued and the other was symbolically cued. These results indicate that interference is not related to the specification of movement parameters but instead emerges from processes associated with response selection. Moreover, the data suggest that cross-talk, when present, primarily shortens the RT of the second movement on compatible trials rather than lengthening this RT on incompatible trials.

  17. The Development of a Microbial Challenge Test with Acholeplasma laidlawii To Rate Mycoplasma-Retentive Filters by Filter Manufacturers.

    PubMed

    Folmsbee, Martha; Lentine, Kerry Roche; Wright, Christine; Haake, Gerhard; Mcburnie, Leesa; Ashtekar, Dilip; Beck, Brian; Hutchison, Nick; Okhio-Seaman, Laura; Potts, Barbara; Pawar, Vinayak; Windsor, Helena

    2014-01-01

    Mycoplasma are bacteria that can penetrate 0.2 and 0.22 μm rated sterilizing-grade filters and even some 0.1 μm rated filters. Primary applications for mycoplasma filtration include large scale mammalian and bacterial cell culture media and serum filtration. The Parenteral Drug Association recognized the absence of standard industry test parameters for testing and classifying 0.1 μm rated filters for mycoplasma clearance and formed a task force to formulate consensus test parameters. The task force established some test parameters by common agreement, based upon general industry practices, without the need for additional testing. However, the culture medium and incubation conditions for generating test mycoplasma cells varied from filter company to filter company, which was recognized as a serious gap by the task force. Standardization of the culture medium and incubation conditions required collaborative testing in both commercial filter company laboratories and in an independent laboratory (Table I). The use of consensus test parameters will facilitate the ultimate cross-industry goal of standardization of 0.1 μm filter claims for mycoplasma clearance. However, it is still important to recognize that filter performance will depend on the actual conditions of use. Therefore end users should consider, using a risk-based approach, whether process-specific evaluation of filter performance may be warranted for their application. Mycoplasma are small bacteria that have the ability to penetrate sterilizing-grade filters. Filtration of large-scale mammalian and bacterial cell culture media is an example of an industry process where effective filtration of mycoplasma is required. The Parenteral Drug Association recognized the absence of industry standard test parameters for evaluating mycoplasma clearance filters by filter manufacturers and formed a task force to formulate such a consensus among manufacturers. The use of standardized test parameters by filter manufacturers, including the preparation of the culture broth, will facilitate the end user's evaluation of the mycoplasma clearance claims provided by filter vendors. However, it is still important to recognize that filter performance will depend on the actual conditions of use; therefore end users should consider, using a risk-based approach, whether process-specific evaluation of filter performance may be warranted for their application. © PDA, Inc. 2014.

  18. Uncertainty analysis as essential step in the establishment of the dynamic Design Space of primary drying during freeze-drying.

    PubMed

    Mortier, Séverine Thérèse F C; Van Bockstal, Pieter-Jan; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2016-06-01

    Large molecules, such as biopharmaceuticals, are considered the key driver of growth for the pharmaceutical industry. Freeze-drying is the preferred way to stabilise these products when needed. However, it is an expensive, inefficient, time- and energy-consuming process. During freeze-drying, there are only two main process variables to be set, i.e. the shelf temperature and the chamber pressure, preferably in a dynamic way. This manuscript focuses on the essential use of uncertainty analysis for the determination and experimental verification of the dynamic primary drying Design Space for pharmaceutical freeze-drying. Traditionally, the chamber pressure and shelf temperature are kept constant during primary drying, leading to less optimal process conditions. In this paper it is demonstrated how a mechanistic model of the primary drying step gives the opportunity to determine the optimal dynamic values for both process variables during processing, resulting in a dynamic Design Space with a well-known risk of failure. This allows the primary drying process step to be run as time-efficiently as possible, thereby guaranteeing that the temperature at the sublimation front does not exceed the collapse temperature. The Design Space is the multidimensional combination and interaction of input variables and process parameters leading to the expected product specifications with a controlled (i.e., high) probability. Therefore, inclusion of parameter uncertainty is an essential part of the definition of the Design Space, although it is often neglected. To quantitatively assess the inherent uncertainty on the parameters of the mechanistic model, an uncertainty analysis was performed to establish the borders of the dynamic Design Space, i.e. a time-varying shelf temperature and chamber pressure, associated with a specific risk of failure. A risk of failure acceptance level of 0.01%, i.e. a 'zero-failure' situation, results in an increased primary drying process time compared to the deterministic dynamic Design Space; however, the risk of failure is under control. Experimental verification revealed that only a risk of failure acceptance level of 0.01% yielded a guaranteed zero-defect quality end-product. The computed process settings with a risk of failure acceptance level of 0.01% resulted in a decrease of more than half of the primary drying time in comparison with a regular, conservative cycle with fixed settings. Copyright © 2016. Published by Elsevier B.V.
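
    The uncertainty analysis itself amounts to propagating parameter uncertainty through the primary drying model and reading off the probability that the sublimation-front temperature exceeds the collapse temperature. The sketch below does this by Monte Carlo with a deliberately crude linear surrogate in place of the mechanistic model; all distributions and coefficients are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000

      # Hypothetical uncertain parameters (the paper uses a mechanistic model instead)
      Kv = rng.normal(20.0, 2.0, n)       # vial heat transfer coefficient, W/m^2/K
      Rp = rng.normal(1.2e5, 1.5e4, n)    # dried-layer resistance, Pa*s/m
      T_shelf = -5.0                      # candidate shelf temperature, deg C
      T_collapse = -31.0                  # collapse temperature, deg C

      # Toy surrogate: a warmer shelf, higher Kv and lower Rp all raise the
      # sublimation-front temperature (coefficients are assumptions)
      T_front = T_shelf - 28.0 + 0.25 * (Kv - 20.0) - 2.0e-5 * (Rp - 1.2e5)

      risk = np.mean(T_front > T_collapse)
      print(f"estimated risk of exceeding the collapse temperature: {risk:.4%}")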

  19. Assessing locomotor skills development in childhood using wearable inertial sensor devices: the running paradigm.

    PubMed

    Masci, Ilaria; Vannozzi, Giuseppe; Bergamini, Elena; Pesce, Caterina; Getchell, Nancy; Cappozzo, Aurelio

    2013-04-01

    Objective quantitative evaluation of motor skill development is of increasing importance for carefully guiding physical exercise programs in childhood. Running is a fundamental motor skill humans adopt to accomplish locomotion, and it is linked to physical activity levels, although its assessment is traditionally carried out using qualitative evaluation tests. The present study aimed at investigating the feasibility of using inertial sensors to quantify developmental differences in the running pattern of young children. Qualitative and quantitative assessment tools were adopted to identify a skill-sensitive set of biomechanical parameters for running and to further our understanding of the factors that determine progression to skilled running performance. Running performances of 54 children between the ages of 2 and 12 years were submitted to both qualitative and quantitative analysis, the former using sequences of developmental level, the latter estimating temporal and kinematic parameters from inertial sensor measurements. Discriminant analysis with running developmental level as the dependent variable identified a set of temporal and kinematic parameters, among those obtained with the sensor, that best classified children into the qualitative developmental levels (accuracy higher than 67%). Multivariate analysis of variance with the quantitative parameters as dependent variables identified whether and which specific parameters or parameter subsets were differentially sensitive to specific transitions between contiguous developmental levels. The findings showed that different sets of temporal and kinematic parameters are able to tap all steps of the transitional process in running skill described through qualitative observation and can prospectively be used for applied diagnostic and sport training purposes. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. A time series model: First-order integer-valued autoregressive (INAR(1))

    NASA Astrophysics Data System (ADS)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. The first-order Integer-valued AutoRegressive model (INAR(1)) is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the process value one period before. The parameter of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows the specification of AR(1). Forecasting in INAR(1) uses a median or Bayesian forecasting methodology. The median forecasting methodology obtains the least integer s for which the cumulative distribution function (CDF) up to s is greater than or equal to 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s for which the CDF up to s is greater than or equal to u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, North Jakarta, from January 2008 to April 2016.
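
    A minimal sketch of the two building blocks named above, binomial thinning and the median forecast, follows; parameter values are hypothetical and the innovations are taken as Poisson, a common choice for INAR(1).

      import numpy as np
      from scipy.stats import binom, poisson

      rng = np.random.default_rng(42)

      def simulate_inar1(alpha, lam, n, x0=5):
          # X_t = alpha o X_{t-1} + eps_t: binomial thinning plus Poisson innovations
          x = np.empty(n, dtype=int)
          x[0] = x0
          for t in range(1, n):
              x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
          return x

      def median_forecast(x_last, alpha, lam, max_s=500):
          # Least integer s with P(X_{t+1} <= s | X_t = x_last) >= 0.5
          for s in range(max_s + 1):
              cdf = sum(binom.pmf(k, x_last, alpha) * poisson.cdf(s - k, lam)
                        for k in range(min(s, x_last) + 1))
              if cdf >= 0.5:
                  return s
          return max_s

      series = simulate_inar1(alpha=0.6, lam=2.0, n=100)
      print("last value:", series[-1],
            "| median one-step forecast:", median_forecast(series[-1], 0.6, 2.0))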

  1. Reducing the Volume of NASA Earth-Science Data

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre

    2010-01-01

    A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.
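
    The rate-distortion trade-off the program explores can be illustrated with a toy one-dimensional entropy-constrained quantizer: each point is assigned to the codeword minimizing squared error plus a rate penalty, and sweeping the penalty weight traces the kind of Pareto front the DE step searches. Everything below (data, codebook size, weights) is hypothetical, and the real ECVQ/DE combination is considerably more elaborate.

      import numpy as np

      rng = np.random.default_rng(0)
      data = np.concatenate([rng.normal(-3, 0.5, 300), rng.normal(0, 1.0, 300),
                             rng.normal(4, 0.7, 300)])

      def ecvq(x, k=8, lam=0.5, iters=50):
          # Toy 1-D entropy-constrained VQ: assignment cost is squared error
          # plus lam times the codeword length -log2(p_c)
          c = rng.choice(x, k, replace=False)
          p = np.full(k, 1.0 / k)
          for _ in range(iters):
              cost = (x[:, None] - c[None, :]) ** 2 - lam * np.log2(p + 1e-12)[None, :]
              a = np.argmin(cost, axis=1)
              for j in range(k):
                  if np.any(a == j):
                      c[j] = x[a == j].mean()
              p = np.bincount(a, minlength=k) / len(x)
          distortion = np.mean((x - c[a]) ** 2)
          rate = -np.sum(p[p > 0] * np.log2(p[p > 0]))
          return distortion, rate

      for lam in (0.01, 0.3, 2.0):
          d, r = ecvq(data, lam=lam)
          print(f"lam={lam}: distortion={d:.3f}, rate={r:.2f} bits")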

  2. Coupling heat and chemical tracer experiments for estimating heat transfer parameters in shallow alluvial aquifers.

    PubMed

    Wildemeersch, S; Jamin, P; Orban, P; Hermans, T; Klepikova, M; Nguyen, F; Brouyère, S; Dassargues, A

    2014-11-15

    Geothermal energy systems, closed or open, are increasingly considered for heating and/or cooling buildings. The efficiency of such systems depends on the thermal properties of the subsurface. Therefore, feasibility and impact studies performed prior to their installation should include a field characterization of thermal properties and a heat transfer model using parameter values measured in situ. However, there is a lack of in situ experiments and methodology for performing such a field characterization, especially for open systems. This study presents an in situ experiment designed for estimating heat transfer parameters in shallow alluvial aquifers, with focus on the specific heat capacity. The experiment consists of simultaneously injecting hot water and a chemical tracer into the aquifer and monitoring the evolution of groundwater temperature and concentration in the recovery well (and possibly in other piezometers located down gradient). Temperature and concentrations are then used for estimating the specific heat capacity. The first method for estimating this parameter is based on modeling the chemical tracer and temperature breakthrough curves at the recovery well in series. The second method is based on an energy balance. The values of specific heat capacity estimated by the two methods (2.30 and 2.54 MJ/m³/K) for the experimental site in the alluvial aquifer of the Meuse River (Belgium) are almost identical and consistent with values found in the literature. Temperature breakthrough curves in other piezometers are not required for estimating the specific heat capacity. However, they highlight that heat transfer in the alluvial aquifer of the Meuse River is complex and heterogeneous, with different dominant processes depending on depth, leading to significant vertical heat exchange between the upper and lower parts of the aquifer. Furthermore, these temperature breakthrough curves could be included in the calibration of a complex heat transfer model for estimating the entire set of heat transfer parameters and their spatial distribution by inverse modeling. Copyright © 2014 Elsevier B.V. All rights reserved.
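
    One textbook way to combine the two breakthrough curves is through the thermal retardation factor: the heat signal lags the conservative tracer by R = Ca/(θ·Cw), so the aquifer's volumetric heat capacity follows from the ratio of peak arrival times. The sketch below uses this simplification with hypothetical arrival times and porosity; it is not necessarily the exact formulation used in the paper.

      # Aquifer volumetric heat capacity from thermal retardation (simplified)
      t_tracer_peak = 12.0   # h, peak arrival of the chemical tracer (hypothetical)
      t_heat_peak = 30.0     # h, peak arrival of the temperature signal (hypothetical)
      porosity = 0.25        # effective porosity (assumption)
      C_water = 4.18e6       # volumetric heat capacity of water, J/m^3/K

      R_thermal = t_heat_peak / t_tracer_peak        # thermal retardation factor
      C_aquifer = R_thermal * porosity * C_water     # bulk volumetric heat capacity
      print(f"C_aquifer ≈ {C_aquifer / 1e6:.2f} MJ/m^3/K")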

  3. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.

  4. Mathematical modeling of a continuous alcoholic fermentation process in a two-stage tower reactor cascade with flocculating yeast recycle.

    PubMed

    de Oliveira, Samuel Conceição; de Castro, Heizir Ferreira; Visconti, Alexandre Eliseu Stourdze; Giudici, Reinaldo

    2015-03-01

    Experiments of continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D1 = D2 = 0.27–0.95 h⁻¹), constant recycle ratio (α = FR/F = 4.0) and a sugar concentration in the feed stream (S0) around 150 g/L. The data obtained in these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor; the kinetics of cell growth and product formation take into account the limitation by substrate and the inhibition by ethanol and biomass, as well as the substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for a single-stage system were observed for the maximum specific growth rate, for the inhibition constants of cell growth and for the specific rate of substrate consumption for cell maintenance. The mathematical models were validated and used to simulate alternative operating conditions as well as to analyze the performance of the two-stage process against that of the single-stage process.
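
    The model structure described (perfectly mixed stages, substrate limitation, ethanol and biomass inhibition, maintenance consumption) can be sketched for a single stage as a small ODE system; the kinetic form and all parameter values below are generic assumptions, not the paper's fitted values.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Hypothetical kinetic and operating parameters (not the fitted values)
      mu_max, Ks = 0.4, 2.0            # 1/h, g/L
      P_max, Yxs, Ypx = 90.0, 0.05, 4.5
      ms = 0.2                         # maintenance, g substrate/(g cell*h)
      D, S0 = 0.3, 150.0               # dilution rate (1/h), feed sugar (g/L)

      def cstr(t, y):
          X, S, P = y
          mu = mu_max * S / (Ks + S) * max(0.0, 1.0 - P / P_max)  # Monod + product inhibition
          qs = mu / Yxs + ms            # substrate uptake including maintenance
          return [(mu - D) * X,
                  D * (S0 - S) - qs * X,
                  Ypx * mu * X - D * P]

      sol = solve_ivp(cstr, (0.0, 100.0), [1.0, 150.0, 0.0])
      X, S, P = sol.y[:, -1]
      print(f"approx. steady state: X={X:.1f} g/L, S={S:.1f} g/L, P={P:.1f} g/L")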

  5. Robust parameter design for automatically controlled systems and nanostructure synthesis

    NASA Astrophysics Data System (ADS)

    Dasgupta, Tirthankar

    2007-12-01

    This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, at large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte-Carlo simulations. The second part of the research deals with the development of an experimental design methodology, tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is proposed for exploring the best process conditions for synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.

  6. eFurniture for home-based frailty detection using artificial neural networks and wireless sensors.

    PubMed

    Chang, Yu-Chuan; Lin, Chung-Chih; Lin, Pei-Hsin; Chen, Chun-Chang; Lee, Ren-Guey; Huang, Jing-Siang; Tsai, Tsai-Hsuan

    2013-02-01

    The purpose of this study is to integrate wireless sensor technologies and artificial neural networks to develop a system that manages personal frailty information automatically. The system consists of five parts: (1) an eScale to measure the subject's reaction time; (2) an eChair to detect slowness in movement, weakness and weight loss; (3) an ePad to measure the subject's balancing ability; (4) an eReach to measure body extension; and (5) a Home-based Information Gateway, which collects all the data and predicts the subject's frailty. Using furniture-based measuring devices to provide home-based measurement means that health checks are not confined to health institutions. We designed two experiments to obtain an optimal frailty prediction model and to test overall system performance: (1) We developed a three-step process to adjust different parameters to obtain an optimized neural identification network, whose parameters include initialization, L.R. dec and L.R. inc. The post-process identification rate increased from 77.85% to 83.22%. (2) We used 149 cases to evaluate the sensitivity and specificity of our frailty prediction algorithm. The sensitivity and specificity of this system are 79.71% and 86.25%, respectively. These results show that our system is a high-specificity prediction tool that can be used to assess frailty. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  7. Emergent Aerospace Designs Using Negotiating Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Deshmukh, Abhijit; Middelkoop, Timothy; Krothapalli, Anjaneyulu; Smith, Charles

    2000-01-01

    This paper presents a distributed design methodology where designs emerge as a result of negotiations between different stakeholders in the process, such as cost, performance, reliability, etc. The proposed methodology uses autonomous agents to represent design decision makers. Each agent influences specific design parameters in order to maximize its utility. Since the design parameters depend on the aggregate demand of all the agents in the system, design agents need to negotiate with others in the market economy in order to reach an acceptable utility value. This paper addresses several interesting research issues related to distributed design architectures. First, we present a flexible framework which facilitates decomposition of the design problem. Second, we present an overview of a market mechanism for generating acceptable design configurations. Finally, we integrate learning mechanisms into the design process to reduce the computational overhead.

  8. Concurrent Image Processing Executive (CIPE). Volume 3: User's guide

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.; Kong, Mih-Seh

    1990-01-01

    CIPE (the Concurrent Image Processing Executive) is both an executive which organizes the parameter inputs for hypercube applications and an environment which provides temporary data workspace and simple real-time function definition facilities for image analysis. CIPE provides two types of user interface. The Command Line Interface (CLI) provides a simple command-driven environment allowing interactive function definition and evaluation of algebraic expressions. The menu interface employs a hierarchical screen-oriented menu system where the user is led through a menu tree to any specific application and then given a formatted panel screen for parameter entry. How to initialize the system through the setup function, how to read data into CIPE symbols, how to manipulate and display data through the use of executive functions, and how to run an application in either user interface mode, are described.

  9. High speed demodulation systems for fiber optic grating sensors

    NASA Technical Reports Server (NTRS)

    Udd, Eric (Inventor); Weisshaar, Andreas (Inventor)

    2002-01-01

    Fiber optic grating sensor demodulation systems are described that offer high speed and multiplexing options for both single and multiple parameter fiber optic grating sensors. To attain very high speeds for single parameter fiber grating sensors ratio techniques are used that allow a series of sensors to be placed in a single fiber while retaining high speed capability. These methods can be extended to multiparameter fiber grating sensors. Optimization of speeds can be obtained by minimizing the number of spectral peaks that must be processed and it is shown that two or three spectral peak measurements may in specific multiparameter applications offer comparable or better performance than processing four spectral peaks. Combining the ratio methods with minimization of peak measurements allows very high speed measurement of such important environmental effects as transverse strain and pressure.

  10. Risk analysis of hematopoietic stem cell transplant process: failure mode, effect, and criticality analysis and hazard analysis critical control point methods integration based on guidelines to good manufacturing practice for medicinal product ANNEX 20 (February 2008).

    PubMed

    Gianassi, S; Bisin, S; Bindi, B; Spitaleri, I; Bambi, F

    2010-01-01

    The collection and handling of hematopoietic stem cells (HSCs) must meet high quality requirements. An integrated Quality Risk Management approach can help to identify and contain potential risks related to HSC production. Risk analysis techniques allow one to "weigh" identified hazards, considering the seriousness of their effects, frequency, and detectability, seeking to prevent the most harmful hazards. The Hazard Analysis Critical Control Point method, recognized as the most appropriate technique to identify risks associated with physical, chemical, and biological hazards for cellular products, consists of classifying finished product specifications and limits of acceptability, identifying all off-specifications, defining activities that can cause them, and finally establishing both a monitoring system for each Critical Control Point and corrective actions for deviations. The severity of possible effects on patients, as well as the occurrence and detectability of critical parameters, are measured on quantitative scales (Risk Priority Number [RPN]). Risk analysis was performed with this technique on the HSC manipulation process at our blood center. The data analysis showed that the hazards with the highest RPN values and the greatest impact on the process were loss of dose and loss of tracking; the technical skills of operators and manual transcription of data were the most critical parameters. Problems related to operator skills are handled by defining targeted training programs, while other critical parameters can be mitigated with the use of continuous control systems. The blood center management software was completed by a labeling system with forms designed to be in compliance with the standards in force and by starting implementation of a cryopreservation management module. Copyright 2010 Elsevier Inc. All rights reserved.
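
    The RPN ranking step is simple enough to show directly: each hazard's severity, occurrence and detectability scores are multiplied and the products sorted. The hazards and scores below are hypothetical placeholders, not the values from this risk analysis.

      # FMECA-style ranking: RPN = severity x occurrence x detectability (1-10 scales)
      hazards = [
          ("loss of dose", 9, 4, 6),
          ("loss of tracking", 8, 3, 7),
          ("manual transcription error", 6, 5, 5),
          ("labeling mix-up", 9, 2, 4),
      ]
      for name, s, o, d in sorted(hazards, key=lambda h: h[1] * h[2] * h[3], reverse=True):
          print(f"{name:28s} RPN = {s * o * d}")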

  11. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

    In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multiple trace gas) image sequences and to provide solutions to the extended aperture problem. In this study sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.
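
    A minimal structure-tensor motion estimate on a synthetic sequence illustrates the principle: spatio-temporal gradient products are averaged over a local window and the velocity follows from a 2x2 linear solve. The synthetic blob, window size and smoothing below are arbitrary choices for the sketch, not the study's configuration.

      import numpy as np
      from scipy.ndimage import gaussian_filter, uniform_filter

      # Synthetic sequence: a Gaussian blob translating by (vy, vx) = (0.5, 1.0) px/frame
      vy_true, vx_true = 0.5, 1.0
      yy, xx = np.mgrid[0:64, 0:64]
      frames = np.stack([
          np.exp(-((yy - 24 - vy_true * t) ** 2 + (xx - 20 - vx_true * t) ** 2) / 40.0)
          for t in range(8)
      ])

      # Spatio-temporal gradients (axis order: t, y, x)
      It, Iy, Ix = np.gradient(gaussian_filter(frames, 1.0))

      # Locally averaged structure-tensor products (5-pixel integration window)
      w = lambda a: uniform_filter(a, size=5)
      Jxx, Jyy, Jxy = w(Ix * Ix), w(Iy * Iy), w(Ix * Iy)
      Jxt, Jyt = w(Ix * It), w(Iy * It)

      # Solve J v = -b at the location with the strongest gradient signal
      idx = np.unravel_index(np.argmax(Jxx + Jyy), Jxx.shape)
      J = np.array([[Jxx[idx], Jxy[idx]], [Jxy[idx], Jyy[idx]]])
      b = np.array([Jxt[idx], Jyt[idx]])
      vx, vy = np.linalg.solve(J, -b)
      print(f"estimated velocity: vx={vx:.2f}, vy={vy:.2f} (true: {vx_true}, {vy_true})")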

  12. SU-E-T-760: Tolerance Design for Site-Specific Range in Proton Patient QA Process Using the Six Sigma Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lah, J; Shin, D; Kim, G

    Purpose: To show how tolerance design and tolerancing approaches can be used to predict and improve the site-specific range in the patient QA process when implementing Six Sigma. Methods: In this study, patient QA plans were selected according to 6 site-treatment groups: head & neck (94 cases), spine (76 cases), lung (89 cases), liver (53 cases), pancreas (55 cases), and prostate (121 cases), treated between 2007 and 2013. We evaluated a Six Sigma model that determines allowable deviations in design parameters and process variables in patient-specific QA; where possible, tolerances may be loosened and then customized if necessary to meet the functional requirements. The Six Sigma problem-solving methodology is known by its DMAIC phases, which stand for: Define a problem or improvement opportunity, Measure process performance, Analyze the process to determine the root causes of poor performance, Improve the process by fixing root causes, Control the improved process to hold the gains. Results: The process capability for patient-specific range QA is 0.65 with a tolerance criterion of only ±1 mm. Our results suggested a tolerance level of ±2–3 mm for prostate and liver cases and ±5 mm for lung cases. We found that customized tolerances between calculated and measured range reduce patient QA plan failures; almost all sites had failure rates less than 1%. The average QA time also improved from 2 h to less than 1 h for all sites, including the planning and conversion process, depth-dose measurement and evaluation. Conclusion: The objective of tolerance design is to achieve optimization beyond that obtained through QA process improvement and statistical analysis, detailing how to implement a Six Sigma-capable design.
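
    The link between a tolerance choice and a process capability index can be sketched directly: given the distribution of calculated-minus-measured range differences, Cpk and the expected failure rate follow from the tolerance limits. The difference data below are hypothetical, not the clinical QA measurements.

      import numpy as np
      from scipy.stats import norm

      # Hypothetical calculated-minus-measured range differences (mm) for one site
      diff = np.array([-0.9, 0.3, 1.2, -0.4, 0.8, -1.5, 0.6, 0.1, -0.7, 1.0])
      mu, sigma = diff.mean(), diff.std(ddof=1)

      def capability(tol_mm):
          # Cpk and the normal-model failure rate for symmetric limits +/- tol_mm
          usl, lsl = tol_mm, -tol_mm
          cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
          fail = norm.sf(usl, mu, sigma) + norm.cdf(lsl, mu, sigma)
          return cpk, fail

      for tol in (1.0, 2.0, 3.0, 5.0):
          cpk, fail = capability(tol)
          print(f"tolerance ±{tol} mm: Cpk = {cpk:.2f}, expected failure rate = {fail:.2%}")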

  13. A theoretical model to determine the capacity performance of shape-specific electrodes

    NASA Astrophysics Data System (ADS)

    Yue, Yuan; Liang, Hong

    2018-06-01

    A theory is proposed to explain and predict the electrochemical processes during the reaction between lithium ions and electrode materials. In the model, the reaction proceeds in two steps: surface adsorption and diffusion of lithium ions. Surface adsorption is an instantaneous process in which lithium ions adsorb onto the surface sites of active materials. The diffusion of lithium ions into particles is determined by the charge-discharge condition. A formula to determine the maximum specific capacity of active materials at different charging rates (C-rates) is derived. The maximum specific capacity is correlated with characteristic parameters of the materials and of cycling, such as size, aspect ratio, surface area, and C-rate. Analysis indicates that larger particle sizes or greater aspect ratios of active materials and faster C-rates reduce the maximum specific capacity. This suggests that reducing the particle size of active materials and slowing the charge-discharge speed can enhance the electrochemical performance of a battery cell. Furthermore, the model is validated against published experimental results. This model brings new understanding to the quantification of electrochemical kinetics and capacity performance. It enables the development of design strategies for novel electrodes and future generations of energy storage devices.

  14. TMS uncovers details about sub-regional language-specific processing networks in early bilinguals.

    PubMed

    Hämäläinen, Sini; Mäkelä, Niko; Sairanen, Viljami; Lehtonen, Minna; Kujala, Teija; Leminen, Alina

    2018-05-01

    Despite numerous functional neuroimaging and intraoperative electrical cortical mapping studies aimed at investigating the cortical organisation of native (L1) and second (L2) language processing, the neural underpinnings of bilingualism remain elusive. We investigated whether the neural network engaged in speech production over the bilateral posterior inferior frontal gyrus (pIFG) is the same (i.e., shared) or different (i.e., language-specific) for the two languages of bilingual speakers. Navigated transcranial magnetic stimulation (TMS) was applied over the left and right pIFG while early simultaneous bilinguals performed a picture-naming task in their native languages. An ex-Gaussian distribution was fitted to the naming latencies, and the resulting parameters were compared between languages and across stimulation conditions. The results showed that although naming performance in general was highly comparable between the languages, TMS produced a language-specific effect when the pulses were delivered to the left pIFG at 200 ms poststimulus. We argue that this result causally demonstrates, for the first time, that even within common language-processing areas, there are distinct language-specific neural populations for the different languages in early simultaneous bilinguals. Copyright © 2018 Elsevier Inc. All rights reserved.
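    The ex-Gaussian fit referred to above decomposes each latency distribution into a Gaussian component (mu, sigma) and an exponential tail (tau). A minimal sketch using scipy's exponnorm parameterization (shape K, loc, scale, with tau = K * scale); the latencies are synthetic placeholders:

      # Minimal sketch of an ex-Gaussian fit to naming latencies, assuming
      # scipy's exponnorm parameterization, from which the conventional
      # ex-Gaussian parameters follow as mu = loc, sigma = scale, tau = K*scale.
      import numpy as np
      from scipy.stats import exponnorm

      rng = np.random.default_rng(1)
      # Hypothetical reaction times: Gaussian component plus exponential tail [ms].
      latencies = rng.normal(600, 60, size=300) + rng.exponential(120, size=300)

      K, loc, scale = exponnorm.fit(latencies)
      mu, sigma, tau = loc, scale, K * scale
      print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")
      # mu/sigma describe the Gaussian portion; a TMS effect confined to tau
      # would indicate a selective change in the slow tail of the distribution.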

  15. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
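    The core idea, dividing one task's collective-operation traffic among several endpoints, can be illustrated with a toy in-process model in which each "endpoint" is a worker thread transferring its slice of a buffer. This is a sketch of the partitioning concept only and makes no claim about the PAMI API:

      # Minimal sketch: split one collective transfer across several endpoints.
      import threading
      import numpy as np

      def endpoint_send(slice_, out, offset):
          out[offset:offset + len(slice_)] = slice_   # stand-in for a network transfer

      data = np.arange(1_000_000)
      received = np.empty_like(data)
      n_endpoints = 4
      chunks = np.array_split(np.arange(len(data)), n_endpoints)  # contiguous index ranges

      threads = []
      for idx in chunks:
          t = threading.Thread(target=endpoint_send,
                               args=(data[idx[0]:idx[-1] + 1], received, idx[0]))
          t.start()
          threads.append(t)
      for t in threads:
          t.join()
      assert np.array_equal(data, received)           # the divided transfer reassembles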

  16. Conceptual design and structural analysis for an 8.4-m telescope

    NASA Astrophysics Data System (ADS)

    Mendoza, Manuel; Farah, Alejandro; Ruiz Schneider, Elfego

    2004-09-01

    This paper describes the conceptual design of the optics support structures of a telescope with an 8.4 m primary mirror, the same size as a Large Binocular Telescope (LBT) primary mirror. The design goal is to achieve a structure that supports the primary and secondary mirrors and keeps them joined as rigidly as possible. To this end, an optimization over several models was performed. The iterative design process included specifications development, concept generation, and evaluation, and used Finite Element Analysis (FEA) as well as other analytical calculations. A Quality Function Deployment (QFD) matrix was used to obtain the telescope tube and spider specifications. Eight spider and eleven tube geometric concepts were proposed and compared in decision matrices using performance indicators and parameters. The tubes and spiders underwent an iterative optimization process, after which the best tube and spider concepts were assembled together, and all assemblies were compared and ranked according to their performance.

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  18. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases, in which different relaxation models are coupled into the inversion scheme, and then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to three orders of magnitude of variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm whose initial damping parameter and iterative adjustment factor are tuned once and then held fixed across all cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, jump-out and jump-back-in steps are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes.
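    A minimal sketch of the same ingredients, bounded least-squares fitting of multi-frequency data to a relaxation model, using the Pelton/Cole-Cole resistivity form rho(w) = rho0*(1 - m*(1 - 1/(1 + (i*w*tau)^c))) and scipy's trust-region reflective solver as a stand-in for the paper's bounded Levenberg scheme; the data, bounds, and starting point are illustrative:

      # Minimal sketch: bounded inversion of synthetic multi-frequency data
      # against the Pelton/Cole-Cole resistivity model.
      import numpy as np
      from scipy.optimize import least_squares

      def cole_cole(params, w):
          rho0, m, tau, c = params
          return rho0 * (1 - m * (1 - 1 / (1 + (1j * w * tau) ** c)))

      def residuals(params, w, data):
          model = cole_cole(params, w)
          return np.concatenate([model.real - data.real, model.imag - data.imag])

      w = 2 * np.pi * np.logspace(-2, 4, 30)              # angular frequencies
      true = np.array([100.0, 0.3, 0.01, 0.5])            # rho0, m, tau, c
      data = cole_cole(true, w)

      x0 = np.array([50.0, 0.1, 1.0, 0.3])                # deliberately far start (tau off by 2 decades)
      lb = [1e-2, 0.0, 1e-6, 0.0]                         # physical bounds
      ub = [1e4, 1.0, 1e2, 1.0]
      sol = least_squares(residuals, x0, bounds=(lb, ub), args=(w, data), method="trf")
      print(sol.x)                                        # compare against `true`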

  19. A real-time multi-channel monitoring system for stem cell culture process.

    PubMed

    Xicai Yue; Drakakis, E M; Lim, M; Radomska, A; Hua Ye; Mantalaris, A; Panoskaltsis, N; Cass, A

    2008-06-01

    A novel multi-parametric physiological measurement system with up to 128 channels, suitable for monitoring hematopoietic stem cell culture processes and cell cultures in general, is presented in this paper. The system aims to measure in real time the most important physical and chemical culture parameters of hematopoietic stem cells, including physicochemical parameters, nutrients, and metabolites, over a long-term culture process. The overarching scope of this research effort is to control and optimize the whole bioprocess by means of the acquisition of real-time quantitative physiological information from the culture. The system is designed in a modular manner. Each hardware module can operate as an independent, gain-programmable, level-shift-adjustable, 16-channel data acquisition system specific to a sensor type. Up to eight such data acquisition modules can be combined and connected to the host PC to realize the whole system hardware. The control of data acquisition and the subsequent management of data are performed by the system's software, which is coded in LabVIEW. Preliminary experimental results presented here show that the system not only interfaces with various types of sensors, allowing different types of culture parameters to be monitored, but also captures dynamic variations of culture parameters by means of real-time multi-channel measurements, thus providing additional information on both temporal and spatial profiles of these parameters within a bioreactor. The system is by no means constrained to the hematopoietic stem cell culture field; it is suitable for cell growth monitoring applications in general.

  20. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve this goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on the quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
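    A minimal sketch of the automation idea, choosing a TMO parameter by maximizing an objective score rather than by subjective adjustment. Here a toy gamma-style operator and image entropy stand in for a real TMO and for the paper's contrast-reversal/naturalness objective:

      # Minimal sketch of objective TMO parameter selection under stated
      # stand-in assumptions (toy operator, entropy objective, synthetic image).
      import numpy as np
      from scipy.optimize import minimize_scalar

      def tone_map(hdr, gamma):
          """Toy global operator: normalize and gamma-compress to [0, 1]."""
          x = hdr / hdr.max()
          return x ** gamma

      def entropy(img, bins=256):
          hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
          p = hist[hist > 0] / hist[hist > 0].sum()
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(2)
      hdr = rng.lognormal(mean=0.0, sigma=2.0, size=(256, 256))   # synthetic HDR image

      # Pick the gamma that maximizes the objective (here, tonal information).
      res = minimize_scalar(lambda g: -entropy(tone_map(hdr, g)),
                            bounds=(0.1, 1.0), method="bounded")
      print(f"selected gamma = {res.x:.3f}")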

  1. Exploiting Auto-Collimation for Real-Time Onboard Monitoring of Space Optical Camera Geometric Parameters

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, H.; Liu, D.; Miu, Y.

    2018-05-01

    Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Using these devices, changes in the geometric parameters are elegantly converted into changes in the spot image positions, and the variation of the geometric parameters can be derived by extracting and processing the spot images. An experimental platform is then set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle for real-time onboard monitoring.
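    The spot-to-angle conversion rests on the classical auto-collimation relation: a mirror tilt alpha deflects the reflected beam by 2*alpha, which a lens of focal length f maps to a focal-plane displacement d = f*tan(2*alpha). A minimal sketch with illustrative numbers:

      # Minimal sketch of the auto-collimation conversion from spot displacement
      # to angular change; focal length, pixel pitch, and shift are hypothetical.
      import numpy as np

      def tilt_from_spot(d_mm, f_mm):
          """Recover the tilt angle (radians) from the measured spot displacement."""
          return 0.5 * np.arctan(d_mm / f_mm)

      f = 500.0                       # collimator focal length [mm]
      pixel_pitch = 0.005             # CCD pixel size [mm] (5 um, hypothetical)
      spot_shift_px = 2.4             # measured centroid shift [pixels]

      alpha = tilt_from_spot(spot_shift_px * pixel_pitch, f)
      print(f"tilt = {np.degrees(alpha) * 3600:.2f} arcsec")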

  2. Estimation of hydraulic parameters from an unconfined aquifer test conducted in a glacial outwash deposit, Cape Cod, Massachusetts

    USGS Publications Warehouse

    Moench, A.F.; Garabedian, Stephen P.; LeBlanc, Denis R.

    2000-01-01

    An aquifer test conducted in a sand and gravel, glacial outwash deposit on Cape Cod, Massachusetts, was analyzed by means of a model for flow to a partially penetrating well in a homogeneous, anisotropic unconfined aquifer. The model is designed to account for all significant mechanisms expected to influence drawdown in observation piezometers and in the pumped well. In addition to the usual fluid-flow and storage processes, the model includes effects of storage in the pumped well, storage in observation piezometers, effects of skin at the pumped-well screen, and effects of drainage from the zone above the water table. The aquifer was pumped at a rate of 320 gallons per minute for 72 hours, and drawdown measurements were made in the pumped well and in 20 piezometers located at various distances from the pumped well and depths below the land surface. To facilitate the analysis, an automatic parameter estimation algorithm was used to obtain the relevant unconfined aquifer parameters, including the saturated thickness and a set of empirical parameters that relate to gradual drainage from the unsaturated zone. Drainage from the unsaturated zone is treated in this paper as a finite series of exponential terms, each of which contains one empirical parameter to be determined. It was necessary to account for effects of gradual drainage from the unsaturated zone to obtain satisfactory agreement between measured and simulated drawdown, particularly in piezometers located near the water table. The commonly used assumption of instantaneous drainage from the unsaturated zone gives rise to large discrepancies between measured and predicted drawdown in the intermediate-time range and can result in inaccurate estimates of aquifer parameters when automatic parameter estimation procedures are used. The values of the estimated hydraulic parameters are consistent with estimates from prior studies and with what is known about the aquifer at the site. Effects of heterogeneity at the site were small, as measured drawdowns in all piezometers and wells were very close to the simulated values for a homogeneous porous medium. The estimated values are: specific yield, 0.26; saturated thickness, 170 feet; horizontal hydraulic conductivity, 0.23 feet per minute; vertical hydraulic conductivity, 0.14 feet per minute; and specific storage, 1.3×10⁻⁵ per foot. It was found that drawdown in only a few piezometers strategically located at depth near the pumped well yielded parameter estimates close to the estimates obtained for the entire data set analyzed simultaneously. If the influence of gradual drainage from the unsaturated zone is not taken into account, specific yield is significantly underestimated even in these deep-seated piezometers. This helps to explain the low values of specific yield often reported for granular aquifers in the literature. Whether the entire data set or only the drawdown in selected deep-seated piezometers was used, it was found unnecessary to conduct the test for the full 72 hours to obtain accurate estimates of the hydraulic parameters; for some piezometer groups, practically identical results would be obtained from an aquifer test conducted for only 8 hours. Drawdowns measured in the pumped well and in piezometers at distant locations were diagnostic only of aquifer transmissivity.

  3. lumpR 2.0.0: an R package facilitating landscape discretisation for hillslope-based hydrological models

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2017-08-01

    The characteristics of a landscape are essential factors for hydrological processes. Therefore, an adequate representation of a catchment's landscape in hydrological models is vital. However, many such models exist, differing, amongst other things, in spatial concept and discretisation. The latter constitutes an essential pre-processing step, for which many different algorithms along with numerous software implementations exist. In that context, existing solutions are often model-specific or commercial, or depend on commercial back-end software, and allow only limited workflow automation or none at all. Consequently, a new package for the scientific software and scripting environment R, called lumpR, was developed. lumpR employs an algorithm for hillslope-based landscape discretisation directed at large-scale application via a hierarchical multi-scale approach. The package addresses the existing limitations as it is free and open source, easily extendible to other hydrological models, and its workflow can be fully automated. Moreover, it is user-friendly, as the direct coupling to a GIS allows for immediate visual inspection and manual adjustment. Sufficient control is furthermore retained via parameter specification and the option to include expert knowledge. Conversely, completely automatic operation also allows for extensive analysis of aspects related to landscape discretisation. In a case study, the application of the package is presented. A sensitivity analysis of the most important discretisation parameters demonstrates its efficient workflow automation. Considering multiple streamflow metrics, the employed model proved reasonably robust to the discretisation parameters. However, the parameters determining the sizes of subbasins and hillslopes proved to be more important than the others, which include the number of representative hillslopes, the number of attributes employed for the lumping algorithm, and the number of sub-discretisations of the representative hillslopes.

  4. Empirical evaluation of cross-site reproducibility in radiomic features for characterizing prostate MRI

    NASA Astrophysics Data System (ADS)

    Chirra, Prathyush; Leo, Patrick; Yim, Michael; Bloch, B. Nicolas; Rastinehad, Ardeshir R.; Purysko, Andrei; Rosen, Mark; Madabhushi, Anant; Viswanath, Satish

    2018-02-01

    The recent advent of radiomics has enabled the development of prognostic and predictive tools which use routine imaging, but a key question that still remains is how reproducible these features may be across multiple sites and scanners. This is especially relevant in the context of MRI data, where signal intensity values lack a tissue-specific, quantitative meaning and depend on acquisition parameters (magnetic field strength, image resolution, type of receiver coil). In this paper we present the first empirical study of the reproducibility of 5 different radiomic feature families in a multi-site setting, specifically for characterizing prostate MRI appearance. Our cohort comprised 147 patient T2w MRI datasets from 4 different sites, all of which were first pre-processed to correct for acquisition-related artifacts such as bias field, differing voxel resolutions, and intensity drift (non-standardness). 406 3D voxel-wise radiomic features were extracted and evaluated in a cross-site setting to determine how reproducible they were within a relatively homogeneous non-tumor tissue region, using 2 different measures of reproducibility: the Multivariate Coefficient of Variation and the Instability Score. Our results demonstrated that Haralick features were the most reproducible between all 4 sites. By comparison, Laws features were among the least reproducible between sites, as well as performing highly variably across their entire parameter space. Similarly, the Gabor feature family demonstrated good cross-site reproducibility, but only for certain parameter combinations. These trends indicate that despite extensive pre-processing, only a subset of radiomic features and associated parameters may be reproducible enough for use within radiomics-based machine learning classifier schemes.
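    A minimal sketch of such a cross-site check, using the plain coefficient of variation of per-site feature means as a simple stand-in for the paper's multivariate coefficient of variation and instability score; the data and the 5% threshold are synthetic assumptions:

      # Minimal sketch: flag features whose per-site means vary little across sites.
      import numpy as np

      rng = np.random.default_rng(3)
      n_sites, n_patients, n_features = 4, 30, 406
      # Hypothetical per-patient feature summaries at each site.
      features = rng.lognormal(0.0, 0.3, size=(n_sites, n_patients, n_features))

      site_means = features.mean(axis=1)                 # (sites, features)
      cv = site_means.std(axis=0) / site_means.mean(axis=0)

      reproducible = np.where(cv < 0.05)[0]              # hypothetical threshold
      print(f"{len(reproducible)} of {n_features} features have cross-site CV < 5%")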

  5. Automation of data processing and calculation of retention parameters and thermodynamic data for gas chromatography

    NASA Astrophysics Data System (ADS)

    Makarycheva, A. I.; Faerman, V. A.

    2017-02-01

    An analysis of automation patterns is performed, and a programming solution for automating the processing of chromatographic data and their subsequent storage, based on a software package, Mathcad, and MS Excel spreadsheets, is developed. The offered approach allows the data processing algorithm to be modified and does not require the participation of programming experts. The approach provides measurement of retention times and retention volumes, specific retention volumes, differential molar free energies of adsorption, partial molar solution enthalpies, and isosteric heats of adsorption. The developed solution is aimed at use in a small research group and was tested on a series of new gas chromatography sorbents. More than 20 analytes were processed to calculate retention parameters and thermodynamic sorption quantities. The resulting data are provided in a form accessible to comparative analysis and make it possible to find sorbents with the most favourable properties for solving specific analytical problems.
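    The retention quantities mentioned are standard in gas chromatography; a minimal sketch using the textbook definitions (net retention volume V_N = j*Fc*(tR - tM) with the James-Martin compressibility factor j, and specific retention volume V_g = V_N*(273.15/T_col)/w_s per gram of stationary phase), with illustrative input values:

      # Minimal sketch of standard GC retention calculations under the
      # textbook definitions; all numbers below are illustrative.
      def james_martin_j(p_in, p_out):
          """James-Martin gas compressibility correction factor."""
          r = p_in / p_out
          return 1.5 * (r**2 - 1) / (r**3 - 1)

      def specific_retention_volume(t_r, t_m, fc_ml_min, t_col_k, w_s_g, p_in, p_out):
          v_n = james_martin_j(p_in, p_out) * fc_ml_min * (t_r - t_m)  # net retention volume [mL]
          return v_n * (273.15 / t_col_k) / w_s_g                      # [mL/g at 0 degC]

      vg = specific_retention_volume(t_r=6.40, t_m=0.52, fc_ml_min=30.0,
                                     t_col_k=393.15, w_s_g=2.1,
                                     p_in=180e3, p_out=101e3)
      print(f"V_g = {vg:.1f} mL/g")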

  6. Mass production of bacterial communities adapted to the degradation of volatile organic compounds (TEX).

    PubMed

    Lapertot, Miléna; Seignez, Chantal; Ebrahimi, Sirous; Delorme, Sandrine; Peringer, Paul

    2007-06-01

    This study focuses on the mass cultivation of bacteria adapted to the degradation of a mixture composed of toluene, ethylbenzene, and o-, m- and p-xylenes (TEX). For the cultivation process, the Substrate Pulse Batch (SPB) technique was adapted under well-automated conditions. The key parameters to be monitored, including temperature, pH, dissolved oxygen, and turbidity, were handled by LabVIEW software. Other parameters, such as biomass, ammonium, or residual substrate concentrations, required offline measurements. The SPB technique was successfully tested experimentally on TEX. The overall behavior of the mixed bacterial population was observed and discussed along the cultivation process. Carbon and nitrogen limitations were shown to affect the integrity of the bacterial cells as well as their production of exopolymeric substances (EPS). Average productivity and yield values successfully reached the industrial specifications, which were 0.45 kg(DW) m(-3) d(-1) and 0.59 g(DW) g(C)(-1), respectively. The accuracy and reproducibility of the obtained results establish the controlled SPB process as a feasible technique.

  7. Role of Temperature and SiCP Parameters in Stability and Quality of Al-Si-Mg/SiC Foams

    NASA Astrophysics Data System (ADS)

    Ravi Kumar, N. V.; Gokhale, Amol A.

    2018-06-01

    Composites of Al-Si-Mg (A356) alloy with silicon carbide particles were synthesized in-house and foamed by melt processing using titanium hydride as the foaming agent. The effects of SiCP size and content and of foaming temperature on the stability and quality of the foam were explored. It was observed that foam stability depended on the foaming temperature alone, and not on the particle size or volume percent, within the studied ranges. Specifically, foam stability was poor at 670°C. Among the stable foams obtained at 640°C, cell soundness (absence of, or few, defects and collapse) was seen to vary depending on the particle size and content; for example, for finer sizes, lower particle contents were sufficient to obtain a sound cell structure. It is possible to determine a foaming process window based on material and process parameters for good expansion, foam stability, and cell structure.

  8. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    NASA Astrophysics Data System (ADS)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d), usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and, in some cases, formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  9. Online Denoising Based on the Second-Order Adaptive Statistics Model.

    PubMed

    Yi, Sheng-Lun; Jin, Xue-Bo; Su, Ting-Li; Tang, Zhen-Yun; Wang, Fa-Fa; Xiang, Na; Kong, Jian-Lei

    2017-07-20

    Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after they are collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed to process practical measurement data with colored noise, where the characteristics of the colored noise are captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on the second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation process is implemented via the Kalman filter in a recursive way, so that the online purpose is attained. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. The results show that the proposed method not only deals with signals with colored noise but also achieves a tradeoff between efficiency and accuracy.
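    A minimal sketch of the closed-loop idea, with a second-order (AR(2)) signal model whose coefficients are periodically re-estimated from the filtered output via the order-2 Yule-Walker equations, and a standard Kalman filter doing the state estimation; the model, noise levels, and refresh interval are illustrative assumptions rather than the paper's exact algorithm:

      # Minimal sketch: Kalman filtering with Yule-Walker updates of an AR(2) model.
      import numpy as np

      def yule_walker_ar2(x):
          """Solve the order-2 Yule-Walker equations for AR coefficients."""
          x = x - x.mean()
          r = [np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)]
          return np.linalg.solve([[r[0], r[1]], [r[1], r[0]]], [r[1], r[2]])

      rng = np.random.default_rng(4)
      n = 2000
      true_a = (1.6, -0.8)                                # slowly oscillating AR(2) process
      x = np.zeros(n)
      for t in range(2, n):
          x[t] = true_a[0] * x[t-1] + true_a[1] * x[t-2] + rng.normal(0, 0.5)
      y = x + rng.normal(0, 1.0, n)                       # noisy measurements

      H = np.array([[1.0, 0.0]])
      Q = np.diag([0.25, 0.0]); R = np.array([[1.0]])
      s = np.zeros(2); P = np.eye(2)
      a = np.array([0.0, 0.0])                            # adaptive parameters, neutral start
      est = np.zeros(n)
      for t in range(n):
          F = np.array([[a[0], a[1]], [1.0, 0.0]])
          s = F @ s; P = F @ P @ F.T + Q                  # predict
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
          s = s + K @ (y[t] - H @ s); P = (np.eye(2) - K @ H) @ P
          est[t] = s[0]
          if t >= 200 and t % 200 == 0:                   # periodic Yule-Walker update
              a = yule_walker_ar2(est[t-200:t])
      print("estimated AR coefficients:", a)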

  10. Microstructure and Magnetic Properties of Magnetic Material Fabricated by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Jhong, Kai Jyun; Huang, Wei-Chin; Lee, Wen Hsi

    Selective Laser Melting (SLM) is a powder-based additive manufacturing process capable of producing parts layer by layer from a 3D CAD model. The aim of this study is to adopt the selective laser melting technique for magnetic material fabrication [1]. For the SLM process to be practical in industrial use, highly specific mechanical properties of the final product must be achieved. The integrity of the manufactured components depends strongly on each single laser-melted track and every single layer, as well as the strength of the connections between them. In this study, the effects of processing parameters, such as the spacing distance, on the surface morphology are analyzed. Our hypothesis is that when a magnetic product is made by selective laser melting instead of traditional techniques, the finished component will have more precise and effective properties. This study analyzed the magnitudes of the magnetic properties in comparison with different parameters in the SLM process, and a completed product was made to investigate the efficiency in contrast with products made using existing manufacturing processes.

  11. Wolf Creek Research Basin Cold Region Process Studies - 1992-2003

    NASA Astrophysics Data System (ADS)

    Janowicz, R.; Hedstrom, N.; Pomeroy, J.; Granger, R.; Carey, S.

    2004-12-01

    The development of hydrological models in northern regions is complicated by cold region processes. Sparse vegetation influences snowpack accumulation, redistribution, and melt; frozen ground affects infiltration and runoff; and cold soils in the summer affect evapotranspiration rates. Situated in the upper Yukon River watershed, the 195 km2 Wolf Creek Research Basin was instrumented in 1992 to calibrate hydrologic flow models, and has since evolved into a comprehensive study of cold region processes and linkages, contributing significantly to hydrological and climate change modelling. Studies include those of precipitation distribution, snowpack accumulation and redistribution, energy balance, snowmelt infiltration, and water balance. Studies of the spatial variability of hydrometeorological data demonstrate the importance of physical parameters in their distribution and control of runoff processes. Many studies have also identified the complex interaction of several physical parameters, including topography, vegetation, and frozen ground (seasonal or permafrost), as important. They also show that there is a fundamental, underlying spatial structure to the watershed that must be adequately represented in parameterization schemes for scaling and watershed modelling. The specific results of numerous studies are presented.

  12. Mechanism and design of intermittent aeration activated sludge process for nitrogen removal.

    PubMed

    Hanhan, Oytun; Insel, Güçlü; Yagci, Nevin Ozgur; Artan, Nazik; Orhon, Derin

    2011-01-01

    The paper provided a comprehensive evaluation of the mechanism and design of the intermittent aeration activated sludge process for nitrogen removal. Based on the specific character of the process, the total cycle time (TC), the aerated fraction (AF), and the cycle time ratio (CTR) were defined as the major design parameters, aside from the sludge age of the system. Their impact on system performance was evaluated by means of process simulation. A rational design procedure was developed on the basis of basic stoichiometry and mass balances related to the oxidation and removal of nitrogen under aerobic and anoxic conditions, which enabled the selection of operating parameters for optimum performance. The simulation results indicated that the total nitrogen level could be reduced to a minimum by appropriate manipulation of the aerated fraction and cycle time ratio. They also showed that the effluent total nitrogen could be lowered to around 4.0 mgN/L by adjusting the dissolved oxygen set-point to 0.5 mg/L, a level which promotes simultaneous nitrification and denitrification.

  13. 40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... HAP as THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR... subtract the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...

  14. 40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... HAP as THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR... subtract the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...

  15. 40 CFR Table 4 to Subpart Dddd of... - Requirements for Performance Tests

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... HAP as THC compliance option measure emissions of total HAP as THC Method 25A in appendix A to 40 CFR... subtract the methane emissions from the emissions of total HAP as THC. (6) each process unit subject to a... § 63.2240(c) establish the site-specific operating requirements (including the parameter limits or THC...

  16. PDC bit hydraulics design, profile are key to reducing balling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hariharan, P.R.; Azar, J.J.

    1996-12-09

    Polycrystalline diamond compact (PDC) bits with a parabolic profile and bladed hydraulic design have a lesser tendency to ball during drilling of reactive shales. PDC bits with ribbed or open-face hydraulic designs and those with flat or rounded profiles tended to ball more often in the bit balling experiments conducted. The experimental work also indicates that PDC hydraulic design seems to have a greater influence on bit balling tendency than bit profile design. There are five main factors that affect bit balling: formation type, drilling fluid, drilling hydraulics, bit design, and confining pressures. An equation for specific energy showed that it could be used to describe the efficiency of the drilling process by examining the amount of energy spent in drilling a unit volume of rock. This concept of specific energy has been used herein to correlate with Rd, a parameter that quantifies the degree of balling.
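    A minimal sketch of the specific-energy concept, using Teale's classical mechanical specific energy relation SE = WOB/A + 2*pi*N*T/(A*ROP), the thrust term plus the rotary term, expressed as energy per unit volume of rock removed; the abstract's balling parameter Rd is not reproduced here, and the inputs are illustrative SI values:

      # Minimal sketch of drilling specific energy (Teale's relation).
      import math

      def specific_energy(wob_n, torque_nm, rpm, rop_m_s, bit_diam_m):
          area = math.pi * (bit_diam_m / 2) ** 2              # bit face area [m^2]
          thrust_term = wob_n / area                          # [Pa] = [J/m^3]
          rotary_term = 2 * math.pi * (rpm / 60) * torque_nm / (area * rop_m_s)
          return thrust_term + rotary_term                    # [J/m^3]

      se = specific_energy(wob_n=50e3, torque_nm=2e3, rpm=120,
                           rop_m_s=15.0 / 3600, bit_diam_m=0.2159)  # 8.5-in bit
      print(f"specific energy = {se/1e6:.1f} MJ/m^3")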

  17. Specific modes of vibratory technological machines: mathematical models, peculiarities of interaction of system elements

    NASA Astrophysics Data System (ADS)

    Eliseev, A. V.; Sitov, I. S.; Eliseev, S. V.

    2018-03-01

    The methodological basis for constructing mathematical models of vibratory technological machines is developed in the article. An approach is proposed that makes it possible to operate a vibration table in a specific mode that provides dynamic damping of oscillations in the zone where the vibration exciter is placed, while maintaining the specified vibration parameters in the working zone of the vibration table. The aim of the work is to develop methods of mathematical modeling oriented towards technological processes with long cycles. The technologies of structural mathematical modeling are used, with structural schemes, transfer functions, and amplitude-frequency characteristics. The concept of the work is to test the possibility of combining conditions that reduce the loads on the working components of the vibration exciter while maintaining sufficiently wide limits for varying the parameters of the vibration field.

  18. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    PubMed

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning on the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length, or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination of outdoor environments is also an important factor affecting image accuracy. This paper is exclusively focused on two main issues, always with the goal of achieving the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) a camera sensor arrangement to adjust extrinsic parameters, and (b) the design of strategies for controlling adverse illumination effects.

  19. German dentists' websites on periodontitis have low quality of information.

    PubMed

    Schwendicke, Falk; Stange, Jörg; Stange, Claudia; Graetz, Christian

    2017-08-02

    The internet is an increasingly relevant source of health information. We aimed to assess the quality of German dentists' websites on periodontitis, hypothesizing that it was significantly associated with a number of practice-specific parameters. We searched four electronic search engines and included pages which were freely accessible, posted by a dental practice in Germany, and mentioned periodontal disease/therapy. Websites were assessed for (1) technical and functional aspects, (2) generic quality and risk of bias, and (3) disease-specific information. For (1) and (2), validated tools (LIDA/DISCERN) were used for assessment. For (3), we developed a criterion catalogue encompassing items on etiologic and prognostic factors for periodontitis, the diagnostic and treatment process, and the generic chance of tooth retention in periodontitis patients. Inter- and intra-rater reliabilities were largely moderate. Generalized linear modeling was used to assess the association between information quality (measured as % of the maximally available scores) and practice-specific characteristics. Seventy-one websites were included. Technical and functional aspects were reported in significantly higher quality (median: 71%, 25th/75th percentiles: 67/79%) than all other aspects (p < 0.05). Generic risk of bias and most disease-specific aspects showed significantly lower reporting quality (median range 0-40%), with the poorest reporting for prognostic factors (9; 0/27%), the diagnostic process (0; 0/33%), and chances of tooth retention (0; 0/2%). We found none of the practice-specific parameters to have a significant impact on the overall quality of the websites. Most German dentists' websites on periodontitis are not fully trustworthy, and relevant information is either missing or insufficiently considered. There is great need to improve the information quality of such websites, at least with regard to periodontitis.

  20. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Peyer, Kathrin E.; Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models which have limited accuracy: geometric models with lengthy measuring procedures or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778

  1. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models which have limited accuracy: geometric models with lengthy measuring procedures or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
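    The convex-hull step described above maps directly onto standard tooling; a minimal sketch assuming scipy's ConvexHull (which reports the hull volume directly) and a uniform segment density, with the point cloud and density as illustrative placeholders:

      # Minimal sketch: segment volume, mass, and a rough centroid from a point cloud.
      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(5)
      # Hypothetical stand-in for one manually separated body segment (e.g., a thigh) [m].
      points = rng.normal(size=(2000, 3)) * np.array([0.06, 0.06, 0.20])

      hull = ConvexHull(points)
      density = 1050.0                                  # assumed segment density [kg/m^3]
      volume = hull.volume                              # [m^3]
      mass = density * volume
      centroid = points[hull.vertices].mean(axis=0)     # rough hull-vertex centroid
      print(f"volume = {volume*1e3:.2f} L, mass = {mass:.2f} kg, centroid = {centroid}")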

  2. Homogenization Theory for the Prediction of Obstructed Solute Diffusivity in Macromolecular Solutions.

    PubMed

    Donovan, Preston; Chehreghanianzabi, Yasaman; Rathinam, Muruhan; Zustiak, Silviya Petrova

    2016-01-01

    The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute undergoes obstructed diffusion via stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter.
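    For orientation, the classical Maxwell result for diffusion around impenetrable spheres, D_eff/D0 = (1 - phi)/(1 + phi/2) with phi the obstacle volume fraction, provides a simple closed-form benchmark of the kind such homogenization results can be compared against; this is the textbook formula, not the paper's derived expression:

      # Minimal sketch of the classical Maxwell effective-diffusivity estimate.
      def maxwell_deff(phi):
          """D_eff/D0 for impenetrable spherical obstacles at volume fraction phi."""
          return (1.0 - phi) / (1.0 + phi / 2.0)

      for phi in (0.05, 0.1, 0.2, 0.3):
          print(f"phi = {phi:.2f}: D_eff/D0 = {maxwell_deff(phi):.3f}")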

  3. Diamond Tool Specific Wear Rate Assessment in Granite Machining by Means of Knoop Micro-Hardness and Process Parameters

    NASA Astrophysics Data System (ADS)

    Goktan, R. M.; Gunes Yılmaz, N.

    2017-09-01

    The present study was undertaken to investigate the potential usability of Knoop micro-hardness, both as a single parameter and in combination with operational parameters, for sawblade specific wear rate (SWR) assessment in the machining of ornamental granites. The sawing tests were performed on different commercially available granite varieties using a fully instrumented side-cutting machine. During the sawing tests, two fundamental productivity parameters, namely the workpiece feed rate and the cutting depth, were varied at different levels. The good correspondence observed between the measured Knoop hardness and SWR values under different operational conditions indicates that Knoop hardness has the potential to be used as a rock material property in preliminary wear estimations for diamond sawblades. Also, a multiple regression model for SWR prediction was developed which takes into account the Knoop hardness, cutting depth, and workpiece feed rate. The relative contribution of each independent variable to the prediction of SWR was determined using test statistics. The prediction accuracy of the established model was checked against new observations. The strong prediction performance of the model suggests that its framework may be applied to other granites and operational conditions for quantifying or differentiating the relative wear performance of diamond sawblades.
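    A minimal sketch of such a multiple regression, fitting SWR against Knoop hardness, cutting depth, and feed rate by ordinary least squares; the data are synthetic, and the model form and coefficients are not the paper's:

      # Minimal sketch: multiple linear regression for SWR from three predictors.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 40
      hk = rng.uniform(4000, 7000, n)        # Knoop hardness [MPa], illustrative
      depth = rng.uniform(10, 30, n)         # cutting depth [mm]
      feed = rng.uniform(0.2, 1.0, n)        # workpiece feed rate [m/min]
      swr = 1e-4*hk + 0.02*depth + 0.5*feed + rng.normal(0, 0.05, n)  # synthetic target

      X = np.column_stack([np.ones(n), hk, depth, feed])
      coef, *_ = np.linalg.lstsq(X, swr, rcond=None)
      pred = X @ coef
      r2 = 1 - np.sum((swr - pred)**2) / np.sum((swr - swr.mean())**2)
      print("coefficients:", coef, f"R^2 = {r2:.3f}")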

  4. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube experiment approach in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
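    A minimal sketch of the sampling side of such an experiment, drawing Latin hypercube parameter sets and scoring each against observations with the Nash-Sutcliffe efficiency; the parameter bounds are plausible assumptions, and run_model is a hypothetical placeholder for a GR4J simulation:

      # Minimal sketch: Latin hypercube sampling of a 4-parameter model plus an
      # NSE scoring function; no actual hydrological model is implemented here.
      import numpy as np
      from scipy.stats import qmc

      # Assumed GR4J-style parameter bounds (x1..x4); treat as illustrative.
      lower = np.array([10.0, -5.0, 1.0, 0.5])
      upper = np.array([1500.0, 5.0, 500.0, 4.0])

      sampler = qmc.LatinHypercube(d=4, seed=0)
      params = qmc.scale(sampler.random(n=10_000), lower, upper)   # (n, 4) parameter sets

      def run_model(p, forcing):
          """Hypothetical placeholder for a GR4J run; returns simulated flows."""
          raise NotImplementedError

      def nse(sim, obs):
          return 1 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

      # For each sampled set: sim = run_model(p, forcing); score = nse(sim, obs).
      # Keeping all (parameter set, score) pairs allows probabilistic results and
      # multi-objective comparisons rather than a single deterministic optimum.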

  5. Intracellular response to process optimization and impact on productivity and product aggregates for a high-titer CHO cell process.

    PubMed

    Handlogten, Michael W; Lee-O'Brien, Allison; Roy, Gargi; Levitskaya, Sophia V; Venkat, Raghavan; Singh, Shailendra; Ahuja, Sanjeev

    2018-01-01

    A key goal in process development for antibodies is to increase productivity while maintaining or improving product quality. During process development of an antibody, titers were increased from 4 to 10 g/L while simultaneously decreasing aggregates. Process development involved optimization of media and feed formulations, feed strategy, and process parameters including pH and temperature. To better understand how CHO cells respond to process changes, the changes were implemented in a stepwise manner. The first change was an optimization of the feed formulation, the second was an optimization of the medium, and the third was an optimization of process parameters. Multiple process outputs were evaluated including cell growth, osmolality, lactate production, ammonium concentration, antibody production, and aggregate levels. Additionally, detailed assessment of oxygen uptake, nutrient and amino acid consumption, extracellular and intracellular redox environment, oxidative stress, activation of the unfolded protein response (UPR) pathway, protein disulfide isomerase (PDI) expression, and heavy and light chain mRNA expression provided an in-depth understanding of the cellular response to process changes. The results demonstrate that mRNA expression and UPR activation were unaffected by process changes, and that increased PDI expression and optimized nutrient supplementation are required for higher productivity processes. Furthermore, our findings demonstrate the role of extra- and intracellular redox environment on productivity and antibody aggregation. Processes using the optimized medium, with increased concentrations of redox modifying agents, had the highest overall specific productivity, reduced aggregate levels, and helped cells better withstand the high levels of oxidative stress associated with increased productivity. Specific productivities of different processes positively correlated to average intracellular values of total glutathione. Additionally, processes with the optimized media maintained an oxidizing intracellular environment, important for correct disulfide bond pairing, which likely contributed to reduced aggregate formation. These findings shed important understanding into how cells respond to process changes and can be useful to guide future development efforts to enhance productivity and improve product quality. © 2017 Wiley Periodicals, Inc.

  6. A self-organizing neural network for job scheduling in distributed systems

    NASA Astrophysics Data System (ADS)

    Newman, Harvey B.; Legrand, Iosif C.

    2001-08-01

    The aim of this work is to describe a possible approach to the optimization of job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software, aware of currently available resources and making scheduling decisions using "past experience." It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a large-dimensional parameter space and at the same time to explore new regions in the parameter space. This self-organizing scheduling system may offer a possible solution for providing effective use of resources for offline data processing jobs in future HEP experiments.

  7. The Art and Science of Climate Model Tuning

    DOE PAGES

    Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew; ...

    2017-03-31

    The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning, and how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.

  8. The Art and Science of Climate Model Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew

    The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning, and how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.

  9. Study on influence of Surface roughness of Ni-Al2O3 nano composite coating and evaluation of wear characteristics

    NASA Astrophysics Data System (ADS)

    Raghavendra, C. R.; Basavarajappa, S.; Sogalad, Irappa

    2018-02-01

    Electrodeposition is one of the most technologically feasible and economically superior techniques for producing metallic coatings. Advances in the application of nanoparticles have attracted attention in all fields of engineering. In the present study, Ni-Al2O3 nanoparticle composite coatings were deposited on an aluminium substrate by electrodeposition. The aluminium surface requires a specific pre-treatment for better coating adherence; in light of this, a thin zinc layer was applied to the aluminium substrate by an electroless process. In addition, surface roughness is an important parameter for any coating method and material. In this work, Ni-Al2O3 composite coatings were deposited while varying process parameters such as bath temperature, current density, and particle loading. The experiments followed a 20-trial central composite design. The effects of the process parameters, and of the surface roughness before and after coating, on wear rate and coating thickness were analyzed. The results showed better wear resistance for the electrodeposited Ni-Al2O3 composite coating than for the Ni coating. Particle loading and the interaction of current density with temperature had the most significant effects on wear rate. Surface roughness significantly affected the wear behaviour and the coating thickness.

  10. Thermo-Mechanical Characterization of Friction Stir Spot Welded AA7050 Sheets by Means of Experimental and FEM Analyses

    PubMed Central

    D’Urso, Gianluca; Giardini, Claudio

    2016-01-01

    The present study was carried out to evaluate how the friction stir spot welding (FSSW) process parameters affect the temperature distribution in the welding region, the welding forces and the mechanical properties of the joints. The experimental study was performed by means of a CNC machine tool producing FSSW lap joints on AA7050 aluminum alloy plates. Three thermocouples were inserted into the samples to measure the temperatures at different distances from the joint axis during the whole FSSW process. Experiments were repeated varying the process parameters, namely rotational speed, axial feed rate and plunging depth. Axial welding forces were measured during the tests using a piezoelectric load cell, while the mechanical properties of the joints were evaluated by executing shear tests on the specimens. The correlation found between process parameters and joint properties allowed identification of the best technological window. The data collected during the experiments were also used to validate a simulation model of the FSSW process. The model was set up using a 2D approach for the simulation of a 3D problem, in order to guarantee a very simple and practical solution for achieving results in a very short time. A specific external routine for the calculation of the thermal energy due to friction acting between pin and sheet was developed. An index for the prediction of the joint mechanical properties using the FEM simulations was finally presented and validated. PMID:28773810

  11. Thermo-Mechanical Characterization of Friction Stir Spot Welded AA7050 Sheets by Means of Experimental and FEM Analyses.

    PubMed

    D'Urso, Gianluca; Giardini, Claudio

    2016-08-11

    The present study was carried out to evaluate how the friction stir spot welding (FSSW) process parameters affect the temperature distribution in the welding region, the welding forces and the mechanical properties of the joints. The experimental study was performed by means of a CNC machine tool producing FSSW lap joints on AA7050 aluminum alloy plates. Three thermocouples were inserted into the samples to measure the temperatures at different distances from the joint axis during the whole FSSW process. Experiments were repeated varying the process parameters, namely rotational speed, axial feed rate and plunging depth. Axial welding forces were measured during the tests using a piezoelectric load cell, while the mechanical properties of the joints were evaluated by executing shear tests on the specimens. The correlation found between process parameters and joint properties allowed identification of the best technological window. The data collected during the experiments were also used to validate a simulation model of the FSSW process. The model was set up using a 2D approach for the simulation of a 3D problem, in order to guarantee a very simple and practical solution for achieving results in a very short time. A specific external routine for the calculation of the thermal energy due to friction acting between pin and sheet was developed. An index for the prediction of the joint mechanical properties using the FEM simulations was finally presented and validated.

  12. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and still scenes), mid-end CCD scanners for desktop publishing and pre-press applications and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions are less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can have. In the last section, we briefly summarize the presented material and point out trends for future development.
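
    As a rough illustration of the generic input device model described above, the sketch below (Python; all class and field names are invented for this example, not taken from the paper) shows how a scan job specified in absolute, device-independent parameters might be translated into native settings by a concrete scanner driver:

        from dataclasses import dataclass

        @dataclass
        class ScanJob:
            width_mm: float           # absolute physical scan area
            height_mm: float
            resolution_dpi: int       # absolute sampling resolution
            bits_per_channel: int
            icc_profile: str          # color characterization of the device

        class ScannerDriver:
            """Base class that each concrete device driver would implement."""
            max_dpi = 600

            def to_native(self, job: ScanJob) -> dict:
                # Translate the absolute parameters into device-specific
                # settings, clamping to the hardware's capabilities.
                return {
                    "dpi": min(job.resolution_dpi, self.max_dpi),
                    "pixels_x": int(job.width_mm / 25.4 * job.resolution_dpi),
                    "pixels_y": int(job.height_mm / 25.4 * job.resolution_dpi),
                    "depth": job.bits_per_channel,
                }

        job = ScanJob(210, 297, 300, 16, icc_profile="scanner-A4.icc")
        print(ScannerDriver().to_native(job))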

  13. Focusing the research agenda for simulation training visual system requirements

    NASA Astrophysics Data System (ADS)

    Lloyd, Charles J.

    2014-06-01

    Advances in the capabilities of the display-related technologies with potential uses in simulation training devices continue to occur at a rapid pace. Simultaneously, ongoing reductions in defense spending stimulate the services to push a higher proportion of training into ground-based simulators to reduce their operational costs. These two trends result in increased customer expectations and desires for more capable training devices, while the money available for these devices is decreasing. Thus, there is an increasing need to improve the efficiency of the acquisition process and to increase the probability that users get the training devices they need at the lowest practical cost. In support of this need, the IDEAS program was initiated in 2010 with the goal of improving the display system requirements that have been associated with unmet user needs and expectations and with disrupted acquisitions. This paper describes a process of identifying, rating, and selecting the design parameters that should receive research attention. Analyses of existing requirements documents reveal that between 40 and 50 specific design parameters (e.g., resolution, contrast, luminance, field of view, frame rate) are typically called out for the acquisition of a simulation training display system. Obviously, no research effort can address the effects of so many parameters. Thus, we developed a defensible strategy for focusing limited R&D resources on a fraction of these parameters. This strategy encompasses six criteria to identify the parameters most worthy of research attention. Examples based on display design parameters recommended by stakeholders are provided.

  14. Surrogate models for sheet metal stamping problem based on the combination of proper orthogonal decomposition and radial basis function

    NASA Astrophysics Data System (ADS)

    Dang, Van Tuan; Lafon, Pascal; Labergere, Carl

    2017-10-01

    In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Function (RBF) interpolation is proposed to build a surrogate model based on the Benchmark Springback 3D bending problem from the Numisheet 2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical full-factorial Design of Experiments (DoE) is used to sample the parameter space, and the sample points serve as input data for finite element method (FEM) numerical simulation of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resultant high-fidelity model is reduced through the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviations of the matrix of final displacement fields from the FEM numerical simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF interpolation is used to interpolate these POD coefficients over the parameter space. The presented POD-RBF approach can finally be used for shape optimization with high accuracy.
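
    The POD-RBF construction lends itself to a compact sketch. The following Python fragment (NumPy/SciPy) uses stand-in random "displacement fields" and illustrative parameter ranges in place of the paper's FEM snapshots; the steps are the ones described in the abstract: snapshot matrix, SVD-based POD basis, and RBF interpolation of the POD coefficients over the two design parameters:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)

        # Full-factorial DoE over the two design parameters:
        # die radius [mm] and blank holder force [kN] (illustrative ranges).
        radii = np.linspace(5.0, 15.0, 4)
        forces = np.linspace(10.0, 50.0, 4)
        params = np.array([(r, f) for r in radii for f in forces])   # (16, 2)

        # Snapshot matrix: one final displacement field per FEM run (fake data).
        n_dof = 1000
        snapshots = rng.normal(size=(n_dof, len(params)))            # (n_dof, 16)

        # POD via SVD of the mean-centered snapshot deviations.
        mean_field = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)
        k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1
        basis = U[:, :k]                                   # low-order POD basis

        # POD coefficients of each snapshot, interpolated by RBF over the
        # parameter space.
        coeffs = basis.T @ (snapshots - mean_field)                  # (k, 16)
        rbf = RBFInterpolator(params, coeffs.T, kernel='thin_plate_spline')

        # Surrogate prediction of the field at an unseen design point.
        new_point = np.array([[9.0, 30.0]])
        field_pred = mean_field[:, 0] + basis @ rbf(new_point)[0]
        print(field_pred.shape)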

  15. Influence of formulation and processing variables on properties of itraconazole nanoparticles made by advanced evaporative precipitation into aqueous solution.

    PubMed

    Bosselmann, Stephanie; Nagao, Masao; Chow, Keat T; Williams, Robert O

    2012-09-01

    Nanoparticles of the poorly water-soluble drug itraconazole (ITZ) were produced by the Advanced Evaporative Precipitation into Aqueous Solution process (Advanced EPAS). This process combines emulsion templating and EPAS processing to provide improved control over the size distribution of the precipitated particles. Specifically, oil-in-water emulsions containing the drug and suitable stabilizers are sprayed into a heated aqueous solution to induce precipitation of the drug in the form of nanoparticles. The influence of processing parameters (temperature and volume of the heated aqueous solution; type of nozzle) and formulation aspects (stabilizer concentrations; total solid concentrations) on the size of the suspended ITZ particles, as determined by laser diffraction, was investigated. Furthermore, freeze-dried ITZ nanoparticles were evaluated regarding their morphology, crystallinity, redispersibility, and dissolution behavior. The results indicate that a robust precipitation process was developed, such that the size distribution of the dispersed nanoparticles was largely independent of the different processing and formulation parameters. Freeze-drying of the colloidal dispersions resulted in micron-sized agglomerates composed of spherical, sub-300-nm particles characterized by reduced crystallinity and high ITZ potencies of up to 94% (w/w). The use of sucrose prevented particle agglomeration and resulted in powders that were readily reconstituted and reached high and sustained supersaturation levels upon dissolution in aqueous media.

  16. Architecture and settings optimization procedure of a TES frequency domain multiplexed readout firmware

    NASA Astrophysics Data System (ADS)

    Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.

    2014-11-01

    IRAP is developing the readout electronics of the SPICA-SAFARI TES bolometer arrays. Based on the frequency domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors, demodulates the data, and computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, the so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e., several μs) and with fast signals (i.e., frequency carriers of the order of 5 MHz). To optimize the power consumption we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency and the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of the optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find the optimal settings.

  17. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that human task performance with image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark then clearly represents the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters, using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constrained optimization problem, one can effectively look backwards through the image acquisition process: optimizing the fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.

  18. Optimization of CO2 laser cutting parameters on Austenitic type Stainless steel sheet

    NASA Astrophysics Data System (ADS)

    Parthiban, A.; Sathish, S.; Chandrasekaran, M.; Ravikumar, R.

    2017-03-01

    Thin AISI 316L stainless steel sheet is widely used in sheet metal processing industries for specific applications. CO2 laser cutting is one of the most popular sheet metal cutting processes for cutting sheets in different profiles. In the present work, cutting parameters such as laser power (2000-4000 W), cutting speed (3500-5500 mm/min) and assist gas pressure (0.7-0.9 MPa) were varied for cutting of AISI 316L stainless steel sheet of 2 mm thickness. The experimentation was conducted based on a Box-Behnken design. The aim of this work is to develop a mathematical model of kerf width for straight and curved profiles through response surface methodology. The developed mathematical models for the straight and curved profiles have been compared. The quadratic models show the best agreement with the experimental data, and the shape of the profile plays a substantial role in minimizing the kerf width. Finally, a numerical optimization technique was used to find the best laser cutting parameters for both straight and curved profile cuts.
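
    A minimal version of the response-surface step can be sketched as an ordinary least-squares fit of a full quadratic model. The kerf-width data below are invented; only the factor ranges come from the abstract:

        import numpy as np

        rng = np.random.default_rng(1)

        # Factors: laser power [W], cutting speed [mm/min], gas pressure [MPa].
        X = np.column_stack([
            rng.uniform(2000, 4000, 30),
            rng.uniform(3500, 5500, 30),
            rng.uniform(0.7, 0.9, 30),
        ])
        kerf = rng.uniform(0.2, 0.5, 30)      # measured kerf width [mm] (fake)

        def quad_terms(X):
            """Full quadratic model terms: 1, x_i, x_i^2, x_i*x_j."""
            x1, x2, x3 = X.T
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1**2, x2**2, x3**2,
                                    x1*x2, x1*x3, x2*x3])

        beta, *_ = np.linalg.lstsq(quad_terms(X), kerf, rcond=None)

        # Predicted kerf width at a candidate setting, as would feed the
        # numerical optimization step.
        candidate = np.array([[3000.0, 4500.0, 0.8]])
        print(quad_terms(candidate) @ beta)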

  19. Standard cell electrical and physical variability analysis based on automatic physical measurement for design-for-manufacturing purposes

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Parag, Allon; Khmaisy, Hafez; Krispil, Uri; Adan, Ofer; Levi, Shimon; Latinski, Sergey; Schwarzband, Ishai; Rotstein, Israel

    2011-04-01

    A fully automated system for process variability analysis of high density standard cells was developed. The system consists of layout analysis with device mapping: device type, location, configuration and more. The mapping step was created by a simple DRC run-set. This database was then used as an input for choosing locations for SEM images and for specific layout parameter extraction, used by SPICE simulation. This method was used to analyze large arrays of standard cell blocks, manufactured using Tower TS013LV (Low Voltage for high-speed applications) platforms. Variability of different physical parameters, such as Lgate and line-width roughness, as well as of electrical parameters, such as drive current (Ion) and off current (Ioff), was calculated and statistically analyzed in order to understand the variability root cause. A comparison between transistors having the same W/L but with different layout configurations and different layout environments (around the transistor) was made in terms of performance as well as process variability. We successfully defined "robust" and "less-robust" transistor configurations, and updated guidelines for Design-for-Manufacturing (DfM).

  20. Automatic Reacquisition of Satellite Positions by Detecting Their Expected Streaks in Astronomical Images

    NASA Astrophysics Data System (ADS)

    Levesque, M.

    Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up-to-date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions and update the object orbital parameters. The real satellite positions are provided by the detection of the satellite streaks in astronomical images specifically acquired for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology includes several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and determine the sensitivity limits within which the algorithm can perform detection without false alarms, which is essential to avoid corruption of the orbital parameter database.

  1. Evolutionary algorithm for vehicle driving cycle generation.

    PubMed

    Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott

    2011-09-01

    Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm, as sketched below. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with an elitist strategy drives the optimization process, which consists of minimizing the error relative to the desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
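
    The following Python sketch illustrates the microtrip-concatenation idea with roulette-wheel selection and an elitist strategy. The microtrip library, target parameters, and operator settings are invented stand-ins, not the actual Integrated Bus Information System implementation:

        import numpy as np

        rng = np.random.default_rng(2)

        # Library of microtrips, each summarized by duration [s] and
        # distance [mi]; each microtrip ends in exactly one stop.
        library = np.column_stack([rng.uniform(20, 300, 100),
                                   rng.uniform(0.01, 2.0, 100)])
        target = {'avg_speed_mph': 15.0, 'stops_per_mile': 2.5}

        def cycle_error(genes):
            dur = library[genes, 0].sum()
            dist = library[genes, 1].sum()
            avg_speed = dist / (dur / 3600.0)
            stops_per_mile = len(genes) / dist
            return (abs(avg_speed - target['avg_speed_mph'])
                    / target['avg_speed_mph']
                    + abs(stops_per_mile - target['stops_per_mile'])
                    / target['stops_per_mile'])

        pop = [rng.choice(100, size=20, replace=False) for _ in range(50)]
        for gen in range(200):
            err = np.array([cycle_error(g) for g in pop])
            fitness = 1.0 / (1e-9 + err)
            elite = pop[int(np.argmin(err))].copy()      # elitist strategy
            probs = fitness / fitness.sum()              # roulette wheel
            parents = [pop[i]
                       for i in rng.choice(len(pop), size=len(pop), p=probs)]
            next_pop = []
            for a, b in zip(parents[::2], parents[1::2]):
                cut = rng.integers(1, 19)                # one-point crossover
                next_pop += [np.r_[a[:cut], b[cut:]], np.r_[b[:cut], a[cut:]]]
            for g in next_pop:                           # mutation: swap a gene
                if rng.random() < 0.2:
                    g[rng.integers(20)] = rng.integers(100)
            next_pop[0] = elite
            pop = next_pop

        print(cycle_error(pop[0]))    # residual error of the best cycle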

  2. Method and apparatus for measuring coupled flow, transport, and reaction processes under liquid unsaturated flow conditions

    DOEpatents

    McGrail, Bernard P.; Martin, Paul F.; Lindenmeier, Clark W.

    1999-01-01

    The present invention is a method and apparatus for measuring coupled flow, transport and reaction processes under liquid unsaturated flow conditions. The method and apparatus of the present invention permit distinguishing individual precipitation events and their effect on dissolution behavior isolated to the specific event. The present invention is especially useful for dynamically measuring hydraulic parameters when a chemical reaction occurs between a particulate material and either liquid or gas (e.g. air) or both, causing precipitation that changes the pore structure of the test material.

  3. A Theoretical Analysis of the Perceptual Span based on SWIFT Simulations of the n + 2 Boundary Paradigm

    PubMed Central

    Risse, Sarah; Hohenstein, Sven; Kliegl, Reinhold; Engbert, Ralf

    2014-01-01

    Eye-movement experiments suggest that the perceptual span during reading is larger than the fixated word, asymmetric around the fixation position, and shrinks in size contingent on the foveal processing load. We used the SWIFT model of eye-movement control during reading to test these hypotheses and their implications under the assumption of graded parallel processing of all words inside the perceptual span. Specifically, we simulated reading in the boundary paradigm and analysed the effects of denying the model to have valid preview of a parafoveal word n + 2 two words to the right of fixation. Optimizing the model parameters for the valid preview condition only, we obtained span parameters with remarkably realistic estimates conforming to the empirical findings on the size of the perceptual span. More importantly, the SWIFT model generated parafoveal processing up to word n + 2 without fitting the model to such preview effects. Our results suggest that asymmetry and dynamic modulation are plausible properties of the perceptual span in a parallel word-processing model such as SWIFT. Moreover, they seem to guide the flexible distribution of processing resources during reading between foveal and parafoveal words. PMID:24771996

  4. Effective inactivation of Saccharomyces cerevisiae in minimally processed Makgeolli using low-pressure homogenization-based pasteurization.

    PubMed

    Bak, Jin Seop

    2015-01-01

    In order to address the limitations associated with the inefficient pasteurization platform used to make Makgeolli, such as the presence of turbid colloidal dispersions in suspension, commercially available Makgeolli was minimally processed using a low-pressure homogenization-based pasteurization (LHBP) process. This continuous process demonstrates that promptly reducing the exposure time to excessive heat using either large molecules or insoluble particles can dramatically improve internal quality and decrease irreversible damage. Specifically, optimal homogenization increased concomitantly with physical parameters such as colloidal stability (65.0% of maximum and below 25-μm particles) following two repetitions at 25.0 MPa. However, biochemical parameters such as microbial population, acidity, and the presence of fermentable sugars rarely affected Makgeolli quality. Remarkably, there was a 4.5-log reduction in the number of Saccharomyces cerevisiae target cells at 53.5°C for 70 sec in optimally homogenized Makgeolli. This value was higher than the 37.7% measured from traditionally pasteurized Makgeolli. In contrast to the analytical similarity among homogenized Makgeollis, our objective quality evaluation demonstrated significant differences between pasteurized (or unpasteurized) Makgeolli and LHBP-treated Makgeolli. Keywords: low-pressure homogenization-based pasteurization; Makgeolli; minimal processing-preservation; Saccharomyces cerevisiae; suspension stability.

  5. Plan delivery quality assurance for CyberKnife: Statistical process control analysis of 350 film-based patient-specific QAs.

    PubMed

    Bellec, J; Delaby, N; Jouyaux, F; Perdrieux, M; Bouvier, J; Sorel, S; Henry, O; Lafond, C

    2017-07-01

    Robotic radiosurgery requires plan delivery quality assurance (DQA) but there has never been a published comprehensive analysis of a patient-specific DQA process in a clinic. We proposed to evaluate 350 consecutive film-based patient-specific DQAs using statistical process control. We evaluated the performance of the process to propose achievable tolerance criteria for DQA validation and we sought to identify suboptimal DQAs using control charts. DQAs were performed on a CyberKnife-M6 using Gafchromic EBT3 films. The signal-to-dose conversion was performed using a multichannel correction and a scanning protocol that combined measurement and calibration in a single scan. The DQA analysis comprised a gamma-index analysis at 3%/1.5 mm and a separate evaluation of the spatial and dosimetric accuracy of the plan delivery. Each parameter was plotted on a control chart and control limits were calculated. A capability index (Cpm) was calculated to evaluate the ability of the process to produce results within specifications. The analysis of capability showed that a gamma pass rate of 85% at 3%/1.5 mm was highly achievable as an acceptance criterion for DQA validation using a film-based protocol (Cpm>1.33). 3.4% of DQAs were outside a control limit of 88% for gamma pass rate. The analysis of the out-of-control DQAs helped identify a dosimetric error in our institute for a specific treatment type. We have defined initial tolerance criteria for DQA validations. We have shown that the implementation of a film-based patient-specific DQA protocol with the use of control charts is an effective method to improve patient treatment safety on CyberKnife.
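
    A minimal sketch of the control-chart side of such an analysis is given below, using synthetic gamma pass rates. The individuals-chart limits use the standard moving-range estimate; the one-sided capability index shown is a common variant, not necessarily the exact Cpm form used in the paper:

        import numpy as np

        rng = np.random.default_rng(3)
        pass_rate = np.clip(rng.normal(95.0, 2.0, 350), 0, 100)  # % at 3%/1.5mm

        center = pass_rate.mean()
        mr_bar = np.abs(np.diff(pass_rate)).mean()      # mean moving range
        sigma_hat = mr_bar / 1.128                      # d2 constant for n=2
        lcl = center - 3 * sigma_hat                    # lower control limit

        lsl = 85.0                                      # acceptance criterion
        cpl = (center - lsl) / (3 * sigma_hat)          # one-sided capability

        out_of_control = np.flatnonzero(pass_rate < lcl)  # DQAs to investigate
        print(f"LCL={lcl:.1f}%, Cpl={cpl:.2f}, flagged={out_of_control.size}")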

  6. Three-Dimensional Finite Element Ablative Thermal Response and Thermostructural Design of Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Braun, Robert D.

    2011-01-01

    A finite element ablation and thermal response program is presented for simulation of three-dimensional transient thermostructural analysis. The three-dimensional governing differential equations and finite element formulation are summarized. A novel probabilistic design methodology for thermal protection systems is presented. The design methodology is an eight-step process beginning with a parameter sensitivity study, followed by a deterministic analysis whereby an optimum design can be determined. The design process concludes with a Monte Carlo simulation in which the probabilities of exceeding design specifications are estimated. The design methodology is demonstrated by applying it to the carbon phenolic compression pads of the Crew Exploration Vehicle. The maximum allowed values of bondline temperature and tensile stress are used as the design specifications in this study.
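
    The concluding Monte Carlo step can be sketched in a few lines: sample the uncertain inputs, push them through a response model, and estimate the probability of exceeding the specification. Everything numeric below is an invented stand-in, not Crew Exploration Vehicle data:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000

        # Uncertain inputs (illustrative): heat load multiplier, TPS
        # thickness [m], thermal conductivity multiplier.
        q_mult = rng.normal(1.0, 0.1, n)
        thick = rng.normal(0.05, 0.002, n)
        k_mult = rng.normal(1.0, 0.05, n)

        # Stand-in response surface for peak bondline temperature [K].
        t_bond = 400.0 + 150.0 * q_mult * k_mult / (thick / 0.05)

        spec = 560.0                          # allowed bondline temperature
        p_exceed = np.mean(t_bond > spec)
        print(f"P(T_bond > {spec} K) ~= {p_exceed:.4f}")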

  7. A case study: application of statistical process control tool for determining process capability and sigma level.

    PubMed

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of variation in a process. Various regulatory authorities such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005 provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessment, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is the most advantageous tool for determination of the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process, which is the candidate for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
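
    For the capability side, a minimal worked example follows, with synthetic tablet weights and assumed specification limits. The sigma level is reported with the conventional 1.5-sigma shift; conventions vary:

        import numpy as np

        rng = np.random.default_rng(5)
        weights = rng.normal(250.0, 2.0, 500)     # tablet weight [mg] (fake)

        lsl, usl = 242.5, 257.5                   # +/- 3% spec limits (assumed)
        mu, sigma = weights.mean(), weights.std(ddof=1)

        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)

        # Sigma level corresponding to Cpk (common convention: 3*Cpk, often
        # reported with a +1.5 shift between long- and short-term sigma).
        sigma_level = 3 * cpk + 1.5
        print(f"Cp={cp:.2f}, Cpk={cpk:.2f}, sigma level ~ {sigma_level:.1f}")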

  8. Dispersion and characterization of Thermoplastic Polyurethane/Multiwalled Carbon Nanotubes in co-rotative twin screw extruder

    NASA Astrophysics Data System (ADS)

    Benedito, Adolfo; Buezas, Ignacio; Giménez, Enrique; Galindo, Begoña

    2010-06-01

    The dispersion of multi-walled carbon nanotubes in thermoplastic polyurethanes was carried out in a co-rotating twin screw extruder through a melt blending process. A specific experimental design was prepared taking into account different compounding parameters such as feeding, temperature profile, screw speed, screw design, and carbon nanotube loading. The obtained samples were characterized by thermogravimetric analysis (TGA), light transmission microscopy, dynamic rheometry, and dynamic mechanical analysis. The objective of this work was to study the dispersion quality of the carbon nanotubes and the effect of the different compounding parameters, in order to optimize them for industrial scale-up to final applications.

  9. Theory and simulation of backbombardment in single-cell thermionic-cathode electron guns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, J.  P.; Biedron, S.  G.; Harris, J.  R.

    This paper presents a comparison between simulation results and a first principles analytical model of electron back-bombardment developed at Colorado State University for single-cell, thermionic-cathode rf guns. While most previous work on back-bombardment has been specific to particular accelerator systems, this work is generalized to a wide variety of guns within the applicable parameter space. The merits and limits of the analytic model will be discussed. This paper identifies the three fundamental parameters that drive the back-bombardment process, and demonstrates relative accuracy in calculating the predicted back-bombardment power of a single-cell thermionic gun.

  10. Theory and simulation of backbombardment in single-cell thermionic-cathode electron guns

    DOE PAGES

    Edelen, J.  P.; Biedron, S.  G.; Harris, J.  R.; ...

    2015-04-01

    This paper presents a comparison between simulation results and a first principles analytical model of electron back-bombardment developed at Colorado State University for single-cell, thermionic-cathode rf guns. While most previous work on back-bombardment has been specific to particular accelerator systems, this work is generalized to a wide variety of guns within the applicable parameter space. The merits and limits of the analytic model will be discussed. This paper identifies the three fundamental parameters that drive the back-bombardment process, and demonstrates relative accuracy in calculating the predicted back-bombardment power of a single-cell thermionic gun.

  11. High specific energy, high capacity nickel-hydrogen cell design

    NASA Technical Reports Server (NTRS)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling and its successful recovery by means of additional activation procedures.

  12. Optimization of the Electrochemical Extraction and Recovery of Metals from Electronic Waste Using Response Surface Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Luis A.; Clark, Gemma G.; Lister, Tedd E.

    The rapid growth of electronic waste can be viewed both as an environmental threat and as an attractive source of minerals that can reduce the mining of natural resources and stabilize the market of critical materials, such as rare earths. In this article, response surface methodology was used to optimize a previously developed electrochemical recovery process for base metals from electronic waste using a mild oxidant (Fe3+). Through this process an effective extraction of base metals can be achieved, enriching the concentration of precious metals and significantly reducing the environmental impacts and operational costs associated with waste generation and chemical consumption. The optimization was performed using a bench-scale system specifically designed for this process. Operational parameters such as flow rate, applied current density and iron concentration were optimized to reduce the specific energy consumption of the electrochemical recovery process to 1.94 kWh per kg of metal recovered at a processing rate of 3.3 g of electronic waste per hour.

  13. Optimization of the Electrochemical Extraction and Recovery of Metals from Electronic Waste Using Response Surface Methodology

    DOE PAGES

    Diaz, Luis A.; Clark, Gemma G.; Lister, Tedd E.

    2017-06-08

    The rapid growth of electronic waste can be viewed both as an environmental threat and as an attractive source of minerals that can reduce the mining of natural resources and stabilize the market of critical materials, such as rare earths. In this article, response surface methodology was used to optimize a previously developed electrochemical recovery process for base metals from electronic waste using a mild oxidant (Fe3+). Through this process an effective extraction of base metals can be achieved, enriching the concentration of precious metals and significantly reducing the environmental impacts and operational costs associated with waste generation and chemical consumption. The optimization was performed using a bench-scale system specifically designed for this process. Operational parameters such as flow rate, applied current density and iron concentration were optimized to reduce the specific energy consumption of the electrochemical recovery process to 1.94 kWh per kg of metal recovered at a processing rate of 3.3 g of electronic waste per hour.
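
    The reported specific energy figure follows from E = V·I·t/m. The sketch below picks an invented operating point (voltage, current, recovered-metal fraction) chosen so that the arithmetic reproduces the 1.94 kWh/kg value quoted in the abstract:

        # Back-of-the-envelope check of a specific energy figure. The cell
        # voltage, current, and recovered-metal fraction are assumptions;
        # only the 1.94 kWh/kg target and 3.3 g/h rate come from the abstract.
        cell_voltage = 1.6          # V (assumed)
        current = 2.0               # A (assumed)
        feed_rate_kg_h = 3.3e-3     # kg of e-waste processed per hour

        # Assume roughly half the feed mass is recovered metal.
        metal_rate_kg_h = 0.5 * feed_rate_kg_h
        energy_kwh_per_kg = cell_voltage * current / 1000.0 / metal_rate_kg_h
        print(f"{energy_kwh_per_kg:.2f} kWh per kg of metal recovered")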

  14. Development of numerical processing in children with typical and dyscalculic arithmetic skills—a longitudinal study

    PubMed Central

    Landerl, Karin

    2013-01-01

    Numerical processing has been demonstrated to be closely associated with arithmetic skills; however, our knowledge of the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period, from the beginning of Grade 2, when children were 7;6 years old, to the beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development in numerical processing, while within-task effects remained largely constant and showed low long-term stability before the middle of Grade 3. Children with dyscalculia showed less efficient numerical processing, reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an untypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for identification of children who struggle in their numerical development. PMID:23898310

  15. Image processing for IMRT QA dosimetry.

    PubMed

    Zaini, Mehran R; Forest, Gary J; Loshek, David D

    2005-01-01

    We have automated the determination of the placement location of the dosimetry ion chamber within intensity-modulated radiotherapy (IMRT) fields, as part of streamlining the entire IMRT quality assurance process. This paper describes the mathematical image-processing techniques used to arrive at the appropriate measurement locations within the planar dose maps of the IMRT fields. A specific spot within the found region is identified based on its flatness, radiation magnitude, location, area, and the avoidance of the interleaf spaces. The techniques used include applying a Laplacian, dilation, erosion, region identification, and measurement point selection based on three parameters: the size of the erosion operator, the gradient, and the importance of the area of a region versus its magnitude. These three parameters are adjustable by the user; however, the first requires tweaking only on extremely rare occasions, the gradient requires rare adjustments, and the last parameter needs occasional fine-tuning. This algorithm has been tested in over 50 cases. In about 5% of cases, the algorithm does not find a measurement point due to extremely steep and narrow regions within the fluence maps. In such cases, manual selection of a point is allowed by our code, although an appropriate point is then also difficult to ascertain, since the fluence map does not lend itself to an appropriate measurement point selection.
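
    A simplified rendition of the described pipeline (gradient/Laplacian flatness test, erosion, region labeling, point selection) is sketched below with SciPy; the thresholds are arbitrary stand-ins rather than the paper's tuned parameter values:

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(6)
        dose = ndimage.gaussian_filter(rng.random((128, 128)), sigma=8)  # fake map

        gy, gx = np.gradient(dose)
        grad_mag = np.hypot(gx, gy)

        flat = grad_mag < np.percentile(grad_mag, 20)     # low-gradient pixels
        high = dose > 0.5 * dose.max()                    # adequate magnitude
        candidate = ndimage.binary_erosion(flat & high, iterations=3)

        labels, n = ndimage.label(candidate)
        if n:
            sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
            best = 1 + int(np.argmax(sizes))              # largest flat region
            ys, xs = np.nonzero(labels == best)
            i = np.argmax(dose[ys, xs])                   # highest dose inside
            print("measurement point:", (ys[i], xs[i]))
        else:
            print("no suitable region found; manual selection required")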

  16. Kinetic and thermodynamic parameters for heat denaturation of human recombinant lactoferrin from rice.

    PubMed

    Castillo, Eduardo; Pérez, María Dolores; Franco, Indira; Calvo, Miguel; Sánchez, Lourdes

    2012-06-01

    Heat denaturation of recombinant human lactoferrin (rhLf) from rice with 3 different iron-saturation degrees, holo rhLf (iron-saturated), AsIs rhLf (60% iron saturation), and apo rhLf (iron-depleted), was studied. The 3 forms of rhLf were subjected to heat treatment, and the kinetic and thermodynamic parameters of the denaturation process were determined. Thermal denaturation of rhLf was assessed by measuring the loss of reactivity against specific antibodies. D(t) values (the time needed to reduce immunoreactivity by 90%) decreased with increasing treatment temperature for apo and holo rhLf, the values being higher for the iron-saturated form, which indicates that iron confers thermal stability on rhLf. However, AsIs rhLf showed a different behaviour, with an increase in resistance to heat between 79 °C and 84 °C, so that its kinetic parameters could not be calculated. The heat denaturation process for apo and holo rhLf was best described assuming a reaction order of 1.5. The activation energy of the denaturation process was 648.20 kJ/mol for holo rhLf and 406.94 kJ/mol for apo rhLf, confirming that iron-depleted rhLf is more sensitive to heat treatment than iron-saturated rhLf.
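
    Given D-values at two temperatures, the activation energy follows from an Arrhenius plot. The sketch below uses invented D-values purely to show the procedure; note the paper fits a reaction order of 1.5, which changes the rate constant's units but not the slope calculation:

        import numpy as np

        R = 8.314                                   # J/(mol K)
        T = np.array([72.0, 85.0]) + 273.15         # treatment temperatures [K]
        D = np.array([600.0, 20.0])                 # D-values [s] (stand-ins)

        # k ~ 1/D for a first-order-like loss of immunoreactivity.
        k = 1.0 / D
        slope = (np.log(k[1]) - np.log(k[0])) / (1.0 / T[1] - 1.0 / T[0])
        Ea = -slope * R                             # ln k = ln A - Ea/(R T)
        print(f"Ea ~= {Ea / 1000:.0f} kJ/mol")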

  17. Quantifying Uranium Isotope Ratios Using Resonance Ionization Mass Spectrometry: The Influence of Laser Parameters on Relative Ionization Probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isselhardt, Brett H.

    2011-09-01

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure relative uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process to provide a distinction between uranium atoms and potential isobars without the aid of chemical purification and separation. We explore the laser parameters critical to the ionization process and their effects on the measured isotope ratio. Specifically, the use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of 235U/238U ratios to decrease laser-induced isotopic fractionation. By broadening the bandwidth of the first laser in a 3-color, 3-photon ionization process from a bandwidth of 1.8 GHz to about 10 GHz, the variation in sequential relative isotope abundance measurements decreased from >10% to less than 0.5%. This procedure was demonstrated for the direct interrogation of uranium oxide targets with essentially no sample preparation. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variation in laser parameters on the measured isotope ratio. This work demonstrates that RIMS can be used for the robust measurement of uranium isotope ratios.

  18. Dual-domain mass-transfer parameters from electrical hysteresis: theory and analytical approach applied to laboratory, synthetic streambed, and groundwater experiments

    USGS Publications Warehouse

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Harvey, Judson W.; Lane, John W.

    2014-01-01

    Models of dual-domain mass transfer (DDMT) are used to explain anomalous aquifer transport behavior such as the slow release of contamination and solute tracer tailing. Traditional tracer experiments to characterize DDMT are performed at the flow path scale (meters), which inherently incorporates heterogeneous exchange processes; hence, estimated “effective” parameters are sensitive to experimental design (i.e., duration and injection velocity). Recently, electrical geophysical methods have been used to aid in the inference of DDMT parameters because, unlike traditional fluid sampling, electrical methods can directly sense less-mobile solute dynamics and can target specific points along subsurface flow paths. Here we propose an analytical framework for graphical parameter inference based on a simple petrophysical model explaining the hysteretic relation between measurements of bulk and fluid conductivity arising in the presence of DDMT at the local scale. Analysis is graphical and involves visual inspection of hysteresis patterns to (1) determine the size of paired mobile and less-mobile porosities and (2) identify the exchange rate coefficient through simple curve fitting. We demonstrate the approach using laboratory column experimental data, synthetic streambed experimental data, and field tracer-test data. Results from the analytical approach compare favorably with results from calibration of numerical models and also independent measurements of mobile and less-mobile porosity. We show that localized electrical hysteresis patterns resulting from diffusive exchange are independent of injection velocity, indicating that repeatable parameters can be extracted under varied experimental designs, and these parameters represent the true intrinsic properties of specific volumes of porous media of aquifers and hyporheic zones.

  19. Dual-domain mass-transfer parameters from electrical hysteresis: Theory and analytical approach applied to laboratory, synthetic streambed, and groundwater experiments

    NASA Astrophysics Data System (ADS)

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Harvey, Judson W.; Lane, John W.

    2014-10-01

    Models of dual-domain mass transfer (DDMT) are used to explain anomalous aquifer transport behavior such as the slow release of contamination and solute tracer tailing. Traditional tracer experiments to characterize DDMT are performed at the flow path scale (meters), which inherently incorporates heterogeneous exchange processes; hence, estimated "effective" parameters are sensitive to experimental design (i.e., duration and injection velocity). Recently, electrical geophysical methods have been used to aid in the inference of DDMT parameters because, unlike traditional fluid sampling, electrical methods can directly sense less-mobile solute dynamics and can target specific points along subsurface flow paths. Here we propose an analytical framework for graphical parameter inference based on a simple petrophysical model explaining the hysteretic relation between measurements of bulk and fluid conductivity arising in the presence of DDMT at the local scale. Analysis is graphical and involves visual inspection of hysteresis patterns to (1) determine the size of paired mobile and less-mobile porosities and (2) identify the exchange rate coefficient through simple curve fitting. We demonstrate the approach using laboratory column experimental data, synthetic streambed experimental data, and field tracer-test data. Results from the analytical approach compare favorably with results from calibration of numerical models and also independent measurements of mobile and less-mobile porosity. We show that localized electrical hysteresis patterns resulting from diffusive exchange are independent of injection velocity, indicating that repeatable parameters can be extracted under varied experimental designs, and these parameters represent the true intrinsic properties of specific volumes of porous media of aquifers and hyporheic zones.
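
    The hysteresis mechanism can be reproduced with a toy dual-domain model: force the mobile concentration through an injection-flush cycle and let the less-mobile domain lag via first-order exchange. All parameter values below are illustrative, and the petrophysical relation is reduced to a simple porosity-weighted sum:

        import numpy as np

        theta_m, theta_im = 0.25, 0.10     # mobile / less-mobile porosities
        alpha = 0.05                       # exchange rate coefficient [1/min]
        dt, n = 0.5, 2000                  # time step [min], number of steps

        c_m = np.where(np.arange(n) * dt < 300, 1.0, 0.0)  # forced mobile tracer
        c_im = np.zeros(n)
        for i in range(1, n):              # first-order mass transfer
            c_im[i] = c_im[i-1] + dt * alpha * (c_m[i-1] - c_im[i-1])

        sigma_f = c_m                                     # ~ fluid conductivity
        sigma_b = theta_m * c_m + theta_im * c_im         # ~ bulk conductivity

        # Hysteresis: sigma_b vs sigma_f traces different paths on the
        # injection and flush limbs because c_im lags c_m; right after the
        # flush starts, sigma_b still retains the less-mobile contribution.
        inj_end = np.flatnonzero(c_m > 0)[-1]
        flush_start = inj_end + 1
        print(f"sigma_f: {sigma_f[inj_end]:.2f} -> {sigma_f[flush_start]:.2f}, "
              f"sigma_b: {sigma_b[inj_end]:.3f} -> {sigma_b[flush_start]:.3f}")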

  20. Thorough specification of the neurophysiologic processes underlying behavior and of their manifestation in EEG - demonstration with the go/no-go task.

    PubMed

    Shahaf, Goded; Pratt, Hillel

    2013-01-01

    In this work we demonstrate the principles of a systematic approach to modeling the neurophysiologic processes underlying a behavioral function. The modeling is based upon a flexible simulation tool, which enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, in this work we focus on the insights which emerge from rather accepted assumptions regarding neuronal representation. We show that harnessing even such simple assumptions enables the derivation of significant insights regarding the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data - the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function - as well as with a comprehensive analysis of relevant ERP data. In fact, we show that from the model-based spatiotemporal segregation of the processes it is possible to derive simple and yet effective, theory-based EEG markers differentiating normal and ADHD subjects. We conclude by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.

  1. Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies

    PubMed Central

    Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas

    2009-01-01

    Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e. g., the selection of approximative rate laws in step two as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
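
    As a small-scale illustration of step (3), the sketch below calibrates a single Michaelis-Menten reaction against noisy synthetic data with differential evolution, one of the optimizers that performed well in the paper's benchmark. The network, data, and bounds are toy stand-ins:

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import differential_evolution

        def model(t, s, vmax, km):
            return [-vmax * s[0] / (km + s[0])]   # substrate consumption

        t_obs = np.linspace(0, 10, 20)
        true = (2.0, 1.5)
        s_obs = solve_ivp(model, (0, 10), [5.0], t_eval=t_obs, args=true).y[0]
        s_obs = s_obs + np.random.default_rng(7).normal(0, 0.05, s_obs.size)

        def cost(p):
            sim = solve_ivp(model, (0, 10), [5.0],
                            t_eval=t_obs, args=tuple(p)).y[0]
            return np.sum((sim - s_obs) ** 2)

        result = differential_evolution(cost,
                                        bounds=[(0.1, 10.0), (0.1, 10.0)],
                                        seed=7, tol=1e-8)
        print(result.x)    # should recover vmax ~ 2.0, km ~ 1.5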

  2. Identification of Upper and Lower Level Yield Strength in Materials

    PubMed Central

    Valíček, Jan; Harničárová, Marta; Kopal, Ivan; Palková, Zuzana; Kušnerová, Milena; Panda, Anton; Šepelák, Vladimír

    2017-01-01

    This work evaluates the possibility of identifying mechanical parameters, especially upper and lower yield points, by the analytical processing of specific elements of the topography of surfaces generated with abrasive waterjet technology. We developed a new system of equations, which are connected with each other in such a way that the result of a calculation is a comprehensive mathematical–physical model, which describes numerically as well as graphically the deformation process of material cutting using an abrasive waterjet. The results of our model have been successfully checked against those obtained by means of a tensile test. The main prospect for future applications of the method presented in this article concerns the identification of mechanical parameters associated with the prediction of material behavior. The findings of this study can contribute to a more detailed understanding of the relationships: material properties—tool properties—deformation properties. PMID:28832526

  3. Mesoscale Polymer Dissolution Probed by Raman Spectroscopy and Molecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Tsun-Mei; Xantheas, Sotiris S.; Vasdekis, Andreas E.

    2016-10-13

    The diffusion of various solvents into a polystyrene (PS) matrix was probed experimentally by monitoring the temporal profiles of the Raman spectra and theoretically from molecular dynamics (MD) simulations of the binary system. The simulation results assist in providing a fundamental, molecular-level connection between the mixing/dissolution processes and the difference Δδ = δ(solvent) - δ(PS) in the values of the Hildebrand parameter (δ) between the two components of the binary systems: solvents having values of δ similar to that of PS (small Δδ) exhibit fast diffusion into the polymer matrix, whereas the diffusion slows down considerably when the δ's are different (large Δδ). To this end, the Hildebrand parameter was identified as a useful descriptor that governs the process of mixing in polymer-solvent binary systems. The experiments also provide insight into further refinements of the models specific to non-Fickian diffusion phenomena that need to be used in the simulations.
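
    The descriptor itself is trivial to compute. The sketch below uses textbook-order Hildebrand values (MPa^0.5; polystyrene approximately 18.5), which are approximations for illustration rather than the values used in the study, and the 2.0 cutoff is arbitrary:

        # Using the Hildebrand parameter difference as a mixing descriptor.
        delta_ps = 18.5
        solvents = {'toluene': 18.2, 'acetone': 19.9, 'ethanol': 26.5}

        for name, delta in sorted(solvents.items(),
                                  key=lambda kv: abs(kv[1] - delta_ps)):
            gap = abs(delta - delta_ps)
            regime = ("fast diffusion expected" if gap < 2.0
                      else "slow diffusion expected")
            print(f"{name}: |delta_solvent - delta_PS| = {gap:.1f} -> {regime}")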

  4. Identification of Upper and Lower Level Yield Strength in Materials.

    PubMed

    Valíček, Jan; Harničárová, Marta; Kopal, Ivan; Palková, Zuzana; Kušnerová, Milena; Panda, Anton; Šepelák, Vladimír

    2017-08-23

    This work evaluates the possibility of identifying mechanical parameters, especially upper and lower yield points, by the analytical processing of specific elements of the topography of surfaces generated with abrasive waterjet technology. We developed a new system of equations, which are connected with each other in such a way that the result of a calculation is a comprehensive mathematical-physical model, which describes numerically as well as graphically the deformation process of material cutting using an abrasive waterjet. The results of our model have been successfully checked against those obtained by means of a tensile test. The main prospect for future applications of the method presented in this article concerns the identification of mechanical parameters associated with the prediction of material behavior. The findings of this study can contribute to a more detailed understanding of the relationships: material properties-tool properties-deformation properties.

  5. Adaptive control and noise suppression by a variable-gain gradient algorithm

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.; Mehta, R. S.

    1987-01-01

    An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
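
    A variable-gain normalized LMS update can be sketched as follows; the gain schedule (step size enlarged when the smoothed error is large, shrunk otherwise) is an illustrative stand-in for the paper's algorithm, shown here in a plant identification setting with a parameter jump:

        import numpy as np

        rng = np.random.default_rng(8)
        n, taps = 4000, 16
        x = rng.normal(size=n)                        # input signal

        h_true = rng.normal(size=taps)
        d = np.convolve(x, h_true)[:n] + 0.05 * rng.normal(size=n)
        d[n // 2:] = (np.convolve(x, -h_true)[:n][n // 2:]
                      + 0.05 * rng.normal(size=n // 2))   # plant jump

        w = np.zeros(taps)
        err_smooth = 1.0
        for i in range(taps, n):
            u = x[i - taps:i][::-1]                   # regressor vector
            e = d[i] - w @ u
            err_smooth = 0.99 * err_smooth + 0.01 * e * e
            mu = np.clip(0.05 + 0.5 * err_smooth, 0.05, 1.0)  # variable gain
            w += mu * e * u / (1e-6 + u @ u)          # normalized LMS update

        # Misalignment to the post-jump plant (-h_true); small if re-adapted.
        print(np.linalg.norm(w + h_true))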

  6. Pharmaceutical Particle Engineering via Spray Drying

    PubMed Central

    2007-01-01

    This review covers recent developments in the area of particle engineering via spray drying. The last decade has seen a shift from empirical formulation efforts to an engineering approach based on a better understanding of particle formation in the spray drying process. Microparticles with nanoscale substructures can now be designed and their functionality has contributed significantly to stability and efficacy of the particulate dosage form. The review provides concepts and a theoretical framework for particle design calculations. It reviews experimental research into parameters that influence particle formation. A classification based on dimensionless numbers is presented that can be used to estimate how excipient properties in combination with process parameters influence the morphology of the engineered particles. A wide range of pharmaceutical application examples—low density particles, composite particles, microencapsulation, and glass stabilization—is discussed, with specific emphasis on the underlying particle formation mechanisms and design concepts. PMID:18040761
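
    The dimensionless-number classification can be illustrated with the Peclet number commonly used in this context, Pe = kappa/(8D), comparing droplet surface recession to excipient diffusion; the evaporation rate and diffusivities below are order-of-magnitude stand-ins:

        # Peclet-number morphology screen (illustrative values only).
        kappa = 1.0e-9        # evaporation rate [m^2/s] (d-squared law)
        diffusivities = {'sucrose': 4.0e-10, 'protein': 6.0e-11}  # [m^2/s]

        for excipient, D in diffusivities.items():
            pe = kappa / (8 * D)
            morphology = ("surface enrichment / shell formation" if pe > 1
                          else "near-uniform solid particle")
            print(f"{excipient}: Pe = {pe:.2f} -> {morphology}")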

  7. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used in novel ways to treat high-dimensional and complex biological data before it undergoes classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter remains a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones but also reduce computational time and thus improve efficiency.
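
    For reference, the Gaussian kernel is k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), so parameter selection is a one-dimensional search over sigma. The sketch below grid-searches sigma against a caller-supplied criterion, a stand-in for the paper's reconstruction-error separation between edge and interior samples (not reproduced here):

      import numpy as np

      def gaussian_gram(X, sigma):
          """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
          sq = np.sum(X**2, axis=1)
          d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
          return np.exp(-d2 / (2 * sigma**2))

      def select_sigma(X, criterion, sigmas=np.logspace(-2, 2, 25)):
          """Grid-search sigma; `criterion` maps a Gram matrix to a score
          to maximize (placeholder for the paper's criterion)."""
          scores = [criterion(gaussian_gram(X, s)) for s in sigmas]
          return sigmas[int(np.argmax(scores))]

      # e.g. select_sigma(X, criterion=lambda K: K.std())  # toy criterion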

  8. Application of Metagenomic Sequencing to Food Safety: Detection of Shiga Toxin-Producing Escherichia coli on Fresh Bagged Spinach

    PubMed Central

    Leonard, Susan R.; Mammel, Mark K.; Lacher, David W.

    2015-01-01

    Culture-independent diagnostics reduce the reliance on traditional (and slower) culture-based methodologies. Here we capitalize on advances in next-generation sequencing (NGS) to apply this approach to food pathogen detection utilizing NGS as an analytical tool. In this study, spiking spinach with Shiga toxin-producing Escherichia coli (STEC) following an established FDA culture-based protocol was used in conjunction with shotgun metagenomic sequencing to determine the limits of detection, sensitivity, and specificity levels and to obtain information on the microbiology of the protocol. We show that an expected level of contamination (∼10 CFU/100 g) could be adequately detected (including key virulence determinants and strain-level specificity) within 8 h of enrichment at a sequencing depth of 10,000,000 reads. We also rationalize the relative benefit of static versus shaking culture conditions and the addition of selected antimicrobial agents, thereby validating the long-standing culture-based parameters behind such protocols. Moreover, the shotgun metagenomic approach was informative regarding the dynamics of microbial communities during the enrichment process, including initial surveys of the microbial loads associated with bagged spinach; the microbes found included key genera such as Pseudomonas, Pantoea, and Exiguobacterium. Collectively, our metagenomic study highlights and considers various parameters required for transitioning to such sequencing-based diagnostics for food safety and the potential to develop better enrichment processes in a high-throughput manner not previously possible. Future studies will investigate new species-specific DNA signature target regimens, rational design of medium components in concert with judicious use of additives, such as antibiotics, and alterations in the sample processing protocol to enhance detection. PMID:26386062

  9. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    PubMed

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools: one is no longer limited to proprietary software, but is able to use the processing software best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
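
    To illustrate the offset-linked two-file design, the sketch below reads one array from the flat binary file given an offset and length that would come from the XML metadata; the function name and dtype default are placeholders for illustration, not imzML controlled-vocabulary terms (see imzML.org for the real schema).

      import numpy as np

      def read_spectrum_array(binary_path, offset, length, dtype=np.float32):
          """Seek into the flat binary file and read one m/z or intensity
          array. `offset` and `length` are taken from the XML metadata;
          names and dtype here are placeholders, not imzML CV terms."""
          with open(binary_path, "rb") as f:
              f.seek(offset)
              return np.frombuffer(f.read(length * dtype().nbytes),
                                   dtype=dtype)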

  10. META-X Design Flow Tools

    DTIC Science & Technology

    2013-04-01

    Forces can be computed at specific angular positions, and geometrical parameters can be evaluated. Much higher resolution models are required, along...composition engines (C#, C++, Python, Java) Desert operates on the CyPhy model, converting from a design space alternative structure to a set of design...consists of scripts to execute Dymola, post-processing of results to create metrics, and general management of the job sequence. An earlier version created

  11. Apical polarity in three-dimensional culture systems: where to now?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inman, J.L.; Bissell, Mina

    2010-01-21

    Delineation of the mechanisms that establish and maintain the polarity of epithelial tissues is essential to understanding morphogenesis, tissue specificity and cancer. Three-dimensional culture assays provide a useful platform for dissecting these processes but, as discussed in a recent study in BMC Biology on the culture of mammary gland epithelial cells, multiple parameters that influence the model must be taken into account.

  12. Chromosomes 3B and 4D are associated with several milling and baking quality traits in a soft white spring wheat (Triticum aestivum L.) population

    USDA-ARS?s Scientific Manuscript database

    Wheat is marketed based on end-use quality characteristics and better knowledge of the underlying genetics of specific quality parameters is essential to enhance the breeding process. A set of 188 recombinant inbred lines from a ‘Louise’ by ‘Penawawa’ mapping population was grown in two crop years a...

  13. Effects of alcohol on automated and controlled driving performances.

    PubMed

    Berthelon, Catherine; Gineyt, Guy

    2014-05-01

    Alcohol is the most frequently detected substance in fatal automobile crashes, but its precise mode of action is not always clear. The present study was designed to establish the influence of blood alcohol concentration as a function of the complexity of the scenarios. Road scenarios implying automatic or controlled driving performances were manipulated in order to identify which behavioral parameters were deteriorated. A single-blind, counterbalanced experiment was conducted on a driving simulator. Sixteen experienced drivers (25.3 ± 2.9 years old, 8 men and 8 women) were tested with 0, 0.3, 0.5, and 0.8 g/l of alcohol. Driving scenarios varied: road tracking, car following, and an urban scenario including events inspired by real accidents. Statistical analyses were performed on driving parameters as a function of alcohol level. Automated driving parameters such as standard deviation of lateral position, measured with the road tracking and car following scenarios, were impaired by alcohol, notably at the highest dose. More controlled parameters such as response time to braking and number of crashes when confronted with specific events (urban scenario) were less affected by the alcohol level. Performance decrement was greater with driving scenarios involving automated processes than with scenarios involving controlled processes.

  14. One-Dimensional Transport with Inflow and Storage (OTIS): A Solute Transport Model for Streams and Rivers

    USGS Publications Warehouse

    Runkel, Robert L.

    1998-01-01

    OTIS is a mathematical simulation model used to characterize the fate and transport of water-borne solutes in streams and rivers. The governing equation underlying the model is the advection-dispersion equation with additional terms to account for transient storage, lateral inflow, first-order decay, and sorption. This equation and the associated equations describing transient storage and sorption are solved using a Crank-Nicolson finite-difference solution. OTIS may be used in conjunction with data from field-scale tracer experiments to quantify the hydrologic parameters affecting solute transport. This application typically involves a trial-and-error approach wherein parameter estimates are adjusted to obtain an acceptable match between simulated and observed tracer concentrations. Additional applications include analyses of nonconservative solutes that are subject to sorption processes or first-order decay. OTIS-P, a modified version of OTIS, couples the solution of the governing equation with a nonlinear regression package. OTIS-P determines an optimal set of parameter estimates that minimize the squared differences between the simulated and observed concentrations, thereby automating the parameter estimation process. This report details the development and application of OTIS and OTIS-P. Sections of the report describe model theory, input/output specifications, sample applications, and installation instructions.
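
    For reference, the governing equations solved by OTIS (main channel plus storage zone, with lateral inflow and first-order decay; sorption terms omitted here) take the form

      \frac{\partial C}{\partial t}
        = -\frac{Q}{A}\frac{\partial C}{\partial x}
          + \frac{1}{A}\frac{\partial}{\partial x}\!\left( A D \frac{\partial C}{\partial x} \right)
          + \frac{q_L}{A}\,(C_L - C) + \alpha\,(C_S - C) - \lambda C,

      \frac{dC_S}{dt} = \alpha \frac{A}{A_S}\,(C - C_S) - \lambda_S C_S,

    where C and C_S are the main-channel and storage-zone concentrations, Q is discharge, A and A_S are the channel and storage-zone cross-sectional areas, D is dispersion, q_L and C_L are the lateral inflow rate and its concentration, α is the storage exchange coefficient, and λ, λ_S are first-order decay coefficients.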

  15. Optics Program Simplifies Analysis and Design

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Engineers at Goddard Space Flight Center partnered with software experts at Mide Technology Corporation, of Medford, Massachusetts, through a Small Business Innovation Research (SBIR) contract to design the Disturbance-Optics-Controls-Structures (DOCS) Toolbox, a software suite for performing integrated modeling for multidisciplinary analysis and design. The DOCS Toolbox integrates various discipline models into a coupled process math model that can then predict system performance as a function of subsystem design parameters. The system can be optimized for performance; design parameters can be traded; parameter uncertainties can be propagated through the math model to develop error bounds on system predictions; and the model can be updated, based on component, subsystem, or system level data. The Toolbox also allows the definition of process parameters as explicit functions of the coupled model and includes a number of functions that analyze the coupled system model and provide for redesign. The product is being sold commercially by Nightsky Systems Inc., of Raleigh, North Carolina, a spinoff company that was formed by Mide specifically to market the DOCS Toolbox. Commercial applications include use by any contractors developing large space-based optical systems, including Lockheed Martin Corporation, The Boeing Company, and Northrop Grumman Corporation, as well as companies providing technical audit services, like General Dynamics Corporation.

  16. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas.

    PubMed

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-11-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red-green-blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%-50% and 70%-40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE).
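
    The link from AGL to RMSE runs through the ground sample distance (GSD). A minimal sketch of the standard photogrammetric relation GSD = pixel pitch × AGL / focal length, with illustrative camera values (not the specifications of the article's RGB sensor):

      def ground_sample_distance(agl_m, pixel_pitch_um, focal_mm):
          """GSD in cm/pixel from flight altitude and camera geometry."""
          return (pixel_pitch_um * 1e-6) * agl_m / (focal_mm * 1e-3) * 100.0

      # Illustrative camera: 4.8 um pixel pitch, 15 mm focal length.
      for agl in (30, 50, 80):
          gsd = ground_sample_distance(agl, 4.8, 15)
          print(f"{agl} m AGL -> {gsd:.2f} cm/px")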

  17. Optimization Methods for Spiking Neurons and Networks

    PubMed Central

    Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph

    2011-01-01

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265
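
    As a schematic of the GA-based configuration step (population size, mutation scale, and the fitness function itself are illustrative placeholders, not the paper's settings), a toy real-valued GA might look like this:

      import numpy as np

      def evolve(fitness, bounds, pop=40, gens=60, sigma=0.1, rng=None):
          """Toy GA over a real-valued parameter vector. `fitness` maps a
          parameter vector to a score to maximize, e.g. negative error
          between simulated and target spike-train behavior."""
          rng = rng or np.random.default_rng(0)
          lo, hi = np.asarray(bounds, dtype=float).T
          P = rng.uniform(lo, hi, size=(pop, len(lo)))
          for _ in range(gens):
              scores = np.array([fitness(p) for p in P])
              parents = P[np.argsort(scores)[-pop // 2:]]      # keep best half
              kids = parents + sigma * (hi - lo) * rng.standard_normal(parents.shape)
              P = np.clip(np.vstack([parents, kids]), lo, hi)  # next generation
          return P[np.argmax([fitness(p) for p in P])]

      # e.g. evolve(lambda p: -np.sum((p - 0.3)**2), bounds=[(0, 1)] * 3)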

  18. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas

    PubMed Central

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-01-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red–green–blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%–50% and 70%–40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE). PMID:27809293

  19. Global optimization framework for solar building design

    NASA Astrophysics Data System (ADS)

    Silva, N.; Alves, N.; Pascoal-Faria, P.

    2017-07-01

    The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are automatically generated according to an objective function. A generative model can be optimized according to its parameters; in this way, the best solution for a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function with respect to a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The proposed approach will help in the construction of real buildings that consume less energy, contributing to a more sustainable world.

  20. Microwave induced plasma for solid fuels and waste processing: A review on affecting factors and performance criteria.

    PubMed

    Ho, Guan Sem; Faizal, Hasan Mohd; Ani, Farid Nasir

    2017-11-01

    High-temperature thermal plasma has a major drawback: it consumes a large amount of energy. Therefore, non-thermal plasma, which uses comparatively less energy (for instance, microwave plasma), is more attractive for gasification processes. Microwave-induced plasma gasification also offers advantages in terms of simplicity, compactness, light weight, uniform heating, and the ability to operate under atmospheric pressure, which has gained attention from researchers. The present paper synthesizes the current knowledge available on microwave plasma gasification of solid fuels and waste, specifically the affecting parameters and the resulting performance. The review starts with a brief outline of microwave plasma setups in general, followed by the effect of various operating parameters on the resulting output. Operating parameters including fuel characteristics, fuel injection position, microwave power, addition of steam, oxygen/fuel ratio and plasma working gas flow rate are discussed, along with several performance criteria such as resulting syngas composition, efficiency, carbon conversion, and hydrogen production rate. Based on the present review, fuel retention time is found to be the key parameter that influences gasification performance. Therefore, emphasis on retention time is necessary in order to improve the performance of microwave plasma gasification of solid fuels and wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Utilizing a one-dimensional multispecies model to simulate the nutrient reduction and biomass structure in two types of H2-based membrane-aeration biofilm reactors (H2-MBfR): model development and parametric analysis.

    PubMed

    Wang, Zuowei; Xia, Siqing; Xu, Xiaoyin; Wang, Chenhui

    2016-02-01

    In this study, a one-dimensional multispecies model (ODMSM) was utilized to simulate NO3(-)-N and ClO4(-) reduction performance in two kinds of H2-based membrane-aeration biofilm reactors (H2-MBfR) under different operating conditions (e.g., NO3(-)-N/ClO4(-) loading rates, H2 partial pressure, etc.). Before the simulation, we conducted a sensitivity analysis of key parameters that fluctuate under different environmental conditions; we then used the experimental data to calibrate the more sensitive parameters μ1 and μ2 (maximum specific growth rates of denitrification bacteria and perchlorate reduction bacteria) in the two H2-MBfRs. The differing values of these two key parameters in the two types of reactors may result from the different carbon sources fed to the reactors. The simulation results for six different operating conditions (four in H2-MBfR 1 and two in H2-MBfR 2) confirmed the applicability of the model, and the variation of the removal tendency under different operating conditions could be well simulated. Besides, the rationality of operating parameters (H2 partial pressure, etc.) could be judged, especially under high nutrient loading rates. To a certain degree, the model can provide theoretical guidance for determining the operating parameters under specific conditions in practical application.

  2. A Pipeline for Constructing a Catalog of Multi-method Models of Interacting Galaxies

    NASA Astrophysics Data System (ADS)

    Holincheck, Anthony

    Galaxies represent a fundamental unit of matter for describing the large-scale structure of the universe. One of the major processes affecting the formation and evolution of galaxies is mutual interaction. These interactions can include gravitational tidal distortion, mass transfer, and even mergers. In any hierarchical model, mergers are the key mechanism in galaxy formation and evolution. Computer simulations of interacting galaxies have evolved over the last four decades from simple restricted three-body algorithms to full n-body gravity models. These codes often include sophisticated physical mechanisms such as gas dynamics, supernova feedback, and central black holes. As the level of complexity, and perhaps realism, increases, so does the amount of computational resources needed. These advanced simulations are often used in parameter studies of interactions. They are usually only employed in an ad hoc fashion to recreate the dynamical history of specific sets of interacting galaxies. These specific models are often created with only a few dozen or at most a few hundred sets of simulation parameters being attempted. This dissertation presents a prototype pipeline for modeling specific pairs of interacting galaxies in bulk. The process begins with a simple image of the current disturbed morphology and an estimate of the distance to the system and the mass of the galaxies. With the use of an updated restricted three-body simulation code and the help of Citizen Scientists, the pipeline is able to sample hundreds of thousands of points in parameter space for each system. Through the use of a convenient interface and an innovative scoring algorithm, the pipeline aids researchers in identifying the best set of simulation parameters. This dissertation demonstrates a successful recreation of the disturbed morphologies of 62 pairs of interacting galaxies. The pipeline also provides for examining the level of convergence and uniqueness of the dynamical properties of each system. By creating a population of models for actual systems, the current research is able to compare simulation-based and observational values on a larger scale than previous efforts. Several potential relationships between star formation rate and dynamical time since closest approach are presented.

  3. Choosing the appropriate forecasting model for predictive parameter control.

    PubMed

    Aleti, Aldeida; Moser, Irene; Meedeniya, Indika; Grunske, Lars

    2014-01-01

    All commonly used stochastic optimisation algorithms have to be parameterised to perform effectively. Adaptive parameter control (APC) is an effective method used for this purpose. APC repeatedly adjusts parameter values during the optimisation process for optimal algorithm performance. The assignment of parameter values for a given iteration is based on previously measured performance. In recent research, time-series prediction has been proposed as a method of projecting the probabilities to use for parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future parameter performance based on previous data. Each of the considered prediction methods makes assumptions that the time-series data must conform to for the method to provide accurate projections. Looking specifically at parameters of evolutionary algorithms (EAs), we find that all standard EA parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive parameter control outperforms state-of-the-art parameter control methods when the performance data adheres to the assumptions made by the prediction method. When a parameter's performance data does not adhere to those assumptions, the use of prediction does not have a notable adverse impact on the algorithm's performance.
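
    A minimal sketch of the prediction step, assuming linear regression as the forecaster (consistent with the finding above); the conversion of forecasts into selection probabilities is an invented illustration, not the authors' exact procedure:

      import numpy as np

      def forecast_next(history):
          """One-step-ahead linear-regression forecast of a parameter
          value's performance from its measured history."""
          t = np.arange(len(history))
          slope, intercept = np.polyfit(t, history, 1)
          return slope * len(history) + intercept

      def selection_probs(histories):
          """Turn per-candidate forecasts into selection probabilities
          (clipped to be non-negative; uniform fallback)."""
          f = np.clip([forecast_next(h) for h in histories], 0, None)
          total = f.sum()
          return f / total if total > 0 else np.full(len(f), 1 / len(f))

      # e.g. selection_probs([[0.2, 0.3, 0.5], [0.4, 0.3, 0.1]])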

  4. Oxaliplatin loaded PLAGA microspheres: design of specific release profiles.

    PubMed

    Lagarce, F; Cruaud, O; Deuschel, C; Bayssas, M; Griffon-Etienne, G; Benoit, J

    2002-08-21

    Oxaliplatin-loaded PLAGA microspheres have been prepared by a solvent extraction process. Parameters affecting the in vitro release kinetics were studied in order to design specific release profiles suitable for direct intra-tumoral injection. By varying the nature and the relative proportions of different polymers, we prepared microspheres with good encapsulation efficiency (75-90%) and four different release profiles: zero-order kinetics (type II) and the classical sigmoid release profile with three different sizes of plateau and burst. These results, if correlated with in vivo activity, are promising for enhancing the effectiveness of local tumor treatment.

  5. Parallelization and visual analysis of multidimensional fields: Application to ozone production, destruction, and transport in three dimensions

    NASA Technical Reports Server (NTRS)

    Schwan, Karsten

    1994-01-01

    Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.

  6. Food drying process by power ultrasound.

    PubMed

    de la Fuente-Blanco, S; Riera-Franco de Sarabia, E; Acosta-Aparicio, V M; Blanco-Blanco, A; Gallego-Juárez, J A

    2006-12-22

    Drying processes, which have great significance in the food industry, are frequently based on the use of thermal energy. Nevertheless, such methods may produce structural changes in the products. Consequently, great emphasis is presently given to novel treatments in which quality is preserved. Such is the case with the application of high-power ultrasound, an emergent and promising technology. During the last few years, we have been involved in the development of an ultrasonic dehydration process based on the application of ultrasonic vibration in direct contact with the product. This process has been the object of a detailed laboratory-stage study of the influence of the different parameters involved. This paper deals with the development and testing of a prototype system for the application and evaluation of the process at a pre-industrial stage. The prototype is based on a high-power rectangular plate transducer, working at a frequency of 20 kHz, with a power capacity of about 100 W. In order to study mechanical and thermal effects, the system is provided with a series of sensors which permit monitoring of the process parameters. Specific software has also been developed to facilitate data collection and analysis. The system has been tested with vegetable samples.

  7. Laser Transmission Welding of CFRTP Using Filler Material

    NASA Astrophysics Data System (ADS)

    Berger, Stefan; Schmidt, Michael

    In the automotive industry, increasing environmental awareness is reflected in consistent lightweight construction. In particular, the use of carbon fiber reinforced thermoplastics (CFRTP) plays an increasingly important role. Owing to this material substitution, the demand for adequate joining technologies is growing. Laser transmission welding with filler material provides a way to combine two opaque joining partners while exploiting the process-specific advantages of laser transmission welding. After introducing the new processing variant and the experimental setup used, this paper investigates the process itself and the conditions for a stable process. The influence of the process parameters used on weld quality and process stability is characterized by tensile shear tests. The successfully performed joining of PA 6 CF 42 organic sheets using natural PA 6 as filler material underlines the potential of the described joining method for lightweight design and other industrial applications.

  8. Experiences with the hydraulic design of the high specific speed Francis turbine

    NASA Astrophysics Data System (ADS)

    Obrovsky, J.; Zouhar, J.

    2014-03-01

    The high specific speed Francis turbine is still a suitable alternative for the refurbishment of older hydro power plants with lower heads and worse cavitation conditions. This paper introduces the design process for this kind of turbine, together with a comparison of results from homologous model tests performed in the hydraulic laboratory of ČKD Blansko Engineering. The turbine runner was designed using an optimization algorithm and considering the high specific speed hydraulic profile, meaning that the hydraulic profiles of the spiral case, the distributor and the draft tube were taken from a Kaplan turbine. The optimization was run as an automatic cycle and was based on a simplex optimization method as well as on a genetic algorithm. The number of blades is shown to be the parameter that changes the resulting specific speed of the turbine between ns = 425 and 455, together with the cavitation characteristics. Minimization of cavitation on the blade surface as well as on the inlet edge of the runner blade was taken into account during the design process. The results of the CFD analyses as well as the model tests are presented in the paper.

  9. Finding-specific display presets for computed radiography soft-copy reading.

    PubMed

    Andriole, K P; Gould, R G; Webb, W R

    1999-05-01

    Much work has been done to optimize the display of cross-sectional modality imaging examinations for soft-copy reading (i.e., window/level tissue presets, and format presentations such as tile and stack modes, four-on-one, nine-on-one, etc). Less attention has been paid to the display of digital forms of the conventional projection x-ray. The purpose of this study is to assess the utility of providing presets for computed radiography (CR) soft-copy display, based not on the window/level settings, but on processing applied to the image optimized for visualization of specific findings, pathologies, etc (i.e., pneumothorax, tumor, tube location). It is felt that digital display of CR images based on finding-specific processing presets has the potential to: speed reading of digital projection x-ray examinations on soft copy; improve diagnostic efficacy; standardize display across examination type, clinical scenario, important key findings, and significant negatives; facilitate image comparison; and improve confidence in and acceptance of soft-copy reading. Clinical chest images are acquired using an Agfa-Gevaert (Mortsel, Belgium) ADC 70 CR scanner and Fuji (Stamford, CT) 9000 and AC2 CR scanners. Those demonstrating pertinent findings are transferred over the clinical picture archiving and communications system (PACS) network to a research image processing station (Agfa PS5000), where the optimal image-processing settings per finding, pathologic category, etc, are developed in conjunction with a thoracic radiologist, by manipulating the multiscale image contrast amplification (Agfa MUSICA) algorithm parameters. Soft-copy display of images processed with finding-specific settings are compared with the standard default image presentation for 50 cases of each category. Comparison is scored using a 5-point scale with the positive scale denoting the standard presentation is preferred over the finding-specific processing, the negative scale denoting the finding-specific processing is preferred over the standard presentation, and zero denoting no difference. Processing settings have been developed for several findings including pneumothorax and lung nodules, and clinical cases are currently being collected in preparation for formal clinical trials. Preliminary results indicate a preference for the optimized-processing presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.

  10. Suppression of Metastasis by Primary Tumor and Acceleration of Metastasis Following Primary Tumor Resection: A Natural Law?

    PubMed

    Hanin, Leonid; Rose, Jason

    2018-03-01

    We study metastatic cancer progression through an extremely general individual-patient mathematical model that is rooted in the contemporary understanding of the underlying biomedical processes yet is essentially free of specific biological assumptions of mechanistic nature. The model accounts for primary tumor growth and resection, shedding of metastases off the primary tumor and their selection, dormancy and growth in a given secondary site. However, functional parameters descriptive of these processes are assumed to be essentially arbitrary. In spite of such generality, the model allows for computing the distribution of site-specific sizes of detectable metastases in closed form. Under the assumption of exponential growth of metastases before and after primary tumor resection, we showed that, regardless of other model parameters and for every set of site-specific volumes of detected metastases, the model-based likelihood-maximizing scenario is always the same: complete suppression of metastatic growth before primary tumor resection followed by an abrupt growth acceleration after surgery. This scenario is commonly observed in clinical practice and is supported by a wealth of experimental and clinical studies conducted over the last 110 years. Furthermore, several biological mechanisms have been identified that could bring about suppression of metastasis by the primary tumor and accelerated vascularization and growth of metastases after primary tumor resection. To the best of our knowledge, the methodology for uncovering general biomedical principles developed in this work is new.

  11. Extraction of Polarization Parameters in the p̄p → Ω̄Ω Reaction

    NASA Astrophysics Data System (ADS)

    Perotti, E.

    2018-05-01

    A method to extract the polarization of Ω hyperons produced via the strong interaction is presented. Assuming they are spin 3/2 particles, the corresponding spin density matrix can be written in terms of seven non-zero polarization parameters, all retrievable from the angular distribution of the decay products. Moreover, by considering the full decay chain Ω → ΛK → pπK, the magnitudes of the asymmetry parameters β_Ω and γ_Ω can be obtained. This method, applied here to the specific Ω case, can be generalized to any weakly decaying hyperon and is perfectly suited for the PANDA experiment, where hyperon-antihyperon pairs will be copiously produced in proton-antiproton collisions. The aim is to take a step forward towards understanding the mechanism that governs strangeness production in these processes.

  12. Formulation of chitosan-TPP-pDNA nanocapsules for gene therapy applications

    NASA Astrophysics Data System (ADS)

    Gaspar, V. M.; Sousa, F.; Queiroz, J. A.; Correia, I. J.

    2011-01-01

    The encapsulation of DNA inside nanoparticles meant for gene delivery applications is a challenging process in which several parameters need to be modulated in order to design nanocapsules with specific tailored characteristics. The purpose of this study was to investigate and improve the formulation parameters of plasmid DNA (pDNA) loaded in chitosan nanocapsules using tripolyphosphate (TPP) as a polyanionic crosslinker. Nanocapsule morphology and encapsulation efficiency were analyzed as a function of the chitosan degree of deacetylation and the chitosan-TPP ratio. The manipulation of these parameters influenced not only the particle size but also the encapsulation and release of pDNA. Consequently, the transfection efficiency of the nanoparticulate systems was also enhanced with the optimization of the particle characteristics. Overall, the differently formulated nanoparticulate systems possess singular properties that can be employed according to the desired gene delivery application.

  13. An inverse problem for a mathematical model of aquaponic agriculture

    NASA Astrophysics Data System (ADS)

    Bobak, Carly; Kunze, Herb

    2017-01-01

    Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist that aim to study the system's processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
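
    The inverse-problem workflow with manufactured data can be illustrated generically: simulate a toy model with known parameters, then recover them by least squares. The paper itself uses the collage theorem; the two-species model, parameter values, and fitting method below are invented stand-ins for illustration only.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def rhs(t, y, a, b):
          """Toy fish-plant coupling, for illustration only."""
          fish, plant = y
          return [a * fish * (1 - fish) - b * fish * plant,
                  b * fish * plant - 0.1 * plant]

      def residuals(theta, t_obs, y_obs):
          sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), y_obs[:, 0],
                          t_eval=t_obs, args=tuple(theta))
          return (sol.y - y_obs).ravel()

      # Manufacture data with "true" parameters, then recover them.
      t = np.linspace(0, 20, 50)
      data = solve_ivp(rhs, (0, 20), [0.2, 0.1], t_eval=t, args=(0.8, 0.4)).y
      fit = least_squares(residuals, x0=[0.5, 0.5], args=(t, data))
      print("recovered parameters:", fit.x)   # close to (0.8, 0.4)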

  14. Sensitivity of viscosity Arrhenius parameters to polarity of liquids

    NASA Astrophysics Data System (ADS)

    Kacem, R. B. H.; Alzamel, N. O.; Ouerfelli, N.

    2017-09-01

    Several empirical and semi-empirical equations have been proposed in the literature to estimate liquid viscosity as a function of temperature. In this context, this paper studies the effect of the polarity of liquids on the modeling of the viscosity-temperature dependence, considering particularly Arrhenius-type equations. To this end, the solvents are classified into three groups: nonpolar, borderline polar and polar solvents. Based on adequate statistical tests, we found strong evidence that the polarity of solvents significantly affects the distribution of the Arrhenius-type equation parameters and consequently the modeling of the viscosity-temperature dependence. Thus, specific estimated parameter values for each group of liquids are proposed in this paper. In addition, a comparison of the accuracy of approximation with and without classification of the liquids, using the Wilcoxon signed-rank test, shows a significant discrepancy for the borderline polar solvents. For these, we suggest new specific coefficient values for the simplified Arrhenius-type equation for better estimation accuracy. This result is important given that the accuracy of the estimated viscosity-temperature dependence may considerably affect the design and optimization of several industrial processes.
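
    For reference, the Arrhenius-type viscosity equation in question has the standard form

      \eta(T) = A \exp\!\left( \frac{E_a}{R T} \right)
      \quad\Longleftrightarrow\quad
      \ln \eta = \ln A + \frac{E_a}{R} \cdot \frac{1}{T},

    so the two Arrhenius parameters, the pre-exponential factor A and the activation energy E_a, are obtained from the intercept and slope of ln η plotted against 1/T. The paper's finding is that the fitted (A, E_a) values distribute differently across the nonpolar, borderline polar, and polar groups.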

  15. Assessment of DNA degradation induced by thermal and UV radiation processing: implications for quantification of genetically modified organisms.

    PubMed

    Ballari, Rajashekhar V; Martin, Asha

    2013-12-01

    DNA quality is an important parameter for the detection and quantification of genetically modified organisms (GMOs) using the polymerase chain reaction (PCR). Food processing leads to degradation of DNA, which may impair GMO detection and quantification. This study evaluated the effect of various processing treatments such as heating, baking, microwaving, autoclaving and ultraviolet (UV) irradiation on the relative transgenic content of MON 810 maize using pRSETMON-02, a dual-target plasmid, as a model system. Amongst all the processing treatments examined, autoclaving and UV irradiation resulted in the least recovery of the transgenic (CaMV 35S promoter) and taxon-specific (zein) target DNA sequences. Although a profound impact on DNA degradation was seen during processing, DNA could still be reliably quantified by real-time PCR. The measured mean DNA copy number ratios of the processed samples were in agreement with the expected values. Our study confirms the premise that the final analytical value assigned to a particular sample is independent of the degree of DNA degradation, since transgenic and taxon-specific target sequences of approximately similar length degrade in parallel. The results of our study demonstrate that food processing does not alter the relative quantification of the transgenic content, provided the quantitative assays target shorter amplicons and the difference in amplicon size between the transgenic and taxon-specific genes is minimal. Copyright © 2013 Elsevier Ltd. All rights reserved.
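
    A minimal sketch of the copy-number-ratio arithmetic underlying relative quantification, assuming copy numbers have already been read off qPCR standard curves (the numbers below are illustrative, not from the study):

      def gmo_percent(transgene_copies, taxon_copies):
          """Relative transgenic content as a copy-number ratio (%).
          Robust to degradation only when both amplicons are of similar
          length, as the study above concludes."""
          return 100.0 * transgene_copies / taxon_copies

      print(gmo_percent(930.0, 18600.0))  # -> 5.0 (illustrative values)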

  16. Nuclear morphology for the detection of alterations in bronchial cells from lung cancer: an attempt to improve sensitivity and specificity.

    PubMed

    Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc

    2011-08-01

    To identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer, based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content, and to develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from occupationally exposed patients presenting with bronchopulmonary cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei via image cytometry. A method was developed for analyzing distribution quantiles, compared with simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of the parameters enabled us to identify 13 of 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish the two population types with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher sensitivity and specificity values were observed for at least one of the corresponding quantiles. Analysis of modifications in morphologic parameters via distribution analysis proved promising for screening for bronchopulmonary cancer from sputum.

  17. Approximate Model of Zone Sedimentation

    NASA Astrophysics Data System (ADS)

    Dzianik, František

    2011-12-01

    The process of zone sedimentation is affected by many factors that cannot be expressed analytically. For this reason, zone settling is evaluated in practice experimentally or by applying an empirical mathematical description of the process. The paper presents the development of an approximate model of zone settling, i.e., a general function that properly approximates the behaviour of the settling process over its entire range and under various conditions. Furthermore, the specification of the model parameters by regression analysis of settling test results is shown. The suitability of the model is assessed graphically and by statistical correlation coefficients. The approximate model could also be useful in simplifying the process design of continuous settling tanks and thickeners.

  18. Estimation of the specific surface area for a porous carrier.

    PubMed

    Levstek, Meta; Plazl, Igor; Rouse, Joseph D

    2010-03-01

    In biofilm systems, treatment performance is primarily dependent upon the available biofilm growth surface area in the reactor. Specific surface area is thus a parameter that allows for making comparisons between different carrier technologies used for wastewater treatment. In this study, we estimated the effective surface area for a spherical, porous polyvinyl alcohol (PVA) gel carrier (Kuraray) that has previously demonstrated effectiveness for retention of autotrophic and heterotrophic biomass. This was accomplished by applying the GPS-X modeling tool (Hydromantis) to a comparative analysis of two moving-bed biofilm reactor (MBBR) systems. One system consisted of a lab-scale reactor that was fed synthetic wastewater under autotrophic conditions where only the nitrification process was studied. The other was a pre-denitrification pilot-scale plant that was fed real, primary-settled wastewater. Calibration of an MBBR process model for both systems indicated an effective specific surface area for PVA gel of 2500 m2/m3, versus a specific surface area of 1000 m2/m3 when only the outer surface of the gel beads is considered. In addition, the maximum specific growth rates for autotrophs and heterotrophs were estimated to be 1.2/day and 6.0/day, respectively.
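
    The outer-surface baseline quoted above follows from sphere geometry: the surface per unit carrier volume is a = 6/d. A minimal sketch, with the bead diameter and fill fraction treated as illustrative inputs rather than Kuraray specifications:

      def outer_specific_area(d_m, fill_fraction=1.0):
          """Outer surface area of spherical beads per m^3 of reactor:
          a = 6/d per m^3 of carrier, scaled by the carrier fill fraction.
          Porosity raises the effective value beyond this geometric
          baseline (2500 m2/m3 in the calibration above)."""
          return 6.0 / d_m * fill_fraction

      print(outer_specific_area(0.006))        # 1000 m2/m3 at d = 6 mm
      print(outer_specific_area(0.004, 0.15))  # partial-fill example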

  19. Method of Preparation AZP4330 PR Pattern with Edge Slope 40°

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Zhao, Hongyuan; Yu, Yuanwei; Zhu, Jian

    2018-03-01

    When the edge underlying a multilayer film is steep or angular, stress in the film is concentrated near the edge, which greatly reduces the reliability of electronic components. Moreover, MEMS devices sometimes require special structures such as a slope with a specific angle, so that a metal line can carry the signal to the output pad over the slope instead of over a deep step. To address these problems, a lithography method for preparing structures with a sloped edge is studied. In this paper, guided by Kirchhoff scalar diffraction theory, we vary the contact exposure gap and the post-baking time at a specific temperature to determine their effect on the edge angle of the photoresist. SEM inspection of the results shows that, using AZP4330 photoresist, a PR pattern with a 40° edge slope can be obtained, and the specific process parameters are reported.

  20. Effect of water volume based on water absorption and mixing time on physical properties of tapioca starch – wheat composite bread

    NASA Astrophysics Data System (ADS)

    Prameswari, I. K.; Manuhara, G. J.; Amanto, B. S.; Atmaka, W.

    2018-05-01

    The use of tapioca starch in bread processing changes the level of water absorption by the dough, while sufficient mixing time allows optimal water absorption. This research aims to determine the effect of variations in water volume and mixing time on the physical properties of tapioca starch-wheat composite bread, and to identify the best processing method for the composite bread. This research used a Complete Randomized Factorial Design (CRFD) with two factors: variations of water volume (111.8 ml, 117.4 ml, 123 ml) and mixing time (16 minutes, 17 minutes 36 seconds, 19 minutes 12 seconds). The results showed that water volume significantly affected dough volume, bread volume and specific volume, baking expansion, and crust thickness. Mixing time significantly affected dough volume and specific volume, bread volume and specific volume, baking expansion, bread height, and crust thickness. The combination of water volume and mixing time significantly affected all physical property parameters except crust thickness.

  1. On the Failure of Correlating Partitioned Electrostatic Surface Potentials Using Bader’s Atoms-in-Molecules Theory to Impact Sensitivities

    DTIC Science & Technology

    2013-04-01

    [Snippet from the report's front matter: figures of labeled molecular structures (e.g., picric acid) and tables of atom-specific and area-weighted atom-specific Politzer parameters computed at the PBE/6-31G** level for energetic molecules including picric acid and DATB.]

  2. Petroleum-resource appraisal and discovery rate forecasting in partially explored regions

    USGS Publications Warehouse

    Drew, Lawrence J.; Schuenemeyer, J.H.; Root, David H.; Attanasi, E.D.

    1980-01-01

    PART A: A model of the discovery process can be used to predict the size distribution of future petroleum discoveries in partially explored basins. The parameters of the model are estimated directly from the historical drilling record, rather than being determined by assumptions or analogies. The model is based on the concept of the area of influence of a drill hole, which states that the area of a basin exhausted by a drill hole varies with the size and shape of targets in the basin and with the density of previously drilled wells. It also uses the concept of discovery efficiency, which measures the rate of discovery within several classes of deposit size. The model was tested using 25 years of historical exploration data (1949-74) from the Denver basin. From the trend in the discovery rate (the number of discoveries per unit area exhausted), the discovery efficiencies in each class of deposit size were estimated. Using pre-1956 discovery and drilling data, the model accurately predicted the size distribution of discoveries for the 1956-74 period. PART B: A stochastic model of the discovery process has been developed to predict, using past drilling and discovery data, the distribution of future petroleum deposits in partially explored basins, and the basic mathematical properties of the model have been established. The model has two exogenous parameters, the efficiency of exploration and the effective basin size. The first parameter is the ratio of the probability that an actual exploratory well will make a discovery to the probability that a randomly sited well will make a discovery. The second parameter, the effective basin size, is the area of that part of the basin in which drillers are willing to site wells. Methods for estimating these parameters from the locations of past wells and from the sizes and locations of past discoveries were derived, and the properties of estimators of the parameters were studied by simulation. PART C: This study examines the temporal properties and determinants of petroleum exploration for firms operating in the Denver basin. Expectations associated with the favorability of a specific area are modeled by using distributed lag proxy variables (of previous discoveries) and predictions from a discovery process model. In the second part of the study, a discovery process model is linked with a behavioral well-drilling model in order to predict the supply of new reserves. Results of the study indicate that the positive effects of new discoveries on drilling increase for several periods and then diminish to zero within 2½ years after the deposit discovery date. Tests of alternative specifications of the argument of the distributed lag function using alternative minimum size classes of deposits produced little change in the model's explanatory power. This result suggests that, once an exploration play is underway, favorable operator expectations are sustained by the quantity of oil found per time period rather than by the discovery of deposits of a specific size. When predictions of the value of undiscovered deposits (generated from a discovery process model) were substituted for the expectations variable in models used to explain exploration effort, operator behavior was found to be consistent with these predictions. This result suggests that operators, on the average, were efficiently using information contained in the discovery history of the basin in carrying out their exploration plans. Comparison of the two approaches to modeling unobservable operator expectations indicates that the two models produced very similar results. The integration of the behavioral well-drilling model and the discovery process model to predict additions to reserves per unit time was successful only when the quarterly predictions were aggregated to annual values. The accuracy of the aggregated predictions was also found to be reasonably robust to errors in predictions from the behavioral well-drilling equation.

  3. Prospect theory reflects selective allocation of attention.

    PubMed

    Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph

    2018-02-01

    There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
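
    For reference, the CPT constructs being tied to attention are the parameters of the standard value and probability-weighting functions,

      v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0 \end{cases},
      \qquad
      w(p) = \frac{p^{\gamma}}{\left( p^{\gamma} + (1-p)^{\gamma} \right)^{1/\gamma}},

    where λ > 1 captures loss aversion, α and β capture outcome sensitivity for gains and losses, and γ governs the curvature of probability weighting, the three constructs the experiments link to attention allocation.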

  4. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to carry out lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. Lunar terrain can then be reconstructed based on photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. The observation program and specific solution methods for the installation parameters are then introduced. The accuracy of the parametric solution is analyzed using observations from the PCAM scientific validation experiment, which is used to verify the PCAM detection process, ground data processing methods, product quality, and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  5. Homogenization Theory for the Prediction of Obstructed Solute Diffusivity in Macromolecular Solutions

    PubMed Central

    Donovan, Preston; Chehreghanianzabi, Yasaman; Rathinam, Muruhan; Zustiak, Silviya Petrova

    2016-01-01

    The study of diffusion in macromolecular solutions is important in many biomedical applications such as separations, drug delivery, and cell encapsulation, and key for many biological processes such as protein assembly and interstitial transport. Not surprisingly, multiple models for the a-priori prediction of diffusion in macromolecular environments have been proposed. However, most models include parameters that are not readily measurable, are specific to the polymer-solute-solvent system, or are fitted and do not have a physical meaning. Here, for the first time, we develop a homogenization theory framework for the prediction of effective solute diffusivity in macromolecular environments based on physical parameters that are easily measurable and not specific to the macromolecule-solute-solvent system. Homogenization theory is useful for situations where knowledge of fine-scale parameters is used to predict bulk system behavior. As a first approximation, we focus on a model where the solute is subjected to obstructed diffusion via stationary spherical obstacles. We find that the homogenization theory results agree well with computationally more expensive Monte Carlo simulations. Moreover, the homogenization theory agrees with effective diffusivities of a solute in dilute and semi-dilute polymer solutions measured using fluorescence correlation spectroscopy. Lastly, we provide a mathematical formula for the effective diffusivity in terms of a non-dimensional and easily measurable geometric system parameter. PMID:26731550
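
    The paper's closed-form expression is not reproduced in the abstract; as a stand-in, the following sketch evaluates the classical Maxwell estimate of obstructed diffusivity around impermeable spherical obstacles, which illustrates the kind of volume-fraction dependence such homogenization formulas capture.

    ```python
    def maxwell_obstructed_diffusivity(phi):
        """Classical Maxwell estimate of the relative diffusivity D_eff/D0 for a
        solute diffusing around impermeable spherical obstacles occupying a
        volume fraction phi (reasonable at dilute-to-moderate fractions)."""
        return 2.0 * (1.0 - phi) / (2.0 + phi)

    for phi in (0.05, 0.10, 0.20, 0.30):
        print(f"phi = {phi:.2f}  ->  D_eff/D0 = {maxwell_obstructed_diffusivity(phi):.3f}")
    ```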

  6. Review of concrete biodeterioration in relation to nuclear waste.

    PubMed

    Turick, Charles E; Berry, Christopher J

    2016-01-01

    Storage of radioactive waste in concrete structures is a means of containing wastes and related radionuclides generated from nuclear operations in many countries. Previous efforts related to microbial impacts on concrete structures that are used to contain radioactive waste showed that microbial activity can play a significant role in the process of concrete degradation and ultimately structural deterioration. This literature review examines the research in this field and is focused on specific parameters that are applicable to modeling and prediction of the fate of concrete structures used to store or dispose of radioactive waste. Rates of concrete biodegradation vary with the environmental conditions, illustrating a need to understand the bioavailability of key compounds involved in microbial activity. Specific parameters require pH and osmotic pressure to be within a certain range to allow for microbial growth as well as the availability and abundance of energy sources such as components involved in sulfur, iron and nitrogen oxidation. Carbon flow and availability are also factors to consider in predicting concrete biodegradation. The microbial contribution to degradation of the concrete structures containing radioactive waste is a constant possibility. The rate and degree of concrete biodegradation is dependent on numerous physical, chemical and biological parameters. Parameters to focus on for modeling activities and possible options for mitigation that would minimize concrete biodegradation are discussed and include key conditions that drive microbial activity on concrete surfaces. Copyright © 2015. Published by Elsevier Ltd.

  7. Analyses of microstructural and elastic properties of porous SOFC cathodes based on focused ion beam tomography

    NASA Astrophysics Data System (ADS)

    Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan

    2015-01-01

    Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. The elastic properties and microstructural parameters can be measured by modelling digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements depends greatly on the processing of the raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed using dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated, and their sensitivity to the grayscale threshold value applied during image segmentation was analyzed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, each of which may affect, to a varying extent, the elastic properties simulated from the microstructures using FEM. Results showed that different threshold ranges produced different degrees of sensitivity for a given parameter. The estimated porosity and tortuosity were more sensitive than the surface area to volume ratio, and pore and neck sizes were found to be less sensitive than particle size. The results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
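
    A minimal sketch of the threshold-sensitivity idea: sweep the grayscale segmentation threshold over a volume and watch the estimated porosity respond. The volume and threshold values below are synthetic stand-ins for real FIB/SEM data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic 8-bit grayscale "FIB/SEM" volume: bright solid phase, dark pores.
    volume = rng.normal(150, 40, size=(64, 64, 64)).clip(0, 255).astype(np.uint8)

    def porosity(vol, threshold):
        """Classify voxels below the grayscale threshold as pore space."""
        return float(np.mean(vol < threshold))

    # Sweep the segmentation threshold to see how strongly porosity responds.
    for t in (90, 100, 110, 120, 130):
        print(f"threshold = {t:3d}  ->  porosity = {porosity(volume, t):.3f}")
    ```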

  8. On the Use of the Beta Distribution in Probabilistic Resource Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olea, Ricardo A., E-mail: olea@usgs.gov

    2011-12-15

    The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
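
    One common recipe for turning the familiar (min, mode, max) specification into beta shape parameters is the PERT re-parameterization, sketched below; this is a standard construction offered for illustration and is not necessarily among the suggestions made in the paper.

    ```python
    import numpy as np

    def pert_to_beta(lo, mode, hi, lam=4.0):
        """Map a (min, mode, max) specification to beta shape parameters via the
        PERT re-parameterization; lam = 4 gives the classical PERT distribution."""
        alpha = 1.0 + lam * (mode - lo) / (hi - lo)
        beta = 1.0 + lam * (hi - mode) / (hi - lo)
        return alpha, beta

    lo, mode, hi = 10.0, 30.0, 100.0          # illustrative resource bounds
    a, b = pert_to_beta(lo, mode, hi)
    rng = np.random.default_rng(42)
    samples = lo + (hi - lo) * rng.beta(a, b, size=100_000)  # rescale to [lo, hi]
    print(f"alpha = {a:.2f}, beta = {b:.2f}, sample mean = {samples.mean():.1f}")
    ```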

  9. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  10. Characterization and durability testing of plasma-sprayed zirconia-yttria and hafnia-yttria thermal barrier coatings. Part 2: Effect of spray parameters on the performance of several hafnia-yttria and zirconia-yttria coatings

    NASA Technical Reports Server (NTRS)

    Miller, Robert A.; Leissler, George W.

    1993-01-01

    This is the second of two reports which discuss initial experiments on thermal barrier coatings prepared and tested in newly upgraded plasma spray and burner rig test facilities at LeRC. The first report, part 1, describes experiments designed to establish the spray parameters for the baseline zirconia-yttria coating. Coating quality was judged primarily by the response to burner rig exposure, together with a variety of other characterization approaches including thermal diffusivity measurements. That portion of the study showed that the performance of the baseline NASA coating was not strongly sensitive to processing parameters. In this second part of the study, new hafnia-yttria coatings were evaluated with respect to both baseline and alternate zirconia-yttria coatings. The hafnia-yttria and the alternate zirconia-yttria coatings were very sensitive to plasma-spray parameters in that high-quality coatings were obtained only when specific parameters were used. The reasons for this important observation are not understood.

  11. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110,000 independent hydrologically based spatial modeling units covering the CONUS, and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and by model performance statistic (mean, coefficient of variation, and autoregressive lag 1). The identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
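
    A minimal sketch of a FAST-style sensitivity analysis using the SALib package, with a toy algebraic model standing in for PRMS; the parameter names and bounds are invented for illustration.

    ```python
    import numpy as np
    from SALib.sample import fast_sampler
    from SALib.analyze import fast

    # Toy stand-in for a hydrologic response as a function of three parameters.
    problem = {
        "num_vars": 3,
        "names": ["snow_melt_coef", "soil_moist_max", "gw_flow_coef"],
        "bounds": [[0.01, 0.1], [1.0, 10.0], [0.001, 0.5]],
    }

    X = fast_sampler.sample(problem, 1000)       # FAST sampling design
    Y = 50 * X[:, 0] + np.sqrt(X[:, 1]) + 0.1 * np.sin(20 * X[:, 2])  # toy model

    Si = fast.analyze(problem, Y)                # first-order and total-order indices
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name:15s}  S1 = {s1:.3f}  ST = {st:.3f}")
    ```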

  12. Modeling lactose hydrolysis for efficiency and selectivity: Toward the preservation of sialyloligosaccharides in bovine colostrum whey permeate.

    PubMed

    de Moura Bell, Juliana M L N; Aquino, Leticia F M C; Liu, Yan; Cohen, Joshua L; Lee, Hyeyoung; de Melo Silva, Vitor L; Rodrigues, Maria I; Barile, Daniela

    2016-08-01

    Enzymatic hydrolysis of lactose has been shown to improve the efficiency and selectivity of membrane-based separations toward the recovery of bioactive oligosaccharides. Achieving maximum lactose hydrolysis requires intrinsic process optimization for each specific substrate, but the effects of those processing conditions on the target oligosaccharides are not well understood. Response surface methodology was used to investigate the effects of pH (3.25-8.25), temperature (35-55°C), reaction time (6-58 min), and amount of enzyme (0.05-0.25%) on the efficiency of lactose hydrolysis by β-galactosidase and on the preservation of biologically important sialyloligosaccharides (3'-sialyllactose, 6'-sialyllactose, and 6'-sialyl-N-acetyllactosamine) naturally present in bovine colostrum whey permeate. A central composite rotatable design was used. In general, β-galactosidase activity was favored at pH values ranging from 3.25 to 5.75, with other operational parameters having a less pronounced effect. A pH of 4.5 allowed for the use of a shorter reaction time (19 min), lower temperature (40°C), and reduced amount of enzyme (0.1%), whereas complete hydrolysis at a higher pH (5.75) required greater values for these operational parameters. The total amount of sialyloligosaccharides was not significantly altered by the reaction parameters evaluated, suggesting both the specificity of β-galactosidase from Aspergillus oryzae toward lactose and the stability of the oligosaccharides at the pH, temperature, and reaction times evaluated. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
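
    A minimal sketch of the response-surface idea behind such central composite designs: fit a second-order polynomial to design-point data by least squares. The design points and response values below are synthetic, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic design points in coded units: pH and temperature; response = yield.
    X = rng.uniform(-1, 1, size=(30, 2))
    y = 90 - 15 * X[:, 0] ** 2 - 5 * X[:, 1] ** 2 + 3 * X[:, 0] + rng.normal(0, 1, 30)

    # Second-order response surface: 1, x1, x2, x1^2, x2^2, x1*x2.
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("fitted coefficients:", np.round(coef, 2))
    ```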

  13. Kalman Filter for Calibrating a Telescope Focal Plane

    NASA Technical Reports Server (NTRS)

    Kang, Bryan; Bayard, David

    2006-01-01

    The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.
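
    For readers unfamiliar with the underlying machinery, here is a generic linear Kalman filter predict/update step on a toy one-dimensional alignment problem; this is a textbook sketch, not the IPF filter itself, and all numbers are illustrative.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle of a linear Kalman filter.
        x: state estimate, P: state covariance, z: new measurement."""
        x_pred = F @ x                            # predict
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R                  # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)     # update
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy example: estimate a constant alignment angle from noisy measurements.
    F = np.eye(1)
    H = np.eye(1)
    Q = np.array([[1e-6]])
    R = np.array([[1e-2]])
    x, P = np.zeros(1), np.eye(1)
    rng = np.random.default_rng(0)
    for _ in range(100):
        z = np.array([0.3]) + rng.normal(0.0, 0.1, 1)   # true angle: 0.3 rad
        x, P = kalman_step(x, P, z, F, H, Q, R)
    print(f"estimated angle: {x[0]:.3f} rad")
    ```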

  14. Computational dissection of human episodic memory reveals mental process-specific genetic profiles

    PubMed Central

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G.; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J.-F.

    2015-01-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory. PMID:26261317

  15. Dissolution process analysis using model-free Noyes-Whitney integral equation.

    PubMed

    Hattori, Yusuke; Haruna, Yoshimasa; Otsuka, Makoto

    2013-02-01

    The drug dissolution process of solid dosage forms is theoretically described by the Noyes-Whitney-Nernst equation. In practice, however, analysis of the process is usually demonstrated by assuming particular models; such model-dependent methods are idealized and impose limitations. In this study, a Noyes-Whitney integral equation was proposed and applied to represent the drug dissolution profiles of a solid formulation via the non-linear least squares (NLLS) method. The integral equation is a model-free formula involving the dissolution rate constant as a parameter. In the present study, several solid formulations were prepared by varying the blending time of magnesium stearate (MgSt) with theophylline monohydrate, α-lactose monohydrate, and crystalline cellulose. The formula represented the dissolution profiles excellently, and the rate constant and specific surface area could thereby be obtained by the NLLS method. Because prolonged blending coated the particle surfaces with MgSt, water permeation was hindered by this layer, which prevented dissociation into disintegrant particles. Consequently, the solid formulations did not disintegrate; nevertheless, the specific surface area gradually increased during dissolution. X-ray CT observation supported this result, showing that surface roughening dominated over dissolution, so that the specific surface area of the solid formulation gradually increased. Copyright © 2012 Elsevier B.V. All rights reserved.
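
    One commonly used integrated form of the Noyes-Whitney equation, C(t) = C_s(1 - exp(-kt)), can be fitted by NLLS as sketched below; note that this fixed-form fit is simpler than the model-free integral equation the paper proposes, and the data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def dissolved(t, c_s, k):
        """Integrated Noyes-Whitney profile under a constant surface area:
        C(t) = C_s * (1 - exp(-k * t))."""
        return c_s * (1.0 - np.exp(-k * t))

    # Synthetic dissolution data (percent released vs. minutes), illustrative only.
    t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0])
    c = np.array([0.0, 28.0, 48.0, 73.0, 86.0, 95.0, 98.0])

    (c_s, k), _ = curve_fit(dissolved, t, c, p0=(100.0, 0.05))
    print(f"C_s = {c_s:.1f} %, k = {k:.3f} 1/min")
    ```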

  16. Computational dissection of human episodic memory reveals mental process-specific genetic profiles.

    PubMed

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J-F

    2015-09-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory.

  17. Biomolecular Force Field Parameterization via Atoms-in-Molecule Electron Density Partitioning.

    PubMed

    Cole, Daniel J; Vilseck, Jonah Z; Tirado-Rives, Julian; Payne, Mike C; Jorgensen, William L

    2016-05-10

    Molecular mechanics force fields, which are commonly used in biomolecular modeling and computer-aided drug design, typically treat nonbonded interactions using a limited library of empirical parameters that are developed for small molecules. This approach does not account for polarization in larger molecules or proteins, and the parametrization process is labor-intensive. Using linear-scaling density functional theory and atoms-in-molecule electron density partitioning, environment-specific charges and Lennard-Jones parameters are derived directly from quantum mechanical calculations for use in biomolecular modeling of organic and biomolecular systems. The proposed methods significantly reduce the number of empirical parameters needed to construct molecular mechanics force fields, naturally include polarization effects in charge and Lennard-Jones parameters, and scale well to systems comprised of thousands of atoms, including proteins. The feasibility and benefits of this approach are demonstrated by computing free energies of hydration, properties of pure liquids, and the relative binding free energies of indole and benzofuran to the L99A mutant of T4 lysozyme.

  18. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two objective values of winding products, a mechanical property (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished, and the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. A verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing.

  19. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two objective values of winding products, a mechanical property (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished, and the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. A verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing. PMID:29385048

  20. Piezoelectric Behaviour of Sputtered Aluminium Nitride Thin Film for High Frequency Ultrasonic Sensors

    NASA Astrophysics Data System (ADS)

    Herzog, T.; Walter, S.; Bartzsch, H.; Gittner, M.; Gloess, D.; Heuer, H.

    2011-06-01

    Many new materials and processes require non-destructive evaluation at higher resolutions by phased array ultrasonic techniques in a frequency range up to 250 MHz. This paper presents aluminium nitride as a promising piezoelectric sensor material for the considered frequency range, with potential for future high-frequency phased array applications. This work covers the fundamental development of piezoelectric aluminium nitride films with thicknesses of up to 10 μm. We investigated and optimized the deposition process of the aluminium nitride thin film layers with respect to their piezoelectric behavior. For this purpose, a specific test setup and a measuring station were created to determine the piezoelectric charge constant (d33) and the electro-acoustic behavior of the sensor. Single element transducers were deposited on silicon substrates with aluminium top and bottom electrodes, using different parameters for the magnetron sputter process, such as pressure and bias voltage. Acoustic measurements up to 500 MHz in pulse-echo mode were then carried out, and the electrical and electromechanical properties were qualified. For two different parameter sets of the sputtering process, excellent piezoelectric charge constants of up to about 8.0 pC/N were obtained.

  1. Synaptic consolidation as a temporally variable process: Uncovering the parameters modulating its time-course.

    PubMed

    Casagrande, Mirelle A; Haubrich, Josué; Pedraza, Lizeth K; Popik, Bruno; Quillfeldt, Jorge A; de Oliveira Alvares, Lucas

    2018-04-01

    Memories are not instantly created in the brain; they require a gradual stabilization process called consolidation to be stored and to persist in a long-lasting manner. However, little is known about whether this time-dependent process is dynamic or static, or about the factors that might modulate it. Here, we hypothesized that the time-course of consolidation could be affected by specific learning parameters, changing the time window during which memory is susceptible to retroactive interference. In the rodent contextual fear conditioning paradigm, we compared weak and strong training protocols and found that in the latter, memory is susceptible to post-training hippocampal inactivation for a shorter period of time. The accelerated consolidation process triggered by the strong training was mediated by glucocorticoids, since this effect was blocked by pre-training administration of metyrapone. In addition, we found that pre-exposure to the training context also accelerates fear memory consolidation. Hence, our results demonstrate that the time window in which memory is susceptible to post-training interference varies depending on fear conditioning intensity and contextual familiarity. We propose that the time-course of memory consolidation is dynamic, being directly affected by attributes of the learning experiences. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Use of mechanistic simulations as a quantitative risk-ranking tool within the quality by design framework.

    PubMed

    Stocker, Elena; Toschkoff, Gregor; Sacher, Stephan; Khinast, Johannes G

    2014-11-20

    The purpose of this study is to evaluate the use of computer simulations for generating quantitative knowledge as a basis for risk ranking and mechanistic process understanding, as required by ICH Q9 on quality risk management systems. In this specific publication, the main focus is the demonstration of a risk assessment workflow, including a computer simulation for the generation of mechanistic understanding of active tablet coating in a pan coater. Process parameter screening studies are statistically planned under consideration of impacts on a potentially critical quality attribute, i.e., coating mass uniformity. Based on computer simulation data, the process failure mode and effects analysis of the risk factors is performed. This results in a quantitative criticality assessment of process parameters and the risk priority evaluation of failure modes. The factor for a quantitative reassessment of the criticality and risk priority is the coefficient of variation, which represents the coating mass uniformity. The major conclusion drawn from this work is a successful demonstration of the integration of computer simulation in the risk management workflow, leading to an objective and quantitative risk assessment. Copyright © 2014. Published by Elsevier B.V.
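
    The reassessment metric itself is straightforward to compute; a minimal sketch, with simulated per-tablet coating masses standing in for the simulation output:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    coating_mass = rng.normal(5.0, 0.25, size=1000)     # mg per tablet, simulated

    cv = coating_mass.std(ddof=1) / coating_mass.mean()
    print(f"coating mass CV = {100 * cv:.1f} %")        # lower CV = better uniformity
    ```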

  3. Influence of operational parameters on nitrogen removal efficiency and microbial communities in a full-scale activated sludge process.

    PubMed

    Kim, Young Mo; Cho, Hyun Uk; Lee, Dae Sung; Park, Donghee; Park, Jong Moon

    2011-11-01

    To improve the efficiency of total nitrogen (TN) removal, solid retention time (SRT) and internal recycling ratio controls were selected as operating parameters in a full-scale activated sludge process treating high-strength industrial wastewater. Increased biomass concentration via SRT control enhanced TN removal. Also, decreasing the internal recycling ratio restored the nitrification process, which had been inhibited by phenol shock loading. Therefore, physiological alteration of the bacterial populations by application of specific operational strategies may stabilize the activated sludge process. Additionally, two dominant ammonia oxidizing bacteria (AOB) populations, Nitrosomonas europaea and Nitrosomonas nitrosa, were observed in all samples with no change in the community composition of AOB. In the nitrification tank, the Nitrobacter populations consistently exceeded those of Nitrospira within the nitrite oxidizing bacteria (NOB) community. Using quantitative real-time PCR (qPCR), nirS, the nitrite-reducing functional gene, was observed to predominate in the activated sludge of the anoxic tank, whereas narG, the nitrate-reducing functional gene, was the least abundant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Image processing analysis of geospatial uav orthophotos for palm oil plantation monitoring

    NASA Astrophysics Data System (ADS)

    Fahmi, F.; Trianda, D.; Andayani, U.; Siregar, B.

    2018-03-01

    The Unmanned Aerial Vehicle (UAV) is one of the tools that can be used to monitor palm oil plantations remotely. With geospatial orthophotos, it is possible to identify which parts of the plantation land are fertile, where planted crops grow well; which parts are less fertile, where crops grow but imperfectly; and which parts of the plantation field are not growing at all. This information can be obtained quickly and easily from UAV photos. In this study, we utilized image processing algorithms to process the orthophotos for more accurate and faster analysis. The resulting orthophoto images were processed using Matlab, including classification of fertile, infertile, and dead palm oil plants by means of the Gray Level Co-Occurrence Matrix (GLCM) method. The GLCM was computed for four directions, at 0°, 45°, 90°, and 135°. From the results of research conducted with 30 image samples, it was found that the accuracy of the system can be achieved by using the features extracted from the matrix as parameters: Contrast, Correlation, Energy, and Homogeneity.
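
    A minimal sketch of the GLCM feature extraction described above, using scikit-image (in Python rather than the study's Matlab); the image patch is random stand-in data, not an orthophoto.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

    rng = np.random.default_rng(3)
    patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in patch

    # GLCM over the four directions used in the study: 0, 45, 90, and 135 degrees.
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)

    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        print(f"{prop:12s}: {graycoprops(glcm, prop).mean():.4f}")  # mean over directions
    ```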

  5. A process evaluation of implementing a vocational enablement protocol for employees with hearing difficulties in clinical practice.

    PubMed

    Gussenhoven, Arjenne H M; Singh, Amika S; Goverts, S Theo; van Til, Marten; Anema, Johannes R; Kramer, Sophia E

    2015-08-01

    A multidisciplinary vocational rehabilitation programme, the Vocational Enablement Protocol (VEP) was developed to address the specific needs of employees with hearing difficulties. In the current study we evaluated the process of implementing the VEP in audiologic care among employees with hearing impairment. In conjunction with a randomized controlled trial, we collected and analysed data on seven process parameters: recruitment, reach, fidelity, dose delivered, dose received and implemented, satisfaction, and perceived benefit. Sixty-six employees with hearing impairment participated in the VEP. The multidisciplinary team providing the VEP comprised six professionals. The professionals performed the VEP according to the protocol. Of the recommendations delivered by the professionals, 31% were perceived as implemented by the employees. Compliance rate was highest for hearing-aid uptake (51%). Both employees and professionals were highly satisfied with the VEP. Participants rated good perceived benefit from it. Our results indicate that the VEP could be a useful treatment for employees with hearing difficulties from a process evaluation perspective. Implementation research in the audiological setting should be encouraged in order to further provide insight into parameters facilitating or hindering successful implementation of an intervention and to improve its quality and efficacy.

  6. Exploring Model Error through Post-processing and an Ensemble Kalman Filter on Fire Weather Days

    NASA Astrophysics Data System (ADS)

    Erickson, Michael J.

    The proliferation of coupling atmospheric ensemble data to models in other related fields requires a priori knowledge of atmospheric ensemble biases specific to the desired application. In that spirit, this dissertation focuses on elucidating atmospheric ensemble model bias and error through a variety of different methods specific to fire weather days (FWDs) over the Northeast United States (NEUS). Other than a handful of studies that use models to predict fire indices for single fire seasons (Molders 2008, Simpson et al. 2014), an extensive exploration of model performance specific to FWDs has not been attempted. Two unique definitions for FWDs are proposed; one that uses pre-existing fire indices (FWD1) and another from a new statistical fire weather index (FWD2) relating fire occurrence and near-surface meteorological observations. Ensemble model verification reveals FWDs to have warmer (> 1 K), moister (~ 0.4 g kg-1) and less windy (~ 1 m s-1) biases than the climatological average for both FWD1 and FWD2. These biases are not restricted to the near surface but exist through the entirety of the planetary boundary layer (PBL). Furthermore, post-processing methods are more effective when previous FWDs are incorporated into the statistical training, suggesting that model bias could be related to the synoptic flow pattern. An Ensemble Kalman Filter (EnKF) is used to explore the effectiveness of data assimilation during a period of extensive FWDs in April 2012. Model biases develop rapidly on FWDs, consistent with the FWD1 and FWD2 verification. However, the EnKF is effective at removing most biases for temperature, wind speed and specific humidity. Potential sources of error in the parameterized physics of the PBL are explored by rerunning the EnKF with simultaneous state and parameter estimation (SSPE) for two relevant parameters within the ACM2 PBL scheme. SSPE helps to reduce the cool temperature bias near the surface on FWDs, with the variability in parameter estimates exhibiting some relationship to model bias for temperature. This suggests the potential for structural model error within the ACM2 PBL scheme and could lead toward the future development of improved PBL parameterizations.
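
    For orientation, a minimal stochastic EnKF analysis step for a directly observed scalar state is sketched below; the ensemble size, observation, and error values are illustrative assumptions, not the dissertation's configuration.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_err_sd, rng):
        """Stochastic EnKF analysis step for a directly observed scalar state:
        each member assimilates its own perturbed copy of the observation."""
        perturbed = obs + rng.normal(0.0, obs_err_sd, ensemble.size)
        var_f = ensemble.var(ddof=1)                 # forecast ensemble variance
        gain = var_f / (var_f + obs_err_sd ** 2)     # Kalman gain with H = 1
        return ensemble + gain * (perturbed - ensemble)

    rng = np.random.default_rng(0)
    ens = rng.normal(290.0, 2.0, size=50)            # forecast 2-m temperature (K)
    analysis = enkf_update(ens, obs=288.0, obs_err_sd=0.5, rng=rng)
    print(f"forecast mean {ens.mean():.2f} K -> analysis mean {analysis.mean():.2f} K")
    ```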

  7. Reliability Testing Using the Vehicle Durability Simulator

    DTIC Science & Technology

    2017-11-20

    remote parameter control (RPC) software. The software is specifically designed for the data collection, analysis, and simulation processes outlined in...4516. 3. TOP 02-2-505 Inspection and Preliminary Operation of Vehicles, 4 February 1987. 4. Multi-Shaker Test and Control: Design, Test, and...

  8. Skin penetration of silicon dioxide microneedle arrays.

    PubMed

    Kim, Sangchae; Shetty, S; Price, D; Bhansali, S

    2006-01-01

    Out-of-plane hollow silicon dioxide microneedle arrays were fabricated and investigated to determine their efficacy for transdermal applications. The fabrication process of the SiO2 microneedles is described, and mechanical fracture forces were investigated on microneedles with different geometrical dimensions. Biomechanical characterization of the microneedles was performed to specifically test for reliable stratum corneum and skin insertion by changing the regulatory parameters such as needle width and cross-section.

  9. In Situ Chemical Oxidation for Groundwater Remediation: Site-Specific Engineering & Technology Application

    DTIC Science & Technology

    2010-10-01

    Protocol page 13 Overall ISCO Protocol Flow Diagram addition, laboratory studies may be used to select optimal chemistry parameters to maximize oxidant...Design Process 5. Because of the complexity of these oxidants' chemistry and implementation, with much of the knowledge base residing with those...

  10. Self-association of plant wax components: a thermodynamic analysis.

    PubMed

    Casado, C G; Heredia, A

    2001-01-01

    Excess specific heat, C_p^E, of binary mixtures of selected components of plant cuticular waxes has been determined. This thermodynamic parameter explains the special molecular arrangement in the crystalline and amorphous zones of plant waxes. The C_p^E values indicate that hydrogen bonding between chains results in the formation of amorphous zones. Conclusions on the self-assembly process of plant waxes have also been drawn.
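
    For reference, the standard defining relation for an excess property such as the excess specific heat of a binary mixture (mole fractions x1 and x2) is given below; this is the textbook definition, not a formula quoted from the paper.

    ```latex
    C_p^{E} = C_p^{\mathrm{mix}} - \left( x_1\,C_{p,1} + x_2\,C_{p,2} \right)
    ```

    Deviations of C_p^E from zero signal non-ideal mixing, such as the inter-chain hydrogen bonding discussed above.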

  11. Human Systems Integration Synthesis Model for Ship Design

    DTIC Science & Technology

    2012-09-01

    manufacturing systems, refineries, and nuclear power plants must also rely on up-to-date knowledge of situation parameters and any patterns among...safety hazards were many due to exposure to toxic fuel, increased probability of fires, and steam plant explosions. In order to address the...

  12. Nimbus 7 Coastal Zone Color Scanner (CZCS). Level 1 data product users' guide

    NASA Technical Reports Server (NTRS)

    Williams, S. P.; Szajna, E. F.; Hovis, W. A.

    1985-01-01

    The coastal zone color scanner (CZCS) is a scanning multispectral radiometer designed specifically for the remote sensing of Ocean Color parameters from an Earth orbiting space platform. A technical manual which is intended for users of NIMBUS 7 CZCS Level 1 data products is presented. It contains information needed by investigators and data processing personnel to operate on the data using digital computers and related equipment.

  13. Effect of Thermal Budget on the Electrical Characterization of Atomic Layer Deposited HfSiO/TiN Gate Stack MOSCAP Structure

    PubMed Central

    Khan, Z. N.; Ahmed, S.; Ali, M.

    2016-01-01

    Metal Oxide Semiconductor (MOS) capacitors (MOSCAP) have been instrumental in realizing CMOS nano-electronics across successive technology nodes. High-k gate stacks, including the desirable metal gate processing and its integration into CMOS technology, remain an active research area addressing the requirements of technology roadmaps. Screening, selection and deposition of high-k gate dielectrics, post-deposition thermal processing, choice of metal gate structure and its post-metal-deposition annealing are important parameters for optimizing the process and possibly addressing the energy efficiency of CMOS electronics at the nano scale. The atomic layer deposition technique is used throughout this work because of its known deposition kinetics, resulting in excellent electrical properties and a conformal device structure. The dynamics of annealing greatly influence the electrical properties of the gate stack and consequently the reliability of the process as well as of the manufacturable device. Again, the choice of the annealing technique (migration of thermal flux into the layer), time-temperature cycle and sequence are key parameters influencing the device's output characteristics. This work presents a careful selection of annealing process parameters to provide sufficient thermal budget to Si MOSCAPs with atomic-layer-deposited HfSiO high-k gate dielectric and TiN gate metal. Post-process annealing temperatures in the range of 600°C-1000°C with rapid dwell times provide a better trade-off between the desirable Capacitance-Voltage hysteresis performance and the leakage current. Defect dynamics are thought to be responsible for the evolution of the electrical characteristics in this Si MOSCAP structure, which was specifically designed to tune the trade-off at low frequency for device applications. PMID:27571412

  14. An integrated process analytical technology (PAT) approach to monitoring the effect of supercooling on lyophilization product and process parameters of model monoclonal antibody formulations.

    PubMed

    Awotwe Otoo, David; Agarabi, Cyrus; Khan, Mansoor A

    2014-07-01

    The aim of the present study was to apply an integrated process analytical technology (PAT) approach to control and monitor the effect of the degree of supercooling on critical process and product parameters of a lyophilization cycle. Two concentrations of a mAb formulation were used as models for lyophilization. ControLyo™ technology was applied to control the onset of ice nucleation, whereas tunable diode laser absorption spectroscopy (TDLAS) was utilized as a noninvasive tool for the inline monitoring of the water vapor concentration and vapor flow velocity in the spool during primary drying. The instantaneous measurements were then used to determine the effect of the degree of supercooling on critical process and product parameters. Controlled nucleation resulted in uniform nucleation at lower degrees of supercooling for both formulations, higher sublimation rates, lower mass transfer resistance, lower product temperatures at the sublimation interface, and shorter primary drying times compared with the conventional shelf-ramped freezing. Controlled nucleation also resulted in lyophilized cakes with more elegant and porous structure with no visible collapse or shrinkage, lower specific surface area, and shorter reconstitution times compared with the uncontrolled nucleation. Uncontrolled nucleation however resulted in lyophilized cakes with relatively lower residual moisture contents compared with controlled nucleation. TDLAS proved to be an efficient tool to determine the endpoint of primary drying. There was good agreement between data obtained from TDLAS-based measurements and SMART™ technology. ControLyo™ technology and TDLAS showed great potential as PAT tools to achieve enhanced process monitoring and control during lyophilization cycles. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  15. Optimization on the impeller of a low-specific-speed centrifugal pump for hydraulic performance improvement

    NASA Astrophysics Data System (ADS)

    Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng

    2016-09-01

    In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely, the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation agrees well with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. The comparison of the inner flow between the original pump and the optimized one illustrates the improvement in performance. The optimization process can provide a useful reference for performance improvement of other pumps, and even for reduction of pressure fluctuations.
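
    A compact sketch of the design-of-experiments / surrogate / particle-swarm pipeline described above, with an analytic function standing in for the CFX efficiency computations; the bounds, sample sizes, and PSO constants are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import qmc
    from scipy.interpolate import RBFInterpolator

    def efficiency(x):
        """Analytic stand-in for the CFD objective: a weighted pump efficiency
        as a function of (b2, beta2, phi); peaks at (12, 25, 120)."""
        b2, beta2, phi = x[..., 0], x[..., 1], x[..., 2]
        return -((b2 - 12) ** 2 / 9 + (beta2 - 25) ** 2 / 50 + (phi - 120) ** 2 / 400)

    lo, hi = np.array([8.0, 15.0, 90.0]), np.array([16.0, 35.0, 150.0])

    # 1) Latin hypercube design of experiments, 2) "CFD runs", 3) surrogate model.
    X = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(40), lo, hi)
    surrogate = RBFInterpolator(X, efficiency(X))

    # 4) Minimal particle swarm optimization over the surrogate.
    rng = np.random.default_rng(1)
    pos = rng.uniform(lo, hi, size=(30, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), surrogate(pos)
    for _ in range(100):
        gbest = pbest[np.argmax(pbest_val)]
        vel = (0.7 * vel
               + 1.5 * rng.random((30, 1)) * (pbest - pos)
               + 1.5 * rng.random((30, 1)) * (gbest - pos))
        pos = np.clip(pos + vel, lo, hi)
        val = surrogate(pos)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]

    print("best (b2, beta2, phi):", np.round(pbest[np.argmax(pbest_val)], 2))
    ```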

  16. Competitive clonal hematopoiesis in mouse chimeras explained by a stochastic model of stem cell organization.

    PubMed

    Roeder, Ingo; Kamminga, Leonie M; Braesel, Katrin; Dontje, Bert; de Haan, Gerald; Loeffler, Markus

    2005-01-15

    Many current experimental results show the necessity of new conceptual approaches to understand hematopoietic stem cell organization. Recently, we proposed a novel theoretical concept and a corresponding quantitative model based on microenvironment-dependent stem cell plasticity. The objective of our present work is to subject this model to an experimental test for the situation of chimeric hematopoiesis. Investigating clonal competition processes in DBA/2-C57BL/6 mouse chimeras, we observed biphasic chimerism development with initially increasing but long-term declining DBA/2 contribution. These experimental results were used to select the parameters of the mathematical model. To validate the model beyond this specific situation, we fixed the obtained parameter configuration to simulate further experimental settings comprising variations of transplanted DBA/2-C57BL/6 proportions, secondary transplantations, and perturbation of stabilized chimeras by cytokine and cytotoxic treatment. We show that the proposed model is able to consistently describe the situation of chimeric hematopoiesis. Our results strongly support the view that the relative growth advantage of strain-specific stem cells is not a fixed cellular property but is sensitively dependent on the actual state of the entire system. We conclude that hematopoietic stem cell organization should be understood as a flexible, self-organized rather than a fixed, preprogrammed process.

  17. A Comparative Analysis of Life-Cycle Assessment Tools for ...

    EPA Pesticide Factsheets

    We identified and evaluated five life-cycle assessment tools that community decision makers can use to assess the environmental and economic impacts of end-of-life (EOL) materials management options. The tools evaluated in this report are the Waste Reduction Model (WARM), municipal solid waste-decision support tool (MSW-DST), solid waste optimization life-cycle framework (SWOLF), environmental assessment system for environmental technologies (EASETECH), and waste and resources assessment for the environment (WRATE). WARM, MSW-DST, and SWOLF were developed for US-specific materials management strategies, while WRATE and EASETECH were developed for European-specific conditions. All of the tools (with the exception of WARM) allow specification of a wide variety of parameters (e.g., materials composition and energy mix) to a varying degree, thus allowing users to model specific EOL materials management methods even outside the geographical domain they were originally intended for. The flexibility to accept user-specified input for a large number of parameters increases the level of complexity and the skill set needed for using these tools. The tools were evaluated and compared based on a series of criteria, including general tool features, the scope of the analysis (e.g., materials and processes included), and the impact categories analyzed (e.g., climate change, acidification). A series of scenarios represents materials management problems currently relevant to communities.

  18. Evolution of a plastic quantitative trait in an age-structured population in a fluctuating environment.

    PubMed

    Engen, Steinar; Lande, Russell; Saether, Bernt-Erik

    2011-10-01

    We analyze weak fluctuating selection on a quantitative character in an age-structured population not subject to density regulation. We assume that early in the first year of life before selection, during a critical state of development, environments exert a plastic effect on the phenotype, which remains constant throughout the life of an individual. Age-specific selection on the character affects survival and fecundity, which have intermediate optima subject to temporal environmental fluctuations with directional selection in some age classes as special cases. Weighting individuals by their reproductive value, as suggested by Fisher, we show that the expected response per year in the weighted mean character has the same form as for models with no age structure. Environmental stochasticity generates stochastic fluctuations in the weighted mean character following a first-order autoregressive model with a temporally autocorrelated noise term and stationary variance depending on the amount of phenotypic plasticity. The parameters of the process are simple weighted averages of parameters used to describe age-specific survival and fecundity. The "age-specific selective weights" are related to the stable distribution of reproductive values among age classes. This allows partitioning of the change in the weighted mean character into age-specific components. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
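
    The first-order autoregressive dynamics referred to above take the generic form shown below (symbols illustrative; in the paper the noise term is itself temporally autocorrelated, which modifies the stationary variance from the white-noise expression given here).

    ```latex
    \bar{z}_{t+1} - \bar{z}^{*} \;=\; \rho\,\bigl(\bar{z}_{t} - \bar{z}^{*}\bigr) + \epsilon_{t},
    \qquad
    \operatorname{Var}(\bar{z}) \;=\; \frac{\sigma_{\epsilon}^{2}}{1-\rho^{2}}
    \quad \text{(white-noise case)}
    ```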

  19. Specificity control for read alignments using an artificial reference genome-guided false discovery rate.

    PubMed

    Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y

    2014-01-01

    Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.

  20. The effects of sleep deprivation on item and associative recognition memory.

    PubMed

    Ratcliff, Roger; Van Dongen, Hans P A

    2018-02-01

    Sleep deprivation adversely affects the ability to perform cognitive tasks, but theories range from predicting an overall decline in cognitive functioning because of reduced stability in attentional networks to specific deficits in various cognitive domains or processes. We measured the effects of sleep deprivation on two memory tasks, item recognition ("was this word in the list studied") and associative recognition ("were these two words studied in the same pair"). These tasks test memory for information encoded a few minutes earlier and so do not address effects of sleep deprivation on working memory or consolidation after sleep. A diffusion model was used to decompose accuracy and response time distributions to produce parameter estimates of components of cognitive processing. The model assumes that over time, noisy evidence from the task stimulus is accumulated to one of two decision criteria, and parameters governing this process are extracted and interpreted in terms of distinct cognitive processes. Results showed that sleep deprivation reduces drift rate (evidence used in the decision process), with little effect on the other components of the decision process. These results contrast with the effects of aging, which show little decline in item recognition but large declines in associative recognition. The results suggest that sleep deprivation degrades the quality of information stored in memory and that this may occur through degraded attentional processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
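
    A minimal simulation of the two-boundary diffusion process described above, showing how a lower drift rate degrades both accuracy and speed; all parameter values are illustrative, not the paper's estimates.

    ```python
    import numpy as np

    def simulate_trial(drift, boundary=0.1, start=0.05, noise=0.1, dt=0.001, rng=None):
        """One diffusion-model trial: accumulate noisy evidence from `start`
        until it crosses 0 (error) or `boundary` (correct response)."""
        if rng is None:
            rng = np.random.default_rng()
        x, t = start, 0.0
        while 0.0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return x >= boundary, t

    rng = np.random.default_rng(0)
    for v, label in [(0.25, "rested"), (0.10, "sleep-deprived (lower drift)")]:
        trials = [simulate_trial(v, rng=rng) for _ in range(2000)]
        accuracy = np.mean([correct for correct, _ in trials])
        mean_rt = np.mean([t for _, t in trials])
        print(f"{label:30s} accuracy = {accuracy:.3f}, mean decision time = {mean_rt:.3f} s")
    ```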
