Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process
NASA Technical Reports Server (NTRS)
Mokashi, A. R.; Kachare, A. H.
1981-01-01
The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.
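The kind of sensitivity analysis described above can be sketched with a toy add-on price model; the parameter names, values, and the price formula below are illustrative assumptions, not the actual program documented in the report.

```python
def addon_price(p):
    # Hypothetical add-on price model: annualized cost divided by annual output.
    annual_cost = p["equipment"] + p["labor"] + p["materials"] + p["utilities"]
    annual_output = p["growth_rate"] * p["duty_cycle"] * p["yield"] * 8760.0
    return annual_cost / annual_output

def sensitivity(base, key, delta=0.01):
    # Relative sensitivity: fractional change in price per fractional change
    # in one parameter, estimated by a forward finite difference.
    p0 = addon_price(base)
    perturbed = dict(base, **{key: base[key] * (1.0 + delta)})
    return (addon_price(perturbed) - p0) / (p0 * delta)

base = {"equipment": 1e5, "labor": 2e5, "materials": 1.5e5, "utilities": 5e4,
        "growth_rate": 2.0, "duty_cycle": 0.8, "yield": 0.9}
sens = {k: round(sensitivity(base, k), 3) for k in base}
```

For this cost structure, each cost parameter's sensitivity equals its share of total cost (labor at 0.4, equipment at 0.2), while the production parameters, which sit in the denominator, have sensitivities near -1.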
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images, with the measured imaging parameters, are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment data sets can be cross-validated. The main conclusions are: image post-processing can improve image quality; post-processing improves quality even with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with the described post-processing method, image quality is better when the camera MTF lies within a small range.
System and method for networking electrochemical devices
Williams, Mark C.; Wimer, John G.; Archer, David H.
1995-01-01
An improved electrochemically active system and method including a plurality of electrochemical devices, such as fuel cells and fluid separation devices, in which the anode and cathode process-fluid flow chambers are connected in fluid-flow arrangements so that the operating parameters of each of said plurality of electrochemical devices which are dependent upon process-fluid parameters may be individually controlled to provide improved operating efficiency. The improvements in operation include improved power efficiency and improved fuel utilization in fuel cell power generating systems and reduced power consumption in fluid separation devices and the like through interstage process fluid parameter control for series networked electrochemical devices. The improved networking method includes recycling of various process flows to enhance the overall control scheme.
Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo
2017-08-01
This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of key process parameters, including temperature, tension, pressure, and velocity, is calculated, and single-parameter sensitivity curves are obtained. From the analysis of the sensitivity curves, the stable and unstable ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis shows that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s].
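The single-parameter sensitivity curves mentioned above can be sketched by differentiating a response-surface model; the quadratic strength model and its coefficients below are invented for illustration and are not the fitted model from the study.

```python
# Hypothetical quadratic response-surface model of interlaminar shear strength
# (MPa) as a function of winding temperature (degC); coefficients illustrative.
def strength(temperature):
    return -0.002 * (temperature - 125.0) ** 2 + 60.0

def local_sensitivity(f, x, h=1e-4):
    # Central finite difference: d(strength)/d(parameter) at one setting.
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Sensitivity is near zero at the response-surface optimum (a "stable" range)
# and grows toward the edges of the parameter interval (an "unstable" range).
s_optimum = local_sensitivity(strength, 125.0)
s_edge = local_sensitivity(strength, 100.0)
```

Tracing such a sensitivity curve over the whole interval is what separates the stable range (flat response, tolerant of parameter drift) from the unstable range in analyses of this kind.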
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
Systems and methods for optimal power flow on a radial network
Low, Steven H.; Peng, Qiuyu
2018-04-24
Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller comprising a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node; wherein the node controller is configured to: send node operating parameters to nodes in the set; receive operating parameters from the nodes in the set; calculate a plurality of updated node operating parameters using an iterative process based on the operating parameters of the node and of the set, where the iterative process involves evaluation of a closed-form solution; and adjust the node operating parameters.
Fabrication of Microstripline Wiring for Large Format Transition Edge Sensor Arrays
NASA Technical Reports Server (NTRS)
Chervenak, James A.; Adams, J. M.; Bailey, C. N.; Bandler, S.; Brekosky, R. P.; Eckart, M. E.; Erwin, A. E.; Finkbeiner, F. M.; Kelley, R. L.; Kilbourne, C. A.;
2012-01-01
We have developed a process to integrate microstripline wiring with transition edge sensors (TES). The process includes additional layers for metal-etch stop and dielectric adhesion to enable recovery of parameters achieved in non-microstrip pixel designs. We report on device parameters in close-packed TES arrays achieved with the microstrip process including R(sub n), G, and T(sub c) uniformity. Further, we investigate limits of this method of producing high-density, microstrip wiring including critical current to determine the ultimate scalability of TES arrays with two layers of wiring.
Knowledge transmission model with differing initial transmission and retransmission process
NASA Astrophysics Data System (ADS)
Wang, Haiying; Wang, Jun; Small, Michael
2018-10-01
Knowledge transmission is a cyclic dynamic diffusion process. The rate of acceptance of knowledge differs depending on whether or not the recipient has previously held the knowledge. In this paper, the knowledge transmission process is divided into an initial and a retransmission procedure, each with its own transmission and self-learning parameters. Based on an epidemic spreading model, we propose a naive-evangelical-agnostic (VEA) knowledge transmission model and derive mean-field equations to describe the dynamics of knowledge transmission in homogeneous networks. Theoretical analysis identifies a criterion for the persistence of knowledge, i.e., the reproduction number R0 depends on the smaller of the effective parameters of the initial and retransmission processes. Moreover, the final size of the evangelical population is related only to the retransmission process parameters. Numerical simulations validate the theoretical analysis. Furthermore, the simulations indicate that increasing the initial transmission parameters, including the first-transmission and self-learning rates of naive individuals, can accelerate knowledge transmission efficiently but has no effect on the final size of the evangelical population. In contrast, the retransmission parameters, including the retransmission and self-learning rates of agnostic individuals, have a significant effect on the rate of knowledge transmission: the larger these parameters, the greater the final density of evangelical individuals.
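A mean-field model of this general shape can be integrated numerically; the transition structure and all rate values below are illustrative assumptions, not the equations derived in the paper.

```python
# Simplified mean-field sketch of a naive (V) / evangelical (E) / agnostic (A)
# knowledge transmission model (rates are illustrative, not from the paper):
#   naive -> evangelical      at rate b1*V*E + s1*V  (first transmission + self-learning)
#   evangelical -> agnostic   at rate g*E            (losing interest)
#   agnostic -> evangelical   at rate b2*A*E + s2*A  (retransmission + self-learning)
def simulate(b1, s1, b2, s2, g, steps=50000, dt=0.01):
    V, E, A = 0.99, 0.01, 0.0
    for _ in range(steps):  # forward Euler integration
        dV = -b1 * V * E - s1 * V
        dA = g * E - b2 * A * E - s2 * A
        dE = -dV - dA  # populations sum to one, so dE balances the other flows
        V, E, A = V + dV * dt, E + dE * dt, A + dA * dt
    return V, E, A

V, E, A = simulate(b1=0.5, s1=0.01, b2=0.3, s2=0.01, g=0.1)
```

In this sketch the naive pool drains completely, and the long-run split between evangelical and agnostic individuals is set by the retransmission-side rates (b2, s2, g) alone, mirroring the paper's observation that the final evangelical size depends only on retransmission parameters.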
Method and apparatus for assessing weld quality
Smartt, Herschel B.; Kenney, Kevin L.; Johnson, John A.; Carlson, Nancy M.; Clark, Denis E.; Taylor, Paul L.; Reutzel, Edward W.
2001-01-01
Apparatus for determining a quality of a weld produced by a welding device according to the present invention includes a sensor operatively associated with the welding device. The sensor is responsive to at least one welding process parameter during a welding process and produces a welding process parameter signal that relates to the at least one welding process parameter. A computer connected to the sensor is responsive to the welding process parameter signal produced by the sensor. A user interface operatively associated with the computer allows a user to select a desired welding process. The computer processes the welding process parameter signal produced by the sensor in accordance with one of a constant voltage algorithm, a short duration weld algorithm or a pulsed current analysis module depending on the desired welding process selected by the user. The computer produces output data indicative of the quality of the weld.
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process-model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content.
All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied with the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ in that (1) CSS/PCC can be more awkward to use because sensitivity and interdependence are considered separately and (2) the identifiability statistic requires a decision about how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-chain Monte Carlo for the common case of nonlinear processes and often even more nonlinear models.
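The CSS and PCC statistics discussed above can be computed directly from a model's Jacobian; the synthetic Jacobian, unit weights, and parameter values below are illustrative, with one pair of columns made nearly collinear to mimic the kind of parameter interdependence (like WFC1-BD1) that PCC flags.

```python
import numpy as np

# Sketch: composite scaled sensitivities (CSS) and parameter correlation
# coefficients (PCC) from a Jacobian J (observations x parameters),
# parameter values b, and observation weights w. Illustrative data only.
rng = np.random.default_rng(0)
J = rng.normal(size=(30, 3))
J[:, 2] = 0.95 * J[:, 1] + 0.05 * rng.normal(size=30)  # nearly redundant pair
b = np.array([1.0, 2.0, 3.0])
w = np.ones(30)

# CSS_j = sqrt(mean_i (J_ij * b_j * sqrt(w_i))^2): dimensionless measure of
# how much the observations, in aggregate, respond to parameter j.
css = np.sqrt(np.mean((J * b * np.sqrt(w)[:, None]) ** 2, axis=0))

# PCC from the parameter variance-covariance matrix (J^T W J)^-1: correlations
# near +/-1 mean the data cannot separate the two parameters.
cov = np.linalg.inv(J.T @ (w[:, None] * J))
d = np.sqrt(np.diag(cov))
pcc = cov / np.outer(d, d)
```

Here the deliberately collinear pair of parameters shows up as a PCC magnitude near 1, while the independent parameter correlates only weakly with either.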
1992-08-01
including instrumenting and dressing the subjects, monitoring the physiological parameters in the simulator, and collecting and processing data. They...also was decided to extend the recruiting process to include all helicopter aviators, even if not UH-60 qualified. There is little in the flight profile...parameter channels, and the data were processed to produce a single root mean square (RMS) error value for each channel appropriate to each of the 9
Real-time parameter optimization based on neural network for smart injection molding
NASA Astrophysics Data System (ADS)
Lee, H.; Liau, Y.; Ryu, K.
2018-03-01
The manufacturing industry faces several challenges, including sustainability, performance, and production quality. Manufacturers attempt to enhance their competitiveness by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing process level. The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, the process is used to produce precise parts in various industries such as automobiles, optics, and medical devices. Injection molding involves a mixture of discrete and continuous variables, and to optimize quality, the variables generated during the process must be considered. Furthermore, optimal parameter setting is time-consuming work for predicting optimum product quality, since the process parameters cannot easily be corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, compared with the pre-production parameter optimization of typical studies.
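The optimization step of such a methodology can be sketched with a stand-in quality model in place of the trained neural network; the parameter names, bounds, and quadratic model below are illustrative assumptions, not the system described in the paper.

```python
import random

# Sketch of the optimization step only: given a (trained) quality model mapping
# process parameters to predicted part quality, search for the best settings.
# A simple quadratic stand-in replaces the neural network here (illustrative).
def predicted_quality(melt_temp, injection_pressure):
    # Peak quality at 230 degC and 90 MPa in this toy model.
    return -((melt_temp - 230.0) / 20.0) ** 2 \
           - ((injection_pressure - 90.0) / 15.0) ** 2

def random_search(model, bounds, n=5000, seed=1):
    # Gradient-free search: cheap to run each cycle, so settings can be
    # re-optimized in near real time as new response data retrain the model.
    random.seed(seed)
    best, best_q = None, float("-inf")
    for _ in range(n):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        q = model(*x)
        if q > best_q:
            best, best_q = x, q
    return best, best_q

best, best_q = random_search(predicted_quality,
                             [(180.0, 280.0), (50.0, 130.0)])
```

In a real deployment the quality model would be the trained network and the search could be warm-started from the current machine settings between shots.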
Blanking and piercing theory, applications and recent experimental results
NASA Astrophysics Data System (ADS)
Zaid, Adnan l. O.
2014-06-01
Blanking and piercing are manufacturing processes by which certain geometrical shapes are sheared off a sheet metal. If the sheared-off part is the one required, the process is referred to as blanking; if the part remaining in the sheet is the one required, the process is referred to as piercing. In this paper, the theory and practice of these processes are reviewed and discussed. The main parameters affecting these processes are presented and discussed, including the radial clearance percentage and punch and die geometrical parameters, for example punch and die profile radii. The effects of the abovementioned parameters on the force and energy required to effect blanking, together with their effect on product quality, are also presented and discussed. Recent experimental results, together with photomacrographs and photomicrographs, are also included and discussed. Finally, the effect of punch and die wear on the quality of the blanks is also given and discussed.
Bio-oil from fast pyrolysis of lignin: Effects of process and upgrading parameters.
Fan, Liangliang; Zhang, Yaning; Liu, Shiyu; Zhou, Nan; Chen, Paul; Cheng, Yanling; Addy, Min; Lu, Qian; Omar, Muhammad Mubashar; Liu, Yuhuan; Wang, Yunpu; Dai, Leilei; Anderson, Erik; Peng, Peng; Lei, Hanwu; Ruan, Roger
2017-10-01
Effects of process parameters on the yield and chemical profile of bio-oil from fast pyrolysis of lignin, and processes for lignin-derived bio-oil upgrading, are reviewed. Various process parameters, including pyrolysis temperature, reactor type, lignin characteristics, residence time, and feeding rate, are discussed, and the optimal parameter conditions for improved bio-oil yield and quality are identified. For lignin-derived bio-oil upgrading, three routes have been investigated: pretreatment of lignin, catalytic upgrading, and co-pyrolysis with hydrogen-rich materials. Zeolite cracking and hydrodeoxygenation (HDO) treatment are the two main methods for catalytic upgrading of lignin-derived bio-oil. Factors affecting zeolite activity and the main zeolite catalytic mechanisms for lignin conversion are analyzed. Noble-metal-based catalysts and metal sulfide catalysts are normally used as the HDO catalysts, and conversion mechanisms associated with a series of reactions have been proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Apparatus and method for fluid analysis
Wilson, Bary W.; Peters, Timothy J.; Shepard, Chester L.; Reeves, James H.
2004-11-02
The present invention is an apparatus and method for analyzing a fluid used in a machine or in an industrial process line. The apparatus has at least one meter placed proximate the machine or process line and in contact with the machine or process fluid for measuring at least one parameter related to the fluid. The at least one parameter is a standard laboratory analysis parameter. The at least one meter includes but is not limited to viscometer, element meter, optical meter, particulate meter, and combinations thereof.
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. 
Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
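The modified Gauss-Newton iteration at the heart of the Parameter-Estimation Process can be sketched on a toy problem; the exponential model, unit weights, and starting values below are illustrative stand-ins for a groundwater model, not MODFLOW-2000 code.

```python
import numpy as np

# Weighted least-squares objective: S(b) = (y - f(b))^T W (y - f(b)).
# A tiny exponential model stands in for the groundwater model (illustrative).
def f(b, x):
    return b[0] * np.exp(-b[1] * x)              # simulated values

def jacobian(b, x):
    # Analytic sensitivities d(sim)/d(parameter), one column per parameter
    # (the role played by the Sensitivity Process's observation sensitivities).
    e = np.exp(-b[1] * x)
    return np.column_stack([e, -b[0] * x * e])

x = np.linspace(0.0, 4.0, 25)
y = f(np.array([2.0, 0.7]), x)                   # "observations" from known truth
W = np.eye(len(x))                               # observation weights

b = np.array([1.5, 0.5])                         # initial parameter estimate
for _ in range(15):
    r = y - f(b, x)                              # residuals
    J = jacobian(b, x)
    # Gauss-Newton normal equations: (J^T W J) step = J^T W r
    b = b + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
```

Each iteration re-linearizes the model about the current estimate; with noise-free synthetic observations the iteration recovers the true parameters.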
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors choose a treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented to clinical application. The system comprises three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module includes parametric modeling of the bone, the fracture face, and the fixation screws and their positions, as well as input and transmission of model parameters. The finite element mechanical analysis module includes grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing. The post-processing module includes extraction and display of batch processing results, image generation for batch runs, operation of the optimization program, and display of the optimization results. The system implements the whole workflow from input of a specific patient's real fracture parameters to output of the optimal fixation plan according to the optimization rules, which demonstrates its effectiveness. Meanwhile, the system has a friendly interface and simple operation, and its functionality can be extended quickly by modifying individual modules.
Young, Kevin L [Idaho Falls, ID; Hungate, Kevin E [Idaho Falls, ID
2010-02-23
A system for providing operational feedback to a user of a detection probe may include an optical sensor to generate data corresponding to a position of the detection probe with respect to a surface; a microprocessor to receive the data; a software medium having code to process the data with the microprocessor and pre-programmed parameters, and making a comparison of the data to the parameters; and an indicator device to indicate results of the comparison. A method of providing operational feedback to a user of a detection probe may include generating output data with an optical sensor corresponding to the relative position with respect to a surface; processing the output data, including comparing the output data to pre-programmed parameters; and indicating results of the comparison.
A general model for attitude determination error analysis
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Seidewitz, ED; Nicholson, Mark
1988-01-01
An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.
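The way consider parameters add to the solve-for uncertainty can be illustrated with a minimal consider-covariance combination; the matrices and numbers below are invented for illustration, not the filter model in the paper.

```python
import numpy as np

# Sketch: total covariance of the solve-for estimate combines the noise-driven
# covariance with a contribution from unestimated (consider) parameters,
# propagated through a sensitivity matrix S = d(solve-for)/d(consider).
P_noise = np.diag([1e-6, 2e-6])       # solve-for covariance from noise sources
P_consider = np.diag([4e-6])          # covariance of one consider parameter
S = np.array([[0.5], [0.1]])          # sensitivity of each solve-for to it

P_total = P_noise + S @ P_consider @ S.T
sigmas = np.sqrt(np.diag(P_total))    # 1-sigma uncertainties per solve-for
```

The consider term can only inflate the diagonal, which is why an error budget of this form cleanly attributes each part of the final uncertainty to its source.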
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code though, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two-thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters.
It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
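A first-order Sobol' index calculation of the kind applied to Noah-MP can be sketched with a toy response in place of the land surface model; the two-parameter function, sample sizes, and seed below are illustrative assumptions.

```python
import numpy as np

# Saltelli-style estimator of first-order Sobol' indices for a toy response
# y = 3*x1 + x2^2, x1, x2 ~ U(0,1) (a stand-in for a land surface model run).
def model(X):
    return 3.0 * X[:, 0] + X[:, 1] ** 2

rng = np.random.default_rng(42)
n = 20000
A = rng.random((n, 2))                 # two independent sample matrices
B = rng.random((n, 2))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # swap in B's column for parameter i
    # First-order index: fraction of output variance explained by x_i alone
    S.append(np.mean(fB * (model(ABi) - fA)) / var)
```

For this toy response the analytic indices are about 0.89 for x1 and 0.11 for x2, so the Monte Carlo estimates should land close to those values; a 1% index threshold like the one used in the study would keep both parameters.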
Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.
Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran
2018-06-15
Monitoring of the gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires monitoring of a landfill for at least 30 years after its closing, but its definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of the optimization was to find representative parameters that define the physical, chemical, and biological processes in the closed methanogenic landfill, and to make monitoring less expensive. The research included development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open-source software GeoGebra, which is often used for solving optimization problems. The optimization identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of the mathematical analysis but also from the point of view of environmental protection. As the final outcome of this research, the minimal required parameters that should be included in landfill monitoring are precisely defined. Copyright © 2017 Elsevier Ltd. All rights reserved.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Relationship between the erosion properties of soils and other parameters
USDA-ARS?s Scientific Manuscript database
Soil parameters are essential for erosion process prediction and ultimately improved model development, especially as they relate to dam and levee failure. Soil parameters including soil texture and structure, soil classification, soil compaction, moisture content, and degree of saturation can play...
Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B; Sturm, Benjamin W
2014-11-11
A scintillator radiation detector system according to one embodiment includes a scintillator; and a processing device for processing pulse traces corresponding to light pulses from the scintillator, wherein pulse digitization is used to improve energy resolution of the system. A scintillator radiation detector system according to another embodiment includes a processing device for fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times and performing a direct integration of fit parameters. A method according to yet another embodiment includes processing pulse traces corresponding to light pulses from a scintillator, wherein pulse digitization is used to improve energy resolution of the system. A method in a further embodiment includes fitting digitized scintillation waveforms to an algorithm based on identifying rise and decay times; and performing a direct integration of fit parameters. Additional systems and methods are also presented.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Trajectory Dispersed Vehicle Process for Space Launch System
NASA Technical Reports Server (NTRS)
Statham, Tamara; Thompson, Seth
2017-01-01
The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans that include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops the resulting vehicles using a Maximum Likelihood Estimate (MLE) process to target uncertainty biases. These vehicles represent various missions and configurations which are used as key inputs into a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.
Davis, Jesse Harper Zehring [Berkeley, CA; Stark, Jr., Douglas Paul; Kershaw, Christopher Patrick [Hayward, CA; Kyker, Ronald Dean [Livermore, CA
2008-06-10
A distributed wireless sensor network node is disclosed. The wireless sensor network node includes a plurality of sensor modules coupled to a system bus and configured to sense a parameter. The parameter may be an object, an event or any other parameter. The node collects data representative of the parameter. The node also includes a communication module coupled to the system bus and configured to allow the node to communicate with other nodes. The node also includes a processing module coupled to the system bus and adapted to receive the data from the sensor module and operable to analyze the data. The node also includes a power module connected to the system bus and operable to generate a regulated voltage.
Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah
2011-03-01
The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) resolution IV fractional factorial design. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. Main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined using response surface plots. The process parameters exerted various effects on the different response variables. Hence, trade-offs between individual optima were necessary to obtain the best compromised set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions which produced coated tablets of high drug-loading efficiency, low incidence of tablet damage and low coat thickness variation were defined. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating the robustness of the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists
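The composite desirability used above to reconcile competing responses is conventionally the geometric mean of individual Derringer-Suich desirabilities. A minimal sketch in Python, with hypothetical response values and acceptance limits (not the study's data):

```python
import math

def desirability_larger(y, lo, hi):
    """Larger-is-better: 0 below lo, 1 above hi, linear between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def desirability_smaller(y, lo, hi):
    """Smaller-is-better: 1 below lo, 0 above hi, linear between."""
    return desirability_larger(-y, -hi, -lo)

def composite_desirability(ds):
    """Geometric mean; any zero individual desirability drives it to zero."""
    if any(d == 0.0 for d in ds):
        return 0.0
    return math.exp(sum(math.log(d) for d in ds) / len(ds))

# Hypothetical responses for one coating condition:
d_load = desirability_larger(0.96, 0.80, 1.00)     # drug-loading efficiency
d_damage = desirability_smaller(0.02, 0.00, 0.10)  # fraction of damaged tablets
d_cv = desirability_smaller(0.05, 0.00, 0.20)      # coat thickness variation
D = composite_desirability([d_load, d_damage, d_cv])
```

The geometric mean (rather than arithmetic) ensures that a condition failing any one response badly cannot be rescued by excelling at the others.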
ASRM test report: Autoclave cure process development
NASA Technical Reports Server (NTRS)
Nachbar, D. L.; Mitchell, Suzanne
1992-01-01
ASRM insulated segments will be autoclave cured following insulation pre-form installation and strip wind operations. Following competitive bidding, Aerojet ASRM Division (AAD) Purchase Order 100142 was awarded to American Fuel Cell and Coated Fabrics Company, Inc. (Amfuel), Magnolia, AR, for subcontracted insulation autoclave cure process development. Autoclave cure process development test requirements were included in Task 3 of TM05514, Manufacturing Process Development Specification for Integrated Insulation Characterization and Stripwind Process Development. The test objective was to establish autoclave cure process parameters for ASRM insulated segments. Six tasks were completed to: (1) evaluate cure parameters that control acceptable vulcanization of ASRM Kevlar-filled EPDM insulation material; (2) identify first and second order impact parameters on the autoclave cure process; and (3) evaluate insulation material flow-out characteristics to support pre-form configuration design.
Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling
NASA Astrophysics Data System (ADS)
Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.
2002-05-01
Pacific Northwest National Laboratory is in the process of development and implementation of an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as new data or understanding becomes available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first order second moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest.
5) Estimation of combined ACM and scenario uncertainty by a double sum with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
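Step (c), propagation of parameter uncertainty by a Monte Carlo approach, and the CCDF representation used above can be sketched with a toy example. The model, parameter distributions, and threshold below are illustrative stand-ins, not the Hanford site model:

```python
import math
import random

random.seed(1)

def model(K, i):
    """Toy prediction of interest: Darcy flux q = K * i (a hypothetical
    stand-in for the site model's prediction)."""
    return K * i

# Characterize parameter uncertainty: hydraulic conductivity K treated
# as lognormal and hydraulic gradient i as normal (common assumptions).
samples = []
for _ in range(10000):
    K = math.exp(random.gauss(math.log(10.0), 0.5))  # m/day
    i = random.gauss(0.001, 0.0002)                  # dimensionless
    samples.append(model(K, i))

def ccdf(values, threshold):
    """Complementary CDF: probability the prediction exceeds threshold."""
    return sum(v > threshold for v in values) / len(values)

p_exceed = ccdf(samples, 0.01)  # P(flux > 0.01 m/day)
```

Evaluating `ccdf` over a range of thresholds traces one CCDF; repeating the exercise per plausible ACM and scenario yields the family of CCDFs described in the framework.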
Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover
NASA Technical Reports Server (NTRS)
Dangelo, K. R.
1974-01-01
A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least-squares approximation is used to stochastically determine the parameters that describe the modeled surface. A complete error analysis of the modeling procedure is included which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process, which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
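The idea of combining height and gradient measurements in a single least-squares fit can be illustrated with a one-dimensional stand-in (a line rather than a terrain surface); the function and data are hypothetical:

```python
def fit_line_with_gradients(xs, hs, gs):
    """Least-squares fit of h(x) = a + b*x using both height samples
    (xs, hs) and slope samples gs; slope data constrain b directly."""
    n, m = len(xs), len(gs)
    # Normal equations for minimizing
    #   sum (a + b*x - h)^2 + sum (b - g)^2
    Sx, Sxx = sum(xs), sum(x * x for x in xs)
    Sh, Sxh = sum(hs), sum(x * h for x, h in zip(xs, hs))
    Sg = sum(gs)
    # [ n    Sx     ] [a]   [ Sh       ]
    # [ Sx   Sxx+m  ] [b] = [ Sxh + Sg ]
    det = n * (Sxx + m) - Sx * Sx
    a = (Sh * (Sxx + m) - Sx * (Sxh + Sg)) / det
    b = (n * (Sxh + Sg) - Sx * Sh) / det
    return a, b

# Consistent data on the line h = 2 + 3x recover (a, b) = (2, 3) exactly.
a, b = fit_line_with_gradients([0, 1, 2], [2, 5, 8], [3, 3])
```

With noisy data, the extra slope equations pull the fitted gradient toward the measured gradients, which is the motivation for including gradient data in the terrain model.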
Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia
2016-01-01
During peritoneal dialysis (PD), the peritoneal membrane undergoes ageing processes that affect its function. Here we analyzed associations of patient age and dialysis vintage with parameters of peritoneal transport of fluid and solutes, directly measured and estimated based on the pore model, for individual patients. Thirty-three patients (15 females; age 60 (21-87) years; median time on PD 19 (3-100) months) underwent a sequential peritoneal equilibration test. Dialysis vintage and patient age did not correlate. Estimation of parameters of the two-pore model of peritoneal transport was performed. The estimated fluid transport parameters, including hydraulic permeability (LpS), fraction of ultrasmall pores (αu), osmotic conductance for glucose (OCG), and peritoneal absorption, were generally independent of solute transport parameters (diffusive mass transport parameters). Fluid transport parameters correlated with dialysis vintage and patient age, whereas transport parameters for small solutes and proteins did not. Although LpS and OCG were lower for older patients and those with long dialysis vintage, αu was higher. Thus, fluid transport parameters--rather than solute transport parameters--are linked to dialysis vintage and patient age and should therefore be included when monitoring processes linked to ageing of the peritoneal membrane.
Estimation of nonlinear pilot model parameters including time delay.
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.; Wells, W. R.
1972-01-01
Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.
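The core idea above, treating an unknown parameter as a state to be estimated by a Kalman filter, can be sketched with a scalar example. This toy omits the time delay and the nonlinear pilot dynamics; the gain, noise levels, and input are hypothetical:

```python
import random

random.seed(0)

# Estimate an unknown constant parameter theta from noisy measurements
# y_k = theta * u_k + v_k by modeling theta as a (constant) state.
theta_true = 2.5
R = 0.25                   # measurement noise variance
theta_hat, P = 0.0, 10.0   # initial estimate and its covariance

for k in range(200):
    u = 1.0 + 0.5 * random.random()              # known input signal
    y = theta_true * u + random.gauss(0, R ** 0.5)
    # Kalman update for the static parameter state (H = u, no process noise):
    S = u * P * u + R          # innovation variance
    K = P * u / S              # Kalman gain
    theta_hat += K * (y - u * theta_hat)
    P -= K * u * P
```

In the pilot-model problem the same machinery runs on the augmented nonlinear delayed-state system (hence the extended filter), but the update structure is the same.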
NASA Data Evaluation (2015): Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies
NASA Astrophysics Data System (ADS)
Burkholder, J. B.; Sander, S. P.; Abbatt, J.; Barker, J. R.; Huie, R. E.; Kolb, C. E., Jr.; Kurylo, M. J., III; Orkin, V. L.; Wilmouth, D. M.; Wine, P. H.
2015-12-01
Atmospheric chemistry models must include a large number of processes to accurately describe the temporal and spatial behavior of atmospheric composition. They require a wide range of chemical and physical data (parameters) that describe elementary gas-phase and heterogeneous processes. The review and evaluation of chemical and physical data has, therefore, played an important role in the development of chemical models and in their use in environmental assessment activities. The NASA data panel evaluation has a broad atmospheric focus that includes Ox, O(1D), singlet O2, HOx, NOx, Organic, FOx, ClOx, BrOx, IOx, SOx, and Na reactions, three-body reactions, equilibrium constants, photochemistry, Henry's Law coefficients, aqueous chemistry, heterogeneous chemistry and processes, and thermodynamic parameters. The 2015 evaluation includes critical coverage of ~700 bimolecular reactions, 86 three-body reactions, 33 equilibrium constants, ~220 photochemical species, ~360 aqueous and heterogeneous processes, and thermodynamic parameters for ~800 species with over 5000 literature citations reviewed. Each evaluation includes (1) recommended values (e.g. rate coefficients, absorption cross sections, solubilities, and uptake coefficients) with estimated uncertainty factors and (2) a note describing the available experimental and theoretical data and an explanation for the recommendation. This presentation highlights some of the recent additions to the evaluation that include: (1) expansion of thermochemical parameters, including Hg species, (2) CH2OO (Criegee) chemistry, (3) Isoprene and its major degradation product chemistry, (4) halocarbon chemistry, (5) Henry's law solubility data, and (6) uptake coefficients. In addition, a listing of complete references with the evaluation notes has been implemented. Users of the data evaluation are encouraged to suggest potential improvements and ways that the evaluation can better serve the atmospheric chemistry community.
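The evaluation's standard forms for bimolecular rate coefficients and their uncertainties are k(T) = A exp(-(E/R)/T) and f(T) = f(298) exp(|g (1/T - 1/298)|). A small sketch with illustrative parameter values (not the panel's recommendations for any actual reaction):

```python
import math

def k_arrhenius(A, E_over_R, T):
    """Recommended rate coefficient k(T) = A * exp(-(E/R)/T),
    typically in cm^3 molecule^-1 s^-1 for bimolecular reactions."""
    return A * math.exp(-E_over_R / T)

def uncertainty_factor(f298, g, T):
    """Temperature-dependent uncertainty factor
    f(T) = f(298) * exp(|g * (1/T - 1/298)|)."""
    return f298 * math.exp(abs(g * (1.0 / T - 1.0 / 298.0)))

# Illustrative parameters for a generic reaction at 250 K:
k250 = k_arrhenius(3.0e-12, 1500.0, 250.0)
f250 = uncertainty_factor(1.15, 100.0, 250.0)
# 1-sigma bounds on k at 250 K span k250/f250 to k250*f250.
```

The factor f grows away from 298 K, reflecting that most laboratory measurements cluster near room temperature.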
Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR) PARM tape user's guide
NASA Technical Reports Server (NTRS)
Han, D.; Gloersen, P.; Kim, S. T.; Fu, C. C.; Cebula, R. P.; Macmillan, D.
1992-01-01
The Scanning Multichannel Microwave Radiometer (SMMR) instrument, onboard the Nimbus-7 spacecraft, collected data from Oct. 1978 until Jun. 1986. The data were processed to physical parameter level products. Geophysical parameters retrieved include the following: sea-surface temperatures, sea-surface windspeed, total column water vapor, and sea-ice parameters. These products are stored on PARM-LO, PARM-SS, and PARM-30 tapes. The geophysical parameter retrieval algorithms and the quality of these products are described for the period between Nov. 1978 and Oct 1985. Additionally, data formats and data availability are included.
Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchheit, Thomas E.; Wilcox, Ian Zachary; Sandoval, Andrew J
This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach of Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a 'Xyce-Dakota' workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experimental setups for generating an optimum dataset for model extraction and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
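As a simplified illustration of initial parameter extraction, the forward-bias region of a diode yields a saturation current and ideality factor from two I-V points via the Shockley equation. This is a two-parameter toy with synthetic data, not the report's 9-parameter Zener model or its Xyce-Dakota workflow:

```python
import math

VT = 0.02585  # thermal voltage at ~300 K, volts

def extract_is_n(v1, i1, v2, i2):
    """Extract ideality factor n and saturation current Is from two
    forward-bias points, assuming I ~ Is * exp(V / (n*VT)) (series
    resistance neglected, as is common for an initial extraction)."""
    n = (v2 - v1) / (VT * math.log(i2 / i1))
    i_s = i1 / math.exp(v1 / (n * VT))
    return i_s, n

# Synthetic forward-region data generated from Is = 1e-12 A, n = 1.8:
def fwd(v, i_s=1e-12, n=1.8):
    return i_s * math.exp(v / (n * VT))

i_s_est, n_est = extract_is_n(0.5, fwd(0.5), 0.6, fwd(0.6))
```

On noise-free synthetic data the closed form recovers the generating parameters exactly; with real measurements such values serve only as starting points for the optimization stage.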
NASA Astrophysics Data System (ADS)
Sharifi, P.; Jamali, J.; Sadayappan, K.; Wood, J. T.
2018-05-01
A quantitative experimental study of the effects of process parameters on the formation of defects during solidification of high-pressure die cast magnesium alloy components is presented. The parameters studied are slow-stage velocity, fast-stage velocity, intensification pressure, and die temperature. The amount of various defects are quantitatively characterized. Multiple runs of the commercial casting simulation package, ProCAST™, are used to model the mold-filling and solidification events. Several locations in the component including knit lines, last-to-fill region, and last-to-solidify region are identified as the critical regions that have a high concentration of defects. The area fractions of total porosity, shrinkage porosity, gas porosity, and externally solidified grains are separately measured. This study shows that the process parameters, fluid flow and local solidification conditions, play major roles in the formation of defects during HPDC process.
The input variables for a numerical model of reactive solute transport in groundwater include both transport parameters, such as hydraulic conductivity and infiltration, and reaction parameters that describe the important chemical and biological processes in the system. These pa...
42 CFR 493.1256 - Standard: Control procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... for having control procedures that monitor the accuracy and precision of the complete analytic process..., include two control materials, including one that is capable of detecting errors in the extraction process... control materials having previously determined statistical parameters. (e) For reagent, media, and supply...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aita, C.R.
1993-09-30
The research developed process parameter-growth environment-film property relations (phase maps) for model sputter-deposited transition metal oxides, nitrides, and oxynitrides grown by reactive sputter deposition at low temperature. Optical emission spectrometry was used for plasma diagnostics. The results summarized here include the role of sputtered metal-oxygen molecular flux in oxide film growth; structural differences in highest valence oxides including conditions for amorphous growth; and using fundamental optical absorption edge features to probe short range structural disorder. Eight appendices containing sixteen journal articles are included.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. 
Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
Mathematical Model Of Variable-Polarity Plasma Arc Welding
NASA Technical Reports Server (NTRS)
Hung, R. J.
1996-01-01
Mathematical model of variable-polarity plasma arc (VPPA) welding process developed for use in predicting characteristics of welds and thus serves as guide for selection of process parameters. Parameters include welding electric currents in, and durations of, straight and reverse polarities; rates of flow of plasma and shielding gases; and sizes and relative positions of welding electrode, welding orifice, and workpiece.
PMMA/PS coaxial electrospinning: a statistical analysis on processing parameters
NASA Astrophysics Data System (ADS)
Rahmani, Shahrzad; Arefazar, Ahmad; Latifi, Masoud
2017-08-01
Coaxial electrospinning, as a versatile method for producing core-shell fibers, is known to be very sensitive to two classes of influential factors: material and processing parameters. Although coaxial electrospinning has been the focus of many studies, the effects of processing parameters on the outcomes of this method have not yet been well investigated. A good knowledge of the impacts of processing parameters and their interactions on coaxial electrospinning can make it possible to better control and optimize this process. Hence, in this study, the statistical technique of response surface methodology (RSM) using the design of experiments on four processing factors (voltage, distance, core and shell flow rates) was applied. Transmission electron microscopy (TEM), scanning electron microscopy (SEM), oil immersion and fluorescence microscopy were used to characterize fiber morphology. The core and shell diameters of fibers were measured and the effects of all factors and their interactions were discussed. Two polynomial models with acceptable R-squares were proposed to describe the core and shell diameters as functions of the processing parameters. Voltage and distance were recognized as the most significant and influential factors on shell diameter, while core diameter was mainly influenced by the core and shell flow rates, in addition to the voltage.
NASA Astrophysics Data System (ADS)
Pomeroy, J. W.; Fang, X.
2014-12-01
The vast effort in hydrology devoted to parameter calibration as a means to improve model performance assumes that the models concerned are not fundamentally wrong. By focussing on finding optimal parameter sets and ascribing poor model performance to parameter or data uncertainty, these efforts may fail to consider the need to improve models with more intelligent descriptions of hydrological processes. To test this hypothesis, a flexible physically based hydrological model including a full suite of snow hydrology processes as well as warm season, hillslope and groundwater hydrology was applied to Marmot Creek Research Basin, Canadian Rocky Mountains, where excellent driving meteorology and basin biophysical descriptions exist. Model parameters were set from values found in the basin or from similar environments; no parameters were calibrated. The model was tested against snow surveys and streamflow observations. The model used algorithms that describe snow redistribution, sublimation and forest canopy effects on snowmelt and evaporative processes that are rarely implemented in hydrological models. To investigate the contribution of these processes to model predictive capability, the model was "falsified" by deleting parameterisations for forest canopy snow mass and energy, blowing snow, intercepted rain evaporation, and sublimation. Model falsification by ignoring forest canopy processes contributed to a large increase in SWE errors for forested portions of the research basin, with RMSE increasing from 19 to 55 mm and mean bias (MB) increasing from 0.004 to 0.62. In the alpine tundra portion, removing blowing snow processes resulted in an increase in model SWE MB from 0.04 to 2.55 on north-facing slopes and -0.006 to -0.48 on south-facing slopes. Eliminating these algorithms degraded streamflow prediction, with the Nash-Sutcliffe efficiency dropping from 0.58 to 0.22 and MB increasing from 0.01 to 0.09.
These results show dramatic model improvements by including snow redistribution and melt processes associated with wind transport and forest canopies. As most hydrological models do not currently include these processes, it is suggested that modellers first improve the realism of model structures before trying to optimise what are inherently inadequate simulations of hydrology.
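The performance metrics quoted above (RMSE, mean bias, Nash-Sutcliffe efficiency) can be computed as follows. Mean-bias conventions vary between studies; a relative form is assumed here, and the observation/simulation values are hypothetical:

```python
def rmse(obs, sim):
    """Root-mean-square error, in the units of the observations."""
    return (sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def mean_bias(obs, sim):
    """Relative mean bias: sum of errors over sum of observations
    (one common convention; others normalize differently)."""
    return sum(s - o for o, s in zip(obs, sim)) / sum(obs)

def nash_sutcliffe(obs, sim):
    """NSE: 1 is perfect; 0 means no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Hypothetical observed and simulated SWE values (mm):
obs = [10.0, 20.0, 30.0, 40.0]
sim = [12.0, 18.0, 33.0, 39.0]
```

A falsified model typically shows NSE collapsing toward (or below) zero while RMSE and bias grow, as in the streamflow results reported above.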
NASA Astrophysics Data System (ADS)
Ganje, Mohammad; Jafari, Seid Mahdi; Farzaneh, Vahid; Malekjani, Narges
2018-06-01
To study the kinetics of color degradation, tomato paste was processed at three temperatures (60, 70 and 80 °C) for 25, 50, 75 and 100 min. The a/b ratio, total color difference (TCD), saturation index and hue angle were calculated from the three main color parameters: L (lightness), a (redness-greenness) and b (yellowness-blueness). The kinetics of color degradation was described by the Arrhenius equation and the alterations were modelled using response surface methodology (RSM). All of the studied responses followed first-order reaction kinetics, with the exception of the TCD parameter (zeroth order). TCD and the a/b ratio, with the highest and lowest activation energies respectively, were the most and least sensitive to temperature changes. The maximum and minimum rates of alteration were observed for the TCD and b parameters, respectively. All of the studied responses were affected by the selected independent parameters.
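The fitted kinetics can be sketched as a first-order decay whose rate constant follows the Arrhenius temperature dependence. The rate constant and activation energy below are illustrative, not the paper's fitted values:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def first_order(c0, k, t):
    """First-order degradation: C(t) = C0 * exp(-k*t)."""
    return c0 * math.exp(-k * t)

def arrhenius_k(k_ref, Ea, T, T_ref):
    """Rate constant at T from a reference rate via Arrhenius:
    k(T) = k_ref * exp(-(Ea/R) * (1/T - 1/T_ref))."""
    return k_ref * math.exp(-(Ea / R) * (1.0 / T - 1.0 / T_ref))

# Illustrative values: k = 0.005 min^-1 at 60 C (333 K), Ea = 40 kJ/mol.
k60 = 0.005
k80 = arrhenius_k(k60, 40000.0, 353.0, 333.0)   # rate at 80 C
value_100min = first_order(2.0, k60, 100.0)     # hypothetical response decay
```

A higher activation energy Ea makes k(T) change more steeply with temperature, which is why the response with the highest Ea is the most temperature-sensitive.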
Surveillance of industrial processes with correlated parameters
White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.
1996-01-01
A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
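The Mahalanobis distance at the heart of such surveillance measures how far a sensor vector lies from the process mean once parameter correlations are accounted for. A minimal two-sensor sketch (the full system also removes serial correlation and applies a probability ratio test, both omitted here):

```python
def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance d^2 = (x-m)^T S^-1 (x-m),
    using plain lists for the vectors and inverse covariance."""
    dx = [xi - mi for xi, mi in zip(x, mean)]
    y = [sum(cov_inv[i][j] * dx[j] for j in range(len(dx)))
         for i in range(len(dx))]
    return sum(yi * di for yi, di in zip(y, dx))

# Two correlated sensors: unit variances, correlation 0.8.
rho = 0.8
det = 1 - rho ** 2
cov_inv = [[1 / det, -rho / det], [-rho / det, 1 / det]]

# Deviation against the correlation pattern is far more anomalous
# than an equal-magnitude deviation along it:
d2_against = mahalanobis_sq([1.0, -1.0], [0.0, 0.0], cov_inv)
d2_along = mahalanobis_sq([1.0, 1.0], [0.0, 0.0], cov_inv)
```

This is why correlation-aware surveillance can flag a failing sensor that a per-channel threshold would miss: each reading is individually plausible, but the pair is not.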
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements. In addition to these are mass and solid propellant burn depth as the "system" state elements. The "parameter" state elements can include deviations from reference values of aerodynamic coefficients, inertia, center of gravity, atmospheric winds, etc. Propulsion parameter state elements have been included not merely as options, as just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
Investigation into the influence of build parameters on failure of 3D printed parts
NASA Astrophysics Data System (ADS)
Fornasini, Giacomo
Additive manufacturing, including fused deposition modeling (FDM), is transforming the built world and engineering education. Deep understanding of parts created through FDM technology has lagged behind its adoption in home, work, and academic environments. Properties of parts created from bulk materials through traditional manufacturing are understood well enough to accurately predict their behavior through analytical models. Unfortunately, additive manufacturing (AM) process parameters create anisotropy on a scale that fundamentally affects part properties. Understanding AM process parameters (implemented by program algorithms called slicers) is necessary to predict part behavior. Investigating the slicer algorithms that control print parameters revealed stark differences in how part layers are generated. In this work, tensile testing experiments, including a full factorial design, determined that three key factors (width, thickness, and infill density) and their interactions significantly affect the tensile properties of 3D printed test samples.
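A full factorial design such as the one used here enumerates every combination of factor levels, which is what allows both main effects and interactions to be estimated. A two-level sketch with illustrative levels (not the study's actual settings):

```python
from itertools import product

# Two-level full factorial over the three significant factors
# (factor names and levels are hypothetical):
factors = {
    "width_mm":     [10.0, 20.0],
    "thickness_mm": [2.0, 4.0],
    "infill_pct":   [20, 80],
}

# Every combination of levels: 2^3 = 8 experimental runs.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
```

Because every level of each factor appears with every level of the others, interaction effects (e.g. width x infill) are not confounded with main effects, unlike in a one-factor-at-a-time study.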
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
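A periodic AR model differs from a stationary one in that its coefficients cycle with the season. A minimal simulation sketch with period 12 (e.g. monthly data); the coefficient values are illustrative, not estimates from the paper:

```python
import numpy as np

# Simulate a periodic AR(1): x_t = phi_s * x_{t-1} + e_t, where the
# coefficient phi_s depends on the season s = t mod 12.
rng = np.random.default_rng(1)
period, n = 12, 1200
phi = 0.3 + 0.4 * np.cos(2 * np.pi * np.arange(period) / period)  # phi_s

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t % period] * x[t - 1] + rng.standard_normal()

# The lag-1 correlation, estimated separately for each season, varies
# periodically, unlike in a stationary ARMA model.
corrs = []
for s in range(period):
    idx = np.arange(1, n)[np.arange(1, n) % period == s]
    corrs.append(np.corrcoef(x[idx - 1], x[idx])[0, 1])
print(np.round(corrs, 2))
```

This seasonal variation in the autocorrelations is what the periodic difference equations in the paper characterize.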
Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data
NASA Astrophysics Data System (ADS)
Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.
2014-06-01
We present the transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e. central time of first transit), orbital period, impact parameter, ratio of planet radius to star radius, and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration, and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick, high-fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted into initial values for the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates in the search for Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters.
The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and diagnostic figures, are included in the DV report and one-page report summary, which are accessible by the science community at NASA Exoplanet Archive. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
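The four-parameter trapezoidal pre-fit described above can be sketched as follows. This is an illustrative stand-in, not the Kepler SOC code: the pipeline uses repeated Levenberg-Marquardt minimization, while this sketch uses SciPy's bounded trust-region least squares for robustness, and all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def trapezoid(t, t0, depth, duration, ingress):
    """Unit flux with a flat-bottomed dip and linear ingress/egress ramps."""
    dt = np.abs(t - t0)
    half = duration / 2.0
    flux = np.ones_like(t)
    flux[dt <= half - ingress] -= depth                     # flat bottom
    ramp = (dt > half - ingress) & (dt < half)
    flux[ramp] -= depth * (half - dt[ramp]) / ingress       # linear ramps
    return flux

# Synthetic light curve: true (t0, depth, duration, ingress) plus noise.
rng = np.random.default_rng(2)
t = np.linspace(-0.5, 0.5, 400)
y = trapezoid(t, 0.0, 0.01, 0.3, 0.05) + rng.normal(0.0, 5e-4, t.size)

fit = least_squares(
    lambda p: trapezoid(t, *p) - y,
    x0=[0.02, 0.005, 0.25, 0.04],                           # rough initial guess
    bounds=([-0.1, 1e-4, 0.05, 1e-3], [0.1, 0.05, 0.5, 0.1]),
)
print(np.round(fit.x, 3))  # close to the true (0.0, 0.01, 0.3, 0.05)
```

In the pipeline, the best such fit seeds the initial values of the five-parameter standard transit model, which is the step the abstract describes.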
Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram
2018-06-08
Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined from which near-optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still allows a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.
A holistic approach towards defined product attributes by Maillard-type food processing.
Davidek, Tomas; Illmann, Silke; Rytz, Andreas; Blank, Imre
2013-07-01
A fractional factorial experimental design was used to quantify the impact of process and recipe parameters on selected product attributes of extruded products (colour, viscosity, acrylamide, and the flavour marker 4-hydroxy-2,5-dimethyl-3(2H)-furanone, HDMF). The study showed that recipe parameters (lysine, phosphate) can be used to modulate the HDMF level without changing the specific mechanical energy (SME) and consequently the texture of the product, while processing parameters (temperature, moisture) impact both HDMF and SME in parallel. Similarly, several parameters, including phosphate level, temperature, and moisture, simultaneously impact both HDMF and acrylamide formation, while pH and the addition of lysine showed different trends. Therefore, the latter two options can be used to mitigate acrylamide without a negative impact on flavour. Such a holistic approach has proven to be a powerful tool for optimizing various product attributes in food processing.
NASA Astrophysics Data System (ADS)
Liu, Huaming; Qin, Xunpeng; Huang, Song; Hu, Zeqi; Ni, Mao
2018-01-01
This paper presents an investigation of the relationship between the process parameters and the geometrical characteristics of the sectional profile of single track cladding (STC) deposited by a High Power Diode Laser (HPDL) with a rectangle beam spot (RBS). To obtain the geometry parameters, namely the cladding width Wc and height Hc of the sectional profile, a full factorial design (FFD) of experiments was used, with a total of 27 runs. The pre-placed powder technique was employed during laser cladding. The influence of the process parameters, including laser power, powder thickness, and scanning speed, on Wc and Hc was analyzed in detail. A nonlinear fitting model was used to relate the process parameters to the geometry parameters, and a circular arc was adopted to describe the geometry profile of the cross-section of the STC. These models were validated against all the experiments. The results indicated that the sectional profile of the STC can be described as a circular arc, and that the other geometry parameters of the sectional profile can be calculated using only Wc and Hc. Meanwhile, Wc and Hc can be predicted from the process parameters.
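The claim that only Wc and Hc are needed follows from circle geometry: a circular segment is fully determined by its chord width and height. A small sketch with illustrative dimensions (not values from the paper):

```python
import math

# Given a circular-arc cross-section with chord width Wc and height Hc,
# recover the circle radius, subtended angle, and segment (clad) area.
def arc_geometry(Wc, Hc):
    R = (Wc ** 2 / 4.0 + Hc ** 2) / (2.0 * Hc)       # circle radius
    theta = 2.0 * math.asin(Wc / (2.0 * R))          # subtended angle (rad)
    area = 0.5 * R ** 2 * (theta - math.sin(theta))  # circular-segment area
    return R, theta, area

R, theta, area = arc_geometry(Wc=4.0, Hc=0.8)        # mm; R is 2.9 for these inputs
print(R, area)
```

A quick sanity check is the parabolic approximation (2/3)·Wc·Hc ≈ 2.13 mm², close to the exact segment area.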
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
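The sensitivity of clustering inferences to the precision parameter can be made concrete through the prior expected number of clusters, E[K] = Σ_{i=1}^{n} α/(α+i-1). A short sketch:

```python
# Under a Dirichlet process prior with precision alpha, the prior expected
# number of distinct clusters among n observations is
#   E[K] = sum_{i=1}^{n} alpha / (alpha + i - 1).
def expected_clusters(alpha, n):
    return sum(alpha / (alpha + i) for i in range(n))

for alpha in (0.1, 1.0, 10.0):
    print(alpha, round(expected_clusters(alpha, 100), 2))
```

Two orders of magnitude in α shift the implied number of clusters from near one to dozens, which is why the choice of prior on α matters so much in this setting.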
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Sin, Gürkan; Gernaey, Krist V
2017-03-01
A mechanistic model-based soft sensor is developed and validated for 550L filamentous fungus fermentations operated at Novozymes A/S. The soft sensor is comprised of a parameter estimation block based on a stoichiometric balance, coupled to a dynamic process model. The on-line parameter estimation block models the changing rates of formation of product, biomass, and water, and the rate of consumption of feed using standard, available on-line measurements. This parameter estimation block, is coupled to a mechanistic process model, which solves the current states of biomass, product, substrate, dissolved oxygen and mass, as well as other process parameters including k L a, viscosity and partial pressure of CO 2 . State estimation at this scale requires a robust mass model including evaporation, which is a factor not often considered at smaller scales of operation. The model is developed using a historical data set of 11 batches from the fermentation pilot plant (550L) at Novozymes A/S. The model is then implemented on-line in 550L fermentation processes operated at Novozymes A/S in order to validate the state estimator model on 14 new batches utilizing a new strain. The product concentration in the validation batches was predicted with an average root mean sum of squared error (RMSSE) of 16.6%. In addition, calculation of the Janus coefficient for the validation batches shows a suitably calibrated model. The robustness of the model prediction is assessed with respect to the accuracy of the input data. Parameter estimation uncertainty is also carried out. The application of this on-line state estimator allows for on-line monitoring of pilot scale batches, including real-time estimates of multiple parameters which are not able to be monitored on-line. With successful application of a soft sensor at this scale, this allows for improved process monitoring, as well as opening up further possibilities for on-line control algorithms, utilizing these on-line model outputs. 
Biotechnol. Bioeng. 2017;114: 589-599. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Bian, X. X.; Gu, Y. Z.; Sun, J.; Li, M.; Liu, W. P.; Zhang, Z. G.
2013-10-01
In this study, the effects of processing temperature and vacuum application rate on the forming quality of C-shaped carbon fiber reinforced epoxy resin matrix composite laminates during the hot diaphragm forming process were investigated. C-shaped prepreg preforms were produced using home-made hot diaphragm forming equipment. The thickness variations of the preforms and the manufacturing defects after the diaphragm forming process, including fiber wrinkling and voids, were evaluated to understand the forming mechanism. Furthermore, both the interlaminar slipping friction and the compaction behavior of the prepreg stacks were experimentally analyzed to show the importance of the processing parameters. In addition, autoclave processing was used to cure the C-shaped preforms to investigate the changes in the defects before and after the cure process. The results show that C-shaped prepreg preforms with good forming quality can be achieved by increasing the processing temperature and reducing the vacuum application rate, which promotes the prepreg interlaminar slipping process. The processing temperature and forming rate in the hot diaphragm forming process strongly influence the prepreg interply frictional force, and the maximum interlaminar frictional force can be taken as a key parameter for processing parameter optimization. The autoclave process is effective in eliminating voids in the preforms and can alleviate fiber wrinkles to a certain extent.
Surveillance of industrial processes with correlated parameters
White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.
1996-12-17
A system and method for surveillance of an industrial process are disclosed. The system and method include a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer-compatible information, and a computer which executes software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distance data to carry out a probability ratio test that determines alarm conditions. 10 figs.
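The probability ratio test mentioned above can be illustrated with Wald's sequential probability ratio test (SPRT) on univariate residuals. The mean shift and error probabilities below are illustrative assumptions, not values from the patent, which operates on decorrelated multivariate data.

```python
import math

# SPRT deciding between "normal" (residual mean 0) and "degraded" (mean m)
# for Gaussian data with known sigma; thresholds follow from the desired
# false-alarm (alpha) and missed-alarm (beta) probabilities.
def sprt(samples, m=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    A = math.log((1 - beta) / alpha)   # alarm threshold
    B = math.log(beta / (1 - alpha))   # accept-normal threshold
    llr = 0.0
    for k, x in enumerate(samples, 1):
        llr += (m / sigma ** 2) * (x - m / 2.0)  # Gaussian log-likelihood ratio
        if llr >= A:
            return "alarm", k
        if llr <= B:
            return "normal", k
    return "undecided", len(samples)

print(sprt([0.1, -0.2, 0.0, 0.3, -0.1] * 5))  # near-zero residuals: decides "normal"
print(sprt([1.2, 0.9, 1.1, 1.3, 1.0] * 5))    # shifted residuals: decides "alarm"
```

The appeal of the sequential form is that a decision is reached as soon as the evidence is statistically reliable, rather than after a fixed sample size.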
Closed-Loop Process Control for Electron Beam Freeform Fabrication and Deposition Processes
NASA Technical Reports Server (NTRS)
Taminger, Karen M. (Inventor); Hofmeister, William H. (Inventor); Martin, Richard E. (Inventor); Hafley, Robert A. (Inventor)
2013-01-01
A closed-loop control method for an electron beam freeform fabrication (EBF³) process includes detecting a feature of interest during the process using a sensor(s), continuously evaluating the feature of interest to determine, in real time, a change occurring therein, and automatically modifying control parameters to control the EBF³ process. An apparatus provides closed-loop control of the process, and includes an electron gun for generating an electron beam, a wire feeder for feeding a wire toward a substrate, wherein the wire is melted and progressively deposited in layers onto the substrate, a sensor(s), and a host machine. The sensor(s) measure the feature of interest during the process, and the host machine continuously evaluates the feature of interest to determine, in real time, a change occurring therein. The host machine automatically modifies control parameters to the EBF³ apparatus to control the EBF³ process in a closed-loop manner.
Guo, Chaohua; Wei, Mingzhen; Liu, Hong
2018-01-01
Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most existing studies use well-testing methods, which rely on many unrealistic simplifications and assumptions; no systematic work has considered all relevant transport mechanisms; and very few sensitivity studies use realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual-porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified against available field data from the Barnett Shale. The following mechanisms were considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGR production performance with MsFHW was conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of matrix parameters were reviewed, and sensitivity analyses were conducted to analyze the effect of the above factors on the production performance of SGRs.
Through comparison, hydraulic fracture parameters were found to be more sensitive than reservoir parameters. Reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters significantly affect gas production from the early period onward. The results of this study can be used to improve the efficiency of the history matching process and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs.
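The Langmuir isotherm used for the desorption mechanism can be sketched as follows; the Langmuir volume and pressure constants below are illustrative placeholders, not the Barnett Shale values used in the paper.

```python
# Langmuir isotherm: adsorbed gas content as a function of pressure,
#   V(p) = V_L * p / (p_L + p),
# where V_L is the Langmuir volume and p_L the Langmuir pressure.
def langmuir_volume(p, V_L=96.0, p_L=4.48):
    """Adsorbed gas content (scf/ton) at pressure p (MPa); constants illustrative."""
    return V_L * p / (p_L + p)

# Gas released per ton of rock as reservoir pressure draws down from p_i to p:
p_i, p = 20.68, 6.89
released = langmuir_volume(p_i) - langmuir_volume(p)
print(round(released, 1))
```

In the dual-porosity model, this released volume enters the matrix mass balance as the desorption source term as pressure declines.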
Wei, Mingzhen; Liu, Hong
2018-01-01
Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most existing studies use well-testing methods, which rely on many unrealistic simplifications and assumptions; no systematic work has considered all relevant transport mechanisms; and very few sensitivity studies use realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual-porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified against available field data from the Barnett Shale. The following mechanisms were considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGR production performance with MsFHW was conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of matrix parameters were reviewed, and sensitivity analyses were conducted to analyze the effect of the above factors on the production performance of SGRs.
Through comparison, hydraulic fracture parameters were found to be more sensitive than reservoir parameters. Reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters significantly affect gas production from the early period onward. The results of this study can be used to improve the efficiency of the history matching process and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs. PMID:29320489
Important parameters for smoke plume rise simulation with Daysmoke
L. Liu; G.L. Achtemeier; S.L. Goodrick; W. Jackson
2010-01-01
Daysmoke is a local smoke transport model and has been used to provide smoke plume rise information. It includes a large number of parameters describing the dynamic and stochastic processes of particle upward movement, fallout, fluctuation, and burn emissions. This study identifies the important parameters for Daysmoke simulations of plume rise and seeks to understand...
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, B.; Wood, R.T.
1997-04-22
A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system. 1 fig.
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, Brian; Wood, Richard T.
1997-01-01
A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system.
NASA Astrophysics Data System (ADS)
Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.
2016-12-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. 
These sensitivities are, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
Existing multiple-physiological-parameter real-time monitoring systems have problems such as insufficient server capacity for physiological data storage and analysis (so that data consistency cannot be guaranteed), poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new solution: a clustered back-end for the storage and processing of multiple physiological parameters, based on cloud computing. Through our studies, batch processing for longitudinal analysis of patients' historical data was introduced. The work included resource virtualization at the IaaS layer of the cloud platform, construction of a real-time computing platform at the PaaS layer, reception and analysis of data streams at the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage, and analysis of large amounts of physiological information. The simulation test results showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved problems that exist in traditional remote medical services, including long turnaround time, poor real-time analysis performance, and lack of extensibility. Technical support was thereby provided for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode of home health monitoring with wireless monitoring of multiple physiological parameters.
Fuchs, Sabine; Henschke, Cornelia; Blümel, Miriam; Busse, Reinhard
2014-06-27
Disease management programs (DMPs) are intended to improve the care of persons with chronic diseases. Despite numerous studies there is no unequivocal evidence about the effectiveness of DMPs in Germany. We conducted a systematic literature review in the MEDLINE, EMBASE, Cochrane Library, and CCMed databases. Our analysis included all controlled studies in which patients with type 2 diabetes enrolled in a DMP were compared to type 2 diabetes patients receiving routine care with respect to process, outcome, and economic parameters. The 9 studies included in the analysis were highly divergent with respect to their characteristics and the process and outcome parameters studied in each. No study had data beyond the year 2008. In 3 publications, the DMP patients had a lower mortality than the control patients (2.3%, 11.3%, and 7.17% versus 4.7%, 14.4%, and 14.72%). In 2 publications, DMP participation was found to be associated with a mean survival time of 1044.94 (± 189.87) days, as against 985.02 (± 264.68) in the control group. No consistent effect was seen with respect to morbidity, quality of life, or economic parameters. 7 publications from 5 studies revealed positive effects on process parameters for DMP participants. The observed beneficial trends with respect to mortality and survival time, as well as improvements in process parameters, indicate that DMPs can, in fact, improve the care of patients with diabetes. Further evaluation is needed, because some changes in outcome parameters (an important indicator of the quality of care) may only be observable over a longer period of time.
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. 
Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a sub-set of only soil parameters, for example, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
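The Sobol' analysis used in both Noah-MP studies decomposes output variance into per-parameter contributions. A minimal first-order Saltelli-type estimator on a toy three-parameter model (not Noah-MP; the model and coefficients are illustrative) might look like:

```python
import numpy as np

# First-order Sobol' indices S_i = V_i / V via the Saltelli estimator:
# S_i ≈ mean(f(B) * (f(AB_i) - f(A))) / Var(f), where AB_i is A with its
# i-th column replaced by B's.
rng = np.random.default_rng(3)
d, n = 3, 100_000
f = lambda x: 4.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]  # toy "model"

A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (f(ABi) - fA)) / var)

print(np.round(S, 2))  # variance shares 16:4:0.25, i.e. ~0.79, ~0.20, ~0.01
```

Ranking parameters by these indices is how the studies identify, for example, the hard-coded soil surface resistance value as the single most sensitive parameter.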
Ground Truth Events with Source Geometry in Eurasia and the Middle East
2016-06-02
Source properties, including seismic moment, corner frequency, radiated energy, and stress drop, have been obtained using spectra for S waves following... Other source parameters, including radiated energy, corner frequency, seismic moment, and static stress drop, were calculated using a spectral... technique (Richardson & Jordan, 2002; Andrews, 1986). The process entails separating event and station spectra and median-stacking each event's
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Intrinsic thermodynamics of ethoxzolamide inhibitor binding to human carbonic anhydrase XIII
2012-01-01
Background Human carbonic anhydrases (CAs) play a crucial role in various physiological processes, including carbon dioxide and hydrocarbon transport, acid homeostasis, biosynthetic reactions, and various pathological processes, especially tumor progression. Therefore, CAs are interesting targets for pharmaceutical research. The structure-activity relationships (SAR) of designed inhibitors require detailed thermodynamic and structural characterization of the binding reaction. Unfortunately, most publications list only the observed thermodynamic parameters, which are significantly different from the intrinsic parameters. However, only intrinsic parameters can be used in the rational design and SAR analysis of novel compounds. Results Intrinsic binding parameters for several inhibitors, including ethoxzolamide, trifluoromethanesulfonamide, and acetazolamide, binding to the recombinant human CA XIII isozyme were determined. The parameters were the intrinsic Gibbs free energy, enthalpy, entropy, and heat capacity. They were determined by titration calorimetry and thermal shift assay over a wide pH and temperature range to dissect all linked protonation reaction contributions. Conclusions Precise determination of the inhibitor binding thermodynamics enabled correct intrinsic affinity and enthalpy ranking of the compounds and provided the means for SAR analysis of other rationally designed CA inhibitors. PMID:22676044
NASA Astrophysics Data System (ADS)
Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas
2016-04-01
Selective Laser Sintering (SLS) is considered one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However, the influence of the different process parameters on mechanical workpiece properties is still poorly studied, so further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments are implemented to improve the understanding of the limitations and advantages of SLS on an educational level. The experiments are based on two different workstations used to teach students the fundamentals of SLS. First, a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the material under varied process parameters by analyzing a single-layered test piece. Second, the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces as a function of various process parameters. Finally, quality attributes are tested, including warpage, dimensional accuracy and tensile strength. For dimension measurements and evaluation of the surface structure, a telecentric lens in combination with a camera is used. A tensile test machine allows testing of tensile strength and interpretation of stress-strain curves. The developed laboratory experiments are suitable for teaching students the influence of processing parameters. In this context, students will be able to optimize the input parameters depending on the component to be manufactured and to increase the overall quality of the final workpiece.
A Novel Scale Up Model for Prediction of Pharmaceutical Film Coating Process Parameters.
Suzuki, Yasuhiro; Suzuki, Tatsuya; Minami, Hidemi; Terada, Katsuhide
2016-01-01
In the pharmaceutical tablet film coating process, we clarified that a difference in exhaust air relative humidity can be used to detect differences in process parameter values: the relative humidity of the exhaust air differed under different atmospheric humidity conditions even though all set values of the manufacturing process parameters were the same, and the water content of the tablets was correlated with the exhaust air relative humidity. Based on these experimental data, the exhaust air relative humidity index (EHI) was developed; it is an empirical equation whose functional parameters include the pan coater type, heated air flow rate, spray rate of the coating suspension, saturated water vapor pressure at the heated air temperature, and partial water vapor pressure at atmospheric air pressure. The values of exhaust relative humidity predicted using EHI correlated well with the experimental data (correlation coefficient of 0.966) across all datasets. EHI was verified using the data of seven different drug products at different manufacturing scales. The EHI model will support formulation researchers by enabling them to set film coating process parameters when the batch size or pan coater type changes, without the time and expense of further extensive testing.
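The EHI equation itself is not given in the abstract. The following hypothetical psychrometric mass balance merely illustrates why exhaust relative humidity responds to spray rate, airflow and inlet humidity in such a coater. The Magnus formula for saturation vapor pressure is standard; every other number, name and simplification (complete evaporation, fixed total pressure) is an assumption, not the published EHI.

```python
import math

def p_sat(t_c):
    """Magnus formula: saturation water vapor pressure (Pa) at t_c deg C."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def exhaust_rh(air_flow_kg_s, spray_rate_kg_s, inlet_rh, t_in_c, t_out_c,
               evap_fraction=1.0):
    """Hypothetical mass balance: sprayed water (times evap_fraction)
    evaporates into the process air. Humidity ratio w is in kg water per
    kg dry air; total pressure is assumed to be 101325 Pa."""
    p_total = 101325.0
    p_w_in = inlet_rh * p_sat(t_in_c)
    w_in = 0.622 * p_w_in / (p_total - p_w_in)
    w_out = w_in + evap_fraction * spray_rate_kg_s / air_flow_kg_s
    p_w_out = p_total * w_out / (0.622 + w_out)
    return p_w_out / p_sat(t_out_c)

# Illustrative operating point: 1 kg/s air, 10 g/s spray, 40% RH inlet air
rh = exhaust_rh(air_flow_kg_s=1.0, spray_rate_kg_s=0.01,
                inlet_rh=0.4, t_in_c=25.0, t_out_c=40.0)
```

Under this toy balance, raising the spray rate or the inlet humidity raises the exhaust relative humidity, which is the qualitative behavior the abstract exploits.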
NASA Astrophysics Data System (ADS)
Adalarasan, R.; Santhanakumar, M.
2015-01-01
In the present work, the yield strength, ultimate strength and micro-hardness of lap joints formed with Al 6061 alloy sheets by Tungsten Inert Gas (TIG) welding and Metal Inert Gas (MIG) welding were studied for various combinations of the welding parameters. The parameters taken for study include welding current, voltage, welding speed and inert gas flow rate. Taguchi's L9 orthogonal array was used to conduct the experiments, and an integrated technique of desirability grey relational analysis was applied to optimize the welding parameters. The robustness ignored by the desirability approach is compensated for by the grey relational approach to predict the optimal setting of input parameters for the TIG and MIG welding processes, which was validated through confirmation experiments.
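The grey relational step referred to above can be sketched as follows. Each response column is normalized, grey relational coefficients are computed against the ideal sequence, and their mean gives a single grade per experiment. The response data below are hypothetical stand-ins for measured joint properties, and zeta = 0.5 is the customary distinguishing coefficient.

```python
def grey_relational_grades(responses, larger_is_better, zeta=0.5):
    """responses: list of experiment rows, each a list of response values.
    Normalize each response column to [0, 1], compute grey relational
    coefficients against the ideal sequence (all ones), and average the
    coefficients of each row into a grey relational grade."""
    cols = list(zip(*responses))
    norm_cols = []
    for col, lib in zip(cols, larger_is_better):
        lo, hi = min(col), max(col)
        if lib:
            norm_cols.append([(v - lo) / (hi - lo) for v in col])
        else:
            norm_cols.append([(hi - v) / (hi - lo) for v in col])
    norm_rows = list(zip(*norm_cols))
    deltas = [[1.0 - v for v in row] for row in norm_rows]
    d_flat = [d for row in deltas for d in row]
    d_min, d_max = min(d_flat), max(d_flat)
    coeffs = [[(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
              for row in deltas]
    return [sum(row) / len(row) for row in coeffs]

# Hypothetical L9-style responses: (tensile strength MPa, hardness HV)
resp = [(210, 70), (230, 75), (250, 80), (225, 72), (245, 78),
        (215, 71), (240, 76), (255, 82), (220, 74)]
grades = grey_relational_grades(resp, larger_is_better=[True, True])
best = grades.index(max(grades))  # experiment with the highest overall grade
```

The experiment that dominates every response gets a grade of exactly 1; ranking the grades then identifies the preferred parameter combination.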
Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia
2016-01-01
During peritoneal dialysis (PD), the peritoneal membrane undergoes ageing processes that affect its function. Here we analyzed associations of patient age and dialysis vintage with parameters of peritoneal transport of fluid and solutes, directly measured and estimated based on the pore model, for individual patients. Thirty-three patients (15 females; age 60 (21–87) years; median time on PD 19 (3–100) months) underwent a sequential peritoneal equilibration test. Dialysis vintage and patient age did not correlate. Estimation of the parameters of the two-pore model of peritoneal transport was performed. The estimated fluid transport parameters, including hydraulic permeability (LpS), fraction of ultrasmall pores (αu), osmotic conductance for glucose (OCG), and peritoneal absorption, were generally independent of the solute transport parameters (diffusive mass transport parameters). Fluid transport parameters correlated with dialysis vintage and patient age, whereas transport parameters for small solutes and proteins did not. Although LpS and OCG were lower for older patients and those with long dialysis vintage, αu was higher. Thus, fluid transport parameters, rather than solute transport parameters, are linked to dialysis vintage and patient age and should therefore be included when monitoring processes linked to ageing of the peritoneal membrane. PMID:26989432
A fluidized bed technique for estimating soil critical shear stress
USDA-ARS?s Scientific Manuscript database
Soil erosion models, depending on how they are formulated, always have erodibility parameters in the erosion equations. For a process-based model like the Water Erosion Prediction Project (WEPP) model, the erodibility parameters include rill and interrill erodibility and critical shear stress. Thes...
Efficient extraction strategies of tea (Camellia sinensis) biomolecules.
Banerjee, Satarupa; Chatterjee, Jyotirmoy
2015-06-01
Tea is a popular daily beverage worldwide. Modulation and modification of its basic components such as catechins, alkaloids, proteins and carbohydrates during the fermentation or extraction process change the organoleptic, gustatory and medicinal properties of tea. Through these processes, increases or decreases in the yield of desired components are evident. Considering the varied impacts of parameters in tea production, storage and processing that affect the yield, extraction of tea biomolecules at optimized conditions is challenging. Implementation of technological advancements in green chemistry approaches can minimize deviation while retaining maximum qualitative properties in an environmentally friendly way. Existing extraction processes with optimization parameters of tea are discussed in this paper, including their prospects and limitations. This exhaustive review of various extraction parameters, the decaffeination process of tea, and large-scale cost-effective isolation of tea components with the aid of modern technology can assist people in choosing extraction conditions of tea according to necessity.
NASA Technical Reports Server (NTRS)
Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.
2015-01-01
Friction-stir-welding (FSW) is a solid-state joining process where joint properties are dependent on welding process parameters. In the current study three critical process parameters including spindle speed (ω), plunge force (Fz), and welding speed (V) are considered key factors in the determination of ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. It is observed that all three process parameters have a direct influence on the UTS of the welded joints. Utilizing the experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict the UTS of FSW joints. A total of 1200 models were developed by varying the number of membership functions (MFs), the type of MFs, and the combination of the four input variables (ω, V, Fz, EFI) utilizing a MATLAB platform. Note EFI denotes an empirical force index derived from the three process parameters. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from FSW process parameters. By comparing ANFIS and ANN predicted results, it was found that the optimized ANFIS models provide better results than ANN. This newly developed best ANFIS model could be utilized for prediction of the UTS of FSW joints.
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
Calculation tool for transported geothermal energy using two-step absorption process
Kyle Gluesenkamp
2016-02-01
This spreadsheet allows the user to calculate parameters relevant to techno-economic performance of a two-step absorption process to transport low temperature geothermal heat some distance (1-20 miles) for use in building air conditioning. The parameters included are (1) energy density of aqueous LiBr and LiCl solutions, (2) transportation cost of trucking solution, and (3) equipment cost for the required chillers and cooling towers in the two-step absorption approach. More information is available in the included public report: "A Technical and Economic Analysis of an Innovative Two-Step Absorption System for Utilizing Low-Temperature Geothermal Resources to Condition Commercial Buildings"
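A back-of-envelope version of parameter (2) above, the transportation cost of trucking solution, can be sketched as below. All numbers are illustrative assumptions, not values from the included report.

```python
def delivered_heat_cost(energy_density_kwh_per_m3, truck_volume_m3,
                        cost_per_trip_usd):
    """Trucking cost of thermal energy (USD per kWh) for one round trip,
    ignoring thermal losses and chiller/cooling-tower capital costs.
    All inputs are illustrative assumptions."""
    kwh_per_trip = energy_density_kwh_per_m3 * truck_volume_m3
    return cost_per_trip_usd / kwh_per_trip

# Assumed values: ~100 kWh/m3 usable "charge" of LiBr solution, a 30 m3
# tanker, and $150 per round trip (all hypothetical placeholders).
cost = delivered_heat_cost(100.0, 30.0, 150.0)  # USD per kWh delivered
```

With these placeholder inputs the trip delivers 3000 kWh at $0.05/kWh; the point of the sketch is only that the transport cost scales inversely with the solution's energy density.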
NASA Astrophysics Data System (ADS)
Kuz'michev, V. S.; Filinov, E. P.; Ostapyuk, Ya A.
2018-01-01
This article describes how the thrust level influences the turbojet architecture (the types of turbomachines that provide the maximum efficiency) and its working process parameters (turbine inlet temperature (TIT) and overall pressure ratio (OPR)). Functional gasdynamic and strength constraints were included; the total mass of fuel and engine required for the mission and the specific fuel consumption (SFC) were considered as optimization criteria. Radial and axial turbines and compressors were considered. The results show that as the engine thrust decreases, the optimal values of the working process parameters decrease too, and the regions of compromise shrink. Optimal engine architecture and values of working process parameters are suggested for turbojets with thrust varying from 100 N to 100 kN. The results show that for thrust below 25 kN the engine scale factor should be taken into account, as the low flow rates begin to substantially influence the efficiency of engine elements.
Tahmasebian, Shahram; Ghazisaeedi, Marjan; Langarizadeh, Mostafa; Mokhtaran, Mehrshad; Mahdavi-Mazdeh, Mitra; Javadian, Parisa
2017-01-01
Introduction: Chronic kidney disease (CKD) includes a wide range of pathophysiological processes observed along with abnormal kidney function and a progressive decrease in glomerular filtration rate (GFR). According to the definition, the decreased GFR must have been present for at least three months. CKD will eventually result in end-stage kidney disease. Different factors play a role in this process, and finding the relations between the effective parameters can help prevent or slow the progression of this disease. A large amount of data is continually collected from patients' medical records, and this huge array of data can be considered a valuable source for analyzing, exploring and discovering information. Objectives: Using data mining techniques, the present study tries to specify the effective parameters and to determine their relations with each other in Iranian patients with CKD. Material and Methods: The study population includes 31996 patients with CKD. First, all of the data were registered in the database. Then data mining tools were used to find the hidden rules and relationships between parameters in the collected data. Results: After data cleaning based on the CRISP-DM (Cross Industry Standard Process for Data Mining) methodology and running mining algorithms on the data in the database, the relationships between the effective parameters were specified. Conclusion: This study was done using the data mining method pertaining to the effective factors in patients with CKD. PMID:28497080
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of the 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects.
Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
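The inter-/intra-group variance decomposition mentioned above can be illustrated with a one-way sketch (the study itself uses a multiple-way ANOVA across four scheme groups). The total sum of squares splits exactly into a between-group part, which measures how much the scheme choice matters, and a within-group part. The grouped outputs below are invented placeholders, not GoAmazon results.

```python
from statistics import mean

def anova_partition(groups):
    """One-way ANOVA decomposition: SS_total = SS_between + SS_within.
    groups: dict mapping scheme name -> list of ensemble outputs."""
    all_vals = [v for vals in groups.values() for v in vals]
    grand = mean(all_vals)
    ss_between = sum(len(vals) * (mean(vals) - grand) ** 2
                     for vals in groups.values())
    ss_within = sum((v - mean(vals)) ** 2
                    for vals in groups.values() for v in vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    return ss_between, ss_within, ss_total

# Hypothetical latent-heat outputs (W m^-2) grouped by PBL scheme
groups = {"YSU": [105.0, 110.0, 108.0],
          "MYJ": [95.0, 98.0, 97.0],
          "ACM2": [102.0, 104.0, 101.0]}
sb, sw, st = anova_partition(groups)
frac_between = sb / st  # share of ensemble variance explained by scheme choice
```

A large between-group fraction, as in this toy data, is the signature of a physics choice (here the PBL scheme) dominating the ensemble spread.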
Emami, Fereshteh; Maeder, Marcel; Abdollahi, Hamid
2015-05-07
Thermodynamic studies of equilibrium chemical reactions linked with kinetic processes are mostly intractable by traditional approaches. In this work, the new concept of a generalized kinetic study of thermodynamic parameters is introduced for dynamic data. Examples of equilibria intertwined with kinetic chemical mechanisms include molecular charge transfer complex formation reactions, pH-dependent degradation of chemical compounds and tautomerization kinetics in micellar solutions. Model-based global analysis, with the possibility of calculating and embedding the equilibrium and kinetic parameters into the fitting algorithm, has allowed the complete analysis of the complex reaction mechanisms. After the fitting process, the optimal equilibrium and kinetic parameters, together with an estimate of their standard deviations, are obtained. This work opens up a promising new avenue for obtaining equilibrium constants through kinetic data analysis for kinetic reactions that involve equilibrium processes.
Color separation in forensic image processing using interactive differential evolution.
Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb
2015-01-01
Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.
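The evolutionary loop of the IDE is not reproduced here; the sketch below shows only the underlying color-separation kernel and the two control parameters (target color and distance threshold) that such an optimizer would tune, applied to a hypothetical four-pixel strip. All values are invented for illustration.

```python
def separate_color(pixels, target_rgb, threshold):
    """Keep pixels within Euclidean RGB distance `threshold` of
    target_rgb; replace everything else with white. target_rgb and
    threshold are the parameters an interactive optimizer would tune."""
    out = []
    for r, g, b in pixels:
        d2 = ((r - target_rgb[0]) ** 2 + (g - target_rgb[1]) ** 2
              + (b - target_rgb[2]) ** 2)
        out.append((r, g, b) if d2 <= threshold ** 2 else (255, 255, 255))
    return out

# Hypothetical 4-pixel strip: dark blue ink alternating with pale paper
strip = [(20, 20, 120), (230, 220, 180), (25, 30, 110), (225, 215, 175)]
ink_only = separate_color(strip, target_rgb=(20, 25, 115), threshold=40)
```

Too small a threshold loses ink pixels, too large a threshold keeps background; the IDE's role in the paper is to settle this trade-off with human visual feedback instead of a numeric fitness function.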
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
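OTIS and the transient-storage equations are not reproduced here. The sketch below substitutes a toy first-order decay model to show the Monte-Carlo screening idea the tool implements: sample parameter sets, keep the "behavioral" ones whose fit to the tracer data is acceptable, and read the spread of the kept sets as a measure of parameter certainty. Model, thresholds and names are assumptions.

```python
import math
import random

def model(k, times, c0=1.0):
    """Toy stand-in for a solute-transport model: first-order decay."""
    return [c0 * math.exp(-k * t) for t in times]

def monte_carlo_behavioral(obs, times, n, rmse_limit, rng):
    """Sample parameter k uniformly; keep sets whose RMSE against the
    observations is below rmse_limit (a GLUE-style screening)."""
    kept = []
    for _ in range(n):
        k = rng.uniform(0.0, 2.0)
        sim = model(k, times)
        rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs))
                         / len(obs))
        if rmse < rmse_limit:
            kept.append(k)
    return kept

rng = random.Random(1)
times = [0.0, 0.5, 1.0, 2.0, 4.0]
obs = model(0.7, times)                    # synthetic "tracer" data, k = 0.7
behavioral = monte_carlo_behavioral(obs, times, 5000, 0.02, rng)
spread = max(behavioral) - min(behavioral)  # width of acceptable parameters
```

A wide spread of behavioral parameter values is exactly the situation the authors warn about: the data cannot pin the parameter down, so it should not be interpreted mechanistically.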
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given, and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
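For a noise-free scalar linear system, the maximum-likelihood machinery of the paper degenerates to an ordinary least-squares output-error fit. The one-parameter sketch below shows that degenerate case and is purely illustrative; the full method additionally handles process noise (via a Kalman filter) and vector-valued nonlinear dynamics.

```python
def fit_decay(x):
    """Output-error least-squares estimate of `a` in x[k+1] = a * x[k]:
    a_hat = sum(x_k * x_{k+1}) / sum(x_k^2)."""
    num = sum(x[k] * x[k + 1] for k in range(len(x) - 1))
    den = sum(x[k] ** 2 for k in range(len(x) - 1))
    return num / den

# Synthetic noise-free "flight data" generated with a = 0.9
x = [1.0]
for _ in range(20):
    x.append(0.9 * x[-1])
a_hat = fit_decay(x)
```

With measurement noise added to x, the same estimator remains the maximum-likelihood answer; it is process noise that forces the filtered formulation the paper develops.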
NASA Astrophysics Data System (ADS)
Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.
2011-12-01
Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, A; Little, K; Chung, J
Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO's trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on the performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance.
Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
Karimi, Mohammad H; Asemani, Davud
2014-05-01
Ceramic and tile industries must include a grading stage to quantify the quality of products. In practice, human control systems are often used for grading purposes, so an automatic grading system is essential to enhance the quality control and marketing of the products. Since there are generally six different types of defects, originating from various stages of tile manufacturing lines, with distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey has been made of the pattern recognition and image processing algorithms that have been used to detect surface defects. Each method appears to be limited to detecting some subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction and texture/image classification. Methods such as the wavelet transform, filtering, morphology and the contourlet transform are more effective for pre-processing tasks. Others, including statistical methods, neural networks and model-based algorithms, can be applied to extract the surface defects. Statistical methods are often appropriate for identification of large defects such as spots, whereas techniques such as wavelet processing provide an acceptable response for detection of small defects such as pinholes. A thorough survey is made in this paper of the existing algorithms in each subgroup. The evaluation parameters are also discussed, including supervised and unsupervised parameters. Using various performance parameters, different defect detection algorithms are compared and evaluated. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Cinque, Kathy; Jayasuriya, Niranjali
2010-12-01
To ensure the protection of drinking water, an understanding of the catchment processes which can affect water quality is important, as it enables targeted catchment management actions to be implemented. In this study, factor analysis (FA) and the comparison of event mean concentrations (EMCs) with baseline values were used to assess the relationships between water quality parameters and to link those parameters to processes within an agricultural drinking water catchment. FA found that 55% of the variance in the water quality data could be explained by the first factor, which was dominated by parameters usually associated with erosion. Inclusion of pathogenic indicators in an additional FA showed that Enterococcus and Clostridium perfringens (C. perfringens) were also related to the erosion factor. Analysis of the EMCs found that most parameters were significantly higher during periods of rainfall runoff. This study shows that the most dominant processes in an agricultural catchment are surface runoff and erosion. It also shows that it is these processes which mobilise pathogenic indicators and are therefore most likely to influence the transport of pathogens. Catchment management efforts need to focus on reducing the effect of these processes on water quality.
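The EMC used above is the flow-weighted mean concentration over a runoff event, EMC = sum(C_i * Q_i * dt_i) / sum(Q_i * dt_i). A minimal sketch with invented storm samples:

```python
def event_mean_concentration(concs, flows, dts):
    """Flow-weighted EMC: total constituent load over the event divided
    by total runoff volume."""
    load = sum(c * q * dt for c, q, dt in zip(concs, flows, dts))
    volume = sum(q * dt for q, dt in zip(flows, dts))
    return load / volume

# Hypothetical storm samples: concentration (mg/L), flow (L/s), interval (min)
concs = [12.0, 85.0, 60.0, 20.0]
flows = [5.0, 40.0, 25.0, 8.0]
dts = [30.0, 30.0, 30.0, 30.0]
emc = event_mean_concentration(concs, flows, dts)
```

Because the high concentrations here coincide with high flows, the EMC lands well above the simple average of the samples, which is exactly why EMCs rather than raw means are compared against baseline values.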
MontePython 3: Parameter inference code for cosmology
NASA Astrophysics Data System (ADS)
Brinckmann, Thejs; Lesgourgues, Julien; Audren, Benjamin; Benabed, Karim; Prunet, Simon
2018-05-01
MontePython 3 provides numerous ways to explore parameter space using Markov Chain Monte Carlo (MCMC) sampling, including Metropolis-Hastings, Nested Sampling, Cosmo Hammer, and a Fisher sampling method. This improved version of the Monte Python (ascl:1307.002) parameter inference code for cosmology offers new ingredients that improve the performance of Metropolis-Hastings sampling, speeding up convergence and offering significant time savings in difficult runs. Additional likelihoods and plotting options are available, as are post-processing algorithms such as importance sampling and adding derived parameters.
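The Metropolis-Hastings step at the core of such samplers is compact enough to sketch. The following is a generic random-walk implementation run on a toy one-dimensional Gaussian target, not MontePython's actual code:

```python
import math
import random

def metropolis_hastings(log_post, x0, step, n_samples, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian step, accept it with
    probability min(1, posterior ratio), otherwise repeat the current point."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        # Accept/reject in log space; the tiny offset guards against log(0).
        if math.log(rng.random() + 1e-300) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy target: a standard normal log-density (up to an additive constant).
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 1.0, 20000)
mean = sum(chain) / len(chain)
```

With enough samples the chain's mean and variance approach the target's 0 and 1; real samplers add adaptive step sizes and convergence diagnostics on top of this kernel.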
40 CFR 63.11925 - What are my initial and continuous compliance requirements for process vents?
Code of Federal Regulations, 2013 CFR
2013-07-01
... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... scale. (iv) Engineering assessment including, but not limited to, the following: (A) Previous test..., and procedures used in the engineering assessment shall be documented. (3) For miscellaneous process...
40 CFR 63.11925 - What are my initial and continuous compliance requirements for process vents?
Code of Federal Regulations, 2014 CFR
2014-07-01
... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... scale. (iv) Engineering assessment including, but not limited to, the following: (A) Previous test..., and procedures used in the engineering assessment shall be documented. (3) For miscellaneous process...
40 CFR 63.11925 - What are my initial and continuous compliance requirements for process vents?
Code of Federal Regulations, 2012 CFR
2012-07-01
... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... scale. (iv) Engineering assessment including, but not limited to, the following: (A) Previous test..., and procedures used in the engineering assessment shall be documented. (3) For miscellaneous process...
NASA Technical Reports Server (NTRS)
Knightly, W. F.
1980-01-01
Computer-generated data on the performance of the cogeneration energy conversion system are presented. Performance parameters included fuel consumption and savings, capital costs, economics, and emissions of residual-fired process boilers.
Pulsed electric fields for pasteurization: defining processing conditions
USDA-ARS?s Scientific Manuscript database
Application of pulsed electric fields (PEF) technology in food pasteurization has been extensively studied. Optimal PEF treatment conditions for maximum microbial inactivation depend on multiple factors including PEF processing conditions, production parameters and product properties. In order for...
Diamond Deposition and Defect Chemistry Studied via Solid State NMR
1994-06-30
system can be found elsewhere [12]. The flame characteristics depend on a number of parameters. The flame conditions depend on (a) equivalence ratio...(b) pressure, (c) cold gas velocity, and (d) diluent. The effect of the various parameters is described briefly. This quantity describes the carbon...important parameter that must be controlled carefully. Many chemical processes in flames, including those in which collision activation or stabilization
V2S: Voice to Sign Language Translation System for Malaysian Deaf People
NASA Astrophysics Data System (ADS)
Mean Foong, Oi; Low, Tang Jung; La, Wai Wan
The process of learning and understanding sign language may be cumbersome to some; therefore, this paper proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken, regardless of who the speaker is. This project uses template-based recognition as its main approach, in which the V2S system first needs to be trained with speech patterns based on a generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs recognition by matching the parameter set of the input speech against the stored templates and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
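The matching step of such a template-based recognizer reduces to a nearest-template search over stored spectral parameter sets. A minimal sketch, with hypothetical three-coefficient templates standing in for the trained database:

```python
def match_template(features, templates):
    """Return the label of the stored template closest (by Euclidean distance)
    to the input spectral parameter set: the matching step of a
    template-based recognizer."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda label: dist(features, templates[label]))

# Hypothetical 3-coefficient spectral templates for two trained words.
templates = {"hello": [0.9, 0.2, 0.1], "thanks": [0.1, 0.8, 0.5]}
word = match_template([0.85, 0.25, 0.15], templates)  # -> "hello"
```

A production system would compare whole time-aligned sequences (e.g. with dynamic time warping) rather than single vectors, but the nearest-template principle is the same.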
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as for nonlinear scaling effects. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution, and then scales them to the spatial resolution of the model implementation using scaling functions. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both their functional forms and their geophysical predictors. TFs used to estimate the parameters of hydrologic models have typically relied on previous studies or been derived in an ad hoc, heuristic manner, potentially not exploiting the full information content of the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis, with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
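The TF-then-scale structure of MPR can be sketched in a few lines. The linear transfer function, the harmonic-mean scaling choice, and the attribute values below are illustrative assumptions, not the authors' actual functions:

```python
def transfer_function(attrs, coeffs):
    """TF: map a fine-scale geophysical attribute to a model parameter value
    (a linear form here; MPR also admits nonlinear functional forms)."""
    a, b = coeffs
    return a + b * attrs["sand_fraction"]

def upscale(values, weights):
    """Scaling function: area-weighted harmonic mean of fine-scale parameter
    values, one common choice for conductivity-like parameters."""
    return sum(weights) / sum(w / v for w, v in zip(weights, values))

# Fine-scale soil cells inside one model grid cell (hypothetical attributes).
cells = [{"sand_fraction": 0.2}, {"sand_fraction": 0.6}, {"sand_fraction": 0.4}]
fine = [transfer_function(c, (0.1, 0.5)) for c in cells]   # parameter per cell
param = upscale(fine, [0.5, 0.3, 0.2])                     # grid-cell parameter
```

Identifying the coefficients (0.1, 0.5) and the choice of predictor is exactly the TF-identification problem the abstract describes; the statistical analysis aims to constrain both.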
Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio
2017-01-01
The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have been addressed to investigate the discovery process through analytical and simulation models, according to the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801
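The paper's central point, that extra scanning gaps lengthen discovery, can be illustrated with a deliberately coarse single-channel simulation. The timing constants are illustrative, not taken from the standard or from the paper's measurements:

```python
import random

def discovery_latency(adv_interval, scan_interval, scan_window, gap=0.0, seed=1):
    """Coarse one-channel model: return the time (s) until an advertising
    event lands inside an active scan window; `gap` models extra dead time at
    the start of each window, akin to the measured non-idealities."""
    rng = random.Random(seed)
    t = rng.uniform(0, adv_interval)       # first advertising event
    while t < 100.0:                       # give up after 100 s
        phase = t % scan_interval
        if gap <= phase < scan_window:     # packet inside the effective window
            return t
        # Next event: the interval plus the standard's 0-10 ms random advDelay.
        t += adv_interval + rng.uniform(0.0, 0.010)
    return None

# Larger gaps push discovery later (hypothetical 100 ms interval, 50% duty scan).
lat = discovery_latency(0.1, 0.1, 0.05)
lat_gapped = discovery_latency(0.1, 0.1, 0.05, gap=0.03)
```

Even this toy model shows the qualitative effect: the gapped scanner misses events that an ideal scanner would catch, so the discovery latency distribution shifts upward.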
Varet, Hugo; Brillet-Guéguen, Loraine; Coppée, Jean-Yves; Dillies, Marie-Agnès
2016-01-01
Several R packages exist for the detection of differentially expressed genes from RNA-Seq data. The analysis process includes three main steps, namely normalization, dispersion estimation and testing for differential expression. Quality control steps along this process are recommended but not mandatory, and failing to check the characteristics of the dataset may lead to spurious results. In addition, normalization methods and statistical models are not exchangeable across the packages without adequate transformations, of which users are often unaware. Thus, dedicated analysis pipelines are needed to include systematic quality control steps and prevent errors from misusing the proposed methods. SARTools is an R pipeline for differential analysis of RNA-Seq count data. It can handle designs involving two or more conditions of a single biological factor with or without a blocking factor (such as a batch effect or a sample pairing). It is based on DESeq2 and edgeR and is composed of an R package and two R script templates (for DESeq2 and edgeR, respectively). By tuning a small number of parameters and executing one of the R scripts, users gain access to the full results of the analysis, including lists of differentially expressed genes and an HTML report that (i) displays diagnostic plots for quality control and model hypotheses checking and (ii) keeps track of the whole analysis process, parameter values and versions of the R packages used. SARTools provides systematic quality controls of the dataset as well as diagnostic plots that help to tune the model parameters. It gives access to the main parameters of DESeq2 and edgeR and prevents untrained users from misusing some functionalities of both packages. By keeping track of all the parameters of the analysis process, it fits the requirements of reproducible research.
Respiration and enzymatic activities as indicators of stabilization of sewage sludge composting.
Nikaeen, Mahnaz; Nafez, Amir Hossein; Bina, Bijan; Nabavi, BiBi Fatemeh; Hassanzadeh, Akbar
2015-05-01
The objective of this work was to study the evolution of physico-chemical and microbial parameters in the composting process of sewage sludge (SS) with pruning wastes (PW), in order to compare these parameters with respect to their applicability in evaluating organic matter (OM) stabilization. To evaluate the composting process and organic matter stability, different microbial activities were compared during composting of anaerobically digested SS with two volumetric ratios, 1:1 and 3:1 of PW:SS, and two aeration techniques, aerated static piles (ASP) and turned windrows (TW). Dehydrogenase activity, fluorescein diacetate hydrolysis, and specific oxygen uptake rate (SOUR) were used as microbial activity indices. These indices were compared with traditional parameters, including temperature, pH, moisture content, organic matter, and C/N ratio. The results showed that the TW method and the 3:1 (PW:SS) proportion were superior to the ASP method and the 1:1 proportion, since the former accelerated the composting process by catalyzing OM stabilization. Enzymatic activities and SOUR, which reflect microbial activity, correlated well with temperature fluctuations. Based on these results, it appears that SOUR and the enzymatic activities are useful parameters for monitoring the stabilization of SS compost. Copyright © 2015 Elsevier Ltd. All rights reserved.
Detection and Imaging of Moving Targets with LiMIT SAR Data
2017-03-03
include space-time adaptive processing (STAP) or displaced phase center antenna (DPCA) [4]–[7]. Page et al. combined constant acceleration target...motion focusing with space-time adaptive processing (STAP), and included the refocusing parameters in the STAP steering vector. Due to inhomogeneous...wavelength λ and slow time t, of a moving target after matched filter and passband equalization processing can be expressed as: P(t) = exp(−j(4π/λ)||r_p
Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.
Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S
2017-01-01
Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.
Electronic filters, signal conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1994-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
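The arithmetic principle behind such a logarithmic filter is that adding log-domain quantities multiplies the underlying linear ones, replacing costly multipliers with adders. A minimal numerical sketch of that identity, not the patented circuit:

```python
import math

def log_gain(log_sample, log_param):
    """One filter-stage operation of a logarithmic filter: adding a stored
    log-domain coefficient to a log-domain sample multiplies the underlying
    linear quantities."""
    return log_sample + log_param

# Applying a linear gain of 0.5 to a sample of amplitude 0.8 by log-domain addition:
y = math.exp(log_gain(math.log(0.8), math.log(0.5)))  # recovers 0.8 * 0.5
```

In hardware, the log and antilog conversions happen once at the filter boundary, so every per-stage coefficient application inside the cascade is just an addition.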
Rapid permeation measurement system for the production control of monolayer and multilayer films
NASA Astrophysics Data System (ADS)
Botos, J.; Müller, K.; Heidemeyer, P.; Kretschmer, K.; Bastian, M.; Hochrein, T.
2014-05-01
Plastics have long been used for packaging films. Until now, the development of new formulations for film applications, including process optimization, has been a time-consuming and cost-intensive process when the permeation of gases like oxygen (O2) or carbon dioxide (CO2) must be measured. By using helium (He), the permeation measurement can be accelerated from hours or days to a few minutes. To this end, a manometric measuring system for tests according to ISO 15105-1 is coupled with a mass spectrometer to determine the helium flow rate and to calculate the helium permeation rate. Owing to the accelerated determination, the permeation quality of monolayer and multilayer films can be measured at-line. Such a system can be used, for example, to predict the helium permeation rate of filled polymer films. Defined quality limits for the permeation rate can be specified, as can prompt corrections of process parameters if the results do not meet the specification. This method of process control was tested on a pilot line with a co-rotating twin-screw extruder for monolayer films. Selected process parameters were varied iteratively, without changing the material formulation, to obtain the best process parameter set and thus the lowest permeation rate. Beyond that, the influence of different parameters on the helium permeation rate was examined on monolayer films. The results were evaluated both conventionally and with artificial neural networks in order to determine the non-linear correlations between the process parameters.
Kanojia, Gaurav; Willems, Geert-Jan; Frijlink, Henderik W; Kersten, Gideon F A; Soema, Peter C; Amorij, Jean-Pierre
2016-09-25
Spray-dried vaccine formulations might be an alternative to traditional lyophilized vaccines. Compared to lyophilization, spray drying is a fast and cheap process extensively used for drying biologicals. The current study provides an approach that applies Design of Experiments (DoE) to the spray drying process to stabilize a whole inactivated influenza virus (WIV) vaccine. The approach included systematically screening and optimizing the spray drying process variables, determining the desired process parameters, and predicting product quality parameters. The effects of the process parameters (inlet air temperature, nozzle gas flow rate and feed flow rate) on WIV vaccine powder characteristics such as particle size, residual moisture content (RMC) and powder yield were investigated. Vaccine powders with a broad range of physical characteristics (RMC 1.2-4.9%, particle size 2.4-8.5 μm and powder yield 42-82%) were obtained. WIV showed no significant loss in antigenicity, as revealed by the hemagglutination test. Furthermore, descriptive models generated by the DoE software could be used to determine and set spray drying process parameters; this was used to generate a dried WIV powder with predefined (predicted) characteristics. Moreover, the spray-dried vaccine powders retained their antigenic stability even after storage for 3 months at 60°C. The approach used here enabled the generation of a thermostable, antigenic WIV vaccine powder with desired physical characteristics that could potentially be used for pulmonary administration. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
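A two-level full factorial design of the kind a DoE screening might start from can be sketched as follows. The factor levels and the toy response standing in for measured residual moisture are hypothetical, not the study's data:

```python
from itertools import product

def full_factorial(levels):
    """Enumerate every combination of process-parameter levels (a 2^3 design here)."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*levels.values())]

def main_effect(design, response, factor):
    """Main effect of a factor: mean response at its high level minus its low level."""
    vals = [run[factor] for run in design]
    hi = [response(run) for run in design if run[factor] == max(vals)]
    lo = [response(run) for run in design if run[factor] == min(vals)]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Hypothetical two-level screening design over the three spray-drying factors.
design = full_factorial({
    "inlet_temp_C": (100, 140),
    "nozzle_gas_L_min": (30, 50),
    "feed_mL_min": (2, 6),
})

# Toy response standing in for measured residual moisture content (%).
rmc = lambda run: 6.0 - 0.03 * run["inlet_temp_C"] + 0.2 * run["feed_mL_min"]
effect_T = main_effect(design, rmc, "inlet_temp_C")
```

Fitting such effects is what lets DoE software predict powder characteristics for untried parameter settings and then invert the model to hit predefined targets.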
On selecting satellite conjunction filter parameters
NASA Astrophysics Data System (ADS)
Alfano, Salvatore; Finkleman, David
2014-06-01
This paper extends concepts of signal detection theory to predict the performance of conjunction screening techniques and to guide the selection of keepout and screening thresholds. The most efficient way to identify satellites likely to collide is to employ filters that identify orbiting pairs which should not come close enough over a prescribed time period to be considered hazardous. Such pairings can then be eliminated from further computation to accelerate overall processing time. Approximations inherent in filtering techniques include screening with only unperturbed Newtonian two-body astrodynamics and uncertainties in the orbit elements. Therefore, every filtering process is vulnerable to including objects that are not threats and excluding some that are (Type I and Type II errors, respectively). The approach in this paper guides selection of the filter operating point best suited to a user's tolerance for false alarms and unwarned threats. We demonstrate the approach using three archetypal filters with an initial three-day span, select filter parameters based on performance, and then test those parameters using eight historical snapshots of the space catalog. This work provides a mechanism for selecting filter parameters, but the choices depend on the circumstances.
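The operating-point idea can be sketched by sweeping a scalar decision threshold and minimizing an expected cost built from the user's tolerance for the two error types. Modeling both classes as equal-spread Gaussians on a single score is an assumption of this sketch, not of the paper:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def best_threshold(mu_safe, mu_threat, sigma, miss_cost, fa_cost):
    """Sweep a keepout-style threshold on a scalar threat score and pick the
    value minimizing expected cost; Type I = safe pair flagged (false alarm),
    Type II = threatening pair passed (unwarned threat)."""
    best = None
    t = mu_safe
    while t <= mu_threat:
        p_fa = 1.0 - norm_cdf((t - mu_safe) / sigma)
        p_miss = norm_cdf((t - mu_threat) / sigma)
        cost = fa_cost * p_fa + miss_cost * p_miss
        if best is None or cost < best[1]:
            best = (t, cost)
        t += 0.01
    return best[0]

# A user who dreads unwarned threats (high miss cost) accepts a lower threshold,
# i.e. more false alarms; a false-alarm-averse user gets a higher one.
t_cautious = best_threshold(0.0, 3.0, 1.0, miss_cost=10.0, fa_cost=1.0)
t_lenient = best_threshold(0.0, 3.0, 1.0, miss_cost=1.0, fa_cost=10.0)
```

This is the classical receiver-operating-characteristic trade the paper adapts to conjunction filters: the threshold itself is not "right" or "wrong", only matched or mismatched to the user's cost structure.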
A Module Experimental Process System Development Unit (MEPSDU)
NASA Technical Reports Server (NTRS)
1981-01-01
The purpose of this program is to demonstrate the technical readiness of a cost-effective process sequence that has the potential for the production of flat-plate photovoltaic modules meeting the 1986 price goal of $0.70 or less per peak watt. Program efforts included: a preliminary design review, preliminary cell fabrication using the proposed process sequence, verification of sandblasting back cleanup, a study of resist parameters, evaluation of the pull strength of the proposed metallization, measurement of the contact resistance of electroless Ni contacts, optimization of process parameters, design of the MEPSDU module, identification and testing of insulator tapes, development of a lamination process sequence, discussions, demonstrations and visits with candidate equipment vendors, and evaluation of proposals for a tabbing and stringing machine.
NASA Astrophysics Data System (ADS)
Yoo, C. J.; Shin, B. S.; Kang, B. S.; Yun, D. H.; You, D. B.; Hong, S. M.
2017-09-01
In this paper, we propose a new porous polymer printing technology based on a CBA (chemical blowing agent) and describe the optimization of the process according to its parameters. By mixing polypropylene (PP) and CBA, a hybrid CBA filament was manufactured; the diameter of the filament ranged between 1.60 mm and 1.75 mm. A porous polymer structure was manufactured based on the traditional fused deposition modelling (FDM) method. The process parameters of the three-dimensional (3D) porous polymer printing (PPP) process included nozzle temperature, printing speed, and CBA density. Porosity increases with an increase in nozzle temperature and CBA density; conversely, it increases with a decrease in printing speed. The resulting porous structures have excellent mechanical properties. We manufactured a simple 3D shape using 3D PPP technology. In the future, we will study the mechanical properties of 3D PPP structures further and apply the technology to various safety fields.
Birnhack, Liat; Nir, Oded; Telzhenski, Marina; Lahav, Ori
2015-01-01
Deliberate struvite (MgNH4PO4) precipitation from wastewater streams has been the topic of extensive research in the last two decades and is expected to gather worldwide momentum in the near future as a P-reuse technique. A wide range of operational alternatives has been reported for struvite precipitation, including the application of various Mg(II) sources, two pH elevation techniques and several Mg:P ratios and pH values. The choice of each operational parameter within the struvite precipitation process affects process efficiency, the overall cost and also the choice of other operational parameters. Thus, a comprehensive simulation program that takes all these parameters into account is essential for process design. This paper introduces a systematic decision-supporting tool which accepts a wide range of possible operational parameters, including unconventional Mg(II) sources (i.e. seawater and seawater nanofiltration brines). The study is supplied with a free-of-charge computerized tool (http://tx.technion.ac.il/~agrengn/agr/Struvite_Program.zip) which links two computer platforms (Python and PHREEQC) for executing thermodynamic calculations according to predefined kinetic considerations. The model can be (inter alia) used for optimizing the struvite-fluidized bed reactor process operation with respect to P removal efficiency, struvite purity and economic feasibility of the chosen alternative. The paper describes the algorithm and its underlying assumptions, and shows results (i.e. effluent water quality, cost breakdown and P removal efficiency) of several case studies consisting of typical wastewaters treated at various operational conditions.
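The core thermodynamic check such a tool performs can be sketched as a saturation-index calculation. The pKsp value below is a commonly cited one for struvite, and neglecting activity corrections and ion speciation is a deliberate simplification here (the authors' tool delegates the full thermodynamics to PHREEQC):

```python
import math

def struvite_si(mg, nh4, po4, pksp=13.26):
    """Saturation index of struvite (MgNH4PO4.6H2O) from free-ion
    concentrations in mol/L, treated as activities for this sketch.
    SI = log10(IAP / Ksp); SI > 0 means supersaturated."""
    iap = mg * nh4 * po4
    return math.log10(iap) + pksp

# Hypothetical effluent composition; a positive SI indicates that raising
# Mg:P ratio or pH has pushed the water into the precipitation region.
si = struvite_si(mg=1e-3, nh4=2e-3, po4=1e-4)
```

Screening candidate Mg(II) doses and pH setpoints against such an index, before handing the survivors to a rigorous speciation code, mirrors the decision-support workflow the paper describes.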
Design and performance study of an orthopaedic surgery robotized module for automatic bone drilling.
Boiadjiev, George; Kastelov, Rumen; Boiadjiev, Tony; Kotev, Vladimir; Delchev, Kamen; Zagurski, Kazimir; Vitkov, Vladimir
2013-12-01
Many orthopaedic operations involve drilling and tapping before the insertion of screws into a bone. This drilling is usually performed manually, thus introducing many problems. These include attaining a specific drilling accuracy, preventing blood vessels from breaking, and minimizing drill oscillations that would widen the hole. Bone overheating is the most important problem. To avoid such problems and reduce the subjective factor, automated drilling is recommended. Because numerous parameters influence the drilling process, this study examined some experimental methods. These concerned the experimental identification of technical drilling parameters, including the bone resistance force and temperature in the drilling process. During the drilling process, the following parameters were monitored: time, linear velocity, angular velocity, resistance force, penetration depth, and temperature. Specific drilling effects were revealed during the experiments. The accuracy was improved at the starting point of the drilling, and the error for the entire process was less than 0.2 mm. The temperature deviations were kept within tolerable limits. The results of various experiments with different drilling velocities, drill bit diameters, and penetration depths are presented in tables, as well as the curves of the resistance force and temperature with respect to time. Real-time digital indications of the progress of the drilling process are shown. Automatic bone drilling could entirely solve the problems that usually arise during manual drilling. An experimental setup was designed to identify bone drilling parameters such as the resistance force arising from variable bone density, appropriate mechanical drilling torque, linear speed of the drill, and electromechanical characteristics of the motors, drives, and corresponding controllers. Automatic drilling guarantees greater safety for the patient. 
Moreover, the robot presented is user-friendly because it is simple to set robot tasks, and process data are collected in real time. Copyright © 2013 John Wiley & Sons, Ltd.
Advanced optic fabrication using ultrafast laser radiation
NASA Astrophysics Data System (ADS)
Taylor, Lauren L.; Qiao, Jun; Qiao, Jie
2016-03-01
Advanced fabrication and finishing techniques are desired for freeform optics and integrated photonics. Methods including grinding, polishing and magnetorheological finishing used for final figuring and polishing of such optics are time consuming, expensive, and may be unsuitable for complex surface features while common photonics fabrication techniques often limit devices to planar geometries. Laser processing has been investigated as an alternative method for optic forming, surface polishing, structure writing, and welding, as direct tuning of laser parameters and flexible beam delivery are advantageous for complex freeform or photonics elements and material-specific processing. Continuous wave and pulsed laser radiation down to the nanosecond regime have been implemented to achieve nanoscale surface finishes through localized material melting, but the temporal extent of the laser-material interaction often results in the formation of a sub-surface heat affected zone. The temporal brevity of ultrafast laser radiation can allow for the direct vaporization of rough surface asperities with minimal melting, offering the potential for smooth, final surface quality with negligible heat affected material. High intensities achieved in focused ultrafast laser radiation can easily induce phase changes in the bulk of materials for processing applications. We have experimentally tested the effectiveness of ultrafast laser radiation as an alternative laser source for surface processing of monocrystalline silicon. Simulation of material heating associated with ultrafast laser-material interaction has been performed and used to investigate optimized processing parameters including repetition rate. The parameter optimization process and results of experimental processing will be presented.
A combination of UV curing technology with the ATL process
NASA Astrophysics Data System (ADS)
Balbzioui, I.; Hasiaoui, B.; Barbier, G.; L'hostis, G.; Laurent, F.; Ibrahim, A.; Durand, B.
2017-10-01
In order to reduce the time and cost of composite manufacturing, UV curing technology combined with an automated tape placement (ATL) process, based on a reverse approach working with a fixed head, was studied in this article. First, a brief description of the developed placement head is presented. Mechanical properties are then evaluated by varying process parameters, including compaction force and tape placement speed. Finally, a parametric study is carried out to identify suitable materials and process parameters for manufacturing a photocomposite material with high mechanical performance. The obtained results show that UV curing is a very good alternative to thermal polymerization because of its fast cure speed and lower dependency on temperature.
Electrochemical reduction of CerMet fuels for transmutation using surrogate CeO2-Mo pellets
NASA Astrophysics Data System (ADS)
Claux, B.; Souček, P.; Malmbeck, R.; Rodrigues, A.; Glatz, J.-P.
2017-08-01
One of the concepts chosen for the transmutation of minor actinides in Accelerator Driven Systems or fast reactors proposes the use of fuels and targets containing minor actinide oxides embedded in an inert matrix composed either of molybdenum metal (CerMet fuel) or of ceramic magnesium oxide (CerCer fuel). Since sufficient transmutation cannot be achieved in a single step, multi-recycling of the fuel is required, including recovery of the non-transmuted minor actinides. In the present work, a pyrochemical process for the treatment of Mo-metal inert-matrix-based CerMet fuels is studied, particularly electroreduction in molten chloride salt as a head-end step required prior to the main separation process. At the initial stage, different inactive pellets simulating the fuel, containing CeO2 as a minor actinide surrogate, were examined. The main parameters studied for process efficiency were the porosity and composition of the pellets, and process parameters such as the current density and the passed charge. The results indicated the feasibility of the process, gave insight into its limiting parameters, and defined the parameters for the future experiment on minor actinide-containing material.
Alikhani, Jamal; Takacs, Imre; Al-Omari, Ahmed; Murthy, Sudhir; Massoudieh, Arash
2017-03-01
A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and parameter estimations, respectively. The main goal was to assess the ability of the data to obtain reliable parameter estimates for a modified version of the ASM. The modified ASM model includes methylotrophic processes which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.
A parameter estimation framework was used to evaluate the ability of observed data from a full-scale nitrification-denitrification bioreactor to reduce the uncertainty associated with the bio-kinetic and stoichiometric parameters of an activated sludge model (ASM). Samples collected over a period of 150 days from the effluent as well as from the reactor tanks were used. A hybrid genetic algorithm and Bayesian inference were used to perform deterministic and probabilistic parameter estimation, respectively. The main goal was to assess the ability of the data to yield reliable parameter estimates for a modified version of the ASM. The modified ASM includes methylotrophic processes, which play the main role in methanol-fed denitrification. Sensitivity analysis was also used to explain the ability of the data to provide information about each of the parameters. The results showed that the uncertainty in the estimates of the most sensitive parameters (including growth rate, decay rate, and yield coefficients) decreased with respect to the prior information.
Optics Program Simplifies Analysis and Design
NASA Technical Reports Server (NTRS)
2007-01-01
Engineers at Goddard Space Flight Center partnered with software experts at Mide Technology Corporation, of Medford, Massachusetts, through a Small Business Innovation Research (SBIR) contract to design the Disturbance-Optics-Controls-Structures (DOCS) Toolbox, a software suite for performing integrated modeling for multidisciplinary analysis and design. The DOCS Toolbox integrates various discipline models into a coupled process math model that can then predict system performance as a function of subsystem design parameters. The system can be optimized for performance; design parameters can be traded; parameter uncertainties can be propagated through the math model to develop error bounds on system predictions; and the model can be updated based on component, subsystem, or system level data. The Toolbox also allows the definition of process parameters as explicit functions of the coupled model and includes a number of functions that analyze the coupled system model and provide for redesign. The product is being sold commercially by Nightsky Systems Inc., of Raleigh, North Carolina, a spinoff company that was formed by Mide specifically to market the DOCS Toolbox. Commercial applications include use by any contractors developing large space-based optical systems, including Lockheed Martin Corporation, The Boeing Company, and Northrop Grumman Corporation, as well as companies providing technical audit services, like General Dynamics Corporation.
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.
2015-12-01
Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the four PFT-dependent parameters above. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity.
This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
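The quasi-Monte Carlo sampling step described above can be sketched with SciPy's Sobol sequence generator; the four parameter ranges below are hypothetical stand-ins, not the PFT-specific priors used in the study:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical uniform prior ranges for four land parameters
lower = np.array([4.0, 0.01, 20.0, 0.05])
upper = np.array([9.0, 0.04, 40.0, 0.25])

sampler = qmc.Sobol(d=4, scramble=True, seed=0)
unit = sampler.random_base2(m=10)        # 2**10 = 1024 points in [0, 1)^4
samples = qmc.scale(unit, lower, upper)  # map to the prior ranges

assert samples.shape == (1024, 4)
```

Powers of two (`random_base2`) preserve the balance properties of the Sobol sequence, which is why ensemble sizes like 1024 are natural for this kind of sensitivity analysis.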
A method to investigate the diffusion properties of nuclear calcium.
Queisser, Gillian; Wittum, Gabriel
2011-10-01
Modeling biophysical processes generally requires knowledge of the underlying biological parameters. The quality of simulation results is strongly influenced by the accuracy of these parameters, hence identifying the parameter values a model requires is a major part of simulating biophysical processes. In many cases, secondary data can be gathered from experimental setups, which are exploitable by mathematical inverse modeling techniques. Here we describe a method for parameter identification of the diffusion properties of calcium in the nuclei of rat hippocampal neurons. The method is based on a Gauss-Newton method for solving a least-squares minimization problem and was formulated in such a way that it is ideally implementable in the simulation platform uG. Making use of independently published space- and time-dependent calcium imaging data, generated from laser-assisted calcium uncaging experiments, we could identify the diffusion properties of nuclear calcium and were able to validate a previously published model that describes nuclear calcium dynamics as a diffusion process.
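A minimal sketch of the Gauss-Newton idea behind such parameter identification, here fitting a single diffusion coefficient to a toy exponential relaxation rather than the uG nuclear-calcium model (all quantities hypothetical):

```python
import numpy as np

def model(D, t):
    # Toy observable: amplitude of the slowest diffusion mode, ~ exp(-D*k^2*t)
    k = 0.5
    return np.exp(-D * k**2 * t)

t = np.linspace(0.0, 10.0, 50)
D_true = 1.3
y_obs = model(D_true, t)   # noise-free synthetic "imaging" data

D = 0.2  # initial guess
for _ in range(20):
    r = y_obs - model(D, t)                      # residual
    eps = 1e-6                                   # finite-difference Jacobian dm/dD
    J = (model(D + eps, t) - model(D, t)) / eps
    # Gauss-Newton step for one parameter: dD = (J^T r) / (J^T J)
    D += (J @ r) / (J @ J)

assert abs(D - D_true) < 1e-6
```

With noise-free data the residual vanishes at the optimum, so even the approximate finite-difference Jacobian drives the iteration to the true coefficient.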
Talaeipour, M; Nouri, J; Hassani, A H; Mahvi, A H
2017-01-01
As an appropriate tool, membrane processes are used for the desalination of brackish water in the production of drinking water. The present study investigates desalination of brackish water from Qom Province in Iran, and was carried out at the central laboratory of the Water and Wastewater Company of the studied area. Membrane processes including nanofiltration (NF) and reverse osmosis (RO) were applied separately and as a hybrid (NF/RO) process. Physical and chemical water parameters, including salinity, total dissolved solids (TDS), electric conductivity (EC), Na+ and Cl-, were measured, and the rejection percentage of each parameter was compared across the NF, RO and hybrid processes. Treatment was performed with a Luna household desalination device (100 GPD), whose membrane was replaced by NF90 and TW30 membranes for the nanofiltration and reverse osmosis processes, respectively; all collected brackish water samples were fed through the NF90-2540, TW30-1821-100 (RO) and hybrid (NF/RO) configurations installed on this household-scale pilot. To study the effect of pressure on permeate quality, the reverse osmosis system analysis (ROSA) simulation software was applied to simulate the performance of the nanofiltration, reverse osmosis and hybrid processes. Results showed salinity rejections of 50.21%, 72.82% and 78.56% for the NF, RO and hybrid processes, respectively. The rejection percentages of the physico-chemical parameters measured in the pilot plant for the three desalination methods were: NF: salinity 50.21, TDS 43.41, EC 43.62, Cl 21.1, Na 36.15; RO: salinity 72.02, TDS 60.26, EC 60.33, Cl 43.08, Na 54.41; hybrid: salinity 78.65, TDS 76.52, EC 76.42, Cl 63.95, Na 70.91. Comparing the three methods, rejection of both ionic and non-ionic parameters was better in reverse osmosis than in nanofiltration, and better still in the hybrid process. The results reported in this paper indicate that nanofiltration and reverse osmosis membranes can complement each other in a hybrid (NF/RO) configuration to remove salinity, TDS, EC, Cl and Na.
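The rejection percentages quoted above follow from the standard observed-rejection definition; a minimal sketch with hypothetical feed and permeate concentrations (not the study's raw data):

```python
def rejection_percent(c_feed, c_permeate):
    """Observed rejection: fraction of a constituent held back by the membrane."""
    return (1.0 - c_permeate / c_feed) * 100.0

# Hypothetical feed/permeate TDS values (mg/L)
print(rejection_percent(2500.0, 994.0))   # NF-like rejection, roughly 60 %
```

Salinity, TDS, EC and the individual ions are each compared this way, which is why a single sample yields the whole table of rejection values per process.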
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. The processes we consider are the mechanical compaction of the host rock and geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field-scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantifying a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build-up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbon withdrawal, and (e) formation of ore deposits. The main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system's dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena.
We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters through literature field cases. The quality of parameter estimates is then analyzed as a function of the number, type and location of data.
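One attraction of the PCE surrogate noted above is that Sobol indices fall out of the expansion coefficients directly. A sketch with hypothetical coefficients for an orthonormal two-parameter basis (not the basin model's actual expansion):

```python
# Hypothetical PCE coefficients, keyed by multi-index (degree per parameter).
# For an orthonormal basis, Var[Y] is the sum of squared coefficients over all
# non-constant terms, and Sobol indices are ratios of partial sums.
pce = {
    (0, 0): 2.0,   # mean term
    (1, 0): 0.8,   # linear in parameter 1
    (2, 0): 0.1,   # quadratic in parameter 1
    (0, 1): 0.5,   # linear in parameter 2
    (1, 1): 0.3,   # interaction term
}

total_var = sum(c**2 for idx, c in pce.items() if any(idx))

def first_order_sobol(param):
    # Sum squared coefficients of terms involving only `param`
    num = sum(c**2 for idx, c in pce.items()
              if idx[param] > 0 and all(d == 0 for j, d in enumerate(idx) if j != param))
    return num / total_var

print(round(first_order_sobol(0), 3))   # share of variance from parameter 1 alone
```

This is why the paper gets the "full set" of Sobol indices essentially for free once the surrogate is built: no extra model runs are needed beyond those used to fit the expansion.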
Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong
2017-01-01
Automated fiber placement (AFP) process includes a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method aiming at optimizing processing parameters in an AFP process, where multi-scale effect, energy consumption, energy utilization efficiency and mechanical properties of the micro-system can be taken into account synthetically. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties at the macro-meso scale are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established to input the macroscopic results into the microscopic system as its boundary condition, which can communicate between different scales. Furthermore, microscopic characteristics, mainly micro-scale adsorption energy, diffusion coefficient and entropy-enthalpy values, are calculated under different processing parameters based on the molecular dynamics method. A low-entropy region is then obtained in terms of the interrelation among entropy-enthalpy values, microscopic mechanical properties (interface adsorbability and matrix fluidity) and processing parameters to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality collaboratively. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively, and further improve the mechanical properties of laminates. PMID:28869520
NASA Technical Reports Server (NTRS)
Ryan, Robert
1993-01-01
The concept of robustness includes design simplicity, component and path redundancy, desensitization to parameter and environmental variations, control of parameter variations, and punctual operations. These characteristics must be traded, along with functional concepts, materials, and fabrication approach, against the criteria of performance, cost, and reliability. The paper describes the robustness design process, which includes the following seven major coherent steps: translation of vision into requirements, definition of the desired robustness characteristics, formulation of criteria for the required robustness, concept selection, detail design, manufacturing and verification, and operations.
Complete Michel parameter analysis of the inclusive semileptonic b → c transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dassinger, Benjamin; Feger, Robert; Mannel, Thomas
2009-04-01
We perform a complete 'Michel parameter' analysis of all possible helicity structures which can appear in the process B → X_c ℓ ν_ℓ. We take into account the full set of operators parametrizing the effective Hamiltonian and include the complete one-loop QCD corrections as well as the nonperturbative contributions. The moments of the leptonic energy as well as the combined moments of the hadronic energy and hadronic invariant mass are calculated including the nonstandard contributions.
Simulation of aerobic and anaerobic biodegradation processes at a crude oil spill site
Essaid, Hedeff I.; Bekins, Barbara A.; Godsy, E. Michael; Warren, Ean; Baedecker, Mary Jo; Cozzarelli, Isabelle M.
1995-01-01
A two-dimensional, multispecies reactive solute transport model with sequential aerobic and anaerobic degradation processes was developed and tested. The model was used to study the field-scale solute transport and degradation processes at the Bemidji, Minnesota, crude oil spill site. The simulations included the biodegradation of volatile and nonvolatile fractions of dissolved organic carbon by aerobic processes, manganese and iron reduction, and methanogenesis. Model parameter estimates were constrained by published Monod kinetic parameters, theoretical yield estimates, and field biomass measurements. Despite the considerable uncertainty in the model parameter estimates, results of simulations reproduced the general features of the observed groundwater plume and the measured bacterial concentrations. In the simulation, 46% of the total dissolved organic carbon (TDOC) introduced into the aquifer was degraded. Aerobic degradation accounted for 40% of the TDOC degraded. Anaerobic processes accounted for the remaining 60% of degradation of TDOC: 5% by Mn reduction, 19% by Fe reduction, and 36% by methanogenesis. Thus anaerobic processes account for more than half of the removal of DOC at this site.
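The Monod-kinetics building block underlying such biodegradation simulations can be sketched as a batch substrate-biomass system; the rate constants below are hypothetical round numbers, not the Bemidji calibration values, and the full model couples this kinetics to two-dimensional transport:

```python
from scipy.integrate import solve_ivp

# Hypothetical Monod parameters: max growth rate (1/d), half-saturation (mg/L),
# yield (mg biomass / mg substrate), decay rate (1/d)
mu_max, K_s, Y, k_d = 0.5, 2.0, 0.4, 0.02

def monod(t, y):
    S, X = y
    growth = mu_max * S / (K_s + S) * X   # Monod growth kinetics
    dS = -growth / Y                      # substrate consumed per biomass grown
    dX = growth - k_d * X                 # net growth minus endogenous decay
    return [dS, dX]

# 50-day batch starting at 30 mg/L substrate, 0.5 mg/L biomass
sol = solve_ivp(monod, (0.0, 50.0), [30.0, 0.5], rtol=1e-8)
S_end, X_end = sol.y[:, -1]
assert S_end < 0.5   # substrate nearly exhausted by day 50
```

Constraining such parameters with published Monod constants, theoretical yields, and field biomass data, as the study does, is what keeps the simulated plume and bacterial concentrations physically plausible.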
Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh
2010-02-01
Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires the knowledge of various process reactions and corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model and often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species. Increasing the number of species and user defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three-phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in application of this model to full-scale landfill operation.
Managing Credit Card Expenses: Nova Southeastern University Shares Cost-Saving Techniques.
ERIC Educational Resources Information Center
Peskin, Carol Ann
1994-01-01
Nova Southeastern University, Florida, has implemented a variety of techniques of cost containment for campus credit card transactions. These include restricted card acceptance parameters, careful merchant rate negotiation, increased automation of transaction processing, and sophisticated processing techniques. The university has demonstrated…
NASA Technical Reports Server (NTRS)
Keen, Jill M.; Evans, Kurt B.; Schiffman, Robert L.; Deweese, C. Darrell; Prince, Michael E.
1995-01-01
Experimental design testing was conducted to identify critical parameters of an aqueous spray process intended for cleaning solid rocket motor metal components (steel and aluminum). A two-level, six-parameter, fractional factorial matrix was constructed and conducted for two cleaners, Brulin 815 GD and Diversey Jettacin. The matrix parameters included cleaner temperature and concentration, wash density, wash pressure, rinse pressure, and dishwasher type. Other spray parameters: nozzle stand-off, rinse water temperature, wash and rinse time, dry conditions, and type of rinse water (deionized) were held constant. Matrix response testing utilized discriminating bond specimens (fracture energy and tensile adhesion strength) which represent critical production bond lines. Overall, Jettacin spray cleaning was insensitive to the range of conditions tested for all parameters and exhibited bond strengths significantly above the TCA test baseline for all bond lines tested. Brulin 815 was sensitive to cleaning temperature, but produced bond strengths above the TCA test baseline even at the lower temperatures. Ultimately, the experimental design database was utilized to recommend process parameter settings for future aqueous spray cleaning characterization work.
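A two-level fractional factorial like the one described can be generated in coded units; the generators E = ABC and F = BCD below are a common 2^(6-2) choice assumed for illustration, not taken from the cleaning study:

```python
from itertools import product

# Hypothetical 2^(6-2) design in coded units (-1/+1); generators E = ABC and
# F = BCD alias the last two factors onto interactions of the first four.
factors = ["temperature", "concentration", "wash_density",
           "wash_pressure", "rinse_pressure", "washer_type"]
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e, f = a * b * c, b * c * d   # aliased factors from the generators
    runs.append(dict(zip(factors, (a, b, c, d, e, f))))

print(len(runs))   # 16 runs instead of the 64 of a full 2**6 factorial
```

The quarter-fraction trades some interaction resolution for a fourfold reduction in bond-specimen tests, which is the usual motivation for fractional designs in process screening.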
Ferreira, Joaquim J; Santos, Ana T; Domingos, Josefa; Matthews, Helen; Isaacs, Tom; Duffen, Joy; Al-Jawad, Ahmed; Larsen, Frank; Artur Serrano, J; Weber, Peter; Thoms, Andrea; Sollinger, Stefan; Graessner, Holm; Maetzler, Walter
2015-01-01
Parkinson's disease (PD) is a neurodegenerative disorder with fluctuating symptoms. To aid the development of a system to evaluate people with PD (PwP) at home (the SENSE-PARK system), there was a need to define parameters and tools to be applied in the assessment of 6 domains: gait, bradykinesia/hypokinesia, tremor, sleep, balance and cognition. The aim was to identify relevant parameters and assessment tools for the 6 domains from the perspective of PwP, caregivers and movement disorders specialists. A 2-round Delphi study was conducted to select a core set of parameters and assessment tools; this process included PwP, caregivers and movement disorders specialists. Two hundred and thirty-three PwP, caregivers and physicians completed the first-round questionnaire, and 50 the second. The results allowed the identification of parameters and assessment tools to be added to the SENSE-PARK system. The most consensual parameters were: Falls and Near Falls; Capability to Perform Activities of Daily Living; Interference with Activities of Daily Living; Capability to Process Tasks; and Capability to Recall and Retrieve Information. The most cited assessment strategies included Walkers; the Evaluation of Performance Doing Fine Motor Movements; Capability to Eat; Assessment of Sleep Quality; Identification of Circumstances and Triggers for Loss of Balance; and Memory Assessment. An agreed set of measuring parameters, tests, tools and devices was achieved to form part of a system to evaluate PwP at home. A pattern of different perspectives was identified for each stakeholder.
Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.
Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash
2014-03-01
One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters.
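The Bayesian updating described above is typically carried out with Markov Chain Monte Carlo; a minimal random-walk Metropolis sketch for a single hypothetical first-order rate constant (a toy stand-in, not ASM1 itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: first-order decay observed with Gaussian noise (hypothetical)
k_true, sigma = 0.3, 0.05
t = np.linspace(0, 10, 25)
y = np.exp(-k_true * t) + rng.normal(0, sigma, t.size)

def log_post(k):
    if k <= 0 or k > 5:                      # uniform prior on (0, 5]
        return -np.inf
    r = y - np.exp(-k * t)
    return -0.5 * np.sum(r**2) / sigma**2    # Gaussian log-likelihood

# Random-walk Metropolis sampler
chain, k = [], 1.0
lp = log_post(k)
for _ in range(20000):
    k_new = k + rng.normal(0, 0.05)
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:   # accept/reject step
        k, lp = k_new, lp_new
    chain.append(k)

posterior = np.array(chain[5000:])           # discard burn-in
print(posterior.mean(), posterior.std())
```

The retained chain approximates the posterior JPDF; with many parameters, the same machinery also yields the posterior correlations the paper uses to diagnose which parameters the data cannot separate.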
Overview of Icing Physics Relevant to Scaling
NASA Technical Reports Server (NTRS)
Anderson, David N.; Tsao, Jen-Ching
2005-01-01
An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes and the associated similarity parameters that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter and the freezing fraction. Other parameters dealing with the leading edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated a set of equations is produced which can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be made by viewing whether ice shapes change, and the extent of the change, when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.
Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error comprises two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
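The solution-space/null-space split underlying these error terms can be sketched with an SVD of a sensitivity matrix; the dimensions and matrices below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensitivity (Jacobian) matrix: 8 observations, 12 parameters,
# so at most 8 parameter combinations are informed by the data.
J = rng.normal(size=(8, 12))
U, s, Vt = np.linalg.svd(J, full_matrices=True)

V_sol = Vt[:8].T    # solution space: combinations constrained by history matching
V_null = Vt[8:].T   # null space: combinations the data cannot inform

# A prediction sensitive to null-space components carries error that history
# matching cannot reduce (the second error term described in the abstract).
dpred = rng.normal(size=12)               # hypothetical prediction sensitivity
null_part = V_null @ (V_null.T @ dpred)   # projection onto the null space

# Null-space directions produce no change in the simulated observations:
print(np.linalg.norm(J @ null_part))      # essentially zero
```

Because null-space movements leave the fit to data unchanged, history matching can push those components away from their prior values without penalty, which is precisely how the bias described above enters a prediction.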
A "total parameter estimation" method in the verification of distributed hydrological models
NASA Astrophysics Data System (ADS)
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Conventionally, hydrological models are used for runoff or flood forecasting, hence model parameters are commonly estimated from discharge measurements at the catchment outlets. With the advancement of hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and model parameter determination still rely on runoff and, occasionally, groundwater level measurements. In many countries, including China, it is essential to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. Because a distributed hydrological model can simulate the physical processes within a catchment, it gives a more realistic representation of the actual water cycle. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is concentrated in the rainy season from June to August; during other months, many of the perennial rivers within the basin dry up. Thus a single runoff simulation does not fully exploit a distributed hydrological model in arid and semi-arid regions.
This paper proposes a "total parameter estimation" method to verify distributed hydrological models across various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River Basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and provides a new way of thinking in the hydrological sciences.
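One way to realize such a "total parameter estimation" is to fold residuals from several water-cycle processes into a single weighted calibration objective; a sketch with hypothetical series and weights (not the Haihe data):

```python
import numpy as np

def total_objective(sim, obs, weights):
    """Combine residuals from several water-cycle processes into one
    calibration objective, each normalized by its observed mean so that
    fluxes of different magnitudes contribute comparably."""
    total = 0.0
    for name, w in weights.items():
        s, o = np.asarray(sim[name]), np.asarray(obs[name])
        total += w * np.mean(((s - o) / o.mean()) ** 2)
    return total

# Hypothetical observed/simulated series for three processes
obs = {"runoff": np.array([1.0, 3.0, 2.0]),
       "et":     np.array([2.5, 2.7, 2.6]),
       "gw":     np.array([10.0, 9.5, 9.8])}
sim = {"runoff": np.array([1.2, 2.8, 2.1]),
       "et":     np.array([2.4, 2.9, 2.5]),
       "gw":     np.array([10.3, 9.2, 9.9])}

print(total_objective(sim, obs, {"runoff": 1.0, "et": 1.0, "gw": 1.0}))
```

Weighting evapotranspiration and groundwater alongside runoff is what prevents the calibration from being dominated by the 17% of rainfall that actually becomes streamflow in an arid basin.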
Etching Behavior of Aluminum Alloy Extrusions
NASA Astrophysics Data System (ADS)
Zhu, Hanliang
2014-11-01
The etching treatment is an important process step in influencing the surface quality of anodized aluminum alloy extrusions. The aim of etching is to produce a homogeneously matte surface. However, in the etching process, further surface imperfections can be generated on the extrusion surface due to uneven materials loss from different microstructural components. These surface imperfections formed prior to anodizing can significantly influence the surface quality of the final anodized extrusion products. In this article, various factors that influence the materials loss during alkaline etching of aluminum alloy extrusions are investigated. The influencing variables considered include etching process parameters, Fe-rich particles, Mg-Si precipitates, and extrusion profiles. This study provides a basis for improving the surface quality in industrial extrusion products by optimizing various process parameters.
NASA Astrophysics Data System (ADS)
Torabi, Amir; Kolahan, Farhad
2018-07-01
Pulsed laser welding is a powerful technique especially suitable for joining thin sheet metals. In this study, based on experimental data, pulsed laser welding of thin AISI 316L austenitic stainless steel sheet has been modeled and optimized. The experimental data required for modeling were gathered per a Central Composite Design matrix in Response Surface Methodology (RSM) with full replication of 31 runs. Ultimate Tensile Strength (UTS) is considered the main quality measure in laser welding. The important process parameters, including peak power, pulse duration, pulse frequency and welding speed, are selected as input process parameters. The relation between input parameters and the output response is established via full quadratic response surface regression at a 95% confidence level. The adequacy of the regression model was verified using Analysis of Variance. The main effect of each factor and its interactions with the other factors were analyzed graphically in contour and surface plots. Next, the combinations of parameter levels that maximize joint UTS were identified using RSM. Moreover, the mathematical model was embedded in a Simulated Annealing (SA) optimization algorithm to determine the optimal values of the process parameters. The results obtained by the SA and RSM optimization techniques are in good agreement. The optimal settings of 1800 W peak power, 4.5 ms pulse duration, 4.2 Hz frequency and 0.5 mm/s welding speed would result in a welded joint with 96% of the base metal UTS. Computational results clearly demonstrate that the proposed modeling and optimization procedures perform quite well for the pulsed laser welding process.
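The RSM-plus-SA approach described above can be sketched as follows. The quadratic response surface, its coefficients, and the normalized parameter ranges below are hypothetical stand-ins for the paper's fitted regression model, not the published one:

```python
import math
import random

def uts(x):
    # Hypothetical quadratic response surface for UTS as a function of
    # normalized peak power (x[0]) and welding speed (x[1]); the true
    # fitted coefficients from the study are not reproduced here.
    return 400 - 50 * (x[0] - 0.6) ** 2 - 80 * (x[1] - 0.3) ** 2

def anneal(f, x0, bounds, t0=1.0, cooling=0.995, steps=5000, seed=1):
    """Maximize f by simulated annealing within box bounds."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(steps):
        # Gaussian perturbation, clamped to the feasible box
        cand = [min(max(xi + rng.gauss(0, 0.05), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        # Accept improvements always; worse moves with Boltzmann probability
        if fc > fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

best, val = anneal(uts, [0.1, 0.9], [(0.0, 1.0), (0.0, 1.0)])
print(best, val)
```

On this smooth unimodal surface the SA search settles near the analytic optimum at (0.6, 0.3), mirroring how the study cross-checks SA against the RSM optimum.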
Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance
NASA Astrophysics Data System (ADS)
Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola
2013-04-01
Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to comparing dominant processes between reality and model, or to understanding when thresholds and non-linearities drive model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods, by looking at the temporal dynamics of parameter sensitivities and model performance. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variation. High sensitivities were detected most of the time for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO). The periods of high parameter sensitivity can be related to different phases of the hydrograph, with the groundwater parameters dominating in recession phases and ESCO in baseflow and resaturation periods. Surface runoff parameters show high sensitivities during precipitation events combined with high soil water content. The dominant parameters indicate the controlling processes in the catchment during a given period. The second step was a temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures.
These finger prints were clustered into four recurring patterns of typical model performance, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods of poor model performance with the dominant model components during those phases, the groundwater module was identified as the model part with the highest potential for improvement. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts for improving the SWAT model structure for application in German lowland catchments are derived.
Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique
2016-01-01
High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems depends on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology for setting up liquid class pipetting parameters for each solution was to split the process into three steps: (1) screening of predefined liquid classes, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The running of the appropriate pipetting scripts, data acquisition, and reporting, up to the creation of a new liquid class in EVOware, was fully automated. The calibration and confirmation of the robotic system were simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
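The accuracy-adjustment step (2) amounts to fitting a calibration curve relating commanded to gravimetrically measured volume and inverting it. A minimal sketch, with entirely hypothetical data and assuming a water-like density for the test liquid:

```python
# Hypothetical gravimetric data: commanded volume (µL) vs mean dispensed mass (mg);
# the instrument readings below are illustrative only, not from the study.
commanded = [10.0, 50.0, 100.0, 500.0, 900.0]
mass_mg = [9.6, 48.9, 98.0, 491.0, 884.0]
density = 0.998  # mg/µL, approximately water at room temperature

delivered = [m / density for m in mass_mg]  # mass -> volume

# Least-squares line: delivered = a * commanded + b  (the calibration curve)
n = len(commanded)
sx, sy = sum(commanded), sum(delivered)
sxx = sum(x * x for x in commanded)
sxy = sum(x * y for x, y in zip(commanded, delivered))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def corrected_command(target_uL):
    """Volume to request so the adjusted system delivers the target volume."""
    return (target_uL - b) / a

print(round(a, 4), round(b, 2), round(corrected_command(100.0), 1))
```

Inverting the fitted line gives the adjusted command volume; a confirmation run (step 3) would then re-weigh dispenses at the corrected commands.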
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Jerel G.; Kruzic, Michael; Castillo, Carlos
2013-07-01
Chalk River Laboratory (CRL), located in Ontario, Canada, has a large number of remediation projects currently in the Nuclear Legacy Liabilities Program (NLLP), including hundreds of facility decommissioning projects and over one hundred environmental remediation projects, all to be executed over the next 70 years. Atomic Energy of Canada Limited (AECL) engaged WorleyParsons to prioritize the NLLP projects at CRL through a risk-based prioritization and ranking process, using the WorleyParsons Sequencing Unit Prioritization and Estimating Risk Model (SUPERmodel). The prioritization project made use of the SUPERmodel, which has previously been used for large-scale site prioritization and sequencing of facilities at nuclear laboratories in the United States. The process included development and vetting of risk parameter matrices as well as confirmation/validation of project risks. Detailed sensitivity studies were also conducted to understand the impact that risk parameter weighting and scoring had on prioritization. The repeatable prioritization process yielded an objective, risk-based and technically defensible process for prioritization that gained concurrence from all stakeholders, including Natural Resources Canada (NRCan), which is responsible for oversight of the NLLP. (authors)
Numerical modeling of laser assisted tape winding process
NASA Astrophysics Data System (ADS)
Zaami, Amin; Baran, Ismet; Akkerman, Remko
2017-10-01
Laser assisted tape winding (LATW) has become an increasingly popular way of producing thermoplastic products such as ultra-deep-water risers, gas tanks, and structural parts for aerospace applications. Predicting the temperature in LATW has been a source of great interest, since the temperature at the nip point plays a key role in mechanical interface performance. Modeling the LATW process includes several challenges, such as the interaction of optics and heat transfer. In the current study, numerical modeling of the optical behavior of laser radiation on circular surfaces is investigated based on a ray-tracing and non-specular reflection model. The non-specular reflection is implemented considering the anisotropic reflective behavior of the fiber-reinforced thermoplastic tape using a bidirectional reflectance distribution function (BRDF). The proposed model includes a three-dimensional circular geometry, in which the effects of reflection from different ranges of the circular surface, as well as the effect of process parameters on the temperature distribution, are studied. The heat transfer model is constructed using a fully implicit method. The effect of process parameters on the nip-point temperature is examined. Furthermore, several laser distributions, including Gaussian and linear, are examined, which have not previously been considered in the literature.
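A basic building block of any such ray-tracing model is specular reflection of a ray direction about the local surface normal; a minimal sketch (the BRDF-weighted non-specular spreading described in the abstract is omitted here):

```python
import math

def reflect(d, n):
    """Specular reflection of direction d about unit surface normal n:
    r = d - 2 (d . n) n. A non-specular (BRDF) model would distribute the
    reflected power around this direction rather than using it exactly."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# Ray hitting a horizontal surface (normal +y) at 45 degrees
r = reflect((1 / math.sqrt(2), -1 / math.sqrt(2), 0.0), (0.0, 1.0, 0.0))
print(r)
```

The incoming downward ray bounces into a symmetric upward ray; on the curved mandrel surface the normal varies point by point, which is what couples the circular geometry to the reflection pattern.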
NASA Astrophysics Data System (ADS)
Jackson-Blake, L.
2014-12-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. 
Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
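Auto-calibration of the kind described can be illustrated with a plain Metropolis random walk on a one-parameter toy model. Everything below is hypothetical: the linear flow-to-concentration "model", the synthetic daily data, and the settings; MCMC-DREAM itself is a more sophisticated multi-chain variant of this idea:

```python
import math
import random

rng = random.Random(0)

# Toy "model": concentration responds linearly to flow with one unknown
# parameter k; daily observations are synthesized with k_true = 2.0 plus noise.
flow = [rng.uniform(0.5, 3.0) for _ in range(200)]
k_true, sigma = 2.0, 0.5
obs = [k_true * q + rng.gauss(0, sigma) for q in flow]

def log_like(k):
    # Gaussian log-likelihood (up to a constant) of the daily observations
    return sum(-((y - k * q) ** 2) / (2 * sigma ** 2) for q, y in zip(flow, obs))

# Metropolis random walk over the unknown parameter, flat prior
k, chain = 1.0, []
ll = log_like(k)
for _ in range(5000):
    cand = k + rng.gauss(0, 0.05)
    ll_cand = log_like(cand)
    if ll_cand > ll or rng.random() < math.exp(ll_cand - ll):
        k, ll = cand, ll_cand
    chain.append(k)

post = chain[1000:]  # discard burn-in
k_hat = sum(post) / len(post)
print(round(k_hat, 2))
```

With 200 daily points the marginal posterior is tightly constrained around the true value; thinning the data to a fortnightly subset would widen it, which is the effect the study quantifies for INCA-P.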
Development of process parameters for 22 nm PMOS using 2-D analytical modeling
NASA Astrophysics Data System (ADS)
Maheran, A. H. Afifah; Menon, P. S.; Ahmad, I.; Shaari, S.; Faizah, Z. A. Noor
2015-04-01
Scaling and integration of the complementary metal-oxide-semiconductor field-effect transistor (CMOSFET) have become a major challenge. Innovation in transistor structures and the integration of novel materials are necessary to sustain this performance trend. CMOS variability is a very important concern in scaled technologies due to the limitations of process control and to statistical variability related to fundamental discreteness and materials. Minimizing transistor variation through technology optimization, while ensuring robust product functionality and performance, is the major issue. In this article, the continuing study of process parameter variations is extended and delivered thoroughly in order to achieve a minimum leakage current (ILEAK) in a PMOS planar transistor at 22 nm gate length. Several device parameters are varied systematically using the Taguchi method to predict the optimum combination of fabrication process parameters. A combination of a high-permittivity (high-k) material and a metal gate is utilized as the gate structure, with titanium dioxide (TiO2) and tungsten silicide (WSix) as the materials. The Taguchi L9 orthogonal array is then used to organize the device simulations, and the signal-to-noise ratio (SNR) under the Smaller-the-Better (STB) scheme is studied through the percentage influence of each process parameter. The goal is a minimum ILEAK; the maximum value predicted by the International Technology Roadmap for Semiconductors (ITRS) 2011 should not exceed 100 nA/µm. Final results show that the compensation implantation dose is the dominant factor, with a 68.49% contribution to lowering the device's leakage current. The optimal process parameter combination results in a mean ILEAK of 3.96821 nA/µm, far below the predicted maximum.
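The Smaller-the-Better S/N statistic used in the Taguchi analysis is SNR = -10 log10(mean of the squared responses), so the run with the highest SNR has the smallest leakage. A sketch with hypothetical leakage-current replicates (not the paper's simulation output):

```python
import math

def sn_smaller_the_better(measurements):
    """Taguchi smaller-the-better signal-to-noise ratio (dB):
    SNR = -10 * log10(mean of squared responses)."""
    return -10.0 * math.log10(sum(y * y for y in measurements) / len(measurements))

# Hypothetical ILEAK replicates (nA/µm) for three of the nine L9 trials
trials = {
    "run1": [5.1, 4.8, 5.3],
    "run2": [3.9, 4.1, 4.0],
    "run3": [7.2, 6.8, 7.5],
}
snr = {name: sn_smaller_the_better(values) for name, values in trials.items()}
best = max(snr, key=snr.get)  # highest SNR corresponds to smallest leakage
print(best, {k: round(v, 2) for k, v in snr.items()})
```

In the full analysis, averaging SNR values per factor level across the L9 array yields each parameter's percentage contribution, which is how the dominant factor (here, the compensation implantation dose) is identified.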
Afolabi, Afolawemi; Akinlabi, Olakemi; Bilgili, Ecevit
2014-01-23
Wet stirred media milling has proven to be a robust process for producing nanoparticle suspensions of poorly water-soluble drugs. As the process is expensive and energy-intensive, it is important to study the breakage kinetics, which determine the cycle time and production rate for a desired fineness. Although the impact of process parameters on the properties of final product suspensions has been investigated, scant information is available regarding their impact on the breakage kinetics. Here, we elucidate the impact of stirrer speed, bead concentration, and drug loading on the breakage kinetics via a microhydrodynamic model for the bead-bead collisions. Suspensions of griseofulvin, a model poorly water-soluble drug, were prepared in the presence of two stabilizers: hydroxypropyl cellulose and sodium dodecyl sulfate. Laser diffraction, scanning electron microscopy, and rheometry were used to characterize them. Various microhydrodynamic parameters, including a newly defined milling intensity factor, were calculated. An increase in either the stirrer speed or the bead concentration led to an increase in the specific energy and the milling intensity factor, and consequently faster breakage. On the other hand, an increase in the drug loading led to a decrease in these parameters and consequently slower breakage. While all microhydrodynamic parameters provided significant physical insight, only the milling intensity factor was capable of explaining the influence of all parameters directly through its strong correlation with the process time constant. Besides guiding process optimization, the analysis rationalizes the preparation of a single high drug-loaded batch (20% or higher) instead of multiple dilute batches. Copyright © 2013 Elsevier B.V. All rights reserved.
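The process time constant mentioned above can be estimated by linearizing an exponential approach of the median particle size to a limiting fineness, d(t) = d_lim + (d0 - d_lim) exp(-t/tau). The d50 values below are synthetic illustrations of that functional form, not the study's measurements:

```python
import math

# Hypothetical median particle size d50 (µm) vs milling time (min), generated
# from an exponential approach to a limiting fineness with tau_true = 40 min.
d0, d_lim = 10.0, 0.15
times = [10, 30, 60, 120]
d50 = [d_lim + (d0 - d_lim) * math.exp(-t / 40.0) for t in times]

# Linearize: ln((d - d_lim) / (d0 - d_lim)) = -t / tau, then average the
# per-point estimates of the time constant.
taus = [-t / math.log((d - d_lim) / (d0 - d_lim)) for t, d in zip(times, d50)]
tau = sum(taus) / len(taus)
print(round(tau, 1))
```

A smaller tau means faster breakage, which is the quantity the milling intensity factor was found to correlate with across stirrer speed, bead concentration, and drug loading.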
Influence of process parameters on the effectiveness of photooxidative treatment of pharmaceuticals.
Markic, Marinko; Cvetnic, Matija; Ukic, Sime; Kusic, Hrvoje; Bolanca, Tomislav; Bozic, Ana Loncaric
2018-03-21
In this study, UV-C/H2O2 and UV-C/[Formula: see text] processes, as photooxidative advanced oxidation processes, were applied for the treatment of seven pharmaceuticals, either already included in the Directive 2013/39/EU "watch list" (17α-ethynylestradiol, 17β-estradiol) or with potential to be added in the near future due to environmental properties and increasing consumption (azithromycin, carbamazepine, dexamethasone, erythromycin and oxytetracycline). The influence of process parameters (pH, oxidant concentration and type) on pharmaceutical degradation was studied through a response surface modelling approach. It was established that degradation obeys a first-order kinetic regime regardless of structural differences and over the entire range of studied process parameters. The results revealed that the effectiveness of the UV-C/H2O2 process is highly dependent on both initial pH and oxidant concentration. The UV-C/[Formula: see text] process, exhibiting several times faster degradation of the studied pharmaceuticals, was found to be less sensitive to pH changes, providing a practical benefit to its utilization. The influence of the water matrix on the degradation kinetics of the studied pharmaceuticals was studied through natural organic matter effects in single-component and mixture systems.
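First-order kinetics as established above imply ln(C/C0) = -kt, so the rate constant is the slope of a simple linear fit of log-concentration against time. The concentration data below are illustrative, not the study's measurements:

```python
import math

# Hypothetical photooxidative degradation data: time (min) vs residual
# concentration fraction C/C0 for one pharmaceutical.
t = [0, 5, 10, 20, 30]
c_frac = [1.0, 0.61, 0.37, 0.14, 0.05]

# First-order kinetics: ln(C/C0) = -k t, so -k is the least-squares slope
# of ln(C/C0) against t.
y = [math.log(c) for c in c_frac]
n = len(t)
sx, sy = sum(t), sum(y)
sxx = sum(x * x for x in t)
sxy = sum(x * yi for x, yi in zip(t, y))
k = -(n * sxy - sx * sy) / (n * sxx - sx * sx)

half_life = math.log(2) / k  # time for the concentration to halve
print(round(k, 3), round(half_life, 1))
```

Comparing fitted k values across pH and oxidant levels is exactly what the response surface model in the study systematizes.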
Wang, Yi; Lee, Sui Mae; Dykes, Gary
2015-01-01
Bacterial attachment to abiotic surfaces can be explained as a physicochemical process. Mechanisms of the process have been widely studied but are not yet well understood due to their complexity. Physicochemical processes can be influenced by various interactions and factors in attachment systems, including, but not limited to, hydrophobic interactions, electrostatic interactions and substratum surface roughness. Mechanistic models and control strategies for bacterial attachment to abiotic surfaces have been established based on the current understanding of the attachment process and the interactions involved. Due to a lack of process control and standardization in the methodologies used to study the mechanisms of bacterial attachment, however, various challenges are apparent in the development of models and control strategies. In this review, the physicochemical mechanisms, interactions and factors affecting the process of bacterial attachment to abiotic surfaces are described. Mechanistic models established based on these parameters are discussed in terms of their limitations. Currently employed methods to study these parameters and bacterial attachment are critically compared. The roles of these parameters in the development of control strategies for bacterial attachment are reviewed, and the challenges that arise in developing mechanistic models and control strategies are assessed.
Effects Of Thermal Exchange On Material Flow During Steel Thixoextrusion Process
NASA Astrophysics Data System (ADS)
Eric, Becker; Guochao, Gu; Laurent, Langlois; Raphaël, Pesci; Régis, Bigot
2011-01-01
Semisolid processing is an innovative technology for near-net-shape production of components, in which metallic alloys are processed in the semisolid state. Taking advantage of the thixotropic behavior of alloys in the semisolid state, significant progress has been made in semisolid processing. However, the consequences of such behavior for the flow during thixoforming are still not completely understood. To explore and better understand the influence of different parameters on material flow during the thixoextrusion process, experiments were performed using the low-carbon steel C38. The billet was partially melted at a high solid fraction. The effects of various process parameters, including the initial billet temperature, the die temperature, the punch speed, and the presence of a Ceraspray layer at the tool-billet interface, were investigated through experiments and simulation. Analysis of the results identified that these parameters mainly affect thermal exchange between die and part. The Ceraspray layer not only plays a lubricating role but also acts as a thermal barrier at the tool-billet interface. Furthermore, the thermal effects can affect the material flow, which is composed of various distinct zones.
Potential for Remotely Sensed Soil Moisture Data in Hydrologic Modeling
NASA Technical Reports Server (NTRS)
Engman, Edwin T.
1997-01-01
Many hydrologic processes display a unique signature that is detectable with microwave remote sensing. These signatures are in the form of the spatial and temporal distributions of surface soil moisture and portray the spatial heterogeneity of hydrologic processes and properties that one encounters in drainage basins. The hydrologic processes that may be detected include ground water recharge and discharge zones, storm runoff contributing areas, regions of potential and less than potential ET, and information about the hydrologic properties of soils and heterogeneity of hydrologic parameters. Microwave remote sensing has the potential to detect these signatures within a basin in the form of volumetric soil moisture measurements in the top few cm. These signatures should provide information on how and where to apply soil physical parameters in distributed and lumped parameter models and how to subdivide drainage basins into hydrologically similar sub-basins.
Modeling the cooperative and competitive contagions in online social networks
NASA Astrophysics Data System (ADS)
Zhuang, Yun-Bei; Chen, J. J.; Li, Zhi-hong
2017-10-01
The wide adoption of social media has increased the interaction among different pieces of information, and this interaction includes both cooperation and competition for our finite attention. While previous research has focused on full competition, this paper extends the interaction to cover both cooperation and competition by employing an IS1S2R model. To explore how two different pieces of information interact with each other, the IS1S2R model splits the agents into four compartments (Ignorant, Spreader I, Spreader II, Stifler), based on the SIR epidemic spreading model. Using real data from Weibo.com, a social network site similar to Twitter, we find that some parameters, such as decaying rates, influence both the cooperative and the competitive diffusion processes, while other parameters, such as infectious rates, influence only the competitive diffusion process. Moreover, the parameters' effects are more significant in competitive diffusion than in cooperative diffusion.
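A schematic four-compartment model of this kind can be integrated with a simple Euler scheme. The equations and rates below are an illustrative SIR-style extension, not the paper's exact IS1S2R formulation:

```python
def is1s2r_step(state, beta1, beta2, delta1, delta2, dt=0.01):
    """One Euler step of a schematic Ignorant / Spreader I / Spreader II /
    Stifler model. beta* are infectious (adoption) rates and delta* are
    decaying (stifling) rates; the coupling terms here are illustrative."""
    I, S1, S2, R = state
    new_s1 = beta1 * S1 * I      # ignorants adopting message 1
    new_s2 = beta2 * S2 * I      # ignorants adopting message 2
    stifle1 = delta1 * S1        # spreaders of message 1 losing interest
    stifle2 = delta2 * S2        # spreaders of message 2 losing interest
    return (I - dt * (new_s1 + new_s2),
            S1 + dt * (new_s1 - stifle1),
            S2 + dt * (new_s2 - stifle2),
            R + dt * (stifle1 + stifle2))

state = (0.98, 0.01, 0.01, 0.0)  # population fractions, summing to 1
for _ in range(5000):            # integrate to t = 50
    state = is1s2r_step(state, beta1=0.6, beta2=0.4, delta1=0.1, delta2=0.1)
I, S1, S2, R = state
print(round(I, 3), round(S1, 3), round(S2, 3), round(R, 3))
```

Because every outflow reappears as an inflow, the four fractions stay normalized; varying the beta (infectious) and delta (decaying) rates is how one would probe the asymmetric sensitivities the abstract reports.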
Autonomous sensor particle for parameter tracking in large vessels
NASA Astrophysics Data System (ADS)
Thiele, Sebastian; Da Silva, Marco Jose; Hampel, Uwe
2010-08-01
A self-powered and neutrally buoyant sensor particle has been developed for the long-term measurement of spatially distributed process parameters in the chemically harsh environments of large vessels. One intended application is the measurement of flow parameters in stirred fermentation biogas reactors. The prototype sensor particle is a robust and neutrally buoyant capsule, which allows free movement with the flow. It contains measurement devices that log the temperature, absolute pressure (immersion depth) and 3D-acceleration data. A careful calibration including an uncertainty analysis has been performed. Furthermore, autonomous operation of the developed prototype was successfully proven in a flow experiment in a stirred reactor model. It showed that the sensor particle is feasible for future application in fermentation reactors and other industrial processes.
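The immersion depth is recovered from the absolute pressure reading via the hydrostatic relation p = p_atm + rho g h. A minimal sketch, assuming a water-like density for the medium (the actual substrate density in a fermenter would differ):

```python
def immersion_depth(p_abs_kpa, p_atm_kpa=101.325, rho=1000.0, g=9.81):
    """Convert an absolute pressure reading (kPa) to immersion depth (m)
    using p = p_atm + rho * g * h. The density is an assumption; water
    is used here for illustration."""
    return (p_abs_kpa - p_atm_kpa) * 1000.0 / (rho * g)  # kPa -> Pa, then m

# A reading of about 121 kPa corresponds to roughly 2 m of water column
print(round(immersion_depth(121.0), 2))
```

Logging this alongside the 3D-acceleration data lets the particle's vertical excursions through the vessel be reconstructed after retrieval.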
NASA Astrophysics Data System (ADS)
Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal
2013-07-01
The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the process parameters, namely, painting and washing process parameters, are optimized by Taguchi method. Though the defects are reasonably minimized by Taguchi method, in order to achieve zero defects during the processes, genetic algorithm technique is applied on the optimized parameters obtained by Taguchi method.
McGuire, Jennifer T.; Smith, Erik W.; Long, David T.; Hyndman, David W.; Haack, Sheridan K.; Klug, Michael J.; Velbel, Michael A.
2000-01-01
A fundamental issue in aquifer biogeochemistry is the means by which solute transport, geochemical processes, and microbiological activity combine to produce spatial and temporal variations in redox zonation. In this paper, we describe the temporal variability of terminal electron-accepting process (TEAP) conditions in shallow groundwater contaminated with both waste fuel and chlorinated solvents. TEAP parameters (including methane, dissolved iron, and dissolved hydrogen) were measured to characterize the contaminant plume over a 3-year period. We observed that concentrations of TEAP parameters changed on different time scales and appear to be related, in part, to recharge events. Changes in all TEAP parameters were observed on short time scales (months) and over the longer 3-year period. The results indicate that (1) interpretations of TEAP conditions in aquifers contaminated with a variety of organic chemicals, such as petroleum hydrocarbons and chlorinated solvents, must consider additional hydrogen-consuming reactions (e.g., dehalogenation); (2) interpretations must consider the roles of both in situ (at the sampling point) biogeochemical and solute transport processes; and (3) determinations of microbial communities are often necessary to confirm the interpretations made from geochemical and hydrogeological measurements of these processes.
SU-C-BRD-03: Analysis of Accelerator Generated Text Logs for Preemptive Maintenance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, CM; Baydush, AH; Nguyen, C
2014-06-15
Purpose: To develop a model to analyze medical accelerator generated parameter and performance data that will provide an early warning of performance degradation and impending component failure. Methods: A robust 6 MV VMAT quality assurance treatment delivery was used to test the constancy of accelerator performance. The generated text log files were decoded and analyzed using statistical process control (SPC) methodology. The text file data is a single snapshot of energy-specific and overall system parameters. A total of 36 system parameters were monitored, which include RF generation, electron gun control, energy control, beam uniformity control, DC voltage generation, and cooling systems. The parameters were analyzed using Individual and Moving Range (I/MR) charts. The chart limits were calculated using a hybrid technique that included the use of the standard 3σ limits and the parameter/system specification. Synthetic errors/changes were introduced to determine the initial effectiveness of I/MR charts in detecting relevant changes in operating parameters. The magnitude of the synthetic errors/changes was based on: the value of 1 standard deviation from the mean operating parameter of 483 TB systems, a small fraction (≤ 5%) of the operating range, or a fraction of the minor fault deviation. Results: There were 34 parameters in which synthetic errors were introduced. There were 2 parameters (radial position steering coil, and positive 24V DC) in which the errors did not exceed the limit of the I/MR chart. The I chart limit was exceeded for all of the remaining parameters (94.2%). The MR chart limit was exceeded in 29 of the 32 parameters (85.3%) in which the I chart limit was exceeded. Conclusion: Statistical process control I/MR evaluation of text log file parameters may be effective in providing an early warning of performance degradation or component failure for digital medical accelerator systems.
Research is supported by Varian Medical Systems, Inc.
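The I/MR chart limits described in the Methods can be computed with the standard SPC individuals-chart constants. The log-file readings below are hypothetical, and the hybrid clamping to parameter specification limits mentioned in the abstract is noted but not implemented:

```python
def imr_limits(samples):
    """Individual / Moving-Range (I/MR) control chart limits using the
    standard SPC constants for moving ranges of size 2: 2.66 = 3/d2 with
    d2 = 1.128, and D4 = 3.267. The hybrid technique in the abstract would
    additionally clamp these limits to the parameter specification."""
    mrs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    mr_bar = sum(mrs) / len(mrs)
    x_bar = sum(samples) / len(samples)
    return {
        "I": (x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar),
        "MR": (0.0, 3.267 * mr_bar),
    }

# Hypothetical snapshots of one logged system parameter from successive QA
# deliveries (arbitrary units), plus one synthetic error at 5.20
readings = [5.02, 5.00, 5.03, 4.99, 5.01, 5.02, 5.00]
limits = imr_limits(readings)
flagged = [x for x in readings + [5.20]
           if not limits["I"][0] <= x <= limits["I"][1]]
print(limits, flagged)
```

The baseline readings all fall inside the I-chart limits, while the synthetic shift is flagged, which is the detection behavior the study evaluated across 34 parameters.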
MODEST - JPL GEODETIC AND ASTROMETRIC VLBI MODELING AND PARAMETER ESTIMATION PROGRAM
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1994-01-01
Observations of extragalactic radio sources in the gigahertz region of the radio frequency spectrum by two or more antennas, separated by a baseline as long as the diameter of the Earth, can be reduced, by radio interferometry techniques, to yield time delays and their rates of change. The Very Long Baseline Interferometric (VLBI) observables can be processed by the MODEST software to yield geodetic and astrometric parameters of interest in areas such as geophysical satellite and spacecraft tracking applications and geodynamics. As the accuracy of radio interferometry has improved, increasingly complete models of the delay and delay rate observables have been developed. MODEST is a delay model (MOD) and parameter estimation (EST) program that takes into account delay effects such as geometry, clock, troposphere, and the ionosphere. MODEST includes all known effects at the centimeter level in modeling. As the field evolves and new effects are discovered, these can be included in the model. In general, the model includes contributions to the observables from Earth orientation, antenna motion, clock behavior, atmospheric effects, and radio source structure. Within each of these categories, a number of unknown parameters may be estimated from the observations. Since all parts of the time delay model contain nearly linear parameter terms, a square-root-information filter (SRIF) linear least-squares algorithm is employed in parameter estimation. Flexibility (via dynamic memory allocation) in the MODEST code ensures that the same executable can process a wide array of problems. These range from a few hundred observations on a single baseline, yielding estimates of tens of parameters, to global solutions estimating tens of thousands of parameters from hundreds of thousands of observations at antennas widely distributed over the Earth's surface. 
Depending on memory and disk storage availability, large problems may be subdivided into more tractable pieces that are processed sequentially. MODEST is written in FORTRAN 77, C-language, and VAX ASSEMBLER for DEC VAX series computers running VMS. It requires 6Mb of RAM for execution. The standard distribution medium for this package is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Instructions for use and sample input and output data are available on the distribution media. This program was released in 1993 and is a copyrighted work with all copyright vested in NASA.
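The square-root-information-filter (SRIF) least-squares step that MODEST's estimator relies on can be shown in miniature. The sketch below is a generic illustration, not MODEST code: a toy linear "delay model" with three invented parameters is accumulated batch by batch into an upper-triangular information array via QR factorization, then solved by back-substitution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear observation model: y = A x + noise, with x the unknown
# parameters (here just 3 of them, purely illustrative).
x_true = np.array([2.0, -1.0, 0.5])
A = rng.normal(size=(50, 3))
y = A @ x_true + 1e-3 * rng.normal(size=50)

# Square-root information filter: keep only the triangular factor R and
# right-hand side z; fold in each batch of rows by re-triangularizing.
R = np.zeros((3, 3))
z = np.zeros(3)
for i in range(0, 50, 10):                       # process data in batches
    Ab, yb = A[i:i+10], y[i:i+10]
    stacked = np.vstack([np.column_stack([R, z]),
                         np.column_stack([Ab, yb])])
    _, Rz = np.linalg.qr(stacked)                # re-triangularize
    R, z = Rz[:3, :3], Rz[:3, 3]

x_hat = np.linalg.solve(R, z)                    # back-substitution
```

Triangularizing the stacked array instead of forming normal equations keeps the effective condition number at its square root, which is one reason SRIF formulations suit large, ill-conditioned VLBI solutions; the batch loop mirrors how large problems can be processed sequentially in pieces.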
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
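The idea can be illustrated with a toy two-state process (my own simplification, not the authors' code; the rates, sample size, and grid search are invented): draw independent snapshots of the steady state, then score candidate rate matrices by propagating the empirical distribution forward for a fictitious time and evaluating the samples under the result.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state Markov process with rates k12 (1 -> 2) and k21 (2 -> 1).
# Steady state: p = (k21, k12) / (k12 + k21); only the ratio k21/k12
# is identifiable from steady-state snapshots.
k12_true, k21_true = 1.0, 3.0
p_true = np.array([k21_true, k12_true]) / (k12_true + k21_true)

# Independent snapshots of the steady state.
counts = rng.multinomial(10_000, p_true)
p_emp = counts / counts.sum()

def propagate(p, k12, k21, dt):
    """Exact two-state propagator: relax toward the stationary
    distribution at rate k12 + k21."""
    k = k12 + k21
    p_inf = np.array([k21, k12]) / k
    return p_inf + (p - p_inf) * np.exp(-k * dt)

def propagator_loglik(k12, k21, dt=0.5):
    """Score a candidate model: propagate the empirical distribution
    by dt, then evaluate the observed samples under the result."""
    q = propagate(p_emp, k12, k21, dt)
    return counts @ np.log(q)

# Grid search over the identifiable ratio r = k21/k12 (k12 fixed at 1).
ratios = np.linspace(1.0, 6.0, 501)
best = ratios[np.argmax([propagator_loglik(1.0, r) for r in ratios])]
```

The maximum sits where the fictitious propagation leaves the empirical distribution fixed, i.e. where the candidate steady state matches the data, which is exactly the fixed-point property the propagator likelihood exploits.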
Mueller, Christina J; White, Corey N; Kuchinke, Lars
2017-11-27
The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of three emotion categories: happy, neutral, and fear. Main effects of emotion, as well as the stability of emerging response-style patterns evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects of the two tasks differed, with a processing advantage for happy over neutral and fear-related words in the lexical decision task and a processing advantage for neutral over happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters, and in the case of the lexical decision task also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations among diffusion model parameters, including response styles, non-decision times, and information accumulation.
Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods
NASA Technical Reports Server (NTRS)
Morley, Jr., Robert E. (Inventor); Engebretson, A. Maynard (Inventor); Engel, George L. (Inventor); Sullivan, Thomas J. (Inventor)
1993-01-01
An electronic filter for filtering an electrical signal. Signal processing circuitry therein includes a logarithmic filter having a series of filter stages with inputs and outputs in cascade and respective circuits associated with the filter stages for storing electrical representations of filter parameters. The filter stages include circuits for respectively adding the electrical representations of the filter parameters to the electrical signal to be filtered thereby producing a set of filter sum signals. At least one of the filter stages includes circuitry for producing a filter signal in substantially logarithmic form at its output by combining a filter sum signal for that filter stage with a signal from an output of another filter stage. The signal processing circuitry produces an intermediate output signal, and a multiplexer connected to the signal processing circuit multiplexes the intermediate output signal with the electrical signal to be filtered so that the logarithmic filter operates as both a logarithmic prefilter and a logarithmic postfilter. Other electronic filters, signal conversion apparatus, electroacoustic systems, hearing aids and methods are also disclosed.
NASA Astrophysics Data System (ADS)
Miao, Changyun; Shi, Boya; Li, Hongqiang
2008-12-01
Intelligent clothing for monitoring human physiological parameters was developed using fiber Bragg grating (FBG) sensor technology. This paper studies the principles and methods of measuring physiological parameters, including body temperature and heart rate, with distributed FBG sensors in the clothing; builds mathematical models of the physiological parameter measurements; presents the signal processing method for body temperature and heart rate detection; and designs the physiological parameter detection module, in which interference signals are filtered out and measurement accuracy is improved. The integration of the intelligent clothing is also described. The intelligent clothing implements real-time measurement, processing, storage, and output of body temperature and heart rate, and offers accurate measurement, portability, low cost, and real-time monitoring. It enables non-contact monitoring between doctors and patients and can help detect diseases such as cancer and infectious diseases early, so that patients receive timely treatment. It is of particular value for safeguarding the health of the elderly and of children with language dysfunction.
NASA Astrophysics Data System (ADS)
Singh, Jagdeep; Sharma, Rajiv Kumar
2016-12-01
Electrical discharge machining (EDM) is a well-known nontraditional manufacturing process for machining difficult-to-machine (DTM) materials with exceptional hardness. Researchers have successfully hybridized this process by incorporating powders into the dielectric, known as the powder-mixed EDM process. This hybridization drastically improves process efficiency by increasing material removal rate and micro-hardness while reducing tool wear rate and surface roughness. EDM has several input parameters, including pulse-on time, dielectric level and type, current setting, and flushing pressure, which have a significant effect on EDM performance. However, despite their positive influence on machining, the effects of these parameters on environmental conditions must also be investigated. Most studies use kerosene oil as the dielectric fluid. In this work, by contrast, the authors report findings for three different dielectric fluids, kerosene oil, EDM oil, and distilled water, using a one-variable-at-a-time approach, covering both machining and environmental aspects. A hazard and operability analysis is employed to identify the inherent safety factors associated with powder-mixed EDM of WC-Co.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares (RLS) based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets, including displacement, velocity, and rotational response. The effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
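The recursive least squares building block with a forgetting factor can be sketched generically (this is a textbook RLS demo, not the authors' SEREP-reduced rotor model; the spin speed, signal, and noise level are invented): track the in-phase and quadrature components of a synchronous unbalance force from noisy measurements, from which amplitude and phase follow.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synchronous unbalance force f(t) = a*cos(w t) + b*sin(w t);
# amplitude and phase follow from (a, b), estimated recursively.
w = 2 * np.pi * 25.0                     # spin speed, rad/s (assumed)
a_true, b_true = 3.0, -1.5
t = np.arange(0, 1.0, 1e-3)
f_meas = a_true*np.cos(w*t) + b_true*np.sin(w*t) + 0.1*rng.normal(size=t.size)

lam = 0.995                              # forgetting factor
theta = np.zeros(2)                      # running estimate of (a, b)
P = 1e3 * np.eye(2)                      # inverse information matrix
for tk, fk in zip(t, f_meas):
    phi = np.array([np.cos(w*tk), np.sin(w*tk)])   # regressor
    g = P @ phi / (lam + phi @ P @ phi)            # gain vector
    theta = theta + g * (fk - phi @ theta)         # innovation update
    P = (P - np.outer(g, phi @ P)) / lam           # covariance update

amp = np.hypot(*theta)                   # unbalance amplitude
phase = np.arctan2(-theta[1], theta[0])  # f = amp*cos(w t + phase)
```

The forgetting factor lam < 1 discounts old data exponentially, which is what lets the estimator follow slowly drifting unbalance; the paper studies exactly this trade-off between tracking speed and noise sensitivity.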
Robust Online Hamiltonian Learning
NASA Astrophysics Data System (ADS)
Granade, Christopher; Ferrie, Christopher; Wiebe, Nathan; Cory, David
2013-05-01
In this talk, we introduce a machine-learning algorithm for the problem of inferring the dynamical parameters of a quantum system, and discuss this algorithm in the example of estimating the precession frequency of a single qubit in a static field. Our algorithm is designed with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online, during experimental data collection, or can be used as a tool for post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment to experiment, and also when additional noise processes are present and unknown. Finally, we discuss the performance of our algorithm by appeal to the Cramér-Rao bound. This work was financially supported by the Canadian government through NSERC and CERC and by the United States government through DARPA. NW would like to acknowledge funding from USARO-DTO.
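The single-qubit frequency example can be sketched with a grid-based Bayesian update (a simplified stand-in for the sequential Monte Carlo approach the talk describes; the true frequency, measurement times, and grid are all invented): each measurement outcome follows the Born rule P(0 | w, t) = cos²(w t / 2), and the posterior over w is updated after every shot.

```python
import numpy as np

rng = np.random.default_rng(3)

# A qubit precessing at unknown frequency w is measured after evolving
# for time t; Born rule: P(outcome=0 | w, t) = cos^2(w t / 2).
w_true = 0.7
times = 0.3 * np.arange(1, 201)        # varied t breaks the cos^2 degeneracy

grid = np.linspace(0.01, 2.0, 2000)    # prior support for w
post = np.ones_like(grid) / grid.size  # flat prior

for t in times:
    p0 = np.cos(w_true * t / 2) ** 2
    outcome = rng.random() > p0        # False -> "0", True -> "1"
    like = np.cos(grid * t / 2) ** 2
    if outcome:
        like = 1.0 - like
    post *= like                        # Bayes update
    post /= post.sum()                  # renormalize each step

w_map = grid[np.argmax(post)]           # posterior mode
```

The online character is visible here: the posterior is usable after every shot, so the next evolution time t could be chosen adaptively, which is the experimental-resource trade-off the abstract alludes to.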
Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B.; Sturm, Benjamin W.
2016-02-09
According to one embodiment, a scintillator radiation detector system includes a scintillator, and a processing device for processing pulse traces corresponding to light pulses from the scintillator, where the processing device is configured to: process each pulse trace over at least two temporal windows and to use pulse digitization to improve energy resolution of the system. According to another embodiment, a scintillator radiation detector system includes a processing device configured to: fit digitized scintillation waveforms to an algorithm, perform a direct integration of fit parameters, process multiple integration windows for each digitized scintillation waveform to determine a correction factor, and apply the correction factor to each digitized scintillation waveform.
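The multiple-integration-window idea can be sketched generically (a toy two-component pulse, not the patented processing chain; decay constants, window lengths, and the linear correction are invented): integrate each digitized trace over a prompt window and over the full window, and use the ratio as a pulse-shape correction factor.

```python
import numpy as np

# Toy digitized scintillation pulse: fast + slow exponential decay
# components sampled at 1 ns (amplitudes and decay times illustrative).
t = np.arange(0, 2000)                        # ns
pulse = 1.0*np.exp(-t/30.0) + 0.2*np.exp(-t/300.0)

# Two temporal integration windows over the same trace.
prompt = pulse[:100].sum()                    # fast window, 0-100 ns
total = pulse.sum()                           # full window

ratio = prompt / total                        # pulse-shape discriminator
corrected = total * (1.0 + 0.1*(1.0 - ratio)) # hypothetical linear correction
```

In a real detector the correction coefficients would be calibrated against reference gamma lines; the point of the sketch is only that digitized traces allow several windows to be evaluated per pulse, which is what the claimed energy-resolution improvement rests on.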
Shao, Dongyan; Atungulu, Griffiths G; Pan, Zhongli; Yue, Tianli; Zhang, Ang; Li, Xuan
2012-08-01
The value of tomato seed has not been fully recognized. The objectives of this research were to establish suitable processing conditions for extracting oil from tomato seed using solvent, to determine the impact of processing conditions on the yield and antioxidant activity of the extracted oil, and to elucidate the kinetics of the oil extraction process. Four processing parameters, including time, temperature, solvent-to-solid ratio, and particle size, were studied. A second-order model was established to describe the oil extraction process. Based on the results, increasing temperature, solvent-to-solid ratio, and extraction time increased oil yield, whereas larger particle size reduced it. The recommended extraction conditions were 8 min of extraction at a temperature of 25 °C, a solvent-to-solids ratio of 5/1 (v/w), and a particle size of 0.38 mm, which gave an oil yield of 20.32% with a recovery rate of 78.56%. The DPPH scavenging activity of the extracted oil was not significantly affected by the extraction parameters. The inhibitory concentration (IC50) of tomato seed oil was 8.67 mg/mL, which is notably low compared with most vegetable oils. The second-order model successfully described the kinetics of the tomato oil extraction process, and the kinetic parameters, including the initial extraction rate (h), the equilibrium concentration of oil (Cs), and the extraction rate constant (k), could be precisely predicted with R² of at least 0.957. The study revealed that tomato seed, typically treated as a low-value byproduct of tomato processing, has great potential for producing oil with high antioxidant capability. The impact of processing conditions, including time, temperature, solvent-to-solid ratio, and particle size, on the yield and antioxidant activity of extracted tomato seed oil is reported; optimal conditions and models describing the extraction process are recommended. This information is vital for determining the extraction processing conditions for industrial production of high-quality tomato seed oil. Journal of Food Science © 2012 Institute of Food Technologists® No claim to original US government works.
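The second-order extraction kinetics named in the abstract is commonly written C(t) = Cs² k t / (1 + Cs k t), with initial extraction rate h = k Cs²; its linearized form t/C = 1/h + t/Cs lets both parameters be read off a straight-line fit. A sketch on synthetic, noise-free data (parameter values invented, not the paper's measurements):

```python
import numpy as np

# Second-order extraction model: C(t) = Cs^2 k t / (1 + Cs k t)
Cs_true, k_true = 20.0, 0.05          # equilibrium conc., rate constant
t = np.array([1.0, 2.0, 4.0, 8.0, 15.0, 30.0, 60.0])
C = Cs_true**2 * k_true * t / (1.0 + Cs_true * k_true * t)

# Linearization: t/C = 1/h + t/Cs, where h = k * Cs^2 is the
# initial extraction rate; fit a straight line to t/C versus t.
slope, intercept = np.polyfit(t, t / C, 1)
Cs_est = 1.0 / slope                  # equilibrium concentration
h_est = 1.0 / intercept               # initial extraction rate
k_est = h_est / Cs_est**2             # rate constant
```

With real data the same linear fit gives least-squares estimates of h, Cs, and k, which is how the paper's reported R² of at least 0.957 would be obtained.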
Optical air data systems and methods
NASA Technical Reports Server (NTRS)
Caldwell, Loren M. (Inventor); Tang, Shoou-yu (Inventor); O'Brien, Martin (Inventor)
2010-01-01
Systems and methods for sensing air outside a moving aircraft are presented. In one embodiment, a system includes a laser for generating laser energy. The system also includes one or more transceivers for projecting the laser energy as laser radiation to the air. Subsequently, each transceiver receives laser energy as it is backscattered from the air. A computer processes signals from the transceivers to distinguish molecular scattered laser radiation from aerosol scattered laser radiation and determines one or more air parameters based on the scattered laser radiation. Such air parameters may include air speed, air pressure, air temperature and aircraft orientation angle, such as yaw, angle of attack and sideslip.
Prioritized Contact Transport Stream
NASA Technical Reports Server (NTRS)
Hunt, Walter Lee, Jr. (Inventor)
2015-01-01
A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
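The smearing model can be illustrated numerically (a simplified sketch with invented parameters, not the authors' analytical model): superpose Gaussian point-spread functions along the line segment swept by the star during the exposure, then take the intensity-weighted centroid, which for uniform motion lands at the segment midpoint.

```python
import numpy as np

# Image grid (pixels) and imaging parameters (illustrative values).
N = 64
y, x = np.mgrid[0:N, 0:N].astype(float)
sigma = 1.5                      # Gaussian radius of the static PSF
x0, y0 = 20.0, 32.0              # star position at exposure start
vx, vy = 12.0, 0.0               # pixels traversed during the exposure

# Integrate the moving PSF over the exposure: equivalent to a static
# linear light source (line segment spread convolved with a Gaussian).
img = np.zeros((N, N))
for s in np.linspace(0.0, 1.0, 200):         # normalized exposure time
    cx, cy = x0 + s*vx, y0 + s*vy
    img += np.exp(-((x-cx)**2 + (y-cy)**2) / (2*sigma**2))

# Intensity-weighted centroid of the smeared spot.
cx_est = (img * x).sum() / img.sum()
cy_est = (img * y).sum() / img.sum()
```

Adding photon and readout noise to img would reproduce the centroiding-error behavior the paper analyzes: the longer the trail (larger vx for a fixed exposure), the more pixels share the fixed incident flux and the worse the centroid estimate becomes.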
NASA Astrophysics Data System (ADS)
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The wear of the wheel-set tread is one of the main factors influencing the safety and stability of a running train. The geometrical parameters of interest mainly include flange thickness and flange height. A line-structured laser is projected onto the wheel tread surface, and the geometrical parameters are deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit, with image acquisition handled in hardware-interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares, then extracts the squares belonging to the target by fusing the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms, a considerable speed-up over serial CPU computation, which greatly improves real-time image processing capacity. When a wheel set passes at limited speed, the system, installed along the railway line, measures the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
NASA Astrophysics Data System (ADS)
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely measured polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including the system architecture (structure), the process parameterizations, and the parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise the parameter values are the only elements of a model that can vary, while the remaining modeling elements are fixed a priori and therefore not subject to change. Once chosen, the process parameterizations and model structure usually remain the same throughout the modeling process; the only flexibility comes from the changing parameter values, which enable these models to reproduce the desired observations. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. What remains unexplored, in our view, is to what extent the process parameterization and the system architecture (model structure) can support each other. In other words: does a specific form of process parameterization emerge for a specific model, given its system architecture and data, when little or no assumption is made about the process parameterization itself? In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than would have been chosen otherwise. This comparison implicitly and explicitly provides an assessment of how uncertain our perception of model process parameterization is, relative to the extent to which the data can support it.
Formation of enriched black tea extract loaded chitosan nanoparticles via electrospraying
NASA Astrophysics Data System (ADS)
Hammond, Samuel James
Creating nanoparticles of beneficial nutraceuticals and pharmaceuticals has seen a large surge of research, because decreasing particle size enhances absorption and bioavailability. One route is electrohydrodynamic atomization, also known as electrospraying. In general, this process forces a liquid through a capillary nozzle that is subjected to an electric field. Among the different ways to create nanoparticles, electrospraying offers several advantages: fine control over particle size and distribution through the electrospray parameters (voltage, flow rate, distance, and time), higher encapsulation efficiency than other methods, and a one-step process without exposure to extreme conditions (Gomez-Estaca et al. 2012, Jaworek and Sobcyzk 2008). The current study aimed to create chitosan-encapsulated, theaflavin-2 (TF-2) enriched black tea extract (BTE) nanoparticles via electrospraying. The first step was to create the smallest chitosan nanoparticles possible by varying the electrospray parameters and the chitosan-acetic acid solution properties. The solution properties varied included chitosan molecular weight, acetic acid concentration, and chitosan concentration. The electrospray parameters, such as voltage, flow rate, and syringe-to-collector distance, are the most important in determining particle size. After the smallest chitosan particles were obtained, the TF-2 enriched black tea extract was added to the chitosan-acetic acid solution to be electrosprayed. The particles were assessed by atomic force microscopy (AFM) and scanning electron microscopy (SEM) for particle morphology and size, and by ultraviolet-visible spectrophotometry (UV-VIS) for loading efficiency. Chitosan-BTE nanoparticles were successfully created in a one-step process. The average particle diameter ranged from 255 nm to 560 nm, and encapsulation efficiency was above 95% for all but one sample set. Future work includes an MTT assay and cellular uptake studies.
Hattori, Yusuke; Otsuka, Makoto
2017-05-30
In the pharmaceutical industry, the implementation of continuous manufacturing has been widely promoted in lieu of the traditional batch manufacturing approach. More specifically, in recent years, the innovative concept of feed-forward control has been introduced in relation to process analytical technology. In the present study, we successfully developed a feed-forward control model for the tablet compression process by integrating data obtained from near-infrared (NIR) spectra with the physical properties of granules. In batch manufacturing, granules with the desired properties are routinely prepared through manual control of process parameters; continuous manufacturing, on the other hand, demands automatic determination of these parameters. Here, we proposed a control model developed with the partial least squares regression (PLSR) method. The most significant feature of this method is the use of a dataset integrating both the NIR spectra and the physical properties of the granules. Using our model, we determined that product properties, such as tablet weight and thickness, need to be included as independent variables in the PLSR analysis in order to predict unknown process parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
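The core computation, partial least squares regression on a predictor block of NIR spectra concatenated with granule physical properties, can be sketched with a minimal PLS1 (NIPALS) implementation on synthetic data (all data, dimensions, and coefficients are invented; a real application would use a validated chemometrics package):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic integrated predictor block: 40 samples x (100 "NIR channels"
# + 3 "granule physical properties"), one process parameter to predict.
X = rng.normal(size=(40, 103))
beta = np.zeros(103)
beta[[10, 50, 100]] = [2.0, -1.0, 0.5]          # invented true coefficients
y = X @ beta + 0.05 * rng.normal(size=40)

def pls1(X, y, n_comp):
    """PLS1 via NIPALS; returns the regression vector for centered data."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)                   # weight vector
        t = Xc @ w                               # scores
        p = Xc.T @ t / (t @ t)                   # X loadings
        q = (yc @ t) / (t @ t)                   # y loading
        Xc = Xc - np.outer(t, p)                 # deflate X
        yc = yc - q * t                          # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)       # B = W (P'W)^-1 q

B = pls1(X, y, n_comp=10)
y_hat = (X - X.mean(0)) @ B + y.mean()
r2 = 1 - ((y - y_hat)**2).sum() / ((y - y.mean())**2).sum()
```

PLSR handles the many-collinear-predictors, few-samples regime typical of NIR calibration, which is why it is the standard choice for this kind of integrated spectral plus physical-property dataset.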
Statistical error model for a solar electric propulsion thrust subsystem
NASA Technical Reports Server (NTRS)
Bantell, M. H.
1973-01-01
The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.
Guo, Songfeng; Qi, Shengwen; Zou, Yu; Zheng, Bowen
2017-01-01
In rocks and rock-like materials, the constituents, e.g. quartz, calcite and biotite, as well as microdefects, have considerably different mechanical properties, which makes such materials heterogeneous to varying degrees. The failure of materials subjected to external loads is a cracking process accompanied by stress redistribution due to material heterogeneity; the latter, however, cannot be observed directly in laboratory experiments. In this study, the cracking and stress features during the uniaxial compression process are studied numerically using the presented approach. A plastic-strain-dependent strength model is implemented in the continuum numerical tool Fast Lagrangian Analysis of Continua in Three Dimensions (FLAC3D), and a Gaussian statistical function is adopted to depict the heterogeneity of mechanical parameters including elastic modulus, friction angle, cohesion and tensile strength. The mean parameter μ and the coefficient of variance (hcv, the ratio of the standard deviation to the mean) in the function define the mean value and the heterogeneity degree of the parameters, respectively. The results show that this numerical approach captures the general features of brittle materials, including the fracturing process, AE events and stress-strain curves. Furthermore, the local stress disturbance is analyzed and the crack initiation stress threshold is identified based on the AE event history and the stress-strain curves. It is shown that stress concentration always appears in the undamaged elements near the boundary of damaged sites. The peak stress and crack initiation stress are both heterogeneity dependent, i.e., a linear relation exists between the two stress thresholds and hcv. The range of hcv is suggested to be 0.12 to 0.21 for most rocks. The degree of stress concentration is represented by a stress concentration factor and is also found to be dominated by heterogeneity.
Finally, a consistent tendency is found between the local stress difference and the AE event history. PMID:28772738
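The Gaussian heterogeneity assignment can be sketched as follows (a generic illustration with invented values, not the paper's FLAC3D implementation; hcv is taken here as the conventional coefficient of variation, standard deviation over mean): draw each element's parameters from a normal distribution with mean μ and standard deviation hcv·μ, truncating at zero to keep them physical.

```python
import numpy as np

rng = np.random.default_rng(5)

def heterogeneous_field(mu, hcv, n_elements):
    """Element-wise parameter values ~ N(mu, (hcv*mu)^2), truncated > 0."""
    vals = rng.normal(mu, hcv * mu, size=n_elements)
    return np.clip(vals, 1e-6 * mu, None)   # avoid non-physical values

n = 100_000                                  # number of mesh elements
cohesion = heterogeneous_field(mu=30.0, hcv=0.15, n_elements=n)   # MPa
modulus = heterogeneous_field(mu=50.0, hcv=0.15, n_elements=n)    # GPa

# Realized heterogeneity degree of the sampled cohesion field.
hcv_realized = cohesion.std() / cohesion.mean()
```

For moderate hcv (such as the 0.12 to 0.21 range the paper suggests for most rocks) the truncation is negligible, so the realized coefficient of variation matches the prescribed one closely.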
NASA Technical Reports Server (NTRS)
Boorstyn, R. R.
1973-01-01
Research is reported dealing with problems of digital data transmission and computer communications networks. The results of four individual studies are presented which include: (1) signal processing with finite state machines, (2) signal parameter estimation from discrete-time observations, (3) digital filtering for radar signal processing applications, and (4) multiple server queues where all servers are not identical.
40 CFR 63.645 - Test methods and procedures for miscellaneous process vents.
Code of Federal Regulations, 2012 CFR
2012-07-01
... analysis based on accepted chemical engineering principles, measurable process parameters, or physical or... minute, at a temperature of 20 °C. (g) Engineering assessment may be used to determine the TOC emission...) Engineering assessment includes, but is not limited to, the following: (i) Previous test results provided the...
40 CFR 63.645 - Test methods and procedures for miscellaneous process vents.
Code of Federal Regulations, 2013 CFR
2013-07-01
... analysis based on accepted chemical engineering principles, measurable process parameters, or physical or... minute, at a temperature of 20 °C. (g) Engineering assessment may be used to determine the TOC emission...) Engineering assessment includes, but is not limited to, the following: (i) Previous test results provided the...
40 CFR 63.645 - Test methods and procedures for miscellaneous process vents.
Code of Federal Regulations, 2011 CFR
2011-07-01
... analysis based on accepted chemical engineering principles, measurable process parameters, or physical or... minute, at a temperature of 20 °C. (g) Engineering assessment may be used to determine the TOC emission...) Engineering assessment includes, but is not limited to, the following: (i) Previous test results provided the...
A Process Dynamics and Control Experiment for the Undergraduate Laboratory
ERIC Educational Resources Information Center
Spencer, Jordan L.
2009-01-01
This paper describes a process control experiment. The apparatus includes a three-vessel glass flow system with a variable flow configuration, means for feeding dye solution controlled by a stepper-motor driven valve, and a flow spectrophotometer. Students use impulse response data and nonlinear regression to estimate three parameters of a model…
A Multinomial Model of Event-Based Prospective Memory
ERIC Educational Resources Information Center
Smith, Rebekah E.; Bayen, Ute J.
2004-01-01
Prospective memory is remembering to perform an action in the future. The authors introduce the 1st formal model of event-based prospective memory, namely, a multinomial model that includes 2 separate parameters related to prospective memory processes. The 1st measures preparatory attentional processes, and the 2nd measures retrospective memory…
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
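The calibration-acceleration idea can be sketched with a toy stand-in for the simulator: a combination "matches" when it falls in a small region of a 2-D grid, and a simple distance-based surrogate (standing in for the paper's neural network) decides which combinations are worth running. The match region, grid size, and batch settings are all invented for illustration.

```python
def simulate(theta):
    # Stand-in for one expensive simulation run: a parameter combination
    # "matches" when it falls in a small region of parameter space
    # (purely illustrative; the real model compares simulated and
    # observed cancer incidence and mortality).
    return abs(theta[0] - 0.4) < 0.05 and abs(theta[1] - 0.6) < 0.05

def active_calibration(candidates, batch=10, rounds=12):
    # Seed the learner by evaluating a coarse sub-grid.
    evaluated = {t: simulate(t) for t in candidates
                 if round(t[0] * 50) % 10 == 0 and round(t[1] * 50) % 10 == 0}
    for _ in range(rounds):
        good = [t for t, ok in evaluated.items() if ok]
        pool = [t for t in candidates if t not in evaluated]
        if not pool or not good:
            break
        # Surrogate model: rank unevaluated combinations by distance to
        # any accepted one, and only simulate the most promising batch.
        pool.sort(key=lambda t: min((t[0] - g[0]) ** 2 + (t[1] - g[1]) ** 2
                                    for g in good))
        for theta in pool[:batch]:
            evaluated[theta] = simulate(theta)
    return [t for t, ok in evaluated.items() if ok], len(evaluated)

grid = [(i / 50, j / 50) for i in range(51) for j in range(51)]
found, n_runs = active_calibration(grid)
# n_runs stays far below the 2601 combinations in the full grid.
```

The same pattern (evaluate a few, train a surrogate, evaluate only the highest-ranked candidates, repeat) is what lets the paper recover all 69 accepted combinations with 5620 of 378 000 runs.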
Using Active Learning for Speeding up Calibration in Simulation Models
Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2015-01-01
Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190
Wolf Creek Research Basin Cold REgion Process Studies - 1992-2003
NASA Astrophysics Data System (ADS)
Janowicz, R.; Hedstrom, N.; Pomeroy, J.; Granger, R.; Carey, S.
2004-12-01
The development of hydrological models in northern regions is complicated by cold region processes. Sparse vegetation influences snowpack accumulation, redistribution and melt; frozen ground affects infiltration and runoff; and cold soils in summer affect evapotranspiration rates. Situated in the upper Yukon River watershed, the 195 km2 Wolf Creek Research Basin was instrumented in 1992 to calibrate hydrologic flow models, and has since evolved into a comprehensive study of cold region processes and linkages, contributing significantly to hydrological and climate change modelling. Studies include those of precipitation distribution, snowpack accumulation and redistribution, energy balance, snowmelt infiltration, and water balance. Studies of the spatial variability of hydrometeorological data demonstrate the importance of physical parameters in their distribution and control of runoff processes. Many studies have also identified the complex interaction of several physical parameters, including topography, vegetation and frozen ground (seasonal or permafrost), as important. They also show that there is a fundamental, underlying spatial structure to the watershed that must be adequately represented in parameterization schemes for scaling and watershed modelling. The specific results of numerous studies are presented.
Liu, Jianguo; Yang, Bo; Chen, Changzhen
2013-02-01
The optimization of operating parameters for the isolation of peroxidase from horseradish (Armoracia rusticana) roots with ultrafiltration (UF) technology was systemically studied. The effects of UF operating conditions on the transmission of proteins were quantified using the parameter scanning UF. These conditions included solution pH, ionic strength, stirring speed and permeate flux. Under optimized conditions, the purity of horseradish peroxidase (HRP) obtained was greater than 84 % after a two-stage UF process and the recovery of HRP from the feedstock was close to 90 %. The resulting peroxidase product was then analysed by isoelectric focusing, SDS-PAGE and circular dichroism, to confirm its isoelectric point, molecular weight and molecular secondary structure. The effects of calcium ion on HRP specific activities were also experimentally determined.
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static-system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
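As a toy instance of the equation-error method listed above, the following sketch estimates the single parameter of a scalar linear system by linear least squares. The system, input, noise level, and sample count are invented for illustration.

```python
import random

# Scalar linear system x[k+1] = a*x[k] + u[k] + w[k]; the value of a,
# the input signal, and the noise level are invented for this example.
random.seed(0)
a_true = 0.8
u = [random.uniform(-1.0, 1.0) for _ in range(200)]
x = [1.0]
for k in range(200):
    x.append(a_true * x[k] + u[k] + random.gauss(0.0, 0.05))

# Equation-error (least-squares) estimate of a: minimise
# sum_k (x[k+1] - a*x[k] - u[k])^2, which has the closed form below.
num = sum(x[k] * (x[k + 1] - u[k]) for k in range(200))
den = sum(x[k] ** 2 for k in range(200))
a_hat = num / den
```

Output-error and filter-error methods differ in where the noise enters the model (measurement vs. process), which is why they generally require iterative optimization rather than a closed form like this one.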
Modelling of the combustion velocity in UIT-85 on sustainable alternative gas fuel
NASA Astrophysics Data System (ADS)
Smolenskaya, N. M.; Korneev, N. V.
2017-05-01
The flame propagation velocity is one of the determining parameters characterizing the intensity of the combustion process in the cylinder of a spark-ignition engine. Tightening requirements on the toxicity and efficiency of the ICE are driving a gradual transition to sustainable alternative fuels, which include mixtures of natural gas with hydrogen. Currently, the conditions and regularities of combustion of this fuel are being studied in many countries to improve the efficiency of its application. This work is therefore devoted to modelling the average propagation velocity of the flame front of natural gas blended with up to 15% hydrogen by weight of fuel, and to determining whether the heat release characteristics can be assessed from the average flame front propagation velocities in the primary and secondary phases of combustion. Experimental studies conducted on the single-cylinder universal installation UIT-85 showed a relationship between the heat release characteristics and the parameters of flame front propagation. Based on an analysis of the experimental data, empirical dependences were obtained for determining the average flame front propagation velocities in the first and main phases of combustion, taking into account changes in various operating parameters of a spark-ignition engine. The results allow the heat release characteristics to be determined and the effect of hydrogen addition on the natural gas combustion process to be assessed, which is needed to identify ways of improving combustion efficiency, including under changed throttling parameters.
The effects of DRIE operational parameters on vertically aligned micropillar arrays
NASA Astrophysics Data System (ADS)
Miller, Kane; Li, Mingxiao; Walsh, Kevin M.; Fu, Xiao-An
2013-03-01
Vertically aligned silicon micropillar arrays have been created by deep reactive ion etching (DRIE) and used for a number of microfabricated devices including microfluidic devices, micropreconcentrators and photovoltaic cells. This paper delineates an experimental design performed on the Bosch process of DRIE of micropillar arrays. The arrays are fabricated by direct-write optical lithography without a photomask, and the effects of DRIE process parameters, including etch cycle time, passivation cycle time, platen power and coil power, on profile angle, scallop depth and scallop peak-to-peak distance are studied by statistical design of experiments. Scanning electron microscope images are used to measure the resultant profile angles and to characterize the scalloping effect on the pillar sidewalls. The experimental results indicate the effects of the determining factors (etch cycle time, passivation cycle time and platen power) on the micropillar profile angles and scallop depths. An optimized DRIE recipe for creating nearly 90° profiles with smooth (invisible-scalloping) sidewalls has been obtained as a result of the statistical design of experiments.
Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data
NASA Technical Reports Server (NTRS)
1981-01-01
Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model is described, as well as the general iterative nonlinear least squares procedure used to obtain estimates of the parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) attitude, or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
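The iterative nonlinear least squares idea can be illustrated with a damped Gauss-Newton fit of a much-reduced waveform model (an error-function leading edge only; the real SEASAT model carries additional terms such as skewness and attitude). The parameter values, sample spacing, and starting guess are invented for the sketch.

```python
import math

def waveform(t, A, t0, sigma):
    # Reduced mean-return model: an error-function leading edge scaled
    # by amplitude A, centred at track point t0 with rise scale sigma.
    return 0.5 * A * (1.0 + math.erf((t - t0) / (math.sqrt(2) * sigma)))

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def gauss_newton(ts, ys, p, steps=100, h=1e-6, damp=0.7):
    """Damped Gauss-Newton iteration with a finite-difference Jacobian."""
    for _ in range(steps):
        r = [y - waveform(t, *p) for t, y in zip(ts, ys)]
        J = []
        for t in ts:
            row = []
            for i in range(3):
                q = list(p)
                q[i] += h
                row.append((waveform(t, *q) - waveform(t, *p)) / h)
            J.append(row)
        # Normal equations (J^T J) dp = J^T r give the update step dp.
        JTJ = [[sum(J[k][i] * J[k][j] for k in range(len(ts)))
                for j in range(3)] for i in range(3)]
        JTr = [sum(J[k][i] * r[k] for k in range(len(ts))) for i in range(3)]
        dp = solve3(JTJ, JTr)
        p = [pi + damp * di for pi, di in zip(p, dp)]
    return p

# Noiseless synthetic samples with "true" parameters A=100, t0=30, sigma=4
ts = list(range(60))
ys = [waveform(t, 100.0, 30.0, 4.0) for t in ts]
p_hat = gauss_newton(ts, ys, [80.0, 25.0, 6.0])
```

In practice the fit is run on averaged waveforms with measurement noise, and the Jacobian can be formed analytically rather than by finite differences.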
Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc
2014-09-15
Semi-volatile organic compounds (SVOCs) are subject to long-range atmospheric transport because of successive transport-deposition-reemission processes. Several experimental data sets available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentrations of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed an EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions were less sensitive to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared with other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
Adelson, Stewart L
2011-10-01
The American Academy of Child and Adolescent Psychiatry (AACAP) is preparing a publication, Practice Parameter on Gay, Lesbian or Bisexual Sexual Orientation, Gender-Nonconformity, and Gender Discordance in Children and Adolescents. This article discusses the development of the part of the parameter related to gender nonconformity and gender discordance and describes the practice parameter preparation process, rationale, key scientific evidence, and methodology. Also discussed are terminology considerations, related clinical issues and practice skills, and overall organization of information including influences on gender development, gender role behavior, gender nonconformity and gender discordance, and their relationship to the development of sexual orientation.
LAGEOS geodetic analysis-SL7.1
NASA Technical Reports Server (NTRS)
Smith, D. E.; Kolenkiewicz, R.; Dunn, P. J.; Klosko, S. M.; Robbins, J. W.; Torrence, M. H.; Williamson, R. G.; Pavlis, E. C.; Douglas, N. B.; Fricke, S. K.
1991-01-01
Laser ranging measurements to the LAGEOS satellite from 1976 through 1989 are related via geodetic and orbital theories to a variety of geodetic and geodynamic parameters. The SL7.1 analyses of this data set are explained, including the estimation process for geodetic parameters such as Earth's gravitational constant (GM), parameters describing the Earth's elasticity properties (Love numbers), and temporally varying geodetic parameters such as Earth's orientation (polar motion and Delta UT1) and horizontal tectonic motions of tracking sites. Descriptions of the reference systems, tectonic models, and adopted geodetic constants are provided; these are the framework within which the SL7.1 solution takes place. Estimates of temporal variations in non-conservative force parameters are included in these SL7.1 analyses, as well as parameters describing the orbital states at monthly epochs. This information is useful in further refining models used to describe close-Earth satellite behavior. Estimates of intersite motions and individual tracking site motions computed through the network adjustment scheme are given. Tabulations of tracking site eccentricities, data summaries, estimated monthly orbital and force model parameters, polar motion, Earth rotation, and tracking station coordinate results are also provided.
NASA Technical Reports Server (NTRS)
Hand, David W.; Crittenden, John C.; Ali, Anisa N.; Bulloch, John L.; Hokanson, David R.; Parrem, David L.
1996-01-01
This thesis includes the development and verification of an adsorption model for analysis and optimization of the adsorption processes within the International Space Station multifiltration beds. The fixed bed adsorption model includes multicomponent equilibrium and both external and intraparticle mass transfer resistances. Single solute isotherm parameters were used in the multicomponent equilibrium description to predict the competitive adsorption interactions occurring during the adsorption process. The multicomponent equilibrium description used the Fictive Component Analysis to describe adsorption in unknown background matrices. Multicomponent isotherms were used to validate the multicomponent equilibrium description. Column studies were used to develop and validate external and intraparticle mass transfer parameter correlations for compounds of interest. The fixed bed model was verified using a shower and handwash ersatz water which served as a surrogate to the actual shower and handwash wastewater.
Discrete element weld model, phase 2
NASA Technical Reports Server (NTRS)
Prakash, C.; Samonds, M.; Singhal, A. K.
1987-01-01
A numerical method was developed for analyzing the tungsten inert gas (TIG) welding process. The phenomena being modeled include melting under the arc and the flow in the melt under the action of buoyancy, surface tension, and electromagnetic forces. The latter entails the calculation of the electric potential and the computation of electric current and magnetic field therefrom. Melting may occur at a single temperature or over a temperature range, and the electrical and thermal conductivities can be a function of temperature. Results of sample calculations are presented and discussed at length. A major research contribution has been the development of numerical methodology for the calculation of phase change problems in a fixed grid framework. The model has been implemented on CHAM's general purpose computer code PHOENICS. The inputs to the computer model include: geometric parameters, material properties, and weld process parameters.
Identification and stochastic control of helicopter dynamic modes
NASA Technical Reports Server (NTRS)
Molusis, J. A.; Bar-Shalom, Y.
1983-01-01
A general treatment of parameter identification and stochastic control for use on helicopter dynamic systems is presented. Rotor dynamic models, including specific applications to rotor blade flapping and the helicopter ground resonance problem, are emphasized. Dynamic systems governed by periodic coefficients as well as constant coefficient models are addressed. The dynamic systems are modeled by linear state variable equations which are used in the identification and stochastic control formulation. The pure identification problem as well as the stochastic control problem, which includes combined identification and control for dynamic systems, is addressed. The stochastic control problem includes the effect of parameter uncertainty on the solution and the concept of learning and how this is affected by the control's dual effect. The identification formulation requires algorithms suitable for on-line use, and thus recursive identification algorithms are considered. The applications presented use the recursive extended Kalman filter for parameter identification, which has excellent convergence for systems without process noise.
Multisite EPR oximetry from multiple quadrature harmonics.
Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C
2012-01-01
Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.
Efficient packet forwarding using cyber-security aware policies
Ros-Giralt, Jordi
2017-04-04
For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.
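The load-balancing feedback loop described in the abstract can be sketched as follows. The class name, thresholds, and multiplicative adjustment rule are all illustrative assumptions of this sketch, not the patented mechanism.

```python
class Forwarder:
    """Toy sketch of load-aware packet direction (illustrative only)."""

    def __init__(self, load_param=0.5):
        self.load_param = load_param  # fraction of traffic sent to the processor

    def direct(self, packet_hash):
        # Selective direction: process locally, forward to another node,
        # or drop, based on where the flow hashes relative to load_param.
        x = (packet_hash % 1000) / 1000.0
        if x < self.load_param:
            return "process"
        elif x < 0.95:
            return "forward"
        return "drop"

    def feedback(self, processor_utilization):
        # Adjust the loading parameter from processor feedback:
        # back off when the processor is busy, ramp up when it is idle.
        if processor_utilization > 0.8:
            self.load_param = max(0.05, self.load_param * 0.9)
        elif processor_utilization < 0.5:
            self.load_param = min(1.0, self.load_param * 1.1)

fwd = Forwarder(0.5)
decision = fwd.direct(hash("flow-42"))
fwd.feedback(0.9)  # busy processor -> load_param shrinks
```

A multiplicative-decrease rule like this is a common pattern for feedback-driven load shedding; the patent additionally describes lock-free per-flow state, which is omitted here.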
Efficient packet forwarding using cyber-security aware policies
Ros-Giralt, Jordi
2017-10-25
For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor.
Thermodynamic and economic analysis of heat pumps for energy recovery in industrial processes
NASA Astrophysics Data System (ADS)
Urdaneta-B, A. H.; Schmidt, P. S.
1980-09-01
A computer code has been developed for analyzing the thermodynamic performance, cost and economic return for heat pump applications in industrial heat recovery. Starting with basic defining characteristics of the waste heat stream and the desired heat sink, the algorithm first evaluates the potential for conventional heat recovery with heat exchangers, and if applicable, sizes the exchanger. A heat pump system is then designed to process the residual heating and cooling requirements of the streams. In configuring the heat pump, the program searches a number of parameters, including condenser temperature, evaporator temperature, and condenser and evaporator approaches. All system components are sized for each set of parameters, and economic return is estimated and compared with system economics for conventional processing of the heated and cooled streams (i.e., with process heaters and coolers). Two case studies are evaluated, one in a food processing application and the other in an oil refinery unit.
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
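The "wildly different values from different initialisations" symptom can be demonstrated on a deliberately non-identifiable toy model, y = a·b·x, where only the product a·b is determined by the data. The model, data, and optimiser below are invented for illustration.

```python
import random

# Toy non-identifiable model: y = a * b * x, so only the product a*b is
# constrained by the data (names and numbers are illustrative).
def loss(theta, data):
    a, b = theta
    return sum((y - a * b * x) ** 2 for x, y in data)

def local_search(seed, data, iters=5000):
    # Simple stochastic hill climb, standing in for any optimiser.
    rng = random.Random(seed)
    theta = [rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0)]
    best = loss(theta, data)
    for _ in range(iters):
        cand = [max(0.01, t + rng.gauss(0.0, 0.05)) for t in theta]
        c = loss(cand, data)
        if c < best:
            theta, best = cand, c
    return theta

data = [(x, 2.0 * x) for x in range(1, 11)]        # true product a*b = 2
fits = [local_search(s, data) for s in range(5)]
products = [a * b for a, b in fits]
# Different initialisations return different (a, b) pairs, yet every
# fit agrees on the identifiable combination a*b.
```

Checking whether repeated fits agree only on certain parameter combinations, as here, is one of the simplest practical identifiability diagnostics.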
NASA Astrophysics Data System (ADS)
Nasr, M.; Anwar, S.; El-Tamimi, A.; Pervaiz, S.
2018-04-01
Titanium and its alloys, e.g. Ti6Al4V, have widespread applications in the aerospace, automotive and medical industries. At the same time, titanium and its alloys are regarded as difficult-to-machine materials due to their high strength and low thermal conductivity. Significant effort has been expended to improve the accuracy of machining processes for Ti6Al4V. The current study presents the use of the rotary ultrasonic drilling (RUD) process for machining high-quality holes in Ti6Al4V. The study examines the effects of the main RUD input parameters, including spindle speed, ultrasonic power, feed rate and tool diameter, on key output responses related to hole accuracy, namely cylindricity and overcut errors. Analysis of variance (ANOVA) was employed to study the influence of the input parameters on cylindricity and overcut error. Regression models were then developed to find the optimal set of input parameters that minimizes the cylindricity and overcut errors.
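A first screening step in this kind of designed experiment is computing factor main effects. The sketch below does this for a hypothetical 2² factorial on two of the study's factors; the response values are invented, not measured data from the paper.

```python
# A 2^2 factorial sketch (hypothetical data): estimate the main effects
# of spindle speed and feed rate on cylindricity error, as a DOE/ANOVA
# screening step would.  Levels are coded -1 (low) and +1 (high).
runs = {
    # (speed level, feed level): cylindricity error in um (invented)
    (-1, -1): 12.1, (+1, -1): 9.8, (-1, +1): 14.6, (+1, +1): 12.2,
}

# Main effect = (mean response at high level) - (mean response at low level)
speed_effect = (runs[(1, -1)] + runs[(1, 1)] - runs[(-1, -1)] - runs[(-1, 1)]) / 2
feed_effect = (runs[(-1, 1)] + runs[(1, 1)] - runs[(-1, -1)] - runs[(1, -1)]) / 2
```

With these invented numbers, raising spindle speed reduces the error (negative effect) while raising feed rate increases it; ANOVA would then test whether such effects are statistically significant before regression models are fitted.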
NASA Astrophysics Data System (ADS)
Levesque, M.
Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital-parameter database entries are up to date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and actual positions, and update the objects' orbital parameters. The actual satellite positions are provided by detecting satellite streaks in astronomical images acquired specifically for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology comprises several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false-alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and to determine the sensitivity limits within which the algorithm can detect objects without false alarms, which is essential to avoid corrupting the orbital-parameter database.
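The pipeline's steps can be sketched on a tiny synthetic frame. This sketch assumes a horizontal streak and a single matched-filter orientation; the paper's iterative matched filter searches over streak angles, and all image values here are invented.

```python
import statistics

# Tiny synthetic frame: uniform sky background, one bright "star"
# (a single pixel) and a faint horizontal streak across row 6.
W = H = 16
img = [[100.0] * W for _ in range(H)]
img[3][4] += 500.0                 # star: point-like and bright
for x in range(W):
    img[6][x] += 3.0               # streak: faint but extended

# 1) Background estimation and removal (global median).
bg = statistics.median(v for row in img for v in row)
res = [[v - bg for v in row] for row in img]

# 2) Star removal: clip isolated pixels far above the residual level.
for y in range(H):
    for x in range(W):
        if res[y][x] > 50.0:
            res[y][x] = 0.0

# 3) Matched filter for a horizontal streak: average along each row,
#    which boosts an extended faint signal that per-pixel thresholding
#    would miss.
row_score = [sum(row) / W for row in res]
streak_row = max(range(H), key=lambda y: row_score[y])
```

The streak's per-pixel excess (3.0) is far below the star's, yet averaging along its length makes it the strongest row response, which is exactly why matched filtering recovers very faint objects.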
Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Son, Woo-Sung
2015-01-01
This study aimed to examine the correlation between skeletal maturation status and parameters from the odontoid process/body of the second cervical vertebra and the bodies of the third and fourth cervical vertebrae, and to build multiple regression models for estimating skeletal maturation status in Korean girls. Hand-wrist radiographs and cone beam computed tomography (CBCT) images were obtained from 74 Korean girls (6-18 years of age). CBCT-generated cervical vertebral maturation (CVM) was used to demarcate the odontoid process and the body of the second cervical vertebra, based on the dentocentral synchondrosis. Correlation coefficient analysis and multiple linear regression analysis were used for each parameter of the cervical vertebrae (P < 0.05). Forty-seven of 64 parameters from CBCT-generated CVM (independent variables) exhibited statistically significant correlations (P < 0.05). The multiple regression model with the greatest R² had six parameters (PH2/W2, UW2/W2, (OH+AH2)/LW2, UW3/LW3, D3, and H4/W4) as independent variables, each with a variance inflation factor (VIF) of <2. CBCT-generated CVM was able to include parameters from the second cervical vertebral body and odontoid process in the multiple regression models. This suggests that quantitative analysis might be used to estimate skeletal maturation status.
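The VIF < 2 criterion used above can be illustrated for a pair of candidate predictors, where VIF = 1/(1 - R²) with R² taken from regressing one predictor on the other. The two ratio series below are invented example values, not measurements from the study.

```python
# Variance inflation factor for a pair of candidate predictors:
# VIF = 1 / (1 - R^2), where R^2 comes from regressing one predictor on
# the other.  Models were kept when every parameter had VIF < 2.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def vif_two(xs, ys):
    return 1.0 / (1.0 - pearson_r(xs, ys) ** 2)

# Invented example values for two vertebral-ratio predictors:
p1 = [0.61, 0.64, 0.70, 0.72, 0.75, 0.80]
p2 = [1.10, 1.30, 1.05, 1.45, 1.20, 1.50]
v = vif_two(p1, p2)   # modest correlation -> VIF comfortably below 2
```

With more than two predictors, each VIF is computed from the R² of a multiple regression of that predictor on all the others, but the 1/(1 - R²) form is the same.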
Post-processing of seismic parameter data based on valid seismic event determination
McEvilly, Thomas V.
1985-01-01
An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys, allowing flexibility in experimental procedures with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation; rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
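The per-channel detection plus central validity check can be sketched with a short-term/long-term average (STA/LTA) trigger, a standard choice for microearthquake networks, followed by a multi-channel coincidence test. The trigger type, thresholds, and sample data are assumptions of this sketch; the patent's exact detector is not specified in the abstract.

```python
# Per-channel detector: STA/LTA trigger (illustrative choice).
def sta_lta_trigger(samples, nsta=5, nlta=50, threshold=4.0):
    triggers = []
    for i in range(nlta, len(samples)):
        sta = sum(abs(s) for s in samples[i - nsta:i]) / nsta
        lta = sum(abs(s) for s in samples[i - nlta:i]) / nlta
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Central validity check: accept an event only when enough channels
# trigger within a short coincidence window (preset criteria).
def valid_event(channel_triggers, min_channels=3, window=10):
    firsts = sorted(t[0] for t in channel_triggers if t)
    for f0 in firsts:
        if sum(1 for f in firsts if abs(f - f0) <= window) >= min_channels:
            return True
    return False

# Synthetic network: three channels see a burst at sample 60, one does not.
quiet = [0.1, -0.1] * 40
burst = quiet[:60] + [5.0] * 10 + quiet[70:]
channels = [burst, burst, burst, quiet]
trigs = [sta_lta_trigger(c) for c in channels]
event_ok = valid_event(trigs)
```

Requiring coincidence across channels is what lets the central microprocessor reject single-channel glitches before passing parameters on to the analysis computer.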
Sun, Li-Qiong; Wang, Shu-Yao; Li, Yan-Jing; Wang, Yong-Xiang; Wang, Zhen-Zhong; Huang, Wen-Zhe; Wang, Yue-Sheng; Bi, Yu-An; Ding, Gang; Xiao, Wei
2016-01-01
The present study was designed to determine the relationships between the performance of ethanol precipitation and seven process parameters in the ethanol precipitation process of Re Du Ning Injections: concentrate density, concentrate temperature, ethanol content, flow rate and stir rate during the addition of ethanol, precipitation time, and precipitation temperature. Under experimental and simulated production conditions, a series of precipitated resultants were prepared by changing these variables one by one and then examined by HPLC fingerprint analyses. Different from the traditional evaluation model based on a single constituent or a few constituents, the fingerprint data of every parameter fluctuation test were processed with Principal Component Analysis (PCA) to comprehensively assess the performance of ethanol precipitation. Our results showed that concentrate density, ethanol content, and precipitation time were the most important parameters influencing the recovery of active compounds in the precipitation resultants. The present study provides a reference for pharmaceutical scientists engaged in research on pharmaceutical process optimization and should help pharmaceutical enterprises adopt a scientific, reasonable, and cost-effective approach to ensure the batch-to-batch quality consistency of the final products. Copyright © 2016 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.
A novel approach for calculating shelf life of minimally processed vegetables.
Corbo, Maria Rosaria; Del Nobile, Matteo Alessandro; Sinigaglia, Milena
2006-01-15
Shelf life of minimally processed vegetables is often calculated by using the kinetic parameters of the Gompertz equation as modified by Zwietering et al. [Zwietering, M.H., Jongenburger, F.M., Roumbouts, M., van't Riet, K., 1990. Modelling of the bacterial growth curve. Applied and Environmental Microbiology 56, 1875-1881.], taking 5×10⁷ CFU/g as the maximum acceptable contamination value consistent with acceptable quality of these products. As this method does not allow estimation of the standard errors of the shelf life, in this paper the modified Gompertz equation was re-parameterized to directly include the shelf life as a fitting parameter among the Gompertz parameters. With the shelf life as a fitting parameter, it is possible to determine its confidence interval by fitting the proposed equation to the experimental data. The goodness-of-fit of this new equation was tested by using mesophilic bacteria cell loads from different minimally processed vegetables (packaged fresh-cut lettuce, fennel and shredded carrots) that differed in some process operations or in package atmosphere. The new equation was able to describe the data well and to estimate the shelf life. The results obtained emphasize the importance of using the standard errors of the shelf life value to show significant differences among the samples.
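The re-parameterization idea can be sketched as follows. In the Zwietering-modified Gompertz model, the lag time λ can be expressed algebraically through the shelf life SL, the time at which the log increase reaches the acceptability limit; substituting λ out makes SL a direct fitting parameter whose standard error falls out of the covariance matrix. The substitution below is a plausible reconstruction, not the authors' published equation, and all data (including the initial load N0) are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

E = np.e

def gompertz(t, A, mu, lam):
    """Zwietering-modified Gompertz; y = log10(N/N0)."""
    return A * np.exp(-np.exp(mu * E / A * (lam - t) + 1.0))

def gompertz_sl(y_sl):
    """Re-parameterized model with shelf life SL as a fitting parameter.

    SL is the time at which y reaches the acceptability limit y_sl;
    the lag time lam is expressed through SL and substituted out:
        lam = SL + (A/(mu*e)) * (ln(-ln(y_sl/A)) - 1)
    """
    def model(t, A, mu, SL):
        lam = SL + (A / (mu * E)) * (np.log(-np.log(y_sl / A)) - 1.0)
        return gompertz(t, A, mu, lam)
    return model

# Synthetic cell-load data; limit 5e7 CFU/g with an assumed N0 of 1e4 CFU/g
y_sl = np.log10(5e7) - np.log10(1e4)              # ~3.70 log cycles
rng = np.random.default_rng(1)
t = np.linspace(0, 15, 40)                        # days
y = gompertz(t, 5.0, 1.2, 2.0) + rng.normal(0, 0.05, t.size)

popt, pcov = curve_fit(gompertz_sl(y_sl), t, y, p0=[4.5, 1.0, 5.0],
                       bounds=([3.8, 0.1, 0.1], [10.0, 5.0, 20.0]))
A, mu, SL = popt
print(f"shelf life = {SL:.2f} +/- {np.sqrt(pcov[2, 2]):.2f} days")
```

The standard error reported for SL is exactly what the fixed-threshold method cannot provide, which is the point of the re-parameterization.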
NASA Astrophysics Data System (ADS)
Zhuang, Jyun-Rong; Lee, Yee-Ting; Hsieh, Wen-Hsin; Yang, An-Shik
2018-07-01
Selective laser melting (SLM) shows positive prospects as an additive manufacturing (AM) technique for the fabrication of 3D parts with complicated structures. A transient thermal model was developed by the finite element method (FEM) to simulate the thermal behavior, predicting the time evolution of the temperature field and the melt pool dimensions of Ti6Al4V powder during SLM. The FEM predictions were then compared with published experimental measurements and calculation results for model validation. This study applied a design of experiments (DOE) scheme together with the response surface method (RSM) to conduct a regression analysis based on four processing parameters (namely, the laser power, scanning speed, preheating temperature, and hatch spacing) for predicting the dimensions of the melt pool in SLM. The preliminary RSM results were used to quantify the effects of those parameters on the melt pool size. A process window was further constructed via two criteria on the width and depth of the molten pool to screen out impractical combinations of the four parameters and delimit their practical ranges. The FEM simulations confirmed the good accuracy of the critical RSM models in predicting melt pool dimensions for three typical SLM working scenarios.
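A second-order response surface of the kind used here is an ordinary least-squares fit of a quadratic polynomial in the coded factors. The sketch below fits such a surface for two of the four factors on synthetic data (the coefficients and the "melt-pool width" response are made up for illustration, not taken from the study).

```python
import numpy as np

def quad_design(x1, x2):
    # Second-order RSM model terms: 1, x1, x2, x1^2, x2^2, x1*x2
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

rng = np.random.default_rng(0)
# Coded factor levels in [-1, 1] (e.g., laser power and scan speed)
x1 = rng.uniform(-1, 1, 60)
x2 = rng.uniform(-1, 1, 60)
true_beta = np.array([120.0, 25.0, -18.0, 6.0, 4.0, -9.0])  # illustrative, microns
y = quad_design(x1, x2) @ true_beta + rng.normal(0, 1.0, 60)

# Least-squares fit of the response surface
beta, *_ = np.linalg.lstsq(quad_design(x1, x2), y, rcond=None)
print(np.round(beta, 1))
```

Once fitted, the surface can be evaluated over a grid of factor levels and thresholded by the width/depth criteria to trace out a process window.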
Localised anodic oxidation of aluminium material using a continuous electrolyte jet
NASA Astrophysics Data System (ADS)
Kuhn, D.; Martin, A.; Eckart, C.; Sieber, M.; Morgenstern, R.; Hackert-Oschätzchen, M.; Lampke, T.; Schubert, A.
2017-03-01
Anodic oxidation of aluminium and its alloys is often used as protection against wear and corrosion. For this purpose, anodic oxidation of aluminium is applied to produce functional oxide layers. The structure and properties of the oxide layers can be influenced by various factors, including the properties of the substrate material (such as alloying elements and heat treatment) and process parameters (such as operating temperature, electrical parameters, or the type of electrolyte used). In order to avoid damage to the workpiece surface caused by covering materials in masking applications, to minimize the use of resources, and to modify the surface in a targeted manner, the anodic oxidation has to be localised to partial areas. Within this study, an alternative that requires no masking of the substrate is investigated for generating locally limited anodic oxidation by using a continuous electrolyte jet. For this, aluminium alloy EN AW 7075 is machined by applying a continuous electrolyte jet of oxalic acid. Experiments were carried out by varying process parameters such as voltage and processing time. The resulting oxide spots on the aluminium surface were investigated by optical microscopy, SEM, and EDX line scanning. Furthermore, the dependencies of the oxide layer properties on the process parameters are shown.
Lean energy analysis of CNC lathe
NASA Astrophysics Data System (ADS)
Liana, N. A.; Amsyar, N.; Hilmy, I.; Yusof, MD
2018-01-01
The industrial sector in Malaysia is one of the main sectors with a high share of energy demand compared to other sectors; this may lead to future power shortages and increase companies' production costs. Suitable initiatives should be implemented by the industrial sector to address these issues, such as improving the machining system. In the past, the majority of industrial energy analyses focused on lighting, HVAC, and office usage; the future trend is to include the manufacturing process itself in the energy analysis. A study on lean energy analysis of a machining process is presented. Improving the energy efficiency of a lathe machine by tuning the cutting parameters of the turning process is discussed. The energy consumption of a lathe machine was analyzed in order to identify the effect of the cutting parameters on energy consumption. It was found that the parameter combination of the third run (spindle speed: 1065 rpm, depth of cut: 1.5 mm, feed rate: 0.3 mm/rev) was the most preferable for the turning process, as it consumed the least energy.
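The link between cutting parameters and energy use can be made concrete with the standard turning relations: cutting speed v_c = πdn, material removal rate MRR = v_c·f·a_p, and cutting energy = specific cutting energy × volume removed. The sketch below uses the run-3 parameters quoted above; the workpiece diameter, cut length, and specific cutting energy are assumed values for illustration only.

```python
import math

# Run-3 turning parameters from the study; diameter, length, and specific
# cutting energy are assumed for illustration.
n_rpm = 1065        # spindle speed, rev/min
f = 0.3             # feed, mm/rev
ap = 1.5            # depth of cut, mm
d = 50.0            # assumed workpiece diameter, mm
L = 100.0           # assumed length of cut, mm
u = 2.5             # assumed specific cutting energy for steel, J/mm^3

vc = math.pi * d * n_rpm / 1000.0       # cutting speed, m/min
mrr_cm3_min = vc * f * ap               # material removal rate, cm^3/min
mrr_mm3_s = mrr_cm3_min * 1000.0 / 60.0
power_W = u * mrr_mm3_s                 # net cutting power, W
t_s = L / (f * n_rpm) * 60.0            # machining time for one pass, s
energy_J = power_W * t_s                # cutting energy for the pass, J
print(f"{vc:.0f} m/min, {power_W:.0f} W, {energy_J / 1000:.1f} kJ")
```

Note the trade-off this exposes: raising feed or depth of cut increases power draw but shortens machining time, so total energy per part can drop, which is consistent with comparing runs rather than single parameters.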
NASA Technical Reports Server (NTRS)
Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.
2014-01-01
As a leader in space technology research and development, NASA is continuing the development of the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low-maturity technology research and development, where the Technology Readiness Level is less than TRL 6. NASA's Technology Roadmap consists of 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TA): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This research report continues the technology estimating efforts completed during 2013-2014 and addresses the refinement of the parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to the Cost Estimating Relationships (CERs) used in parametric cost estimating analysis. This research addresses the architecture for administration of the Technology Cost and Schedule Estimating tool, the parameters suggested for computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.
NASA Astrophysics Data System (ADS)
Rezaei Ashtiani, Hamid Reza; Zarandooz, Roozbeh
2015-09-01
A 2D axisymmetric electro-thermo-mechanical finite element (FE) model is developed to investigate the effect of current intensity, welding time, and electrode tip diameter on temperature distributions and nugget size in the resistance spot welding (RSW) process of Inconel 625 superalloy sheets, using the ABAQUS commercial software package. A coupled electro-thermal analysis and an uncoupled thermal-mechanical analysis are used for modeling the process. In order to improve the accuracy of the simulation, material properties, including physical, thermal, and mechanical properties, have been considered to be temperature dependent. The thickness and diameter of the computed weld nuggets are compared with experimental results, and good agreement is observed. Thus, the FE model developed in this paper suitably predicts the quality and shape of the weld nuggets and the temperature distributions as each process parameter is varied. Utilizing this FE model assists in adjusting RSW parameters, so that expensive experimental trials can be avoided. The results show that increasing welding time and current intensity leads to an increase in nugget size and electrode indentation, whereas increasing electrode tip diameter decreases nugget size and electrode indentation.
Cespi, Marco; Perinelli, Diego R; Casettari, Luca; Bonacucina, Giulia; Caporicci, Giuseppe; Rendina, Filippo; Palmieri, Giovanni F
2014-12-30
The use of process analytical technologies (PAT) to ensure final product quality is by now a well-established practice in the pharmaceutical industry. To date, most of the efforts in this field have focused on the development of analytical methods using spectroscopic techniques (i.e., NIR, Raman, etc.). This work evaluated the possibility of using parameters derived from the processing of in-line raw compaction data (the forces and displacements of the punches) as a PAT tool for controlling the tableting process. To reach this goal, two commercially available formulations were used, changing their quantitative composition and compressing them on a fully instrumented rotary pressing machine. The Heckel yield pressure and the compaction energies, together with tablet hardness and compaction pressure, were selected and evaluated as discriminating parameters in all the prepared formulations. The apparent yield pressure, as shown by the obtained results, has the sensitivity necessary to be effectively included in a PAT strategy to monitor the tableting process. Additional investigations were performed to understand the criticalities and mechanisms behind this parameter's performance and the associated implications. Specifically, it was discovered that the efficiency of the apparent yield pressure depends on the nominal drug content, the drug densification mechanism, and the error in pycnometric density. In this study, the potential of using parameters derived from the raw compaction data has been demonstrated to be an attractive alternative and complementary method to the well-established spectroscopic techniques for monitoring and controlling the tableting process. The compaction data monitoring method is also easy to set up and very cost-effective. Copyright © 2014 Elsevier B.V. All rights reserved.
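The Heckel yield pressure mentioned above comes from the Heckel equation, ln(1/(1−D)) = K·P + A, where D is the relative density at pressure P; the apparent yield pressure is Py = 1/K, taken from the slope of the linear region. The sketch below fits synthetic pressure/density pairs standing in for in-die press data (the values are illustrative, not the study's measurements).

```python
import numpy as np

# In-die Heckel analysis: ln(1/(1-D)) = K*P + A, with yield pressure Py = 1/K.
# Synthetic pressure/relative-density data in the linear Heckel region.
P = np.linspace(50, 250, 20)                  # compaction pressure, MPa
K_true, A_true = 0.010, 0.45                  # 1/MPa, intercept -> Py = 100 MPa
D = 1.0 - np.exp(-(K_true * P + A_true))      # relative density

# Linear fit of the Heckel plot; slope gives K, hence the yield pressure.
K, A = np.polyfit(P, np.log(1.0 / (1.0 - D)), 1)
Py = 1.0 / K
print(f"apparent yield pressure: {Py:.1f} MPa")
```

Tracking Py batch-to-batch from the punch force/displacement signals is the in-line monitoring idea: a drift in Py flags a change in the densification behaviour of the blend.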
Wopereis, Suzan; Stroeve, Johanna H M; Stafleu, Annette; Bakker, Gertruud C M; Burggraaf, Jacobus; van Erk, Marjan J; Pellis, Linette; Boessen, Ruud; Kardinaal, Alwine A F; van Ommen, Ben
2017-01-01
A key feature of metabolic health is the ability to adapt upon dietary perturbations. Recently, it was shown that metabolic challenge tests in combination with the new generation of biomarkers allow the simultaneous quantification of major metabolic health processes. Currently applied challenge tests are largely non-standardized. A systematic review defined an optimal nutritional challenge test, the "PhenFlex test" (PFT). This study aimed to prove that the PFT modulates all relevant processes governing metabolic health, thereby allowing one to distinguish subjects with different metabolic health status. Therefore, 20 healthy and 20 type 2 diabetic (T2D) male subjects were challenged both by the PFT and an oral glucose tolerance test (OGTT). During the 8-h response time course, 132 parameters were quantified that report on 26 metabolic processes distributed over 7 organs (gut, liver, adipose, pancreas, vasculature, muscle, kidney) and systemic stress. In healthy subjects, 110 of the 132 parameters showed a time course response. Patients with T2D showed 18 parameters to be significantly different after overnight fasting compared to healthy subjects, while 58 parameters were different in the post-challenge time course after the PFT. This demonstrates the added value of the PFT in distinguishing subjects with different health status. The OGTT and PFT responses were highly comparable for glucose metabolism, as identical amounts of glucose were present in both challenge tests. Yet the PFT reports on additional processes, including vasculature, systemic stress, and metabolic flexibility. The PFT enables the quantification of all relevant metabolic processes involved in maintaining or regaining homeostasis of metabolic health. Studying both healthy subjects and subjects with impaired metabolic health showed that the PFT revealed new processes lying beneath health. This study provides the first evidence towards adopting the PFT as a gold standard in nutrition research.
Advances in interpretation of subsurface processes with time-lapse electrical imaging
Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.
2015-01-01
Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.
Advances in interpretation of subsurface processes with time-lapse electrical imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.
2015-03-15
Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.
NASA Astrophysics Data System (ADS)
Bondarenko, J. A.; Fedorenko, M. A.; Pogonin, A. A.
2018-03-01
Large parts can be machined without disassembly by using "Extra" extension machines, which pose technological and design challenges that differ from those of processing the same components on a stationary machine. Extension machines are used to restore large parts to a condition allowing their use in a production environment. Achieving the desired accuracy and surface roughness parameters on the recovered surface by rotary grinding greatly increases complexity. In order to improve production efficiency and process productivity, high-quality rotary processing of the machined surface is applied. The rotary cutting process involves a continuous change of the cutting edge surfaces. The kinematic parameters of rotary cutting define its main features and patterns and the cutting action of the rotary cutting tool.
Process Simulation of Gas Metal Arc Welding Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, Paul E.
2005-09-06
ARCWELDER is a Windows-based application that simulates gas metal arc welding (GMAW) of steel and aluminum. The software simulates the welding process in an accurate and efficient manner, provides menu items for process parameter selection, and includes a graphical user interface with the option to animate the process. The user enters the base and electrode material, open circuit voltage, wire diameter, wire feed speed, welding speed, and standoff distance. The program computes the size and shape of a square-groove or V-groove weld in the flat position. The program also computes the current, arc voltage, arc length, electrode extension, transfer of droplets, heat input, filler metal deposition, base metal dilution, and centerline cooling rate, in English or SI units. The simulation may be used to select welding parameters that lead to desired operating conditions.
NASA Technical Reports Server (NTRS)
Kelly, G. M.; Mcconnell, J. G.; Findlay, J. T.; Heck, M. L.; Henry, M. W.
1984-01-01
The STS-11 (41-B) postflight data processing is completed and the results published. The final reconstructed entry trajectory is presented. The various atmospheric sources available for this flight are discussed. Aerodynamic Best Estimate of Trajectory (BET) generation and plots from this file are presented. A definition of the major maneuvers effected is given. Physical constants, including spacecraft mass properties; final residuals from the reconstruction process; trajectory parameter listings; and an archival section are included.
NASA Technical Reports Server (NTRS)
Entekhabi, D.; Eagleson, P. S.
1989-01-01
Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
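The core idea of such a subgrid parameterization is that a grid-cell mean process rate is the expectation of the local rate over the assumed probability density, not the rate evaluated at the mean state. The toy sketch below illustrates this for a threshold (saturation-excess) runoff function over an assumed beta-distributed subgrid soil saturation; the pdf and threshold are hypothetical, not the paper's specific forms.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Grid-mean runoff over a subgrid soil-moisture pdf vs. runoff at the mean.
# Saturation-excess runoff: r(s) = max(0, s - s_c), s_c an assumed threshold.
s_c = 0.6
pdf = stats.beta(a=4, b=4)          # assumed subgrid soil-saturation pdf on [0, 1]

runoff = lambda s: max(0.0, s - s_c)

# E[r(s)] over the subgrid distribution
subgrid_mean, _ = quad(lambda s: runoff(s) * pdf.pdf(s), 0.0, 1.0)
# r(E[s]): what a model ignoring subgrid variability would compute
at_mean = runoff(pdf.mean())

print(subgrid_mean, at_mean)
```

Because r is convex, Jensen's inequality gives E[r(s)] ≥ r(E[s]); here the mean saturation sits below the threshold, so a model ignoring subgrid variability would produce zero runoff while the parameterized grid cell still does, which is exactly the effect these parameterizations capture.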
Modeling of microstructure evolution in direct metal laser sintering: A phase field approach
NASA Astrophysics Data System (ADS)
Nandy, Jyotirmoy; Sarangi, Hrushikesh; Sahoo, Seshadev
2017-02-01
Direct Metal Laser Sintering (DMLS) is a new technology in the field of additive manufacturing, which builds metal parts in a layer-by-layer fashion directly from the powder bed. The process occurs within a very short time period with a rapid solidification rate. Slight variations in the process parameters may cause enormous changes in the final built parts. The physical and mechanical properties of the final built parts depend on the solidification rate, which directly affects the microstructure of the material. Thus, the evolution of the microstructure plays a vital role in process parameter optimization. Nowadays, the increase in computational power allows for direct simulation of microstructures during materials processing for specific manufacturing conditions. In this study, modeling of the microstructure evolution of AlSi10Mg powder in the DMLS process was carried out using a phase field approach. A MATLAB code was developed to solve the set of phase field equations, where the simulation parameters include temperature gradient, laser scan speed, and laser power. The effects of the temperature gradient on microstructure evolution were studied, and it was found that with an increase in temperature gradient, the dendritic tip grows at a faster rate.
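To convey the mechanics of a phase-field solver, the sketch below integrates the much simpler Allen-Cahn equation, ∂φ/∂t = −M(φ³ − φ − ε²∇²φ), with an explicit finite-difference scheme on a periodic grid. This is a toy stand-in for the study's solidification phase-field equations (no thermal coupling, no anisotropy); the parameters are illustrative, not calibrated to AlSi10Mg.

```python
import numpy as np

# Minimal explicit Allen-Cahn solver on a periodic grid.
# phi = -1: liquid, phi = +1: solid; all parameters are illustrative.
n, dx, dt = 64, 1.0, 0.1
M, eps2 = 1.0, 1.0                      # mobility, gradient-energy coefficient

rng = np.random.default_rng(0)
phi = 0.1 * rng.standard_normal((n, n))  # small random initial fluctuations

def laplacian(f):
    # 5-point stencil with periodic boundaries
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for _ in range(500):
    # dphi/dt = -M * (f'(phi) - eps^2 * lap(phi)),  f'(phi) = phi^3 - phi
    phi += dt * (-M * (phi**3 - phi - eps2 * laplacian(phi)))

print(phi.min(), phi.max())   # domains coarsen toward phi = +/- 1
```

The explicit time step must satisfy the diffusion stability limit (dt ≤ dx²/(4·M·ε²) here); a production solidification model would add an anisotropic gradient term and a coupled temperature equation, with the temperature gradient entering as in the study.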
Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul
2012-11-26
The aim of this work is to develop group-contribution(+) (GC(+)) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with the uncertainties of the estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine the parameters of the property models and an uncertainty analysis step to establish statistical information about the quality of the parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values of a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.), taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox, are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and the atom connectivity index method have been considered. In total, 22 environment-related properties have been modeled and analyzed, including the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, and emissions (carcinogenic and noncarcinogenic) to urban air, continental rural air, continental fresh water, continental seawater, continental natural soil, and continental agricultural soil.
The application of the developed property models for the estimation of environment-related properties and uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of environment-related properties needed to perform process synthesis, design, and analysis of sustainable chemical processes and allow one to evaluate the effect of uncertainties of estimated property values on the calculated performance of processes giving useful insights into quality and reliability of the design of sustainable processes.
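The uncertainty-analysis step described above, for a linear group-contribution model y = Xθ, reduces to standard least-squares statistics: the parameter covariance is s²(XᵀX)⁻¹ with s² the residual variance, and confidence intervals follow from the t-distribution. The sketch below runs this on synthetic group-occurrence data (the counts and coefficients are made up, not the Marrero-Gani parameter tables).

```python
import numpy as np
from scipy import stats

# Linear group-contribution-style model y = X @ theta: estimate parameters,
# then the covariance and 95% confidence intervals (the uncertainty step).
rng = np.random.default_rng(0)
n, p = 120, 4                       # observations (chemicals), group parameters
X = rng.uniform(0, 5, (n, p))       # made-up group occurrence counts
theta_true = np.array([1.5, -0.8, 0.3, 2.0])
y = X @ theta_true + rng.normal(0, 0.2, n)

theta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
dof = n - p
s2 = np.sum((y - X @ theta) ** 2) / dof          # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                # parameter covariance
se = np.sqrt(np.diag(cov))                       # standard errors
t_val = stats.t.ppf(0.975, dof)
ci = np.column_stack([theta - t_val * se, theta + t_val * se])
print(theta, se)
```

Propagating `cov` through the model at a new chemical's group vector x₀ gives the prediction variance x₀ᵀ·cov·x₀, which is how the standard errors in predicted properties are obtained.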
Friction Pull Plug Welding in Aluminum Alloys
NASA Technical Reports Server (NTRS)
Brooke, Shane A.; Bradford, Vann; Burkholder, Jonathon
2011-01-01
NASA's Marshall Space Flight Center (MSFC) has recently invested much time and effort into the process development of Friction Pull Plug Welding (FPPW). FPPW is a welding process similar to Friction Push Plug Welding in that there is a small rotating part (plug) being spun and simultaneously pulled (forged) into a larger part. These two processes differ in that push plug welding requires an internal reaction support, while pull plug welding reacts to the load externally. FPPW was originally conceived as a post-proof repair technique for the External Tank. FPPW was easily selected as the primary process used to close out the termination hole on the Constellation Program's ARES I Upper Stage circumferential Self-Reacting Friction Stir Welds (SR-FSW). The versatility of FPPW allows it to also be used as a repair technique for both SR-FSW and conventional Friction Stir Welds. To date, all MSFC-led development has been concentrated on aluminum alloys (2195, 2219, and 2014). Much work has been done to fully understand and characterize the process's limitations. A heavy emphasis has been placed on plug design, to match the various weldland thicknesses and alloy combinations. This presentation will summarize these development efforts, including weld parameter development, process control, parameter sensitivity studies, plug repair techniques, and material properties including tensile, fracture, and failure analysis.
Friction Pull Plug Welding in Aluminum Alloys
NASA Technical Reports Server (NTRS)
Brooke, Shane A.; Bradford, Vann
2012-01-01
NASA's Marshall Space Flight Center (MSFC) has recently invested much time and effort into the process development of Friction Pull Plug Welding (FPPW). FPPW is a welding process similar to Friction Push Plug Welding in that there is a small rotating part (plug) being spun and simultaneously pulled (forged) into a larger part. These two processes differ in that push plug welding requires an internal reaction support, while pull plug welding reacts to the load externally. FPPW was originally conceived as a post-proof repair technique for the Space Shuttle's External Tank. FPPW was easily selected as the primary weld process used to close out the termination hole on the Constellation Program's ARES I Upper Stage circumferential Self-Reacting Friction Stir Welds (SR-FSW). The versatility of FPPW allows it to also be used as a repair technique for both SR-FSW and conventional Friction Stir Welds. To date, all MSFC-led development has been concentrated on aluminum alloys (2195, 2219, and 2014). Much work has been done to fully understand and characterize the process's limitations. A heavy emphasis has been placed on plug design, to match the various weldland thicknesses and alloy combinations. This presentation will summarize these development efforts, including weld parameter development, process control, parameter sensitivity studies, plug repair techniques, and material properties including tensile, fracture, and failure analysis.
Sensor-Web Operations Explorer
NASA Technical Reports Server (NTRS)
Meemong, Lee; Miller, Charles; Bowman, Kevin; Weidner, Richard
2008-01-01
Understanding the atmospheric state and its impact on air quality requires observations of trace gases, aerosols, clouds, and physical parameters across temporal and spatial scales that range from minutes to days and from meters to more than 10,000 kilometers. Observations include continuous local monitoring for particle formation; field campaigns for emissions, local transport, and chemistry; and periodic global measurements for continental transport and chemistry. Understanding requires a global data assimilation framework capable of hierarchical coupling, dynamic integration of chemical data and atmospheric models, and feedback loops between models and observations. To observe trace gases, aerosols, clouds, and physical parameters, the sensor-web system will simulate an integrated observation infrastructure composed of space-borne, air-borne, and in-situ sensors based on their measurement physics. To optimally plan operations for multiple heterogeneous sensors, sampling strategies will be explored and science impact analyzed based on comprehensive modeling of atmospheric phenomena, including convection, transport, and chemical processes. Topics include system architecture, software architecture, hardware architecture, process flow, technology infusion, challenges, and future direction.
Akbarzadeh, Rosa; Yousefi, Azizeh-Mitra
2014-08-01
Tissue engineering makes use of 3D scaffolds to sustain three-dimensional growth of cells and guide new tissue formation. To meet the multiple requirements for regeneration of biological tissues and organs, a wide range of scaffold fabrication techniques have been developed, aiming to produce porous constructs with the desired pore size range and pore morphology. Among different scaffold fabrication techniques, thermally induced phase separation (TIPS) method has been widely used in recent years because of its potential to produce highly porous scaffolds with interconnected pore morphology. The scaffold architecture can be closely controlled by adjusting the process parameters, including polymer type and concentration, solvent composition, quenching temperature and time, coarsening process, and incorporation of inorganic particles. The objective of this review is to provide information pertaining to the effect of these parameters on the architecture and properties of the scaffolds fabricated by the TIPS technique. © 2014 Wiley Periodicals, Inc.
Effective Parameters in Axial Injection Suspension Plasma Spray Process of Alumina-Zirconia Ceramics
NASA Astrophysics Data System (ADS)
Tarasi, F.; Medraj, M.; Dolatabadi, A.; Oberste-Berghaus, J.; Moreau, C.
2008-12-01
Suspension plasma spray (SPS) is a novel process for producing nano-structured coatings with metastable phases using significantly smaller particles than conventional thermal spraying. Considering the complexity of the system, there is an extensive need to better understand the relationship between plasma spray conditions and the resulting coating microstructure and defects. In this study, an alumina/8 wt.% yttria-stabilized zirconia composite was deposited by the axial-injection SPS process. The effects of the principal deposition parameters on the microstructural features are evaluated using the Taguchi design of experiments. The microstructural features considered include microcracks, porosity, and deposition rate. To better understand the role of the spray parameters, in-flight particle characteristics, i.e., temperature and velocity, were also measured. The role of porosity in this multicomponent structure is studied as well. The results indicate that the thermal diffusivity of the coatings, an important property for potential thermal barrier applications, is barely affected by changes in porosity content.
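A Taguchi screening of this kind ranks factors by the range of their signal-to-noise main effects over an orthogonal array. The sketch below does this for four factors at three levels on a standard L9 array with synthetic responses (the data are made up; one factor is given a deliberately strong effect so the ranking is visible); it is a generic illustration, not the study's measured porosity data.

```python
import numpy as np

# Taguchi-style main-effects screening on an L9 orthogonal array
# (4 factors at 3 levels, 0-indexed).
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
rng = np.random.default_rng(0)
# Synthetic responses (e.g., porosity, %): factor 0 strong, factor 2 weak.
y = 5.0 + 2.0 * L9[:, 0] + 0.2 * L9[:, 2] + rng.normal(0, 0.1, 9)

# Smaller-the-better S/N ratio (single replicate per run)
sn = -10.0 * np.log10(y**2)
# Mean S/N at each level of each factor
effects = np.array([[sn[L9[:, f] == lev].mean() for lev in range(3)]
                    for f in range(4)])
# Rank factors by the range of their S/N main effects
ranking = np.argsort(np.ptp(effects, axis=1))[::-1]
print(ranking)   # factor 0 should dominate
```

The level of each factor with the highest mean S/N is the Taguchi-recommended setting; the ranking by effect range identifies which deposition parameters matter most.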
Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis
NASA Astrophysics Data System (ADS)
Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.
2005-12-01
The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.
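A sensitivity analysis like the one described can be sketched with a much-reduced model: two ODEs for cellulose and cellulose-bound biomass, with colonization by attachment from solution, and normalized sensitivities of the residual cellulose computed by central finite differences. The equations and parameter values below are a hypothetical stand-in, not the paper's full biofilm model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate(k_h, k_att, X0=0.01, C0=1.0, t_end=50.0):
    """Toy hydrolysis model.  C: cellulose; B: cellulose-bound biomass.
    Colonization by attachment from solution (rate k_att * X0 * C),
    hydrolysis proportional to bound biomass with saturation in C."""
    def rhs(t, z):
        C, B = z
        dC = -k_h * B * C / (C + 0.1)
        dB = k_att * X0 * C - 0.01 * B
        return [dC, dB]
    sol = solve_ivp(rhs, (0, t_end), [C0, 1e-4], rtol=1e-8)
    return sol.y[0, -1]                       # residual cellulose

def sensitivity(f, p, i, h=1e-4):
    """Normalized sensitivity d log f / d log p_i by central difference."""
    p_hi, p_lo = list(p), list(p)
    p_hi[i] *= 1 + h
    p_lo[i] *= 1 - h
    return (np.log(f(*p_hi)) - np.log(f(*p_lo))) / (2 * h)

p0 = [0.5, 0.2]                               # k_h, k_att (arbitrary units)
sens = [sensitivity(simulate, p0, i) for i in range(2)]
print(sens)
```

Both sensitivities come out negative (faster hydrolysis or faster colonization leaves less residual cellulose), and comparing their magnitudes under landfill-like versus digester-like parameter sets is exactly the kind of comparison the paper's sensitivity analysis performs.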
Predictive process simulation of cryogenic implants for leading edge transistor design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh
2012-11-06
Two cryogenic implant TCAD-modules have been developed: (i) A continuum-based compact model targeted towards a TCAD production environment calibrated against an extensive data-set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data-set. (ii) A Kinetic Monte Carlo (kMC) model including the full time dependence of ion-exposure that a particular spot on the wafer experiences, as well as the resulting temperature vs. time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importancemore » of the time-structure of the beam for the amorphization process: Assuming an average dose-rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.« less
Kelly Elder; Don Cline; Angus Goodbody; Paul Houser; Glen E. Liston; Larry Mahrt; Nick Rutter
2009-01-01
A short-term meteorological database has been developed for the Cold Land Processes Experiment (CLPX). This database includes meteorological observations from stations designed and deployed exclusively for CLPX as well as observations available from other sources located in the small regional study area (SRSA) in north-central Colorado. The measured weather parameters...
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
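The hybrid scheme described above can be sketched on a standard benchmark. This is a minimal firefly/differential-evolution hybrid of the general kind the abstract describes, not the authors' implementation; the sphere objective and all parameter values are arbitrary:

```python
import math
import random

random.seed(1)  # deterministic demo run

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def hybrid_optimize(dim=3, pop=12, gens=100, beta0=1.0, gamma=1.0,
                    alpha=0.2, F=0.5, CR=0.9, bound=5.0):
    X = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(pop)]
    fit = [sphere(x) for x in X]
    for _ in range(gens):
        # Firefly step: each firefly moves toward every brighter (fitter) one,
        # attractiveness decaying with squared distance, plus a random walk.
        for i in range(pop):
            for j in range(pop):
                if fit[j] < fit[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
            fit[i] = sphere(X[i])
        # DE step: rand/1 mutation, binomial crossover, greedy selection.
        for i in range(pop):
            a, b, c = random.sample([k for k in range(pop) if k != i], 3)
            trial = [X[a][d] + F * (X[b][d] - X[c][d])
                     if random.random() < CR else X[i][d]
                     for d in range(dim)]
            tf = sphere(trial)
            if tf < fit[i]:
                X[i], fit[i] = trial, tf
        alpha *= 0.95  # anneal the random-walk amplitude
    return min(fit)
```

The greedy DE selection guarantees the best solution never worsens, while the firefly moves provide the neighbourhood search the abstract credits for improved estimates.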
Calibration process of highly parameterized semi-distributed hydrological model
NASA Astrophysics Data System (ADS)
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in a hydrological system governed by nature and are essentially stochastic: they are unique, non-recurring, and variable across space and time. Because every river basin has its own natural characteristics, and every hydrological event within it is unique, modelling them is a complex process that has not been researched sufficiently. Calibration is the procedure of determining those model parameters that are not known well enough. The input and output variables and the mathematical model expressions are known; only some parameters are unknown, and these are determined by calibrating the model. The software used for hydrological modelling today is equipped with sophisticated calibration algorithms, but it usually gives the modeler no way to manage the process, and the results are often not the best achievable. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST, a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by an expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure were left entirely to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological features such as karstic, alluvial, and forest areas; this step requires the geological, meteorological, hydraulic, and hydrological knowledge of the modeler. The second step is to set the initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events, and each sub-catchment in the model has its own observation group.
The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted so that each observation group contributes equally to the total objective function; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, 2013); in doing so, ADDREG1 automatically provides a prior-information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST itself. We run BeoPEST, a parallel version of PEST that can run simultaneously on multiple computers over TCP communications, which speeds up the calibration process. The case study, with results of the calibration and validation of the model, will be presented.
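The effect of the Tikhonov prior-information equations described above can be shown on a deliberately ill-posed toy problem. The two-parameter model and all numbers here are invented for illustration, not taken from the calibration; the data constrain only the sum p1 + p2, and the prior picks the solution nearest the preferred values:

```python
def tikhonov_fit(obs, pref, lam):
    """Fit the model y = p1 + p2 to observations, with Tikhonov prior
    equations p_i ~ pref_i weighted by lam. Minimizes
        sum (y - p1 - p2)^2 + lam * ((p1 - a)^2 + (p2 - b)^2)
    via its 2x2 normal equations, solved by Cramer's rule."""
    n, ybar = len(obs), sum(obs) / len(obs)
    a, b = pref
    # Normal equations:
    # [n+lam, n    ] [p1]   [n*ybar + lam*a]
    # [n,     n+lam] [p2] = [n*ybar + lam*b]
    A11 = A22 = n + lam
    A12 = n
    r1 = n * ybar + lam * a
    r2 = n * ybar + lam * b
    det = A11 * A22 - A12 * A12
    p1 = (r1 * A22 - A12 * r2) / det
    p2 = (A11 * r2 - A12 * r1) / det
    return p1, p2
```

Without the prior (lam = 0) the normal matrix is singular; with it, the inversion is stable and the parameters stay interpretable, which is the point of running ADDREG1 before PEST.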
In-situ acoustic signature monitoring in additive manufacturing processes
NASA Astrophysics Data System (ADS)
Koester, Lucas W.; Taheri, Hossein; Bigelow, Timothy A.; Bond, Leonard J.; Faierson, Eric J.
2018-04-01
Additive manufacturing is a rapidly maturing process for the production of complex metallic, ceramic, polymeric, and composite components. The processes used are numerous, and with the complex geometries involved this can make quality control and standardization of the process and inspection difficult. Acoustic emission measurements have been used previously to monitor a number of processes including machining and welding. The authors have identified acoustic signature measurement as a potential means of monitoring metal additive manufacturing processes using process noise characteristics and those discrete acoustic emission events characteristic of defect growth, including cracks and delamination. Results of acoustic monitoring for a metal additive manufacturing process (directed energy deposition) are reported. The work investigated correlations between acoustic emissions and process noise with variations in machine state and deposition parameters, and provided proof of concept data that such correlations do exist.
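A simple version of the acoustic-emission hit detection used in such monitoring can be sketched as a threshold on short-term RMS. This is a common generic scheme, not the authors' method; the burst locations and noise level below are synthetic:

```python
import math
import random

random.seed(0)

def detect_events(signal, window=50, k=5.0):
    """Flag acoustic-emission 'hits' where short-term RMS exceeds k times
    the RMS of the first window (taken as process background noise)."""
    def rms(seg):
        return math.sqrt(sum(s * s for s in seg) / len(seg))
    noise = rms(signal[:window])
    events = []
    i = 0
    while i + window <= len(signal):
        if rms(signal[i:i + window]) > k * noise:
            events.append(i)
            i += window          # jump past this burst segment
        else:
            i += window // 2     # half-overlapping scan
    return events

# Synthetic test trace: Gaussian background with two decaying bursts,
# standing in for crack/delamination events during deposition.
trace = [random.gauss(0.0, 0.01) for _ in range(2000)]
for start in (600, 1400):
    for t in range(200):
        trace[start + t] += math.sin(0.8 * t) * math.exp(-t / 60.0)
```

Running `detect_events(trace)` flags windows around both synthetic bursts and none in the quiet background, the kind of discrete-event picking that would be correlated with machine state in practice.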
Suzuki, Kazumichi; Gillin, Michael T; Sahoo, Narayan; Zhu, X Ronald; Lee, Andrew K; Lippy, Denise
2011-07-01
To evaluate patient census, equipment clinical availability, maximum daily treatment capacity, use factor for major beam delivery parameters, and treatment process time for actual treatments delivered by proton therapy systems. The authors have been recording all beam delivery parameters, including delivered dose, energy, range, spread-out Bragg peak widths, gantry angles, and couch angles for every treatment field in an electronic medical record system. We analyzed delivery system downtimes that had been recorded for every equipment failure and associated incidents. These data were used to evaluate the use factor of beam delivery parameters, the size of the patient census, and the equipment clinical availability of the facility. The duration of each treatment session, from patient walk-in to patient walk-out of the treatment room, was measured for 82 patients with cancers at various sites. The yearly average equipment clinical availability in the last 3 years (June 2007-August 2010) was 97%, which exceeded the target of 95%. Approximately 2200 patients had been treated as of August 2010. The major disease sites were genitourinary (49%), thoracic (25%), central nervous system (22%), and gastrointestinal (2%). Beams have been delivered in approximately 8300 treatment fields. The use factor for six beam delivery parameters was also evaluated. Analysis of the treatment process times indicated that approximately 80% of this time was spent on patient and equipment setup. The other 20% was spent waiting for beam availability and on beam delivery. The total treatment process time can be expressed by a quadratic polynomial of the number of fields per session. The maximum daily treatment capacity of our facility using the current treatment processes was estimated to be 133 +/- 35 patients. This analysis shows that the facility has operated at a high performance level and has treated a large number of patients with a variety of diseases.
The use factor of beam delivery parameters varies by disease site. Further improvements in efficiency may be realized in the equipment- and patient-related processes of treatment.
TSPP - A Collection of FORTRAN Programs for Processing and Manipulating Time Series
Boore, David M.
2008-01-01
This report lists a number of FORTRAN programs that I have developed over the years for processing and manipulating strong-motion accelerograms. The collection is titled TSPP, which stands for Time Series Processing Programs. I have excluded 'strong-motion accelerograms' from the title, however, as the boundary between 'strong' and 'weak' motion has become blurred with the advent of broadband sensors and high-dynamic range dataloggers, and many of the programs can be used with any evenly spaced time series, not just acceleration time series. This version of the report is relatively brief, consisting primarily of an annotated list of the programs, with two examples of processing, and a few comments on usage. I do not include a parameter-by-parameter guide to the programs. Future versions might include more examples of processing, illustrating the various parameter choices in the programs. Although these programs have been used by the U.S. Geological Survey, no warranty, expressed or implied, is made by the USGS as to the accuracy or functioning of the programs and related program material, nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith. The programs are distributed on an 'as is' basis, with no warranty of support from me. These programs were written for my use and are being publicly distributed in the hope that others might find them as useful as I have. I would, however, appreciate being informed about bugs, and I always welcome suggestions for improvements to the codes. Please note that I have made little effort to optimize the coding of the programs or to include a user-friendly interface (many of the programs in this collection have been included in the software usdp (Utility Software for Data Processing), being developed by Akkar et al. (personal communication, 2008); usdp includes a graphical user interface).
Speed of execution has been sacrificed in favor of a code that is intended to be easy to understand, although on modern computers speed of execution is rarely a problem. I will be pleased if users incorporate portions of my programs into their own applications; I only ask that reference be made to this report as the source of the programs.
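Two of the elementary operations such time-series programs perform, baseline (mean) removal and time-domain integration of acceleration to velocity, can be sketched compactly. This is an illustrative sketch, not TSPP's FORTRAN code, and the parameter choices are mine:

```python
def remove_mean(acc):
    """Zeroth-order baseline correction: subtract the record mean so that
    integration does not accumulate a spurious linear velocity drift."""
    m = sum(acc) / len(acc)
    return [a - m for a in acc]

def integrate_trapezoid(series, dt):
    """Cumulative trapezoidal integration of an evenly spaced series,
    e.g. acceleration -> velocity, with sample interval dt seconds."""
    out = [0.0]
    for i in range(1, len(series)):
        out.append(out[-1] + 0.5 * dt * (series[i - 1] + series[i]))
    return out
```

For example, integrating a constant 1.0 m/s^2 acceleration sampled at 100 Hz for 1 s yields a final velocity of 1.0 m/s.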
NASA Astrophysics Data System (ADS)
Shamshuddin, MD.; Anwar Bég, O.; Sunder Ram, M.; Kadir, A.
2018-02-01
Non-Newtonian flows arise in numerous industrial transport processes including materials fabrication systems. Micropolar theory offers an excellent mechanism for exploring the fluid dynamics of new non-Newtonian materials which possess internal microstructure. Magnetic fields may also be used for controlling electrically-conducting polymeric flows. To explore numerical simulation of transport in rheological materials processing, in the current paper, a finite element computational solution is presented for magnetohydrodynamic, incompressible, dissipative, radiative and chemically-reacting micropolar fluid flow, heat and mass transfer adjacent to an inclined porous plate embedded in a saturated homogenous porous medium. Heat generation/absorption effects are included. Rosseland's diffusion approximation is used to describe the radiative heat flux in the energy equation. A Darcy model is employed to simulate drag effects in the porous medium. The governing transport equations are rendered into non-dimensional form under the assumption of low Reynolds number and also low magnetic Reynolds number. Using a Galerkin formulation with a weighted residual scheme, finite element solutions are presented to the boundary value problem. The influence of plate inclination, Eringen coupling number, radiation-conduction number, heat absorption/generation parameter, chemical reaction parameter, plate moving velocity parameter, magnetic parameter, thermal Grashof number, species (solutal) Grashof number, permeability parameter, and Eckert number on linear velocity, micro-rotation, temperature, and concentration profiles is examined. Furthermore, the influence of selected thermo-physical parameters on friction factor, surface heat transfer and mass transfer rate is also tabulated. The finite element solutions are verified with solutions from several limiting cases in the literature. Interesting features in the flow are identified and interpreted.
J.C. Rowland; D.R. Harp; C.J. Wilson; A.L. Atchley; V.E. Romanovsky; E.T. Coon; S.L. Painter
2016-02-02
This Modeling Archive is in support of an NGEE Arctic publication available at doi:10.5194/tc-10-341-2016. This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.
Unified Ecoregions of Alaska: 2001
Nowacki, Gregory J.; Spencer, Page; Fleming, Michael; Brock, Terry; Jorgenson, Torre
2003-01-01
Major ecosystems have been mapped and described for the State of Alaska and nearby areas. Ecoregion units are based on newly available datasets and the field experience of ecologists, biologists, geologists and regional experts. Recently derived datasets for Alaska included climate parameters, vegetation, surficial geology and topography. Additional datasets incorporated in the mapping process were lithology, soils, permafrost, hydrography, fire regime and glaciation. Thirty-two units are mapped using a combination of the approaches of Bailey (hierarchical) and Omernik (integrated). The ecoregions are grouped into two higher levels using a 'tri-archy' based on climate parameters, vegetation response and disturbance processes. The ecoregions are described with text, photos and tables on the published map.
Sludge stabilization through aerobic digestion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, R.B.; Smith, D.G.; Bennett, E.R.
1979-10-01
The aerobic digestion process with certain modifications is evaluated as an alternative for sludge processing capable of developing a product with characteristics required for land application. Environmental conditions, including temperature, solids concentration, and digestion time, that affect the aerobic digestion of a mixed primary sludge-trickling filter humus are investigated. Variations in these parameters that influence the characteristics of digested sludge are determined, and the parameters are optimized to: provide the maximum rate of volatile solids reduction; develop a stable, nonodorous product sludge; and provide the maximum rate of oxidation of the nitrogenous material present in the feed sludge. (3 diagrams, 9 graphs, 15 references, 3 tables)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod
Purpose: In 2D RT patient setup images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g., Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip-limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
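The clip-limited equalization idea and the entropy objective can be sketched in a few lines. This is a single-tile simplification (full CLAHE tiles the image and bilinearly interpolates between tile mappings, omitted here), and the clip value is illustrative:

```python
import math

def clipped_hist_equalize(pixels, levels=256, clip=0.01):
    """Global histogram equalization with a CLAHE-style clip limit:
    bins above clip*N are truncated and the excess is redistributed
    uniformly (integer remainder dropped) before building the mapping."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    limit = max(1, int(clip * n))
    excess = 0
    for i in range(levels):
        if hist[i] > limit:
            excess += hist[i] - limit
            hist[i] = limit
    hist = [h + excess // levels for h in hist]
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    lut = [round(c * scale) for c in cdf]
    return [lut[p] for p in pixels]

def entropy(pixels, levels=256):
    """Shannon entropy in bits; the abstract's optimization objective."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    return -sum((h / n) * math.log2(h / n) for h in hist if h)
```

Applied to a low-contrast image whose gray values cluster in a narrow band, the clipped mapping stretches the output range; an outer optimizer would then tune `clip` (and, in full CLAHE, block size) to maximize `entropy` of the result.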
Self-organization in psychotherapy: testing the synergetic model of change processes
Schiepek, Günter K.; Tominschek, Igor; Heinzel, Stephan
2014-01-01
In recent years, models have been developed that conceive psychotherapy as a self-organizing process of bio-psycho-social systems. These models originate from the theory of self-organization (Synergetics), from the theory of deterministic chaos, or from the approach of self-organized criticality. This process-outcome study examines several hypotheses mainly derived from Synergetics, including the assumption of discontinuous changes in psychotherapy (instead of linear incremental gains), the occurrence of critical instabilities in temporal proximity of pattern transitions, the hypothesis of necessary stable boundary conditions during destabilization processes, and of motivation to change playing the role of a control parameter for psychotherapeutic self-organization. Our study was realized at a day treatment center; 23 patients with obsessive compulsive disorder (OCD) were included. Client self-assessment was performed by an Internet-based process monitoring (referred to as the Synergetic Navigation System), whereby daily ratings were recorded through administering the Therapy Process Questionnaire (TPQ). The process measures of the study were extracted from the subscale dynamics (including the dynamic complexity of their time series) of the TPQ. The outcome criterion was measured by the Yale-Brown Obsessive Compulsive Scale (Y-BOCS) which was completed pre-post and on a bi-weekly schedule by all patients. A second outcome criterion was based on the symptom severity subscale of the TPQ. Results supported the hypothesis of discontinuous changes (pattern transitions), the occurrence of critical instabilities preparing pattern transitions, and of stable boundary conditions as prerequisites for such transitions, but not the assumption of motivation to change as a control parameter. PMID:25324801
Simulation study of spheroidal dust grain charging: Applicable to dust grain alignment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahed, H.; Sobhanian, S.; Mahmoodi, J.
2006-09-15
The charging process of nonspherical dust grains in an unmagnetized plasma as well as in the presence of a magnetic field is studied. It is shown that unlike the spherical dust grain, due to nonhomogeneity of charge distribution on the spheroidal dust surface, the resultant electric forces on electrons and ions are different. This process produces some surface charge density gradient on the nonspherical grain surface. Effects of a magnetic field and other plasma parameters on the properties of the dust particulate are studied. It has been shown that the alignment direction could be changed or even reversed with the magnetic field and plasma parameters. Finally, the charge distribution on the spheroidal grain surface is studied for different ambient parameters including plasma temperature, neutral collision frequency, and the magnitude of the magnetic field.
Continued Data Acquisition Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwellenbach, David
This task focused on improving techniques for integrating data acquisition of secondary particles correlated in time with detected cosmic-ray muons. Scintillation detectors with Pulse Shape Discrimination (PSD) capability show the most promise as a detector technology based on work in FY13. Typically PSD parameters are determined prior to an experiment and the results are based on these parameters. By saving data in list mode, including the fully digitized waveform, any experiment can effectively be replayed to adjust PSD and other parameters for the best data capture. List mode requires time synchronization of two independent data acquisition (DAQ) systems: the muon tracker and the particle detector system. Techniques to synchronize these systems were studied. Two basic techniques were identified: real time mode and sequential mode. Real time mode is the preferred system but has proven to be a significant challenge since two FPGA systems with different clocking parameters must be synchronized. Sequential processing is expected to work with virtually any DAQ but requires more post processing to extract the data.
HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.
Schroeder, Mark J.; Perreault, Bill; Ewert, Daniel L.; Koenig, Steven C.
2004-07-01
A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ASCII or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework within which Good Laboratory Practice (GLP) compliance can be achieved. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study-specific analysis files according to their specific needs. Copyright 2003 Elsevier Ltd.
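The core of any beat-to-beat analyzer is segmenting the waveform into beats and computing per-beat statistics. HEART's actual detection algorithm is not described in the abstract; this sketch uses a simple midline threshold with a refractory period on a synthetic pressure waveform (all numbers invented):

```python
import math

def detect_beats(pressure, dt, min_rr=0.3):
    """Find systolic peaks as local maxima above the signal midline,
    separated by at least min_rr seconds (a simple refractory period)."""
    mid = (max(pressure) + min(pressure)) / 2.0
    beats = []
    gap = int(min_rr / dt)
    for i in range(1, len(pressure) - 1):
        if (pressure[i] > mid
                and pressure[i] >= pressure[i - 1]
                and pressure[i] > pressure[i + 1]
                and (not beats or i - beats[-1] >= gap)):
            beats.append(i)
    return beats

def per_beat_means(pressure, beats):
    """Mean pressure over each beat-to-beat interval."""
    return [sum(pressure[a:b]) / (b - a) for a, b in zip(beats, beats[1:])]

# Synthetic 1.5 Hz "arterial pressure": 100 mmHg mean, 20 mmHg amplitude.
dt = 0.001
pressure = [100 + 20 * math.sin(2 * math.pi * 1.5 * k * dt)
            for k in range(3000)]
beats = detect_beats(pressure, dt)
```

Per-beat parameters (systolic, diastolic, mean, etc.) would then be tabulated over `beats` for statistical analysis, which is the role HEART automates with visual quality assurance.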
A Workflow for Global Sensitivity Analysis of PBPK Models
McNally, Kevin; Cotton, Richard; Loizou, George D.
2011-01-01
Physiologically based pharmacokinetic (PBPK) models have a potentially significant role in the development of a reliable predictive toxicity testing strategy. The structures of PBPK models are ideal frameworks into which disparate in vitro and in vivo data can be integrated and utilized to translate information generated using alternatives to animal measures of toxicity, together with human biological monitoring data, into plausible corresponding exposures. However, these models invariably include the description of well-known non-linear biological processes such as enzyme saturation, and interactions between parameters such as organ mass and body mass. Therefore, an appropriate sensitivity analysis (SA) technique is required which can quantify the influences associated with individual parameters, interactions between parameters, and any non-linear processes. In this report we have defined the elements of a workflow for SA of PBPK models that is computationally feasible, accounts for interactions between parameters, and can be displayed in the form of a bar chart and cumulative sum line (Lowry plot), which we believe is intuitive and appropriate for toxicologists, risk assessors, and regulators. PMID:21772819
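The quantities a Lowry plot stacks, first-order (main-effect) indices plus the interaction contribution per parameter, can be computed by brute force for a toy 2-input model. This is an illustrative grid calculation on the unit square, not the workflow's actual SA method or a PBPK model:

```python
def sobol_grid(model, n=200):
    """First-order (S) and total-effect (ST) sensitivity indices for a
    2-input model on [0,1]^2, by midpoint-rule grid integration. The gap
    ST_i - S_i measures the interaction contribution for input i."""
    xs = [(i + 0.5) / n for i in range(n)]
    ys = [[model(a, b) for b in xs] for a in xs]
    flat = [v for row in ys for v in row]
    mu = sum(flat) / len(flat)
    var = sum((v - mu) ** 2 for v in flat) / len(flat)
    m1 = [sum(row) / n for row in ys]                     # E[y | x1]
    m2 = [sum(ys[i][j] for i in range(n)) / n for j in range(n)]  # E[y | x2]
    s1 = sum((m - mu) ** 2 for m in m1) / n / var
    s2 = sum((m - mu) ** 2 for m in m2) / n / var
    # With only two inputs, total effect of x1 = 1 - first-order of x2.
    return {"S1": s1, "S2": s2, "ST1": 1 - s2, "ST2": 1 - s1}
```

For the additive model y = x1 + 2*x2 the first-order indices sum to 1 (no interaction); for y = x1*x2 the total-effect indices exceed the first-order ones, and that excess is what the Lowry plot's cumulative line makes visible.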
COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior
NASA Technical Reports Server (NTRS)
Smialek, James L.; Auping, Judith V.
2002-01-01
COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
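The rise-then-fall weight-change curve that COSP characterizes can be reproduced with the simplest model variant: parabolic scale growth with a constant fractional spall per cycle. COSP's actual growth laws and spalling geometries are selectable, and all constants below are invented for illustration:

```python
def cyclic_oxidation(kp=0.01, spall_frac=0.05, cycles=300):
    """Net specimen weight change per cycle for parabolic oxide growth
    (scale^2 grows by kp each hot cycle) with a constant fraction of the
    scale spalled on cooldown. Weight change = oxygen retained in all
    oxide ever formed minus oxide mass lost from the specimen."""
    f_oxygen = 0.47          # assumed oxygen mass fraction of the oxide
    scale = 0.0              # retained oxide mass per unit area
    spalled = 0.0            # cumulative spalled oxide mass per unit area
    history = []
    for _ in range(cycles):
        scale = (scale * scale + kp) ** 0.5   # growth during the hot dwell
        lost = spall_frac * scale             # spallation on cooldown
        scale -= lost
        spalled += lost
        history.append(f_oxygen * (scale + spalled) - spalled)
    return history
```

The curve climbs while growth dominates, peaks after a few cycles, and then declines at a nearly constant terminal rate once growth and spallation balance, exactly the characteristic parameters (maximum weight change, cycles to zero crossing, final loss rate) listed in the run summary.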
Method for thermally spraying crack-free mullite coatings on ceramic-based substrates
NASA Technical Reports Server (NTRS)
Spitsberg, Irene T. (Inventor); Wang, Hongyu (Inventor); Heidorn, Raymond W. (Inventor)
2001-01-01
A process for depositing a mullite coating on a silicon-based material, such as those used to form articles exposed to high temperatures, including the hostile thermal environment of a gas turbine engine. The process generally entails thermally spraying a mullite powder to form a mullite layer on a substrate, with the thermal spraying performed so that the mullite powder absorbs a sufficiently low level of energy from the thermal source to prevent evaporation of silica from the mullite powder. Processing includes deposition parameter adjustments or annealing to maintain or reestablish phase equilibrium in the mullite layer, so that through-thickness cracks in the mullite layer are avoided.
Method for thermally spraying crack-free mullite coatings on ceramic-based substrates
NASA Technical Reports Server (NTRS)
Spitsberg, Irene T. (Inventor); Wang, Hongyu (Inventor); Heidorn, Raymond W. (Inventor)
2000-01-01
A process for depositing a mullite coating on a silicon-based material, such as those used to form articles exposed to high temperatures, including the hostile thermal environment of a gas turbine engine. The process generally entails thermally spraying a mullite powder to form a mullite layer on a substrate, with the thermal spraying performed so that the mullite powder absorbs a sufficiently low level of energy from the thermal source to prevent evaporation of silica from the mullite powder. Processing includes deposition parameter adjustments or annealing to maintain or reestablish phase equilibrium in the mullite layer, so that through-thickness cracks in the mullite layer are avoided.
Earth Survey Applications Division. [a bibliography
NASA Technical Reports Server (NTRS)
Carpenter, L. (Editor)
1981-01-01
Accomplishments are described for research and data analysis conducted to study physical parameters and processes inside the Earth and on its surface, to define techniques and systems for remotely sensing those processes and measuring parameters of scientific and applications interest, and to transfer promising operational applications techniques to the user community of Earth resources monitors, managers, and decision makers. Research areas covered include: geobotany, magnetic field modeling, crustal studies, crustal dynamics, sea surface topography, land resources, remote sensing of vegetation and soils, and hydrological sciences. Major accomplishments include: production of global maps of magnetic anomalies using Magsat data; computation of the global mean sea surface using GEOS-3 and Seasat altimetry data; delineation of the effects of topography on the interpretation of remotely-sensed data; application of snowmelt runoff models to water resources management; and mapping of snow depth over wheat growing areas using Nimbus microwave data.
Lateral position detection and control for friction stir systems
Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.
2012-06-05
An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
Lateral position detection and control for friction stir systems
Fleming, Paul [Boulder, CO; Lammlein, David H [Houston, TX; Cook, George E [Brentwood, TN; Wilkes, Don Mitchell [Nashville, TN; Strauss, Alvin M [Nashville, TN; Delapp, David R [Ashland City, TN; Hartman, Daniel A [Fairhope, AL
2011-11-08
Friction stir methods are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
Error free all optical wavelength conversion in highly nonlinear As-Se chalcogenide glass fiber.
Ta'eed, Vahid G; Fu, Libin; Pelusi, Mark; Rochette, Martin; Littler, Ian C; Moss, David J; Eggleton, Benjamin J
2006-10-30
We present the first demonstration of all-optical wavelength conversion in chalcogenide glass fiber, including system penalty measurements at 10 Gb/s. Our device is based on As2Se3 chalcogenide glass fiber, which has the highest Kerr nonlinearity (n2) of any fiber to date for which either advanced all-optical signal processing functions or system penalty measurements have been demonstrated. We achieve wavelength conversion via cross-phase modulation over a 10 nm wavelength range near 1550 nm with 7 ps pulses at 2.1 W peak pump power in 1 meter of fiber, incurring only 1.4 dB excess system penalty. Analysis and comparison of the fundamental fiber parameters, including the nonlinear coefficient, two-photon absorption coefficient and dispersion parameter, with other nonlinear glasses shows that As2Se3-based devices hold considerable promise for integrated nonlinear signal processing devices.
Theoretical and experimental studies in ultraviolet solar physics
NASA Technical Reports Server (NTRS)
Parkinson, W. H.; Reeves, E. M.
1975-01-01
The processes and parameters in atomic and molecular physics that are relevant to solar physics are investigated. The areas covered include: (1) measurement of atomic and molecular parameters that contribute to discrete and continuous sources of opacity and abundance determinations in the sun; (2) line broadening and scattering phenomena; and (3) development of an ion beam spectroscopic source which is used for the measurement of electron excitation cross sections of transition region and coronal ions.
High density circuit technology, part 3
NASA Technical Reports Server (NTRS)
Wade, T. E.
1982-01-01
Dry processing - both etching and deposition - and present/future trends in semiconductor technology are discussed. In addition to a description of the basic apparatus, terminology, advantages, glow discharge phenomena, gas-surface chemistries, and key operational parameters for both dry etching and plasma deposition processes, a comprehensive survey of dry processing equipment (via vendor listing) is also included. The following topics are also discussed: fine-line photolithography, low-temperature processing, packaging for dense VLSI die, the role of integrated optics, and VLSI and technology innovations.
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1975-01-01
Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.
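The measurement-update step of square root information estimation can be sketched with an orthogonal (QR) triangularization. This is a minimal illustration in the spirit of the report, not Bierman's actual routines; the unit-variance measurement noise is an assumption made for brevity.

```python
import numpy as np

def srif_update(R, z, A, b):
    """One square-root information measurement update (sketch).

    Prior information array R (upper triangular) and vector z encode the
    estimate x_hat = solve(R, z). New measurements satisfy A @ x ~ b with
    unit-variance noise. QR triangularization of the stacked arrays merges
    prior and data without forming the less well-conditioned normal equations.
    """
    n = R.shape[0]
    stacked = np.vstack([np.column_stack([R, z]),
                         np.column_stack([A, b])])
    _, r = np.linalg.qr(stacked)
    return r[:n, :n], r[:n, n]   # updated [R | z]
```

The updated estimate is `np.linalg.solve(R_new, z_new)`; time-varying parameters and process noise extend the same triangularization with additional rows, as the report describes.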
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
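The core loop (simulate, fit a parametric approximation to a summary statistic, evaluate that likelihood inside an MCMC sampler) can be sketched with a deliberately simple stand-in model. FORMIND itself is far richer; the Poisson model, the single summary statistic, and the tuning constants below are assumptions made only to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Stand-in stochastic model (a Poisson process, purely illustrative).
    return rng.poisson(theta, size=n)

def synthetic_loglik(theta, obs_mean, n_rep=30):
    # Parametric likelihood approximation: simulate replicates, fit a
    # normal to the summary statistic (the mean), evaluate the data under it.
    sims = [simulate(theta).mean() for _ in range(n_rep)]
    mu, sd = np.mean(sims), np.std(sims) + 1e-9
    return -0.5 * ((obs_mean - mu) / sd) ** 2 - np.log(sd)

def mcmc(obs_mean, n_iter=800, step=0.3):
    # Metropolis-Hastings with the simulation-based likelihood plugged in.
    theta, ll = 1.0, synthetic_loglik(1.0, obs_mean)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        if prop > 0:  # flat prior on theta > 0
            ll_prop = synthetic_loglik(prop, obs_mean)
            if np.log(rng.uniform()) < ll_prop - ll:
                theta, ll = prop, ll_prop
        chain.append(theta)
    return chain
```

Replacing `simulate` with runs of a forest model and the mean with richer summary statistics gives the structure of the approach in the paper.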
The application of virtual prototyping methods to determine the dynamic parameters of mobile robot
NASA Astrophysics Data System (ADS)
Kurc, Krzysztof; Szybicki, Dariusz; Burghardt, Andrzej; Muszyńska, Magdalena
2016-04-01
The paper presents methods used to determine the parameters necessary to build a mathematical model of an underwater robot with a crawler drive. The parameters present in the dynamics equation are determined by means of advanced mechatronic design tools, including CAD/CAE software and MES modules. The virtual prototyping process is described, as well as the various possible uses (design adaptability) depending on the optional accessories added to the vehicle. A mathematical model is presented to show the kinematics and dynamics of the underwater crawler robot, essential for the design stage.
Typecasting catchments: Classification, directionality, and the pursuit of universality
NASA Astrophysics Data System (ADS)
Smith, Tyler; Marshall, Lucy; McGlynn, Brian
2018-02-01
Catchment classification poses a significant challenge to hydrology and hydrologic modeling, restricting widespread transfer of knowledge from well-studied sites. The identification of important physical, climatological, or hydrologic attributes (to varying degrees depending on application/data availability) has traditionally been the focus for catchment classification. Classification approaches are regularly assessed with regard to their ability to provide suitable hydrologic predictions - commonly by transferring fitted hydrologic parameters at a data-rich catchment to a data-poor catchment deemed similar by the classification. While such approaches to hydrology's grand challenges are intuitive, they often ignore the most uncertain aspect of the process - the model itself. We explore catchment classification and parameter transferability and the concept of universal donor/acceptor catchments. We identify the implications of the assumption that the transfer of parameters between "similar" catchments is reciprocal (i.e., non-directional). These concepts are considered through three case studies situated across multiple gradients that include model complexity, process description, and site characteristics. Case study results highlight that some catchments are more successfully used as donor catchments and others are better suited as acceptor catchments. These results were observed for both black-box and process consistent hydrologic models, as well as for differing levels of catchment similarity. Therefore, we suggest that similarity does not adequately satisfy the underlying assumptions being made in parameter regionalization approaches regardless of model appropriateness. Furthermore, we suggest that the directionality of parameter transfer is an important factor in determining the success of parameter regionalization approaches.
Hydrogasification reactor and method of operating same
Hobbs, Raymond; Karner, Donald; Sun, Xiaolei; Boyle, John; Noguchi, Fuyuki
2013-09-10
The present invention provides a system and method for evaluating effects of process parameters on hydrogasification processes. The system includes a hydrogasification reactor, a pressurized feed system, a hopper system, a hydrogen gas source, and a carrier gas source. Pressurized carbonaceous material, such as coal, is fed to the reactor using the carrier gas and reacted with hydrogen to produce natural gas.
Kinetics of process of product separation in closed system with recirculation
NASA Astrophysics Data System (ADS)
Prokopenko, V. S.; Orekhova, T. N.; Goncharov, E. I.; Odobesko, I. A.
2018-03-01
The article models the kinetics of material classification during passage through a closed milling circuit whose cleaning system includes a separator, concentrator, cyclone and a recycle loop. For given parameters, the model predicts the fineness of grading of the finished product.
2012-11-26
alloy and High Hardness steel armor (MIL-STD-46100) were successfully joined by the friction stir welding (FSW) process using a tungsten-rhenium stir tool. Process parameter variation experiments, which included inductive pre-heating, tool design geometry, plunge and traverse
Elements of an algorithm for optimizing a parameter-structural neural network
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2016-06-01
The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations with respect to analytical solutions that are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing data and modelling processes. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.
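A single GMDH selection layer (fit a quadratic polynomial neuron for every pair of inputs, rank the candidates on held-out data, keep the best few as inputs to the next layer) can be sketched as follows. This is the textbook GMDH form, not necessarily the exact variant used in the article.

```python
import numpy as np
from itertools import combinations

def quad_features(xi, xj):
    # The classic GMDH partial description: a full quadratic in two inputs.
    return np.column_stack([np.ones_like(xi), xi, xj, xi**2, xj**2, xi*xj])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    """One GMDH selection layer: returns the kept neuron outputs on the
    training and validation sets, plus the best validation error."""
    candidates = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        F = quad_features(X_tr[:, i], X_tr[:, j])
        coef, *_ = np.linalg.lstsq(F, y_tr, rcond=None)   # least-squares fit
        pred_va = quad_features(X_va[:, i], X_va[:, j]) @ coef
        candidates.append((np.mean((pred_va - y_va) ** 2), i, j, coef))
    candidates.sort(key=lambda c: c[0])                   # external criterion
    best = candidates[:keep]
    out_tr = np.column_stack([quad_features(X_tr[:, i], X_tr[:, j]) @ c
                              for _, i, j, c in best])
    out_va = np.column_stack([quad_features(X_va[:, i], X_va[:, j]) @ c
                              for _, i, j, c in best])
    return out_tr, out_va, candidates[0][0]
```

Stacking such layers until the validation error stops improving yields the self-organizing network structure the article exploits, with no arbitrarily fixed topology.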
Russell, V N L; Green, L E; Bishop, S C; Medley, G F
2013-03-01
A stochastic, individual-based, simulation model of footrot in a flock of 200 ewes was developed that included flock demography, disease processes, host genetic variation for traits influencing infection and disease processes, and bacterial contamination of the environment. Sensitivity analyses were performed using ANOVA to examine the contribution of unknown parameters to outcome variation. The infection rate and bacterial death rate were the most significant factors determining the observed prevalence of footrot, as well as the heritability of resistance. The dominance of infection parameters in determining outcomes implies that observational data cannot be used to accurately estimate the strength of genetic control of underlying traits describing the infection process, i.e. resistance. Further work will allow us to address the potential for genetic selection to control ovine footrot. Copyright © 2012 Elsevier B.V. All rights reserved.
The Effect of Gravity on the Combustion Synthesis of Porous Biomaterials
NASA Technical Reports Server (NTRS)
Castillo, M.; Zhang, X.; Moore, J. J.; Schowengerdt, F. D.; Ayers, R. A.
2003-01-01
Production of highly porous composite materials by traditional materials processing is limited by difficult processing techniques. This work investigates the use of self-propagating high-temperature (combustion) synthesis (SHS) to create porous tricalcium phosphate (Ca3(PO4)2), TiB-Ti, and NiTi in low and microgravity. Combustion synthesis provides the ability to use set processing parameters to engineer the required porous structure suitable for bone repair or replacement. The processing parameters include green density, particle size, gasifying agents, composition, and gravity. The advantage of the TiB-Ti system is the high level of porosity achieved together with a modulus that can be controlled by both composition (TiB-Ti) and porosity. At the same time, NiTi exhibits shape memory properties. SHS of biomaterials allows the required porosity, resorption properties, and specific mechanical properties to be engineered into the composite materials, yielding a better biomaterial.
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-03-16
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system and the system included paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be described as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor on hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches for cost, efficiency, and simplicity.
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-01-01
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system and the system included paraffin wax, low density polyethylene, and stearic acid (PW–LDPE–SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be described as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor on hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches for cost, efficiency, and simplicity. PMID:28772665
An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.
Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero
2017-04-01
The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
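One daily metric of the kind such a workflow records can be sketched as follows; the ROI geometry, the SNR definition, and the drift threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def daily_snr(image, roi=20):
    """One daily QA metric (sketch): mean signal in a central ROI of the
    phantom image divided by the noise estimated in a corner ROI."""
    h, w = image.shape
    signal = image[h // 2 - roi:h // 2 + roi, w // 2 - roi:w // 2 + roi].mean()
    noise = image[:roi, :roi].std()
    return signal / noise

def flag_drift(series, window=10, tol=0.15):
    """Flag the latest value if it deviates more than a fractional tolerance
    from the trailing-window mean - a simple automated stability check."""
    baseline = np.mean(series[-window - 1:-1])
    return abs(series[-1] - baseline) / baseline > tol
```

Running `daily_snr` on each morning's phantom image and `flag_drift` on the accumulated series gives exactly the kind of short- and long-term stability trace a web-based QA report can display.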
NASA Astrophysics Data System (ADS)
Pieczara, Łukasz
2015-09-01
The paper presents the results of analysis of surface roughness parameters in the Krosno Sandstones of Mucharz, southern Poland. It was aimed at determining whether these parameters are influenced by structural features (mainly the laminar distribution of mineral components and directional distribution of non-isometric grains) and fracture processes. The tests applied in the analysis enabled us to determine and describe the primary statistical parameters used in the quantitative description of surface roughness, as well as specify the usefulness of contact profilometry as a method of visualizing spatial differentiation of fracture processes in rocks. These aims were achieved by selecting a model material (Krosno Sandstones from the Górka-Mucharz Quarry) and an appropriate research methodology. The schedule of laboratory analyses included: identification analyses connected with non-destructive ultrasonic tests, aimed at the preliminary determination of rock anisotropy, strength point load tests (cleaved surfaces were obtained due to destruction of rock samples), microscopic analysis (observation of thin sections in order to determine the mechanism of inducing fracture processes) and a test method of measuring surface roughness (two- and three-dimensional diagrams, topographic and contour maps, and statistical parameters of surface roughness). The highest values of roughness indicators were achieved for surfaces formed under the influence of intragranular fracture processes (cracks propagating directly through grains). This is related to the structural features of the Krosno Sandstones (distribution of lamination and bedding).
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
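The SRC step described above (Monte Carlo sampling of uncertain inputs followed by a regression on standardized inputs and outputs) can be sketched generically; the uniform sampling and the example model in the test are assumptions for illustration, not the KDP crystallization model.

```python
import numpy as np

rng = np.random.default_rng(1)

def src_sensitivity(model, bounds, n=2000):
    """Standardized regression coefficients (SRC) from Monte Carlo samples.

    bounds: dict of parameter name -> (low, high) for uniform sampling.
    Returns a dict of SRCs; the squared SRCs sum to roughly the R^2 of
    the linear approximation, ranking parameter importance."""
    names = list(bounds)
    X = np.column_stack([rng.uniform(lo, hi, n) for lo, hi in bounds.values()])
    y = np.array([model(dict(zip(names, row))) for row in X])
    Xs = (X - X.mean(0)) / X.std(0)          # standardize inputs
    ys = (y - y.mean()) / y.std()            # standardize output
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys,
                               rcond=None)
    return dict(zip(names, beta[1:]))
```

For a near-linear response, a large |SRC| marks a dominant parameter (e.g. a nucleation or growth order constant); strongly nonlinear responses would call for the Morris screening the paper also employs.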
NASA Astrophysics Data System (ADS)
Huang, Wei-Ren; Huang, Shih-Pu; Tsai, Tsung-Yueh; Lin, Yi-Jyun; Yu, Zong-Ru; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Young, Hong-Tsu
2017-09-01
Spherical lenses introduce spherical aberration, reducing optical performance; consequently, practical optical systems combine several spherical lenses for aberration correction, which increases system volume. In modern optical systems, aspherical lenses are widely used because of their high optical performance with fewer optical components. However, aspherical surfaces cannot be fabricated by the traditional full-aperture polishing process because of their varying curvature, so sub-aperture computer numerical control (CNC) polishing has been adopted for aspherical surface fabrication in recent years. Mid-spatial frequency (MSF) error normally accompanies CNC polishing, and MSF surface texture decreases optical performance in high-precision optical systems, especially for short-wavelength applications. Based on a bonnet polishing CNC machine, this study focuses on the relationship between MSF surface texture and CNC polishing parameters, which include feed rate, head speed, track spacing and path direction. Power spectral density (PSD) analysis is used to judge the MSF level caused by those polishing parameters. The test results show that controlling the removal depth of a single polishing path through the feed rate, and avoiding same-direction polishing paths when a higher total removal depth is required, can efficiently reduce the MSF error. To verify the polishing parameters, a correction polishing process was divided into several runs with different path directions. Compared to a one-shot polishing run, the multi-direction path polishing plan produced better surface quality on the optics.
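The PSD analysis used to judge MSF level can be sketched for a 1-D surface height profile; the normalization and the band limits below are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def psd_1d(profile, dx):
    """One-sided power spectral density of a surface height profile.

    profile: height samples, dx: sample spacing (e.g. mm).
    Returns spatial frequencies and the PSD estimate."""
    n = len(profile)
    spec = np.fft.rfft(profile - np.mean(profile))   # remove DC (piston)
    psd = (np.abs(spec) ** 2) * dx / n
    freqs = np.fft.rfftfreq(n, dx)
    return freqs, psd

def band_power(freqs, psd, f_lo, f_hi):
    """Integrated PSD over a spatial-frequency band, e.g. the MSF band."""
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])
```

Comparing `band_power` over the MSF band before and after a polishing run quantifies how much a given combination of feed rate, track spacing and path direction has suppressed (or imprinted) mid-spatial-frequency texture.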
A Kinematic Calibration Process for Flight Robotic Arms
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.
Chattree, A; Barbour, J A; Thomas-Gibson, S; Bhandari, P; Saunders, B P; Veitch, A M; Anderson, J; Rembacken, B J; Loughrey, M B; Pullan, R; Garrett, W V; Lewis, G; Dolwani, S; Rutter, M D
2017-01-01
The management of large non-pedunculated colorectal polyps (LNPCPs) is complex, with widespread variation in management and outcome, even amongst experienced clinicians. Variations in the assessment and decision-making processes are likely to be a major factor in this variability. The creation of a standardized minimum dataset to aid decision-making may therefore result in improved clinical management. An official working group of 13 multidisciplinary specialists was appointed by the Association of Coloproctology of Great Britain and Ireland (ACPGBI) and the British Society of Gastroenterology (BSG) to develop a minimum dataset on LNPCPs. The literature review used to structure the ACPGBI/BSG guidelines for the management of LNPCPs was used by a steering subcommittee to identify various parameters pertaining to the decision-making processes in the assessment and management of LNPCPs. A modified Delphi consensus process was then used for voting on proposed parameters over multiple voting rounds with at least 80% agreement defined as consensus. The minimum dataset was used in a pilot process to ensure rigidity and usability. A 23-parameter minimum dataset with parameters relating to patient and lesion factors, including six parameters relating to image retrieval, was formulated over four rounds of voting with two pilot processes to test rigidity and usability. This paper describes the development of the first reported evidence-based and expert consensus minimum dataset for the management of LNPCPs. It is anticipated that this dataset will allow comprehensive and standardized lesion assessment to improve decision-making in the assessment and management of LNPCPs. Colorectal Disease © 2016 The Association of Coloproctology of Great Britain and Ireland.
Vanhoorne, V; Vanbillemont, B; Vercruysse, J; De Leersnyder, F; Gomes, P; Beer, T De; Remon, J P; Vervaet, C
2016-05-30
The aim of this study was to evaluate the potential of twin screw granulation for the continuous production of controlled release formulations with hydroxypropylmethylcellulose as hydrophilic matrix former. Metoprolol tartrate was included in the formulation as very water soluble model drug. A premix of metoprolol tartrate, hydroxypropylmethylcellulose and filler (ratio 20/20/60, w/w) was granulated with demineralized water via twin screw granulation. After oven drying and milling, tablets were produced on a rotary Modul™ P tablet press. A D-optimal design (29 experiments) was used to assess the influence of process (screw speed, throughput, barrel temperature and screw design) and formulation parameters (starch content of the filler) on the process (torque), granule (size distribution, shape, friability, density) and tablet (hardness, friability and dissolution) critical quality attributes. The torque was dominated by the number of kneading elements and throughput, whereas screw speed and filling degree only showed a minor influence on torque. Addition of screw mixing elements after a block of kneading elements improved the yield of the process before milling as it resulted in less oversized granules and also after milling as less fines were present. Temperature was also an important parameter to optimize as a higher temperature yielded less fines and positively influenced the aspect ratio. The shape of hydroxypropylmethylcellulose granules was comparable to that of immediate release formulations. Tensile strength and friability of tablets were not dependent on the process parameters. The use of starch as filler was not beneficial with regard to granule and tablet properties. Complete drug release was obtained after 16-20h and was independent of the design's parameters. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Yi-Mu, E-mail: ymlee@nuu.edu.t; Yang, Hsi-Wen
2011-03-15
High-transparency and high-quality ZnO nanorod arrays were grown on ITO substrates by a two-step chemical bath deposition (CBD) method. The effects of processing parameters, including reaction temperature (25-95 °C) and solution concentration (0.01-0.1 M), on the crystal growth, alignment, and optical and electrical properties were systematically investigated. These process parameters proved critical to the growth, orientation and aspect ratio of the nanorod arrays, yielding different structural and optical properties. Experimental results reveal that the hexagonal ZnO nanorod arrays prepared at a reaction temperature of 95 °C and a solution concentration of 0.03 M possess the highest aspect ratio (~21), well-aligned orientation and optimum optical properties. Moreover, heterojunction electrodes based on the ZnO nanorod arrays and solid-state dye-sensitized solar cells (SS-DSSCs) were fabricated with improved optoelectrical performance. Graphical abstract: the ZnO nanorod arrays grown by low-temperature CBD demonstrate good alignment, high aspect ratio (L/D ~ 21) and excellent optical transmittance. Research highlights: the processing parameters of CBD govern the growth of ZnO nanorod arrays; the optimized CBD parameters are a 0.03 M solution concentration and a reaction temperature of 95 °C; the prepared ZnO samples are well aligned with a high aspect ratio (L/D ~ 21); an n-ZnO/p-NiO heterojunction shows good rectifying behavior and low leakage current; the SS-DSSC has a J_SC of 0.31 mA/cm², a V_OC of 590 mV and an improved η of 0.059%.
Proceedings of the 3rd Annual SCOLE Workshop
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr. (Compiler)
1987-01-01
Topics addressed include: modeling and controlling the Spacecraft Control Laboratory Experiment (SCOLE) configurations; slewing maneuvers; mathematical models; vibration damping; gravitational effects; structural dynamics; finite element method; distributed parameter system; on-line pulse control; stability augmentation; and stochastic processes.
Chen, Yung-Chuan; Hsiao, Chih-Kun; Ciou, Ji-Sih; Tsai, Yi-Jung; Tu, Yuan-Kun
2016-11-01
This study concerns the effects of different drilling parameters of pilot drills and twist drills on the temperature rise of alveolar bones during dental implant procedures. The drilling parameters studied here include the feed rate and rotation speed of the drill. The bone temperature distribution was analyzed through experiments and numerical simulations of the drilling process. In this study, a three dimensional (3D) elasto-plastic dynamic finite element model (DFEM) was proposed to investigate the effects of drilling parameters on the bone temperature rise. In addition, the FE model is validated with drilling experiments on artificial human bones and porcine alveolar bones. The results indicate that 3D DFEM can effectively simulate the bone temperature rise during the drilling process. During the drilling process with pilot drills or twist drills, the maximum bone temperature occurred in the region of the cancellous bones close to the cortical bones. The feed rate was one of the important factors affecting the time when the maximum bone temperature occurred. Our results also demonstrate that the elevation of bone temperature was reduced as the feed rate increased and the drill speed decreased, which also effectively reduced the risk region of osteonecrosis. These findings can serve as a reference for dentists in choosing drilling parameters for dental implant surgeries. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Byun, Bo-Ram; Kim, Yong-Il; Maki, Koutaro; Son, Woo-Sung
2015-01-01
This study aimed to examine the correlation between skeletal maturation status and parameters from the odontoid process/body of the second cervical vertebra and the bodies of the third and fourth cervical vertebrae, and to build multiple regression models able to estimate skeletal maturation status in Korean girls. Hand-wrist radiographs and cone beam computed tomography (CBCT) images were obtained from 74 Korean girls (6–18 years of age). CBCT-generated cervical vertebral maturation (CVM) was used to demarcate the odontoid process and the body of the second cervical vertebra, based on the dentocentral synchondrosis. Correlation coefficient analysis and multiple linear regression analysis were used for each parameter of the cervical vertebrae (P < 0.05). Forty-seven of 64 parameters from CBCT-generated CVM (independent variables) exhibited statistically significant correlations (P < 0.05). The multiple regression model with the greatest R² had six parameters (PH2/W2, UW2/W2, (OH+AH2)/LW2, UW3/LW3, D3, and H4/W4) as independent variables, with a variance inflation factor (VIF) of <2. CBCT-generated CVM was able to include parameters from the second cervical vertebral body and odontoid process, respectively, for the multiple regression models. This suggests that quantitative analysis might be used to estimate skeletal maturation status. PMID:25878721
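The statistical machinery here is ordinary multiple linear regression with predictors screened by variance inflation factor (VIF < 2). A hedged sketch on synthetic data follows; the predictors are stand-ins, not the study's vertebral measurements:

```python
import numpy as np

# Illustrative OLS multiple regression plus VIF screening on synthetic data.
# Variable names and coefficients are hypothetical, not the study's.
rng = np.random.default_rng(1)
n = 74                                      # sample size as in the study
X = rng.normal(size=(n, 3))                 # three candidate predictors
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

# OLS fit with intercept
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# VIF_j = 1 / (1 - R^2_j), where R^2_j regresses predictor j on the others
def vif(X, j):
    others = np.delete(X, j, axis=1)
    Aj = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(Aj, X[:, j], rcond=None)
    resid = X[:, j] - Aj @ coef
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(3)]        # near 1 here: predictors independent
```

With nearly independent predictors, as simulated here, every VIF sits close to 1, which is the situation the study's <2 criterion is designed to certify.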
Mathematical estimation of melt depth in conduction mode of laser spot remelting process
NASA Astrophysics Data System (ADS)
Hadi, Iraj
2012-12-01
A one-dimensional mathematical model based on the front tracking method was developed to predict the melt depth as a function of internal and external parameters of the laser spot remelting process in conduction mode. Power density, pulse duration, and the thermophysical properties of the material, including thermal diffusivity, melting point, latent heat, and absorption coefficient, have been taken into account in the model of this article. By comparing the theoretical results and experimental welding data for commercially pure nickel and titanium plates, the validity of the developed model was examined. The comparison shows reasonably good agreement between theory and experiment. For the sake of simplicity, a graphical technique was presented to obtain the melt depth of various materials for any given power density and pulse duration. In the graphical technique, two dimensionless constants are used: the Stefan number (Ste) and an introduced constant named the laser power factor (LPF). Indeed, all of the internal and external parameters have been gathered in the LPF. The effects of power density and pulse duration on the variation of melt depth for different materials such as aluminum, copper, and stainless steel were investigated. Additionally, appropriate expressions were extracted to describe the minimum power density and the time to reach the melting point in terms of the process parameters. A simple expression is also extracted to estimate the thickness of the mushy zone for alloys.
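The abstract does not define its dimensionless groups explicitly; in the usual convention for melting problems (an assumption here), the Stefan number is

```latex
\mathrm{Ste} = \frac{c_p\,(T_m - T_0)}{L_f}
```

where $c_p$ is the specific heat, $T_m$ the melting point, $T_0$ the initial temperature and $L_f$ the latent heat of fusion. The laser power factor (LPF) is stated to gather the remaining internal and external parameters (power density, pulse duration, absorptivity, diffusivity), but its explicit form is not given in the abstract.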
Surface Damage and Treatment by Impact of a Low Temperature Nitrogen Jet
NASA Astrophysics Data System (ADS)
Laribou, Hicham; Fressengeas, Claude; Entemeyer, Denis; Jeanclaude, Véronique; Tazibt, Abdel
2011-01-01
Nitrogen jets under high pressure and low temperature have been introduced recently. The process consists of projecting onto a surface a low-temperature jet obtained by releasing liquid nitrogen stored in a high-pressure tank (e.g. 3000 bar) through a nozzle. It can be used in a range of industrial applications, including surface treatment and material removal through cutting, drilling, stripping and cleaning. The process does not generate waste other than the removed matter, and it releases only neutral gas into the atmosphere. This work is aimed at understanding the mechanisms of the interaction between the jet and the material surface. Depending on the impacted material, the thermo-mechanical shock and blast effect induced by the jet can activate a wide range of damage mechanisms, including cleavage, crack nucleation and spalling, as well as void expansion and localized ductile failure. The test parameters (standoff distance, dwell time, operating pressure) play a role in selecting the dominant damage mechanism, but combinations of these various modes are usually present. Surface treatment through phase transformation or grain fragmentation in a layer below the surface can also be obtained by adequate tuning of the process parameters. In the current study, work is undertaken to map the damage mechanisms in metallic materials as well as the influence of the test parameters on damage, along with measurements of the thermo-mechanical conditions (impact force, temperature) in the impacted area.
Volcanic Ash Data Assimilation System for Atmospheric Transport Model
NASA Astrophysics Data System (ADS)
Ishii, K.; Shimbori, T.; Sato, E.; Tokumoto, T.; Hayashi, Y.; Hashimoto, A.
2017-12-01
The Japan Meteorological Agency (JMA) has two operations for volcanic ash forecasts: Volcanic Ash Fall Forecast (VAFF) and Volcanic Ash Advisory (VAA). In these operations, the forecasts are calculated by atmospheric transport models including the advection process, the turbulent diffusion process, the gravitational fall process and the deposition process (wet/dry). The initial distribution of volcanic ash in the models is the most important but most uncertain factor. In operations, the model of Suzuki (1983), with many empirical assumptions, is adopted for the initial distribution. This adversely affects the reconstruction of actual eruption plumes. We are developing a volcanic ash data assimilation system using weather radars and meteorological satellite observations, in order to improve the initial distribution for the atmospheric transport models. Our data assimilation system is based on the three-dimensional variational data assimilation method (3D-Var). The analysis variables are ash concentration and size distribution parameters, which are mutually independent. The radar observations are expected to provide three-dimensional parameters such as ash concentration and parameters of the ash particle size distribution. On the other hand, the satellite observations are anticipated to provide two-dimensional parameters of ash clouds such as mass loading, top height and particle effective radius. In this study, we estimate the thickness of ash clouds using the vertical wind shear from JMA numerical weather prediction, and apply it in the volcanic ash data assimilation system.
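A 3D-Var scheme of the kind described minimizes the standard variational cost function (shown here in its textbook form; the specific operators and covariances used by JMA are not given in the abstract):

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
 \;+\; \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm T}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```

where $\mathbf{x}$ holds the analysis variables (here, ash concentration and size-distribution parameters), $\mathbf{x}_b$ is the background state, $\mathbf{y}$ the radar and satellite observations, $H$ the observation operator, and $\mathbf{B}$ and $\mathbf{R}$ the background- and observation-error covariance matrices.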
Design and implementation of a biomedical image database (BDIM).
Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R
1988-01-01
We developed a biomedical image database (BDIM) which proposes a standardized representation of value arrays such as images and curves, and of their associated parameters, independently of their acquisition mode, to make their transmission and processing easier. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint so that the BDIM could be incorporated in a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct bases of objects, linked together via a gateway. The first one manages arrays according to their storage mode: long-term storage on mass storage devices that may optionally be on-line and, for consultation, partial copies of long-term stored arrays on hard disk. The second one manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations, some of which agree with the groups defined by the ACR/NEMA. The other relations describe objects resulting from the processing of initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage and their pathname constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and retrieval module for arrays (alone or in sequences) accesses the relations via a level that includes a dictionary managed by ORACLE; this dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Jamlongkul, P.; Wannawichian, S.
2017-12-01
Earth's aurora in the low-latitude region was studied via time variations of oxygen emission spectra, simultaneously with solar wind data. The behavior of the spectrum intensity, in correspondence with the solar wind conditions, could be a trace of the aurora in the low-latitude region, including some effects of highly energetic auroral particles. Oxygen emission spectral lines were observed with the Medium Resolution Echelle Spectrograph (MRES) on the 2.4-m telescope at the Thai National Observatory, Inthanon Mountain, Chiang Mai, Thailand, during 1-5 LT on 5 and 6 February 2017. The observed spectral lines were calibrated with the Dech95 2D image processing program and the Dech-Fits spectra processing program, for spectrum image processing and spectrum wavelength calibration, respectively. The variations of the observed intensities each day were compared with solar wind parameters, namely the magnitude of the IMF (|B_IMF|) and its RTN components (B_R, B_T, B_N), the ion density (ρ), the plasma flow pressure (P), and the speed (v). The correlation coefficients between the oxygen spectral emissions and the different solar wind parameters were found to vary in both positive and negative senses.
Shen, Wen-Wei; Lin, Yu-Min; Wu, Sheng-Tsai; Lee, Chia-Hsin; Huang, Shin-Yi; Chang, Hsiang-Hung; Chang, Tao-Chih; Chen, Kuan-Neng
2018-08-01
In this study, through-silicon-via (TSV)-less interconnection using the fan-out wafer-level packaging (FO-WLP) technology and a novel redistribution layer (RDL)-first wafer-level packaging are investigated. Since warpage of the molded wafer is a critical issue and needs to be optimized for process integration, an evaluation of the warpage of a 12-inch wafer using finite element analysis (FEA) at various parameters is presented. Related parameters include the geometric dimensions (such as chip size, chip number, chip thickness, and mold thickness), material selection and structure optimization. The effect of glass carriers with various coefficients of thermal expansion (CTE) is also discussed. Chips are bonded onto a 12-inch reconstituted wafer, which includes 2 RDL layers, 3 passivation layers, and micro bumps, followed by an epoxy molding compound process. Furthermore, an optical surface inspector is adopted to measure the surface profile, and the results are compared with those from simulation. In order to examine the quality of the TSV-less interconnection structure, electrical measurements are conducted and the respective results are presented.
Indicator of reliability of power grids and networks for environmental monitoring
NASA Astrophysics Data System (ADS)
Shaptsev, V. A.
2017-10-01
The energy supply of mining enterprises relies, in particular, on power networks, while environmental monitoring relies on the data network between observers and facilitators. The weather and the operating conditions of these networks change randomly over time: temperature, humidity, wind strength and other stochastic processes interact in different segments of the power grid. The article presents analytical expressions for the probability of failure of the power grid as a whole or of a particular segment. These expressions can contain one or more parameters of the operating conditions, simulated by the Monte Carlo method. In some cases an explicit mathematical formula suitable for computer calculation can be obtained. In conclusion, an expression including the characteristic function of the probability distribution of one random parameter, for example wind, temperature or humidity, is given. The parameters of this characteristic function can be determined from retrospective or dedicated observations (measurements).
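The Monte Carlo idea sketched above can be illustrated in a few lines: sample a random operating condition and count how often it exceeds an assumed fragility threshold for a segment. The distribution, scale and threshold below are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Toy Monte Carlo estimate of a segment's failure probability under a random
# operating condition (here wind speed). Distribution and threshold are
# hypothetical stand-ins for the paper's operating-condition models.
rng = np.random.default_rng(2)

n_trials = 100_000
wind = rng.weibull(2.0, size=n_trials) * 10.0   # sampled wind speeds, m/s
threshold = 25.0                                # segment assumed to fail above this
p_fail = np.mean(wind > threshold)              # Monte Carlo failure probability
```

With several correlated condition parameters one would sample them jointly; the analytical expressions in the paper then serve as a cross-check on such estimates.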
Sensitivity analysis for best-estimate thermal models of vertical dry cask storage systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVoe, Remy R.; Robb, Kevin R.; Skutnik, Steven E.
2017-07-08
Loading requirements for dry cask storage of spent nuclear fuel are driven primarily by decay heat capacity limitations, which themselves are determined through recommended limits on peak cladding temperature within the cask. This study examines the relative sensitivity of peak material temperatures within the cask to parameters that influence both the stored fuel residual decay heat as well as heat removal mechanisms. Here, these parameters include the detailed reactor operating history parameters (e.g., soluble boron concentrations and the presence of burnable poisons) as well as factors that influence heat removal, including non-dominant processes (such as conduction from the fuel basket to the canister and radiation within the canister) and ambient environmental conditions. By examining the factors that drive heat removal from the cask alongside well-understood factors that drive decay heat, it is therefore possible to make a contextual analysis of the most important parameters to evaluation of peak material temperatures within the cask.
NASA Astrophysics Data System (ADS)
Zhao, W.; Wang, H. T.; Liu, Z. G.; Chu, M. S.; Ying, Z. W.; Tang, J.
2017-10-01
A new type of blast furnace burden, named VTM-CCB (vanadium titanomagnetite carbon composite hot briquette), is proposed and optimized in this paper. The preparation of VTM-CCB comprises two steps: hot briquetting and heat treatment. The hot-briquetting and heat-treatment parameters are systematically optimized based on the Taguchi method and single-factor experiments. The optimized preparation parameters of VTM-CCB include a hot-briquetting temperature of 300°C, a coal particle size of <0.075 mm, a vanadium titanomagnetite particle size of <0.075 mm, a coal-addition ratio of 28.52%, a heat-treatment temperature of 500°C and a heat-treatment time of 3 h. The compressive strength of VTM-CCB prepared with the optimized parameters reaches 2450 N, which meets the requirement of blast furnace ironmaking. These integrated parameters provide a theoretical basis for the production and application of VTM-CCB in blast furnace smelting.
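The Taguchi method ranks parameter levels by a signal-to-noise ratio; for a response like compressive strength, the usual choice is the "larger is better" S/N. A sketch with made-up strength replicates (the paper's raw measurements are not given in the abstract):

```python
import numpy as np

# Taguchi "larger is better" signal-to-noise ratio:
#   S/N = -10 * log10( mean(1 / y^2) )
# The replicate strength values below are hypothetical, for illustration only.
def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

strengths_a = [2450.0, 2400.0, 2500.0]   # hypothetical replicates, level A
strengths_b = [1800.0, 1900.0, 1850.0]   # hypothetical replicates, level B
sn_a = sn_larger_is_better(strengths_a)  # higher S/N => preferred level
sn_b = sn_larger_is_better(strengths_b)
```

In a Taguchi analysis, the S/N is averaged per factor level across the orthogonal array, and the level with the highest mean S/N is selected for each factor.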
Biomolecular Force Field Parameterization via Atoms-in-Molecule Electron Density Partitioning.
Cole, Daniel J; Vilseck, Jonah Z; Tirado-Rives, Julian; Payne, Mike C; Jorgensen, William L
2016-05-10
Molecular mechanics force fields, which are commonly used in biomolecular modeling and computer-aided drug design, typically treat nonbonded interactions using a limited library of empirical parameters that are developed for small molecules. This approach does not account for polarization in larger molecules or proteins, and the parametrization process is labor-intensive. Using linear-scaling density functional theory and atoms-in-molecule electron density partitioning, environment-specific charges and Lennard-Jones parameters are derived directly from quantum mechanical calculations for use in biomolecular modeling of organic and biomolecular systems. The proposed methods significantly reduce the number of empirical parameters needed to construct molecular mechanics force fields, naturally include polarization effects in charge and Lennard-Jones parameters, and scale well to systems comprised of thousands of atoms, including proteins. The feasibility and benefits of this approach are demonstrated by computing free energies of hydration, properties of pure liquids, and the relative binding free energies of indole and benzofuran to the L99A mutant of T4 lysozyme.
Interpreting the Weibull fitting parameters for diffusion-controlled release data
NASA Astrophysics Data System (ADS)
Ignacio, Maxime; Chubynsky, Mykyta V.; Slater, Gary W.
2017-11-01
We examine the diffusion-controlled release of molecules from passive delivery systems using both analytical solutions of the diffusion equation and numerically exact Lattice Monte Carlo data. For very short times, the release process follows a √t power law, typical of diffusion processes, while the long-time asymptotic behavior is exponential. The crossover time between these two regimes is determined by the boundary conditions and initial loading of the system. We show that while the widely used Weibull function provides a reasonable fit (in terms of statistical error), it has two major drawbacks: (i) it does not capture the correct limits and (ii) there is no direct connection between the fitting parameters and the properties of the system. Using a physically motivated interpolating fitting function that correctly includes both time regimes, we are able to predict the values of the Weibull parameters which allows us to propose a physical interpretation.
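The limit mismatch in point (i) can be made concrete with the Weibull release profile f(t) = 1 - exp(-(t/τ)^β): choosing β = 1/2 reproduces the short-time √t law, but the tail is then a stretched exponential rather than the pure exponential that diffusion predicts. A short sketch (arbitrary parameters, not fitted to the paper's data):

```python
import numpy as np

# Weibull release profile; beta = 1/2 matches the sqrt(t) short-time regime.
def weibull_release(t, tau=1.0, beta=0.5):
    return 1.0 - np.exp(-(t / tau) ** beta)

# Short-time behavior: f(t) ~ (t/tau)^beta = sqrt(t) for beta = 1/2
t_short = 1e-4
short = weibull_release(t_short)
sqrt_law = np.sqrt(t_short)

# Long-time tail: -log(1 - f) grows like sqrt(t), not like t, so the tail is
# a stretched exponential rather than the pure exponential of diffusion.
t_long = np.array([100.0, 400.0])
tails = -np.log(1.0 - weibull_release(t_long))
ratio = tails[1] / tails[0]          # sqrt(400/100) = 2; pure exponential: 4
```

No single (τ, β) pair can satisfy both limits, which is the motivation for the interpolating function the authors propose instead.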
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G
2016-05-01
With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal value of these parameters differs considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
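The core idea, switching the process-noise variance q according to a detected motion intensity, can be sketched with a scalar Kalman filter. This is not the paper's filter: the intensity measure (windowed signal variance standing in for the frequency analysis), the thresholds and all noise values are illustrative assumptions:

```python
import numpy as np

# 1-D Kalman filter with a random-walk state model; q is the process-noise
# variance, r the measurement-noise variance. Values are illustrative.
def kalman_1d(z, q, r=0.5, x0=0.0):
    x, p, out = x0, 1.0, []
    for zk in z:
        p += q                       # predict
        k = p / (p + r)              # Kalman gain
        x += k * (zk - x)            # measurement update
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
true = np.concatenate([np.zeros(200), np.linspace(0.0, 5.0, 200)])  # rest, then motion
z = true + rng.normal(scale=0.7, size=true.size)

def pick_q(window):                  # crude intensity -> q switching rule
    return 0.05 if np.var(window) > 1.0 else 1e-4

est_adaptive = np.concatenate(
    [kalman_1d(z[:200], pick_q(z[:200])),
     kalman_1d(z[200:], pick_q(z[200:]))])
est_fixed = kalman_1d(z, 1e-4)       # one fixed small q throughout

err_adaptive = np.mean(np.abs(est_adaptive - true))
err_fixed = np.mean(np.abs(est_fixed - true))
```

The fixed small-q filter smooths the static phase well but lags badly once motion starts; raising q when the detected intensity is high removes most of that lag, which is the effect the paper exploits for orientation estimation.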
GilPavas, Edison; Molina-Tirado, Kevin; Gómez-García, Miguel Angel
2009-01-01
An electrocoagulation process was used for the treatment of oily wastewater generated by an automotive industry in Medellín (Colombia). An electrochemical cell consisting of four parallel electrodes (Fe and Al) in bipolar configuration was implemented. A multifactorial experimental design was used for evaluating the influence of several parameters including: type and arrangement of electrodes, pH, and current density. Oil and grease removal was defined as the response variable for the statistical analysis. Additionally, the BOD(5), COD, and TOC were monitored during the treatment process. According to the results, at the optimum parameter values (current density = 4.3 mA/cm(2), distance between electrodes = 1.5 cm, Fe as anode, and pH = 12) it was possible to reach ca. 95% oil and grease removal, with COD removal and mineralization of 87.4% and 70.6%, respectively. A final biodegradability (BOD(5)/COD) of 0.54 was reached.
Thermal effects of laser marking on microstructure and corrosion properties of stainless steel.
Švantner, M; Kučera, M; Smazalová, E; Houdková, Š; Čerstvý, R
2016-12-01
Laser marking is an advanced technique used for modification of surface optical properties. This paper presents research on the influence of laser marking on the corrosion properties of stainless steel. Processes during the laser beam-surface interaction cause structure and color changes and can also be responsible for reduction of corrosion resistance of the surface. Corrosion tests, roughness, microscopic, energy dispersive x-ray, grazing incidence x-ray diffraction, and ferrite content analyses were carried out. It was found that increasing heat input is the most crucial parameter regarding the degradation of corrosion resistance of stainless steel. Other relevant parameters include the pulse length and pulse frequency. The authors found a correlation between laser processing parameters, grazing incidence x-ray measurement, ferrite content, and corrosion resistance of the affected surface. Possibilities and limitations of laser marking of stainless steel in the context of the reduction of its corrosion resistance are discussed.
NASA Astrophysics Data System (ADS)
Meléndez, L. V.; Cabanzo, R.; Mejía-Ospino, E.; Guzmán, A.
2016-02-01
Eight vacuum residues and their delayed coking liquid products from Colombian crudes were studied by infrared spectroscopy with attenuated total reflectance (FTIR-ATR) and principal component analysis (PCA). For the samples, the structural parameters of aromaticity factor (fa), aliphaticity (A2500-3100cm-1), aromatic condensation degree (GCA), length of aliphatic chains (LCA) and aliphatic chain length associated with aromatics (LACAR) were determined through a methodology that includes preprocessing of the spectroscopic data, identification of the regions of the IR spectra with the greatest variance using PCA, and model molecular patterns. The parameters were compared with the results obtained from proton magnetic resonance (1H-NMR) and 13C-NMR. The results showed the influence and correlation of the structural parameters with some physicochemical properties such as API gravity, weight percent sulphur (% S) and Conradson carbon content (% CCR).
Relativistic thermal plasmas - Effects of magnetic fields
NASA Technical Reports Server (NTRS)
Araki, S.; Lightman, A. P.
1983-01-01
Processes and equilibria in finite, relativistic, thermal plasmas are investigated, taking into account electron-positron creation and annihilation, photon production by internal processes, and photon production by a magnetic field. Inclusion of the latter extends previous work on such plasmas. The basic relations for thermal, Comptonized synchrotron emission are analyzed, including emission and absorption without Comptonization, Comptonized thermal synchrotron emission, and the Comptonized synchrotron and bremsstrahlung luminosities. Pair equilibria are calculated, including approximations and dimensionless parameters, the pair balance equation, maximum temperatures and field strengths, and individual models and cooling curves.
Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E
2014-02-11
Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.
2014-08-12
Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
Towards simplification of hydrologic modeling: Identification of dominant processes
Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.
2016-01-01
The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS, and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and by model performance statistic (mean, coefficient of variation, and autoregressive lag 1). The identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many processes.
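The Fourier amplitude sensitivity test estimates variance-based first-order sensitivity indices. A brute-force toy version of that quantity, estimated by conditional variances on a simple additive test function rather than by the FAST algorithm or PRMS itself, can illustrate what the indices measure:

```python
import numpy as np

# First-order sensitivity S_i = Var(E[Y | X_i]) / Var(Y), estimated by
# binning each input and taking the variance of the per-bin means of Y.
# The "model" is a made-up additive function, not a hydrologic model.
rng = np.random.default_rng(4)

def model(x1, x2, x3):
    return 5.0 * x1 + 1.0 * x2 + 0.1 * x3   # x1 dominant, x3 nearly inert

n, m = 200, 200                              # bins, samples per bin
xs = rng.uniform(size=(3, n * m))
y = model(*xs)
var_y = y.var()

def first_order(i):
    order = np.argsort(xs[i])                # sort samples by X_i
    y_sorted = y[order].reshape(n, m)        # m samples per X_i bin
    return y_sorted.mean(axis=1).var() / var_y

s = [first_order(i) for i in range(3)]       # s[0] near 0.96, s[2] near 0
```

The ranking of the indices is what the study uses: a process tied to a parameter with a large index is "dominant," while parameters with indices near zero can be disregarded.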
Içten, Elçin; Giridhar, Arun; Nagy, Zoltan K; Reklaitis, Gintaras V
2016-04-01
The features of a drop-on-demand-based system developed for the manufacture of melt-based pharmaceuticals have been previously reported. In this paper, a supervisory control system, designed to ensure reproducible production of high-quality melt-based solid oral dosages, is presented. This control system enables the production of individual dosage forms with the desired critical quality attributes, namely the amount of active ingredient and drug morphology, by monitoring and controlling critical process parameters such as drop size and product and process temperatures. The effects of these process parameters on the final product quality are investigated, and the properties of the produced dosage forms are characterized using various techniques, such as Raman spectroscopy, optical microscopy, and dissolution testing. A crystallization temperature control strategy, including controlled temperature cycles, is presented to tailor the crystallization behavior of drug deposits and to achieve consistent drug morphology. This control strategy can be used to achieve the desired bioavailability of the drug by mitigating variations in the dissolution profiles. The supervisory control strategy enables the application of the drop-on-demand system to the production of the individualized dosages required for personalized drug regimens.
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
Kinematic analysis of crank-cam mechanism of process equipment
NASA Astrophysics Data System (ADS)
Podgornyj, Yu I.; Skeeba, V. Yu; Martynova, T. G.; Pechorkina, N. S.; Skeeba, P. Yu
2018-03-01
This article discusses how to define the kinematic parameters of a crank-cam mechanism. Using the mechanism design, the authors have developed a calculation model and a calculation algorithm that allowed the definition of the kinematic parameters of the mechanism, including crank displacements, angular velocities, and angular accelerations, as well as the angular speeds and accelerations of the driven link (rocker arm). All calculations were performed using the Mathcad mathematical package. The results of the calculations are reported as numerical values.
Closed loop adaptive control of spectrum-producing step using neural networks
Fu, Chi Yung
1998-01-01
Characteristics of the plasma in a plasma-based manufacturing process step are monitored directly and in real time by observing the spectrum which it produces. An artificial neural network analyzes the plasma spectrum and generates control signals to control one or more of the process input parameters in response to any deviation of the spectrum beyond a narrow range. In an embodiment, a plasma reaction chamber forms a plasma in response to input parameters such as gas flow, pressure and power. The chamber includes a window through which the electromagnetic spectrum produced by a plasma in the chamber, just above the subject surface, may be viewed. The spectrum is conducted to an optical spectrometer which measures the intensity of the incoming optical spectrum at different wavelengths. The output of the optical spectrometer is provided to an analyzer which produces a plurality of error signals, each indicating whether a respective one of the input parameters to the chamber is to be increased or decreased. The microcontroller provides signals to control respective controls, but these signals are intercepted and the error signals are first added to them before being provided to the controls for the chamber. The analyzer can include a neural network and an optional spectrum preprocessor to reduce background noise, as well as a comparator which compares the parameter values predicted by the neural network with a set of desired values provided by the microcontroller.
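The summing-junction arrangement described above, where error signals are added to the controller outputs before they reach the chamber controls, can be sketched as follows. The parameter names, gains, and setpoint values are illustrative placeholders, not values from the patent.

```python
def correction_signals(predicted, setpoints, gains):
    """One proportional error signal per input parameter; a positive value
    means 'increase this parameter'. Gains are illustrative."""
    return {k: gains[k] * (setpoints[k] - predicted[k]) for k in setpoints}

def apply_corrections(controls, errors):
    """Intercept the controller outputs and add the error signals before
    they reach the chamber controls, as in the summing arrangement above."""
    return {k: controls[k] + errors.get(k, 0.0) for k in controls}

# Hypothetical values: the network predicts the plasma state has drifted low
# on gas flow and high on pressure relative to the desired setpoints.
predicted = {"gas_flow": 48.0, "pressure": 1.05, "power": 300.0}
setpoints = {"gas_flow": 50.0, "pressure": 1.00, "power": 300.0}
gains = {"gas_flow": 0.5, "pressure": 10.0, "power": 0.1}
errors = correction_signals(predicted, setpoints, gains)
controls = apply_corrections(
    {"gas_flow": 50.0, "pressure": 1.0, "power": 300.0}, errors)
```

In the patent the "predicted" values would come from the neural network's analysis of the plasma spectrum rather than from fixed numbers as here.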
Design Principles of DNA Enzyme-Based Walkers: Translocation Kinetics and Photoregulation.
Cha, Tae-Gon; Pan, Jing; Chen, Haorong; Robinson, Heather N; Li, Xiang; Mao, Chengde; Choi, Jong Hyun
2015-07-29
Dynamic DNA enzyme-based walkers complete their stepwise movements along the prescribed track through a series of reactions, including hybridization, enzymatic cleavage, and strand displacement; however, their overall translocation kinetics is not well understood. Here, we perform mechanistic studies to elucidate several key parameters that govern the kinetics and processivity of DNA enzyme-based walkers. These parameters include DNA enzyme core type and structure, upper and lower recognition arm lengths, and divalent metal cation species and concentration. A theoretical model is developed within the framework of single-molecule kinetics to describe overall translocation kinetics as well as each reaction step. A better understanding of kinetics and design parameters enables us to demonstrate a walker movement near 5 μm at an average speed of ∼1 nm s⁻¹. We also show that the translocation kinetics of DNA walkers can be effectively controlled by external light stimuli using photoisomerizable azobenzene moieties. A 2-fold increase in the cleavage reaction is observed when the hairpin stems of enzyme catalytic cores are open under UV irradiation. This study provides general design guidelines to construct highly processive, autonomous DNA walker systems and to regulate their translocation kinetics, which would facilitate the development of functional DNA walkers.
Mechanical Analog Approach to Parameter Estimation of Lateral Spacecraft Fuel Slosh
NASA Technical Reports Server (NTRS)
Chatman, Yadira; Gangadharan, Sathya; Schlee, Keith; Sudermann, James; Walker, Charles; Ristow, James; Hubert, Carl
2007-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. Even with modern computing systems, CFD-type simulations are not fast enough to allow for large-scale Monte Carlo analyses of spacecraft and launch vehicle dynamic behavior with slosh included. Simplified mechanical analogs for the slosh are preferred during the initial stages of design to reduce computational time and effort to evaluate the Nutation Time Constant (NTC). Analytic determination of the slosh analog parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices such as elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the hand-derived equations of motion for the mechanical analog are evaluated and their results compared with the experimental results. Of particular interest is the effect of diaphragms and bladders on the slosh dynamics and how best to model these devices. An experimental set-up is designed and built to include a diaphragm in the simulated spacecraft fuel tank subjected to lateral slosh. This research paper focuses on the parameter estimation of a SimMechanics model of the simulated spacecraft propellant tank with and without diaphragms using lateral fuel slosh experiments. Automating the parameter identification process will save time and thus allow earlier identification of potential vehicle problems.
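One simple way to extract a damping parameter for a pendulum-style slosh analog from a free-decay test is the logarithmic decrement, sketched below. The synthetic peak amplitudes stand in for measured force data; this is a minimal illustration, not the SimMechanics-based estimation procedure used in the paper.

```python
import math

def log_decrement_damping(peaks):
    """Estimate the damping ratio zeta from successive free-decay peak
    amplitudes via the logarithmic decrement delta = ln(A_n / A_{n+1}),
    zeta = delta / sqrt(4*pi^2 + delta^2)."""
    deltas = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(deltas) / len(deltas)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Synthetic decay generated with a known damping ratio of 0.05.
zeta_true = 0.05
delta_true = 2 * math.pi * zeta_true / math.sqrt(1 - zeta_true ** 2)
peaks = [math.exp(-delta_true * n) for n in range(5)]
zeta_est = log_decrement_damping(peaks)
```

Averaging the decrement over several peak pairs, as done here, reduces the influence of measurement noise on any single pair.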
Effect of Weld Tool Geometry on Friction Stir Welded AA2219-T87 Properties
NASA Technical Reports Server (NTRS)
Querin, Joseph A.; Schneider, Judy A.
2008-01-01
In this study, flat panels of AA2219-T87 were friction stir welded (FSWed) using weld tools with tapered pins. The three pin geometries of the weld tools included 0° (straight cylinder), 30°, and 60° angles on the frustum. For each weld tool geometry, the FSW process parameters were optimized to eliminate defects. A constant heat input was maintained for each FSW weld panel while the process parameters of spindle rpm and travel speed were varied, altering the hot working conditions imparted to the workpiece. The resulting mechanical properties were evaluated from tensile test results of the FSW joints.
Radar systems for the water resources mission, volume 1
NASA Technical Reports Server (NTRS)
Moore, R. K.; Claassen, J. P.; Erickson, R. L.; Fong, R. K. T.; Hanson, B. C.; Komen, M. J.; Mcmillan, S. B.; Parashar, S. K.
1976-01-01
The state of the art was determined for radar measurement of soil moisture, snow, standing and flowing water, and lake and river ice; required spacecraft radar parameters were determined; synthetic-aperture radar systems were studied to meet these parametric requirements; and techniques for on-board processing of the radar data were studied. Significant new concepts developed include the following: scanning synthetic-aperture radar to achieve wide-swath coverage; single-sideband radar; and comb-filter range-sequential, range-offset SAR processing. The state of the art in radar measurement of water resources parameters is outlined. The feasibility of immediate development of a spacecraft water resources SAR was established. Numerous candidates for the on-board processor were examined.
Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis
NASA Astrophysics Data System (ADS)
Perttu, A. B.; Taisne, B.
2016-12-01
Infrasound is sound below the threshold of human hearing, approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method used in infrasound research; however, it produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.
Sacha, Gregory A; Schmitt, William J; Nail, Steven L
2006-01-01
The critical processing parameters affecting average particle size, particle size distribution, yield, and level of residual carrier solvent using the supercritical anti-solvent method (SAS) were identified. Carbon dioxide was used as the supercritical fluid. Methylprednisolone acetate was used as the model solute in tetrahydrofuran. Parameters examined included pressure of the supercritical fluid, agitation rate, feed solution flow rate, impeller diameter, and nozzle design. Pressure was identified as the most important process parameter affecting average particle size, either through the effect of pressure on dispersion of the feed solution into the precipitation vessel or through the effect of pressure on solubility of drug in the CO2/organic solvent mixture. Agitation rate, impeller diameter, feed solution flow rate, and nozzle design had significant effects on particle size, which suggests that dispersion of the feed solution is important. Crimped HPLC tubing was the most effective method of introducing feed solution into the precipitation vessel, largely because it resulted in the least amount of clogging during the precipitation. Yields of 82% or greater were consistently produced and were not affected by the processing variables. Similarly, the level of residual solvent was independent of the processing variables and was present at 0.0002% wt/wt THF or less.
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy, and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model, such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis, such as Sobol indexes, require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore the number of model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100).
Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km². This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially, to approximately 25, for all three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
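The Elementary Effects screening described above can be sketched in a few lines. The three-parameter model below is a toy stand-in for NOAH-MP, and the trajectory design is a plain Morris scheme, not the authors' sequential variant.

```python
import random

def elementary_effects(model, n_params, n_trajectories=20, delta=0.1, seed=1):
    """Morris-style screening: mean absolute elementary effect (mu*) per
    parameter. Parameters with small mu* can be fixed before a costly
    Sobol analysis."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        # Random start; each parameter is then perturbed once, in random order.
        x = [rng.uniform(0, 1 - delta) for _ in range(n_params)]
        y0 = model(x)
        order = list(range(n_params))
        rng.shuffle(order)
        for i in order:
            x[i] += delta
            y1 = model(x)
            effects[i].append(abs(y1 - y0) / delta)
            y0 = y1
    return [sum(e) / len(e) for e in effects]

# Toy model: output depends strongly on x[0], weakly on x[1]; x[2] is a
# "hidden" inert parameter.
def toy_lsm(x):
    return 4.0 * x[0] + x[0] * x[1]

mu_star = elementary_effects(toy_lsm, 3)
```

Each trajectory reuses the previous model evaluation, so screening n parameters costs only n + 1 runs per trajectory, which is what makes the method inexpensive relative to Sobol indexes.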
Electroactive Biofilms: Current Status and Future Research Needs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borole, Abhijeet P; Reguera, Gemma; Ringeisen, Bradley
2011-01-01
Electroactive biofilms generated by electrochemically active microorganisms have many potential applications in bioenergy and chemicals production. This review assesses the effects of microbiological and process parameters on the enrichment of such biofilms and critically evaluates the current knowledge of the mechanisms of extracellular electron transfer in bioelectrochemical systems (BES). First, we discuss the role of biofilm-forming microorganisms vs. planktonic microorganisms. Physical, chemical, and electrochemical parameters which dictate the enrichment and subsequent performance of the biofilms are discussed. Potential-dependent biological parameters, including biofilm growth rate and specific electron transfer rate, and their relationship to BES system performance are assessed. A review of the mechanisms of electron transfer in BES systems is included, followed by a discussion of the biofilm and its exopolymeric components and their electrical conductivity. A discussion of electroactive biofilms in biocathodes is also included. Finally, we identify the research needs for further development of electroactive biofilms to enable commercial applications.
Numerical simulation of hydrogen fluorine overtone chemical lasers
NASA Astrophysics Data System (ADS)
Chen, Jinbao; Jiang, Zhongfu; Hua, Weihong; Liu, Zejin; Shu, Baihong
1998-08-01
A two-dimensional program was applied to simulate the chemical dynamic process, gas dynamic process, and lasing process of a combustion-driven CW HF overtone chemical laser. Some important parameters in the cavity were obtained. The calculated results included the HF molecule concentration on each vibrational energy level while lasing, the averaged pressure and temperature, the zero-power gain coefficient of each spectral line, the laser spectrum, the averaged laser intensity, the output power, the chemical efficiency, and the length of the lasing zone.
RTM: Cost-effective processing of composite structures
NASA Technical Reports Server (NTRS)
Hasko, Greg; Dexter, H. Benson
1991-01-01
Resin transfer molding (RTM) is a promising method for cost-effective fabrication of high-strength, low-weight composite structures from textile preforms. In this process, dry fibers are placed in a mold, resin is introduced either by vacuum infusion or pressure, and the part is cured. RTM has been used in many industries, including automotive, recreation, and aerospace. Each of these industries has different requirements for material strength, weight, reliability, environmental resistance, cost, and production rate. These requirements drive the selection of fibers and resins, fiber volume fractions, fiber orientations, mold design, and processing equipment. Research into applying RTM to primary aircraft structures, which require high strength and stiffness at low density, is described. The material requirements of various industries are discussed, along with methods of orienting and distributing fibers, mold configurations, and processing parameters. Processing and material parameters such as resin viscosity, preform compaction and permeability, and tool design concepts are discussed. Experimental methods to measure preform compaction and permeability are presented.
Determining fundamental properties of matter created in ultrarelativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Novak, J.; Novak, K.; Pratt, S.; Vredevoogd, J.; Coleman-Smith, C. E.; Wolpert, R. L.
2014-03-01
Posterior distributions for physical parameters describing relativistic heavy-ion collisions, such as the viscosity of the quark-gluon plasma, are extracted through a comparison of hydrodynamic-based transport models to experimental results from 100A GeV + 100A GeV Au+Au collisions at the Relativistic Heavy Ion Collider. By simultaneously varying six parameters and by evaluating several classes of observables, we are able to explore the complex intertwined dependencies of observables on model parameters. The methods provide a full multidimensional posterior distribution for the model output, including a range of acceptable values for each parameter, and reveal correlations between them. The breadth of observables and the number of parameters considered here go beyond previous studies in this field. The statistical tools, which are based upon Gaussian process emulators, are tested in detail and should be extendable to larger data sets and a higher number of parameters.
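A Gaussian process emulator of the kind underlying these statistical tools can be sketched in miniature: train on a handful of "expensive" model runs, then predict the observable at untried parameter values. The one-dimensional sin curve below stands in for a real model-to-observable map; a real emulator would tune its kernel hyperparameters and return predictive uncertainty as well.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xtrain, ytrain, xquery, length=1.0, noise=1e-8):
    """GP posterior mean at xquery: k*^T (K + noise*I)^-1 y."""
    K = [[rbf(a, b, length) + (noise if i == j else 0.0)
          for j, b in enumerate(xtrain)] for i, a in enumerate(xtrain)]
    alpha = solve(K, ytrain)
    return sum(rbf(xquery, a, length) * w for a, w in zip(xtrain, alpha))

# Emulate an "observable vs. model parameter" curve from 5 expensive runs.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
pred = gp_predict(xs, ys, 0.75)
```

Once trained, the emulator is cheap to evaluate, which is what makes exhaustive exploration of a six-dimensional parameter space tractable when each full transport-model run is expensive.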
40 CFR 63.526 - Monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... (D) Design analysis based on accepted chemical engineering principles, measurable process parameters... purpose of determining de minimis status for emission points, engineering assessment may be used to... expected to yield the highest flow rate and concentration. Engineering assessment includes, but is not...
40 CFR 63.526 - Monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (D) Design analysis based on accepted chemical engineering principles, measurable process parameters... purpose of determining de minimis status for emission points, engineering assessment may be used to... expected to yield the highest flow rate and concentration. Engineering assessment includes, but is not...
Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes
NASA Astrophysics Data System (ADS)
Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi
2018-05-01
Slow-slip phenomena, including afterslips and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate-and-state dependent friction law. By varying only a few model parameters, this simple model allows reproducing a variety of seismic slips within a single framework, including main shocks, precursory nucleation processes, afterslips, and silent earthquakes.
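The stick-slip behavior at the heart of Burridge-Knopoff models can be illustrated with a single quasi-static slider block under simple static/dynamic friction. The full model in the paper uses many coupled blocks and a rate-and-state friction law, which this sketch deliberately omits; it therefore produces only regular events, not the afterslips or silent earthquakes the paper reproduces. All parameter values are illustrative.

```python
def stick_slip(k=1.0, v_load=1.0, f_static=1.0, f_dynamic=0.6,
               dt=0.01, t_end=50.0):
    """Quasi-static single-block slider: the driving spring is loaded at
    v_load until the spring force reaches f_static, then the block slips
    instantly to where the force drops to f_dynamic. Returns (time, slip)
    for each event."""
    x_load = x_block = 0.0
    events = []
    t = 0.0
    while t < t_end:
        x_load += v_load * dt
        if k * (x_load - x_block) >= f_static:
            slip = (f_static - f_dynamic) / k  # stress drop / stiffness
            x_block += slip
            events.append((t, slip))
        t += dt
    return events

events = stick_slip()
```

With constant friction levels the model produces perfectly periodic, identical events; varying the friction parameters from block to block (and replacing the threshold law with rate-and-state friction) is what lets the full model span the range from fast ruptures to slow slip.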
Production and fabrication of 2500-lb Nb--Ti ingots to rod
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cordier, T.E.; McDonald, W.K.
Interest in Nb--Ti superconducting devices is exploding. This paper outlines the critical production criteria for this material. Areas discussed include ingot blending, melting, forging, extrusion, and rod reducing, with emphasis on the metallurgical considerations affecting mechanical properties. Data are included relating process parameters to TEM findings as well as room-temperature ductility and optical microscopy. (auth)
Ga- and N-polar GaN Growths on SiC Substrate
2018-03-15
a transition process of a two-section NR are formulated and numerically studied to show the consistent results with experimental data. The relative...contributions of the VLS and VS growths in such a transition process are also numerically illustrated. Besides, the experimentally observed decrease... experimental data, a few important kinetic parameters can be determined. The anti-reflection functions of a surface nanostructure, including
Infrared Thermography For Welding
NASA Technical Reports Server (NTRS)
Gilbert, Jeffrey L.; Lucky, Brian D.; Spiegel, Lyle B.; Hudyma, Russell M.
1992-01-01
Infrared imaging and image-data-processing system shows temperatures of joint during welding and provides data from which rates of heating and cooling are determined. Information used to control welding parameters to ensure reliable joints in materials whose microstructures and associated metallurgical and mechanical properties depend strongly on rates of heating and cooling. Applicable to variety of processes, including tungsten/inert-gas welding; plasma, laser, and resistance welding; cutting; and brazing.
Electrokinetic remediation prefield test methods
NASA Technical Reports Server (NTRS)
Hodko, Dalibor (Inventor)
2000-01-01
Methods are disclosed for determining the parameters critical in designing an electrokinetic soil remediation process, including electrode well spacing, operating current/voltage, electroosmotic flow rate, electrode well wall design, and the amount of buffering or neutralizing solution needed in the electrode wells at operating conditions. These methods are preferably performed prior to initiating a full-scale electrokinetic remediation process in order to obtain efficient remediation of the contaminants.
Automated Generation of Finite-Element Meshes for Aircraft Conceptual Design
NASA Technical Reports Server (NTRS)
Li, Wu; Robinson, Jay
2016-01-01
This paper presents a novel approach for automated generation of fully connected finite-element meshes for all internal structural components and skins of a given wing-body geometry model, controlled by a few conceptual-level structural layout parameters. Internal structural components include spars, ribs, frames, and bulkheads. Structural layout parameters include spar/rib locations in wing chordwise/spanwise direction and frame/bulkhead locations in longitudinal direction. A simple shell thickness optimization problem with two load conditions is used to verify versatility and robustness of the automated meshing process. The automation process is implemented in ModelCenter starting from an OpenVSP geometry and ending with a NASTRAN 200 solution. One subsonic configuration and one supersonic configuration are used for numerical verification. Two different structural layouts are constructed for each configuration and five finite-element meshes of different sizes are generated for each layout. The paper includes various comparisons of solutions of 20 thickness optimization problems, as well as discussions on how the optimal solutions are affected by the stress constraint bound and the initial guess of design variables.
Comparing methods for Earthquake Location
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Bodin, Thomas; Sylvander, Matthieu; Parroucau, Pierre; Manchuel, Kevin
2017-04-01
There are many methods available for locating small-magnitude point-source earthquakes. However, it is known that these different approaches produce different results. For each approach, results also depend on a number of parameters, which can be separated into two main branches: (1) parameters related to the observations (their number and distribution, for example) and (2) parameters related to the inversion process (velocity model, weighting parameters, initial location, etc.). Currently, the results obtained from most of the location methods do not systematically include quantitative uncertainties. The effect of the selected parameters on location uncertainties is also poorly known. Understanding the importance of these different parameters and their effect on uncertainties is clearly required to better constrain knowledge of fault geometry and seismotectonic processes and, in the end, to improve seismic hazard assessment. In this work, realized in the frame of the SINAPS@ research program (http://www.institut-seism.fr/projets/sinaps/), we analyse the effect of different parameters on earthquake location (e.g. type of phase, maximum hypocentral separation, etc.). We compare several available codes (Hypo71, HypoDD, NonLinLoc, etc.) and determine their strengths and weaknesses in different cases by means of synthetic tests. The work, performed for the moment on synthetic data, is planned to be applied, in a second step, to data collected by the Midi-Pyrénées Observatory (OMP).
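A minimal version of the location problem itself, a grid search minimizing demeaned travel-time residuals under a uniform velocity model, is sketched below. Station geometry and velocity are illustrative; real codes such as the ones compared here handle 3-D velocity models, phase weighting, and formal uncertainty estimation.

```python
import math

def locate(stations, arrivals, v=6.0, grid=50, extent=100.0):
    """Grid search for the epicenter minimizing the RMS travel-time residual,
    with the unknown origin time removed by demeaning the residuals.
    Assumes a uniform velocity v (km/s) and a flat 2-D geometry."""
    best = (float("inf"), None)
    for i in range(grid + 1):
        for j in range(grid + 1):
            x = extent * i / grid
            y = extent * j / grid
            tt = [math.hypot(x - sx, y - sy) / v for sx, sy in stations]
            res = [a - t for a, t in zip(arrivals, tt)]
            mean = sum(res) / len(res)  # absorbs the origin time
            rms = sum((r - mean) ** 2 for r in res)
            if rms < best[0]:
                best = (rms, (x, y))
    return best[1]

# Synthetic test: arrivals from a known source with origin time 10.0 s.
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0), (50.0, 0.0)]
true = (62.0, 38.0)
arrivals = [math.hypot(true[0] - sx, true[1] - sy) / 6.0 + 10.0
            for sx, sy in stations]
epi = locate(stations, arrivals)
```

Synthetic tests like this one, where the true source is known, are exactly how the strengths and weaknesses of the different codes are compared in the study.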
Kinetics of Sub-Micron Grain Size Refinement in 9310 Steel
NASA Astrophysics Data System (ADS)
Kozmel, Thomas; Chen, Edward Y.; Chen, Charlie C.; Tin, Sammy
2014-05-01
Recent efforts have focused on the development of novel manufacturing processes capable of producing microstructures dominated by sub-micron grains. For structural applications, grain refinement has been shown to enhance mechanical properties such as strength, fatigue resistance, and fracture toughness. Through control of the thermo-mechanical processing parameters, dynamic recrystallization mechanisms were used to produce microstructures consisting of sub-micron grains in 9310 steel. Starting with initial bainitic grain sizes of 40 to 50 μm, various levels of grain refinement were observed following hot deformation of 9310 steel samples at temperatures and strain rates ranging from 755 K to 922 K (482 °C to 649 °C) and 1 to 0.001/s, respectively. The resulting deformation microstructures were characterized using scanning electron microscopy and electron backscatter diffraction techniques to quantify the extent of carbide coarsening and grain refinement occurring during deformation. Microstructural models based on the Zener-Hollomon parameter were developed and modified to include the effect of the ferrite/carbide interactions within the system. These models were shown to effectively correlate microstructural attributes to the thermal mechanical processing parameters.
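The Zener-Hollomon parameter combines strain rate and temperature into a single temperature-compensated strain rate, Z = ε̇·exp(Q/(RT)). A minimal sketch follows; the activation energy used here is an assumed placeholder, not the value fitted in the study.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def zener_hollomon(strain_rate, temp_k, q_act=3.2e5):
    """Z = strain_rate * exp(Q / (R T)). Higher Z (colder, faster deformation)
    generally favors finer recrystallized grains. q_act (J/mol) is an assumed
    illustrative activation energy for hot deformation."""
    return strain_rate * math.exp(q_act / (R * temp_k))

# The two extremes of the processing window reported in the abstract:
z_cold_fast = zener_hollomon(1.0, 755.0)    # 482 °C, 1/s
z_hot_slow = zener_hollomon(0.001, 922.0)   # 649 °C, 0.001/s
```

The many orders of magnitude separating these two conditions illustrate why a Z-based model can span the full range of grain refinement observed across the processing window.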
Development of lithium diffused radiation resistant solar cells, part 2
NASA Technical Reports Server (NTRS)
Payne, P. R.; Somberg, H.
1971-01-01
The work performed to investigate the effect of various process parameters on the performance of lithium doped P/N solar cells is described. Effort was concentrated in four main areas: (1) the starting material, (2) the boron diffusion, (3) the lithium diffusion, and (4) the contact system. Investigation of starting material primarily involved comparison of crucible grown silicon (high oxygen content) and Lopex silicon (low oxygen content). In addition, the effect of varying growing parameters of crucible grown silicon on lithium cell output was also examined. The objective of the boron diffusion studies was to obtain a diffusion process which produced high efficiency cells with minimal silicon stressing and could be scaled up to process 100 or more cells per diffusion. Contact studies included investigating sintering of the TiAg contacts and evaluation of the contact integrity.
Precision laser processing for micro electronics and fiber optic manufacturing
NASA Astrophysics Data System (ADS)
Webb, Andrew; Osborne, Mike; Foster-Turner, Gideon; Dinkel, Duane W.
2008-02-01
The application of laser-based materials processing for precision micro-scale manufacturing in the electronics and fiber optic industries is becoming increasingly widespread and accepted. This presentation will review the latest laser technologies available and discuss the issues to be considered in choosing the most appropriate laser and processing parameters. High-repetition-rate, short-duration pulsed lasers have improved rapidly in recent years in terms of both performance and reliability, enabling flexible, cost-effective processing of many material types including metal, silicon, plastic, ceramic and glass. To demonstrate the relevance of laser micromachining, application examples where laser processing is in use for production will be presented, including miniaturization of surface-mount capacitors by applying a laser technique for demetalization of tracks in the capacitor manufacturing process, and high-quality laser machining of fiber optics (stripping, cleaving and lensing) that yields optical-quality finishes without the need for traditional polishing. Applications include telecoms, biomedical and sensing. OpTek Systems was formed in 2000 and provides fully integrated systems and sub-contract services for laser processes. The company is headquartered in the UK and is establishing a presence in North America through a laser processing facility in South Carolina and a sales office in the North East.
Sensitivity of the s-process nucleosynthesis in AGB stars to the overshoot model
NASA Astrophysics Data System (ADS)
Goriely, S.; Siess, L.
2018-01-01
Context. S-process elements are observed at the surface of low- and intermediate-mass stars. These observations can be explained empirically by the so-called partial mixing of protons scenario leading to the incomplete operation of the CN cycle and a significant primary production of the neutron source. This scenario has been successful in qualitatively explaining the s-process enrichment in AGB stars. Even so, it remains difficult to describe both physically and numerically the mixing mechanisms taking place at the time of the third dredge-up between the convective envelope and the underlying C-rich radiative layer. Aims: We aim to present new calculations of the s-process nucleosynthesis in AGB stars testing two different numerical implementations of chemical transport. These are based on a diffusion equation which depends on the second derivative of the composition, and on a numerical algorithm where the transport of species depends linearly on the chemical gradient. Methods: The s-process nucleosynthesis resulting from these different mixing schemes is calculated with our stellar evolution code STAREVOL, which has been upgraded to include an extended s-process network of 411 nuclei. Our investigation focuses on a fiducial 2 M⊙, [Fe/H] = -0.5 model star, but also includes four additional stars of different masses and metallicities. Results: We show that for the same set of parameters, the linear mixing approach produces a much larger 13C-pocket and consequently a substantially higher surface s-process enrichment compared to the diffusive prescription. Within the diffusive model, a quite extreme choice of parameters is required to account for surface s-process enrichment of 1-2 dex. These extreme conditions cannot, however, be excluded at this stage.
Conclusions: Both the diffusive and linear prescriptions of the overshoot mixing are suited to describe the s-process nucleosynthesis in AGB stars provided the profile of the diffusion coefficient below the convective envelope is carefully chosen. Both schemes give rise to relatively similar distributions of s-process elements, but depending on the parameters adopted, some differences may be obtained. These differences are in the element distribution, and most of all in the level of surface enrichment.
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One approach to estimating ground-water recharge uses process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls on simulated ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and that compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of simulated recharge to any parameter. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX).
Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
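The diagnostic statistic at the core of such an analysis is the scaled sensitivity of simulated output to each parameter, typically estimated by finite differences. A minimal sketch, assuming a toy recharge model; the model form and both parameter values are purely illustrative, not drawn from any of the watersheds above:

```python
def scaled_sensitivity(model, params, name, delta=0.01):
    """Dimensionless scaled sensitivity (dy/dp) * p of a model output
    to one parameter, via a central finite difference."""
    p = params[name]
    hi = dict(params, **{name: p * (1 + delta)})
    lo = dict(params, **{name: p * (1 - delta)})
    dy_dp = (model(hi) - model(lo)) / (2 * p * delta)
    return dy_dp * p

# Toy recharge model (illustrative only): recharge rises with
# precipitation and falls with air temperature (more evapotranspiration).
def recharge(p):
    return p["precip"] * 0.3 - 12.0 * (p["temp"] - 8.0)

params = {"precip": 900.0, "temp": 9.5}
s_precip = scaled_sensitivity(recharge, params, "precip")
s_temp = scaled_sensitivity(recharge, params, "temp")
# Ranking |s| across parameters identifies the dominant controls.
```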
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of hundreds of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field-scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived a priori from independent laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation, provided all controlling factors are incorporated in the field-scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
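The key difference between first-order rates and Monod kinetics is that the latter couples degradation to electron acceptor availability and biomass growth, so degradation stalls when oxygen is exhausted. A minimal explicit-Euler sketch of dual-Monod kinetics; all rate constants and initial concentrations are assumed for illustration, not the BIO3D/Borden calibration:

```python
def dual_monod_step(S, O, X, dt, mu_max=2.0, Ks=0.5, Ko=0.1, Y=0.5, F=3.0):
    """One explicit-Euler step of dual-Monod biodegradation:
    substrate S, electron acceptor (oxygen) O, biomass X.
    F is the oxygen demand per unit substrate degraded."""
    growth = mu_max * X * (S / (Ks + S)) * (O / (Ko + O))
    dS = -growth / Y       # substrate consumed by growth
    dO = -F * growth / Y   # oxygen consumed stoichiometrically
    dX = growth            # biomass produced
    return S + dS * dt, O + dO * dt, X + dX * dt

S, O, X = 10.0, 8.0, 0.05   # illustrative initial concentrations
for _ in range(200):
    S, O, X = dual_monod_step(max(S, 0.0), max(O, 0.0), X, dt=0.01)
# Once oxygen is depleted, degradation stalls even though substrate
# remains -- the behavior a fitted first-order rate cannot reproduce.
```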
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance, and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
An R-Shiny Based Phenology Analysis System and Case Study Using Digital Camera Dataset
NASA Astrophysics Data System (ADS)
Zhou, Y. K.
2018-05-01
Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a rich and voluminous data source for phenological analysis, but processing and mining phenological data remains a major challenge: there is no single tool or universal solution for big-data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for vegetation phenological parameter extraction and analysis. Its main functions include phenological site distribution visualization, ROI (region of interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. As an example, the long-term observational photography data from the Freemanwood site in 2013 were processed by this system. The results show that: (1) the system is capable of analyzing large data volumes using a distributed framework; and (2) combining multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although different combinations yield discrepancies in particular study areas. Vegetation with a single growth peak is well suited to fitting the growth trajectory with the double logistic model, while vegetation with multiple growth peaks is better fitted with the spline method.
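The double logistic model mentioned above describes a single-peak greenness trajectory as a rising and a falling sigmoid. A minimal sketch of the curve form and one simple phenology metric (half-amplitude start of season); all parameter values are illustrative, not fitted to the Freemanwood data:

```python
import math

def double_logistic(t, base, amp, sos, r1, eos, r2):
    """Double-logistic greenness model: rises around day 'sos' with
    rate r1, falls around day 'eos' with rate r2."""
    rise = 1.0 / (1.0 + math.exp(-r1 * (t - sos)))
    fall = 1.0 / (1.0 + math.exp(r2 * (t - eos)))
    return base + amp * (rise + fall - 1.0)

# Illustrative parameters for a single-peak growing season
params = dict(base=0.32, amp=0.12, sos=130.0, r1=0.15, eos=280.0, r2=0.10)

curve = [double_logistic(d, **params) for d in range(1, 366)]
peak = max(curve)

# A simple phenology metric: first day the curve crosses half amplitude
half = params["base"] + 0.5 * (peak - params["base"])
sos_day = next(d for d, v in zip(range(1, 366), curve) if v >= half)
```

In practice the parameters would be fitted to a camera-derived vegetation index series (e.g. by nonlinear least squares) before extracting such metrics.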
Contributions to ultrasound monitoring of the process of milk curdling.
Jiménez, Antonio; Rufo, Montaña; Paniagua, Jesús M; Crespo, Abel T; Guerrero, M Patricia; Riballo, M José
2017-04-01
Ultrasound evaluation permits the state of milk being curdled to be determined quickly and cheaply, thus satisfying the demands faced by today's dairy product producers. This paper describes a non-invasive ultrasonic method for in situ monitoring of the changing physical properties of milk during the renneting process. The basic objectives of the study were, on the one hand, to confirm the usefulness of conventional non-destructive ultrasonic testing (time-of-flight and attenuation of the ultrasound waves) in monitoring the process in the case of ewe's milk, and, on the other, to include other ultrasound parameters which have not previously been considered in studies on this topic, in particular, parameters provided by the Fast Fourier Transform technique. The experimental study was carried out in a dairy industry environment on four 52-l samples of raw milk in which 500 kHz ultrasound transducers were immersed. Other physicochemical parameters of the raw milk (pH, dry matter, protein, Gerber fat test, and lactose) were measured, as were the pH and temperature of the curdled samples simultaneously with the ultrasound tests. Another contribution of this study is the linear correlation analysis of the aforementioned ultrasound parameters and the physicochemical properties of the curdled milk. Copyright © 2017 Elsevier B.V. All rights reserved.
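The two conventional parameters named above derive directly from a through-transmission measurement: propagation speed from path length and time of flight, and attenuation from the amplitude drop. A minimal sketch; the numerical values are hypothetical, not the measurements reported in the study:

```python
import math

def sound_speed(path_m, tof_s):
    """Propagation speed from path length and time of flight."""
    return path_m / tof_s

def attenuation_db(a_ref, a_meas):
    """Amplitude loss relative to a reference signal, in dB."""
    return 20.0 * math.log10(a_ref / a_meas)

# Hypothetical through-transmission readings during renneting
c_raw = sound_speed(0.10, 67.6e-6)      # raw milk, ~1479 m/s
c_curd = sound_speed(0.10, 66.9e-6)     # later in the process, ~1495 m/s
att = attenuation_db(1.00, 0.71)        # ~3 dB amplitude drop
# Tracking these (plus FFT-derived spectral parameters) against time
# characterizes the gelation state.
```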
NASA Astrophysics Data System (ADS)
Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.
2016-05-01
One of the major challenges in civil, mechanical, and aerospace engineering is to develop vibration suppression systems with high efficiency and low cost. Recent studies have shown that high damping performance at broadband frequencies can be achieved by incorporating periodic inserts with tunable dynamic properties as internal resonators in structural systems. Structures featuring these kinds of inserts are referred to as metamaterial-inspired structures or metastructures. Chiral lattice inserts exhibit unique characteristics such as frequency bandgaps which can be tuned by varying the parameters that define the lattice topology. Recent analytical and experimental investigations have shown that broadband vibration attenuation can be achieved by including chiral lattices as internal resonators in beam-like structures. However, these studies have suggested that the performance of chiral lattice inserts can be maximized by utilizing an efficient optimization technique to obtain the optimal topology of the inserted lattice. In this study, an automated optimization procedure based on a genetic algorithm is applied to obtain the optimal set of parameters that will result in chiral lattice inserts tuned properly to reduce the global vibration levels of a finite-sized beam. Genetic algorithms are considered in this study due to their capability of dealing with complex and insufficiently understood optimization problems. In the optimization process, the basic parameters that govern the geometry of periodic chiral lattices, including the number of circular nodes, the thickness of the ligaments, and the characteristic angle, are considered. Additionally, a new set of parameters is introduced to enable the optimization process to explore non-periodic chiral designs. Numerical simulations are carried out to demonstrate the efficiency of the optimization process.
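The optimization loop can be sketched generically. The following is a mutation-plus-truncation-selection sketch (a simplified relative of the genetic algorithm the study describes, without crossover); the objective function is a toy stand-in for the beam vibration simulation, and all bounds and constants are illustrative:

```python
import random

random.seed(42)

# Toy objective standing in for a vibration-level simulation of the
# beam with a chiral insert; lower is better. A real run would call a
# finite-element model here. The "optimum" (7, 0.8, 25) is arbitrary.
def vibration_level(num_nodes, thickness, angle_deg):
    return ((num_nodes - 7) ** 2
            + 40.0 * (thickness - 0.8) ** 2
            + 0.01 * (angle_deg - 25.0) ** 2)

def random_design():
    return (random.randint(3, 12),          # number of circular nodes
            random.uniform(0.2, 2.0),       # ligament thickness
            random.uniform(5.0, 60.0))      # characteristic angle

def mutate(d):
    n, t, a = d
    return (max(3, min(12, n + random.choice((-1, 0, 1)))),
            max(0.2, min(2.0, t + random.gauss(0.0, 0.1))),
            max(5.0, min(60.0, a + random.gauss(0.0, 3.0))))

pop = [random_design() for _ in range(30)]
for _ in range(40):                          # generations
    pop.sort(key=lambda d: vibration_level(*d))
    parents = pop[:10]                       # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = min(pop, key=lambda d: vibration_level(*d))
```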
Excitonic processes at organic heterojunctions
NASA Astrophysics Data System (ADS)
He, ShouJie; Lu, ZhengHong
2018-02-01
Understanding excitonic processes at organic heterojunctions is crucial for development of organic semiconductor devices. This article reviews recent research on excitonic physics that involve intermolecular charge transfer (CT) excitons, and progress on understanding relationships between various interface energy levels and key parameters governing various competing interface excitonic processes. These interface excitonic processes include radiative exciplex emission, nonradiative recombination, Auger electron emission, and CT exciton dissociation. This article also reviews various device applications involving interface CT excitons, such as organic light-emitting diodes (OLEDs), organic photovoltaic cells, organic rectifying diodes, and ultralow-voltage Auger OLEDs.
Curie-Montgolfiere Planetary Explorers
NASA Astrophysics Data System (ADS)
Taylor, Chris Y.; Hansen, Jeremiah
2007-01-01
Hot-air balloons, also known as Montgolfiere balloons, powered by heat from radioisotope decay are a potentially useful tool for exploring planetary atmospheres and augmenting the capabilities of other exploration technologies. This paper describes the physical equations and identifies the key engineering parameters that drive radioisotope-powered balloon performance. These parameters include envelope strength-to-weight, envelope thermal conductivity, heater power-to-weight, heater temperature, and balloon shape. The design space for these parameters is shown for varying atmospheric compositions to illustrate the performance needed to build functioning ``Curie-Montgolfiere'' balloons for various planetary atmospheres. Methods to ease the process of Curie-Montgolfiere conceptual design and sizing are also introduced.
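The governing buoyancy relation behind such sizing is the ideal-gas density difference between heated envelope gas and the ambient atmosphere. A minimal sketch; the Titan-like atmospheric conditions and balloon dimensions are assumed for illustration only:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def gas_density(p_pa, molar_mass, T_K):
    """Ideal-gas density rho = p*M / (R*T)."""
    return p_pa * molar_mass / (R_GAS * T_K)

def net_lift(radius_m, p_pa, molar_mass, T_amb, T_in, g):
    """Net buoyant force (N) on a spherical hot-air balloon whose
    envelope gas is ambient atmosphere heated from T_amb to T_in."""
    vol = 4.0 / 3.0 * math.pi * radius_m ** 3
    d_rho = (gas_density(p_pa, molar_mass, T_amb)
             - gas_density(p_pa, molar_mass, T_in))
    return vol * d_rho * g

# Illustrative Titan-like conditions: cold, dense N2 atmosphere,
# where even a modest temperature rise yields substantial lift.
lift = net_lift(radius_m=2.0, p_pa=1.5e5, molar_mass=0.028,
                T_amb=94.0, T_in=114.0, g=1.35)
```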
Modelling and interpreting the isotopic composition of water vapour in convective updrafts
NASA Astrophysics Data System (ADS)
Bolot, M.; Legras, B.; Moyer, E. J.
2012-08-01
The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener-Bergeron-Findeisen process). As all of these processes are related to updraft strength, droplet size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.
Modelling and interpreting the isotopic composition of water vapour in convective updrafts
NASA Astrophysics Data System (ADS)
Bolot, M.; Legras, B.; Moyer, E. J.
2013-08-01
The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed-phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters, including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener-Bergeron-Findeisen process). As all of these processes are related to updraft strength, particle size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.
Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design
NASA Technical Reports Server (NTRS)
Anderson, B. J.; Justus, C. G.; Batts, G. W.
2001-01-01
Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP-30425 and TM-4527 products over a number of years revealed difficulties in interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the selection of parameters for input into extreme hot and cold thermal analyses and design specifications; greatly improved cold-case OLR values for high-inclination orbits were also derived. Thermal parameters for satellites in low-, medium-, and high-inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information on the interpretation and application of the material and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.
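The three environmental parameters combine into the flux a surface absorbs. A simplified flat-plate sketch of that combination; the numerical values are illustrative hot-case inputs, not the STEM or SSP-30425 recommended values, and a real analysis would track geometry and view factors per surface:

```python
def absorbed_flux(solar, albedo, olr, absorptivity, emissivity, view_factor):
    """Environmental flux absorbed by a surface (W/m^2), summing
    direct solar, Earth-reflected (albedo), and Earth OLR terms.
    Simplified flat-plate form with a single Earth view factor."""
    q_solar = absorptivity * solar
    q_albedo = absorptivity * albedo * solar * view_factor
    q_olr = emissivity * olr * view_factor
    return q_solar + q_albedo + q_olr

# Illustrative hot-case inputs for a low-Earth-orbit surface
q_hot = absorbed_flux(solar=1414.0,      # W/m^2, near-perihelion irradiance
                      albedo=0.30,       # local Earth albedo
                      olr=260.0,         # W/m^2, outgoing longwave radiance
                      absorptivity=0.25, # solar absorptivity of the surface
                      emissivity=0.85,   # IR emissivity of the surface
                      view_factor=0.90)  # assumed Earth view factor
```

The hot-case and cold-case parameter sets would then bound the design: the same function evaluated with cold-case inputs gives the lower envelope.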
Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Michael
Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic.
The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
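The basic machinery behind such spatial models is Gaussian process prediction (kriging): condition a prior covariance on observations to get a posterior mean and variance at unobserved locations. A minimal 1-D sketch with a squared-exponential covariance; the data points and hyperparameters are arbitrary illustrations, not from the applications above:

```python
import numpy as np

def sq_exp_cov(x1, x2, sigma2=1.0, ell=1.0):
    """Squared-exponential covariance, a standard stationary choice."""
    d = np.subtract.outer(x1, x2)
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

# Condition a zero-mean GP on three noisy observations
x_obs = np.array([0.0, 1.0, 2.5])
y_obs = np.array([0.5, 1.0, -0.2])
noise = 1e-4                                   # small nugget variance

K = sq_exp_cov(x_obs, x_obs) + noise * np.eye(3)
x_new = np.array([1.5])
k_star = sq_exp_cov(x_new, x_obs)              # cross-covariances

# Posterior (kriging) mean and variance at the prediction point
alpha = np.linalg.solve(K, y_obs)
mean = k_star @ alpha
var = sq_exp_cov(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)
```

The covariance-function research described above concerns exactly which choices of `sq_exp_cov`-like models yield predictions with desirable (or undesirable) properties.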
Evolution of various fractions during the windrow composting of chicken manure with rice chaff.
Kong, Zhijian; Wang, Xuanqing; Liu, Qiumei; Li, Tuo; Chen, Xing; Chai, Lifang; Liu, Dongyang; Shen, Qirong
2018-02-01
Different fractions during the 85-day windrow composting were characterized based on various parameters, such as physicochemical properties and hydrolytic enzyme activities; several technologies were used, including spectral scanning techniques, confocal laser scanning microscopy (CLSM) and 13C nuclear magnetic resonance spectroscopy (13C NMR). The evaluated parameters fluctuated strongly during the first 3 weeks, which were the most active period of the composting process. The principal components analysis (PCA) results showed that four classes of the samples were clearly distinguishable, in which the physicochemical parameters were similar, and that the dynamics of the composting process was significantly influenced by C/N and moisture content. The 13C NMR results indicated that O-alkyl-C was the predominant group both in the solid and water-soluble fractions (WSF), and the decomposition of O-alkyl-C mainly occurred during the active stage. In general, the various parameters indicated that windrow composting is a feasible treatment that can be used for the resource reuse of agricultural wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pilot-Configurable Information on a Display Unit
NASA Technical Reports Server (NTRS)
Bell, Charles Frederick (Inventor); Ametsitsi, Julian (Inventor); Che, Tan Nhat (Inventor); Shafaat, Syed Tahir (Inventor)
2017-01-01
A small thin display unit that can be installed in the flight deck for displaying only flight crew-selected tactical information needed for the task at hand. The flight crew can select the tactical information to be displayed by means of any conventional user interface. Whenever the flight crew selects tactical information for display, the system processes the request, including periodically retrieving measured current values or computing current values for the requested tactical parameters and returning those current tactical parameter values to the display unit for display.
10 CFR 63.114 - Requirements for performance assessment.
Code of Federal Regulations, 2014 CFR
2014-01-01
... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...
10 CFR 63.114 - Requirements for performance assessment.
Code of Federal Regulations, 2013 CFR
2013-01-01
... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...
10 CFR 63.114 - Requirements for performance assessment.
Code of Federal Regulations, 2012 CFR
2012-01-01
... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...
10 CFR 63.114 - Requirements for performance assessment.
Code of Federal Regulations, 2011 CFR
2011-01-01
... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...
10 CFR 63.114 - Requirements for performance assessment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...
DOT National Transportation Integrated Search
2010-11-01
The resilient modulus and Poisson's ratio of base and sublayers in highway use are important parameters in design and quality control processes. The currently used techniques include the CBR (California Bearing Ratio) test, resilient modulus test,...
Process for fractionating fast-pyrolysis oils, and products derived therefrom
Chum, Helena L.; Black, Stuart K.
1990-01-01
A process is disclosed for fractionating fast-pyrolysis oils from lignocellulosic materials to produce phenol-containing compositions suitable for the manufacture of phenol-formaldehyde resins. The process includes admixing the oils with an organic solvent having at least a moderate solubility parameter and good hydrogen bonding. The United States Government has rights in this invention under Contract No. DE-AC02-83CH10093 between the United States Department of Energy and the Solar Energy Research Institute, a Division of the Midwest Research Institute.
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1976-01-01
A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information.
The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
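As an illustration (not taken from the paper, which applies the method to a full MIKE SHE/MIKE 11 model), the perturbed-observation ES-MDA update described above can be sketched on a toy linear problem; all variable names and values here are ours:

```python
import numpy as np

def esmda(M, d_obs, forward, Cd, alphas, rng):
    """Perturbed-observation ES-MDA. M is the (Ne, Np) parameter ensemble.

    Each pass inflates the data-error covariance Cd by a coefficient alpha;
    the coefficients must satisfy sum(1/alpha) = 1 (here: two passes of 2).
    """
    Ne = M.shape[0]
    for a in alphas:
        D = np.array([forward(m) for m in M])     # predicted data, (Ne, Nd)
        dM = M - M.mean(axis=0)                   # parameter anomalies
        dD = D - D.mean(axis=0)                   # prediction anomalies
        Cmd = dM.T @ dD / (Ne - 1)                # cross-covariance
        Cdd = dD.T @ dD / (Ne - 1)                # prediction covariance
        K = Cmd @ np.linalg.inv(Cdd + a * Cd)     # smoother gain
        pert = rng.multivariate_normal(d_obs, a * Cd, size=Ne)
        M = M + (pert - D) @ K.T                  # update every member
    return M

# Toy linear problem: one parameter m, one observation d = 2*m, truth m = 3.
rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, size=(200, 1))
posterior = esmda(prior, np.array([6.0]), lambda m: 2.0 * m,
                  np.array([[0.01]]), alphas=[2.0, 2.0], rng=rng)
```

For this linear-Gaussian toy case the ensemble mean converges to the analytic posterior mean, mirroring the two-iteration convergence reported in the abstract.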
A two-parameter family of double-power-law biorthonormal potential-density expansions
NASA Astrophysics Data System (ADS)
Lilley, Edward J.; Sanders, Jason L.; Evans, N. Wyn
2018-07-01
We present a two-parameter family of biorthonormal double-power-law potential-density expansions. Both the potential and density are given in a closed analytic form and may be rapidly computed via recurrence relations. We show that this family encompasses all the known analytic biorthonormal expansions: the Zhao expansions (themselves generalizations of ones found earlier by Hernquist & Ostriker and by Clutton-Brock) and the recently discovered Lilley et al. expansion. Our new two-parameter family includes expansions based around many familiar spherical density profiles as zeroth-order models, including the γ models and the Jaffe model. It also contains a basis expansion that reproduces the famous Navarro-Frenk-White (NFW) profile at zeroth order. The new basis expansions have been found via a systematic methodology which has wide applications in finding other new expansions. In the process, we also uncovered a novel integral transform solution to Poisson's equation.
NASA Astrophysics Data System (ADS)
Landry, Blake J.; Hancock, Matthew J.; Mei, Chiang C.; García, Marcelo H.
2012-09-01
The ability to determine wave heights and phases along a spatial domain is vital to understanding a wide range of littoral processes. The software tool presented here employs established Stokes wave theory and sampling methods to calculate parameters for the incident and reflected components of a field of weakly nonlinear waves, monochromatic at first order in wave slope and propagating in one horizontal dimension. The software calculates wave parameters over an entire wave tank and accounts for reflection, weak nonlinearity, and a free second harmonic. Currently, no publicly available program has such functionality. The included MATLAB®-based open source code has also been compiled for Windows®, Mac® and Linux® operating systems. An additional companion program, VirtualWave, is included to generate virtual wave fields for WaveAR. Together, the programs serve as ideal analysis and teaching tools for laboratory water wave systems.
Anderson-Cook, Christine Michaela
2017-03-01
Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic, to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
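The practice recommended above can be sketched in a few lines; this is a minimal large-sample (z-based) interval, not from the article itself, and for small samples a t quantile would be used instead:

```python
import math
from statistics import NormalDist, fmean, stdev

def z_confidence_interval(sample, level=0.95):
    # Large-sample interval for the population mean: mean +/- z * s/sqrt(n).
    # For small n, replacing z by the t quantile widens this appropriately.
    n = len(sample)
    m = fmean(sample)
    se = stdev(sample) / math.sqrt(n)
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    return m - z * se, m + z * se

# 50 repeated measurements clustered around 10.0 (illustrative data).
lo, hi = z_confidence_interval([9.8, 10.2, 10.0, 9.9, 10.1] * 10)
```

Reporting the pair (lo, hi) alongside the sample average conveys exactly the calibration of "reasonable values" the abstract argues for.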
Scramjet mixing establishment times for a pulse facility
NASA Technical Reports Server (NTRS)
Rogers, R. Clayton; Weidner, Elizabeth H.
1991-01-01
A numerical simulation of the temporally developing flow through a generic scramjet combustor duct is presented for stagnation conditions typical of flight at Mach 13 as produced by a shock tunnel pulse facility. The particular focus is to examine the start-up transients and to determine the time required for certain flow parameters to become established. The calculations were made with the Navier-Stokes solver SPARK, with temporally relaxing inflow conditions derived from operation of the T4 shock tunnel at the University of Queensland in Australia. Calculations at nominal steady inflow conditions were made for comparison. The generic combustor geometry includes the injection of hydrogen fuel from the base of a centrally located strut. In both cases, the flow was assumed laminar and fuel combustion was not included. The establishment process is presented for viscous parameters in the boundary layer and for parameters related to the fuel mixing.
NASA Astrophysics Data System (ADS)
Famodimu, Omotoyosi H.; Stanford, Mark; Oduoza, Chike F.; Zhang, Lijuan
2018-06-01
Laser melting of the aluminium alloy AlSi10Mg has increasingly been used to create specialised products in various industrial applications; however, research on utilising laser melting of aluminium matrix composites to replace specialised parts has been slow on the uptake. This has been attributed to the complexity of the laser melting process, the metal/ceramic feedstock for the process, and the reaction of the feedstock material to the laser. Thus, an understanding of the process, material microstructure and mechanical properties is important for its adoption as a manufacturing route for aluminium metal matrix composites. The effects of several parameters of the laser melting process on the mechanically blended composite were therefore investigated in this research. This included single-track formations of the matrix alloy and of the composite with 5% and 10% reinforcement, respectively, to assess their reaction to laser melting, and the fabrication of density blocks to investigate relative density and porosity over different scan speeds. The results from these experiments were used to determine a process window for fabricating near-fully dense parts.
NASA Astrophysics Data System (ADS)
Shahbudin, S. N. A.; Othman, M. H.; Amin, Sri Yulis M.; Ibrahim, M. H. I.
2017-08-01
This article reviews the optimization of the metal injection molding and microwave sintering processes for cemented tungsten carbide produced by metal injection molding. In this study, the process parameters for metal injection molding were optimized using the Taguchi method. Taguchi methods have been used widely in engineering analysis to optimize performance characteristics through the setting of design parameters. Microwave sintering is a process increasingly used in powder metallurgy in place of the conventional method. It has characteristics such as an accelerated heating rate, a shortened processing cycle, high energy efficiency, a fine and homogeneous microstructure, and enhanced mechanical performance, which are beneficial for preparing nanostructured cemented carbides in metal injection molding. Moreover, metal injection molding is an advanced and promising technology that has proven capable of producing cemented carbides. Cemented tungsten carbide hard metal has been used widely in various applications owing to its desirable combination of mechanical, physical, and chemical properties. Common defects in metal injection molding and applications of microwave sintering itself are also discussed in this paper.
Effect of pilot-scale aseptic processing on tomato soup quality parameters.
Colle, Ines J P; Andrys, Anna; Grundelius, Andrea; Lemmens, Lien; Löfgren, Anders; Buggenhout, Sandy Van; Loey, Ann; Hendrickx, Marc Van
2011-01-01
Tomatoes are often processed into shelf-stable products. However, the different processing steps might have an impact on the product quality. In this study, a model tomato soup was prepared and the impact of pilot-scale aseptic processing, including heat treatment and high-pressure homogenization, on some selected quality parameters was evaluated. The vitamin C content, the lycopene isomer content, and the lycopene bioaccessibility were considered as health-promoting attributes. As a structural characteristic, the viscosity of the tomato soup was investigated. A tomato soup without oil as well as a tomato soup containing 5% olive oil were evaluated. Thermal processing had a negative effect on the vitamin C content, while lycopene degradation was limited. For both compounds, high-pressure homogenization caused additional losses. High-pressure homogenization also resulted in a higher viscosity that was accompanied by a decrease in lycopene bioaccessibility. The presence of lipids clearly enhanced the lycopene isomerization susceptibility and improved the bioaccessibility. The results obtained in this study are of relevance for product formulation and process design of tomato-based food products. © 2011 Institute of Food Technologists®
Pharmaceutical quality by design: product and process development, understanding, and control.
Yu, Lawrence X
2008-04-01
The purpose of this paper is to discuss pharmaceutical Quality by Design (QbD) and describe how it can be used to ensure pharmaceutical quality. The QbD was described and some of its elements identified. Process parameters and quality attributes were identified for each unit operation during manufacture of solid oral dosage forms. The use of QbD was contrasted with the evaluation of product quality by testing alone. QbD is a systematic approach to pharmaceutical development. It means designing and developing formulations and manufacturing processes to ensure predefined product quality. Some of the QbD elements include: defining the target product quality profile; designing the product and manufacturing processes; identifying critical quality attributes, process parameters, and sources of variability; and controlling manufacturing processes to produce consistent quality over time. Using QbD, pharmaceutical quality is assured by understanding and controlling formulation and manufacturing variables. Product testing confirms the product quality. Implementation of QbD will enable transformation of the chemistry, manufacturing, and controls (CMC) review of abbreviated new drug applications (ANDAs) into a science-based pharmaceutical quality assessment.
The study on injection parameters of selected alternative fuels used in diesel engines
NASA Astrophysics Data System (ADS)
Balawender, K.; Kuszewski, H.; Lejda, K.; Lew, K.
2016-09-01
The paper presents selected results concerning the fuel charging and spraying process for selected alternative fuels, including regular diesel fuel, rape oil, FAME, blends of these fuels in various proportions, and blends of rape oil with diesel fuel. Examination of the process included fuel charge measurements. To this end, a set-up for testing Common Rail-type injection systems was used, constructed on the basis of the Bosch EPS-815 test bench, from which the high-pressure pump drive system was adopted. For tests concerning the spraying process, a visualisation chamber of constant volume was utilised. The fuel spray development was registered with the VisioScope system (AVL).
Dubský, Pavel; Ördögová, Magda; Malý, Michal; Riesová, Martina
2016-05-06
We introduce CEval software (downloadable for free at echmet.natur.cuni.cz) that was developed for quicker and easier electrophoregram evaluation and further data processing in (affinity) capillary electrophoresis. This software allows for automatic peak detection and evaluation of common peak parameters, such as migration time, area, width, etc. Additionally, the software includes a nonlinear regression engine that performs peak fitting with the Haarhoff-van der Linde (HVL) function, including an automated initial guess of the HVL function parameters. HVL is a fundamental peak-shape function in electrophoresis, based on which the correct effective mobility of the analyte represented by the peak is evaluated. Effective mobilities of an analyte at various concentrations of a selector can be further stored and plotted in an affinity CE mode. Consequently, the mobility of the free analyte, μA, the mobility of the analyte-selector complex, μAS, and the apparent complexation constant, K′, are first guessed automatically from the linearized data plots and subsequently estimated by means of nonlinear regression. An option that allows two complexation dependencies to be fitted at once is especially convenient for enantioseparations. Statistical processing of these data is also included, which allowed us to: i) express the 95% confidence intervals for the μA, μAS and K′ least-squares estimates, and ii) perform hypothesis testing on the estimated parameters for the first time. We demonstrate the benefits of the CEval software by inspecting the complexation of tryptophan methyl ester with two cyclodextrins, neutral heptakis(2,6-di-O-methyl)-β-CD and charged heptakis(6-O-sulfo)-β-CD. Copyright © 2016 Elsevier B.V. All rights reserved.
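The complexation-fitting step described above (estimating μA, μAS and K′ by nonlinear regression) can be sketched with the standard 1:1 binding isotherm; this is an illustration of that step only, not CEval's HVL peak-fitting engine, and the concentrations and parameter values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def mu_eff(c, mu_A, mu_AS, K):
    # Effective mobility of an analyte at selector concentration c for a
    # 1:1 complexation equilibrium with apparent constant K.
    return (mu_A + mu_AS * K * c) / (1.0 + K * c)

# Synthetic, noise-free data (illustrative values, not from the paper).
c = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0]) * 1e-3   # selector conc. (M)
true = (20.0, -5.0, 300.0)                              # mu_A, mu_AS, K
popt, pcov = curve_fit(mu_eff, c, mu_eff(c, *true), p0=(15.0, 0.0, 100.0))
```

The diagonal of `pcov` gives the variance of each estimate, from which 95% confidence intervals of the kind CEval reports can be formed.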
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered, and its performance is evaluated using the black hole algorithm (BHA). BHA is based on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the process, using a single objective at a time. The results obtained using BHA are found to be better when compared with the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
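A minimal sketch of the black hole algorithm for minimization is given below; it follows the commonly cited formulation (best candidate acts as the black hole, stars drift toward it, and any star crossing the event horizon is re-seeded randomly), but the test function and all settings are illustrative, not the ECM models used in the paper:

```python
import numpy as np

def black_hole_optimize(f, bounds, n_stars=30, n_iter=300, seed=0):
    # Minimal Black Hole Algorithm sketch for minimization.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    stars = rng.uniform(lo, hi, size=(n_stars, len(lo)))
    for _ in range(n_iter):
        fit = np.array([f(s) for s in stars])
        bh = stars[fit.argmin()].copy()          # best star = black hole
        f_bh = fit.min()
        radius = f_bh / (fit.sum() + 1e-12)      # event-horizon heuristic
        # All stars drift a random fraction of the way toward the black hole.
        stars += rng.uniform(0.0, 1.0, stars.shape) * (bh - stars)
        # Stars inside the event horizon are swallowed and re-seeded.
        for i in range(n_stars):
            if np.linalg.norm(stars[i] - bh) < radius:
                stars[i] = rng.uniform(lo, hi)
        stars[0] = bh                            # keep the black hole (elitism)
    return bh, f_bh

# Demo on the sphere function in 2-D.
best, f_best = black_hole_optimize(lambda x: float(np.sum(x * x)),
                                   [(-5.0, 5.0), (-5.0, 5.0)])
```

In an ECM setting, `f` would be replaced by the (negated) MRR or the overcut regression model, with `bounds` covering the machining parameter ranges.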
A comparative method for processing immunological parameters: developing an "Immunogram".
Ortolani, Riccardo; Bellavite, Paolo; Paiola, Fiorenza; Martini, Morena; Marchesini, Martina; Veneri, Dino; Franchini, Massimo; Chirumbolo, Salvatore; Tridente, Giuseppe; Vella, Antonio
2010-04-01
The immune system is a network of numerous cells that communicate both directly and indirectly with each other. The system is very sensitive to antigenic stimuli, which are memorised, and is closely connected with the endocrine and nervous systems. Therefore, in order to study the immune system correctly, it must be considered in all its complexity by analysing its components with multiparametric tools that take its dynamic characteristic into account. We analysed lymphocyte subpopulations by using monoclonal antibodies with six different fluorochromes; the monoclonal panel employed included CD45, CD3, CD4, CD8, CD16, CD56, CD57, CD19, CD23, CD27, CD5, and HLA-DR. This panel has enabled us to measure many lymphocyte subsets in different states and with different functions: helper, suppressor, activated, effector, naïve, memory, and regulatory. A database was created to collect the values of immunological parameters of approximately 8,000 subjects who have undergone testing since 2000. When the distributions of the values for these parameters were compared with the medians of reference values published in the literature, we found that most of the values from the subjects included in the database were close to the medians in the literature. To process the data we used a comparative method that calculates the percentile rank of the values of a subject by comparing them with the values for other subjects of the same age. From this data processing we obtained a set of percentile ranks that represent the positions of the various parameters with regard to the data for other age-matched subjects included in the database. These positions, relative to both the absolute values and percentages, are plotted in a graph. We have called the final plot, which can be likened to that subject's immunological fingerprint, an "Immunogram".
In order to perform the necessary calculations automatically, we developed dedicated software (Immunogramma) which provides at least two different "pictures" for each subject: the first is based on a comparison of the individual's data with those from all age-related subjects, while the second provides a comparison with only age and disease-related subjects. In addition, we can superimpose two fingerprints from the same subject, calculated at different times, in order to produce a dynamic picture, for instance before and after treatment. Finally, with the aim of interpreting the clinical and diagnostic meaning of a set of positions for the values of the measured parameters, we can also search the database to determine whether it contains other subjects who have a similar pattern for some selected immune parameters. This method helps to study and follow-up immune parameters over time. The software enables automation of the process and data sharing with other departments and laboratories, so the database can grow rapidly, thus expanding its informational capacity.
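The core calculation of the comparative method, a percentile rank against an age-matched reference sample, can be sketched as follows (the midpoint convention for ties is our choice; the Immunogramma software may handle ties differently):

```python
from bisect import bisect_left, bisect_right

def percentile_rank(value, reference):
    # Percentile rank of `value` within a reference sample: the percentage
    # of reference values below it, counting ties at half weight.
    ref = sorted(reference)
    below = bisect_left(ref, value)
    ties = bisect_right(ref, value) - below
    return 100.0 * (below + 0.5 * ties) / len(ref)

# A subject's CD4 count compared against ten age-matched reference values.
rank = percentile_rank(5, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```

Plotting one such rank per immunological parameter yields the "fingerprint" chart the authors call an Immunogram.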
NASA Astrophysics Data System (ADS)
Golder, K.; Burr, D. M.; Tran, L.
2017-12-01
Regional volcanic processes shaped many planetary surfaces in the Solar System, often through the emplacement of long, voluminous lava flows. Terrestrial examples of this type of lava flow have been used as analogues for extensive martian flows, including those within the circum-Cerberus outflow channels. This analogy is based on similarities in morphology, extent, and inferred eruptive style between terrestrial and martian flows, which raises the question of how these lava flows appear comparable in size and morphology on different planets. The parameters that influence the areal extent of silicate lavas during emplacement may be categorized as either inherent or external to the lava. The inherent parameters include the lava yield strength, density, composition, water content, crystallinity, exsolved gas content, pressure, and temperature. Each inherent parameter affects the overall viscosity of the lava, and for this work can be considered a subset of the viscosity parameter. External parameters include the effusion rate, total erupted volume, regional slope, and gravity. To investigate which parameter(s) may control the development of long lava flows on Mars, we are applying a computational numerical model to reproduce the observed lava flow morphologies. Using a matrix of boundary conditions in the model enables us to investigate the possible range of emplacement conditions that can yield the observed morphologies. We have constructed the basic model framework in Model Builder within ArcMap, including all governing equations and parameters that we seek to test, and initial implementation and calibration have been performed. The base model is currently capable of generating a lava flow that propagates along a pathway governed by the local topography. At AGU, the results of model calibration using the Eldgjá and Laki lava flows in Iceland will be presented, along with the application of the model to lava flows within the Cerberus plains on Mars.
We then plan to convert the model into Python, for easy modification and portability within the community.
Incorporating microbes into large-scale biogeochemical models
NASA Astrophysics Data System (ADS)
Allison, S. D.; Martiny, J. B.
2008-12-01
Micro-organisms, including Bacteria, Archaea, and Fungi, control major processes throughout the Earth system. Recent advances in microbial ecology and microbiology have revealed an astounding level of genetic and metabolic diversity in microbial communities. However, a framework for interpreting the meaning of this diversity has lagged behind the initial discoveries. Microbial communities have yet to be included explicitly in any major biogeochemical models in terrestrial ecosystems, and have only recently broken into ocean models. Although simplification of microbial communities is essential in complex systems, omission of community parameters may seriously compromise model predictions of biogeochemical processes. Two key questions arise from this tradeoff: 1) When and where must microbial community parameters be included in biogeochemical models? 2) If microbial communities are important, how should they be simplified, aggregated, and parameterized in models? To address these questions, we conducted a meta-analysis to determine if microbial communities are sensitive to four environmental disturbances that are associated with global change. In all cases, we found that community composition changed significantly following disturbance. However, the implications for ecosystem function were unclear in most of the published studies. Therefore, we developed a simple model framework to illustrate the situations in which microbial community changes would affect rates of biogeochemical processes. We found that these scenarios could be quite common, but powerful predictive models cannot be developed without much more information on the functions and disturbance responses of microbial taxa. Small-scale models that explicitly incorporate microbial communities also suggest that process rates strongly depend on microbial interactions and disturbance responses. 
The challenge is to scale up these models to make predictions at the ecosystem and global scales based on measurable parameters. We argue that meeting this challenge will require a coordinated effort to develop a series of nested models at scales ranging from the micron to the globe in order to optimize the tradeoff between model realism and feasibility.
Towards a new parameterization of ice particles growth
NASA Astrophysics Data System (ADS)
Krakovska, Svitlana; Khotyayintsev, Volodymyr; Bardakov, Roman; Shpyg, Vitaliy
2017-04-01
Ice particles are the main component of polar clouds, unlike in warmer regions. That is why the correct representation of ice particle formation and growth in NWP and other numerical atmospheric models is crucial for understanding the whole chain of water transformation, including precipitation formation and its further deposition as snow in polar glaciers. Currently, the parameterization of ice in atmospheric models is among the most difficult challenges. Here we present a renewed theoretical analysis of the evolution of a mixed cloud or cold fog from the moment of ice nuclei activation until complete crystallization. A simplified model is proposed that includes supercooled cloud droplets, initially uniform ice particles, and water vapor. We obtain independent dimensionless input parameters of a cloud, and find the main scenarios and stages of evolution of the microphysical state of the cloud. The characteristic times and particle sizes have been found, as well as the peculiarities of the microphysical processes at each stage of evolution. In the future, the proposed original and physically grounded approximations may serve as a basis for new scientifically substantiated and numerically efficient parameterizations of microphysical processes in mixed clouds for modern atmospheric models. The relevance of the theoretical analysis is confirmed by numerical modeling for a wide range of combinations of possible conditions in the atmosphere, including cold polar regions. The main conclusion of the research is that, until the complete disappearance of cloud droplets, the growth of ice particles occurs at a practically constant humidity corresponding to saturation over water, regardless of all other parameters of the cloud. This process can be described by a single first-order differential equation.
Moreover, a dimensionless parameter has been proposed as a quantitative criterion of a transition from dominant depositional to intense collectional growth of ice particles; it could be used in models with bulk parameterization of cloud and precipitation formation processes.
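The single first-order equation mentioned above is not given in the abstract; for constant supersaturation, textbook depositional growth of a spherical particle obeys r·dr/dt = G (with G lumping the vapour diffusivity and the constant saturation excess), which integrates in closed form. The sketch below assumes that standard form, not the authors' exact equation:

```python
import math

def deposition_radius(r0, G, t):
    # Diffusional (depositional) growth at constant supersaturation:
    # r * dr/dt = G  =>  r(t) = sqrt(r0**2 + 2*G*t).
    # G is an illustrative lumped growth constant (assumption).
    return math.sqrt(r0 * r0 + 2.0 * G * t)
```

This square-root growth law is what makes depositional growth slow down as particles enlarge, setting the stage for the transition to collectional growth that the proposed dimensionless criterion quantifies.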
A probabilistic model framework for evaluating year-to-year variation in crop productivity
NASA Astrophysics Data System (ADS)
Yokozawa, M.; Iizumi, T.; Tao, F.
2008-12-01
Most models describing the relation between crop productivity and weather conditions have so far focused on mean changes of crop yield. To maintain a stable food supply in the face of abnormal weather as well as climate change, evaluating the year-to-year variations in crop productivity, rather than the mean changes, is more essential. We here propose a new probabilistic model framework based on Bayesian inference and Monte Carlo simulation. As an example, we first introduce a model of paddy rice production in Japan, called PRYSBI (Process-based Regional rice Yield Simulator with Bayesian Inference; Iizumi et al., 2008). The model structure is the same as that of SIMRIW, which was developed and is used widely in Japan. The model includes three sub-models describing phenological development, biomass accumulation and maturing of the rice crop. These processes are formulated to capture the response of the rice plant to weather conditions. This model was originally developed to predict rice growth and yield at the plot scale. We applied it to evaluate large-scale rice production while keeping the same model structure. To do so, we treated the model parameters as stochastic variables. In order to let the model match actual yields at the larger scale, the parameters were determined from the agricultural statistical data of each prefecture of Japan together with weather data averaged over the region. The posterior probability distribution functions (PDFs) of the parameters included in the model were obtained using Bayesian inference, with the MCMC (Markov chain Monte Carlo) algorithm used to numerically solve Bayes' theorem. To evaluate the year-to-year changes in rice growth and yield under this framework, we first iterate simulations with sets of parameter values sampled from the estimated posterior PDF of each parameter and then take the ensemble mean weighted with the posterior PDFs. We will also present another example, for maize productivity in China.
The framework proposed here provides us information on uncertainties, possibilities and limitations on future improvements in crop model as well.
NASA Astrophysics Data System (ADS)
Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.
2015-08-01
Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps. This yielded the posterior distribution of GPP at each half hour and allowed the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
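The non-rectangular hyperbola light-response curve at the heart of the separation above has a standard closed form; the sketch below fits it to synthetic data by least squares (the paper uses MCMC to obtain full posteriors, and the parameter values here are illustrative, not the Speulderbos estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

def nrh(I, alpha, p_max, theta):
    # Non-rectangular hyperbola: alpha is the initial slope (quantum yield),
    # p_max the asymptotic GPP, and theta in (0, 1) the degree of curvature.
    b = alpha * I + p_max
    return (b - np.sqrt(b**2 - 4.0 * theta * alpha * I * p_max)) / (2.0 * theta)

# Synthetic light-response data (illustrative parameter values).
I = np.linspace(0.0, 2000.0, 40)          # incident radiation
true = (0.05, 30.0, 0.8)                  # alpha, p_max, theta
popt, _ = curve_fit(nrh, I, nrh(I, *true), p0=(0.03, 20.0, 0.5),
                    bounds=([1e-4, 1.0, 0.01], [1.0, 100.0, 0.99]))
```

In the Bayesian version, each of the three parameters gets a prior and an MCMC chain; propagating the sampled parameters through `nrh` yields the half-hourly posterior distribution of GPP.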
Spectral Quantitation Of Hydroponic Nutrients
NASA Technical Reports Server (NTRS)
Schlager, Kenneth J.; Kahle, Scott J.; Wilson, Monica A.; Boehlen, Michelle
1996-01-01
Instrument continuously monitors hydroponic solution by use of absorption and emission spectrometry to determine concentrations of principal nutrients, including nitrate, iron, potassium, calcium, magnesium, phosphorus, sodium, and others. Does not depend on extraction and processing of samples, use of such surrogate parameters as pH or electrical conductivity for control, or addition of analytical reagents to solution. Solution not chemically altered by analysis and can be returned to hydroponic process stream after analysis.
Analysis And Control System For Automated Welding
NASA Technical Reports Server (NTRS)
Powell, Bradley W.; Burroughs, Ivan A.; Kennedy, Larry Z.; Rodgers, Michael H.; Goode, K. Wayne
1994-01-01
Automated variable-polarity plasma arc (VPPA) welding apparatus operates under electronic supervision by welding analysis and control system. System performs all major monitoring and controlling functions. It acquires, analyzes, and displays weld-quality data in real time and adjusts process parameters accordingly. Also records pertinent data for use in post-weld analysis and documentation of quality. System includes optoelectronic sensors and data processors that provide feedback control of welding process.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. 
Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system.
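A minimal sketch of two of the ingredients named above (perturbation sensitivities by central differences, and a Gauss-Newton update of a weighted least-squares objective), applied to a toy exponential model rather than a real process model; the full-step iteration below omits the trust-region safeguards of UCODE_2005.

```python
import numpy as np

def central_diff_jacobian(model, params, rel_step=1e-6):
    """Perturbation sensitivities d(simulated value)/d(parameter) by central differences."""
    p = np.asarray(params, dtype=float)
    y0 = np.asarray(model(p))
    jac = np.zeros((y0.size, p.size))
    for j in range(p.size):
        h = rel_step * max(abs(p[j]), 1.0)
        up, dn = p.copy(), p.copy()
        up[j] += h
        dn[j] -= h
        jac[:, j] = (np.asarray(model(up)) - np.asarray(model(dn))) / (2.0 * h)
    return jac

def gauss_newton(model, obs, weights, p0, n_iter=50):
    """Minimize the weighted least-squares objective sum(w * (obs - sim)^2)."""
    p = np.asarray(p0, dtype=float)
    w = np.asarray(weights, dtype=float)
    for _ in range(n_iter):
        r = np.asarray(obs) - np.asarray(model(p))
        jac = central_diff_jacobian(model, p)
        jtw = jac.T * w  # J^T W with diagonal weighting
        p = p + np.linalg.solve(jtw @ jac, jtw @ r)  # normal equations step
    return p

# Toy "process model": y = a * exp(b * t) at fixed observation times.
t = np.linspace(0.0, 1.0, 6)
true = np.array([2.0, -1.5])
sim = lambda p: p[0] * np.exp(p[1] * t)
est = gauss_newton(sim, sim(true), np.ones(t.size), [1.5, -1.0])
print(np.round(est, 4))
```

On noise-free synthetic observations the iteration recovers the generating parameters [2.0, -1.5].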
High-throughput imaging of heterogeneous cell organelles with an X-ray laser (CXIDB ID 25)
Hantke, Max F.
2014-11-17
Preprocessed detector images that were used for the paper "High-throughput imaging of heterogeneous cell organelles with an X-ray laser". The CXI file contains the entire recorded data - including both hits and blanks. It also includes down-sampled images and LCLS machine parameters. Additionally, the Cheetah configuration file is attached that was used to create the pre-processed data.
Ormes, James D; Zhang, Dan; Chen, Alex M; Hou, Shirley; Krueger, Davida; Nelson, Todd; Templeton, Allen
2013-02-01
There has been growing interest in amorphous solid dispersions for bioavailability enhancement in drug discovery. Spray drying, as shown in this study, is well suited to producing prototype amorphous dispersions in the Candidate Selection stage, where drug supply is limited. This investigation mapped the processing window of a micro-spray dryer to achieve desired particle characteristics and optimize throughput/yield. Effects of processing variables on the properties of hypromellose acetate succinate were evaluated by a fractional factorial design of experiments. Parameters studied included solids loading, atomization, nozzle size, and spray rate; response variables included particle size, morphology, and yield. Unlike most other commercial small-scale spray dryers, the ProCepT was capable of producing particles with a relatively wide range of mean particle sizes, ca. 2-35 µm, allowing material properties to be tailored to support various applications. In addition, an optimized throughput of 35 g/hour with a yield of 75-95% was achieved, sufficient to support studies from lead identification/lead optimization through early safety studies. A regression model was constructed to quantify the relationship between processing parameters and the response variables. The response surface curves provide a useful tool for designing processing conditions, leading to a reduction in development time and drug usage to support drug discovery.
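The fractional-factorial analysis reduces to an ordinary least-squares fit of a main-effects model on coded factors; the runs and particle-size responses below are invented for illustration, not data from the study.

```python
import numpy as np

# Hypothetical 2-level fractional-factorial runs (coded -1/+1) for three factors:
# solids loading, atomization pressure, spray rate.  Responses are illustrative
# particle sizes (um), not measurements from the ProCepT study.
X_coded = np.array([
    [-1, -1, -1],
    [ 1, -1,  1],
    [-1,  1,  1],
    [ 1,  1, -1],
])
size_um = np.array([5.0, 18.0, 9.0, 22.0])

# Main-effects model: size = b0 + b1*loading + b2*atomization + b3*spray_rate
A = np.column_stack([np.ones(len(size_um)), X_coded])
coef, *_ = np.linalg.lstsq(A, size_um, rcond=None)
print(np.round(coef, 3))  # intercept and the three main effects
```

Because the coded design is orthogonal, each coefficient is simply half the difference between the average responses at the high and low levels of that factor.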
Computer Simulation To Assess The Feasibility Of Coring Magma
NASA Astrophysics Data System (ADS)
Su, J.; Eichelberger, J. C.
2017-12-01
Lava lakes on Kilauea Volcano, Hawaii have been successfully cored many times, often with nearly complete recovery and at temperatures exceeding 1100 °C. Water exiting nozzles on the diamond core bit face quenches melt to glass just ahead of the advancing bit. The bit readily cuts a clean annulus and the core, fully quenched lava, passes smoothly into the core barrel. The core remains intact after recovery, even when there are comparable amounts of glass and crystals with different coefficients of thermal expansion. The unique resulting data reveal the rate and sequence of crystal growth in cooling basaltic lava and the continuous liquid line of descent as a function of temperature from basalt to rhyolite. Now that magma bodies, rather than lava pooled at the surface, have been penetrated by geothermal drilling, the question arises as to whether similar coring could be conducted at depth, providing fundamentally new insights into behavior of magma. This situation is considerably more complex because the coring would be conducted at depths exceeding 2 km and at drilling fluid pressures of 20 MPa or more. Criteria that must be satisfied include: 1) melt must be quenched ahead of the bit, and the core itself must be quenched before it enters the barrel; 2) circulating drilling fluid must keep the coring assembly cooled to within operational limits; 3) the temperature of the drilling fluid column must nowhere exceed the local boiling point. A fluid flow simulation was conducted to estimate the process parameters necessary to maintain workable temperatures during the coring operation. SolidWorks Flow Simulation was used to estimate the effect of process parameters on the temperature distribution of the magma immediately surrounding the borehole and of drilling fluid within the bottom-hole assembly (BHA). A solid model of the BHA was created in SolidWorks to capture the flow behavior around the BHA components.
Process parameters used in the model include the fluid properties and temperature of magma, coolant flow rate, rotation speed, and rate of penetration (ROP). The modeling results indicate that there are combinations of process parameters that will provide sufficient cooling to enable the desired coring process in magma.
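A first-order heat balance of the kind such a flow simulation refines can be sketched directly; all numbers below are hypothetical, not results from the SolidWorks model.

```python
def coolant_mass_flow(q_watts, cp_j_per_kg_k, dt_allowed_k):
    """Mass flow (kg/s) needed so the drilling-fluid temperature rise across
    the bottom-hole assembly stays within dt_allowed_k for heat load q_watts."""
    return q_watts / (cp_j_per_kg_k * dt_allowed_k)

# Hypothetical numbers: 150 kW absorbed from the magma, water coolant
# (cp ~ 4186 J/kg/K), 40 K allowable rise to stay below the local boiling point.
flow = coolant_mass_flow(150e3, 4186.0, 40.0)
print(f"{flow:.2f} kg/s")  # 0.90 kg/s
```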
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty of process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general and can be applied to a wide range of problems in hydrology and beyond.
VizieR Online Data Catalog: Investigation of mass loss mechanism of LPVs (Winters+, 2000)
NASA Astrophysics Data System (ADS)
Winters, J. M.; Le Bertre, T.; Jeong, K. S.; Helling, C.; Sedlmayr, E.
2000-09-01
Parameters and resultant quantities of a grid of hydrodynamical models for the circumstellar dust shells around pulsating red giants which treat the time-dependent hydrodynamics and include a detailed treatment of the dust formation process. (1 data file).
BIBLIOGRAPHY ON LEARNING PROCESS. SUPPLEMENT II.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Graduate School of Education.
This supplementary bibliography lists materials on various facets of human learning. Approximately 60 unannotated references are provided for documents dating from 1954 to 1966. Journal articles, books, research reports, and conference papers are listed. Some subject areas included are (1) learning parameters and ability, (2) retention and…
Waechter, David A.; Wolf, Michael A.; Umbarger, C. John
1985-01-01
A hand-holdable, battery-operated, microprocessor-based spectrometer gun includes a low-power matrix display and sufficient memory to permit both real-time observation and extended analysis of detected radiation pulses. Universality of the incorporated signal processing circuitry permits operation with various detectors having differing pulse detection and sensitivity parameters.
An integrated software system for geometric correction of LANDSAT MSS imagery
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Esilva, A. J. F. M.; Camara-Neto, G.; Serra, P. R. M.; Desousa, R. C. M.; Mitsuo, Fernando Augusta, II
1984-01-01
A system for geometrically correcting LANDSAT MSS imagery includes all phases of processing, from receiving a raw computer compatible tape (CCT) to the generation of a corrected CCT (or UTM mosaic). The system comprises modules for: (1) control of the processing flow; (2) calculation of satellite ephemeris and attitude parameters; (3) generation of uncorrected files from raw CCT data; (4) creation, management and maintenance of a ground control point library; (5) determination of the image correction equations, using attitude and ephemeris parameters and existing ground control points; (6) generation of a corrected LANDSAT file, using the equations determined beforehand; (7) union of LANDSAT scenes to produce a UTM mosaic; and (8) generation of output tape, in super-structure format.
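Step (5), determining image correction equations from ground control points, can be sketched as a least-squares affine fit; the GCP coordinates below are invented, and a real MSS correction would also incorporate the ephemeris and attitude terms.

```python
import numpy as np

def fit_affine(image_xy, map_xy):
    """Least-squares affine mapping from image (line, sample) coordinates to
    map coordinates, estimated from ground control points (GCPs)."""
    A = np.column_stack([image_xy, np.ones(len(image_xy))])
    coef_x, *_ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)
    return coef_x, coef_y

# Hypothetical GCPs: image pixel positions and their UTM easting/northing (m).
img = np.array([[0, 0], [0, 100], [100, 0], [100, 100]], dtype=float)
utm = np.array([[500000, 4000000], [500030, 4005700],
                [505700, 3999970], [505730, 4005670]], dtype=float)
cx, cy = fit_affine(img, utm)

# Apply the fitted equations back to the GCPs to check the residual.
pred = np.column_stack([img, np.ones(4)]) @ np.column_stack([cx, cy])
resid = np.abs(pred - utm).max()
print(resid)
```

These four control points happen to be exactly affine-consistent, so the residual is at floating-point level; real GCP sets would leave a nonzero residual that measures the quality of the correction equations.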
NASA Astrophysics Data System (ADS)
Song, Yuxin; Wang, Cong; Dong, Xinran; Yin, Kai; Zhang, Fan; Xie, Zheng; Chu, Dongkai; Duan, Ji'an
2018-06-01
In this study, a facile and detailed strategy to fabricate superhydrophobic aluminum surfaces with controllable adhesion by femtosecond laser ablation is presented. The influences of key femtosecond laser processing parameters, including the scanning speed, laser power and interval, on the wetting properties of the laser-ablated surfaces are investigated. It is demonstrated that the adhesion between water and the superhydrophobic surface can be effectively tuned from extremely low to high by adjusting the laser processing parameters. The mechanism behind the changes in wetting behavior of the laser-ablated surfaces is also discussed. These superhydrophobic surfaces with tunable adhesion have many potential applications, such as self-cleaning surfaces, oil-water separation, anti-icing surfaces and liquid transportation.
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
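The fuzzy-inference core of such a system can be sketched as a minimal zero-order Sugeno model with Gaussian memberships; the rule centers, widths and costs below are invented for illustration, not HACEM's trained values.

```python
import math

def gauss_mf(x, center, width):
    """Gaussian fuzzy membership function."""
    return math.exp(-0.5 * ((x - center) / width) ** 2)

# Minimal zero-order Sugeno system with two rules on one input
# (test duration, hours) -> cost (arbitrary units).  Values are hypothetical.
rules = [
    {"center": 1.0, "width": 0.8, "cost": 50.0},   # short test -> low cost
    {"center": 8.0, "width": 3.0, "cost": 400.0},  # long test  -> high cost
]

def estimate_cost(duration_h):
    """Weighted average of rule outputs, weighted by rule firing strength."""
    w = [gauss_mf(duration_h, r["center"], r["width"]) for r in rules]
    return sum(wi * r["cost"] for wi, r in zip(w, rules)) / sum(w)

print(round(estimate_cost(1.0), 1), round(estimate_cost(8.0), 1))
```

In ANFIS, the centers, widths and rule outputs would be tuned by neural-network training against past test costs rather than set by hand.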
An approach to and web-based tool for infectious disease outbreak intervention analysis
NASA Astrophysics Data System (ADS)
Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid; Deshpande, Alina
2017-04-01
Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and the public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we present a Susceptible-Infectious-Recovered (SIR) model modified to include control measures, which accepts parameter ranges rather than point estimates and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use, and describe a research agenda to further promote interactions between decision makers and the modeling community.
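A minimal sketch of an SIR model with a transmission-reducing control term, swept over a range of beta values rather than a single point estimate; all parameter values are illustrative, not the ones used in the tool.

```python
import numpy as np

def sir_with_control(beta, gamma, control_eff, s0, i0, days, dt=0.1):
    """Discrete-time SIR where a control measure reduces transmission by a
    fraction control_eff (0 = no intervention, 1 = full suppression).
    Returns the infectious time series."""
    s, i, r = s0, i0, 0.0
    n = s0 + i0
    traj = []
    for _ in range(int(days / dt)):
        lam = (1.0 - control_eff) * beta * s * i / n  # new infections per unit time
        s, i, r = s - lam * dt, i + (lam - gamma * i) * dt, r + gamma * i * dt
        traj.append(i)
    return np.array(traj)

# Parameter *ranges* rather than point estimates: sweep beta over a grid
# and compare epidemic peaks with and without a 50%-effective control.
betas = np.linspace(0.3, 0.5, 5)
peaks = [sir_with_control(b, 0.1, 0.0, 9990.0, 10.0, 200).max() for b in betas]
peaks_ctrl = [sir_with_control(b, 0.1, 0.5, 9990.0, 10.0, 200).max() for b in betas]
print(round(max(peaks)), round(max(peaks_ctrl)))
```

Sweeping parameter ranges yields a band of trajectories, which is the quantity a decision maker can compare against intervention options.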
Improved silicon nitride for advanced heat engines
NASA Technical Reports Server (NTRS)
Yeh, H. C.; Wimmer, J. M.; Huang, H. H.; Rorabaugh, M. E.; Schienle, J.; Styhr, K. H.
1985-01-01
The AiResearch Casting Company baseline silicon nitride (92 percent GTE SN-502 Si₃N₄ plus 6 percent Y₂O₃ plus 2 percent Al₂O₃) was characterized with methods that included chemical analysis, oxygen content determination, electrophoresis, particle size distribution analysis, surface area determination, and analysis of the degree of agglomeration and maximum particle size of elutriated powder. Test bars were injection molded and processed through sintering at 0.68 MPa (100 psi) of nitrogen. The as-sintered test bars were evaluated by X-ray phase analysis, room and elevated temperature modulus of rupture strength, Weibull modulus, stress rupture, strength after oxidation, fracture origins, microstructure, and density from quantities of samples sufficiently large to generate statistically valid results. A series of small test matrices was conducted to study the effects and interactions of processing parameters, which included raw materials, binder systems, binder removal cycles, injection molding temperatures, particle size distribution, sintering additives, and sintering cycle parameters.
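The Weibull modulus reported for the test bars can be estimated from modulus-of-rupture data by linearized rank regression; the strength values below are illustrative, not the measured AiResearch data.

```python
import math

def weibull_modulus(strengths):
    """Estimate the Weibull modulus m by linear regression of
    ln(ln(1/(1-F))) on ln(strength), using median-rank plotting positions."""
    s = sorted(strengths)
    n = len(s)
    xs, ys = [], []
    for i, val in enumerate(s, start=1):
        f = (i - 0.3) / (n + 0.4)  # median-rank estimate of failure probability
        xs.append(math.log(val))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Illustrative room-temperature MOR values (MPa), not data from the report.
mor = [620, 655, 680, 700, 715, 730, 745, 760, 780, 810]
print(round(weibull_modulus(mor), 1))
```

The slope of the regression is the Weibull modulus: a tighter strength distribution gives a higher m, which is why the report evaluates it alongside mean strength.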
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vackel, Andrew; Sampath, Sanjay
2017-02-27
Thermal spray deposited WC-CoCr coatings are extensively used for surface protection of wear prone components in a variety of applications. Although the primary purpose of the coating is wear and corrosion protection, many of the coated components are structural systems (aero landing gear, hydraulic cylinders, drive shafts, etc.) and as such experience cyclic loading during service and are potentially prone to fatigue failure. It is of interest to ensure that the coating and the application process do not deleteriously affect the fatigue strength of the parent structural metal. It has long been appreciated that the relative fatigue life of a thermal sprayed component can be affected by the residual stresses arising from coating deposition. The magnitude of these stresses can be managed by torch processing parameters and can also be influenced by deposition effects, particularly the deposition temperature. In this study, the effect of both torch operating parameters (particle states) and deposition conditions (notably substrate temperature) was investigated through rotating bending fatigue studies. The results indicate a strong influence of process parameters on relative fatigue life, including credit or debit to the substrate's fatigue life measured via rotating bend beam studies. Damage progression within the substrate was further explored by stripping the coating off part way through fatigue testing, revealing a delay in the onset of substrate damage with more fatigue resistant coatings but no benefit with coatings with inadequate properties. Finally, the results indicate that compressive residual stress and adequate load bearing capability of the coating (both controlled by torch and deposition parameters) delay the onset of substrate damage, enabling fatigue credit of the coated component.
NASA Astrophysics Data System (ADS)
Barsuk, Alexandr A.; Paladi, Florentin
2018-04-01
The dynamic behavior of a thermodynamic system described by one order parameter and one control parameter is studied in a small neighborhood of ordinary and bifurcation equilibrium values of the system parameters. Using general methods for investigating the branching (bifurcations) of solutions of nonlinear equations, we performed an exhaustive analysis of the dependence of the order parameter on the control parameter in a small vicinity of the equilibrium values of the parameters, including a stability analysis of the equilibrium states and the asymptotic behavior of the order parameter as a function of the control parameter (bifurcation diagrams). The peculiarities of the transition to an unstable state of the system are discussed, and estimates of the transition time to the unstable state in the neighborhood of ordinary and bifurcation equilibrium values of the parameters are given. The influence of an external field on the dynamic behavior of the system is analyzed, and the peculiarities of the dynamic behavior near the ordinary and bifurcation equilibrium values of the parameters in the presence of an external field are discussed. The dynamic process of magnetization of a ferromagnet is discussed using the general methods of bifurcation and stability analysis presented in the paper.
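The bifurcation scenario described above can be illustrated with a Landau-type relaxation equation for the order parameter; the model below is a generic sketch, not the specific system studied in the paper.

```python
# Landau-type relaxation of an order parameter eta with control parameter a:
#   d(eta)/dt = -(a*eta + b*eta**3 - h)
# For external field h = 0 and b > 0 this has a pitchfork bifurcation at a = 0:
# eta = 0 is stable for a > 0 and unstable for a < 0, where the stable
# equilibria move to eta = +/- sqrt(-a/b).
def relax(a, b=1.0, h=0.0, eta0=0.1, dt=0.01, steps=20000):
    """Forward-Euler integration of the relaxation dynamics."""
    eta = eta0
    for _ in range(steps):
        eta -= dt * (a * eta + b * eta ** 3 - h)
    return eta

print(round(relax(a=0.5), 4))   # 0.0    (above the bifurcation: decays to zero)
print(round(relax(a=-0.5), 4))  # 0.7071 (below: settles at sqrt(0.5))
```

Sweeping a through zero and recording the settled eta traces out the bifurcation diagram; adding a small field h tilts the pitchfork and selects one branch, as in the ferromagnet example.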
Valley plugs, land use, and phytogeomorphic response: Chapter 14
Pierce, Aaron R.; King, Sammy L.; Shroder, John F.
2013-01-01
Anthropogenic alteration of fluvial systems can disrupt functional processes that provide valuable ecosystem services. Channelization alters fluvial parameters and the connectivity of river channels to their floodplains, which is critical for productivity, nutrient cycling, flood control, and biodiversity. The effects of channelization can be exacerbated by local geology and land-use activities, resulting in dramatic geomorphic readjustments including the formation of valley plugs. Considerable variation exists in the response of abiotic processes, including surface hydrology, subsurface hydrology, and sedimentation dynamics, to channelization and the formation of valley plugs. Altered abiotic processes associated with these geomorphic features and readjustments influence biotic processes, including species composition, abundance, and successional processes. Considerable interest exists in restoring altered fluvial systems and their floodplains because of their social and ecological importance. Understanding abiotic and biotic responses to channelization and valley-plug formation within the context of the watershed is essential to successful restoration. This chapter focuses on the primary causes of valley-plug formation, the resulting fluvial-geomorphic responses, vegetation responses, and restoration and research needs for these systems.
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
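A Monte Carlo sketch of a process sensitivity index for an additive toy response; the two model alternatives per process and their parameter distributions are invented for illustration, not the groundwater models of the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_process(models, n):
    """Draw n outputs of a process: first pick one of the alternative models
    (model uncertainty), then draw from it (parametric uncertainty)."""
    pick = rng.integers(len(models), size=n)
    mu = np.array([m[0] for m in models])[pick]
    sd = np.array([m[1] for m in models])[pick]
    return rng.normal(mu, sd)

# Two hypothetical model alternatives per process, each summarized as a
# normal (mean, sd); the recharge models disagree more than the geology models.
recharge = [(1.0, 0.3), (1.4, 0.1)]
geology = [(2.0, 0.1), (2.1, 0.1)]

def process_sensitivity_index(proc, other, n=4000, m=200):
    """Var over `proc` of the conditional mean over `other`, divided by the
    total output variance, for the additive toy response y = a + b."""
    a = sample_process(proc, n)
    b = sample_process(other, n * m).reshape(n, m)
    cond_mean = (a[:, None] + b).mean(axis=1)  # inner expectation over `other`
    total = sample_process(proc, n * m) + sample_process(other, n * m)
    return cond_mean.var() / total.var()

ps_recharge = process_sensitivity_index(recharge, geology)
ps_geology = process_sensitivity_index(geology, recharge)
print(round(ps_recharge, 2), round(ps_geology, 2))
```

Because the variance of each process here pools both which model is chosen and that model's parameter scatter, the index captures model and parametric uncertainty together, which is the point of the method.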
NASA Technical Reports Server (NTRS)
Sauer, Carl G., Jr.
1989-01-01
MIDAS, a patched-conic trajectory optimization program developed to investigate a wide variety of complex ballistic heliocentric transfer trajectories, is described. MIDAS can optimize trajectory event times such as departure date, arrival date, and intermediate planetary flyby dates, and is able to both add and delete deep space maneuvers when dictated by the optimization process. Both powered and unpowered flyby or gravity-assist trajectories of intermediate bodies can be handled, and capability is included to optimize trajectories having a rendezvous with an intermediate body, such as for a sample return mission. The optimization process can also constrain launch energy and launch vehicle parking orbit parameters.
NASA Astrophysics Data System (ADS)
Walton, Karl; Blunt, Liam; Fleming, Leigh
2015-09-01
Mass finishing is amongst the most widely used finishing processes in modern manufacturing, in applications from deburring to edge radiusing and polishing. Processing objectives are varied, ranging from the cosmetic to the functionally critical. One such critical application is the hydraulically smooth polishing of aero engine component gas-washed surfaces. In this, and many other applications, the drive to improve process control and finish tolerance is ever present. Considering its widespread use, mass finishing has seen limited research activity, particularly with respect to surface characterization. The objectives of the current paper are to: characterise the mass finished stratified surface and its development process using areal surface parameters; provide guidance on the optimal parameters and sampling method to characterise this surface type for a given application; and detail the spatial variation in surface topography due to coupon edge shadowing. Blasted and peened square plate coupons in titanium alloy are wet (vibro) mass finished iteratively with increasing duration. Measurement fields are precisely relocated between iterations by fixturing and an image superimposition alignment technique. Surface topography development is detailed with ‘log of process duration’ plots of the ‘areal parameters for scale-limited stratified functional surfaces’ (the Sk family). Characteristic features of the Smr2 plot are seen to map out the processing of peak, core and dale regions in turn. These surface process regions also become apparent in the ‘log of process duration’ plot for Sq, where lower core and dale regions are well modelled by logarithmic functions. Surface finish (Ra or Sa) with mass finishing duration is currently predicted with an exponential model. This model is shown to be limited for the current surface type at a critical range of surface finishes.
Statistical analysis provides a group of areal parameters, including Vvc, Sq, and Sdq, that show optimal discrimination for a specific range of surface finish outcomes. As a consequence of edge shadowing, surface segregation is suggested for characterization purposes.
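The logarithmic Sq trend noted for the core and dale regions amounts to fitting Sq = a·ln(t) + b against process duration; the duration/Sq pairs below are synthetic, generated from an assumed trend rather than measured.

```python
import math

def fit_log_model(durations, sq_values):
    """Least-squares fit of Sq = a*ln(t) + b, the logarithmic trend reported
    for the core and dale regions of the mass-finished surface."""
    xs = [math.log(t) for t in durations]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(sq_values) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, sq_values))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Synthetic (duration in minutes, Sq in um) pairs from an assumed trend.
t = [15, 30, 60, 120, 240]
sq = [5.0 - 0.8 * math.log(ti) for ti in t]
a, b = fit_log_model(t, sq)
print(round(a, 3), round(b, 3))  # -0.8 5.0 (recovers the generating trend)
```

A negative slope a expresses the surface smoothing with each doubling of process duration, which is why the paper plots the Sk parameters against log duration.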
X-31 aerodynamic characteristics determined from flight data
NASA Technical Reports Server (NTRS)
Kokolios, Alex
1993-01-01
The lateral aerodynamic characteristics of the X-31 were determined at angles of attack ranging from 20 to 45 deg. Estimates of the lateral stability and control parameters were obtained by applying two parameter estimation techniques, linear regression and the extended Kalman filter, to flight test data. An attempt to apply maximum likelihood to extract parameters from the flight data was also made but failed for the reasons presented. An overview of the System Identification process is given. The overview includes a listing of the more important properties of all three estimation techniques that were applied to the data. A comparison is given of results obtained from flight test data and wind tunnel data for four important lateral parameters. Finally, future research to be conducted in this area is discussed.
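The linear-regression technique reduces to least squares on a linearized lateral moment equation; the derivative values and signal statistics below are invented for illustration, not X-31 results.

```python
import numpy as np

rng = np.random.default_rng(7)

# Linear-regression estimation of lateral derivatives from simulated flight data,
# using a rolling-moment equation of the form
#   Cl = Cl_beta*beta + Cl_p*p_hat + Cl_da*da
# All numbers are illustrative, not X-31 values.
n = 300
beta = rng.normal(0.0, 0.05, n)    # sideslip angle (rad)
p_hat = rng.normal(0.0, 0.1, n)    # nondimensional roll rate
da = rng.normal(0.0, 0.2, n)       # aileron deflection (rad)

true = np.array([-0.08, -0.35, 0.12])          # Cl_beta, Cl_p, Cl_da
X = np.column_stack([beta, p_hat, da])
cl = X @ true + rng.normal(0.0, 1e-4, n)       # "measured" coefficient + noise

est, *_ = np.linalg.lstsq(X, cl, rcond=None)   # least-squares derivative estimates
print(np.round(est, 3))
```

With low measurement noise the regression recovers the generating derivatives closely; at high angles of attack the real difficulty is that the equations are no longer well approximated as linear, which is part of why maximum likelihood was attempted.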
Impact of various operating modes on performance and emission parameters of small heat source
NASA Astrophysics Data System (ADS)
Vician, Peter; Holubčík, Michal; Palacka, Matej; Jandačka, Jozef
2016-06-01
This paper deals with the measurement of performance and emission parameters of a small heat source for the combustion of biomass in each of its operating modes. A pellet boiler with an output of 18 kW was used as the heat source. The work includes the design of an experimental device for measuring the impact of changes in air supply, and a method for controlling the power and emission parameters of heat sources for the combustion of woody biomass. The work describes the main factors that affect the combustion process and analyzes the measurements of emissions at the heat source. The results of the experiment give the values of performance and emission parameters for the different operating modes of the boiler, which serve as a decisive factor in choosing the appropriate mode.
Ferronickel Preparation from Nickeliferous Laterite by Rotary Kiln-Electric Furnace Process
NASA Astrophysics Data System (ADS)
Li, Guanghui; Jia, Hao; Luo, Jun; Peng, Zhiwei; Zhang, Yuanbo; Jiang, Tao
Nickel is an important strategic metal, which is mainly used for stainless steel production. In recent years, ferronickel has been used as a substitute for electrolytic nickel to reduce the cost of stainless steel production. Rotary kiln-electric furnace (RKEF) smelting is currently the worldwide mainstream process for ferronickel production from nickeliferous laterite ore, in spite of its high power consumption. In this study, aiming to provide some meaningful guidance for ferronickel production by RKEF smelting, reductive roasting followed by a smelting process was carried out. The conditions, including reducing parameters (roasting temperature and time) and smelting parameters (coke dosage, CaO dosage, melting temperature and time), were ascertained. The metal recovery ratios, as well as the Ni, Fe, S and P content of the ferronickel, were considered. The results showed that a ferronickel containing 10.32 wt.% Ni was obtained from a laterite with 1.85 wt.% Ni, with a nickel recovery ratio of about 99%.
NASA Astrophysics Data System (ADS)
Wojs, M. K.; Orliński, P.; Kamela, W.; Kruczyński, P.
2016-09-01
The article presents the results of empirical research on the impact of ozone dissolved in a fuel-water emulsion on the combustion process and the concentration of toxic substances in a CI engine. The effect of the presence of ozone in the emulsion on the main engine characteristics (power, torque, fuel consumption) and on selected parameters that characterize the combustion process (pressure and temperature levels in the combustion chamber, combustion delay period, heat release rate, fuel burn rate) is shown. The change in the concentration of toxic components in the exhaust gases when the engine is fueled with the ozonized emulsion was also identified. The empirical research and its analysis showed significant differences in the combustion process when the fuel-water emulsion containing ozone was used. These differences include increased engine power and efficiency, accompanied by a reduction in combustion delay time, and beneficial effects of ozone on HC, PM, CO and NOx emissions.
Su-Huan, Kow; Fahmi, Muhammad Ridwan; Abidin, Che Zulzikrami Azner; Soon-An, Ong
2016-11-01
Advanced oxidation processes (AOPs) are of special interest in treating landfill leachate, as they are the most promising procedures for degrading recalcitrant compounds and improving the biodegradability of wastewater. This paper aims to refresh the information base on AOPs and to identify the research gaps of AOPs in landfill leachate treatment. A brief overview of the mechanisms involved in AOPs, including ozone-based, hydrogen peroxide-based and persulfate-based AOPs, is presented, and the parameters affecting AOPs are elaborated. In particular, the advancement of AOPs in landfill leachate treatment is compared and discussed. Landfill leachate characterization prior to method selection, and method optimization prior to treatment, are necessary, as the performance and practicability of AOPs are influenced by leachate matrices and treatment cost. More studies concerning the scavenging effects of leachate matrices towards AOPs, as well as persulfate-based AOPs in landfill leachate treatment, are needed in the future.
Cost of ownership for inspection equipment
NASA Astrophysics Data System (ADS)
Dance, Daren L.; Bryson, Phil
1993-08-01
Cost of Ownership (CoO) models are increasingly a part of the semiconductor equipment evaluation and selection process. These models enable semiconductor manufacturers and equipment suppliers to quantify a system in terms of dollars per wafer. Because of the complex nature of the semiconductor manufacturing process, there are several key attributes that must be considered in order to accurately reflect the true 'cost of ownership'. While most CoO work to date has been applied to production equipment, the need to understand cost of ownership for inspection and metrology equipment presents unique challenges. Critical parameters such as detection sensitivity as a function of size and type of defect are not included in current CoO models yet are, without question, major factors in the technical evaluation process and life-cycle cost. This paper illustrates the relationship between these parameters, as components of the alpha and beta risk, and cost of ownership.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hrubesh, L.; McGann, T. W.
This project was established as a three-year collaboration to produce and characterize silica aerogels prepared by a Rapid Supercritical Extraction (RSCE) process to meet BNA, Inc. application requirements. The objectives of this project were to study the parameters necessary to produce optimized aerogel parts with narrowly specified properties and to establish the range and limits of the process for producing such aerogels. The project also included development of new aerogel materials useful for high temperature applications. The results of the project were expected to set the conditions necessary to produce quantities of aerogels having particular specifications such as size, shape, density, and mechanical strength. BNA, Inc. terminated the project on April 7, 1999, 10 months prior to the anticipated completion date, due to termination of corporate funding for the project. The technical accomplishments achieved are outlined in Paragraph C below.
Combined micromechanical and fabrication process optimization for metal-matrix composites
NASA Technical Reports Server (NTRS)
Morel, M.; Saravanos, D. A.; Chamis, C. C.
1991-01-01
A method is presented to minimize the residual matrix stresses in metal matrix composites. Fabrication parameters such as temperature and consolidation pressure are optimized concurrently with the characteristics (i.e., modulus, coefficient of thermal expansion, strength, and interphase thickness) of a fiber-matrix interphase. By including the interphase properties in the fabrication process, lower residual stresses are achievable. Results for an ultra-high modulus graphite (P100)/copper composite show a reduction of 21 percent for the maximum matrix microstress when optimizing the fabrication process alone. Concurrent optimization of the fabrication process and interphase properties show a 41 percent decrease in the maximum microstress. Therefore, this optimization method demonstrates the capability of reducing residual microstresses by altering the temperature and consolidation pressure histories and tailoring the interphase properties for an improved composite material. In addition, the results indicate that the consolidation pressures are the most important fabrication parameters, and the coefficient of thermal expansion is the most critical interphase property.
Cutting performance orthogonal test of single plane puncture biopsy needle based on puncture force
NASA Astrophysics Data System (ADS)
Xu, Yingqiang; Zhang, Qinhe; Liu, Guowei
2017-04-01
Needle biopsy is a method of extracting cells from a patient's body with a needle for pathological examination of the tissue. Many factors affect the cutting process of soft tissue, including the geometry of the biopsy needle, the mechanical properties of the soft tissue, the parameters of the puncture process, and the interactions between them. This paper presents an orthogonal experiment on the main cutting parameters of a single plane puncture biopsy needle, and obtains the cutting force curve of the needle by studying the influence of its inclination angle, diameter and velocity on the puncture force. A stage analysis of the cutting process during biopsy needle puncture was made to determine the main factors influencing the puncture force, which provides some theoretical support for the design of new types of puncture biopsy needles and the operation of puncture biopsy.
VESGEN Software for Mapping and Quantification of Vascular Regulators
NASA Technical Reports Server (NTRS)
Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.
2012-01-01
VESsel GENeration (VESGEN) Analysis is automated software that maps and quantifies the effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses the image-processing concepts of 8-neighbor pixel connectivity, skeleton, and distance map to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN maps typically 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching tested in human clinical or laboratory animal experimental studies are quantified by comparing vascular parameters with control groups. VESGEN provides a user interface to both guide and allow control over the user's vascular analysis process. An option is provided to select a morphological tissue type of vascular trees, networks or tree-network composites, which determines the general collections of algorithms, intermediate images, and output images and measurements that will be produced.
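As a rough illustration of the 8-neighbor pixel connectivity concept mentioned above (a sketch only, not VESGEN's actual ImageJ implementation), a binary image can be partitioned into 8-connected components with a simple breadth-first search:

```python
from collections import deque

def components_8(grid):
    """Count 8-connected components in a binary image (list of 0/1 rows)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                count += 1                      # new component found
                seen[r][c] = True
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    # visit all 8 neighbors (diagonals included)
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and grid[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
    return count

# The two diagonal pixels merge under 8-connectivity; the right-hand
# vertical pair forms a second component.
img = [[1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1]]
n_components = components_8(img)
```

Under 8-connectivity, diagonally touching pixels belong to the same structure, which matters when tracing thin skeletonized vessel branches.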
NASA Astrophysics Data System (ADS)
Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.
2017-12-01
Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both the zonation structure and the hydraulic properties of each zone), the level set method evolves the material interfaces through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases; this means that the inversion result depends on the initial guess field, and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define the initial guess fields, and the minimal total residual corresponding to each initial guess field is taken as the fitness function value in the GA. Because evaluation of the fitness function is expensive, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of effectively identifying arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones.
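The outer loop of such a hybrid scheme can be sketched as follows. The tiny real-coded GA below minimizes a stand-in quadratic objective; in the paper's setting, the fitness of an initial-guess parameter vector would instead be the minimal residual reached by a full level-set inversion started from that guess. All operators and settings here are illustrative assumptions, not the authors' implementation:

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, gens=60, seed=0):
    """Tiny real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation. fitness(x) is minimized."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]            # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)         # two parents from the elite
            child = []
            for d in range(dim):
                x = a[d] if rng.random() < 0.5 else b[d]   # uniform crossover
                x += rng.gauss(0.0, 0.05 * (bounds[d][1] - bounds[d][0]))  # mutation
                child.append(min(max(x, bounds[d][0]), bounds[d][1]))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Stand-in objective with known optimum at (2, -1); in the paper this would
# be the minimal level-set residual reached from the candidate initial guess.
best = genetic_minimize(lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2,
                        bounds=[(-5.0, 5.0), (-5.0, 5.0)], seed=3)
```

Elitism guarantees the best candidate found so far is never lost between generations, which is what makes the expensive fitness evaluations worthwhile.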
Display device for indicating the value of a parameter in a process plant
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1993-01-01
An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.
The stochastic runoff-runon process: Extending its analysis to a finite hillslope
NASA Astrophysics Data System (ADS)
Jones, O. D.; Lane, P. N. J.; Sheridan, G. J.
2016-10-01
The stochastic runoff-runon process models the volume of infiltration excess runoff from a hillslope via the overland flow path. Spatial variability is represented in the model by the spatial distribution of rainfall and infiltration, and their "correlation scale", that is, the scale at which the spatial correlation of rainfall and infiltration becomes negligible. Notably, the process can produce runoff even when the mean rainfall rate is less than the mean infiltration rate, and it displays a gradual increase in net runoff as the rainfall rate increases. In this paper we present a number of contributions to the analysis of the stochastic runoff-runon process. Firstly we illustrate the suitability of the process by fitting it to experimental data. Next we extend previous asymptotic analyses to include the cases where the mean rainfall rate equals or exceeds the mean infiltration rate, and then use Monte Carlo simulation to explore the range of parameters for which the asymptotic limit gives a good approximation on finite hillslopes. Finally we use this to obtain an equation for the mean net runoff, consistent with our asymptotic results but providing an excellent approximation for finite hillslopes. Our function uses a single parameter to capture spatial variability, and varying this parameter gives us a family of curves which interpolate between known upper and lower bounds for the mean net runoff.
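A minimal Monte Carlo sketch of the runoff-runon idea, assuming (purely for illustration) exponentially distributed rainfall and infiltration on a 1-D chain of cells; the paper's spatial correlation structure is omitted:

```python
import random

def net_runoff(n_cells, mean_rain, mean_infil, seed=0):
    """Monte Carlo sketch of the runoff-runon cascade on a 1-D hillslope.

    Each cell receives exponentially distributed rainfall r and has an
    exponentially distributed infiltration capacity f; the excess
    q_next = max(0, q + r - f) runs on to the next cell downslope.
    """
    rng = random.Random(seed)
    q = 0.0
    for _ in range(n_cells):
        r = rng.expovariate(1.0 / mean_rain)    # rainfall on this cell
        f = rng.expovariate(1.0 / mean_infil)   # infiltration capacity
        q = max(0.0, q + r - f)                 # runon + excess -> runoff
    return q

# Fraction of realizations producing runoff at the bottom of the slope,
# even though the mean rainfall rate is below the mean infiltration rate.
runs = [net_runoff(200, mean_rain=0.8, mean_infil=1.0, seed=s) for s in range(500)]
frac_wet = sum(q > 0 for q in runs) / len(runs)
```

Spatial variability alone produces runoff in a substantial fraction of realizations despite the subcritical mean rainfall, which is the behavior the abstract highlights.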
Forecasting financial asset processes: stochastic dynamics via learning neural networks.
Giebel, S; Rainer, M
2010-01-01
Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in the process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The backpropagation used in training the weights is limited to a certain memory length (in the examples we consider, 10 previous business days), similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
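The notion of time-dependent parameters re-learned over a limited memory window can be illustrated, far more crudely than with a neural network, by a moving-window re-estimation of drift and volatility. The 10-day window mirrors the memory length mentioned above; everything else is a stand-in:

```python
import statistics

def rolling_params(returns, window=10):
    """Re-estimate (drift, volatility) of a return series over a sliding
    window -- a crude stand-in for parameters learned by a neural network."""
    out = []
    for i in range(window, len(returns) + 1):
        w = returns[i - window:i]
        out.append((statistics.mean(w), statistics.stdev(w)))
    return out

# Alternating daily returns; each 10-day window sees five of each value,
# so the estimated drift is 0.05 while the volatility stays positive.
series = [0.0, 0.1] * 6
params = rolling_params(series, window=10)
```

A learned update rule replaces these fixed sample statistics in the paper's method, but the windowing idea (discarding information older than the memory length) is the same.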
Vapor hydrogen peroxide as alternative to dry heat microbial reduction
NASA Astrophysics Data System (ADS)
Chung, S.; Kern, R.; Koukol, R.; Barengoltz, J.; Cash, H.
2008-09-01
The Jet Propulsion Laboratory (JPL), in conjunction with the NASA Planetary Protection Officer, has selected the vapor phase hydrogen peroxide (VHP) sterilization process for continued development as a NASA-approved sterilization technique for spacecraft subsystems and systems. The goal was to include this technique, with an appropriate specification, in NASA Procedural Requirements 8020.12 as a low-temperature complement to the dry heat sterilization process. The VHP process is widely used by the medical industry to sterilize surgical instruments and biomedical devices, but high doses of VHP may degrade the performance of flight hardware or compromise material compatibility. The goal of this study was to determine the minimum VHP process conditions that achieve microbial reduction levels acceptable for planetary protection. Experiments were conducted by the STERIS Corporation, under contract to JPL, to evaluate the effectiveness of vapor hydrogen peroxide for the inactivation of the standard spore challenge, Geobacillus stearothermophilus. VHP process parameters were determined that provide significant reductions in spore viability while allowing survival of sufficient spores for statistically significant enumeration. In addition to the obvious process parameters of interest (hydrogen peroxide concentration, number of injection cycles, and exposure duration), the investigation also considered the possible effect on lethality of environmental parameters (temperature, absolute humidity, and material substrate). This study delineated a range of test sterilizer process conditions: VHP concentration, process duration, a process temperature range for which the worst-case D-value may be imposed, a process humidity range for which the worst-case D-value may be imposed, and the dependence on selected spacecraft material substrates. The derivation of D-values from the lethality data permitted conservative planetary protection recommendations.
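The D-value used here is the standard decimal reduction time: the exposure needed for a tenfold (1 log) drop in viable spores, so survivors follow N(t) = N0 · 10^(-t/D). A one-line sketch of this standard kinetics (the numbers below are illustrative, not the study's data):

```python
def surviving_spores(n0, t, d_value):
    """First-order inactivation kinetics: one D-value of exposure gives a
    tenfold (1 log) reduction in viable spores."""
    return n0 * 10.0 ** (-t / d_value)

# Starting from 1e6 spores, an exposure of 4 D-values leaves ~100 survivors,
# still enough for statistically meaningful enumeration.
survivors = surviving_spores(1e6, t=4.0, d_value=1.0)
```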
Alizadeh Ashrafi, Sina; Miller, Peter W; Wandro, Kevin M; Kim, Dave
2016-10-13
Hole quality plays a crucial role in the production of close-tolerance holes utilized in aircraft assembly. Through drilling experiments of carbon fiber-reinforced plastic composites (CFRP), this study investigates the impact of varying drilling feed and speed conditions on fiber pull-out geometries and resulting hole quality parameters. For this study, hole quality parameters include hole size variance, hole roundness, and surface roughness. Fiber pull-out geometries are quantified by using scanning electron microscope (SEM) images of the mechanically-sectioned CFRP-machined holes, to measure pull-out length and depth. Fiber pull-out geometries and the hole quality parameter results are dependent on the drilling feed and spindle speed condition, which determines the forces and undeformed chip thickness during the process. Fiber pull-out geometries influence surface roughness parameters from a surface profilometer, while their effect on other hole quality parameters obtained from a coordinate measuring machine is minimal.
A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2015-01-01
A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters, needing adjustment by the analyst, are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as improved process control, reduced process costs and better product quality. This work proposes a solid-state fermentation distributed-parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimal parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing it to be an important procedure. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
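A minimal sketch of what an identifiability check can look like: finite-difference sensitivities of the model outputs are computed for each parameter, and a parameter whose sensitivity is numerically zero cannot be estimated from the data. The three-parameter toy model is purely illustrative, not the paper's seven-ODE fermentation model:

```python
import math

def sensitivities(model, params, h=1e-6):
    """Finite-difference sensitivity of each model output to each parameter."""
    base = model(params)
    cols = []
    for j in range(len(params)):
        pert = list(params)
        pert[j] += h                       # perturb one parameter at a time
        out = model(pert)
        cols.append([(o - b) / h for o, b in zip(out, base)])
    return cols

# Toy model: y(t) = a * exp(-b * t); parameter c never enters the output,
# so it is structurally nonidentifiable (a stand-in, not the paper's model).
def toy_model(p):
    a, b, c = p
    return [a * math.exp(-b * t) for t in (0.0, 0.5, 1.0, 2.0)]

cols = sensitivities(toy_model, [1.0, 0.3, 5.0])
norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
identifiable = [n > 1e-8 for n in norms]
```

Real identifiability analyses also flag parameters whose sensitivity columns are nearly collinear, not just zero, but the zero-sensitivity case shows why removing such parameters from the estimation improves the fit of the rest.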
High Performance Input/Output for Parallel Computer Systems
NASA Technical Reports Server (NTRS)
Ligon, W. B.
1996-01-01
The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2, and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). The processes featured in DGVMs involve parameters that are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC for the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is of the same order of magnitude as the parameter-induced uncertainty and in some cases could potentially even be offset with an appropriate parameter choice.
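Latin hypercube sampling can be sketched in a few lines: each parameter's range is divided into as many equal strata as ensemble members, one point is drawn per stratum, and the strata are shuffled independently per parameter. The parameter bounds below are hypothetical placeholders, not LPX-Bern's:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each parameter's range is split into
    n_samples equal strata, and each stratum is used exactly once."""
    rng = random.Random(seed)
    ensemble = [[0.0] * len(bounds) for _ in range(n_samples)]
    for j, (lo, hi) in enumerate(bounds):
        # one random point per stratum, shuffled across ensemble members
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for i in range(n_samples):
            ensemble[i][j] = lo + strata[i] * (hi - lo)
    return ensemble

# Hypothetical parameter ranges (illustrative only, not the model's)
bounds = [(0.01, 0.1),    # e.g. a turnover rate
          (200.0, 400.0), # e.g. a temperature threshold
          (0.1, 0.9)]     # e.g. an allocation fraction
members = latin_hypercube(1000, bounds, seed=42)
```

Unlike plain random sampling, LHS guarantees every marginal range is covered evenly, which is why a 1000-member ensemble can probe a high-dimensional parameter space efficiently.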
A Modified MinMax k-Means Algorithm Based on PSO.
Wang, Xiaoyan; Bai, Yanping
The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in its execution. Since different parameter values yield different clustering errors, it is crucial to choose appropriate values. In the original algorithm, a practical framework is given that extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that once the maximum exponent parameter has been set, the program can reach the lowest intra-cluster error. However, our experiments show that this is not always correct. In this paper, we modified the MinMax k-means algorithm with PSO to determine the parameter values that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several popular data sets in different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.
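A compact PSO sketch shows the outer optimization loop such a method could use; the quadratic stand-in objective below would, in the paper's setting, be replaced by running MinMax k-means with the candidate exponent and memory parameters and returning the resulting clustering error. Coefficients and bounds are illustrative assumptions:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization: particles track personal and
    global bests and update velocities toward both."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia, cognitive, social weights
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective: in the paper's setting this would run MinMax k-means
# with the candidate exponent/memory parameters and return the clustering error.
best, err = pso(lambda p: (p[0] - 0.5) ** 2 + (p[1] - 0.1) ** 2,
                bounds=[(0.0, 3.0), (0.0, 1.0)], seed=1)
```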
Gas Atomization of Molten Metal: Part II. Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Lebdeh, Taher M.; Leon, Genaro Perez-de; Hamoush, Sameer A.
A numerical model was derived to obtain results for two alloys during the Gas Atomization (GA) method. The model equations and governing equations were implemented through the application of Part I data. Aspects such as heat transfer, fluid mechanics, thermodynamics and laws of motion were taken into account in the formulation of equations that consider gas dynamics, droplet dynamics and energy balance or conservation. The inputs of the model include processing parameters such as the size of the droplets, characteristics of the metal alloy, initial temperature of the molten metal, properties and fractions of the atomization gas, and the gas pressure. The outputs include velocity and thermal profiles of the droplet and gas. Velocity profiles illustrate the velocity of both droplet and gas, while thermal profiles illustrate the cooling rate and the rate of temperature change of the droplets. The alloys are gamma-Titanium Aluminide (γ-TiAl) and Al-3003-O. These alloys were selected due to the vast number of applications both can have in several industries. Certain processing parameters were held constant, while others were altered. Furthermore, the main focus of this study was to gain insight into which optimal parameters should be utilized within the GA method for these alloys and to provide insight into their behavior.
NASA Astrophysics Data System (ADS)
Ju, Weimin; Gao, Ping; Wang, Jun; Li, Xianfeng; Chen, Shu
2008-10-01
Soil water content (SWC) is an important factor affecting photosynthesis, growth, and final yields of crops. Information on SWC is important for mitigating the reduction of crop yields caused by drought through proper agricultural water management. A variety of methodologies have been developed to estimate SWC at local and regional scales, including field sampling, remote sensing monitoring and model simulations. The reliability of regional SWC simulation depends largely on the accuracy of spatial input datasets, including vegetation parameters, soil and meteorological data. Remote sensing has proved to be an effective technique for controlling uncertainties in vegetation parameters. In this study, the vegetation parameters (leaf area index and land cover type) derived from the Moderate Resolution Imaging Spectrometer (MODIS) were assimilated into a process-based ecosystem model, BEPS, for simulating the variations of SWC in croplands of Jiangsu province, China. Validation shows that the BEPS model is able to capture 81% and 83% of the across-site variations of SWC at 10 and 20 cm depths during the period from September to December 2006, when a serious autumn drought occurred. The simulated SWC responded well to rainfall events at the regional scale, demonstrating the usefulness of our methodology for SWC simulation and practical agricultural water management at large scales.
Matero, Sanni; van Den Berg, Frans; Poutiainen, Sami; Rantanen, Jukka; Pajander, Jari
2013-05-01
The manufacturing of tablets involves many unit operations with multivariate and complex characteristics. The interactions between material characteristics and process-related variation are presently not comprehensively analyzed, due to univariate detection methods. As a consequence, the current best practice for controlling a typical process is to not allow process-related factors to vary, i.e., to lock the production parameters. The underlying problem of insufficient process understanding remains: variation in process and material properties is an intrinsic feature and cannot be compensated for with constant process parameters. Instead, a more comprehensive approach based on multivariate tools for investigating processes should be applied. In the pharmaceutical field these methods are referred to as Process Analytical Technology (PAT) tools, which aim to achieve thorough understanding and control of the production process. PAT provides the framework for measurement as well as data analysis and control for in-depth understanding, leading to more consistent and safer drug products with fewer batch rejections. In the optimal situation, applying these techniques would make destructive end-product testing avoidable. In this paper the most prominent multivariate data analysis and measurement tools within tablet manufacturing and basic research operations are reviewed. Copyright © 2013 Wiley Periodicals, Inc.
Waechter, D.A.; Wolf, M.A.; Umbarger, C.J.
1981-11-03
A hand-holdable, battery-operated, microprocessor-based spectrometer gun is described that includes a low-power matrix display and sufficient memory to permit both real-time observation and extended analysis of detected radiation pulses. Universality of the incorporated signal processing circuitry permits operation with various detectors having differing pulse detection and sensitivity parameters.
High-speed blanking of copper alloy sheets: Material modeling and simulation
NASA Astrophysics Data System (ADS)
Husson, Ch.; Ahzi, S.; Daridon, L.
2006-08-01
To optimize the blanking process of thin copper sheets (≈1 mm thick), it is necessary to study the influence of process parameters such as the punch-die clearance and the wear of the punch and the die. For high stroke rates, the strain rate developed in the work-piece can be very high; therefore, the material model must include dynamic effects. For the modeling part, we propose an elastic-viscoplastic material model combined with a non-linear isotropic damage evolution law based on the theory of continuum damage mechanics. Our proposed model is valid for a wide range of strain rates and temperatures. Finite element simulations of the blanking process, using the commercial code ABAQUS/Explicit, are then conducted and the results are compared to the experimental investigations. The predicted cut edge of the blanked part and the punch force-displacement curves are discussed as functions of the process parameters. The evolution of the shape errors (roll-over depth, fracture depth, shearing depth, and burr formation) as functions of the punch-die clearance, the punch and die wear, and the punch/die/blank-holder contact is presented. A discussion of the different stages of the blanking process as a function of the processing parameters is given. The predicted dependence of the blanking process on strain rate and temperature using our model is presented (for plasticity and damage). Comparison of our model results with the experimental ones shows good agreement.
NASA Astrophysics Data System (ADS)
Goodkin, N.; Tanzil, J.; Murty, S. A.; Ramos, R.; Pullen, J. D.
2016-12-01
The Maritime Continent (MC) is a region of highly complex oceanography, encompassing a majority of the Coral Triangle, the most important region for coral biodiversity and cover. Intricate coastal processes, including water-body mixing resulting from reversing monsoon winds and internal waves, expose corals to a wide variety of physical conditions. However, the pressures of climate change, overfishing, ocean acidification, and coastal development, to name a few, are significant in this region and threaten to challenge reefs over the next several decades. In order to predict and study how to facilitate reef recovery in the MC region, it is crucial to understand the environmental parameters for coral success. In this presentation, we will provide an overview of oceanographic processes on the Maritime Continent that drive seasonal variability in the waters of the MC, including changes to sea surface temperature, salinity, pH, turbidity, productivity and nutrients. Each of these parameters is known to have impacts on calcification rates and thus coral reef formation. Environmental conditions and currents can combine to facilitate larval dispersion or to exacerbate coral disease and predation, including crown-of-thorns outbreaks. Internal waves may protect against coral bleaching by lowering temperatures through the delivery of deeper water. Drawing on previously published and unpublished results, we will evaluate the parameters that may be impacting reef growth rates, biodiversity and resilience in a changing world, in an effort to help plan for key measurements in the Year of the MC.
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e., parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero values may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations.
Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. It can therefore be concluded that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
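The dummy-parameter screening idea can be illustrated with a small variance-based example. This is a generic sketch, not the SWAT/PAWN setup from the study: it estimates first-order Sobol indices for the standard Ishigami test function with a Homma-Saltelli-type estimator, appends a dummy input that the model never reads, and uses the dummy's (noise-floor) index as the screening threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Ishigami test function on three inputs; a fourth (dummy) column
    # is sampled but never used, so its true sensitivity index is zero.
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

def first_order_sobol(f, d, n=20000):
    """Homma-Saltelli estimates of first-order Sobol indices on [-pi, pi]^d."""
    a = rng.uniform(-np.pi, np.pi, (n, d))
    b = rng.uniform(-np.pi, np.pi, (n, d))
    fa, fb = f(a), f(b)
    var = np.var(np.concatenate([fa, fb]))
    s = np.empty(d)
    for i in range(d):
        b_ai = b.copy()
        b_ai[:, i] = a[:, i]             # B with column i taken from A
        s[i] = (np.mean(fa * f(b_ai)) - fa.mean() * fb.mean()) / var
    return s

s = first_order_sobol(model, d=4)        # index 3 is the dummy parameter
# Parameters whose index exceeds the dummy's are flagged as influential.
influential = [i for i in range(3) if s[i] > s[3]]
```

The dummy's index is nonzero only through sampling noise, so it directly quantifies the numerical error floor described in the abstract.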
Experiments on Adaptive Self-Tuning of Seismic Signal Detector Parameters
NASA Astrophysics Data System (ADS)
Knox, H. A.; Draelos, T.; Young, C. J.; Chael, E. P.; Peterson, M. G.; Lawry, B.; Phillips-Alonge, K. E.; Balch, R. S.; Ziegler, A.
2016-12-01
Scientific applications, including underground nuclear test monitoring and microseismic monitoring, can benefit enormously from data-driven dynamic algorithms for tuning seismic and infrasound signal detection parameters, since continuous streams are producing waveform archives on the order of 1 TB per month. Tuning is a challenge because there are a large number of data processing parameters that interact in complex ways, and because the underlying population of true signal detections is generally unknown. The largely manual process of identifying effective parameters, often performed only over a subset of stations and a short time period, is painstaking and does not guarantee that the resulting controls are the optimal configuration settings. We present improvements to an Adaptive Self-Tuning algorithm for continuously adjusting detection parameters based on consistency with neighboring sensors. Results are shown for 1) data from a very dense network (~120 stations, 10 km radius) deployed during 2008 on Erebus Volcano, Antarctica, and 2) data from a continuous downhole seismic array in the Farnsworth Field, an oil field in northern Texas that hosts an ongoing carbon capture, utilization, and storage project. Performance is assessed in terms of missed detections and false detections relative to human analyst detections, simulated waveforms where ground-truth detections exist, and visual inspection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided by a shore-based observer or by remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R² = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave-height dependence consistent with results of previous studies, but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
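The inverse estimation idea above can be illustrated with a toy discretized Bayes update. The forward transform, discretization, and observation error below are all assumptions for illustration, not the surf-zone model from the paper: given an onshore wave-height observation, a posterior over discretized offshore heights is computed by Bayes' rule.

```python
import numpy as np

# Discretized offshore wave heights [m] and a uniform prior over them.
h_off = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
prior = np.full(h_off.size, 1.0 / h_off.size)

def forward(h):
    # Toy (assumed) surf-zone transform: shoaling/breaking reduces height.
    return 0.6 * h + 0.1

def likelihood(obs, h, sigma=0.15):
    # Gaussian observation error around the forward prediction.
    return np.exp(-0.5 * ((obs - forward(h)) / sigma) ** 2)

def invert(obs):
    """Posterior over offshore heights given one onshore observation."""
    post = prior * likelihood(obs, h_off)
    return post / post.sum()

post = invert(obs=1.0)            # onshore observation of 1.0 m
best = h_off[np.argmax(post)]     # MAP estimate of offshore height
```

The posterior spread plays the role of the uncertainty estimates discussed above; a full Bayesian network would simply chain several such conditional tables together.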
NASA Astrophysics Data System (ADS)
Xiao, D.; Shi, Y.; Li, L.
2016-12-01
Field measurements are important for understanding the fluxes of water, energy, sediment, and solutes in the Critical Zone, but are expensive in time, money, and labor. This study aims to assess the model predictability of hydrological processes in a watershed using information from another intensively measured watershed. We compare two watersheds of different lithology using national datasets, field measurements, and a physics-based model, Flux-PIHM. We focus on two monolithological, forested watersheds under the same climate in the Shale Hills Susquehanna CZO in central Pennsylvania: the shale-based Shale Hills (SSH, 0.08 km2) and the sandstone-based Garner Run (GR, 1.34 km2). We first tested the transferability of calibration coefficients from SSH to GR. We found that without any calibration the model can successfully predict seasonal average soil moisture and discharge, which shows the advantage of a physics-based model; however, it cannot precisely capture some peaks or the runoff in summer. The model reproduces the GR field data better after calibrating the soil hydrology parameters. In particular, the percentage of sand turns out to be a critical parameter in reproducing the data. With sandstone being the dominant lithology, GR has a much higher sand percentage than SSH (48.02% vs. 29.01%), leading to higher hydraulic conductivity, lower overall water storage capacity, and in general lower soil moisture. This is consistent with area-averaged soil moisture observations using the cosmic-ray soil moisture observing system (COSMOS) at the two sites. This work indicates that some parameters, including evapotranspiration parameters, are transferable due to similar climatic and land cover conditions. However, the key parameters that control soil moisture, including the sand percentage, need to be recalibrated, reflecting the key role of soil hydrological properties.
Approaches to highly parameterized inversion: A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model the system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented following a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and the processing/grouping of observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
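The core of the Tikhonov regularization mentioned above can be sketched in a few lines. This is a generic least-squares illustration, not PEST itself: it solves the penalized normal equations for an underdetermined toy problem, where the regularization term pulls parameters toward preferred values, stabilizing an otherwise non-unique inversion.

```python
import numpy as np

def tikhonov_solve(J, d, alpha, x_pref=None):
    """Solve min ||J x - d||^2 + alpha^2 ||x - x_pref||^2.

    J: sensitivity (Jacobian) matrix, d: observations,
    alpha: regularization weight, x_pref: preferred parameter values
    (the Tikhonov 'prior'; zeros if omitted).
    """
    n = J.shape[1]
    if x_pref is None:
        x_pref = np.zeros(n)
    lhs = J.T @ J + alpha**2 * np.eye(n)   # regularized normal equations
    rhs = J.T @ d + alpha**2 * x_pref
    return np.linalg.solve(lhs, rhs)

# Ill-posed toy problem: 3 observations, 5 parameters (underdetermined)
rng = np.random.default_rng(1)
J = rng.standard_normal((3, 5))
x_true = np.array([1.0, 0.5, 0.0, -0.5, 1.5])
d = J @ x_true
x_hat = tikhonov_solve(J, d, alpha=0.1)
```

Larger `alpha` trades data fit for adherence to the preferred values; PEST's Tikhonov mode automates this trade-off against a target measurement objective function.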
A Standardized Approach to Topographic Data Processing and Workflow Management
NASA Astrophysics Data System (ADS)
Wheaton, J. M.; Bailey, P.; Glenn, N. F.; Hensleigh, J.; Hudak, A. T.; Shrestha, R.; Spaete, L.
2013-12-01
An ever-increasing list of options exists for collecting high resolution topographic data, including airborne LIDAR, terrestrial laser scanners, bathymetric SONAR and structure-from-motion. An equally rich, arguably overwhelming, variety of tools exists with which to organize, quality control, filter, analyze and summarize these data. However, scientists are often left to cobble together their analysis as a series of ad hoc steps, often using custom scripts and one-time processes that are poorly documented and rarely shared with the community. Even when literature-cited software tools are used, the input and output parameters differ from tool to tool. These parameters are rarely archived and the steps performed are lost, making the analysis virtually impossible to replicate precisely. What is missing is a coherent, robust framework for combining reliable, well-documented topographic data-processing steps into a workflow that can be repeated and even shared with others. We have taken several popular topographic data processing tools - including point cloud filtering and decimation as well as DEM differencing - and defined a common protocol for passing inputs and outputs between them. This presentation describes a free, public online portal that enables scientists to create custom workflows for processing topographic data using a number of popular topographic processing tools. Users provide the inputs required for each tool and the sequence in which they want to combine them. This information is then stored for future reuse (and optional sharing with others) before the user downloads a single package that contains all the input and output specifications together with the software tools themselves. The user then launches the included batch file that executes the workflow on their local computer against their topographic data. This ZCloudTools architecture helps standardize, automate and archive topographic data processing.
It also represents a forum for discovering and sharing effective topographic processing workflows.
NASA Astrophysics Data System (ADS)
Flores, J. C.
2015-12-01
For ancient civilizations, the shift from disorder to organized urban settlements is viewed as a phase-transition simile. The number of monumental constructions, assumed to be a signature of civilization processes, corresponds to the order parameter, and effective connectivity becomes related to the control parameter. Based on parameter estimations from archaeological and paleo-climatological data, this study analyzes the rise and fall of the ancient Caral civilization on the South Pacific coast during a period of small ENSO fluctuations (approximately 4500 BP). Other examples considered include civilizations on Easter Island and the Maya Lowlands. This work considers a typical nonlinear third-order evolution equation and numerical simulations.
Feng, Yan; Mitchison, Timothy J; Bender, Andreas; Young, Daniel W; Tallarico, John A
2009-07-01
Multi-parameter phenotypic profiling of small molecules provides important insights into their mechanisms of action, as well as a systems-level understanding of biological pathways and their responses to small molecule treatments. It therefore deserves more attention at an early step in the drug discovery pipeline. Here, we summarize the technologies that are currently in use for phenotypic profiling, including mRNA-, protein- and imaging-based multi-parameter profiling, in the drug discovery context. We think that an earlier integration of phenotypic profiling technologies, combined with effective experimental and in silico target identification approaches, can improve success rates of lead selection and optimization in the drug discovery process.
Farajzadeh, Manuchehr; Egbal, Mahbobeh Nik
2007-08-15
In this study, the MEDALUS model, along with GIS mapping techniques, is used to assess desertification hazard in a province of Iran. After creating a desertification database including 20 parameters, the first step consisted of preparing maps of the four MEDALUS indices: climate, soil, vegetation and land use. Since these parameters were originally developed for the Mediterranean region, the next step was the addition of other indicators such as groundwater and wind erosion. All of the layers, weighted by the environmental conditions present in the area, were then combined (following the same MEDALUS framework) to prepare a desertification map. The comparison of the two maps based on the original and modified MEDALUS models indicates that the addition of more regionally specific parameters into the model allows a more accurate representation of desertification processes across the Iyzad Khast plain. The major factors affecting desertification in the area are climate, wind erosion, poor land-quality management, vegetation degradation and the salinization of soil and water resources.
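The MEDALUS-style combination of quality layers is conventionally a geometric mean of the layer scores. The sketch below assumes that convention and uses purely illustrative scores (including hypothetical groundwater and wind-erosion layers), not values from the study.

```python
def sensitivity_index(quality_indices):
    """Geometric mean of quality-layer scores, as in the MEDALUS/ESA
    framework (each score conventionally ranges from 1 = best to 2 = worst).
    """
    n = len(quality_indices)
    prod = 1.0
    for q in quality_indices:
        prod *= q
    return prod ** (1.0 / n)

# Four standard MEDALUS layers plus two assumed extra indicators
# (groundwater and wind erosion); the scores are illustrative only.
layers = {"climate": 1.7, "soil": 1.4, "vegetation": 1.6,
          "management": 1.5, "groundwater": 1.8, "wind_erosion": 1.7}
dsi = sensitivity_index(list(layers.values()))
```

In a GIS setting the same formula is applied cell by cell across the raster layers, yielding the final desertification sensitivity map.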
ASRM process development in aqueous cleaning
NASA Technical Reports Server (NTRS)
Swisher, Bill
1992-01-01
Viewgraphs are included on process development in aqueous cleaning taking place at the Aerojet Advanced Solid Rocket Motor (ASRM) Division under a NASA Marshall Space Flight Center contract for design, development, test, and evaluation of the ASRM, including new production facilities. The ASRM will utilize aqueous cleaning in several manufacturing process steps to clean case segments, nozzle metal components, and igniter closures. ASRM manufacturing process development is underway, including agent selection, agent characterization, subscale process optimization, bonding verification, and scale-up validation. Process parameters are currently being tested for optimization using a Taguchi matrix, including agent concentration, cleaning solution temperature, agitation and immersion time, rinse water amount and temperature, and use/non-use of drying air. Based on the results of process development testing to date, several observations are offered: aqueous cleaning appears effective for steels and SermeTel-coated metals in ASRM processing; aqueous cleaning agents may stain and/or attack bare aluminum metals to various extents; aqueous cleaning appears unsuitable for thermal-sprayed aluminum-coated steel; aqueous cleaning appears to adequately remove a wide range of contaminants from flat metal surfaces, but supplementary assistance may be needed to remove clumps of tenacious contaminants embedded in holes, etc.; and hot rinse water appears to be beneficial in aiding the drying of bare steel and retarding the oxidation rate.
A note on physical mass and the thermodynamics of AdS-Kerr black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
McInnes, Brett; Ong, Yen Chin, E-mail: matmcinn@nus.edu.sg, E-mail: yenchin.ong@nordita.org
As with any black hole, asymptotically anti-de Sitter Kerr black holes are described by a small number of parameters, including a "mass parameter" M that reduces to the AdS-Schwarzschild mass in the limit of vanishing angular momentum. In sharp contrast to the asymptotically flat case, the horizon area of such a black hole increases with the angular momentum parameter a if one fixes M; this appears to mean that the Penrose process in this case would violate the Second Law of black hole thermodynamics. We show that the correct procedure is to fix not M but rather the "physical" mass E = M/(1 − a²/L²)²; this is motivated by the First Law. For then the horizon area decreases with a. We recommend that E always be used as the mass in physical processes: for example, in attempts to "over-spin" AdS-Kerr black holes.
Parametric analysis and temperature effect of deployable hinged shells using shape memory polymers
NASA Astrophysics Data System (ADS)
Tao, Ran; Yang, Qing-Sheng; He, Xiao-Qiao; Liew, Kim-Meow
2016-11-01
Shape memory polymers (SMPs) are a class of intelligent materials, defined by their capacity to store a temporary shape and recover an original shape. In this work, the shape memory effect of an SMP deployable hinged shell is simulated using a compiled user-defined material (UMAT) subroutine of ABAQUS. Variations of the bending moment and strain energy of the hinged shells with different temperatures and structural parameters during the loading process are given. The effects of the parameters and temperature on the nonlinear deformation process are emphasized. The entire thermodynamic cycle of the SMP deployable hinged shell includes loading at high temperature, load carrying with cooling, unloading at low temperature, and recovering the original shape with heating. The results show that the complicated thermo-mechanical deformation and shape memory effect of the SMP deployable hinge are influenced by the structural parameters and temperature. The potential for designing SMP smart hinged structures for practical applications is discussed.
Mohamadzadeh Shirazi, Hamed; Karimi-Sabet, Javad; Ghotbi, Cyrus
2017-09-01
Microalgae, a candidate for the production of biodiesel, possess a hard cell wall that prevents intracellular lipids from leaving the cells. Direct, or in situ, supercritical transesterification has the potential to break down the hard microalgal cell wall and convert the extracted lipids to biodiesel, which consequently reduces the total energy consumption. Response surface methodology combined with central composite design was applied to investigate process parameters including temperature, time, methanol-to-dry-algae ratio, hexane-to-dry-algae ratio, and moisture content. Thirty-two experiments were designed and performed in a batch reactor, and biodiesel efficiencies between 0.44% and 99.32% were obtained. A quadratic experimental model was fitted to the fatty acid methyl ester yields, and the significance of the parameters was evaluated using analysis of variance (ANOVA). Effects of single parameters and their interactions were also interpreted. In addition, the effect of the supercritical process on the ultrastructure of the microalgae cell wall was examined using scanning electron microscopy (SEM). Copyright © 2017 Elsevier Ltd. All rights reserved.
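The response-surface step can be illustrated with a generic quadratic model fit. This sketch is not the study's model: it builds a full quadratic design matrix (intercept, linear, squared, and pairwise interaction terms) for coded factors and recovers the coefficients of a known synthetic surface by least squares.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns: intercept, linear, squared, and pairwise interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# Toy 2-factor 'experiment' in coded levels; the response comes from a
# known quadratic surface, so the fit should recover its coefficients.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (30, 2))
y = 50 + 8 * X[:, 0] - 3 * X[:, 1] + 4 * X[:, 0] ** 2 + 2 * X[:, 0] * X[:, 1]
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
```

With real experimental yields, the same fit would be followed by ANOVA on the coefficients, as described in the abstract.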
Effect of printing parameters on gravure patterning with conductive silver ink
NASA Astrophysics Data System (ADS)
Kim, Seunghwan; Sung, Hyung Jin
2015-04-01
Conductive line patterns were printed on a poly-dimethylsiloxane (PDMS) substrate using a gravure printing method with conductive silver ink. A plate-to-roll gravure printing system was prepared for this experiment. Gravure plates with fine lines 5-25 μm in width and tilt angles of 0-90° were fabricated using photolithography techniques. The printability, defined as the ratio of the real printed area to the ideal printed area, was measured and analyzed with respect to the process parameters and the line pattern designs. The effect of the process parameters on fine line patterning was discussed, including the wiping condition, the printing pressure and the printing speed. The printability on a highly adhesive substrate was examined by preparing a nanostructured PDMS substrate featuring a forest of 200 nm nanopillars made using an anodic aluminum oxide (AAO) template. The patterns printed onto the nanostructured PDMS were compared with those printed on a flat PDMS substrate.
NASA Astrophysics Data System (ADS)
junfeng, Li; zhengying, Wei
2017-11-01
Process optimization and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters, including laser power, scan speed and hatch distance. The volume energy density (VED) was defined to account for the combined effect of the main process parameters on the relative density. The results showed that the relative density changed with VED, and the optimized process interval is 55-60 J/mm³. Furthermore, comparing laser power, scan speed and hatch distance by the Taguchi method, it was found that the scan speed had the greatest effect on the relative density. Comparing the microstructure of the cross-sections of the specimens at different scanning speeds, it was found that the microstructures had similar characteristics: all consisted of needle-like martensite distributed in the β matrix. However, with increasing scanning speed the microstructure becomes finer, while lower scan speeds lead to coarsening of the microstructure.
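The volume energy density used above is commonly defined as VED = P/(v·h·t), with laser power P, scan speed v, hatch distance h, and layer thickness t. The layer thickness and the example settings below are assumptions for illustration, not values taken from the study.

```python
def volume_energy_density(power_w, scan_speed_mm_s, hatch_mm, layer_mm):
    """VED in J/mm^3: laser power divided by the volume swept per unit time.

    Assumes the common definition VED = P / (v * h * t); the layer
    thickness t is not given in the abstract and is illustrative here.
    """
    return power_w / (scan_speed_mm_s * hatch_mm * layer_mm)

# Illustrative SLM settings (assumed, not from the study):
ved = volume_energy_density(power_w=280, scan_speed_mm_s=1200,
                            hatch_mm=0.14, layer_mm=0.03)
in_window = 55.0 <= ved <= 60.0   # the optimized interval reported above
```

Because VED lumps several parameters into one number, two parameter sets with the same VED can still densify differently, which is consistent with the abstract's finding that scan speed matters most.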
Application of the CRDS method to the study of gas-phase processes in hot CVD diamond thin film growth
NASA Astrophysics Data System (ADS)
Makarov, Vladimir; Hidalgo, Arturo; Morell, Gerardo; Weiner, Brad; Buzaianu, Madalina
2006-03-01
For a detailed analysis of problems related to hot CVD carbon-containing nano-material growth, we have to detect the different intermediate species forming during the growth process, as well as investigate the dependence of the concentrations of these species on different experimental parameters (concentrations of the CH4 and H2S stable chemical compounds, and the distance from the filament system to the substrate surface). In the present study, the HS and CS radicals were detected using the Cavity Ring Down Spectroscopy (CRDS) method during hot CVD diamond thin film growth for a CH4 (0.4%) + H2 mixture doped with H2S (400 ppm). The absolute absorption density spectra of the HS and CS radicals were obtained as a function of different experimental parameters. This study proves that the HS and CS radicals are intermediates formed during the hot filament CVD process. A kinetics approach was developed for detailed analysis of the experimental data obtained. The kinetics scheme includes homogeneous and heterogeneous processes, as well as transport of the chemical species in the CVD chamber.
NASA Astrophysics Data System (ADS)
Liu, Ronghua; Sun, Qiaofeng; Hu, Tian; Li, Lian; Nie, Lei; Wang, Jiayue; Zhou, Wanhui; Zang, Hengchang
2018-03-01
As a powerful process analytical technology (PAT) tool, near-infrared (NIR) spectroscopy has been widely used in real-time monitoring. In this study, NIR spectroscopy was applied to monitor multiple parameters of the traditional Chinese medicine (TCM) Shenzhiling oral liquid during the concentration process, to guarantee product quality. Five lab-scale batches were used to construct quantitative models for five chemical ingredients and one physical property (sample density) during the concentration process. Paeoniflorin, albiflorin, liquiritin, and sample density were modeled by partial least squares regression (PLSR), while glycyrrhizic acid and cinnamic acid content were modeled by support vector machine regression (SVMR). Standard normal variate (SNV) and/or Savitzky-Golay (SG) smoothing with derivative methods were adopted for spectral pretreatment. Variable selection methods including the correlation coefficient (CC), competitive adaptive reweighted sampling (CARS), and interval partial least squares regression (iPLS) were used to optimize the models. The results indicated that NIR spectroscopy is an effective tool for monitoring the concentration process of Shenzhiling oral liquid.
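Two of the preprocessing steps named above, SNV pretreatment and correlation-coefficient (CC) variable selection, are simple enough to sketch directly. The code below is an illustrative implementation of the general techniques, not the paper's own pipeline; the 0.9 threshold and the toy data are assumptions.

```python
import math

def snv(spectrum):
    """Standard normal variate (SNV): center one spectrum and scale it to
    unit standard deviation, removing multiplicative scatter effects."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in spectrum) / (n - 1))
    return [(v - mean) / std for v in spectrum]

def cc_select(spectra, reference, threshold=0.9):
    """Correlation-coefficient variable selection: keep wavelength indices
    whose |Pearson r| against the reference assay values exceeds the
    threshold. `spectra` is a list of rows, one row per sample."""
    n = len(spectra)
    my = sum(reference) / n
    selected = []
    for j in range(len(spectra[0])):
        col = [row[j] for row in spectra]
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, reference))
        sx = math.sqrt(sum((a - mx) ** 2 for a in col))
        sy = math.sqrt(sum((b - my) ** 2 for b in reference))
        if sx > 0 and sy > 0 and abs(cov / (sx * sy)) >= threshold:
            selected.append(j)
    return selected
```

A constant (zero-variance) wavelength is skipped, and weakly correlated ones fall below the threshold, so only informative variables survive for the downstream PLSR or SVMR model.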
Experimental Study of Heat Transfer Performance of Polysilicon Slurry Drying Process
NASA Astrophysics Data System (ADS)
Wang, Xiaojing; Ma, Dongyun; Liu, Yaqian; Wang, Zhimin; Yan, Yangyang; Li, Yuankui
2016-12-01
In recent years, the growth of the solar photovoltaic industry has greatly promoted the development of polysilicon. However, there has been little research into the slurry by-products of polysilicon production. In this paper, the thermal performance of polysilicon slurry was studied in an industrial drying process with a twin-screw horizontal intermittent dryer. By dividing the drying process into several subunits, the parameters of each unit could be regarded as constant over that period. The time-dependent changes in parameters including temperature, specific heat, and evaporation enthalpy were plotted. An equation for the change in the heat transfer coefficient over time was derived from the heat transfer equations. The concept of a distribution coefficient was introduced to reflect the influence of stirring on the heat transfer area. The distribution coefficient, obtained with the fluid simulation software FLUENT, ranged from 1.2 to 1.7 and simplified the calculation of the heat transfer area during drying. These experimental data can be used to guide the study of polysilicon slurry drying and optimize the design of dryers for industrial processes.
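The subunit idea above can be sketched as code. This assumes the standard linear heat transfer form Q = U·(k_d·A)·ΔT, with the distribution coefficient k_d correcting the nominal area; the paper's actual equation and all numbers below are not from the source and are purely illustrative.

```python
def subunit_htc(q_watts, area_m2, t_heating, t_slurry, dist_coeff):
    """Heat transfer coefficient U of one drying subunit, assuming
    Q = U * (k_d * A) * dT, where k_d is the stirring distribution
    coefficient (reported range 1.2 to 1.7)."""
    return q_watts / (dist_coeff * area_m2 * (t_heating - t_slurry))

# Piecewise evaluation: parameters are held constant within each subunit.
subunits = [
    {"q": 3000.0, "area": 2.0, "t_heat": 120.0, "t_slurry": 90.0, "kd": 1.2},
    {"q": 2600.0, "area": 2.0, "t_heat": 120.0, "t_slurry": 95.0, "kd": 1.5},
    {"q": 2100.0, "area": 2.0, "t_heat": 120.0, "t_slurry": 99.0, "kd": 1.7},
]
u_profile = [subunit_htc(s["q"], s["area"], s["t_heat"], s["t_slurry"], s["kd"])
             for s in subunits]
```

Plotting `u_profile` against subunit midpoints reproduces the kind of time-dependent heat transfer coefficient curve the study describes.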
An invertebrate embryologist's guide to routine processing of confocal images.
von Dassow, George
2014-01-01
It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
Additive manufactured serialization
Bobbitt, III, John T.
2017-04-18
Methods for forming an identifying mark in a structure are described. The method is used in conjunction with an additive manufacturing method and includes the alteration of a process parameter during the manufacturing process. The method can form a unique identifying mark within or on the surface of a structure that is virtually impossible to replicate. The methods can provide a high level of confidence that the identifying mark will remain unaltered on the formed structure.
Current techniques for the real-time processing of complex radar signatures
NASA Astrophysics Data System (ADS)
Clay, E.
A real-time processing technique has been developed for the microwave receiver of the Brahms radar station. The method allows such target signatures as the radar cross section (RCS) of the airframes and rotating parts, the one-dimensional tomography of aircraft, and the RCS of electromagnetic decoys to be characterized. The method allows optimization of experimental parameters including the analysis frequency band, the receiver gain, and the wavelength range of EM analysis.
Predictive Feature Selection for Genetic Policy Search
2014-05-22
inverted pendulum balancing problem (Gomez and Miikkulainen, 1999), where the agent must learn a policy in a continuous state space using discrete...algorithms to automate the process of training and/or designing NNs, mitigate these drawbacks and allow NNs to be easily applied to RL domains (Sher, 2012...racing simulator and the double inverted pendulum balance environments. It also includes parameter settings for all algorithms included in the study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford Kuofei
Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
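The Monte Carlo step described above can be sketched in a few lines: sample an uncertain flux for each of the three penetration routes, sum them, and summarize the resulting distribution. The lognormal parameters below are illustrative placeholders, not values from the paper.

```python
import random

def simulate_total_flux(n_trials=5000, seed=1):
    """Monte Carlo sketch of a three-route percutaneous absorption model:
    each trial draws a flux for every route from an assumed lognormal
    uncertainty distribution and sums them."""
    random.seed(seed)
    totals = []
    for _ in range(n_trials):
        intercellular = random.lognormvariate(-2.0, 0.5)  # stratum corneum
        sweat_duct    = random.lognormvariate(-4.0, 0.8)  # aqueous phase
        hair_follicle = random.lognormvariate(-3.0, 0.6)  # oil phase
        totals.append(intercellular + sweat_duct + hair_follicle)
    totals.sort()
    return {"median": totals[len(totals) // 2],
            "p95": totals[int(0.95 * len(totals))]}
```

A sensitivity analysis like the paper's stepwise regression would then regress the sampled totals on the sampled per-route parameters to rank their importance.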
Wave data processing toolbox manual
Sullivan, Charlene M.; Warner, John C.; Martini, Marinna A.; Lightsom, Frances S.; Voulgaris, George; Work, Paul
2006-01-01
Researchers routinely deploy oceanographic equipment in estuaries, coastal nearshore environments, and shelf settings. These deployments usually include tripod-mounted instruments to measure a suite of physical parameters such as currents, waves, and pressure. Instruments such as the RD Instruments Acoustic Doppler Current Profiler (ADCP(tm)), the Sontek Argonaut, and the Nortek Aquadopp(tm) Profiler (AP) can measure these parameters. The data from these instruments must be processed using proprietary software unique to each instrument to convert measurements to real physical values. These processed files are then available for dissemination and scientific evaluation. For example, the proprietary processing program used to process data from the RD Instruments ADCP for wave information is called WavesMon. Depending on the length of the deployment, WavesMon will typically produce thousands of processed data files. These files are difficult to archive, and further analysis of the data becomes cumbersome. More importantly, these files alone do not include sufficient information pertinent to the deployment (metadata), which could hinder future scientific interpretation. This open-file report describes a toolbox developed to compile, archive, and disseminate the processed wave measurement data from an RD Instruments ADCP, a Sontek Argonaut, or a Nortek AP, referred to as the Wave Data Processing Toolbox. The toolbox aggregates the processed files output by the proprietary software into two NetCDF files: one containing the statistics of the burst data and the other containing the raw burst data (additional details are described below). One important advantage of this toolbox is that it converts the data into NetCDF format. Data in NetCDF format are easy to disseminate, portable to any computer platform, and viewable with freely available public-domain software.
Another important advantage is that a metadata structure is embedded with the data to document pertinent information regarding the deployment and the parameters used to process the data. Using this format ensures that the relevant information about how the data was collected and converted to physical units is maintained with the actual data. EPIC-standard variable names have been utilized where appropriate. These standards, developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) (http://www.pmel.noaa.gov/epic/), provide a universal vernacular allowing researchers to share data without translation.
Processing study of a high temperature adhesive
NASA Technical Reports Server (NTRS)
Progar, D. J.
1984-01-01
An adhesive-bonding process cycle study was performed for a polyimidesulphone. The high molecular weight, linear aromatic system possesses properties which make it attractive as a processable, low-cost material for elevated temperature applications. The results of a study to better understand the parameters that affect the adhesive properties of the polymer for titanium alloy adherends are presented. These include the tape preparation, the use of a primer and press and simulated autoclave processing conditions. The polymer was characterized using Fourier transform infrared spectroscopy, glass transition temperature determination, flow measurements, and weight loss measurements. The lap shear strength of the adhesive was used to evaluate the effects of the bonding process variations.
NASA Astrophysics Data System (ADS)
Kunstadt, Peter; Eng, P.; Steeves, Colyn; Beaulieu, Daniel; Eng, P.
1993-07-01
The number of products being radiation processed worldwide is constantly increasing and today includes such diverse items as medical disposables, fruits and vegetables, spices, meats, seafoods and waste products. This range of products to be processed has resulted in a wide range of irradiator designs and capital and operating cost requirements. This paper discusses the economics of low dose food irradiation applications and the effects of various parameters on unit processing costs. It provides a model for calculating specific unit processing costs by correlating known capital costs with annual operating costs and annual throughputs. It is intended to provide the reader with a general knowledge of how unit processing costs are derived.
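The cost model described above, correlating capital costs with annual operating costs and throughput, can be sketched as follows. The paper's actual model is not reproduced here; this assumes the common approach of annualizing capital with a capital-recovery factor, and all numbers in the test are hypothetical.

```python
def unit_processing_cost(capital_cost, interest_rate, plant_life_years,
                         annual_operating_cost, annual_throughput_t):
    """Unit processing cost = (annualized capital + operating) / throughput.
    Capital is annualized with the capital-recovery factor
    CRF = i(1+i)^n / ((1+i)^n - 1)."""
    growth = (1.0 + interest_rate) ** plant_life_years
    crf = interest_rate * growth / (growth - 1.0)
    annualized_capital = capital_cost * crf
    return (annualized_capital + annual_operating_cost) / annual_throughput_t
```

For example, a hypothetical irradiator costing 5 M at 10% interest over 10 years, with 0.6 M/year operating cost and 20,000 t/year throughput, yields a unit cost of roughly 70.7 per tonne.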
Modeling of dialogue regimes of distance robot control
NASA Astrophysics Data System (ADS)
Larkin, E. V.; Privalov, A. N.
2017-02-01
The process of distance control of mobile robots is investigated. A Petri-Markov net for modeling the dialogue regime is worked out. It is shown that the sequence of operations of the following subjects: a human operator, a dialogue computer, and an onboard computer, may be simulated using the theory of semi-Markov processes. From the semi-Markov process of the general form, a Markov process was obtained that includes only the states of transaction generation. It is shown that a real transaction flow is the result of "concurrency" among the states of the Markov process. An iteration procedure for evaluating transaction flow parameters that takes the "concurrency" effect into account is proposed.
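The "concurrency" effect can be illustrated with a small simulation. When two transaction sources run concurrently, the observed inter-transaction interval is the minimum of the competing sojourn times; for exponential sojourn times the effective rate approaches the sum of the individual rates. This is a simplified stand-in for the paper's iteration procedure, with exponential distributions assumed.

```python
import random

def concurrent_flow_rate(rate_a, rate_b, n=20000, seed=7):
    """Simulate two concurrent transaction sources with exponential sojourn
    times. Each observed interval is the minimum of the two competing
    times, so the measured rate tends toward rate_a + rate_b."""
    random.seed(seed)
    total = sum(min(random.expovariate(rate_a), random.expovariate(rate_b))
                for _ in range(n))
    return n / total
```

With rates 1.0 and 2.0, the simulated effective rate comes out close to 3.0, higher than either source alone, which is the "concurrency" effect the abstract describes.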
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains, including finance, engineering, economics, and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows switching smoothly or abruptly between different polynomial regression models. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation-Maximization (EM) algorithm. The M-step of the EM algorithm uses a multi-class Iterative Reweighted Least Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated and real-world data was performed against two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a hidden Markov regression model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach was applied to modeling and classifying time series representing condition measurements acquired during switch operations.
Landsat-5 bumper-mode geometric correction
Storey, James C.; Choate, Michael J.
2004-01-01
The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.
Pervez, Hifsa; Mozumder, Mohammad S; Mourad, Abdel-Hamid I
2016-08-22
The current study presents an investigation of the optimization of injection molding parameters of HDPE/TiO₂ nanocomposites using grey relational analysis with the Taguchi method. Four control factors, namely filler concentration (i.e., TiO₂), barrel temperature, residence time, and holding time, were chosen, each at three levels. Mechanical properties, such as yield strength, Young's modulus, and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L₉ orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO₂, a barrel temperature of 225 °C, a residence time of 30 min, and a holding time of 20 s. Moreover, analysis of variance (ANOVA) was applied to identify the most significant factor, and the percentage of TiO₂ nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO₂ nanocomposites fabricated through the injection molding process.
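The grey relational grade computation used in studies like this one follows a standard recipe: normalize each response column, compute grey relational coefficients against the ideal sequence, and average them per run. The sketch below implements that general recipe (it is not the paper's code); the distinguishing coefficient ζ = 0.5 is the conventional default, and columns are assumed to vary (no constant responses).

```python
def grey_relational_grades(responses, larger_is_better=True, zeta=0.5):
    """responses: rows = experimental runs, cols = quality targets.
    Returns one grey relational grade per run; the run with the highest
    grade is the best overall compromise."""
    n_runs, n_cols = len(responses), len(responses[0])
    norm = [[0.0] * n_cols for _ in range(n_runs)]
    for j in range(n_cols):
        col = [responses[i][j] for i in range(n_runs)]
        lo, hi = min(col), max(col)
        for i in range(n_runs):
            if larger_is_better:
                norm[i][j] = (col[i] - lo) / (hi - lo)
            else:
                norm[i][j] = (hi - col[i]) / (hi - lo)
    grades = []
    for i in range(n_runs):
        # Deviation from the ideal (all-ones) sequence; since the data are
        # normalized to [0, 1], delta_min = 0 and delta_max = 1, so the
        # coefficient (d_min + zeta*d_max)/(d_ij + zeta*d_max) = zeta/(d + zeta).
        deltas = [1.0 - norm[i][j] for j in range(n_cols)]
        coeffs = [zeta / (d + zeta) for d in deltas]
        grades.append(sum(coeffs) / n_cols)
    return grades
```

Averaging grades per factor level (the "average responses" step in the abstract) then identifies the optimal setting of each control factor.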
Analysis of latency performance of bluetooth low energy (BLE) networks.
Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun
2014-12-23
Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery has been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE fundamentally changes the design of the discovery mechanism, including the use of three advertising channels. Recently, several works have analyzed BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model of the BLE discovery process. In particular, the wide range of parameter settings gives BLE devices considerable latitude to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper focuses on building an analytical model of the discovery probability and the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance of the discovery process.
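The parameter sensitivity the paper studies can be illustrated with a deliberately simplified single-channel simulation (the real mechanism uses three advertising channels and per-channel timing). Advertising events occur every advInterval plus a random delay of up to 10 ms, as in the BLE specification; discovery happens when an event lands inside a scan window. All interval values below are illustrative.

```python
import random

def mean_discovery_latency(adv_interval, scan_interval, scan_window,
                           n=2000, seed=11):
    """Simplified single-channel BLE discovery model (times in ms).
    The advertiser transmits at t = k*adv_interval + advDelay with
    advDelay ~ U(0, 10); the scanner listens during the first
    `scan_window` ms of every `scan_interval`. Returns the mean latency
    of the first advertisement caught by a scan window."""
    random.seed(seed)
    latencies = []
    for _ in range(n):
        offset = random.uniform(0, scan_interval)  # random phase between devices
        t = 0.0
        while True:
            t += adv_interval + random.uniform(0, 10)
            if (t + offset) % scan_interval < scan_window:
                latencies.append(t)
                break
    return sum(latencies) / n
```

With continuous scanning (window equal to interval) the first advertising event is always caught; shrinking the scan window raises the expected latency, which is exactly the kind of parameter sensitivity the analytical model quantifies.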
Radiometric recalibration procedure for Landsat-5 Thematic Mapper data
Chander, G.; Micijevic, E.; Hayes, R.W.; Barsi, J.A.
2008-01-01
The Landsat-5 (L5) satellite was launched on March 01, 1984, with a design life of three years. Remarkably, the L5 Thematic Mapper (TM) has collected data for 23 years. Over this time, the detectors have aged, and the instrument's radiometric characteristics have changed since launch. The calibration procedures and parameters have also changed over time. Revised radiometric calibrations have improved the radiometric accuracy of recently processed data; however, users with data that were processed prior to the calibration update do not benefit from the revisions. A procedure has been developed to give users the ability to recalibrate their existing Level 1 (L1) products without having to purchase reprocessed data from the U.S. Geological Survey (USGS). The accuracy of the recalibration depends on knowledge of the prior calibration applied to the data. The "Work Order" file, included with standard National Land Archive Production System (NLAPS) data products, gives the parameters that define the applied calibration. These are the Internal Calibrator (IC) calibration parameters or, if there were problems with the IC calibration, the default prelaunch calibration. This paper details the recalibration procedure for data processed using IC, for which users have the Work Order file. © 2008 IEEE.
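The core of such a recalibration is simple in the linear case: use the old (Work Order) parameters to recover the at-sensor signal, then reapply the revised parameters. This is a conceptual sketch only; the actual L5 TM procedure involves per-band, time-dependent parameters, and the linear form L = gain·raw + bias is an assumption here.

```python
def recalibrate(radiance_old, gain_old, bias_old, gain_new, bias_new):
    """Undo a linear calibration and apply a revised one:
        raw   = (L_old - bias_old) / gain_old   # recover raw signal
        L_new = gain_new * raw + bias_new       # revised radiance
    """
    raw = (radiance_old - bias_old) / gain_old
    return gain_new * raw + bias_new
```

When old and new parameters coincide the value passes through unchanged, which is a handy sanity check when applying a procedure like this to an archive.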
Chen, Chunyan; Long, Sihua; Li, Airong; Xiao, Guoqing; Wang, Linyuan; Xiao, Zeyi
2017-03-16
As both ethanol and butanol fermentation processes are being urgently developed to meet increasing biofuel demand, a performance comparison of aerobic ethanol fermentation and anaerobic butanol fermentation in a continuous and closed-circulating fermentation (CCCF) system was needed to establish their fermentation characteristics and further optimize the process. For ethanol fermentation, the fermentation and pervaporation parameters (average cell concentration, glucose consumption rate, cumulative product concentration, product flux, and separation factor) were 11.45 g/L, 3.70 g/L/h, 655.83 g/L, 378.5 g/m²/h, and 4.83, respectively; the corresponding parameters for butanol fermentation were 2.19 g/L, 0.61 g/L/h, 28.03 g/L, 58.56 g/m²/h, and 10.62. The parameter profiles indicated that the intensity and efficiency of ethanol fermentation were higher than those of butanol fermentation, but the stability of butanol fermentation was superior. Although the two fermentation processes have different features, their performance indicates the application prospects of both ethanol and butanol production in the CCCF system.
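Two of the pervaporation parameters quoted above have standard definitions worth making explicit. The separation factor compares the target species' ratio in permeate versus feed, and the product flux is permeate mass per membrane area per time. The functions below implement those textbook definitions; the numbers in the test are illustrative, not the paper's measured compositions.

```python
def separation_factor(y_permeate, x_feed):
    """Pervaporation separation factor
        alpha = (y / (1 - y)) / (x / (1 - x)),
    with y and x the mass fractions of the target species in the
    permeate and the feed, respectively."""
    return (y_permeate / (1.0 - y_permeate)) / (x_feed / (1.0 - x_feed))

def product_flux(mass_g, area_m2, hours):
    """Product flux J = m / (A * t), in g/m^2/h."""
    return mass_g / (area_m2 * hours)
```

A permeate enriched from 10% to 50% target species gives alpha = 9, on the same order as the ethanol value (4.83) reported above.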
Meirelles, S L C; Mokry, F B; Espasandín, A C; Dias, M A D; Baena, M M; de A Regitano, L C
2016-06-10
Genetic parameters for backfat thickness (BFT), rib eye area (REA), and body weight (BW), and the correlations between them, were estimated for Canchim beef cattle raised on natural pastures in Brazil. Data from 1648 animals were analyzed using a multi-trait (BFT, REA, and BW) animal model by the Bayesian approach. This model included the effects of contemporary group, age, and individual heterozygosity as covariates, as well as direct additive genetic and random residual effects. The heritabilities estimated for BFT (0.16), REA (0.50), and BW (0.44) indicated their potential for genetic improvement and response to selection. Furthermore, the genetic correlations between BW and the remaining traits were high (>0.50), suggesting that selection for BW could improve REA and BFT. On the other hand, the genetic correlation between BFT and REA was low (0.39 ± 0.17) and had considerable variation, suggesting that these traits can be jointly included as selection criteria without influencing each other. REA and BFT, as measured by ultrasound, responded to the selection process; therefore, selection for yearling weight results in changes in REA and BFT.
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription for regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
Experimental Methods Using Photogrammetric Techniques for Parachute Canopy Shape Measurements
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Downey, James M.; Lunsford, Charles B.; Desabrais, Kenneth J.; Noetscher, Gregory
2007-01-01
NASA Langley Research Center in partnership with the U.S. Army Natick Soldier Center has collaborated on the development of a payload instrumentation package to record the physical parameters observed during parachute air drop tests. The instrumentation package records a variety of parameters including canopy shape, suspension line loads, payload 3-axis acceleration, and payload velocity. This report discusses the instrumentation design and development process, as well as the photogrammetric measurement technique used to provide shape measurements. The scaled model tests were conducted in the NASA Glenn Plum Brook Space Propulsion Facility, OH.
Solid State Joining of Magnesium to Steel
NASA Astrophysics Data System (ADS)
Jana, Saumyadeep; Hovanski, Yuri; Pilli, Siva P.; Field, David P.; Yu, Hao; Pan, Tsung-Yu; Santella, M. L.
Friction stir welding and ultrasonic welding techniques were applied to join automotive magnesium alloys to steel sheet. The effect of tooling and process parameters on the post-weld microstructure, texture and mechanical properties was investigated. Static and dynamic loading were utilized to investigate the joint strength of both cast and wrought magnesium alloys including their susceptibility and degradation under corrosive media. The conditions required to produce joint strengths in excess of 75% of the base metal strength were determined, and the effects of surface coatings, tooling and weld parameters on weld properties are presented.
Taylor, Stephen R; Simon, Joseph; Sampson, Laura
2017-05-05
We introduce a technique for gravitational-wave analysis, where Gaussian process regression is used to emulate the strain spectrum of a stochastic background by training on population-synthesis simulations. This leads to direct Bayesian inference on astrophysical parameters. For pulsar timing arrays specifically, we interpolate over the parameter space of supermassive black-hole binary environments, including three-body stellar scattering, and evolving orbital eccentricity. We illustrate our approach on mock data, and assess the prospects for inference with data similar to the NANOGrav 9-yr data release.
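Gaussian process regression of the kind used as an emulator above has a compact closed form: the predictive mean at a test point is k*ᵀ(K + σ²I)⁻¹y. The sketch below is a minimal 1-D implementation with an RBF kernel, standing in for the paper's multi-dimensional interpolation over population-synthesis simulations; the kernel length scale and noise level are assumptions.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, y):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(train_x, train_y, test_x, noise=1e-6, length=1.0):
    """GP regression mean: mu(x*) = k*^T (K + noise*I)^-1 y."""
    n = len(train_x)
    K = [[rbf(train_x[i], train_x[j], length) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, train_y)
    return [sum(rbf(x, train_x[i], length) * alpha[i] for i in range(n))
            for x in test_x]
```

Trained on a handful of simulated spectra (here, scalar stand-ins), the emulator reproduces the training outputs and interpolates smoothly between them, which is what makes direct Bayesian inference over the simulation parameter space tractable.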
A VLBI variance-covariance analysis interactive computer program. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bock, Y.
1980-01-01
An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies, and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions toward developing optimal observation schedules are included.
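The core of any such variance-covariance analysis is forming the least-squares normal matrix N = AᵀWA from the observation design and taking its inverse as the parameter covariance. The sketch below shows that machinery for a two-parameter case (an intercept-plus-trend design standing in for an actual VLBI parameterization, which is an assumption here).

```python
def normal_matrix(design, weights):
    """N = A^T W A for a diagonal weight matrix (weights = 1/sigma^2 per
    observation). `design` is the list of observation rows of A."""
    n_par = len(design[0])
    return [[sum(weights[k] * design[k][i] * design[k][j]
                 for k in range(len(design)))
             for j in range(n_par)] for i in range(n_par)]

def covariance_2x2(N):
    """Parameter covariance Sigma = N^-1 for the 2-parameter case."""
    det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
    return [[ N[1][1] / det, -N[0][1] / det],
            [-N[1][0] / det,  N[0][0] / det]]
```

Because the covariance depends only on the design and weights, not on actual observations, this is exactly what makes pre-observation schedule optimization possible: candidate schedules can be compared by the parameter variances they would yield.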
Optimization of injection molding process parameters for a plastic cell phone housing component
NASA Astrophysics Data System (ADS)
Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya
2016-11-01
Injection molding is one of the most widely used processes for producing thin-walled plastic items. However, setting optimal process parameters is difficult, as poor settings can produce defects such as shrinkage in the molded item. This study aims to determine optimum injection molding process parameters that reduce shrinkage defects in a plastic cell phone cover. The currently used machine settings produced shrinkage, with length and width dimensions below the specified limits. Thus, further experiments were needed to identify optimum process parameters that keep the length and width close to their targets with minimal variation. Mold temperature, injection pressure, and screw rotation speed were used as the process parameters in this research. Response surface methodology (RSM) was applied to find the optimal molding process parameters. The major factors influencing the responses were identified using the analysis of variance (ANOVA) technique. Verification runs showed that the shrinkage defect can be minimized with the optimal settings found by RSM.
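The essence of the RSM step is fitting a low-order polynomial response surface to the experimental runs and locating its stationary point. The one-factor sketch below (a quadratic through three design points via divided differences) illustrates that idea; a real RSM study like this one fits a multi-factor quadratic, and the numbers in the test are invented.

```python
def quadratic_through(p0, p1, p2):
    """Fit y = a + b*x + c*x^2 exactly through three (x, y) points using
    divided differences."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    s01 = (y1 - y0) / (x1 - x0)          # first divided difference
    s12 = (y2 - y1) / (x2 - x1)
    c = (s12 - s01) / (x2 - x0)          # second divided difference
    b = s01 - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 * x0
    return a, b, c

def stationary_point(b, c):
    """x* = -b / (2c): the optimum of the fitted response surface
    (a minimum of the defect response when c > 0)."""
    return -b / (2.0 * c)
```

For a shrinkage response that dips between two parameter extremes, the fitted stationary point is the candidate optimal setting that the verification runs would then confirm.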
Group interaction and flight crew performance
NASA Technical Reports Server (NTRS)
Foushee, H. Clayton; Helmreich, Robert L.
1988-01-01
The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.
β-decay studies of r-process nuclei at NSCL
NASA Astrophysics Data System (ADS)
Pereira, J.; Aprahamian, A.; Arndt, O.; Becerril, A.; Elliot, T.; Estrade, A.; Galaviz, D.; Hennrich, S.; Hosmer, P.; Schnorrenberger, L.; Kessler, R.; Kratz, K.-L.; Lorusso, G.; Mantica, P. F.; Matos, M.; Montes, F.; Pfeiffer, B.; Quinn, M.; Santi, P.; Schatz, H.; Schertz, F.; Smith, E.; Tomlin, B. E.; Walters, W. B.; Wöhr, A.
2008-06-01
Observed neutron-capture elemental abundances in metal-poor stars, along with ongoing analysis of the extremely metal-poor Eu-enriched sub-class provide new guidance for astrophysical models aimed at finding the r-process sites. The present paper emphasizes the importance of nuclear physics parameters entering in these models, particularly β-decay properties of neutron-rich nuclei. In this context, several r-process motivated β-decay experiments performed at the National Superconducting Cyclotron Laboratory (NSCL) are presented, including a summary of results and impact on model calculations.
"Second Chance": Some Theoretical and Empirical Remarks.
ERIC Educational Resources Information Center
Inbar, Dan E.; Sever, Rita
1986-01-01
Presents a conceptual framework of second-chance systems analyzable in terms of several basic parameters (targeted population, declared goals, processes, options for students, evaluation criteria, and implications for the regular system). Uses this framework to analyze an Israeli external high school, the subject of a large-scale study. Includes 3…
Unit Operation Experiment Linking Classroom with Industrial Processing
ERIC Educational Resources Information Center
Benson, Tracy J.; Richmond, Peyton C.; LeBlanc, Weldon
2013-01-01
An industrial-type distillation column, including appropriate pumps, heat exchangers, and automation, was used as a unit operations experiment to provide a link between classroom teaching and real-world applications. Students were presented with an open-ended experiment where they defined the testing parameters to solve a generalized problem. The…
The ability of pervaporation to remove methyl t-butyl ether (MTBE) from water was evaluated at bench- and pilot-scales. Process parameters studied included flow rate, temperature, MTBE concentration, membrane module type, and permeate pressure. Pervaporation performance was ass...
USDA-ARS?s Scientific Manuscript database
Several parameters of microwave-assisted extraction (MAE), including extraction time, extraction temperature, ethanol concentration and solid-liquid ratio, were selected to describe the MAE processing. The silybin content, measured by UV-Vis spectrophotometry, was considered as the silymarin yield....
NASA Technical Reports Server (NTRS)
Tilton, James C. (Inventor)
2010-01-01
A method, computer-readable storage, and apparatus for implementing recursive segmentation of data with spatial characteristics into regions, including splitting and remerging of pixels with contiguous region designations and a user-controlled parameter for providing a preference for merging adjacent regions to eliminate window artifacts.
A CRITERION PAPER ON PARAMETERS OF EDUCATION. FINAL REVISION.
ERIC Educational Resources Information Center
MEIERHENRY, W. C.
This position paper defines aspects of innovation in education. The appropriateness of planned change and the legitimacy of function of planned change are discussed. Primary elements of innovation include the substitution of one material or process for another, the restructuring of teacher assignments, value changes with respect to teaching…
NASA Astrophysics Data System (ADS)
Chitrakar, S.; Miller, S. N.; Liu, T.; Caffrey, P. A.
2015-12-01
Water quality data have been collected from three representative stream reaches in a coalbed methane (CBM) development area for over five years to improve the understanding of salt loading in the system. These streams are located within the Atlantic Rim development area of Muddy Creek in south-central Wyoming, where significant development of CBM wells is ongoing. The representative sampling reaches included Duck Pond Draw and Cow Creek, which receive co-produced water, and South Fork Creek and upstream Cow Creek, which do not. Water samples were assayed for various parameters, including sodium, calcium, magnesium, fluoride, chloride, nitrate, O-phosphate, sulfate, carbonate, and bicarbonate, as well as other water quality parameters such as pH, conductivity, and TDS. Based on these water quality parameters we have investigated the hydrochemical and geochemical processes responsible for the high variability in water quality in the region. However, effective interpretation of complex databases to understand these processes has been challenging due to the system's complexity. In this work we applied multivariate statistical techniques, including cluster analysis (CA), principal component analysis (PCA) and discriminant analysis (DA), to analyze the water quality data and identify similarities and differences among our locations. First, CA was applied to group the monitoring sites based on multivariate similarities. Second, PCA was applied to identify the prevalent parameters responsible for the variation of water quality in each group. Third, DA was used to identify the most important factors responsible for the variation of water quality during the low-flow and high-flow seasons. The purpose of this study is to improve the understanding of the factors and sources influencing the spatial and temporal variation of water quality.
The ultimate goal of this whole research is to develop coupled salt loading and GIS-based hydrological modelling tool that will be able to simulate the salt loadings under various user defined scenarios in the regions undergoing CBM development. Therefore, the findings from this study will be used to formulate the predominant processes responsible for solute loading.
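As a hedged illustration of the PCA step described above, the sketch below standardizes a toy water-quality table (hypothetical values, not the study's data; columns: sodium, sulfate, conductivity) and extracts the first principal component by power iteration on the correlation matrix:

```python
def standardize(col):
    # center to zero mean and scale to unit sample variance
    m = sum(col) / len(col)
    s = (sum((v - m) ** 2 for v in col) / (len(col) - 1)) ** 0.5
    return [(v - m) / s for v in col]

rows = [  # hypothetical samples: produced-water reaches vs. background reaches
    [520, 310, 2100], [480, 290, 1950], [90, 40, 400],
    [110, 55, 450], [500, 300, 2050], [100, 50, 420],
]
cols = [standardize([r[j] for r in rows]) for j in range(3)]
n = len(rows)
cov = [[sum(cols[i][k] * cols[j][k] for k in range(n)) / (n - 1)
        for j in range(3)] for i in range(3)]

v = [1.0, 1.0, 1.0]  # power iteration for the leading eigenvector
for _ in range(50):
    w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
eig = sum(v[i] * sum(cov[i][j] * v[j] for j in range(3)) for i in range(3))
explained = eig / 3.0  # trace of a standardized covariance matrix is 3
```

Because the toy salinity indicators co-vary strongly, one component explains most of the variance, which is exactly the kind of dimensional reduction PCA provides for interpreting such monitoring data.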
NASA Astrophysics Data System (ADS)
Naik, Deepak kumar; Maity, K. P.
2018-03-01
Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extensively high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This research work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness, a material used very extensively in manufacturing industries, was taken as the workpiece. Linear dimensions were measured following Taguchi’s L16 orthogonal array design approach, with three levels selected for each process parameter. In all experiments, a clockwise cut direction was followed. The measurement results were then analyzed: analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA reveals the effect of the input process parameters on the linear dimension along the X axis, and the results give the optimal process parameter settings for that dimension. The investigation clearly shows that a specific range of input process parameters achieves improved machinability.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
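The simulation-based likelihood idea can be sketched as follows, with a trivial stochastic simulator standing in for the forest model: a normal distribution is fit to a summary statistic over replicate simulations at each proposed parameter value, and that parametric likelihood approximation is placed inside a random-walk Metropolis sampler. Every model, statistic and tuning value below is illustrative.

```python
import random, math

random.seed(1)

def simulate(theta, n=30):
    # stand-in stochastic model: noisy observations centered on theta
    return [theta + random.gauss(0, 0.5) for _ in range(n)]

def summary(data):
    return sum(data) / len(data)

def synthetic_loglik(theta, obs_stat, reps=20):
    # fit a normal to the summary statistic over replicate simulations
    stats = [summary(simulate(theta)) for _ in range(reps)]
    mu = sum(stats) / reps
    var = sum((s - mu) ** 2 for s in stats) / (reps - 1) + 1e-9
    return -0.5 * math.log(2 * math.pi * var) - (obs_stat - mu) ** 2 / (2 * var)

obs_stat = summary(simulate(2.0))  # "virtual field data", true theta = 2
theta, ll = 0.5, synthetic_loglik(0.5, obs_stat)
samples = []
for _ in range(1500):  # random-walk Metropolis
    prop = theta + random.gauss(0, 0.3)
    ll_prop = synthetic_loglik(prop, obs_stat)
    if math.log(random.random()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    samples.append(theta)
post_mean = sum(samples[500:]) / len(samples[500:])
```

The chain recovers the known parameter from the virtual data, mirroring (in miniature) the retrieval test the paper reports for FORMIND.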
Anderman, Evan R.; Hill, Mary Catherine
2001-01-01
Observations of the advective component of contaminant transport in steady-state flow fields can provide important information for the calibration of ground-water flow models. This report documents the Advective-Transport Observation (ADV2) Package, version 2, which allows advective-transport observations to be used in the three-dimensional ground-water flow parameter-estimation model MODFLOW-2000. The ADV2 Package is compatible with some of the features in the Layer-Property Flow and Hydrogeologic-Unit Flow Packages, but is not compatible with the Block-Centered Flow or Generalized Finite-Difference Packages. The particle-tracking routine used in the ADV2 Package duplicates the semi-analytical method of MODPATH, as shown in a sample problem. Particles can be tracked in a forward or backward direction, and effects such as retardation can be simulated through manipulation of the effective-porosity value used to calculate velocity. Particles can be discharged at cells that are considered to be weak sinks, in which the sink applied does not capture all the water flowing into the cell, using one of two criteria: (1) if there is any outflow to a boundary condition such as a well or surface-water feature, or (2) if the outflow exceeds a user specified fraction of the cell budget. Although effective porosity could be included as a parameter in the regression, this capability is not included in this package. The weighted sum-of-squares objective function, which is minimized in the Parameter-Estimation Process, was augmented to include the square of the weighted x-, y-, and z-components of the differences between the simulated and observed advective-front locations at defined times, thereby including the direction of travel as well as the overall travel distance in the calibration process. 
The sensitivities of the particle movement to the parameters needed to minimize the objective function are calculated for any particle location using the exact sensitivity-equation approach; the equations are derived by taking the partial derivatives of the semi-analytical particle-tracking equation with respect to the parameters. The ADV2 Package is verified by showing that parameter estimation using advective-transport observations produces the true parameter values in a small but complicated test case when exact observations are used. To demonstrate how the ADV2 Package can be used in practice, a field application is presented. In this application, the ADV2 Package is used first in the Sensitivity-Analysis mode of MODFLOW-2000 to calculate measures of the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Cape Cod, Massachusetts. The ADV2 Package is then used in the Parameter-Estimation mode of MODFLOW-2000 to determine best-fit parameter values. It is concluded that, for this problem, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and the use of formal parameter-estimation methods and related techniques produced significant insight into the physical system.
NASA Astrophysics Data System (ADS)
Shayesteh Moghaddam, Narges; Saedi, Soheil; Amerinatanzi, Amirhesam; Saghaian, Ehsan; Jahadakbar, Ahmadreza; Karaca, Haluk; Elahinia, Mohammad
2018-03-01
Material and mechanical properties of NiTi shape memory alloys strongly depend on the fabrication process parameters and the resulting microstructure. In selective laser melting, the combination of parameters such as laser power, scanning speed, and hatch spacing determines the microstructural defects, grain size and texture. Therefore, processing parameters can be adjusted to tailor the microstructure and mechanical response of the alloy. In this work, NiTi samples were fabricated from Ni50.8Ti (at.%) powder via an SLM PXM by Phenix/3D Systems, and the effects of processing parameters were systematically studied. The relationship between the processing parameters and superelastic properties was investigated thoroughly. It is shown that energy density is not the only parameter that governs the material response: hatch spacing is the dominant factor for tailoring the superelastic response, and with the right choice of process parameters, perfect superelasticity with recoverable strains of up to 5.6% can be observed in the as-fabricated condition.
Process qualification and testing of LENS deposited AY1E0125 D-bottle brackets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atwood, Clinton J.; Smugeresky, John E.; Jew, Michael
2006-11-01
The LENS Qualification team had the goal of performing a process qualification for the Laser Engineered Net Shaping™ (LENS®) process. Process qualification requires that a part be selected for process demonstration; the AY1E0125 D-Bottle Bracket from the W80-3 was selected for this work. The repeatability of the LENS process was baselined to determine process parameters. Six D-Bottle brackets were deposited using LENS, machined to final dimensions, and tested in comparison to conventionally processed brackets. The tests, taken from ES1E0003, included a mass analysis and structural dynamic testing, including free-free and assembly-level modal tests and Haversine shock tests. The LENS brackets performed with very similar characteristics to the conventionally processed brackets. Based on the results of the testing, it was concluded that the performance of the brackets made them eligible for parallel path testing in subsystem-level tests. The testing results and process rigor qualified the LENS process as detailed in EER200638525A.
NASA Astrophysics Data System (ADS)
Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul
2018-04-01
Chicken eggs are a food in high demand by humans. Human operators cannot work perfectly and continuously when grading eggs. Instead of grading eggs by weight, an automatic computer-vision system based on egg shape parameters can be used to improve the productivity of egg grading. However, an early hypothesis indicated that a number of egg class assignments will change when egg shape parameters are used instead of weight. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia are captured. Then, the egg images are processed using image pre-processing techniques such as cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed, and a k-nearest neighbour classifier is used in the classification process. Two methods, namely supervised learning (using weight measures as graded by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), are used in the experiment. Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, versus 44.17% using weight-based labels. In conclusion, an automated egg grading system using computer vision is better implemented with shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
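The classification step can be illustrated with a minimal k-nearest-neighbour sketch. The feature values and grade labels below are invented; in practice the full eight-feature vectors would come from image segmentation and would be standardized before computing distances.

```python
def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label); majority vote among the
    # k nearest training points by squared Euclidean distance
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# (major axis mm, area mm^2) -> grade; toy numbers only, features unscaled
train = [((58, 2400), "A"), ((57, 2350), "A"), ((53, 2100), "B"),
         ((52, 2050), "B"), ((48, 1800), "C"), ((47, 1750), "C")]
grade = knn_predict(train, (56, 2300))
```

A query egg with large major axis and area lands among the grade-A neighbours; the paper's pipeline adds information-gain feature selection and PCA before this step.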
A real-time multi-channel monitoring system for stem cell culture process.
Xicai Yue; Drakakis, E M; Lim, M; Radomska, A; Hua Ye; Mantalaris, A; Panoskaltsis, N; Cass, A
2008-06-01
A novel multi-parametric physiological measurement system with up to 128 channels, suitable for monitoring hematopoietic stem cell culture processes and cell cultures in general, is presented in this paper. The system aims to measure in real time the most important physical and chemical culture parameters of hematopoietic stem cells, including physicochemical parameters, nutrients, and metabolites, in a long-term culture process. The overarching scope of this research effort is to control and optimize the whole bioprocess by means of the acquisition of real-time quantitative physiological information from the culture. The system is designed in a modular manner. Each hardware module can operate as an independent gain-programmable, level-shift-adjustable, 16-channel data acquisition system specific to a sensor type. Up to eight such data acquisition modules can be combined and connected to the host PC to realize the whole system hardware. The control of data acquisition and the subsequent management of data are performed by the system's software, which is coded in LabVIEW. Preliminary experimental results presented here show that the system not only interfaces to various types of sensors, allowing the monitoring of different types of culture parameters, but also captures dynamic variations of culture parameters by means of real-time multi-channel measurements, thus providing additional information on both the temporal and spatial profiles of these parameters within a bioreactor. The system is by no means constrained to the hematopoietic stem cell culture field; it is suitable for cell growth monitoring applications in general.
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. By accounting for the influence of sawing speed on the tangential force distribution, the modified PFD (MPFD) achieved high predictive accuracy across a wide range of sawing parameters, including sawing speed: the mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies; on the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated: the case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy consumption.
Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun
2018-01-01
Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field due to tungsten's intrinsic properties. The key factors of SLM of high-density tungsten, including powder characteristics, layer thickness, and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% theoretical density) were produced using SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers and parallel to the maximum temperature gradient, which can ensure good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing. PMID:29707073
Information spreading dynamics in hypernetworks
NASA Astrophysics Data System (ADS)
Suo, Qi; Guo, Jin-Li; Shen, Ai-Zhong
2018-04-01
Contact pattern and spreading strategy fundamentally influence the spread of information. Current mathematical methods largely assume that contacts between individuals are fixed by networks. In fact, individuals are affected by all of their neighbors across different social relationships. Here, we develop a mathematical approach to depict the information spreading process in hypernetworks. Each individual is viewed as a node, and each social relationship containing the individual is viewed as a hyperedge. Based on the SIS epidemic model, we construct two spreading models. One model is based on global transmission, corresponding to the RP strategy; the other is based on local transmission, corresponding to the CP strategy. These models degenerate into complex network models for a special choice of parameter, so the hypernetwork models extend the traditional models and are more realistic. Further, we discuss the impact on the models of parameters including the structural parameters of the hypernetwork, the spreading rate, the recovery rate, and the information seed. Propagation time and the density of informed nodes reveal the overall trend of information dissemination. Comparing the two models, we find that there is no spreading threshold under RP, while a spreading threshold exists under CP. The RP strategy induces a broader and faster information spreading process under the same parameters.
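A toy discrete-time SIS simulation on a hypernetwork, in the spirit of the models described, might look like the following. The hyperedges, rates, and seed are illustrative, and this stochastic sketch stands in for the paper's analytical treatment.

```python
import random

random.seed(7)
hyperedges = [  # node ids grouped by social relationship (each set = one hyperedge)
    {0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}, {0, 7, 8, 9},
]
nodes = sorted(set().union(*hyperedges))
beta, gamma = 0.4, 0.1  # per-contact infection rate / recovery rate
infected = {0}          # information seed

for _ in range(60):
    new_infected = set(infected)
    for he in hyperedges:
        sick = he & infected
        for node in he - infected:
            # independent chance of infection from each sick member of the circle
            if random.random() < 1 - (1 - beta) ** len(sick):
                new_infected.add(node)
    # recoveries: infected nodes return to susceptible (SIS dynamics)
    for node in list(new_infected):
        if node in infected and random.random() < gamma:
            new_infected.discard(node)
    infected = new_infected

density = len(infected) / len(nodes)  # density of informed nodes
```

Sweeping beta and gamma (or the hyperedge structure) and recording the long-run density is the numerical analogue of locating the spreading threshold discussed above.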
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
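The kind of reduction TSPROC performs can be sketched in a few lines: daily flows are aggregated into monthly volumes and a 7-day low-flow index of the sort that could feed a PEST objective function. The series below is synthetic, and the function names are not TSPROC commands.

```python
from datetime import date, timedelta

start = date(2010, 6, 1)
flows = [3.0, 2.5, 2.0, 1.8, 1.6, 1.5, 1.4, 1.5, 1.7, 2.2] * 9  # m3/s, 90 days
series = [(start + timedelta(days=i), q) for i, q in enumerate(flows)]

def monthly_volume(series):
    # integrate daily mean flow (m3/s) into volume (m3) per calendar month
    vols = {}
    for day, q in series:
        key = (day.year, day.month)
        vols[key] = vols.get(key, 0.0) + q * 86400.0
    return vols

def low_flow_7day(series):
    # minimum 7-day moving-average flow, a common hydrologic index
    qs = [q for _, q in series]
    return min(sum(qs[i:i + 7]) / 7.0 for i in range(len(qs) - 6))

vols = monthly_volume(series)
q7 = low_flow_7day(series)
```

In a calibration workflow, statistics like `vols` and `q7` would be computed identically from observed and simulated series so that PEST can match them term by term.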
Inverse analysis of water profile in starch by non-contact photopyroelectric method
NASA Astrophysics Data System (ADS)
Frandas, A.; Duvaut, T.; Paris, D.
2000-07-01
The photopyroelectric (PPE) method in a non-contact configuration was proposed to study water migration in starch sheets used for biodegradable packaging. A 1-D theoretical model was developed, allowing the study of samples having a water profile characterized by an arbitrary continuous function. An experimental setup was designed for this purpose, which included the choice of excitation source, detection of signals, signal and data processing, and cells for conditioning the samples. We report here the development of an inversion procedure allowing for the determination of the parameters that influence the PPE signal. This procedure led to the optimization of experimental conditions in order to identify the parameters related to the water profile in the sample, and to monitor the dynamics of the process.
Satellite on-board processing for earth resources data
NASA Technical Reports Server (NTRS)
Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.
1975-01-01
Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for data bulk reduction are presented, along with a candidate data format. The computational requirements of the data analysis algorithms are included, along with a review of computer architectures and organizations. Computer architectures capable of handling the algorithm computational requirements are suggested, and the environmental effects of an on-board processor are discussed. By relating performance parameters to the system requirements of each user, the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.
Pan, Hongye; Zhang, Qing; Cui, Keke; Chen, Guoquan; Liu, Xuesong; Wang, Longhu
2017-05-01
The extraction of linarin from Flos chrysanthemi indici by ethanol was investigated. Two modeling techniques, response surface methodology and artificial neural network, were adopted to optimize the process parameters: ethanol concentration, extraction period, extraction frequency, and solvent-to-material ratio. Both methods provided good predictions, but the artificial neural network provided the more accurate result. The optimum process parameters were an ethanol concentration of 74%, an extraction period of 2 h, three extractions, and a solvent-to-material ratio of 12 mL/g. The experimental yield of linarin was 90.5%, deviating less than 1.6% from the predicted value. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Registering parameters and granules of wave observations: IMAGE RPI success story
NASA Astrophysics Data System (ADS)
Galkin, I. A.; Charisi, A.; Fung, S. F.; Benson, R. F.; Reinisch, B. W.
2015-12-01
Modern metadata systems strive to help scientists locate data relevant to their research and then retrieve them quickly. Success of this mission depends on the organization and completeness of metadata. Each relevant data resource has to be registered; each content has to be described; each data file has to be accessible. Ultimately, data discoverability is about the practical ability to describe data content and location. Correspondingly, data registration has a "Parameter" level, at which content is specified by listing available observed properties (parameters), and a "Granule" level, at which download links are given to data records (granules). Until recently, both parameter- and granule-level data registrations were accomplished at NASA Virtual System Observatory easily by listing provided parameters and building Granule documents with URLs to the datafile locations, usually those at NASA CDAWeb data warehouse. With the introduction of the Virtual Wave Observatory (VWO), however, the parameter/granule concept faced a scalability challenge. The wave phenomenon content is rich with descriptors of the wave generation, propagation, interaction with propagation media, and observation processes. Additionally, the wave phenomenon content varies from record to record, reflecting changes in the constituent processes, making it necessary to generate granule documents at sub-minute resolution. We will present the first success story of registering 234,178 records of IMAGE Radio Plasma Imager (RPI) plasmagram data and Level 2 derived data products in ESPAS (near-Earth Space Data Infrastructure for e-Science), using the VWO-inspired wave ontology. The granules are arranged in overlapping display and numerical data collections. Display data include (a) auto-prospected plasmagrams of potential interest, (b) interesting plasmagrams annotated by human analysts or software, and (c) spectacular plasmagrams annotated by analysts as publication-quality examples of the RPI science. 
Numerical data products include plasmagram-derived records containing signatures of local and remote signal propagation, as well as field-aligned profiles of electron density in the plasmasphere. Registered granules of RPI observations are available in ESPAS for their content-targeted search and retrieval.
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values.
UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and in Fortran 90, which efficiently performs numerical calculations.
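The regression scheme the abstract describes, minimizing a weighted least-squares objective with a Gauss-Newton method whose sensitivities come from forward differences, can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not UCODE's implementation: the real code adds Marquardt damping, parameter scaling, and convergence diagnostics, and the exponential test model and all names below are invented for the example.

```python
import numpy as np

def gauss_newton(simulate, obs, weights, p0, tol=1e-8, max_iter=50, h=1e-6):
    """Weighted least-squares parameter estimation by Gauss-Newton,
    with sensitivities approximated by forward differences."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(max_iter):
        r = obs - simulate(p)                 # residuals at current parameters
        # Forward-difference Jacobian of simulated values w.r.t. parameters
        J = np.empty((len(obs), len(p)))
        for j in range(len(p)):
            dp = p.copy()
            dp[j] += h * max(abs(p[j]), 1.0)  # relative perturbation
            J[:, j] = (simulate(dp) - simulate(p)) / (dp[j] - p[j])
        # Normal equations of the weighted least-squares problem
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p += step
        if np.max(np.abs(step)) < tol * np.max(np.abs(p)):
            break
    return p

# Usage: recover (a, b) of y = a * exp(b * x) from noise-free "observations"
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
est = gauss_newton(lambda p: p[0] * np.exp(p[1] * x), y,
                   np.ones_like(x), [1.0, -1.0])
```

Because the data are noise-free, the iteration converges to the true parameters (2.0, -1.5); with real observations the estimate minimizes the weighted residual sum of squares instead.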
Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.
Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A
2017-04-15
Optimization of a multivariate calibration process has been undertaken for a visible-near-infrared (400-1100 nm) sensor system applied to monitoring the fermentation of cider produced in the Basque Country (Spain). The main parameters monitored were alcoholic proof, l-lactic acid content, glucose+fructose content, and acetic acid content. The multivariate calibration was carried out using a combination of different variable-selection techniques, and the most suitable pre-processing strategies were selected based on the characteristics of the spectra obtained by the sensor system. The variable-selection techniques studied in this work include the Martens uncertainty test, interval partial least-squares regression (iPLS), and a genetic algorithm (GA). This procedure arises from the need to improve the prediction ability of the calibration models for cider monitoring.
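Of the variable-selection techniques named, iPLS splits the spectrum into contiguous wavelength windows and fits a local PLS model on each, keeping the best-performing intervals. The sketch below is a simplification under stated assumptions: it uses a minimal NIPALS PLS1 and ranks intervals by training RMSE, whereas a real iPLS would use cross-validated error; the synthetic "spectra" and all names are illustrative, not from the paper.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns coefficients b and the centering
    terms so that y_hat = (X - x_mean) @ b + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    Ws, Ps, qs = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # score vector
        p = Xr.T @ t / (t @ t)          # X loading
        q = yr @ t / (t @ t)            # y loading
        Xr = Xr - np.outer(t, p)        # deflate
        yr = yr - q * t
        Ws.append(w); Ps.append(p); qs.append(q)
    W, P, q = np.array(Ws).T, np.array(Ps).T, np.array(qs)
    b = W @ np.linalg.solve(P.T @ W, q)
    return b, x_mean, y_mean

def ipls_select(X, y, n_intervals, n_comp=2):
    """iPLS sketch: fit a local PLS model per spectral interval and
    rank intervals by (training) RMSE; lower is better."""
    intervals = np.array_split(np.arange(X.shape[1]), n_intervals)
    rmse = []
    for idx in intervals:
        b, xm, ym = pls1_fit(X[:, idx], y, min(n_comp, len(idx)))
        pred = (X[:, idx] - xm) @ b + ym
        rmse.append(float(np.sqrt(np.mean((pred - y) ** 2))))
    return int(np.argmin(rmse)), rmse

# Usage on synthetic data where only columns 10-19 carry the signal
rng = np.random.default_rng(0)
X = rng.random((100, 40))               # 100 "spectra", 40 wavelengths
y = X[:, 10:20].sum(axis=1)             # property depends on columns 10-19
best, rmse = ipls_select(X, y, n_intervals=4)   # interval 1 covers 10-19
```

With four intervals of ten wavelengths each, the informative window (index 1) yields a much lower RMSE than the pure-noise windows, which is the selection signal iPLS exploits.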
Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.
Wu, Yue; Nan, Bo; Chen, Liang
2014-01-01
3D braided composite joints are important components in CFRP trusses and have a significant influence on the reliability and weight of such structures. To investigate their mechanical performance, a numerical method based on microscopic mechanics is put forward; the modeling technologies, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. Secondly, a method for determining the ultimate bearing capacity, which accounts for strength failure, is established. Finally, the effects of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints are analyzed by a global sensitivity analysis method. The results show that the ultimate bearing capacity N is most sensitive to the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braiding angle α.
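The abstract invokes a global sensitivity analysis method without specifying it; a common variance-based choice is first-order Sobol indices estimated with the Saltelli pick-freeze scheme, sketched below on a hypothetical linear surrogate for bearing capacity. The surrogate, its coefficients, and all names here are invented for illustration and are not the paper's model.

```python
import numpy as np

def sobol_first_order(f, d, n=20000, seed=None):
    """First-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y) via the
    Saltelli pick-freeze estimator. f maps an (n, d) array of inputs
    in [0, 1) to n scalar outputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]             # "freeze" column i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Hypothetical surrogate: capacity dominated by the first input
def capacity(X):
    return 5.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]

S = sobol_first_order(capacity, d=3, seed=0)
```

For an additive linear model the indices reduce to a_i^2 / Σ a_j^2, so the first input should carry roughly 96% of the output variance here; ranking inputs by S_i is exactly the kind of screening a global sensitivity study of γ, D, and α performs.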
A System for Cost and Reimbursement Control in Hospitals
Fetter, Robert B.; Thompson, John D.; Mills, Ronald E.
1976-01-01
This paper approaches the design of a regional or statewide hospital rate-setting system as the underpinning of a larger system which permits a regulatory agency to satisfy the requirements of various public laws now on the books or in process. It aims to generate valid interinstitutional monitoring on the three parameters of cost, utilization, and quality review. Such an approach requires the extension of the usual departmental cost and budgeting system to include consideration of the mix of patients treated and the utilization of various resources, including patient days, in the treatment of these patients. A sampling framework for the application of process-based quality studies and the generation of selected performance measurements is also included. PMID:941461
Remote Neural Pendants In A Welding-Control System
NASA Technical Reports Server (NTRS)
Venable, Richard A.; Bucher, Joseph H.
1995-01-01
Neural-network integrated circuits enhance functionalities of both remote terminals (called "pendants") and communication links, without necessitating installation of additional wires in links. Makes it possible to incorporate many features into pendant, including real-time display of critical welding parameters and other process information; capability for communication between technician at pendant and host computer or technician elsewhere in system; and switches and potentiometers through which technician at pendant exerts remote control over such critical aspects of welding process as current, voltage, rate of travel, flow of gas, starting, and stopping. Other potential manufacturing applications include control of spray coating and of curing of composite materials. Potential nonmanufacturing uses include remote control of heating, air conditioning, and lighting in electrically noisy and otherwise hostile environments.