Sample records for measurable process parameters

  1. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.

  2. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
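
    The residual-tuning idea can be illustrated with a generic innovation-based adaptation of the filter covariances. The sketch below is a minimal stand-in, not the report's sequential equations: it runs a scalar Kalman filter with deliberately wrong Q and R and corrects them from the sample statistics of a sliding window of measurement residuals (a Mehra-style update). The toy random-walk model, window length, and update interval are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Truth: scalar random walk with unknown process/measurement noise.
    true_q, true_r, n = 0.05, 0.5, 2000
    x = np.cumsum(np.sqrt(true_q) * rng.standard_normal(n))
    z = x + np.sqrt(true_r) * rng.standard_normal(n)

    # Filter starts with deliberately wrong tuning.
    q_hat, r_hat = 1.0, 5.0
    x_hat, p = 0.0, 1.0
    window = []          # recent innovations for the residual statistics

    for k in range(n):
        # Predict (F = H = 1 for this toy model).
        x_pred, p_pred = x_hat, p + q_hat
        # Update.
        innov = z[k] - x_pred
        s = p_pred + r_hat
        kg = p_pred / s
        x_hat = x_pred + kg * innov
        p = (1.0 - kg) * p_pred

        # Residual tuning: a white innovation sequence should have a sample
        # covariance close to S = P_pred + R.  Use a sliding window of
        # innovations to correct Q and R toward that consistency condition.
        window.append(innov)
        if len(window) > 200:
            window.pop(0)
        if len(window) == 200 and k % 50 == 0:
            c_v = np.mean(np.square(window))      # empirical innovation covariance
            r_hat = max(1e-6, c_v - p_pred)       # Mehra-style R estimate
            q_hat = max(1e-6, kg**2 * c_v)        # Mehra-style Q estimate

    print(f"estimated q ~ {q_hat:.3f} (true {true_q}), r ~ {r_hat:.3f} (true {true_r})")
    ```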

  3. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System]

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  4. New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise

    NASA Astrophysics Data System (ADS)

    Pal'a, Jozef; Ušák, Elemír

    2016-03-01

    A new method of magnetic Barkhausen noise (MBN) measurement and optimization of the measured data processing with respect to non-destructive evaluation of ferromagnetic materials was tested. Using this method, we investigated whether the sensitivity and stability of the measurement results can be enhanced by replacing the traditional MBN parameter (the root mean square) with a new parameter. In the tested method, a comprehensive set of MBN data from minor hysteresis loops is measured. Afterward, the MBN data are collected into suitably designed matrices, and the MBN parameters with maximum sensitivity to the evaluated variable are sought. The method was verified on plastically deformed steel samples. It was shown that the proposed measuring method and data processing improve the sensitivity to the evaluated variable compared with measuring the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes of the applied field amplitude while being noticeably more sensitive to the evaluated variable.

  5. Using Noise and Fluctuations for In Situ Measurements of Nitrogen Diffusion Depth.

    PubMed

    Samoila, Cornel; Ursutiu, Doru; Schleer, Walter-Harald; Jinga, Vlad; Nascov, Victor

    2016-10-05

    In manufacturing processes involving diffusion (of C, N, S, etc.), the evolution of the layer depth is of the utmost importance: the success of the entire process depends on this parameter. Currently, nitriding is typically either calibrated using a "post process" method or controlled via indirect measurements (H2, O2, H2O + CO2). In the absence of "in situ" monitoring, any variation in the process parameters (gas concentration, temperature, steel composition, distance between sensors and furnace chamber) can cause expensive process inefficiency or failure. Indirect measurements can prevent process failure, but uncertainties and complications may arise in the relationship between the measured parameters and the actual diffusion process. In this paper, a method based on noise and fluctuation measurements is proposed that offers direct control of the layer depth evolution because the parameters of interest are measured in direct contact with the nitrided steel (represented by the active electrode). The paper addresses two related sets of experiments. The first set of experiments consisted of laboratory tests on nitrided samples using Barkhausen noise and yielded a linear relationship between the frequency exponent in the Hooge equation and the nitriding time. For the second set, a specific sensor based on conductivity noise (at the nitriding temperature) was built for shop-floor experiments. Although two different types of noise were measured in these two sets of experiments, the use of the frequency exponent to monitor the process evolution remained valid.
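
    The key quantity in both sets of experiments is the frequency exponent alpha of a 1/f^alpha (Hooge-type) noise spectrum. The sketch below shows one generic way such an exponent can be extracted from a digitized noise record; the synthetic signal, sampling rate, and fitting band are placeholders, not the authors' data pipeline.

    ```python
    import numpy as np
    from scipy.signal import welch

    def frequency_exponent(x, fs, fmin=1.0, fmax=200.0):
        """Fit S(f) ~ 1/f**alpha to the PSD of a noise record and return alpha."""
        f, pxx = welch(x, fs=fs, nperseg=4096)
        band = (f >= fmin) & (f <= fmax)
        slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
        return -slope   # alpha in the 1/f**alpha model

    # Synthetic 1/f-like noise for demonstration: shape white noise in the
    # frequency domain with the target exponent, then transform back.
    rng = np.random.default_rng(1)
    fs, n, alpha_true = 10_000.0, 2**16, 1.3
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    spec[1:] /= f[1:] ** (alpha_true / 2.0)   # PSD scales with |X|^2
    x = np.fft.irfft(spec, n)

    print(f"fitted alpha ~ {frequency_exponent(x, fs):.2f} (target {alpha_true})")
    ```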

  6. Using Noise and Fluctuations for In Situ Measurements of Nitrogen Diffusion Depth

    PubMed Central

    Samoila, Cornel; Ursutiu, Doru; Schleer, Walter-Harald; Jinga, Vlad; Nascov, Victor

    2016-01-01

    In manufacturing processes involving diffusion (of C, N, S, etc.), the evolution of the layer depth is of the utmost importance: the success of the entire process depends on this parameter. Currently, nitriding is typically either calibrated using a “post process” method or controlled via indirect measurements (H2, O2, H2O + CO2). In the absence of “in situ” monitoring, any variation in the process parameters (gas concentration, temperature, steel composition, distance between sensors and furnace chamber) can cause expensive process inefficiency or failure. Indirect measurements can prevent process failure, but uncertainties and complications may arise in the relationship between the measured parameters and the actual diffusion process. In this paper, a method based on noise and fluctuation measurements is proposed that offers direct control of the layer depth evolution because the parameters of interest are measured in direct contact with the nitrided steel (represented by the active electrode). The paper addresses two related sets of experiments. The first set of experiments consisted of laboratory tests on nitrided samples using Barkhausen noise and yielded a linear relationship between the frequency exponent in the Hooge equation and the nitriding time. For the second set, a specific sensor based on conductivity noise (at the nitriding temperature) was built for shop-floor experiments. Although two different types of noise were measured in these two sets of experiments, the use of the frequency exponent to monitor the process evolution remained valid. PMID:28773941

  7. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    NASA Astrophysics Data System (ADS)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extremely high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with good dimensional accuracy in less time. This research work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in manufacturing industries. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach. Three levels were selected for each process parameter, and a clockwise cut direction was followed in all experiments. The results obtained through measurement were further analyzed. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter; the ANOVA reveals the effect of each input process parameter on the linear dimension along the X axis. The results of the work give the optimal settings of the process parameter values for the linear dimension on the X axis and clearly show the specific range of input process parameters that achieves improved machinability.
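
    For an orthogonal-array experiment of this kind, the main effect of each factor can be screened with an analysis of means over the factor levels. The run table below is purely hypothetical (coded levels and invented dimensions, not the study's measurements); it only illustrates the bookkeeping for level means, their spread, and a one-way sum-of-squares contribution.

    ```python
    import numpy as np

    # Hypothetical response table: each row is (voltage level, standoff level,
    # speed level, measured linear dimension in mm).  Values are illustrative only.
    runs = np.array([
        [0, 0, 0, 10.12], [0, 1, 1, 10.08], [0, 2, 2, 10.05], [0, 0, 1, 10.10],
        [1, 1, 2, 10.03], [1, 2, 0, 10.07], [1, 0, 2, 10.04], [1, 1, 0, 10.09],
        [2, 2, 1, 10.02], [2, 0, 0, 10.06], [2, 1, 1, 10.01], [2, 2, 2, 10.00],
        [0, 2, 1, 10.06], [1, 0, 1, 10.05], [2, 1, 0, 10.04], [2, 0, 2, 10.02],
    ])
    factors = ["arc voltage", "standoff distance", "cutting speed"]
    y = runs[:, 3]
    grand_mean = y.mean()

    for j, name in enumerate(factors):
        level_means = [y[runs[:, j] == lv].mean() for lv in range(3)]
        # ANOM view: the factor with the largest spread of level means
        # contributes most to the variation of the measured dimension.
        spread = max(level_means) - min(level_means)
        # One-way ANOVA-style sum of squares for a rough contribution estimate.
        ss = sum((runs[:, j] == lv).sum() * (m - grand_mean) ** 2
                 for lv, m in enumerate(level_means))
        print(f"{name}: level means {np.round(level_means, 3)}, "
              f"spread {spread:.3f} mm, SS {ss:.4f}")
    ```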

  8. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system. 1 fig.

  9. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e. measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process, and, consequently, correspond to the physical condition of the process or system.
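
    A toy version of the patented workflow is sketched below, assuming scikit-learn is available: a simple "mathematical model" maps a physical-condition parameter to a resonance spectrum, spectral features are extracted, a small neural network is trained on the model-generated pairs, and the trained network is then applied to a noisy "measured" spectrum. The resonance model, feature set, and network size are all invented for illustration and are not from the patent.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    f = np.linspace(0.0, 50.0, 501)          # frequency axis, Hz

    def model_spectrum(condition):
        """Toy 'mathematical model': a resonance whose frequency and damping
        shift with the physical-condition parameter (0 = healthy, 1 = degraded)."""
        f0 = 20.0 - 5.0 * condition
        zeta = 0.02 + 0.05 * condition
        return 1.0 / np.sqrt((1 - (f / f0) ** 2) ** 2 + (2 * zeta * f / f0) ** 2)

    def spectral_features(spectrum):
        """Extract simple resonance-peak features (location, height, width proxy)."""
        i = np.argmax(spectrum)
        half = spectrum[i] / np.sqrt(2.0)
        width = np.count_nonzero(spectrum > half) * (f[1] - f[0])
        return [f[i], spectrum[i], width]

    # Build the training set from the model, not from measurements.
    cond_train = rng.uniform(0.0, 1.0, 300)
    X_train = np.array([spectral_features(model_spectrum(c)) for c in cond_train])
    net = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                       max_iter=5000, random_state=0)
    net.fit(X_train, cond_train)

    # "Measured" spectrum: model output plus noise, condition unknown to the net.
    true_cond = 0.6
    measured = model_spectrum(true_cond) + 0.02 * rng.standard_normal(f.size)
    est = net.predict([spectral_features(measured)])[0]
    print(f"estimated condition ~ {est:.2f} (true {true_cond})")
    ```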

  10. Effect of Electron Beam Freeform Fabrication (EBF3) Processing Parameters on Composition of Ti-6-4

    NASA Technical Reports Server (NTRS)

    Lach, Cynthia L.; Taminger, Karen; Schuszler, A. Bud, II; Sankaran, Sankara; Ehlers, Helen; Nasserrafi, Rahbar; Woods, Bryan

    2007-01-01

    The Electron Beam Freeform Fabrication (EBF3) process developed at NASA Langley Research Center was evaluated using a design of experiments approach to determine the effect of processing parameters on the composition and geometry of Ti-6-4 deposits. The effects of three processing parameters: beam power, translation speed, and wire feed rate, were investigated by varying one while keeping the remaining parameters constant. A three-factorial, three-level, fully balanced mutually orthogonal array (L27) design of experiments approach was used to examine the effects of low, medium, and high settings for the processing parameters on the chemistry, geometry, and quality of the resulting deposits. Single bead high deposits were fabricated and evaluated for 27 experimental conditions. Loss of aluminum in Ti-6-4 was observed in EBF3 processing due to selective vaporization of the aluminum from the sustained molten pool in the vacuum environment; therefore, the chemistries of the deposits were measured and compared with the composition of the initial wire and base plate to determine if the loss of aluminum could be minimized through careful selection of processing parameters. The influence of processing parameters and coupling between these parameters on bulk composition, measured by Direct Current Plasma (DCP), local microchemistries determined by Wavelength Dispersive Spectrometry (WDS), and deposit geometry will also be discussed.

  11. Rapid permeation measurement system for the production control of monolayer and multilayer films

    NASA Astrophysics Data System (ADS)

    Botos, J.; Müller, K.; Heidemeyer, P.; Kretschmer, K.; Bastian, M.; Hochrein, T.

    2014-05-01

    Plastics have been used for packaging films for a long time. Until now the development of new formulations for film applications, including process optimization, has been a time-consuming and cost-intensive process for gases like oxygen (O2) or carbon dioxide (CO2). By using helium (He) the permeation measurement can be accelerated from hours or days to a few minutes. Therefore a manometric measuring system for tests according to ISO 15105-1 is coupled with a mass spectrometer to determine the helium flow rate and to calculate the helium permeation rate. Due to the accelerated determination, the permeation quality of monolayer and multilayer films can be measured at-line. Such a system can be used, for example, to predict the helium permeation rate of filled polymer films. Defined quality limits for the permeation rate can be specified, as can the prompt correction of process parameters if the results do not meet the specification. This method for process control was tested on a pilot line with a corotating twin-screw extruder for monolayer films. Selected process parameters were varied iteratively without changing the material formulation to obtain the best process parameter set and thus the lowest permeation rate. Beyond that, the influence of different parameters on the helium permeation rate was examined on monolayer films. The results were evaluated conventionally as well as with artificial neural networks in order to determine the non-linear correlation between all process parameters.
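
    For the manometric part of such a system, the gas transmission rate follows from the slope of the downstream pressure rise, broadly in the spirit of ISO 15105-1. The sketch below uses invented numbers for the chamber volume, film area, upstream pressure and pressure readings; it only illustrates the calculation, not the coupled mass-spectrometer evaluation described above.

    ```python
    import numpy as np

    # Manometric (pressure-increase) permeation estimate: the gas transmission
    # rate follows from the slope of the downstream pressure rise.  All numbers
    # below are illustrative placeholders.
    R = 8.314          # J/(mol*K)
    T = 296.15         # K, test temperature
    V_c = 15e-6        # m^3, downstream (low-pressure) chamber volume
    A = 5e-4           # m^2, exposed film area
    p_up = 1.0e5       # Pa, upstream helium partial pressure

    # Hypothetical downstream pressure readings over a few minutes.
    t = np.arange(0.0, 300.0, 10.0)                  # s
    p_down = 2.0 + 0.004 * t                         # Pa, roughly linear rise

    dp_dt, _ = np.polyfit(t, p_down, 1)              # Pa/s from a linear fit
    gtr = V_c * dp_dt / (R * T * A * p_up)           # mol/(m^2 * s * Pa)
    print(f"He transmission rate ~ {gtr:.3e} mol m^-2 s^-1 Pa^-1")
    ```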

  12. Optimization of hybrid laser - TIG welding of 316LN steel using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Ragavendran, M.; Chandrasekhar, N.; Ravikumar, R.; Saxena, Rajesh; Vasudevan, M.; Bhaduri, A. K.

    2017-07-01

    In the present study, the hybrid laser - TIG welding parameters for welding of 316LN austenitic stainless steel have been investigated by combining a pulsed laser beam with a TIG welding heat source at the weld pool. Laser power, pulse frequency, pulse duration and TIG current were taken as the welding process parameters, whereas weld bead width, weld cross-sectional area and depth of penetration (DOP) were considered as the process responses. A central composite design was used to build the design matrix, and welding experiments were conducted based on that matrix. Weld bead measurements were then carried out to generate the dataset. Multiple regression models correlating the process parameters with the responses have been developed, and the accuracy of the models was found to be good. The desirability-approach optimization technique was then employed to determine the optimum process parameters that yield the desired weld bead profile. Validation experiments were carried out at the determined optimum process parameters, and good agreement was found between the predicted and measured values.
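
    The regression-plus-desirability workflow can be sketched in a few lines: fit a second-order (response surface) model to central-composite-design data by least squares, then search the factor space for the setting that maximizes a desirability function. The two coded factors, the synthetic response, the target value and the tolerance below are placeholders; the study itself used four factors and three responses.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative coded central-composite design for two factors
    # (laser power, TIG current); everything here is synthetic.
    a = np.sqrt(2.0)
    design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                       [-a, 0], [a, 0], [0, -a], [0, a],
                       [0, 0], [0, 0], [0, 0], [0, 0], [0, 0]])

    def true_response(x):          # stand-in for the measured depth of penetration, mm
        p, i = x[:, 0], x[:, 1]
        return 3.0 + 0.6 * p + 0.4 * i - 0.3 * p**2 - 0.2 * i**2 + 0.15 * p * i

    dop = true_response(design) + 0.05 * rng.standard_normal(len(design))

    def quad_terms(x):
        p, i = x[:, 0], x[:, 1]
        return np.column_stack([np.ones_like(p), p, i, p**2, i**2, p * i])

    # Fit the second-order regression model by ordinary least squares.
    beta, *_ = np.linalg.lstsq(quad_terms(design), dop, rcond=None)

    # Desirability for a target DOP of 3.2 mm (larger deviation -> lower d).
    def desirability(pred, target=3.2, tol=0.5):
        return np.clip(1.0 - np.abs(pred - target) / tol, 0.0, 1.0)

    # Grid search over the coded factor space for the most desirable setting.
    grid = np.array([[p, i] for p in np.linspace(-a, a, 81)
                            for i in np.linspace(-a, a, 81)])
    pred = quad_terms(grid) @ beta
    best = grid[np.argmax(desirability(pred))]
    best_dop = (quad_terms(best[None, :]) @ beta)[0]
    print(f"optimum (coded): power {best[0]:+.2f}, current {best[1]:+.2f}, "
          f"predicted DOP {best_dop:.2f} mm")
    ```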

  13. Experimental equipment for measuring of rotary air motors parameters

    NASA Astrophysics Data System (ADS)

    Dvořák, Lukáš; Fojtášek, Kamil; Řeháček, Vojtěch

    The article describes the construction of an experimental device for measuring the parameters of small rotary air motors. The measurement methodology and the processing of the measured data are also described, and characteristics of the chosen air motor are presented at the end of the article.

  14. Research on human physiological parameters intelligent clothing based on distributed Fiber Bragg Grating

    NASA Astrophysics Data System (ADS)

    Miao, Changyun; Shi, Boya; Li, Hongqiang

    2008-12-01

    Intelligent clothing for measuring human physiological parameters is researched using FBG sensor technology. In this paper, the principles and methods of measuring human physiological parameters, including body temperature and heart rate, with distributed FBGs in intelligent clothing are studied; the mathematical models of the physiological parameter measurements are built; the processing method for the body temperature and heart rate detection signals is presented; the physiological parameter detection module is designed, interference signals are filtered out, and the measurement accuracy is improved; and the integration of the intelligent clothing is described. The intelligent clothing can implement real-time measurement, processing, storage and output of body temperature and heart rate. It offers accurate measurement, portability, low cost and real-time monitoring, among other advantages. The intelligent clothing enables non-contact monitoring between doctors and patients, allowing diseases such as cancer and infectious diseases to be found in time so that patients receive timely treatment. It is of great significance and value for safeguarding the health of the elderly and of children with language dysfunction.
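
    For the body-temperature channel, the measured quantity is a Bragg-wavelength shift that maps approximately linearly to temperature over the physiological range. A minimal conversion sketch follows; the roughly 10 pm/°C sensitivity and the reference wavelength are typical textbook values for a bare grating near 1550 nm, not calibration constants from the paper.

    ```python
    # Converting a measured Bragg-wavelength shift to a temperature reading.
    # Assumed constants, for illustration only.
    LAMBDA_B0 = 1550.000e-9      # m, Bragg wavelength at the reference temperature
    K_T = 10.0e-12               # m/degC, assumed temperature sensitivity (~10 pm/degC)

    def temperature_from_shift(lambda_measured_m, t_ref_c=36.5):
        """Map the measured Bragg wavelength back to temperature in degC."""
        delta = lambda_measured_m - LAMBDA_B0
        return t_ref_c + delta / K_T

    print(f"{temperature_from_shift(1550.021e-9):.1f} degC")   # ~38.6 degC
    ```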

  15. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.

  16. Terrestrial photovoltaic cell process testing

    NASA Technical Reports Server (NTRS)

    Burger, D. R.

    1985-01-01

    The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.

  17. Terrestrial photovoltaic cell process testing

    NASA Astrophysics Data System (ADS)

    Burger, D. R.

    The paper examines critical test parameters, criteria for selecting appropriate tests, and the use of statistical controls and test patterns to enhance PV-cell process test results. The coverage of critical test parameters is evaluated by examining available test methods and then screening these methods by considering the ability to measure those critical parameters which are most affected by the generic process, the cost of the test equipment and test performance, and the feasibility for process testing.

  18. Calibrating the sqHIMMELI v1.0 wetland methane emission model with hierarchical modeling and adaptive MCMC

    NASA Astrophysics Data System (ADS)

    Susiluoto, Jouni; Raivonen, Maarit; Backman, Leif; Laine, Marko; Makela, Jarmo; Peltola, Olli; Vesala, Timo; Aalto, Tuula

    2018-03-01

    Estimating methane (CH4) emissions from natural wetlands is complex, and the estimates contain large uncertainties. The models used for the task are typically heavily parameterized and the parameter values are not well known. In this study, we perform a Bayesian model calibration for a new wetland CH4 emission model to improve the quality of the predictions and to understand the limitations of such models. The detailed process model that we analyze contains descriptions for CH4 production from anaerobic respiration, CH4 oxidation, and gas transportation by diffusion, ebullition, and the aerenchyma cells of vascular plants. The processes are controlled by several tunable parameters. We use a hierarchical statistical model to describe the parameters and obtain the posterior distributions of the parameters and uncertainties in the processes with adaptive Markov chain Monte Carlo (MCMC), importance resampling, and time series analysis techniques. For the estimation, the analysis utilizes measurement data from the Siikaneva flux measurement site in southern Finland. The uncertainties related to the parameters and the modeled processes are described quantitatively. At the process level, the flux measurement data are able to constrain the CH4 production processes, methane oxidation, and the different gas transport processes. The posterior covariance structures explain how the parameters and the processes are related. Additionally, the flux and flux component uncertainties are analyzed at both the annual and daily levels. The parameter posterior densities obtained provide information regarding the importance of the different processes, which is also useful for the development of wetland methane emission models other than the square root HelsinkI Model of MEthane buiLd-up and emIssion for peatlands (sqHIMMELI). The hierarchical modeling allows us to assess the effects of some of the parameters on an annual basis. The results of the calibration and the cross validation suggest that the early spring net primary production could be used to predict parameters affecting the annual methane production. Even though the calibration is specific to the Siikaneva site, the hierarchical modeling approach is well suited for larger-scale studies and the results of the estimation pave the way for a regional or global-scale Bayesian calibration of wetland emission models.
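
    The calibration machinery can be illustrated with a toy flux model and a basic adaptive Metropolis sampler whose proposal covariance is periodically re-estimated from the chain (Haario-style scaling). Everything below, including the two-parameter "CH4 model", the flat priors, the noise level and the adaptation schedule, is an assumption for illustration; it is neither sqHIMMELI nor the paper's hierarchical setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy stand-in for a wetland CH4 flux model: temperature-driven production
    # minus a lumped oxidation sink.
    t_days = np.arange(120)
    soil_t = 10.0 + 8.0 * np.sin(2 * np.pi * t_days / 120.0)   # deg C

    def flux_model(theta):
        prod_rate, ox_rate = theta
        return prod_rate * np.exp(0.1 * soil_t) - ox_rate

    theta_true = np.array([0.5, 0.3])
    obs = flux_model(theta_true) + 0.2 * rng.standard_normal(t_days.size)
    sigma = 0.2

    def log_post(theta):
        if np.any(theta <= 0.0):
            return -np.inf                      # flat prior on positive parameters
        resid = obs - flux_model(theta)
        return -0.5 * np.sum((resid / sigma) ** 2)

    # Adaptive Metropolis: the proposal covariance is periodically re-estimated
    # from the chain history (2.38^2/d scaling plus a small jitter).
    n_iter, adapt_start = 20_000, 2_000
    chain = np.empty((n_iter, 2))
    theta = np.array([1.0, 0.5])
    lp = log_post(theta)
    cov = np.diag([0.01, 0.01])

    for i in range(n_iter):
        if i >= adapt_start and i % 500 == 0:
            cov = np.cov(chain[:i].T) * 2.38**2 / 2.0 + 1e-8 * np.eye(2)
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta

    posterior = chain[n_iter // 2:]
    print("posterior means:", posterior.mean(axis=0), " truth:", theta_true)
    print("posterior std:  ", posterior.std(axis=0))
    ```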

  19. An automatic alignment tool to improve repeatability of left ventricular function and dyssynchrony parameters in serial gated myocardial perfusion SPECT studies

    PubMed Central

    Zhou, Yanli; Faber, Tracy L.; Patel, Zenic; Folks, Russell D.; Cheung, Alice A.; Garcia, Ernest V.; Soman, Prem; Li, Dianfu; Cao, Kejiang; Chen, Ji

    2013-01-01

    Objective Left ventricular (LV) function and dyssynchrony parameters measured from serial gated single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI) using blinded processing had a poorer repeatability than when manual side-by-side processing was used. The objective of this study was to validate whether an automatic alignment tool can reduce the variability of LV function and dyssynchrony parameters in serial gated SPECT MPI. Methods Thirty patients who had undergone serial gated SPECT MPI were prospectively enrolled in this study. Thirty minutes after the first acquisition, each patient was repositioned and a gated SPECT MPI image was reacquired. The two data sets were first processed blinded from each other by the same technologist in different weeks. These processed data were then realigned by the automatic tool, and manual side-by-side processing was carried out. All processing methods used standard iterative reconstruction and Butterworth filtering. The Emory Cardiac Toolbox was used to measure the LV function and dyssynchrony parameters. Results The automatic tool failed in one patient, who had a large, severe scar in the inferobasal wall. In the remaining 29 patients, the repeatability of the LV function and dyssynchrony parameters after automatic alignment was significantly improved from blinded processing and was comparable to manual side-by-side processing. Conclusion The automatic alignment tool can be an alternative method to manual side-by-side processing to improve the repeatability of LV function and dyssynchrony measurements by serial gated SPECT MPI. PMID:23211996

  20. In-depth analysis and characterization of a dual damascene process with respect to different CD

    NASA Astrophysics Data System (ADS)

    Krause, Gerd; Hofmann, Detlef; Habets, Boris; Buhl, Stefan; Gutsch, Manuela; Lopez-Gomez, Alberto; Kim, Wan-Soo; Thrun, Xaver

    2018-03-01

    In a 200 mm high-volume environment, we studied data from a dual damascene process. Dual damascene is a combination of lithography, etch and CMP that is used to create copper lines and contacts in one single step. During these process steps, different metal CDs are measured by different measurement methods. In this study, we analyze the key numbers of the different measurements after the different process steps and develop simple models to predict the electrical behavior. In addition, radial profiles of both inline measurement parameters and electrical parameters have been analyzed. A matching method was developed based on inline and electrical data. Finally, correlation analysis for radial signatures is presented that can be used to predict excursions in electrical signatures.

  1. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
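
    The core of the Parameter-Estimation Process, a weighted least-squares objective minimized with a modified Gauss-Newton iteration, can be sketched compactly. The "model" below is a toy analytic head profile with two parameters; MODFLOW-2000 computes exact sensitivities with the sensitivity-equation method, whereas this sketch uses finite differences, and the step-damping rule is a crude simplification of the documented procedure.

    ```python
    import numpy as np

    x_obs = np.linspace(100.0, 900.0, 9)               # observation locations, m

    def simulated_heads(p):
        amp, decay = p                                 # toy parameters, not MODFLOW inputs
        return 50.0 - amp * np.exp(-x_obs / decay)

    true_p = np.array([5.0, 300.0])
    rng = np.random.default_rng(5)
    obs = simulated_heads(true_p) + 0.05 * rng.standard_normal(x_obs.size)
    W = np.diag(np.full(x_obs.size, 1.0 / 0.05**2))    # weights = 1 / observation variance

    p = np.array([2.0, 150.0])                         # starting parameter values
    for _ in range(50):
        resid = obs - simulated_heads(p)
        J = np.empty((x_obs.size, p.size))             # sensitivity (Jacobian) matrix
        for j in range(p.size):
            dp = np.zeros(p.size)
            dp[j] = 1e-6 * abs(p[j])
            J[:, j] = (simulated_heads(p + dp) - simulated_heads(p)) / dp[j]
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ resid)   # Gauss-Newton step
        # Damp overly aggressive steps (crude stand-in for Marquardt damping).
        frac = np.max(np.abs(step) / np.abs(p))
        if frac > 0.5:
            step *= 0.5 / frac
        p = p + step
        if frac < 1e-10:
            break

    print("estimated parameters:", p)
    print("weighted least-squares objective:", float(resid @ W @ resid))
    ```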

  2. Hydraulic parameters in eroding rills and their influence on detachment processes

    NASA Astrophysics Data System (ADS)

    Wirtz, Stefan; Seeger, Manuel; Zell, Andreas; Wagner, Christian; Wengel, René; Ries, Johannes B.

    2010-05-01

    In many experiments, in the laboratory as well as in the field, correlations between the detachment rate and different hydraulic parameters are calculated. The parameters used are water depth, runoff, shear stress, unit length shear force, stream power, and the Reynolds and Froude numbers. The investigations show even contradictory results. In most soil erosion models, such as the WEPP model, the shear stress is used to predict soil detachment rates, but in none of the WEPP datasets did the shear stress show the best correlation with the detachment rate. In this poster we present the results of several rill experiments in Andalusia from 2008 and 2009. With the method used, it is possible to measure the factors needed to calculate the parameters mentioned above. Water depth is measured by an ultrasonic sensor, and the runoff values are calculated by combining flow velocity and flow diameter. The parameters wetted perimeter, flow diameter and hydraulic radius can be calculated from the measured rill cross sections and the measured water levels. In the sample density values, needed for the calculation of shear stress, unit length shear force and stream power, the sediment concentration and the grain density are considered. The viscosity of the samples was measured with a rheometer. These measurements show that there is a very high linear correlation (R² = 0.92) between sediment concentration and dynamic viscosity. The viscosity seems to be an important factor, but it is only used in the Reynolds number equation; in the other equations it is neglected. Yet the viscosity increases with increasing sediment concentration, and hence its influence also increases; the viscosity value of 1, which is negligible in the multiplications, only holds for clear water. The correlations between shear stress, unit length shear force and stream power on the x-axis and the detachment rate on the ordinate show that there is not one fixed parameter that always displays the best correlation with the detachment rate. The best-correlating parameter does not change from one experiment to another; it changes from one measuring point to another. Different processes in rill erosion are responsible for the changing correlations. In some cases none of the parameters shows an acceptable correlation with the soil detachment, because these factors describe fluvial processes. Our experiments show that it is not the fluvial processes that cause the main sediment production in the rills, but bank failure or knickpoint and headcut retreat, and these processes are more gravitational than fluvial. Another sediment-producing process is the abrupt spill-over of plunge pools, a process that is neither really fluvial nor really gravitational. In some experiments, the highest sediment concentrations were measured at the slowly flowing waterfront that only transports the loose material. None of these processes is considered in soil erosion models. Hence, hydraulic parameters alone are not sufficient to predict detachment rates: they cover the fluvial incision into the rill's bottom, but the main sediment sources are not satisfactorily considered in their equations.

  3. Rocket measurements within a polar cap arc - Plasma, particle, and electric circuit parameters

    NASA Technical Reports Server (NTRS)

    Weber, E. J.; Ballenthin, J. O.; Basu, S.; Carlson, H. C.; Hardy, D. A.; Maynard, N. C.; Kelley, M. C.; Fleischman, J. R.; Pfaff, R. F.

    1989-01-01

    Results are presented from the Polar Ionospheric Irregularities Experiment (PIIE), conducted from Sondrestrom, Greenland, on March 15, 1985, designed for an investigation of processes which lead to the generation of small-scale (less than 1 km) ionospheric irregularities within polar-cap F-layer auroras. An instrumented rocket was launched into a polar cap F layer aurora to measure energetic electron flux, plasma, and electric circuit parameters of a sun-aligned arc, coordinated with simultaneous measurements from the Sondrestrom incoherent scatter radar and the AFGL Airborne Ionospheric Observatory. Results indicated the existence of two different generation mechanisms on the dawnside and duskside of the arc. On the duskside, parameters are suggestive of an interchange process, while on the dawnside, fluctuation parameters are consistent with a velocity shear instability.

  4. Modeling spray/puddle dissolution processes for deep-ultraviolet acid-hardened resists

    NASA Astrophysics Data System (ADS)

    Hutchinson, John M.; Das, Siddhartha; Qian, Qi-De; Gaw, Henry T.

    1993-10-01

    A study of the dissolution behavior of acid-hardened resists (AHR) was undertaken for spray and spray/puddle development processes. The Site Services DSM-100 end-point detection system is used to measure both spray and puddle dissolution data for a commercially available deep-ultraviolet AHR resist, Shipley SNR-248. The DSM allows in situ measurement of dissolution rate on the wafer chuck and hence allows parameter extraction for modeling spray and puddle processes. The dissolution data for spray and puddle processes were collected across a range of exposure dose and postexposure bake temperature. The development recipe was varied to decouple the contribution of the spray and puddle modes to the overall dissolution characteristics. The mechanisms involved in spray versus puddle dissolution and the impact of spray versus puddle dissolution on process performance metrics have been investigated. We used the effective-dose-modeling approach and the measurement capability of the DSM-100 and developed a lumped parameter model for acid-hardened resists that incorporates the effects of exposure, postexposure bake temperature and time, and development condition. The PARMEX photoresist-modeling program is used to determine parameters for the spray and for the puddle process. The lumped parameter AHR model developed showed good agreement with experimental data.

  5. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    PubMed

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Also, parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters considering different assumptions. The results have shown that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable and that better results were obtained with the removal of these parameters from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be estimated. Further, PIA improved the model results, showing itself to be an important procedure to apply. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
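
    A minimal local identifiability check, in the same spirit but not the paper's specific PIA procedure, is to build a scaled sensitivity matrix of the model outputs with respect to the parameters and inspect its singular values: near-zero singular values flag parameter combinations that the available measurements cannot constrain. The toy growth model and nominal values below are assumptions for illustration.

    ```python
    import numpy as np

    t = np.linspace(0.0, 48.0, 25)          # measurement times, h

    def model(theta):
        """Toy growth model: X(t) = X0 * exp(mu * t) * exp(-kd * t)."""
        x0, mu, kd = theta
        return x0 * np.exp((mu - kd) * t)

    theta0 = np.array([0.1, 0.20, 0.05])    # nominal parameter values
    y0 = model(theta0)

    # Relative (scaled) sensitivities so parameters with different units compare.
    S = np.empty((t.size, theta0.size))
    for j in range(theta0.size):
        dp = np.zeros(theta0.size)
        dp[j] = 1e-6 * theta0[j]
        S[:, j] = (model(theta0 + dp) - y0) / dp[j] * theta0[j] / np.maximum(y0, 1e-12)

    u, s, vt = np.linalg.svd(S, full_matrices=False)
    print("singular values:", np.round(s, 4))
    names = ["X0", "mu", "kd"]
    for k, sv in enumerate(s):
        if sv < 1e-3 * s[0]:
            combo = ", ".join(f"{c:+.2f}*{n}" for c, n in zip(vt[k], names))
            print(f"weak direction {k}: {combo}  -> not identifiable from these data")
    ```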

  6. Application of dielectric constant measurement in microwave sludge disintegration and wastewater purification processes.

    PubMed

    Kovács, Petra Veszelovszki; Lemmer, Balázs; Keszthelyi-Szabó, Gábor; Hodúr, Cecilia; Beszédes, Sándor

    2018-05-01

    It has been verified numerous times that microwave radiation can be advantageous as a pre-treatment for enhanced disintegration of sludge. Very few data related to the dielectric parameters of wastewater of different origins are available; therefore, the objective of our work was to measure the dielectric constant of municipal and meat industrial wastewater during a continuous-flow microwave process. Determination of the dielectric constant and its change during wastewater and sludge processing makes it possible to decide on the applicability of dielectric measurements for detecting the organic matter removal efficiency of the wastewater purification process or the disintegration degree of sludge. From measurements of the dielectric constant as a function of temperature, total solids (TS) content and microwave-specific process parameters, regression models were developed. Our results verified that in the case of municipal wastewater sludge, the TS content has a significant effect on the dielectric constant and disintegration degree (DD), as does the temperature. The dielectric constant has a decreasing tendency with increasing temperature for wastewater sludge of low TS content, but an adverse effect was found for samples with high TS and organic matter contents. The DD of meat processing wastewater sludge was influenced significantly by the volumetric flow rate and power level, as process parameters of the continuous-flow microwave pre-treatments. It can be concluded that the disintegration process of food industry sludge can be detected by dielectric constant measurements. For technical purposes, the applicability of dielectric measurements was also tested in the purification process of municipal wastewater. Determination of dielectric behaviour proved a sensitive method to detect the purification degree of municipal wastewater.

  7. The specificity of the effects of stimulant medication on classroom learning-related measures of cognitive processing for attention deficit disorder children.

    PubMed

    Balthazor, M J; Wagner, R K; Pelham, W E

    1991-02-01

    There appear to be beneficial effects of stimulant medication on daily classroom measures of cognitive functioning for Attention Deficit Disorder (ADD) children, but the specificity and origin of such effects is unclear. Consistent with previous results, 0.3 mg/kg methylphenidate improved ADD children's performance on a classroom reading comprehension measure. Using the Posner letter-matching task and four additional measures of phonological processing, we attempted to isolate the effects of methylphenidate to parameter estimates of (a) selective attention, (b) the basic cognitive process of retrieving name codes from permanent memory, and (c) a constant term that represented nonspecific aspects of information processing. Responses to the letter-matching stimuli were faster and more accurate with medication compared to placebo. The improvement in performance was isolated to the parameter estimate that reflected nonspecific aspects of information processing. A lack of medication effect on the other measures of phonological processing supported the Posner task findings in indicating that methylphenidate appears to exert beneficial effects on academic processing through general rather than specific aspects of information processing.

  8. Multirate state and parameter estimation in an antibiotic fermentation with delayed measurements.

    PubMed

    Gudi, R D; Shah, S L; Gray, M R

    1994-12-01

    This article discusses issues related to estimation and monitoring of fermentation processes that exhibit endogenous metabolism and time-varying maintenance activity. Such culture-related activities hamper the use of traditional, software sensor-based algorithms, such as the extended Kalman filter (EKF). In the approach presented here, the individual effects of the endogenous decay and the true maintenance processes have been lumped to represent a modified maintenance coefficient, m(c). Model equations that relate measurable process outputs, such as the carbon dioxide evolution rate (CER) and biomass, to the observable process parameters (such as the net specific growth rate and the modified maintenance coefficient) are proposed. These model equations are used in an estimator that can formally accommodate delayed, infrequent measurements of the culture states (such as the biomass) as well as frequent, culture-related secondary measurements (such as the CER). The resulting multirate software sensor-based estimation strategy is used to monitor biomass profiles as well as profiles of critical fermentation parameters, such as the specific growth rate, for a fed-batch fermentation of Streptomyces clavuligerus.

  9. On the consistency among different approaches for nuclear track scanning and data processing

    NASA Astrophysics Data System (ADS)

    Inozemtsev, K. O.; Kushin, V. V.; Kodaira, S.; Shurshakov, V. A.

    2018-04-01

    The article describes various approaches to space radiation track measurement using a CR-39™ detector (Tastrak). The results of comparing different methods for track scanning and data processing are presented, and basic algorithms for the determination of track parameters are described. Each approach involves an individual set of measured track parameters: for two of the sets, track scanning in the plane of the detector surface (2-D measurement) is sufficient, while the third set requires scanning in an additional projection (3-D measurement). An experimental comparison of the considered techniques was made using accelerated Ar, Fe and Kr heavy ions.

  10. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    NASA Astrophysics Data System (ADS)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases in which different relaxation models are coupled into the inversion scheme, and is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to three orders of magnitude of variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the inversion scheme.
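
    The flavor of such an inversion can be sketched with a Cole-Cole fit to synthetic multi-frequency complex permittivity data using a bounded least-squares solver. This is a generic stand-in: the authors' bounded Levenberg scheme with jump-out and jump-back-in steps is not reproduced by scipy's default trust-region solver, and all numbers below are invented.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def cole_cole(params, omega):
        eps_inf, d_eps, log10_tau, alpha = params
        tau = 10.0 ** log10_tau
        return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

    def residuals(params, omega, data):
        diff = cole_cole(params, omega) - data
        return np.concatenate([diff.real, diff.imag])   # stack real and imaginary parts

    freq = np.logspace(1, 7, 40)                        # 10 Hz .. 10 MHz
    omega = 2.0 * np.pi * freq
    true = np.array([5.0, 20.0, -4.0, 0.2])             # eps_inf, delta_eps, log10(tau), alpha
    rng = np.random.default_rng(6)
    data = cole_cole(true, omega)
    data = data + 0.05 * (rng.standard_normal(omega.size)
                          + 1j * rng.standard_normal(omega.size))

    # Initialization a couple of orders of magnitude off, within physical bounds.
    x0 = np.array([2.0, 5.0, -6.0, 0.5])
    bounds = ([1.0, 0.0, -9.0, 0.0], [100.0, 100.0, 0.0, 1.0])
    fit = least_squares(residuals, x0, bounds=bounds, args=(omega, data))
    print("estimated [eps_inf, delta_eps, log10(tau), alpha]:", np.round(fit.x, 3))
    ```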

  11. Machine processing of ERTS and ground truth data

    NASA Technical Reports Server (NTRS)

    Rogers, R. H. (Principal Investigator); Peacock, K.

    1973-01-01

    The author has identified the following significant results. Results achieved by ERTS-Atmospheric Experiment PR303, whose objective is to establish a radiometric calibration technique, are reported. This technique, which determines and removes solar and atmospheric parameters that degrade the radiometric fidelity of ERTS-1 data, transforms the ERTS-1 sensor radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and results achieved in machine processing ERTS-1 computer-compatible tapes and atmospheric parameters to obtain target reflectance are reviewed.

  12. Measurements of gas parameters in plasma-assisted supersonic combustion processes using diode laser spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolshov, Mikhail A; Kuritsyn, Yu A; Liger, V V

    2009-09-30

    We report a procedure for temperature and water vapour concentration measurements in an unsteady-state combustion zone using diode laser absorption spectroscopy. The procedure involves measurements of the absorption spectrum of water molecules around 1.39 μm. It has been used to determine hydrogen combustion parameters in M = 2 gas flows in the test section of a supersonic wind tunnel. The relatively high intensities of the absorption lines used have enabled direct absorption measurements. We describe a differential technique for measurements of transient absorption spectra, the procedure we used for primary data processing and approaches for determining the gas temperature and H2O concentration in the probed zone. The measured absorption spectra are fitted with spectra simulated using parameters from spectroscopic databases. The combustion-time-averaged (~50 ms) gas temperature and water vapour partial pressure in the hot wake region are determined to be 1050 K and 21 Torr, respectively. The large signal-to-noise ratio in our measurements allowed us to assess the temporal behaviour of these parameters. The accuracy in our temperature measurements in the probed zone is ~40 K.

  13. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Measurements of gas parameters in plasma-assisted supersonic combustion processes using diode laser spectroscopy

    NASA Astrophysics Data System (ADS)

    Bolshov, Mikhail A.; Kuritsyn, Yu A.; Liger, V. V.; Mironenko, V. R.; Leonov, S. B.; Yarantsev, D. A.

    2009-09-01

    We report a procedure for temperature and water vapour concentration measurements in an unsteady-state combustion zone using diode laser absorption spectroscopy. The procedure involves measurements of the absorption spectrum of water molecules around 1.39 μm. It has been used to determine hydrogen combustion parameters in M = 2 gas flows in the test section of a supersonic wind tunnel. The relatively high intensities of the absorption lines used have enabled direct absorption measurements. We describe a differential technique for measurements of transient absorption spectra, the procedure we used for primary data processing and approaches for determining the gas temperature and H2O concentration in the probed zone. The measured absorption spectra are fitted with spectra simulated using parameters from spectroscopic databases. The combustion-time-averaged (~50 ms) gas temperature and water vapour partial pressure in the hot wake region are determined to be 1050 K and 21 Torr, respectively. The large signal-to-noise ratio in our measurements allowed us to assess the temporal behaviour of these parameters. The accuracy in our temperature measurements in the probed zone is ~40 K.

  14. Investigating the CO2 laser cutting parameters of MDF wood composite material

    NASA Astrophysics Data System (ADS)

    Eltawahni, H. A.; Olabi, A. G.; Benyounis, K. Y.

    2011-04-01

    Laser cutting of medium density fibreboard (MDF) is a complicated process, and the selection of the process parameter combinations is essential to obtain the highest quality cut section. This paper presents a means for selecting the process parameters for laser cutting of MDF based on the design of experiments (DOE) approach. A CO2 laser was used to cut three thicknesses, 4, 6 and 9 mm, of MDF panels. The process factors investigated are: laser power, cutting speed, air pressure and focal point position. In this work, cutting quality was evaluated by measuring the upper kerf width, the lower kerf width, the ratio between the upper kerf width and the lower kerf width, the cut section roughness and the operating cost. The effect of each factor on the quality measures was determined. The optimal cutting combinations were presented in favour of high-quality process output and of low cutting cost.

  15. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    PubMed

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.

  16. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks

    PubMed Central

    2010-01-01

    Background Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. Results In this work we present a set-based framework that allows to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. Conclusions The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates. PMID:20500862

  17. X-ray Computed Tomography Assessment of Air Void Distribution in Concrete

    NASA Astrophysics Data System (ADS)

    Lu, Haizhu

    Air void size and spatial distribution have long been regarded as critical parameters in the frost resistance of concrete. In cement-based materials, entrained air void systems play an important role in performance as related to durability, permeability, and heat transfer. Many efforts have been made to measure air void parameters in a more efficient and reliable manner in the past several decades. Standardized measurement techniques based on optical microscopy and stereology on flat cut and polished surfaces are widely used in research as well as in quality assurance and quality control applications. Other more automated methods using image processing have also been utilized, but still starting from flat cut and polished surfaces. The emergence of X-ray computed tomography (CT) techniques provides the capability of capturing the inner microstructure of materials at the micrometer and nanometer scale. X-ray CT's less demanding sample preparation and capability to measure 3D distributions of air voids directly provide ample prospects for its wider use in air void characterization in cement-based materials. However, due to the huge number of air voids that can exist within a limited volume, errors can easily arise in the absence of a formalized data processing procedure. In this study, air void parameters in selected types of cement-based materials (lightweight concrete, structural concrete elements, pavements, and laboratory mortars) have been measured using micro X-ray CT. The focus of this study is to propose a unified procedure for processing the data and to provide solutions to deal with common problems that arise when measuring air void parameters: primarily the reliable segmentation of objects of interest, uncertainty estimation of measured parameters, and the comparison of competing segmentation parameters.
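
    The basic data-processing chain described above, segmentation followed by connected-component labelling and per-void size statistics, can be sketched as follows. The synthetic volume, threshold, and voxel size are placeholders; real CT data require much more careful threshold selection and artifact handling, which is precisely the problem the thesis addresses.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(7)
    voxel_mm = 0.02                       # assumed voxel edge length, mm

    # Synthetic "attenuation" volume: solid paste (bright) with spherical voids (dark).
    shape = (120, 120, 120)
    vol = 0.9 + 0.02 * rng.standard_normal(shape)
    zz, yy, xx = np.indices(shape)
    for _ in range(60):
        c = rng.integers(10, 110, size=3)
        r = rng.uniform(2, 6)
        vol[(zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 < r**2] = 0.1

    # 1. Segment: voxels below the threshold are treated as air.
    air = vol < 0.5
    # 2. Label connected voids (26-connectivity).
    labels, n_voids = ndimage.label(air, structure=np.ones((3, 3, 3)))
    # 3. Measure: volume and equivalent spherical diameter of every void.
    counts = np.bincount(labels.ravel())[1:]                # voxels per void
    volumes_mm3 = counts * voxel_mm**3
    eq_diam_mm = (6.0 * volumes_mm3 / np.pi) ** (1.0 / 3.0)

    print(f"{n_voids} voids, total air content {air.mean() * 100:.2f} %")
    print(f"median equivalent diameter {np.median(eq_diam_mm):.3f} mm")
    ```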

  18. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.

  19. Continuous Odour Measurement with Chemosensor Systems

    NASA Astrophysics Data System (ADS)

    Boeker, Peter; Haas, T.; Diekmann, B.; Lammer, P. Schulze

    2009-05-01

    Continuous odour measurement is a challenging task for chemosensor systems. Firstly, a long-term, stable measurement mode must be guaranteed in order to preserve the validity of the time-consuming and expensive olfactometric calibration data. Secondly, a method is needed to deal with the incoming sensor data: continuous online detection of signal patterns, the correlated gas emission and the assigned odour data is essential for continuous odour measurement. Thirdly, there is a severe danger of over-fitting during odour calibration because of the high measurement uncertainty of olfactometry. In this contribution we present a technical solution for continuous measurements comprising a hybrid QMB-sensor array and electrochemical cells. A set of software tools enables efficient data processing and calibration and computes the calibration parameters. The internal software of the measurement system's microcontroller processes the calibration parameters online to output the desired odour information.

  20. Color identification and fuzzy reasoning based monitoring and controlling of fermentation process of branched chain amino acid

    NASA Astrophysics Data System (ADS)

    Ma, Lei; Wang, Yizhong; Xu, Qingyang; Huang, Huafang; Zhang, Rui; Chen, Ning

    2009-11-01

    The main production method of branched chain amino acid (BCAA) is microbial fermentation. In this paper, to monitor and control the fermentation process of BCAA, especially its logarithmic phase, parameters such as the color of the fermentation broth, culture temperature, pH, revolution, dissolved oxygen, airflow rate, pressure, optical density, and residual glucose are measured, controlled, and/or adjusted. The color of the fermentation broth is measured using the HSI color model and a BP neural network; the network's inputs are the histograms of hue (H) and saturation (S), and its output is the color description. Fermentation process parameters are adjusted using fuzzy reasoning, which is performed by inference rules. According to the practical situation of the BCAA fermentation process, all parameters are divided into four grades, and different fuzzy rules are established.
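
    A rough sketch of the colour-feature step described above: the Python fragment below computes normalized hue and saturation histograms of the kind that could serve as inputs to a small backpropagation network. The bin count and the synthetic "broth" image are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def hue_saturation_histograms(rgb, bins=16):
        """Normalized histograms of hue and saturation for an RGB image.

        rgb: float array of shape (H, W, 3) with values in [0, 1].
        Returns a 1-D feature vector (2 * bins) usable as input to a
        small feed-forward (BP) neural network.
        """
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        cmax, cmin = rgb.max(axis=-1), rgb.min(axis=-1)
        delta = cmax - cmin

        # Hue in [0, 1), piecewise as in the standard HSI/HSV conversion.
        hue = np.zeros_like(cmax)
        mask = delta > 1e-12
        idx = mask & (cmax == r)
        hue[idx] = ((g[idx] - b[idx]) / delta[idx]) % 6
        idx = mask & (cmax == g)
        hue[idx] = (b[idx] - r[idx]) / delta[idx] + 2
        idx = mask & (cmax == b)
        hue[idx] = (r[idx] - g[idx]) / delta[idx] + 4
        hue /= 6.0

        # Saturation in [0, 1].
        sat = np.where(cmax > 1e-12, delta / np.maximum(cmax, 1e-12), 0.0)

        h_hist, _ = np.histogram(hue, bins=bins, range=(0, 1))
        s_hist, _ = np.histogram(sat, bins=bins, range=(0, 1))
        feats = np.concatenate([h_hist, s_hist]).astype(float)
        return feats / feats.sum()

    # Synthetic broth-like image drifting around a yellowish-brown colour.
    rng = np.random.default_rng(0)
    img = np.clip(rng.normal([0.6, 0.5, 0.2], 0.05, size=(64, 64, 3)), 0, 1)
    print(hue_saturation_histograms(img).shape)   # (32,) feature vector
    ```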

  1. Apparatus and method for fluid analysis

    DOEpatents

    Wilson, Bary W.; Peters, Timothy J.; Shepard, Chester L.; Reeves, James H.

    2004-11-02

    The present invention is an apparatus and method for analyzing a fluid used in a machine or in an industrial process line. The apparatus has at least one meter placed proximate the machine or process line and in contact with the machine or process fluid for measuring at least one parameter related to the fluid. The at least one parameter is a standard laboratory analysis parameter. The at least one meter includes but is not limited to viscometer, element meter, optical meter, particulate meter, and combinations thereof.

  2. Autonomous sensor particle for parameter tracking in large vessels

    NASA Astrophysics Data System (ADS)

    Thiele, Sebastian; Da Silva, Marco Jose; Hampel, Uwe

    2010-08-01

    A self-powered and neutrally buoyant sensor particle has been developed for the long-term measurement of spatially distributed process parameters in the chemically harsh environments of large vessels. One intended application is the measurement of flow parameters in stirred fermentation biogas reactors. The prototype sensor particle is a robust and neutrally buoyant capsule, which allows free movement with the flow. It contains measurement devices that log the temperature, absolute pressure (immersion depth) and 3D-acceleration data. A careful calibration including an uncertainty analysis has been performed. Furthermore, autonomous operation of the developed prototype was successfully proven in a flow experiment in a stirred reactor model. It showed that the sensor particle is feasible for future application in fermentation reactors and other industrial processes.

  3. Approach to in-process tool wear monitoring in drilling: Application of Kalman filter theory

    NASA Astrophysics Data System (ADS)

    He, Ning; Zhang, Youzhen; Pan, Liangxian

    1993-05-01

    The two parameters often used in adaptive control, tool wear and wear rate, are important factors affecting machinability. In this paper, we attempt to use modern cybernetics to solve the in-process tool wear monitoring problem by applying Kalman filter theory to monitor drill wear quantitatively. Based on the experimental results, a dynamic model, a measuring model and a measurement conversion model suitable for the Kalman filter are established. It is proved that the monitoring system possesses complete observability but does not possess complete controllability. A discriminant for selecting the characteristic parameters is put forward, and the thrust force Fz is selected as the characteristic parameter for monitoring tool wear by this discriminant. An in-process Kalman filter drill wear monitoring system composed of a force sensor, microphotography and a microcomputer is established. The results obtained by the Kalman filter and by the common indirect measuring method are compared with the real drill wear measured with the aid of microphotography. The comparison shows that the Kalman filter measures with high precision and satisfies the real-time requirement.
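
    A minimal sketch of the kind of filter described, assuming a two-state model (wear and wear rate) observed through a thrust-force reading already converted to wear units; all numerical values below are illustrative, not the paper's identified models.

    ```python
    import numpy as np

    dt = 1.0                                  # sampling interval (assumed)
    F = np.array([[1.0, dt], [0.0, 1.0]])     # dynamic model: wear driven by wear rate
    H = np.array([[1.0, 0.0]])                # measurement conversion: observe wear only
    Q = np.diag([1e-6, 1e-7])                 # process noise covariance (assumed)
    R = np.array([[4e-4]])                    # measurement noise covariance (assumed)

    x = np.array([0.0, 1e-3])                 # initial wear [mm] and wear rate [mm/step]
    P = np.diag([1e-2, 1e-4])

    rng = np.random.default_rng(1)
    true_wear = 0.0
    for k in range(200):
        true_wear += 1.5e-3                       # "true" wear growth
        z = true_wear + rng.normal(0, 0.02)       # wear inferred from thrust force Fz, noisy

        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P

    print(f"estimated wear {x[0]:.3f} mm vs true {true_wear:.3f} mm")
    ```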

  4. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
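
    A minimal sketch of the state-augmentation idea behind such estimators: the unknown parameter is appended to the state and both are tracked with an extended Kalman filter. The toy model (exponential decay with unknown rate) and all noise levels are assumptions for illustration, not the paper's heat shock or gene regulation models.

    ```python
    import numpy as np

    # Toy system dx/dt = -theta * x with unknown theta; augmented state s = [x, theta].
    dt, theta_true = 0.1, 0.5
    Q = np.diag([1e-6, 1e-6])          # small process noise (assumed)
    R = np.array([[0.01]])             # measurement noise variance (assumed)
    H = np.array([[1.0, 0.0]])         # only x is measured

    s = np.array([1.0, 0.1])           # initial guesses: x0 = 1, theta = 0.1
    P = np.diag([0.1, 1.0])

    rng = np.random.default_rng(2)
    x_true = 1.0
    for k in range(150):
        x_true *= np.exp(-theta_true * dt)
        z = x_true + rng.normal(0, 0.1)

        # Predict with the discretized nonlinear model and its Jacobian.
        x, th = s
        s = np.array([x * np.exp(-th * dt), th])
        Fj = np.array([[np.exp(-th * dt), -dt * x * np.exp(-th * dt)],
                       [0.0, 1.0]])
        P = Fj @ P @ Fj.T + Q

        # Update with the linear measurement of x.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        s = s + (K @ (z - H @ s)).ravel()
        P = (np.eye(2) - K @ H) @ P

    print(f"estimated theta = {s[1]:.3f} (true value {theta_true})")
    ```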

  5. The impact of experimental measurement errors on long-term viscoelastic predictions. [of structural materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1986-01-01

    The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.

  6. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    NASA Astrophysics Data System (ADS)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The degree of wear of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. A line-structured laser light was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit; image acquisition is triggered by hardware interrupt. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares, and then extracts the squares belonging to the target by a fusion of the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms. A considerable acceleration ratio compared with serial CPU calculation was obtained, which greatly improves the real-time image processing capacity. When the wheel set is running at limited speed, the system, placed along the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.

  7. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory and take many hours to give the final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  8. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, the spatial gradients caused by diffusion have become assessable in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data as well as the proposed identifiability analysis approach is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
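
    A sketch of the profile likelihood idea on a much simpler stand-in model (an exponential decay observed with log-normal noise, fitted with scipy); the PDE model of the paper is not reproduced, and all data and grid choices here are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    t = np.linspace(0, 5, 40)
    a_true, k_true, sigma = 2.0, 0.7, 0.1
    y = a_true * np.exp(-k_true * t) * np.exp(rng.normal(0, sigma, t.size))

    def neg_log_lik(params):
        a, k = params
        if a <= 0 or k <= 0:
            return np.inf
        resid = np.log(y) - np.log(a * np.exp(-k * t))   # log-normal noise model
        return 0.5 * np.sum(resid ** 2) / sigma ** 2

    # Full fit, then profile k: re-optimize a for each fixed value of k.
    fit = minimize(neg_log_lik, x0=[1.0, 0.3], method="Nelder-Mead")
    k_grid = np.linspace(0.4, 1.0, 25)
    profile = np.array([minimize(lambda p, kf=k_fix: neg_log_lik([p[0], kf]),
                                 x0=[fit.x[0]], method="Nelder-Mead").fun - fit.fun
                        for k_fix in k_grid])

    # Approximate 95% confidence interval: profile below chi2(1, 0.95) / 2 = 1.92.
    inside = k_grid[profile < 1.92]
    print(f"k_hat = {fit.x[1]:.2f}, 95% CI ~ [{inside.min():.2f}, {inside.max():.2f}]")
    ```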

  9. Wind speed vector restoration algorithm

    NASA Astrophysics Data System (ADS)

    Baranov, Nikolay; Petrov, Gleb; Shiriaev, Ilia

    2018-04-01

    Impulse wind lidar (IWL) signal processing software developed by JSC «BANS» recovers the full wind speed vector from radial projections and provides wind parameter information up to a distance of 2 km. Signal processing techniques for increasing the accuracy and speed of the wind parameter calculation have been studied in this research. Measurement results from the IWL and a continuous scanning lidar were compared, and IWL data processing modeling results were analyzed.

  10. 300 Area treated effluent disposal facility sampling schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loll, C.M.

    1994-10-11

    This document is the interface between the 300 Area Liquid Effluent Process Engineering (LEPE) group and the Waste Sampling and Characterization Facility (WSCF), concerning process control samples. It contains a schedule for process control samples at the 300 Area TEDF which describes the parameters to be measured, the frequency of sampling and analysis, the sampling point, and the purpose for each parameter.

  11. Single-chip microcomputer for image processing in the photonic measuring system

    NASA Astrophysics Data System (ADS)

    Smoleva, Olga S.; Ljul, Natalia Y.

    2002-04-01

    A non-contact measuring system has been designed for rail-track parameter control on the Moscow Metro. It detects several significant parameters: rail-track width, rail-track height, gage, rail-slums, crosslevel, pickets, and car speed. The system consists of three subsystems: a non-contact system for rail-track width, height and gage inspection, a non-contact system for rail-slum inspection, and a subsystem for crosslevel, speed and picket detection. Data from the subsystems are transferred to a pre-processing unit. To process the data received from the subsystems, the single-chip signal processor ADSP-2185 is used because it provides the required processing speed. After the data are processed, they are sent to a PC, which processes them further and outputs them in readable form.

  12. Sensitivity study and parameter optimization of OCD tool for 14nm finFET process

    NASA Astrophysics Data System (ADS)

    Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping

    2016-03-01

    Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes at the 90 nm technology node and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between the incident light and various materials with various topological structures; therefore sensitivity analysis and parameter optimization are critical in OCD applications. This paper presents a method for seeking the most sensitive measurement configuration to enhance the metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra with a series of hardware configurations of incidence angles and azimuth angles was investigated, so that the optimum hardware measurement configuration and spectrum parameters can be identified. FinFET structures at the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating the measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.

  13. A hybrid artificial neural network as a software sensor for optimal control of a wastewater treatment process.

    PubMed

    Choi, D J; Park, H

    2001-11-01

    For control and automation of biological treatment processes, the lack of reliable on-line sensors to measure water quality parameters is one of the most important problems to overcome. Many parameters cannot be measured directly with on-line sensors. The accuracy of existing hardware sensors is also not sufficient, and maintenance problems such as electrode fouling often cause trouble. This paper deals with the development of software sensor techniques that estimate a target water quality parameter from other parameters using the correlation between water quality parameters. We focus our attention on the preprocessing of noisy data and the selection of the model best suited to the situation. Problems of existing approaches are also discussed. We propose a hybrid neural network as a software sensor inferring wastewater quality parameters. Multivariate regression, artificial neural networks (ANN), and a hybrid technique that combines principal component analysis as a preprocessing stage are applied to data from industrial wastewater processes. The hybrid ANN technique shows an enhancement of prediction capability and reduces the overfitting problem of neural networks. The results show that the hybrid ANN technique can be used to extract information from noisy data and to describe the nonlinearity of complex wastewater treatment processes.
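
    A hedged sketch of the hybrid idea with scikit-learn: standardization and PCA as the preprocessing stage, followed by a small ANN regressor acting as the software sensor. The synthetic data and the target name ("COD") are purely illustrative; this is not the authors' dataset or network architecture.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for noisy on-line measurements (pH, temperature, flow, ...)
    # and a hard-to-measure target (e.g. COD); the correlations are invented.
    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, 8))
    cod = 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 2] + rng.normal(0, 0.3, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, cod, test_size=0.3, random_state=0)

    # Hybrid software sensor: standardize, compress with PCA, then a small ANN.
    sensor = make_pipeline(
        StandardScaler(),
        PCA(n_components=5),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    sensor.fit(X_tr, y_tr)
    print(f"R^2 on held-out data: {sensor.score(X_te, y_te):.2f}")
    ```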

  14. A general model for attitude determination error analysis

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Seidewitz, ED; Nicholson, Mark

    1988-01-01

    An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are assumed to be estimated as part of the determination process, or consider parameters, which are assumed to have errors but not to be estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise or plant noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.

  15. Influence of Different Container Closure Systems and Capping Process Parameters on Product Quality and Container Closure Integrity (CCI) in GMP Drug Product Manufacturing.

    PubMed

    Mathaes, Roman; Mahler, Hanns-Christian; Roggo, Yves; Huwyler, Joerg; Eder, Juergen; Fritsch, Kamila; Posset, Tobias; Mohl, Silke; Streubel, Alexander

    2016-01-01

    Capping equipment used in good manufacturing practice manufacturing features different designs and a variety of adjustable process parameters. The overall capping result is a complex interplay of the different capping process parameters and is insufficiently described in the literature. It remains poorly studied how the different capping equipment designs and capping equipment process parameters (e.g., pre-compression force, capping plate height, turntable rotating speed) contribute to the final residual seal force of a sealed container closure system and its relation to container closure integrity and other drug product quality parameters. Stopper compression measured by computed tomography correlated with residual seal force measurements. In our studies, we used different container closure system configurations from different good manufacturing practice drug product fill & finish facilities to investigate the influence of differences in primary packaging, that is, vial size and rubber stopper design, on the capping process and the capped drug product. In addition, we compared two large-scale good manufacturing practice capping equipment setups and different capping equipment settings and their impact on product quality and integrity, as determined by residual seal force. The capping plate to plunger distance had a major influence on the obtained residual seal force values of a sealed vial, whereas the capping pre-compression force and the turntable rotation speed showed only a minor influence on the residual seal force of a sealed vial. Capping process parameters could not easily be transferred between capping equipment of different manufacturers. However, the residual seal force tester did provide a valuable tool to compare the capping performance of different capping equipment. No vial showed any leakage greater than 10^-8 mbar L/s as measured by a helium mass spectrometry system, suggesting that container closure integrity was warranted in the residual seal force range tested for the tested container closure systems. Capping equipment used in good manufacturing practice manufacturing features different designs and a variety of adjustable process parameters. The overall capping result is a complex interplay of the different capping process parameters and is insufficiently described in the literature. It remains poorly studied how the different capping equipment designs and capping equipment process parameters contribute to the final capping result. In this study, we used different container closure system configurations from different good manufacturing practice drug product fill & finish facilities to investigate the influence of the vial size and the rubber stopper design on the capping process. In addition, we compared two examples of large-scale good manufacturing practice capping equipment and different capping equipment settings and their impact on product quality and integrity, as determined by residual seal force. © PDA, Inc. 2016.

  16. Parameter-induced uncertainty quantification of crop yields, soil N2O and CO2 emission for 8 arable sites across Europe using the LandscapeDNDC model

    NASA Astrophysics Data System (ADS)

    Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf

    2014-05-01

    When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in simulation results of process-based ecosystem models may result from uncertainties in the process parameters that describe the processes of the model, from model structure inadequacy, as well as from uncertainties in the observations. Data for development and testing of the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC for simulating crop yields, N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman statistic as a convergence criterion and parallel computing techniques, we enable multiple Markov chains to run independently in parallel and create a random walk to estimate the joint model parameter distribution. Through this distribution we limit the parameter space, obtain probabilities of parameter values and find the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of the simulation results and compare them with the measurement data.
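
    A generic sketch of the sampling machinery described (random-walk Metropolis-Hastings with several chains and a Gelman-Rubin convergence check), applied to a one-parameter toy model rather than LandscapeDNDC; the prior, step size and chain lengths are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(0, 1, 30)
    y = 2.0 * x + rng.normal(0, 0.1, x.size)       # synthetic observations

    def log_post(theta):
        # Flat prior; Gaussian likelihood with known sigma = 0.1.
        return -0.5 * np.sum((y - theta * x) ** 2) / 0.1 ** 2

    def run_chain(start, n=4000, step=0.05):
        chain, theta, lp = np.empty(n), start, log_post(start)
        for i in range(n):
            prop = theta + rng.normal(0, step)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain[n // 2:]                          # discard burn-in

    chains = np.array([run_chain(s) for s in (0.5, 1.5, 2.5, 3.5)])

    # Gelman-Rubin statistic: compares between- and within-chain variances.
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()
    B = n * chains.mean(axis=1).var(ddof=1)
    r_hat = np.sqrt(((n - 1) / n * W + B / n) / W)
    print(f"posterior mean {chains.mean():.3f}, R-hat {r_hat:.3f}")
    ```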

  17. Relating memory to functional performance in normal aging to dementia using hierarchical Bayesian cognitive processing models.

    PubMed

    Shankle, William R; Pooley, James P; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D

    2013-01-01

    Determining how cognition affects functional abilities is important in Alzheimer disease and related disorders. A total of 280 patients (normal or with Alzheimer disease and related disorders) received 1514 assessments using the functional assessment staging test (FAST) procedure and the MCI Screen. A hierarchical Bayesian cognitive processing model was created by embedding a signal detection theory model of the MCI Screen delayed recognition memory task into a hierarchical Bayesian framework. The signal detection theory model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the 6 FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. Hierarchical Bayesian cognitive processing models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition into a continuous measure of functional severity for both individuals and FAST groups. Such a translation links 2 levels of brain information processing and may enable more accurate correlations with other levels, such as those characterized by biomarkers.
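
    For the equal-variance signal detection theory model, the two latent parameters can be estimated in closed form from hit and false-alarm rates; the sketch below (with invented counts, not data from the study) shows that basic calculation, which the hierarchical Bayesian model embeds and extends.

    ```python
    from scipy.stats import norm

    def sdt_parameters(hits, misses, false_alarms, correct_rejections):
        """Equal-variance SDT estimates: discriminability d' and criterion c.

        A 0.5 correction is applied so extreme counts do not produce infinities.
        """
        hr = (hits + 0.5) / (hits + misses + 1.0)
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        d_prime = norm.ppf(hr) - norm.ppf(far)               # memory process
        criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))    # response bias
        return d_prime, criterion

    # Invented counts for two hypothetical severity groups of a 10-target task.
    print(sdt_parameters(9, 1, 2, 8))   # milder impairment
    print(sdt_parameters(6, 4, 5, 5))   # more severe impairment
    ```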

  18. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
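
    A minimal sketch of the underlying idea, assuming the simplest possible case: three directly measured parameters known to satisfy a linear law a + b - c = 0 are adjusted by weighted least squares so that the law is violated as little as possible. The law, the weights and the closed-form solution are illustrative simplifications, not the paper's general fuser.

    ```python
    import numpy as np

    z = np.array([4.1, 2.2, 6.6])          # independent measurements of a, b, c
    var = np.array([0.04, 0.09, 0.01])     # their (assumed) variances
    A = np.array([[1.0, 1.0, -1.0]])       # physical law: A @ x = 0

    # Minimize (x - z)^T V^-1 (x - z) subject to A x = 0 (Lagrange multipliers):
    # x = z - V A^T (A V A^T)^-1 A z, with V = diag(var).
    V = np.diag(var)
    correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ z)
    x_fused = z - correction.ravel()

    print("fused values:", np.round(x_fused, 3))
    print("law residual:", float(A @ x_fused))   # ~0 by construction
    ```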

  19. Characterization of material parameters for high speed forming and cutting via experiment and inverse simulation

    NASA Astrophysics Data System (ADS)

    Scheffler, Christian; Psyk, Verena; Linnemann, Maik; Tulke, Marc; Brosius, Alexander; Landgrebe, Dirk

    2018-05-01

    High-speed effects in production technology provide a broad range of technological and economic advantages [1, 2]. However, exploiting them necessitates knowledge of strain rate dependent material behavior in process modelling. In general, high speed material characterization involves several difficulties and requires sophisticated approaches in order to provide reliable material data. This paper proposes two innovative test concepts with electromagnetic and pneumatic drives and an approach for material characterization in terms of strain rate dependent flow curves and parameters of failure or damage models. The test setups have been designed for investigations at strain rates up to 10^5 s^-1. In principle, knowledge about the temporal courses and local distributions of stress and strain in the specimen is essential for identifying material characteristics, but short process times, fast changes of the measurement values, small specimen size and frequently limited accessibility of the specimen during the test hinder directly measuring these parameters in high-velocity testing. Therefore, auxiliary test parameters, which are easier to measure, are recorded and used as input data for an inverse numerical simulation that provides the desired material characteristics, e.g. the Johnson-Cook parameters, as a result. These auxiliary parameters are a force-equivalent strain signal on a measurement body and the displacement of the upper specimen edge.

  20. Measurement and modeling of unsaturated hydraulic conductivity: Chapter 21

    USGS Publications Warehouse

    Perkins, Kim S.; Elango, Lakshmanan

    2011-01-01

    This chapter will discuss, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content, K(θ). The parameters that describe the K(θ) curve obtained by different methods are used directly in Richards’ equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory-measured or estimated properties for field-scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.

  1. Process Parameter Optimization for Wobbling Laser Spot Welding of Ti6Al4V Alloy

    NASA Astrophysics Data System (ADS)

    Vakili-Farahani, F.; Lungershausen, J.; Wasmer, K.

    Laser beam welding (LBW) coupled with the "wobble effect" (fast oscillation of the laser beam) is very promising for the high-precision micro-joining industry. For this process, as for conventional LBW, the laser welding process parameters play a very significant role in determining the quality of a weld joint. Consequently, four process parameters (laser power, wobble frequency, number of rotations within a single laser pulse and focus position) and five responses (penetration, width, area of the fusion zone, area of the heat affected zone (HAZ), and hardness) were investigated for spot welding of Ti6Al4V alloy (grade 5) using a design of experiments (DoE) approach. This paper presents experimental results showing the effects of varying the most important process parameters on the spot weld quality of Ti6Al4V alloy. Semi-empirical mathematical models were developed to correlate the laser welding parameters to each of the measured weld responses. The adequacy of the models was then examined by methods such as ANOVA. These models not only allow a better understanding of the wobble laser welding process and prediction of the process performance, but also determine optimal process parameters. Therefore, the optimal combination of process parameters was determined with respect to a set of quality criteria.

  2. 300 Area treated effluent disposal facility sampling schedule. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loll, C.M.

    1995-03-28

    This document is the interface between the 300 Area liquid effluent process engineering (LEPE) group and the waste sampling and characterization facility (WSCF), concerning process control samples. It contains a schedule for process control samples at the 300 Area TEDF which describes the parameters to be measured, the frequency of sampling and analysis, the sampling point, and the purpose for each parameter.

  3. Optimization of process parameters in welding of dissimilar steels using robot TIG welding

    NASA Astrophysics Data System (ADS)

    Navaneeswar Reddy, G.; VenkataRamana, M.

    2018-03-01

    Robot TIG welding is a modern technique used for joining two workpieces with high precision. Design of experiments is used to conduct experiments by varying weld parameters such as current, wire feed and travel speed. The welding parameters play an important role in the joining of the dissimilar stainless steels SS 304L and SS 430. In this work, the influences of the welding parameters on robot TIG welded specimens are investigated using response surface methodology. The micro Vickers hardness of the weldments is measured. The process parameters are optimized to maximize the hardness of the weldments.

  4. An indirect method of imaging the Stokes parameters of a submicron particle with sub-diffraction scattering

    NASA Astrophysics Data System (ADS)

    Ullah, Kaleem; Garcia-Camara, Braulio; Habib, Muhammad; Yadav, N. P.; Liu, Xuefeng

    2018-07-01

    In this work, we report an indirect way to image the Stokes parameters of a sample under test (SUT) with sub-diffraction scattering information. We apply our previously reported technique called parametric indirect microscopic imaging (PIMI), based on a fitting and filtration process, to measure the Stokes parameters of a submicron particle. A comparison with a classical Stokes measurement is also shown. By modulating the incident field in a precise way, the fitting and filtration process at each pixel of the detector in PIMI enables us to resolve and sense the scattering information of the SUT and map it in terms of the Stokes parameters. We believe that our finding can be very useful in fields such as singular optics, optical nanoantennas and biomedicine. The spatial signature of the Stokes parameters given by our method has been confirmed with the finite difference time domain (FDTD) method.

  5. Measuring the significance of pearlescence in real-time bottle forming

    NASA Astrophysics Data System (ADS)

    Nixon, J.; Menary, G.; Yan, S.

    2018-05-01

    This work examines the optical properties of polyethylene terephthalate (PET) bottles during the stretch-blow-moulding (SBM) process. PET has a relatively large process window with regard to process parameters; however, if the boundaries are pushed, the resulting bottle can become insufficient for consumer requirements. One aspect of this process is the onset of pearlescence in the bottle material, where the bottle becomes opaque due to elevated stress whitening. Experimental trials were carried out using a modified free-stretch-blow machine in which the deforming bottle was examined in free air. The strain values of the deformation were measured using digital image correlation (DIC) and the optical properties were measured relative to the initial amorphous PET preform. The results reveal that process parameters can significantly affect pearlescence. The detrimental level of pearlescence may therefore be predicted, reducing the probability of poorly formed bottles.

  6. Behavioral and Brain Measures of Phasic Alerting Effects on Visual Attention.

    PubMed

    Wiegand, Iris; Petersen, Anders; Finke, Kathrin; Bundesen, Claus; Lansner, Jon; Habekost, Thomas

    2017-01-01

    In the present study, we investigated effects of phasic alerting on visual attention in a partial report task, in which half of the displays were preceded by an auditory warning cue. Based on the computational Theory of Visual Attention (TVA), we estimated parameters of spatial and non-spatial aspects of visual attention and measured event-related lateralizations (ERLs) over visual processing areas. We found that the TVA parameter sensory effectiveness a , which is thought to reflect visual processing capacity, significantly increased with phasic alerting. By contrast, the distribution of visual processing resources according to task relevance and spatial position, as quantified in parameters top-down control α and spatial bias w index , was not modulated by phasic alerting. On the electrophysiological level, the latencies of ERLs in response to the task displays were reduced following the warning cue. These results suggest that phasic alerting facilitates visual processing in a general, unselective manner and that this effect originates in early stages of visual information processing.

  7. A data processing method based on tracking light spot for the laser differential confocal component parameters measurement system

    NASA Astrophysics Data System (ADS)

    Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin

    2013-12-01

    We have proposed a component parameter measuring method based on differential confocal focusing theory. In order to improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by movement of the light spot while collecting the axial intensity signal, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thus improving the system positioning accuracy.

  8. A method to optimize the processing algorithm of a computed radiography system for chest radiography.

    PubMed

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2007-09-01

    A test methodology using an anthropomorphic-equivalent chest phantom is described for the optimization of the Agfa computed radiography "MUSICA" processing algorithm for chest radiography. The contrast-to-noise ratio (CNR) in the lung, heart and diaphragm regions of the phantom, and the "system modulation transfer function" (sMTF) in the lung region, were measured using test tools embedded in the phantom. Using these parameters the MUSICA processing algorithm was optimized with respect to low-contrast detectability and spatial resolution. Two optimum "MUSICA parameter sets" were derived respectively for maximizing the CNR and sMTF in each region of the phantom. Further work is required to find the relative importance of low-contrast detectability and spatial resolution in chest images, from which the definitive optimum MUSICA parameter set can then be derived. Prior to this further work, a compromised optimum MUSICA parameter set was applied to a range of clinical images. A group of experienced image evaluators scored these images alongside images produced from the same radiographs using the MUSICA parameter set in clinical use at the time. The compromised optimum MUSICA parameter set was shown to produce measurably better images.
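
    A sketch of the contrast-to-noise figure of merit used in such comparisons, computed on a synthetic low-contrast patch; the ROI coordinates and the particular CNR definition (|mean difference| / background standard deviation) are assumptions for illustration.

    ```python
    import numpy as np

    def contrast_to_noise_ratio(image, signal_roi, background_roi):
        """CNR between a detail region and its background.

        Each ROI is a (row_slice, col_slice) pair; the definition used here is
        |mean_signal - mean_background| / std_background.
        """
        sig, bkg = image[signal_roi], image[background_roi]
        return abs(sig.mean() - bkg.mean()) / bkg.std(ddof=1)

    # Synthetic phantom-like patch: uniform noisy background plus a faint square detail.
    rng = np.random.default_rng(6)
    img = rng.normal(100.0, 5.0, size=(200, 200))
    img[80:120, 80:120] += 8.0

    cnr = contrast_to_noise_ratio(img,
                                  (slice(80, 120), slice(80, 120)),
                                  (slice(0, 40), slice(0, 40)))
    print(f"CNR = {cnr:.2f}")
    ```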

  9. Low-Cost Detection of Thin Film Stress during Fabrication

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy A.

    2015-01-01

    NASA's Marshall Space Flight Center has developed a simple, cost-effective optical method for thin film stress measurements during growth and/or subsequent annealing processes. Stress arising in thin film fabrication presents production challenges for electronic devices, sensors, and optical coatings; it can lead to substrate distortion and deformation, impacting the performance of thin film products. NASA's technique measures in-situ stress using a simple, noncontact fiber optic probe in the thin film vacuum deposition chamber. This enables real-time monitoring of stress during the fabrication process and allows for efficient control of deposition process parameters. By modifying process parameters in real time during fabrication, thin film stress can be optimized or controlled, improving thin film product performance.

  10. A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.

    PubMed

    Faya, Paul; Stamey, James D; Seaman, John W

    2017-01-01

    For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
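
    For context, the classical point-estimate form of the accumulated lethality, F0 = Δt Σ 10^((T − 121.1 °C)/z), is sketched below for an invented temperature profile; the Bayesian treatment proposed in the paper would instead place probability distributions on the resistance parameters rather than use fixed values.

    ```python
    import numpy as np

    def f0_value(temps_c, dt_min, z=10.0, t_ref=121.1):
        """Accumulated lethality F0 in equivalent minutes at 121.1 degC.

        temps_c: measured product temperatures (degC) sampled every dt_min minutes.
        Uses the standard point-estimate formula F0 = dt * sum(10**((T - Tref) / z)).
        """
        temps_c = np.asarray(temps_c, dtype=float)
        return dt_min * np.sum(10.0 ** ((temps_c - t_ref) / z))

    # Illustrative cycle: ramp up, hold near 121.5 degC, ramp down (1-minute samples).
    profile = np.concatenate([np.linspace(90, 121.5, 15),
                              np.full(20, 121.5),
                              np.linspace(121.5, 90, 15)])
    print(f"F0 = {f0_value(profile, dt_min=1.0):.1f} min")
    ```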

  11. Subsonic flight test evaluation of a propulsion system parameter estimation process for the F100 engine

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Gilyard, Glenn B.

    1992-01-01

    Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newsom, R. K.; Sivaraman, C.; Shippert, T. R.

    Wind speed and direction, together with pressure, temperature, and relative humidity, are the most fundamental atmospheric state parameters. Accurate measurement of these parameters is crucial for numerical weather prediction. Vertically resolved wind measurements in the atmospheric boundary layer are particularly important for modeling pollutant and aerosol transport. Raw data from a scanning coherent Doppler lidar system can be processed to generate accurate height-resolved measurements of wind speed and direction in the atmospheric boundary layer.

  13. Analysing the influence of FSP process parameters on IGC susceptibility of AA5083 using Sugeno - Fuzzy model

    NASA Astrophysics Data System (ADS)

    Jayakarthick, C.; Povendhan, A. P.; Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Aluminium alloy AA5083 was friction stir processed to improve its intergranular corrosion (IGC) resistance. FSP trials were performed by varying the process parameters as per Taguchi’s L18 orthogonal array. The IGC resistance of the friction stir processed specimens was determined by immersing them in concentrated nitric acid and measuring the mass loss per unit area. Results indicate that dispersion and partial dissolution of the secondary phase increased the IGC resistance of the friction stir processed specimens. A Sugeno fuzzy model was developed to study the effect of FSP process parameters on the IGC susceptibility of the friction stir processed specimens. Tool rotation speed, tool traverse speed and shoulder diameter have a significant effect on the IGC susceptibility of the friction stir processed specimens.

  14. The dynamic nature of conflict in Wikipedia

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Sampaio dos Aidos, F.; Carvalho, J.

    2014-10-01

    The voluntary process of Wikipedia editing provides an environment in which the outcome is clearly a collective product of interactions involving a large number of people. We propose a simple agent-based model, developed from real data, to reproduce the collaborative process of Wikipedia editing. With a small number of simple ingredients, our model mimics several interesting features of real human behaviour, namely in the context of edit wars. We show that the level of conflict is determined by a tolerance parameter, which measures the editors' capability to accept different opinions and to change their own opinion. We propose to measure conflict with a parameter based on mutual reverts, which increases only in contentious situations. Using this parameter, we find that the distribution of inter-peace periods is heavy tailed. The effects of wiki-robots on the conflict levels and on the editing patterns are also studied. Our findings are compared with previous parameters used to measure conflicts in edit wars.

  15. 40 CFR 63.488 - Methods and procedures for batch front-end process vent group determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... direct measurement as specified in paragraph (b)(5) of this section. Engineering assessment may also be... obtained through direct measurement, as defined in paragraph (b)(5) of this section, through engineering...

  16. 40 CFR 63.488 - Methods and procedures for batch front-end process vent group determination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... direct measurement as specified in paragraph (b)(5) of this section. Engineering assessment may also be... obtained through direct measurement, as defined in paragraph (b)(5) of this section, through engineering...

  17. 40 CFR 63.488 - Methods and procedures for batch front-end process vent group determination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... direct measurement as specified in paragraph (b)(5) of this section. Engineering assessment may also be... obtained through direct measurement, as defined in paragraph (b)(5) of this section, through engineering...

  18. 40 CFR 63.488 - Methods and procedures for batch front-end process vent group determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... direct measurement as specified in paragraph (b)(5) of this section. Engineering assessment may also be... obtained through direct measurement, as defined in paragraph (b)(5) of this section, through engineering...

  19. Exploring the joint measurability using an information-theoretic approach

    NASA Astrophysics Data System (ADS)

    Hsu, Li-Yi

    2016-12-01

    We explore the legal purity parameters for joint measurements. Instead of directly unsharpening the measurements, we perform quantum cloning before the sharp measurements; the necessary fuzziness in the unsharp measurements is equivalently introduced in the imperfect cloning process. Based on information causality and the consequent noisy nonlocal computation, one can derive information-theoretic quadratic inequalities that must be satisfied by any physical theory. On the other hand, to guarantee classicality, the linear Bell-type inequalities deduced from these quadratic ones must be obeyed. As for joint measurability, the purity parameters must be chosen to obey both types of inequalities. Finally, the quadratic inequalities for purity parameters in the joint measurability region are derived.

  20. Simulation of aerobic and anaerobic biodegradation processes at a crude oil spill site

    USGS Publications Warehouse

    Essaid, Hedeff I.; Bekins, Barbara A.; Godsy, E. Michael; Warren, Ean; Baedecker, Mary Jo; Cozzarelli, Isabelle M.

    1995-01-01

    A two-dimensional, multispecies reactive solute transport model with sequential aerobic and anaerobic degradation processes was developed and tested. The model was used to study the field-scale solute transport and degradation processes at the Bemidji, Minnesota, crude oil spill site. The simulations included the biodegradation of volatile and nonvolatile fractions of dissolved organic carbon by aerobic processes, manganese and iron reduction, and methanogenesis. Model parameter estimates were constrained by published Monod kinetic parameters, theoretical yield estimates, and field biomass measurements. Despite the considerable uncertainty in the model parameter estimates, results of simulations reproduced the general features of the observed groundwater plume and the measured bacterial concentrations. In the simulation, 46% of the total dissolved organic carbon (TDOC) introduced into the aquifer was degraded. Aerobic degradation accounted for 40% of the TDOC degraded. Anaerobic processes accounted for the remaining 60% of degradation of TDOC: 5% by Mn reduction, 19% by Fe reduction, and 36% by methanogenesis. Thus anaerobic processes account for more than half of the removal of DOC at this site.
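
    A sketch of the Monod-kinetics degradation term that underlies such simulations, reduced to a single substrate/biomass pair integrated with scipy; the rate constants and initial conditions are invented, and the transport, multiple electron acceptors and field geometry of the actual model are omitted.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Single-process Monod kinetics: substrate S (dissolved organic carbon)
    # degraded by biomass X. Parameter values are purely illustrative.
    mu_max, Ks, Y, b = 0.5, 2.0, 0.4, 0.02   # 1/d, mg/L, g/g, 1/d

    def monod(t, y):
        S, X = y
        growth = mu_max * S / (Ks + S) * X
        return [-growth / Y,                  # substrate consumption
                growth - b * X]               # biomass growth minus decay

    sol = solve_ivp(monod, (0.0, 60.0), [20.0, 0.1], rtol=1e-8)
    S_end, X_end = sol.y[:, -1]
    print(f"after 60 d: S = {S_end:.2f} mg/L, X = {X_end:.2f} mg/L")
    ```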

  1. Quantifying Uranium Isotope Ratios Using Resonance Ionization Mass Spectrometry: The Influence of Laser Parameters on Relative Ionization Probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isselhardt, Brett H.

    2011-09-01

    Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure relative uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process to provide a distinction between uranium atoms and potential isobars without the aid of chemical purification and separation. We explore the laser parameters critical to the ionization process and their effects on the measured isotope ratio. Specifically, the use of broad bandwidth lasers with automated feedback control of wavelength was applied to the measurement of 235U/238U ratios to decrease laser-induced isotopic fractionation. By broadening the bandwidth of the first laser in a 3-color, 3-photon ionization process from a bandwidth of 1.8 GHz to about 10 GHz, the variation in sequential relative isotope abundance measurements decreased from >10% to less than 0.5%. This procedure was demonstrated for the direct interrogation of uranium oxide targets with essentially no sample preparation. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variation in laser parameters on the measured isotope ratio. This work demonstrates that RIMS can be used for the robust measurement of uranium isotope ratios.

  2. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    PubMed

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

    In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model, which is implemented in BioWin software, and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient S_i,j and the mean square sensitivity measure delta_j^msqr. It turns out that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of the S_i,j calculations. Half of the influential parameters are associated with growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for the parameters for which this has not yet been done. Copyright 2010 Elsevier Ltd. All rights reserved.
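
    A sketch of the normalised sensitivity coefficient S_i,j = (∂y_i/∂θ_j)(θ_j/y_i), approximated by central finite differences for a small invented model; the BioWin activated sludge model itself is not reproduced here.

    ```python
    import numpy as np

    def normalized_sensitivities(model, theta, rel_step=0.01):
        """S[i, j] = (dy_i / dtheta_j) * (theta_j / y_i) by central differences."""
        theta = np.asarray(theta, dtype=float)
        y0 = np.asarray(model(theta), dtype=float)
        S = np.zeros((y0.size, theta.size))
        for j in range(theta.size):
            h = rel_step * theta[j]
            up, dn = theta.copy(), theta.copy()
            up[j] += h
            dn[j] -= h
            dy = (np.asarray(model(up)) - np.asarray(model(dn))) / (2 * h)
            S[:, j] = dy * theta[j] / y0
        return S

    # Invented two-output "effluent" model driven by a growth rate, a
    # half-saturation constant and a yield (names and equations are illustrative).
    def toy_model(p):
        mu, ks, yld = p
        s_eff = ks / (mu * 10.0)            # pseudo steady-state substrate
        x_eff = yld * (20.0 - s_eff)        # pseudo biomass
        return [s_eff, x_eff]

    print(np.round(normalized_sensitivities(toy_model, [4.0, 0.5, 0.6]), 3))
    ```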

  3. Characterizing a porous road pavement using surface impedance measurement: a guided numerical inversion procedure.

    PubMed

    Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel

    2013-12-01

    This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
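
    The second step above amounts to minimizing the misfit between measured and modelled surface impedance over the equivalent fluid parameters. The sketch below shows a generic simulated annealing loop for such an inversion; the misfit callback, bounds, and cooling schedule are illustrative assumptions rather than the authors' implementation.

    ```python
    import numpy as np

    def anneal(misfit, bounds, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
        """Minimize `misfit(params)` by simulated annealing within box `bounds`.

        `misfit` would compare measured surface impedance with the prediction of an
        equivalent fluid model (e.g. Zwikker-Kosten) over the chosen frequency range.
        """
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        x = rng.uniform(lo, hi)              # random starting point inside the bounds
        f = misfit(x)
        best_x, best_f = x.copy(), f
        t = t0
        for _ in range(n_iter):
            cand = np.clip(x + 0.05 * (hi - lo) * rng.standard_normal(x.size), lo, hi)
            f_cand = misfit(cand)
            # Accept improvements always, worse moves with Boltzmann probability
            if f_cand < f or rng.random() < np.exp((f - f_cand) / t):
                x, f = cand, f_cand
                if f < best_f:
                    best_x, best_f = x.copy(), f
            t *= cooling                      # geometric cooling schedule
        return best_x, best_f
    ```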

  4. Correlations of Melt Pool Geometry and Process Parameters During Laser Metal Deposition by Coaxial Process Monitoring

    NASA Astrophysics Data System (ADS)

    Ocylok, Sörn; Alexeev, Eugen; Mann, Stefan; Weisheit, Andreas; Wissenbach, Konrad; Kelbassa, Ingomar

    One major demand of today's laser metal deposition (LMD) processes is to achieve a fail-safe build-up under changing conditions such as heat accumulation. Especially for the repair of thin parts such as turbine blades, knowledge of the correlations between melt pool behavior and process parameters such as laser power, feed rate, and powder mass stream is indispensable. This paper presents the process layout with the camera-based coaxial monitoring system and the quantitative influence of the process parameters on the melt pool geometry. The diameter, length, and area of the melt pool are measured by a video analysis system at various parameter settings and compared with the track width in cross-sections and with the laser spot diameter. The influence of changing process conditions on the melt pool is also investigated. On the basis of these results, an enhanced build-up process for a multilayer single-track fillet geometry is presented.

  5. Relating Memory To Functional Performance In Normal Aging to Dementia Using Hierarchical Bayesian Cognitive Processing Models

    PubMed Central

    Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.

    2012-01-01

    Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
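
    For readers unfamiliar with the signal detection theory quantities embedded in the HBCP model, discriminability and response bias can be recovered from hit and false-alarm rates under the equal-variance Gaussian SDT model. The following is a minimal sketch of that calculation only; it is not the hierarchical Bayesian estimation used in the study, and the counts are invented.

    ```python
    from scipy.stats import norm

    def sdt_parameters(hits, misses, false_alarms, correct_rejections):
        """Equal-variance Gaussian SDT: discriminability d' and response bias c."""
        # Log-linear correction avoids infinite z-scores for perfect hit/false-alarm rates
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa                 # memory / discriminability parameter
        criterion = -0.5 * (z_hit + z_fa)      # response bias (executive function proxy)
        return d_prime, criterion

    print(sdt_parameters(hits=18, misses=2, false_alarms=4, correct_rejections=16))
    ```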

  6. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    PubMed

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
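
    A Bayesian update of ASM parameters of the kind described above is often implemented with a Metropolis-Hastings sampler. The sketch below shows the generic random-walk variant; the prior, likelihood, and step size are placeholders and do not reproduce the authors' hierarchical framework.

    ```python
    import numpy as np

    def metropolis(log_prior, log_likelihood, theta0, n_samples=20000, step=0.05, seed=1):
        """Random-walk Metropolis sampler for the posterior of model parameters theta."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        log_post = log_prior(theta) + log_likelihood(theta)
        chain = np.empty((n_samples, theta.size))
        for i in range(n_samples):
            proposal = theta + step * rng.standard_normal(theta.size)
            log_post_prop = log_prior(proposal) + log_likelihood(proposal)
            # Accept with probability min(1, posterior ratio)
            if np.log(rng.random()) < log_post_prop - log_post:
                theta, log_post = proposal, log_post_prop
            chain[i] = theta
        return chain   # joint samples approximate the JPDF of the parameters
    ```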

  7. Uav-Based Automatic Tree Growth Measurement for Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Karpina, M.; Jarząbek-Rychard, M.; Tymków, P.; Borkowski, A.

    2016-06-01

    Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aircraft Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week, and the in-situ measurements were correlated with the UAV data acquisition. The correlation aimed at investigating the optimal flight conditions and parameter settings for image acquisition. The collected images are processed with a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm is developed to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross-section, followed by the calculation of tree heights. The automatically derived height values are compared with the reference measurements performed manually; the comparison allows for the evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.

  8. Performance analysis and evaluation of direct phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhao, Ping; Gao, Nan; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2018-04-01

    Three-dimensional (3D) shape measurement of specular objects plays an important role in intelligent manufacturing applications. Phase measuring deflectometry (PMD)-based methods are widely used to obtain the 3D shapes of specular surfaces because they offer the advantages of a large dynamic range, high measurement accuracy, full-field and noncontact operation, and automatic data processing. To enable measurement of specular objects with discontinuous and/or isolated surfaces, a direct PMD (DPMD) method has been developed to build a direct relationship between phase and depth. In this paper, a new virtual measurement system is presented and is used to optimize the system parameters and evaluate the system's performance in DPMD applications. Four system parameters are analyzed to obtain accurate measurement results. Experiments are performed using simulated and actual data and the results confirm the effects of these four parameters on the measurement results. Researchers can therefore select suitable system parameters for actual DPMD (including PMD) measurement systems to obtain the 3D shapes of specular objects with high accuracy.

  9. Study on voids of epoxy matrix composites sandwich structure parts

    NASA Astrophysics Data System (ADS)

    He, Simin; Wen, Youyi; Yu, Wenjun; Liu, Hong; Yue, Cheng; Bao, Jing

    2017-03-01

    Voids are the most common small defects in composite materials, and porosity is closely related to the properties of composite structures. The void formation behaviour in composite sandwich structural parts with carbon-fiber-reinforced epoxy resin skins was investigated by adjusting the manufacturing process parameters. Composite laminates with different porosities were prepared using different process parameters. An ultrasonic non-destructive measurement method for the porosity was developed and verified through microscopic examination. The analysis results show that the compaction pressure during the manufacturing process influenced the porosity in the laminate area; increasing the compaction pressure and compaction time reduces the porosity of the laminates. The bond-line between the honeycomb core and the carbon-fiber-reinforced epoxy resin skins was also analyzed through microscopic examination, and the mechanical properties of the sandwich structure composites were studied. The optimized process parameters and the ultrasonic porosity measurement method for composite sandwich structures have been applied to the production of composite parts.

  10. Determination of the mobility profile in GaAs-MESFETs. Thesis

    NASA Technical Reports Server (NTRS)

    Prost, W.

    1985-01-01

    A process for measuring charge carrier mobility for gallium-arsenide metal semiconductor field effect transistors is described in an attempt to optimize the relationship between this factor and production. The measuring procedure allows an actual determination of local mobility in the channel. The physical basis for the process and features of the measuring room are outlined. The measuring technique is described and recommendations are made for setting measuring parameters.

  11. Uncertainty assessment and implications for data acquisition in support of integrated hydrologic models

    NASA Astrophysics Data System (ADS)

    Brunner, Philip; Doherty, J.; Simmons, Craig T.

    2012-07-01

    The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, measurements of vadose ET or soil moisture poorly constrain regional groundwater system forcing functions.

  12. Quantitative Experimental Study of Defects Induced by Process Parameters in the High-Pressure Die Cast Process

    NASA Astrophysics Data System (ADS)

    Sharifi, P.; Jamali, J.; Sadayappan, K.; Wood, J. T.

    2018-05-01

    A quantitative experimental study of the effects of process parameters on the formation of defects during solidification of high-pressure die cast magnesium alloy components is presented. The parameters studied are slow-stage velocity, fast-stage velocity, intensification pressure, and die temperature. The amounts of various defects are quantitatively characterized. Multiple runs of the commercial casting simulation package, ProCAST™, are used to model the mold-filling and solidification events. Several locations in the component, including knit lines, the last-to-fill region, and the last-to-solidify region, are identified as critical regions with a high concentration of defects. The area fractions of total porosity, shrinkage porosity, gas porosity, and externally solidified grains are measured separately. This study shows that the process parameters, fluid flow, and local solidification conditions play major roles in the formation of defects during the HPDC process.

  13. Temporal variations in parameters reflecting terminal-electron-accepting processes in an aquifer contaminated with waste fuel and chlorinated solvents

    USGS Publications Warehouse

    McGuire, Jennifer T.; Smith, Erik W.; Long, David T.; Hyndman, David W.; Haack, Sheridan K.; Klug, Michael J.; Velbel, Michael A.

    2000-01-01

    A fundamental issue in aquifer biogeochemistry is the means by which solute transport, geochemical processes, and microbiological activity combine to produce spatial and temporal variations in redox zonation. In this paper, we describe the temporal variability of TEAP conditions in shallow groundwater contaminated with both waste fuel and chlorinated solvents. TEAP parameters (including methane, dissolved iron, and dissolved hydrogen) were measured to characterize the contaminant plume over a 3-year period. We observed that concentrations of TEAP parameters changed on different time scales and appear to be related, in part, to recharge events. Changes in all TEAP parameters were observed on short time scales (months), and over a longer 3-year period. The results indicate that (1) interpretations of TEAP conditions in aquifers contaminated with a variety of organic chemicals, such as those with petroleum hydrocarbons and chlorinated solvents, must consider additional hydrogen-consuming reactions (e.g., dehalogenation); (2) interpretations must consider the roles of both in situ (at the sampling point) biogeochemical and solute transport processes; and (3) determinations of microbial communities are often necessary to confirm the interpretations made from geochemical and hydrogeological measurements on these processes.

  14. Monitoring and control of the biogas process based on propionate concentration using online VFA measurement.

    PubMed

    Boe, Kanokwan; Steyer, Jean-Philippe; Angelidaki, Irini

    2008-01-01

    Simple logic control algorithms were tested for automatic control of a lab-scale CSTR manure digester. Using an online VFA monitoring system, the propionate concentration in the reactor was used as the control parameter for the biogas process. The propionate concentration was kept below a threshold of 10 mM by manipulating the feed flow. Other online parameters such as pH, biogas production, total VFA, and other individual VFAs were also measured to examine process performance. The experimental results showed that a simple logic control can successfully prevent the reactor from overload, although with fluctuations of the propionate level due to the nature of the control approach. The fluctuation of the propionate concentration could be reduced by adding a lower feed flow limit to the control algorithm to prevent undershooting of the propionate response. It was found that using the biogas production as the main control parameter, rather than propionate, can give a more stable process, since propionate was very persistent and responded only slowly to decreases in the feed flow, which led to large fluctuations in biogas production. Propionate, however, was still an excellent parameter to indicate process stress under gradual overload and is thus recommended as an alarm in the control algorithm. Copyright IWA Publishing 2008.
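
    The control strategy described above is a simple threshold rule on the measured propionate concentration with a lower bound on the feed flow. The sketch below is one interpretation of that logic with assumed gains and limits; it is not the authors' controller code.

    ```python
    def update_feed_flow(propionate_mM, current_flow, *,
                         threshold_mM=10.0, step=0.1, min_flow=0.2, max_flow=2.0):
        """Simple logic control: reduce the feed flow while propionate exceeds the
        threshold, otherwise ramp it back up, never leaving the [min_flow, max_flow] band.

        The flow units, step size, and flow limits are assumptions for illustration only.
        """
        if propionate_mM > threshold_mM:
            new_flow = current_flow - step      # back off to relieve overload
        else:
            new_flow = current_flow + step      # recover throughput
        return min(max(new_flow, min_flow), max_flow)

    # Example: one control update with an online VFA reading of 12 mM
    print(update_feed_flow(12.0, current_flow=1.0))   # -> 0.9
    ```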

  15. Systems for monitoring and digitally recording water-quality parameters

    USGS Publications Warehouse

    Smoot, George F.; Blakey, James F.

    1966-01-01

    Digital recording of water-quality parameters is a link in the automated data collection and processing system of the U.S. Geological Survey. The monitoring and digital recording systems adopted by the Geological Survey, while punching all measurements on a standard paper tape, provide a choice of compatible components to construct a system to meet specific physical problems and data needs. As many as 10 parameters can be recorded by an instrument, with the only limiting criterion being that measurements are expressed as electrical signals.

  16. Automated egg grading system using computer vision: Investigation on weight measure versus shape parameters

    NASA Astrophysics Data System (ADS)

    Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul

    2018-04-01

    Chicken eggs are a food product in high demand. Human operators cannot work perfectly and continuously when grading eggs. Instead of an egg grading system based on weight, an automatic system using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that more egg classes change when egg shape parameters are used rather than the weight measure. This paper presents a comparison of egg classification by the two above-mentioned methods. First, 120 images of chicken eggs of various grades (A–D) produced in Malaysia were captured. The egg images were then processed using image pre-processing techniques such as cropping, smoothing, and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter, and perimeter, were extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) were performed, and a k-nearest neighbour classifier was used in the classification process. Two methods, namely supervised learning (using the weight measure as graded by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), were used in the experiment. The clustering results reveal many changes in egg classes after shape-based grading. On average, the best recognition result using shape-based grading labels is 94.16%, while that using weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision performs better with shape-based features, since it works on images, whereas the weight parameter is more suitable for a weight-based grading system.
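
    The classification step described above is a standard k-nearest neighbour pipeline over extracted shape features. The following sketch reproduces that general flow with scikit-learn; the feature array, labels, and parameter choices (k, number of PCA components) are illustrative assumptions, not the values used in the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: one row per egg with shape features (area, axis lengths, volume, diameter, perimeter, ...)
    # y: grade labels "A".."D" (synthetic stand-ins here, not real egg data)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 8))
    y = rng.choice(list("ABCD"), size=120)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Scale, reduce with PCA, then classify with k-NN (k=5 assumed)
    model = make_pipeline(StandardScaler(), PCA(n_components=4), KNeighborsClassifier(n_neighbors=5))
    model.fit(X_train, y_train)
    print(f"hold-out accuracy: {model.score(X_test, y_test):.2%}")
    ```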

  17. Evolution of process control parameters during extended co-composting of green waste and solid fraction of cattle slurry to obtain growing media.

    PubMed

    Cáceres, Rafaela; Coromina, Narcís; Malińska, Krystyna; Marfà, Oriol

    2015-03-01

    This study aimed to monitor process parameters when two by-products (green waste - GW, and the solid fraction of cattle slurry - SFCS) were composted to obtain growing media. Using compost in growing medium mixtures involves prolonged composting processes that can last at least half a year. It is therefore crucial to study the parameters that affect compost stability as measured in the field in order to shorten the composting process at composting facilities. Two mixtures were prepared: GW25 (25% GW and 75% SFCS, v/v) and GW75 (75% GW and 25% SFCS, v/v). The different raw mixtures resulted in the production of two different growing media, and the evolution of the process management parameters differed. A new parameter is proposed to characterize the attainment and maintenance of the thermophilic temperature range during composting; it would be useful not only for optimizing composting processes but also for assessing the degree of hygienization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Measurements of characteristic parameters of extremely small cogged wheels with low module by means of low-coherence interferometry

    NASA Astrophysics Data System (ADS)

    Pakula, Anna; Tomczewski, Slawomir; Skalski, Andrzej; Biało, Dionizy; Salbut, Leszek

    2010-05-01

    This paper presents a novel application of low-coherence interferometry (LCI) to the measurement of characteristic parameters such as circular pitch, foot diameter, and head diameter in extremely small cogged wheels (wheel diameter below 3 mm and module m = 0.15) produced from metal and ceramics. The most interesting issues concerning small-diameter cogged wheels arise during their production: the characteristic parameters of a wheel depend strongly on the manufacturing process, and when inspecting small-diameter wheels the shrinkage during casting varies with even slight changes in the fabrication process. The paper describes an LCI Twyman-Green interferometric setup with a pigtailed high-power light-emitting diode for cogged wheel measurement. Thanks to its relatively large field of view, the whole wheel can be examined in one measurement, without the need for numerical stitching. A dedicated binarization algorithm was developed and successfully applied to the measurement of the characteristic parameters of small cogged wheels. Finally, the head and foot diameters of two cogged wheels measured with the proposed LCI setup are presented and compared with results obtained with a commercial optical profiler. The injection moulds used to fabricate the measured cogged wheels were also examined. Additionally, the shrinkage of the cogged wheels is calculated from the obtained results. The proposed method is suitable for complex measurements of small-diameter cogged wheels with a low module, especially as no measurement standards exist for such objects.

  19. Attitude determination of a high altitude balloon system. Part 2: Development of the parameter determination process

    NASA Technical Reports Server (NTRS)

    Nigro, N. J.; Elkouh, A. F.

    1975-01-01

    The attitude of the balloon system can be determined as a function of time if (a) a method for simulating the motion of the system is available, and (b) the initial state is known. The initial state is obtained by fitting the system motion (as measured by sensors) to the corresponding output predicted by the mathematical model. In the case of the LACATE experiment, the sensors consisted of three orthogonally oriented rate gyros and a magnetometer, all mounted on the research platform. The initial state was obtained by fitting the angular velocity components measured with the gyros to the corresponding values obtained from the solution of the math model. A block diagram illustrating the attitude determination process employed for the LACATE experiment is shown. The process consists of three essential parts: a process for simulating the balloon system, an instrumentation system for measuring the output, and a parameter estimation process for systematically and efficiently solving for the initial state. Results are presented and discussed.

  20. Exact solutions for kinetic models of macromolecular dynamics.

    PubMed

    Chemla, Yann R; Moffitt, Jeffrey R; Bustamante, Carlos

    2008-05-15

    Dynamic biological processes such as enzyme catalysis, molecular motor translocation, and protein and nucleic acid conformational dynamics are inherently stochastic processes. However, when such processes are studied on a nonsynchronized ensemble, the inherent fluctuations are lost, and only the average rate of the process can be measured. With the recent development of methods of single-molecule manipulation and detection, it is now possible to follow the progress of an individual molecule, measuring not just the average rate but the fluctuations in this rate as well. These fluctuations can provide a great deal of detail about the underlying kinetic cycle that governs the dynamical behavior of the system. However, extracting this information from experiments requires the ability to calculate the general properties of arbitrarily complex theoretical kinetic schemes. We present here a general technique that determines the exact analytical solution for the mean velocity and for measures of the fluctuations. We adopt a formalism based on the master equation and show how the probability density for the position of a molecular motor at a given time can be solved exactly in Fourier-Laplace space. With this analytic solution, we can then calculate the mean velocity and fluctuation-related parameters, such as the randomness parameter (a dimensionless ratio of the diffusion constant and the velocity) and the dwell time distributions, which fully characterize the fluctuations of the system, both commonly used kinetic parameters in single-molecule measurements. Furthermore, we show that this formalism allows calculation of these parameters for a much wider class of general kinetic models than demonstrated with previous methods.

  1. Reconstruction of atmospheric pollutant concentrations from remote sensing data - An application of distributed parameter observer theory

    NASA Technical Reports Server (NTRS)

    Koda, M.; Seinfeld, J. H.

    1982-01-01

    The reconstruction of a concentration distribution from spatially averaged and noise-corrupted data is a central problem in processing atmospheric remote sensing data. Distributed parameter observer theory is used to develop reconstructibility conditions for distributed parameter systems having measurements typical of those in remote sensing. The relation of the reconstructibility condition to the stability of the distributed parameter observer is demonstrated. The theory is applied to a variety of remote sensing situations, and it is found that those in which concentrations are measured as a function of altitude satisfy the conditions of distributed state reconstructibility.

  2. Measurement of drill grinding parameters using laser sensor

    NASA Astrophysics Data System (ADS)

    Yanping, Peng; Kumehara, Hiroyuki; Wei, Zhang; Nomura, Takashi

    2005-12-01

    Accurate measurement of the grinding and geometry parameters of a drill point is essential to its design and reconditioning. In recent years, a number of non-contact coordinate measuring apparatuses using CCD cameras or laser sensors have been developed, but much work remains to be done for further improvement. This paper reports another kind of laser coordinate meter. As an example of its application, the method for geometry inspection of the drill flank surface is detailed. Data measured by laser scanning of the flank surface around selected points, along several two-dimensional curves, are analyzed with a mathematical procedure. If one of these curves turns out to be a straight line, it must be a generatrix of the grinding cone; the grinding parameters are thus determined by a set of three generatrices. The measurement method and data processing procedure are then proposed, and their validity is assessed by measuring a sample with known parameters. The measured point geometry agrees well with the known values. In comparison with other methods in the published literature, the approach is simpler in computation and more accurate in its results.
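
    The key step above is deciding whether a scanned 2D curve is a straight line, and therefore a candidate generatrix of the grinding cone. A minimal sketch of that test via a least-squares line fit is given below; the residual tolerance is an assumed value, not one from the paper.

    ```python
    import numpy as np

    def is_straight_line(points_2d, tol=0.005):
        """Fit y = a*x + b to scanned (x, y) points and test the RMS residual.

        `tol` is an assumed flatness tolerance in the same units as the scan data.
        Returns (flag, (a, b)) so accepted curves can be reused as cone generatrices.
        """
        pts = np.asarray(points_2d, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([x, np.ones_like(x)])
        (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        rms = np.sqrt(np.mean((y - (a * x + b)) ** 2))
        return rms < tol, (a, b)

    # Example: a nearly straight scan line is accepted
    scan = np.column_stack([np.linspace(0, 1, 50), 0.3 * np.linspace(0, 1, 50) + 0.001])
    print(is_straight_line(scan))
    ```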

  3. Distribution and avoidance of debris on epoxy resin during UV ns-laser scanning processes

    NASA Astrophysics Data System (ADS)

    Veltrup, Markus; Lukasczyk, Thomas; Ihde, Jörg; Mayer, Bernd

    2018-05-01

    In this paper the distribution of debris generated by a nanosecond UV laser (248 nm) on epoxy resin and the prevention of the corresponding re-deposition effects by parameter selection for a ns-laser scanning process were investigated. In order to understand the mechanisms behind the debris generation, in-situ particle measurements were performed during laser treatment. These measurements enabled the determination of the ablation threshold of the epoxy resin as well as the particle density and size distribution in relation to the applied laser parameters. The experiments showed that it is possible to reduce debris on the surface with an adapted selection of pulse overlap with respect to laser fluence. A theoretical model for the parameter selection was developed and tested. Based on this model, the correct choice of laser parameters with reduced laser fluence resulted in a surface without any re-deposited micro-particles.
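
    The parameter selection discussed above hinges on the relationship between pulse overlap and fluence in a scanned ns-laser process. The sketch below uses the common geometric definitions of pulse overlap and average spot fluence; it is a generic illustration with assumed example values, not the authors' model.

    ```python
    import math

    def pulse_overlap(scan_speed_mm_s, rep_rate_hz, spot_diameter_um):
        """Fractional overlap of successive pulses along the scan direction:
        overlap = 1 - v / (f * d), clipped to [0, 1]."""
        pitch_um = scan_speed_mm_s * 1e3 / rep_rate_hz        # pulse-to-pulse spacing
        return max(0.0, min(1.0, 1.0 - pitch_um / spot_diameter_um))

    def average_fluence(pulse_energy_mj, spot_diameter_um):
        """Average fluence in J/cm^2 over a circular spot of the given diameter."""
        area_cm2 = math.pi * (spot_diameter_um * 1e-4 / 2.0) ** 2
        return pulse_energy_mj * 1e-3 / area_cm2

    # Example: 500 mm/s scan, 10 kHz repetition rate, 50 um spot, 0.2 mJ pulses (assumed values)
    print(pulse_overlap(500, 10_000, 50), average_fluence(0.2, 50))
    ```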

  4. Standardization of domestic frying processes by an engineering approach.

    PubMed

    Franke, K; Strijowski, U

    2011-05-01

    An approach was developed to enable better standardization of the domestic frying of potato products. For this purpose, five domestic fryers differing in heating power and oil capacity were used. A well-defined frying process using a highly standardized model product and a broad range of frying conditions was carried out in these fryers, and the development of browning, an important quality parameter, was measured. The product-to-oil ratio, oil temperature, and frying time were varied. Quite different color changes were measured in the different fryers even though the same frying process parameters were applied. The specific energy consumption for water evaporation (spECWE) during frying, related to product amount, was determined for all frying processes to define an engineering parameter characterizing the frying process. A quasi-linear regression approach was applied to calculate this parameter from the frying process settings and fryer properties. The high significance of the regression coefficients and a coefficient of determination close to unity confirmed the suitability of this approach. Based on this regression equation, curves for standard frying conditions (SFC curves) were calculated, which describe the frying conditions required to obtain the same level of spECWE in the different domestic fryers. Comparison of browning results from the different fryers operated at conditions near the SFC curves confirmed the applicability of the approach. © 2011 Institute of Food Technologists®

  5. Process and apparatus for measuring degree of polarization and angle of major axis of polarized beam of light

    DOEpatents

    Decker, Derek E.; Toeppen, John S.

    1994-01-01

    Apparatus and process are disclosed for calibrating measurements of the phase of the polarization of a polarized beam and the angle of the polarized optical beam's major axis of polarization at a diagnostic point with measurements of the same parameters at a point of interest along the polarized beam path prior to the diagnostic point. The process is carried out by measuring the phase angle of the polarization of the beam and angle of the major axis at the point of interest, using a rotatable polarizer and a detector, and then measuring these parameters again at a diagnostic point where a compensation apparatus, including a partial polarizer, which may comprise a stack of glass plates, is disposed normal to the beam path between a rotatable polarizer and a detector. The partial polarizer is then rotated both normal to the beam path and around the axis of the beam path until the detected phase of the beam polarization equals the phase measured at the point of interest. The rotatable polarizer at the diagnostic point may then be rotated manually to determine the angle of the major axis of the beam and this is compared with the measured angle of the major axis of the beam at the point of interest during calibration. Thereafter, changes in the polarization phase, and in the angle of the major axis, at the point of interest can be monitored by measuring the changes in these same parameters at the diagnostic point.

  6. The aging process of optical couplers by gamma irradiation

    NASA Astrophysics Data System (ADS)

    Bednarek, Lukas; Marcinka, Ondrej; Perecar, Frantisek; Papes, Martin; Hajek, Lukas; Nedoma, Jan; Vasinek, Vladimir

    2015-08-01

    Scientists have recently discovered that the ageing of optical elements proceeds faster than originally anticipated. This is mostly due to the manifold increase in optical power in optical components, the introduction of wavelength division multiplexers, and, overall, the increased traffic in optical communications. This article examines the ageing process of optical couplers, focusing on their performance parameters. It describes the measurement procedure, followed by an evaluation of the measurement results. To accelerate the ageing process, gamma irradiation from 60Co was used. The results of the measurements of an optical coupler with one input and eight outputs (1:8) were summarized, as were the results for a coupler with one input and four outputs (1:4) and for couplers with one input and two outputs (1:2) with different split ratios. The optical powers were measured at the input and at the outputs of each branch of each optical coupler at wavelengths of 1310 nm and 1550 nm. The parameters of the optical couplers were subsequently calculated according to the appropriate formulas: the insertion loss of the individual branches, the split ratio, the total loss, the homogeneity of the losses, and the directionality, i.e., cross-talk between the individual output branches. The data were summarized before and after the first irradiation for the 1:8 and 1:4 coupler configurations, and after the third irradiation for the 1:2 configuration.
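
    The coupler parameters listed above follow directly from the measured input and per-branch output powers. A minimal sketch of those standard definitions is given below; the variable names and example powers are illustrative.

    ```python
    import math

    def coupler_parameters(p_in_mw, p_out_mw):
        """Compute standard coupler figures from the input power and per-branch output powers.

        insertion loss (dB):  IL_i = 10*log10(P_in / P_out_i)
        total loss (dB):      TL   = 10*log10(P_in / sum(P_out))
        split ratio (%):      SR_i = P_out_i / sum(P_out) * 100
        homogeneity (dB):     max(IL) - min(IL)
        """
        il = [10.0 * math.log10(p_in_mw / p) for p in p_out_mw]
        total_loss = 10.0 * math.log10(p_in_mw / sum(p_out_mw))
        split = [100.0 * p / sum(p_out_mw) for p in p_out_mw]
        homogeneity = max(il) - min(il)
        return il, total_loss, split, homogeneity

    # Example for a 1:2 coupler (assumed powers in mW)
    print(coupler_parameters(1.0, [0.45, 0.43]))
    ```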

  7. The effect of orientation difference in fused deposition modeling of ABS polymer on the processing time, dimension accuracy, and strength

    NASA Astrophysics Data System (ADS)

    Tanoto, Yopi Y.; Anggono, Juliana; Siahaan, Ian H.; Budiman, Wesley

    2017-01-01

    Several parameters must be set before manufacturing a product with 3D printing, including the deposition orientation of the product, the type of material, the fill pattern, the fill density, and other parameters. The finished 3D-printed product has several responses that can be observed, measured, and tested, among them the processing time, the dimensions of the end product, its surface roughness, and its mechanical properties, i.e., yield strength, ultimate tensile strength, and impact resistance. This research was conducted to study the relationship between the process parameters of a 3D printing machine using fused deposition modeling (FDM) technology and the generated responses. The material used was ABS plastic, which is commonly used in industry. Understanding the relationship between the parameters and the responses allows the resulting product to be manufactured to meet user needs. Three different orientations for depositing the ABS polymer, named XY (first orientation), YX (second orientation), and ZX (third orientation), were studied. Processing time, dimensional accuracy, and product strength were the responses measured and tested. The study reports that printing with the third orientation was fastest, with a processing time of 2432 seconds, followed by the first and second orientations with processing times of 2688 and 2780 seconds, respectively. Dimensional accuracy was assessed from the width and length of the gauge area of the printed tensile test specimens in comparison with the dimensions required by ASTM 638-02. The smallest deviation was found in the thickness dimension: the sample printed with the second orientation was 0.1 mm thicker than required by the standard. The smallest deviation in the width dimension was measured for a sample printed with the first orientation (0.13 mm). For the length dimension, the value closest to the standard resulted from the third-orientation product, i.e., a deviation of 0.2 mm. Tensile tests on the specimens produced with the three orientations show that the highest tensile strength was obtained for the second-orientation sample, 7.66 MPa, followed by the first- and third-orientation products with 6.8 MPa and 3.31 MPa, respectively.

  8. Use of scatterometry for resist process control

    NASA Astrophysics Data System (ADS)

    Bishop, Kenneth P.; Milner, Lisa-Michelle; Naqvi, S. Sohail H.; McNeil, John R.; Draper, B. L.

    1992-06-01

    The formation of resist lines having submicron critical dimensions (CDs) is a complex multistep process requiring precise control of each processing step. Optimization of the parameters for each processing step may be accomplished through theoretical modeling techniques and/or the use of send-ahead wafers followed by scanning electron microscope measurements. Once the optimum parameters for a process have been selected (e.g., time and temperature for the post-exposure bake), no in-situ CD measurements are made. In this paper we describe the use of scatterometry to provide this essential metrology capability. It involves focusing a laser beam on a periodic grating and predicting the shape of the grating lines from a measurement of the scattered power in the diffraction orders. The inverse prediction of lineshape from a measurement of the scattered power is based on a vector diffraction analysis used in conjunction with photolithography simulation tools to provide an accurate scatter model for latent image gratings. This diffraction technique has previously been applied to observing latent image grating formation as exposure takes place. We have broadened the scope of the application and consider the problem of determining optimal focus.

  9. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    NASA Technical Reports Server (NTRS)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
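
    In a dual estimation scheme like the one described, the parameter filter treats the slowly varying parameters as a random walk, so its prediction step only inflates the covariance and its update step corrects the parameters with the measurement residual. The sketch below is a generic, linearized illustration of the parameter filter alone (the state filter is omitted); the measurement model, its Jacobian, and the noise covariances are assumed placeholders rather than the authors' implementation.

    ```python
    import numpy as np

    def parameter_filter_step(theta, P, z, h, H_jac, Q, R):
        """One predict/update cycle for a random-walk parameter filter.

        theta : current parameter estimate (e.g. neuromuscular gains, time delay)
        P     : parameter covariance
        z     : new measurement sample (e.g. operator control output)
        h     : callable mapping parameters to the predicted measurement
        H_jac : callable returning the Jacobian dh/dtheta at theta
        Q, R  : random-walk and measurement noise covariances (tuning choices)
        """
        theta = np.asarray(theta, dtype=float)
        # Predict: a random walk leaves the estimate unchanged and inflates uncertainty
        P_pred = P + Q
        # Update: standard EKF correction with the measurement residual
        H = H_jac(theta)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        theta_new = theta + K @ (z - h(theta))
        P_new = (np.eye(len(theta)) - K @ H) @ P_pred
        return theta_new, P_new
    ```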

  10. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, because the methodologies for such characterizations have not been standardized, the results of different studies cannot be compared directly. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of the hand in posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and white images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results show that the image modifications considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA = 0 image. This suggests that MUSICA values, as a post-processing step, affect the image when image quality is evaluated. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
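
    The DQE referenced throughout is computed from the measured MTF, the normalized NPS, and the incident photon fluence. The relation below is a commonly used form, stated here as background since the abstract itself does not reproduce the formula:

    ```latex
    \mathrm{DQE}(f) = \frac{\mathrm{MTF}^{2}(f)}{q\,\mathrm{NNPS}(f)},
    \qquad
    \mathrm{NNPS}(f) = \frac{\mathrm{NPS}(f)}{\bar{S}^{2}}
    ```

    where q is the incident photon fluence per unit area for the RQA5 beam quality, \bar{S} is the mean large-area signal, and NNPS is the noise power spectrum normalized by the squared mean signal.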

  11. Tomographical process monitoring of laser transmission welding with OCT

    NASA Astrophysics Data System (ADS)

    Ackermann, Philippe; Schmitt, Robert

    2017-06-01

    Process control of laser processes still encounters many obstacles. Although these processes are stable, narrow process parameter windows and process deviations have increased the requirements on the process itself and on monitoring devices. Laser transmission welding, as a contactless and locally limited joining technique, is well established in a variety of demanding production areas. For example, sensitive parts demand a particle-free joining technique that does not affect the inner components. Inline-integrated, non-destructive optical measurement systems capable of providing non-invasive tomographical images of the transparent material, the weld seam, and its surrounding areas with micron resolution would improve the overall process. The obtained measurement data enable qualitative feedback into the system to adapt parameters for a more robust process. In this paper we present an inline monitoring device based on Fourier-domain optical coherence tomography developed within the European-funded research project "Manunet Weldable". This device, after adaptation to the laser transmission welding process, is optically and mechanically integrated into the existing laser system. The main target is inline process control aimed at extracting tomographical geometric measurement data from the weld seam formation process. Use of this technology makes offline destructive testing of produced parts obsolete.

  12. Assessment of Process Capability: the case of Soft Drinks Processing Unit

    NASA Astrophysics Data System (ADS)

    Sri Yogi, Kottala

    2018-03-01

    Process capability studies have a significant impact on the investigation of process variation, which is important in achieving product quality characteristics. Process capability indices measure the inherent variability of a process and thus help to improve process performance radically. The main objective of this paper is to understand the capability of the process at a soft drinks processing unit, one of the premier brands marketed in India, to produce within specification. A few selected critical parameters in soft drinks processing were considered for this study: concentration of gas volume, concentration of Brix, and torque of crock. Relevant statistical measures were assessed from a process capability indices perspective: short-term capability and long-term capability. Real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India, were used for the assessment. The analysis suggested reasons for variations in the process, which were validated using ANOVA; a Taguchi loss function was also fitted and the predicted waste was expressed in monetary terms, which the organization can use to improve its process parameters. This research work has substantially benefitted the organization in understanding the variation of the selected critical parameters with the aim of achieving zero rejection.
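
    Short- and long-term capability assessments of the kind described above usually reduce to the standard Cp and Cpk indices computed from the specification limits and the process mean and spread. The following sketch shows those textbook formulas; the specification limits and sample data are assumed for illustration only.

    ```python
    import numpy as np

    def capability_indices(samples, lsl, usl):
        """Cp = (USL - LSL) / (6*sigma);  Cpk = min(USL - mean, mean - LSL) / (3*sigma)."""
        x = np.asarray(samples, dtype=float)
        # Overall sample sigma (long-term view); short-term capability would use within-subgroup sigma
        mu, sigma = x.mean(), x.std(ddof=1)
        cp = (usl - lsl) / (6.0 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
        return cp, cpk

    # Example with assumed gas-volume specification limits and simulated measurements
    rng = np.random.default_rng(42)
    data = rng.normal(loc=3.8, scale=0.05, size=100)
    print(capability_indices(data, lsl=3.6, usl=4.0))
    ```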

  13. Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.

    PubMed

    Munir, Mohammad

    2018-06-01

    Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that insulin measurements taken beyond 62 min after administration of the glucose bolus into the experimental subject's body possess no information about the parameter estimates. The glucose measurements possess information about the parameter estimates for up to three hours. These observations have been verified by parameter estimation with the minimal model; the standard errors of the estimates and a crude Monte Carlo process also confirm them. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Analysis of Generator Oscillation Characteristics Based on Multiple Synchronized Phasor Measurements

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Takuhei; Yoshimoto, Masamichi; Mitani, Yasunori; Saeki, Osamu; Tsuji, Kiichiro

    In recent years, there has been considerable interest in on-line measurement, such as the observation of power system dynamics and the evaluation of machine parameters. On-line methods are particularly attractive since the machine's service need not be interrupted and parameter estimation is performed by processing measurements obtained during the normal operation of the machine. The authors placed PMUs (Phasor Measurement Units) connected to 100 V outlets at several universities in the 60 Hz power system and examined the oscillation characteristics of the power system. The PMUs are synchronized via the global positioning system (GPS), and the measured data are transmitted over the Internet. This paper describes an application of PMUs to generator oscillation analysis. The purpose of this paper is to present methods for processing the phase difference and to estimate the damping coefficient and natural angular frequency from the phase difference at steady state.

  15. Optimization of Surface Roughness Parameters of Al-6351 Alloy in EDC Process: A Taguchi Coupled Fuzzy Logic Approach

    NASA Astrophysics Data System (ADS)

    Kar, Siddhartha; Chakraborty, Sujoy; Dey, Vidyut; Ghosh, Subrata Kumar

    2017-10-01

    This paper investigates the application of the Taguchi method with fuzzy logic for multi-objective optimization of roughness parameters in the electro discharge coating process of Al-6351 alloy with a powder-metallurgically compacted SiC/Cu tool. A Taguchi L16 orthogonal array was employed to investigate the roughness parameters by varying tool parameters such as composition and compaction load and electro discharge machining parameters such as pulse-on time and peak current. Crucial roughness parameters, namely the centre line average roughness, the average maximum height of the profile, and the mean spacing of local peaks of the profile, were measured on the coated specimens. The signal-to-noise ratios were fuzzified to optimize the roughness parameters through a single comprehensive output measure (COM). The best COM was obtained with lower values of compaction load, pulse-on time, and current, and a 30:70 (SiC:Cu) tool composition. Analysis of variance was carried out, and a significant COM model was observed, with peak current yielding the highest contribution, followed by pulse-on time, compaction load, and composition. The deposited layer was characterised by X-ray diffraction analysis, which confirmed the presence of tool materials on the workpiece surface.
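
    Since surface roughness is a smaller-the-better response, the signal-to-noise ratios that feed the fuzzification step above are normally computed with the standard Taguchi formula S/N = -10 log10((1/n) Σ y_i²). A minimal sketch with assumed replicate values follows.

    ```python
    import numpy as np

    def sn_smaller_the_better(replicates):
        """Taguchi S/N ratio (dB) for a smaller-the-better response such as roughness."""
        y = np.asarray(replicates, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    # Example: centre line average roughness (um) measured twice for one L16 run (assumed values)
    print(f"S/N = {sn_smaller_the_better([2.4, 2.6]):.2f} dB")
    ```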

  16. PMMA/PS coaxial electrospinning: a statistical analysis on processing parameters

    NASA Astrophysics Data System (ADS)

    Rahmani, Shahrzad; Arefazar, Ahmad; Latifi, Masoud

    2017-08-01

    Coaxial electrospinning, as a versatile method for producing core-shell fibers, is known to be very sensitive to two classes of influential factors including material and processing parameters. Although coaxial electrospinning has been the focus of many studies, the effects of processing parameters on the outcomes of this method have not yet been well investigated. A good knowledge of the impacts of processing parameters and their interactions on coaxial electrospinning can make it possible to better control and optimize this process. Hence, in this study, the statistical technique of response surface method (RSM) using the design of experiments on four processing factors of voltage, distance, core and shell flow rates was applied. Transmission electron microscopy (TEM), scanning electron microscopy (SEM), oil immersion and Fluorescent microscopy were used to characterize fiber morphology. The core and shell diameters of fibers were measured and the effects of all factors and their interactions were discussed. Two polynomial models with acceptable R-squares were proposed to describe the core and shell diameters as functions of the processing parameters. Voltage and distance were recognized as the most significant and influential factors on shell diameter, while core diameter was mainly under the influence of core and shell flow rates besides the voltage.

  17. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.

  18. Reflow dynamics of thin patterned viscous films

    NASA Astrophysics Data System (ADS)

    Leveder, T.; Landis, S.; Davoust, L.

    2008-01-01

    This letter presents a study of the viscous smoothening dynamics of a nanopatterned thin film. Ultrathin-film manufacturing processes appear to be a key point of nanotechnology engineering, and numerous studies have recently been conducted to identify the driving parameters of this transient surface motion, focusing on time-scale accuracy methods. Based on a nanomechanical analysis, this letter shows that controlled shape measurements provide much more detailed information about the reflow mechanism. Control of the reflow process for any complex surface shape, or measurement of material parameters such as thin-film viscosity, free surface energy, or even the Hamaker constant, is therefore possible.

  19. Information Use Differences in Hot and Cold Risk Processing: When Does Information About Probability Count in the Columbia Card Task?

    PubMed

    Markiewicz, Łukasz; Kubińska, Elżbieta

    2015-01-01

    This paper aims to provide insight into information processing differences between hot and cold risk taking decision tasks within a single domain. Decision theory defines risky situations using at least three parameters: outcome one (often a gain) with its probability and outcome two (often a loss) with a complementary probability. Although a rational agent should consider all of the parameters, s/he could potentially narrow their focus to only some of them, particularly when explicit Type 2 processes do not have the resources to override implicit Type 1 processes. Here we investigate differences in risky situation parameters' influence on hot and cold decisions. Although previous studies show lower information use in hot than in cold processes, they do not provide decision weight changes and therefore do not explain whether this difference results from worse concentration on each parameter of a risky situation (probability, gain amount, and loss amount) or from ignoring some parameters. Two studies were conducted, with participants performing the Columbia Card Task (CCT) in either its Cold or Hot version. In the first study, participants also performed the Cognitive Reflection Test (CRT) to monitor their ability to override Type 1 processing cues (implicit processes) with Type 2 explicit processes. Because hypothesis testing required comparison of the relative importance of risky situation decision weights (gain, loss, probability), we developed a novel way of measuring information use in the CCT by employing a conjoint analysis methodology. Across the two studies, results indicated that in the CCT Cold condition decision makers concentrate on each information type (gain, loss, probability), but in the CCT Hot condition they concentrate mostly on a single parameter: probability of gain/loss. We also show that an individual's CRT score correlates with information use propensity in cold but not hot tasks. Thus, the affective dimension of hot tasks inhibits correct information processing, probably because it is difficult to engage Type 2 processes in such circumstances. Individuals' Type 2 processing abilities (measured by the CRT) assist greater use of information in cold tasks but do not help in hot tasks.

  20. Information Use Differences in Hot and Cold Risk Processing: When Does Information About Probability Count in the Columbia Card Task?

    PubMed Central

    Markiewicz, Łukasz; Kubińska, Elżbieta

    2015-01-01

    Objective: This paper aims to provide insight into information processing differences between hot and cold risk taking decision tasks within a single domain. Decision theory defines risky situations using at least three parameters: outcome one (often a gain) with its probability and outcome two (often a loss) with a complementary probability. Although a rational agent should consider all of the parameters, s/he could potentially narrow their focus to only some of them, particularly when explicit Type 2 processes do not have the resources to override implicit Type 1 processes. Here we investigate differences in risky situation parameters' influence on hot and cold decisions. Although previous studies show lower information use in hot than in cold processes, they do not provide decision weight changes and therefore do not explain whether this difference results from worse concentration on each parameter of a risky situation (probability, gain amount, and loss amount) or from ignoring some parameters. Methods: Two studies were conducted, with participants performing the Columbia Card Task (CCT) in either its Cold or Hot version. In the first study, participants also performed the Cognitive Reflection Test (CRT) to monitor their ability to override Type 1 processing cues (implicit processes) with Type 2 explicit processes. Because hypothesis testing required comparison of the relative importance of risky situation decision weights (gain, loss, probability), we developed a novel way of measuring information use in the CCT by employing a conjoint analysis methodology. Results: Across the two studies, results indicated that in the CCT Cold condition decision makers concentrate on each information type (gain, loss, probability), but in the CCT Hot condition they concentrate mostly on a single parameter: probability of gain/loss. We also show that an individual's CRT score correlates with information use propensity in cold but not hot tasks. Thus, the affective dimension of hot tasks inhibits correct information processing, probably because it is difficult to engage Type 2 processes in such circumstances. Individuals' Type 2 processing abilities (measured by the CRT) assist greater use of information in cold tasks but do not help in hot tasks. PMID:26635652

  1. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
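
    The core of a GPT model, a finite mixture whose weights come from processing-tree branch probabilities, can be sketched as follows. The two-branch tree, the Gaussian response-time components, and all parameter values are illustrative assumptions, not the specification used by the authors.

```python
import numpy as np
from scipy.stats import norm

def gpt_correct_rt_density(rt, d=0.6, g=0.5, mu=(0.9, 1.4), sigma=(0.2, 0.3)):
    """Density of response times for correct responses in a two-branch tree.

    A correct response arises either from detection (prob d) or from a lucky
    guess (prob (1 - d) * g); conditional on being correct, the mixture weights
    are the normalized branch probabilities, and each branch contributes its own
    Gaussian RT component (illustrative parameter values).
    """
    p_detect, p_guess = d, (1.0 - d) * g
    w = np.array([p_detect, p_guess]) / (p_detect + p_guess)
    return w[0] * norm.pdf(rt, mu[0], sigma[0]) + w[1] * norm.pdf(rt, mu[1], sigma[1])

print(gpt_correct_rt_density(np.array([0.8, 1.1, 1.6])))
```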

  2. Stochastic Modeling and Analysis of Multiple Nonlinear Accelerated Degradation Processes through Information Fusion

    PubMed Central

    Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao

    2016-01-01

    Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product’s performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner’s ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters. PMID:27509499

  3. Stochastic Modeling and Analysis of Multiple Nonlinear Accelerated Degradation Processes through Information Fusion.

    PubMed

    Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao

    2016-08-06

    Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product's performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner's ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters.
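
    The two ingredients of the model described above, nonlinear (time-power drift) Wiener degradation paths and copula-induced dependence between performance parameters, can be sketched with a small simulation. The drift, diffusion, and correlation values are illustrative assumptions, and with Gaussian increments the Gaussian copula reduces to drawing correlated normals; this is not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dependent_wiener(n_steps=100, dt=1.0, rho=0.6,
                              drift=(0.05, 0.03), power=(1.2, 0.8),
                              sigma=(0.10, 0.08)):
    """Two s-dependent nonlinear (time-power drift) Wiener degradation paths.

    Dependence between the two performance parameters is induced through a
    Gaussian copula; with Gaussian increments this amounts to drawing
    correlated standard normals via a Cholesky factor (illustrative values).
    """
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    drift, power, sigma = map(np.asarray, (drift, power, sigma))
    level = np.zeros(2)
    path = np.empty((n_steps, 2))
    for k in range(1, n_steps + 1):
        t_prev, t_now = (k - 1) * dt, k * dt
        d_lambda = t_now ** power - t_prev ** power   # increment of the time-scale function
        eps = L @ rng.standard_normal(2)              # correlated innovations (Gaussian copula)
        level = level + drift * d_lambda + sigma * np.sqrt(dt) * eps
        path[k - 1] = level
    return path

print(simulate_dependent_wiener()[-1])   # final degradation of both parameters
```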

  4. Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor

    NASA Astrophysics Data System (ADS)

    Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping

    2017-05-01

    To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method enjoys high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial scenes.

  5. Analysis of pressure-flow data in terms of computer-derived urethral resistance parameters.

    PubMed

    van Mastrigt, R; Kranse, M

    1995-01-01

    The simultaneous measurement of detrusor pressure and flow rate during voiding is at present the only way to measure or grade infravesical obstruction objectively. Numerous methods have been introduced to analyze the resulting data. These methods differ in aim (measurement of urethral resistance and/or diagnosis of obstruction), method (manual versus computerized data processing), theory or model used, and resolution (continuously variable parameters or a limited number of classes, the so-called nomogram). In this paper, some aspects of these fundamental differences are discussed and illustrated. Subsequently, the properties and clinical performance of two computer-based methods for deriving continuous urethral resistance parameters are treated.

  6. On selecting a prior for the precision parameter of Dirichlet process mixture models

    USGS Publications Warehouse

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
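
    The sensitivity of clustering inferences to the precision parameter can be made concrete through the standard result that the expected number of distinct clusters among n observations from a DP(α, G0) mixture is E[K] = sum_{i=1..n} α/(α + i - 1). A short sketch of this relationship (illustrative only, not the paper's prior construction):

```python
import numpy as np

def expected_clusters(alpha, n):
    """Expected number of distinct clusters among n draws from a DP(alpha, G0)."""
    i = np.arange(n)
    return np.sum(alpha / (alpha + i))

for alpha in (0.1, 1.0, 5.0, 20.0):
    print(f"alpha = {alpha:5.1f}  ->  E[#clusters | n=100] = {expected_clusters(alpha, 100):5.1f}")
```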

  7. A Multinomial Model of Event-Based Prospective Memory

    ERIC Educational Resources Information Center

    Smith, Rebekah E.; Bayen, Ute J.

    2004-01-01

    Prospective memory is remembering to perform an action in the future. The authors introduce the 1st formal model of event-based prospective memory, namely, a multinomial model that includes 2 separate parameters related to prospective memory processes. The 1st measures preparatory attentional processes, and the 2nd measures retrospective memory…

  8. Optimization of Primary Drying in Lyophilization during Early Phase Drug Development using a Definitive Screening Design with Formulation and Process Factors.

    PubMed

    Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram

    2018-06-08

    Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined where near optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has variable range of compositions. Copyright © 2018. Published by Elsevier Inc.

  9. Decision support for operations and maintenance (DSOM) system

    DOEpatents

    Jarrell, Donald B [Kennewick, WA; Meador, Richard J [Richland, WA; Sisk, Daniel R [Richland, WA; Hatley, Darrel D [Kennewick, WA; Brown, Daryl R [Richland, WA; Keibel, Gary R [Richland, WA; Gowri, Krishnan [Richland, WA; Reyes-Spindola, Jorge F [Richland, WA; Adams, Kevin J [San Bruno, CA; Yates, Kenneth R [Lake Oswego, OR; Eschbach, Elizabeth J [Fort Collins, CO; Stratton, Rex C [Richland, WA

    2006-03-21

    A method for minimizing the life cycle cost of processes such as heating a building. The method utilizes sensors to monitor various pieces of equipment used in the process, for example, boilers, turbines, and the like. The method then performs the steps of identifying a set of optimal operating conditions for the process, identifying and measuring parameters necessary to characterize the actual operating condition of the process, validating data generated by measuring those parameters, characterizing the actual condition of the process, identifying an optimal condition corresponding to the actual condition, comparing said optimal condition with the actual condition and identifying variances between the two, and drawing, from a set of pre-defined algorithms created using best engineering practices, an explanation of at least one likely source and at least one recommended remedial action for selected variances, and providing said explanation as an output to at least one user.

  10. A Non-Intrusive GMA Welding Process Quality Monitoring System Using Acoustic Sensing.

    PubMed

    Cayo, Eber Huanca; Alfaro, Sadek Crisostomo Absi

    2009-01-01

    Most of the inspection methods used for detection and localization of welding disturbances are based on the evaluation of some direct measurements of welding parameters. This direct measurement requires an insertion of sensors during the welding process which could somehow alter the behavior of the metallic transference. An inspection method that evaluates the GMA welding process evolution using a non-intrusive process sensing would allow not only the identification of disturbances during welding runs and thus reduce inspection time, but would also reduce the interference on the process caused by the direct sensing. In this paper a nonintrusive method for weld disturbance detection and localization for weld quality evaluation is demonstrated. The system is based on the acoustic sensing of the welding electrical arc. During repetitive tests in welds without disturbances, the stability acoustic parameters were calculated and used as comparison references for the detection and location of disturbances during the weld runs.

  11. A Non-Intrusive GMA Welding Process Quality Monitoring System Using Acoustic Sensing

    PubMed Central

    Cayo, Eber Huanca; Alfaro, Sadek Crisostomo Absi

    2009-01-01

    Most of the inspection methods used for detection and localization of welding disturbances are based on the evaluation of some direct measurements of welding parameters. This direct measurement requires an insertion of sensors during the welding process which could somehow alter the behavior of the metallic transference. An inspection method that evaluates the GMA welding process evolution using a non-intrusive process sensing would allow not only the identification of disturbances during welding runs and thus reduce inspection time, but would also reduce the interference on the process caused by the direct sensing. In this paper a nonintrusive method for weld disturbance detection and localization for weld quality evaluation is demonstrated. The system is based on the acoustic sensing of the welding electrical arc. During repetitive tests in welds without disturbances, the stability acoustic parameters were calculated and used as comparison references for the detection and location of disturbances during the weld runs. PMID:22399990
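
    The comparison-reference idea in the two records above can be sketched as follows: compute a windowed stability statistic (here simply the RMS of the arc acoustic signal) on a disturbance-free reference weld, then flag windows of a test weld whose statistic falls outside the reference band. The signals, window length, and threshold are hypothetical stand-ins, not the authors' acoustic stability parameters.

```python
import numpy as np

def window_stats(signal, fs, win_s=0.1):
    """Per-window RMS of the arc acoustic signal (one simple stability statistic)."""
    n = int(fs * win_s)
    trimmed = signal[: len(signal) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(trimmed ** 2, axis=1))

def flag_disturbances(test_signal, ref_signal, fs, k=3.0):
    """Flag windows whose RMS deviates more than k reference standard deviations."""
    ref = window_stats(ref_signal, fs)
    mu, sd = ref.mean(), ref.std()
    test = window_stats(test_signal, fs)
    return np.where(np.abs(test - mu) > k * sd)[0]   # indices of suspect windows

fs = 20_000
rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, fs * 2)             # stand-in for a clean weld run
test = rng.normal(0.0, 1.0, fs * 2)
test[fs:fs + 2000] *= 4.0                            # injected disturbance
print(flag_disturbances(test, reference, fs))
```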

  12. Multiscale metrologies for process optimization of carbon nanotube polymer composites

    DOE PAGES

    Natarajan, Bharath; Orloff, Nathan D.; Ashkar, Rana; ...

    2016-07-18

    Carbon nanotube (CNT) polymer nanocomposites are attractive multifunctional materials with a growing range of commercial applications. With the increasing demand for these materials, it is imperative to develop and validate methods for on-line quality control and process monitoring during production. In this work, a novel combination of characterization techniques is utilized that facilitates the non-invasive assessment of CNT dispersion in epoxy produced by the scalable process of calendering. First, the structural parameters of these nanocomposites are evaluated across multiple length scales (10⁻¹⁰ m to 10⁻³ m) using scanning gallium-ion microscopy, transmission electron microscopy and small-angle neutron scattering. Then, a non-contact resonant microwave cavity perturbation (RCP) technique is employed to accurately measure the AC electrical conductivity of the nanocomposites. Quantitative correlations between the conductivity and structural parameters find the RCP measurements to be sensitive to CNT mass fraction, spatial organization and, therefore, the processing parameters. These results, and the non-contact nature and speed of RCP measurements, identify this technique as being ideally suited for quality control of CNT nanocomposites in a nanomanufacturing environment. In conclusion, when validated by the multiscale characterization suite, RCP may be broadly applicable in the production of hybrid functional materials, such as graphene, gold nanorod, and carbon black nanocomposites.

  13. Optimization and Surface Modification of Al-6351 Alloy Using SiC-Cu Green Compact Electrode by Electro Discharge Coating Process

    NASA Astrophysics Data System (ADS)

    Chakraborty, Sujoy; Kar, Siddhartha; Dey, Vidyut; Ghosh, Subrata Kumar

    2017-06-01

    This paper introduces the surface modification of Al-6351 alloy by a green compact SiC-Cu electrode using the electro-discharge coating (EDC) process. A Taguchi L-16 orthogonal array is employed to investigate the process by varying tool parameters like composition and compaction load and electro-discharge machining (EDM) parameters like pulse-on time and peak current. Material deposition rate (MDR), tool wear rate (TWR) and surface roughness (SR) are measured on the coated specimens. An optimum condition is achieved by formulating overall evaluation criteria (OEC), which combine the multi-objective task into a single index. The signal-to-noise (S/N) ratio and the analysis of variance (ANOVA) are employed to investigate the effect of relevant process parameters. A confirmation test is conducted based on the optimal process parameters, and experimental results are provided to illustrate the effectiveness of this approach. The modified surface is characterized by optical microscopy and X-ray diffraction (XRD) analysis. XRD analysis of the deposited layer confirmed the transfer of tool materials to the work surface and the formation of inter-metallic phases. The micro-hardness of the resulting composite layer is also measured and is 1.5-3 times that of the work material, and a maximum layer thickness (LT) of 83.644 μm has been achieved.
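
    The S/N-ratio step can be illustrated with the two standard Taguchi formulations implied by the responses above: larger-the-better for the deposition rate and smaller-the-better for tool wear and roughness. The replicate values below are hypothetical; only the formulas are standard.

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi S/N ratio (dB) for responses to be maximized, e.g. deposition rate."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_the_better(y):
    """Taguchi S/N ratio (dB) for responses to be minimized, e.g. tool wear, roughness."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

mdr_replicates = [2.1, 2.4, 2.2]      # hypothetical mg/min
twr_replicates = [0.8, 0.7, 0.9]      # hypothetical mg/min
print(sn_larger_the_better(mdr_replicates), sn_smaller_the_better(twr_replicates))
```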

  14. Atomic layer deposition for fabrication of HfO2/Al2O3 thin films with high laser-induced damage thresholds.

    PubMed

    Wei, Yaowei; Pan, Feng; Zhang, Qinghua; Ma, Ping

    2015-01-01

    Previous research on the laser damage resistance of thin films deposited by atomic layer deposition (ALD) is rare. In this work, the ALD process for thin film generation was investigated using different process parameters such as various precursor types and pulse duration. The laser-induced damage threshold (LIDT) was measured as a key property for thin films used as laser system components. Reasons for film damaged were also investigated. The LIDTs for thin films deposited by improved process parameters reached a higher level than previously measured. Specifically, the LIDT of the Al2O3 thin film reached 40 J/cm(2). The LIDT of the HfO2/Al2O3 anti-reflector film reached 18 J/cm(2), the highest value reported for ALD single and anti-reflect films. In addition, it was shown that the LIDT could be improved by further altering the process parameters. All results show that ALD is an effective film deposition technique for fabrication of thin film components for high-power laser systems.

  15. Post-processing of seismic parameter data based on valid seismic event determination

    DOEpatents

    McEvilly, Thomas V.

    1985-01-01

    An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys, allowing flexibility in experimental procedures with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation; rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.

  16. Automatic temperature adjustment apparatus

    DOEpatents

    Chaplin, James E.

    1985-01-01

    An apparatus for increasing the efficiency of a conventional central space heating system is disclosed. The temperature of a fluid heating medium is adjusted based on a measurement of the external temperature, and a system parameter. The system parameter is periodically modified based on a closed loop process that monitors the operation of the heating system. This closed loop process provides a heating medium temperature value that is very near the optimum for energy efficiency.

  17. Coherence Measurements for Excited to Excited State Transitions in Barium

    NASA Technical Reports Server (NTRS)

    Trajmar, S.; Kanik, I.; Karaganov, V.; Zetner, P. W.; Csanak, G.

    2000-01-01

    Experimental studies concerning elastic and inelastic electron scattering by coherently prepared ensembles of Ba (...6s6p ¹P₁) atoms with various degrees of alignment will be described. An in-plane, linearly-polarized laser beam was utilized to prepare these target ensembles, and the electron scattering signal as a function of polarization angle was measured for several laser geometries at fixed impact energies and scattering angles. From these measurements, we derived cross sections and electron-impact coherence parameters associated with the electron scattering process that is the time reverse of the actual experimentally studied process. This interpretation of the experiment is based on the theory of Macek and Hertel. The experimental results were also interpreted in terms of cross sections and collision parameters associated with the actual experimental processes. Results obtained so far will be presented and plans for further studies will be discussed.

  18. Image processing for IMRT QA dosimetry.

    PubMed

    Zaini, Mehran R; Forest, Gary J; Loshek, David D

    2005-01-01

    We have automated the determination of the placement location of the dosimetry ion chamber within intensity-modulated radiotherapy (IMRT) fields, as part of streamlining the entire IMRT quality assurance process. This paper describes the mathematical image-processing techniques used to arrive at the appropriate measurement locations within the planar dose maps of the IMRT fields. A specific spot within the found region is identified based on its flatness, radiation magnitude, location, area, and the avoidance of the interleaf spaces. The techniques used include applying a Laplacian, dilation, erosion, region identification, and measurement point selection based on three parameters: the size of the erosion operator, the gradient, and the importance of the area of a region versus its magnitude. These three parameters are adjustable by the user; however, the first requires tweaking only on extremely rare occasions, the gradient requires rare adjustment, and the last parameter needs occasional fine-tuning. This algorithm has been tested in over 50 cases. In about 5% of cases, the algorithm does not find a measurement point because of extremely steep and narrow regions within the fluence maps. In such cases, manual selection of a point is allowed by our code, although this too is difficult, since the fluence map does not lend itself to an appropriate measurement point selection.
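
    A rough sketch of such a morphological pipeline (gradient screening, erosion, region labeling, point selection) is given below using scipy.ndimage. The synthetic dose map, thresholds, and selection rule are assumptions for illustration and do not reproduce the paper's three user-adjustable parameters exactly.

```python
import numpy as np
from scipy import ndimage

def pick_measurement_point(dose, grad_tol=0.02, erosion_size=5):
    """Pick a flat, high-dose point in a planar dose map (illustrative thresholds).

    1. Keep pixels whose local gradient magnitude is small (flat regions).
    2. Erode the flat mask to stay away from region edges and steep areas.
    3. Label the remaining regions and choose, in the largest one, the pixel
       with the highest dose.
    """
    gy, gx = np.gradient(dose)
    flat = np.hypot(gx, gy) < grad_tol * dose.max()
    flat = ndimage.binary_erosion(flat, structure=np.ones((erosion_size, erosion_size)))
    labels, n = ndimage.label(flat)
    if n == 0:
        return None                                   # fall back to manual selection
    sizes = ndimage.sum(flat, labels, index=np.arange(1, n + 1))
    region = labels == (np.argmax(sizes) + 1)
    masked = np.where(region, dose, -np.inf)
    return np.unravel_index(np.argmax(masked), dose.shape)

# Synthetic dose map: a broad plateau plus a steep, narrow peak.
y, x = np.mgrid[0:128, 0:128]
dose = (np.exp(-((x - 40) ** 2 + (y - 40) ** 2) / 800.0)
        + 2.0 * np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / 20.0))
print(pick_measurement_point(dose))
```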

  19. Thermocouple and infrared sensor-based measurement of temperature distribution in metal cutting.

    PubMed

    Kus, Abdil; Isik, Yahya; Cakir, M Cemal; Coşkun, Salih; Özdemir, Kadir

    2015-01-12

    In metal cutting, the magnitude of the temperature at the tool-chip interface is a function of the cutting parameters. This temperature directly affects production; therefore, increased research on the role of cutting temperatures can lead to improved machining operations. In this study, tool temperature was estimated by simultaneous temperature measurement employing both a K-type thermocouple and an infrared radiation (IR) pyrometer to measure the tool-chip interface temperature. Due to the complexity of the machining processes, the integration of different measuring techniques was necessary in order to obtain consistent temperature data. The thermal analysis results were compared via the ANSYS finite element method. Experiments were carried out in dry machining using workpiece material of AISI 4140 alloy steel that was heat treated by an induction process to a hardness of 50 HRC. A PVD TiAlN-TiN-coated WNVG 080404-IC907 carbide insert was used during the turning process. The results showed that with increasing cutting speed, feed rate and depth of cut, the tool temperature increased; the cutting speed was found to be the most effective parameter in assessing the temperature rise. The heat distribution of the cutting tool, tool-chip interface and workpiece provided effective and useful data for the optimization of selected cutting parameters during orthogonal machining.

  20. Fluorescence lifetime measurements in flow cytometry

    NASA Astrophysics Data System (ADS)

    Beisker, Wolfgang; Klocke, Axel

    1997-05-01

    Fluorescence lifetime measurements provide insights into the dynamic and structural properties of dyes and their micro-environment. The implementation of fluorescence lifetime measurements in flow cytometric systems allows large cell and particle populations to be monitored with high statistical significance. In our system, a modulated laser beam is used for excitation, and the fluorescence signal recorded with a fast computer-controlled digital oscilloscope is processed digitally to determine its phase shift with respect to a reference beam by fast Fourier transform. Total fluorescence intensity as well as other parameters can be determined simultaneously from the same fluorescence signal. We use the epi-illumination design to allow the use of high numerical apertures, collecting as much light as possible to ensure detection of even weak fluorescence. Data storage and processing are handled like slit-scan flow cytometric data using a data analysis system. The results are stored, displayed, combined with other parameters, and analyzed as normal listmode data. In our report we carefully discuss the signal-to-noise ratio for analog and digitally processed lifetime signals to evaluate the theoretical minimum fluorescence intensity for lifetime measurements. Applications to be presented include DNA staining and parameters of cell function, as well as different applications in non-mammalian cells such as algae.
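
    The phase-shift computation can be sketched as follows: take the FFT phase of the modulation-frequency bin for both the fluorescence and reference signals and convert the phase difference to a lifetime via tan(Δφ) = ωτ. The sampling rate, modulation frequency, and lifetime below are hypothetical values chosen so the example runs cleanly; this is not the authors' acquisition chain.

```python
import numpy as np

def phase_at(signal, fs, f_mod):
    """Phase (rad) of the modulation-frequency component via an FFT."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.angle(spectrum[np.argmin(np.abs(freqs - f_mod))])

fs, f_mod, tau = 200e6, 10e6, 4e-9          # sample rate, modulation freq, lifetime (s)
t = np.arange(4000) / fs                    # 4000 samples = integer number of cycles
omega = 2 * np.pi * f_mod
reference = 1 + 0.5 * np.cos(omega * t)
fluorescence = 1 + 0.5 * np.cos(omega * t - np.arctan(omega * tau))   # delayed emission

dphi = phase_at(reference, fs, f_mod) - phase_at(fluorescence, fs, f_mod)
print(np.tan(dphi) / omega)                 # recovered lifetime, ~4 ns
```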

  1. Thermocouple and Infrared Sensor-Based Measurement of Temperature Distribution in Metal Cutting

    PubMed Central

    Kus, Abdil; Isik, Yahya; Cakir, M. Cemal; Coşkun, Salih; Özdemir, Kadir

    2015-01-01

    In metal cutting, the magnitude of the temperature at the tool-chip interface is a function of the cutting parameters. This temperature directly affects production; therefore, increased research on the role of cutting temperatures can lead to improved machining operations. In this study, tool temperature was estimated by simultaneous temperature measurement employing both a K-type thermocouple and an infrared radiation (IR) pyrometer to measure the tool-chip interface temperature. Due to the complexity of the machining processes, the integration of different measuring techniques was necessary in order to obtain consistent temperature data. The thermal analysis results were compared via the ANSYS finite element method. Experiments were carried out in dry machining using workpiece material of AISI 4140 alloy steel that was heat treated by an induction process to a hardness of 50 HRC. A PVD TiAlN-TiN-coated WNVG 080404-IC907 carbide insert was used during the turning process. The results showed that with increasing cutting speed, feed rate and depth of cut, the tool temperature increased; the cutting speed was found to be the most effective parameter in assessing the temperature rise. The heat distribution of the cutting tool, tool-chip interface and workpiece provided effective and useful data for the optimization of selected cutting parameters during orthogonal machining. PMID:25587976

  2. Towards better process understanding: chemometrics and multivariate measurements in manufacturing of solid dosage forms.

    PubMed

    Matero, Sanni; van Den Berg, Frans; Poutiainen, Sami; Rantanen, Jukka; Pajander, Jari

    2013-05-01

    The manufacturing of tablets involves many unit operations that possess multivariate and complex characteristics. The interactions between the material characteristics and process related variation are presently not comprehensively analyzed due to univariate detection methods. As a consequence, current best practice to control a typical process is to not allow process-related factors to vary i.e. lock the production parameters. The problem related to the lack of sufficient process understanding is still there: the variation within process and material properties is an intrinsic feature and cannot be compensated for with constant process parameters. Instead, a more comprehensive approach based on the use of multivariate tools for investigating processes should be applied. In the pharmaceutical field these methods are referred to as Process Analytical Technology (PAT) tools that aim to achieve a thorough understanding and control over the production process. PAT includes the frames for measurement as well as data analyzes and controlling for in-depth understanding, leading to more consistent and safer drug products with less batch rejections. In the optimal situation, by applying these techniques, destructive end-product testing could be avoided. In this paper the most prominent multivariate data analysis measuring tools within tablet manufacturing and basic research on operations are reviewed. Copyright © 2013 Wiley Periodicals, Inc.

  3. Quantum structures: An attempt to explain the origin of their appearance in nature

    NASA Astrophysics Data System (ADS)

    Aerts, Diederik

    1995-08-01

    We explain quantum structure as due to two effects: (a) a real change of state of the entity under the influence of the measurement and (b) a lack of knowledge about a deeper deterministic reality of the measurement process. We present a quantum machine, with which we can illustrate in a simple way how the quantum structure arises as a consequence of the two mentioned effects. We introduce a parameter ɛ that measures the size of the lack of knowledge of the measurement process, and by varying this parameter, we describe a continuous evolution from a quantum structure (maximal lack of knowledge) to a classical structure (zero lack of knowledge). We show that for intermediate values of ɛ we find a new type of structure that is neither quantum nor classical. We apply the model to situations of lack of knowledge about the measurement process appearing in other aspects of reality. Specifically, we investigate the quantumlike structures that appear in the situation of psychological decision processes, where the subject is influenced during the testing and forms some opinions during the testing process. Our conclusion is that in the light of this explanation, the quantum probabilities are epistemic and not ontological, which means that quantum mechanics is compatible with a determinism of the whole.

  4. Impact of various operating modes on performance and emission parameters of small heat source

    NASA Astrophysics Data System (ADS)

    Vician, Peter; Holubčík, Michal; Palacka, Matej; Jandačka, Jozef

    2016-06-01

    This work deals with the measurement of the performance and emission parameters of a small heat source for the combustion of biomass in each of its operating modes. A pellet boiler with an output of 18 kW was used as the heat source. The work includes the design of an experimental device for measuring the impact of changes in air supply and a method for controlling the power and emission parameters of heat sources for the combustion of woody biomass. The work describes the main factors that affect the combustion process and analyzes the emission measurements at the heat source. The experimental results give the performance and emission parameter values for the different operating modes of the boiler, which serve as a decisive factor in choosing the appropriate mode.

  5. A combined model to assess technical and economic consequences of changing conditions and management options for wastewater utilities.

    PubMed

    Giessler, Mathias; Tränckner, Jens

    2018-02-01

    The paper presents a simplified model that quantifies economic and technical consequences of changing conditions in wastewater systems on utility level. It has been developed based on data from stakeholders and ministries, collected by a survey that determined resulting effects and adapted measures. The model comprises all substantial cost relevant assets and activities of a typical German wastewater utility. It consists of three modules: i) Sewer for describing the state development of sewer systems, ii) WWTP for process parameter consideration of waste water treatment plants (WWTP) and iii) Cost Accounting for calculation of expenses in the cost categories and resulting charges. Validity and accuracy of this model was verified by using historical data from an exemplary wastewater utility. Calculated process as well as economic parameters shows a high accuracy compared to measured parameters and given expenses. Thus, the model is proposed to support strategic, process oriented decision making on utility level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Fed-batch control based upon the measurement of intracellular NADH

    NASA Technical Reports Server (NTRS)

    Armiger, W. B.; Lee, J. F.; Montalvo, L. M.; Forro, J. R.

    1987-01-01

    A series of experiments demonstrating that on-line measurements of intracellular NADH by culture fluorescence can be used to monitor and control the fermentation process are described. A distinct advantage of intracellular NADH measurements over other monitoring techniques such as pH and dissolved oxygen is that they directly measure real-time events occurring within the cell rather than changes in the environment. When coupled with other measurement parameters, they can provide a finer degree of sophistication in process control.

  7. Parameter extraction using global particle swarm optimization approach and the influence of polymer processing temperature on the solar cell parameters

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Singh, A.; Dhar, A.

    2017-08-01

    The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby to optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and we establish the role of improved charge transport in the polymer films from independent FET measurements. The evolution of the surface morphology, optical absorption, and chemical composition of the PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
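
    A stripped-down version of the idea, a global-best particle swarm searching bounded parameter ranges to minimize the RMS error between a diode-model I-V curve and measured data, is sketched below. The diode model here neglects series resistance so the current is explicit, and all bounds, swarm settings, and synthetic data are assumptions; the authors' GPSO and full single-diode model are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
VT = 0.02585                                   # thermal voltage at ~300 K (V)

def diode_current(V, params):
    """Simplified single-diode model with negligible series resistance (sketch)."""
    iph, i0, n, rsh = params
    return iph - i0 * (np.exp(V / (n * VT)) - 1.0) - V / rsh

def pso_fit(V, I_meas, bounds, n_particles=40, n_iter=300, w=0.7, c1=1.5, c2=1.5):
    """Global-best particle swarm search for the diode parameters."""
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)

    def cost(p):
        return np.sqrt(np.mean((diode_current(V, p) - I_meas) ** 2))

    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)]
    return gbest

# Synthetic I-V curve from assumed parameters, then search for them with PSO.
true = np.array([5.0e-3, 1.0e-9, 1.8, 2.0e3])   # Iph (A), I0 (A), n, Rsh (ohm)
V = np.linspace(0.0, 0.7, 60)
I_meas = diode_current(V, true) + rng.normal(0, 2e-5, V.size)
bounds = [(1e-3, 1e-2), (1e-11, 1e-7), (1.0, 3.0), (1e2, 1e4)]
print(pso_fit(V, I_meas, bounds))
```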

  8. 40 CFR 63.115 - Process vent provisions-methods and procedures for process vent group determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... From the Synthetic Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer... (d)(3) of this section. (1) Engineering assessment may be used to determine vent stream flow rate...

  9. 40 CFR 63.115 - Process vent provisions-methods and procedures for process vent group determination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... From the Synthetic Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer... (d)(3) of this section. (1) Engineering assessment may be used to determine vent stream flow rate...

  10. 40 CFR 63.115 - Process vent provisions-methods and procedures for process vent group determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... From the Synthetic Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer... (d)(3) of this section. (1) Engineering assessment may be used to determine vent stream flow rate...

  11. 40 CFR 63.115 - Process vent provisions-methods and procedures for process vent group determination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... From the Synthetic Organic Chemical Manufacturing Industry for Process Vents, Storage Vessels, Transfer... (d)(3) of this section. (1) Engineering assessment may be used to determine vent stream flow rate...

  12. Uncertainty Analysis of the Grazing Flow Impedance Tube

    NASA Technical Reports Server (NTRS)

    Brown, Martha C.; Jones, Michael G.; Watson, Willie R.

    2012-01-01

    This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liner, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.

  13. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method where the length between any pair of targets from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607

  14. A Community Database of Quartz Microstructures: Can we make measurements that constrain rheology?

    NASA Astrophysics Data System (ADS)

    Toy, Virginia; Peternell, Mark; Morales, Luiz; Kilian, Ruediger

    2014-05-01

    Rheology can be explored by performing deformation experiments and by examining the resultant microstructures and textures as links to naturally deformed rocks. Certain deformation processes are assumed to result in certain microstructures or textures, of which some might be uniquely indicative, while most cannot be unequivocally used to interpret the deformation mechanism and hence the rheology. Despite our lack of a sufficient understanding of microstructure- and texture-forming processes, huge advances in texture measurement and the quantification of microstructural parameters have been made. Unfortunately, there are neither standard procedures nor a common consensus on the interpretation of many parameters (e.g., texture, grain size, shape preferred orientation). Textures (crystallographic preferred orientations) have been extensively used in the interpretation of deformation mechanisms. For example, the strength of a texture can be measured either from the orientation distribution function (e.g., the J-index (Bunge, 1983) or texture entropy (Hielscher et al., 2007)) or via the intensity of pole figures. However, there are various ways to identify a representative volume, to measure, to process the data, and to calculate an ODF and texture descriptors, which restricts their use as comparative and diagnostic measurements. Microstructural parameters such as grain size, grain shape descriptors and fabric descriptors are similarly used to deduce and quantify deformation mechanisms. However, there is very little consensus on how to measure and calculate some of these very important parameters, e.g., grain size, which makes comparison of the vast amount of valuable data in the literature very difficult. We propose establishing a community database of a standard set of such measurements, made on typical samples of different types of quartz rocks using standard methods of microstructural and texture quantification. We invite suggestions and discussion from the community about the worth of the proposed parameters and methodology, and about the usefulness of, and willingness to contribute to, a database with free community access. We further invite institutions to participate in a benchmark analysis of a set of 'standard' thin sections. Bunge, H.J., 1983, Texture Analysis in Materials Science: Mathematical Methods. Butterworth-Heinemann, 593 pp. Hielscher, R., Schaeben, H., Chateigner, D., 2007, On the entropy to texture index relationship in quantitative texture analysis: Journal of Applied Crystallography 40, 371-375.

  15. Flexible Carbon Nanotube Films for High Performance Strain Sensors

    PubMed Central

    Kanoun, Olfa; Müller, Christian; Benchirouf, Abderahmane; Sanli, Abdulkadir; Dinh, Trong Nghia; Al-Hamry, Ammar; Bu, Lei; Gerlach, Carina; Bouhamed, Ayda

    2014-01-01

    Compared with traditional conductive fillers, carbon nanotubes (CNTs) have unique advantages, i.e., excellent mechanical properties, high electrical conductivity and thermal stability. Nanocomposites as piezoresistive films provide an interesting approach for the realization of large area strain sensors with high sensitivity and low manufacturing costs. A polymer-based nanocomposite with carbon nanomaterials as conductive filler can be deposited on a flexible substrate of choice and this leads to mechanically flexible layers. Such sensors allow the strain measurement for both integral measurement on a certain surface and local measurement at a certain position depending on the sensor geometry. Strain sensors based on carbon nanostructures can overcome several limitations of conventional strain sensors, e.g., sensitivity, adjustable measurement range and integral measurement on big surfaces. The novel technology allows realizing strain sensors which can be easily integrated even as buried layers in material systems. In this review paper, we discuss the dependence of strain sensitivity on different experimental parameters such as composition of the carbon nanomaterial/polymer layer, type of polymer, fabrication process and processing parameters. The insights about the relationship between film parameters and electromechanical properties can be used to improve the design and fabrication of CNT strain sensors. PMID:24915183

  16. Melt-Pool Temperature and Size Measurement During Direct Laser Sintering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    List, III, Frederick Alyious; Dinwiddie, Ralph Barton; Carver, Keith

    2017-08-01

    Additive manufacturing has demonstrated the ability to fabricate complex geometries and components not possible with conventional casting and machining. In many cases, industry has demonstrated the ability to fabricate complex geometries with improved efficiency and performance. However, qualification and certification of processes is challenging, leaving companies to focus on certification of material through design-allowable-based approaches. This significantly reduces the business case for additive manufacturing. Therefore, real-time monitoring of the melt pool can be used to detect the development of flaws, such as porosity or un-sintered powder, and aid in the certification process. Characteristics of the melt pool in the Direct Laser Sintering (DLS) process are also of great interest to modelers who are developing the simulation models needed to improve and perfect the DLS process. Such models could provide a means to rapidly develop the optimum processing parameters for new alloy powders and optimize processing parameters for specific part geometries. Stratonics' ThermaViz system will be integrated with the Renishaw DLS system in order to demonstrate its ability to measure melt pool size, shape and temperature. These results will be compared with data from an existing IR camera to determine the best approach for the determination of these critical parameters.

  17. Measurement of an asymmetry parameter in the decay of the cascade-minus hyperon

    NASA Astrophysics Data System (ADS)

    Chakravorty, Alak

    2000-10-01

    Fermilab experiment E756 collected a large dataset of polarized Ξ⁻ hyperon decays, produced by 800-GeV/c unpolarized protons on a beryllium target. Of principal interest was the decay process Ξ⁻ → Λ⁰π⁻ → pπ⁻π⁻. An analysis of the asymmetry parameters of this decay was carried out on a sample of 1.3 × 10⁶ Ξ⁻ decays. φΞ was measured to be -1.33° ± 2.66° ± 1.22°, where the first error is statistical and the second is systematic. This corresponds to a measurement of the asymmetry parameter βΞ = -0.021 ± 0.042 ± 0.019, which is consistent with current theoretical estimates.

  18. In Situ Roughness Measurements for the Solar Cell Industry Using an Atomic Force Microscope

    PubMed Central

    González-Jorge, Higinio; Alvarez-Valado, Victor; Valencia, Jose Luis; Torres, Soledad

    2010-01-01

    Areal roughness parameters always need to be under control in the thin film solar cell industry because of their close relationship with the electrical efficiency of the cells. In this work, these parameters are evaluated for measurements carried out in a typical fabrication area for this industry. Measurements are made using a portable atomic force microscope on the CNC diamond cutting machine where an initial sample of transparent conductive oxide is cut into four pieces. The method is validated by making a comparison between the parameters obtained in this process and in the laboratory under optimal conditions. Areal roughness parameters and Fourier Spectral Analysis of the data show good compatibility and open the possibility to use this type of measurement instrument to perform in situ quality control. This procedure gives a sample for evaluation without destroying any of the transparent conductive oxide; in this way 100% of the production can be tested, so improving the measurement time and rate of production. PMID:22319338

  19. In situ roughness measurements for the solar cell industry using an atomic force microscope.

    PubMed

    González-Jorge, Higinio; Alvarez-Valado, Victor; Valencia, Jose Luis; Torres, Soledad

    2010-01-01

    Areal roughness parameters always need to be under control in the thin film solar cell industry because of their close relationship with the electrical efficiency of the cells. In this work, these parameters are evaluated for measurements carried out in a typical fabrication area for this industry. Measurements are made using a portable atomic force microscope on the CNC diamond cutting machine where an initial sample of transparent conductive oxide is cut into four pieces. The method is validated by making a comparison between the parameters obtained in this process and in the laboratory under optimal conditions. Areal roughness parameters and Fourier Spectral Analysis of the data show good compatibility and open the possibility to use this type of measurement instrument to perform in situ quality control. This procedure gives a sample for evaluation without destroying any of the transparent conductive oxide; in this way 100% of the production can be tested, so improving the measurement time and rate of production.
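
    The two areal roughness parameters most commonly tracked in this context, Sa and Sq, are straightforward to compute from a leveled AFM height map, as sketched below on a synthetic surface (the 256 × 256 random height map is a placeholder, not TCO data).

```python
import numpy as np

def areal_roughness(height_map):
    """Areal roughness parameters from a height map (ISO 25178 style).

    Sa: arithmetic mean of absolute deviations from the mean plane.
    Sq: root-mean-square deviation from the mean plane.
    """
    z = height_map - height_map.mean()   # remove the mean height (tilt removal omitted for brevity)
    return np.mean(np.abs(z)), np.sqrt(np.mean(z ** 2))

rng = np.random.default_rng(0)
surface = rng.normal(0.0, 25.0, size=(256, 256))   # synthetic surface, heights in nm
sa, sq = areal_roughness(surface)
print(f"Sa = {sa:.1f} nm, Sq = {sq:.1f} nm")
```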

  20. Synchrotron-Based X-ray Microtomography Characterization of the Effect of Processing Variables on Porosity Formation in Laser Power-Bed Additive Manufacturing of Ti-6Al-4V

    NASA Astrophysics Data System (ADS)

    Cunningham, Ross; Narra, Sneha P.; Montgomery, Colt; Beuth, Jack; Rollett, A. D.

    2017-03-01

    The porosity observed in additively manufactured (AM) parts is a potential concern for components intended to undergo high-cycle fatigue without post-processing to remove such defects. The morphology of pores can help identify their cause: irregularly shaped lack of fusion or key-holing pores can usually be linked to incorrect processing parameters, while spherical pores suggest trapped gas. Synchrotron-based x-ray microtomography was performed on laser powder-bed AM Ti-6Al-4V samples over a range of processing conditions to investigate the effects of processing parameters on porosity. The process mapping technique was used to control melt pool size. Tomography was also performed on the powder to measure porosity within the powder that may transfer to the parts. As observed previously in experiments with electron beam powder-bed fabrication, significant variations in porosity were found as a function of the processing parameters. A clear connection between processing parameters and resulting porosity formation mechanism was observed in that inadequate melt pool overlap resulted in lack-of-fusion pores whereas excess power density produced keyhole pores.

  1. Laser-Ultrasonic Measurement of Elastic Properties of Anodized Aluminum Coatings

    NASA Astrophysics Data System (ADS)

    Singer, F.

    Anodized aluminum oxide plays an important role in many industrial applications, e.g., in achieving greater wear resistance. Since the hardness of anodized films depends strongly on the processing parameters, it is important to characterize the influence of the processing parameters on the film properties. In this work the elastic material parameters of anodized aluminum were investigated using a laser-based ultrasound system. The anodized films were characterized by analyzing the dispersion of Rayleigh waves with a one-layer model. It was shown that anodizing time and temperature strongly influence Rayleigh wave propagation.

  2. The degree of mutual anisotropy of biological liquids polycrystalline nets as a parameter in diagnostics and differentiations of hominal inflammatory processes

    NASA Astrophysics Data System (ADS)

    Angelsky, O. V.; Ushenko, Yu. A.; Balanetska, V. O.

    2011-09-01

    To characterize the degree of consistency of the parameters of the optically uniaxial birefringent protein nets of blood plasma, a new parameter, the complex degree of mutual anisotropy, is suggested. A polarization technique for measuring the coordinate distributions of the complex degree of mutual anisotropy of blood plasma is developed. It is shown that a statistical approach to the analysis of the complex degree of mutual anisotropy distributions of blood plasma is effective in the diagnosis and differentiation of acute inflammatory processes, as well as of acute and gangrenous appendicitis.

  3. A real-time multi-channel monitoring system for stem cell culture process.

    PubMed

    Xicai Yue; Drakakis, E M; Lim, M; Radomska, A; Hua Ye; Mantalaris, A; Panoskaltsis, N; Cass, A

    2008-06-01

    A novel multi-parametric physiological measurement system with up to 128 channels, suitable for monitoring hematopoietic stem cell culture processes and cell cultures in general, is presented in this paper. The system aims to measure in real time the most important physical and chemical culture parameters of hematopoietic stem cells, including physicochemical parameters, nutrients, and metabolites, in a long-term culture process. The overarching scope of this research effort is to control and optimize the whole bioprocess by means of the acquisition of real-time quantitative physiological information from the culture. The system is designed in a modular manner. Each hardware module can operate as an independent, gain-programmable, level-shift-adjustable, 16-channel data acquisition system specific to a sensor type. Up to eight such data acquisition modules can be combined and connected to the host PC to realize the whole system hardware. The control of data acquisition and the subsequent management of data are performed by the system's software, which is coded in LabVIEW. Preliminary experimental results presented here show that the system not only has the ability to interface to various types of sensors, allowing the monitoring of different types of culture parameters, but can also capture dynamic variations of culture parameters by means of real-time multi-channel measurements, thus providing additional information on both the temporal and spatial profiles of these parameters within a bioreactor. The system is by no means constrained to the hematopoietic stem cell culture field; it is suitable for cell growth monitoring applications in general.

  4. Monte Carlo analysis for the determination of the conic constant of an aspheric micro lens based on a scanning white light interferometric measurement

    NASA Astrophysics Data System (ADS)

    Gugsa, Solomon A.; Davies, Angela

    2005-08-01

    Characterizing an aspheric micro lens is critical for understanding its performance and providing feedback to the manufacturing process. We describe a method to find the best-fit conic of an aspheric micro lens using a least squares minimization and Monte Carlo analysis. Our analysis is based on scanning white light interferometry measurements, and we compare the standard rapid technique, where a single measurement is taken of the apex of the lens, to the more time-consuming stitching technique, where more of the surface area is measured. Both are corrected for tip/tilt based on a planar fit to the substrate. Four major parameters and their uncertainties are estimated from the measurement and a chi-square minimization is carried out to determine the best-fit conic constant. The four parameters are the base radius of curvature, the aperture of the lens, the lens center, and the sag of the lens. A probability distribution is chosen for each of the four parameters based on the measurement uncertainties, and a Monte Carlo process is used to iterate the minimization. Eleven measurements were taken, and data are also chosen randomly from the group during the Monte Carlo simulation to capture the measurement repeatability. A distribution of best-fit conic constants results, where the mean is a good estimate of the best-fit conic and the distribution width represents the combined measurement uncertainty. We also compare the Monte Carlo process for the stitched and unstitched data. Our analysis allows us to analyze the residual surface error in terms of Zernike polynomials and determine uncertainty estimates for each coefficient.
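
    The fit-plus-Monte-Carlo idea can be sketched with a conic sag model: hold the base curvature fixed at a value drawn from its uncertainty distribution, restrict the data to a drawn aperture, refit the conic constant by least squares, and repeat. The sag equation is the standard conic formula, but the radius, aperture, noise level, and uncertainty values below are assumptions, and only two of the paper's four perturbed parameters are included.

```python
import numpy as np
from scipy.optimize import least_squares

def conic_sag(r, c, k):
    """Sag of a conic surface with base curvature c and conic constant k."""
    return c * r ** 2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c ** 2 * r ** 2))

def best_fit_k(r, z, c, k0=0.0):
    """Least-squares estimate of the conic constant with the curvature held fixed."""
    return least_squares(lambda p: conic_sag(r, c, p[0]) - z, x0=[k0]).x[0]

rng = np.random.default_rng(0)
c_true, k_true = 1.0 / 250.0, -0.8                 # 250 um base radius, prolate ellipse (um units)
r = np.linspace(0.0, 100.0, 200)                   # 100 um semi-aperture
z_meas = conic_sag(r, c_true, k_true) + rng.normal(0.0, 0.01, r.size)   # ~10 nm noise

# Monte Carlo: draw the assumed radius and aperture from their uncertainty
# distributions, refit k each time, and report the spread of best-fit values.
k_samples = []
for _ in range(500):
    c = 1.0 / rng.normal(250.0, 1.0)               # radius known to ~1 um
    mask = r <= rng.normal(100.0, 0.5)             # aperture known to ~0.5 um
    k_samples.append(best_fit_k(r[mask], z_meas[mask], c))
print(np.mean(k_samples), np.std(k_samples))
```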

  5. Comparison of results from simple expressions for MOSFET parameter extraction

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Lin, Y.-S.

    1988-01-01

    In this paper results are compared from a parameter extraction procedure applied to the linear, saturation, and subthreshold regions for enhancement-mode MOSFETs fabricated in a 3-micron CMOS process. The results indicate that the extracted parameters differ significantly depending on the extraction algorithm and the distribution of I-V data points. It was observed that KP values vary by 30 percent, VT values differ by 50 mV, and Delta L values differ by 1 micron. Thus for acceptance of wafers from foundries and for modeling purposes, the extraction method and data point distribution must be specified. In this paper measurement and extraction procedures that will allow a consistent evaluation of measured parameters are discussed.
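
    To illustrate why extracted values depend on the algorithm and on the I-V data points used, the sketch below applies two textbook square-law extraction schemes (linear region and saturation region) to the same synthetic device; the device values, bias points, and noise level are assumptions, not the paper's data:

    ```python
    import numpy as np

    np.random.seed(0)
    W_L, KP_true, VT_true = 10.0, 25e-6, 0.75        # W/L, A/V^2, V (all assumed)

    # Linear-region sweep (small VDS) and saturation-region sweep of VGS.
    VDS_lin = 0.1
    VGS = np.linspace(1.0, 3.0, 21)
    ID_lin = KP_true * W_L * ((VGS - VT_true) * VDS_lin - VDS_lin**2 / 2)
    ID_sat = 0.5 * KP_true * W_L * (VGS - VT_true) ** 2
    ID_lin += np.random.normal(0, 2e-8, VGS.size)    # measurement noise
    ID_sat += np.random.normal(0, 2e-8, VGS.size)

    # Algorithm 1 (linear region): ID vs VGS is a line with
    # slope = KP*(W/L)*VDS and zero crossing at VGS = VT + VDS/2.
    slope, intercept = np.polyfit(VGS, ID_lin, 1)
    KP_lin = slope / (W_L * VDS_lin)
    VT_lin = -intercept / slope - VDS_lin / 2

    # Algorithm 2 (saturation region): sqrt(ID) vs VGS is a line with
    # slope = sqrt(KP*W/(2L)) and x-intercept VT.
    s2, i2 = np.polyfit(VGS, np.sqrt(ID_sat), 1)
    KP_sat = 2 * s2**2 / W_L
    VT_sat = -i2 / s2

    print(f"linear region:     KP = {KP_lin:.2e} A/V^2, VT = {VT_lin:.3f} V")
    print(f"saturation region: KP = {KP_sat:.2e} A/V^2, VT = {VT_sat:.3f} V")
    ```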

  6. Effects of Various Architectural Parameters on Six Room Acoustical Measures in Auditoria.

    NASA Astrophysics Data System (ADS)

    Chiang, Wei-Hwa

    The effects of architectural parameters on six room acoustical measures were investigated by means of correlation, factor, and multiple regression analyses based on data taken in twenty halls. Architectural parameters were used to estimate acoustical measures taken at individual locations within each room, as well as the averages and standard deviations of all measured values in the rooms. The six acoustical measures were Early Decay Time (EDT10), Clarity Index (C80), Overall Level (G), Bass Ratio based on Early Decay Time (BR(EDT)), Treble Ratio based on Early Decay Time (TR(EDT)), and Early Inter-aural Cross Correlation (IACC80). A comprehensive method of quantifying various architectural characteristics of rooms was developed to define a large number of architectural parameters hypothesized to affect the acoustical measurements made in the rooms. This study quantitatively confirmed many of the principles used in the design of concert halls and auditoria. Three groups of architectural parameters, such as those associated with the depth of diffusing surfaces, were significantly correlated with the within-hall standard deviations of most of the acoustical measures. Significant differences in the statistical relations between architectural parameters and receiver-specific acoustical measures were found between a group of music halls and a group of lecture halls. For example, architectural parameters such as the relative distance from the receiver to the overhead ceiling increased the percentage of the variance of the acoustical measures explained by Barron's revised theory from approximately 70% to 80%, but only when data were taken in the group of music halls. This study revealed the major architectural parameters that have strong relations with individual acoustical measures, forming the basis for a more quantitative method for advancing the theoretical design of concert halls and other auditoria. The results provide designers with the information needed to predict acoustical measures in buildings at very early stages of the design process without using computer models or scale models.

  7. Sensitivity study of experimental measures for the nuclear liquid-gas phase transition in the statistical multifragmentation model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.

    2018-05-01

    Several experimental measures, namely the multiplicity derivatives, the moment parameters, the bimodal parameter, the fluctuation of the maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ), and the Zipf law parameter (ξ), are examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All of these measures predict a critical signature at or near the critical point, both for the primary and for the secondary fragments. Among them, the total multiplicity derivative and the NVZ provide accurate measures of the critical point from the final cold fragments as well as from the primary fragments. The present study will provide a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.

  8. Theoretical and experimental studies in ultraviolet solar physics

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Reeves, E. M.

    1975-01-01

    The processes and parameters in atomic and molecular physics that are relevant to solar physics are investigated. The areas covered include: (1) measurement of atomic and molecular parameters that contribute to discrete and continuous sources of opacity and to abundance determinations in the sun; (2) line broadening and scattering phenomena; and (3) development of an ion beam spectroscopic source for the measurement of electron excitation cross sections of transition region and coronal ions.

  9. Using the Multipole Resonance Probe to Stabilize the Electron Density During a Reactive Sputter Process

    NASA Astrophysics Data System (ADS)

    Oberberg, Moritz; Styrnoll, Tim; Ries, Stefan; Bienholz, Stefan; Awakowicz, Peter

    2015-09-01

    Reactive sputter processes are used for the deposition of hard, wear-resistant, and non-corrosive ceramic layers such as aluminum oxide (Al2O3). A well-known problem is target poisoning at high reactive gas flows, which results from the reaction of the reactive gas with the metal target. Consequently, the sputter rate decreases and the secondary electron emission increases. Both parameters show a non-linear hysteresis behavior as a function of the reactive gas flow, which leads to process instabilities. This work presents a new, plasma-parameter-based control method for Al2O3 deposition in a multiple-frequency CCP (MFCCP). To date, process controls have used quantities such as the spectral line intensities of the sputtered metal as an indicator of the sputter rate; the coupling between plasma and substrate is not considered. The control system in this work uses a new plasma diagnostic method: the multipole resonance probe (MRP), which measures plasma parameters such as the electron density by analyzing a characteristic resonance frequency of the system response. This concept combines target processes and plasma effects and directly controls the sputter source instead of the resulting target parameters.

  10. Quantum Hamiltonian identification from measurement time traces.

    PubMed

    Zhang, Jun; Sarovar, Mohan

    2014-08-22

    Precise identification of parameters governing quantum processes is a critical task for quantum information and communication technologies. In this Letter, we consider a setting where system evolution is determined by a parametrized Hamiltonian, and the task is to estimate these parameters from temporal records of a restricted set of system observables (time traces). Based on the notion of system realization from linear systems theory, we develop a constructive algorithm that provides estimates of the unknown parameters directly from these time traces. We illustrate the algorithm and its robustness to measurement noise by applying it to a one-dimensional spin chain model with variable couplings.
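
    A hedged sketch of a realization-style estimate in this spirit: a matrix-pencil / Hankel-SVD computation that recovers the oscillation frequency encoded in a single measured time trace, as a simple stand-in for a Hamiltonian parameter. This is a generic linear-systems calculation, not the authors' algorithm; the two-level example and all numerical values are assumptions:

    ```python
    import numpy as np

    np.random.seed(0)
    dt, N = 0.05, 200
    t = np.arange(N) * dt
    omega_true = 2.0                                   # assumed transition frequency (rad/s)
    y = np.cos(omega_true * t) + 0.01 * np.random.randn(N)   # measured time trace, e.g. <Z>(t)

    # Shifted Hankel matrices built from the single time trace.
    L = 80
    M = N - L
    Y0 = np.array([y[i:i + L] for i in range(M)])
    Y1 = np.array([y[i + 1:i + 1 + L] for i in range(M)])

    # Rank-truncated realization: eigenvalues of the reduced transition matrix
    # are the signal poles exp(+/- i*omega*dt).
    r = 2                                              # one real oscillation = two complex poles
    U, s, Vt = np.linalg.svd(Y0, full_matrices=False)
    A = np.diag(1.0 / s[:r]) @ U[:, :r].T @ Y1 @ Vt[:r].T
    poles = np.linalg.eigvals(A)
    omega_est = np.mean(np.abs(np.angle(poles))) / dt

    print(f"estimated frequency: {omega_est:.3f} rad/s (true {omega_true} rad/s)")
    ```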

  11. Measurement accuracy of a stressed contact lens during its relaxation period

    NASA Astrophysics Data System (ADS)

    Compertore, David C.; Ignatovich, Filipp V.

    2018-02-01

    We examine the dioptric power and transmitted wavefront of a contact lens as it releases its handling stresses. Handling stresses are introduced as part of the contact lens loading process and are common across all contact lens measurement procedures and systems. The latest advances in vision correction require tighter quality control during the manufacturing of contact lenses, and the optical power of contact lenses is one of the critical characteristics for users. Power measurements are conducted in the hydrated state, with the lens resting inside a solution-filled glass cuvette. In a typical approach, the contact lens must be subjected to long settling times prior to any measurement, or multiple measurements must be averaged. Apart from the potential operator dependency of such an approach, it is extremely time-consuming and therefore precludes higher testing rates. Comprehensive knowledge about the settling process can be obtained by monitoring multiple parameters of the lens simultaneously. We have developed a system that combines a co-aligned Shack-Hartmann transmitted-wavefront sensor and a time-domain low-coherence interferometer to measure several optical and physical parameters (power, cylinder power, aberrations, center thickness, sagittal depth, and diameter) simultaneously. We monitor these parameters during the stress relaxation period and show correlations that manufacturers can use to devise improved quality control procedures.

  12. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
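
    The back-projection idea can be sketched as follows for a synthetic single-view planar target: observed pixels are back-projected onto the calibration plane and the resulting 3D residuals are minimized with a non-linear least-squares routine. The camera model, numerical values, and starting guess are illustrative assumptions (a real calibration would use several views and include lens distortion):

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    np.random.seed(0)

    # Synthetic planar checkerboard (z = 0 in the board frame), in mm.
    gx, gy = np.meshgrid(np.arange(8), np.arange(6))
    X_board = np.c_[gx.ravel() * 30.0, gy.ravel() * 30.0, np.zeros(gx.size)]

    def project(params, X):
        """Forward pinhole projection, used only to create the test data."""
        fx, fy, cx, cy = params[:4]
        R = Rotation.from_rotvec(params[4:7]).as_matrix()
        t = params[7:10]
        Xc = X @ R.T + t
        return np.c_[fx * Xc[:, 0] / Xc[:, 2] + cx, fy * Xc[:, 1] / Xc[:, 2] + cy]

    def backproject_residuals(params, uv, X):
        """3D residuals: back-project pixels onto the board plane and compare."""
        fx, fy, cx, cy = params[:4]
        R = Rotation.from_rotvec(params[4:7]).as_matrix()
        t = params[7:10]
        d_cam = np.c_[(uv[:, 0] - cx) / fx, (uv[:, 1] - cy) / fy, np.ones(len(uv))]
        d_board = d_cam @ R            # ray directions expressed in the board frame
        o_board = -t @ R               # camera centre expressed in the board frame
        s = -o_board[2] / d_board[:, 2]          # intersection with the plane z = 0
        P = o_board + s[:, None] * d_board
        return (P[:, :2] - X[:, :2]).ravel()

    true = np.array([800., 800., 320., 240., 0.1, -0.2, 0.05, -100., -70., 600.])
    uv = project(true, X_board) + np.random.normal(0, 0.2, (len(X_board), 2))

    guess = true + np.array([30., -25., 5., -4., 0.02, 0.01, -0.02, 5., 5., 20.])
    fit = least_squares(backproject_residuals, guess, args=(uv, X_board))
    print("refined fx, fy, cx, cy:", np.round(fit.x[:4], 1))
    print("true    fx, fy, cx, cy:", true[:4])
    ```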

  13. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  14. Measuring self-aligned quadruple patterning pitch walking with scatterometry-based metrology utilizing virtual reference

    NASA Astrophysics Data System (ADS)

    Kagalwala, Taher; Vaid, Alok; Mahendrakar, Sridhar; Lenahan, Michael; Fang, Fang; Isbester, Paul; Shifrin, Michael; Etzioni, Yoav; Cepler, Aron; Yellai, Naren; Dasari, Prasad; Bozdog, Cornel

    2016-10-01

    Advanced technology nodes, 10 nm and beyond, employing multipatterning techniques for pitch reduction pose new process and metrology challenges in maintaining consistent positioning of structural features. A self-aligned quadruple patterning (SAQP) process is used to create the fins in FinFET devices with pitch values well below optical lithography limits. The SAQP process bears the compounding effects from successive reactive ion etch and spacer depositions. These processes induce a shift in the pitch value from one fin compared to another neighboring fin. This is known as pitch walking. Pitch walking affects device performance as well as later processes, which work on an assumption that there is consistent spacing between fins. In SAQP, there are three pitch walking parameters of interest, each linked to specific process steps in the flow. These pitch walking parameters are difficult to discriminate at a specific process step by singular evaluation technique or even with reference metrology, such as transmission electron microscopy. We will utilize a virtual reference to generate a scatterometry model to measure pitch walk for SAQP process flow.

  15. Experiments for practical education in process parameter optimization for selective laser sintering to increase workpiece quality

    NASA Astrophysics Data System (ADS)

    Reutterer, Bernd; Traxler, Lukas; Bayer, Natascha; Drauschke, Andreas

    2016-04-01

    Selective Laser Sintering (SLS) is considered one of the most important additive manufacturing processes due to component stability and its broad range of usable materials. However, the influence of the different process parameters on mechanical workpiece properties is still poorly studied, so further optimization is necessary to increase workpiece quality. In order to investigate the impact of various process parameters, laboratory experiments were implemented to improve the understanding of the limitations and advantages of SLS at an educational level. The experiments are based on two different workstations used to teach students the fundamentals of SLS. First, a 50 W CO2 laser workstation is used to investigate the interaction of the laser beam with the material for varied process parameters, using a single-layered test piece. Second, the FORMIGA P110 laser sintering system from EOS is used to print different 3D test pieces as a function of various process parameters. Finally, quality attributes including warpage, dimensional accuracy, and tensile strength are tested. For dimension measurements and evaluation of the surface structure, a telecentric lens in combination with a camera is used. A tensile testing machine allows the tensile strength to be measured and stress-strain curves to be interpreted. The developed laboratory experiments are suitable for teaching students the influence of the processing parameters; the students will then be able to optimize the input parameters for the component to be manufactured and to increase the overall quality of the final workpiece.

  16. Experimental Methods Using Photogrammetric Techniques for Parachute Canopy Shape Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Downey, James M.; Lunsford, Charles B.; Desabrais, Kenneth J.; Noetscher, Gregory

    2007-01-01

    NASA Langley Research Center in partnership with the U.S. Army Natick Soldier Center has collaborated on the development of a payload instrumentation package to record the physical parameters observed during parachute air drop tests. The instrumentation package records a variety of parameters including canopy shape, suspension line loads, payload 3-axis acceleration, and payload velocity. This report discusses the instrumentation design and development process, as well as the photogrammetric measurement technique used to provide shape measurements. The scaled model tests were conducted in the NASA Glenn Plum Brook Space Propulsion Facility, OH.

  17. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
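
    A minimal sketch of the automated daily-QA idea: compute a few simple performance metrics from the day's phantom image and append them to a time series. The metrics, ROI positions, and file layout are assumptions for illustration, not the workflow's actual implementation:

    ```python
    import csv
    import datetime
    import os
    import numpy as np

    def daily_qa_metrics(img):
        """Very simple QA metrics from a single phantom slice (2-D array)."""
        h, w = img.shape
        cy, cx = h // 2, w // 2
        signal_roi = img[cy - 20:cy + 20, cx - 20:cx + 20]   # centre of the phantom
        noise_roi = img[5:25, 5:25]                          # background corner
        mean_signal = float(signal_roi.mean())
        return {
            "date": datetime.date.today().isoformat(),
            "mean_signal": mean_signal,
            "snr": mean_signal / float(noise_roi.std()),
            "uniformity": float(signal_roi.min() / signal_roi.max()),
        }

    # Synthetic phantom slice: bright disk on a noisy background.
    yy, xx = np.mgrid[:256, :256]
    phantom = np.where((yy - 128) ** 2 + (xx - 128) ** 2 < 80 ** 2, 1000.0, 0.0)
    phantom += np.random.normal(0, 15, phantom.shape)

    row = daily_qa_metrics(phantom)
    need_header = not os.path.exists("daily_qa.csv") or os.path.getsize("daily_qa.csv") == 0
    with open("daily_qa.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if need_header:
            writer.writeheader()
        writer.writerow(row)                                 # one row per scanner per day
    print(row)
    ```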

  18. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists among neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships incorporated as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions in order to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins, as sketched below. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure for the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
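
    A small sketch of the closeness-based selection step, assuming illustrative Pareto sets, neighbour parameters, and similarity weights (in the actual procedure the weights come from the a priori parameter analysis):

    ```python
    import numpy as np

    # Pareto-optimal parameter sets for the basin being regionalized
    # (rows = candidate sets, columns = model parameters); values are assumed.
    pareto_sets = np.array([
        [0.30, 1.2, 55.0, 0.08],
        [0.45, 1.0, 60.0, 0.10],
        [0.28, 1.4, 48.0, 0.07],
    ])

    # Reference parameters from a neighbouring, already-calibrated basin.
    neighbour = np.array([0.32, 1.1, 52.0, 0.09])

    # Weights: parameters believed (a priori) to be similar across basins get
    # higher weight; values here are purely illustrative.
    weights = np.array([1.0, 0.8, 0.3, 1.0])

    # Scale-free closeness: weighted relative distance to the neighbouring basin.
    closeness = np.sqrt(((weights * (pareto_sets - neighbour) / neighbour) ** 2).sum(axis=1))
    best = int(np.argmin(closeness))
    print("closeness per Pareto set:", np.round(closeness, 3))
    print("selected parameter set:  ", pareto_sets[best])
    ```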

  19. The influence of process parameters on porosity formation in hybrid LASER-GMA welding of AA6082 aluminum alloy

    NASA Astrophysics Data System (ADS)

    Ascari, Alessandro; Fortunato, Alessandro; Orazi, Leonardo; Campana, Giampaolo

    2012-07-01

    This paper deals with an experimental campaign carried out on 8 mm thick AA6082 plates in order to investigate the role of process parameters in porosity formation in hybrid LASER-GMA welding. Bead-on-plate weldments were produced on the above-mentioned aluminum alloy considering the variation of the following process parameters: GMAW current (120 and 180 A for short-arc mode, 90 and 130 A for pulsed-arc mode), arc transfer mode (short-arc and pulsed-arc), and the mutual distance between the arc and LASER sources (0, 3, and 6 mm). Porosities occurring in the fused zone were observed by means of X-ray inspection and measured using image analysis software. In order to understand the possible correlation between process parameters and porosity formation, an analysis-of-variance statistical approach was used. The results point out that the GMAW current has a significant effect on porosity formation, while the distance between the sources does not affect this aspect.

  20. Progressive freezing and sweating in a test unit

    NASA Astrophysics Data System (ADS)

    Ulrich, J.; Özoğuz, Y.

    1990-01-01

    Crystallization from melts is applied in several fields, such as wastewater treatment, fruit juice and liquid food concentration, and the purification of organic chemicals. Investigations to improve the understanding, the performance, and the control of the process have been carried out. The experimental unit uses a vertical tube with a falling film on the outside. With a specially designed measuring technique, the process-controlling parameters have been studied. The results demonstrate the dependency of these parameters upon each other and indicate how the process can be controlled by controlling the dominant parameter, the growth rate of the crystal coat. A further purification of the crystal layer can be achieved by introducing a sweating step, which is a controlled partial melting of the crystal coat. Here again, the process parameters have been varied and the results are presented. The strong effect on final product purity of an efficiently executed sweating step, tuned to the crystallization procedure, should save crystallization steps, energy, and time.

  1. Sensitivity of low-energy incomplete fusion to various entrance-channel parameters

    NASA Astrophysics Data System (ADS)

    Kumar, Harish; Tali, Suhail A.; Afzal Ansari, M.; Singh, D.; Ali, Rahbar; Kumar, Kamal; Sathik, N. P. M.; Ali, Asif; Parashari, Siddharth; Dubey, R.; Bala, Indu; Kumar, R.; Singh, R. P.; Muralithar, S.

    2018-03-01

    The dependence of incomplete fusion on various entrance-channel parameters has been disentangled using forward recoil range distribution measurements for the 12C+175Lu system at ≈88 MeV. These measurements give a direct measure of the full and/or partial linear momentum transfer from the projectile to the target nucleus. The comparison of the observed recoil ranges with theoretical ranges calculated using the code SRIM indicates the production of evaporation residues via complete and/or incomplete fusion processes. The present results show that the incomplete fusion process contributes significantly to the production of α xn and 2α xn emission channels. The deduced incomplete fusion probability (F_{ICF}) is compared with that obtained for systems available in the literature. An interesting behavior of F_{ICF} with the Coulomb factor (ZPZT) is observed in the reinvestigation of the incomplete fusion dependence on ZPZT, contrary to recent observations. The present results based on ZPZT are found to be in good agreement with recent observations by our group. A larger F_{ICF} value is found for 12C-induced reactions than for 13C-induced reactions, although both have the same ZPZT. A non-systematic behavior of the incomplete fusion process with the target deformation parameter (β2) is observed, which is further correlated with a new parameter (ZPZT·β2). The projectile α-Q-value is found to explain more clearly the discrepancy observed in the dependence of incomplete fusion on the parameters ZPZT and ZPZT·β2. It may be pointed out that no single entrance-channel parameter (mass asymmetry, ZPZT, β2, or the projectile α-Q-value) may be able to explain the incomplete fusion process completely.

  2. Flow Tube Studies of Gas Phase Chemical Processes of Atmospheric Importance

    NASA Technical Reports Server (NTRS)

    Molina, Mario J.

    1997-01-01

    The objective of this project is to conduct measurements of elementary reaction rate constants and photochemistry parameters for processes of importance in the atmosphere. These measurements are being carried out under temperature and pressure conditions covering those applicable to the stratosphere and upper troposphere, using the chemical ionization mass spectrometry turbulent flow technique developed in our laboratory.

  3. Experimental Validation of Strategy for the Inverse Estimation of Mechanical Properties and Coefficient of Friction in Flat Rolling

    NASA Astrophysics Data System (ADS)

    Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker

    2017-08-01

    Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling of the process is essential, and modelling requires input data on material properties and friction. In batch-production rolling with newer materials, it may be difficult to determine these input parameters offline. In view of this, in the present work a methodology to determine these parameters online from measurements of exit temperature and slip is verified experimentally. It is observed that the inverse prediction of the input parameters can be done with reasonable accuracy. It was also found experimentally that there is a correlation between the micro-hardness and the flow stress of the material; however, the correlation between surface roughness and reduction is less obvious.

  4. Effect of Processing Parameters on the Physical, Thermal, and Combustion Properties of Plasma-Synthesized Aluminum Nanopowders

    DTIC Science & Technology

    2011-02-01

    ...only a couple of processing parameters. [Table 2, statistical results of the DOE; factors: run no., plasma power, feed rate, system pressure, quench rate] ...and quench rate. Particle size was chosen as the measured response due to its predominant effect on material properties. The results of the DOE ... showed that feed rate and quench rate have the largest effect on particle size. All synthesized powders were characterized by thermogravimetric...

  5. Analysis of the shrinkage at the thick plate part using response surface methodology

    NASA Astrophysics Data System (ADS)

    Hatta, N. M.; Azlan, M. Z.; Shayfull, Z.; Roselina, S.; Nasir, S. M.

    2017-09-01

    Injection moulding is a well-known manufacturing process, especially for producing plastic products. To ensure final product quality, many precautions must be taken, such as the parameter settings at the initial stage of the process. If these parameters are set incorrectly, defects may occur; one of the best-known defects in the injection moulding process is shrinkage. To overcome this problem, the parameter settings must be optimally adjusted at this precautionary stage, and this paper focuses on analysing the shrinkage of a thick plate part by optimising the parameters with the help of Response Surface Methodology (RSM) and ANOVA analysis. In previous studies, the parameter found by optimisation to be most influential in minimising shrinkage of the moulded part was packing pressure. Therefore, with reference to the previous literature, packing pressure was selected as a parameter for this study, together with three other parameters: melt temperature, cooling time, and mould temperature. The analysis of the process was based on simulations in the Autodesk Moldflow Insight (AMI) software, and the material used for the moulded part was Acrylonitrile Butadiene Styrene (ABS). The results show that the shrinkage can be minimised, and the significant parameters were found to be packing pressure, mould temperature, and melt temperature.

  6. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed, continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness, measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, or RMSE, can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter sets. To help alleviate this concern and mimic manual calibration processes, expert knowledge is incorporated into the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the parameter sets also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) model within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. High degrees of fitness were achieved, in addition to more realistic and consistent parameter sets such as those typically obtained during manual calibration procedures.
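
    A hedged sketch of how an expert-knowledge term can be added as an extra objective alongside a conventional fitness metric; the parameter indices, the specific rule (melt factors non-decreasing with elevation), and the toy model are assumptions, not the study's configuration:

    ```python
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def expert_penalty(params):
        """Penalty encoding one hypothetical piece of expert knowledge: the melt
        factor (parameter indices 0-2, one per elevation zone) should not
        decrease with elevation. Other relationships could be added the same way."""
        melt_factors = params[[0, 1, 2]]              # lower, middle, upper zone
        violations = np.clip(-np.diff(melt_factors), 0.0, None)
        return float(violations.sum())

    def objectives(params, simulate, obs):
        """Objective vector handed to the multi-objective optimizer
        (e.g. NSGA-II); both entries are minimized."""
        sim = simulate(params)
        return np.array([1.0 - nse(sim, obs), expert_penalty(params)])

    # Toy usage with a stand-in "model": two parameters scale a fixed hydrograph.
    obs = np.array([1.0, 3.0, 8.0, 5.0, 2.0, 1.5])
    simulate = lambda p: p[3] * obs + p[4]
    print(objectives(np.array([1.2, 1.5, 1.8, 0.9, 0.3]), simulate, obs))
    ```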

  7. Online total organic carbon (TOC) monitoring for water and wastewater treatment plants processes and operations optimization

    NASA Astrophysics Data System (ADS)

    Assmann, Céline; Scott, Amanda; Biller, Dondra

    2017-08-01

    Organic measurements, such as biological oxygen demand (BOD) and chemical oxygen demand (COD), were developed decades ago in order to measure organics in water. Today, these time-consuming measurements are still used as parameters to check water treatment quality; however, the time required to generate a result, ranging from hours to days, does not allow COD or BOD to serve as useful process control parameters - see (1) Standard Method 5210 B, 5-day BOD Test, 1997, and (2) ASTM D1252, COD Test, 2012. Online organic carbon monitoring allows for effective process control because results are generated every few minutes. Though it does not replace the BOD or COD measurements still required for compliance reporting, it allows for smart, data-driven, and rapid decision-making to improve process control and optimization or to meet compliance requirements. By interpreting the generated data and acting on it in real time, municipal drinking water and wastewater treatment facility operators can improve their operational expenditure (OPEX) efficiency and their ability to meet regulatory requirements. This paper describes how three municipal wastewater and drinking water plants gained process insight and identified optimization opportunities through the implementation of online total organic carbon (TOC) monitoring.

  8. Novel Online Diagnostic Analysis for In-Flight Particle Properties in Cold Spraying

    NASA Astrophysics Data System (ADS)

    Koivuluoto, Heli; Matikainen, Ville; Larjo, Jussi; Vuoristo, Petri

    2018-02-01

    In cold spraying, powder particles are accelerated by preheated supersonic gas stream to high velocities and sprayed on a substrate. The particle velocities depend on the equipment design and process parameters, e.g., on the type of the process gas and its pressure and temperature. These, in turn, affect the coating structure and the properties. The particle velocities in cold spraying are high, and the particle temperatures are low, which can, therefore, be a challenge for the diagnostic methods. A novel optical online diagnostic system, HiWatch HR, will open new possibilities for measuring particle in-flight properties in cold spray processes. The system employs an imaging measurement technique called S-PTV (sizing-particle tracking velocimetry), first introduced in this research. This technique enables an accurate particle size measurement also for small diameter particles with a large powder volume. The aim of this study was to evaluate the velocities of metallic particles sprayed with HPCS and LPCS systems and with varying process parameters. The measured in-flight particle properties were further linked to the resulting coating properties. Furthermore, the camera was able to provide information about variations during the spraying, e.g., fluctuating powder feeding, which is important from the process control and quality control point of view.

  9. Study on reservoir time-varying design flood of inflow based on Poisson process with time-dependent parameters

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Huang, Jing; Li, Jianchang

    2018-06-01

    The time-varying design flood can make full use of the measured data and provide the reservoir with a basis for both flood control and operational scheduling. This paper adopts the peak-over-threshold method for flood sampling in unit periods and a Poisson process with time-dependent parameters for simulating the reservoir's time-varying design flood. Considering the relationships between the model parameters and the underlying hypotheses, the over-threshold intensity, the goodness of fit of the Poisson distribution, and the design flood parameters are used as the criteria for selecting the unit period and threshold of the time-varying design flood, and the time-varying design flood process of the Longyangxia reservoir is derived for nine design frequencies. The time-varying design flood of inflow is closer to the reservoir's actual inflow conditions and can be used to adjust the operating water level in the flood season and to plan the utilization of flood resources in the basin.
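
    A simplified, stationary sketch of the peak-over-threshold / Poisson calculation for a single unit period, using a generalized Pareto fit for the exceedance magnitudes. The study itself uses time-dependent Poisson parameters and its own threshold-selection criteria; the flow record, threshold choice, and the assumed independence of exceedances are illustrative assumptions here:

    ```python
    import numpy as np
    from scipy.stats import genpareto

    np.random.seed(1)
    # Assumed daily inflow record for one unit period across many years (m^3/s).
    years = 40
    flows = np.random.gamma(shape=2.0, scale=300.0, size=years * 92)   # ~June-Aug

    u = np.quantile(flows, 0.97)                 # threshold choice is illustrative
    exceedances = flows[flows > u] - u
    lam = len(exceedances) / years               # Poisson rate: exceedances per year

    # Fit a generalized Pareto distribution to the exceedance magnitudes.
    xi, loc, sigma = genpareto.fit(exceedances, floc=0.0)

    def return_level(T):
        """T-year design flood under the stationary Poisson-GPD model."""
        if abs(xi) < 1e-6:
            return u + sigma * np.log(lam * T)
        return u + sigma / xi * ((lam * T) ** xi - 1.0)

    for T in (20, 100, 1000):
        print(f"{T}-year design flood for this unit period: {return_level(T):.0f} m^3/s")
    ```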

  10. New approach to statistical description of fluctuating particle fluxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saenko, V. V.

    2009-01-15

    The probability density functions (PDFs) of the increments of fluctuating particle fluxes are investigated. It is found that the PDFs have heavy power-law tails decreasing as x^(-α-1) as x → ∞. This makes it possible to describe these PDFs in terms of fractionally stable distributions (FSDs) q(x; α, β, θ, λ). The parameters α, β, θ, and λ were estimated statistically using, as an example, the time samples of fluctuating particle fluxes measured in the edge plasma of the L-2M stellarator. Two series of fluctuating fluxes, measured before and after boronization of the vacuum chamber, were processed. It is shown that the increments of fluctuating fluxes are well described by FSDs. The effect of boronization on the parameters of the FSDs is analyzed. An algorithm for statistically estimating the FSD parameters and a procedure for processing experimental data are described.

  11. In situ measurement and control of processing properties of composite resins in a production tool

    NASA Technical Reports Server (NTRS)

    Kranbuehl, D.; Hoff, M.; Haverty, P.; Loos, A.; Freeman, T.

    1988-01-01

    An in situ measuring technique for use in automated composite processing and quality control is discussed. Frequency-dependent electromagnetic sensors are used to measure processing parameters at four ply positions inside a thick-section, 192-ply graphite-epoxy composite during cure in an 8 x 4 in. autoclave. Viscosity measurements obtained using the sensors are compared with the viscosities calculated using the Loos-Springer cure process model, and good overall agreement is obtained. In a subsequent autoclave run, the output from the four sensors was used to control the autoclave temperature. Using this closed-loop, sensor-controlled autoclave temperature resulted in a more uniform and more rapid cure cycle.

  12. 40 CFR 63.11925 - What are my initial and continuous compliance requirements for process vents?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... scale. (iv) Engineering assessment including, but not limited to, the following: (A) Previous test..., and procedures used in the engineering assessment shall be documented. (3) For miscellaneous process...

  13. 40 CFR 63.11925 - What are my initial and continuous compliance requirements for process vents?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... scale. (iv) Engineering assessment including, but not limited to, the following: (A) Previous test..., and procedures used in the engineering assessment shall be documented. (3) For miscellaneous process...

  14. 40 CFR 63.11925 - What are my initial and continuous compliance requirements for process vents?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... scale. (iv) Engineering assessment including, but not limited to, the following: (A) Previous test..., and procedures used in the engineering assessment shall be documented. (3) For miscellaneous process...

  15. Impact parameter sensitive study of inner-shell atomic processes in the experimental storage ring

    NASA Astrophysics Data System (ADS)

    Gumberidze, A.; Kozhuharov, C.; Zhang, R. T.; Trotsenko, S.; Kozhedub, Y. S.; DuBois, R. D.; Beyer, H. F.; Blumenhagen, K.-H.; Brandau, C.; Bräuning-Demian, A.; Chen, W.; Forstner, O.; Gao, B.; Gassner, T.; Grisenti, R. E.; Hagmann, S.; Hillenbrand, P.-M.; Indelicato, P.; Kumar, A.; Lestinsky, M.; Litvinov, Yu. A.; Petridis, N.; Schury, D.; Spillmann, U.; Trageser, C.; Trassinelli, M.; Tu, X.; Stöhlker, Th.

    2017-10-01

    In this work, we present a pilot experiment in the experimental storage ring (ESR) at GSI devoted to impact-parameter-sensitive studies of inner-shell atomic processes in low-energy (heavy-)ion-atom collisions. The experiment was performed with bare and He-like xenon ions (Xe54+, Xe52+) colliding with neutral xenon gas atoms, resulting in a symmetric collision system. This choice of projectile charge states was made in order to compare the effect of a filled K-shell with that of an empty one. The projectile and target X-rays were measured at different observation angles for all impact parameters as well as for the impact parameter range of ∼35-70 fm.

  16. Optimization of process parameters in CNC turning of aluminium alloy using hybrid RSM cum TLBO approach

    NASA Astrophysics Data System (ADS)

    Rudrapati, R.; Sahoo, P.; Bandyopadhyay, A.

    2016-09-01

    The main aim of the present work is to analyse the significance of the turning parameters for surface roughness in computer numerically controlled (CNC) turning of an aluminium alloy. Spindle speed, feed rate, and depth of cut have been considered as the machining parameters. Experimental runs were conducted according to the Box-Behnken design method. After experimentation, the surface roughness was measured using a stylus profilometer. Factor effects were studied through analysis of variance. Mathematical modelling was done by response surface methodology to establish relationships between the input parameters and the output response. Finally, process optimization was carried out using the teaching-learning-based optimization (TLBO) algorithm. The predicted turning condition was validated through a confirmatory experiment.

  17. ZASPE: A Code to Measure Stellar Atmospheric Parameters and their Covariance from Spectra

    NASA Astrophysics Data System (ADS)

    Brahm, Rafael; Jordán, Andrés; Hartman, Joel; Bakos, Gáspár

    2017-05-01

    We describe the Zonal Atmospheric Stellar Parameters Estimator (zaspe), a new algorithm, and its associated code, for determining precise stellar atmospheric parameters and their uncertainties from high-resolution echelle spectra of FGK-type stars. zaspe estimates the stellar atmospheric parameters by comparing the observed spectrum against a grid of synthetic spectra only in the spectral zones most sensitive to changes in the atmospheric parameters. Realistic uncertainties in the parameters are computed from the data themselves by taking into account the systematic mismatches between the observed spectrum and the best-fitting synthetic one. The covariances between the parameters are also estimated in the process. zaspe can in principle use any pre-calculated grid of synthetic spectra, but unbiased grids are required to obtain accurate parameters. We tested the performance of two existing libraries and concluded that neither is suitable for computing precise atmospheric parameters. We therefore describe a process to synthesize a new library of synthetic spectra that was found to generate consistent results when compared with parameters obtained by different methods (interferometry, asteroseismology, equivalent widths).

  18. Radar systems for the water resources mission, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, R. K.; Claassen, J. P.; Erickson, R. L.; Fong, R. K. T.; Hanson, B. C.; Komen, M. J.; Mcmillan, S. B.; Parashar, S. K.

    1976-01-01

    The state of the art was assessed for radar measurement of soil moisture, snow, standing and flowing water, and lake and river ice; the required spacecraft radar parameters were determined; synthetic-aperture radar (SAR) systems meeting these parametric requirements were studied; and techniques for on-board processing of the radar data were investigated. Significant new concepts developed include the following: a scanning synthetic-aperture radar to achieve wide-swath coverage; a single-sideband radar; and comb-filter, range-sequential, range-offset SAR processing. The state of the art in radar measurement of water resources parameters is outlined. The feasibility of immediate development of a spacecraft water resources SAR was established. Numerous candidates for the on-board processor were examined.

  19. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in the clinical diagnosis of various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for the examination of a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements with the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, sequential weighted WE shows a significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
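
    A minimal sketch of plain (unweighted) Wiener estimation trained on synthetic colour-parameter pairs; the sequential weighted variant described above additionally applies per-parameter channel weights and estimates the parameters one after another, which is not reproduced here. All data and the hidden forward mapping are assumptions:

    ```python
    import numpy as np

    np.random.seed(2)
    # Training data: rows are samples; columns of C are colour-channel readings,
    # columns of P are the tissue parameters to recover (synthetic values).
    n, n_ch, n_par = 500, 6, 3
    P_train = np.random.uniform(0.0, 1.0, (n, n_par))
    A = np.random.randn(n_par, n_ch)                       # hidden forward mapping
    C_train = P_train @ A + 0.02 * np.random.randn(n, n_ch)

    # Wiener (linear MMSE) estimator from centred training data:
    #   W = Cov(P, C) Cov(C, C)^-1,   p_hat = p_mean + W (c - c_mean)
    Pm, Cm = P_train.mean(0), C_train.mean(0)
    P0, C0 = P_train - Pm, C_train - Cm
    W = (P0.T @ C0) @ np.linalg.inv(C0.T @ C0)

    # Apply the estimator to a new colour measurement.
    p_true = np.array([0.4, 0.7, 0.2])
    c_new = p_true @ A + 0.02 * np.random.randn(n_ch)
    p_hat = Pm + W @ (c_new - Cm)
    print("true parameters: ", p_true)
    print("Wiener estimate: ", np.round(p_hat, 3))
    ```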

  20. How well can we measure supermassive black hole spin?

    NASA Astrophysics Data System (ADS)

    Bonson, K.; Gallo, L. C.

    2016-05-01

    Being one of only two fundamental properties black holes possess, the spin of supermassive black holes (SMBHs) is of great interest for understanding accretion processes and galaxy evolution. However, in these early days of spin measurements, consistency and reproducibility of spin constraints have been a challenge. Here, we focus on X-ray spectral modelling of active galactic nuclei (AGN), examining how well we can truly recover known reflection parameters such as spin under standard conditions. We have created and fit over 4000 simulated Seyfert 1 spectra, each with 375±1k counts. We assess the fits with a reflection fraction of R = 1 as well as reflection-dominated AGN with R = 5. We also examine the consequence of permitting fits to search for retrograde spin. In general, we discover that most parameters are overestimated when spectroscopy is restricted to the 2.5-10.0 keV regime and that models are insensitive to the inner emissivity index and ionization. When the bandpass is extended out to 70 keV, parameters are more accurately estimated. Repeating the process for R = 5 reduces our ability to measure the photon index (~3 to 8 per cent error, and overestimated), but increases precision in all other parameters - most notably the ionization, which becomes better constrained (±45 erg cm s^{-1}) for low-ionization parameters (ξ < 200 erg cm s^{-1}). In all cases, we find the spin parameter is only well measured for the most rapidly rotating SMBHs (i.e. a > 0.8, to about ±0.10) and that the inner emissivity index is never well constrained. Allowing our model to search for retrograde spin did not improve the results.

  1. Infrared thermography of welding zones produced by polymer extrusion additive manufacturing✩

    PubMed Central

    Seppala, Jonathan E.; Migler, Kalman D.

    2016-01-01

    In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded though a rastering nozzle, welded onto neighboring layers and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging - in conjunction with necessary reflection corrections and calibration procedures - to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperatures can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step in the development of strategies to control and model the printing processes and in the ability to develop models that correlate critical part strength with material and processing parameters. PMID:29167755

  2. Infrared thermography of welding zones produced by polymer extrusion additive manufacturing.

    PubMed

    Seppala, Jonathan E; Migler, Kalman D

    2016-10-01

    In common thermoplastic additive manufacturing (AM) processes, a solid polymer filament is melted, extruded though a rastering nozzle, welded onto neighboring layers and solidified. The temperature of the polymer at each of these stages is the key parameter governing these non-equilibrium processes, but due to its strong spatial and temporal variations, it is difficult to measure accurately. Here we utilize infrared (IR) imaging - in conjunction with necessary reflection corrections and calibration procedures - to measure these temperature profiles of a model polymer during 3D printing. From the temperature profiles of the printed layer (road) and sublayers, the temporal profile of the crucially important weld temperatures can be obtained. Under typical printing conditions, the weld temperature decreases at a rate of approximately 100 °C/s and remains above the glass transition temperature for approximately 1 s. These measurement methods are a first step in the development of strategies to control and model the printing processes and in the ability to develop models that correlate critical part strength with material and processing parameters.

  3. Precision constraints on the top-quark effective field theory at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Durieux, G.

    We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^- \to bW^+\bar{b}W^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on the limits, which are therefore basis independent. This is contrasted with measurements of cross sections and forward-backward asymmetries. An overall measure of constraint strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondy, Lisa Ann; Rao, Rekha Ranjana; Shelden, Bion

    We are developing computational models to elucidate the expansion and dynamic filling process of a polyurethane foam, PMDI. The polyurethane of interest is chemically blown, where carbon dioxide is produced via the reaction of water, the blowing agent, and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. Here we detail the experiments needed to populate a processing model and provide parameters for the model based on these experiments. The model entails solving the conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions, following a simplified mathematical formalism that decouples these two reactions. Parameters for the polymerization kinetics model are reported based on infrared spectrophotometry. Parameters describing the gas generating reaction are reported based on measurements of volume, temperature and pressure evolution with time. A foam rheology model is proposed and parameters determined through steady-shear and oscillatory tests. Heat of reaction and heat capacity are determined through differential scanning calorimetry. Thermal conductivity of the foam as a function of density is measured using a transient method based on the theory of the transient plane source technique. Finally, density variations of the resulting solid foam in several simple geometries are directly measured by sectioning and sampling mass, as well as through x-ray computed tomography. These density measurements will be useful for model validation once the complete model is implemented in an engineering code.

  5. A real-time measurement system for parameters of live biology metabolism process with fiber optics

    NASA Astrophysics Data System (ADS)

    Tao, Wei; Zhao, Hui; Liu, Zemin; Cheng, Jinke; Cai, Rong

    2010-08-01

    Energy metabolism is one of the basic activities of living cells, in which lactate, O2, and CO2 are released into the extracellular environment. By monitoring these quantities, mitochondrial performance can be assessed. A continuous measurement system for O2 and CO2 concentrations and pH value is introduced in this paper. The system is made up of several small fiber-optic biosensors fitted to the culture container. The system setup and the measurement principles for the various parameters are explained. The design of the fiber-optic pH sensor, based on light absorption, is introduced in detail, and some experimental results are given. The results show that the system can measure pH with a precision suitable for cell cultivation; the linearity and repeatability errors are 3.6% and 6.7%, respectively, which is sufficient for the measurement task.

  6. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the Space Shuttle Main Engine (SSME), the functional relationship between measured parameters is unknown and complex, and the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to their optimization and space-searching capabilities, genetic algorithms were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem-domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
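
    A compact sketch of GA-based input selection with a binary chromosome; a cheap linear fit stands in for the neural network approximator to keep the example fast, and the data, GA settings, and parsimony penalty are assumptions:

    ```python
    import numpy as np

    np.random.seed(3)
    # Synthetic data: 12 candidate inputs, only 3 actually drive the output.
    n, n_feat = 400, 12
    X = np.random.randn(n, n_feat)
    y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.8 * X[:, 7] + 0.1 * np.random.randn(n)
    X_tr, X_va, y_tr, y_va = X[:300], X[300:], y[:300], y[300:]

    def fitness(mask):
        """Validation MSE of a cheap linear fit on the selected inputs (a fast
        stand-in for training the neural network approximator)."""
        if mask.sum() == 0:
            return 1e9
        cols = np.flatnonzero(mask)
        A_tr = np.c_[X_tr[:, cols], np.ones(len(X_tr))]
        A_va = np.c_[X_va[:, cols], np.ones(len(X_va))]
        w, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)
        mse = np.mean((A_va @ w - y_va) ** 2)
        return mse + 0.01 * mask.sum()               # small parsimony pressure

    pop_size, n_gen, p_mut = 30, 40, 0.05
    pop = (np.random.rand(pop_size, n_feat) < 0.5).astype(int)

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        new_pop = [pop[scores.argmin()].copy()]                  # elitism
        while len(new_pop) < pop_size:
            i, j = np.random.randint(pop_size, size=2)           # tournament selection
            a = pop[i] if scores[i] < scores[j] else pop[j]
            i, j = np.random.randint(pop_size, size=2)
            b = pop[i] if scores[i] < scores[j] else pop[j]
            cut = np.random.randint(1, n_feat)                   # one-point crossover
            child = np.r_[a[:cut], b[cut:]]
            flip = np.random.rand(n_feat) < p_mut                # bit-flip mutation
            child = np.where(flip, 1 - child, child)
            new_pop.append(child)
        pop = np.array(new_pop)

    best = pop[np.argmin([fitness(ind) for ind in pop])]
    print("selected inputs:", np.flatnonzero(best))
    ```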

  7. Physical Processes in Coastal Stratocumulus Clouds from Aircraft Measurements During UPPEF 2012

    DTIC Science & Technology

    2013-09-01

    ...pressure, dew point, water vapor, absolute humidity, and carbon dioxide concentration. There were various upward- and downward-looking pyranometers... [instrument table residue - meteorological parameters: IR temperature, -50 to +20 °C, up-looking modified Kipp & Zonen CM-22 pyranometer (CIRPAS/NRL); down-welling solar irradiance, 0-1400 W m-2, down-looking modified Kipp & Zonen CM-22 pyranometer (CIRPAS/NRL); up-welling solar...]

  8. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    USGS Publications Warehouse

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained by fitting transient-storage models (TSMs) to experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match the observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning; however, the authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate the application of our tool in two case studies and compare our results to output obtained from a more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
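
    A minimal sketch of the Monte Carlo workflow (sample parameters, run the model, score the fit against the tracer data, keep the behavioural sets); a toy breakthrough-curve function stands in for an actual OTIS run, and all ranges and thresholds are assumptions:

    ```python
    import numpy as np

    np.random.seed(4)
    t = np.linspace(0, 10, 200)                      # hours since tracer injection

    def toy_breakthrough(A, alpha):
        """Stand-in for a transient-storage model run (OTIS would be called
        here in practice): a main-channel pulse plus a storage-controlled tail."""
        pulse = np.exp(-((t - 2.0) / 0.5) ** 2) / A
        tail = alpha * np.exp(-t / (4.0 * alpha + 0.5)) * (t > 2.0)
        return pulse + tail

    obs = toy_breakthrough(1.2, 0.15) + np.random.normal(0, 0.01, t.size)

    # Monte Carlo sampling over plausible parameter ranges.
    n_draws = 5000
    A_s = np.random.uniform(0.5, 3.0, n_draws)       # channel-area-like parameter
    alpha_s = np.random.uniform(0.01, 0.5, n_draws)  # storage-exchange-like parameter
    rmse = np.array([np.sqrt(np.mean((toy_breakthrough(A, a) - obs) ** 2))
                     for A, a in zip(A_s, alpha_s)])

    # "Behavioural" sets: the best 5 % of draws; their spread indicates how well
    # each parameter is constrained by the tracer data.
    keep = rmse < np.quantile(rmse, 0.05)
    for name, vals in (("A", A_s[keep]), ("alpha", alpha_s[keep])):
        print(f"{name}: behavioural range {vals.min():.3f}-{vals.max():.3f} "
              f"(mean {vals.mean():.3f} +/- {vals.std():.3f})")
    ```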

  9. Invariant polarimetric contrast parameters of coherent light.

    PubMed

    Réfrégier, Philippe; Goudail, François

    2002-06-01

    Many applications use an active coherent illumination and analyze the variation of the polarization state of optical signals. However, as a result of the use of coherent light, these signals are generally strongly perturbed with speckle noise. This is the case, for example, for active polarimetric imaging systems that are useful for enhancing contrast between different elements in a scene. We propose a rigorous definition of the minimal set of parameters that characterize the difference between two coherent and partially polarized states. Indeed, two states of partially polarized light are a priori defined by eight parameters, for example, their two Stokes vectors. We demonstrate that the processing performance for such signal processing tasks as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by two scalar functions of these eight parameters. These two scalar functions are the invariant parameters that define the polarimetric contrast between two polarized states of coherent light. Different polarization configurations with the same invariant contrast parameters will necessarily lead to the same performance for a given task, which is a desirable quality for a rigorous contrast measure. The definition of these polarimetric contrast parameters simplifies the analysis and the specification of processing techniques for coherent polarimetric signals.

  10. Automation of data processing and calculation of retention parameters and thermodynamic data for gas chromatography

    NASA Astrophysics Data System (ADS)

    Makarycheva, A. I.; Faerman, V. A.

    2017-02-01

    An analysis of automation approaches is performed, and a programming solution for the automated processing of chromatographic data and their subsequent storage is developed using Mathcad and MS Excel spreadsheets. The proposed approach allows the data-processing algorithm to be modified without the involvement of programming experts. It provides the calculation of retention times and retention volumes, specific retention volumes, differential molar free energies of adsorption, partial molar solution enthalpies, and isosteric heats of adsorption. The developed solution is intended for use in a small research group and was tested on a series of new gas-chromatography sorbents; retention parameters and thermodynamic sorption quantities were calculated for more than 20 analytes. The resulting data are presented in a form suitable for comparative analysis and can be used to identify sorbents with the most favorable properties for specific analytical problems.

  11. Modelling uncertainties in the diffusion-advection equation for radon transport in soil using interval arithmetic.

    PubMed

    Chakraverty, S; Sahoo, B K; Rao, T D; Karunakar, P; Sapra, B K

    2018-02-01

    Modelling radon transport in the Earth's crust is a useful tool for investigating changes in geophysical processes prior to an earthquake event. Radon transport is generally modeled through the deterministic advection-diffusion equation. However, in order to determine the magnitudes of the parameters governing these processes from experimental measurements, it is necessary to investigate the role of uncertainties in those parameters. The present paper investigates this aspect by introducing interval uncertainties in the transport parameters, such as soil diffusivity and advection velocity, that occur in the radon transport equation as applied to the soil matrix. The predictions made with interval arithmetic are compared and discussed against the results of the classical deterministic model. The practical applicability of the model is demonstrated through a case study involving radon flux measurements at the soil surface with an accumulator deployed in steady-state mode. It is possible to detect the presence of very low levels of advection by applying uncertainty bounds to the variations in the concentration data observed in the accumulator. The results are further discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
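
    As a rough illustration of propagating interval uncertainties through a transport expression, the sketch below pushes a minimal interval class through a generic diffusion-only steady-state radon profile; this stands in for the full advection-diffusion model, and all interval bounds are illustrative:

        # Minimal interval-arithmetic sketch (assumes positive intervals and monotone functions).
        import math

        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __add__(self, o):
                return Interval(self.lo + o.lo, self.hi + o.hi)
            def __sub__(self, o):
                return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, o):
                p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
                return Interval(min(p), max(p))
            def __truediv__(self, o):
                return self * Interval(1.0/o.hi, 1.0/o.lo)        # assumes 0 not in o
            def apply(self, f):                                   # f monotone on the interval
                a, b = f(self.lo), f(self.hi)
                return Interval(min(a, b), max(a, b))
            def __repr__(self):
                return f"[{self.lo:.4g}, {self.hi:.4g}]"

        # Interval-valued transport parameters (illustrative ranges, not measured values)
        D_e   = Interval(2.0e-6, 6.0e-6)        # effective diffusivity, m^2/s
        lam   = Interval(2.098e-6, 2.100e-6)    # radon decay constant, 1/s
        C_inf = Interval(2.0e4, 4.0e4)          # deep soil-gas radon concentration, Bq/m^3

        # Diffusion-only steady state: C(z) = C_inf * (1 - exp(-z/L)), with L = sqrt(D_e/lam)
        L = (D_e / lam).apply(math.sqrt)
        z = 0.5                                  # depth below surface, m
        ratio = Interval(z, z) / L
        one = Interval(1.0, 1.0)
        C_z = C_inf * (one - ratio.apply(lambda x: math.exp(-x)))
        print("C(0.5 m) lies in", C_z)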

  12. 40 CFR 65.85 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... permit limit applicable to the process vent. (4) Design analysis based on accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or properties. (5) All data... tested for vapor tightness. (b) Engineering assessment. Engineering assessment to determine if a vent...

  13. 40 CFR 65.85 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... permit limit applicable to the process vent. (4) Design analysis based on accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or properties. (5) All data... tested for vapor tightness. (b) Engineering assessment. Engineering assessment to determine if a vent...

  14. 40 CFR 65.85 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... permit limit applicable to the process vent. (4) Design analysis based on accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or properties. (5) All data... tested for vapor tightness. (b) Engineering assessment. Engineering assessment to determine if a vent...

  15. 40 CFR 65.85 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... permit limit applicable to the process vent. (4) Design analysis based on accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or properties. (5) All data... tested for vapor tightness. (b) Engineering assessment. Engineering assessment to determine if a vent...

  16. 40 CFR 65.85 - Procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... permit limit applicable to the process vent. (4) Design analysis based on accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or properties. (5) All data... tested for vapor tightness. (b) Engineering assessment. Engineering assessment to determine if a vent...

  17. Optimal configuration of partial Mueller matrix polarimeter for measuring the ellipsometric parameters in the presence of Poisson shot noise and Gaussian noise

    NASA Astrophysics Data System (ADS)

    Quan, Naicheng; Zhang, Chunmin; Mu, Tingkui

    2018-05-01

    We address the optimal configuration of a partial Mueller matrix polarimeter used to determine the ellipsometric parameters in the presence of additive Gaussian noise and signal-dependent shot noise. The numerical results show that, for a PSG/PSA consisting of a variable retarder and a fixed polarizer, an optimal configuration that is robust to these two types of noise uses a retardation of 121.2° with a pair of azimuths ±71.34° and a retardation of 144.48° with a pair of azimuths ±31.56° for the measurement of four Mueller matrix elements. Compared with the existing configurations, the configuration presented in this paper can effectively decrease the measurement variance and thus statistically improve the measurement precision of the ellipsometric parameters.

  18. A simulation of air pollution model parameter estimation using data from a ground-based LIDAR remote sensor

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Suttles, J. T.

    1977-01-01

    One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content in the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade-offs available between estimation accuracy and data acquisition strategy.
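
    A compact sketch of the estimation loop described above: synthetic relative-concentration measurements are generated from a Gaussian plume model, perturbed with noise, and the dispersion parameters are recovered by nonlinear least squares. The plume geometry, parameter values, and noise level are illustrative, not those of the original simulation.

        # Hedged sketch: fit Gaussian-plume dispersion parameters to noisy remote-sensing data.
        import numpy as np
        from scipy.optimize import curve_fit

        Q, u, H = 1.0, 5.0, 50.0          # source strength, wind speed, release height (assumed known)

        def plume(xy, a, b, c, d):
            """Ground-level concentration with power-law sigmas: sigma_y = a*x**b, sigma_z = c*x**d."""
            x, y = xy
            sy, sz = a * x**b, c * x**d
            return (Q / (2 * np.pi * u * sy * sz) * np.exp(-y**2 / (2 * sy**2))
                    * 2 * np.exp(-H**2 / (2 * sz**2)))            # ground reflection at z = 0

        rng = np.random.default_rng(1)
        x = rng.uniform(200, 2000, 400)                           # downwind sample locations, m
        y = rng.uniform(-300, 300, 400)                           # crosswind sample locations, m
        true = (0.22, 0.90, 0.20, 0.85)                           # illustrative "true" parameters
        c_obs = plume((x, y), *true) * (1 + 0.05 * rng.normal(size=x.size))   # 5% relative noise

        p_hat, cov = curve_fit(plume, (x, y), c_obs, p0=(0.3, 0.8, 0.3, 0.8))
        print("estimated (a, b, c, d):", np.round(p_hat, 3))
        print("1-sigma uncertainties :", np.round(np.sqrt(np.diag(cov)), 3))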

  19. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets

    PubMed Central

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-01-01

    The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models based on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801

  20. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets.

    PubMed

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-03-03

    The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models based on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage.

  1. Determination of thermodynamic and transport parameters of naphthenic acids and organic process chemicals in oil sand tailings pond water.

    PubMed

    Wang, Xiaomeng; Robinson, Lisa; Wen, Qing; Kasperski, Kim L

    2013-07-01

    Oil sand tailings pond water contains naphthenic acids and process chemicals (e.g., alkyl sulphates, quaternary ammonium compounds, and alkylphenol ethoxylates). These chemicals are toxic and can seep through the foundation of the tailings pond to the subsurface, potentially affecting the quality of groundwater. As a result, it is important to measure the thermodynamic and transport parameters of these chemicals in order to study the transport behavior of contaminants through the foundation as well as underground. In this study, batch adsorption studies and column experiments were performed. It was found that the transport parameters of these chemicals are related to their molecular structures and other properties. The computer program CXTFIT was used to further evaluate the transport process in the column experiments. The results from this study show that the transport of naphthenic acids in a glass column is an equilibrium process, while the transport of process chemicals seems to be a non-equilibrium process. At the end of this paper we present a real-world case study in which the transport of the contaminants through the foundation of an external tailings pond is calculated using the lab-measured data. The results show that long-term groundwater monitoring of contaminant transport at the oil sand mining site may be necessary to prevent chemicals from reaching any nearby receptors.
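
    The column-experiment analysis can be illustrated with a hedged sketch in which the classical Ogata-Banks solution of the one-dimensional advection-dispersion equation (equilibrium transport, continuous step input) is fitted to synthetic breakthrough data; CXTFIT itself is not reproduced, and all numbers are illustrative.

        # Hedged sketch: fit the Ogata-Banks step-input solution of the 1-D
        # advection-dispersion equation to synthetic column breakthrough data.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erfc, erfcx

        L_col = 0.30                                  # column length, m

        def breakthrough(t, v, D):
            """C/C0 at x = L_col for a continuous step injection (equilibrium transport)."""
            t = np.asarray(t, dtype=float)
            a = (L_col - v * t) / (2.0 * np.sqrt(D * t))
            b = (L_col + v * t) / (2.0 * np.sqrt(D * t))
            # exp(v*L/D)*erfc(b) rewritten via erfcx(b)*exp(-a^2) for numerical stability
            return 0.5 * (erfc(a) + erfcx(b) * np.exp(-a * a))

        rng = np.random.default_rng(3)
        t = np.linspace(60, 7200, 80)                                   # s
        c_obs = breakthrough(t, 1.0e-4, 2.0e-6) + 0.01 * rng.normal(size=t.size)

        (v_hat, D_hat), _ = curve_fit(breakthrough, t, c_obs, p0=(5e-5, 1e-6),
                                      bounds=([1e-6, 1e-8], [1e-3, 1e-4]))
        print(f"pore-water velocity ~ {v_hat:.2e} m/s, dispersion ~ {D_hat:.2e} m^2/s")
        # Retardation or non-equilibrium exchange would show up as a systematic misfit.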

  2. Machine learning classifiers for glaucoma diagnosis based on classification of retinal nerve fibre layer thickness parameters measured by Stratus OCT.

    PubMed

    Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel

    2010-02-01

    To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
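
    A generic illustration of the classification step, with synthetic features standing in for the RNFLT parameters and scikit-learn's SVM and ROC utilities in place of the authors' exact pipeline:

        # Sketch: train an SVM on synthetic "RNFLT-like" features and report a cross-validated AROC.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import roc_auc_score
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        n_healthy, n_glaucoma, n_features = 90, 62, 12
        X = np.vstack([rng.normal(100, 10, (n_healthy, n_features)),    # thicker RNFL (healthy)
                       rng.normal(85, 12, (n_glaucoma, n_features))])   # thinner RNFL (glaucoma)
        y = np.r_[np.zeros(n_healthy), np.ones(n_glaucoma)]

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
        scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
        print("cross-validated AROC:", round(roc_auc_score(y, scores), 3))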

  3. Multi-wavelength dual polarisation lidar for monitoring precipitation process in the cloud seeding technique

    NASA Astrophysics Data System (ADS)

    Sudhakar, P.; Sheela, K. Anitha; Ramakrishna Rao, D.; Malladi, Satyanarayana

    2016-05-01

    In recent years, weather modification activities have been pursued in many countries through cloud seeding techniques to facilitate increased and timely precipitation from clouds. In order to induce and accelerate the precipitation process, clouds are artificially seeded with suitable materials such as silver iodide, sodium chloride, or other hygroscopic materials. The success of cloud seeding can be predicted with confidence if the precipitation process, involving the aerosol, the ice-water balance, the water vapor content, and the size of the seeding material in relation to the aerosol in the cloud, is monitored in real time and optimized. A project on the enhancement of rainfall through cloud seeding is being implemented jointly with Kerala State Electricity Board Ltd., Trivandrum, Kerala, India, at the catchment areas of the reservoir of one of the hydroelectric projects. The dual polarization lidar is being used to monitor and measure the microphysical properties, the extinction coefficient, the size distribution, and related parameters of the clouds. The lidar makes use of Mie, Rayleigh, and Raman scattering techniques for the various measurements proposed. Measurements with the dual polarization lidar are carried out in real time to obtain the various parameters during cloud seeding operations. In this paper we present the details of the multi-wavelength dual polarization lidar being used and the methodology for monitoring the various cloud parameters involved in the precipitation process. The necessary retrieval algorithms for deriving the microphysical properties of clouds, aerosol characteristics, and water vapor profiles are incorporated in a software package working under LabVIEW for online and offline analysis. Details of the simulation studies and the theoretical model developed for the optimization of the various parameters are also discussed.

  4. An overview on the use of backscattered sound for measuring suspended particle size and concentration profiles in non-cohesive inorganic sediment transport studies

    NASA Astrophysics Data System (ADS)

    Thorne, Peter D.; Hurther, David

    2014-02-01

    For over two decades, coastal marine scientists studying boundary layer sediment transport processes have been using, and developing, the application of sound for high temporal-spatial resolution measurements of suspended particle size and concentration profiles. To extract the suspended sediment parameters from the acoustic data requires an understanding of the interaction of sound with a suspension of sediments and an inversion methodology. This understanding is distributed around journals in a number of scientific fields and there is no single article that succinctly draws together the different components. In the present work the aim is to provide an overview on the acoustic approach to measuring suspended sediment parameters and assess its application in the study of non-cohesive inorganic suspended sediment transport processes.

  5. High speed demodulation systems for fiber optic grating sensors

    NASA Technical Reports Server (NTRS)

    Udd, Eric (Inventor); Weisshaar, Andreas (Inventor)

    2002-01-01

    Fiber optic grating sensor demodulation systems are described that offer high speed and multiplexing options for both single and multiple parameter fiber optic grating sensors. To attain very high speeds for single parameter fiber grating sensors ratio techniques are used that allow a series of sensors to be placed in a single fiber while retaining high speed capability. These methods can be extended to multiparameter fiber grating sensors. Optimization of speeds can be obtained by minimizing the number of spectral peaks that must be processed and it is shown that two or three spectral peak measurements may in specific multiparameter applications offer comparable or better performance than processing four spectral peaks. Combining the ratio methods with minimization of peak measurements allows very high speed measurement of such important environmental effects as transverse strain and pressure.

  6. Standard Reference Specimens in Quality Control of Engineering Surfaces

    PubMed Central

    Song, J. F.; Vorburger, T. V.

    1991-01-01

    In the quality control of engineering surfaces, we aim to understand and maintain a good relationship between the manufacturing process and surface function. This is achieved by controlling the surface texture. The control process involves: 1) learning the functional parameters and their control values through controlled experiments or through a long history of production and use; 2) maintaining high accuracy and reproducibility with measurements not only of roughness calibration specimens but also of real engineering parts. In this paper, the characteristics, utilizations, and limitations of different classes of precision roughness calibration specimens are described. A measuring procedure of engineering surfaces, based on the calibration procedure of roughness specimens at NIST, is proposed. This procedure involves utilization of check specimens with waveform, wavelength, and other roughness parameters similar to functioning engineering surfaces. These check specimens would be certified under standardized reference measuring conditions, or by a reference instrument, and could be used for overall checking of the measuring procedure and for maintaining accuracy and agreement in engineering surface measurement. The concept of “surface texture design” is also suggested, which involves designing the engineering surface texture, the manufacturing process, and the quality control procedure to meet the optimal functional needs. PMID:28184115
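
    For a concrete sense of the amplitude parameters typically controlled, the sketch below computes Ra and Rq (and the total height Rt) from a sampled profile; the sinusoidal profile is illustrative, and no waviness/form filtering is applied.

        # Sketch: amplitude roughness parameters from a sampled surface profile.
        import numpy as np

        x = np.linspace(0.0, 4.0e-3, 4000)                 # evaluation length 4 mm
        z = (1.0e-6 * np.sin(2 * np.pi * x / 100e-6)
             + 0.1e-6 * np.random.default_rng(0).normal(size=x.size))   # illustrative profile, m

        z = z - np.mean(z)                                  # remove the mean line (no waviness filter here)
        Ra = np.mean(np.abs(z))                             # arithmetic mean deviation
        Rq = np.sqrt(np.mean(z**2))                         # root-mean-square deviation
        Rt = z.max() - z.min()                              # total height of the profile
        print(f"Ra = {Ra*1e6:.3f} um, Rq = {Rq*1e6:.3f} um, Rt = {Rt*1e6:.3f} um")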

  7. Subsurface damage distribution in the lapping process.

    PubMed

    Wang, Zhuo; Wu, Yulie; Dai, Yifan; Li, Shengyi

    2008-04-01

    To systematically investigate the influence of lapping parameters on subsurface damage (SSD) depth and characterize the damage feature comprehensively, maximum depth and distribution of SSD generated in the optical lapping process were measured with the magnetorheological finishing wedge technique. Then, an interaction of adjacent indentations was applied to interpret the generation of maximum depth of SSD. Eventually, the lapping procedure based on the influence of lapping parameters on the material removal rate and SSD depth was proposed to improve the lapping efficiency.

  8. Polishing tool and the resulting TIF for three variable machine parameters as input for the removal simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Robert; Haberl, Alexander; Rascher, Rolf

    2017-06-01

    The trend in the optics industry shows that it is increasingly important to be able to manufacture complex lens geometries to a high level of precision. Beyond a certain required shape accuracy of optical workpieces, processing changes from full-aperture to sub-aperture (point-contact) processing, and it is very important that the process remain as stable as possible. To ensure stability, usually only one process parameter is varied during processing; commonly this parameter is the feed rate, which corresponds to the dwell time. In the research project ArenA-FOi (Application-oriented analysis of resource-saving and energy-efficient design of industrial facilities for the optical industry), a contacting sub-aperture procedure is used, and it is examined closely whether varying several process parameters during processing is meaningful. The commercially available ADAPT tool in size R20 from Satisloh AG is used, and its behavior is tested under constant conditions in the MCP 250 CNC machine by OptoTech GmbH. A series of experiments is intended to determine the TIF (tool influence function) for three variable parameters. Furthermore, the maximum error frequency that can be processed is calculated as an example for one parameter set and serves as an outlook for further investigations. The test results serve as the basis for the later removal simulation, which must be able to handle a variable TIF. This topic has already been successfully implemented in another research project of the Institute for Precision Manufacturing and High-Frequency Technology (IPH), so that algorithm can be reused. The next step is the practical application of the collected knowledge: the TIF must be selected on the basis of the measured data, and knowledge of the error frequencies is important for selecting the optimal TIF. It then becomes possible to compare the simulated results with real measurement data and to carry out a revision. From this point onwards, the potential of the approach can be evaluated; in the ideal case it will be researched further and later used in production.
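
    Under the usual linearity assumption, the removal simulation referred to above can be sketched as a convolution of the tool influence function with the dwell-time map; the Gaussian TIF and grid sizes below are placeholders, not the measured ADAPT R20 TIF.

        # Hedged sketch: material removal = TIF (removal-rate footprint) convolved with dwell time.
        import numpy as np
        from scipy.signal import fftconvolve

        # Placeholder Gaussian TIF on a 1 mm grid (peak removal rate in nm/s)
        gx = np.arange(-10, 11)                       # mm
        X, Y = np.meshgrid(gx, gx)
        tif = 5.0 * np.exp(-(X**2 + Y**2) / (2 * 4.0**2))

        # Dwell-time map over a 100 mm x 100 mm part (s per grid point), e.g. from a path planner
        dwell = np.ones((100, 100)) * 2.0
        dwell[40:60, 40:60] += 3.0                    # dwell longer where more removal is wanted

        removal = fftconvolve(dwell, tif, mode="same")   # predicted removal depth, nm
        print("peak removal %.1f nm, edge removal %.1f nm" % (removal.max(), removal[0, 0]))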

  9. Quantitative fluorescence angiography for neurosurgical interventions.

    PubMed

    Weichelt, Claudia; Duscha, Philipp; Steinmeier, Ralf; Meyer, Tobias; Kuß, Julia; Cimalla, Peter; Kirsch, Matthias; Sobottka, Stephan B; Koch, Edmund; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    Present methods for quantitative measurement of cerebral perfusion during neurosurgical operations require additional technology for measurement, data acquisition, and processing. This study used conventional fluorescence video angiography--as an established method to visualize blood flow in brain vessels--enhanced by a quantifying perfusion software tool. For these purposes, the fluorescence dye indocyanine green is given intravenously, and after activation by a near-infrared light source the fluorescence signal is recorded. Video data are analyzed by software algorithms to allow quantification of the blood flow. Additionally, perfusion is measured intraoperatively by a reference system. Furthermore, comparing reference measurements using a flow phantom were performed to verify the quantitative blood flow results of the software and to validate the software algorithm. Analysis of intraoperative video data provides characteristic biological parameters. These parameters were implemented in the special flow phantom for experimental validation of the developed software algorithms. Furthermore, various factors that influence the determination of perfusion parameters were analyzed by means of mathematical simulation. Comparing patient measurement, phantom experiment, and computer simulation under certain conditions (variable frame rate, vessel diameter, etc.), the results of the software algorithms are within the range of parameter accuracy of the reference methods. Therefore, the software algorithm for calculating cortical perfusion parameters from video data presents a helpful intraoperative tool without complex additional measurement technology.

  10. Hybrid scatterometry measurement for BEOL process control

    NASA Astrophysics Data System (ADS)

    Timoney, Padraig; Vaid, Alok; Kang, Byeong Cheol; Liu, Haibo; Isbester, Paul; Cheng, Marjorie; Ng-Emans, Susan; Yellai, Naren; Sendelbach, Matt; Koret, Roy; Gedalia, Oram

    2017-03-01

    Scaling of interconnect design rules in advanced nodes has been accompanied by a shrinking metrology budget for BEOL process control. Traditional inline optical metrology measurements of BEOL processes rely on 1-dimensional (1D) film pads to characterize film thickness. Such pads are designed on the assumption that solid copper blocks from previous metallization layers prevent any light from penetrating through the copper, thus simplifying the effective film stack for the 1D optical model. However, the reduction of the copper thickness in each metallization layer and CMP dishing effects within the pad have introduced undesired noise in the measurement. To resolve this challenge and to measure structures that are more representative of product, scatterometry has been proposed as an alternative measurement. Scatterometry is a diffraction-based optical measurement technique using Rigorous Coupled Wave Analysis (RCWA), where light diffracted from a periodic structure is used to characterize the profile. Scatterometry measurements on 3D structures have been shown to demonstrate strong correlation to electrical resistance parameters for BEOL Etch and CMP processes. However, there is significant modeling complexity in such 3D scatterometry models, in particular due to the complexity of front-end-of-line (FEOL) and middle-of-line (MOL) structures. The accompanying measurement noise associated with such structures can contribute significant measurement error. To address the measurement noise of the 3D structures and the impact of incoming process variation, a hybrid scatterometry technique is proposed that utilizes key information from the structure to significantly reduce the measurement uncertainty of the scatterometry measurement. Hybrid metrology combines measurements from two or more metrology techniques to enable or improve the measurement of a critical parameter. In this work, the hybrid scatterometry technique is evaluated for 7nm and 14nm node BEOL measurements of interlayer dielectric (ILD) thickness, hard mask thickness and dielectric trench etch in complex 3D structures. The data obtained from the hybrid scatterometry technique demonstrates stable measurement precision, improved within-wafer and wafer-to-wafer range, robustness in cases where 3D scatterometry measurements incur undesired shifts in the measurements, accuracy as compared to TEM and correlation to process deposition time. Process capability indicator comparisons also demonstrate improvement as compared to conventional scatterometry measurements. The results validate the suitability of the method for monitoring of production BEOL processes.

  11. Selected algorithms for measurement data processing in impulse-radar-based system for monitoring of human movements

    NASA Astrophysics Data System (ADS)

    Miękina, Andrzej; Wagner, Jakub; Mazurek, Paweł; Morawski, Roman Z.

    2016-11-01

    The importance of research on new technologies that could be employed in care services for elderly and disabled persons is highlighted. Advantages of impulse-radar sensors, when applied for non-intrusive monitoring of such persons in their home environment, are indicated. Selected algorithms for the measurement data preprocessing - viz. the algorithms for clutter suppression and echo parameter estimation, as well as for estimation of the two-dimensional position of a monitored person - are proposed. The capability of an impulse-radar-based system to provide some application-specific parameters, viz. the parameters characterising the patient's health condition, is also demonstrated.

  12. LCE: leaf carbon exchange data set for tropical, temperate, and boreal species of North and Central America.

    PubMed

    Smith, Nicholas G; Dukes, Jeffrey S

    2017-11-01

    Leaf canopy carbon exchange processes, such as photosynthesis and respiration, are substantial components of the global carbon cycle. Climate models base their simulations of photosynthesis and respiration on an empirical understanding of the underlying biochemical processes, and the responses of those processes to environmental drivers. As such, data spanning large spatial scales are needed to evaluate and parameterize these models. Here, we present data on four important biochemical parameters defining leaf carbon exchange processes from 626 individuals of 98 species at 12 North and Central American sites spanning ~53° of latitude. The four parameters are the maximum rate of Rubisco carboxylation (Vcmax), the maximum rate of electron transport for the regeneration of ribulose-1,5-bisphosphate (Jmax), the maximum rate of phosphoenolpyruvate carboxylase carboxylation (Vpmax), and leaf dark respiration (Rd). The raw net photosynthesis by intercellular CO2 (A/Ci) data used to calculate Vcmax, Jmax, and Vpmax rates are also presented. Data were gathered on the same leaf of each individual (one leaf per individual), allowing for the examination of each parameter relative to others. Additionally, the data set contains a number of covariates for the plants measured. Covariate data include (1) leaf-level traits (leaf mass, leaf area, leaf nitrogen and carbon content, predawn leaf water potential), (2) plant-level traits (plant height for herbaceous individuals and diameter at breast height for trees), (3) soil moisture at the time of measurement, (4) air temperature from nearby weather stations for the day of measurement and each of the 90 d prior to measurement, and (5) climate data (growing season mean temperature, precipitation, photosynthetically active radiation, vapor pressure deficit, and aridity index). We hope that the data will be useful for obtaining greater understanding of the abiotic and biotic determinants of these important biochemical parameters and for evaluating and improving large-scale models of leaf carbon exchange. © 2017 by the Ecological Society of America.

  13. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  14. Interactions of solutes and streambed sediment: 2. A dynamic analysis of coupled hydrologic and chemical processes that determine solute transport

    USGS Publications Warehouse

    Bencala, Kenneth E.

    1984-01-01

    Solute transport in streams is determined by the interaction of physical and chemical processes. Data from an injection experiment for chloride and several cations indicate significant influence of solute-streambed processes on transport in a mountain stream. These data are interpreted in terms of transient storage processes for all tracers and sorption processes for the cations. Process parameter values are estimated with simulations based on coupled quasi-two-dimensional transport and first-order mass transfer sorption. Comparative simulations demonstrate the relative roles of the physical and chemical processes in determining solute transport. During the first 24 hours of the experiment, chloride concentrations were attenuated relative to expected plateau levels. Additional attenuation occurred for the sorbing cation strontium. The simulations account for these storage processes. Parameter values determined by calibration compare favorably with estimates from other studies in mountain streams. Without further calibration, the transport of potassium and lithium is adequately simulated using parameters determined in the chloride-strontium simulation and with measured cation distribution coefficients.

  15. Mathematical form models of tree trunks

    Treesearch

    Rudolfs Ozolins

    2000-01-01

    Assortment structure analysis of tree trunks is a characteristic and proper problem that can be solved by using mathematical modeling and standard computer programs. Mathematical form model of tree trunks consists of tapering curve equations and their parameters. Parameters for nine species were obtained by processing measurements of 2,794 model trees and studying the...

  16. Robust sensor fault detection and isolation of gas turbine engines subjected to time-varying parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar

    2016-08-01

    In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple model-based (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) that are constructed for multiple piecewise linear (PWL) models that are constructed at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties, process and measurement noise. Our comparative studies confirm the superiority of our proposed FDI method when compared to the methods that are available in the literature.
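
    A much-simplified, generic illustration of residual-based sensor fault detection with a single Kalman filter (not the robust multiple-model scheme of the paper): a bias fault injected into one sensor is flagged when the normalized innovation exceeds a chi-square gate.

        # Sketch: Kalman-filter innovation test for sensor fault detection (single linear model).
        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(0)
        A = np.array([[1.0, 0.1], [0.0, 0.95]])      # simple 2-state process model
        H = np.eye(2)                                 # both states measured
        Q, R = 0.01 * np.eye(2), 0.04 * np.eye(2)

        x_true = np.zeros(2); x_hat = np.zeros(2); P = np.eye(2)
        threshold = chi2.ppf(0.999, df=2)             # gate on the normalized innovation
        first_alarm = None

        for k in range(200):
            x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
            z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
            if k >= 120:
                z[0] += 1.5                           # injected bias fault on sensor 1

            # Predict
            x_hat = A @ x_hat
            P = A @ P @ A.T + Q
            # Innovation test
            nu = z - H @ x_hat
            S = H @ P @ H.T + R
            d2 = nu @ np.linalg.solve(S, nu)          # normalized innovation squared
            if d2 > threshold:
                if first_alarm is None:
                    first_alarm = k
                continue                              # simplification: skip update when a fault is declared
            # Update
            K = P @ H.T @ np.linalg.inv(S)
            x_hat = x_hat + K @ nu
            P = (np.eye(2) - K @ H) @ P

        print("fault injected at step 120, first alarm at step:", first_alarm)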

  17. Development of real-time rotating waveplate Stokes polarimeter using multi-order retardation for ITER poloidal polarimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imazawa, R., E-mail: imazawa.ryota@jaea.go.jp; Kawano, Y.; Ono, T.

    The rotating waveplate Stokes polarimeter was developed for the ITER (International Thermonuclear Experimental Reactor) poloidal polarimeter. A generalized model of the rotating waveplate Stokes polarimeter and an algorithm suitable for real-time field-programmable gate array (FPGA) processing were proposed. Since the generalized model takes into account each component associated with the rotation of the waveplate, the Stokes parameters can be accurately measured even under non-ideal conditions such as non-uniformity of the waveplate retardation. Experiments using a He-Ne laser showed that the maximum error and the precision of the Stokes parameters were 3.5% and 1.2%, respectively. The rotation speed of the waveplate was 20 000 rpm, and the time resolution of the Stokes-parameter measurement was 3.3 ms. Software emulation showed that real-time measurement of the Stokes parameters with a time resolution of less than 10 ms is possible by using several FPGA boards. Evaluation of the measurement capability using a far-infrared laser, which the ITER poloidal polarimeter will use, concluded that the measurement error will be reduced by a factor of nine.

  18. Development of real-time rotating waveplate Stokes polarimeter using multi-order retardation for ITER poloidal polarimeter.

    PubMed

    Imazawa, R; Kawano, Y; Ono, T; Itami, K

    2016-01-01

    The rotating waveplate Stokes polarimeter was developed for the ITER (International Thermonuclear Experimental Reactor) poloidal polarimeter. A generalized model of the rotating waveplate Stokes polarimeter and an algorithm suitable for real-time field-programmable gate array (FPGA) processing were proposed. Since the generalized model takes into account each component associated with the rotation of the waveplate, the Stokes parameters can be accurately measured even under non-ideal conditions such as non-uniformity of the waveplate retardation. Experiments using a He-Ne laser showed that the maximum error and the precision of the Stokes parameters were 3.5% and 1.2%, respectively. The rotation speed of the waveplate was 20 000 rpm, and the time resolution of the Stokes-parameter measurement was 3.3 ms. Software emulation showed that real-time measurement of the Stokes parameters with a time resolution of less than 10 ms is possible by using several FPGA boards. Evaluation of the measurement capability using a far-infrared laser, which the ITER poloidal polarimeter will use, concluded that the measurement error will be reduced by a factor of nine.
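
    The recovery of Stokes parameters from a rotating-waveplate polarimeter can be sketched for the idealized quarter-wave case using standard Mueller calculus and least squares; this is not the generalized multi-order, non-ideality-corrected model of the paper, and all numbers are illustrative.

        # Hedged sketch: recover Stokes parameters from a rotating-waveplate polarimeter
        # (idealized quarter-wave retarder followed by a fixed horizontal polarizer).
        import numpy as np

        def rot(theta):
            c, s = np.cos(2 * theta), np.sin(2 * theta)
            return np.array([[1, 0, 0, 0],
                             [0, c, s, 0],
                             [0, -s, c, 0],
                             [0, 0, 0, 1.0]])

        def retarder(theta, delta):
            """Linear retarder with retardance delta and fast axis at angle theta."""
            M0 = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0],
                           [0, 0, np.cos(delta), np.sin(delta)],
                           [0, 0, -np.sin(delta), np.cos(delta)]])
            return rot(-theta) @ M0 @ rot(theta)

        pol_h = 0.5 * np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

        delta = np.pi / 2                                   # quarter-wave retardance (assumed)
        angles = np.linspace(0, np.pi, 36, endpoint=False)  # waveplate orientations per half-turn
        W = np.array([(pol_h @ retarder(th, delta))[0] for th in angles])   # measurement matrix

        S_true = np.array([1.0, 0.3, -0.2, 0.5])            # illustrative input Stokes vector
        rng = np.random.default_rng(0)
        I_meas = W @ S_true + 1e-3 * rng.normal(size=len(angles))           # detector samples

        S_hat, *_ = np.linalg.lstsq(W, I_meas, rcond=None)
        print("recovered Stokes parameters:", np.round(S_hat, 3))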

  19. A new procedure of modal parameter estimation for high-speed digital image correlation

    NASA Astrophysics Data System (ADS)

    Huňady, Róbert; Hagara, Martin

    2017-09-01

    The paper deals with the use of 3D digital image correlation in determining modal parameters of mechanical systems. It is a non-contact optical method that uses precise, high-resolution digital cameras to measure full-field spatial displacements and strains of bodies. Most often this method is utilized for testing of components or determination of material properties of various specimens. In the case of using high-speed cameras for measurement, the correlation system is capable of capturing various dynamic behaviors, including vibration. This enables the potential use of the mentioned method in experimental modal analysis. For that purpose, the authors proposed a measuring chain for the correlation system Q-450 and developed a software application called DICMAN 3D, which allows the direct use of this system in the area of modal testing. The created application provides the post-processing of measured data and the estimation of modal parameters. It has its own graphical user interface, in which several algorithms for the determination of natural frequencies, mode shapes and damping of particular modes of vibration are implemented. The paper describes the basic principle of the new estimation procedure, which is crucial for post-processing. Since the FRF matrix resulting from the measurement is usually relatively large, the estimation of modal parameters directly from the FRF matrix may be time-consuming and may occupy a large part of computer memory. The procedure implemented in DICMAN 3D provides a significant reduction in memory requirements and computational time while achieving a high accuracy of modal parameters. Its computational efficiency is particularly evident when the FRF matrix consists of thousands of measurement DOFs. The functionality of the created software application is presented on a practical example in which the modal parameters of a composite plate excited by an impact hammer were determined. To verify the obtained results, a reference experiment was conducted in which the vibration responses were measured using conventional acceleration sensors. In both cases, MIMO analysis was performed.
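
    As a small illustration of one classical way to extract modal parameters from a frequency response function (peak picking with the half-power bandwidth method, not the algorithms implemented in DICMAN 3D):

        # Sketch: natural frequency and damping ratio from an FRF peak
        # via the half-power (-3 dB) bandwidth method.
        import numpy as np

        m, k, zeta = 1.0, (2 * np.pi * 40.0) ** 2, 0.02       # SDOF system: fn = 40 Hz, 2% damping
        c = 2 * zeta * np.sqrt(k * m)

        f = np.linspace(1, 100, 20000)
        w = 2 * np.pi * f
        H = 1.0 / (k - m * w**2 + 1j * c * w)                  # receptance FRF
        mag = np.abs(H)

        i_pk = np.argmax(mag)
        fn_est = f[i_pk]
        half_power = mag[i_pk] / np.sqrt(2.0)
        above = np.flatnonzero(mag >= half_power)              # indices inside the -3 dB band
        f1, f2 = f[above[0]], f[above[-1]]
        zeta_est = (f2 - f1) / (2.0 * fn_est)

        print(f"estimated fn = {fn_est:.2f} Hz, damping ratio = {zeta_est:.4f}")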

  20. PLAN-TA9-2443(U), Rev. B Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing Standard Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Geoffrey Wayne

    2016-03-16

    This document identifies scope and some general procedural steps for performing Remediated Nitrate Salt (RNS) Surrogate Formulation and Testing. This Test Plan describes the requirements, responsibilities, and process for preparing and testing a range of chemical surrogates intended to mimic the energetic response of waste created during processing of legacy nitrate salts. The surrogates developed are expected to bound the thermal and mechanical sensitivity of such waste, allowing for the development of process parameters required to minimize the risk to workers and the public when processing this waste. Such parameters will be based on the worst-case kinetic parameters as derived from APTAC measurements as well as the development of controls to mitigate sensitivities that may exist due to friction, impact, and spark. This Test Plan will define the scope and technical approach for activities that implement Quality Assurance requirements relevant to formulation and testing.

  1. A technique for correcting ERTS data for solar and atmospheric effects

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Peacock, K.; Shah, N. J.

    1974-01-01

    A technique is described by which ERTS investigators can obtain and utilize solar and atmospheric parameters to transform spacecraft radiance measurements to absolute target reflectance signatures. A radiant power measuring instrument (RPMI) and its use in determining atmospheric parameters needed for ground truth are discussed. The procedures used and the results achieved in processing ERTS CCTs to correct the imagery for atmospheric effects are reviewed. Examples are given which demonstrate the nature and magnitude of atmospheric effects on computer classification programs.
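
    The radiance-to-reflectance step can be sketched with the standard relation ρ = π·L·d²/(ESUN·cos θs), using a simple path-radiance subtraction in place of the full solar/atmospheric correction; all calibration coefficients below are illustrative, not ERTS or RPMI values.

        # Sketch: digital number -> at-sensor radiance -> apparent reflectance,
        # with a simple path-radiance (haze) subtraction standing in for the
        # full solar/atmospheric correction. All coefficients are illustrative.
        import numpy as np

        dn = np.array([23, 48, 80, 120], dtype=float)     # raw sensor counts
        gain, offset = 0.8, 1.2                            # radiance = gain*DN + offset, W m-2 sr-1 um-1
        L = gain * dn + offset

        L_path = 4.0                                       # estimated path radiance (atmospheric term)
        esun = 1550.0                                      # solar exoatmospheric irradiance, W m-2 um-1
        d = 1.01                                           # Earth-Sun distance, AU
        theta_s = np.deg2rad(35.0)                         # solar zenith angle

        rho = np.pi * (L - L_path) * d**2 / (esun * np.cos(theta_s))
        print("target reflectance estimates:", np.round(rho, 3))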

  2. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  3. Gravitational orientation of the orbital complex, Salyut-6--Soyuz

    NASA Technical Reports Server (NTRS)

    Grecho, G. M.; Sarychev, V. A.; Legostayev, V. P.; Sazonov, V. V.; Gansvind, I. N.

    1983-01-01

    A simple mathematical model is proposed for the Salyut-6-Soyuz orbital complex motion with respect to the center of mass under the one-axis gravity-gradient orientation regime. This model was used for processing the measurements of the orbital complex motion parameters when the above orientation regime was implemented. Some actual satellite motions are simulated and the satellite's aerodynamic parameters are determined. Estimates are obtained for the accuracy of the measurements as well as that of the mathematical model.

  4. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
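
    The screening idea behind the procedure can be sketched as follows: sample the parameters over a predefined region, fit a first-order response-surface (regression) model to the corresponding model outputs, and rank the parameters by the magnitude of their scaled coefficients. Here simulate() is a toy stand-in for an ASM No. 1 run, and the parameter names and ranges are illustrative.

        # Hedged sketch of response-surface-based parameter screening.
        # simulate() is a hypothetical stand-in for a full activated-sludge model run.
        import numpy as np

        rng = np.random.default_rng(7)
        names = ["mu_A", "K_NH", "b_A", "eta_g", "K_O"]        # illustrative ASM-style parameters
        lo = np.array([0.3, 0.5, 0.05, 0.5, 0.2])
        hi = np.array([1.0, 2.0, 0.20, 1.0, 1.0])

        def simulate(p):
            """Toy surrogate for an effluent-quality output; replace with the real model."""
            return 10.0 / p[0] + 2.0 * p[1] - 5.0 * p[2] + 0.1 * p[4] + rng.normal(0, 0.05)

        theta = lo + (hi - lo) * rng.random((200, 5))           # sampled region of parameter space
        y = np.array([simulate(p) for p in theta])

        # First-order response surface: y ~ b0 + sum_i b_i * x_i, with x_i scaled to [0, 1]
        X = np.column_stack([np.ones(len(theta)), (theta - lo) / (hi - lo)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)

        order = np.argsort(-np.abs(coef[1:]))                   # most influential parameters first
        for i in order:
            print(f"{names[i]:>6s}: effect over its range ~ {coef[1 + i]:+.2f}")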

  5. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to perform lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. The lunar terrain can then be reconstructed by photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied, and the observation program and the specific solution methods for the installation parameters are introduced. The accuracy of the parameter solution is analyzed using observations obtained in the PCAM scientific validation experiment, which is used to test the fidelity of the PCAM detection process, the ground data processing methods, product quality, and so on. The analysis shows that the accuracy of the installation parameters affects the positional accuracy of corresponding image points in PCAM stereo images by less than 1 pixel, so the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  6. Implementing an Automated Antenna Measurement System

    NASA Technical Reports Server (NTRS)

    Valerio, Matthew D.; Romanofsky, Robert R.; VanKeuls, Fred W.

    2003-01-01

    We developed an automated measurement system using a PC running a LabView application, a Velmex BiSlide X-Y positioner, and an HP8510C network analyzer. The system provides high positioning accuracy and requires no user supervision. After the user inputs the necessary parameters into the LabView application, LabView controls the motor positioning and performs the data acquisition. Current parameters and measured data are shown on the PC display in two 3-D graphs and updated after every data point is collected. The final output is a formatted data file for later processing.

  7. Identification of open quantum systems from observable time traces

    DOE PAGES

    Zhang, Jun; Sarovar, Mohan

    2015-05-27

    Estimating the parameters that dictate the dynamics of a quantum system is an important task for quantum information processing and quantum metrology, as well as fundamental physics. In our paper we develop a method for parameter estimation for Markovian open quantum systems using a temporal record of measurements on the system. Furthermore, the method is based on system realization theory and is a generalization of our previous work on identification of Hamiltonian parameters.

  8. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
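
    A minimal sketch of a single perturbed-observation ensemble-smoother update (one iteration rather than the multiple-data-assimilation form, and with a toy forward model in place of the MIKE SHE/MIKE 11 simulation):

        # Sketch: one perturbed-observation ensemble-smoother update for parameter estimation.
        # forward() is a toy stand-in for the distributed hydrological model.
        import numpy as np

        rng = np.random.default_rng(11)
        n_ens, n_par, n_obs = 50, 3, 8

        def forward(theta):
            """Hypothetical forward model mapping 3 parameters to 8 observations."""
            t = np.linspace(1, 8, n_obs)
            return theta[0] * np.exp(-theta[1] * t) + theta[2]

        theta_true = np.array([2.0, 0.3, 0.5])
        R = 0.02 ** 2 * np.eye(n_obs)                               # observation-error covariance
        d = forward(theta_true) + 0.02 * rng.normal(size=n_obs)     # field "measurements"

        # Prior ensemble (e.g. from Latin Hypercube or simple uniform sampling)
        theta = np.column_stack([rng.uniform(0.5, 4.0, n_ens),
                                 rng.uniform(0.05, 1.0, n_ens),
                                 rng.uniform(0.0, 1.0, n_ens)])
        Y = np.array([forward(p) for p in theta])                    # predicted observations

        # Ensemble (cross-)covariances
        A_th = theta - theta.mean(axis=0)
        A_y = Y - Y.mean(axis=0)
        C_ty = A_th.T @ A_y / (n_ens - 1)
        C_yy = A_y.T @ A_y / (n_ens - 1)

        # Kalman-type update of every ensemble member with perturbed observations
        D = d + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens)
        theta_post = theta + (D - Y) @ np.linalg.solve(C_yy + R, C_ty.T)

        print("prior means    :", np.round(theta.mean(axis=0), 3))
        print("posterior means:", np.round(theta_post.mean(axis=0), 3))
        print("true parameters:", theta_true)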

  9. Estimation of teleported and gained parameters in a non-inertial frame

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2017-04-01

    Quantum Fisher information is introduced as a measure of estimating the teleported information between two users, one of which is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the gained parameters during the teleportation process. The estimation degree of these parameters depends on the value of the acceleration, the used single mode approximation (within/beyond), the type of encoded information (classic/quantum) in the teleported state, and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.

  10. A Module Experimental Process System Development Unit (MEPSDU)

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The purpose of this program is to demonstrate the technical readiness of a cost-effective process sequence that has the potential for the production of flat-plate photovoltaic modules which meet the 1986 price goal of $0.70 or less per peak watt. Program efforts included: a preliminary design review, preliminary cell fabrication using the proposed process sequence, verification of sandblasting back cleanup, a study of resist parameters, evaluation of the pull strength of the proposed metallization, measurement of the contact resistance of electroless Ni contacts, optimization of process parameters, design of the MEPSDU module, identification and testing of insulator tapes, development of a lamination process sequence, identification of candidate equipment vendors along with discussions, demonstrations, and visits, and evaluation of proposals for a tabbing and stringing machine.

  11. Electrochemical Behavior of Sulfur in Aqueous Alkaline Solutions

    NASA Astrophysics Data System (ADS)

    Mamyrbekova, Aigul; Mamitova, A. D.; Mamyrbekova, Aizhan

    2018-03-01

    The kinetics and mechanism of the electrode oxidation-reduction of sulfur on an electrically conductive sulfur-graphite electrode in an alkaline solution were studied by the potentiodynamic method. To examine the mechanism of the electrode processes occurring during AC polarization on a sulfur-graphite electrode, the cyclic polarization curves in both directions and the anodic polarization curves were recorded. The kinetic parameters, namely the charge transfer coefficients (α), the diffusion coefficients (D), the heterogeneous rate constants of the electrode process (ks), and the effective activation energies of the process (Ea), were calculated from the results of the polarization measurements. An analysis of the results and of the calculated kinetic parameters of the electrode processes showed that the discharge ionization of sulfur in alkaline solutions occurs as a sequence of two stages and is a quasireversible process.

  12. Active vs. Passive Television Viewing: A Model of the Development of Television Information Processing by Children.

    ERIC Educational Resources Information Center

    Wright, John C.; And Others

    A conceptual model of how children process televised information was developed with the goal of identifying those parameters of the process that are both measurable and manipulable in research settings. The model presented accommodates the nature of information processing both by the child and by the presentation by the medium. Presentation is…

  13. Process parameters in the manufacture of ceramic ZnO nanofibers made by electrospinning

    NASA Astrophysics Data System (ADS)

    Nonato, Renato C.; Morales, Ana R.; Rocha, Mateus C.; Nista, Silvia V. G.; Mei, Lucia H. I.; Bonse, Baltus C.

    2017-01-01

    Zinc oxide (ZnO) nanofibers were prepared by electrospinning under different conditions using a solution of poly(vinyl alcohol) and zinc acetate as precursor. A 2^3 factorial design was used to study the influence of the electrospinning process parameters (collector distance, flow rate and voltage), and a 2^2 factorial design was used to study the influence of the calcination process (time and temperature). SEM images were taken to analyze the fiber morphology before and after the calcination process and were used to measure the nanofiber diameters. X-ray diffraction was performed to confirm the total conversion of the precursor to ZnO and the elimination of the polymeric carrier.

  14. A simulation to study the feasibility of improving the temporal resolution of LAGEOS geodynamic solutions by using a sequential process noise filter

    NASA Technical Reports Server (NTRS)

    Hartman, Brian Davis

    1995-01-01

    A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time, which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions is detailed, and conclusions are drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
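
    The benefit of process noise can be illustrated with a one-dimensional example: a filter that models a drifting parameter as a constant lags the truth, while the same filter with a random-walk process-noise term tracks it. This is only a toy illustration of the principle, not the LAGEOS orbit-determination filter; the drift, noise levels, and variances below are invented for the example.

    ```python
    import numpy as np

    def scalar_kf(z, q, r, x0=0.0, p0=1.0):
        """Scalar random-walk Kalman filter: x_k = x_{k-1} + w_k, z_k = x_k + v_k.

        q : process-noise variance (q = 0 recovers the constant-parameter filter)
        r : measurement-noise variance
        """
        x, p = x0, p0
        out = []
        for zk in z:
            p = p + q                      # time update (random-walk dynamics)
            k = p / (p + r)                # Kalman gain
            x = x + k * (zk - x)           # measurement update
            p = (1.0 - k) * p
            out.append(x)
        return np.array(out)

    # Hypothetical illustration: a slowly drifting parameter observed with noise
    rng = np.random.default_rng(0)
    t = np.arange(500)
    truth = 1.0 + 0.002 * t                    # drift stands in for temporal variation
    z = truth + rng.normal(0.0, 0.5, t.size)

    const_fit = scalar_kf(z, q=0.0, r=0.25)    # treats the parameter as constant
    pnoise_fit = scalar_kf(z, q=1e-4, r=0.25)  # process noise lets the estimate follow the drift
    print(abs(const_fit[-1] - truth[-1]), abs(pnoise_fit[-1] - truth[-1]))
    ```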

  15. Loss-resistant unambiguous phase measurement

    NASA Astrophysics Data System (ADS)

    Dinani, Hossein T.; Berry, Dominic W.

    2014-08-01

    Entangled multiphoton states have the potential to provide improved measurement accuracy, but are sensitive to photon loss. It is possible to calculate ideal loss-resistant states that maximize the Fisher information, but it is unclear how these could be experimentally generated. Here we propose a set of states that can be obtained by processing the output from parametric down-conversion. Although these states are not optimal, they provide performance very close to that of optimal states for a range of parameters. Moreover, we show how to use sequences of such states in order to obtain an unambiguous phase measurement that beats the standard quantum limit. We consider the optimization of parameters in order to minimize the final phase variance, and find that the optimum parameters are different from those that maximize the Fisher information.

  16. A Design of Experiments Approach Defining the Relationships Between Processing and Microstructure for Ti-6Al-4V

    NASA Technical Reports Server (NTRS)

    Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.

    2004-01-01

    A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam freeform fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks is discussed.

  17. Experimental Research and Mathematical Modeling of Parameters Affecting Cutting Force and Surface Roughness in the CNC Turning Process

    NASA Astrophysics Data System (ADS)

    Zeqiri, F.; Alkan, M.; Kaya, B.; Toros, S.

    2018-01-01

    In this paper, the effects of cutting parameters on cutting forces and surface roughness are determined based on the Taguchi experimental design method. A Taguchi L9 orthogonal array is used to investigate the effects of the machining parameters. Optimal cutting conditions are determined using the signal-to-noise (S/N) ratio, which is calculated from the average surface roughness and cutting force. Using the results of the analysis, the effects of the parameters on both average surface roughness and cutting forces are evaluated in Minitab 17 using the ANOVA method. The material investigated is Inconel 625, considered in two cases: with heat treatment and without heat treatment. The predicted values and the measured values are very close to each other. A confirmation test showed that the Taguchi method was very successful in the optimization of machining parameters with respect to surface roughness and cutting forces in the CNC turning process.
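
    For reference, the Taguchi S/N ratios referred to above reduce to one-line formulas; the snippet below shows the standard smaller-is-better and larger-is-better forms, with the replicate values being purely hypothetical.

    ```python
    import numpy as np

    def sn_smaller_is_better(y):
        """Taguchi S/N ratio for a response to be minimized (e.g., roughness, force)."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    def sn_larger_is_better(y):
        """Taguchi S/N ratio for a response to be maximized."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y ** 2))

    # Hypothetical surface-roughness replicates (um) for one L9 run
    print(sn_smaller_is_better([0.82, 0.79, 0.85]))
    ```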

  18. Four dimensional observations of clouds from geosynchronous orbit using stereo display and measurement techniques on an interactive information processing system

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Desjardins, M.; Shenk, W. E.

    1979-01-01

    Simultaneous Geosynchronous Operational Environmental Satellite (GOES) 1 km resolution visible image pairs can provide quantitative three dimensional measurements of clouds. These data have great potential for severe storms research and as a basic parameter measurement source for other areas of meteorology (e.g. climate). These stereo cloud height measurements are not subject to the errors and ambiguities caused by unknown cloud emissivity and temperature profiles that are associated with infrared techniques. This effort describes the display and measurement of stereo data using digital processing techniques.

  19. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) the combined use of CSS and PCC can be more awkward because sensitivity and interdependence are considered separately and (2) the identifiability statistic requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
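
    The CSS and PCC statistics mentioned above can be computed directly from a weighted sensitivity (Jacobian) matrix. The sketch below follows the standard definitions (scaled sensitivities and the Gauss-Newton covariance approximation); it is a generic illustration, not the PEST identifiability statistic or the RZWQM analysis itself.

    ```python
    import numpy as np

    def css_and_pcc(jacobian, params, weights=None):
        """Composite scaled sensitivities (CSS) and parameter correlation coefficients (PCC).

        jacobian : (n_obs, n_par) derivatives of simulated values w.r.t. parameters
        params   : (n_par,) parameter values used for scaling
        weights  : (n_obs,) observation weights (1/variance); defaults to ones
        """
        j = np.asarray(jacobian, dtype=float)
        b = np.asarray(params, dtype=float)
        n_obs, n_par = j.shape
        w = np.ones(n_obs) if weights is None else np.asarray(weights, dtype=float)
        dss = j * b[None, :] * np.sqrt(w)[:, None]   # dimensionless scaled sensitivities
        css = np.sqrt((dss ** 2).sum(axis=0) / n_obs)
        cov = np.linalg.inv(dss.T @ dss)             # Gauss-Newton covariance approximation
        d = np.sqrt(np.diag(cov))
        pcc = cov / np.outer(d, d)                   # off-diagonal entries are the PCC values
        return css, pcc
    ```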

  20. Fluorescence lifetime as a new parameter in analytical cytology measurements

    NASA Astrophysics Data System (ADS)

    Steinkamp, John A.; Deka, Chiranjit; Lehnert, Bruce E.; Crissman, Harry A.

    1996-05-01

    A phase-sensitive flow cytometer has been developed to quantify fluorescence decay lifetimes on fluorochrome-labeled cells/particles. This instrument combines flow cytometry (FCM) and frequency-domain fluorescence spectroscopy measurement principles to provide unique capabilities for making phase-resolved lifetime measurements, while preserving conventional FCM capabilities. Cells are analyzed as they intersect a high-frequency, intensity-modulated (sine wave) laser excitation beam. Fluorescence signals are processed by conventional and phase-sensitive signal detection electronics and displayed as frequency distribution histograms. In this study we describe results of fluorescence intensity and lifetime measurements on fluorescently labeled particles, cells, and chromosomes. Examples of measurements on intrinsic cellular autofluorescence, cells labeled with immunofluorescence markers for cell-surface antigens, mitochondria stains, and on cellular DNA and protein binding fluorochromes will be presented to illustrate unique differences in measured lifetimes and changes caused by fluorescence quenching. This innovative technology will be used to probe fluorochrome/molecular interactions in the microenvironment of cells/chromosomes as a new parameter and thus expand the researchers' understanding of biochemical processes and structural features at the cellular and molecular level.

  1. CO2 fluxes and ecosystem dynamics at five European treeless peatlands - merging data and process oriented modelling

    NASA Astrophysics Data System (ADS)

    Metzger, C.; Jansson, P.-E.; Lohila, A.; Aurela, M.; Eickenscheidt, T.; Belelli-Marchesini, L.; Dinsmore, K. J.; Drewer, J.; van Huissteden, J.; Drösler, M.

    2014-06-01

    The carbon dioxide (CO2) exchange of five different peatland systems across Europe with a wide gradient in land-use intensity, water table depth, soil fertility and climate was simulated with the process oriented CoupModel. The aim of the study was to find out to what extent CO2 fluxes measured at different sites can be explained by common processes and parameters implemented in the model. The CoupModel was calibrated to fit measured CO2 fluxes, soil temperature, snow depth and leaf area index (LAI), and the resulting differences in model parameters were analysed. Finding site independent model parameters would mean that differences in the measured fluxes could be explained solely by model input data: water table, meteorological data, management and soil inventory data. The model, utilizing a site independent configuration for most of the parameters, captured seasonal variability in the major fluxes well. Parameters that differed between sites included the rate of soil organic decomposition, photosynthetic efficiency, and regulation of the mobile carbon (C) pool from senescence to shooting in the next year. The largest difference between sites was the rate coefficient for heterotrophic respiration. Setting it to a common value would lead to underestimation of mean total respiration by a factor of 2.8 up to an overestimation by a factor of 4. Despite testing a wide range of different responses to soil water and temperature, heterotrophic respiration rates were consistently lowest on formerly drained sites and highest on the managed sites. Substrate decomposability, pH and vegetation characteristics are possible explanations for the differences in decomposition rates. Applying common parameter values for the timing of plant shooting and senescence, and a minimum temperature for photosynthesis, had only a minor effect on model performance, even though the gradient in site latitude ranged from 48° N (southern Germany) to 68° N (northern Finland). This was also true for common parameters defining the moisture and temperature response for decomposition. CoupModel is able to describe measured fluxes at different sites or under different conditions, provided that the rate of soil organic decomposition, photosynthetic efficiency, and the regulation of the mobile carbon (C) pool are estimated from available information on specific soil conditions, vegetation and management of the ecosystems.

  2. Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm to the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
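
    A stripped-down version of the period search can be written as a weighted least-squares fit of a single sinusoid over a dense period grid. The sketch below omits the Gaussian-process term and the nuisance-parameter prior of the full semi-parametric model, and the simulated light curve is hypothetical.

    ```python
    import numpy as np

    def estimate_period(t, y, yerr, periods):
        """Pick the trial period whose sinusoid-plus-constant fit minimizes the
        weighted residual sum of squares (the GP term of the full model is omitted)."""
        w = 1.0 / np.asarray(yerr, dtype=float) ** 2
        best_p, best_rss = None, np.inf
        for p in periods:
            phase = 2.0 * np.pi * t / p
            basis = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)])
            a = basis * w[:, None]
            coef = np.linalg.solve(a.T @ basis, a.T @ y)   # weighted least squares
            rss = np.sum(w * (y - basis @ coef) ** 2)
            if rss < best_rss:
                best_p, best_rss = p, rss
        return best_p

    # Hypothetical sparsely sampled light curve with a 300-day period
    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 2000.0, 60))
    y = 15.0 + 0.8 * np.sin(2.0 * np.pi * t / 300.0) + rng.normal(0.0, 0.1, t.size)
    print(estimate_period(t, y, np.full(t.size, 0.1), np.linspace(100.0, 1000.0, 2000)))
    ```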

  3. 40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...

  4. 40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...

  5. 40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...

  6. 40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...

  7. 40 CFR 63.1412 - Continuous process vent applicability assessment procedures and methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... values, and engineering assessment control applicability assessment requirements are to be determined... by using the engineering assessment procedures in paragraph (k) of this section. (f) Volumetric flow...

  8. Assessment of input uncertainty by seasonally categorized latent variables using SWAT

    USDA-ARS?s Scientific Manuscript database

    Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...

  9. Using continuous underway isotope measurements to map water residence time in hydrodynamically complex tidal environments

    USGS Publications Warehouse

    Downing, Bryan D.; Bergamaschi, Brian; Kendall, Carol; Kraus, Tamara; Dennis, Kate J.; Carter, Jeffery A.; von Dessonneck, Travis

    2016-01-01

    Stable isotopes present in water (δ2H, δ18O) have been used extensively to evaluate hydrological processes on the basis of parameters such as evaporation, precipitation, mixing, and residence time. In estuarine aquatic habitats, residence time (τ) is a major driver of biogeochemical processes, affecting trophic subsidies and conditions in fish-spawning habitats. But τ is highly variable in estuaries, owing to constant changes in river inflows, tides, wind, and water height, all of which combine to affect τ in unpredictable ways. It recently became feasible to measure δ2H and δ18O continuously, at a high sampling frequency (1 Hz), using diffusion sample introduction into a cavity ring-down spectrometer. To better understand the relationship of τ to biogeochemical processes in a dynamic estuarine system, we continuously measured δ2H and δ18O, nitrate and water quality parameters, on board a small, high-speed boat (5 to >10 m s–1) fitted with a hull-mounted underwater intake. We then calculated τ as is classically done using the isotopic signals of evaporation. The result was high-resolution (∼10 m) maps of residence time, nitrate, and other parameters that showed strong spatial gradients corresponding to geomorphic attributes of the different channels in the area. The mean measured value of τ was 30.5 d, with a range of 0–50 d. We used the measured spatial gradients in both τ and nitrate to calculate whole-ecosystem uptake rates, and the values ranged from 0.006 to 0.039 d–1. The capability to measure residence time over single tidal cycles in estuaries will be useful for evaluating and further understanding drivers of phytoplankton abundance, resolving differences attributable to mixing and water sources, explicitly calculating biogeochemical rates, and exploring the complex linkages among time-dependent biogeochemical processes in hydrodynamically complex environments such as estuaries.

  10. Measuring systems of hard to get objects: problems with analysis of measurement results

    NASA Astrophysics Data System (ADS)

    Gilewska, Grazyna

    2005-02-01

    Limited access to the metrological features of measured objects is a problem in many measurements, especially for biological objects, whose parameters are often determined indirectly. When access to the measured object is very limited, random components dominate the measurement results. Every measuring process is subject to conditions that restrict how it can be improved (for example, increasing the number of repetitions to reduce the random limiting error); these may be time or cost constraints or, in the case of biological objects, small sample volumes, the influence of the measuring tool and observer on the object, or fatigue effects in the patient. Taking these difficulties into consideration, the author developed and verified the practical application of methods for reducing outlying observations, followed by methods for eliminating measured data with excess variance, in order to decrease the mean standard deviation of the measured data given a limited amount of data and an accepted confidence level. The methods were verified on measurement results of knee-joint space width obtained from radiographs. The measurements were carried out indirectly on digital images of the radiographs. The results confirmed the validity of the proposed methodology and measurement procedures. Such a methodology is especially important when standard approaches do not bring the expected results.
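
    The abstract does not spell out the rejection criterion, so the sketch below uses a generic iterative standardized-residual test as a stand-in for the outlying-observation reduction; the joint-space readings are invented.

    ```python
    import numpy as np

    def trim_outliers(x, z_max=2.5, max_iter=10):
        """Iteratively discard values whose standardized residual exceeds z_max."""
        x = np.asarray(x, dtype=float)
        for _ in range(max_iter):
            mean, std = x.mean(), x.std(ddof=1)
            keep = np.abs(x - mean) <= z_max * std
            if keep.all():
                break
            x = x[keep]
        return x

    # Hypothetical joint-space width readings (mm) with one gross error
    readings = np.array([4.1, 4.2, 4.0, 4.3, 4.1, 6.9, 4.2])
    trimmed = trim_outliers(readings)
    print(readings.std(ddof=1), trimmed.std(ddof=1))   # the spread shrinks after trimming
    ```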

  11. Computer-Assisted Sperm Analysis (CASA) parameters and their evolution during preparation as predictors of pregnancy in intrauterine insemination with frozen-thawed donor semen cycles.

    PubMed

    Fréour, Thomas; Jean, Miguel; Mirallié, Sophie; Dubourdieu, Sophie; Barrière, Paul

    2010-04-01

    To study the potential of CASA parameters in frozen-thawed donor semen, before and after preparation on a silica gradient, as predictors of pregnancy in IUI with donor semen cycles. CASA parameters were measured in thawed donor semen before and after preparation on a silica gradient in 132 couples undergoing 168 IUI cycles with donor semen. The evolution of these parameters throughout this process was calculated. The relationship with cycle outcome was then studied. The clinical pregnancy rate was 18.4% per cycle. CASA parameters of donor semen before or after preparation were not significantly different between the pregnancy and failure groups. However, the amplitude of lateral head displacement (ALH) of spermatozoa improved in all cycles where pregnancy occurred, thus predicting pregnancy with a sensitivity of 100% and a specificity of 20%. Even if CASA parameters do not seem to predict pregnancy in IUI with donor semen cycles, their evolution during the preparation process should be evaluated, especially ALH. However, the link between ALH improvement during the preparation process and pregnancy remains to be explored. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  12. Characterization and Effects of Fiber Pull-Outs in Hole Quality of Carbon Fiber Reinforced Plastics Composite.

    PubMed

    Alizadeh Ashrafi, Sina; Miller, Peter W; Wandro, Kevin M; Kim, Dave

    2016-10-13

    Hole quality plays a crucial role in the production of close-tolerance holes utilized in aircraft assembly. Through drilling experiments of carbon fiber-reinforced plastic composites (CFRP), this study investigates the impact of varying drilling feed and speed conditions on fiber pull-out geometries and resulting hole quality parameters. For this study, hole quality parameters include hole size variance, hole roundness, and surface roughness. Fiber pull-out geometries are quantified by using scanning electron microscope (SEM) images of the mechanically-sectioned CFRP-machined holes, to measure pull-out length and depth. Fiber pull-out geometries and the hole quality parameter results are dependent on the drilling feed and spindle speed condition, which determines the forces and undeformed chip thickness during the process. Fiber pull-out geometries influence surface roughness parameters from a surface profilometer, while their effect on other hole quality parameters obtained from a coordinate measuring machine is minimal.

  13. Interdiffusion of Polycarbonate in Fused Deposition Modeling Welds

    NASA Astrophysics Data System (ADS)

    Seppala, Jonathan; Forster, Aaron; Satija, Sushil; Jones, Ronald; Migler, Kalman

    2015-03-01

    Fused deposition modeling (FDM), a now common and inexpensive additive manufacturing method, produces 3D objects by extruding molten polymer layer by layer. Compared to traditional polymer processing methods (injection, vacuum, and blow molding), FDM parts have inferior mechanical properties, surface finish, and dimensional stability. From a polymer processing point of view, the polymer-polymer weld between each layer limits the mechanical strength of the final part. Unlike in traditional processing methods, where the polymer is uniformly melted and entangled, FDM welds are typically weaker due to the short time available for polymer interdiffusion and entanglement. To emulate the FDM process, thin-film bilayers of polycarbonate/d-polycarbonate were annealed using scaled times and temperatures accessible in FDM. Shift factors from time-temperature superposition, measured by small amplitude oscillatory shear, were used to calculate reasonable annealing times (minutes) at temperatures below the actual extrusion temperature. The extent of interdiffusion was then measured using neutron reflectivity. Analogous specimens were prepared to characterize the mechanical properties. FDM build parameters were then related to interdiffusion between welded layers and mechanical properties. Understanding the relationship between build parameters, interdiffusion, and mechanical strength will allow FDM users to print stronger parts in an intelligent manner rather than relying on trial-and-error and build parameter lock-in.
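
    The time-temperature scaling used to pick annealing times can be sketched with the WLF equation. The constants, reference temperature, and weld times below are illustrative assumptions, not the values measured for the polycarbonate in the study.

    ```python
    import numpy as np

    def wlf_shift(temp_c, ref_c, c1=8.86, c2=101.6):
        """WLF horizontal shift factor a_T relative to ref_c; c1/c2 are the commonly
        quoted 'universal' constants, used here only for illustration."""
        dt = temp_c - ref_c
        return 10.0 ** (-c1 * dt / (c2 + dt))

    def equivalent_time(t_high, temp_high, temp_low, ref_c=200.0):
        """Annealing time at temp_low giving the same extent of interdiffusion as
        t_high at temp_high (equal time / relaxation-time ratio)."""
        return t_high * wlf_shift(temp_low, ref_c) / wlf_shift(temp_high, ref_c)

    # Hypothetical numbers: 1 s of weld contact at 250 C emulated at 190 C
    print(equivalent_time(1.0, 250.0, 190.0))
    ```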

  14. Detection of cylinder unbalance from Bayesian inference combining cylinder pressure and vibration block measurement in a Diesel engine

    NASA Astrophysics Data System (ADS)

    Nguyen, Emmanuel; Antoni, Jerome; Grondin, Olivier

    2009-12-01

    In the automotive industry, the necessary reduction of pollutant emissions for new Diesel engines requires the control of combustion events. This control is efficient provided that combustion parameters such as combustion occurrence and combustion energy are relevant. Combustion parameters are traditionally measured with cylinder pressure sensors. However, this kind of sensor is expensive and has a limited lifetime. Thus this paper proposes to use only one cylinder pressure sensor on a multi-cylinder engine and to extract the combustion parameters of the other cylinders from low-cost knock sensors. Knock sensors measure the vibration circulating in the engine block, hence they do not only contain information on the combustion processes but are also contaminated by other mechanical noise that corrupts the signal. The question is how to combine the information coming from one cylinder pressure sensor and the knock sensors to obtain the most relevant combustion parameters in all engine cylinders. In this paper, the issue is addressed through the Bayesian inference formalism. In the cylinder where a cylinder pressure sensor is mounted, combustion parameters will be measured directly. In the other cylinders, they will be measured indirectly from Bayesian inference. Experimental results obtained on a four-cylinder Diesel engine demonstrate the effectiveness of the proposed algorithm toward that purpose.

  15. Analysis and modeling of wafer-level process variability in 28 nm FD-SOI using split C-V measurements

    NASA Astrophysics Data System (ADS)

    Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard

    2018-07-01

    This work details the analysis of wafer-level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on-wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.

  16. Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR

    NASA Astrophysics Data System (ADS)

    Ma, Yongjun

    The fermentation process is very complex and non-linear, and many parameters are not easy to measure directly online, so soft sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft sensor modeling of the fed-batch fermentation process. v-SVR is a novel type of learning machine. It can control the fitting accuracy and prediction error by adjusting the parameter v. An online training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and good generalization with an appropriate v.
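
    A minimal soft-sensor sketch along these lines can be built with scikit-learn's NuSVR, assuming some easily measured process variables and a hard-to-measure target; the synthetic fermentation data and hyperparameters below are placeholders, and the paper's online training algorithm is not reproduced.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import NuSVR

    # Hypothetical fed-batch data: easily measured inputs (temperature, pH,
    # dissolved oxygen, feed rate) against a hard-to-measure target (biomass).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.3 * X[:, 1] * X[:, 3] + rng.normal(0.0, 0.1, 200)

    model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
    model.fit(X[:150], y[:150])           # train on early batches
    print(model.score(X[150:], y[150:]))  # R^2 on held-out batches
    ```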

  17. Normalized Polarization Ratios for the Analysis of Cell Polarity

    PubMed Central

    Shimoni, Raz; Pham, Kim; Yassin, Mohammed; Ludford-Menting, Mandy J.; Gu, Min; Russell, Sarah M.

    2014-01-01

    The quantification and analysis of molecular localization in living cells is increasingly important for elucidating biological pathways, and new methods are rapidly emerging. The quantification of cell polarity has generated much interest recently, and ratiometric analysis of fluorescence microscopy images provides one means to quantify cell polarity. However, detection of fluorescence, and the ratiometric measurement, is likely to be sensitive to acquisition settings and image processing parameters. Using imaging of EGFP-expressing cells and computer simulations of variations in fluorescence ratios, we characterized the dependence of ratiometric measurements on processing parameters. This analysis showed that image settings alter polarization measurements; and that clustered localization is more susceptible to artifacts than homogeneous localization. To correct for such inconsistencies, we developed and validated a method for choosing the most appropriate analysis settings, and for incorporating internal controls to ensure fidelity of polarity measurements. This approach is applicable to testing polarity in all cells where the axis of polarity is known. PMID:24963926

  18. Strain localization parameters of AlCu4MgSi processed by high-energy electron beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunev, A. G., E-mail: agl@ispms.ru; Nadezhkin, M. V., E-mail: mvn@ispms.ru; National Research Tomsk Polytechnic University, Tomsk, 634050

    2015-10-27

    The influence of electron beam surface treatment of AlCu4MgSi on the strain localization parameters and on the critical strain value of the Portevin–Le Chatelier effect has been considered. The strain localization parameters were measured using speckle imaging of specimens subjected to constant strain rate uniaxial tension at room temperature. The impact of the surface treatment on the Portevin–Le Chatelier effect has been investigated.

  19. AGARD Flight Test Instrumentation Series. Volume 14. The Analysis of Random Data

    DTIC Science & Technology

    1981-11-01

    obtained at arbitrary times during a number of flights. No constraints have been placed upon the controlling parameters, so that the process is non- ... "noisy" environment controlling a non-linear system (the aircraft) using a redundant net of control parameters ... when aircraft were flown manually with ... structure. Case 2. Non-Stationary Measurements. When the RMS value of a random signal varies with parameters which cannot be controlled, then the method ...

  20. Basic research on design analysis methods for rotorcraft vibrations

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1991-01-01

    The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.

  1. Benchmarking of Touschek Beam Lifetime Calculations for the Advanced Photon Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, A.; Yang, B.

    2017-06-25

    Particle loss from Touschek scattering is one of the most significant issues faced by present and future synchrotron light source storage rings. For example, the predicted, Touschek-dominated beam lifetime for the Advanced Photon Source (APS) Upgrade lattice in 48-bunch, 200-mA timing mode is only ~ 2 h. In order to understand the reliability of the predicted lifetime, a series of measurements with various beam parameters was performed on the present APS storage ring. This paper first describes the entire process of beam lifetime measurement, then compares the measured lifetime with the one calculated by applying the measured beam parameters. The results show very good agreement.

  2. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  3. Estimation of adsorption isotherm and mass transfer parameters in protein chromatography using artificial neural networks.

    PubMed

    Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen

    2017-03-03

    Mechanistic modeling has been repeatedly and successfully applied in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve-fitting, or the combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches need to be repeated. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability for parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
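
    The calibration idea (simulate chromatograms for sampled parameter sets, then train a network to invert them) can be sketched as follows. The Gaussian-peak forward model is a deliberately crude placeholder for the TDM/SDM simulations, and the network size is arbitrary.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)

    def simulate_chromatogram(position, width):
        """Placeholder forward model: one Gaussian peak whose position and width
        stand in for adsorption and mass-transfer parameters."""
        return np.exp(-0.5 * ((t - position) / width) ** 2)

    # In-silico screening: sample parameter combinations and simulate chromatograms
    params = np.column_stack([rng.uniform(0.2, 0.8, 2000), rng.uniform(0.02, 0.1, 2000)])
    chroms = np.array([simulate_chromatogram(*p) for p in params])

    # Train the ANN to map a chromatogram back to the parameter set
    ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    ann.fit(chroms[:1800], params[:1800])
    print(np.abs(ann.predict(chroms[1800:]) - params[1800:]).mean(axis=0))
    ```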

  4. Investigation of uncertainty in CO 2 reservoir models: A sensitivity analysis of relative permeability parameter values

    DOE PAGES

    Yoshida, Nozomu; Levine, Jonathan S.; Stauffer, Philip H.

    2016-03-22

    Numerical reservoir models of CO2 injection in saline formations rely on parameterization of laboratory-measured pore-scale processes. Here, we have performed a parameter sensitivity study and Monte Carlo simulations to determine the normalized change in total CO2 injected using the finite element heat and mass-transfer code (FEHM) numerical reservoir simulator. Experimentally measured relative permeability parameter values were used to generate distribution functions for parameter sampling. The parameter sensitivity study analyzed five different levels for each of the relative permeability model parameters. All but one of the parameters changed the CO2 injectivity by <10%, less than the geostatistical uncertainty that applies to all large subsurface systems due to natural geophysical variability and inherently small sample sizes. The exception was the end-point CO2 relative permeability, k0_r,CO2, the maximum attainable effective CO2 permeability during CO2 invasion, which changed CO2 injectivity by as much as 80%. Similarly, Monte Carlo simulation using 1000 realizations of relative permeability parameters showed no relationship between CO2 injectivity and any of the parameters but k0_r,CO2, which had a very strong (R^2 = 0.9685) power law relationship with total CO2 injected. Model sensitivity to k0_r,CO2 points to the importance of accurate core flood and wettability measurements.

  5. Investigation of uncertainty in CO 2 reservoir models: A sensitivity analysis of relative permeability parameter values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Nozomu; Levine, Jonathan S.; Stauffer, Philip H.

    Numerical reservoir models of CO2 injection in saline formations rely on parameterization of laboratory-measured pore-scale processes. Here, we have performed a parameter sensitivity study and Monte Carlo simulations to determine the normalized change in total CO2 injected using the finite element heat and mass-transfer code (FEHM) numerical reservoir simulator. Experimentally measured relative permeability parameter values were used to generate distribution functions for parameter sampling. The parameter sensitivity study analyzed five different levels for each of the relative permeability model parameters. All but one of the parameters changed the CO2 injectivity by <10%, less than the geostatistical uncertainty that applies to all large subsurface systems due to natural geophysical variability and inherently small sample sizes. The exception was the end-point CO2 relative permeability, k0_r,CO2, the maximum attainable effective CO2 permeability during CO2 invasion, which changed CO2 injectivity by as much as 80%. Similarly, Monte Carlo simulation using 1000 realizations of relative permeability parameters showed no relationship between CO2 injectivity and any of the parameters but k0_r,CO2, which had a very strong (R^2 = 0.9685) power law relationship with total CO2 injected. Model sensitivity to k0_r,CO2 points to the importance of accurate core flood and wettability measurements.

  6. Correlation between the structural and optical properties of ion-assisted hafnia thin films

    NASA Astrophysics Data System (ADS)

    Scaglione, Salvatore; Sarto, Francesca; Alvisi, Marco; Rizzo, Antonella; Perrone, Maria R.; Protopapa, Maria L.

    2000-03-01

    Ion beam assistance during film growth is one of the most useful methods to obtain dense films with improved optical and structural properties. Hafnia is widely used in optical coatings operating in the UV region of the spectrum, and its optical properties depend on the production method and the physical parameters of the species involved in the deposition process. In this work, hafnia thin films were evaporated by an e-gun and ion-assisted during the growth process. The deposition parameters (ion beam energy, density of ions impinging on the growing film, and the number of atoms arriving from the crucible) have been related to the optical and structural properties of the film itself. The absorption coefficient and the refractive index were measured by spectrophotometric techniques, while the microstructure was studied by means of x-ray diffraction. A strict correlation between the grain size, the optical properties, and the laser damage threshold measurements at 248 nm was found for the samples deposited at different deposition parameters.

  7. Automated live cell screening system based on a 24-well-microplate with integrated micro fluidics.

    PubMed

    Lob, V; Geisler, T; Brischwein, M; Uhl, R; Wolf, B

    2007-11-01

    In research, pharmacologic drug screening, and medical diagnostics, the trend towards functional assays using living cells persists. Research groups working with living cells are confronted with the problem that common endpoint measurement methods are not able to map dynamic changes. With consideration of time as a further dimension, the dynamic and networked molecular processes of cells in culture can be monitored. These processes can be investigated by measuring several extracellular parameters. This paper describes a high-content system that provides real-time monitoring data of cell parameters (metabolic and morphological alterations), e.g., upon treatment with drug compounds. Accessible are acidification rates, oxygen consumption, and changes in adhesion forces within 24 cell cultures in parallel. Addressing the rising interest in biomedical and pharmacological high-content screening assays, a concept has been developed which integrates multi-parametric sensor readout, automated imaging and probe handling into a single embedded platform. A life-maintenance system keeps important environmental parameters (gas, humidity, sterility, temperature) constant.

  8. 40 CFR 63.1323 - Batch process vents-methods and procedures for group determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... recovering monomer, reaction products, by-products, or solvent from a stripper operated in batch mode, and the primary condenser recovering monomer, reaction products, by-products, or solvent from a...

  9. Prediction of Continuous Cooling Transformation Diagrams for Dual-Phase Steels from the Intercritical Region

    NASA Astrophysics Data System (ADS)

    Colla, V.; Desanctis, M.; Dimatteo, A.; Lovicu, G.; Valentini, R.

    2011-09-01

    The purpose of the present work is the implementation and validation of a model able to predict the microstructure changes and the mechanical properties of modern high-strength dual-phase steels after the continuous annealing process line (CAPL) and galvanizing (Galv) process. Experimental continuous cooling transformation (CCT) diagrams for 13 differently alloyed dual-phase steels were measured by dilatometry from the intercritical range and were used to tune the parameters of the microstructural prediction module of the model. Mechanical properties and microstructural features were measured for more than 400 dual-phase steels simulating the CAPL and Galv industrial processes, and the results were used to construct the mechanical model that predicts mechanical properties from microstructural features, chemistry, and process parameters. The model was validated and proved its efficiency in reproducing the transformation kinetics and mechanical properties of dual-phase steels produced by typical industrial processes. Although it is limited to the dual-phase grades and chemical compositions explored, this model will constitute a useful tool for the steel industry.

  10. Active chatter suppression with displacement-only measurement in turning process

    NASA Astrophysics Data System (ADS)

    Ma, Haifeng; Wu, Jianhua; Yang, Liuqing; Xiong, Zhenhua

    2017-08-01

    Regenerative chatter is a major hindrance to achieving high quality and high production rates in machining processes. Various active controllers have been proposed to mitigate chatter. However, most existing controllers were developed on the basis of multi-state feedback of the system, and state observers were usually needed. Moreover, model parameters of the machining process (mass, damping and stiffness) were required in existing active controllers. In this study, an active sliding mode controller, which employs a dynamic output feedback sliding surface for the unmatched condition and an adaptive law for disturbance estimation, is designed, analyzed, and validated for chatter suppression in the turning process. Only displacement measurement is required by this approach. Other sensors and state observers are not needed. Moreover, it facilitates rapid implementation since the designed controller is established without using model parameters of the turning process. Theoretical analysis, numerical simulations and experiments on a computer numerical control (CNC) lathe are presented. It is shown that chatter can be substantially attenuated and the chatter-free region can be significantly expanded with the presented method.

  11. X-ray microtomography study of the compaction process of rods under tapping.

    PubMed

    Fu, Yang; Xi, Yan; Cao, Yixin; Wang, Yujie

    2012-05-01

    We present an x-ray microtomography study of the compaction process of cylindrical rods under tapping. The process is monitored by measuring the evolution of the orientational order parameter, local, and overall packing densities as a function of the tapping number for different tapping intensities. The slow relaxation dynamics of the orientational order parameter can be well fitted with a stretched-exponential law with stretching exponents ranging from 0.9 to 1.6. The corresponding relaxation time versus tapping intensity follows an Arrhenius behavior which is reminiscent of the slow dynamics in thermal glassy systems. We also investigated the boundary effect on the ordering process and found that boundary rods order faster than interior ones. In searching for the underlying mechanism of the slow dynamics, we estimated the initial random velocities of the rods under tapping and found that the ordering process is compatible with a diffusion mechanism. The average coordination number as a function of the tapping number at different tapping intensities has also been measured, which spans a range from 6 to 8.
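
    Fitting the stretched-exponential (Kohlrausch-type) relaxation mentioned above is a standard curve fit; the sketch below uses an invented order-parameter series with parameters in the range reported.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_exp(n, s_inf, ds, tau, beta):
        """Kohlrausch-type relaxation of the order parameter with tap number n."""
        return s_inf - ds * np.exp(-(n / tau) ** beta)

    # Hypothetical order-parameter evolution under tapping
    rng = np.random.default_rng(0)
    n = np.arange(1, 401, dtype=float)
    s = stretched_exp(n, 0.8, 0.5, 80.0, 1.2) + rng.normal(0.0, 0.01, n.size)

    popt, _ = curve_fit(stretched_exp, n, s, p0=[0.7, 0.4, 50.0, 1.0])
    print(popt)   # recovered s_inf, ds, tau (relaxation 'time' in taps), beta
    ```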

  12. A Novel Method for Measuring the Diffusion, Partition and Convective Mass Transfer Coefficients of Formaldehyde and VOC in Building Materials

    PubMed Central

    Xiong, Jianyin; Huang, Shaodan; Zhang, Yinping

    2012-01-01

    The diffusion coefficient (D_m) and material/air partition coefficient (K) are two key parameters characterizing the sorption behavior of formaldehyde and volatile organic compounds (VOC) in building materials. Based on the sorption process in an airtight chamber, this paper proposes a novel method to measure the two key parameters, as well as the convective mass transfer coefficient (h_m). Compared to traditional methods, it has the following merits: (1) K, D_m and h_m can be obtained simultaneously, so the method is convenient to use; (2) it is time-saving, since just one sorption process in an airtight chamber is required; (3) the determination of h_m is based on the formaldehyde and VOC concentration data in the test chamber rather than the generally used empirical correlations obtained from the heat and mass transfer analogy, and thus is more accurate and can be regarded as a significant improvement. The present method is applied to measure the three parameters by treating experimental data from the literature, and good results are obtained, which validates the effectiveness of the method. Our new method also provides a potential pathway for measuring h_m of semi-volatile organic compounds (SVOC) by using that of VOC. PMID:23145156

  13. Wave data processing toolbox manual

    USGS Publications Warehouse

    Sullivan, Charlene M.; Warner, John C.; Martini, Marinna A.; Lightsom, Frances S.; Voulgaris, George; Work, Paul

    2006-01-01

    Researchers routinely deploy oceanographic equipment in estuaries, coastal nearshore environments, and shelf settings. These deployments usually include tripod-mounted instruments to measure a suite of physical parameters such as currents, waves, and pressure. Instruments such as the RD Instruments Acoustic Doppler Current Profiler (ADCP(tm)), the Sontek Argonaut, and the Nortek Aquadopp(tm) Profiler (AP) can measure these parameters. The data from these instruments must be processed using proprietary software unique to each instrument to convert measurements to real physical values. These processed files are then available for dissemination and scientific evaluation. For example, the proprietary processing program used to process data from the RD Instruments ADCP for wave information is called WavesMon. Depending on the length of the deployment, WavesMon will typically produce thousands of processed data files. These files are difficult to archive and further analysis of the data becomes cumbersome. More imperative is that these files alone do not include sufficient information pertinent to that deployment (metadata), which could hinder future scientific interpretation. This open-file report describes a toolbox developed to compile, archive, and disseminate the processed wave measurement data from an RD Instruments ADCP, a Sontek Argonaut, or a Nortek AP. This toolbox will be referred to as the Wave Data Processing Toolbox. The Wave Data Processing Toolbox congregates the processed files output from the proprietary software into two NetCDF files: one file contains the statistics of the burst data and the other file contains the raw burst data (additional details described below). One important advantage of this toolbox is that it converts the data into NetCDF format. Data in NetCDF format is easy to disseminate, is portable to any computer platform, and is viewable with public-domain freely-available software. Another important advantage is that a metadata structure is embedded with the data to document pertinent information regarding the deployment and the parameters used to process the data. Using this format ensures that the relevant information about how the data was collected and converted to physical units is maintained with the actual data. EPIC-standard variable names have been utilized where appropriate. These standards, developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) (http://www.pmel.noaa.gov/epic/), provide a universal vernacular allowing researchers to share data without translation.

  14. Optimization of process parameters for a quasi-continuous tablet coating system using design of experiments.

    PubMed

    Cahyadi, Christine; Heng, Paul Wan Sia; Chan, Lai Wah

    2011-03-01

    The aim of this study was to identify and optimize the critical process parameters of the newly developed Supercell quasi-continuous coater for optimal tablet coat quality. Design of experiments, aided by multivariate analysis techniques, was used to quantify the effects of various coating process conditions and their interactions on the quality of film-coated tablets. The process parameters varied included batch size, inlet temperature, atomizing pressure, plenum pressure, spray rate and coating level. An initial screening stage was carried out using a 2^(6-1) resolution IV fractional factorial design. Following these preliminary experiments, an optimization study was carried out using the Box-Behnken design. Main response variables measured included drug-loading efficiency, coat thickness variation, and the extent of tablet damage. Apparent optimum conditions were determined by using response surface plots. The process parameters exerted various effects on the different response variables. Hence, trade-offs between individual optima were necessary to obtain the best compromise set of conditions. The adequacy of the optimized process conditions in meeting the combined goals for all responses was indicated by the composite desirability value. By using response surface methodology and optimization, coating conditions which produced coated tablets of high drug-loading efficiency, low incidences of tablet damage and low coat thickness variation were defined. Optimal conditions were found to vary over a large spectrum when different responses were considered. Changes in processing parameters across the design space did not result in drastic changes to coat quality, thereby demonstrating robustness in the Supercell coating process. © 2010 American Association of Pharmaceutical Scientists
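
    The Box-Behnken/response-surface step can be illustrated with a quadratic fit over coded factors followed by a grid search for the apparent optimum. The three factors, the 15-run design, and the response values below are hypothetical, and a single response stands in for the multi-response desirability analysis.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Hypothetical Box-Behnken design in coded units (-1, 0, +1) for three factors
    # (e.g., inlet temperature, spray rate, atomizing pressure) and one response
    # (coat thickness variation, %); all values are illustrative only.
    X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
                  [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
                  [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
                  [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
    y = np.array([9.1, 7.8, 8.4, 6.9, 8.8, 7.5, 8.0, 6.6,
                  8.5, 7.9, 7.7, 7.1, 6.2, 6.4, 6.3])

    # Fit a full quadratic response surface and locate the apparent optimum
    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    rsm.fit(X, y)
    grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
    print(grid[np.argmin(rsm.predict(grid))])   # coded settings minimizing the response
    ```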

  15. Calibration of the ARID robot

    NASA Technical Reports Server (NTRS)

    Doty, Keith L

    1992-01-01

    The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter, calibration-model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration-model of the ARID for a particular region: assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contra-indicate the feasibility of the calibration method developed here.

  16. Selected physical properties of various diesel blends

    NASA Astrophysics Data System (ADS)

    Hlaváčová, Zuzana; Božiková, Monika; Hlaváč, Peter; Regrut, Tomáš; Ardonová, Veronika

    2018-01-01

    The quality determination of biofuels requires identifying their chemical and physical parameters. The key physical parameters are rheological, thermal, and electrical properties. In our study, we investigated samples of diesel blends with rapeseed methyl ester content in the range from 3 to 100%. In these samples, we measured basic thermophysical properties, including thermal conductivity and thermal diffusivity, using two different transient methods: the hot-wire method and the dynamic plane source method. Every thermophysical parameter was measured 100 times using both methods for all samples. Dynamic viscosity was measured during the heating process over the temperature range 20-80°C. A digital rotational viscometer (Brookfield DV 2T) was used for dynamic viscosity detection. Electrical conductivity was measured using a digital conductivity meter (Model 1152) in a temperature range from -5 to 30°C. The highest values of the thermal parameters were reached in the diesel sample with the highest biofuel content. The dynamic viscosity of the samples increased with higher concentrations of the bio-component rapeseed methyl esters. The electrical conductivity of the blends also increased with rapeseed methyl ester content.

  17. Development of a Kinetic Assay for Late Endosome Movement.

    PubMed

    Esner, Milan; Meyenhofer, Felix; Kuhn, Michael; Thomas, Melissa; Kalaidzidis, Yannis; Bickle, Marc

    2014-08-01

    Automated imaging screens are performed mostly on fixed and stained samples to simplify the workflow and increase throughput. Some processes, such as the movement of cells and organelles or measuring membrane integrity and potential, can be measured only in living cells. Developing such assays to screen large compound or RNAi collections is challenging in many respects. Here, we develop a live-cell high-content assay for tracking endocytic organelles in medium throughput. We evaluate the added value of measuring kinetic parameters compared with measuring static parameters alone. We screened 2000 compounds in U-2 OS cells expressing Lamp1-GFP to label late endosomes. All hits have phenotypes in both static and kinetic parameters. However, we show that the kinetic parameters enable better discrimination of the mechanisms of action. Most of the compounds cause a decrease in the motility of endosomes, but we identify several compounds that increase endosomal motility. In summary, we show that kinetic data help to better discriminate phenotypes and thereby obtain more subtle phenotypic clustering. © 2014 Society for Laboratory Automation and Screening.

  18. Development of analysis technique to predict the material behavior of blowing agent

    NASA Astrophysics Data System (ADS)

    Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo

    2014-11-01

    In order to numerically simulate the foaming behavior of mastic sealer containing a blowing agent, foaming and driving force models are needed that incorporate the foaming characteristics. In addition, an elastic stress model is required to represent the material behavior of the co-existing phase of the liquid state and the cured polymer. It is important to determine thermal properties such as thermal conductivity and specific heat because foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to explain the foaming process and the material behavior during and after the process. To obtain the material parameters in each model, the following experiments and corresponding numerical simulations are performed: a thermal test, a simple shear test, and a foaming test. Error functions are defined as the differences between the experimental measurements and the numerical simulation results, and the parameters are then determined by minimizing these error functions. To ensure the validity of the obtained parameters, a confirmation simulation for each model is conducted by applying the determined parameters. Cross-verification is performed by measuring the foaming/shrinkage force. The cross-verification results tended to follow the experimental results. Notably, it was possible to estimate the micro-deformation occurring in the automobile roof surface by applying the proposed model to an oven process analysis. The application of the developed analysis technique will contribute to designs with minimized micro-deformation.

  19. Peritoneal Fluid Transport rather than Peritoneal Solute Transport Associates with Dialysis Vintage and Age of Peritoneal Dialysis Patients.

    PubMed

    Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia

    2016-01-01

    During peritoneal dialysis (PD), the peritoneal membrane undergoes ageing processes that affect its function. Here we analyzed associations of patient age and dialysis vintage with parameters of peritoneal transport of fluid and solutes, directly measured and estimated based on the pore model, for individual patients. Thirty-three patients (15 females; age 60 (21-87) years; median time on PD 19 (3-100) months) underwent a sequential peritoneal equilibration test. Dialysis vintage and patient age did not correlate. Estimation of parameters of the two-pore model of peritoneal transport was performed. The estimated fluid transport parameters, including hydraulic permeability (LpS), fraction of ultrasmall pores (α_u), osmotic conductance for glucose (OCG), and peritoneal absorption, were generally independent of solute transport parameters (diffusive mass transport parameters). Fluid transport parameters correlated, whereas transport parameters for small solutes and proteins did not correlate, with dialysis vintage and patient age. Although LpS and OCG were lower for older patients and those with long dialysis vintage, α_u was higher. Thus, fluid transport parameters--rather than solute transport parameters--are linked to dialysis vintage and patient age and should therefore be included when monitoring processes linked to ageing of the peritoneal membrane.

  20. Multi-surface topography targeted plateau honing for the processing of cylinder liner surfaces of automotive engines

    NASA Astrophysics Data System (ADS)

    Lawrence, K. Deepak; Ramamoorthy, B.

    2016-03-01

    Cylinder bores of automotive engines are 'engineered' surfaces that are processed using a multi-stage honing process to generate multiple layers of micro-geometry for meeting the different functional requirements of the piston assembly system. The final processed surfaces should comply with several surface topographic specifications that are relevant to the good tribological performance of the engine. Selecting the process parameters in the three stages of honing to obtain multiple surface topographic characteristics simultaneously within the specification tolerance is an important module of process planning and often poses a challenging task for process engineers. This paper presents a strategy combining robust process design and gray-relational analysis to evolve the operating levels of the honing process parameters in the rough, finish, and plateau honing stages, targeting multiple surface topographic specifications on the final running surface of the cylinder bores. Honing experiments were conducted in three stages, namely rough, finish, and plateau honing, on cast iron cylinder liners by varying four honing process parameters: rotational speed, oscillatory speed, pressure, and honing time. Abbott-Firestone curve based functional parameters (Rk, Rpk, Rvk, Mr1 and Mr2) coupled with mean roughness depth (Rz, DIN/ISO) and honing angle were measured and identified as the surface quality performance targets to be achieved. The experimental results show that the proposed approach is effective in generating cylinder liner surfaces that simultaneously meet the explicit surface topographic specifications currently practiced by the industry.

  1. Processing study of a high temperature adhesive

    NASA Technical Reports Server (NTRS)

    Progar, D. J.

    1984-01-01

    An adhesive-bonding process cycle study was performed for a polyimidesulphone. This high-molecular-weight, linear aromatic system possesses properties which make it attractive as a processable, low-cost material for elevated-temperature applications. The results of a study to better understand the parameters that affect the adhesive properties of the polymer for titanium alloy adherends are presented. These include the tape preparation, the use of a primer, and press and simulated-autoclave processing conditions. The polymer was characterized using Fourier transform infrared spectroscopy, glass transition temperature determination, flow measurements, and weight loss measurements. The lap shear strength of the adhesive was used to evaluate the effects of the bonding process variations.

  2. Online residence time distribution measurement of thermochemical biomass pretreatment reactors

    DOE PAGES

    Sievers, David A.; Kuhn, Erik M.; Stickel, Jonathan J.; ...

    2015-11-03

    Residence time is a critical parameter that strongly affects the product profile and overall yield achieved from thermochemical pretreatment of lignocellulosic biomass during production of liquid transportation fuels. The residence time distribution (RTD) is one important measure of reactor performance and provides a metric to use when evaluating changes in reactor design and operating parameters. An inexpensive and rapid RTD measurement technique was developed to measure the residence time characteristics in biomass pretreatment reactors and similar equipment processing wet-granular slurries. Sodium chloride was pulsed into the feed entering a 600 kg/d pilot-scale reactor operated at various conditions, and the aqueous salt concentration was measured in the discharge using specially fabricated electrical conductivity instrumentation. This online conductivity method was superior in both measurement accuracy and resource requirements compared to offline analysis. Experimentally measured mean residence time values were longer than estimated by simple calculation, and screw speed and throughput rate were investigated as contributing factors. In conclusion, a semi-empirical model was developed to predict the mean residence time as a function of operating parameters, which enabled improved agreement.
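
    The mean residence time referred to above is conventionally obtained from the moments of the measured pulse-tracer response, t_mean = ∫ t·C(t) dt / ∫ C(t) dt. A minimal sketch of that calculation, using a synthetic conductivity trace rather than the pilot-reactor data, is shown below.

```python
# Sketch: mean residence time from a pulse-tracer response, t_mean = ∫ t C dt / ∫ C dt.
# The conductivity trace below is synthetic, not pilot-reactor data.
import numpy as np

t = np.linspace(0.0, 60.0, 601)                       # minutes
c = np.exp(-0.5 * ((t - 20.0) / 5.0) ** 2)            # tracer concentration proxy

E = c / np.trapz(c, t)                                # normalized RTD, E(t)
t_mean = np.trapz(t * E, t)                           # first moment
var = np.trapz((t - t_mean) ** 2 * E, t)              # spread of the distribution
print(f"mean residence time ~ {t_mean:.1f} min, variance ~ {var:.1f} min^2")
```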

  3. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics, and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method via a dedicated Expectation-Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
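
    A schematic form of such a regression model with a hidden logistic process (written in generic notation; it is not guaranteed to match the authors' exact formulation) is

    $$ y_t \;=\; \sum_{k=1}^{K} \pi_k(t;\mathbf{w})\,\boldsymbol{\beta}_k^{\top}\mathbf{x}_t \;+\; \varepsilon_t, \qquad \pi_k(t;\mathbf{w}) \;=\; \frac{\exp\!\big(w_{k0}+w_{k1}t\big)}{\sum_{\ell=1}^{K}\exp\!\big(w_{\ell 0}+w_{\ell 1}t\big)}, $$

    where x_t is the polynomial covariate vector at time t; the EM algorithm then alternates between computing the responsibilities of the K regression components and updating the regression coefficients β_k and the logistic weights w, the latter via the IRLS step mentioned above.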

  4. Output statistics of laser anemometers in sparsely seeded flows

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1982-01-01

    It is noted that until very recently, research on this topic concentrated on the particle arrival statistics and the influence of the optical parameters on them. Little attention has been paid to the influence of subsequent processing on the measurement statistics. There is also controversy over whether the effects of the particle statistics can be measured. It is shown here that some of the confusion derives from a lack of understanding of the experimental parameters that are to be controlled or known. A rigorous framework is presented for examining the measurement statistics of such systems. To provide examples, two problems are then addressed. The first has to do with a sample-and-hold processor, the second with what is called a saturable processor. The sample-and-hold processor converts the output to a continuous signal by holding the last reading until a new one is obtained. The saturable system is one in which the maximum processable rate is set by the dead time of some unit in the system. At high particle rates, the processed rate is determined by the dead time.

  5. Remote sensing of wetland parameters related to carbon cycling

    NASA Technical Reports Server (NTRS)

    Bartlett, David S.; Johnson, Robert W.

    1985-01-01

    Measurement of the rates of important biogeochemical fluxes on regional or global scales is vital to understanding the geochemical and climatic consequences of natural biospheric processes and of human intervention in those processes. Remote data gathering and interpretation techniques were used to examine important cycling processes taking place in wetlands over large geographic expanses. Large-area estimation of vegetative biomass and productivity depends upon accurate, consistent measurements of canopy spectral reflectance and upon wide applicability of algorithms relating reflectance to biometric parameters. Results of the use of airborne multispectral scanner data to map above-ground biomass in a Delaware salt marsh are shown. The mapping uses an effective algorithm linking biomass to measured spectral reflectance and a means to correct the scanner data for large variations in the angle of observation of the canopy. The consistency of radiometric biomass algorithms for marsh grass when they are applied over large latitudinal and tidal range gradients was also examined. Results of a 1-year study of methane emissions from tidal wetlands along a salinity gradient show marked effects of temperature, season, and pore-water chemistry in mediating flux to the atmosphere.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Junhwan; Hwang, Sungui; Park, Kyihwan, E-mail: khpark@gist.ac.kr

    To utilize a time-of-flight-based laser scanner as a distance measurement sensor, the measurable distance and accuracy are the most important performance parameters to consider. For these purposes, the optical system and electronic signal processing of the laser scanner should be optimally designed in order to reduce the distance error caused by optical crosstalk and wide-dynamic-range input. An optical system design for removing the optical crosstalk problem is proposed in this work. Intensity control is also considered to solve the problem of phase-shift variation in the signal processing circuit caused by object reflectivity. Experimental validation of the optical system and signal processing design is performed using 3D measurements.

  7. Influence on surface characteristics of electron beam melting process (EBM) by varying the process parameters

    NASA Astrophysics Data System (ADS)

    Dolimont, Adrien; Michotte, Sebastien; Rivière-Lorphèvre, Edouard; Ducobu, François; Vivès, Solange; Godet, Stéphane; Henkes, Tom; Filippi, Enrico

    2017-10-01

    The use of additive manufacturing processes keeps growing in the aerospace and biomedical industries. Among the numerous existing technologies, the Electron Beam Melting process has advantages (good dimensional accuracy, fully dense parts) and disadvantages (powder handling, support structures, high surface roughness). Analyses of the surface characteristics are of interest for gaining a better understanding of EBM operations, but such analyses are not often found in the literature. The main goal of this study is to determine whether it is possible to improve the surface roughness by modifying some parameters of the process (scan speed function, number of contours, order of contours, etc.) on samples with different thicknesses. The experimental work on surface roughness leads to a statistical analysis of 586 measurements on simple-geometry EBM parts.

  8. The state of the art of the impact of sampling uncertainty on measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Leite, V. J.; Oliveira, E. C.

    2018-03-01

    The measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not hold for the sampling uncertainty, which has been neglected because it faces several obstacles and there is little clarity on how to perform the procedures, although it is admittedly indispensable to the measurement process. This paper aims at describing the state of the art of sampling uncertainty and at assessing its relevance to measurement uncertainty.
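
    In the simplest treatment, when the sampling and analytical contributions can be taken as independent, the two components are combined in quadrature; this is a common simplification rather than a full uncertainty budget:

    $$ u_{\text{measurement}} \;=\; \sqrt{\,u_{\text{sampling}}^{2} + u_{\text{analytical}}^{2}\,} $$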

  9. Tensiomyographic Measurement of Atrophy-Related Processes During Bed Rest and Recovery

    NASA Astrophysics Data System (ADS)

    Simunic, Bostjan; Degens, Hans; Rittweger, Jorn; Narici, Marco; Pisot, Venceslav; Mekjavic, Igor B.; Pisot, Rado

    2013-02-01

    Tensiomyographic (TMG) parameters were recently proposed for a non-invasive estimation of MHC distribution in the human vastus lateralis muscle. However, the potential of TMG is even greater, as it offers additional insight into skeletal muscle physiology, especially in the context of atrophy and hypertrophy. The purpose of this study was to characterize the time dynamics of TMG-measured contraction time (Tc) and maximal response amplitude (Dm), together with muscle belly thickness, measured throughout a 35-day bed rest and the subsequent 30-day recovery (N = 10 males; age 24.3 ± 2.6 years). Measurements were performed in two postural muscles (vastus medialis and lateralis) and one non-postural muscle (biceps femoris). During the bed rest period we found different dynamics of the decrease in muscle thickness and the increase in Dm. Tc was unchanged in the postural muscles, but increased significantly in the non-postural muscle and remained elevated even at the end of recovery. We conclude that TMG-derived parameters are more sensitive to muscle atrophic and hypertrophic processes than the biomedical imaging technique. However, the mechanism that regulates Dm still needs to be identified.

  10. A "total parameter estimation" method in the varification of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    Conventionally, hydrological models are used for runoff or flood forecasting; hence, model parameters are commonly estimated from discharge measurements at the catchment outlets. With advancements in hydrological sciences and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKESHE, and WEP, have gradually become the mainstream models in the hydrological sciences. However, the assessment of distributed hydrological models and the determination of model parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: we need not only to simulate the runoff generation process for flood forecasting in wet areas, but also to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As distributed hydrological models can simulate the physical processes within a catchment, we can obtain a more realistic representation of the actual water cycle within the simulation model. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of the rainfall and is concentrated during the rainy season from June to August each year. During other months, many of the perennial rivers within the river basin dry up. Thus, runoff simulation alone does not fully exploit the distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River Basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and provides a new way of thinking in the hydrological sciences.

  11. Reduced iron parameters and cognitive processes in children and adolescents with DM1 compared to those with standard parameters.

    PubMed

    Mojs, Ewa; Stanisławska-Kubiak, Maia; Wójciak, Rafał W; Wojciechowska, Julita; Przewoźniak, Sabina

    2016-03-01

    Anemia in patients with diabetes is not rare and may contribute to the complications of the disease. The reduced iron parameters observed in studies of children with type 1 diabetes can lead to cognitive impairment. The aim of the study was to determine whether children and adolescents with type 1 diabetes, in whom reduced iron parameters are observed in control tests, may also show reduced cognitive performance. The study included 100 children with type 1 diabetes aged 6-17 years. During control tests, the patients' morphological blood parameters were measured: red blood cells (RBC), hemoglobin, glycosylated hemoglobin, hematocrit, RBC volume, the molar mass of hemoglobin in RBC (MCH), mean corpuscular hemoglobin in RBC, and serum iron concentration using flame atomic absorption spectroscopy; cognitive performance was assessed with the Wechsler Intelligence Scale for Children (WISC-R). In the group of children with type 1 diabetes, significantly lower concentrations of three iron parameters affected the non-verbal intelligence measured with the WISC-R. The prevalence of reduced iron parameters justifies further screening in all children with type 1 diabetes and appropriate preventive measures to reduce the risk of their occurrence. Copyright © 2016 American Federation for Medical Research.

  12. Measurement and modeling of unsaturated hydraulic conductivity

    USGS Publications Warehouse

    Perkins, Kim S.; Elango, Lakshmanan

    2011-01-01

    The unsaturated zone plays an extremely important hydrologic role that influences water quality and quantity, ecosystem function and health, the connection between atmospheric and terrestrial processes, nutrient cycling, soil development, and natural hazards such as flooding and landslides. Unsaturated hydraulic conductivity is one of the main properties considered to govern flow; however, it is very difficult to measure accurately. Knowledge of the highly nonlinear relationship between unsaturated hydraulic conductivity (K) and volumetric water content is required for widely used models of water flow and solute transport processes in the unsaturated zone. Measurement of the unsaturated hydraulic conductivity of sediments is costly and time consuming; therefore, the use of models that estimate this property from more easily measured bulk-physical properties is common. In hydrologic studies, calculations based on property-transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values with the use of neural networks has become increasingly common. Hydraulic properties predicted using databases may be adequate in some applications, but not others. This chapter discusses, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content. The parameters that describe the K curve obtained by different methods are used directly in Richards' equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter explores the complications of using laboratory-measured or estimated properties for field-scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes are discussed.
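
    As one concrete example of the highly nonlinear K-water content relationship discussed above, the sketch below evaluates the widely used van Genuchten-Mualem parameterization. The parameter values are illustrative only and are not taken from the chapter.

```python
# Sketch: van Genuchten-Mualem relative hydraulic conductivity as a function of
# water content. One common parameterization; values below are illustrative.
import numpy as np

theta_r, theta_s = 0.05, 0.40   # residual and saturated water content
Ks = 1.0e-5                     # saturated hydraulic conductivity, m/s
n = 1.8                         # van Genuchten shape parameter
m = 1.0 - 1.0 / n

def k_unsat(theta):
    Se = np.clip((theta - theta_r) / (theta_s - theta_r), 1e-9, 1.0)  # effective saturation
    return Ks * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

for theta in (0.10, 0.20, 0.30, 0.40):
    print(theta, k_unsat(theta))
```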

  13. Ultrasensitive investigations of biological systems by fluorescence correlation spectroscopy.

    PubMed

    Haustein, Elke; Schwille, Petra

    2003-02-01

    Fluorescence correlation spectroscopy (FCS) extracts information about molecular dynamics from the tiny fluctuations that can be observed in the emission of small ensembles of fluorescent molecules in thermodynamic equilibrium. Employing a confocal setup in conjunction with highly dilute samples, the average number of fluorescent particles simultaneously within the measurement volume (approximately 1 fl) is minimized. Among the multitude of chemical and physical parameters accessible by FCS are local concentrations, mobility coefficients, rate constants for association and dissociation processes, and even enzyme kinetics. As any reaction causing an alteration of the primary measurement parameters such as fluorescence brightness or mobility can be monitored, the application of this noninvasive method to unravel processes in living cells is straightforward. Due to the high spatial resolution of less than 0.5 microm, selective measurements in cellular compartments, e.g., to probe receptor-ligand interactions on cell membranes, are feasible. Moreover, the observation of local molecular dynamics provides access to environmental parameters such as local oxygen concentrations, pH, or viscosity. Thus, this versatile technique is of particular attractiveness for researchers striving for quantitative assessment of interactions and dynamics of small molecular quantities in biologically relevant systems.

  14. Observability of ionospheric space-time structure with ISR: A simulation study

    NASA Astrophysics Data System (ADS)

    Swoboda, John; Semeter, Joshua; Zettergren, Matthew; Erickson, Philip J.

    2017-02-01

    The sources of error from electronically steerable array (ESA) incoherent scatter radar (ISR) systems are investigated both theoretically and with use of an open-source ISR simulator, developed by the authors, called Simulator for ISR (SimISR). The main sources of error incorporated in the simulator include statistical uncertainty, which arises due to the nature of the measurement mechanism, and the inherent space-time ambiguity from the sensor. SimISR can take a field of plasma parameters, parameterized by time and space, and create simulated ISR data at the scattered electric field (i.e., complex receiver voltage) level, subsequently processing these data to show possible reconstructions of the original parameter field. To demonstrate general utility, we show a number of simulation examples, with two cases using data from a self-consistent multifluid transport model. Results highlight the significant influence of the forward model of the ISR process and the resulting statistical uncertainty on plasma parameter measurements and the core experiment design trade-offs that must be made when planning observations. These conclusions further underscore the utility of this class of measurement simulator as a design tool for more optimal experiment design efforts using flexible ESA class ISR systems.

  15. 40 CFR 63.488 - Methods and procedures for batch front-end process vent group determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... engineering principles, measurable process parameters, or physical or chemical laws or properties. Examples of... primary condenser recovering monomer, reaction products, by-products, or solvent from a stripper operated in batch mode, and the primary condenser recovering monomer, reaction products, by-products, or...

  16. The role of interior watershed processes in improving parameter estimation and performance of watershed models

    USDA-ARS?s Scientific Manuscript database

    Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the l...

  17. Geometry characteristics modeling and process optimization in coaxial laser inside wire cladding

    NASA Astrophysics Data System (ADS)

    Shi, Jianjun; Zhu, Ping; Fu, Geyan; Shi, Shihong

    2018-05-01

    The coaxial laser inside wire cladding method is very promising, as it has a very high efficiency and a consistent interaction between the laser and the wire. In this paper, the energy and mass conservation laws and a regression algorithm are used together to establish mathematical models of the relationship between the layer geometry characteristics (width, height, and cross-sectional area) and the process parameters (laser power, scanning velocity, and wire feeding speed). Over the selected parameter ranges, the values predicted by the models are compared with the experimentally measured results; minor errors exist, but both reflect the same regularity. From the models, it is seen that the width of the cladding layer is proportional to both the laser power and the wire feeding speed, while it first increases and then decreases with increasing scanning velocity. The height of the cladding layer is proportional to the scanning velocity and feeding speed and inversely proportional to the laser power. The cross-sectional area increases with increasing feeding speed and decreasing scanning velocity. Using the mathematical models, the geometry characteristics of the cladding layer can be predicted from the known process parameters. Conversely, the process parameters can be calculated from the targeted geometry characteristics. The models are also suitable for the multi-layer forming process. Using the optimized process parameters calculated from the models, a 45 mm-high thin-wall part was formed with smooth side surfaces.
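
    As a hedged illustration of the regression step described above, the sketch below fits an ordinary least-squares model of clad width against laser power, scanning velocity, and wire feeding speed. The data are synthetic; the paper's energy- and mass-conservation-based models are not reproduced here.

```python
# Sketch: least-squares regression of clad width against process parameters.
# The data below are synthetic; the paper's actual models are not reproduced here.
import numpy as np

# Columns: laser power (W), scanning velocity (mm/s), wire feeding speed (mm/s)
X = np.array([[1000, 5, 10], [1200, 5, 12], [1400, 7, 14],
              [1600, 7, 16], [1800, 9, 18], [2000, 9, 20]], dtype=float)
w = np.array([2.1, 2.4, 2.5, 2.8, 2.9, 3.2])          # measured clad width, mm (made up)

A = np.column_stack([np.ones(len(X)), X])             # add intercept column
coef, *_ = np.linalg.lstsq(A, w, rcond=None)          # [b0, b_power, b_velocity, b_feed]
predicted = A @ coef
print(coef, np.max(np.abs(predicted - w)))
```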

  18. Analysis of Flatness Deviations for Austenitic Stainless Steel Workpieces after Efficient Surface Machining

    NASA Astrophysics Data System (ADS)

    Nadolny, K.; Kapłonek, W.

    2014-08-01

    The following work is an analysis of the flatness deviations of a workpiece made of X2CrNiMo17-12-2 austenitic stainless steel. The workpiece surface was shaped using efficient machining techniques (milling, grinding, and smoothing). After the machining was completed, all surfaces underwent stylus measurements in order to obtain surface flatness and roughness parameters. For this purpose the stylus profilometer Hommel-Tester T8000 by Hommelwerke with HommelMap software was used. The research results are presented in the form of 2D surface maps, 3D surface topographies with extracted single profiles, Abbott-Firestone curves, and graphical studies of the Sk parameters. The results of these experimental tests demonstrated a possible correlation between flatness and roughness parameters and enabled an analysis of changes in these parameters from shaping and rough grinding to finish machining. The main novelty of this paper is a comprehensive analysis of the measurement results obtained during a three-step machining process of austenitic stainless steel. Simultaneous analysis of the individual machining steps (milling, grinding, and smoothing) enabled a complementary assessment of the process of shaping the workpiece surface macro- and micro-geometry, giving special consideration to minimizing the flatness deviations.

  19. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-the-art methods.
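
    The property such estimators exploit is that an Anscombe-type transform makes the variance of Poisson data approximately constant. The sketch below demonstrates this on simulated data; it is a generic illustration, not the authors' estimation algorithm.

```python
# Sketch: the Anscombe transform 2*sqrt(x + 3/8) approximately stabilizes the
# variance of Poisson data to 1, which is the property such estimators exploit.
# This is a generic illustration, not the authors' estimation algorithm.
import numpy as np

rng = np.random.default_rng(0)
for lam in (5.0, 20.0, 100.0):
    x = rng.poisson(lam, size=100_000)
    z = 2.0 * np.sqrt(x + 3.0 / 8.0)
    print(f"lambda={lam:6.1f}  raw var={x.var():7.1f}  stabilized var={z.var():.3f}")
```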

  20. Method and apparatus for measuring coupled flow, transport, and reaction processes under liquid unsaturated flow conditions

    DOEpatents

    McGrail, Bernard P.; Martin, Paul F.; Lindenmeier, Clark W.

    1999-01-01

    The present invention is a method and apparatus for measuring coupled flow, transport, and reaction processes under liquid unsaturated flow conditions. The method and apparatus of the present invention permit individual precipitation events to be distinguished and their effects on dissolution behavior to be isolated to the specific event. The present invention is especially useful for dynamically measuring hydraulic parameters when a chemical reaction occurs between a particulate material and either liquid or gas (e.g., air) or both, causing precipitation that changes the pore structure of the test material.

  1. An Approach to Maximize Weld Penetration During TIG Welding of P91 Steel Plates by Utilizing Image Processing and Taguchi Orthogonal Array

    NASA Astrophysics Data System (ADS)

    Singh, Akhilesh Kumar; Debnath, Tapas; Dey, Vidyut; Rai, Ram Naresh

    2017-10-01

    P91 is a modified 9Cr-1Mo steel. Fabricated structures and components of P91 have many applications in the power and chemical industries owing to excellent properties such as high-temperature stress corrosion resistance and low susceptibility to thermal fatigue at high operating temperatures. The weld quality and surface finish of fabricated P91 structures are very good when welded by tungsten inert gas (TIG) welding. However, the process has its limitations regarding weld penetration. The success of a welding process lies in fabricating with a combination of parameters that gives maximum weld penetration and minimum weld width. To investigate the effect of the autogenous TIG welding parameters on weld penetration and weld width, bead-on-plate welds were carried out on 6 mm thick P91 plates in accordance with a Taguchi L9 design. Welding current, welding speed, and gas flow rate were the three control variables in the investigation. After autogenous TIG welding, the weld width, weld penetration, and weld area were successfully measured by an image analysis technique developed for the study. The maximum error of the dimensions measured with the developed image analysis technique was only 2% compared to the measurements of the Leica-Q-Win-V3 software installed in an optical microscope. The measurements with the developed software, unlike measurements under a microscope, required minimal human intervention. An Analysis of Variance (ANOVA) confirms the significance of the selected parameters. Thereafter, Taguchi's method was successfully used to trade off between maximum penetration and minimum weld width while keeping the weld area at a minimum.
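
    For reference, Taguchi analyses of this kind typically rank parameter combinations by signal-to-noise ratios: larger-the-better for penetration and smaller-the-better for weld width. The sketch below computes both; the replicate values are invented for illustration.

```python
# Sketch: Taguchi signal-to-noise ratios used to rank parameter combinations.
# Penetration uses larger-the-better, weld width smaller-the-better.
# The replicate measurements below are invented for illustration.
import numpy as np

def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

penetration_mm = [2.9, 3.1, 3.0]     # replicates for one L9 run (hypothetical)
weld_width_mm = [6.2, 6.0, 6.4]

print(sn_larger_is_better(penetration_mm), sn_smaller_is_better(weld_width_mm))
```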

  2. Disentangling inhibition-based and retrieval-based aftereffects of distractors: Cognitive versus motor processes.

    PubMed

    Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian

    2018-05-01

    Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, two approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, it is also typical that both processes are hard to disentangle. For instance, in the negative priming literature (for a review, see Frings, Schneider, & Fox, 2015) this has been a continuous debate since the early 1980s. In the present study, we attempted to prove that both processes exist, but that they reflect distractor processing at different levels of representation. Distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of both processes. To support our argument that both processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential-Gaussian reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition component was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
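
    A minimal sketch of the ex-Gaussian decomposition used above, assuming simulated reaction times rather than the study's data, is to fit scipy's exponentially modified normal distribution, whose parameters map to μ = loc, σ = scale, and τ = K·scale.

```python
# Sketch: recovering ex-Gaussian RT components (mu, sigma, tau) with scipy's
# exponnorm distribution, where tau = K * scale. Data are simulated, not from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, tau = 0.45, 0.05, 0.15                      # seconds (made-up values)
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rt)
print("mu ~", round(loc_hat, 3), "sigma ~", round(scale_hat, 3),
      "tau ~", round(K_hat * scale_hat, 3))
```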

  3. Dynamic single photon emission computed tomography—basic principles and cardiac applications

    PubMed Central

    Gullberg, Grant T; Reutter, Bryan W; Sitek, Arkadiusz; Maltz, Jonathan S; Budinger, Thomas F

    2011-01-01

    The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time–activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time–activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements. PMID:20858925

  4. TOPICAL REVIEW: Dynamic single photon emission computed tomography—basic principles and cardiac applications

    NASA Astrophysics Data System (ADS)

    Gullberg, Grant T.; Reutter, Bryan W.; Sitek, Arkadiusz; Maltz, Jonathan S.; Budinger, Thomas F.

    2010-10-01

    The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time-activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time-activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements.

  5. Influence of Process Parameters on the Process Efficiency in Laser Metal Deposition Welding

    NASA Astrophysics Data System (ADS)

    Güpner, Michael; Patschger, Andreas; Bliedtner, Jens

    Conventionally manufactured tools are often constructed entirely of a high-alloyed, expensive tool steel. An alternative way to manufacture tools is the combination of a cost-efficient mild steel and a functional coating in the interaction zone of the tool. Thermal processing methods, like laser metal deposition, are always characterized by thermal distortion. The resistance against thermal distortion decreases with the reduction of the material thickness. As a consequence, a special process management is necessary for the laser-based coating of thin parts or tools. The experimental approach in the present paper is to keep the energy and the mass per unit length constant by varying the laser power, the feed rate, and the powder mass flow. The typical seam parameters are measured in order to characterize the cladding process, define process limits, and evaluate the process efficiency. Ways to optimize dilution, angular distortion, and clad height are presented.

  6. Addressing FinFET metrology challenges in 1X node using tilt-beam CD-SEM

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao; Zhou, Hua; Ge, Zhenhua; Vaid, Alok; Konduparthi, Deepasree; Osorio, Carmen; Ventola, Stefano; Meir, Roi; Shoval, Ori; Kris, Roman; Adan, Ofer; Bar-Zvi, Maayan

    2014-04-01

    At the 1X node, 3D FinFETs raise a number of new metrology challenges. Gate height and fin height are two of the most important parameters for process control. At present there is a metrology gap in inline, in-die measurement of these parameters. In order to fill this metrology gap, in-column beam tilt has been developed and implemented on the Applied Materials V4i+ top-down CD-SEM for height measurement. A low-tilt (5°) beam and a high-tilt (14°) beam have been calibrated to obtain two sets of images providing measurements of sidewall edge width, from which height is calculated in the host. Evaluations are done with applications in both gate height and fin height. TEM correlation with R2 of 0.89 and precision of 0.81 nm have been achieved on various in-die features in the gate height application. Fin height measurement shows lower accuracy (R2 of 0.77) and precision (1.49 nm) due to challenges posed by the fin geometry, yet it is still promising as a first attempt. Sensitivity to DOE offset, die-to-die, and in-die variation is demonstrated for both gate height and fin height. A process defect was successfully captured from inline wafers with gate height measurement implemented in production. This is the first successful demonstration of inline in-die gate height measurement for 14nm FinFET process control.

  7. Comparative fiber evaluation of the mesdan aqualab microwave moisture measurement instrument

    USDA-ARS?s Scientific Manuscript database

    Moisture is a key cotton fiber parameter, as it can impact the fiber quality and the processing of cotton fiber. The Mesdan Aqualab is a microwave-based fiber moisture measurement instrument for samples with moderate sample size. A program was implemented to determine the capabilities of the Aqual...

  8. Cotton micronaire measurements by small portable near infrared (nir) analyzers

    USDA-ARS?s Scientific Manuscript database

    A key quality and processing parameter for cotton fiber is micronaire, which is a function of the fiber’s maturity and fineness. Near Infrared (NIR) spectroscopy has previously shown the ability to measure micronaire, primarily in the laboratory and using large, research-grade laboratory NIR instru...

  9. Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT

    PubMed Central

    Bedayat, Arash; Kumamaru, Kanako; Powers, Sara L.; Signorelli, Jason; Steigner, Michael L.; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T.

    2011-01-01

    The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. Mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at the center plus four peripheral locations in the water phantom. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points within the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24% with no significant difference between the symmetric and asymmetric approach with respect to objective measures of image quality or subjective assessment using a four point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing supports its routine clinical use. PMID:21336552

  10. Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT.

    PubMed

    Bedayat, Arash; Rybicki, Frank J; Kumamaru, Kanako; Powers, Sara L; Signorelli, Jason; Steigner, Michael L; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T

    2012-02-01

    The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. Mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at the center plus four peripheral locations in the water phantom. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points within the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24% with no significant difference between the symmetric and asymmetric approach with respect to objective measures of image quality or subjective assessment using a four point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing supports its routine clinical use.

  11. Laser cutting metallic plates using a 2kW direct diode laser source

    NASA Astrophysics Data System (ADS)

    Fallahi Sichani, E.; Hauschild, D.; Meinschien, J.; Powell, J.; Assunção, E. G.; Blackburn, J.; Khan, A. H.; Kong, C. Y.

    2015-07-01

    This paper investigates the feasibility of using a 2kW direct diode laser source for producing high-quality cuts in a variety of materials. Cutting trials were performed in a two-stage experimental procedure. The first phase of trials was based on a one-factor-at-a-time change of process parameters aimed at exploring the process window and finding a semi-optimum set of parameters for each material/thickness combination. In the second phase, a full factorial experimental matrix was performed for each material and thickness, as a result of which the optimum cutting parameters were identified. Characteristic values of the optimum cuts were then measured as per BS EN ISO 9013:2002.

  12. Assessing heat treatment of chicken breast cuts by impedance spectroscopy.

    PubMed

    Schmidt, Franciny C; Fuentes, Ana; Masot, Rafael; Alcañiz, Miguel; Laurindo, João B; Barat, José M

    2017-03-01

    The aim of this work was to develop a new system based on impedance spectroscopy to assess the heat treatment of previously cooked chicken meat by means of two experiments. In the first, samples were cooked at different temperatures (from 60 to 90°C) until the core temperature of the meat reached the water bath temperature. In the second, the temperature was 80°C and the samples were cooked for different times (from 5 to 55 min). Impedance was measured once the samples had cooled. The examined processing parameters were the maximum temperature reached in the thermal centre of the samples, weight loss, moisture, and the integral of the temperature profile during the cooking-cooling process. The correlation between the processing parameters and impedance was studied by partial least squares regression. The models were able to predict the studied parameters. Our results are essential for developing a new system to control the technological, sensory, and safety aspects of cooked meat products on the whole meat processing line.
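
    A minimal sketch of the partial least squares step described above, using synthetic spectra rather than the study's impedance data, could look as follows (scikit-learn's PLSRegression is assumed as the implementation).

```python
# Sketch: relating impedance spectra to a processing parameter with partial least
# squares regression (synthetic spectra; not the study's data or model).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
X = rng.standard_normal((40, 50))            # 40 samples x 50 impedance frequencies
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(40)   # e.g., weight-loss proxy

pls = PLSRegression(n_components=3)
pls.fit(X, y)
print("R^2 on training data:", round(pls.score(X, y), 3))
```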

  13. Potential for Remotely Sensed Soil Moisture Data in Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Engman, Edwin T.

    1997-01-01

    Many hydrologic processes display a unique signature that is detectable with microwave remote sensing. These signatures are in the form of the spatial and temporal distributions of surface soil moisture and portray the spatial heterogeneity of hydrologic processes and properties that one encounters in drainage basins. The hydrologic processes that may be detected include ground water recharge and discharge zones, storm runoff contributing areas, regions of potential and less than potential ET, and information about the hydrologic properties of soils and heterogeneity of hydrologic parameters. Microwave remote sensing has the potential to detect these signatures within a basin in the form of volumetric soil moisture measurements in the top few cm. These signatures should provide information on how and where to apply soil physical parameters in distributed and lumped parameter models and how to subdivide drainage basins into hydrologically similar sub-basins.

  14. Quantitative modeling of viable cell density, cell size, intracellular conductivity, and membrane capacitance in batch and fed-batch CHO processes using dielectric spectroscopy.

    PubMed

    Opel, Cary F; Li, Jincai; Amanullah, Ashraf

    2010-01-01

    Dielectric spectroscopy was used to analyze typical batch and fed-batch CHO cell culture processes. Three methods of analysis (linear modeling, Cole-Cole modeling, and partial least squares regression) were used to correlate the spectroscopic data with routine biomass measurements [viable packed cell volume, viable cell concentration (VCC), cell size, and oxygen uptake rate (OUR)]. All three models predicted offline biomass measurements accurately during the growth phase of the cultures. However, during the stationary and decline phases of the cultures, the models decreased in accuracy to varying degrees. Offline cell radius measurements were unsuccessfully used to correct for the deviations from the linear model, indicating that physiological changes affecting permittivity were occurring. The β-dispersion was analyzed using the Cole-Cole distribution parameters Δε (magnitude of the permittivity drop), f_c (critical frequency), and α (Cole-Cole parameter). Furthermore, the dielectric parameters static internal conductivity (σ_i) and membrane capacitance per area (C_m) were calculated for the cultures. Finally, the relationship between permittivity, OUR, and VCC was examined, demonstrating how the definition of viability is critical when analyzing biomass online. The results indicate that the common assumptions of constant size and dielectric properties used in dielectric analysis are not always valid during later phases of cell culture processes. The findings also demonstrate that dielectric spectroscopy, while not a substitute for VCC, is a complementary measurement of viable biomass, providing useful auxiliary information about the physiological state of a culture. (c) 2010 American Institute of Chemical Engineers
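
    For reference, the β-dispersion in such analyses is commonly described by a Cole-Cole relaxation of the form

    $$ \varepsilon(f) \;=\; \varepsilon_{\infty} \;+\; \frac{\Delta\varepsilon}{1 + \big(\mathrm{j}\, f/f_c\big)^{\,1-\alpha}}, $$

    where ε_∞ is the high-frequency (residual) permittivity, a term not explicitly named in the abstract, and Δε, f_c, and α are the parameters discussed above.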

  15. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and there is hardly an omnipotent sensor able to handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their respective strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, for complex shapes with thin-wall features such as blades, ICP-based registration becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be moved optimally to any desired position at the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. Experiments on a blade, in which several sampled patches are merged into one point cloud, verify the performance of the proposed method.

  16. Reliability and validity of gait analysis by android-based smartphone.

    PubMed

    Nishiguchi, Shu; Yamada, Minoru; Nagai, Koutatsu; Mori, Shuhei; Kajiwara, Yuu; Sonoda, Takuya; Yoshimura, Kazuya; Yoshitomi, Hiroyuki; Ito, Hiromu; Okamoto, Kazuya; Ito, Tatsuaki; Muto, Shinyo; Ishihara, Tatsuya; Aoyama, Tomoki

    2012-05-01

    Smartphones are very common devices in daily life that have a built-in tri-axial accelerometer. Similar to previously developed accelerometers, smartphones can be used to assess gait patterns. However, few gait analyses have been performed using smartphones, and their reliability and validity have not been evaluated yet. The purpose of this study was to evaluate the reliability and validity of a smartphone accelerometer. Thirty healthy young adults participated in this study. They walked 20 m at their preferred speeds, and their trunk accelerations were measured using a smartphone and a tri-axial accelerometer that was secured over the L3 spinous process. We developed a gait analysis application and installed it in the smartphone to measure the acceleration. After signal processing, we calculated the gait parameters of each measurement terminal: peak frequency (PF), root mean square (RMS), autocorrelation peak (AC), and coefficient of variance (CV) of the acceleration peak intervals. Remarkable consistency was observed in the test-retest reliability of all the gait parameter results obtained by the smartphone (p<0.001). All the gait parameter results obtained by the smartphone showed statistically significant and considerable correlations with the same parameter results obtained by the tri-axial accelerometer (PF r=0.99, RMS r=0.89, AC r=0.85, CV r=0.82; p<0.01). Our study indicates that the smartphone with gait analysis application used in this study has the capacity to quantify gait parameters with a degree of accuracy that is comparable to that of the tri-axial accelerometer.
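
    The four gait parameters named above (PF, RMS, AC and the CV of the acceleration peak intervals) can be computed from a trunk-acceleration signal roughly as in the sketch below; the sampling rate, the synthetic signal and the exact definitions are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative computation (not the authors' application code) of the four gait
# parameters named in the abstract from a vertical trunk-acceleration signal.
# The sampling rate and the synthetic signal are assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
acc = np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)

rms = np.sqrt(np.mean(acc ** 2))              # root mean square (RMS)

spectrum = np.abs(np.fft.rfft(acc - acc.mean()))
freqs = np.fft.rfftfreq(acc.size, 1 / fs)
pf = freqs[np.argmax(spectrum)]               # peak frequency (PF)

centred = acc - acc.mean()
ac_full = np.correlate(centred, centred, mode="full")
ac = ac_full[ac_full.size // 2:] / ac_full[ac_full.size // 2]
ac_peak = ac[int(round(fs / pf))]             # autocorrelation at one step/stride lag (AC)

peaks, _ = find_peaks(acc, distance=fs / (2 * pf))
intervals = np.diff(peaks) / fs
cv = 100 * intervals.std() / intervals.mean() # coefficient of variance of peak intervals (CV)
print(pf, rms, ac_peak, cv)
```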

  17. Robust guaranteed-cost adaptive quantum phase estimation

    NASA Astrophysics Data System (ADS)

    Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.

    2017-05-01

    Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
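
    As a much simplified classical analogue of the worst-case analysis described above (and not the paper's quantum phase model), the sketch below evaluates how the steady-state variance of a scalar Kalman filter changes across an allowable range of an uncertain process-noise parameter.

```python
# Simplified classical analogue (an assumption, not the paper's quantum model):
# how the steady-state variance of a scalar Kalman filter varies over the
# allowable range of an uncertain process-noise parameter.
import numpy as np

def steady_state_variance(a, q, r, n_iter=500):
    """Iterate the scalar Riccati recursion P <- a^2 * P * r / (P + r) + q."""
    p = 1.0
    for _ in range(n_iter):
        p = a * a * p * r / (p + r) + q
    return p

a, r = 0.99, 0.1                          # state transition and measurement noise (assumed)
q_range = np.linspace(0.01, 0.1, 10)      # allowable range of the uncertain process noise
variances = [steady_state_variance(a, q, r) for q in q_range]
print("worst-case steady-state variance over the range:", max(variances))
```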

  18. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    NASA Astrophysics Data System (ADS)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements for models consisting of various ecosystem processes. In this study, we applied a general-purpose optimization tool to the parameter optimization of an ecosystem model and examined its validity by comparing simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site was dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area using measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and the non-optimized parameters were left at their default values. 11-year normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, parameter optimization was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon in each of the eleven years. In an alternative run, the errors in the last year (the year of the field survey) were weighted for priority. We compared several global optimization methods of Dakota, starting with the default parameters of Biome-BGC. In the sensitivity analysis, the carbon allocation parameters between coarse root and leaf and between stem and leaf, as well as SLA, had high contributions to both leaf and woody biomass changes; these parameters were selected for optimization. The measured leaf, above- and below-ground woody biomass carbon densities in the last year were 0.22, 1.81 and 0.86 kgC m-2, respectively, whereas those simulated in the non-optimized control case using all default parameters were 0.12, 2.26 and 0.52 kgC m-2, respectively. After optimizing the parameters, the simulated values improved to 0.19, 1.81 and 0.86 kgC m-2, respectively. The coliny global optimization method gave better fitness than the efficient global and NCSU DIRECT methods. The optimized parameters showed higher carbon allocation rates to coarse roots and leaves and a lower SLA than the default parameters, which is consistent with the general water-physiological response in a dry climate. The simulation using the weighted objective function resulted in closer agreement with the measurements in the last year, at the cost of lower fitness during the previous years.
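
    The objective function described above (a sum of relative errors between simulated and measured carbon pools over the eleven years, optionally weighted at the survey year) could look roughly like the sketch below; the placeholder series and weighting scheme are illustrative assumptions.

```python
# Sketch of the kind of objective described in the abstract: the sum of relative
# errors between simulated and measured leaf, above- and below-ground woody carbon
# over eleven years, with an optional extra weight on the survey year.
# The placeholder series and weighting scheme are assumptions.
import numpy as np

def objective(simulated, measured, last_year_weight=1.0):
    """simulated, measured: arrays of shape (n_years, n_pools)."""
    rel_err = np.abs(simulated - measured) / np.abs(measured)
    weights = np.ones(simulated.shape[0])
    weights[-1] = last_year_weight            # emphasize the year of the field survey
    return float((rel_err * weights[:, None]).sum())

measured = np.tile([0.22, 1.81, 0.86], (11, 1))   # kgC m-2, placeholder time series
simulated = measured * (1 + 0.1 * np.random.default_rng(2).normal(size=measured.shape))
print(objective(simulated, measured, last_year_weight=5.0))
```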

  19. Laser cutting: industrial relevance, process optimization, and laser safety

    NASA Astrophysics Data System (ADS)

    Haferkamp, Heinz; Goede, Martin; von Busse, Alexander; Thuerk, Oliver

    1998-09-01

    Compared to other technologically relevant laser machining processes, laser cutting is to date the most frequently used application. With respect to the large number of possible fields of application and the variety of different materials that can be machined, this technology has reached a stable position within the world market of material processing. The achievable machining quality in laser beam cutting is influenced by various laser and process parameters. Process-integrated quality techniques have to be applied to ensure high-quality products and a cost-effective use of the laser manufacturing plant. Rugged and versatile online process monitoring techniques at an affordable price are therefore desirable. Methods for the characterization of single plant components (e.g. laser source and optical path) have to be replaced by an all-encompassing control system capable of process data acquisition and analysis as well as automatic adaptation of machining and laser parameters to changes in process and ambient conditions. At the Laser Zentrum Hannover eV, locally highly resolved thermographic measurements of the temperature distribution within the processing zone are performed using cost-effective measuring devices. Characteristic values for cutting quality and plunge control, as well as for the optimization of the surface roughness at the cutting edges, can be deduced from the spatial distribution of the temperature field and the measured temperature gradients. The main parameters influencing the temperature characteristics within the cutting zone are the laser beam intensity and the pulse duration in pulsed operation; in continuous-wave operation, the temperature distribution is mainly determined by the laser output power in relation to the cutting velocity. With higher cutting velocities, temperatures at the cutting front increase, reaching their maximum at the optimum cutting velocity, where absorption of the incident laser radiation is drastically increased due to the angle between the normal of the cutting front and the laser beam axis. Beyond process optimization and control, further work is focused on the characterization of particulate and gaseous laser-generated air contaminants and adequate safety precautions such as exhaust and filter systems.

  20. A new construction of measurement system based on specialized microsystem design for laryngological application

    NASA Astrophysics Data System (ADS)

    Paczesny, Daniel; Mikłaszewicz, Franciszek

    2013-10-01

    This article describes the design, construction and parameters of a diagnostic medical system for air humidity measurement that can be performed in various places of the human nasal cavities and throat. The system measures dynamic changes in the dew point temperature (the absolute humidity) of inspired and expired air at different locations in the human upper airways. During regular respiration, the dew point temperature is measured in the nasal cavity, the middle part of the nasal cavity and the nasopharynx. The presented system is the next step in the construction of a measurement system based on a specialized microsystem for laryngological applications. The microsystem, fabricated on a silicon substrate, includes a microheater, a microthermoresistor and interdigitated electrodes. Compared with the previously built measurement system, several system functionalities and measurement parameters were improved in the current version. Additionally, 3D printing technology was applied for rapid prototyping of the measurement system housing. The presented measurement system comprises a microprocessor module with signal conditioning circuits; a heated measurement head based on the specialized microsystem with a disposable heated pipe for drawing air from various places in the upper airways; a power supply; and a computer application for monitoring all system parameters and presenting measured results on-line and off-line. Example results from the constructed measurement system, including dew point temperature measurements during the respiration cycle, are presented.

  1. Optimizing pulsed Nd:YAG laser beam welding process parameters to attain maximum ultimate tensile strength for thin AISI316L sheet using response surface methodology and simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Torabi, Amir; Kolahan, Farhad

    2018-07-01

    Pulsed laser welding is a powerful technique that is especially suitable for joining thin sheet metals. In this study, pulsed laser welding of thin AISI316L austenitic stainless steel sheet has been modeled and optimized on the basis of experimental data. The experimental data required for modeling were gathered according to a Central Composite Design matrix in Response Surface Methodology (RSM) with full replication of 31 runs. Ultimate Tensile Strength (UTS) is considered the main quality measure in laser welding. The important process parameters, namely peak power, pulse duration, pulse frequency and welding speed, are selected as input process parameters. The relation between the input parameters and the output response is established via full quadratic response surface regression at a confidence level of 95%. The adequacy of the regression model was verified using analysis of variance. The main effects of each factor and its interaction effects with the other factors were analyzed graphically in contour and surface plots. Next, to maximize the joint UTS, the best combinations of parameter levels were determined using RSM. Moreover, the mathematical model was embedded in a Simulated Annealing (SA) optimization algorithm to determine the optimal values of the process parameters. The results obtained by the SA and RSM optimization techniques are in good agreement. The optimal parameter settings (peak power of 1800 W, pulse duration of 4.5 ms, frequency of 4.2 Hz and welding speed of 0.5 mm/s) result in a welded joint with 96% of the base metal UTS. Computational results clearly demonstrate that the proposed modeling and optimization procedures perform well for the pulsed laser welding process.
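
    The final optimization step above (embedding the fitted response-surface model in a simulated annealing search) can be sketched as follows; the quadratic model coefficients and parameter bounds are placeholders, not the paper's regression results.

```python
# Hedged sketch: maximizing a fitted quadratic response-surface model for UTS with
# simulated annealing. The coefficients and bounds below are placeholders, not the
# paper's regression model.
import numpy as np
from scipy.optimize import dual_annealing

def uts_model(x):
    """Hypothetical quadratic RSM model; x = [peak power, pulse duration, frequency, speed]."""
    p, d, f, v = x
    return (300 + 0.05 * p + 20 * d + 10 * f - 80 * v
            - 1e-5 * (p - 1800) ** 2 - 2 * (d - 4.5) ** 2)

bounds = [(1200, 2000), (3.0, 6.0), (2.0, 6.0), (0.3, 1.0)]    # assumed parameter ranges
res = dual_annealing(lambda x: -uts_model(x), bounds, seed=3)  # minimize the negative UTS
print("optimal parameters:", res.x, "predicted UTS:", -res.fun)
```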

  2. Analysis capabilities for plutonium-238 programs

    NASA Astrophysics Data System (ADS)

    Wong, A. S.; Rinehart, G. H.; Reimus, M. H.; Pansoy-Hjelvik, M. E.; Moniz, P. F.; Brock, J. C.; Ferrara, S. E.; Ramsey, S. S.

    2000-07-01

    In this presentation, an overview of the analysis capabilities that support 238Pu programs is given. These capabilities include neutron emission rate and calorimetric measurements, metallography/ceramography, ultrasonic examination, particle size determination, and chemical analyses. The data obtained from these measurements provide baseline parameters for fuel clad impact testing, fuel processing, product certification, and waste disposal. Several in-line analysis capabilities will also be utilized for process control in the full-scale 238Pu Aqueous Scrap Recovery line in FY01.

  3. NASA/MSFC FY91 Global Scale Atmospheric Processes Research Program Review

    NASA Technical Reports Server (NTRS)

    Leslie, Fred W. (Editor)

    1991-01-01

    The reports presented at the annual Marshall Research Review of Earth Science and Applications are compiled. The following subject areas are covered: understanding of atmospheric processes on a variety of spatial and temporal scales; measurements of geophysical parameters; measurements on a global scale from space; the Mission to Planet Earth Program (comprising an Earth Observation System and the scientific strategy to analyze its data); and satellite data analysis and fundamental studies of atmospheric dynamics.

  4. Bluetooth-based distributed measurement system

    NASA Astrophysics Data System (ADS)

    Tang, Baoping; Chen, Zhuo; Wei, Yuguo; Qin, Xiaofeng

    2007-07-01

    A novel distributed wireless measurement system, consisting of a base station, wireless intelligent sensors and relay nodes, is established by combining Bluetooth-based wireless transmission, virtual instrumentation, intelligent sensors and networking. The intelligent sensors mounted on the equipment to be measured acquire various parameters, and the Bluetooth relay nodes modulate the acquired data and send them to the base station, where data analysis and processing are performed so that the operating condition of the equipment can be evaluated. The establishment of the distributed measurement system is discussed, together with a measurement flow chart for the Bluetooth-based system, and the advantages and disadvantages of the system are analyzed at the end of the paper. The measurement system has been used successfully in the Daqing oilfield, China, for the measurement of parameters such as temperature, flow rate and oil pressure at an electromotor-pump unit.

  5. Modelling and interpreting the isotopic composition of water vapour in convective updrafts

    NASA Astrophysics Data System (ADS)

    Bolot, M.; Legras, B.; Moyer, E. J.

    2012-08-01

    The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener-Bergeron-Findeisen process). As all of these processes are related to updraft strength, droplet size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.

  6. Modelling and interpreting the isotopic composition of water vapour in convective updrafts

    NASA Astrophysics Data System (ADS)

    Bolot, M.; Legras, B.; Moyer, E. J.

    2013-08-01

    The isotopic compositions of water vapour and its condensates have long been used as tracers of the global hydrological cycle, but may also be useful for understanding processes within individual convective clouds. We review here the representation of processes that alter water isotopic compositions during processing of air in convective updrafts and present a unified model for water vapour isotopic evolution within undiluted deep convective cores, with a special focus on the out-of-equilibrium conditions of mixed-phase zones where metastable liquid water and ice coexist. We use our model to show that a combination of water isotopologue measurements can constrain critical convective parameters, including degree of supersaturation, supercooled water content and glaciation temperature. Important isotopic processes in updrafts include kinetic effects that are a consequence of diffusive growth or decay of cloud particles within a supersaturated or subsaturated environment; isotopic re-equilibration between vapour and supercooled droplets, which buffers isotopic distillation; and differing mechanisms of glaciation (droplet freezing vs. the Wegener-Bergeron-Findeisen process). As all of these processes are related to updraft strength, particle size distribution and the retention of supercooled water, isotopic measurements can serve as a probe of in-cloud conditions of importance to convective processes. We study the sensitivity of the profile of water vapour isotopic composition to differing model assumptions and show how measurements of isotopic composition at cloud base and cloud top alone may be sufficient to retrieve key cloud parameters.
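
    For background only, the sketch below shows classical Rayleigh distillation of vapour, the baseline process that the kinetic and mixed-phase effects discussed above modify; the fractionation factor and initial ratio are illustrative placeholders.

```python
# Background sketch only (not the paper's full updraft model): classical Rayleigh
# distillation of the remaining vapour, R = R0 * f**(alpha - 1). The kinetic and
# mixed-phase effects treated in the paper are omitted; values are placeholders.
import numpy as np

R_VSMOW = 155.76e-6                      # D/H ratio of the VSMOW standard
alpha = 1.1                              # illustrative liquid-vapour fractionation factor
f = np.linspace(1.0, 0.1, 5)             # fraction of vapour remaining
R0 = 0.9 * R_VSMOW                       # initial vapour isotope ratio (placeholder)
R = R0 * f ** (alpha - 1.0)
delta_D = (R / R_VSMOW - 1.0) * 1000.0   # delta notation, per mil
print(delta_D)                           # remaining vapour becomes progressively depleted
```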

  7. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Qing; Wang, Jiang; Yu, Haitao

    Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. The results show that, under three different acupuncture stimulus frequencies, the estimated input parameters differ markedly; the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  8. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    NASA Astrophysics Data System (ADS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. The results show that, under three different acupuncture stimulus frequencies, the estimated input parameters differ markedly; the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
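
    The response-system stage described above uses a leaky integrate-and-fire neuron; the sketch below is a minimal LIF simulation with assumed membrane parameters and a constant input drive, not the estimated acupuncture inputs.

```python
# Minimal leaky integrate-and-fire (LIF) sketch for the response-system stage.
# Membrane parameters and the constant input drive are illustrative assumptions,
# not the estimated acupuncture inputs.
import numpy as np

def lif_spike_times(i_input, tau=20e-3, r=1e7, v_th=-50e-3, v_reset=-70e-3,
                    v_rest=-70e-3, dt=1e-4, t_max=1.0):
    """Integrate dV/dt = (-(V - v_rest) + R*I) / tau and return spike times (s)."""
    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + r * i_input) / tau
        if v >= v_th:                      # threshold crossing: emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
    return np.array(spikes)

spikes = lif_spike_times(i_input=2.5e-9)   # 2.5 nA constant drive (assumed)
print("firing rate (Hz):", len(spikes) / 1.0)
```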

  9. Adjoint Methods for Adjusting Three-Dimensional Atmosphere and Surface Properties to Fit Multi-Angle Multi-Pixel Polarimetric Measurements

    NASA Technical Reports Server (NTRS)

    Martin, William G.; Cairns, Brian; Bal, Guillaume

    2014-01-01

    This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
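
    The key computational point above (the gradient of the misfit is obtained with a fixed number of solver applications, independent of the number of parameters) can be illustrated with a toy linear forward model, far simpler than the 3D VRTE.

```python
# Toy illustration (far simpler than the 3D VRTE) of the adjoint idea: for a linear
# forward model d = G m, the gradient of the misfit 0.5*||G m - d_obs||^2 needs one
# forward apply and one adjoint (transpose) apply, independent of the number of
# parameters. The operator and data below are random placeholders.
import numpy as np

rng = np.random.default_rng(4)
G = rng.normal(size=(200, 1000))      # forward operator: 1000 parameters -> 200 measurements
m_true = rng.normal(size=1000)
d_obs = G @ m_true

def misfit_and_gradient(m):
    residual = G @ m - d_obs          # one "forward" application
    return 0.5 * residual @ residual, G.T @ residual   # one "adjoint" application

f, g = misfit_and_gradient(np.zeros(1000))
print(f, g.shape)                     # gradient over all 1000 parameters from two applies
```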

  10. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, Rahul; Hamm, Nicholas Alexander Samuel; van der Tol, Christiaan; Stein, Alfred

    2016-03-01

    Gross primary production (GPP) can be separated from flux tower measurements of net ecosystem exchange (NEE) of CO2. This is used increasingly to validate process-based simulators and remote-sensing-derived estimates of simulated GPP at various time steps. Proper validation includes the uncertainty associated with this separation. In this study, uncertainty assessment was done in a Bayesian framework. It was applied to data from the Speulderbos forest site, The Netherlands. We estimated the uncertainty in GPP at half-hourly time steps, using a non-rectangular hyperbola (NRH) model for its separation from the flux tower measurements. The NRH model provides a robust empirical relationship between radiation and GPP. It includes the degree of curvature of the light response curve, radiation and temperature. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. We defined the prior distribution of each NRH parameter and used Markov chain Monte Carlo (MCMC) simulation to estimate the uncertainty in the separated GPP from the posterior distribution at half-hourly time steps. This time series also allowed us to estimate the uncertainty at daily time steps. We compared the informative with the non-informative prior distributions of the NRH parameters and found that both choices produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
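
    The light-response model referred to above is a non-rectangular hyperbola; the sketch below implements the standard NRH form, omitting the temperature dependence included in the paper's version, with placeholder parameter values.

```python
# Sketch of the standard non-rectangular hyperbola (NRH) light-response curve.
# The paper's formulation additionally includes temperature terms, omitted here;
# parameter values are placeholders.
import numpy as np

def nrh_gpp(par, alpha, p_max, theta):
    """GPP from photosynthetically active radiation (PAR) via the NRH model."""
    s = alpha * par + p_max
    return (s - np.sqrt(s ** 2 - 4.0 * theta * alpha * par * p_max)) / (2.0 * theta)

par = np.linspace(0, 1500, 6)                              # umol photons m-2 s-1
print(nrh_gpp(par, alpha=0.05, p_max=25.0, theta=0.9))
```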

  11. A Robust Post-Processing Workflow for Datasets with Motion Artifacts in Diffusion Kurtosis Imaging

    PubMed Central

    Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X.; Wan, Mingxi

    2014-01-01

    Purpose: The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). Materials and methods: The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts, together with the information on gradient directions and b values, on the parameter estimation was investigated by using the mean square error (MSE). The variance of the noise was used as the criterion for the MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free datasets (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). Results: The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05), indicating that LPCC was more sensitive in detecting motion artifacts. The MSEs of all derived parameters from the data retained after artifact rejection were smaller than the variance of the noise, suggesting that the influence of the rejected artifacts was smaller than the influence of noise on the precision of the derived parameters. The proposed workflow significantly improved the image quality and reduced the measurement biases on motion-corrupted datasets (p<0.05). Conclusion: The proposed post-processing workflow reliably improved the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provides an effective post-processing method for clinical applications of DKI in subjects with involuntary movements. PMID:24727862

  12. A robust post-processing workflow for datasets with motion artifacts in diffusion kurtosis imaging.

    PubMed

    Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X; Wan, Mingxi

    2014-01-01

    The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts, together with the information on gradient directions and b values, on the parameter estimation was investigated by using the mean square error (MSE). The variance of the noise was used as the criterion for the MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free datasets (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05), indicating that LPCC was more sensitive in detecting motion artifacts. The MSEs of all derived parameters from the data retained after artifact rejection were smaller than the variance of the noise, suggesting that the influence of the rejected artifacts was smaller than the influence of noise on the precision of the derived parameters. The proposed workflow significantly improved the image quality and reduced the measurement biases on motion-corrupted datasets (p<0.05). The proposed post-processing workflow reliably improved the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provides an effective post-processing method for clinical applications of DKI in subjects with involuntary movements.
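
    A local Pearson correlation coefficient of the kind used above for artifact rejection could be computed roughly as in the sketch below; the patch size, non-overlapping windowing and synthetic images are assumptions, not the authors' implementation.

```python
# Illustrative sketch (the exact windowing is an assumption) of a local Pearson
# correlation coefficient (LPCC) between a candidate image and a reference image,
# computed over non-overlapping patches and averaged.
import numpy as np

def lpcc(img_a, img_b, patch=8):
    scores = []
    for i in range(0, img_a.shape[0] - patch + 1, patch):
        for j in range(0, img_a.shape[1] - patch + 1, patch):
            a = img_a[i:i + patch, j:j + patch].ravel()
            b = img_b[i:i + patch, j:j + patch].ravel()
            if a.std() > 0 and b.std() > 0:
                scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores))

rng = np.random.default_rng(5)
ref = rng.normal(size=(64, 64))
corrupted = ref + 2.0 * rng.normal(size=(64, 64))   # stand-in for a motion-corrupted volume
print(lpcc(ref, ref), lpcc(ref, corrupted))         # a low score would flag an artifact
```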

  13. Studies of soundings and imaging measurements from geostationary satellites

    NASA Technical Reports Server (NTRS)

    Suomi, V. E.

    1973-01-01

    Soundings and imaging measurements from geostationary satellites are presented. The subjects discussed are: (1) meteorological data processing techniques, (2) sun glitter, (3) cloud growth rate studies and satellite stability characteristics, and (4) high resolution optics. The use of a perturbation technique to obtain the motion of sensors aboard a satellite is described, along with the relevant conditions and measurement errors. Several performance evaluation parameters are proposed.

  14. Applications of the DOE/NASA wind turbine engineering information system

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1981-01-01

    A statistical analysis of data obtained from the Technology and Engineering Information Systems was made. The systems analyzed consist of the following elements: (1) sensors which measure critical parameters (e.g., wind speed and direction, output power, blade loads and component vibrations); (2) remote multiplexing units (RMUs) on each wind turbine which frequency-modulate, multiplex and transmit sensor outputs; (3) on-site instrumentation to record, process and display the sensor output; and (4) statistical analysis of data. Two examples of the capabilities of these systems are presented. The first illustrates the standardized format for application of statistical analysis to each directly measured parameter. The second shows the use of a model to estimate the variability of the rotor thrust loading, which is a derived parameter.

  15. Characterization of electrical appliances in transient state

    NASA Astrophysics Data System (ADS)

    Wójcik, Augustyn; Winiecki, Wiesław

    2017-08-01

    This article presents a study of electrical appliance characterization on the basis of power grid signals. Devices are represented by parameters of the current and voltage signals recorded during transient states; only transients occurring when devices are switched on are considered. The data acquisition procedure, performed with a specialized measurement setup developed for electricity load monitoring, is described. The paper presents the method of transient detection and the method of calculating the appliance parameters. Using the set of acquired measurement data and appropriate software, a set of parameters was computed for several household appliances operating under different operating conditions. The usefulness of characterizing appliances in a Non-Intrusive Appliance Load Monitoring System (NIALMS) with the proposed method is discussed with a focus on the obtained results.

  16. Deep Learning for Magnetic Resonance Fingerprinting: A New Approach for Predicting Quantitative Parameter Values from Time Series.

    PubMed

    Hoppe, Elisabeth; Körzdörfer, Gregor; Würfl, Tobias; Wetzl, Jens; Lugauer, Felix; Pfeuffer, Josef; Maier, Andreas

    2017-01-01

    The purpose of this work is to evaluate methods from deep learning for application to Magnetic Resonance Fingerprinting (MRF). MRF is a recently proposed measurement technique for generating quantitative parameter maps. In MRF a non-steady state signal is generated by a pseudo-random excitation pattern. A comparison of the measured signal in each voxel with the physical model yields quantitative parameter maps. Currently, the comparison is done by matching a dictionary of simulated signals to the acquired signals. To accelerate the computation of quantitative maps we train a Convolutional Neural Network (CNN) on simulated dictionary data. As a proof of principle we show that the neural network implicitly encodes the dictionary and can replace the matching process.
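
    A minimal sketch of the kind of network described above (mapping an MRF signal time series to quantitative parameter values) is given below; the architecture, layer sizes and random input are assumptions, not the authors' trained model.

```python
# Minimal sketch (assumed architecture, not the authors' trained network): a small
# 1D CNN that maps an MRF signal time series to two quantitative parameters per voxel.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),                      # two outputs, e.g. a (T1, T2) pair
)

signals = torch.randn(32, 1, 1000)         # batch of 32 simulated fingerprints, 1000 time points
params = model(signals)                    # predicted parameter values, shape (32, 2)
print(params.shape)
```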

  17. Virtual IED sensor at an rf-biased electrode in low-pressure plasma

    NASA Astrophysics Data System (ADS)

    Bogdanova, Maria; Lopaev, Dmitry; Zyryanov, Sergey; Rakhimov, Alexander

    2016-09-01

    The majority of present-day technologies resort to ion-assisted processes in rf low-pressure plasma. In order to control the process precisely, the energy distribution of ions (IED) bombarding the sample placed on the rf-biased electrode should be tracked. In this work the ``Virtual IED sensor'' concept is considered. The idea is to obtain the IED ``virtually'' from the plasma sheath model including a set of externally measurable discharge parameters. The applicability of the ``Virtual IED sensor'' concept was studied for dual-frequency asymmetric ICP and CCP discharges. The IED measurements were carried out in Ar and H2 plasmas in a wide range of conditions. The calculated IEDs were compared to those measured by the Retarded Field Energy Analyzer. To calibrate the ``Virtual IED sensor'', the ion flux was measured by the pulsed self-bias method and then compared to plasma density measurements by Langmuir and hairpin probes. It is shown that if there is a reliable calibration procedure, the ``Virtual IED sensor'' can be successfully realized on the basis of analytical and semianalytical plasma sheath models including measurable discharge parameters. This research is supported by Russian Science Foundation (RSF) Grant 14-12-01012.

  18. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351

  19. The effect of spraying parameters on micro-structural properties of WC-12%Co coating deposited on copper substrate by HVOF process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sathwara, Nishit, E-mail: nishit-25@live.in; Metallurgical & Materials Engineering Department, Indus University, Ahmedabad-382115; Jariwala, C., E-mail: chetanjari@yahoo.com

    High Velocity Oxy-Fuel (HVOF) thermally sprayed coatings made from Tungsten Carbide (WC) are considered among the most durable wear-resistant materials for industrial applications at room temperature. WC coatings offer high wear resistance due to their high hardness and the toughness imparted by the matrix. The coating properties strongly depend on the thermal spray processing parameters, surface preparation and surface finish. In this investigation, the effect of various HVOF process parameters on the WC coating properties was studied. The WC-12%Co coating was produced on a copper substrate. Prior to coating, the copper substrate surface was prepared by grit blasting. WC-12%Co coatings were deposited on copper substrates with varying process parameters such as oxygen gas pressure, air pressure and spraying distance. The microstructure of the coating was examined using a Scanning Electron Microscope (SEM) and the phases present in the coating were characterized by X-Ray Diffraction (XRD). The microhardness of all coatings was measured with a Vickers microhardness tester. At low oxygen pressure (10.00 bar), high air pressure (7 bar) and a short nozzle-to-substrate distance of 170 mm, the best coating adhesion and a low-porosity structure were achieved on the copper substrate.

  20. The effect of spraying parameters on micro-structural properties of WC-12%Co coating deposited on copper substrate by HVOF process

    NASA Astrophysics Data System (ADS)

    Sathwara, Nishit; Jariwala, C.; Chauhan, N.; Raole, P. M.; Basa, D. K.

    2015-08-01

    High Velocity Oxy-Fuel (HVOF) thermally sprayed coatings made from Tungsten Carbide (WC) are considered among the most durable wear-resistant materials for industrial applications at room temperature. WC coatings offer high wear resistance due to their high hardness and the toughness imparted by the matrix. The coating properties strongly depend on the thermal spray processing parameters, surface preparation and surface finish. In this investigation, the effect of various HVOF process parameters on the WC coating properties was studied. The WC-12%Co coating was produced on a copper substrate. Prior to coating, the copper substrate surface was prepared by grit blasting. WC-12%Co coatings were deposited on copper substrates with varying process parameters such as oxygen gas pressure, air pressure and spraying distance. The microstructure of the coating was examined using a Scanning Electron Microscope (SEM) and the phases present in the coating were characterized by X-Ray Diffraction (XRD). The microhardness of all coatings was measured with a Vickers microhardness tester. At low oxygen pressure (10.00 bar), high air pressure (7 bar) and a short nozzle-to-substrate distance of 170 mm, the best coating adhesion and a low-porosity structure were achieved on the copper substrate.

  1. PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Shiyuan; Huang, Jianhua Z.; Long, James

    2016-12-01

    We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
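
    The dense grid search over period described above can be illustrated with a simplified sketch that fits a single sinusoid by least squares at each trial period; the Gaussian-process component of the paper's model is omitted and the light curve is synthetic.

```python
# Simplified sketch of the grid search over period: fit a single sinusoid by least
# squares at each trial period and keep the best fit. The Gaussian-process term of
# the paper's model is omitted and the sparse light curve below is synthetic.
import numpy as np

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 1000, 60))               # sparse, irregular sampling (days)
true_period = 215.0
y = 1.5 * np.sin(2 * np.pi * t / true_period) + 0.2 * rng.normal(size=t.size)

def rss_at_period(period):
    x = 2 * np.pi * t / period
    design = np.column_stack([np.sin(x), np.cos(x), np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return np.sum((y - design @ coef) ** 2)

grid = np.linspace(100, 500, 4001)
best = grid[np.argmin([rss_at_period(p) for p in grid])]
print("recovered period (days):", best)
```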

  2. A Foot-Arch Parameter Measurement System Using a RGB-D Camera.

    PubMed

    Chun, Sungkuk; Kong, Sejin; Mun, Kyung-Ryoul; Kim, Jinwook

    2017-08-04

    The conventional method of measuring foot-arch parameters is highly dependent on the measurer's skill level, so accurate measurements are difficult to obtain. To solve this problem, we propose an autonomous geometric foot-arch analysis platform that is capable of capturing the sole of the foot and yields three foot-arch parameters: arch index (AI), arch width (AW) and arch height (AH). The proposed system captures 3D geometric and color data on the plantar surface of the foot in a static standing pose using a commercial RGB-D camera. It detects the region of the foot surface in contact with the footplate by applying clustering and Markov random field (MRF)-based image segmentation methods. The system computes the foot-arch parameters by analyzing the 2/3D shape of the contact region. Validation experiments were carried out to assess the accuracy and repeatability of the system. The average errors for AI, AW, and AH estimation on 99 measurements collected from 11 subjects over 3 days were -0.17%, 0.95 mm, and 0.52 mm, respectively. Reliability and statistical analyses of the estimated foot-arch parameters, of the robustness to changes in the weights used in the MRF, and of the processing time were also performed to show the feasibility of the system.
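
    As an illustration of one of the parameters above, the sketch below computes an arch index from a binary contact mask using the common middle-third definition; the mask, its orientation and the exclusion of the toes are assumptions, not the paper's segmentation output.

```python
# Illustrative arch index (AI) computation from a binary contact mask of the sole
# (toes assumed already excluded), using the common middle-third definition:
# AI = contact area of the middle third of the foot / total contact area.
import numpy as np

def arch_index(contact_mask):
    rows = np.where(contact_mask.any(axis=1))[0]     # foot extent along the heel-toe axis
    length = rows[-1] - rows[0] + 1
    third = length // 3
    middle = contact_mask[rows[0] + third: rows[0] + 2 * third]
    return middle.sum() / contact_mask.sum()

mask = np.zeros((90, 40), dtype=bool)                # placeholder footprint, not real data
mask[0:30, 5:35] = True                              # forefoot
mask[30:60, 15:28] = True                            # midfoot
mask[60:90, 8:32] = True                             # heel
print("arch index:", round(arch_index(mask), 3))
```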

  3. Does the cognitive reflection test measure cognitive reflection? A mathematical modeling approach.

    PubMed

    Campitelli, Guillermo; Gerrans, Paul

    2014-04-01

    We used a mathematical modeling approach, based on a sample of 2,019 participants, to better understand what the cognitive reflection test (CRT; Frederick, Journal of Economic Perspectives, 19, 25-42, 2005) measures. This test, which is typically completed in less than 10 min, contains three problems and aims to measure the ability or disposition to resist reporting the response that first comes to mind. However, since the test contains three mathematically based problems, it is possible that the test only measures mathematical abilities, and not cognitive reflection. We found that the models that included an inhibition parameter (i.e., the probability of inhibiting an intuitive response), as well as a mathematical parameter (i.e., the probability of using an adequate mathematical procedure), fitted the data better than a model that only included a mathematical parameter. We also found that the inhibition parameter in males is best explained by both rational thinking ability and the disposition toward actively open-minded thinking, whereas in females this parameter was better explained by rational thinking only. With these findings, this study contributes to the understanding of the processes involved in solving the CRT, and will be particularly useful for researchers who are considering using this test in their research.
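
    One plausible two-parameter item model in the spirit of the abstract (an assumption, not necessarily the authors' exact parameterization) assigns a correct response the probability i*m, an intuitive response 1-i, and any other error i*(1-m), as sketched below.

```python
# One plausible two-parameter item model in the spirit of the abstract (an
# assumption, not necessarily the authors' exact parameterization): with inhibition
# probability i and mathematical-ability probability m, a response is correct with
# probability i*m, intuitive with probability 1-i, and otherwise a different error.
def response_probabilities(i, m):
    return {"correct": i * m, "intuitive": 1.0 - i, "other_error": i * (1.0 - m)}

probs = response_probabilities(i=0.6, m=0.8)          # placeholder parameter values
print(probs, "sum =", round(sum(probs.values()), 6))  # probabilities sum to 1
```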

  4. Application of Gurson–Tvergaard–Needleman Constitutive Model to the Tensile Behavior of Reinforcing Bars with Corrosion Pits

    PubMed Central

    Xu, Yidong; Qian, Chunxiang

    2013-01-01

    Based on meso-damage mechanics and finite element analysis, the aim of this paper is to demonstrate the feasibility of the Gurson–Tvergaard–Needleman (GTN) constitutive model for describing the tensile behavior of corroded reinforcing bars. The orthogonal test results showed that different fracture patterns and the related damage evolution processes can be simulated by choosing different material parameters of the GTN constitutive model. Compared with the failure parameters, the two constitutive parameters are the significant factors affecting the tensile strength. Both the nominal yield and the ultimate tensile strength decrease markedly with increasing values of the constitutive parameters. Combining the latest data with a trial-and-error method, suitable material parameters of the GTN constitutive model were adopted to simulate the tensile behavior of corroded reinforcing bars in concrete under carbonation environment attack. The numerical predictions not only agree well with the experimental measurements but the approach also simplifies the finite element modeling process. PMID:23342140

  5. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
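
    A rough sketch of the propagator-likelihood idea for a two-state chain is given below; the exact estimator in the paper may differ, and for this equilibrium-like toy example only the ratio of the two rates (i.e. the steady state) is identifiable.

```python
# Rough sketch (the exact estimator in the paper may differ) of the propagator-
# likelihood idea for a two-state Markov chain: propagate the empirical distribution
# one step with a candidate transition matrix and score the samples under the result.
# For this equilibrium-like toy example only the ratio of the two rates is identifiable.
import numpy as np

rng = np.random.default_rng(7)

def transition_matrix(k01, k10):
    return np.array([[1 - k01, k01], [k10, 1 - k10]])

pi = np.array([0.25, 0.75])                     # steady state of the "true" rates 0.3, 0.1
samples = rng.choice(2, size=2000, p=pi)        # independent steady-state snapshots
p_emp = np.bincount(samples, minlength=2) / samples.size

def propagator_log_likelihood(k01, k10):
    propagated = p_emp @ transition_matrix(k01, k10)   # one-step forward propagation
    return np.sum(np.log(propagated[samples]))

grid = np.linspace(0.05, 0.6, 56)
scores = [[propagator_log_likelihood(a, b) for b in grid] for a in grid]
i, j = np.unravel_index(np.argmax(scores), (grid.size, grid.size))
print("estimated k01/k10 ratio:", round(grid[i] / grid[j], 2), "(true ratio 3.0)")
```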

  6. Thermo-Mechanical Characterization of Friction Stir Spot Welded AA7050 Sheets by Means of Experimental and FEM Analyses

    PubMed Central

    D’Urso, Gianluca; Giardini, Claudio

    2016-01-01

    The present study was carried out to evaluate how the friction stir spot welding (FSSW) process parameters affect the temperature distribution in the welding region, the welding forces and the mechanical properties of the joints. The experimental study was performed by means of a CNC machine tool, obtaining FSSW lap joints on AA7050 aluminum alloy plates. Three thermocouples were inserted into the samples to measure the temperatures at different distances from the joint axis during the whole FSSW process. The experiments were repeated varying the process parameters, namely rotational speed, axial feed rate and plunging depth. Axial welding forces were measured during the tests using a piezoelectric load cell, while the mechanical properties of the joints were evaluated by executing shear tests on the specimens. The correlation found between the process parameters and the joint properties allowed the best technological window to be identified. The data collected during the experiments were also used to validate a simulation model of the FSSW process. The model was set up using a 2D approach for the simulation of a 3D problem, in order to guarantee a very simple and practical solution for achieving results in a very short time. A specific external routine for the calculation of the thermal energy due to the friction acting between the pin and the sheet was developed. An index for the prediction of the joint mechanical properties using the FEM simulations was finally presented and validated. PMID:28773810

  7. Thermo-Mechanical Characterization of Friction Stir Spot Welded AA7050 Sheets by Means of Experimental and FEM Analyses.

    PubMed

    D'Urso, Gianluca; Giardini, Claudio

    2016-08-11

    The present study was carried out to evaluate how the friction stir spot welding (FSSW) process parameters affect the temperature distribution in the welding region, the welding forces and the mechanical properties of the joints. The experimental study was performed by means of a CNC machine tool, obtaining FSSW lap joints on AA7050 aluminum alloy plates. Three thermocouples were inserted into the samples to measure the temperatures at different distances from the joint axis during the whole FSSW process. The experiments were repeated varying the process parameters, namely rotational speed, axial feed rate and plunging depth. Axial welding forces were measured during the tests using a piezoelectric load cell, while the mechanical properties of the joints were evaluated by executing shear tests on the specimens. The correlation found between the process parameters and the joint properties allowed the best technological window to be identified. The data collected during the experiments were also used to validate a simulation model of the FSSW process. The model was set up using a 2D approach for the simulation of a 3D problem, in order to guarantee a very simple and practical solution for achieving results in a very short time. A specific external routine for the calculation of the thermal energy due to the friction acting between the pin and the sheet was developed. An index for the prediction of the joint mechanical properties using the FEM simulations was finally presented and validated.

  8. Preparation of highly oriented molybdenum thin films using DC reactive magnetron sputtering

    NASA Astrophysics Data System (ADS)

    Shang, Zhengguo; Li, Dongling; Yin, She; Wang, Shengqiang

    2017-03-01

    Since molybdenum (Mo) thin films have recently been used widely, they have attracted plenty of attention; for example, Mo is a good candidate back-contact material for CuIn1-xGaxSe2-ySy (CIGSeS) solar cell development thanks to its high conductivity and adhesion. In addition, molybdenum thin film is an ideal material for aluminum nitride (AlN) thin film preparation, attributable to the tiny (-1.0%) lattice mismatch between Mo and AlN. Since the quality of a Mo thin film depends mainly on the process conditions, studying the influence of the process parameters on Mo thin film properties is of practical significance. In this work, various sputtering conditions are employed to explore the feasibility of depositing a high-quality molybdenum film by DC reactive magnetron sputtering. The influence of process parameters such as power, gas flow, substrate temperature and process time on the crystallinity and crystal orientation of the Mo thin films is investigated. X-ray diffraction (XRD) measurements and atomic force microscopy (AFM) are used to characterize the crystalline properties and surface roughness, respectively. Based on a comparative analysis of the results, the process parameters are optimized. The full width at half maximum (FWHM) of the rocking curve of the (110) Mo reflection decreased to 2.7∘, and the (110) Mo peak reached 1.2 × 10^5 counts. The grain size and the surface roughness were measured as 20 Å and 3.8 nm, respectively, at 200∘C.
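
    As a side note (the abstract does not state how the grain size was obtained), crystallite size is commonly estimated from an XRD peak width with the Scherrer relation; the sketch below uses assumed peak values for illustration.

```python
# Side note (the abstract does not state how the grain size was obtained): the
# Scherrer relation D = K * lambda / (beta * cos(theta)) is a common way to estimate
# crystallite size from an XRD peak width. The peak values below are illustrative.
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    beta = math.radians(fwhm_deg)                 # peak FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

print("crystallite size (nm):", round(scherrer_size(fwhm_deg=0.45, two_theta_deg=40.5), 1))
```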

  9. Analysis of the variation of range parameters of thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2016-10-01

    Measured range characteristics may vary considerably (by up to several dozen percent) between different samples of the same camera type. The question is whether the manufacturing process somehow lacks repeatability or whether the commonly used measurement procedures themselves need improvement. The presented paper attempts to deal with this question. The measurement method has been thoroughly analyzed, as well as the measurement test bed. Camera components (such as the detector and optics) have also been analyzed and their key parameters have been measured, including the noise figures of the entire system. Laboratory measurements are the most precise method used to determine the range parameters of a thermal camera. However, in order to obtain reliable results, several important conditions have to be fulfilled. One must have test equipment capable of measurement accuracy (uncertainty) significantly better than the magnitudes of the measured quantities. The measurements must be performed in a controlled environment, thus excluding the influence of varying environmental conditions. The personnel must be well-trained, experienced in testing thermal imaging devices and familiar with the applied measurement procedures. The measurement data recorded for several dozen cooled thermal cameras (from one of the leading camera manufacturers) have been the basis of the presented analysis. The measurements were conducted in the accredited research laboratory of the Institute of Optoelectronics (Military University of Technology).

  10. Probing Reliability of Transport Phenomena Based Heat Transfer and Fluid Flow Analysis in Autogenous Fusion Welding Process

    NASA Astrophysics Data System (ADS)

    Bag, S.; de, A.

    2010-09-01

    The transport phenomena based heat transfer and fluid flow calculations in the weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, the values of which are rarely known and difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.

  11. Efficiency of the Inertia Friction Welding Process and Its Dependence on Process Parameters

    NASA Astrophysics Data System (ADS)

    Senkov, O. N.; Mahaffey, D. W.; Tung, D. J.; Zhang, W.; Semiatin, S. L.

    2017-07-01

    It has been widely assumed, but never proven, that the efficiency of the inertia friction welding (IFW) process is independent of process parameters and is relatively high, i.e., 70 to 95 pct. In the present work, the effect of IFW parameters on process efficiency was established. For this purpose, a series of IFW trials was conducted for the solid-state joining of two dissimilar nickel-base superalloys (LSHR and Mar-M247) using various combinations of initial kinetic energy (i.e., the total weld energy, E_o), initial flywheel angular velocity (ω_o), flywheel moment of inertia (I), and axial compression force (P). The kinetics of the conversion of the welding energy to heating of the faying sample surfaces (i.e., the sample energy) vs parasitic losses to the welding machine itself were determined by measuring the friction torque on the sample surfaces (M_S) and in the machine bearings (M_M). It was found that the rotating parts of the welding machine can consume a significant fraction of the total energy. Specifically, the parasitic losses ranged from 28 to 80 pct of the total weld energy. The losses increased (and the corresponding IFW process efficiency decreased) as P increased (at constant I and E_o), I decreased (at constant P and E_o), and E_o (or ω_o) increased (at constant P and I). The results of this work thus provide guidelines for selecting process parameters which minimize energy losses and increase process efficiency during IFW.
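
    The energy bookkeeping described above can be illustrated with a short, hedged Python sketch (a generic formulation, not the authors' analysis): the initial flywheel kinetic energy is E_o = 0.5*I*ω_o², the sample energy is the time integral of M_S·ω, and the process efficiency is their ratio.

        import numpy as np

        def ifw_efficiency(I, omega, t, torque_sample):
            """Estimate IFW process efficiency from measured histories.

            I             : flywheel moment of inertia [kg m^2]
            omega, t      : angular velocity [rad/s] sampled at times t [s]
            torque_sample : friction torque on the faying surfaces, M_S [N m]
            """
            E_total = 0.5 * I * omega[0] ** 2              # initial kinetic energy E_o
            E_sample = np.trapz(torque_sample * omega, t)  # energy delivered to the weld
            return E_sample / E_total                      # parasitic loss = 1 - efficiency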

  12. Hierarchical Bayesian Model for Combining Geochemical and Geophysical Data for Environmental Applications Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2013-05-01

    A hierarchical Bayesian model is developed to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation, using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g., ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: 1) a data model, providing links between geochemical and geophysical attributes, 2) a process model, describing the spatial and temporal variability of geochemical properties in the subsurface system, and 3) a parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatiotemporal distribution of ferrous iron, sulfate and sulfide, and their associated uncertainty information. The obtained results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.
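
    As a hedged, much-simplified illustration of the Markov chain Monte Carlo estimation mentioned above (not the report's hierarchical model), the Python sketch below uses a tiny Metropolis sampler to infer the slope and intercept of a hypothetical linear data model linking an inverted Cole-Cole attribute x to a geochemical concentration y.

        import numpy as np

        def metropolis_linear(x, y, n_iter=20000, step=0.05, seed=0):
            """Metropolis sampling of (a, b) in y = a*x + b + noise, with flat
            priors and a fixed (crudely estimated) Gaussian noise level."""
            rng = np.random.default_rng(seed)
            sigma = 0.5 * np.std(y)
            theta = np.array([0.0, np.mean(y)])

            def log_like(t):
                resid = y - (t[0] * x + t[1])
                return -0.5 * np.sum((resid / sigma) ** 2)

            samples = np.empty((n_iter, 2))
            ll = log_like(theta)
            for i in range(n_iter):
                prop = theta + rng.normal(scale=step, size=2)
                ll_prop = log_like(prop)
                if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
                    theta, ll = prop, ll_prop
                samples[i] = theta
            return samples                                  # posterior draws of (a, b)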

  13. A method for predicting optimized processing parameters for surfacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupont, J.N.; Marder, A.R.

    1994-12-31

    Welding is used extensively for surfacing applications. To operate a surfacing process efficiently, the variables must be optimized to produce low levels of dilution with the substrate while maintaining high deposition rates. An equation for dilution in terms of the welding variables, thermal efficiency factors, and thermophysical properties of the overlay and substrate was developed by balancing energy and mass terms across the welding arc. To test the validity of the resultant dilution equation, the PAW, GTAW, GMAW, and SAW processes were used to deposit austenitic stainless steel onto carbon steel over a wide range of parameters. Arc efficiency measurements were conducted using a Seebeck arc welding calorimeter. Melting efficiency was determined based on knowledge of the arc efficiency. Dilution was determined for each set of processing parameters using a quantitative image analysis system. The pertinent equations indicate dilution is a function of arc power (corrected for arc efficiency), filler metal feed rate, melting efficiency, and thermophysical properties of the overlay and substrate. With the aid of the dilution equation, the effect of processing parameters on dilution is presented by a new processing diagram. A new method is proposed for determining dilution from welding variables. Dilution is shown to depend on the arc power, filler metal feed rate, arc and melting efficiency, and the thermophysical properties of the overlay and substrate. Calculated dilution levels were compared with measured values over a large range of processing parameters and good agreement was obtained. The results have been applied to generate a processing diagram which can be used to: (1) predict the maximum deposition rate for a given arc power while maintaining adequate fusion with the substrate, and (2) predict the resultant level of dilution with the substrate.
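
    A hedged Python sketch of the kind of energy/mass balance described above (an illustrative formulation, not the authors' published dilution equation): the power available for melting is split between the filler metal and the substrate, and dilution is taken as the melted-substrate fraction of the total melted mass. All efficiency, enthalpy and feed values below are hypothetical.

        def estimate_dilution(arc_power, arc_eff, melt_eff,
                              filler_feed, dH_filler, dH_substrate):
            """arc_power [W], filler_feed [kg/s], dH_* = enthalpy to melt [J/kg]."""
            melt_power = arc_power * arc_eff * melt_eff
            power_left = melt_power - filler_feed * dH_filler   # left for the substrate
            if power_left <= 0.0:
                return 0.0                                      # inadequate fusion
            substrate_melt_rate = power_left / dH_substrate     # [kg/s]
            return substrate_melt_rate / (substrate_melt_rate + filler_feed)

        # Hypothetical example: 5 kW arc, 80% arc efficiency, 40% melting efficiency
        print(estimate_dilution(5000.0, 0.80, 0.40, 0.0005, 1.3e6, 1.4e6))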

  14. Microbiological, physicochemical and sensory parameters of dry fermented sausages manufactured with high hydrostatic pressure processed raw meat.

    PubMed

    Omer, M K; Prieto, B; Rendueles, E; Alvarez-Ordoñez, A; Lunde, K; Alvseike, O; Prieto, M

    2015-10-01

    The aim of this trial was to describe the physicochemical, microbiological and organoleptic characteristics of dry fermented sausages produced from high hydrostatic pressure (HHP) pre-processed trimmings. During ripening of the meat products, pH, weight, water activity (aw), and several microbiological parameters were measured at zero, eight and fifteen days and after six weeks. Sensory characteristics were estimated at day 15 and after six weeks by a test panel using several sensory tests. Enterobacteriaceae were not detected in sausages from HHP-processed trimmings. Fermentation was little affected, but the weight and aw of the HHP-processed sausages decreased faster during ripening. HHP-treated sausages were consistently less favoured than non-HHP-treated sausages, but the strategy may be an alternative approach if the process is optimized. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Temperature Measurement and Numerical Prediction in Machining Inconel 718.

    PubMed

    Díaz-Álvarez, José; Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-06-30

    Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process, avoiding workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the obtainment of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in the turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. Measurements were used to calibrate a simplified numerical model for prediction of temperature fields during turning.
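
    For context, a two-color (ratio) pyrometer infers temperature from the ratio of signals at two wavelengths; the hedged Python sketch below uses the generic Wien-approximation, grey-body form of that relation and is not the calibration of the sensor described above. The wavelengths and signal values in the example are hypothetical.

        import numpy as np

        C2 = 1.4388e-2  # second radiation constant [m K]

        def two_color_temperature(signal_1, signal_2, lam_1, lam_2):
            """Temperature [K] from two detector signals proportional to spectral
            radiance at wavelengths lam_1 < lam_2 [m], Wien approximation,
            assuming equal emissivity at both wavelengths (grey body)."""
            ratio = signal_1 / signal_2
            return C2 * (1.0 / lam_2 - 1.0 / lam_1) / np.log(ratio * (lam_1 / lam_2) ** 5)

        # Example with hypothetical signals at 1.3 um and 1.6 um (~1073 K):
        print(two_color_temperature(1.00, 2.45, 1.3e-6, 1.6e-6))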

  16. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  17. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  18. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  19. 40 CFR 63.645 - Test methods and procedures for miscellaneous process vents.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... analysis based on accepted chemical engineering principles, measurable process parameters, or physical or... minute, at a temperature of 20 °C. (g) Engineering assessment may be used to determine the TOC emission...) Engineering assessment includes, but is not limited to, the following: (i) Previous test results provided the...

  20. 40 CFR 63.1323 - Batch process vents-methods and procedures for group determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... paragraph (b)(5) of this section. Engineering assessment may be used to estimate emissions from a batch... defined in paragraph (b)(5) of this section, through engineering assessment, as defined in paragraph (b)(6...

  1. 40 CFR 63.645 - Test methods and procedures for miscellaneous process vents.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... analysis based on accepted chemical engineering principles, measurable process parameters, or physical or... minute, at a temperature of 20 °C. (g) Engineering assessment may be used to determine the TOC emission...) Engineering assessment includes, but is not limited to, the following: (i) Previous test results provided the...

  2. 40 CFR 63.645 - Test methods and procedures for miscellaneous process vents.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... analysis based on accepted chemical engineering principles, measurable process parameters, or physical or... minute, at a temperature of 20 °C. (g) Engineering assessment may be used to determine the TOC emission...) Engineering assessment includes, but is not limited to, the following: (i) Previous test results provided the...

  3. 40 CFR 63.1323 - Batch process vents-methods and procedures for group determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... paragraph (b)(5) of this section. Engineering assessment may be used to estimate emissions from a batch... defined in paragraph (b)(5) of this section, through engineering assessment, as defined in paragraph (b)(6...

  4. 40 CFR 63.1323 - Batch process vents-methods and procedures for group determination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... paragraph (b)(5) of this section. Engineering assessment may be used to estimate emissions from a batch... defined in paragraph (b)(5) of this section, through engineering assessment, as defined in paragraph (b)(6...

  5. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  6. 40 CFR 63.1323 - Batch process vents-methods and procedures for group determination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... paragraph (b)(5) of this section. Engineering assessment may be used to estimate emissions from a batch... defined in paragraph (b)(5) of this section, through engineering assessment, as defined in paragraph (b)(6...

  7. Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state-vector elements. In addition to these, mass and solid propellant burn depth are included as the "system" state elements. The "parameter" state elements can include deviations of aerodynamic coefficients, inertia, center-of-gravity location, atmospheric wind, etc. from reference values. Propulsion parameter state elements have been included not as the options just discussed but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
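
    As a hedged reminder of the linearization the abstract refers to, the Python sketch below shows one generic extended-Kalman-filter predict/update cycle in which user-supplied Jacobians linearize the nonlinear process and measurement models; it is a textbook skeleton, not the Shuttle implementation.

        import numpy as np

        def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
            """One EKF cycle: f/h are the nonlinear process and measurement models,
            F_jac/H_jac return their Jacobians evaluated at the current estimate."""
            # Predict
            x_pred = f(x, u)
            F = F_jac(x, u)
            P_pred = F @ P @ F.T + Q
            # Update
            H = H_jac(x_pred)
            y = z - h(x_pred)                      # measurement residual
            S = H @ P_pred @ H.T + R               # residual covariance
            K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new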

  8. Influence of spray nozzle shape upon atomization process

    NASA Astrophysics Data System (ADS)

    Beniuga, Marius; Mihai, Ioan

    2016-12-01

    The atomization process is affected by a number of operating parameters (pressure, viscosity, temperature, etc.) [1-6] and by the adopted design solution. In this article, the parameters of an atomized liquid jet are compared for two nozzles with different lifespans, one new and the other worn out. The worn nozzle was monitored in terms of operating time, and both nozzle geometries were analyzed using laser profilometry. To compare the experimental parameters, an experimental stand was built that allows the period and pulse width of liquid injection through the two nozzles to be varied. The atomized liquid jets were photographed and filmed at high speed. The images obtained were analyzed using a Matlab code that determines a number of parameters characterizing an atomized jet. Knowing the conditions and operating parameters of the atomized jet, a new set of parameter values for the nozzle block can be established and implemented in the controller that provides dosing of the injected liquid. Experimental measurements covering the many forms of atomized droplets over a wide range of operating conditions were realized using the electronic control module.

  9. Measurements of Cuspal Slope Inclination Angles in Palaeoanthropological Applications

    NASA Astrophysics Data System (ADS)

    Gaboutchian, A. V.; Knyaz, V. A.; Leybova, N. A.

    2017-05-01

    Tooth crown morphological features, studied in palaeoanthropology, provide valuable information about human evolution and the development of civilization. Tooth crown morphology represents biological and historical data of high taxonomical value, as it characterizes genetically conditioned tooth relief features that are averse to substantial changes under environmental factors during lifetime. Palaeoanthropological studies are still based mainly on descriptive techniques and manual measurements of a limited number of morphological parameters. Feature evaluation and measurement result analysis are expert-based. The development of new methods and techniques in 3D imaging creates a background that provides for better palaeoanthropological data processing, analysis and distribution. The goals of the presented research are to propose new features for automated odontometry and to explore their applicability to palaeoanthropological studies. A technique for automated measuring of given morphological tooth parameters needed for anthropological study is developed. It is based on using an original photogrammetric system as a teeth 3D model acquisition device and on a set of algorithms for estimating the given tooth parameters.

  10. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

    We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and that of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on the 210 sets of experimental data, we evaluated the correction factor of the sensitivity and noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. The certainty improvement in the spatial derivative inference was demonstrated.

  11. Evaluation of parameters of color profile models of LCD and LED screens

    NASA Astrophysics Data System (ADS)

    Zharinov, I. O.; Zharinov, O. O.

    2017-12-01

    The purpose of the research relates to the problem of parametric identification of the color profile model of LCD (liquid crystal display) and LED (light emitting diode) screens. The color profile model of a screen is based on Grassmann's law of additive color mixture. Mathematically, the problem is to evaluate the unknown parameters (numerical coefficients) of the matrix transformation between different color spaces. Several methods of evaluating these screen profile coefficients were developed. These methods are based either on processing of colorimetric measurements or on processing of technical documentation data.
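
    A hedged sketch of one such evaluation, assuming Grassmann additivity: the 3×3 matrix mapping device RGB to CIE XYZ is estimated by least squares from paired colorimetric measurements of test patches (the data layout is a hypothetical illustration, not a specific method from the paper).

        import numpy as np

        def fit_color_matrix(rgb, xyz):
            """Least-squares estimate of M in XYZ ≈ M @ RGB from N measured
            patches; rgb and xyz are arrays of shape (N, 3), N >= 3."""
            M_T, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)  # solves rgb @ M.T ≈ xyz
            return M_T.T

        # Usage: xyz_predicted = rgb_new @ fit_color_matrix(rgb_meas, xyz_meas).T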

  12. Energy optimization aspects by injection process technology

    NASA Astrophysics Data System (ADS)

    Tulbure, A.; Ciortea, M.; Hutanu, C.; Farcas, V.

    2016-08-01

    In the proposed paper, the authors examine the energy aspects of the injection moulding process technology in the automotive industry. Theoretical considerations have been validated by experimental measurements on the manufacturing process for two types of injection moulding machines, hydraulic and electric. Practical measurements have been taken with professional equipment separately for each technological operation: lamination, compression, injection and expansion. For traceability of the results, the following parameters were, whenever possible, maintained: cycle time, product weight and the relative time. The aim of the investigations was to carry out a professional energy audit with accurate identification of losses. Based on the technological diagram for each production cycle, some measures to reduce energy consumption are proposed at the end of this contribution.

  13. Application of Fourier transform near-infrared spectroscopy to optimization of green tea steaming process conditions.

    PubMed

    Ono, Daiki; Bamba, Takeshi; Oku, Yuichi; Yonetani, Tsutomu; Fukusaki, Eiichiro

    2011-09-01

    In this study, we constructed prediction models by metabolic fingerprinting of fresh green tea leaves using Fourier transform near-infrared (FT-NIR) spectroscopy and partial least squares (PLS) regression analysis to objectively optimize the steaming process conditions in green tea manufacture. The steaming process is the most important step in manufacturing high quality green tea products. However, the parameter setting of the steamer is currently determined subjectively by the manufacturer. Therefore, a simple and robust system that can be used to objectively set the steaming process parameters is necessary. We focused on FT-NIR spectroscopy because of its simple operation, quick measurement, and low running costs. After removal of noise in the spectral data by principal component analysis (PCA), PLS regression analysis was performed using the spectral information as independent variables and the steaming parameters set by experienced manufacturers as dependent variables. The prediction models were successfully constructed with satisfactory accuracy. Moreover, the results of the demonstration experiment suggested that the green tea steaming process parameters could be predicted on a larger manufacturing scale. This technique will contribute to improvement of the quality and productivity of green tea because it can objectively optimize the complicated green tea steaming process and will be suitable for practical use in green tea manufacture. Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
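
    A hedged Python sketch of the PCA-denoising plus PLS-regression pipeline described above, using scikit-learn; the component counts and the array names (spectra, steam_settings) are hypothetical placeholders, not the paper's settings.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def fit_steaming_model(spectra, steam_settings, n_denoise=10, n_pls=5):
            """Denoise FT-NIR spectra with PCA, then regress the expert-set
            steaming parameters on the denoised spectra with PLS."""
            pca = PCA(n_components=n_denoise)
            denoised = pca.inverse_transform(pca.fit_transform(spectra))
            pls = PLSRegression(n_components=n_pls)
            y_cv = cross_val_predict(pls, denoised, steam_settings, cv=10)
            pls.fit(denoised, steam_settings)
            return pls, y_cv   # fitted model and cross-validated predictions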

  14. The influence of inertial sensor sampling frequency on the accuracy of measurement parameters in rearfoot running.

    PubMed

    Mitschke, Christian; Zaumseil, Falk; Milani, Thomas L

    2017-11-01

    Increasingly, inertial sensors are being used for running analyses. The aim of this study was to systematically investigate the influence of inertial sensor sampling frequencies (SF) on the accuracy of kinematic, spatio-temporal, and kinetic parameters. We hypothesized that running analyses at lower SF result in less signal information and therefore the inability to sufficiently interpret measurement data. Twenty-one subjects participated in this study. Rearfoot strikers ran on an indoor running track at a velocity of 3.5 ± 0.1 m s⁻¹. A uniaxial accelerometer was attached at the tibia and an inertial measurement unit was mounted at the heel of the right shoe. All sensors were synchronized at the start and data were recorded at 1000 Hz (reference SF). Datasets were reduced to 500, 333, 250, 200, and 100 Hz in post-processing. The results of this study showed that a minimum SF of 500 Hz should be used to accurately measure kinetic parameters (e.g. peak heel acceleration). In contrast, stride length showed accurate results even at 333 Hz. A SF of 200 Hz was required to accurately calculate peak tibial acceleration, stride duration, and all kinematic measurements. The information from this study is necessary to correctly interpret measurement data of existing investigations and to plan future studies.
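
    The post-processing reduction described above can be mimicked with a hedged Python sketch that decimates a 1000 Hz acceleration signal and reports how much the detected peak value changes at each lower sampling frequency (illustrative only; the study's actual reduction and outcome metrics may differ).

        import numpy as np

        def peak_error_vs_sampling(signal_1khz, factors=(2, 3, 4, 5, 10)):
            """Percent change of the signal peak after simple decimation from
            1000 Hz to 1000/k Hz (no anti-alias filtering, as a rough check)."""
            ref_peak = np.max(signal_1khz)
            errors = {}
            for k in factors:
                peak_k = np.max(signal_1khz[::k])
                errors[1000 // k] = 100.0 * (peak_k - ref_peak) / ref_peak
            return errors   # e.g. {500: -0.4, 333: -1.2, ...} [percent]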

  15. Characterization and Effects of Fiber Pull-Outs in Hole Quality of Carbon Fiber Reinforced Plastics Composite

    PubMed Central

    Alizadeh Ashrafi, Sina; Miller, Peter W.; Wandro, Kevin M.; Kim, Dave

    2016-01-01

    Hole quality plays a crucial role in the production of close-tolerance holes utilized in aircraft assembly. Through drilling experiments of carbon fiber-reinforced plastic composites (CFRP), this study investigates the impact of varying drilling feed and speed conditions on fiber pull-out geometries and resulting hole quality parameters. For this study, hole quality parameters include hole size variance, hole roundness, and surface roughness. Fiber pull-out geometries are quantified by using scanning electron microscope (SEM) images of the mechanically-sectioned CFRP-machined holes, to measure pull-out length and depth. Fiber pull-out geometries and the hole quality parameter results are dependent on the drilling feed and spindle speed condition, which determines the forces and undeformed chip thickness during the process. Fiber pull-out geometries influence surface roughness parameters from a surface profilometer, while their effect on other hole quality parameters obtained from a coordinate measuring machine is minimal. PMID:28773950

  16. Objectification of steering feel around straight-line driving for vehicle/tyre design

    NASA Astrophysics Data System (ADS)

    Kim, Jungsik; Yoon, Yong-San

    2015-02-01

    This paper presents objectification techniques for the assessment of steering feel, including on-centre feel and steering response, based on measurement data. Here, new objective parameters are developed by considering not only the process by which the steering feel is evaluated subjectively but also the ergonomic perceptive sensitivity of the driver. In order to validate such objective parameters, subjective tests are carried out by professional drivers. Objective measurements are also performed for several cars at a proving ground. The linear correlation coefficients between the subjective ratings and the objective parameters are calculated. As one of the new objective parameters, a steering wheel angle defined by ergonomic perception sensitivity shows high correlation with the subjective questionnaires on on-centre response. A newly defined steering torque curvature also shows high correlation with the subjective questionnaires on on-centre effort. These correlation results show that the subjective assessment of steering feel can be successfully explained and objectified by means of the suggested objective parameters.

  17. Modelling of Lunar Laser Ranging in the Geocentric Frame and Comparison with the Common-View Double-Difference Lunar Laser Ranging Approach

    NASA Astrophysics Data System (ADS)

    Svehla, D.; Rothacher, M.

    2016-12-01

    Is it possible to process Lunar Laser Ranging (LLR) measurements in the geocentric frame in a similar way to how SLR measurements are modelled for GPS satellites, and to estimate all global reference frame parameters as in the case of GPS? The answer is yes. We managed to process Lunar laser measurements to the Apollo and Luna retro-reflectors on the Moon in a similar way to the processing of SLR measurements to GPS satellites. We make use of the latest Lunar libration models and the DE430 ephemerides given in the Solar system barycentric frame, and model uplink and downlink Lunar laser ranges in the geocentric frame as one-way measurements, similar to SLR measurements to GPS satellites. In the first part of this contribution we present the estimation of the Lunar orbit as well as the Earth orientation parameters (including UT1 or UT0) with this new formulation. In the second part, we form common-view double-difference LLR measurements between two Lunar retro-reflectors and two LLR telescopes to show the actual noise of the LLR measurements. Since, by forming double-differences of LLR measurements, all range biases are removed and orbit errors are significantly reduced (the Lunar orbit is much farther away than the GPS orbits), one can consider double-difference LLR as an "orbit-free" and "bias-free" differential approach. In the end, we make a comparison with the SLR double-difference approach with Galileo satellites, where we have already demonstrated submillimeter precision, and discuss a possible combination of LLR and SLR to GNSS satellites using the double-difference approach.

  18. Analysis of rocket flight stability based on optical image measurement

    NASA Astrophysics Data System (ADS)

    Cui, Shuhua; Liu, Junhu; Shen, Si; Wang, Min; Liu, Jun

    2018-02-01

    Based on the abundant optical image measurement data obtained from optical tracking, this paper puts forward a method for evaluating rocket flight stability using imaging measurements of the carrier rocket's characteristics. On the basis of this measurement method, the attitude parameters of the rocket body in the coordinate system are calculated from the measurement data of multiple high-speed television cameras, and these parameters are then converted to the rocket body angle of attack, from which it is assessed whether the rocket has good flight stability, flying with a small angle of attack. The measurement method and the mathematical algorithm have been verified through data processing tests, in which the rocket flight stability state can be observed intuitively; the results can also support guidance system fault identification and failure analysis.

  19. Advanced non-contrasted computed tomography post-processing by CT-Calculometry (CT-CM) outperforms established predictors for the outcome of shock wave lithotripsy.

    PubMed

    Langenauer, J; Betschart, P; Hechelhammer, L; Güsewell, S; Schmid, H P; Engeler, D S; Abt, D; Zumstein, V

    2018-05-29

    To evaluate the predictive value of advanced non-contrasted computed tomography (NCCT) post-processing using novel CT-calculometry (CT-CM) parameters compared to established predictors of success of shock wave lithotripsy (SWL) for urinary calculi. NCCT post-processing was retrospectively performed in 312 patients suffering from upper tract urinary calculi who were treated by SWL. Established predictors such as skin-to-stone distance, body mass index, stone diameter and mean stone attenuation values were assessed. Precise stone size and shape metrics, 3-D greyscale measurements and homogeneity parameters, such as skewness and kurtosis, were analysed using CT-CM. Predictive values for SWL outcome were analysed using logistic regression and receiver operating characteristics (ROC) statistics. The overall success rate (stone disintegration and no re-intervention needed) of SWL was 59% (184 patients). CT-CM metrics mainly outperformed established predictors. According to ROC analyses, stone volume and surface area performed better than the established stone diameter, the mean 3D attenuation value was a stronger predictor than the established mean attenuation value, and the parameters skewness and kurtosis performed better than the recently emerged variation coefficient of stone density. Moreover, prediction of SWL outcome with 80% probability of being correct would be possible in a clearly higher number of patients (up to fivefold) using CT-CM-derived parameters. Advanced NCCT post-processing by CT-CM provides novel parameters that seem to outperform established predictors of SWL response. Implementation of these parameters into clinical routine might reduce SWL failure rates.
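
    A hedged scikit-learn sketch of the type of logistic-regression/ROC comparison reported above; the feature matrices X_established and X_ctcm and the binary success vector are hypothetical placeholders, not the study's data.

        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_predict

        def compare_predictor_sets(X_established, X_ctcm, success):
            """Cross-validated ROC AUC of logistic regression for two feature sets."""
            aucs = {}
            for name, X in (("established", X_established), ("ct_cm", X_ctcm)):
                model = LogisticRegression(max_iter=1000)
                prob = cross_val_predict(model, X, success, cv=5,
                                         method="predict_proba")[:, 1]
                aucs[name] = roc_auc_score(success, prob)
            return aucs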

  20. Uncertainty analysis of signal deconvolution using a measured instrument response function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartouni, E. P.; Beeman, B.; Caggiano, J. A.

    2016-10-05

    A common analysis procedure minimizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). Here, we investigate the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, for which the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. Finally, we apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.

  1. Rock surface roughness measurement using CSI technique and analysis of surface characterization by qualitative and quantitative results

    NASA Astrophysics Data System (ADS)

    Mukhtar, Husneni; Montgomery, Paul; Gianto; Susanto, K.

    2016-01-01

    In order to develop the image processing that is widely used in geo-processing and analysis, we introduce an alternative technique for the characterization of rock samples. The technique that we have used for characterizing inhomogeneous surfaces is based on Coherence Scanning Interferometry (CSI). An optical probe is first used to scan over the depth of the surface roughness of the sample. Then, to analyse the measured fringe data, we use the Five Sample Adaptive method to obtain quantitative results of the surface shape. To analyse the surface roughness parameters, Hmm and Rq, a new window resizing analysis technique is employed. The results of the morphology and surface roughness analysis show micron- and nano-scale information which is characteristic of each rock type and its history. These could be used for mineral identification and studies of rock movement on different surfaces. Image processing is thus used to define the physical parameters of the rock surface.
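
    As a hedged illustration of the roughness-analysis stage, the Python sketch below computes the RMS roughness (Rq) and a simple peak-to-valley height from a CSI height map; the window resizing strategy named in the abstract is not reproduced here.

        import numpy as np

        def roughness_parameters(height_map):
            """height_map: 2-D array of surface heights (e.g. in nm) from CSI.
            Returns (Rq, peak-to-valley) relative to the mean plane."""
            h = height_map - np.mean(height_map)
            rq = np.sqrt(np.mean(h ** 2))          # RMS roughness
            return rq, np.max(h) - np.min(h)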

  2. Direct injection analysis of fatty and resin acids in papermaking process waters by HPLC/MS.

    PubMed

    Valto, Piia; Knuutinen, Juha; Alén, Raimo

    2011-04-01

    A novel HPLC-atmospheric pressure chemical ionization/MS (HPLC-APCI/MS) method was developed for the rapid analysis of selected fatty and resin acids typically present in papermaking process waters. A mixture of palmitic, stearic, oleic, linolenic, and dehydroabietic acids was separated by a commercial HPLC column (a modified stationary C(18) phase) using gradient elution with methanol/0.15% formic acid (pH 2.5) as a mobile phase. The internal standard (myristic acid) method was used to calculate the correlation coefficients and in the quantitation of the results. In the thorough quality parameters measurement, a mixture of these model acids in aqueous media as well as in six different paper machine process waters was quantitatively determined. The measured quality parameters, such as selectivity, linearity, precision, and accuracy, clearly indicated that, compared with traditional gas chromatographic techniques, the simple method developed provided a faster chromatographic analysis with almost real-time monitoring of these acids. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Characterization of airborne particles generated from metal active gas welding process.

    PubMed

    Guerreiro, C; Gomes, J F; Carvalho, P; Santos, T J G; Miranda, R M; Albuquerque, P

    2014-05-01

    This study is focused on the characterization of particles emitted in the metal active gas welding of carbon steel using a mixture of Ar + CO2, and analyzes which main process parameters influence the emission itself. It was found that the amount of emitted particles (measured by particle number and alveolar deposited surface area) is clearly dependent on the distance to the welding front and also on the main welding parameters, namely the current intensity and heat input in the welding process. The emission of airborne fine particles seems to increase with the current intensity, as the fume-formation rate does. When comparing the tested gas mixtures, higher emissions are observed for more oxidant mixtures, that is, mixtures with higher CO2 content, which result in higher arc stability. These mixtures originate higher concentrations of fine particles (as measured by number of particles per cm³ of air) and higher values of alveolar deposited surface area of particles, thus resulting in more severe worker exposure.

  4. [Determination of process variable pH in solid-state fermentation by FT-NIR spectroscopy and extreme learning machine (ELM)].

    PubMed

    Liu, Guo-hai; Jiang, Hui; Xiao, Xia-hong; Zhang, Dong-juan; Mei, Cong-li; Ding, Yu-han

    2012-04-01

    Fourier transform near-infrared (FT-NIR) spectroscopy was applied to determine pH, which is one of the key process parameters in the solid-state fermentation of crop straws. First, near infrared spectra of 140 solid-state fermented product samples were obtained by a near infrared spectroscopy system in the wavenumber range of 10 000-4 000 cm⁻¹, and the reference measurement results for pH were then obtained with a pH meter. Thereafter, an extreme learning machine (ELM) was employed to calibrate the model. In the calibration model, the optimal number of PCs and the optimal number of hidden-layer nodes of the ELM network were determined by cross-validation. Experimental results showed that the optimal ELM model was achieved with the 1040-1 topology construction, as follows: R(p) = 0.9618 and RMSEP = 0.1044 in the prediction set. The research achievement could provide a technological basis for the on-line measurement of the process parameters in solid-state fermentation.
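
    A hedged, minimal Python implementation of the extreme learning machine idea used above (random input weights, sigmoid hidden layer, output weights solved by least squares); the hidden-layer size is a placeholder, not the paper's optimized topology.

        import numpy as np

        def train_elm(X, y, n_hidden=40, seed=0):
            """Fit a single-hidden-layer ELM: only the output weights are learned."""
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights
            b = rng.normal(size=n_hidden)                   # random biases
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # hidden activations
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # least-squares output weights
            return W, b, beta

        def predict_elm(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta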

  5. Robust measurement of supernova νe spectra with future neutrino detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikrant, Alex; Laha, Ranjan; Horiuchi, Shunsaku

    Measuring precise all-flavor neutrino information from a supernova is crucial for understanding the core-collapse process as well as neutrino properties. We apply a chi-squared analysis for different detector setups to explore determination of νe spectral parameters. Using a long-term two-dimensional core-collapse simulation with three time-varying spectral parameters, we generate mock data to examine the capabilities of the current Super-Kamiokande detector and compare the relative improvements that gadolinium, Hyper-Kamiokande, and DUNE would have. We show that in a realistic three spectral parameter framework, the addition of gadolinium to Super-Kamiokande allows for a qualitative improvement in νe determination. Efficient neutron tagging will allow Hyper-Kamiokande to constrain spectral information more strongly in both the accretion and cooling phases. Overall, significant improvements will be made by Hyper-Kamiokande and DUNE, allowing for much more precise determination of νe spectral parameters.

  6. Robust measurement of supernova νe spectra with future neutrino detectors

    DOE PAGES

    Nikrant, Alex; Laha, Ranjan; Horiuchi, Shunsaku

    2018-01-25

    Measuring precise all-flavor neutrino information from a supernova is crucial for understanding the core-collapse process as well as neutrino properties. We apply a chi-squared analysis for different detector setups to explore determination of νe spectral parameters. Using a long-term two-dimensional core-collapse simulation with three time-varying spectral parameters, we generate mock data to examine the capabilities of the current Super-Kamiokande detector and compare the relative improvements that gadolinium, Hyper-Kamiokande, and DUNE would have. We show that in a realistic three spectral parameter framework, the addition of gadolinium to Super-Kamiokande allows for a qualitative improvement in νe determination. Efficient neutron tagging will allow Hyper-Kamiokande to constrain spectral information more strongly in both the accretion and cooling phases. Overall, significant improvements will be made by Hyper-Kamiokande and DUNE, allowing for much more precise determination of νe spectral parameters.

  7. Geometrical Dependence of Domain-Wall Propagation and Nucleation Fields in Magnetic-Domain-Wall Sensors

    NASA Astrophysics Data System (ADS)

    Borie, B.; Kehlberger, A.; Wahrhusen, J.; Grimm, H.; Kläui, M.

    2017-08-01

    We study the key domain-wall properties in segmented nanowire loop-based structures used in domain-wall-based sensors. The two reasons for device failure, namely the distribution of the domain-wall propagation field (depinning) and the nucleation field, are determined with magneto-optical Kerr effect and giant-magnetoresistance (GMR) measurements for thousands of elements to obtain significant statistics. Single layers of Ni81Fe19, a complete GMR stack with Co90Fe10/Ni81Fe19 as a free layer, and a single layer of Co90Fe10 are deposited and industrially patterned to determine the influence of the shape anisotropy, the magnetocrystalline anisotropy, and the fabrication processes. We show that the propagation field is influenced only slightly by the geometry but significantly by material parameters. Simulations for a realistic wire shape yield a curling-mode type of magnetization configuration close to the nucleation field. Nonetheless, we find that the domain-wall nucleation fields can be described by a typical Stoner-Wohlfarth model related to the measured geometrical parameters of the wires and fitted by considering the process parameters. The GMR effect is subsequently measured in a substantial number of devices (3000) in order to accurately gauge the variation between devices. This measurement scheme reveals a corrected upper limit to the nucleation fields of the sensors that can be exploited for fast characterization of the working elements.

  8. Estimation of forest biomass using remote sensing

    NASA Astrophysics Data System (ADS)

    Sarker, Md. Latifur Rahman

    Forest biomass estimation is essential for greenhouse gas inventories, terrestrial carbon accounting and climate change modelling studies. The availability of new SAR (C-band RADARSAT-2 and L-band PALSAR) and optical sensors (SPOT-5 and AVNIR-2) has opened new possibilities for biomass estimation because these new SAR sensors can provide data with varying polarizations, incidence angles and fine spatial resolutions. Therefore, this study investigated the potential of two SAR sensors (RADARSAT-2 with C-band and PALSAR with L-band) and two optical sensors (SPOT-5 and AVNIR-2) for the estimation of biomass in Hong Kong. Three common major processing steps were used for data processing, namely (i) spectral reflectance/intensity, (ii) texture measurements and (iii) polarization or band ratios of texture parameters. Simple linear and stepwise multiple regression models were developed to establish a relationship between the image parameters and the biomass of field plots. The results demonstrate the ineffectiveness of raw data. However, significant improvements in performance (r2) (RADARSAT-2=0.78; PALSAR=0.679; AVNIR-2=0.786; SPOT-5=0.854; AVNIR-2 + SPOT-5=0.911) were achieved using texture parameters of all sensors. The performances were further improved and very promising performances (r2) were obtained using the ratio of texture parameters (RADARSAT-2=0.91; PALSAR=0.823; PALSAR two-date=0.921; AVNIR-2=0.899; SPOT-5=0.916; AVNIR-2 + SPOT-5=0.939). These performances suggest four main contributions arising from this research, namely (i) biomass estimation can be significantly improved by using texture parameters, (ii) further improvements can be obtained using the ratio of texture parameters, (iii) multisensor texture parameters and their ratios have more potential than texture from a single sensor, and (iv) biomass can be accurately estimated far beyond the previously perceived saturation levels of SAR and optical data using texture parameters or the ratios of texture parameters. A further important contribution resulting from the fusion of SAR and optical images produced accuracies (r2) of 0.706 and 0.77 from the simple fusion and the texture processing of the fused image, respectively. Although these performances were not as attractive as the performances obtained from the other four processing steps, the wavelet fusion procedure improved the saturation level of the optical (AVNIR-2) image very significantly after fusion with the SAR image. Keywords: biomass, climate change, SAR, optical, multisensors, RADARSAT-2, PALSAR, AVNIR-2, SPOT-5, texture measurement, ratio of texture parameters, wavelets, fusion, saturation

  9. Model selection using cosmic chronometers with Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Melia, Fulvio; Yennapureddy, Manoj K.

    2018-02-01

    The use of Gaussian Processes with a measurement of the cosmic expansion rate based solely on the observation of cosmic chronometers provides a completely cosmology-independent reconstruction of the Hubble parameter H(z) suitable for testing different models. The corresponding dispersion σH is smaller than ~9% over the entire redshift range (0 ≲ z ≲ 2.0) of the observations, rivaling many kinds of cosmological measurements available today. We use the reconstructed H(z) function to test six different cosmologies, and show that it favours the R_h = ct universe, which has only one free parameter (i.e., H_0), over other models, including Planck ΛCDM. The parameters of the standard model may be re-optimized to improve the fits to the reconstructed H(z) function, but the results have smaller p-values than one finds with R_h = ct.
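
    A hedged sketch of a cosmology-independent reconstruction of this type using scikit-learn's Gaussian process regressor (an RBF kernel and per-point noise variances; the kernel choice and data arrays are illustrative assumptions, not the paper's setup).

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def reconstruct_hz(z, hz, hz_err, z_grid):
            """Reconstruct H(z) and its 1-sigma band from cosmic-chronometer data."""
            kernel = ConstantKernel(100.0, (1e-2, 1e4)) * RBF(1.0, (1e-2, 1e2))
            gp = GaussianProcessRegressor(kernel=kernel, alpha=hz_err ** 2,
                                          normalize_y=True)
            gp.fit(z.reshape(-1, 1), hz)
            mean, std = gp.predict(z_grid.reshape(-1, 1), return_std=True)
            return mean, std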

  10. Photolyses of mammalian carboxy-hemoglobin studied by photoacoustic calorimetry

    NASA Astrophysics Data System (ADS)

    Zhao, JinYu; Li, JiaHuang; Zhang, Zheng; Zhang, ShuYi; Qu, Min; Fang, JianWen; Hua, ZiChun

    2013-07-01

    The enthalpy and conformational volume changes in the photolyses of carboxy-hemoglobin (HbCO) of human, bovine, pig, horse and rabbit are investigated by photoacoustic calorimetry. Considering the time scales of the exciting laser pulse and the receiving ultrasound transducers (PVDF films and PZT ceramics), as well as the reaction lifetimes in the photolysis processes of HbCO, the measured results are related to the geminate recombination and tertiary relaxation in photolyses of HbCO. Moreover, the quantum yields of the five mammals are also measured by laser pump-probe technique. The results show that the dynamic parameters, such as enthalpy and conformational volume changes, differ between the processes of the geminate recombination and tertiary relaxation. Also, the dynamic parameters differ among the five mammals although some of them may be consistent with each other.

  11. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and limited information in yes-no detection responses to electrocutaneous stimuli challenge to estimate model parameters. Here, we address the question whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding the balance. Second, our theoretical analysis suggests to vary the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of states of the nociceptive system.

  12. 4D flow MRI post-processing strategies for neuropathologies

    NASA Astrophysics Data System (ADS)

    Schrauben, Eric Mathew

    4D flow MRI allows for the measurement of a dynamic 3D velocity vector field. Blood flow velocities in large vascular territories can be qualitatively visualized with the added benefit of quantitative probing. Within cranial pathologies theorized to have vascular-based contributions or effects, 4D flow MRI provides a unique platform for comprehensive assessment of hemodynamic parameters. Targeted blood flow derived measurements, such as flow rate, pulsatility, retrograde flow, or wall shear stress, may provide insight into the onset or characterization of more complex neuropathologies. Therefore, the thorough assessment of each parameter within the context of a given disease has important medical implications. Not surprisingly, the last decade has seen rapid growth in the use of 4D flow MRI. Data acquisition sequences are available to researchers on all major scanner platforms. However, the use has been limited mostly to small research trials. One major reason that has hindered the more widespread use and application in larger clinical trials is the complexity of the post-processing tasks and the lack of adequate tools for these tasks. Post-processing of 4D flow MRI must be semi-automated, fast, user-independent, robust, and reliably consistent for use in a clinical setting, within large patient studies, or across a multicenter trial. Development of proper post-processing methods coupled with systematic investigation in normal and patient populations pushes 4D flow MRI closer to clinical realization while elucidating potential underlying neuropathological origins. Within this framework, the work in this thesis assesses venous flow reproducibility and internal consistency in a healthy population. A preliminary analysis of venous flow parameters in healthy controls and multiple sclerosis patients is performed in a large study employing 4D flow MRI. These studies are performed in the context of the chronic cerebrospinal venous insufficiency hypothesis. Additionally, a double-gated flow acquisition and reconstruction scheme demonstrates respiratory-induced changes in internal jugular vein flow. Finally, a semi-automated intracranial vessel segmentation and flow parameter measurement software tool for fast and consistent 4D flow post-processing analysis is developed, validated, and demonstrated in vivo.

  13. Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Soni, P. K.; Krishna, C. M.

    2018-04-01

    The machining of Al3030-based composites on a Computer Numerical Control (CNC) high-speed milling machine has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have carried out research in this area, but very few have also taken the step-over ratio, or radial depth of cut, as one of the input variables. In this research work, the study of the characteristics of Al3030 is carried out on a high-speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut and feed rate are the other input variables taken into consideration in this research work. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software and the results are validated by conducting a selected additional set of experiments. Selection of input process parameters in order to obtain the best machining outputs is the key contribution of this research work.
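
    A hedged Python sketch of the same kind of analysis (a Taguchi L9 design followed by a linear regression model); the factor levels and flatness responses below are purely hypothetical illustrations, not measurements from the study, and scikit-learn stands in for the Minitab workflow.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical L9(3^4) design: spindle speed [rpm], feed [mm/min],
        # depth of cut [mm], step-over ratio.
        X = np.array([
            [3000, 200, 0.2, 0.3], [3000, 400, 0.4, 0.5], [3000, 600, 0.6, 0.7],
            [4000, 200, 0.4, 0.7], [4000, 400, 0.6, 0.3], [4000, 600, 0.2, 0.5],
            [5000, 200, 0.6, 0.5], [5000, 400, 0.2, 0.7], [5000, 600, 0.4, 0.3],
        ])
        flatness = np.array([0.021, 0.034, 0.046, 0.028, 0.031,
                             0.025, 0.038, 0.022, 0.027])   # hypothetical [mm]

        model = LinearRegression().fit(X, flatness)  # analogous models fit MRR and TWR
        print(model.intercept_, model.coef_)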

  14. QUANTIFYING THE EFFECTS OF THE MIXING PROCESS IN FABRICATED DILUTION SYSTEMS ON PARTICULATE EMISSION MEASUREMENTS VIA AN INTEGRATED EXPERIMENTAL AND MODELING APPROACH

    EPA Science Inventory

    Mixture properties vs Aerodynamic properties
     
    Considering a number of parameters influencing particulate emission measurements, we first categorize them into two groups based on their characteristics, i.e., to mixture propertie...

  15. Modeling carbon cycle process of soil profile in Loess Plateau of China

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Finke, P.; Guo, Z.; Wu, H.

    2011-12-01

    SoilGen2 is a process-based model which can reconstruct soil formation under various climate conditions, parent materials, vegetation types, slopes, expositions and time scales. Both organic and inorganic carbon cycle processes can be simulated; the latter process is important in the carbon cycle of arid and semi-arid regions but has seldom been studied. After calibrating the parameters for dust deposition rate and the segment depths affecting element transport and deposition in the profile, modeling results after 10000 years were confronted with measurements from two soil profiles in the Loess Plateau of China. The simulated trends of organic carbon and CaCO3 in the profile are similar to the measured values. A relative sensitivity analysis of the carbon cycle processes has been carried out, and the results show that the change of organic carbon on long time scales is most sensitive to precipitation, temperature, plant carbon input and decomposition parameters (decomposition rate of humus, ratio of CO2/(BIO+HUM), etc.) in the model. As for the inorganic carbon cycle, precipitation and potential evaporation are important for simulation quality, while the leaching and deposition of CaCO3 are not sensitive to the pCO2 and temperature of the atmosphere.

  16. Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process

    NASA Astrophysics Data System (ADS)

    Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas

    2018-05-01

    This note reports the development of a new method for linearizing the Mössbauer spectra recorded with a sine drive velocity signal. Mössbauer spectra linearity is a critical parameter to determine Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and consecutive linearization increases the linearity of spectra in a wider frequency range of a drive signal, as generally harmonic movement is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line width parameters in comparison with those measured using a traditional triangle velocity signal.
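
    A hedged sketch of the linearization step described above: counts recorded against a sinusoidal velocity axis are re-binned onto an equidistant velocity axis by interpolation (a simplified illustration that ignores channel dwell-time weighting; the channel/phase conventions are assumptions, not the note's implementation).

        import numpy as np

        def linearize_sine_spectrum(counts, v_max, n_linear=512):
            """counts: counts per channel over one half-period of the sine drive,
            v_max: peak drive velocity [mm/s]. Returns (linear axis, re-binned counts)."""
            n = len(counts)
            phase = np.pi * (np.arange(n) + 0.5) / n - np.pi / 2.0   # -pi/2 .. +pi/2
            v_sine = v_max * np.sin(phase)                           # channel velocities
            v_lin = np.linspace(-v_max, v_max, n_linear)
            return v_lin, np.interp(v_lin, v_sine, counts)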

  17. Nonlinear Parameter Identification of a Resonant Electrostatic MEMS Actuator

    PubMed Central

    Al-Ghamdi, Majed S.; Alneamy, Ayman M.; Park, Sangtak; Li, Beichen; Khater, Mahmoud E.; Abdel-Rahman, Eihab M.; Heppler, Glenn R.; Yavuz, Mustafa

    2017-01-01

    We experimentally investigate the primary superharmonic of order two and subharmonic of order one-half resonances of an electrostatic MEMS actuator under direct excitation. We identify the parameters of a one degree of freedom (1-DOF) generalized Duffing oscillator model representing it. The experiments were conducted in soft vacuum to reduce squeeze-film damping, and the actuator response was measured optically using a laser vibrometer. The predictions of the identified model were found to be in close agreement with the experimental results. We also identified the noise spectral density of process (actuation voltage) and measurement noise. PMID:28505097

  18. Nonlinear Parameter Identification of a Resonant Electrostatic MEMS Actuator.

    PubMed

    Al-Ghamdi, Majed S; Alneamy, Ayman M; Park, Sangtak; Li, Beichen; Khater, Mahmoud E; Abdel-Rahman, Eihab M; Heppler, Glenn R; Yavuz, Mustafa

    2017-05-13

    We experimentally investigate the primary superharmonic of order two and subharmonic of order one-half resonances of an electrostatic MEMS actuator under direct excitation. We identify the parameters of a one degree of freedom (1-DOF) generalized Duffing oscillator model representing it. The experiments were conducted in soft vacuum to reduce squeeze-film damping, and the actuator response was measured optically using a laser vibrometer. The predictions of the identified model were found to be in close agreement with the experimental results. We also identified the noise spectral density of process (actuation voltage) and measurement noise.

  19. Nickel-Phosphorous Development for Total Solar Irradiance Measurement

    NASA Astrophysics Data System (ADS)

    Carlesso, F.; Berni, L. A.; Vieira, L. E. A.; Savonov, G. S.; Nishimori, M.; Dal Lago, A.; Miranda, E.

    2017-10-01

    The development of an absolute radiometer instrument for TSI measurements is currently an effort at INPE. In this work, we describe the development of black Ni-P coatings for the absorptive cavities of TSI radiometers. We present a study of the surface blackening process and the relationships between morphological structure, chemical composition and coating absorption. Ni-P deposits with different phosphorus contents were obtained by electroless techniques on aluminum substrates with a thin zincate layer. Appropriate phosphorus composition and etching process parameters produce low-reflectance black coatings.

  20. [INVITED] Evaluation of process observation features for laser metal welding

    NASA Astrophysics Data System (ADS)

    Tenner, Felix; Klämpfl, Florian; Nagulin, Konstantin Yu.; Schmidt, Michael

    2016-06-01

    In the present study we show how fast the fluid dynamics change when the laser power is changed at different feed rates during laser metal welding. Using two high-speed cameras and a data acquisition system, we determine how fast the process must be imaged to measure the fluid dynamics with very high certainty. Our experiments show that not all process features that can be measured during laser welding represent the process behavior equally well. Despite the good visibility of the vapor plume, monitoring its movement is less suitable as an input signal for a closed-loop control. The features measured inside the keyhole show a good correlation with changes of the process parameters. Due to its low noise, the area of the keyhole opening is well suited as an input signal for a closed-loop control of the process.

  1. Measurement of the main and critical parameters for optimal laser treatment of heart disease

    NASA Astrophysics Data System (ADS)

    Kabeya, FB; Abrahamse, H.; Karsten, AE

    2017-10-01

    Laser light is frequently used in the diagnosis and treatment of patients. As with traditional treatments such as medication, bypass surgery, and minimally invasive procedures, laser treatment can also fail and present serious side effects. The true reason for laser treatment failure, or for the side effects thereof, remains unknown. From the literature review conducted and the experimental results generated, we conclude that an optimal laser treatment for coronary artery disease (heart disease) can be obtained if certain critical parameters are correctly measured and understood. These parameters include the laser power, the laser beam profile, the fluence rate, the treatment time, and the absorption and scattering coefficients of the target tissue. This paper therefore proposes accurate methods for the measurement of these critical parameters to determine the optimal laser treatment of heart disease with a minimal risk of side effects. The results from the measurement of absorption and scattering properties can be used in a computer simulation package to predict the fluence rate. The computing technique is a program based on the random-number (Monte Carlo) process and probability statistics to track the propagation of photons through biological tissue.

  2. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC, this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived and evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the estimation approach to a simple, accurately modeled system, its effectiveness and accuracy can be evaluated. The same experimental setup can then be used with fluid-filled tanks to further evaluate the effectiveness of the process. Ultimately, the proven process can be applied to the full-sized spinning experimental setup to quickly and accurately determine the slosh model parameters for a particular spacecraft mission. Automating the parameter identification process will save time, allow more changes to be made to proposed designs, and lower the cost in the initial design stages.
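
    A minimal sketch, under assumed dynamics and synthetic "measurements", of the parameter-identification idea described above: simulate a simple damped pendulum, then recover its length and damping coefficient by minimizing the mismatch between simulated and measured angle histories. The dynamics, noise level and parameter values are illustrative, not the MATLAB/SimMechanics setup of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import minimize

        g = 9.81
        rng = np.random.default_rng(0)

        def simulate(length, damping, t):
            # Damped pendulum released from a small initial angle.
            def rhs(_, y):
                theta, omega = y
                return [omega, -(g / length) * np.sin(theta) - damping * omega]
            sol = solve_ivp(rhs, (t[0], t[-1]), [0.2, 0.0], t_eval=t)
            return sol.y[0]

        t = np.linspace(0.0, 10.0, 200)
        measured = simulate(0.8, 0.15, t) + rng.normal(0.0, 0.002, t.size)  # synthetic data

        def cost(p):
            length, damping = p
            return np.sum((simulate(length, damping, t) - measured) ** 2)

        result = minimize(cost, x0=[1.0, 0.05], bounds=[(0.1, 5.0), (0.0, 1.0)])
        print("estimated length, damping:", result.x)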

  3. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they have become net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can occur if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research the developed version of Biome-BGC, referred to as BBGC MuSo, is used). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (the degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the model goodness metric on the calibration. The different likelihoods are different functions of the RMSE (root mean squared error) weighted by the measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, a sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). Cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
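
    A minimal sketch of the kind of likelihood formulations described above: each is a different function of the RMSE between simulated and measured data, weighted by the measurement uncertainty. The abstract does not give the exact functional forms, so the ones below are illustrative assumptions.

        import numpy as np

        def weighted_rmse(sim, obs, sigma):
            # RMSE of the residuals normalized by the measurement uncertainty.
            return np.sqrt(np.mean(((sim - obs) / sigma) ** 2))

        def likelihood_exponential(sim, obs, sigma):
            return np.exp(-weighted_rmse(sim, obs, sigma))

        def likelihood_linear(sim, obs, sigma):
            return max(0.0, 1.0 - weighted_rmse(sim, obs, sigma))

        def likelihood_quadratic(sim, obs, sigma):
            return max(0.0, 1.0 - weighted_rmse(sim, obs, sigma) ** 2)

        obs = np.array([2.1, 1.8, 2.5])       # hypothetical measurements
        sim = np.array([2.0, 1.9, 2.4])       # hypothetical model output
        sigma = np.array([0.2, 0.2, 0.3])     # measurement uncertainties
        for f in (likelihood_exponential, likelihood_linear, likelihood_quadratic):
            print(f.__name__, f(sim, obs, sigma))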

  4. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.

  5. Remote sensing requirements as suggested by watershed model sensitivity analyses

    NASA Technical Reports Server (NTRS)

    Salomonson, V. V.; Rango, A.; Ormsby, J. P.; Ambaruch, R.

    1975-01-01

    A continuous simulation watershed model has been used to perform sensitivity analyses that provide guidance in defining remote sensing requirements for the monitoring of watershed features and processes. The results show that of the 26 input parameters having meaningful effects on simulated runoff, six appear to be obtainable with existing remote sensing techniques. Of these six parameters, three require the measurement of the areal extent of surface features (impervious areas, water bodies, and the extent of forested area), two require the discrimination of land use that can be related to the overland flow roughness coefficient or the density of vegetation so as to estimate the magnitude of precipitation interception, and one requires the measurement of distance to obtain the length over which overland flow typically occurs. Observational goals are also suggested for monitoring such fundamental watershed processes as precipitation, soil moisture, and evapotranspiration. A case study on the Patuxent River in Maryland shows that runoff simulation is improved if recent satellite land use observations are used as model inputs as opposed to less timely topographic map information.

  6. VESGEN Software for Mapping and Quantification of Vascular Regulators

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.

    2012-01-01

    VESsel GENeration (VESGEN) Analysis is automated software that maps and quantifies effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses image-processing concepts of 8-neighbor pixel connectivity, skeleton, and distance map to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN maps typically 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching tested in human clinical or laboratory animal experimental studies are quantified by comparing vascular parameters with control groups. VESGEN provides a user interface to both guide and allow control over the user's vascular analysis process. An option is provided to select a morphological tissue type of vascular trees, networks or tree-network composites, which determines the general collections of algorithms, intermediate images, and output images and measurements that will be produced.

  7. A novel process for production of spherical PBT powders and their processing behavior during laser beam melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Jochen, E-mail: jochen.schmidt@fau.de; Sachs, Marius; Fanselow, Stephanie

    2016-03-09

    Additive manufacturing processes like laser beam melting of polymers are established for the production of prototypes and individualized parts. The transfer to other areas of application and to serial production is currently hindered by the limited availability of polymer powders with good processability. Within this contribution, a novel process route for the production of spherical, micron-sized polymer particles of good flowability has been established and applied to produce polybutylene terephthalate (PBT) powders. Moreover, the applicability of the PBT powders in selective laser beam melting and the dependencies between process parameters and device properties are outlined. First, polymer microparticles are produced by a novel wet grinding method. To improve the flowability of the produced particles, the particle shape is optimized by rounding in a heated downer reactor. A further improvement of the flowability of the cohesive spherical PBT particles is realized by dry coating. An improvement of flowability by a factor of about 5 is achieved by subsequent rounding of the comminution product and dry coating, as proven by tensile strength measurements of the powders. The produced PBT powders were characterized with respect to their processability; thermal, rheological, optical and bulk properties were analyzed. Based on these investigations, a range of processing parameters was derived. Parameter studies on thin layers, produced in a selective laser melting system, were conducted. Hence, appropriate parameters for processing the PBT powders by laser beam melting, such as building chamber temperature, scan speed and laser power, have been identified.

  8. United States Air Force Summer Research Program -- 1993 Summer Research Program Final Reports. Volume 11. Arnold Engineering Development Center, Frank J. Seiler Research Laboratory, Wilford Hall Medical Center

    DTIC Science & Technology

    1993-01-01

    external parameters such as airflow, temperature, pressure, etc, are measured. Turbine Engine testing generates massive volumes of data at very high...a form that describes the signal flow graph topology as well as specific parameters of the processing blocks in the diagram. On multiprocessor...provides an interface to the symbolic builder and control functions such that parameters may be set during the build operation that will affect the

  9. Theoretical Accuracy of Along-Track Displacement Measurements from Multiple-Aperture Interferometry (MAI)

    PubMed Central

    Jung, Hyung-Sup; Lee, Won-Jin; Zhang, Lei

    2014-01-01

    The measurement of precise along-track displacements has been made possible by multiple-aperture interferometry (MAI). The empirical accuracies of the MAI measurements are about 6.3 and 3.57 cm for ERS and ALOS data, respectively. However, these empirical accuracies cannot be generalized to any interferometric pair because they largely depend on the processing parameters and the coherence of the SAR data used. A theoretical formula is given to calculate the expected MAI measurement accuracy according to the system and processing parameters and the interferometric coherence. In this paper, we have investigated the expected MAI measurement accuracy on the basis of the theoretical formula for the existing X-, C- and L-band satellite SAR systems. The similarity between the expected and empirical MAI measurement accuracies has been tested as well. Expected accuracies of about 2–3 cm and 3–4 cm (γ = 0.8) are calculated for the X- and L-band SAR systems, respectively. For the C-band systems, the expected accuracy of Radarsat-2 ultra-fine is about 3–4 cm and that of Sentinel-1 IW is about 27 cm (γ = 0.8). The results indicate that the expected MAI measurement accuracy of a given interferometric pair can be easily calculated by using the theoretical formula. PMID:25251408

  10. Methods of Measurement for Semiconductor Materials, Process Control, and Devices

    NASA Technical Reports Server (NTRS)

    Bullis, W. M. (Editor)

    1973-01-01

    The development of methods of measurement for semiconductor materials, process control, and devices is reported. Significant accomplishments include: (1) Completion of an initial identification of the more important problems in process control for integrated circuit fabrication and assembly; (2) preparations for making silicon bulk resistivity wafer standards available to the industry; and (3) establishment of the relationship between carrier mobility and impurity density in silicon. Work is continuing on measurement of resistivity of semiconductor crystals; characterization of generation-recombination-trapping centers, including gold, in silicon; evaluation of wire bonds and die attachment; study of scanning electron microscopy for wafer inspection and test; measurement of thermal properties of semiconductor devices; determination of S-parameters and delay time in junction devices; and characterization of noise and conversion loss of microwave detector diodes.

  11. Computer-controlled multi-parameter mapping of 3D compressible flowfields using planar laser-induced iodine fluorescence

    NASA Technical Reports Server (NTRS)

    Donohue, James M.; Victor, Kenneth G.; Mcdaniel, James C., Jr.

    1993-01-01

    A computer-controlled technique, using planar laser-induced iodine fluorescence, for measuring complex compressible flowfields is presented. A new laser permits the use of a planar two-line temperature technique so that all parameters can be measured with the laser operated narrowband. Pressure and temperature measurements in a step flowfield show agreement within 10 percent of a CFD model except in regions close to walls. Deviation of near-wall temperature measurements from the model was decreased from 21 percent to 12 percent compared to broadband planar temperature measurements. Computer control of the experiment has been implemented, except for the frequency tuning of the laser. Image data storage and processing have been improved by integrating a workstation into the experimental setup, reducing the data reduction time by a factor of 50.

  12. Closed-Loop Process Control for Electron Beam Freeform Fabrication and Deposition Processes

    NASA Technical Reports Server (NTRS)

    Taminger, Karen M. (Inventor); Hofmeister, William H. (Inventor); Martin, Richard E. (Inventor); Hafley, Robert A. (Inventor)

    2013-01-01

    A closed-loop control method for an electron beam freeform fabrication (EBF(sup 3)) process includes detecting a feature of interest during the process using a sensor(s), continuously evaluating the feature of interest to determine, in real time, a change occurring therein, and automatically modifying control parameters to control the EBF(sup 3) process. An apparatus provides closed-loop control of the process, and includes an electron gun for generating an electron beam, a wire feeder for feeding a wire toward a substrate, wherein the wire is melted and progressively deposited in layers onto the substrate, a sensor(s), and a host machine. The sensor(s) measure the feature of interest during the process, and the host machine continuously evaluates the feature of interest to determine, in real time, a change occurring therein. The host machine automatically modifies control parameters to the EBF(sup 3) apparatus to control the EBF(sup 3) process in a closed-loop manner.
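
    A minimal, hypothetical sketch of the closed-loop idea described above: a sensor reading of a feature of interest (e.g., melt-pool size) is compared against a setpoint each cycle and a control parameter (e.g., beam power) is adjusted. The sensor model, setpoint and gain are illustrative, not the patented method.

        import random

        def read_feature():
            # Stand-in for a real sensor measurement of the feature of interest.
            return 1.0 + random.gauss(0.0, 0.05)

        def control_loop(setpoint=1.2, gain=0.5, steps=20):
            power = 1.0  # normalized control parameter
            for _ in range(steps):
                feature = read_feature() * power     # toy process response
                error = setpoint - feature
                power += gain * error                # proportional correction
            return power

        print("final control parameter:", control_loop())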

  13. Parameter estimating state reconstruction

    NASA Technical Reports Server (NTRS)

    George, E. B.

    1976-01-01

    Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
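
    A minimal sketch of a linear (Luenberger-type) observer recovering an unmeasured state from a measured one, in the spirit of the approach described above. The plant matrices, observer gain and integration scheme are illustrative assumptions, not the paper's system.

        import numpy as np

        A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # plant dynamics
        C = np.array([[1.0, 0.0]])                  # only the first state is measured
        L = np.array([[2.0], [3.0]])                # observer gain (A - L C is stable)

        dt = 0.01
        x = np.array([[1.0], [0.0]])                # true state
        x_hat = np.array([[0.0], [0.0]])            # observer estimate

        for _ in range(1000):
            y = C @ x                                               # measurement
            x = x + dt * (A @ x)                                    # true plant update (Euler)
            x_hat = x_hat + dt * (A @ x_hat + L @ (y - C @ x_hat))  # observer update

        print("true state:", x.ravel(), "estimate:", x_hat.ravel())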

  14. Cutting Zone Temperature Identification During Machining of Nickel Alloy Inconel 718

    NASA Astrophysics Data System (ADS)

    Czán, Andrej; Daniš, Igor; Holubják, Jozef; Zaušková, Lucia; Czánová, Tatiana; Mikloš, Matej; Martikáň, Pavol

    2017-12-01

    The quality of a machined surface is affected by the quality of the cutting process, which is influenced by many parameters. The cutting temperature is one of the most important parameters influencing tool life and the quality of machined surfaces. Its identification and determination is a key objective in specialized machining processes such as dry machining of hard-to-machine materials. It is well known that the maximum temperature is obtained on the tool rake face in the vicinity of the cutting edge. A moderate cutting edge temperature and a low thermal shock reduce tool wear, and a low temperature gradient in the machined sublayer reduces the risk of high tensile residual stresses. The thermocouple method was used to measure the temperature directly in the cutting zone. An original thermocouple was specially developed for measuring the temperature in the cutting zone and in the surface and subsurface layers of the machined surface. This paper deals with the identification of the temperature and temperature gradient during dry peripheral milling of Inconel 718. The measurements were used to identify the temperature gradients and to reconstruct the thermal distribution in the cutting zone under various cutting conditions.

  15. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

    Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation model outputs and the observed measurements, is minimized. We developed an auxiliary package which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum. Otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
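
    A minimal sketch of the calibration idea described above: tune layer thermal conductivities so that a forward model's output matches observations by minimizing an error objective. The forward model below is a toy placeholder, not the permafrost heat flow model or the DAKOTA interface itself; parameter names, bounds and data are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        observed_temps = np.array([-1.2, -0.8, -0.3])   # hypothetical borehole temperatures

        def heat_model(conductivities):
            # Placeholder forward model: a smooth function of three layer conductivities.
            k1, k2, k3 = conductivities
            return np.array([-1.5 + 0.2 * k1, -1.0 + 0.1 * k2, -0.5 + 0.1 * k3])

        def objective(k):
            # Sum of squared errors between simulated and observed temperatures.
            return np.sum((heat_model(k) - observed_temps) ** 2)

        result = minimize(objective, x0=[1.0, 1.0, 1.0], bounds=[(0.1, 4.0)] * 3)
        print("calibrated conductivities:", result.x)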

  16. Measuring dynamic kidney function in an undergraduate physiology laboratory.

    PubMed

    Medler, Scott; Harrington, Frederick

    2013-12-01

    Most undergraduate physiology laboratories are very limited in how they treat renal physiology. It is common to find teaching laboratories equipped with the capability for high-resolution digital recordings of physiological functions (muscle twitches, ECG, action potentials, respiratory responses, etc.), but most urinary laboratories still rely on a "dipstick" approach of urinalysis. Although this technique can provide some basic insights into the functioning of the kidneys, it overlooks the dynamic processes of filtration, reabsorption, and secretion. In the present article, we provide a straightforward approach of using renal clearance measurements to estimate glomerular filtration rate, fractional water reabsorption, glucose clearance, and other physiologically relevant parameters. The estimated values from our measurements in laboratory are in close agreement with those anticipated based on textbook parameters. For example, we found glomerular filtration rate to average 124 ± 45 ml/min, serum creatinine to be 1.23 ± 0.4 mg/dl, and fractional water reabsorption to be ∼96.8%. Furthermore, analyses for the class data revealed significant correlations between parameters like fractional water reabsorption and urine concentration, providing opportunities to discuss urine concentrating mechanisms and other physiological processes. The procedures outlined here are general enough that most undergraduate physiology laboratory courses should be able to implement them without difficulty.
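
    The clearance arithmetic underlying the laboratory exercise can be illustrated with the standard formulas: creatinine clearance as an estimate of GFR, and fractional water reabsorption from urine flow and GFR. The values below are hypothetical, not the class data.

        urine_creatinine = 120.0   # mg/dl
        plasma_creatinine = 1.2    # mg/dl
        urine_flow = 1.2           # ml/min

        # Creatinine clearance approximates GFR: C = (U * V) / P
        gfr = urine_creatinine * urine_flow / plasma_creatinine     # ml/min
        fractional_water_reabsorption = 1.0 - urine_flow / gfr      # fraction of filtrate reabsorbed

        print(f"GFR ~ {gfr:.0f} ml/min")
        print(f"fractional water reabsorption ~ {100 * fractional_water_reabsorption:.1f}%")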

  17. Disentangling the adult attention-deficit hyperactivity disorder endophenotype: parametric measurement of attention.

    PubMed

    Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina

    2011-11-01

    Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.

  18. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and follows three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, called renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the source estimation after minimizing the representativity errors.
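
    A minimal sketch of the least-squares flavour of source estimation mentioned above: given a source-receptor sensitivity matrix and measured concentrations, estimate source intensities at candidate locations. The matrix is random here purely for illustration; this is a generic example, not the renormalization method of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_receptors, n_candidates = 12, 5

        # Hypothetical dispersion (source-receptor) matrix from a forward/adjoint model.
        G = rng.uniform(0.1, 1.0, size=(n_receptors, n_candidates))

        true_sources = np.array([0.0, 0.0, 3.0, 0.0, 0.0])     # one active point source
        measured = G @ true_sources + rng.normal(0.0, 0.05, n_receptors)

        # Least-squares estimate of the source intensities.
        estimate, *_ = np.linalg.lstsq(G, measured, rcond=None)
        print("estimated source intensities:", np.round(estimate, 2))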

  19. Increasing signal processing sophistication in the calculation of the respiratory modulation of the photoplethysmogram (DPOP).

    PubMed

    Addison, Paul S; Wang, Rui; Uribe, Alberto A; Bergese, Sergio D

    2015-06-01

    DPOP (∆POP or Delta-POP) is a non-invasive parameter which measures the strength of respiratory modulations present in the pulse oximetry photoplethysmogram (pleth) waveform. It has been proposed as a non-invasive surrogate parameter for pulse pressure variation (PPV) used in the prediction of the response to volume expansion in hypovolemic patients. Many groups have reported on the DPOP parameter and its correlation with PPV using various semi-automated algorithmic implementations. The study reported here demonstrates the performance gains made by adding increasingly sophisticated signal processing components to a fully automated DPOP algorithm. A DPOP algorithm was coded and its performance systematically enhanced through a series of code module alterations and additions. Each algorithm iteration was tested on data from 20 mechanically ventilated OR patients. Correlation coefficients and ROC curve statistics were computed at each stage. For the purposes of the analysis we split the data into a manually selected 'stable' region subset of the data containing relatively noise free segments and a 'global' set incorporating the whole data record. Performance gains were measured in terms of correlation against PPV measurements in OR patients undergoing controlled mechanical ventilation. Through increasingly advanced pre-processing and post-processing enhancements to the algorithm, the correlation coefficient between DPOP and PPV improved from a baseline value of R = 0.347 to R = 0.852 for the stable data set, and, correspondingly, R = 0.225 to R = 0.728 for the more challenging global data set. Marked gains in algorithm performance are achievable for manually selected stable regions of the signals using relatively simple algorithm enhancements. Significant additional algorithm enhancements, including a correction for low perfusion values, were required before similar gains were realised for the more challenging global data set.
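
    DPOP is commonly computed from the maximum and minimum beat-to-beat pulse amplitudes of the pleth over a respiratory cycle, analogously to PPV; the sketch below shows that basic calculation only. The formula and the amplitude values are assumptions for illustration and do not reproduce the paper's windowing, filtering or low-perfusion correction steps.

        import numpy as np

        # Hypothetical pulse (AC) amplitudes of consecutive beats within one respiratory cycle.
        pulse_amplitudes = np.array([1.00, 0.95, 0.84, 0.80, 0.88, 0.97])

        pop_max = pulse_amplitudes.max()
        pop_min = pulse_amplitudes.min()
        dpop = (pop_max - pop_min) / ((pop_max + pop_min) / 2.0) * 100.0  # percent

        print(f"DPOP ~ {dpop:.1f}%")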

  20. Tethered Satellites as Enabling Platforms for an Operational Space Weather Monitoring System

    NASA Technical Reports Server (NTRS)

    Krause, L. Habash; Gilchrist, B. E.; Bilen, S.; Owens, J.; Voronka, N.; Furhop, K.

    2013-01-01

    Space weather nowcasting and forecasting models require assimilation of near-real time (NRT) space environment data to improve the precision and accuracy of operational products. Typically, these models begin with a climatological model to provide "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model closer toward the observed state (e.g. via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative physics-based forward-prediction calculations. The issue of required space weather observatories to meet the spatial and temporal requirements of these models is a complex one, and we do not address that with this poster. Instead, we present some examples of how tethered satellites can be used to address the shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include very long baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements will be presented for each space weather parameter considered in this study.

  1. Colour Measurements and Modeling

    NASA Astrophysics Data System (ADS)

    Jha, Shyam N.

    The most common property by which the quality of any material is judged is its appearance. Appearance includes colour, shape, size and surface condition. The analysis of colour is an especially important consideration when determining the efficacy of a variety of postharvest treatments. Consumers can easily be influenced by preconceived ideas of how a particular fruit, vegetable or processed food should appear, and marketers often attempt to improve upon what nature has painted. Recently, colour measurements have also been used as quality parameters and as indicators of some inner constituents of the material. In spite of the significance of colour in the food industries, many continue to analyze it inadequately. This chapter deals with the theory of colour, colour scales and their measurement, sampling techniques, and the modeling of colour values for correlating them with some internal quality parameters of selected fruits.

  2. Error analysis in inverse scatterometry. I. Modeling.

    PubMed

    Al-Assaad, Rayan M; Byrne, Dale M

    2007-02-01

    Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.

  3. Visual measurement of the evaporation process of a sessile droplet by dual-channel simultaneous phase-shifting interferometry.

    PubMed

    Sun, Peng; Zhong, Liyun; Luo, Chunshu; Niu, Wenhu; Lu, Xiaoxu

    2015-07-16

    To perform visual measurement of the evaporation process of a sessile droplet, a dual-channel simultaneous phase-shifting interferometry (DCSPSI) method is proposed. Based on polarization components that simultaneously generate a pair of orthogonal interferograms with a phase shift of π/2, the real-time phase of a dynamic process can be retrieved with a two-step phase-shifting algorithm. Using the proposed DCSPSI system, the transient mass (TM) of the evaporation process of sessile droplets with different initial masses was obtained by measuring the real-time 3D shape of the droplet. Moreover, the mass flux density (MFD) of the evaporating droplet and its regional distribution were also calculated and analyzed. The experimental results show that the proposed DCSPSI supplies a visual, accurate, noncontact, nondestructive, global tool for the real-time multi-parameter measurement of droplet evaporation.
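
    A minimal sketch of two-step phase retrieval from a pair of interferograms with a π/2 phase shift, assuming the background intensity is known or removed; the actual DCSPSI algorithm may estimate these quantities differently. The phase map and intensities are synthetic.

        import numpy as np

        x = np.linspace(-1.0, 1.0, 256)
        X, Y = np.meshgrid(x, x)
        phase_true = 6.0 * (X ** 2 + Y ** 2)        # synthetic droplet-like phase map

        A, B = 1.0, 0.5                             # background and modulation amplitude
        I1 = A + B * np.cos(phase_true)             # interferogram 1
        I2 = A - B * np.sin(phase_true)             # interferogram 2 (shifted by pi/2)

        # Two-step retrieval: atan2(B*sin(phi), B*cos(phi)) gives the wrapped phase.
        phase_wrapped = np.arctan2(A - I2, I1 - A)
        print("max wrapped-phase error:",
              np.max(np.abs(np.angle(np.exp(1j * (phase_wrapped - phase_true))))))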

  4. Recommended procedures for measuring aircraft noise and associated parameters

    NASA Technical Reports Server (NTRS)

    Marsh, A. H.

    1977-01-01

    Procedures are recommended for obtaining experimental values of aircraft flyover noise levels (and associated parameters). Specific recommendations are made for test criteria, instrumentation performance requirements, data-acquisition procedures, and test operations. The recommendations are based on state-of-the-art measurement capabilities available in 1976 and are consistent with the measurement objectives of the NASA Aircraft Noise Prediction Program. The recommendations are applicable to measurements of the noise produced by an airplane flying subsonically over (or past) microphones located near the surface of the ground. Aircraft types covered by the recommendations are fixed-wing airplanes powered by turbojet or turbofan engines and using conventional aerodynamic means for takeoff and landing. Various assumptions with respect to subsequent data processing and analysis were made (and are described) and the recommended measurement procedures are compatible with the assumptions. Some areas where additional research is needed relative to aircraft flyover noise measurement techniques are also discussed.

  5. In-situ acoustic signature monitoring in additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Koester, Lucas W.; Taheri, Hossein; Bigelow, Timothy A.; Bond, Leonard J.; Faierson, Eric J.

    2018-04-01

    Additive manufacturing is a rapidly maturing process for the production of complex metallic, ceramic, polymeric, and composite components. The processes used are numerous, and with the complex geometries involved this can make quality control and standardization of the process and inspection difficult. Acoustic emission measurements have been used previously to monitor a number of processes including machining and welding. The authors have identified acoustic signature measurement as a potential means of monitoring metal additive manufacturing processes using process noise characteristics and those discrete acoustic emission events characteristic of defect growth, including cracks and delamination. Results of acoustic monitoring for a metal additive manufacturing process (directed energy deposition) are reported. The work investigated correlations between acoustic emissions and process noise with variations in machine state and deposition parameters, and provided proof of concept data that such correlations do exist.

  6. Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover

    NASA Technical Reports Server (NTRS)

    Dangelo, K. R.

    1974-01-01

    A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least-squares approximation is used to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included, which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process, which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
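
    A minimal sketch of the idea described above: fitting a terrain surface by least squares using both height and gradient (slope) observations. The quadratic surface model, sample points and noise level are illustrative assumptions, not the paper's formulation.

        import numpy as np

        rng = np.random.default_rng(1)
        pts = rng.uniform(-1.0, 1.0, size=(30, 2))
        x, y = pts[:, 0], pts[:, 1]

        # Surface model z = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2 (true coefficients).
        true = np.array([0.5, 0.2, -0.1, 0.3, 0.05, -0.2])
        z = true[0] + true[1]*x + true[2]*y + true[3]*x**2 + true[4]*x*y + true[5]*y**2
        dzdx = true[1] + 2*true[3]*x + true[4]*y
        dzdy = true[2] + true[4]*x + 2*true[5]*y

        # Design matrices for height and the two gradient components.
        H  = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
        Gx = np.column_stack([np.zeros_like(x), np.ones_like(x), np.zeros_like(x), 2*x, y, np.zeros_like(x)])
        Gy = np.column_stack([np.zeros_like(x), np.zeros_like(x), np.ones_like(x), np.zeros_like(x), x, 2*y])

        A = np.vstack([H, Gx, Gy])
        b = np.concatenate([z, dzdx, dzdy]) + rng.normal(0.0, 0.01, 3 * len(x))

        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("recovered coefficients:", np.round(coeffs, 3))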

  7. Wave parameters comparisons between High Frequency (HF) radar system and an in situ buoy: a case study

    NASA Astrophysics Data System (ADS)

    Fernandes, Maria; Alonso-Martirena, Andrés; Agostinho, Pedro; Sanchez, Jorge; Ferrer, Macu; Fernandes, Carlos

    2015-04-01

    The coastal zone is an important area for the development of maritime countries, whether in terms of recreation, energy exploitation, weather forecasting or national security. Field measurements are the basis of understanding how coastal and oceanic processes occur. Most processes occur over long timescales and large spatial ranges, like the variation of mean sea level. These processes also involve a variety of factors such as waves, winds, tides, storm surges and currents that cause considerable interference with such phenomena. Measurements of waves have been carried out using different techniques. The instruments used to measure wave parameters can be very different, i.e. buoys, ship-based equipment like sonar, and satellites. Each instrument has its own advantages and disadvantages depending on the study subject. The purpose of this study is to evaluate the behaviour of a different technology available and presently adopted in wave measurement. In the past few years the measurement of waves using High Frequency (HF) radars has seen several developments. Such a method is already established as a powerful tool for measuring the pattern of surface currents, but its use in wave measurements, especially in the dual arrangement, is recent. Measurement of the backscatter of the HF radar wave provides the raw dataset which is analyzed to give directional data of surface elevation at each range cell. Buoys and radars have advantages and disadvantages, and their accuracy is discussed in this presentation. A major advantage of HF radar systems is that they are unaffected by weather, clouds or changing ocean conditions. The HF radar system is a very useful tool for the measurement of waves over a wide area with real-time observation, but it still lacks a method to check its accuracy. The primary goal of this study was to show how the HF radar system responds to highly energetic variations when compared to wave buoy data. The bulk wave parameters used (significant wave height, period and direction) were obtained during 2013 and 2014 from one 13.5 MHz CODAR SeaSonde radar station of the Hydrographic Institute, located at Espichel Cape (Portugal). These data were compared with those obtained from one Datawell Directional Waverider wave buoy, also from the Hydrographic Institute, moored off Sines (Portugal) at 100 m depth. For this first approach, it was assumed that all the waves are in a deep-water situation. Results showed that during highly energetic periods, the HF radar system revealed a good correlation with wave buoy data, following the variations of the bulk wave parameters.

  8. Understanding controls of hydrologic processes across two monolithological catchments using model-data integration

    NASA Astrophysics Data System (ADS)

    Xiao, D.; Shi, Y.; Li, L.

    2016-12-01

    Field measurements are important for understanding the fluxes of water, energy, sediment, and solutes in the Critical Zone; however, they are expensive in time, money, and labor. This study aims to assess the model predictability of hydrological processes in a watershed using information from another, intensively measured watershed. We compare two watersheds of different lithology using national datasets, field measurements, and the physics-based model Flux-PIHM. We focus on two monolithological, forested watersheds under the same climate in the Shale Hills Susquehanna CZO in central Pennsylvania: the shale-based Shale Hills (SSH, 0.08 km2) and the sandstone-based Garner Run (GR, 1.34 km2). We first tested the transferability of calibration coefficients from SSH to GR. We found that without any calibration the model can successfully predict seasonal average soil moisture and discharge, which shows the advantage of a physics-based model; however, it cannot precisely capture some peaks or the runoff in summer. The model reproduces the GR field data better after calibrating the soil hydrology parameters. In particular, the percentage of sand turns out to be a critical parameter in reproducing the data. With sandstone being the dominant lithology, GR has a much higher sand percentage than SSH (48.02% vs. 29.01%), leading to higher hydraulic conductivity, lower overall water storage capacity, and in general lower soil moisture. This is consistent with area-averaged soil moisture observations using the cosmic-ray soil moisture observing system (COSMOS) at the two sites. This work indicates that some parameters, including the evapotranspiration parameters, are transferable due to similar climatic and land cover conditions. However, the key parameters that control soil moisture, including the sand percentage, need to be recalibrated, reflecting the key role of soil hydrological properties.

  9. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum outline swept by the vehicle as a result of various adverse effects during the running of the train; it is an important basis for setting railway clearance boundaries. At present, the measurement of the dynamic envelope curve of high-speed vehicles is mainly achieved by binocular vision, and existing measuring systems suffer from poor portability, a complicated procedure and high cost. In this paper, a new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed, and the measurement system parameters, the calibration of the wide-field-of-view camera and the calibration of the laser plane are designed and optimized. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data, validating the feasibility and adaptability of the measurement system. The system offers lower cost, a simpler measurement and data processing procedure and more reliable data, and it requires no matching algorithm.

  10. Difference and similarity of dielectric relaxation processes among polyols

    NASA Astrophysics Data System (ADS)

    Minoguchi, Ayumi; Kitai, Kei; Nozaki, Ryusuke

    2003-09-01

    Complex permittivity measurements were performed on sorbitol, xylitol, and a sorbitol-xylitol mixture in the supercooled liquid state over an extremely wide frequency range from 10 μHz to 500 MHz at temperatures near and above the glass transition temperature. We determined, to the best of our knowledge for the first time, the detailed behavior of the relaxation parameters such as relaxation frequency and broadening against temperature, not only for the α process but also for the β process above the glass transition temperature. Since supercooled liquids are in a quasi-equilibrium state, the behavior of all the relaxation parameters for the β process can be compared among the polyols, as can those for the α process. The relaxation frequencies of the α processes follow the Vogel-Fulcher-Tammann behavior, and the loci in the Arrhenius diagram differ in accordance with the difference in glass transition temperatures. On the other hand, the relaxation frequencies of the β processes, which are often called Johari-Goldstein processes, follow an Arrhenius-type temperature dependence. The relaxation parameters for the β process are quite similar among the polyols at temperatures below the αβ merging temperature, TM. However, they show anomalous behavior near TM, which depends on the molecular size of the materials. These results suggest that the origin of the β process is essentially the same among the polyols.
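
    A minimal sketch of the two temperature dependences mentioned above: a Vogel-Fulcher-Tammann (VFT) law for the α-process relaxation frequency and an Arrhenius law for the β (Johari-Goldstein) process. The parameter values are illustrative, not fitted to the sorbitol/xylitol data.

        import numpy as np

        k_B = 8.617e-5  # Boltzmann constant in eV/K

        def vft(T, f0=1e13, B=2000.0, T0=230.0):
            # alpha-process relaxation frequency (Hz), VFT form f0*exp(-B/(T-T0))
            return f0 * np.exp(-B / (T - T0))

        def arrhenius(T, f0=1e14, Ea=0.7):
            # beta-process relaxation frequency (Hz), activation energy Ea in eV
            return f0 * np.exp(-Ea / (k_B * T))

        for T in (270.0, 290.0, 310.0):
            print(f"T = {T:.0f} K:  f_alpha ~ {vft(T):.2e} Hz,  f_beta ~ {arrhenius(T):.2e} Hz")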

  11. NASA Cold Land Processes Experiment (CLPX 2002/03): Ground-based and near-surface meteorological observations

    Treesearch

    Kelly Elder; Don Cline; Angus Goodbody; Paul Houser; Glen E. Liston; Larry Mahrt; Nick Rutter

    2009-01-01

    A short-term meteorological database has been developed for the Cold Land Processes Experiment (CLPX). This database includes meteorological observations from stations designed and deployed exclusively for CLPX as well as observations available from other sources located in the small regional study area (SRSA) in north-central Colorado. The measured weather parameters...

  12. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... sample run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  13. Elucidation and visualization of solid-state transformation and mixing in a pharmaceutical mini hot melt extrusion process using in-line Raman spectroscopy.

    PubMed

    Van Renterghem, Jeroen; Kumar, Ashish; Vervaet, Chris; Remon, Jean Paul; Nopens, Ingmar; Vander Heyden, Yvan; De Beer, Thomas

    2017-01-30

    Mixing of raw materials (drug+polymer) in the investigated mini pharma melt extruder is achieved by using co-rotating conical twin screws and an internal recirculation channel. In-line Raman spectroscopy was implemented in the barrels, allowing monitoring of the melt during processing. The aim of this study was twofold: to investigate (I) the influence of key process parameters (screw speed - barrel temperature) upon the product solid-state transformation during processing of a sustained release formulation in recirculation mode, and (II) the influence of process parameters (screw speed - barrel temperature - recirculation time) upon mixing of a crystalline drug (tracer) in an amorphous polymer carrier by means of residence time distribution (RTD) measurements. The results indicated a faster mixing endpoint with increasing screw speed. Processing a high drug load formulation above the drug melting temperature resulted in the production of amorphous drug, whereas processing below the drug melting point produced solid dispersions with partially amorphous/crystalline drug. Furthermore, increasing the screw speed resulted in lower drug crystallinity of the solid dispersion. RTD measurements elucidated the improved mixing capacity when using the recirculation channel. In-line Raman spectroscopy has been shown to be an adequate PAT tool for product solid-state monitoring and elucidation of the mixing behavior during processing in a mini extruder.

  14. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors in calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty for simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed the process to be optimized while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows estimation of calibrator uncertainty for the optimization of various value-assignment processes, with a reduced number of measurements and lower reagent costs, while satisfying the requirements on uncertainty. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
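
    A minimal sketch of the Monte Carlo idea described above: propagate the relative uncertainty of a reference material through a simplified value-transfer step to a calibrator and inspect the resulting assigned-value uncertainty. The one-step process model and the uncertainty figures are illustrative, not the paper's protocol.

        import numpy as np

        rng = np.random.default_rng(42)
        n_trials = 100_000

        reference_value = 100.0          # nominal concentration of the reference material
        u_reference = 0.037              # relative standard uncertainty of the reference (3.7%)
        u_transfer = 0.008               # relative uncertainty added by the value-transfer step

        reference = rng.normal(reference_value, u_reference * reference_value, n_trials)
        transfer_factor = rng.normal(1.0, u_transfer, n_trials)
        assigned_values = reference * transfer_factor

        rel_uncertainty = assigned_values.std() / assigned_values.mean()
        print(f"relative uncertainty of assigned value ~ {100 * rel_uncertainty:.2f}%")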

  15. Coffee husk composting: An investigation of the process using molecular and non-molecular tools

    PubMed Central

    Shemekite, Fekadu; Gómez-Brandón, María; Franke-Whittle, Ingrid H.; Praehauser, Barbara; Insam, Heribert; Assefa, Fassil

    2014-01-01

    Various parameters were measured during a 90-day composting process of coffee husk with cow dung (Pile 1), with fruit/vegetable wastes (Pile 2) and coffee husk alone (Pile 3). Samples were collected on days 0, 32 and 90 for chemical and microbiological analyses. C/N ratios of Piles 1 and 2 decreased significantly over the 90 days. The highest bacterial counts at the start of the process and highest actinobacterial counts at the end of the process (Piles 1 and 2) indicated microbial succession with concomitant production of compost relevant enzymes. Denaturing gradient gel electrophoresis of rDNA and COMPOCHIP microarray analysis indicated distinctive community shifts during the composting process, with day 0 samples clustering separately from the 32 and 90-day samples. This study, using a multi-parameter approach, has revealed differences in quality and species diversity of the three composts. PMID:24369846

  16. Processing of High Resolution, Multiparametric Radar Data for the Airborne Dual-Frequency Precipitation Radar APR-2

    NASA Technical Reports Server (NTRS)

    Tanelli, Simone; Meagher, Jonathan P.; Durden, Stephen L.; Im, Eastwood

    2004-01-01

    Following the successful Precipitation Radar (PR) of the Tropical Rainfall Measuring Mission, a new airborne, 14/35 GHz rain profiling radar, known as Airborne Precipitation Radar - 2 (APR-2), has been developed as a prototype for an advanced, dual-frequency spaceborne radar for a future spaceborne precipitation measurement mission. This airborne instrument is capable of making simultaneous measurements of rainfall parameters, including co-pol and cross-pol rain reflectivities and vertical Doppler velocities, at 14 and 35 GHz. Furthermore, it also features several advanced technologies for performance improvement, including real-time data processing, low-sidelobe dual-frequency pulse compression, and a dual-frequency scanning antenna. Since August 2001, APR-2 has been deployed on the NASA P3 and DC8 aircraft in four experiments, including CAMEX-4 and the Wakasa Bay Experiment. Raw radar data are first processed to obtain reflectivity, LDR (linear depolarization ratio), and Doppler velocity measurements. The dataset is then processed iteratively to accurately estimate the true aircraft navigation parameters and to classify the surface return. These intermediate products are then used to refine reflectivity and LDR calibrations (by analyzing clear-air ocean surface returns), and to correct Doppler measurements for the aircraft motion. Finally, the melting layer of precipitation is detected and its boundaries and characteristics are identified at the APR-2 range resolution of 30 m. The resulting 3D dataset will be used for validation of other airborne and spaceborne instruments, development of multiparametric rain/snow retrieval algorithms, and melting layer characterization and statistics.

  17. Constraints on the ^22Ne(α,n)^25Mg reaction rate from ^natMg+n Total and ^25Mg(n,γ ) Cross Sections

    NASA Astrophysics Data System (ADS)

    Koehler, Paul

    2002-10-01

    The ^22Ne(α,n)^25Mg reaction is the neutron source during the s process in massive and intermediate mass stars as well as a secondary neutron source during the s process in low mass stars. Therefore, an accurate determination of this rate is important for a better understanding of the origin of nuclides heavier than iron as well as for improving s-process models. Also, because the s process produces seed nuclides for a later p process in massive stars, an accurate value for this rate is important for a better understanding of the p process. Because the lowest observed resonance in direct ^22Ne(α,n)^25Mg measurements is considerably above the most important energy range for s-process temperatures, the uncertainty in this rate is dominated by the poorly known properties of states in ^26Mg between this resonance and threshold. Neutron measurements can observe these states with much better sensitivity and determine their parameters much more accurately than direct ^22Ne(α,n)^25Mg measurements. I have analyzed previously reported Mg+n total and ^25Mg(n,γ) cross sections to obtain a much improved set of resonance parameters for states in ^26Mg in this region, and an improved estimate of the uncertainty in the ^22Ne(α,n)^25Mg reaction rate. This work was supported by the U.S. DOE under contract No. DE-AC05-00OR22725 with UT-Battelle, LLC.

  18. Near Real Time Review of Instrument Performance using the Airborne Data Processing and Analysis Software Package

    NASA Astrophysics Data System (ADS)

    Delene, D. J.

    2014-12-01

    Research aircraft that conduct atmospheric measurements carry an increasing array of instrumentation. While on-board personnel constantly review instrument parameters and time series plots, there is an overwhelming number of items to track. Furthermore, directing the aircraft flight takes up much of the flight scientist's time. Typically, a flight engineer is given the responsibility of reviewing the status of on-board instruments. While major issues like not receiving data are quickly identified during a flight, subtle issues like low but believable concentration measurements may go unnoticed. Therefore, it is critical to review data after a flight in near real time. The Airborne Data Processing and Analysis (ADPAA) software package used by the University of North Dakota automates the post-processing of aircraft flight data. Utilizing scripts to process the measurements recorded by data acquisition systems enables the generation of data files within an hour of flight completion. The ADPAA Cplot visualization program enables plots to be quickly generated for timely review of all recorded and processed parameters. Near real time review of aircraft flight data enables instrument problems to be identified, investigated and fixed before conducting another flight. On one flight, near real time data review resulted in the identification of unusually low measurements of cloud condensation nuclei, and rapid data visualization enabled the timely investigation of the cause. As a result, a leak was found and fixed before the next flight. Hence, with the high cost of aircraft flights, it is critical to find and fix instrument problems in a timely manner. The use of automated processing scripts and quick visualization software enables scientists to review aircraft flight data in near real time to identify potential problems.

  19. The application of information theory for the research of aging and aging-related diseases.

    PubMed

    Blokh, David; Stambler, Ilia

    2017-10-01

    This article reviews the application of information-theoretical analysis, employing measures of entropy and mutual information, for the study of aging and aging-related diseases. The research of aging and aging-related diseases is particularly suitable for the application of information theory methods, as aging processes and related diseases are multi-parametric, with continuous parameters coexisting alongside discrete parameters, and with the relations between the parameters being as a rule non-linear. Information theory provides unique analytical capabilities for the solution of such problems, with unique advantages over common linear biostatistics. Among the age-related diseases, information theory has been used in the study of neurodegenerative diseases (particularly using EEG time series for diagnosis and prediction), cancer (particularly for establishing individual and combined cancer biomarkers), diabetes (mainly utilizing mutual information to characterize the diseased and aging states), and heart disease (mainly for the analysis of heart rate variability). Few works have employed information theory for the analysis of general aging processes and frailty, as underlying determinants and possible early preclinical diagnostic measures for aging-related diseases. Generally, the use of information-theoretical analysis permits not only establishing the (non-linear) correlations between diagnostic or therapeutic parameters of interest, but may also provide a theoretical insight into the nature of aging and related diseases by establishing the measures of variability, adaptation, regulation or homeostasis, within a system of interest. It may be hoped that the increased use of such measures in research may considerably increase diagnostic and therapeutic capabilities and the fundamental theoretical mathematical understanding of aging and disease. Copyright © 2016 Elsevier Ltd. All rights reserved.
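
    To make the information-theoretic quantities above concrete, the following sketch estimates the mutual information between a continuous biomarker (discretized into bins) and a discrete disease label. It is a minimal, generic illustration on synthetic data; the variable names, bin count and values are assumptions, not taken from the reviewed studies.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram-based estimate of I(X;Y) in bits for a continuous
    parameter x and a discrete label y (both 1-D arrays)."""
    y_vals = np.unique(y)
    edges = np.histogram_bin_edges(x, bins=bins)
    # Joint counts: rows = bins of x, columns = categories of y
    joint = np.zeros((bins, y_vals.size))
    for j, v in enumerate(y_vals):
        joint[:, j], _ = np.histogram(x[y == v], bins=edges)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

# Illustrative use on synthetic data (not real clinical measurements)
rng = np.random.default_rng(0)
label = rng.integers(0, 2, 500)                  # 0 = healthy, 1 = diseased
marker = rng.normal(loc=1.5 * label, scale=1.0)  # biomarker shifted by the label
print(round(mutual_information(marker, label), 3))
```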

  20. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.

    2015-08-01

    Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps. This yielded the posterior distribution of GPP at each half hour and allowed the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared the informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
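
    As a hedged illustration of the Bayesian step described above, the sketch below fits NRH parameters to synthetic half-hourly NEE data with a plain Metropolis sampler. The NRH parameterization (quantum yield, maximum GPP, curvature, constant respiration), the flat prior bounds, proposal widths and data are all illustrative assumptions, not the study's actual model setup or tower measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

def nrh_gpp(par, rad):
    """Non-rectangular hyperbola light response (a common parameterization)."""
    alpha, pmax, theta, _ = par
    s = alpha * rad + pmax
    return (s - np.sqrt(s * s - 4.0 * theta * alpha * rad * pmax)) / (2.0 * theta)

def log_post(par, rad, nee_obs, sigma=1.0):
    alpha, pmax, theta, resp = par
    # Flat priors on plausible ranges (illustrative bounds only)
    if not (0 < alpha < 0.2 and 0 < pmax < 60 and 0 < theta < 1 and 0 < resp < 20):
        return -np.inf
    nee_mod = resp - nrh_gpp(par, rad)            # NEE = respiration - GPP
    return -0.5 * np.sum((nee_obs - nee_mod) ** 2) / sigma ** 2

# Synthetic half-hourly data standing in for tower measurements
rad = rng.uniform(0.0, 1500.0, 200)               # radiation (umol m-2 s-1)
true = np.array([0.05, 25.0, 0.7, 4.0])
nee = true[3] - nrh_gpp(true, rad) + rng.normal(0.0, 1.0, rad.size)

# Metropolis sampler: random-walk proposals, standard accept/reject rule
par = np.array([0.03, 20.0, 0.5, 3.0])
lp = log_post(par, rad, nee)
step = np.array([0.005, 1.0, 0.05, 0.3])
chain = []
for _ in range(20000):
    cand = par + rng.normal(0.0, step)
    lp_cand = log_post(cand, rad, nee)
    if np.log(rng.uniform()) < lp_cand - lp:
        par, lp = cand, lp_cand
    chain.append(par.copy())
posterior = np.array(chain[5000:])                # discard burn-in
print("posterior means (alpha, Pmax, theta, resp):", posterior.mean(axis=0).round(3))
```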

  1. Calibration of infiltration parameters on hydrological tank model using runoff coefficient of rational method

    NASA Astrophysics Data System (ADS)

    Suryoputro, Nugroho; Suhardjono, Soetopo, Widandi; Suhartanto, Ery

    2017-09-01

    In calibrating hydrological models, there are generally two stages of activity: 1) determining realistic initial model parameters that represent the physical processes of the natural components, and 2) entering the initial parameter values, which are then refined by trial and error or automatically to obtain optimal values. Determining a realistic initial value requires experience and user knowledge of the model, which is a problem for novice model users. This paper presents another approach to estimating the infiltration parameters in the tank model. The parameters are approximated using the runoff coefficient of the rational method. The infiltration parameter is approximated simply as the percentage of total rainfall minus the percentage of runoff. It is expected that the results of this research will accelerate the calibration process of tank model parameters. The research was conducted on the Kali Bango sub-watershed in Malang Regency, with an area of 239.71 km2. Infiltration measurements were carried out from January 2017 to March 2017. Soil samples were analysed at the Soil Physics Laboratory, Department of Soil Science, Faculty of Agriculture, Universitas Brawijaya. Rainfall and discharge data were obtained from UPT PSAWS Bango Gedangan in Malang. Temperature, evaporation, relative humidity, and wind speed data were obtained from the BMKG station of Karang Ploso, Malang. The results showed that the initial value of the infiltration coefficient at the top tank outlet can be determined using the runoff coefficient of the rational method, with good results.
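
    A minimal sketch of the stated rule of thumb (infiltration fraction approximated as the total rainfall percentage minus the runoff percentage); the function name and the example coefficient are illustrative, not values from the study.

```python
def initial_infiltration_fraction(runoff_coefficient: float) -> float:
    """Initial estimate of the tank-model infiltration parameter as the fraction
    of rainfall that does not appear as direct runoff."""
    if not 0.0 <= runoff_coefficient <= 1.0:
        raise ValueError("runoff coefficient C must lie in [0, 1]")
    return 1.0 - runoff_coefficient

# Example: a catchment with rational-method C = 0.45 (illustrative value)
print(round(initial_infiltration_fraction(0.45), 2))   # -> 0.55
```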

  2. Influences of the manufacturing process chain design on the near surface condition and the resulting fatigue behaviour of quenched and tempered SAE 4140

    NASA Astrophysics Data System (ADS)

    Klein, M.; Eifler, D.

    2010-07-01

    To analyse interactions between the individual steps of process chains and variations in material properties, especially the microstructure and the resulting mechanical properties, specimens with a tension screw geometry were manufactured using five process chains. The different process chains as well as their parameters influence the near surface condition and consequently the fatigue behaviour in a characteristic manner. The cyclic deformation behaviour of these specimens can be benchmarked equivalently with conventional strain measurements as well as with high-precision temperature and electrical resistance measurements. The evolution of the temperature values provides substantial information on cyclic-load-dependent changes in the microstructure.

  3. A rapid and accurate method, ventilated chamber C-history method, of measuring the emission characteristic parameters of formaldehyde/VOCs in building materials.

    PubMed

    Huang, Shaodan; Xiong, Jianyin; Zhang, Yinping

    2013-10-15

    The indoor pollution caused by formaldehyde and volatile organic compounds (VOCs) emitted from building materials has an adverse effect on people's health. It is necessary to understand and control the behaviors of the emission sources. Based on a detailed mass transfer analysis of the emission process in a ventilated chamber, this paper proposes a novel method of measuring the three emission characteristic parameters, i.e., the initial emittable concentration, the diffusion coefficient and the partition coefficient. A linear correlation between the logarithm of the dimensionless concentration and time is derived. The three parameters can then be calculated from the intercept and slope of the correlation. Compared with the closed-chamber C-history method, the test is performed under ventilated conditions, so some commonly used measurement instruments (e.g., GC/MS, HPLC) can be applied. Compared with other methods, the present method can rapidly and accurately measure the three parameters, with an experimental time of less than 12 h and R² ranging from 0.96 to 0.99 for the cases studied. An independent experiment was carried out to validate the developed method, and good agreement was observed between the simulations based on the determined parameters and the experiments. The present method should prove useful for quick characterization of formaldehyde/VOC emissions from indoor materials. Copyright © 2013 Elsevier B.V. All rights reserved.
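
    To make the fitting step concrete, the sketch below regresses ln(dimensionless concentration) against time and reports the slope, intercept and R². How the dimensionless concentration is defined, and how the slope and intercept map onto the three emission parameters, follow the correlations derived in the paper and are not reproduced here; the chamber readings shown are placeholders.

```python
import numpy as np

def fit_c_history(t_hours, dimensionless_conc):
    """Fit the linear relation between ln(dimensionless concentration) and time
    used by the ventilated-chamber C-history method.  Returns (slope, intercept,
    R^2); converting these into the initial emittable concentration, diffusion
    coefficient and partition coefficient requires the paper's own correlations."""
    t = np.asarray(t_hours, float)
    y = np.log(np.asarray(dimensionless_conc, float))
    A = np.vstack([t, np.ones_like(t)]).T
    (slope, intercept), res, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1.0 - res[0] / np.sum((y - y.mean()) ** 2)
    return slope, intercept, r2

# Placeholder chamber readings over a 12 h test
t = [1, 2, 4, 6, 8, 10, 12]
c_dimless = [0.60, 0.43, 0.22, 0.115, 0.060, 0.031, 0.016]
print(tuple(round(v, 3) for v in fit_c_history(t, c_dimless)))
```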

  4. Accuracy-enhanced constitutive parameter identification using virtual fields method and special stereo-digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongya; Pan, Bing; Grédiac, Michel; Song, Weidong

    2018-04-01

    The virtual fields method (VFM) is generally used with two-dimensional digital image correlation (2D-DIC) or grid method (GM) for identifying constitutive parameters. However, when small out-of-plane translation/rotation occurs to the test specimen, 2D-DIC and GM are prone to yield inaccurate measurements, which further lessen the accuracy of the parameter identification using VFM. In this work, an easy-to-implement but effective "special" stereo-DIC (SS-DIC) method is proposed for accuracy-enhanced VFM identification. The SS-DIC can not only deliver accurate deformation measurement without being affected by unavoidable out-of-plane movement/rotation of a test specimen, but can also ensure evenly distributed calculation data in space, which leads to simple data processing. Based on the accurate kinematics fields with evenly distributed measured points determined by SS-DIC method, constitutive parameters can be identified by VFM with enhanced accuracy. Uniaxial tensile tests of a perforated aluminum plate and pure shear tests of a prismatic aluminum specimen verified the effectiveness and accuracy of the proposed method. Experimental results show that the constitutive parameters identified by VFM using SS-DIC are more accurate and stable than those identified by VFM using 2D-DIC. It is suggested that the proposed SS-DIC can be used as a standard measuring tool for mechanical identification using VFM.

  5. Optimization of Robotic Spray Painting process Parameters using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar

    2018-02-01

    The automated spray painting process has recently gained interest in industry and research due to the extensive application of spray painting in the automobile industry. Automating the spray painting process has the advantages of improved quality, productivity, reduced labor, a clean environment and, particularly, cost effectiveness. This study investigates the performance characteristics of an industrial Fanuc 250ib robot for an automated painting process using Taguchi's Design of Experiments technique. The experiment is designed using Taguchi's L25 orthogonal array by considering three factors and five levels for each factor. The objective of this work is to explore the major control parameters and to optimize them for improved quality of the paint coating, measured in terms of Dry Film Thickness (DFT), which also results in reduced rejection. Further, Analysis of Variance (ANOVA) is performed to determine the influence of individual factors on DFT. It is observed that shaping air and paint flow are the most influential parameters. A multiple regression model is formulated for estimating predicted values of DFT. A confirmation test is then conducted, and the comparison results show that the error is within an acceptable level.
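
    The sketch below illustrates the kind of Taguchi analysis described above: a signal-to-noise ratio is computed per run and averaged per factor level to expose the main effects. It assumes a larger-is-better S/N for DFT (the study may well have used a nominal-is-best formulation) and uses made-up factor names, coded levels and measurements rather than the actual L25 data.

```python
import numpy as np

# Illustrative slice of a Taguchi-style analysis (placeholder data, 3 factors)
levels = np.array([          # columns: shaping air, paint flow, gun speed (coded)
    [1, 1, 1], [1, 2, 2], [2, 3, 1], [3, 1, 2], [2, 2, 3],
])
dft_reps = np.array([        # repeated DFT measurements per run (um)
    [41.0, 42.5], [48.2, 47.1], [44.0, 45.3], [50.1, 49.0], [46.5, 47.8],
])

def sn_larger_is_better(y):
    """Taguchi larger-the-better signal-to-noise ratio (dB) per run."""
    return -10.0 * np.log10(np.mean(1.0 / np.asarray(y, float) ** 2, axis=1))

sn = sn_larger_is_better(dft_reps)
for f, name in enumerate(["shaping air", "paint flow", "gun speed"]):
    means = {int(lv): round(float(sn[levels[:, f] == lv].mean()), 2)
             for lv in np.unique(levels[:, f])}
    print(name, "level means of S/N:", means)
```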

  6. Parameter learning for performance adaptation

    NASA Technical Reports Server (NTRS)

    Peek, Mark D.; Antsaklis, Panos J.

    1990-01-01

    A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
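
    Since the optimization search uses a variation of the Hooke and Jeeves multidimensional search, a generic textbook form of that pattern search is sketched below. It is not the authors' specific variant, and the quadratic "performance" function stands in for a performance index that would in practice be measured via simulation or experiment.

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=500):
    """Basic Hooke-Jeeves pattern search: exploratory moves along each
    coordinate, then a pattern move through the improved point."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        base, fbase = x.copy(), fx
        # Exploratory moves around the current point
        for i in range(x.size):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        if fx < fbase:
            # Pattern move: jump further along the successful direction
            pattern = x + (x - base)
            fp = f(pattern)
            if fp < fx:
                x, fx = pattern, fp
        else:
            step *= shrink            # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

# Stand-in performance index to minimize (placeholder, not the antenna model)
performance = lambda p: (p[0] - 1.2) ** 2 + 3.0 * (p[1] + 0.7) ** 2
print(hooke_jeeves(performance, x0=[0.0, 0.0]))
```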

  7. Experiments and simulation for 6061-T6 aluminum alloy resistance spot welded lap joints

    NASA Astrophysics Data System (ADS)

    Florea, Radu Stefanel

    This comprehensive study is the first to quantify the fatigue performance, failure loads, and microstructure of resistance spot welding (RSW) in 6061-T6 aluminum (Al) alloy according to welding parameters and process sensitivity. The extensive experimental, theoretical and simulated analyses will provide a framework to optimize the welding of lightweight structures for more fuel-efficient automotive and military applications. The research comprised four primary components. The first section involved using electron back scatter diffraction (EBSD) scanning, tensile testing, laser beam profilometry (LBP) measurements, and optical microscopy (OM) images to experimentally investigate failure loads and deformation of the Al-alloy resistance spot welded joints. Three welding conditions, as well as nugget and microstructure characteristics, were quantified according to predefined process parameters. Quasi-static tensile tests were used to characterize the failure loads in specimens based upon these same process parameters. Profilometer results showed that increasing the applied welding current deepened the weld imprints. The EBSD scans revealed the strong dependence of grain size and orientation function on the process parameters. For the second section, the fatigue behavior of the RSW'ed joints was experimentally investigated. The process optimization included consideration of the forces, currents, and times for both the main weld and post-heating. Load control cyclic tests were conducted on single weld lap-shear joint coupons to characterize the fatigue behavior in spot welded specimens. Results demonstrate that welding parameters do indeed significantly affect the microstructure and fatigue performance for these welds. In the third section, residual strains of the resistance spot welded joints were measured in three different directions, denoted in-plane longitudinal, in-plane transversal, and normal, and captured in the fusion zone, heat affected zone and base metal of the joints. Neutron diffraction results showed residual stresses in the weld are approximately 40% lower than the yield strength of the parent material, with maximum variation occurring in the vertical position of the specimen because of the orientation of electrode clamping forces that produce a non-uniform solidification pattern. In the final section, a theoretical continuum modeling framework for 6061-T6 aluminum resistance spot welded joints is presented.

  8. NASA/MSFC FY88 Global Scale Atmospheric Processes Research Program Review

    NASA Technical Reports Server (NTRS)

    Wilson, Greg S. (Editor); Leslie, Fred W. (Editor); Arnold, J. E. (Editor)

    1989-01-01

    Interest in environmental issues and the magnitude of environmental changes continues. One way to gain more understanding of the atmosphere is to make measurements on a global scale from space. The Earth Observation System is a series of new sensors to measure atmospheric parameters globally. Analysis of satellite data, by developing algorithms to interpret the radiance information, improves this understanding and also defines requirements for these sensors. One measure of knowledge of the atmosphere lies in the ability to predict its behavior. Use of numerical and experimental models provides a better understanding of these processes. These efforts are described in the context of satellite data analysis and fundamental studies of atmospheric dynamics which examine selected processes important to the global circulation.

  9. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging.

    PubMed

    Tremsin, Anton S; Perrodin, Didier; Losko, Adrian S; Vogel, Sven C; Bourke, Mark A M; Bizarri, Gregory A; Bourret, Edith D

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  10. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A. M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-04-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  11. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    PubMed Central

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A.M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-01-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes. PMID:28425461

  12. Review on innovative techniques in oil sludge bioremediation

    NASA Astrophysics Data System (ADS)

    Mahdi, Abdullah M. El; Aziz, Hamidi Abdul; Eqab, Eqab Sanoosi

    2017-10-01

    Petroleum hydrocarbon waste is produced in significant amounts by refineries worldwide. In Libya, approximately 10,000 tons of oil sludge (hydrocarbon waste mixtures) is generated in oil refineries annually. Insufficient treatment of these wastes can threaten human health and safety as well as the environment. One of the major challenges faced by petroleum refineries is the safe disposal of oil sludge generated during the cleaning and refining process stages of crude storage facilities. This paper reviews hydrocarbon sludge characteristics and conventional methods for remediating oil hydrocarbons from sludge. The study draws intensively on earlier literature to describe recently developed, innovative technologies for the bioremediation of oily hydrocarbon sludge. Conventional characterization parameters, or measurable factors, can be grouped into chemical, physical, and biological parameters: (1) chemical parameters are necessary where the topsoil environment is to be utilized, as they relate to the presence of nutrients and toxic compounds; (2) physical parameters provide general data on sludge processing and handling; (3) biological parameters provide data on microbial activity and the presence of organic matter, which are used to evaluate the safety of the facilities. The objective of this research is to assess the feasibility of bioremediating oil sludge from the Marsa El Hariga Terminal and Refinery (Tobruk).

  13. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  14. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α } with frequency, f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
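
    As a hedged illustration of building the noise model as a filtered sum rather than combining components in quadrature, the sketch below synthesizes white plus power-law noise using a generic fractional-differencing filter. The filter Langbein actually constructs, and its role inside the MLE covariance, are described in the paper; the amplitudes, spectral index and series length here are placeholders.

```python
import numpy as np

def power_law_filter(n, alpha):
    """Impulse response h_k that turns unit white noise into power-law noise
    with spectrum ~ 1/f^alpha (fractional-differencing recursion, d = alpha/2)."""
    d = alpha / 2.0
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + d) / k
    return h

def synth_noise(n, sigma_white, sigma_pl, alpha, rng):
    """White plus power-law noise added in the time domain, mirroring the
    'filter that adds noise processes' idea discussed above."""
    white = rng.standard_normal(n)
    colored = np.convolve(rng.standard_normal(n), power_law_filter(n, alpha))[:n]
    return sigma_white * white + sigma_pl * colored

rng = np.random.default_rng(0)
series = synth_noise(2000, sigma_white=1.0, sigma_pl=0.5, alpha=1.5, rng=rng)
print(series[:5].round(3))
```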

  15. Determination of melt pool dimensions using DOE-FEM and RSM with process window during SLM of Ti6Al4V powder

    NASA Astrophysics Data System (ADS)

    Zhuang, Jyun-Rong; Lee, Yee-Ting; Hsieh, Wen-Hsin; Yang, An-Shik

    2018-07-01

    Selective laser melting (SLM) shows a positive prospect as an additive manufacturing (AM) technique for fabrication of 3D parts with complicated structures. A transient thermal model was developed with the finite element method (FEM) to simulate the thermal behavior, predicting the time evolution of the temperature field and the melt pool dimensions of Ti6Al4V powder during SLM. The FEM predictions were then compared with published experimental measurements and calculation results for model validation. This study applied a design of experiments (DOE) scheme together with the response surface method (RSM) to conduct a regression analysis based on four processing parameters (namely, the laser power, scanning speed, preheating temperature and hatch space) for predicting the dimensions of the melt pool in SLM. The preliminary RSM results were used to quantify the effects of those parameters on the melt pool size. A process window was further implemented, via two criteria on the width and depth of the molten pool, to screen out impractical combinations of the four parameters and retain the practical ranges of the processing parameters. The FEM simulations confirmed the good accuracy of the RSM models in predicting melt pool dimensions for three typical SLM working scenarios.
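
    A minimal sketch of the response-surface step: a second-order polynomial is fitted by least squares to melt pool width as a function of two coded factors. The design points, width values, choice of factors and prediction setting are illustrative placeholders, not the study's FEM results or its full four-parameter model.

```python
import numpy as np

# Coded settings of two factors (laser power, scan speed) and the corresponding
# melt pool widths (placeholder values standing in for FEM results)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0], [0, 1], [1, 0]], float)
width_um = np.array([95.0, 70.0, 140.0, 105.0, 108.0, 92.0, 126.0])

def design_matrix(X):
    """Second-order RSM model terms: 1, P, V, P*V, P^2, V^2."""
    p, v = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), p, v, p * v, p ** 2, v ** 2])

coef, *_ = np.linalg.lstsq(design_matrix(X), width_um, rcond=None)
print("RSM coefficients (b0, bP, bV, bPV, bPP, bVV):", coef.round(2))

# Predicted width at an untested coded setting (power = 0.5, speed = -0.5)
pred = (design_matrix(np.array([[0.5, -0.5]])) @ coef)[0]
print("predicted width (um):", round(pred, 1))
```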

  16. Simulation based analysis of laser beam brazing

    NASA Astrophysics Data System (ADS)

    Dobler, Michael; Wiethop, Philipp; Schmid, Daniel; Schmidt, Michael

    2016-03-01

    Laser beam brazing is a well-established joining technology in car body manufacturing, with main applications in the joining of divided tailgates and the joining of roof and side panels. A key advantage of laser brazed joints is the seam's visual quality, which satisfies the highest requirements. However, the laser beam brazing process is very complex and the process dynamics are only partially understood. In order to gain deeper knowledge of the laser beam brazing process, to determine optimal process parameters and to test process variants, a transient three-dimensional simulation model of laser beam brazing is developed. This model takes into account energy input, heat transfer as well as fluid and wetting dynamics that lead to the formation of the brazing seam. A validation of the simulation model is performed by metallographic analysis and thermocouple measurements for different parameter sets of the brazing process. These results show that the multi-physical simulation model can not only be used to gain insight into the laser brazing process but also offers the possibility of process optimization in industrial applications. The model's capability to determine optimal process parameters is shown exemplarily for the laser power. Small deviations in the energy input can affect the brazing results significantly. Therefore, the simulation model is used to analyze the effect of the lateral laser beam position on the energy input and the resulting brazing seam.

  17. A New Multifunctional Sensor for Measuring Concentrations of Ternary Solution

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Shida, Katsunori

    This paper presents a multifunctional sensor with a novel structure, which is capable of directly sensing temperature and two physical parameters of solutions, namely ultrasonic velocity and conductivity. By combined measurement of these three measurable parameters, the concentrations of the various components in a ternary solution can be determined simultaneously. The structure and operation principle of the sensor are described, and a regression algorithm based on natural cubic spline interpolation and the least-squares method is adopted to estimate the concentrations. The performance of the proposed sensor is experimentally tested using a ternary aqueous solution of sodium chloride and sucrose, which is widely used in the food and beverage industries. This sensor could prove valuable as a process control sensor in industrial fields.
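
    The sketch below shows the inversion idea in its simplest form: around a fixed temperature, two calibration relations (velocity and conductivity versus the two concentrations) are inverted by least squares. The linear placeholder calibration and its coefficients are invented for illustration; the paper builds its calibration from natural cubic spline interpolation of measured data.

```python
import numpy as np

# Placeholder local calibration around a fixed temperature: ultrasonic velocity
# and conductivity approximated as linear in the NaCl and sucrose concentrations.
V0, SIG0 = 1482.0, 0.05          # pure-water velocity (m/s) and conductivity (S/m)
A = np.array([[12.0, 4.0],       # d(velocity)/d(c_NaCl),     d(velocity)/d(c_sucrose)
              [1.6,  0.02]])     # d(conductivity)/d(c_NaCl), d(conductivity)/d(c_sucrose)

def estimate_concentrations(velocity, conductivity):
    """Least-squares inversion of the two calibration relations for the NaCl and
    sucrose concentrations (in the placeholder units implied by A)."""
    y = np.array([velocity - V0, conductivity - SIG0])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

print(estimate_concentrations(velocity=1510.0, conductivity=1.65).round(2))
```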

  18. An integrated process analytical technology (PAT) approach to monitoring the effect of supercooling on lyophilization product and process parameters of model monoclonal antibody formulations.

    PubMed

    Awotwe Otoo, David; Agarabi, Cyrus; Khan, Mansoor A

    2014-07-01

    The aim of the present study was to apply an integrated process analytical technology (PAT) approach to control and monitor the effect of the degree of supercooling on critical process and product parameters of a lyophilization cycle. Two concentrations of a mAb formulation were used as models for lyophilization. ControLyo™ technology was applied to control the onset of ice nucleation, whereas tunable diode laser absorption spectroscopy (TDLAS) was utilized as a noninvasive tool for the inline monitoring of the water vapor concentration and vapor flow velocity in the spool during primary drying. The instantaneous measurements were then used to determine the effect of the degree of supercooling on critical process and product parameters. Controlled nucleation resulted in uniform nucleation at lower degrees of supercooling for both formulations, higher sublimation rates, lower mass transfer resistance, lower product temperatures at the sublimation interface, and shorter primary drying times compared with the conventional shelf-ramped freezing. Controlled nucleation also resulted in lyophilized cakes with more elegant and porous structure with no visible collapse or shrinkage, lower specific surface area, and shorter reconstitution times compared with the uncontrolled nucleation. Uncontrolled nucleation however resulted in lyophilized cakes with relatively lower residual moisture contents compared with controlled nucleation. TDLAS proved to be an efficient tool to determine the endpoint of primary drying. There was good agreement between data obtained from TDLAS-based measurements and SMART™ technology. ControLyo™ technology and TDLAS showed great potential as PAT tools to achieve enhanced process monitoring and control during lyophilization cycles. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  19. Study on effect of tool electrodes on surface finish during electrical discharge machining of Nitinol

    NASA Astrophysics Data System (ADS)

    Sahu, Anshuman Kumar; Chatterjee, Suman; Nayak, Praveen Kumar; Sankar Mahapatra, Siba

    2018-03-01

    Electrical discharge machining (EDM) is a non-traditional machining process which is widely used in machining of difficult-to-machine materials. The EDM process can produce complex and intricately shaped components made of difficult-to-machine materials, and is largely applied in the aerospace, biomedical, and die and mold making industries. To meet the required applications, the EDMed components need to possess high accuracy and excellent surface finish. In this work, the EDM process is performed using Nitinol as the workpiece material and AlSiMg prepared by selective laser sintering (SLS) as the tool electrode, along with conventional copper and graphite electrodes. SLS is a rapid prototyping (RP) method to produce complex metallic parts by additive manufacturing (AM). Experiments have been carried out varying different process parameters like open circuit voltage (V), discharge current (Ip), duty cycle (τ), pulse-on-time (Ton) and tool material. Surface roughness parameters such as average roughness (Ra), maximum height of the profile (Rt) and average height of the profile (Rz) are measured using a surface roughness measuring instrument (Talysurf). To reduce the number of experiments, a design of experiments (DOE) approach, Taguchi's L27 orthogonal array, has been chosen. The surface properties of the EDM specimens are optimized by the desirability function approach, and the best parametric setting is reported for the EDM process. The type of tool is the most significant parameter, followed by the interaction of tool type and duty cycle, duty cycle, discharge current and voltage. A better surface finish of the EDMed specimen can be obtained with low values of voltage (V), discharge current (Ip), duty cycle (τ) and pulse on time (Ton), along with the use of the AlSiMg RP electrode.
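
    To illustrate the desirability-function step, the sketch below computes smaller-is-better individual desirabilities for Ra, Rt and Rz and combines them into a composite score via the geometric mean. The targets, worst-case limits, weights and example responses are assumptions for illustration, not the study's values.

```python
import numpy as np

def desirability_smaller_is_better(y, y_target, y_worst, weight=1.0):
    """Derringer-type individual desirability for a smaller-is-better roughness
    response: 1 at (or below) the target, 0 at the worst acceptable value."""
    d = (y_worst - y) / (y_worst - y_target)
    return np.clip(d, 0.0, 1.0) ** weight

# Illustrative responses for one parameter setting: (measured, target, worst) in um
responses = {"Ra": (2.1, 1.0, 6.0), "Rt": (14.0, 8.0, 30.0), "Rz": (11.0, 6.0, 25.0)}
d_vals = [desirability_smaller_is_better(y, tgt, worst)
          for y, tgt, worst in responses.values()]
composite = float(np.prod(d_vals) ** (1.0 / len(d_vals)))   # geometric mean
print("individual desirabilities:", [round(float(d), 2) for d in d_vals])
print("composite desirability D:", round(composite, 3))
```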

  20. Wavefront attributes in anisotropic media

    NASA Astrophysics Data System (ADS)

    Vanelle, C.; Abakumov, I.; Gajewski, D.

    2018-07-01

    Surface-measured wavefront attributes are the key ingredient to multiparameter methods, which are nowadays standard tools in seismic data processing. However, most operators are restricted to application to isotropic media. Whereas application of an isotropic operator will still lead to satisfactory stack results, further processing steps that interpret isotropic stacking parameters in terms of wavefront attributes will lead to erroneous results if anisotropy is present but not accounted for. In this paper, we derive relationships between the stacking parameters and anisotropic wavefront attributes that allow us to apply the common reflection surface type operator to 3-D media with arbitrary anisotropy for the zero-offset and finite-offset configurations including converted waves. The operator itself is expressed in terms of wavefront attributes that are measured in the acquisition surface, that is, no model assumptions are made. Numerical results confirm that the accuracy of the new anisotropic operator is of the same magnitude as that of its isotropic counterpart.

  1. Foraging for brain stimulation: toward a neurobiology of computation.

    PubMed

    Gallistel, C R

    1994-01-01

    The self-stimulating rat performs foraging tasks mediated by simple computations that use interreward intervals and subjective reward magnitudes to determine stay durations. This is a simplified preparation in which to study the neurobiology of the elementary computational operations that make cognition possible, because the neural signal specifying the value of a computationally relevant variable is produced by direct electrical stimulation of a neural pathway. Newly developed measurement methods yield functions relating the subjective reward magnitude to the parameters of the neural signal. These measurements also show that the decision process that governs foraging behavior divides the subjective reward magnitude by the most recent interreward interval to determine the preferability of an option (a foraging patch). The decision process sets the parameters that determine stay durations (durations of visits to foraging patches) so that the ratios of the stay durations match the ratios of the preferabilities.

  2. Error free all optical wavelength conversion in highly nonlinear As-Se chalcogenide glass fiber.

    PubMed

    Ta'eed, Vahid G; Fu, Libin; Pelusi, Mark; Rochette, Martin; Littler, Ian C; Moss, David J; Eggleton, Benjamin J

    2006-10-30

    We present the first demonstration of all optical wavelength conversion in chalcogenide glass fiber, including system penalty measurements at 10 Gb/s. Our device is based on As2Se3 chalcogenide glass fiber, which has the highest Kerr nonlinearity (n2) of any fiber to date for which either advanced all optical signal processing functions or system penalty measurements have been demonstrated. We achieve wavelength conversion via cross phase modulation over a 10 nm wavelength range near 1550 nm with 7 ps pulses at 2.1 W peak pump power in 1 meter of fiber, achieving only 1.4 dB excess system penalty. Analysis and comparison of the fundamental fiber parameters, including the nonlinear coefficient, two-photon absorption coefficient and dispersion parameter, with other nonlinear glasses shows that As2Se3-based devices show considerable promise for radically integrated nonlinear signal processing devices.

  3. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  4. Optimisation of shock absorber process parameters using failure mode and effect analysis and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mariajayaprakash, Arokiasamy; Senthilvelan, Thiyagarajan; Vivekananthan, Krishnapillai Ponnambal

    2013-07-01

    The various process parameters affecting the quality characteristics of the shock absorber during the process were identified using the Ishikawa diagram and by failure mode and effect analysis. The identified process parameters are welding process parameters (squeeze, heat control, wheel speed, and air pressure), damper sealing process parameters (load, hydraulic pressure, air pressure, and fixture height), washing process parameters (total alkalinity, temperature, pH value of rinsing water, and timing), and painting process parameters (flowability, coating thickness, pointage, and temperature). In this paper, the painting and washing process parameters are optimized by the Taguchi method. Although the Taguchi method reasonably minimizes defects, a genetic algorithm is applied to the Taguchi-optimized parameters in order to approach zero defects during the processes.

  5. Thomson scattering diagnostics of steady state and pulsed welding processes without and with metal vapor

    NASA Astrophysics Data System (ADS)

    Kühn-Kauffeldt, M.; Marqués, J.-L.; Schein, J.

    2015-01-01

    Thomson scattering is applied to measure the temperature and density of electrons in the arc plasma of the direct current gas tungsten arc welding (GTAW) process and the pulsed gas metal arc welding (GMAW) process. This diagnostic technique allows these plasma parameters to be determined independently of the gas composition and the heavy-particle temperature. The experimental setup is adapted to perform measurements on stationary as well as transient processes. Spatial and temporal electron temperature and density profiles of a pure argon arc in the case of the GTAW process, and of an argon arc with the presence of aluminum metal vapor in the case of the GMAW process, were obtained. Additionally, the data are used to estimate the concentration of the metal vapor in the GMAW plasma.

  6. Metal Big Area Additive Manufacturing: Process Modeling and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan; Nycz, Andrzej; Noakes, Mark W

    Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology for printing large-scale 3D objects. mBAAM is based on the gas metal arc welding process and uses a continuous feed of welding wire to manufacture an object. An electric arc forms between the wire and the substrate, which melts the wire and deposits a bead of molten metal along the predetermined path. In general, the welding process parameters and local conditions determine the shape of the deposited bead. The sequence of the bead deposition and the corresponding thermal history of the manufactured object determine the long range effects, such as thermal-induced distortions and residual stresses. Therefore, the resulting performance or final properties of the manufactured object are dependent on its geometry and the deposition path, in addition to depending on the basic welding process parameters. Physical testing is critical for gaining the necessary knowledge for quality prints, but traversing the process parameter space in order to develop an optimized build strategy for each new design is impractical by pure experimental means. Computational modeling and optimization may accelerate development of a build process strategy and save time and resources. Because computational modeling provides these opportunities, we have developed a physics-based Finite Element Method (FEM) simulation framework and numerical models to support the mBAAM process's development and design. In this paper, we performed a sequentially coupled heat transfer and stress analysis for predicting the final deformation of a small rectangular structure printed using mild steel welding wire. Using the new simulation technologies, material was progressively added into the FEM simulation as the arc weld traversed the build path. In the sequentially coupled heat transfer and stress analysis, the heat transfer was performed to calculate the temperature evolution, which was used in a stress analysis to evaluate the residual stresses and distortions. In this formulation, we assume that the physics is directionally coupled, i.e., the effect of the stress of the component on the temperatures is negligible. The experiment instrumentation (measurement types, sensor types, sensor locations, sensor placements, measurement intervals) and the measurements are presented. The temperatures and distortions from the simulations show good correlation with experimental measurements. Ongoing modeling work is also briefly discussed.

  7. Parameter Estimation as a Problem in Statistical Thermodynamics.

    PubMed

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work[1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  8. Effective Parameters in Axial Injection Suspension Plasma Spray Process of Alumina-Zirconia Ceramics

    NASA Astrophysics Data System (ADS)

    Tarasi, F.; Medraj, M.; Dolatabadi, A.; Oberste-Berghaus, J.; Moreau, C.

    2008-12-01

    Suspension plasma spray (SPS) is a novel process for producing nano-structured coatings with metastable phases using significantly smaller particles as compared to conventional thermal spraying. Considering the complexity of the system there is an extensive need to better understand the relationship between plasma spray conditions and resulting coating microstructure and defects. In this study, an alumina/8 wt.% yttria-stabilized zirconia was deposited by axial injection SPS process. The effects of principal deposition parameters on the microstructural features are evaluated using the Taguchi design of experiment. The microstructural features include microcracks, porosities, and deposition rate. To better understand the role of the spray parameters, in-flight particle characteristics, i.e., temperature and velocity were also measured. The role of the porosity in this multicomponent structure is studied as well. The results indicate that thermal diffusivity of the coatings, an important property for potential thermal barrier applications, is barely affected by the changes in porosity content.

  9. Finite Element Method (FEM) Modeling of Freeze-drying: Monitoring Pharmaceutical Product Robustness During Lyophilization.

    PubMed

    Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V

    2015-12-01

    Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is to transfer the process parameters that are developed in a lab scale lyophilizer to the commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we are highlighting a combination of computational and experimental approaches to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or sublimation tests, and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients will vary at different lyophilization scales; hence, we present an approach to use appropriate factors while scaling up from lab scale to commercial scale. As a result, one can predict commercial scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; and because of its simple and minimalistic approach, it will also be a less capital-intensive path with minimal use of expensive drug substance/active material.
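
    As a hedged illustration of how a heat transfer coefficient can be backed out of a gravimetric sublimation test, the sketch below uses a common textbook energy balance (sublimation rate times the heat of sublimation, divided by vial area and the shelf-to-product temperature difference). The relation is generic and the numbers are placeholders; the study's own MTM/sublimation procedure and the inputs passed to PASSAGE may differ in detail.

```python
DELTA_H_SUB = 2840.0          # heat of sublimation of ice, J per g (approximate)

def vial_heat_transfer_coefficient(dm_dt_g_per_s, vial_area_cm2,
                                   shelf_temp_c, product_temp_c):
    """Kv in W cm^-2 K^-1 from the sublimation rate and the shelf-to-product
    temperature difference (simple steady-state energy balance)."""
    q_watts = dm_dt_g_per_s * DELTA_H_SUB
    return q_watts / (vial_area_cm2 * (shelf_temp_c - product_temp_c))

# Illustrative numbers: 0.18 g/h per vial, 3.8 cm^2 vial area, shelf at -10 C,
# product at -32 C (all placeholders)
kv = vial_heat_transfer_coefficient(0.18 / 3600.0, 3.8, -10.0, -32.0)
print(f"Kv ~ {kv:.2e} W cm^-2 K^-1")
```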

  10. Temperature Measurement and Numerical Prediction in Machining Inconel 718

    PubMed Central

    Tapetado, Alberto; Vázquez, Carmen; Miguélez, Henar

    2017-01-01

    Thermal issues are critical when machining Ni-based superalloy components designed for high temperature applications. The low thermal conductivity and extreme strain hardening of this family of materials result in elevated temperatures around the cutting area. This elevated temperature could lead to machining-induced damage such as phase changes and residual stresses, resulting in reduced service life of the component. Measurement of temperature during machining is crucial in order to control the cutting process and avoid workpiece damage. On the other hand, the development of predictive tools based on numerical models helps in the definition of machining processes and the estimation of difficult-to-measure parameters such as the penetration of the heated layer. However, the validation of numerical models strongly depends on the accurate measurement of physical parameters such as temperature, ensuring the calibration of the model. This paper focuses on the measurement and prediction of temperature during the machining of Ni-based superalloys. The temperature sensor was based on a fiber-optic two-color pyrometer developed for localized temperature measurements in the turning of Inconel 718. The sensor is capable of measuring temperature in the range of 250 to 1200 °C. Temperature evolution is recorded in a lathe at different feed rates and cutting speeds. The measurements were used to calibrate a simplified numerical model for the prediction of temperature fields during turning. PMID:28665312
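
    As a hedged illustration of the principle behind a two-color (ratio) pyrometer, the sketch below converts the ratio of spectral intensities at two wavelengths into a temperature using Wien's approximation and a gray-body assumption (equal emissivity at both wavelengths). The wavelength pair and the example ratio are placeholders, not the sensor's actual bands or calibration.

```python
import math

C2 = 1.4388e-2          # second radiation constant, m*K

def two_color_temperature(ratio, lambda1_nm, lambda2_nm):
    """Temperature (K) from the ratio I(lambda1)/I(lambda2) of spectral
    intensities, using Wien's approximation with equal emissivities."""
    l1, l2 = lambda1_nm * 1e-9, lambda2_nm * 1e-9
    return C2 * (1.0 / l2 - 1.0 / l1) / (math.log(ratio) - 5.0 * math.log(l2 / l1))

# Illustrative reading: intensity ratio I(1310 nm)/I(1550 nm) = 0.42
t_kelvin = two_color_temperature(0.42, 1310.0, 1550.0)
print(f"{t_kelvin - 273.15:.0f} C")   # roughly 720 C for these placeholder inputs
```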

  11. Plantar pressure measurements and running-related injury: A systematic review of methods and possible associations.

    PubMed

    Mann, Robert; Malisoux, Laurent; Urhausen, Axel; Meijer, Kenneth; Theisen, Daniel

    2016-06-01

    Pressure-sensitive measuring devices have been identified as appropriate tools for measuring an array of parameters during running. It is unclear which biomechanical characteristics relate to running-related injury (RRI) and which data-processing techniques are most promising for detecting this relationship. This systematic review aims to identify pertinent methodologies and characteristics measured using plantar pressure devices, and to summarise their associations with RRI. PubMed, Embase, CINAHL, ScienceDirect and Scopus were searched up until March 2015. Retrospective and prospective biomechanical studies on running using any kind of pressure-sensitive device with RRI as an outcome were included. All studies involving regular or recreational runners were considered. Study quality was assessed and the measured parameters were summarised. One low-quality, two moderate-quality and five high-quality studies were included. Five different subdivisions of plantar area were identified, as well as five instants and four phases of measurement during foot-ground contact. Overall, many parameters were collated, grouped as plantar pressure and force, plantar pressure and force location, contact area, timing, and stride parameters. Differences between the injured and control groups were found for mediolateral and anteroposterior displacement of force, contact area, velocity of force displacement, relative force-time integral, mediolateral force ratio, time to peak force and inter-stride correlative patterns. However, no consistent results were found between studies and no biomechanical risk patterns were apparent; additionally, conflicting findings were reported for peak force in three studies. Based on these observations, we provide suggestions for improved methodology and for the measurement of pertinent parameters in future studies. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
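
    To make two of the collated parameters concrete, the sketch below computes a force-time integral, time to peak force, and a relative force-time integral from a single synthetic stance-phase force trace; the sampling rate and toy force curve are assumptions, not data from any of the reviewed studies.

```python
# Sketch: timing and loading parameters from one synthetic stance phase.
import numpy as np

fs = 200.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 0.25, 1.0 / fs)            # ~250 ms foot-ground contact
force = 800.0 * np.sin(np.pi * t / 0.25)    # toy single-hump vertical force, N

force_time_integral = np.trapz(force, t)    # N*s over the contact phase
time_to_peak_force = t[np.argmax(force)]    # s after initial contact
relative_fti = force_time_integral / (force.max() * t[-1])   # dimensionless

print(force_time_integral, time_to_peak_force, relative_fti)
```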

  12. An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
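
    The general idea of such a hybrid, Firefly-style attraction moves refined by Differential Evolution mutation and crossover, can be sketched as follows. This is an illustrative re-implementation of the concept rather than the authors' exact algorithm; the toy exponential-decay fitting objective and all algorithm settings are assumptions.

```python
# Minimal hybrid Firefly / Differential Evolution parameter-estimation sketch.
# Illustrative only: the objective and settings are assumed, not the authors'.
import numpy as np

rng = np.random.default_rng(0)

def objective(theta):
    # Toy least-squares fit of y = a*exp(-b*t) to synthetic "measured" data.
    t = np.linspace(0, 5, 50)
    y_obs = 2.0 * np.exp(-0.7 * t)
    a, b = theta
    return np.sum((a * np.exp(-b * t) - y_obs) ** 2)

def hybrid_fa_de(n=25, dim=2, iters=200, beta0=1.0, gamma=1.0, alpha=0.2,
                 F=0.5, CR=0.9, lower=0.0, upper=5.0):
    pop = rng.uniform(lower, upper, (n, dim))
    cost = np.array([objective(x) for x in pop])
    for _ in range(iters):
        # Firefly step: move each firefly toward every brighter (lower-cost) one.
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    step = beta0 * np.exp(-gamma * r2) * (pop[j] - pop[i])
                    pop[i] = np.clip(pop[i] + step + alpha * (rng.random(dim) - 0.5),
                                     lower, upper)
                    cost[i] = objective(pop[i])
        # DE step (rand/1/bin): mutation and crossover as a refinement,
        # keeping the trial vector only if it improves the firefly.
        for i in range(n):
            r1, r2_, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2_] - pop[r3]), lower, upper)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            c_trial = objective(trial)
            if c_trial < cost[i]:
                pop[i], cost[i] = trial, c_trial
    best = int(np.argmin(cost))
    return pop[best], cost[best]

theta_hat, sse = hybrid_fa_de()
print(theta_hat, sse)   # expected to approach (a, b) ~ (2.0, 0.7)
```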

  13. An Evolutionary Firefly Algorithm for the Estimation of Nonlinear Biological Model Parameters

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N. V.

    2013-01-01

    The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172

  14. Neutron coincidence measurements when nuclear parameters vary during the multiplication process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.

    1995-07-01

    In a recent paper, a physical/mathematical model was developed for neutron coincidence counting, taking explicit account of neutron absorption and leakage, and using dual probability generating functions to derive explicit formulae for the single and multiple count rates in terms of the physical parameters of the system. The results of this modeling proved very successful in a number of cases in which the system parameters (neutron reaction cross-sections, detection probabilities, etc.) remained the same at the various stages of the process (i.e. from collision to collision). However, there are practical circumstances in which such system parameters change from collision to collision, and it is necessary to accommodate these, too, in a general theory applicable to such situations. For instance, in the case of the neutron coincidence collar (NCC), the parameters for the initial, spontaneous fission neutrons are not the same as those for the succeeding induced fission neutrons, and similar situations can be envisaged for certain other experimental configurations. This present document shows how the previous considerations can be elaborated to embrace these more general requirements.
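
    For context, the standard point-model singles and doubles expressions already treat spontaneous-fission and induced-fission multiplicity moments as distinct parameters, which is the kind of distinction the report generalizes to collision-by-collision changes. The short sketch below evaluates those expressions; the rate, efficiency, gate fraction, multiplication and moment values are approximate, textbook-style numbers used only for illustration, not data from the cited report.

```python
# Point-model singles/doubles sketch for neutron coincidence counting, with
# separate spontaneous-fission and induced-fission multiplicity moments.
# All numbers are approximate, illustrative values.
F   = 4.0e4    # spontaneous fission rate, 1/s (assumed)
eps = 0.40     # detection efficiency (assumed)
f_d = 0.62     # doubles gate fraction (assumed)
M   = 1.05     # leakage multiplication (assumed)

nu_s1, nu_s2 = 2.154, 3.789   # spontaneous-fission factorial moments (approx. 240Pu)
nu_i1, nu_i2 = 3.163, 8.240   # induced-fission factorial moments (approx. 239Pu)

singles = F * eps * M * nu_s1
doubles = 0.5 * F * eps**2 * f_d * M**2 * (
    nu_s2 + (M - 1.0) / (nu_i1 - 1.0) * nu_s1 * nu_i2
)
print(f"singles ~ {singles:.0f} 1/s, doubles ~ {doubles:.0f} 1/s")
```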

  15. Encapsulation of brewing yeast in alginate/chitosan matrix: lab-scale optimization of lager beer fermentation.

    PubMed

    Naydenova, Vessela; Badova, Mariyana; Vassilev, Stoyan; Iliev, Vasil; Kaneva, Maria; Kostov, Georgi

    2014-03-04

    Two mathematical models were developed for studying the effect of main fermentation temperature (T_MF), immobilized cell mass (M_IC) and original wort extract (OE) on beer fermentation with alginate-chitosan microcapsules with a liquid core. During the experiments, the investigated parameters were varied in order to find the optimal conditions for beer fermentation with immobilized cells. The basic beer characteristics, i.e. extract, ethanol, biomass concentration, pH and colour, as well as the concentration of aldehydes and vicinal diketones, were measured. The results suggested that the process parameters represented a powerful tool in controlling the fermentation time. Subsequently, the optimized process parameters were used to produce beer in laboratory batch fermentation. The system productivity was also investigated and the data were used for the development of another mathematical model.
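
    A common form for such models is a quadratic response surface in the three factors, fitted by least squares over a designed experiment. The sketch below fits one to a synthetic face-centred central composite design in coded units; the design, the underlying coefficients and the noise level are assumptions for illustration only.

```python
# Sketch: quadratic response-surface fit for a response (e.g. fermentation
# time) versus three coded factors (T_MF, M_IC, OE). Data are synthetic.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Face-centred central composite design in coded units (-1, 0, +1).
corners = np.array(list(product([-1, 1], repeat=3)), dtype=float)
axial = np.array([[a if i == j else 0 for j in range(3)]
                  for i in range(3) for a in (-1, 1)], dtype=float)
center = np.zeros((3, 3))
D = np.vstack([corners, axial, center])           # 8 + 6 + 3 = 17 runs

def true_response(x):
    t, m, e = x                                   # assumed "true" surface
    return 140 - 12*t - 8*m + 10*e + 3*t*m + 4*m**2

y = np.array([true_response(x) for x in D]) + rng.normal(0, 2, len(D))  # h

def quad_design(D):
    t, m, e = D.T
    return np.column_stack([np.ones(len(D)), t, m, e,
                            t*m, t*e, m*e, t**2, m**2, e**2])

coef, *_ = np.linalg.lstsq(quad_design(D), y, rcond=None)
print(np.round(coef, 1))   # should roughly recover the assumed coefficients
```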

  16. Encapsulation of brewing yeast in alginate/chitosan matrix: lab-scale optimization of lager beer fermentation

    PubMed Central

    Naydenova, Vessela; Badova, Mariyana; Vassilev, Stoyan; Iliev, Vasil; Kaneva, Maria; Kostov, Georgi

    2014-01-01

    Two mathematical models were developed for studying the effect of main fermentation temperature (T_MF), immobilized cell mass (M_IC) and original wort extract (OE) on beer fermentation with alginate-chitosan microcapsules with a liquid core. During the experiments, the investigated parameters were varied in order to find the optimal conditions for beer fermentation with immobilized cells. The basic beer characteristics, i.e. extract, ethanol, biomass concentration, pH and colour, as well as the concentration of aldehydes and vicinal diketones, were measured. The results suggested that the process parameters represented a powerful tool in controlling the fermentation time. Subsequently, the optimized process parameters were used to produce beer in laboratory batch fermentation. The system productivity was also investigated and the data were used for the development of another mathematical model. PMID:26019512

  17. Verification and Validation of Residual Stresses in Bi-Material Composite Rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy

    Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials' coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials' curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time- and cost-prohibitive. As an alternative to physical measurement, computational tools can be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simplistic method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the SIERRA/SolidMechanics code developed by Sandia National Laboratories. Concurrent with the model development, two simple bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, are fabricated and the residual stresses are quantified through the measurement of deformation. Then, in the process of validating the developed modeling approach with the experimental residual stress data, manufacturing process simulations of the two simple structures are developed and undergo a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations' final results show adequate agreement with the experimental measurements, indicating the validity of a simple modeling approach, as well as the necessity of including material parameter uncertainty in the final residual stress predictions.
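
    For intuition about the magnitude of CTE-mismatch stresses, the sketch below uses a much cruder model than the FEM analysis above: two bonded layers forced to a common in-plane strain, with no bending or cure shrinkage. All material properties, thicknesses and the temperature drop are assumed values, not the properties used in the cited work.

```python
# Back-of-the-envelope sketch of CTE-mismatch residual stress in a bonded
# carbon/epoxy-aluminium bi-material, assuming equal in-plane strain and no
# bending. All property values and the temperature drop are assumptions.

def biaxial(E, nu):
    return E / (1.0 - nu)

# layer 1: quasi-isotropic carbon/epoxy, layer 2: aluminium (assumed properties)
E1, nu1, a1, t1 = 60e9, 0.30, 3.0e-6, 2.0e-3     # Pa, -, 1/K, m
E2, nu2, a2, t2 = 69e9, 0.33, 23.0e-6, 2.0e-3
dT = -130.0                                       # K, cure temp to room temp (assumed)

E1b, E2b = biaxial(E1, nu1), biaxial(E2, nu2)
# Common in-plane strain from the force balance sigma1*t1 + sigma2*t2 = 0:
eps = (E1b * t1 * a1 + E2b * t2 * a2) * dT / (E1b * t1 + E2b * t2)
s1 = E1b * (eps - a1 * dT)   # residual stress in the composite layer, Pa
s2 = E2b * (eps - a2 * dT)   # residual stress in the aluminium layer, Pa
print(f"composite: {s1 / 1e6:.0f} MPa, aluminium: {s2 / 1e6:.0f} MPa")
```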

  18. Implementation of polarization processes in a charge transport model applied on poly(ethylene naphthalate) films

    NASA Astrophysics Data System (ADS)

    Hoang, M.-Q.; Le Roy, S.; Boudou, L.; Teyssedre, G.

    2016-06-01

    One of the difficulties in unravelling transport processes in electrically insulating materials is that the response, notably charging current transients, can have mixed contributions from orientation polarization and from space charge processes. This work aims at identifying and characterizing the polarization processes in a polar polymer in the time and frequency domains, and at implementing the contribution of polarization in a charge transport model. To do so, Alternate Polarization Current (APC) and Dielectric Spectroscopy measurements have been performed on poly(ethylene naphthalene 2,6-dicarboxylate) (PEN), an aromatic polar polymer, providing information on polarization mechanisms in the time and frequency domains, respectively. In the frequency domain, PEN exhibits three relaxation processes, termed β and β* (sub-glass transitions) and α (glass transition), in increasing order of temperature; conduction was also detected at high temperatures. Dielectric responses were treated using a simplified version of the Havriliak-Negami model, the Cole-Cole (CC) model, with three temperature-dependent parameters per relaxation process. The time-dependent polarization obtained from the CC model is then added to a charge transport model. Simulated currents from the transport model including polarization are compared with the measured APCs, showing good consistency between experiments and simulations in a situation where the response comes essentially from dipolar processes.
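
    As an illustration of the Cole-Cole form used here, the sketch below sums three Cole-Cole relaxation terms plus a DC-conduction loss term into a complex permittivity spectrum. The relaxation strengths, times, broadening parameters and conductivity are placeholder values, not the fitted PEN parameters.

```python
# Cole-Cole (simplified Havriliak-Negami) sketch: complex permittivity built
# from three relaxation processes plus a DC-conduction loss term.
# Parameter values are illustrative assumptions.
import numpy as np

eps0 = 8.854e-12
f = np.logspace(-2, 6, 400)              # Hz
w = 2 * np.pi * f

def cole_cole(w, d_eps, tau, alpha):
    """One Cole-Cole process: d_eps / (1 + (i*w*tau)**(1 - alpha))."""
    return d_eps / (1.0 + (1j * w * tau) ** (1.0 - alpha))

eps_inf = 3.0
processes = [                            # (delta_eps, tau [s], alpha) -- assumed
    (0.30, 1e-5, 0.5),                   # beta
    (0.15, 1e-3, 0.4),                   # beta*
    (1.50, 1e+1, 0.2),                   # alpha (glass transition)
]
sigma_dc = 1e-14                         # S/m (assumed)

eps = eps_inf + sum(cole_cole(w, *p) for p in processes)
eps_real = eps.real                                 # epsilon'
eps_imag = -eps.imag + sigma_dc / (eps0 * w)        # epsilon'': relaxations + conduction
print(eps_real[::100], eps_imag[::100])
```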

  19. The PreViBOSS project: studying the short-term predictability of visibility changes during the fog life cycle from surface and satellite observations

    NASA Astrophysics Data System (ADS)

    Elias, T.; Haeffelin, M.; Ramon, D.; Gomes, L.; Brunet, F.; Vrac, M.; Yiou, P.; Hello, G.; Petithomme, H.

    2010-07-01

    Fog impairs major activities such as transport and Earth observation by critically reducing atmospheric visibility, with little continuity in time and space. Fog is also an important factor for air quality and climate, as it modifies the particle properties of the surface atmospheric layer. The complexity, diversity and fine scale of the processes involved make not only visibility diagnosis but also fog-event prediction uncertain in current numerical weather prediction models. Extensive measurements of atmospheric parameters have been made at SIRTA since 1997 to document physical processes over the atmospheric column in the Paris suburban area, an environment intermittently under oceanic influence and affected by urban and industrial pollution. The ParisFog field campaign, hosted at SIRTA for six months in winter 2006-2007, deployed instrumentation specifically dedicated to studying the thermodynamic, radiative, dynamical and microphysical processes of the fog life cycle. Analysis of the measurements provided a preliminary climatology of reduced-visibility episodes, a chronology of processes derived from time series of measured parameters, and a closure study of the optical and microphysical properties of particles (aerosols to droplets) during the life cycle of a radiative fog, giving the relative contribution of several particle groups to extinction in clear-sky, haze and fog conditions. PreViBOSS is a three-year project scheduled to start this year. Its aim is to improve the short-term prediction of changes in atmospheric visibility at a local scale. It proposes an innovative approach: applying the Generalised Additive Model (GAM) statistical method to the detailed, extended dataset acquired at SIRTA. This method makes it possible to explore nonlinear relationships between parameters that are not yet integrated in current numerical models. Emphasis will be put on aerosols and their impact on the fog life cycle. Furthermore, the ground-based dataset will be complemented by spaceborne observations of visible and infrared radiance from the METEOSAT mission.
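
    A minimal sketch of the GAM idea, fitting smooth nonlinear terms of two predictors to visibility, is given below. It uses the third-party pyGAM package as one possible implementation; the synthetic data and the choice of predictors are assumptions, not the SIRTA dataset or the project's actual model.

```python
# Hedged GAM sketch: visibility as a sum of smooth functions of two predictors
# (relative humidity and an aerosol-load proxy). Data are synthetic.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 500
rh = rng.uniform(60, 100, n)                      # relative humidity, %
aod = rng.uniform(0.02, 0.6, n)                   # aerosol optical depth proxy
vis = 45 - 0.2 * (rh - 60) ** 1.3 - 20 * np.sqrt(aod) + rng.normal(0, 1.5, n)

X = np.column_stack([rh, aod])
gam = LinearGAM(s(0) + s(1)).fit(X, vis)          # one smooth term per predictor
print(gam.predict(np.array([[95.0, 0.3]])))       # predicted visibility, km
```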

  20. Measurement and data processing approach for detecting anisotropic spatial statistics of the turbulence-induced index of refraction fluctuations in the upper atmosphere.

    PubMed

    Havens, Timothy C; Roggemann, Michael C; Schulz, Timothy J; Brown, Wade W; Beyer, Jeff T; Otten, L John

    2002-05-20

    We discuss a method of data reduction and analysis that has been developed for a novel experiment to detect anisotropic turbulence in the tropopause and to measure the spatial statistics of these flows. The experimental concept is to make measurements of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data, we estimate the index of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters of an anisotropic model for the spatial power spectrum of the turbulence-induced index of refraction fluctuations. A Gaussian correlation model and a least-squares optimization routine are used to estimate the parameters of the model from the measurements. In addition, we implemented a quick-look algorithm to have a computationally nonintensive way of viewing the autocorrelation function of the index fluctuations. The autocorrelation of the index of refraction fluctuations is binned and interpolated onto a uniform grid from the sparse points that exist in our experiment. This allows the autocorrelation to be viewed with a three-dimensional plot to determine whether anisotropy exists in a specific data slab. Simulation results presented here show that, in the presence of the anticipated levels of measurement noise, the least-squares estimation technique allows turbulence parameters to be estimated with low rms error.
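
    A minimal version of the model-fitting step, estimating the parameters of an anisotropic Gaussian correlation model from noisy correlation samples with a least-squares routine, is sketched below. The sensor separations, true parameters and noise level are synthetic assumptions, and the fluctuation variance is in arbitrary units.

```python
# Sketch: least-squares fit of an anisotropic Gaussian correlation model
# B(dx, dy) = var * exp(-(dx/Lx)^2 - (dy/Ly)^2) to noisy correlation samples.
# Separations, true parameters and noise are synthetic assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def model(params, dx, dy):
    var, Lx, Ly = params
    return var * np.exp(-(dx / Lx) ** 2 - (dy / Ly) ** 2)

dx = rng.uniform(-3, 3, 60)                      # pair separations, m (synthetic)
dy = rng.uniform(-3, 3, 60)
true = (1.0, 2.0, 0.7)                           # variance, Lx, Ly (anisotropic)
corr_meas = model(true, dx, dy) + rng.normal(0, 0.02, dx.size)

def residuals(p):
    return model(p, dx, dy) - corr_meas

fit = least_squares(residuals, x0=[0.5, 1.0, 1.0],
                    bounds=([0.0, 0.1, 0.1], [10.0, 10.0, 10.0]))
print(fit.x)   # should approach the assumed (variance, Lx, Ly)
```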
