Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao
2016-01-01
Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product’s performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner’s ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters. PMID:27509499
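A minimal sketch of the modeling idea above (nonlinear general-Wiener degradation paths whose dependence is induced by a Gaussian copula), with all numeric values, drift exponents, thresholds and the copula correlation rho invented for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho, b1, b2 = 0.6, 1.3, 0.8           # assumed copula correlation and drift exponents
mu, sigma = (0.5, 0.4), (0.15, 0.10)  # hypothetical drift/diffusion coefficients
t = np.linspace(0, 10, 201)
dLam1 = np.diff(t**b1)                # general Wiener process: X(t) = mu*Lambda(t) + sigma*W(Lambda(t))
dLam2 = np.diff(t**b2)

# Gaussian copula step: correlated normals -> uniforms -> normal shocks
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=len(dLam1))
eps = norm.ppf(norm.cdf(z))           # identical to z here; shown to make the copula explicit

x1 = np.concatenate([[0], np.cumsum(mu[0]*dLam1 + sigma[0]*np.sqrt(dLam1)*eps[:, 0])])
x2 = np.concatenate([[0], np.cumsum(mu[1]*dLam2 + sigma[1]*np.sqrt(dLam2)*eps[:, 1])])

# pseudo-failure times across assumed thresholds
T1 = t[np.argmax(x1 > 3.0)] if (x1 > 3.0).any() else np.inf
T2 = t[np.argmax(x2 > 2.0)] if (x2 > 2.0).any() else np.inf
print(T1, T2)
```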
Multisite EPR oximetry from multiple quadrature harmonics.
Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C
2012-01-01
Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported.
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
The existing real-time monitoring systems for multiple physiological parameters have problems such as insufficient server capacity for physiological data storage and analysis, so that data consistency cannot be guaranteed, and poor real-time performance, among other issues caused by the growing scale of data. We therefore proposed a new solution for multiple physiological parameters, with clustered background data storage and processing based on cloud computing. Through our studies, batch processing for longitudinal analysis of patients' historical data was introduced. The work covered the resource virtualization of the IaaS layer of the cloud platform, the construction of the real-time computing platform of the PaaS layer, the reception and analysis of the data stream in the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was to achieve real-time transmission, storage and analysis of a large amount of physiological information. The simulation test results showed that the remote monitoring system for multiple physiological parameters based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved the problems of long turnaround time, poor real-time analysis performance, and lack of extensibility that exist in traditional remote medical services. Technical support was thereby provided for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode moving towards home health monitoring with wireless monitoring of multiple physiological parameters.
Byun, Bo-Ram; Kim, Yong-Il; Yamaguchi, Tetsutaro; Maki, Koutaro; Son, Woo-Sung
2015-01-01
This study aimed to examine the correlation between skeletal maturation status and parameters from the odontoid process/body of the second cervical vertebra and the bodies of the third and fourth cervical vertebrae, and simultaneously to build multiple regression models for estimating skeletal maturation status in Korean girls. Hand-wrist radiographs and cone beam computed tomography (CBCT) images were obtained from 74 Korean girls (6-18 years of age). CBCT-generated cervical vertebral maturation (CVM) was used to demarcate the odontoid process and the body of the second cervical vertebra, based on the dentocentral synchondrosis. Correlation coefficient analysis and multiple linear regression analysis were used for each parameter of the cervical vertebrae (P < 0.05). Forty-seven of 64 parameters from CBCT-generated CVM (independent variables) exhibited statistically significant correlations (P < 0.05). The multiple regression model with the greatest R² had six parameters (PH2/W2, UW2/W2, (OH+AH2)/LW2, UW3/LW3, D3, and H4/W4) as independent variables, each with a variance inflation factor (VIF) of <2. CBCT-generated CVM was able to include parameters from the second cervical vertebral body and odontoid process, respectively, for the multiple regression models. This suggests that quantitative analysis might be used to estimate skeletal maturation status.
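The regression-with-VIF step described above can be sketched as follows; the 74x6 predictor matrix stands in for the vertebral ratios and is random placeholder data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(74, 6))      # 74 subjects, 6 vertebral-ratio predictors (placeholders)
y = X @ np.array([0.8, 0.5, 0.3, 0.4, 0.2, 0.6]) + rng.normal(scale=0.5, size=74)

# ordinary least squares with an intercept
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

def vif(X, j):
    """VIF_j = 1 / (1 - R^2_j), regressing predictor j on the others."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    pred = A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    r2 = 1 - ((X[:, j] - pred)**2).sum() / ((X[:, j] - X[:, j].mean())**2).sum()
    return 1.0 / (1.0 - r2)

print(beta, [round(vif(X, j), 2) for j in range(X.shape[1])])
```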
NASA Astrophysics Data System (ADS)
Bharti, P. K.; Khan, M. I.; Singh, Harbinder
2010-10-01
Off-line quality control is considered to be an effective approach to improve product quality at a relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response is reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage with one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics, most of them concerned with maximizing the signal-to-noise (SN) ratios of the parameter combination. The results reveal two advantages of this approach: the optimal parameter design coincides with that of the traditional Taguchi method for a single quality characteristic, and it maximizes the reduction of total quality loss for multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
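A short sketch of the signal-to-noise ratios at the center of this literature, assuming the standard larger-the-better and smaller-the-better definitions; the replicate values are invented:

```python
import numpy as np

def sn_larger_the_better(y):   # SN = -10 log10(mean(1/y^2))
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):  # SN = -10 log10(mean(y^2))
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y**2))

# replicates of two quality characteristics for one parameter combination (invented)
strength = [42.1, 40.8, 43.0]   # to be maximized
roughness = [1.8, 2.1, 1.9]     # to be minimized
print(sn_larger_the_better(strength), sn_smaller_the_better(roughness))
```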
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mbamalu, G.A.N.; El-Hawary, M.E.
The authors propose suboptimal least squares or IRWLS procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step, one obtains the intermediate series by back forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's 168 lead time hourly load. The results obtained are documented and compared with results based on the Box and Jenkins method.
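The suboptimal least-squares idea can be sketched for an assumed multiplicative AR(1)x(1) model with 24-hour seasonality (the authors' exact orders are not specified here); the regression treats the three lag coefficients freely, ignoring the product constraint between them:

```python
import numpy as np

rng = np.random.default_rng(2)
s, n = 24, 2000
phi, Phi = 0.7, 0.5
y = np.zeros(n)
for t in range(s + 1, n):   # simulate (1 - phi B)(1 - Phi B^s) y_t = e_t
    y[t] = phi*y[t-1] + Phi*y[t-s] - phi*Phi*y[t-s-1] + rng.normal()

# suboptimal LS: regress y_t on y_{t-1}, y_{t-s}, y_{t-s-1}
T = np.arange(s + 1, n)
A = np.column_stack([y[T-1], y[T-s], y[T-s-1]])
c, *_ = np.linalg.lstsq(A, y[T], rcond=None)
print(c)   # approximately [phi, Phi, -phi*Phi]
```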
NASA Astrophysics Data System (ADS)
Lawrence, K. Deepak; Ramamoorthy, B.
2016-03-01
Cylinder bores of automotive engines are 'engineered' surfaces that are processed using a multi-stage honing process to generate multiple layers of micro geometry for meeting the different functional requirements of the piston assembly system. The final processed surfaces should comply with several surface topographic specifications that are relevant for the good tribological performance of the engine. Selecting the process parameters in the three stages of honing to bring multiple surface topographic characteristics simultaneously within the specification tolerance is an important module of process planning and is often a challenging task for process engineers. This paper presents a strategy combining robust process design and gray-relational analysis to evolve the operating levels of honing process parameters in the rough, finish and plateau honing stages, targeting multiple surface topographic specifications on the final running surface of the cylinder bores. Honing experiments were conducted in three stages, namely rough, finish and plateau honing, on cast iron cylinder liners by varying four honing process parameters: rotational speed, oscillatory speed, pressure and honing time. Abbott-Firestone curve based functional parameters (Rk, Rpk, Rvk, Mr1 and Mr2), coupled with mean roughness depth (Rz, DIN/ISO) and honing angle, were measured and identified as the surface quality performance targets to be achieved. The experimental results show that the proposed approach is effective in generating a cylinder liner surface that simultaneously meets the explicit surface topographic specifications currently practiced by the industry.
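The gray-relational step that collapses several topographic targets into one index can be sketched as below, assuming responses already normalized to [0, 1] and equal weights; the response matrix is invented:

```python
import numpy as np

# rows = experiments, cols = responses normalized to [0, 1] (1 = ideal)
norm = np.array([[0.8, 0.6, 0.9],
                 [0.5, 0.9, 0.7],
                 [1.0, 0.4, 0.6]])
delta = 1.0 - norm                        # deviation from the ideal sequence
zeta = 0.5                                # distinguishing coefficient
grc = (delta.min() + zeta*delta.max()) / (delta + zeta*delta.max())
grade = grc.mean(axis=1)                  # equal-weight gray relational grade
print(grade, grade.argmax())              # best parameter combination
```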
Minimizing energy dissipation of matrix multiplication kernel on Virtex-II
NASA Astrophysics Data System (ADS)
Choi, Seonil; Prasanna, Viktor K.; Jang, Ju-wook
2002-07-01
In this paper, we develop energy-efficient designs for matrix multiplication on FPGAs. To analyze the energy dissipation, we develop a high-level model using domain-specific modeling techniques. In this model, we identify architecture parameters that significantly affect the total (system-wide) energy dissipation. Then, we explore design trade-offs by varying these parameters to minimize the system-wide energy. For matrix multiplication, we consider a uniprocessor architecture and a linear array architecture to develop energy-efficient designs. For the uniprocessor architecture, the cache size is a parameter that affects the I/O complexity and the system-wide energy. For the linear array architecture, the amount of storage per processing element is a parameter affecting the system-wide energy. By using the maximum amount of storage per processing element and the minimum number of multipliers, we obtain a design that minimizes the system-wide energy. We develop several energy-efficient designs for matrix multiplication. For example, for 6×6 matrix multiplication, energy savings of up to 52% for the uniprocessor architecture and 36% for the linear array architecture are achieved over an optimized library for the Virtex-II FPGA from Xilinx.
Systematic procedure for designing processes with multiple environmental objectives.
Kim, Ki-Joo; Smith, Raymond L
2005-04-01
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
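A minimal sketch of the Pareto-filtering idea behind such multiobjective evaluation, assuming all objectives are to be minimized; the candidate matrix is random placeholder data:

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives minimized)."""
    keep = []
    for i, f in enumerate(F):
        dominated = any((g <= f).all() and (g < f).any()
                        for j, g in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

F = np.random.default_rng(3).random((50, 9))   # e.g. 8 impact categories + (-profit)
print(pareto_front(F))
```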
Study on validation method for femur finite element model under multiple loading conditions
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu
2018-03-01
Acquisition of accurate and reliable constitutive parameters for bio-tissue materials is beneficial for improving the biological fidelity of a finite element (FE) model and predicting impact damage more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on the sequential response surface method and genetic algorithms, the material parameter identification was transformed into a multi-response optimization problem. Finally, the simulation results successfully coincided with the force-displacement curves obtained from numerous experiments. Thus, the computational accuracy and efficiency of the entire inverse calculation process were enhanced. The method effectively reduced the computation time of the inverse identification of material parameters, while the material parameters obtained achieved higher accuracy.
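The inverse-identification step can be sketched as a multi-response least-squares fit; the analytic 'simulator' below is a stand-in for the femur FE model, and the parameters k, b and the condition scalings are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, disp, condition):
    """Stand-in for the FE model: stiffness k and hardening exponent b (hypothetical)."""
    k, b = params
    return condition * k * disp**b

disp = np.linspace(0.1, 5.0, 30)
conditions = [0.8, 1.0, 1.2]             # multiple impact positions (assumed scalings)
rng = np.random.default_rng(4)
data = [simulate((2.0, 1.1), disp, c) + rng.normal(scale=0.1, size=disp.size)
        for c in conditions]

def residuals(p):
    # stack residuals over all loading conditions (multi-response fit)
    return np.concatenate([simulate(p, disp, c) - d for c, d in zip(conditions, data)])

fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)                             # recovered (k, b)
```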
NASA Astrophysics Data System (ADS)
Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz
2017-08-01
Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized, and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system taking into consideration the inherent variability of the process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-input (key control characteristics) and multiple-output (key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements, such as maximizing the dimple height while minimizing the dimple lower surface area.
NASA Astrophysics Data System (ADS)
Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar
2016-08-01
In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple model-based (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all the channels. The scheme is composed of robust Kalman filters (RKF) that are constructed for multiple piecewise linear (PWL) models that are constructed at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by using a time-varying norm bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple RKF-based FDI scheme is simulated for a single spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties, process and measurement noise. Our comparative studies confirm the superiority of our proposed FDI method when compared to the methods that are available in the literature.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects. The results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
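The first-order sensitivity measure referred to above can be estimated by comparing the variance of conditional means to the total variance; the toy response function and parameter count below are placeholders, not the YSU/MM5 parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 20000, 5
X = rng.random((n, k))                    # 5 scheme parameters scaled to [0, 1]
Y = 4*X[:, 0] + 2*X[:, 1]**2 + 0.5*X[:, 2] + 0.1*rng.normal(size=n)  # toy response

def first_order_index(x, y, bins=20):
    """S_i ~ Var(E[Y|X_i]) / Var(Y), estimated by binning X_i."""
    idx = np.digitize(x, np.linspace(0, 1, bins + 1)[1:-1])
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return np.sum(weights * (cond_means - y.mean())**2) / y.var()

print([round(first_order_index(X[:, i], Y), 3) for i in range(k)])
```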
Integrated Process Modeling-A Process Validation Life Cycle Companion.
Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph
2017-10-17
During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
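The Monte Carlo idea behind an integrated process model can be sketched by chaining simple unit-operation response models and counting out-of-specification outcomes; both response functions and all distributions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# unit operation 1: fermentation titre as a function of pH (hypothetical response model)
ph = rng.normal(7.0, 0.05, n)
titre = 5.0 - 8.0*(ph - 7.0)**2 + rng.normal(0, 0.1, n)

# unit operation 2: purification; impurity CQA depends on titre and column load
load = rng.normal(30.0, 2.0, n)
impurity = 0.8 + 0.05*load - 0.10*titre + rng.normal(0, 0.05, n)

spec = 2.0                                   # assumed CQA specification limit
print("predicted OOS rate:", np.mean(impurity > spec))
```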
Wan, Bo; Fu, Guicui; Li, Yanruoyue; Zhao, Youhu
2016-08-10
The cementing manufacturing process of ferrite phase shifters suffers from the defect that cementing strength is insufficient and fractures often appear. A method for detecting these defects was studied using multi-sensor Prognostics and Health Management (PHM) theory. The causes of these process defects are analyzed in this paper. In the meantime, the key process parameters were determined, and Differential Scanning Calorimetry (DSC) tests during the cure process of the resin cement were carried out. At the same time, in order to obtain data on changing cementing strength, multiple groups of cementing process tests with different key process parameters were designed and conducted. A relational model of cementing strength versus cure temperature, time and pressure was established by combining the DSC and process test data, based on the Avrami formula. Through sensitivity analysis of the three process parameters, the on-line detection decision criterion and the process parameters with obvious impact on cementing strength were determined. A PHM system with multiple temperature and pressure sensors was established on this basis, and on-line detection, diagnosis and control of ferrite phase shifter cementing process defects were realized. It was verified in subsequent production that the on-line detection system improved the reliability of the ferrite phase shifter cementing process and reduced the incidence of insufficient cementing strength defects.
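A sketch of Avrami-type cure kinetics, fitting degree-of-cure data to X(t) = 1 - exp(-k t^n); the DSC conversion data are invented, and the temperature/pressure dependence of the paper's relational model is omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n):
    # degree of cure versus time under the Avrami formula
    return 1.0 - np.exp(-k * t**n)

t = np.array([5, 10, 20, 40, 60, 90.0])            # minutes (invented DSC data)
X = np.array([0.10, 0.28, 0.58, 0.86, 0.95, 0.99])  # degree of cure

(k, n), _ = curve_fit(avrami, t, X, p0=[0.01, 1.5])
print(k, n)   # fitted rate constant and Avrami exponent
```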
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2004-04-01
Autoregressive conditional duration (ACD) processes, which have the potential to be applied to power law distributions of complex systems found in natural science, life science, and social science, are analyzed both numerically and theoretically. An ACD(1) process exhibits the singular second order moment, which suggests that its probability density function (PDF) has a power law tail. It is verified that the PDF of the ACD(1) has a power law tail with an arbitrary exponent depending on a model parameter. On the basis of the theory of random multiplicative processes, a relation between the model parameter and the power law exponent is theoretically derived. It is confirmed that the relation is valid from numerical simulations. An application of the ACD(1) to intervals between two successive transactions in a foreign currency market is shown.
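A minimal ACD(1) simulation under the usual specification x_i = psi_i * eps_i with psi_i = omega + a * x_{i-1} and unit-mean exponential innovations, followed by a crude Hill estimate of the tail; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
omega, a, n = 0.1, 0.9, 200_000
x = np.empty(n)
xprev = omega
for i in range(n):
    psi = omega + a * xprev          # conditional expected duration
    xprev = x[i] = psi * rng.exponential(1.0)

# crude Hill estimate of the survival-function tail index over the top 1%
q = np.quantile(x, 0.99)
tail = x[x > q]
alpha_hat = len(tail) / np.log(tail / q).sum()
print(alpha_hat)                     # PDF tail exponent is roughly alpha_hat + 1
```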
Multiple electron processes of He and Ne by proton impact
NASA Astrophysics Data System (ADS)
Terekhin, Pavel Nikolaevich; Montenegro, Pablo; Quinto, Michele; Monti, Juan; Fojon, Omar; Rivarola, Roberto
2016-05-01
A detailed investigation of multiple electron processes (single and multiple ionization, single capture, transfer-ionization) of He and Ne is presented for proton impact at intermediate and high collision energies. Exclusive absolute cross sections for these processes have been obtained by calculating transition probabilities in the independent electron and independent event models as a function of impact parameter in the framework of the continuum distorted wave-eikonal initial state theory. A binomial analysis is employed to calculate exclusive probabilities. The comparison with available theoretical and experimental results shows that exclusive probabilities are needed for a reliable description of the experimental data. The developed approach can be used for obtaining the input database for modeling multiple electron processes of charged particles passing through matter.
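The binomial analysis mentioned above can be stated directly: if the independent-electron model gives a single-electron probability p at a given impact parameter, the exclusive probability that exactly k of N equivalent electrons undergo the process is binomial. The values of p and N below are placeholders:

```python
from math import comb

def exclusive_probability(p, N, k):
    """P(exactly k of N independent, equivalent electrons undergo the process)."""
    return comb(N, k) * p**k * (1 - p)**(N - k)

p, N = 0.12, 8   # illustrative single-electron probability and electron count
print([exclusive_probability(p, N, k) for k in range(4)])
```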
Multiple wavelengths filtering of light through inner resonances.
Felbacq, Didier; Larciprete, Maria Cristina; Sibilia, Concita; Bertolotti, Mario; Scalora, Michael
2005-12-01
We show that by using the internal resonances of a grating, it is possible to design a filter working for multiple wavelengths. We study the characteristics of the device with respect to the constituting parameters and we propose a realization process.
Borchers, D L; Langrock, R
2015-12-01
We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too.
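A short simulation of a Markov-modulated Poisson availability process: a hidden two-state chain switches between high-rate and low-rate Poisson surfacing events; all rates are illustrative, not estimates for minke whales:

```python
import numpy as np

rng = np.random.default_rng(8)
Q = np.array([[-0.2, 0.2],            # generator of the hidden 2-state chain
              [0.5, -0.5]])
lam = np.array([2.0, 0.1])            # availability-event rate in each state

t, T, state, events = 0.0, 60.0, 0, []
while t < T:
    hold = rng.exponential(-1.0 / Q[state, state])   # sojourn time in this state
    span = min(hold, T - t)
    k = rng.poisson(lam[state] * span)               # events during the sojourn
    events.extend(t + np.sort(rng.random(k)) * span) # uniform given the count
    t += hold
    state = 1 - state
print(len(events), np.round(events[:5], 2))
```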
A deterministic (non-stochastic) low frequency method for geoacoustic inversion.
Tolstoy, A
2010-06-01
It is well known that multiple frequency sources are necessary for accurate geoacoustic inversion. This paper presents an inversion method which uses the low frequency (LF) spectrum only to estimate bottom properties even in the presence of expected errors in source location, phone depths, and ocean sound-speed profiles. Matched field processing (MFP) along a vertical array is used. The LF method first conducts an exhaustive search of the (five) parameter search space (sediment thickness, sound-speed at the top of the sediment layer, the sediment layer sound-speed gradient, the half-space sound-speed, and water depth) at 25 Hz and continues by retaining only the high MFP value parameter combinations. Next, frequency is slowly increased while again retaining only the high value combinations. At each stage of the process, only those parameter combinations which give high MFP values at all previous LF predictions are considered (an ever shrinking set). It is important to note that a complete search of each relevant parameter space seems to be necessary not only at multiple (sequential) frequencies but also at multiple ranges in order to eliminate sidelobes, i.e., false solutions. Even so, there are no mathematical guarantees that one final, unique "solution" will be found.
A Theory of Exoplanet Transits with Light Scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Tyler D., E-mail: tydrobin@ucsc.edu
Exoplanet transit spectroscopy enables the characterization of distant worlds, and will yield key results for NASA's James Webb Space Telescope. However, transit spectra models are often simplified, omitting potentially important processes like refraction and multiple scattering. While the former process has seen recent development, the effects of light multiple scattering on exoplanet transit spectra have received little attention. Here, we develop a detailed theory of exoplanet transit spectroscopy that extends to the full refracting and multiple scattering case. We explore the importance of scattering for planet-wide cloud layers, where the relevant parameters are the slant scattering optical depth, the scattering asymmetry parameter, and the angular size of the host star. The latter determines the size of the "target" for a photon that is back-mapped from an observer. We provide results that straightforwardly indicate the potential importance of multiple scattering for transit spectra. When the orbital distance is smaller than 10-20 times the stellar radius, multiple scattering effects for aerosols with asymmetry parameters larger than 0.8-0.9 can become significant. We provide examples of the impacts of cloud/haze multiple scattering on transit spectra of a hot Jupiter-like exoplanet. For cases with a forward and conservatively scattering cloud/haze, differences due to multiple scattering effects can exceed 200 ppm, but shrink to zero at wavelength ranges corresponding to strong gas absorption or when the slant optical depth of the cloud exceeds several tens. We conclude with a discussion of types of aerosols for which multiple scattering in transit spectra may be important.
NASA Technical Reports Server (NTRS)
Martin, T. V.; Mullins, N. E.
1972-01-01
The operating and set-up procedures for the multi-satellite, multi-arc GEODYN Orbit Determination program are described. All system output is analyzed. The GEODYN program is the nucleus of the entire GEODYN system. It is a definitive orbit and geodetic parameter estimation program capable of simultaneously processing observations from multiple arcs of multiple satellites. GEODYN has two modes of operation: (1) the data reduction mode and (2) the orbit generation mode.
Multiple Source DF (Direction Finding) Signal Processing: An Experimental System,
The MUltiple SIgnal Characterization (MUSIC) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of...the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been
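Although the record above is truncated, the MUSIC idea it describes is standard and can be sketched: project candidate steering vectors onto the noise subspace of the sample covariance and locate peaks. The uniform linear array and source directions below are illustrative:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(9)
m, n = 8, 200                                    # 8-element ULA, 200 snapshots
doas = np.deg2rad([-20.0, 35.0])                 # two illustrative sources

def steering(theta):
    # half-wavelength element spacing
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

A = np.column_stack([steering(t) for t in doas])
S = rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))
X = A @ S + 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))

R = X @ X.conj().T / n                           # sample covariance
w, V = np.linalg.eigh(R)                         # eigenvalues ascending
En = V[:, : m - 2]                               # noise subspace

grid = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t))**2 for t in grid])
peaks, _ = find_peaks(p)
print(np.rad2deg(grid[peaks[np.argsort(p[peaks])[-2:]]]))   # estimated DOAs
```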
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
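A stand-in for one of the constrained one-dimensional walks: the standard Wang-Landau recipe estimating the density of states over energy for a small 2D Ising lattice (the macroscopic-order-parameter constraint of the paper is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(10)
L = 4
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # periodic 2D Ising energy, each bond counted once
    return -int(np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

logg, hist, f = {}, {}, 1.0          # ln g(E), visit histogram, modification factor
E = energy(spins)
while f > 1e-4:
    for _ in range(20000):
        i, j = rng.integers(L, size=2)
        dE = 2*spins[i, j]*(spins[(i+1) % L, j] + spins[(i-1) % L, j]
                            + spins[i, (j+1) % L] + spins[i, (j-1) % L])
        Enew = E + dE
        # accept the flip with probability min(1, g(E)/g(Enew))
        if np.log(rng.random()) < logg.get(E, 0.0) - logg.get(Enew, 0.0):
            spins[i, j] *= -1
            E = Enew
        logg[E] = logg.get(E, 0.0) + f
        hist[E] = hist.get(E, 0) + 1
    if min(hist.values()) > 0.8 * np.mean(list(hist.values())):  # flat enough?
        f, hist = f / 2, {}
g0 = min(logg.values())
print({e: round(g - g0, 2) for e, g in sorted(logg.items())})
```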
Osche, G R
2000-08-20
Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
Design of a multiple kernel learning algorithm for LS-SVM by convex programming.
Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou
2011-06-01
As a kernel based method, the performance of least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient in selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed.
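An LS-SVM with a fixed convex combination of kernels can be sketched as below; the SDP step of the paper, which optimizes the combination weights and the regularization parameter, is replaced here by hand-picked values for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)
X = rng.normal(size=(60, 2))
y = np.sign(X[:, 0] + X[:, 1])                 # toy labels

def rbf(A, B, s):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

mu, gamma = [0.7, 0.3], 10.0                   # assumed kernel weights, regularization
K = mu[0]*rbf(X, X, 0.5) + mu[1]*rbf(X, X, 2.0)

# LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b, alpha] = [0, y]
n = len(y)
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = K + np.eye(n) / gamma
b_alpha = np.linalg.solve(M, np.concatenate([[0.0], y]))
b, alpha = b_alpha[0], b_alpha[1:]

pred = np.sign(K @ alpha + b)
print((pred == y).mean())                      # training accuracy
```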
Optimization of Gas Metal Arc Welding Process Parameters
NASA Astrophysics Data System (ADS)
Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.
2016-09-01
This study presents the application of the Taguchi method combined with grey relational analysis to optimize the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steel for multiple quality characteristics (bead width, bead height, weld penetration and heat affected zone). An L9 orthogonal array has been employed for the fabrication of joints. The experiments were conducted according to combinations of voltage (V), current (A) and welding speed (Ws). The results revealed that welding speed is the most significant process parameter. By analyzing the grey relational grades, optimal parameters are obtained, and the significant factors are identified using ANOVA. The welding parameters such as speed, welding current and voltage have been optimized for AISI 1020 using the GMAW process. To fortify the robustness of the experimental design, a confirmation test was performed at the selected optimal process parameter setting. Observations from this method may be useful for automotive sub-assemblies, shipbuilding and vessel fabricators and operators to obtain optimal welding conditions.
Neutron coincidence measurements when nuclear parameters vary during the multiplication process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ming-Shih; Teichmann, T.
1995-07-01
In a recent paper, a physical/mathematical model was developed for neutron coincidence counting, taking explicit account of neutron absorption and leakage, and using dual probability generating functions to derive explicit formulae for the single and multiple count-rates in terms of the physical parameters of the system. The results of this modeling proved very successful in a number of cases in which the system parameters (neutron reaction cross-sections, detection probabilities, etc.) remained the same at the various stages of the process (i.e. from collision to collision). However, there are practical circumstances in which such system parameters change from collision to collision, and it is necessary to accommodate these, too, in a general theory applicable to such situations. For instance, in the case of the neutron coincidence collar (NCC), the parameters for the initial, spontaneous fission neutrons are not the same as those for the succeeding induced fission neutrons, and similar situations can be envisaged for certain other experimental configurations. This document shows how the previous considerations can be elaborated to embrace these more general requirements.
On two diffusion neuronal models with multiplicative noise: The mean first-passage time properties
NASA Astrophysics Data System (ADS)
D'Onofrio, G.; Lansky, P.; Pirozzi, E.
2018-04-01
Two diffusion processes with multiplicative noise, able to model the changes in the neuronal membrane depolarization between two consecutive spikes of a single neuron, are considered and compared. The processes have the same deterministic part but different stochastic components. The differences in the state-dependent variabilities, their asymptotic distributions, and the properties of the first-passage time across a constant threshold are investigated. Closed form expressions for the mean of the first-passage time of both processes are derived and applied to determine the role played by the parameters involved in the model. It is shown that for some values of the input parameters, the higher variability, given by the second moment, does not imply shorter mean first-passage time. The reason for that can be found in the complete shape of the stationary distribution of the two processes. Applications outside neuroscience are also mentioned.
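The mean first-passage time can be checked numerically for a generic multiplicative-noise diffusion; the drift and noise form below (dX = (-X/tau + mu) dt + sigma X dW) is a stand-in for the two models compared in the paper, and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(12)
tau, mu, sigma, S = 1.0, 1.2, 0.3, 1.0    # time constant, input, noise, threshold
dt, n_paths, t_max = 1e-3, 5000, 50.0

x = np.zeros(n_paths)
fpt = np.full(n_paths, np.nan)
t = 0.0
while np.isnan(fpt).any() and t < t_max:   # Euler-Maruyama over surviving paths
    alive = np.isnan(fpt)
    dW = rng.normal(0.0, np.sqrt(dt), alive.sum())
    x[alive] += (-x[alive]/tau + mu)*dt + sigma*x[alive]*dW
    t += dt
    crossed = alive.copy()
    crossed[alive] = x[alive] >= S
    fpt[crossed] = t
print(np.nanmean(fpt))                     # mean FPT over paths that crossed
```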
NASA Astrophysics Data System (ADS)
Raju, B. S.; Sekhar, U. Chandra; Drakshayani, D. N.
2017-08-01
The paper investigates optimization of the stereolithography process for SL5530 epoxy resin material to enhance part quality. The major performance characteristics selected to evaluate the process are tensile strength, flexural strength, impact strength and density, and the corresponding process parameters are layer thickness, orientation and hatch spacing. In this study, the process intrinsically involves tuning multiple parameters, so grey relational analysis, which uses the grey relational grade as a performance index, is adopted to determine the optimal combination of process parameters. Moreover, principal component analysis is applied to evaluate the weighting values corresponding to the various performance characteristics so that their relative importance can be properly and objectively determined. The results of confirmation experiments reveal that grey relational analysis coupled with principal component analysis can effectively acquire the optimal combination of process parameters. Hence, this confirms that the proposed approach can be a useful tool to improve the process parameters in the stereolithography process, which is very useful information for machine designers as well as RP machine users.
Parameter estimation and forecasting for multiplicative log-normal cascades.
Leövey, Andrés E; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
Pulsed electric fields for pasteurization: defining processing conditions
USDA-ARS?s Scientific Manuscript database
Application of pulsed electric fields (PEF) technology in food pasteurization has been extensively studied. Optimal PEF treatment conditions for maximum microbial inactivation depend on multiple factors including PEF processing conditions, production parameters and product properties. In order for...
Simulation and design of ECT differential bobbin probes for the inspection of cracks in bolts
NASA Astrophysics Data System (ADS)
Ra, S. W.; Im, K. H.; Lee, S. G.; Kim, H. J.; Song, S. J.; Kim, S. K.; Cho, Y. T.; Woo, Y. D.; Jung, J. A.
2015-12-01
Various defects can be generated in bolts used in oil filters during the manufacturing process and may affect the safety and quality of the bolts. Moreover, fine defects may be embedded in the oil filter system during the multiple forging manufacturing processes, so it is very important that such defects be investigated and screened out during these processes. Therefore, in order to evaluate the fine defects effectively, the design parameters for bobbin-type probes were selected using finite element method (FEM) simulations and eddy current testing (ECT). In particular, the FEM simulations were performed to characterize crack detection in the bolts, and parameters such as the number of coil turns, the coil size and the applied frequency were calculated based on the simulation results.
Extensions of Rasch's Multiplicative Poisson Model.
ERIC Educational Resources Information Center
Jansen, Margo G. H.; van Duijn, Marijtje A. J.
1992-01-01
A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)
NASA Astrophysics Data System (ADS)
Sharifi, P.; Jamali, J.; Sadayappan, K.; Wood, J. T.
2018-05-01
A quantitative experimental study of the effects of process parameters on the formation of defects during solidification of high-pressure die cast magnesium alloy components is presented. The parameters studied are slow-stage velocity, fast-stage velocity, intensification pressure, and die temperature. The amounts of various defects are quantitatively characterized. Multiple runs of the commercial casting simulation package, ProCAST™, are used to model the mold-filling and solidification events. Several locations in the component, including knit lines, the last-to-fill region, and the last-to-solidify region, are identified as critical regions with a high concentration of defects. The area fractions of total porosity, shrinkage porosity, gas porosity, and externally solidified grains are separately measured. This study shows that the process parameters, fluid flow, and local solidification conditions play major roles in the formation of defects during the HPDC process.
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most of those methods, such as maximum likelihood, subspace, and observer Kalman identification, require extensive offline processing and are not suitable for real-time processing. One technique which is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence of filter measurement residuals, and the residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
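A toy version of residual (covariance-matching) tuning: run a scalar Kalman filter, compare the sample innovation covariance with its predicted value, and nudge the process noise toward consistency. The system, adaptation gain and batch size are illustrative, not the WIRE filter:

```python
import numpy as np

rng = np.random.default_rng(13)
q_true, r_true, n = 0.04, 0.25, 5000
x, zs = 0.0, []
for _ in range(n):                       # random-walk truth and noisy measurements
    x += rng.normal(0, np.sqrt(q_true))
    zs.append(x + rng.normal(0, np.sqrt(r_true)))

q, r, p, xh = 1e-3, 0.25, 1.0, 0.0       # deliberately wrong initial process noise
window = []
for z in zs:
    p += q                               # predict
    s = p + r                            # predicted innovation covariance
    nu = z - xh                          # innovation (residual)
    k = p / s
    xh += k * nu                         # update
    p *= (1 - k)
    window.append(nu**2)
    if len(window) == 100:               # covariance matching over a sliding batch
        c_hat = np.mean(window)
        q = max(1e-6, q + 0.1 * (c_hat - s))   # crude nudge toward consistency
        window = []
print(q)                                 # should drift toward q_true
```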
Optimization of hybrid laser - TIG welding of 316LN steel using response surface methodology (RSM)
NASA Astrophysics Data System (ADS)
Ragavendran, M.; Chandrasekhar, N.; Ravikumar, R.; Saxena, Rajesh; Vasudevan, M.; Bhaduri, A. K.
2017-07-01
In the present study, the hybrid laser-TIG welding parameters for welding of 316LN austenitic stainless steel have been investigated by combining a pulsed laser beam with a TIG welding heat source at the weld pool. Laser power, pulse frequency, pulse duration and TIG current were chosen as the welding process parameters, whereas weld bead width, weld cross-sectional area and depth of penetration (DOP) were considered as the process responses. A central composite design was used to construct the design matrix, and welding experiments were conducted based on it. Weld bead measurements were then carried out to generate the dataset. Multiple regression models correlating the process parameters with the responses have been developed, and the accuracy of the models was found to be good. The desirability-approach optimization technique was then employed to determine the optimum process parameters for the desired weld bead profile. Validation experiments were carried out using the determined optimum process parameters, and there was good agreement between the predicted and measured values.
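The desirability approach can be sketched as mapping each fitted response onto [0, 1] and maximizing the geometric mean; the quadratic response models below are placeholders for the paper's fitted regression models:

```python
import numpy as np
from scipy.optimize import minimize

def d_target(y, lo, target, hi):
    """Desirability for a target-is-best response, 0 outside [lo, hi]."""
    if y < lo or y > hi:
        return 0.0
    return (y - lo)/(target - lo) if y <= target else (hi - y)/(hi - target)

def responses(x):
    # placeholder regression models in coded units (power, frequency)
    power, freq = x
    dop = 2.0 + 0.8*power + 0.3*freq - 0.2*power*freq
    width = 4.0 + 0.5*power - 0.4*freq
    return dop, width

def neg_overall_desirability(x):
    dop, width = responses(x)
    d = [d_target(dop, 2.0, 3.0, 4.0), d_target(width, 3.0, 3.5, 4.5)]
    return -np.prod(d)**(1/len(d))      # geometric mean, negated for minimization

res = minimize(neg_overall_desirability, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x, -res.fun)
```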
Multiple-predators-based capture process on complex networks
NASA Astrophysics Data System (ADS)
Ramiz Sharafat, Rajput; Pu, Cunlai; Li, Jie; Chen, Rongbin; Xu, Zhongqi
2017-03-01
The predator/prey (capture) problem is a prototype of many network-related applications. We study the capture process on complex networks by considering multiple predators from multiple sources. In our model, several lions start from multiple sources simultaneously to capture the lamb by biased random walks, which are controlled with a free parameter $\alpha$. We derive the distribution of the lamb's lifetime and the expected lifetime $\langle T \rangle$. Through simulation, we find that the expected lifetime drops substantially with an increasing number of lions. We also study how the underlying topological structure affects the capture process, and find that locating on small-degree nodes is better than on large-degree nodes for prolonging the lifetime of the lamb. Moreover, dense or homogeneous network structures work against the survival of the lamb.
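A minimal simulation of the biased-walk capture model: each lion steps to neighbor j with probability proportional to k_j^alpha, and the lamb's lifetime is the first time a lion reaches its node; the graph, alpha and counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(14)
n, alpha, n_lions = 200, 1.0, 4

# illustrative Erdos-Renyi graph stored as adjacency lists
adj = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.03:
            adj[i].append(j)
            adj[j].append(i)
deg = np.array([max(len(a), 1) for a in adj])

def lifetime(lamb, lions, t_max=100_000):
    t = 0
    while lamb not in lions and t < t_max:
        for idx, pos in enumerate(lions):        # each lion takes one biased step
            nbrs = adj[pos]
            if nbrs:
                w = deg[nbrs] ** alpha           # step probability ~ k_j^alpha
                lions[idx] = int(rng.choice(nbrs, p=w / w.sum()))
        t += 1
    return t

T = [lifetime(int(rng.integers(n)), list(rng.integers(n, size=n_lions)))
     for _ in range(200)]
print(np.mean(T))                                # expected lifetime estimate
```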
Goldrick, Stephen; Holmes, William; Bond, Nicholas J; Lewis, Gareth; Kuiper, Marcel; Turner, Richard; Farid, Suzanne S
2017-10-01
Product quality heterogeneities, such as a trisulfide bond (TSB) formation, can be influenced by multiple interacting process parameters. Identifying their root cause is a major challenge in biopharmaceutical production. To address this issue, this paper describes the novel application of advanced multivariate data analysis (MVDA) techniques to identify the process parameters influencing TSB formation in a novel recombinant antibody-peptide fusion expressed in mammalian cell culture. The screening dataset was generated with a high-throughput (HT) micro-bioreactor system (Ambr™ 15) using a design of experiments (DoE) approach. The complex dataset was firstly analyzed through the development of a multiple linear regression model focusing solely on the DoE inputs, which identified the temperature, pH and initial nutrient feed day as important process parameters influencing this quality attribute. To further scrutinize the dataset, a partial least squares model was subsequently built incorporating both on-line and off-line process parameters, which enabled accurate predictions of the TSB concentration at harvest. Process parameters identified by the models to promote and suppress TSB formation were implemented on five 7 L bioreactors, and the resultant TSB concentrations were comparable to the model predictions. This study demonstrates the ability of MVDA to enable predictions of the key performance drivers influencing TSB formation that remain valid upon scale-up.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Multiplicative Forests for Continuous-Time Processes
Weiss, Jeremy C.; Natarajan, Sriraam; Page, David
2013-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967
Multiplicative Forests for Continuous-Time Processes.
Weiss, Jeremy C; Natarajan, Sriraam; Page, David
2012-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jin-Won; Lee, Yun-Seong, E-mail: leeeeys@kaist.ac.kr; Chang, Hong-Young
2014-08-15
In this study, we examined the feasibility of multiple inductively coupled plasma (ICP) and helicon plasma sources for large-area processes. Experiments were performed with one and two coils to measure plasma and electrical parameters, and a circuit simulation was performed to determine the current at each coil in the two-coil experiment. Based on the results, multiple ICP sources appear feasible, because the impedance changes directly with current and saturates owing to the skin-depth effect. A helicon plasma source, however, is difficult to adapt to multiple sources because of the continual change of real impedance caused by mode transition and the low uniformity of the B-field confinement. As a result, ICP is expected to be adaptable to multiple sources for large-area processes.
Ergodicity-breaking bifurcations and tunneling in hyperbolic transport models
NASA Astrophysics Data System (ADS)
Giona, M.; Brasiello, A.; Crescitelli, S.
2015-11-01
One of the main differences between parabolic transport, associated with Langevin equations driven by Wiener processes, and hyperbolic models related to generalized Kac equations driven by Poisson processes, is the occurrence in the latter of multiple stable invariant densities (Frobenius multiplicity) in certain regions of the parameter space. This phenomenon is associated with the occurrence in linear hyperbolic balance equations of a typical bifurcation, referred to as the ergodicity-breaking bifurcation, the properties of which are thoroughly analyzed.
Parameter estimation and forecasting for multiplicative log-normal cascades
NASA Astrophysics Data System (ADS)
Leövey, Andrés E.; Lux, Thomas
2012-04-01
We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono's procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K; de Supinski, B
2006-11-07
This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are described: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that, for the calibration process, the user begin with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibration process.
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most of those methods, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time processing. One technique which is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence of filter measurement residuals, and the residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for the estimation of process noise, and equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
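In the spirit of the residual tuning idea, the scalar sketch below inflates the filter's process noise whenever the empirical innovation variance exceeds what the filter predicts. This is a minimal illustrative scheme, not the flight implementation; the state model, window length, and gain are assumptions.

```python
# Hedged sketch: residual-based adaptation of process noise in a scalar KF.
import numpy as np

def adaptive_kf(zs, R=1.0, q0=1e-3, window=20):
    x, P, q = 0.0, 1.0, q0
    residuals, xs = [], []
    for z in zs:
        P_pred = P + q                 # random-walk state model: x_k = x_{k-1} + w
        nu = z - x                     # measurement residual (innovation)
        S = P_pred + R                 # predicted innovation variance
        residuals.append(nu)
        if len(residuals) >= window:
            S_emp = np.var(residuals[-window:])   # empirical innovation variance
            # attribute persistent excess variance to underestimated process noise
            q = max(q0, q + 0.1 * (S_emp - S))
        K = P_pred / S
        x = x + K * nu
        P = (1 - K) * P_pred
        xs.append(x)
    return np.array(xs), q

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0, 0.3, 500))   # true state drifts faster than q0
zs = truth + rng.normal(0, 1.0, 500)
est, q_final = adaptive_kf(zs)
print("tuned process noise q:", q_final)
```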
Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.
Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis
2015-01-01
Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. The developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese hamster ovary (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring of multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches, which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
Logistic Stick-Breaking Process
Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.
2013-01-01
A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593
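A small sketch of the stick-breaking construction underlying the LSBP: each stick proportion is a logistic function of a covariate (for example, a spatial location), so nearby covariate values receive similar cluster probabilities. The covariate, number of sticks, and coefficients are illustrative assumptions.

```python
# Hedged sketch of logistic stick-breaking cluster weights.
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def lsbp_weights(x, w, b):
    """Cluster probabilities at location x from K-1 logistic sticks."""
    v = sigmoid(w * x + b)                # stick proportions, shape (K-1,)
    probs, remaining = [], 1.0
    for vk in v:
        probs.append(vk * remaining)      # take a vk-fraction of what is left
        remaining *= (1.0 - vk)
    probs.append(remaining)               # last cluster absorbs the rest
    return np.array(probs)

rng = np.random.default_rng(0)
w, b = rng.normal(0, 2, 4), rng.normal(0, 2, 4)   # 5 clusters
for x in (-2.0, 0.0, 2.0):
    p = lsbp_weights(x, w, b)
    print(x, p.round(3), p.sum())          # valid probabilities summing to 1
```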
Processing of meteorological data with ultrasonic thermoanemometers
NASA Astrophysics Data System (ADS)
Telminov, A. E.; Bogushevich, A. Ya.; Korolkov, V. A.; Botygin, I. A.
2017-11-01
The article describes a software system intended to support scientific research on the atmosphere by processing data gathered by multi-level ultrasonic complexes for automated monitoring of meteorological and turbulent parameters in the atmospheric surface layer. The system processes files containing instantaneous values of temperature, the three orthogonal components of wind speed, humidity and pressure. Processing is performed in multiple stages. In the first stage, the system executes the researcher's query for meteorological parameters. In the second stage, the system computes a series of standard statistical properties of the meteorological fields, such as means, variance, standard deviation, skewness, kurtosis, correlation, etc. The third stage prepares for computing the parameters of atmospheric turbulence. The computation results are displayed to the user and stored on the hard drive.
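A minimal sketch of the second-stage statistics listed above, computed for a synthetic ultrasonic time series; the variables, values, and sampling rate are assumptions.

```python
# Hedged sketch: standard statistics of a meteorological time series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
temperature = 15.0 + rng.normal(0, 0.4, 36000)   # e.g., 10 Hz over 1 hour
wind_u = rng.normal(2.0, 0.8, 36000)             # one wind-speed component

print("mean:", temperature.mean())
print("variance:", temperature.var(ddof=1))
print("std dev:", temperature.std(ddof=1))
print("skewness:", stats.skew(temperature))      # the 'asymmetry coefficient'
print("excess kurtosis:", stats.kurtosis(temperature))
print("corr(T, u):", np.corrcoef(temperature, wind_u)[0, 1])
```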
The Blessing and the Curse of the Multiplicative Updates
NASA Astrophysics Data System (ADS)
Warmuth, Manfred K.
Multiplicative updates multiply the parameters by nonnegative factors. These updates are motivated by a Maximum Entropy Principle, and they are prevalent in evolutionary processes, where the parameters are, for example, concentrations of species and the factors are survival rates. The simplest such update is Bayes rule, and we give an in vitro selection algorithm for RNA strands that implements this rule in the test tube, where each RNA strand represents a different model. In one liter of the RNA "soup" there are approximately 10^20 different strands, and therefore this is a rather high-dimensional implementation of Bayes rule.
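The update itself is a one-liner; the toy sketch below shows Bayes rule as a multiplicative update on a vector of model weights. The three-model setup and numbers are illustrative.

```python
# Tiny sketch: Bayes rule as a multiplicative update.
import numpy as np

prior = np.array([0.5, 0.3, 0.2])          # 'concentrations' of each model
likelihood = np.array([0.9, 0.2, 0.6])     # nonnegative factors (survival rates)
posterior = prior * likelihood              # the multiplicative update
posterior /= posterior.sum()                # renormalize the total concentration
print(posterior)                            # Bayes rule: P(model | data)
```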
Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.
Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing
2009-06-01
Methods of image encryption based on the fractional Fourier transform have an inherent flaw in security. We show that these schemes have the deficiency that, for one group of encryption keys, many other groups of keys can decrypt the encrypted image correctly, for several reasons. In some schemes, many factors cause the deficiency, such as the encryption scheme based on the multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)]. A modified method is proposed to avoid all the deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. © 2009 Optical Society of America.
Borole, Abhijeet P.
2015-08-25
Conversion of biomass into bioenergy is possible via multiple pathways, resulting in the production of biofuels, bioproducts and biopower. Efficient and sustainable conversion of biomass, however, requires consideration of many environmental and societal parameters in order to minimize negative impacts. Integration of multiple conversion technologies and inclusion of upcoming alternatives such as bioelectrochemical systems can minimize these impacts and improve the conservation of resources such as hydrogen, water and nutrients via recycling and reuse. This report outlines alternative pathways integrating microbial electrolysis into biorefinery schemes to improve energy efficiency while evaluating environmental sustainability parameters.
NASA Astrophysics Data System (ADS)
Rokni, M. R.; Nutt, S. R.; Widener, C. A.; Champagne, V. K.; Hrabe, R. H.
2017-08-01
In the cold spray (CS) process, deposits are produced by depositing powder particles at high velocity onto a substrate. Powders deposited by CS do not undergo melting before or upon impacting the substrate. This feature makes CS suitable for deposition of a wide variety of materials, most commonly metallic alloys, but also ceramics and composites. During processing, the particles undergo severe plastic deformation and create a more mechanical and less metallurgical bond with the underlying material. The deformation behavior of an individual particle depends on multiple material and process parameters that are classified into three major groups—powder characteristics, geometric parameters, and processing parameters, each with their own subcategories. Changing any of these parameters leads to evolution of a different microstructure and consequently changes the mechanical properties in the deposit. While cold spray technology has matured during the last decade, the process is inherently complex, and thus, the effects of deposition parameters on particle deformation, deposit microstructure, and mechanical properties remain unclear. The purpose of this paper is to review the parameters that have been investigated up to now with an emphasis on the existent relationships between particle deformation behavior, microstructure, and mechanical properties of various cold spray deposits.
NASA Astrophysics Data System (ADS)
Jia, Bing
2014-03-01
A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, and it can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, presenting different period-adding bifurcation processes with chaos when one parameter was varied while the other was fixed at different levels. In biological experiments, different period-adding bifurcation scenarios with chaos, obtained by decreasing the extracellular calcium concentration, were observed in some neural pacemakers at different levels of extracellular 4-aminopyridine concentration, and in other pacemakers at different levels of extracellular caesium concentration. Using the nonlinear time series analysis method, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces in the HR neuron model. The results also reveal relationships between different firing patterns in two-dimensional parameter spaces.
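For reference, a minimal sketch of the HR neuron model integrated with SciPy, using standard textbook parameter values rather than necessarily those of the cited simulations.

```python
# Hedged sketch of the Hindmarsh-Rose model; parameters are textbook defaults.
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, state, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6, I=3.0):
    x, y, z = state
    dx = y - a * x**3 + b * x**2 - z + I   # membrane potential
    dy = c - d * x**2 - y                  # fast recovery variable
    dz = r * (s * (x - x_rest) - z)        # slow adaptation current
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0, 2000), [-1.6, 0.0, 2.0], max_step=0.05)
x = sol.y[0]
# Crude firing statistic: count upward spike-threshold crossings.
spikes = np.sum((x[:-1] < 1.0) & (x[1:] >= 1.0))
print("spikes in run:", spikes)
```

Scanning two of the parameters (for example I and r) over a grid and classifying the resulting firing pattern is one way to map out bifurcation structure of the kind described above.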
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty and ignore the model uncertainty in process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating model averaging methods into the framework of variance-based global sensitivity analysis, given that model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
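A hedged Monte Carlo sketch of the idea behind such a process sensitivity index: treat one realization of a process as a joint draw of (model choice, parameter values), and compare the variance of conditional means against the total output variance. The toy output function, distributions, and sample sizes below are illustrative assumptions, not the study's groundwater model or its exact index.

```python
# Hedged sketch of a variance-based process sensitivity index that carries
# both model and parametric uncertainty for each process.
import numpy as np

rng = np.random.default_rng(0)

def sample_recharge():
    # process A: two competing recharge models, equal prior probability
    if rng.random() < 0.5:
        return 0.2 * rng.uniform(600, 1000)        # linear fraction of precip
    return 0.05 * rng.uniform(600, 1000) ** 1.15   # nonlinear alternative

def sample_geology():
    # process B: two hydraulic-conductivity parameterizations
    if rng.random() < 0.5:
        return 10 ** rng.normal(0.0, 0.3)
    return rng.lognormal(0.2, 0.5)

def output(recharge, K):
    return recharge / K                             # toy head-like response

N, M = 2000, 200
# Outer loop fixes one realization of the recharge process (model + params);
# inner loop averages over everything else. Var of those means, over total
# variance, approximates a first-order process sensitivity index for recharge.
means = [np.mean([output(r, sample_geology()) for _ in range(M)])
         for r in (sample_recharge() for _ in range(N))]
total = [output(sample_recharge(), sample_geology()) for _ in range(N * 4)]
print("PS(recharge) ~", np.var(means) / np.var(total))
```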
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is reduced about nine-fold; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). Flood forecasting uncertainty is also considerably reduced with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and it could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework or standard for implementing image data processing applications, simplifies the set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines the operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram
2018-06-08
Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant investments of time and product, which are especially costly during early-phase development, when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define the lyophilization process parameters. To overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define the critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined from which near-optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product for which statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early-phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another, and hence these pipes are subjected to tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in the setting of less-than-optimal values. Hence, there arises a necessity to determine optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, designed experiments (DoE) are conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis, and the results proved to be in agreement with the main experimental findings, with the withstanding pressure showing a significant improvement from 0.60 to 1.004 MPa.
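A minimal sketch of the larger-the-better S/N ratio used in such Taguchi analyses, applied to hypothetical replicate withstanding-pressure measurements for three trial conditions; the data are assumptions.

```python
# Hedged sketch of the Taguchi larger-the-better S/N ratio.
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10 * log10( mean(1 / y^2) ), in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

trials = {
    "run1": [0.61, 0.58, 0.63],     # MPa, hypothetical replicates
    "run2": [0.85, 0.90, 0.88],
    "run3": [1.00, 0.98, 1.02],
}
for name, y in trials.items():
    print(name, round(sn_larger_is_better(y), 2), "dB")
# The parameter levels of the trial with the highest S/N ratio are preferred.
```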
ERIC Educational Resources Information Center
Bogon, Johanna; Finke, Kathrin; Schulte-Körne, Gerd; Müller, Hermann J.; Schneider, Werner X.; Stenneken, Prisca
2014-01-01
People with developmental dyslexia (DD) have been shown to be impaired in tasks that require the processing of multiple visual elements in parallel. It has been suggested that this deficit originates from disturbed visual attentional functions. The parameter-based assessment of visual attention based on Bundesen's (1990) theory of visual…
Numerical study of impact erosion of multiple solid particle
NASA Astrophysics Data System (ADS)
Zheng, Chao; Liu, Yonghong; Chen, Cheng; Qin, Jie; Ji, Renjie; Cai, Baoping
2017-11-01
Material erosion caused by continuous particle impingement during hydraulic fracturing results in significant economic loss and increased production risks. The erosion process is complex and has not been clearly explained through physical experiments. To address this problem, a multiple-particle model in a 3D configuration was proposed to investigate the dynamic erosion process. This approach can significantly reduce experimental costs. The numerical model considered material damping and the elastic-plastic behavior of the target material. The effects of impact parameters on erosion characteristics, such as plastic deformation, contact time, and energy loss rate, were investigated. Based on comprehensive studies, the dynamic erosion mechanism and the geometric evolution of the eroded crater were obtained. These findings provide a detailed picture of the erosion process of the target material and insights into material erosion caused by multiple-particle impingement.
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables, and the structural mass, fatigue damage, and negative of the natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using the global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic processes, and multiple objectives.
Estimation of multiple accelerated motions using chirp-Fourier transform and clustering.
Alexiadis, Dimitrios S; Sergiadis, George D
2007-01-01
Motion estimation in the spatiotemporal domain has been extensively studied and many methodologies have been proposed, which, however, cannot handle both time-varying and multiple motions. Extending previously published ideas, we present an efficient method for estimating multiple, linearly time-varying motions. It is shown that the estimation of accelerated motions is equivalent to the parameter estimation of superposed chirp signals. From this viewpoint, one can exploit established signal processing tools such as the chirp-Fourier transform. It is shown that accelerated motion results in energy concentration along planes in the 4-D space of spatial frequencies, temporal frequency and chirp rate. Using fuzzy c-planes clustering, we estimate the plane/motion parameters. The effectiveness of our method is verified on both synthetic and real sequences, and its advantages are highlighted.
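A hedged sketch of the dechirping search that underlies a discrete chirp-Fourier transform: multiply the signal by a candidate quadratic-phase correction and take an FFT; the spectrum peaks when the candidate chirp rate matches the true acceleration. The signal parameters and search grid are illustrative.

```python
# Hedged sketch: estimate base frequency and chirp rate by dechirp + FFT.
import numpy as np

fs, T = 1000.0, 1.0
t = np.arange(0, T, 1 / fs)
f0_true, c_true = 50.0, 80.0                     # Hz, Hz/s (chirp rate)
x = np.exp(2j * np.pi * (f0_true * t + 0.5 * c_true * t**2))
x += 0.5 * (np.random.default_rng(0).normal(size=t.size)
            + 1j * np.random.default_rng(1).normal(size=t.size))

best = (0.0, 0.0, -np.inf)
for c in np.linspace(0, 150, 151):               # candidate chirp rates
    spec = np.abs(np.fft.fft(x * np.exp(-1j * np.pi * c * t**2)))  # dechirp
    k = int(np.argmax(spec[: t.size // 2]))      # positive frequencies only
    if spec[k] > best[2]:
        best = (k * fs / t.size, c, spec[k])
print("estimated f0, chirp rate:", best[0], best[1])
```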
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moges, Edom; Demissie, Yonas; Li, Hong-Yi
2016-04-01
In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evident from the diagnostic measures. In contrast, the diagnostics and aggregated performance measures indicate that the French Broad catchment has a homogeneous response, making the single model adequate to capture it.
Analysis and optimization of machining parameters of laser cutting for polypropylene composite
NASA Astrophysics Data System (ADS)
Deepa, A.; Padmanabhan, K.; Kuppan, P.
2017-11-01
The present work describes the machining of a self-reinforced polypropylene (PP) composite fabricated using the hot compaction method. The objective of the experiment is to find the optimum machining parameters for laser cutting of the PP composite. Laser power and machining speed were the parameters considered, with the tensile test and flexure test results as responses. The Taguchi method is used for experimentation, grey relational analysis (GRA) is used for multiple process parameter optimization, and ANOVA (analysis of variance) is used to determine the impact of each process parameter. Polypropylene finds wide application in various fields: it is used as foam in model aircraft and other radio-controlled vehicles, as thin sheets (~2-20 μm) in dielectrics, in piping systems, and in hernia and pelvic organ repair to prevent new hernias in the same location.
NASA Astrophysics Data System (ADS)
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. The effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. The results disclose that (i) the Soil Conservation Service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for the hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on the hydrological responses, implying that the processes of percolation and evaporation influence the hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV, as well as CN2 and SNO50COV, have an obvious effect, implying that snow cover can influence the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model's capability for simulating and predicting water resources.
NASA Technical Reports Server (NTRS)
Boorstyn, R. R.
1973-01-01
Research is reported dealing with problems of digital data transmission and computer communications networks. The results of four individual studies are presented, which include: (1) signal processing with finite state machines, (2) signal parameter estimation from discrete-time observations, (3) digital filtering for radar signal processing applications, and (4) multiple-server queues in which not all servers are identical.
The effect of some heat treatment parameters on the dimensional stability of AISI D2
NASA Astrophysics Data System (ADS)
Surberg, Cord Henrik; Stratton, Paul; Lingenhöle, Klaus
2008-01-01
The tool steel AISI D2 is usually processed by vacuum hardening followed by multiple tempering cycles. It has been suggested that a deep cold treatment between the hardening and tempering processes could reduce processing time and improve the final properties and dimensional stability. Hardened blocks were subjected to various combinations of single and multiple tempering steps (520 and 540 °C) and deep cold treatments (-90, -120 and -150 °C). The greatest dimensional stability was achieved by deep cold treatments at the lowest temperature used and was independent of the deep cold treatment time.
Waldbusser, George G; Salisbury, Joseph E
2014-01-01
Multiple natural and anthropogenic processes alter the carbonate chemistry of the coastal zone in ways that either exacerbate or mitigate ocean acidification effects. Freshwater inputs and multiple acid-base reactions change carbonate chemistry conditions, sometimes synergistically. The shallow nature of these systems results in strong benthic-pelagic coupling, and marine invertebrates at different life history stages rely on both benthic and pelagic habitats. Carbonate chemistry in coastal systems can be highly variable, responding to processes with temporal modes ranging from seconds to centuries. Identifying scales of variability relevant to levels of biological organization requires a fuller characterization of both the frequency and magnitude domains of processes contributing to or reducing acidification in pelagic and benthic habitats. We review the processes that contribute to coastal acidification with attention to timescales of variability and habitats relevant to marine bivalves.
NASA Astrophysics Data System (ADS)
Oberberg, Moritz; Styrnoll, Tim; Ries, Stefan; Bienholz, Stefan; Awakowicz, Peter
2015-09-01
Reactive sputter processes are used for the deposition of hard, wear-resistant and non-corrosive ceramic layers such as aluminum oxide (Al2O3). A well-known problem is target poisoning at high reactive gas flows, which results from the reaction of the reactive gas with the metal target. Consequently, the sputter rate decreases and secondary electron emission increases. Both parameters show a non-linear hysteresis behavior as a function of the reactive gas flow, and this leads to process instabilities. This work presents a new control method for Al2O3 deposition in a multiple-frequency CCP (MFCCP) based on plasma parameters. To date, process controls have used parameters such as the spectral line intensities of the sputtered metal as an indicator for the sputter rate, so the coupling between plasma and substrate is not considered. The control system in this work uses a new plasma diagnostic method: the multipole resonance probe (MRP) measures plasma parameters such as electron density by analyzing a characteristic resonance frequency of the system response. This concept combines target processes and plasma effects and directly controls the sputter source instead of the resulting target parameters.
An Integrated Framework for Parameter-based Optimization of Scientific Workflows.
Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel
2009-01-01
Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters, such as the grouping of workflow components and their mapping to machines, do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.
NASA Astrophysics Data System (ADS)
Prasad, Balla Srinivasa; Prabha, K. Aruna; Kumar, P. V. S. Ganesh
2017-03-01
In metal cutting, the major factors that affect cutting tool life are machine tool vibrations, tool tip/chip temperature and surface roughness, along with machining parameters such as cutting speed, feed rate, depth of cut and tool geometry, so it is important for the manufacturing industry to find suitable levels of process parameters for maintaining tool life. Heat generation in cutting has always been a central topic of study in machining. Recent advances in signal processing and information technology have led to the use of multiple sensors for the development of effective tool condition monitoring systems with improved accuracy. From a process improvement point of view, it is definitely more advantageous to proactively monitor quality directly in the process instead of the product, so that the consequences of a defective part can be minimized or even eliminated. In the present work, a real-time process monitoring method using multiple sensors is explored. It focuses on the development of a test bed for monitoring the tool condition in turning of AISI 316L steel using both coated and uncoated carbide inserts. The proposed tool condition monitoring (TCM) is evaluated in high-speed turning using multiple sensors, namely a laser Doppler vibrometer and infrared thermography. The results indicate the feasibility of using the dominant frequency of the vibration signals, along with temperature gradients, for monitoring high-speed turning operations. A possible correlation is identified in both regular and irregular cutting tool wear. Cutting speed and feed rate proved to be influential parameters on the measured temperatures, while depth of cut was less influential. Generally, lower heat and temperatures are generated when coated inserts are employed. Cutting temperatures are found to increase gradually as edge wear and deformation develop.
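A minimal sketch of the dominant-frequency extraction mentioned above, applied to a synthetic vibration signal; the sampling rate and signal content are assumptions, not the vibrometer data from the study.

```python
# Hedged sketch: dominant frequency of a vibration signal via windowed FFT.
import numpy as np

fs = 20000.0                                     # Hz, assumed sampling rate
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)
sig = (0.8 * np.sin(2 * np.pi * 1200 * t)        # e.g., a chatter-like component
       + 0.3 * np.sin(2 * np.pi * 300 * t)
       + 0.2 * rng.normal(size=t.size))

spec = np.abs(np.fft.rfft(sig * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print("dominant frequency (Hz):", freqs[np.argmax(spec[1:]) + 1])  # skip DC
```

Tracking how this dominant frequency drifts over successive cuts is one simple way such a TCM signal could be correlated with progressive tool wear.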
Reliability of system for precise cold forging
NASA Astrophysics Data System (ADS)
Krušič, Vid; Rodič, Tomaž
2017-07-01
The influence of the scatter of principal input parameters of the forging system on the dimensional accuracy of the product and on tool life for the closed-die forging process is presented in this paper. Scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created in which the maximal scatter of the principal input parameters for the closed-die upsetting process still ensures the desired dimensional accuracy of the product and optimal tool life. Application of the adjustment of the process input parameters is shown on the example of an inner race of a homokinetic joint from mass production. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequence of forming operations in the multistage forming process of a starter barrel during the working stroke, the course of the resultant force was optimized.
Passing in Command Line Arguments and Parallel Cluster/Multicore Batching in R with batch.
Hoffmann, Thomas J
2011-03-01
It is often useful to rerun a command line R script with some slight change in the parameters used to run it - a new set of parameters for a simulation, a different dataset to process, etc. The R package batch provides a means to pass multiple command line options, including vectors of values in the usual R format, easily into R. The same script can be set up to run things in parallel via different command line arguments. The R package batch also simplifies this parallel batching by allowing one to use R and an R-like syntax for arguments to spread a script across a cluster or a local multicore/multiprocessor computer, with automated syntax for several popular cluster types. Finally, it provides a means to aggregate the results of multiple processes run on a cluster.
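The R package's own API is not reproduced here; as an analogous illustration in Python, the snippet below parses run parameters from the command line and spreads replicates over local cores. All flag names and the toy worker are hypothetical.

```python
# Hedged Python analog of command-line parameter passing plus local batching.
import argparse
from multiprocessing import Pool

def run_one(seed):
    # stand-in for one simulation replicate
    return (seed, (seed * 2654435761) % 1000 / 1000.0)

def main():
    p = argparse.ArgumentParser()
    p.add_argument("--seeds", type=int, nargs="+", default=[1, 2, 3, 4])
    p.add_argument("--ncores", type=int, default=2)
    args = p.parse_args()
    with Pool(args.ncores) as pool:                 # fan out over local cores
        for seed, value in pool.map(run_one, args.seeds):
            print(f"seed={seed} result={value:.3f}")

if __name__ == "__main__":
    main()
```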
Kovács, A; Erős, I; Csóka, I
2016-04-01
The aim of our present work was to develop stable water-in-oil-in-water (w/o/w) cosmetic multiple emulsions that are suitable for cosmetic use and can also be applied on the skin as pharmaceutical vehicles, by means of the Quality by Design (QbD) concept. This product design concept consists of a risk assessment step and the 'predetermination' of the critical material attributes and process parameters of a stable multiple emulsion system. We hypothesized that the stability of multiple emulsions can be improved by development based on such systematic planning - making a map of critical product parameters - so that their industrial usage can be increased. The risk assessment and the determination of critical physical-chemical stability parameters of w/o/w multiple emulsions, to define critical control points, were performed by means of quality tools and the LeanQbD™ (QbD Works LLC, Fremont, CA, U.S.A.) software. Critical materials and process parameters: based on the results of preformulation experiments, three factors, namely the entrapped active agent, the preparation methodology and the shear rate, were found to be highly critical for the critical quality attributes (CQAs) and for stability, whereas the nature of the oil was found to be a medium-level risk factor. The results of the risk assessment are the following: (i) droplet structure and size distribution should be evaluated together to be able to predict stability issues; (ii) the presence of entrapped active agents had a great impact on droplet structure; (iii) the viscosity curves represent the structural changes during storage, and if the decrease in relative viscosity is >15% the emulsion disintegrates; and (iv) it is sufficient to use a shear rate between 34 g and 116 g relative centrifugal force (RCF). CQAs: by risk assessment, we found that four factors should be considered high-risk variables compared to the others: droplet size, droplet structure, viscosity and multiple character were found to be highly critical attributes. The preformulation experiment is part of a development plan. On the basis of these results, the control strategy can be defined and a stable multiple emulsion can be ensured that meets the relevant stakeholders' quality expectations. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
An Architecture for Enabling Migration of Tactical Networks to Future Flexible Ad Hoc WBWF
2010-09-01
Requirements: several multiple access schemes (TDMA, OFDMA, SC-OFDMA, FH-CDMA, DS-CDMA, hybrid access schemes, and transitions between them); dynamic parameters, with algorithms depending on the multiple access scheme; for DS-CDMA, handling of macro-diversity (linked to cooperative routing); for TDMA and/or OFDMA, transport format; ciphering at the MAC/RLC level (SCM); physical layer (PHY) signal processing (modulation, FEC, etc.); CDMA macro-diversity.
Ruiz-Espinosa, H; Amador-Espejo, G G; Barcenas-Pozos, M E; Angulo-Guerrero, J O; Garcia, H S; Welti-Chanes, J
2013-02-01
Multiple-pass ultrahigh pressure homogenization (UHPH) was used to reduce the microbial population of both the indigenous spoilage microflora in whole raw milk and a baroresistant pathogen (Staphylococcus aureus) inoculated in whole sterile milk, in order to define pasteurization-like processing conditions. Response surface methodology was followed, and multiple-response optimization of the UHPH operating pressure (OP) (100, 175, 250 MPa) and number of passes (N) (1-5) was conducted through overlaid contour plot analysis. Increasing OP and N had a significant effect (P < 0.05) on the microbial reduction of both the spoilage microflora and Staph. aureus in milk. Optimized UHPH processes (five 202-MPa passes; four 232-MPa passes) defined a region in which a 5-log10 reduction of the total bacterial count of milk and of a baroresistant pathogen is attainable, a requisite parameter for establishing an alternative method of pasteurization. Multiple-pass UHPH optimized conditions might help in producing safe milk without the detrimental effects associated with thermal pasteurization. © 2012 The Society for Applied Microbiology.
NASA Astrophysics Data System (ADS)
Pan, Minqiang; Zeng, Dehuai; Tang, Yong
A novel multi-cutter milling process for multiple parallel microchannels with manifolds is proposed to address the challenge of mass manufacture required for cost-effective commercial applications. Several slotting cutters are stacked together to form a composite tool for machining microchannels simultaneously. The feasibility of this new fabrication process is experimentally investigated under different machining conditions, together with the reaction characteristics of methanol steam reforming for hydrogen production. The influences of the cutting parameters and the composite tool on microchannel quality and burr formation are analyzed. Experimental results indicate that a larger cutting speed and a smaller feed rate and cutting depth favor relatively good microchannel quality and small burrs. Of all the cutting parameters considered in these experiments, a cutting speed of 94.2 m/min, a feed rate of 23.5 mm/min and a cutting depth of 0.5 mm were found to be the optimum values. Comparing the experimental results of the multi-cutter milling process with estimates for other alternative methods shows that multi-cutter milling gives a much shorter machining time and a higher work removal rate. The reaction characteristics of methanol steam reforming in the microchannels also indicate that the multi-cutter milling process is probably suitable for commercial application.
Cattani, F; Dolan, K D; Oliveira, S D; Mishra, D K; Ferreira, C A S; Periago, P M; Aznar, A; Fernandez, P S; Valdramidis, V P
2016-11-01
Bacillus sporothermodurans produces highly heat-resistant endospores that can survive ultra-high-temperature treatment. Highly heat-resistant spore-forming bacteria are one of the main causes of spoilage and safety concerns in low-acid foods. They can be used as indicators or surrogates to establish the minimum requirements for heat processes, but it is necessary to understand their thermal inactivation kinetics. The aim of the present work was to study the inactivation kinetics under both static and dynamic conditions in a vegetable soup. Ordinary least squares one-step regression and sequential procedures were applied to estimate these parameters. Results showed that multiple dynamic heating profiles, when analyzed simultaneously, can be used to accurately estimate the kinetic parameters while significantly reducing estimation errors and data collection. Copyright © 2016 Elsevier Ltd. All rights reserved.
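A hedged sketch of one-step estimation from a dynamic heating profile, assuming a log-linear (Bigelow-type) model in which lethality integrates a temperature-dependent rate. The profile, observations, and starting values are synthetic assumptions, not the study's soup data.

```python
# Hedged sketch: fit D_ref and z to survivor counts under a dynamic profile.
import numpy as np
from scipy.optimize import least_squares

T_ref = 121.0                                     # deg C, reference temperature

def log_survivors(t, D_ref, z, profile, logN0=6.0):
    """log10 N(t) for a Bigelow-type model under temperature profile T(t)."""
    ts = np.linspace(0.0, t, 200)
    rate = 10.0 ** ((profile(ts) - T_ref) / z)    # lethal rate relative to T_ref
    F = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(ts))  # trapezoid rule
    return logN0 - F / D_ref

profile = lambda t: 90.0 + 0.5 * t                # assumed linear come-up, deg C
t_obs = np.array([40, 50, 60, 66, 70])            # min
logN_obs = np.array([5.9, 5.2, 3.1, 1.4, 0.2])    # synthetic survivor counts

def residuals(theta):
    D_ref, z = theta
    return [log_survivors(t, D_ref, z, profile) - y
            for t, y in zip(t_obs, logN_obs)]

fit = least_squares(residuals, x0=[1.0, 10.0], bounds=([0.01, 1], [50, 50]))
print("D_ref (min), z (deg C):", fit.x)
```

Fitting several such profiles simultaneously, as the abstract describes, amounts to concatenating their residual vectors in one regression.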
Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics
NASA Astrophysics Data System (ADS)
Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.
2018-01-01
This article describes the application of the Taguchi method to the selective laser melting of a combustion chamber sector, using numerical and physical experiments, to achieve minimum thermal deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, and two factors compensating for the effect of residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with factor variation at three levels (L9). As quality criteria, the distortions of 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, a grey relational analysis was used to solve the optimization problem for multiple quality parameters. Using the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.
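A minimal sketch of the grey relational grade computation used to collapse multiple quality criteria into a single ranking; the two-response, three-trial dataset is illustrative.

```python
# Hedged sketch of grey relational analysis for multiple quality criteria.
import numpy as np

def grey_relational_grade(Y, larger_is_better, zeta=0.5):
    Y = np.asarray(Y, dtype=float)
    # normalize each column to [0, 1] according to its preferred direction
    lo, hi = Y.min(axis=0), Y.max(axis=0)
    norm = np.where(larger_is_better, (Y - lo) / (hi - lo), (hi - Y) / (hi - lo))
    delta = 1.0 - norm                        # deviation from the ideal (1.0)
    xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return xi.mean(axis=1)                    # grade = mean coefficient per trial

# columns: distortion (smaller is better), strength (larger is better)
Y = [[0.42, 610.0],
     [0.35, 635.0],
     [0.50, 598.0]]
grades = grey_relational_grade(Y, larger_is_better=np.array([False, True]))
print("grades:", grades.round(3), "best trial:", int(np.argmax(grades)))
```

The same computation applies to the welding optimization described later in this compilation, where GRA likewise reduces several responses to one grade per trial.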
NASA Astrophysics Data System (ADS)
Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.
2014-05-01
Multidimensional covariance analysis and its validity for correlating processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters that fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on a simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
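For the twofold case, partial covariance with respect to a single fluctuating parameter I subtracts the I-mediated part of the covariance; the sketch below shows this correction removing a false correlation between two signals that both scale linearly with I. The synthetic signals are assumptions.

```python
# Hedged sketch of twofold partial covariance with respect to parameter I.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
I = rng.gamma(5.0, 1.0, n)                 # fluctuating pulse intensity
X = 2.0 * I + rng.poisson(3.0, n)          # two signals that both scale with I
Y = 1.5 * I + rng.poisson(2.0, n)          # but are otherwise independent

def cov(a, b):
    return np.mean(a * b) - np.mean(a) * np.mean(b)

pcov = cov(X, Y) - cov(X, I) * cov(I, Y) / cov(I, I)
print("raw covariance:", cov(X, Y))        # large, but a false correlation
print("partial covariance:", pcov)         # near zero, as it should be
```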
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter-observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
Supercritical-Multiple-Solvent Extraction From Coal
NASA Technical Reports Server (NTRS)
Corcoran, W.; Fong, W.; Pichaichanarong, P.; Chan, P.; Lawson, D.
1983-01-01
Large and small molecules dissolve different constituents. An experimental apparatus was used to test the supercritical extraction of hydrogen-rich compounds from coal in various organic solvents. In decreasing order of importance, the relevant process parameters were found to be temperature, solvent type, pressure, and residence time.
NASA Astrophysics Data System (ADS)
Vellaichamy, Lakshmanan; Paulraj, Sathiya
2018-02-01
Dissimilar welding of Incoloy 800HT and P91 steel was performed using the gas tungsten arc welding (GTAW) process. These materials are used in nuclear power plant and aerospace applications because Incoloy 800HT possesses good corrosion and oxidation resistance while P91 possesses high-temperature strength and creep resistance. This work discusses multi-objective optimization using grey relational analysis (GRA) with a 9CrMoV-N filler material. The experiments were conducted using an L9 orthogonal array. The input parameters were current, voltage and speed; the output responses were tensile strength, hardness and toughness. GRA was used to optimize the input parameters against the multiple output responses. The optimal parameter combination was determined as A2B1C1, corresponding to a welding current of 120 A, a voltage of 16 V and a welding speed of 0.94 mm/s. The mechanical properties for the best and worst grey relational grades were validated against the metallurgical characteristics.
Automated ensemble assembly and validation of microbial genomes.
Koren, Sergey; Treangen, Todd J; Hill, Christopher M; Pop, Mihai; Phillippy, Adam M
2014-05-03
The continued democratization of DNA sequencing has sparked a new wave of development of genome assembly and assembly validation methods. As individual research labs, rather than centralized centers, begin to sequence the majority of new genomes, it is important to establish best practices for genome assembly. However, recent evaluations such as GAGE and the Assemblathon have concluded that there is no single best approach to genome assembly. Instead, it is preferable to generate multiple assemblies and validate them to determine which is most useful for the desired analysis; this is a labor-intensive process that is often impossible or unfeasible. To encourage best practices supported by the community, we present iMetAMOS, an automated ensemble assembly pipeline; iMetAMOS encapsulates the process of running, validating, and selecting a single assembly from multiple assemblies. iMetAMOS packages several leading open-source tools into a single binary that automates parameter selection and execution of multiple assemblers, scores the resulting assemblies based on multiple validation metrics, and annotates the assemblies for genes and contaminants. We demonstrate the utility of the ensemble process on 225 previously unassembled Mycobacterium tuberculosis genomes as well as a Rhodobacter sphaeroides benchmark dataset. On these real data, iMetAMOS reliably produces validated assemblies and identifies potential contamination without user intervention. In addition, intelligent parameter selection produces assemblies of R. sphaeroides comparable to or exceeding the quality of those from the GAGE-B evaluation, affecting the relative ranking of some assemblers. Ensemble assembly with iMetAMOS provides users with multiple, validated assemblies for each genome. Although computationally limited to small or mid-sized genomes, this approach is the most effective and reproducible means for generating high-quality assemblies and enables users to select an assembly best tailored to their specific needs.
NASA Astrophysics Data System (ADS)
Kandel, Mikhail E.; Kouzehgarani, Ghazal N.; Ngyuen, Tan H.; Gillette, Martha U.; Popescu, Gabriel
2017-02-01
Although the contrast generated in transmitted light microscopy is due to the elastic scattering of light, multiple scattering scrambles the image and reduces overall visibility. To image both thin and thick samples, we turn to gradient light interference microscopy (GLIM) to simultaneously measure morphological parameters such as cell mass, volume, and surfaces as they change through time. Because GLIM combines multiple intensity images corresponding to controlled phase offsets between laterally sheared beams, incoherent contributions from multiple scattering are implicitly cancelled during the phase reconstruction procedure. As the interfering beams traverse nearly identical paths, they remain comparable in power and interfere with optimal contrast. This key property lets us obtain tomographic parameters from wide-field z-scans after simple numerical processing. Here we show our results on reconstructing tomograms of bovine embryos, characterizing the time-lapse growth of HeLa cells in 3D, and preliminary results on imaging much larger specimens such as brain slices.
Impulsively Induced Jets from Viscoelastic Films for High-Resolution Printing
NASA Astrophysics Data System (ADS)
Turkoz, Emre; Perazzo, Antonio; Kim, Hyoungsoo; Stone, Howard A.; Arnold, Craig B.
2018-02-01
Understanding jet formation from non-Newtonian fluids is important for improving the quality of various printing and dispensing techniques. Here, we use a laser-based nozzleless method to investigate impulsively formed jets of non-Newtonian fluids. Experiments with a time-resolved imaging setup demonstrate multiple regimes during jet formation that can result in zero, single, or multiple drops per laser pulse. These regimes depend on the ink thickness, ink rheology, and laser energy. For optimized printing, it is desirable to select parameters that result in a single-drop breakup; however, the strain-rate dependent rheology of these inks makes it challenging to determine these conditions a priori. Rather, we present a methodology for characterizing these regimes using dimensionless parameters evaluated from the process parameters and measured ink rheology that are obtained prior to printing and, so, offer a criterion for a single-drop breakup.
NASA Astrophysics Data System (ADS)
Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.
2018-05-01
The experimental measures of the multiplicity derivatives, namely the moment parameters, the bimodal parameter, the fluctuation of the maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ), and the Zipf law parameter (ξ), are examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All these measures predict a critical signature at or near the critical point, both for the primary and the secondary fragments. Among these measures, the total multiplicity derivative and the NVZ provide accurate measures of the critical point from the final cold fragments as well as the primary fragments. The present study will provide a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.
SIFT optimization and automation for matching images from multiple temporal sources
NASA Astrophysics Data System (ADS)
Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio
2017-05-01
Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time; this constitutes a first approach to automating mapping applications such as geometric correction, orthophoto creation, and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored for different images and parameter values, finding optimized values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
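A minimal sketch of this kind of parameter tuning using OpenCV's SIFT implementation follows; the parameter values shown are illustrative and not the optimized values reported by the authors.

```python
import cv2

def match_tiepoints(img1, img2, n_octave_layers=3, contrast_thr=0.02,
                    edge_thr=15, sigma=1.6, ratio=0.75):
    # Explicitly exposed SIFT parameters; lowering contrastThreshold and
    # raising edgeThreshold retains more large, stable features.
    sift = cv2.SIFT_create(nOctaveLayers=n_octave_layers,
                           contrastThreshold=contrast_thr,
                           edgeThreshold=edge_thr, sigma=sigma)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test discards ambiguous correspondences.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```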
A Nonlinear, Multiinput, Multioutput Process Control Laboratory Experiment
ERIC Educational Resources Information Center
Young, Brent R.; van der Lee, James H.; Svrcek, William Y.
2006-01-01
Experience in using user-friendly software, Mathcad, in the undergraduate chemical reaction engineering course is discussed. Example problems considered for illustration deal with the simultaneous solution of linear algebraic equations (kinetic parameter estimation), nonlinear algebraic equations (equilibrium calculations for multiple reactions and…
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras
Peyer, Kathrin E.; Morris, Mark; Sellers, William I.
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models which have limited accuracy: geometric models with lengthy measuring procedures or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.
Peyer, Kathrin E; Morris, Mark; Sellers, William I
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include using regression models which have limited accuracy: geometric models with lengthy measuring procedures or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
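A minimal sketch of the convex-hull step, assuming scipy, a single manually separated segment, and a uniform tissue density; the density value and the vertex-centroid shortcut are simplifying assumptions (the paper controls accuracy by subdividing segments).

```python
import numpy as np
from scipy.spatial import ConvexHull

def segment_parameters(points, density=1000.0):
    # points: (N, 3) point cloud of one body segment in metres;
    # density: assumed uniform density in kg/m^3 (illustrative value).
    hull = ConvexHull(points)
    volume = hull.volume                     # volume of the convex outline
    mass = density * volume
    # Mean of hull vertices as a simple centre-of-mass proxy; a volumetric
    # centroid would require tetrahedral decomposition of the hull.
    com = points[hull.vertices].mean(axis=0)
    return mass, volume, com
```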
Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel
2017-01-01
Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method where the length between any pair of targets from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607
Optimization of the shrink process with X-Y CD bias on hole patterns
NASA Astrophysics Data System (ADS)
Koike, Kyohei; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Oyama, Kenichi; Yaegashi, Hidetami
2017-03-01
Gridded design rules [1] are the major approach to configuring logic circuits with 193-nm immersion lithography. In grid patterning, line-and-space patterns on the order of 10 nm can be made using multiple patterning techniques such as self-aligned multiple patterning (SAMP) and litho-etch-litho-etch (LELE) [2][3][4]. On the other hand, the line cut process exhibits error parameters such as pattern defects, placement error, roughness and X-Y CD bias as the scale decreases. We attempted to cure hole-pattern roughness using additional processes such as line smoothing [5]. Each smoothing process showed a different effect; as a result, the CDx shrink amount was smaller than CDy without one of the additional processes. In this paper, we report a comparison of pattern controllability between EUV and 193-nm immersion lithography, and we discuss the optimum method for controlling CD bias on hole patterns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
Multiple-objective optimization in precision laser cutting of different thermoplastics
NASA Astrophysics Data System (ADS)
Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.
2015-04-01
Thermoplastics are increasingly being used in the biomedical, automotive and electronics industries due to their excellent physical and chemical properties. Because the process is localized and non-contact, the use of lasers for cutting can result in precise cuts with a small heat-affected zone (HAZ). Precision laser cutting involving various materials is important in high-volume manufacturing processes to minimize operational cost, reduce errors and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The set of optimized processing parameters was determined based on the highest relational grade and was found at low laser power (200 W), high cutting speed (0.4 m/min) and low compressed air pressure (2.5 bar). This result matches the objective set in the present study. Analysis of variance (ANOVA) was then carried out to ascertain the relative influence of process parameters on the cutting characteristics. It was found that the laser power has the dominant effect on the HAZ for all thermoplastics.
Estimation of kinetic parameters from list-mode data using an indirect approach
NASA Astrophysics Data System (ADS)
Ortiz, Joseph Christian
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful for expediting the drug development process, as well as for providing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments, the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect approach, which is a two-part process, was used: first the compartmental activity was obtained from the data, and then the kinetic parameters were estimated. The novel aspect of the research is the use of list-mode data to obtain the activity curves, as opposed to a traditional binned approach. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs of each photomultiplier tube, for each event, was generated on the fly and used in a least-squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels and time-sample densities was explored. Once an estimate of the activity was obtained, the kinetic parameters were estimated using multiple cost functions and compared to each other using the mean squared error as the figure of merit.
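A toy sketch of the two ingredients named above, kernel density estimation and least-squares curve fitting, using scipy; the stand-in data and the one-compartment model are hypothetical and far simpler than the dissertation's estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import curve_fit

# Nonparametric density of (stand-in) per-event PMT voltage outputs.
voltages = np.random.randn(5000)
pdf = gaussian_kde(voltages)      # density estimated "on the fly"
print(pdf(0.0))                   # evaluate the estimated density at 0 V

# Indirect step 2: fit kinetic parameters to an estimated activity curve,
# here a hypothetical one-compartment model A(t) = A0 * exp(-k * t).
t = np.linspace(0.0, 60.0, 30)
activity = 10.0 * np.exp(-0.1 * t) + 0.3 * np.random.randn(t.size)
popt, _ = curve_fit(lambda t, a0, k: a0 * np.exp(-k * t), t, activity, p0=[5.0, 0.05])
print(popt)                       # estimated [A0, k]; k is the kinetic parameter
```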
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golovanov, Georgy
The thesis is devoted to the study of processes with multiple parton interactions (MPI) in ppbar collisions collected by the D0 detector at the Fermilab Tevatron collider at sqrt(s) = 1.96 TeV. The study includes measurements of the MPI event fraction and the effective cross section, a process-independent parameter related to the effective interaction region inside the nucleon. The measurements are made using events with a photon and three hadronic jets in the final state. The measured effective cross section is used to estimate the background from MPI to WH production at the Tevatron energy.
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, knowledge of a regularization parameter, which can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it is of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
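One common way to realize a multiplicative regularization is the fixed-point iteration sketched below, in which the effective regularization weight is recomputed from the current iterate at every step, so no parameter is chosen beforehand; this is a generic sketch, not the authors' exact resolution scheme.

```python
import numpy as np

def multiplicative_reg_solve(H, x, n_iter=50, tol=1e-8):
    """Minimize J(f) = ||H f - x||^2 * ||f||^2. Setting the gradient to zero
    gives (H^T H + lam * I) f = H^T x with lam = ||H f - x||^2 / ||f||^2,
    which is iterated until the force vector f converges."""
    HtH, Htx = H.T @ H, H.T @ x
    f = np.linalg.lstsq(H, x, rcond=None)[0]   # unregularized starting point
    for _ in range(n_iter):
        lam = np.sum((H @ f - x) ** 2) / np.sum(f ** 2)
        f_new = np.linalg.solve(HtH + lam * np.eye(H.shape[1]), Htx)
        if np.linalg.norm(f_new - f) < tol * np.linalg.norm(f):
            return f_new
        f = f_new
    return f
```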
Beer, Sebastian; Dobler, Dorota; Gross, Alexander; Ost, Martin; Elseberg, Christiane; Maeder, Ulf; Schmidts, Thomas Michael; Keusgen, Michael; Fiebich, Martin; Runkel, Frank
2013-01-30
Multiple emulsions offer various applications in a wide range of fields such as pharmaceutical, cosmetics and food technology. Two features, encapsulation efficiency and prolonged stability, are known to have a great influence on multiple emulsion quality and utility. To achieve prolonged stability, the production of the emulsions has to be observed and controlled, preferably in line. In-line measurements provide the relevant parameters in a short time frame without the need for the sample to be removed from the process stream, thereby enabling continuous process control. In this study, information about the physical state of multiple emulsions obtained from dielectric spectroscopy (DS) is evaluated for this purpose. Results from dielectric measurements performed in line during the production cycle are compared to theoretically expected results and to well-established off-line measurements. Thus, a first step toward including the production of multiple emulsions in the process analytical technology (PAT) guidelines of the Food and Drug Administration (FDA) is achieved. DS proved to be beneficial in determining the crucial stopping criterion, which is essential in the production of multiple emulsions. Stopping the process at a less-than-ideal point can severely lower the encapsulation efficiency and the stability, thereby lowering the quality of the emulsion. DS is also expected to provide further information about the multiple emulsion, such as encapsulation efficiency. Copyright © 2012 Elsevier B.V. All rights reserved.
KAMO: towards automated data processing for microcrystals.
Yamashita, Keitaro; Hirata, Kunio; Yamamoto, Masaki
2018-05-01
In protein microcrystallography, radiation damage often hampers complete and high-resolution data collection from a single crystal, even under cryogenic conditions. One promising solution is to collect small wedges of data (5-10°) separately from multiple crystals. The data from these crystals can then be merged into a complete reflection-intensity set. However, data processing of multiple small-wedge data sets is challenging. Here, a new open-source data-processing pipeline, KAMO, which utilizes existing programs, including the XDS and CCP4 packages, has been developed to automate whole data-processing tasks in the case of multiple small-wedge data sets. Firstly, KAMO processes individual data sets and collates those indexed with equivalent unit-cell parameters. The space group is then chosen and any indexing ambiguity is resolved. Finally, clustering is performed, followed by merging with outlier rejections, and a report is subsequently created. Using synthetic and several real-world data sets collected from hundreds of crystals, it was demonstrated that merged structure-factor amplitudes can be obtained in a largely automated manner using KAMO, which greatly facilitated the structure analyses of challenging targets that only produced microcrystals.
NASA Astrophysics Data System (ADS)
Mejid Elsiti, Nagwa; Noordin, M. Y.; Idris, Ani; Saed Majeed, Faraj
2017-10-01
This paper presents an optimization of the process parameters of the micro-electrical discharge machining (micro-EDM) process with a (γ-Fe2O3) nano-powder-mixed dielectric using the multi-response Grey Relational Analysis (GRA) method instead of single-response optimization. The parameters were optimized based on a two-level factorial design combined with grey relational analysis. The machining parameters peak current, gap voltage and pulse-on time were chosen for experimentation. The performance characteristics chosen for this study are material removal rate (MRR), tool wear rate (TWR), taper and overcut. Experiments were conducted using electrolytic copper as the tool and CoCrMo as the workpiece. The experimental results were improved through this approach.
Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation
De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan
2017-01-01
Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular resolution plant tissue simulators have been developed, yet they are typically describing physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models in the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website https://vptissue.bitbucket.io. PMID:28523006
Mrad, Rachelle; Debs, Espérance; Maroun, Richard G; Louka, Nicolas
2014-12-15
A new process, Intensification of Vaporization by Decompression to the Vacuum (IVDV), is proposed for texturizing purple maize. It consists of exposing humid kernels to high steam pressure followed by decompression to vacuum. Response surface methodology with three operating parameters (initial water content (W), steam pressure (P) and processing time (T)) was used to study the response parameters: total anthocyanin content, total polyphenol content, free radical scavenging activity, expansion ratio, hardness and work done. P was the most important variable, followed by T. The pressure drop helped release bound phenolics, ultimately expelling them from the cells. Combined with suitable T and W, it caused kernel expansion. Multiple optimization of expansion and chemical content showed that IVDV resulted in good texturization of the maize while preserving the antioxidant compounds and activity. Optimal conditions were: W=29%, P=5 bar and T=37 s. Copyright © 2014 Elsevier Ltd. All rights reserved.
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both, plant and soil parameters. Calibrating only a parameter sub-set of only soil parameters, for example, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
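A minimal sketch of a Sobol' sensitivity analysis of this kind, assuming the SALib package and a toy response function standing in for Noah-MP; the parameter names and bounds are hypothetical.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["std_param_1", "std_param_2", "hardcoded_rsurf"],  # hypothetical
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}
X = saltelli.sample(problem, 1024)                 # Saltelli sampling scheme
Y = X[:, 0] + 2.0 * X[:, 2] + X[:, 0] * X[:, 2]    # toy "flux" response
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["ST"])))       # total-order sensitivities
```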
Machining of AISI D2 Tool Steel with Multiple Hole Electrodes by EDM Process
NASA Astrophysics Data System (ADS)
Prasad Prathipati, R.; Devuri, Venkateswarlu; Cheepu, Muralimohan; Gudimetla, Kondaiah; Uzwal Kiran, R.
2018-03-01
In recent years, with advancing technology, the demand for machining of newly developed materials has been increasing. Conventional machining processes are not adequate to meet the accuracy required for machining these materials. Among non-conventional machining processes, electrical discharge machining is one of the most efficient and is widely used for machining high-accuracy products in various industries. The optimum selection of process parameters is very important in machining processes such as electrical discharge machining, as they determine the surface quality and dimensional precision of the obtained parts, even though the time consumption is higher when machining large-dimension features. In this work, D2 high-carbon, high-chromium tool steel was machined using electrical discharge machining with a multiple-hole electrode technique. D2 steel has several applications such as forming dies, extrusion dies and thread rolling, but its machining is very hard because of its hard alloying elements of V, Cr and Mo, which enhance its strength and wear properties. However, machining is possible using the electrical discharge machining process, and the present study implemented a new technique to reduce the machining time using a multiple-hole copper electrode. In this technique, while machining with the multiple-hole electrode, fin-like projections are obtained, which can be removed easily by chipping; the finishing is then done using a solid electrode. The machining time is reduced by around 50% when using the multiple-hole electrode technique for electrical discharge machining.
Numerical Simulation of Cast Distortion in Gas Turbine Engine Components
NASA Astrophysics Data System (ADS)
Inozemtsev, A. A.; Dubrovskaya, A. S.; Dongauser, K. A.; Trufanov, N. A.
2015-06-01
In this paper, the manufacturing of multiple airfoil vanes through investment casting is considered. A mathematical model of the full contact problem is built to determine the stress-strain state in the cast during solidification. The studies are carried out in a viscoelastoplastic formulation. Numerical simulation of the explored process is implemented with the ProCAST software package. The results of the simulation are compared with the real production process. By means of computer analysis, the technical process parameters are optimized in order to eliminate the defect of cast wall thickness variation.
Khanna, Swati; Goyal, Arun; Moholkar, Vijayanand S
2013-01-01
This article addresses the issue of effect of fermentation parameters for conversion of glycerol (in both pure and crude form) into three value-added products, namely, ethanol, butanol, and 1,3-propanediol (1,3-PDO), by immobilized Clostridium pasteurianum and thereby addresses the statistical optimization of this process. The analysis of effect of different process parameters such as agitation rate, fermentation temperature, medium pH, and initial glycerol concentration indicated that medium pH was the most critical factor for total alcohols production in case of pure glycerol as fermentation substrate. On the other hand, initial glycerol concentration was the most significant factor for fermentation with crude glycerol. An interesting observation was that the optimized set of fermentation parameters was found to be independent of the type of glycerol (either pure or crude) used. At optimum conditions of agitation rate (200 rpm), initial glycerol concentration (25 g/L), fermentation temperature (30°C), and medium pH (7.0), the total alcohols production was almost equal in anaerobic shake flasks and 2-L bioreactor. This essentially means that at optimum process parameters, the scale of operation does not affect the output of the process. The immobilized cells could be reused for multiple cycles for both pure and crude glycerol fermentation.
Percutaneous multiple electrode connector, design parameters and fabrication (biomedical)
NASA Technical Reports Server (NTRS)
Myers, L. A.
1977-01-01
A percutaneous multielectrode connector was designed which utilizes an ultrapure carbon collar to provide an infection free biocompatible passage through the skin. The device provides reliable electrical continuity, mates and demates readily with the implant, and is fabricated with processes and materials oriented to commercial production.
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code though, which hinder model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limit the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
Analysis of acoustic emission signals and monitoring of machining processes
Govekar; Gradisek; Grabec
2000-03-01
Monitoring of a machining process on the basis of sensor signals requires a selection of informative inputs in order to reliably characterize and model the process. In this article, a system for selection of informative characteristics from signals of multiple sensors is presented. For signal analysis, methods of spectral analysis and methods of nonlinear time series analysis are used. With the aim of modeling relationships between signal characteristics and the corresponding process state, an adaptive empirical modeler is applied. The application of the system is demonstrated by characterization of different parameters defining the states of a turning machining process, such as: chip form, tool wear, and onset of chatter vibration. The results show that, in spite of the complexity of the turning process, the state of the process can be well characterized by just a few proper characteristics extracted from a representative sensor signal. The process characterization can be further improved by joining characteristics from multiple sensors and by application of chaotic characteristics.
Flare rates and the McIntosh active-region classifications
NASA Technical Reports Server (NTRS)
Bornmann, P. L.; Shaw, D.
1994-01-01
Multiple linear regression analysis was used to derive the effective solar flare contributions of each of the McIntosh classification parameters. The best fits to the combined average number of M- and X-class X-ray flares per day were found when the flare contributions were assumed to be multiplicative rather than additive. This suggests that nonlinear processes may amplify the effects of the following different active-region properties encoded in the McIntosh classifications: the length of the sunspot group, the size and shape of the largest spot, and the distribution of spots within the group. Since many of these active-region properties are correlated with magnetic field strengths and fluxes, we suggest that the derived correlations reflect a more fundamental relationship between flare production and the magnetic properties of the region. The derived flare contributions for the individual McIntosh parameters can be used to derive a flare rate for each of the three-parameter McIntosh classes. These derived flare rates can be interpreted as smoothed values that may provide better estimates of an active region's expected flare rate when rare classes are reported or when the multiple observing sites report slightly different classifications.
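Because the fitted contributions are multiplicative, taking logarithms turns the fit into an ordinary multiple linear regression; a minimal sketch with a hypothetical dummy-variable encoding of the three McIntosh letters follows.

```python
import numpy as np

def fit_multiplicative_contributions(design, rates):
    # design: (n_regions, n_dummies) 0/1 matrix marking which letter value of
    # each McIntosh parameter (Z, p, c) applies to each region;
    # rates: observed mean M+X flares/day per region (must be > 0 for logs).
    log_contrib, *_ = np.linalg.lstsq(design, np.log(rates), rcond=None)
    return np.exp(log_contrib)   # multiplicative contribution per letter value

# The smoothed rate of a three-letter class is then the product of its
# three fitted contributions.
```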
A Bulk Microphysics Parameterization with Multiple Ice Precipitation Categories.
NASA Astrophysics Data System (ADS)
Straka, Jerry M.; Mansell, Edward R.
2005-04-01
A single-moment bulk microphysics scheme with multiple ice precipitation categories is described. It has 2 liquid hydrometeor categories (cloud droplets and rain) and 10 ice categories that are characterized by habit, size, and density—two ice crystal habits (column and plate), rimed cloud ice, snow (ice crystal aggregates), three categories of graupel with different densities and intercepts, frozen drops, small hail, and large hail. The concept of riming history is implemented for conversions among the graupel and frozen drops categories. The multiple precipitation ice categories allow a range of particle densities and fall velocities for simulating a variety of convective storms with minimal parameter tuning. The scheme is applied to two cases—an idealized continental multicell storm that demonstrates the ice precipitation process, and a small Florida maritime storm in which the warm rain process is important.
2011-08-01
industries and key players providing equipment include Flow and OMAX. The decision tree for waterjet machining is shown in Figure 28. […] about the melt pool. Process parameters including powder flow, laser power, and scan speed are adjusted accordingly.
• Multiple materials
  o BD…project.eu.com/home/home_page_static.jsp
  o Working with multiple partners; one is Cochlear. Using LMD or SLM to fabricate cochlear implants with 10…
Pareto-Zipf law in growing systems with multiplicative interactions
NASA Astrophysics Data System (ADS)
Ohtsuki, Toshiya; Tanimoto, Satoshi; Sekiyama, Makoto; Fujihara, Akihiro; Yamamoto, Hiroshi
2018-06-01
Numerical simulations of multiplicatively interacting stochastic processes with weighted selections were conducted. A feedback mechanism to control the weight w of selections was proposed. It becomes evident that when w is moderately controlled around 0, such systems spontaneously exhibit the Pareto-Zipf distribution. The simulation results are universal in the sense that microscopic details, such as parameter values and the type of control and weight, are irrelevant. The central ingredient of the Pareto-Zipf law is argued to be the mild control of interactions.
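One plausible minimal realization of such a weighted multiplicative process is sketched below; the growth-factor distribution is arbitrary and the paper's feedback control of w is not reproduced, w is simply held fixed near 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents=500, n_steps=50_000, w=0.0):
    # At each step one agent is selected with probability proportional to
    # x_i ** w (w = 0 gives uniform selection) and its value is multiplied
    # by a random growth factor.
    x = np.ones(n_agents)
    for _ in range(n_steps):
        p = x ** w
        i = rng.choice(n_agents, p=p / p.sum())
        x[i] *= rng.uniform(0.5, 1.5)
    return x

# Rank-size check: sort descending and inspect the log-log slope (~ -1 for Zipf).
sizes = np.sort(simulate())[::-1]
```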
NASA Astrophysics Data System (ADS)
Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu
2016-09-01
In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and a non-traditional particle swarm optimization technique (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters such as pulse-on time (TON), pulse-off time (TOFF), peak current (IP) and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict the MRR and SR over a wide range of input parameters. The optimization of the multiple responses was carried out to satisfy the priorities of multiple users using the Taguchi-desirability function method and the particle swarm optimization technique. Analysis of variance (ANOVA) was also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was demonstrated.
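A minimal sketch of the desirability side of such a multi-response optimization; the desirability forms and the priority weights are generic textbook choices, not necessarily those used by the authors, and the optimizer (grid search or particle swarm) is left out.

```python
import numpy as np

def desirability_larger(y, lo, hi, weight=1.0):
    # Larger-the-better desirability in [0, 1], e.g. for MRR.
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** weight

def desirability_smaller(y, lo, hi, weight=1.0):
    # Smaller-the-better desirability, e.g. for surface roughness.
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** weight

def composite_desirability(d_values, priorities):
    # Weighted geometric mean; priorities encode a customer's emphasis
    # (e.g. MRR-dominant vs SR-dominant) and should sum to 1.
    d, p = np.asarray(d_values), np.asarray(priorities)
    return float(np.prod(d ** p))

# e.g. composite_desirability([d_mrr, d_sr], [0.7, 0.3]) scores one parameter
# setting; a search over TON, TOFF, IP and WF then maximizes this score.
```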
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1975-01-01
Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.
Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal
2016-11-04
Fused deposition modeling (FDM) additive manufacturing has been intensively used for many industrial applications due to its attractive advantages over traditional manufacturing processes. The process parameters used in FDM have significant influence on the part quality and its properties. This process produces the plastic part through complex mechanisms and it involves complex relationships between the manufacturing conditions and the quality of the processed part. In the present study, the influence of multi-level manufacturing parameters on the temperature-dependent dynamic mechanical properties of FDM processed parts was investigated using IV-optimality response surface methodology (RSM) and multilayer feed-forward neural networks (MFNNs). The process parameters considered for optimization and investigation are slice thickness, raster to raster air gap, deposition angle, part print direction, bead width, and number of perimeters. Storage compliance and loss compliance were considered as response variables. The effect of each process parameter was investigated using developed regression models and multiple regression analysis. The surface characteristics are studied using scanning electron microscope (SEM). Furthermore, performance of optimum conditions was determined and validated by conducting confirmation experiment. The comparison between the experimental values and the predicted values by IV-Optimal RSM and MFNN was conducted for each experimental run and results indicate that the MFNN provides better predictions than IV-Optimal RSM.
Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal
2016-01-01
Fused deposition modeling (FDM) additive manufacturing has been intensively used for many industrial applications due to its attractive advantages over traditional manufacturing processes. The process parameters used in FDM have significant influence on the part quality and its properties. This process produces the plastic part through complex mechanisms and it involves complex relationships between the manufacturing conditions and the quality of the processed part. In the present study, the influence of multi-level manufacturing parameters on the temperature-dependent dynamic mechanical properties of FDM processed parts was investigated using IV-optimality response surface methodology (RSM) and multilayer feed-forward neural networks (MFNNs). The process parameters considered for optimization and investigation are slice thickness, raster to raster air gap, deposition angle, part print direction, bead width, and number of perimeters. Storage compliance and loss compliance were considered as response variables. The effect of each process parameter was investigated using developed regression models and multiple regression analysis. The surface characteristics are studied using scanning electron microscope (SEM). Furthermore, performance of optimum conditions was determined and validated by conducting confirmation experiment. The comparison between the experimental values and the predicted values by IV-Optimal RSM and MFNN was conducted for each experimental run and results indicate that the MFNN provides better predictions than IV-Optimal RSM. PMID:28774019
Multiple-Parameter, Low-False-Alarm Fire-Detection Systems
NASA Technical Reports Server (NTRS)
Hunter, Gary W.; Greensburg, Paul; McKnight, Robert; Xu, Jennifer C.; Liu, C. C.; Dutta, Prabir; Makel, Darby; Blake, D.; Sue-Antillio, Jill
2007-01-01
Fire-detection systems incorporating multiple sensors that measure multiple parameters are being developed for use in storage depots, cargo bays of ships and aircraft, and other locations not amenable to frequent, direct visual inspection. These systems are intended to improve upon conventional smoke detectors, now used in such locations, that reliably detect fires but also frequently generate false alarms: for example, conventional smoke detectors based on the blockage of light by smoke particles are also affected by dust particles and water droplets and, thus, are often susceptible to false alarms. In contrast, by utilizing multiple parameters associated with fires, i.e. not only obscuration by smoke particles but also concentrations of multiple chemical species that are commonly generated in combustion, false alarms can be significantly decreased while still detecting fires as reliably as older smoke-detector systems do. The present development includes fabrication of sensors that have, variously, micrometer- or nanometer-sized features so that such multiple sensors can be integrated into arrays that have sizes, weights, and power demands smaller than those of older macroscopic sensors. The sensors include resistors, electrochemical cells, and Schottky diodes that exhibit different sensitivities to the various airborne chemicals of interest. In a system of this type, the sensor readings are digitized and processed by advanced signal-processing hardware and software to extract such chemical indications of fires as abnormally high concentrations of CO and CO2, possibly in combination with H2 and/or hydrocarbons. The system also includes a microelectromechanical systems (MEMS)-based particle detector and classifier device to increase the reliability of measurements of chemical species and particulates. In parallel research, software for modeling the evolution of a fire within an aircraft cargo bay has been developed. The model implemented in the software can describe the concentrations of chemical species and of particulate matter as functions of time. A system of the present developmental type and a conventional fire detector were tested under both fire and false-alarm conditions in a Federal Aviation Administration cargo-compartment- testing facility. Both systems consistently detected fires. However, the conventional fire detector consistently generated false alarms, whereas the developmental system did not generate any false alarms.
NASA Astrophysics Data System (ADS)
Mahamood, Rasheedat M.; Akinlabi, Esther T.
2016-03-01
Ti6Al4V is an important titanium alloy used in many applications such as aerospace, petrochemical and medicine. Its excellent corrosion resistance, high strength-to-weight ratio and retention of properties at high temperature make it favoured in most applications. The high cost of titanium and its alloys, however, makes their use prohibitive in some applications. Ti6Al4V can be cladded onto a less expensive material such as steel, thereby reducing cost while providing excellent properties. Laser metal deposition (LMD), an additive manufacturing process, is capable of producing complex parts directly from the 3-D CAD model of the part and is also capable of handling multiple materials. Processing parameters play an important role in the LMD process, and in order to achieve the desired results at minimum cost, the processing parameters need to be properly controlled. This paper investigates the role of the processing parameters laser power, scanning speed, powder flow rate and gas flow rate on the material utilization efficiency in laser-metal-deposited Ti6Al4V. A two-level full factorial design of experiments was used in this investigation to understand which processing parameters are most significant as well as the interactions among them. Four process parameters were used, each with upper and lower settings, resulting in a combination of sixteen experiments. The laser power settings were 1.8 and 3 kW, the scanning speeds 0.05 and 0.1 m/s, the powder flow rates 2 and 4 g/min and the gas flow rates 2 and 4 l/min. The experiments were designed and analyzed using the Design Expert 8 software. The software was used to generate the optimized process parameters, which were found to be a laser power of 3.2 kW, a scanning speed of 0.06 m/s, a powder flow rate of 2 g/min and a gas flow rate of 3 l/min.
Process optimization by use of design of experiments: Application for liposomalization of FK506.
Toyota, Hiroyasu; Asai, Tomohiro; Oku, Naoto
2017-05-01
Design of experiments (DoE) can accelerate the optimization of drug formulations, especially complex formulations such as those of drugs using delivery systems. Administration of FK506 encapsulated in liposomes (FK506 liposomes) is an effective approach to treating acute stroke in animal studies. To provide FK506 liposomes as a brain-protective agent, it is necessary to manufacture these liposomes with good reproducibility. The objective of this study was to confirm the usefulness of DoE for the process-optimization study of FK506 liposomes. A Box-Behnken design was used to evaluate the effect of the process parameters on the properties of FK506 liposomes. The results of multiple regression analysis showed that there was an interaction between the hydration temperature and the freeze-thaw cycle on both the particle size and the encapsulation efficiency. An increase in the PBS hydration volume resulted in an increase in encapsulation efficiency. The process parameters had no effect on the ζ-potential. The multiple regression equation showed good predictability of the particle size and the encapsulation efficiency. These results indicated that manufacturing conditions must be taken into consideration to prepare liposomes with desirable properties. DoE is thus a promising approach for optimizing the conditions for the manufacturing of liposomes. Copyright © 2017 Elsevier B.V. All rights reserved.
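A minimal sketch of the regression step behind a Box-Behnken study: a full quadratic design matrix whose interaction columns expose effects such as the reported hydration-temperature and freeze-thaw interaction. The factor roles come from the abstract; the code itself is generic.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    # X: (n_runs, k) matrix of coded factor levels in [-1, 1], here k = 3
    # (hydration temperature, freeze-thaw cycles, PBS hydration volume).
    n, k = X.shape
    cols = [np.ones(n)]                                      # intercept
    cols += [X[:, i] for i in range(k)]                      # linear terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]  # interactions
    cols += [X[:, i] ** 2 for i in range(k)]                 # quadratic terms
    return np.column_stack(cols)

# Ordinary least squares then gives the response-surface coefficients:
# beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
```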
Multiple Phase Transitions in the Culture Dissemination
NASA Astrophysics Data System (ADS)
Wang, Bing; Han, Yuexing; Chen, Luonan; Aihara, Kazuyuki
We study the coevolution process in Axelrod's model, taking into consideration agents' abilities to access information. With a parameter controlling the ability to communicate, we observe two kinds of phase transitions, for cultural domains and for network fragments, respectively. From the simulation results, we find the relationship between the critical value and the control parameter. The results indicate that a powerful ability to access information benefits the dissemination of culture in the system.
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using a 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes were selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
Research on polarization imaging information parsing method
NASA Astrophysics Data System (ADS)
Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong
2016-11-01
Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. First, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion and polarization image tracking. Then the research achievements on these methods are presented. In terms of polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves the precision of registration and satisfies the needs of polarization information parsing. In terms of calculating multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with clearly improved inversion precision. In terms of polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters using fuzzy integrals and sparse representation is given, and target detection in complex scenes is accomplished using a clustering image segmentation algorithm based on fractal characteristics. In terms of polarization image tracking, a fusion tracking algorithm combining mean shift with auxiliary particle filtering on polarization image characteristics is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog and latent fingerprints.
NASA Astrophysics Data System (ADS)
Lecun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-01
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-28
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
Investigation of Parametric Influence on the Properties of Al6061-SiCp Composite
NASA Astrophysics Data System (ADS)
Adebisi, A. A.; Maleque, M. A.; Bello, K. A.
2017-03-01
The influence of process parameters in stir casting plays a major role in the development of aluminium reinforced silicon carbide particle (Al-SiCp) composites. This study aims to investigate the influence of process parameters on the wear and density properties of an Al-SiCp composite produced using the stir casting technique. Experimental data are generated based on a four-factor, five-level central composite design of response surface methodology. Analysis of variance is utilized to confirm the adequacy and validity of the developed models considering the significant model terms. Optimization of the process parameters adequately predicts the Al-SiCp composite properties, with stirring speed as the most influential factor. The aim of the optimization process is to minimize wear and maximize density. The multiple objective optimization (MOO) achieved an optimal value of 14 wt% reinforcement fraction (RF), 460 rpm stirring speed (SS), 820 °C processing temperature (PTemp) and 150 s processing time (PT). With the optimum parametric combination, wear mass loss achieved a minimum of 1 × 10⁻³ g and density a maximum of 2.780 g/cm³, with a confidence and desirability level of 95.5%.
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
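For reference, in the widely used Smith (Gaussian extreme value) max-stable model, the theoretical pairwise extremal coefficient mentioned above has the closed form (a standard result, not a detail specific to this paper):

\[
\theta(h) = 2\,\Phi\!\left(\tfrac{1}{2}\sqrt{h^{\mathsf{T}}\Sigma^{-1}h}\right), \qquad 1 \le \theta(h) \le 2,
\]

where h is the separation vector between two sites, Σ is the model's covariance matrix, and Φ is the standard normal distribution function; θ(h) = 1 corresponds to complete dependence and θ(h) = 2 to independence.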
NASA Astrophysics Data System (ADS)
Shrivastava, Prashant Kumar; Pandey, Arun Kumar
2018-06-01
Inconel-718 is in high demand in different industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting this alloy due to its low thermal conductivity, low elasticity and high chemical reactivity at elevated temperatures. The challenges of machining and/or finishing unusual shapes and/or sizes in these materials have also been faced by traditional machining. Laser beam cutting may be applied for miniaturization and ultra-precision cutting and/or finishing by appropriate control of the different process parameters. This paper presents multi-objective optimization of the kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second-order regression models have been developed for the different quality characteristics using the experimental data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on a hybrid approach of multiple regression analysis and a genetic algorithm. The comparison of the optimization results to the experimental results shows an improvement of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of the different process parameters on the quality characteristics are also discussed.
Elzayat, Ehab M; Abdel-Rahman, Ali A; Ahmed, Sayed M; Alanazi, Fars K; Habib, Walid A; Sakr, Adel
2017-11-01
Multiple response optimization is an efficient technique to develop sustained release formulations while decreasing the number of experiments compared with a trial-and-error approach. Diclofenac matrix tablets were optimized to achieve a release profile conforming to the USP monograph, matching Voltaren® SR and withstanding formulation variables. The percentages of drug released at predetermined multiple time points were the response variables in the design. Statistical models were obtained, with relative contour diagrams being overlaid to predict the process and formulation parameters expected to produce the target release profile. Tablets were prepared by wet granulation using a mixture of equivalent quantities of Eudragit RL/RS at overall polymer concentrations of 10-30% w/w and compressed at 5-15 kN. Drug release from the optimized formulation E4 (15% w/w, 15 kN) was similar to Voltaren, conformed to the USP monograph and was found to be stable. Substituting lactose with mannitol, reversing the ratio between lactose and microcrystalline cellulose, or increasing the drug load showed no significant difference in drug release. Using dextromethorphan hydrobromide as a model soluble drug showed burst release due to its higher solubility and the formation of micro cavities. A numerical optimization technique was employed to develop a stable, consistent, promising formulation for sustained delivery of diclofenac.
Conditional probability of rainfall extremes across multiple durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2017-04-01
The conditional probability that extreme rainfall will occur at one location given that it is occurring at another location is critical in engineering design and management circumstances, including the planning of evacuation routes and the siting of emergency infrastructure. A challenge with this conditional simulation is that in many situations the interest is not so much the conditional distributions of rainfall of the same duration at two locations, but rather the conditional distribution of flooding in two neighbouring catchments, which may be influenced by rainfall of different critical durations. To deal with this challenge, a model that can consider both spatial and duration dependence of extremes is required. The aim of this research is to develop a model that takes both spatial dependence and duration dependence into account in the dependence structure of extreme rainfall. To achieve this aim, this study is a first attempt at combining extreme rainfall for multiple durations within a spatial extreme model framework based on max-stable process theory. Max-stable processes provide a general framework for modelling multivariate extremes with spatial dependence for a single-duration extreme rainfall. To achieve dependence across multiple timescales, this study proposes a new approach that includes additional elements representing duration dependence of extremes in the covariance matrix of the max-stable model. To improve the efficiency of calculation, a re-parameterization proposed by Koutsoyiannis et al. (1998) is used to reduce the number of parameters that must be estimated. This re-parameterization enables the GEV parameters to be represented as a function of timescale. A stepwise framework has been adopted to achieve the overall aims of this research. Firstly, the re-parameterization is used to define a new set of common parameters for the marginal distribution across multiple durations. Secondly, spatial interpolation of the new parameter set is used to estimate marginal parameters across the full spatial domain. Finally, the spatial interpolation result is used as the initial condition to estimate dependence parameters via a likelihood function of the max-stable model for multiple durations. The Hawkesbury-Nepean catchment near Sydney in Australia was selected as the case study for this research. This catchment has 25 sub-daily rain gauges with a minimum record length of 24 years over a 300 km × 300 km region. The re-parameterization was applied to each station for durations from 1 hour to 24 hours and then evaluated by comparison with the at-site fitted GEV. The evaluation showed that the average R2 for all stations is around 0.80, with a range from 0.26 to 1.0. The output of the re-parameterization was then used to construct the spatial surface based on covariates including longitude, latitude, and elevation. The dependence model showed good agreement between the empirical extremal coefficient and the theoretical extremal coefficient for multiple durations. For the overall model, a leave-one-out cross-validation for all stations showed that it works well for 20 out of 25 stations. The potential application of this model framework was illustrated through a conditional map of return period and return level across multiple durations, both of which are important for engineering design and management.
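As a sketch of the re-parameterization idea, a Koutsoyiannis-style scaling ties the duration-d GEV marginals together through a single function of duration (the exact functional form adopted in the study is assumed here for illustration):

\[
b(d) = (d+\theta)^{\eta}, \qquad \mu_d = \frac{\mu^{*}}{b(d)}, \qquad \sigma_d = \frac{\sigma^{*}}{b(d)}, \qquad \xi_d = \xi,
\]

so that all durations share the five parameters (μ*, σ*, ξ, θ, η) instead of requiring a separate GEV triple for every duration, which is what makes the subsequent spatial interpolation tractable.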
Exciton multiplication from first principles.
Jaeger, Heather M; Hyeon-Deuk, Kim; Prezhdo, Oleg V
2013-06-18
Third-generation photovoltaics must meet demanding cost and power conversion efficiency standards, which may be achieved through efficient exciton multiplication. Therefore, generating more than one electron-hole pair from the absorption of a single photon has vast ramifications for solar power conversion technology. Unlike their bulk counterparts, irradiated semiconductor quantum dots exhibit efficient exciton multiplication, due to confinement-enhanced Coulomb interactions and slower nonradiative losses. The exact characterization of the complicated photoexcited processes within quantum-dot photovoltaics is a work in progress. In this Account, we focus on the photophysics of nanocrystals and investigate three constituent processes of exciton multiplication, including photoexcitation, phonon-induced dephasing, and impact ionization. We quantify the role of each process in exciton multiplication through ab initio computation and analysis of many-electron wave functions. The probability of observing a multiple exciton in a photoexcited state is proportional to the magnitude of electron correlation, where correlated electrons can be simultaneously promoted across the band gap. Energies of multiple excitons are determined directly from the excited state wave functions, defining the threshold for multiple exciton generation. This threshold is strongly perturbed in the presence of surface defects, dopants, and ionization. Within a few femtoseconds following photoexcitation, the quantum state loses coherence through interactions with the vibrating atomic lattice. The phase relationship between single excitons and multiple excitons dissipates first, followed by multiple exciton fission. Single excitons are coupled to multiple excitons through Coulomb and electron-phonon interactions, and as a consequence, single excitons convert to multiple excitons and vice versa. Here, exciton multiplication depends on the initial energy and coupling magnitude and competes with electron-phonon energy relaxation. Multiple excitons are generated through impact ionization within picoseconds. The basis of exciton multiplication in quantum dots is the collective result of photoexcitation, dephasing, and nonadiabatic evolution. Each process is characterized by a distinct time-scale, and the overall multiple exciton generation dynamics is complete by about 10 ps. Without relying on semiempirical parameters, we computed quantum mechanical probabilities of multiple excitons for small model systems. Because exciton correlations and coherences are microscopic, quantum properties, results for small model systems can be extrapolated to larger, realistic quantum dots.
Measurement accuracy of a stressed contact lens during its relaxation period
NASA Astrophysics Data System (ADS)
Compertore, David C.; Ignatovich, Filipp V.
2018-02-01
We examine the dioptric power and transmitted wavefront of a contact lens as it releases its handling stresses. Handling stresses are introduced as part of the contact lens loading process and are common across all contact lens measurement procedures and systems. The latest advances in vision correction require tighter quality control during the manufacturing of contact lenses. The optical power of contact lenses is one of the critical characteristics for users. Power measurements are conducted in the hydrated state, where the lens rests inside a solution-filled glass cuvette. In a typical approach, the contact lens must be subject to long settling times prior to any measurements; alternatively, multiple measurements must be averaged. Apart from the potential operator dependency of such an approach, it is extremely time-consuming and therefore precludes higher rates of testing. Comprehensive knowledge about the settling process can be obtained by monitoring multiple parameters of the lens simultaneously. We have developed a system that combines a co-aligned Shack-Hartmann transmitted wavefront sensor and a time-domain low-coherence interferometer to measure several optical and physical parameters (power, cylinder power, aberrations, center thickness, sagittal depth, and diameter) simultaneously. We monitor these parameters during the stress relaxation period and show correlations that can be used by manufacturers to devise methods for improved quality control procedures.
Novel image encryption algorithm based on multiple-parameter discrete fractional random transform
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Dong, Taiji; Wu, Jianhua
2010-08-01
A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistical analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
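The index additivity that the scheme relies on can be stated compactly; writing R^α for the multiple-parameter discrete fractional random transform of (vector) order α, the encryption and decryption are:

\[
R^{\alpha}R^{\beta} = R^{\alpha+\beta}, \qquad C = R^{\alpha}(P), \qquad P = R^{-\alpha}(C),
\]

since R^{-α}R^{α} = R^{0} is the identity; the multiple fractional orders in α act as the secret keys.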
Akbarzadeh, Rosa; Yousefi, Azizeh-Mitra
2014-08-01
Tissue engineering makes use of 3D scaffolds to sustain three-dimensional growth of cells and guide new tissue formation. To meet the multiple requirements for regeneration of biological tissues and organs, a wide range of scaffold fabrication techniques have been developed, aiming to produce porous constructs with the desired pore size range and pore morphology. Among different scaffold fabrication techniques, thermally induced phase separation (TIPS) method has been widely used in recent years because of its potential to produce highly porous scaffolds with interconnected pore morphology. The scaffold architecture can be closely controlled by adjusting the process parameters, including polymer type and concentration, solvent composition, quenching temperature and time, coarsening process, and incorporation of inorganic particles. The objective of this review is to provide information pertaining to the effect of these parameters on the architecture and properties of the scaffolds fabricated by the TIPS technique. © 2014 Wiley Periodicals, Inc.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...
2017-02-23
Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.
Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming
2015-01-01
High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system, viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.
NASA Astrophysics Data System (ADS)
Pan, M.-Ch.; Chu, W.-Ch.; Le, Duc-Do
2016-12-01
The paper presents an alternative Vold-Kalman filter order tracking (VKF_OT) method, i.e. an adaptive angular-velocity VKF_OT technique, to extract and characterize order components in an adaptive manner for the condition monitoring and fault diagnosis of rotary machinery. The order/spectral waveforms to be tracked can be recursively solved using a Kalman filter based on one-step state prediction. The paper comprises the theoretical derivation of the computation scheme, numerical implementation, and parameter investigation. Comparisons of the adaptive VKF_OT scheme with two other schemes are performed by processing synthetic signals of designated order components. Processing parameters such as the weighting factor and the correlation matrix of process noise, and data conditions such as the sampling frequency, which influence tracking behavior, are explored. The merits of the proposed scheme, such as its adaptive processing nature and computational efficiency, are addressed, although the computation was performed offline. The proposed scheme can simultaneously extract multiple spectral components, and effectively decouples close and crossing orders associated with multi-axial reference rotating speeds.
Optimization of Robotic Spray Painting process Parameters using Taguchi Method
NASA Astrophysics Data System (ADS)
Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar
2018-02-01
Automated spray painting is gaining interest in industry and research recently due to the extensive application of spray painting in the automobile industry. Automating the spray painting process has the advantages of improved quality, productivity, reduced labor, a clean environment and, particularly, cost effectiveness. This study investigates the performance characteristics of an industrial robot Fanuc 250ib for an automated painting process using Taguchi's Design of Experiments technique. The experiment is designed using Taguchi's L25 orthogonal array, considering three factors and five levels for each factor. The objective of this work is to explore the major control parameters and to optimize them for improved quality of the paint coating, measured in terms of Dry Film Thickness (DFT), which also results in reduced rejection. Further, Analysis of Variance (ANOVA) is performed to determine the influence of individual factors on DFT. It is observed that shaping air and paint flow are the most influential parameters. A multiple regression model is formulated for estimating predicted values of DFT. A confirmation test is then conducted, and the comparison results show that the error is within an acceptable level.
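For illustration, a minimal Python sketch of the Taguchi signal-to-noise computation is given below; the larger-the-better form is shown on the assumption that a thicker coating is preferred, and the replicate DFT values are invented, so both are assumptions rather than details from the study.

    # Larger-the-better S/N ratio used in Taguchi analysis:
    # S/N = -10*log10(mean(1/y^2)) over replicate responses y.
    import numpy as np

    def sn_larger_is_better(y):
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    runs = {  # run id -> replicate DFT measurements (microns, invented)
        1: [41.2, 40.8, 42.0],
        2: [38.5, 39.1, 38.9],
    }
    for run, reps in runs.items():
        print(run, round(sn_larger_is_better(reps), 2), "dB")

Averaging these S/N values per factor level gives the main-effect ranking that identifies the dominant parameters, such as shaping air and paint flow above.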
A modal parameter extraction procedure applicable to linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Kurdila, A. J.; Craig, R. R., Jr.
1985-01-01
Modal analysis has emerged as a valuable tool in many phases of the engineering design process. Complex vibration and acoustic problems in new designs can often be remedied through use of the method. Moreover, the technique has been used to enhance the conceptual understanding of structures by serving to verify analytical models. A new modal parameter estimation procedure is presented. The technique is applicable to linear, time-invariant systems and accommodates multiple input excitations. In order to provide a background for the derivation of the method, some modal parameter extraction procedures currently in use are described. Key features implemented in the new technique are elaborated upon.
NASA Astrophysics Data System (ADS)
Ashat, Ali; Pratama, Heru Berian
2017-12-01
The successful assessment of the Ciwidey-Patuha geothermal field size required integrated analysis of data from all aspects to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for the field. Therefore, this paper applies an experimental design approach to the geothermal numerical simulation of Ciwidey-Patuha to generate probabilistic resource assessment results. This process assesses the impact of the evaluated parameters affecting resources and the interactions between these parameters. This methodology successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.
Choi, Jungyill; Harvey, Judson W.; Conklin, Martha H.
2000-01-01
The fate of contaminants in streams and rivers is affected by exchange and biogeochemical transformation in slowly moving or stagnant flow zones that interact with rapid flow in the main channel. In a typical stream, there are multiple types of slowly moving flow zones in which exchange and transformation occur, such as stagnant or recirculating surface water as well as subsurface hyporheic zones. However, most investigators use transport models with just a single storage zone in their modeling studies, which assumes that the effects of multiple storage zones can be lumped together. Our study addressed the following question: Can a single‐storage zone model reliably characterize the effects of physical retention and biogeochemical reactions in multiple storage zones? We extended an existing stream transport model with a single storage zone to include a second storage zone. With the extended model we generated 500 data sets representing transport of nonreactive and reactive solutes in stream systems that have two different types of storage zones with variable hydrologic conditions. The one storage zone model was tested by optimizing the lumped storage parameters to achieve a best fit for each of the generated data sets. Multiple storage processes were categorized as possessing I, additive; II, competitive; or III, dominant storage zone characteristics. The classification was based on the goodness of fit of generated data sets, the degree of similarity in mean retention time of the two storage zones, and the relative distributions of exchange flux and storage capacity between the two storage zones. For most cases (>90%) the one storage zone model described either the effect of the sum of multiple storage processes (category I) or the dominant storage process (category III). Failure of the one storage zone model occurred mainly for category II, that is, when one of the storage zones had a much longer mean retention time (ts ratio > 5.0) and when the dominance of storage capacity and exchange flux occurred in different storage zones. We also used the one storage zone model to estimate a “single” lumped rate constant representing the net removal of a solute by biogeochemical reactions in multiple storage zones. For most cases the lumped rate constant that was optimized by one storage zone modeling estimated the flux‐weighted rate constant for multiple storage zones. Our results explain how the relative hydrologic properties of multiple storage zones (retention time, storage capacity, exchange flux, and biogeochemical reaction rate constant) affect the reliability of lumped parameters determined by a one storage zone transport model. We conclude that stream transport models with a single storage compartment will in most cases reliably characterize the dominant physical processes of solute retention and biogeochemical reactions in streams with multiple storage zones.
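For context, the single-storage-zone formulation being tested (the widely used OTIS-type transient storage model) can be written as:

\[
\frac{\partial C}{\partial t} = -\frac{Q}{A}\frac{\partial C}{\partial x} + \frac{1}{A}\frac{\partial}{\partial x}\!\left(AD\frac{\partial C}{\partial x}\right) + \alpha\,(C_s - C) - \lambda C,
\qquad
\frac{dC_s}{dt} = \alpha\frac{A}{A_s}\,(C - C_s) - \lambda_s C_s,
\]

where C and C_s are the main-channel and storage-zone concentrations, A and A_s the corresponding cross-sectional areas, α the exchange coefficient, and λ, λ_s first-order reaction rate constants; the mean storage retention time is t_s = A_s/(αA). The two-zone model used to generate the synthetic data adds a second set of storage parameters (α₂, A_{s2}, λ_{s2}) of the same form.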
How to retrieve additional information from the multiplicity distributions
NASA Astrophysics Data System (ADS)
Wilk, Grzegorz; Włodarczyk, Zbigniew
2017-01-01
Multiplicity distributions (MDs) P(N) measured in multiparticle production processes are most frequently described by the negative binomial distribution (NBD). However, with increasing collision energy some systematic discrepancies have become more and more apparent. They are usually attributed to the possible multi-source structure of the production process and described using a multi-NBD form of the MD. We investigate the possibility of keeping a single NBD but with its parameters depending on the multiplicity N. This is done by modifying the widely known clan model of particle production leading to the NBD form of P(N). This is then confronted with the approach based on the so-called cascade-stochastic formalism which is based on different types of recurrence relations defining P(N). We demonstrate that a combination of both approaches allows the retrieval of additional valuable information from the MDs, namely the oscillatory behavior of the counting statistics apparently visible in the high energy data.
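For reference, the NBD with mean m and shape parameter k, together with the recurrence-relation quantity central to the cascade-stochastic view, reads:

\[
P(N) = \frac{\Gamma(N+k)}{\Gamma(N+1)\,\Gamma(k)}\left(\frac{m}{m+k}\right)^{N}\left(\frac{k}{m+k}\right)^{k},
\qquad
g(N) \equiv \frac{(N+1)\,P(N+1)}{P(N)} = \frac{m}{m+k}\,(k+N),
\]

so for a pure NBD the ratio g(N) is exactly linear in N; deviations of the measured g(N) from linearity, in particular oscillations, carry the additional information discussed above.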
Overview Of Dry-Etch Techniques
NASA Astrophysics Data System (ADS)
Salzer, John M.
1986-08-01
With pattern dimensions shrinking, dry methods of etching providing controllable degrees of anisotropy become a necessity. A number of different configurations of equipment - inline, hex, planar, barrel - have been offered, and within each type, there are numerous significant variations. Further, each specific type of machine must be perfected over a complex, interactive parameter space to achieve suitable removal of various materials. Among the most critical system parameters are the choice of cathode or anode to hold the wafers, the chamber pressure, the plasma excitation frequency, and the electrode and magnetron structures. Recent trends include the use of vacuum load locks, multiple chambers, multiple electrodes, downstream etching or stripping, and multistep processes. A major percentage of etches in production handle the three materials: polysilicon, oxide and aluminum. Recent process developments have targeted refractory metals, their silicides, and with increasing emphasis, silicon trenching. Indeed, with new VLSI structures, silicon trenching has become the process of greatest interest. For stripping, dry processes provide advantages other than anisotropy. Here, too, new configurations and methods have been introduced recently. While wet processes are less than desirable from a number of viewpoints (handling, safety, disposal, venting, classes of clean room, automatability), dry methods are still being perfected as a direct, universal replacement. The paper will give an overview of these machine structures and process solutions, together with examples of interest. These findings and the trends discussed are based on semiannual survey of manufacturers and users of the various types of equipment.
NASA Astrophysics Data System (ADS)
Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.
2013-12-01
A new software application called MAD# has been coupled with the HTCondor high throughput computing system to aid scientists and educators with the characterization of spatial random fields and enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information with a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect information to the target variable. MAD# uses two parallelization profiles according to the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers for submitting serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of the characterization of random fields using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the one-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (10 million would take approximately 1200 hours). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
Afolabi, Afolawemi; Akinlabi, Olakemi; Bilgili, Ecevit
2014-01-23
Wet stirred media milling has proven to be a robust process for producing nanoparticle suspensions of poorly water-soluble drugs. As the process is expensive and energy-intensive, it is important to study the breakage kinetics, which determine the cycle time and production rate for a desired fineness. Although the impact of process parameters on the properties of final product suspensions has been investigated, scant information is available regarding their impact on the breakage kinetics. Here, we elucidate the impact of stirrer speed, bead concentration, and drug loading on the breakage kinetics via a microhydrodynamic model for the bead-bead collisions. Suspensions of griseofulvin, a model poorly water-soluble drug, were prepared in the presence of two stabilizers: hydroxypropyl cellulose and sodium dodecyl sulfate. Laser diffraction, scanning electron microscopy, and rheometry were used to characterize them. Various microhydrodynamic parameters, including a newly defined milling intensity factor, were calculated. An increase in either the stirrer speed or the bead concentration led to an increase in the specific energy and the milling intensity factor, and consequently faster breakage. On the other hand, an increase in the drug loading led to a decrease in these parameters and consequently slower breakage. While all microhydrodynamic parameters provided significant physical insight, only the milling intensity factor was capable of explaining the influence of all parameters directly through its strong correlation with the process time constant. Besides guiding process optimization, the analysis rationalizes the preparation of a single high drug-loaded batch (20% or higher) instead of multiple dilute batches. Copyright © 2013 Elsevier B.V. All rights reserved.
Numerical simulation study on rolling-chemical milling process of aluminum-lithium alloy skin panel
NASA Astrophysics Data System (ADS)
Huang, Z. B.; Sun, Z. G.; Sun, X. F.; Li, X. Q.
2017-09-01
Single-curvature parts such as aircraft fuselage skin panels are usually manufactured by a rolling-chemical milling process, which commonly faces the problem of geometric inaccuracy caused by springback. In most cases, manual adjustment and multiple roll bending are used to control or eliminate the springback. However, these methods increase product cost and cycle time and lead to material performance degradation. Therefore, it is important to precisely control the springback of the rolling-chemical milling process. In this paper, using experiments and numerical simulation of the rolling-chemical milling process, a simulation model for the rolling-chemical milling of 2060-T8 aluminum-lithium alloy skin was established and validated by comparison between numerical simulation and experimental results. Then, based on the numerical simulation model, the technological parameters that influence the curvature of the skin panel were analyzed. Finally, prediction of springback and its compensation were realized by controlling the process parameters.
Nowak, Przemyslaw; Dobbins, Allan C.; Gawne, Timothy J.; Grzywacz, Norberto M.
2011-01-01
The ganglion cell output of the retina constitutes a bottleneck in sensory processing in that ganglion cells must encode multiple stimulus parameters in their responses. Here we investigate encoding strategies of On-Off directionally selective retinal ganglion cells (On-Off DS RGCs) in rabbits, a class of cells dedicated to representing motion. The exquisite axial discrimination of these cells to preferred vs. null direction motion is well documented: it is invariant with respect to speed, contrast, spatial configuration, spatial frequency, and motion extent. However, these cells have broad direction tuning curves and their responses also vary as a function of other parameters such as speed and contrast. In this study, we examined whether the variation in responses across multiple stimulus parameters is systematic, that is the same for all cells, and separable, such that the response to a stimulus is a product of the effects of each stimulus parameter alone. We extracellularly recorded single On-Off DS RGCs in a superfused eyecup preparation while stimulating them with moving bars. We found that spike count responses of these cells scaled as independent functions of direction, speed, and luminance. Moreover, the speed and luminance functions were common across the whole sample of cells. Based on these findings, we developed a model that accurately predicted responses of On-Off DS RGCs as products of separable functions of direction, speed, and luminance (r = 0.98; P < 0.0001). Such a multiplicatively separable encoding strategy may simplify the decoding of these cells' outputs by the higher visual centers. PMID:21325684
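The multiplicatively separable encoding can be summarized as follows (the notation is assumed here for illustration):

\[
R(\theta, v, c) = R_0\, f(\theta)\, g(v)\, h(c),
\]

where f, g, and h are independent tuning functions of direction, speed, and luminance respectively, with g and h common across the cell population; this product form is what predicted the recorded responses with r = 0.98.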
xspec_emcee: XSPEC-friendly interface for the emcee package
NASA Astrophysics Data System (ADS)
Sanders, Jeremy
2018-05-01
XSPEC_EMCEE is an XSPEC-friendly interface for emcee (ascl:1303.002). It carries out MCMC analyses of X-ray spectra in the X-ray spectral fitting program XSPEC (ascl:9910.005). It can run multiple xspec processes simultaneously, speeding up the analysis, and can switch to parameterizing norm parameters in log space.
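A minimal sketch of the log-space norm parameterization idea with emcee is shown below; the power-law model and synthetic data are stand-ins for illustration, since xspec_emcee itself drives real XSPEC processes rather than evaluating a Python model.

    # Sketch: sampling a spectral model with emcee, with the normalization
    # parameterized in log space. The model/data are invented stand-ins.
    import numpy as np
    import emcee

    E = np.linspace(1.0, 10.0, 200)                  # energy grid (illustrative)
    data = 5.0 * E**-1.7 + np.random.normal(0.0, 0.2, E.size)
    err = 0.2

    def log_prob(theta):
        log_norm, gamma = theta                      # norm sampled in log space
        if not (-5.0 < log_norm < 5.0 and 0.0 < gamma < 5.0):
            return -np.inf                           # flat priors as bounds
        model = 10**log_norm * E**-gamma
        return -0.5 * np.sum(((data - model) / err) ** 2)

    ndim, nwalkers = 2, 32
    p0 = np.array([np.log10(5.0), 1.7]) + 1e-3 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000)

Sampling log(norm) rather than norm keeps the walkers efficient when the normalization spans orders of magnitude.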
An Optimal Parameter Discretization Strategy for Multiple Model Adaptive Estimation and Control
1989-12-01
IRT-ZIP Modeling for Multivariate Zero-Inflated Count Data
ERIC Educational Resources Information Center
Wang, Lijuan
2010-01-01
This study introduces an item response theory-zero-inflated Poisson (IRT-ZIP) model to investigate psychometric properties of multiple items and predict individuals' latent trait scores for multivariate zero-inflated count data. In the model, two link functions are used to capture two processes of the zero-inflated count data. Item parameters are…
Experimental study of switching in a p-i(MQW)-n vertical coupler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavailles, J.A.; Erman, M.; Woodbridge, K.
1989-11-01
Electrically controlled switching in a vertically arranged directional coupler with GaAs/GaAlAs multiple quantum well waveguides is demonstrated. Coupling lengths and extinction parameters are determined by using a sample processed in such a way that injection conditions are well defined and that the coupler length can be varied continuously.
NASA Astrophysics Data System (ADS)
Dai, Z.; Wolfsberg, A. V.; Zhu, L.; Reimus, P. W.
2017-12-01
Colloids have the potential to enhance mobility of strongly sorbing radionuclide contaminants in fractured rocks at underground nuclear test sites. This study presents an experimental and numerical investigation of colloid-facilitated plutonium reactive transport in fractured porous media for identifying plutonium sorption/filtration processes. The transport parameters for dispersion, diffusion, sorption, and filtration are estimated with inverse modeling for minimizing the least squares objective function of multicomponent concentration data from multiple transport experiments with the Shuffled Complex Evolution Metropolis (SCEM). Capitalizing on an unplanned experimental artifact that led to colloid formation and migration, we adopt a stepwise strategy to first interpret the data from each experiment separately and then to incorporate multiple experiments simultaneously to identify a suite of plutonium-colloid transport processes. Nonequilibrium or kinetic attachment and detachment of plutonium-colloid in fractures was clearly demonstrated and captured in the inverted modeling parameters along with estimates of the source plutonium fraction that formed plutonium-colloids. The results from this study provide valuable insights for understanding the transport mechanisms and environmental impacts of plutonium in fractured formations and groundwater aquifers.
Part height control of laser metal additive manufacturing process
NASA Astrophysics Data System (ADS)
Pan, Yu-Herng
Laser Metal Deposition (LMD) has been used not only to make but also to repair damaged parts in a layer-by-layer fashion. Parts made in this manner may produce less waste than those made through conventional machining processes. However, a common issue in LMD is controlling the deposition's layer thickness. Accuracy is important: as it increases, both the time required to produce the part and the material wasted during the material removal process (e.g., milling, lathe) decrease. The deposition rate is affected by multiple parameters, such as the powder feed rate, laser input power, axis feed rate, material type, and part design, the values of each of which may change during the LMD process. Using a mathematical model to build a generic equation that predicts the deposition's layer thickness is difficult due to these complex parameters. In this thesis, we propose a simple method that utilizes a single device. This device uses a pyrometer to monitor the current build height, thereby allowing the layer thickness to be controlled during the LMD process. This method also helps the LMD system to build parts even with complex parameters and to increase material efficiency.
Inverse Thermal Analysis of Titanium GTA Welds Using Multiple Constraints
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.; Shabaev, A.; Huang, L.
2015-06-01
Inverse thermal analysis of titanium gas-tungsten-arc welds using multiple constraint conditions is presented. This analysis employs a methodology that is in terms of numerical-analytical basis functions for inverse thermal analysis of steady-state energy deposition in plate structures. The results of this type of analysis provide parametric representations of weld temperature histories that can be adopted as input data to various types of computational procedures, such as those for prediction of solid-state phase transformations. In addition, these temperature histories can be used to construct parametric function representations for inverse thermal analysis of welds corresponding to other process parameters or welding processes whose process conditions are within similar regimes. The present study applies an inverse thermal analysis procedure that provides for the inclusion of constraint conditions associated with both solidification and phase transformation boundaries.
Effect of multiplicative noise on stationary stochastic process
NASA Astrophysics Data System (ADS)
Kargovsky, A. V.; Chikishev, A. Yu.; Chichigina, O. A.
2018-03-01
An open system that can be analyzed using the Langevin equation with multiplicative noise is considered. The stationary state of the system results from a balance of deterministic damping and random pumping simulated as noise with controlled periodicity. The dependence of statistical moments of the variable that characterizes the system on parameters of the problem is studied. A nontrivial decrease in the mean value of the main variable with an increase in noise stochasticity is revealed. Applications of the results in several physical, chemical, biological, and technical problems of natural and humanitarian sciences are discussed.
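A generic form of such a Langevin equation (schematic; the paper's specific pumping-noise statistics are not reproduced here) is:

\[
\dot{x}(t) = -\gamma\, x(t) + x(t)\,\xi(t),
\]

where the deterministic damping term −γx balances the multiplicative random pumping ξ(t); the stationary moments ⟨xⁿ⟩ then depend on the noise amplitude and on the periodicity parameter that controls how regular the pumping events are.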
GMTI Direction of Arrival Measurements from Multiple Phase Centers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin W.; Bickel, Douglas L.
2015-03-01
Ground Moving Target Indicator (GMTI) radar attempts to detect and locate targets with unknown motion. Very slow-moving targets are difficult to locate in the presence of surrounding clutter. This necessitates multiple antenna phase centers (or equivalent) to offer independent Direction of Arrival (DOA) measurements. DOA accuracy and precision generally remain dependent on target Signal-to-Noise Ratio (SNR), Clutter-to-Noise Ratio (CNR), scene topography, interfering signals, and a number of antenna parameters. This is true even for adaptive techniques like Space-Time Adaptive Processing (STAP) algorithms.
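For two phase centers separated by a baseline d, the basic interferometric DOA relation (a textbook result, included for context) is:

\[
\hat{\theta} = \arcsin\!\left(\frac{\lambda\,\Delta\phi}{2\pi d}\right),
\]

where Δφ is the measured inter-channel phase difference and λ the wavelength; because the phase-measurement precision improves roughly as the inverse square root of SNR, the DOA precision inherits the dependence on SNR, CNR, and antenna geometry noted above.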
NASA Technical Reports Server (NTRS)
Whitlock, C. H., III
1977-01-01
Constituents whose radiance varies linearly with concentration may be quantified from signals that contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used that give an indication of bias as well as total squared error, to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water mix and interact with each other with distinct biogeochemical and thermal properties. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and the microbial and chemical processes also play important roles in the biogeochemical dynamics. Thus, for a comprehensive understanding of the biogeochemical processes in the hyporheic zone, a coupled thermo-hydro-biogeochemical model is needed. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we 1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, 2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, 3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and 4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty at each level of the hierarchy. The objectives of the research are to 1) identify the key controlling factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and 2) quantify the carbon consumption under different climate change scenarios in the hyporheic zone.
Meyer, Hans Jonas; Leifels, Leonard; Schob, Stefan; Garnov, Nikita; Surov, Alexey
2018-01-01
Nowadays, multiparametric investigations of head and neck squamous cell carcinoma (HNSCC) are established. These approaches can better characterize tumor biology and behavior. Diffusion-weighted imaging (DWI) can quantitatively characterize different tissue compartments by means of the apparent diffusion coefficient (ADC). Dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) reflects perfusion and vascularization of tissues. Recently, histogram analysis of different images has emerged as a novel approach to data acquisition that can provide more information on tissue heterogeneity. The purpose of this study was to analyze possible associations between DWI and DCE parameters derived from histogram analysis in patients with HNSCC. Overall, 34 patients, 9 women and 25 men, mean age 56.7 ± 10.2 years, with different HNSCC were involved in the study. DWI was obtained using an axial echo planar imaging sequence with b-values of 0 and 800 s/mm². A dynamic T1w DCE sequence after intravenous application of contrast medium was performed for estimation of the following perfusion parameters: volume transfer constant (Ktrans), volume of the extravascular extracellular leakage space (Ve), and diffusion of contrast medium from the extravascular extracellular leakage space back to the plasma (Kep). Both ADC and perfusion parameter maps were processed offline in DICOM format with a custom-made Matlab-based application. Thereafter, polygonal ROIs were manually drawn on the transferred maps on each slice. For every parameter, the mean, maximal, minimal, and median values, the 10th, 25th, 75th, and 90th percentiles, kurtosis, skewness, and entropy were estimated. Correlation analysis identified multiple statistically significant correlations between the investigated parameters. Ve-related parameters correlated well with different ADC values. In particular, the 10th and 75th percentiles, mode, and median values showed stronger correlations in comparison with the other parameters; the calculated correlation coefficients ranged from 0.62 to 0.69. Furthermore, Ktrans-related parameters showed multiple slight to moderate significant correlations with different ADC values. The strongest correlations were identified between ADC P75 and Ktrans min (p=0.58, P=0.0007), and between ADC P75 and Ktrans P10 (p=0.56, P=0.001). Only four Kep-related parameters correlated statistically significantly with ADC fractions. The strongest correlation was found between Kep max and ADC mode (p=-0.47, P=0.008). Multiple statistically significant correlations between DWI and DCE MRI parameters derived from histogram analysis were identified in HNSCC. Copyright © 2017 Elsevier Inc. All rights reserved.
Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc
2014-09-15
Semi-volatile organic compounds (SVOCs) are subject to Long-Range Atmospheric Transport because of successive transport-deposition-reemission processes. Several experimental data sets available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentration of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted at the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed an EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between the air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions showed a lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
Multi-epoch observations with high spatial resolution of multiple T Tauri systems
NASA Astrophysics Data System (ADS)
Csépány, Gergely; van den Ancker, Mario; Ábrahám, Péter; Köhler, Rainer; Brandner, Wolfgang; Hormuth, Felix; Hiss, Hector
2017-07-01
Context. In multiple pre-main-sequence systems the lifetime of circumstellar discs appears to be shorter than around single stars, and the actual dissipation process may depend on the binary parameters of the systems. Aims: We report high spatial resolution observations of multiple T Tauri systems at optical and infrared wavelengths. We determine whether the components are gravitationally bound and orbital motion is visible, derive orbital parameters, and investigate possible correlations between the binary parameters and disc states. Methods: We selected 18 T Tau multiple systems (16 binary and two triple systems, yielding 16 + 2 × 2 = 20 binary pairs) in the Taurus-Auriga star-forming region from a previous survey, with spectral types from K1 to M5 and separations from 0.22″ (31 AU) to 5.8″ (814 AU). We analysed data acquired in 2006-07 at Calar Alto using the AstraLux lucky imaging system, along with data from SPHERE and NACO at the VLT, and from the literature. Results: We found ten pairs to orbit each other, five pairs that may show orbital motion, and five likely common proper motion pairs. We found no obvious correlation between the stellar parameters and binary configuration. The 10 μm infra-red excess varies between 0.1 and 7.2 mag (similar to the distribution in single stars, where it is between 1.7 and 9.1), implying that the presence of the binary star does not greatly influence the emission from the inner disc. Conclusions: We have detected orbital motion in young T Tauri systems over a timescale of ≈ 20 yr. Further observations with even longer temporal baseline will provide crucial information on the dynamics of these young stellar systems.
Kröner, Frieder; Elsäßer, Dennis; Hubbuch, Jürgen
2013-11-29
The accelerating growth of the market for biopharmaceutical proteins, the market entry of biosimilars and the growing interest in new, more complex molecules constantly pose new challenges for bioseparation process development. In the presented work we demonstrate the application of a multidimensional, analytical separation approach to obtain the relevant physicochemical parameters of single proteins in a complex mixture for in silico chromatographic process development. A complete cell lysate containing a low titre target protein was first fractionated by multiple linear salt gradient anion exchange chromatography (AEC) with varying gradient length. The collected fractions were subsequently analysed by high-throughput capillary gel electrophoresis (HT-CGE) after being desalted and concentrated. From the data obtained in the 2D separation, the retention volumes and concentrations of the single proteins were determined. The retention volumes of the single proteins were used to calculate the related steric mass-action model parameters. In a final evaluation experiment, the obtained parameters were successfully applied to predict the retention behaviour of the single proteins in salt gradient AEC. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Mai, J.; Cuntz, M.; Zink, M.; Schaefer, D.; Thober, S.; Samaniego, L. E.; Shafii, M.; Tolson, B.
2015-12-01
Hydrologic models are traditionally calibrated against discharge. Recent studies have shown, however, that only a few global model parameters are constrained using the integral discharge measurements. It is therefore advisable to use additional information to calibrate those models. Snow pack data, for example, could improve the parametrization of snow-related processes, which might be underrepresented when using only discharge. One common approach is to combine these multiple objectives into one single objective function, allowing the use of a single-objective algorithm. Another strategy is to consider the different objectives separately and apply a Pareto-optimizing algorithm. Both methods are challenging in the choice of appropriate multiple objectives with either conflicting interests or the focus on different model processes. A first aim of this study is to compare the two approaches employing the mesoscale Hydrologic Model mHM at several distinct river basins over Europe and North America. This comparison allows the identification of the single-objective solution on the Pareto front. It is elucidated whether this position is determined by the weighting and scaling of the multiple objectives when combining them into the single objective. The principal second aim is to guide the selection of proper objectives employing sensitivity analyses. These analyses are used to determine whether additional information would help to constrain additional model parameters. The additional information is either multiple data sources or multiple signatures of one measurement. It is evaluated whether specific discharge signatures can inform different parts of the hydrologic model. The results show that an appropriate selection of discharge signatures increased the number of constrained parameters by more than 50% compared to using only the NSE of the discharge time series. It is further assessed whether the use of these signatures imposes conflicting objectives on the hydrologic model. The usage of signatures is furthermore contrasted with the use of additional observations such as soil moisture or snow height. The gain of using an auxiliary dataset is determined using the parametric sensitivity on the respective modeled variable.
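As a loose illustration of the two calibration strategies contrasted above, the following Python sketch shows weighted-sum scalarization next to a Pareto-dominance filter; the objective values and weights are made up for illustration.

    def weighted_single_objective(obj, weights=(0.5, 0.5)):
        # Scalarize multiple objectives into one number for a
        # single-objective optimizer; weighting/scaling choices matter.
        return sum(w * o for w, o in zip(weights, obj))

    def dominates(a, b):
        # Pareto dominance (minimization): a is no worse everywhere and
        # strictly better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    # Hypothetical (objective1, objective2) values for three parameter sets
    candidates = [(0.20, 0.90), (0.35, 0.40), (0.60, 0.30)]
    front = [c for c in candidates
             if not any(dominates(o, c) for o in candidates if o != c)]
    print(front, min(candidates, key=weighted_single_objective))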
Comprehensive analysis of line-edge and line-width roughness for EUV lithography
NASA Astrophysics Data System (ADS)
Bonam, Ravi; Liu, Chi-Chun; Breton, Mary; Sieg, Stuart; Seshadri, Indira; Saulnier, Nicole; Shearer, Jeffrey; Muthinti, Raja; Patlolla, Raghuveer; Huang, Huai
2017-03-01
Pattern transfer fidelity is always a major challenge for any lithography process and needs continuous improvement. Lithographic processes in the semiconductor industry are primarily driven by optical imaging on photosensitive polymeric materials (resists). The quality of pattern transfer can be assessed by quantifying multiple parameters such as feature size uniformity (CD), placement, roughness, and sidewall angles. Roughness in features primarily corresponds to variation of line edge or line width and has gained considerable significance, particularly due to shrinking feature sizes and feature variations of the same order. This has caused downstream processes (reactive ion etch (RIE), chemical mechanical polish (CMP), etc.) to reconsider their respective tolerance levels. A very important aspect of this work is the relevance of roughness metrology from pattern formation at the resist to subsequent processes, particularly electrical validity. A major drawback of the current LER/LWR metric (sigma) is its lack of relevance across multiple downstream processes, which affects material selection at various unit processes. In this work we present a comprehensive assessment of line edge and line width roughness at multiple lithographic transfer processes. To simulate the effect of roughness, a pattern was designed with periodic jogs on the edges of lines with varying amplitudes and frequencies. There are numerous methodologies proposed to analyze roughness, and in this work we apply them to programmed roughness structures to assess each technique's sensitivity. This work also aims to identify a relevant methodology to quantify roughness with relevance across downstream processes.
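As a pointer to how such roughness numbers are typically computed, here is a minimal Python sketch of the conventional 3-sigma LER metric and a power spectral density, using synthetic edge positions rather than measured SEM data.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic edge positions (nm) along a line: a programmed periodic jog
    # plus random roughness.
    edge = 1.5 * np.sin(2 * np.pi * np.arange(512) / 64) + rng.normal(0, 0.5, 512)

    ler_3sigma = 3 * np.std(edge)    # the sigma-based metric criticized above
    psd = np.abs(np.fft.rfft(edge - edge.mean())) ** 2 / edge.size
    # The PSD resolves roughness by spatial frequency, one way to track
    # programmed amplitudes/frequencies through transfer processes.
    print(ler_3sigma, psd[:5])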
Laser Cladding of TiAl Intermetallic Alloy on Ti6Al4V -Process Optimization and Properties
NASA Astrophysics Data System (ADS)
Cárcel, B.; Serrano, A.; Zambrano, J.; Amigó, V.; Cárcel, A. C.
In order to improve the high-temperature resistance and tribological properties of Ti6Al4V, the deposition of a TiAl intermetallic (Ti-48Al-2Cr-2Nb) coating on a Ti6Al4V substrate by coaxial laser cladding has been investigated. Laser cladding by powder injection is an emerging laser material processing technique that allows the deposition of thick protective coatings on substrates, using a high power laser beam as the heat source. Laser cladding is a multiple-parameter-dependent process. The main process parameters involved (laser power, powder feeding rate, scanning speed and preheating temperature) have been optimized. The microstructure and geometrical quantities (clad area and dilution) of the coating were characterized by optical microscopy and scanning electron microscopy (SEM). In addition, the cooling rate of the clad during the process was measured by a dual-color pyrometer. These results have been related to the defectology and mechanical properties of the coating.
Effect of processor temperature on film dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, Shiv P.; Das, Indra J., E-mail: idas@iupui.edu
2012-07-01
Optical density (OD) of a radiographic film plays an important role in radiation dosimetry, which depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air film interface, postexposure processing time, and temperature of the processor. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed in the reference condition (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing films. The temperature of the processor was adjusted manually with increasing temperature. At each temperature, a set of films was processed to evaluate OD at a given dose. For both films, OD is a linear function of processor temperature in the range of 29.4-40.6 °C (85-105 °F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the changes in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.
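The stated relations can be summarized in two hedged illustrative equations (LaTeX; the coefficients a, b, c_0, c_1, c_2 are hypothetical fit parameters, not values from the study):

    \mathrm{OD}(T) = a + bT, \qquad 29.4\,^{\circ}\mathrm{C} \le T \le 40.6\,^{\circ}\mathrm{C}

    D(T) = c_0 + c_1 T + c_2 T^2

so a dose read out at processor temperature T could be corrected back to the calibration temperature T_0 via D_corr = D_meas - [D(T) - D(T_0)].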
Stature estimation from the lengths of the growing foot-a study on North Indian adolescents.
Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam; DiMaggio, John A
2012-12-01
Stature estimation is considered one of the basic parameters of the investigation process for unknown and commingled human remains in medico-legal case work. Race, age and sex are the other parameters which help in this process. Stature estimation is of the utmost importance as it completes the biological profile of a person along with the other three parameters of identification. The present research is intended to formulate standards for stature estimation from foot dimensions in adolescent males from North India and to study the pattern of foot growth during the growing years. 154 male adolescents from the northern part of India were included in the study. Besides stature, five anthropometric measurements, namely the length of the foot from each toe (T1, T2, T3, T4, and T5, respectively) to the pternion, were taken on each foot. The data were analyzed statistically using Student's t-test, Pearson's correlation, and linear and multiple regression analysis for the estimation of stature and growth of the foot during ages 13-18 years. Correlation coefficients between stature and all the foot measurements were found to be highly significant and positively correlated. Linear regression models and multiple regression models (with age as a co-variable) were derived for the estimation of stature from the different measurements of the foot. Multiple regression models (with age as a co-variable) estimate stature with greater accuracy than the linear regression models for the 13-18 years age group. The study shows the growth pattern of feet in North Indian adolescents and indicates that anthropometric measurements of the foot and its segments are valuable for the estimation of stature in growing individuals of that population. Copyright © 2012 Elsevier Ltd. All rights reserved.
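A minimal sketch of the regression step in Python (NumPy only), assuming synthetic T1 foot lengths and ages rather than the study's measurements:

    import numpy as np

    rng = np.random.default_rng(1)
    age = rng.uniform(13, 18, 154)                                # years
    t1 = 20 + 0.5 * age + rng.normal(0, 0.6, 154)                 # T1 length, cm
    stature = 60 + 4.2 * t1 + 0.8 * age + rng.normal(0, 2, 154)   # cm

    # Linear model: stature ~ T1
    X1 = np.column_stack([np.ones_like(t1), t1])
    b1, *_ = np.linalg.lstsq(X1, stature, rcond=None)

    # Multiple model with age as a co-variable: stature ~ T1 + age
    X2 = np.column_stack([np.ones_like(t1), t1, age])
    b2, *_ = np.linalg.lstsq(X2, stature, rcond=None)
    print(b1, b2)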
NASA Astrophysics Data System (ADS)
Audebert, M.; Clément, R.; Touze-Foltz, N.; Günther, T.; Moreau, S.; Duquennoi, C.
2014-12-01
Leachate recirculation is a key process in municipal waste landfills functioning as bioreactors. To quantify the water content and to assess the leachate injection system, in-situ methods are required to obtain spatially distributed information, usually by electrical resistivity tomography (ERT). This geophysical method is based on an inversion process, which presents two major problems in terms of delimiting the infiltration area. First, it is difficult for ERT users to choose an appropriate inversion parameter set. Indeed, it might not be sufficient to interpret only the optimum model (i.e. the model with the chosen regularisation strength), because it is not necessarily the model which best represents the physical process studied. Second, it is difficult to delineate the infiltration front based on resistivity models because of the smoothness of the inversion results. This paper proposes a new methodology called MICS (multiple inversions and clustering strategy), which allows ERT users to improve the delimitation of the infiltration area in leachate injection monitoring. The MICS methodology is based on (i) a multiple inversion step, varying the inversion parameter values to take a wide range of resistivity models into account, and (ii) a clustering strategy to improve the delineation of the infiltration front. In this paper, MICS was assessed on two types of data. First, a numerical assessment allowed us to optimise and test MICS for different infiltration area sizes, contrasts and shapes. Second, MICS was applied to a field dataset gathered during leachate recirculation in a bioreactor.
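As a loose illustration of step (ii), the following Python sketch clusters cell resistivities pooled from several inversions into two classes (infiltrated vs. background), assuming scikit-learn; the 1-D models are synthetic, not field data.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    # Four inversion results of the same 50-cell profile, standing in for
    # inversions run with different regularisation strengths.
    models = np.vstack([
        np.r_[np.full(20, 30.0), np.full(30, 120.0)] + rng.normal(0, s, 50)
        for s in (2, 5, 10, 20)
    ])

    # Cluster the per-cell average resistivity to sharpen the smeared front.
    cells = models.mean(axis=0).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cells)
    print(labels)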
Multi-functional micromotor: microfluidic fabrication and water treatment application.
Chen, Anqi; Ge, Xue-Hui; Chen, Jian; Zhang, Liyuan; Xu, Jian-Hong
2017-12-05
Micromotors are important for a wide variety of applications. Here, we develop a microfluidic approach for one-step fabrication of a Janus self-propelled micromotor with multiple functions. By fine tuning the fabrication parameters and loading functional nanoparticles, our micromotor reaches a high speed and achieves an oriented function to promote the water purification efficiency and recycling process.
ERIC Educational Resources Information Center
Stenneken, Prisca; Egetemeir, Johanna; Schulte-Korne, Gerd; Muller, Hermann J.; Schneider, Werner X.; Finke, Kathrin
2011-01-01
The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these…
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
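A minimal numerical sketch of the hyperprior idea, using a truncated stick-breaking construction of the Dirichlet process over exponential rate parameters (the values and truncation level are illustrative only, not the paper's settings):

    import numpy as np

    rng = np.random.default_rng(3)
    alpha, K, n_nodes = 1.0, 20, 8   # concentration, truncation, calibrated nodes

    v = rng.beta(1, alpha, K)
    w = v * np.cumprod(np.r_[1.0, 1 - v[:-1]])   # stick-breaking weights
    atoms = rng.gamma(2.0, 1.0, K)               # base-measure draws for the rates

    # Each calibrated node's exponential rate is a draw from the discrete DP,
    # so nodes cluster into shared rate categories, as described above.
    rates = atoms[rng.choice(K, size=n_nodes, p=w / w.sum())]
    print(rates)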
GLobal Integrated Design Environment
NASA Technical Reports Server (NTRS)
Kunkel, Matthew; McGuire, Melissa; Smith, David A.; Gefert, Leon P.
2011-01-01
The GLobal Integrated Design Environment (GLIDE) is a collaborative engineering application built to resolve the design session issues of real-time passing of data between multiple discipline experts in a collaborative environment. Utilizing Web protocols and multiple programming languages, GLIDE allows engineers to use the applications to which they are accustomed (in this case, Excel) to send and receive datasets via the Internet to a database-driven Web server. Traditionally, a collaborative design session consists of one or more engineers representing each discipline meeting together in a single location. The discipline leads exchange parameters and iterate through their respective processes to converge on an acceptable dataset. In cases in which the engineers are unable to meet, their parameters are passed via e-mail, telephone, facsimile, or even postal mail. This slow process of data exchange could stretch a design session to weeks or even months. While the iterative process remains in place, software can now exchange parameters securely and efficiently, while at the same time allowing much more information about a design session to be made available. GLIDE is written in a compilation of several programming languages, including REALbasic, PHP, and Microsoft Visual Basic. GLIDE client installers are available to download for both Microsoft Windows and Macintosh systems. The GLIDE client software is compatible with Microsoft Excel 2000 or later on Windows systems, and with Microsoft Excel X or later on Macintosh systems. GLIDE follows the Client-Server paradigm, transferring encrypted and compressed data via standard Web protocols. Currently, the engineers use Excel as a front end to the GLIDE Client, as many of their custom tools run in Excel.
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example magnitude and timing of stream flow peak). An automated calibration process that allows real time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature rich tool. Our prototypes have focused on running the calibration in a distributed computing cross platform environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
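A minimal Python sketch of such an embarrassingly parallel sweep (no inter-process communication, small input, small statistical output per run); the recession model and parameter ranges are hypothetical.

    import numpy as np
    from multiprocessing import Pool

    def run_model(params):
        k, c = params
        t = np.linspace(0, 10, 200)
        sim = c * np.exp(-k * t)           # toy stream-flow recession
        obs = 2.0 * np.exp(-0.3 * t)       # stand-in observations
        rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
        return params, rmse                # small summary statistic per run

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        sweep = list(zip(rng.uniform(0.1, 1.0, 10000),
                         rng.uniform(0.5, 5.0, 10000)))
        with Pool() as pool:               # one model run per worker task
            results = pool.map(run_model, sweep)
        print(min(results, key=lambda r: r[1]))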
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
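A toy example of the central trick, solving a steady-state constraint for a kinetic parameter so that the constraint stays linear (this two-species cycle is illustrative, not a model from the paper):

    # dA/dt = -k1*A + k2*B,  dB/dt = k1*A - k2*B
    # At steady state, k1*A_ss = k2*B_ss, which is linear in the kinetic
    # parameters, so k2 can be eliminated analytically:
    def k2_from_steady_state(k1, a_ss, b_ss):
        # Non-negative whenever k1, A_ss, B_ss are non-negative, the
        # property the graph-based algorithm guarantees more generally.
        return k1 * a_ss / b_ss

    print(k2_from_steady_state(k1=0.8, a_ss=2.0, b_ss=4.0))  # 0.4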
Polymer waveguide grating sensor integrated with a thin-film photodetector
Song, Fuchuan; Xiao, Jing; Xie, Antonio Jou; Seo, Sang-Woo
2014-01-01
This paper presents a planar waveguide grating sensor integrated with a photodetector (PD) for on-chip optical sensing systems, which are suitable for diagnostics in the field and in-situ measurements. A III–V semiconductor-based thin-film PD is integrated with a polymer-based waveguide grating device on a silicon platform. The fabricated optical sensor successfully discriminates the optical spectral characteristics of the polymer waveguide grating with the on-chip PD. In addition, its potential use as a refractive index sensor is demonstrated. Based on a planar waveguide structure, the demonstrated sensor chip may incorporate multiple grating waveguide sensing regions with their own optical detection PDs. Moreover, the demonstrated processing is based on a post-integration process which is compatible with silicon complementary metal-oxide semiconductor (CMOS) electronics. Potentially, this leads to a compact, chip-scale optical sensing system which can monitor multiple physical parameters simultaneously without the need for external signal processing. PMID:24466407
Direct calculation of modal parameters from matrix orthogonal polynomials
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Guillaume, Patrick
2011-10-01
The object of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple input multiple output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly. The transformation of the coefficients from the orthogonal polynomial basis to the power polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials. High order models can be used without any numerical problems. The proposed method is compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]). For this comparative study, simulated as well as experimental data are used.
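For orientation, the scalar, power-basis analogue of the pole extraction is shown below in Python (NumPy's roots solves a companion-matrix eigenvalue problem internally); this is not the paper's multivariable algorithm, and the sampling interval is assumed.

    import numpy as np

    coeffs = [1.0, -1.8, 0.97]      # z^2 - 1.8 z + 0.97: one damped mode
    poles_z = np.roots(coeffs)      # discrete-time poles

    dt = 0.01                       # assumed sampling interval (s)
    s = np.log(poles_z) / dt        # map to continuous-time poles
    freq_hz = np.abs(s) / (2 * np.pi)
    damping = -s.real / np.abs(s)
    print(freq_hz, damping)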
Electron-impact Multiple-ionization Cross Sections for Atoms and Ions of Helium through Zinc
NASA Astrophysics Data System (ADS)
Hahn, M.; Müller, A.; Savin, D. W.
2017-12-01
We compiled a set of electron-impact multiple-ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas. For example, it can be important if there is a rapid change in the electron temperature or if there is a non-thermal electron energy distribution, such as a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae to the available experimental EIMI cross-section data. Based on this comparison, we interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for 3466 EIMI cross sections, together with the associated Maxwellian plasma rate coefficients. We also highlight some outstanding issues that remain to be resolved.
Single particle momentum and angular distributions in hadron-hadron collisions at ultrahigh energies
NASA Technical Reports Server (NTRS)
Chou, T. T.; Chen, N. Y.
1985-01-01
The forward-backward charged multiplicity distribution P(n_F, n_B) of events in the 540 GeV antiproton-proton collider has been extensively studied by the UA5 Collaboration. It was pointed out that the distribution with respect to n = n_F + n_B satisfies approximate KNO scaling and that with respect to Z = n_F - n_B it is binomial. The geometrical model of hadron-hadron collision interprets the large multiplicity fluctuation as due to the widely different nature of collisions at different impact parameters b. For a single impact parameter b, the collision in the geometrical model should exhibit stochastic behavior. This separation of the stochastic and nonstochastic (KNO) aspects of multiparticle production processes gives conceptually a lucid and attractive picture of such collisions, leading to the concept of partition temperature T_p and the single particle momentum spectrum to be discussed in detail.
Raguin, Olivier; Gruaz-Guyon, Anne; Barbet, Jacques
2002-11-01
An add-in to Microsoft Excel was developed to simulate multiple binding equilibria. A partition function, readily written even when the equilibrium is complex, describes the experimental system. It involves the concentrations of the different free molecular species and of the different complexes present in the experiment. As a result, the software is not restricted to a series of predefined experimental setups but can handle a large variety of problems involving up to nine independent molecular species. Binding parameters are estimated by nonlinear least-squares fitting of experimental measurements supplied by the user. The fitting process allows user-defined weighting of the experimental data. The flexibility of the software and the way it may be used to describe common experimental situations and to deal with usual problems, such as tracer reactivity or nonspecific binding, is demonstrated by a few examples. The software is available free of charge upon request.
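A minimal Python sketch of the simplest such system, a 1:1 equilibrium L + R <-> LR with association constant Ka, solved from the mass balance (illustrative only, not the add-in's implementation):

    from scipy.optimize import brentq

    def free_ligand(l_tot, r_tot, ka):
        # Mass balance: L_tot = L + Ka*L*R_tot / (1 + Ka*L)
        f = lambda l: l + ka * l * r_tot / (1 + ka * l) - l_tot
        return brentq(f, 0.0, l_tot)   # f(0) < 0 and f(L_tot) > 0 bracket the root

    l_free = free_ligand(l_tot=1e-6, r_tot=5e-7, ka=1e7)
    print(l_free, 1e-6 - l_free)       # free and bound ligand (M)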
Reichardt, J; Hess, M; Macke, A
2000-04-20
Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on the cirrus particle spectrum, base height, and geometric depth, and on the lidar parameters laser wavelength and receiver field of view, are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.
Cherepy, Nerine Jane; Payne, Stephen Anthony; Drury, Owen B.; Sturm, Benjamin W.
2016-02-09
According to one embodiment, a scintillator radiation detector system includes a scintillator, and a processing device for processing pulse traces corresponding to light pulses from the scintillator, where the processing device is configured to: process each pulse trace over at least two temporal windows and to use pulse digitization to improve energy resolution of the system. According to another embodiment, a scintillator radiation detector system includes a processing device configured to: fit digitized scintillation waveforms to an algorithm, perform a direct integration of fit parameters, process multiple integration windows for each digitized scintillation waveform to determine a correction factor, and apply the correction factor to each digitized scintillation waveform.
Characteristics, Process Parameters, and Inner Components of Anaerobic Bioreactors
Abdelgadir, Awad; Chen, Xiaoguang; Liu, Jianshe; Xie, Xuehui; Zhang, Jian; Zhang, Kai; Wang, Heng; Liu, Na
2014-01-01
The anaerobic bioreactor applies the principles of biotechnology and microbiology, and nowadays it is used widely in wastewater treatment plants due to its high efficiency, low energy use, and green energy generation. Advantages and disadvantages of the anaerobic process are presented, and three main characteristics of the anaerobic bioreactor (AB), namely inhomogeneity of the system, time instability, and space instability, are also discussed in this work. For high efficiency of wastewater treatment, the process parameters of anaerobic digestion, such as temperature, pH, hydraulic retention time (HRT), organic loading rate (OLR), and sludge retention time (SRT), are introduced to take into account the optimum conditions for the living, growth, and multiplication of bacteria. The inner components, which can improve SRT and even enhance mass transfer, are also explained and have been divided into transverse inner components, longitudinal inner components, and biofilm-packing material. Finally, the newly developed special inner components are discussed and found to be more efficient and productive. PMID:24672798
Characteristics, process parameters, and inner components of anaerobic bioreactors.
Abdelgadir, Awad; Chen, Xiaoguang; Liu, Jianshe; Xie, Xuehui; Zhang, Jian; Zhang, Kai; Wang, Heng; Liu, Na
2014-01-01
The anaerobic bioreactor applies the principles of biotechnology and microbiology, and nowadays it is used widely in wastewater treatment plants due to its high efficiency, low energy use, and green energy generation. Advantages and disadvantages of the anaerobic process are presented, and three main characteristics of the anaerobic bioreactor (AB), namely inhomogeneity of the system, time instability, and space instability, are also discussed in this work. For high efficiency of wastewater treatment, the process parameters of anaerobic digestion, such as temperature, pH, hydraulic retention time (HRT), organic loading rate (OLR), and sludge retention time (SRT), are introduced to take into account the optimum conditions for the living, growth, and multiplication of bacteria. The inner components, which can improve SRT and even enhance mass transfer, are also explained and have been divided into transverse inner components, longitudinal inner components, and biofilm-packing material. Finally, the newly developed special inner components are discussed and found to be more efficient and productive.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu
2017-03-27
A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open source program available for free on GitHub ( https://github.com/fzahari/ParFit ).
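In the same spirit, a minimal genetic-algorithm parameter fit in Python (a toy quadratic objective, not ParFit's code or its force-field terms):

    import numpy as np

    rng = np.random.default_rng(5)
    target = np.array([1.2, 0.8])            # stand-in "ab initio" parameters

    def fitness(p):                          # lower is better
        return float(np.sum((p - target) ** 2))

    pop = rng.uniform(0, 2, (40, 2))
    for _ in range(100):
        pop = pop[np.argsort([fitness(p) for p in pop])]
        parents = pop[:10]                   # deterministic elitist selection
        children = parents[rng.integers(0, 10, size=30)] \
                   + rng.normal(0, 0.05, (30, 2))   # stochastic mutation
        pop = np.vstack([parents, children])
    print(pop[0])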
NASA Astrophysics Data System (ADS)
Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.
2016-12-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities are, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
Modern supercritical fluid technology for food applications.
King, Jerry W
2014-01-01
This review provides an update on the use of supercritical fluid (SCF) technology as applied to food-based materials. It advocates the use of the solubility parameter theory (SPT) for rationalizing the results obtained when employing sub- and supercritical media to food and nutrient-bearing materials and for optimizing processing conditions. Total extraction and fractionation of foodstuffs employing SCFs are compared and are illustrated by using multiple fluids and unit processes to obtain the desired food product. Some of the additional prophylactic benefits of using carbon dioxide as the processing fluid are explained and illustrated with multiple examples of commercial products produced using SCF media. I emphasize the role of SCF technology in the context of environmentally benign and sustainable processing, as well as its integration into an overall biorefinery concept. Conclusions are drawn in terms of current trends in the field and future research that is needed to secure new applications of the SCF platform as applied in food science and technology.
NASA Astrophysics Data System (ADS)
Liu, Jian; Ma, Yushu; Dou, Shidan; Wang, Yi; La, Dongsheng; Liu, Jianghong; Ma, Zhenhe
2016-07-01
A blockage of the middle cerebral artery (MCA) on the cortical branch will seriously affect the blood supply of the cerebral cortex. Real-time monitoring of MCA hemodynamic parameters is critical for therapy and rehabilitation. Optical coherence tomography (OCT) is a powerful imaging modality that can produce not only structural images but also functional information on the tissue. We used OCT to detect hemodynamic changes after MCA branch occlusion. We injected a selected dose of endothelin-1 (ET-1) at a depth of 1 mm near the MCA and let the blood vessels undergo a process of occlusion followed by slow reperfusion, simulating local cerebral ischemia as realistically as possible. During this period, we used optical microangiography and Doppler OCT to obtain multiple hemodynamic MCA parameters. The trends in these parameters from before to after ET-1 injection clearly reflect the dynamic behavior of the MCA. These results reveal the mechanism of the cerebral ischemia-reperfusion process after a transient middle cerebral artery occlusion and confirm that OCT can be used to monitor hemodynamic parameters.
NASA Astrophysics Data System (ADS)
Bogoljubova, M. N.; Afonasov, A. I.; Kozlov, B. N.; Shavdurov, D. E.
2018-05-01
A predictive simulation technique for determining optimal cutting modes in the turning of workpieces made of nickel-based heat-resistant alloys, differing from well-known approaches, is proposed. The impact of various factors on the cutting process is analyzed with the purpose of determining optimal machining parameters in accordance with certain effectiveness criteria. A mathematical optimization model, algorithms and computer programmes, and visual graphical forms reflecting the dependences of the effectiveness criteria (productivity, net cost, and tool life) on the parameters of the technological process have been worked out. A nonlinear model for multidimensional functions, a "solution of the equation with multiple unknowns" approach, a coordinate descent method and heuristic algorithms are adopted to solve the problem of optimizing the cutting mode parameters. Research shows that in the machining of workpieces made from the heat-resistant alloy AISI N07263, the highest possible productivity is achieved with the following parameters: cutting speed v = 22.1 m/min, feed rate s = 0.26 mm/rev, tool life T = 18 min, net cost 2.45 per hour.
NASA Astrophysics Data System (ADS)
Koch, Wolfgang
1996-05-01
Sensor data processing in a dense target/dense clutter environment is inevitably confronted with data association conflicts, which correspond to the multiple-hypothesis character of many modern approaches (MHT: multiple hypothesis tracking). In this paper we analyze the efficiency of retrodictive techniques that generalize standard fixed-interval smoothing to MHT applications. 'Delayed estimation' based on retrodiction provides uniquely interpretable and accurate trajectories from ambiguous MHT output if a certain time delay is tolerated. In a Bayesian framework, the theoretical background of retrodiction and its intimate relation to Bayesian MHT is sketched. Using a simulated example with two closely spaced targets, relatively low detection probabilities, and rather high false return densities, we demonstrate the benefits of retrodiction and quantitatively discuss the achievable track accuracies and the time delays involved for typical radar parameters.
Typecasting catchments: Classification, directionality, and the pursuit of universality
NASA Astrophysics Data System (ADS)
Smith, Tyler; Marshall, Lucy; McGlynn, Brian
2018-02-01
Catchment classification poses a significant challenge to hydrology and hydrologic modeling, restricting widespread transfer of knowledge from well-studied sites. The identification of important physical, climatological, or hydrologic attributes (to varying degrees depending on application/data availability) has traditionally been the focus for catchment classification. Classification approaches are regularly assessed with regard to their ability to provide suitable hydrologic predictions - commonly by transferring fitted hydrologic parameters at a data-rich catchment to a data-poor catchment deemed similar by the classification. While such approaches to hydrology's grand challenges are intuitive, they often ignore the most uncertain aspect of the process - the model itself. We explore catchment classification and parameter transferability and the concept of universal donor/acceptor catchments. We identify the implications of the assumption that the transfer of parameters between "similar" catchments is reciprocal (i.e., non-directional). These concepts are considered through three case studies situated across multiple gradients that include model complexity, process description, and site characteristics. Case study results highlight that some catchments are more successfully used as donor catchments and others are better suited as acceptor catchments. These results were observed for both black-box and process consistent hydrologic models, as well as for differing levels of catchment similarity. Therefore, we suggest that similarity does not adequately satisfy the underlying assumptions being made in parameter regionalization approaches regardless of model appropriateness. Furthermore, we suggest that the directionality of parameter transfer is an important factor in determining the success of parameter regionalization approaches.
NASA Astrophysics Data System (ADS)
Qin, Zhongzhong; Cao, Leiming; Jing, Jietai
2015-05-01
Quantum correlations and entanglement shared among multiple modes are fundamental ingredients of most continuous-variable quantum technologies. Recently, a method used to generate multiple quantum correlated beams using cascaded four-wave mixing (FWM) processes was theoretically proposed and experimentally realized by our group [Z. Qin et al., Phys. Rev. Lett. 113, 023602 (2014)]. Our study of triple-beam quantum correlation paves the way to showing the tripartite entanglement in our system. Our system also promises to find applications in quantum information and precision measurement such as the controlled quantum communications, the generation of multiple quantum correlated images, and the realization of a multiport nonlinear interferometer. For its applications, the degree of quantum correlation is a crucial figure of merit. In this letter, we experimentally study how various parameters, such as the cell temperatures, one-photon, and two-photon detunings, influence the degree of quantum correlation between the triple beams generated from the cascaded two-FWM configuration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Zhongzhong; Cao, Leiming; Jing, Jietai, E-mail: jtjing@phy.ecnu.edu.cn
2015-05-25
Quantum correlations and entanglement shared among multiple modes are fundamental ingredients of most continuous-variable quantum technologies. Recently, a method used to generate multiple quantum correlated beams using cascaded four-wave mixing (FWM) processes was theoretically proposed and experimentally realized by our group [Z. Qin et al., Phys. Rev. Lett. 113, 023602 (2014)]. Our study of triple-beam quantum correlation paves the way to showing the tripartite entanglement in our system. Our system also promises to find applications in quantum information and precision measurement such as the controlled quantum communications, the generation of multiple quantum correlated images, and the realization of a multiport nonlinear interferometer. For its applications, the degree of quantum correlation is a crucial figure of merit. In this letter, we experimentally study how various parameters, such as the cell temperatures, one-photon, and two-photon detunings, influence the degree of quantum correlation between the triple beams generated from the cascaded two-FWM configuration.
An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.
Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero
2017-04-01
The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
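One such automated measurement might look like the following Python sketch, assuming a 2-D phantom image as a NumPy array; the ROI positions and metrics are illustrative choices, not the study's pipeline.

    import numpy as np

    def daily_qa_metrics(img):
        h, w = img.shape
        signal = img[h//2-10:h//2+10, w//2-10:w//2+10]  # centre of the phantom
        noise = img[:20, :20]                           # background corner
        snr = signal.mean() / noise.std()
        uniformity = 1 - (signal.max() - signal.min()) / (signal.max() + signal.min())
        return {"snr": float(snr), "uniformity": float(uniformity)}

    rng = np.random.default_rng(6)
    phantom = rng.normal(100, 2, (256, 256))            # synthetic phantom signal
    phantom[:20, :20] = rng.normal(0, 2, (20, 20))      # synthetic background
    print(daily_qa_metrics(phantom))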
A time to search: finding the meaning of variable activation energy.
Vyazovkin, Sergey
2016-07-28
This review deals with the phenomenon of variable activation energy frequently observed when studying the kinetics in the liquid or solid phase. This phenomenon commonly manifests itself through nonlinear Arrhenius plots or dependencies of the activation energy on conversion computed by isoconversional methods. Variable activation energy signifies a multi-step process and has a meaning of a collective parameter linked to the activation energies of individual steps. It is demonstrated that by using appropriate models of the processes, the link can be established in algebraic form. This allows one to analyze experimentally observed dependencies of the activation energy in a quantitative fashion and, as a result, to obtain activation energies of individual steps, to evaluate and predict other important parameters of the process, and generally to gain deeper kinetic and mechanistic insights. This review provides multiple examples of such analysis as applied to the processes of crosslinking polymerization, crystallization and melting of polymers, gelation, and solid-solid morphological and glass transitions. The use of appropriate computational techniques is discussed as well.
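For readers unfamiliar with the isoconversional computation, the Friedman relation is one common differential form (stated here as background, not as this review's derivation; LaTeX):

    \ln\!\left(\frac{d\alpha}{dt}\right)_{\alpha,i} = \ln\,[A_\alpha f(\alpha)] - \frac{E_\alpha}{R\,T_{\alpha,i}}

so E_alpha follows from the slope of ln(d(alpha)/dt) against 1/T across runs i at fixed conversion alpha; a conversion-dependent E_alpha is the signature of the multi-step kinetics discussed above.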
ERIC Educational Resources Information Center
Pawade, Yogesh R.; Diwase, Dipti S.
2016-01-01
Item analysis of Multiple Choice Questions (MCQs) is the process of collecting, summarizing and utilizing information from students' responses to evaluate the quality of test items. Difficulty Index (p-value), Discrimination Index (DI) and Distractor Efficiency (DE) are the parameters which help to evaluate the quality of MCQs used in an…
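A minimal Python sketch of the difficulty and discrimination indices named above (distractor efficiency needs per-option response data and is omitted), on a synthetic binary score matrix (students x items); the 27% upper/lower grouping is a common convention, assumed here.

    import numpy as np

    rng = np.random.default_rng(7)
    scores = (rng.random((100, 5)) < 0.6).astype(int)   # 1 = correct answer

    order = np.argsort(scores.sum(axis=1))
    lower, upper = order[:27], order[-27:]              # 27% extreme groups

    p_value = scores.mean(axis=0)                       # difficulty index
    di = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)  # discrimination
    print(p_value, di)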
Multi-objective optimization of GENIE Earth system models.
Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J
2009-07-13
The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques for simulating reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even with very efficient variants of the GSSA, parameter sweeps are prohibitively expensive to compute. Here we present a novel variant of the exact GSSA that is amenable to acceleration using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
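For context, the serial direct-method GSSA that such GPU variants accelerate fits in a few lines of Python; the two-reaction system and rate constants below are illustrative.

    import numpy as np

    rng = np.random.default_rng(8)
    x = np.array([100, 0])                  # species counts: A, B
    stoich = np.array([[-1, 1], [1, -1]])   # A -> B and B -> A
    k = np.array([0.5, 0.2])
    t, t_end = 0.0, 10.0

    while t < t_end:
        a = k * x                           # propensities (unimolecular steps)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1 / a0)        # time to the next reaction
        j = rng.choice(2, p=a / a0)         # which reaction fires
        x = x + stoich[j]
    print(t, x)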
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among the analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were used to extract the statistical characteristics of multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, theories of the Gaussian distribution were used to correct the multiple surface reflectance datasets based on the obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and the corresponding consistency evaluation.
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among the analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Theories of statistics were used to extract the statistical characteristics of multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, theories of the Gaussian distribution were used to correct the multiple surface reflectance datasets based on the obtained physical characteristics, mathematical distribution properties, and their spatial variations. The proposed method was verified with two sets of multiple satellite images, which were obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and the corresponding consistency evaluation. PMID:25405760
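A minimal Python sketch of a Gaussian (mean/variance) matching correction between two reflectance datasets, with the small-scale data as baseline; the arrays are synthetic stand-ins for co-located samples.

    import numpy as np

    rng = np.random.default_rng(9)
    fine = rng.normal(0.18, 0.03, 10000)      # baseline: small-scale reflectance
    coarse = rng.normal(0.22, 0.05, 10000)    # dataset to be corrected

    corrected = (coarse - coarse.mean()) / coarse.std() * fine.std() + fine.mean()
    print(round(corrected.mean(), 3), round(corrected.std(), 3))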
An R-Shiny Based Phenology Analysis System and Case Study Using Digital Camera Dataset
NASA Astrophysics Data System (ADS)
Zhou, Y. K.
2018-05-01
Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and huge data source for phenological analysis. Processing and mining these phenological data is still a big challenge, and there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny-based web application for extracting and analyzing vegetation phenological parameters. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. The long-term observation photography data from the Freemanwood site in 2013 are processed by this system as an example. The results show that: (1) this system is capable of analyzing large data using a distributed framework; (2) the combination of multiple parameter-extraction and growth-curve-fitting methods can effectively extract the key phenology parameters. Moreover, there are discrepancies between different combinations of methods in particular study areas. Vegetation with a single growth peak is suitable for the double-logistic module to fit the growth trajectory, while vegetation with multiple growth peaks should use the spline method.
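A minimal Python sketch of the double-logistic fitting step (the system itself is R/Shiny-based; this stand-alone illustration uses SciPy, and the data and parameter names are synthetic):

    import numpy as np
    from scipy.optimize import curve_fit

    def double_logistic(t, base, amp, sos, k1, eos, k2):
        return base + amp * (1 / (1 + np.exp(-k1 * (t - sos)))
                             - 1 / (1 + np.exp(-k2 * (t - eos))))

    doy = np.arange(1, 366, dtype=float)
    rng = np.random.default_rng(10)
    gcc = double_logistic(doy, 0.33, 0.12, 120, 0.08, 280, 0.06) \
          + rng.normal(0, 0.005, doy.size)           # synthetic greenness index

    p0 = [0.3, 0.1, 110, 0.1, 270, 0.1]              # rough initial guesses
    popt, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=10000)
    print("start of season: %.0f, end of season: %.0f" % (popt[2], popt[4]))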
Keune, Philipp M; Hansen, Sascha; Weber, Emily; Zapf, Franziska; Habich, Juliane; Muenssinger, Jana; Wolf, Sebastian; Schönenberg, Michael; Oschmann, Patrick
2017-09-01
Neurophysiologic monitoring parameters related to cognition in Multiple Sclerosis (MS) are sparse. Previous work reported an association between magnetoencephalographic (MEG) alpha-1 activity and information processing speed. While this remains to be replicated with more widely available electroencephalographic (EEG) methods, other established EEG markers, e.g. the slow-wave/fast-wave ratio (theta/beta ratio), also remain to be explored in this context. Performance on standard tests addressing information processing speed and attention (Symbol-Digit Modalities Test, SDMT; Test of Attention Performance, TAP) was examined in relation to resting-state EEG alpha-1 and alpha-2 activity and the theta/beta ratio in 25 MS patients. Increased global alpha-1 and alpha-2 activity and an increased frontal theta/beta ratio (pronounced slow-wave relative to fast-wave activity) were associated with lower SDMT processing speed. In an exploratory analysis, clinically impaired attention was associated with a significantly increased frontal theta/beta ratio, whereas alpha power did not show sensitivity to clinical impairment. EEG global alpha power and the frontal theta/beta ratio were both associated with attention, and the theta/beta ratio showed potential clinical sensitivity. Resting-state EEG recordings can be obtained during the routine clinical process, so the examined resting-state measures may represent feasible monitoring parameters in MS. This notion should be explored in future intervention studies. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
A Nonlinear Model for Fuel Atomization in Spray Combustion
NASA Technical Reports Server (NTRS)
Liu, Nan-Suey (Technical Monitor); Ibrahim, Essam A.; Sree, Dave
2003-01-01
Most gas turbine combustion codes rely on ad hoc statistical assumptions regarding the outcome of fuel atomization processes. The modeling effort proposed in this project is aimed at developing a realistic model that produces accurate predictions of fuel atomization parameters. The model applies nonlinear stability theory to analyze the instability and subsequent disintegration of the liquid fuel sheet produced by fuel injection nozzles in gas turbine combustors. The fuel sheet is atomized into a multiplicity of small drops with a large surface-area-to-volume ratio to enhance the evaporation rate and combustion performance. The proposed model will yield predictions of fuel sheet atomization parameters such as drop size, velocity, and orientation, as well as sheet penetration depth, breakup time and thickness. These parameters are essential for combustion simulation codes to perform a controlled and optimized design of gas turbine fuel injectors. Optimizing fuel injection processes is crucial to improving combustion efficiency and hence reducing fuel consumption and pollutant emissions.
A hybrid optimization approach in non-isothermal glass molding
NASA Astrophysics Data System (ADS)
Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz
2016-10-01
Intensively growing demands for complex yet low-cost precision glass optics in today's photonics market motivate the development of an efficient and economically viable manufacturing technology for complex-shaped optics. Compared with state-of-the-art replication-based methods, Non-isothermal Glass Molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, lower energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precise requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the outset, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach for Non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. A hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) is first applied to find the optimal process parameters and ensure the stability of the process. The second stage continues with the optimization of the glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for process development in Non-isothermal glass molding.
ERIC Educational Resources Information Center
Abad, Francisco J.; Olea, Julio; Ponsoda, Vicente
2009-01-01
This article deals with some of the problems that have hindered the application of Samejima's and Thissen and Steinberg's multiple-choice models: (a) parameter estimation difficulties owing to the large number of parameters involved, (b) parameter identifiability problems in the Thissen and Steinberg model, and (c) their treatment of omitted…
Chattree, A; Barbour, J A; Thomas-Gibson, S; Bhandari, P; Saunders, B P; Veitch, A M; Anderson, J; Rembacken, B J; Loughrey, M B; Pullan, R; Garrett, W V; Lewis, G; Dolwani, S; Rutter, M D
2017-01-01
The management of large non-pedunculated colorectal polyps (LNPCPs) is complex, with widespread variation in management and outcome, even amongst experienced clinicians. Variations in the assessment and decision-making processes are likely to be a major factor in this variability. The creation of a standardized minimum dataset to aid decision-making may therefore result in improved clinical management. An official working group of 13 multidisciplinary specialists was appointed by the Association of Coloproctology of Great Britain and Ireland (ACPGBI) and the British Society of Gastroenterology (BSG) to develop a minimum dataset on LNPCPs. The literature review used to structure the ACPGBI/BSG guidelines for the management of LNPCPs was used by a steering subcommittee to identify various parameters pertaining to the decision-making processes in the assessment and management of LNPCPs. A modified Delphi consensus process was then used for voting on proposed parameters over multiple voting rounds with at least 80% agreement defined as consensus. The minimum dataset was used in a pilot process to ensure rigidity and usability. A 23-parameter minimum dataset with parameters relating to patient and lesion factors, including six parameters relating to image retrieval, was formulated over four rounds of voting with two pilot processes to test rigidity and usability. This paper describes the development of the first reported evidence-based and expert consensus minimum dataset for the management of LNPCPs. It is anticipated that this dataset will allow comprehensive and standardized lesion assessment to improve decision-making in the assessment and management of LNPCPs. Colorectal Disease © 2016 The Association of Coloproctology of Great Britain and Ireland.
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC, based on using a linear regression to approximate the posterior distribution of the parameters conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results of any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local-linear regression.
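For readers unfamiliar with the regression step, the following is a minimal sketch of the local linear-regression adjustment that ABCreg automates, written from the published Beaumont-style rejection/adjustment algorithm rather than from ABCreg's own code; array shapes and the acceptance fraction are illustrative:

```python
import numpy as np

def abc_loclinear(theta_sims, stats_sims, stats_obs, accept_frac=0.1):
    """Rejection ABC with local linear regression adjustment.
    theta_sims: (n,) parameter draws from the prior
    stats_sims: (n, d) summary statistics of each simulation
    stats_obs:  (d,) observed summary statistics."""
    # Normalize each summary statistic, then rank by Euclidean distance.
    scale = stats_sims.std(axis=0)
    z = (stats_sims - stats_obs) / scale
    dist = np.sqrt((z ** 2).sum(axis=1))
    n_keep = max(int(accept_frac * len(theta_sims)), 2)
    keep = np.argsort(dist)[:n_keep]

    # Epanechnikov weights within the acceptance window.
    delta = dist[keep].max()
    w = 1.0 - (dist[keep] / delta) ** 2

    # Weighted linear regression of theta on the (centered) summaries;
    # shift the accepted draws to the observed summaries via the fit.
    X = np.column_stack([np.ones(n_keep), z[keep]])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ theta_sims[keep])
    theta_adj = theta_sims[keep] - z[keep] @ beta[1:]
    return theta_adj, w
```

The adjusted draws, together with their weights, approximate the posterior at the observed summaries.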
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
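The modified Gauss-Newton iteration at the core of the Parameter-Estimation Process can be sketched compactly. This is a generic textbook form for a weighted least-squares objective, not MODFLOW-2000's Fortran implementation; the residual and Jacobian callables stand in for the simulated-minus-observed values and the observation sensitivities described above:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, weights, max_iter=20, tol=1e-8):
    """Minimize sum_i w_i * r_i(p)^2 by Gauss-Newton iteration.
    residual(p) -> (n,) simulated-minus-observed values
    jacobian(p) -> (n, m) observation sensitivities dr_i/dp_j."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        # Normal equations of the linearized problem: (J^T W J) dp = -J^T W r
        dp = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
        p += dp
        if np.linalg.norm(dp) < tol * (1.0 + np.linalg.norm(p)):
            break
    return p
```

The "modified" variants used in practice add damping and parameter scaling to keep the update stable when sensitivities are strongly correlated, which is one reason the statistics for diagnosing inestimable parameters matter.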
NASA Astrophysics Data System (ADS)
Odbert, Henry; Aspinall, Willy
2014-05-01
Evidence-based hazard assessment at volcanoes assimilates knowledge about the physical processes of hazardous phenomena and observations that indicate the current state of a volcano. Incorporating both these lines of evidence can inform our belief about the likelihood (probability) and consequences (impact) of possible hazardous scenarios, forming a basis for formal quantitative hazard assessment. However, such evidence is often uncertain, indirect or incomplete. Approaches to volcano monitoring have advanced substantially in recent decades, increasing the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Interpreting these multiple strands of parallel, partial evidence thus becomes increasingly complex. In practice, interpreting many timeseries requires an individual to be familiar with the idiosyncrasies of the volcano, monitoring techniques, configuration of recording instruments, observations from other datasets, and so on. In making such interpretations, an individual must consider how different volcanic processes may manifest as measurable observations, and then infer from the available data what can or cannot be deduced about those processes. We examine how parts of this process may be synthesised algorithmically using Bayesian inference. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple strands of evidence (e.g. observations, model results and interpretations) and their associated uncertainties in a methodical manner. BBNs are usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic data from the long-lived eruption at Soufriere Hills Volcano, Montserrat. We discuss the uncertainty of inferences, and how our method provides a route to formal propagation of uncertainties in hazard models. Such approaches provide an attractive route to developing an interface between volcano monitoring analyses and probabilistic hazard scenario analysis. We discuss the use of BBNs in hazard analysis as a tractable and traceable tool for fast, rational assimilation of complex, multi-parameter data sets in the context of timely volcanic crisis decision support.
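The evidence-combination step a BBN performs can be illustrated with a deliberately small example. All node names and probabilities below are hypothetical placeholders, not values from the Soufriere Hills analysis, and the sketch assumes conditionally independent observations, whereas a full BBN encodes richer dependence structure:

```python
# Minimal illustration of the Bayesian updating at the heart of a BBN:
# one hidden state node (volcanic unrest: yes/no) and two observation
# nodes. All probabilities here are hypothetical.
prior = {"unrest": 0.2, "quiet": 0.8}

# P(observation | state), e.g. elevated seismicity, increased SO2 flux.
likelihood = {
    "seismicity_high": {"unrest": 0.7, "quiet": 0.1},
    "so2_high":        {"unrest": 0.6, "quiet": 0.2},
}

def posterior(prior, likelihood, observations):
    """Posterior over the hidden state given observed evidence,
    assuming conditional independence of observations (naive Bayes)."""
    post = dict(prior)
    for obs in observations:
        for state in post:
            post[state] *= likelihood[obs][state]
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

print(posterior(prior, likelihood, ["seismicity_high", "so2_high"]))
# -> unrest probability rises from 0.20 to 0.84 under both lines of evidence
```

The appeal noted in the abstract is that such updates are auditable: every number entering the posterior is an explicit, reviewable statement of belief.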
Geometric Characterization of Multi-Axis Multi-Pinhole SPECT
DiFilippo, Frank P.
2008-01-01
A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574
Gailani, Joseph Z; Lackey, Tahirih C; King, David B; Bryant, Duncan; Kim, Sung-Chan; Shafer, Deborah J
2016-03-01
Model studies were conducted to investigate the potential coral reef sediment exposure from dredging associated with proposed development of a deepwater wharf in Apra Harbor, Guam. The Particle Tracking Model (PTM) was applied to quantify the exposure of coral reefs to material suspended by the dredging operations at two alternative sites. Key PTM features include the flexible capability of continuous multiple releases of sediment parcels, control of parcel/substrate interaction, and the ability to efficiently track vast numbers of parcels. This flexibility has facilitated simulating the combined effects of sediment released from clamshell dredging and chiseling within Apra Harbor. Because the rate of material released into the water column by some of the processes is not well understood or known a priori, the modeling approach was to bracket parameters within reasonable ranges to produce a suite of potential results from multiple model runs. Sensitivity analysis to model parameters is used to select the appropriate parameter values for bracketing. Data analysis results include mapping the time series and the maximum values of sedimentation, suspended sediment concentration, and deposition rate. Data were used to quantify various exposure processes that affect coral species in Apra Harbor. The goal of this research is to develop a robust methodology for quantifying and bracketing exposure mechanisms to coral (or other receptors) from dredging operations. These exposure values were utilized in an ecological assessment to predict effects (coral reef impacts) from various dredging scenarios. Copyright © 2015. Published by Elsevier Ltd.
U.S. Seismic Design Maps Web Application
NASA Astrophysics Data System (ADS)
Martinez, E.; Fee, J.
2015-12-01
The application computes earthquake ground motion design parameters compatible with the International Building Code and other seismic design provisions, and it is the primary method for design engineers across the country to obtain ground motion parameters for multiple building codes when designing new buildings and other structures. Users specify the design code of interest, location, and other parameters to obtain the necessary ground motion information, consisting of a high-level executive summary as well as detailed information including maps, data, and graphs. Results are formatted so that they can be included directly in a final engineering report. In addition to single-site analysis, the application supports a batch mode for simultaneous consideration of multiple locations. Finally, an application programming interface (API) is available which allows other application developers to integrate this application's results into larger applications for additional processing. Development on the application has proceeded in an iterative manner, working with engineers through email, meetings, and workshops. Each iteration provided new features, improved performance, and usability enhancements. This development approach positioned the application to be integral to the structural design process, and it is now used to produce over 1800 reports daily. Recent efforts have enhanced the application to be a data-driven, mobile-first, responsive web application. Development is ongoing, and the source code has recently been published into the open-source community on GitHub. Open-sourcing the code facilitates improved incorporation of user feedback to add new features, ensuring the application's continued success.
Additive Manufacturing in Production: A Study Case Applying Technical Requirements
NASA Astrophysics Data System (ADS)
Ituarte, Iñigo Flores; Coatanea, Eric; Salmi, Mika; Tuomi, Jukka; Partanen, Jouni
Additive manufacturing (AM) is expanding manufacturing capabilities. However, the quality of AM-produced parts depends on a number of machine, geometry and process parameters. The variability of these parameters affects manufacturing drastically, and therefore standardized processes and harmonized methodologies need to be developed to characterize the technology for end-use applications and to qualify the technology for manufacturing. This research proposes a composite methodology integrating Taguchi Design of Experiments, multi-objective optimization and statistical process control to optimize the manufacturing process and fulfil multiple requirements imposed on an arbitrary geometry. The proposed methodology aims to characterize AM technology as a function of the manufacturing process variables and to perform a comparative assessment of three AM technologies (Selective Laser Sintering, Laser Stereolithography and Polyjet). Results indicate that only one machine, laser-based Stereolithography, was able to fulfil the macro- and micro-level geometrical requirements simultaneously, but its mechanical properties were not at the required level. Future research will study a single AM system at a time to characterize the technical capabilities of AM machines and to stimulate pre-normative initiatives for the technology in end-use applications.
Fisz, Jacek J
2006-12-07
The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions: GA optimizes the nonlinear parameters, and the linear parameters are calculated by MLR. GA-MLR is an intuitive optimization approach that exploits all the advantages of the genetic algorithm technique, and it results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and the linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and considerably accelerates the optimization process because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring together with nonlinear parameters in the same optimization problem. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
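The division of labor in GA-MLR can be made concrete with a small sketch: the GA population explores only the nonlinear parameters (here, two fluorescence lifetimes), and for every candidate the linear amplitudes fall out of an ordinary least-squares solve. The GA below is deliberately minimal (truncation selection plus Gaussian mutation) and illustrates the idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic biexponential decay: I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2).
t = np.linspace(0.0, 10.0, 200)
y = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 3.5)
y += rng.normal(0.0, 0.01, t.size)

def fitness(taus):
    """GA-MLR split: for candidate nonlinear parameters (lifetimes), the
    linear amplitudes are recovered in closed form by least squares, so
    the GA searches only the nonlinear subspace."""
    basis = np.column_stack([np.exp(-t / tau) for tau in taus])
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return -np.sum((basis @ amps - y) ** 2), amps

pop = rng.uniform(0.1, 10.0, size=(50, 2))           # candidate (tau1, tau2)
for _ in range(100):
    scores = np.array([fitness(ind)[0] for ind in pop])
    elite = pop[np.argsort(scores)[-10:]]             # keep the 10 best
    children = elite[rng.integers(0, 10, 40)]         # resample parents
    children *= rng.normal(1.0, 0.1, children.shape)  # Gaussian mutation
    pop = np.vstack([elite, np.clip(children, 0.05, 20.0)])

best = pop[np.argmax([fitness(ind)[0] for ind in pop])]
print("lifetimes:", np.sort(best), "amplitudes:", fitness(best)[1])
```

Because the amplitudes never enter the search space, the GA sees half the dimensionality for a biexponential model, which is the source of the acceleration described above.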
Transformation optics with windows
NASA Astrophysics Data System (ADS)
Oxburgh, Stephen; White, Chris D.; Antoniou, Georgios; Orife, Ejovbokoghene; Courtial, Johannes
2014-09-01
Identity certification in the cyberworld has always been troublesome when critical information and financial transactions must be processed. Biometric identification is the most effective measure to circumvent identity issues on mobile devices. Due to their bulky and pricey optical designs, conventional optical fingerprint readers have been discarded for mobile applications. In this paper, a digital variable-focus liquid lens was adopted to capture a floating finger via fast focus-plane scanning: simply placing a finger in front of a camera fulfills the fingerprint ID process. The prototype fingerprint reader scans multiple focal planes from 30 mm to 15 mm in 0.2 s. From the multiple images at various focuses, one image is chosen for extraction of the fingerprint minutiae used for identity certification. In the optical design, a digital liquid lens atop a webcam with a fixed-focus lens module fast-scans a floating finger at preset focus planes. The distance, rolling angle and pitching angle of the finger are stored as crucial parameters for the matching of fingerprint minutiae. This innovative compact touchless fingerprint reader could be packed into a minute size of 9.8 × 9.8 × 5 mm once the optical design and the multiple focus-plane scan function are optimized.
An Efficient Method for Verifying Gyrokinetic Microstability Codes
NASA Astrophysics Data System (ADS)
Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.
2009-11-01
Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files while ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a Python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other Python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate Python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
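A sketch of what the translation step involves is below. The parameter names on both sides are illustrative placeholders, not the actual GYRO or GS2 namelist keys (which differ in naming and normalization conventions); only the shape of the task is shown:

```python
# Sketch of an input-translation routine, with illustrative (not actual)
# parameter names and unit conventions for the two codes.
def parse_flat_input(path):
    """Read 'KEY = value' pairs from a flat input file."""
    params = {}
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()  # drop comments and blanks
            if "=" in line:
                key, value = (s.strip() for s in line.split("=", 1))
                params[key] = float(value)
    return params

def gyro_to_gs2(gyro):
    """Map GYRO-style keys to GS2-style keys (hypothetical names).
    A real translator must also reconcile normalization conventions,
    e.g. differing reference lengths and temperature ratios."""
    return {
        "tprim": gyro["DLNTDR"],        # temperature gradient scale length
        "fprim": gyro["DLNNDR"],        # density gradient scale length
        "q": gyro["SAFETY_FACTOR"],
        "shat": gyro["MAGNETIC_SHEAR"],
    }
```

Keeping this mapping in one audited routine is what makes the "faithfully translated code-to-code" requirement checkable.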
CHAM: weak signals detection through a new multivariate algorithm for process control
NASA Astrophysics Data System (ADS)
Bergeret, François; Soual, Carole; Le Gratiet, B.
2016-10-01
Derivative technologies based on core CMOS processes are significantly aggressive in terms of design rules and process control requirements. The process control plan is derived from Process Assumption (PA) calculations, which result in design rules based on known process variability capabilities, taking into account enough margin to be safe not only for yield but especially for reliability. Even though process assumptions are calculated with a 4-sigma margin on known process capability, efficient and competitive designs are challenging the process, especially for derivative technologies at the 40 and 28 nm nodes. For wafer fab process control, PAs are translated into monovariate control charts (layer1 CD, layer2 CD, layer2-to-layer1 overlay, layer3 CD, etc.) with appropriate specifications and control limits which together secure the silicon. This works fine so far, but such a system is not really sensitive to weak signals coming from interactions of multiple key parameters (a high layer2 CD combined with a high layer3 CD, for example). CHAM is a software package using an advanced statistical algorithm specifically designed to detect small signals, especially when there are many parameters to control and when the parameters can interact to create yield issues. In this presentation we first present the CHAM algorithm, then a case study on critical dimensions with its results, and we conclude with future work. This partnership between Ippon and STM is part of E450LMDAP, a European project dedicated to metrology and lithography development for future technology nodes, especially the 10 nm node.
Apical polarity in three-dimensional culture systems: where to now?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inman, J.L.; Bissell, Mina
2010-01-21
Delineation of the mechanisms that establish and maintain the polarity of epithelial tissues is essential to understanding morphogenesis, tissue specificity and cancer. Three-dimensional culture assays provide a useful platform for dissecting these processes but, as discussed in a recent study in BMC Biology on the culture of mammary gland epithelial cells, multiple parameters that influence the model must be taken into account.
Building a Predictive Capability for Decision-Making that Supports MultiPEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, Joshua Daniel
Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.
He, Fuyuan; Deng, Kaiwen; Zou, Huan; Qiu, Yun; Chen, Feng; Zhou, Honghao
2011-01-01
To study the differences between chromatopharmacokinetics (pharmacokinetics with fingerprint chromatography) and chromatopharmacodynamics (pharmacodynamics with fingerprint chromatography) of Chinese materia medica formulae, in order to answer the question of whether the pharmacokinetic parameters of multiple composites can be used to guide medication with multiple composites. On the basis of the four established branches of chromatopharmacology (pharmacology with chromatographic fingerprints), the pharmacokinetics and pharmacodynamics were comparatively analyzed with respect to their mathematical models and parameter definitions. On the basis of quantitative pharmacology, the function expressions and total statistical parameters, such as the total zero moment, total first moment, and total second moment of the pharmacokinetics and pharmacodynamics, were reduced to common expressions, and the results were elucidated for single and multiple components in Chinese materia medica formulae. The total quantitative pharmacokinetic, i.e., chromatopharmacokinetic, parameters were determined by the pharmacokinetic parameters of each component, whereas the total quantitative pharmacodynamic, i.e., chromatopharmacodynamic, parameters were determined by both the pharmacokinetic and the pharmacodynamic parameters of each component. The pharmacokinetic parameters corresponded to the pharmacodynamic parameters with a stable effective coefficient when the constitutive ratio of each composite was constant. The effects of Chinese materia medica were controlled by both pharmacokinetic and pharmacodynamic coefficients. The case in which a pharmacokinetic parameter can independently guide clinical medication for a single component is a special one; the chromatopharmacokinetic parameters do not apply to multiple-drug combination systems and cannot be used to solve the chromatopharmacokinetic problems of Chinese materia medica formulae.
Damage modeling and statistical analysis of optics damage performance in MJ-class laser systems.
Liao, Zhi M; Raymond, B; Gaylord, J; Fallejo, R; Bude, J; Wegner, P
2014-11-17
Modeling the lifetime of a fused silica optic is described for a multiple-beam, MJ-class laser system. This entails combining optic processing data with laser shot data to account for the complete history of optic processing and shot exposure. Integration with online inspection data allows the construction of a performance metric describing how an optic performs with respect to the model. This methodology helps to validate the damage model, enables strategic planning, and identifies potential hidden parameters that are affecting the optic's performance.
Process for combining multiple passes of interferometric SAR data
Bickel, Douglas L.; Yocky, David A.; Hensley, Jr., William H.
2000-11-21
Interferometric synthetic aperture radar (IFSAR) is a promising technology for a wide variety of military and civilian elevation modeling requirements. IFSAR extends traditional two dimensional SAR processing to three dimensions by utilizing the phase difference between two SAR images taken from different elevation positions to determine an angle of arrival for each pixel in the scene. This angle, together with the two-dimensional location information in the traditional SAR image, can be transformed into geographic coordinates if the position and motion parameters of the antennas are known accurately.
Liu, Wei; Yang, Xiang-Liang; Ho, W S Winston
2011-01-01
Much attention has in recent years been paid to fine applications of drug delivery systems, such as multiple emulsions and micro/nano solid lipid and polymer particles (spheres or capsules). Precise control of particle size and size distribution is especially important in such fine applications. Membrane emulsification can be used to prepare uniform-sized multiple emulsions and micro/nano particulates for drug delivery. It is a promising technique because of its better control of size and size distribution, the mildness of the process, low energy consumption, easy operation, simple equipment, and amenability to large-scale production. This review describes the state of the art of membrane emulsification for the preparation of monodisperse multiple emulsions and micro/nano particulates for drug delivery in recent years. The principles, influence of process parameters, advantages and disadvantages, and applications in preparing different types of drug delivery systems are reviewed. It can be concluded that the membrane emulsification technique for preparing emulsion/particulate products for drug delivery will expand further in the near future, in conjunction with more basic investigations of this technique. Copyright © 2010 Wiley-Liss, Inc. and the American Pharmacists Association
Multiscale metrologies for process optimization of carbon nanotube polymer composites
Natarajan, Bharath; Orloff, Nathan D.; Ashkar, Rana; ...
2016-07-18
Carbon nanotube (CNT) polymer nanocomposites are attractive multifunctional materials with a growing range of commercial applications. With the increasing demand for these materials, it is imperative to develop and validate methods for on-line quality control and process monitoring during production. In this work, a novel combination of characterization techniques is utilized that facilitates the non-invasive assessment of CNT dispersion in epoxy produced by the scalable process of calendering. First, the structural parameters of these nanocomposites are evaluated across multiple length scales (10⁻¹⁰ m to 10⁻³ m) using scanning gallium-ion microscopy, transmission electron microscopy and small-angle neutron scattering. Then, a non-contact resonant microwave cavity perturbation (RCP) technique is employed to accurately measure the AC electrical conductivity of the nanocomposites. Quantitative correlations between the conductivity and structural parameters show the RCP measurements to be sensitive to CNT mass fraction, spatial organization and, therefore, the processing parameters. These results, together with the non-contact nature and speed of RCP measurements, identify this technique as ideally suited for quality control of CNT nanocomposites in a nanomanufacturing environment. In conclusion, when validated by the multiscale characterization suite, RCP may be broadly applicable in the production of hybrid functional materials, such as graphene, gold nanorod, and carbon black nanocomposites.
On using the Multiple Signal Classification algorithm to study microbaroms
NASA Astrophysics Data System (ADS)
Marcillo, O. E.; Blom, P. S.; Euler, G. G.
2016-12-01
Multiple Signal Classification (MUSIC) (Schmidt, 1986) is a well-known high-resolution algorithm used in array processing for parameter estimation. We report on the application of MUSIC to infrasonic array data in a study of the structure of microbaroms. Microbaroms can be observed globally and display energy centered around 0.2 Hz. They are an infrasonic signal generated by the non-linear interaction of ocean surface waves that radiates into the ocean and atmosphere, as well as into the solid earth in the form of microseisms. Microbarom sources are dynamic and, in many cases, distributed in space and moving in time. We assume that the microbarom energy detected by an infrasonic array is the result of multiple sources (with different back-azimuths) in the same bandwidth, and we apply the MUSIC algorithm accordingly to recover the back-azimuth and trace velocity of the individual components. Preliminary results show that the multiple-component assumption in MUSIC allows one to resolve fine structure in the microbarom band that can be related to multiple ocean surface phenomena.
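For reference, the core of MUSIC as it would apply to one narrow frequency band of infrasonic array data is sketched below; a real processing chain would first estimate the cross-spectral matrix from windowed records band-passed around 0.2 Hz, and several peaks in the pseudo-spectrum would be the signature of multiple microbarom sources. This is a generic MUSIC implementation, not the authors' code:

```python
import numpy as np

def music_spectrum(R, coords, freq, c_trace, n_sources, azimuths):
    """MUSIC pseudo-spectrum over candidate back-azimuths at one frequency.
    R:        (m, m) cross-spectral (covariance) matrix of m array elements
    coords:   (m, 2) sensor positions (east, north) in metres
    c_trace:  assumed horizontal trace velocity in m/s
    azimuths: candidate back-azimuths in radians, clockwise from north."""
    # Noise subspace: eigenvectors beyond the n_sources largest eigenvalues
    # (np.linalg.eigh returns eigenvalues in ascending order).
    vals, vecs = np.linalg.eigh(R)
    En = vecs[:, : R.shape[0] - n_sources]
    p = np.empty(len(azimuths))
    for i, az in enumerate(azimuths):
        s = np.array([np.sin(az), np.cos(az)]) / c_trace  # slowness vector
        a = np.exp(-2j * np.pi * freq * (coords @ s))     # steering vector
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p  # peaks mark back-azimuths of the microbarom components
```

Because the pseudo-spectrum is evaluated against the noise subspace rather than by beam power, closely spaced back-azimuths that a conventional beamformer would merge can remain resolved, which is the property exploited here.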
A wireless body measurement system to study fatigue in multiple sclerosis.
Yu, Fei; Bilberg, Arne; Stenager, Egon; Rabotti, Chiara; Zhang, Bin; Mischi, Massimo
2012-12-01
Fatigue is reported as the most common symptom by patients with multiple sclerosis (MS). The physiological and functional parameters related to fatigue in MS patients are currently not well established. A new wearable wireless body measurement system, named the Fatigue Monitoring System (FAMOS), was developed to study fatigue in MS. It can continuously measure the electrocardiogram, body-skin temperature, electromyogram and motions of the feet. The goal of this study is to test the ability of the FAMOS to distinguish fatigued MS patients from healthy subjects. This paper presents the realization of the measurement system, including the design of both the hardware and the dedicated signal processing algorithms. Twenty-six participants, including 17 MS patients with fatigue and 9 sex- and age-matched healthy controls, were included in the study for continuous 24 h monitoring. The preliminary results show significant differences between fatigued MS patients and healthy controls. In conclusion, the FAMOS enables continuous data acquisition and the estimation of multiple physiological and functional parameters. It provides a new, flexible and objective approach to studying fatigue in MS, one which can distinguish between fatigued MS patients and healthy controls. The usability and reliability of the FAMOS should, however, be further improved and validated through larger clinical trials.
Analysis of Generator Oscillation Characteristics Based on Multiple Synchronized Phasor Measurements
NASA Astrophysics Data System (ADS)
Hashiguchi, Takuhei; Yoshimoto, Masamichi; Mitani, Yasunori; Saeki, Osamu; Tsuji, Kiichiro
In recent years, there has been considerable interest in on-line measurement, such as the observation of power system dynamics and the evaluation of machine parameters. On-line methods are particularly attractive since the machine's service need not be interrupted, and parameter estimation is performed by processing measurements obtained during the normal operation of the machine. The authors placed PMUs (Phasor Measurement Units) connected to 100 V outlets at several universities in the 60 Hz power system and examined the oscillation characteristics of the power system. The PMUs are synchronized via the global positioning system (GPS), and the measured data are transmitted over the Internet. This paper describes an application of PMUs to generator oscillation analysis. The purpose of this paper is to present methods for processing the phase differences and to estimate the damping coefficient and natural angular frequency from phase differences at steady state.
High speed demodulation systems for fiber optic grating sensors
NASA Technical Reports Server (NTRS)
Udd, Eric (Inventor); Weisshaar, Andreas (Inventor)
2002-01-01
Fiber optic grating sensor demodulation systems are described that offer high speed and multiplexing options for both single- and multiple-parameter fiber optic grating sensors. To attain very high speeds for single-parameter fiber grating sensors, ratio techniques are used that allow a series of sensors to be placed in a single fiber while retaining high-speed capability. These methods can be extended to multiparameter fiber grating sensors. Speeds can be optimized by minimizing the number of spectral peaks that must be processed, and it is shown that two or three spectral peak measurements may, in specific multiparameter applications, offer comparable or better performance than processing four spectral peaks. Combining the ratio methods with minimization of peak measurements allows very high speed measurement of such important environmental effects as transverse strain and pressure.
Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor
Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki
2015-01-01
This paper presents a fast adaptive image restoration method for removing the spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of the space-variant point-spread function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented as a combination of multiple FIR filters, which guarantees fast image restoration without the need for iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in terms of both objective and subjective performance measures. The proposed algorithm can be employed in a wide range of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
An Index and Test of Linear Moderated Mediation.
Hayes, Andrew F
2015-01-01
I describe a test of linear moderated mediation in path analysis based on an interval estimate of the parameter of a function linking the indirect effect to values of a moderator, a parameter that I call the index of moderated mediation. This test can be used for models that integrate moderation and mediation in which the relationship between the indirect effect and the moderator is estimated as linear, including many of the models described by Edwards and Lambert (2007) and Preacher, Rucker, and Hayes (2007), as well as extensions of these models to processes involving multiple mediators operating in parallel or in serial. Generalization of the method to latent variable models is straightforward. Three empirical examples describe the computation of the index and the test, and its implementation is illustrated using Mplus and the PROCESS macro for SPSS and SAS.
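In the simplest first-stage case, where a moderator W affects only the X-to-M path, the index has a closed form. Written out (a minimal sketch consistent with the linear models the abstract describes):

```latex
\[
M = a_0 + a_1 X + a_2 W + a_3 XW + e_M, \qquad
Y = b_0 + c' X + b_1 M + e_Y
\]
\[
\omega(W) = (a_1 + a_3 W)\,b_1 = a_1 b_1 + \underbrace{a_3 b_1}_{\text{index}}\;W
\]
```

The test then asks whether an interval estimate (e.g. a bootstrap confidence interval) for the slope $a_3 b_1$ excludes zero; if it does, the indirect effect of X on Y through M depends linearly on W.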
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric temperature and wind improves the reliability of multiple-parameter estimation, and the errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology: with the optimized parameters, the bias of the SST climatology is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
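The ensemble update that drives such parameter estimation can be sketched in a few lines. This is a generic perturbed-observation ensemble Kalman update for a single scalar parameter (e.g. the SPD) against a single scalar observation (e.g. an SST value), not the coupled assimilation system itself:

```python
import numpy as np

def enkf_parameter_update(params, pred_obs, obs, obs_err, seed=0):
    """One ensemble Kalman update of an uncertain model parameter.
    params:   (n_ens,) ensemble of parameter values (e.g. SPD)
    pred_obs: (n_ens,) model-predicted observation per member (e.g. SST)
    obs:      scalar observed value
    obs_err:  observation error standard deviation."""
    cov_py = np.cov(params, pred_obs)[0, 1]      # parameter-obs covariance
    var_y = np.var(pred_obs, ddof=1) + obs_err ** 2
    gain = cov_py / var_y                        # Kalman gain for the parameter
    # Perturbed-observation form: each member assimilates obs plus noise.
    rng = np.random.default_rng(seed)
    innovations = obs + rng.normal(0.0, obs_err, params.size) - pred_obs
    return params + gain * innovations
```

The key point mirrored in the abstract is that the parameter is corrected only through its ensemble covariance with observed quantities, so richer observations (adding atmospheric temperature and wind to SST/SSS) sharpen that covariance and make multiple-parameter estimation reliable.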
Hanke, Alexander T; Tsintavi, Eleni; Ramirez Vazquez, Maria Del Pilar; van der Wielen, Luuk A M; Verhaert, Peter D E M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel
2016-09-01
Knowledge-based development of chromatographic separation processes requires efficient techniques to determine the physicochemical properties of the product and of the impurities to be removed. These characterization techniques are usually divided into approaches that determine molecular properties, such as charge, hydrophobicity and size, and approaches that determine molecular interactions with auxiliary materials, commonly in the form of adsorption isotherms. In this study we demonstrate the application of a three-dimensional liquid chromatography approach to a clarified cell homogenate containing a therapeutic enzyme. Each separation dimension determines a molecular property relevant to the chromatographic behavior of each component. Matching of the peaks across the different separation dimensions and against a high-resolution reference chromatogram allows the determined parameters to be assigned to pseudo-components, identifying the most promising technique for the removal of each impurity. More detailed process design using mechanistic models requires isotherm parameters. For this purpose, the second dimension consists of multiple linear-gradient separations on columns in a high-throughput-screening-compatible format, which allow regression of isotherm parameters with an average standard error of 8%. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1283-1291, 2016. © 2016 American Institute of Chemical Engineers.
Pervez, Hifsa; Mozumder, Mohammad S; Mourad, Abdel-Hamid I
2016-08-22
The current study presents an investigation of the optimization of the injection molding parameters of HDPE/TiO₂ nanocomposites using grey relational analysis with the Taguchi method. Four control factors, namely filler concentration (i.e., TiO₂), barrel temperature, residence time and holding time, were chosen, each at three levels. Mechanical properties, such as yield strength, Young's modulus and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L₉ orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO₂, a barrel temperature of 225 °C, a residence time of 30 min and a holding time of 20 s. Moreover, analysis of variance (ANOVA) was applied to identify the most significant factor, and the percentage of TiO₂ nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO₂ nanocomposites fabricated through the injection molding process.
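The grey relational steps mentioned above reduce the multiple responses to a single grade per run; a minimal sketch is below. The numeric rows are made-up stand-ins for measured yield strength, modulus and elongation, and the distinguishing coefficient zeta = 0.5 is the conventional choice:

```python
import numpy as np

def grey_relational_grade(responses, larger_is_better, zeta=0.5):
    """Grey relational grade for each experimental run.
    responses: (n_runs, n_responses) measured outputs, one row per
    Taguchi L9 run (values here are hypothetical)."""
    r = np.asarray(responses, dtype=float)
    lo, hi = r.min(axis=0), r.max(axis=0)
    # Normalize each response to [0, 1], direction-aware per column.
    norm = np.where(larger_is_better, (r - lo) / (hi - lo), (hi - r) / (hi - lo))
    delta = 1.0 - norm  # deviation from the ideal sequence (= 1)
    # Grey relational coefficient, with global min/max deviations.
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)  # equal-weight grade per run

runs = [[26.1, 820.0, 9.5], [27.4, 860.0, 8.1], [25.2, 790.0, 11.0]]
grades = grey_relational_grade(runs, larger_is_better=[True, True, True])
print("best run by grey relational grade:", int(grades.argmax()))
```

Averaging the grades over the levels of each factor then gives the response table from which the optimal level combination is read off, with ANOVA apportioning each factor's contribution.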
NASA Astrophysics Data System (ADS)
Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj
2015-12-01
The electric discharge drill machine (EDDM) uses a spark erosion process to produce micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental and automobile industries. For performance evaluation of an electric discharge drill machine, it is necessary to study the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generated output responses such as tool wear rate (TWR). The parameters pulse on-time, pulse off-time and water pressure were studied for the best machining characteristics. This investigation presents the use of the Taguchi approach for improved TWR in drilling of Al-7075. A plan of experiments based on the Taguchi L27 design method was selected for drilling of the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the machining of Al-7075 by EDDM. The optimal combination of levels and the significant drilling parameters for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives the maximum material removal rate (MRR).
QCScreen: a software tool for data quality control in LC-HRMS based metabolomics.
Simader, Alexandra Maria; Kluger, Bernhard; Neumann, Nora Katharina Nicole; Bueschl, Christoph; Lemmens, Marc; Lirk, Gerald; Krska, Rudolf; Schuhmacher, Rainer
2015-10-24
Metabolomics experiments often comprise large numbers of biological samples resulting in huge amounts of data. This data needs to be inspected for plausibility before data evaluation to detect putative sources of error e.g. retention time or mass accuracy shifts. Especially in liquid chromatography-high resolution mass spectrometry (LC-HRMS) based metabolomics research, proper quality control checks (e.g. for precision, signal drifts or offsets) are crucial prerequisites to achieve reliable and comparable results within and across experimental measurement sequences. Software tools can support this process. The software tool QCScreen was developed to offer a quick and easy data quality check of LC-HRMS derived data. It allows a flexible investigation and comparison of basic quality-related parameters within user-defined target features and the possibility to automatically evaluate multiple sample types within or across different measurement sequences in a short time. It offers a user-friendly interface that allows an easy selection of processing steps and parameter settings. The generated results include a coloured overview plot of data quality across all analysed samples and targets and, in addition, detailed illustrations of the stability and precision of the chromatographic separation, the mass accuracy and the detector sensitivity. The use of QCScreen is demonstrated with experimental data from metabolomics experiments using selected standard compounds in pure solvent. The application of the software identified problematic features, samples and analytical parameters and suggested which data files or compounds required closer manual inspection. QCScreen is an open source software tool which provides a useful basis for assessing the suitability of LC-HRMS data prior to time consuming, detailed data processing and subsequent statistical analysis. It accepts the generic mzXML format and thus can be used with many different LC-HRMS platforms to process both multiple quality control sample types as well as experimental samples in one or more measurement sequences.
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of hundreds of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field-scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature prior to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis of the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation, provided all controlling factors are incorporated in the field-scale model. These factors include the advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
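For context, the Monod-type rate law that the study contrasts with zero- and first-order approximations couples the contaminant, the electron acceptor and microbial growth. A generic dual-Monod form (standard textbook notation, not necessarily BIO3D's exact formulation) is:

```latex
\[
\frac{dS}{dt} = -k_{\max}\, X \,\frac{S}{K_S + S}\,\frac{O}{K_O + O},
\qquad
\frac{dX}{dt} = Y \left(-\frac{dS}{dt}\right) - b\,X
\]
```

Here $S$ is the substrate (contaminant) concentration, $O$ the electron acceptor (oxygen), $X$ the biomass, with yield $Y$ and decay coefficient $b$. The zero- and first-order rates arise as the limits $S \gg K_S$ and $S \ll K_S$ respectively, which is why rates fitted in one regime in the laboratory extrapolate poorly to field conditions where oxygen limitation and biomass growth matter.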
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.
1996-01-01
This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.
Ruys, Andrew J.
2018-01-01
Electrospun fibres have gained broad interest in biomedical applications, including tissue engineering scaffolds, due to their potential in mimicking extracellular matrix and producing structures favourable for cell and tissue growth. The development of scaffolds often involves multivariate production parameters and multiple output characteristics to define product quality. In this study on electrospinning of polycaprolactone (PCL), response surface methodology (RSM) was applied to investigate the determining parameters and find optimal settings to achieve the desired properties of fibrous scaffold for acetabular labrum implant. The results showed that solution concentration influenced fibre diameter, while elastic modulus was determined by solution concentration, flow rate, temperature, collector rotation speed, and interaction between concentration and temperature. Relationships between these variables and outputs were modelled, followed by an optimization procedure. Using the optimized setting (solution concentration of 10% w/v, flow rate of 4.5 mL/h, temperature of 45 °C, and collector rotation speed of 1500 RPM), a target elastic modulus of 25 MPa could be achieved at a minimum possible fibre diameter (1.39 ± 0.20 µm). This work demonstrated that multivariate factors of production parameters and multiple responses can be investigated, modelled, and optimized using RSM. PMID:29562614
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.; Karyakina, T. I.
2018-06-01
In this paper, we consider the activation processes in a nonlinear metastable system based on a lateral (quasi-two-dimensional) superlattice and study the dynamics of such a system externally driven by a harmonic force. The internal control parameters are the longitudinal applied electric field and the sample temperature. The spontaneous transverse electric field is considered as an order parameter, and the forced variations of the order parameter are considered as the response of the system to the periodic driving. We investigate the cooperative effects of self-organization and high harmonic forcing from the viewpoint of catastrophe theory and show the possibility of generating the third and higher odd harmonics in the output signal, which leads to distortion of its waveform. A higher-harmonics detection strategy is then proposed and explained in detail by exploring the influences of the system parameters on the response of the system, which are discussed through numerical simulations.
NASA Technical Reports Server (NTRS)
Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.
1995-01-01
A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (>99%) occurs for a large, scaled problem with 64³ particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems, including those with inhomogeneous plasmas, on other parallel machines once the machine-dependent parameters are known.
Finite-size effects and switching times for Moran process with mutation.
DeVille, Lee; Galiardi, Meghan
2017-04-01
We consider the Moran process with two populations competing under an iterated Prisoner's Dilemma in the presence of mutation, and concentrate on the case where there are multiple evolutionarily stable strategies. We perform a complete bifurcation analysis of the deterministic system which arises in the infinite-population-size limit. We also study the Master equation and obtain asymptotics for the invariant distribution and metastable switching times for the stochastic process in the case of a large but finite population. We also show that the stochastic system has asymmetries in the form of a skew for parameter values where the deterministic limit is symmetric.
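For readers who want to experiment, the following is a minimal Moran birth-death simulation with mutation. A one-shot Prisoner's Dilemma payoff matrix is used as a stand-in for the iterated game studied in the paper; the payoffs, population size, and mutation rate are illustrative only.

```python
import numpy as np

def moran_step(i, N, payoff, mu, rng):
    """One Moran birth-death step for two strategies (0 = cooperate,
    1 = defect). i = current count of strategy 1; returns the new count.
    payoff[a][b] = payoff to a player using strategy a against strategy b."""
    j = N - i
    # Average payoffs against the rest of the population (no self-play).
    f1 = (payoff[1][1]*(i-1) + payoff[1][0]*j) / (N-1) if i > 0 else 0.0
    f0 = (payoff[0][1]*i + payoff[0][0]*(j-1)) / (N-1) if j > 0 else 0.0
    total = f1*i + f0*j
    p_birth1 = f1*i/total if total > 0 else i/N
    birth = rng.random() < p_birth1          # fitness-proportional birth
    if rng.random() < mu:                    # mutation flips the offspring
        birth = not birth
    death_is_1 = rng.random() < i/N          # uniform random death
    return i + int(birth) - int(death_is_1)

rng = np.random.default_rng(2)
N, i, mu = 100, 50, 0.01
payoff = [[3.0, 0.0], [5.0, 1.0]]            # R=3, S=0, T=5, P=1
trajectory = []
for _ in range(20000):
    i = moran_step(i, N, payoff, mu, rng)
    trajectory.append(i)
print("time-averaged fraction of defectors:", np.mean(trajectory) / N)
```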
Lin-Gibson, Sheng; Sung, Lipiin; Forster, Aaron M; Hu, Haiqing; Cheng, Yajun; Lin, Nancy J
2009-07-01
Multicomponent formulations coupled with complex processing conditions govern the final properties of photopolymerizable dental composites. In this study, a single test substrate was fabricated to support multiple formulations with a gradient in degree of conversion (DC), allowing the evaluation of multiple processing conditions and formulations on one specimen. Mechanical properties and damage response were evaluated as a function of filler type/content and irradiation. DC, surface roughness, modulus, hardness, scratch deformation and cytotoxicity were quantified using techniques including near-infrared spectroscopy, laser confocal scanning microscopy, depth-sensing indentation, scratch testing and cell viability. Scratch parameters (depth, width, percent recovery) were correlated to composite modulus and hardness. Total filler content, nanofiller and irradiation time/intensity all affected the final properties, with the dominant factor for improved properties being a higher DC. This combinatorial platform accelerates the screening of dental composites through the direct comparison of properties and processing conditions across the same sample.
Universality in the tail of musical note rank distribution
NASA Astrophysics Data System (ADS)
Beltrán del Río, M.; Cocho, G.; Naumis, G. G.
2008-09-01
Although power laws have been used to fit rank distributions in many different contexts, they usually fail at the tails. Languages as sequences of symbols have been a popular subject for ranking distributions, and for this purpose music can be treated as such. Here we show that more than 1800 musical compositions are very well fitted by a two-parameter beta distribution of the first kind, which arises in the ranking of multiplicative stochastic processes. The parameters a and b are obtained for classical, jazz and rock music, revealing interesting features. In particular, we obtain a clear trend in the values of the parameters for major and minor tonal modes. Finally, we discuss the distribution of notes within each octave and its connection with the ranking of the notes.
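The two-parameter rank law referred to here is commonly written f(r) = A(N+1-r)^b / r^a, which is linear in the logarithms, so the exponents can be recovered by ordinary least squares. A sketch on synthetic rank data (the musical corpora themselves are not reproduced here):

```python
import numpy as np

# Synthetic rank-frequency data following the two-parameter beta-like law.
rng = np.random.default_rng(3)
N = 88
r = np.arange(1, N + 1)
a_true, b_true = 0.8, 0.3
f = 1000.0 * (N + 1 - r)**b_true / r**a_true
f = f * np.exp(0.05 * rng.standard_normal(N))   # multiplicative noise

# log f = log A - a*log r + b*log(N+1-r): linear in (log A, a, b).
X = np.column_stack([np.ones(N), -np.log(r), np.log(N + 1 - r)])
coef, *_ = np.linalg.lstsq(X, np.log(f), rcond=None)
logA, a_hat, b_hat = coef
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")
```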
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
Systematic development of technical textiles
NASA Astrophysics Data System (ADS)
Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.
2016-07-01
Technical textiles are used in various fields of application, ranging from small-scale (e.g. medical applications) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time consuming due to multiple interacting parameters, which are related to the production process as well as to the textile structure and the material used. A large number of iteration steps is necessary to adjust the process parameters and finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected. The aim of the method is to support a development team in the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.
Software Computes Tape-Casting Parameters
NASA Technical Reports Server (NTRS)
deGroh, Henry C., III
2003-01-01
Tcast2 is a FORTRAN computer program that accelerates the setup of a process in which a slurry containing metal particles and a polymeric binder is cast, to a thickness regulated by a doctor blade, onto fibers wound on a rotating drum to make a green precursor of a metal-matrix/fiber composite tape. Before Tcast2, setup parameters were determined by trial and error in time-consuming multiple iterations of the process. In Tcast2, the fiber architecture in the final composite is expressed in terms of the lateral distance between fibers and the thickness-wise distance between fibers in adjacent plies. The lateral distance is controlled via the manner of winding. The interply spacing is controlled via the characteristics of the slurry and the doctor-blade height. When a new combination of fibers and slurry is first cast and dried to a green tape, the shrinkage from the wet to the green condition and a few other key parameters of the green tape are measured. These parameters are provided as input to Tcast2, which uses them to compute the doctor-blade height and fiber spacings needed to obtain the desired fiber architecture and fiber volume fraction in the final composite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
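As an example of the kind of analytical forward solution the VBA functions implement, the sketch below codes the classical Ogata-Banks solution of the 1-D equilibrium convection-dispersion equation for a continuous step input. The parameter values are illustrative, and the actual CXTFIT library covers many more cases (nonequilibrium transport, pulse inputs, production and decay).

```python
import numpy as np
from scipy.special import erfc

def cde_equilibrium(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D equilibrium convection-dispersion
    equation for a continuous step input of concentration c0:
    c/c0 = 0.5*erfc((x-vt)/(2*sqrt(Dt))) + 0.5*exp(vx/D)*erfc((x+vt)/(2*sqrt(Dt)))."""
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v*t) / s) + np.exp(v*x / D) * erfc((x + v*t) / s))

# Breakthrough curve at x = 10 cm for v = 1 cm/h and D = 0.5 cm^2/h.
t = np.linspace(0.1, 30, 50)
print(cde_equilibrium(10.0, t, v=1.0, D=0.5))
```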
A Telemetry Browser Built with Java Components
NASA Astrophysics Data System (ADS)
Poupart, E.
In the context of CNES balloon scientific campaigns and the telemetry survey field, a generic telemetry processing product, called TelemetryBrowser in the following, was developed reusing COTS components, Java components for most of them. Connection between those components relies on a software architecture based on parameter producers and parameter consumers: the producers transmit parameter values to the consumers which have registered with them. All of those producers and consumers can be spread over the network thanks to Corba, and over every kind of workstation thanks to Java. This gives a very powerful means to adapt to constraints like network bandwidth or workstation processing power and memory. It is also very useful for displaying and correlating, at the same time, information coming from multiple and varied sources. An important point of this architecture is that the coupling between parameter producers and parameter consumers is reduced to the minimum and that transmission of information on the network is made asynchronously. So, if a parameter consumer goes down or runs slowly, there is no consequence for the other consumers, because producers do not wait for their consumers to finish their data processing before sending data to other consumers. Another interesting point is that parameter producers, also called TelemetryServers in the following, are generated nearly automatically starting from a telemetry description using the Flavor component. Keywords: Java components, Corba, distributed application, OpenORB, software reuse, COTS, Internet, Flavor. [Flavor (Formal Language for Audio-Visual Object Representation) is an object-oriented media representation language being developed at Columbia University. It is designed as an extension of Java and C++ and simplifies the development of applications that involve a significant media processing component (encoding, decoding, editing, manipulation, etc.) by providing bitstream representation semantics (flavor.sourceforge.net). OpenORB provides a Java implementation of the OMG Corba 2.4.2 specification (openorb.sourceforge.net).]
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
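A minimal sketch of one ES-MDA iteration, as described by Emerick and Reynolds (2012), is given below on a linear toy problem. The ensemble size, inflation schedule, and forward model here are illustrative and differ from the study's setup (50 LHS prior samples, convergence in two iterations, MIKE SHE forward runs).

```python
import numpy as np

def esmda_update(M, d_obs, C_e, alpha, forward, rng):
    """One ES-MDA iteration. M is (n_param, n_ens), C_e the observation-error
    covariance, alpha the noise-inflation coefficient for this iteration."""
    n_ens = M.shape[1]
    D = np.column_stack([forward(M[:, j]) for j in range(n_ens)])
    # Perturb observations with inflated noise, one realization per member.
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_e, n_ens).T
    Dobs = d_obs[:, None] + E
    Mm, Dm = M - M.mean(1, keepdims=True), D - D.mean(1, keepdims=True)
    C_md = Mm @ Dm.T / (n_ens - 1)          # cross-covariance params/data
    C_dd = Dm @ Dm.T / (n_ens - 1)          # data covariance
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)
    return M + K @ (Dobs - D)

# Toy example: recover two parameters of a linear "model".
rng = np.random.default_rng(4)
G = rng.standard_normal((6, 2))
theta_true = np.array([1.0, -2.0])
C_e = 0.01 * np.eye(6)
d_obs = G @ theta_true + rng.multivariate_normal(np.zeros(6), C_e)
M = rng.standard_normal((2, 50))            # 50 prior samples
for alpha in [4.0, 4.0, 4.0, 4.0]:          # sum of 1/alpha must equal 1
    M = esmda_update(M, d_obs, C_e, alpha, lambda m: G @ m, rng)
print("posterior mean:", M.mean(axis=1))
```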
Wang, Zhiyue J; Seo, Youngseob; Babcock, Evelyn; Huang, Hao; Bluml, Stefan; Wisnowski, Jessica; Holshouser, Barbara; Panigrahy, Ashok; Shaw, Dennis W W; Altman, Nolan; McColl, Roderick W; Rollins, Nancy K
2016-05-08
The purpose of this study was to explore the feasibility of assessing the quality of diffusion tensor imaging (DTI) from multiple sites and vendors using the American College of Radiology (ACR) phantom. Participating sites (Siemens (n = 2), GE (n = 2), and Philips (n = 4)) reached consensus on parameters for DTI and used the widely available ACR phantom. Tensor data were processed at one site. B0 and eddy current distortions were assessed using grid line displacement on phantom Slice 5; signal-to-noise ratio (SNR) was measured at the center and periphery of the b = 0 image; fractional anisotropy (FA) and mean diffusivity (MD) were assessed using phantom Slice 7. Variations of acquisition parameters and deviations from specified sequence parameters were recorded. Nonlinear grid line distortion was higher with linear shimming and could be corrected using second-order shimming. Following image registration, eddy current distortion was consistently smaller than the acquisition voxel size. SNR was consistently higher in the image periphery than center by a factor of 1.3-2.0. ROI-based FA ranged from 0.007 to 0.024. ROI-based MD ranged from 1.90 × 10⁻³ to 2.33 × 10⁻³ mm²/s (median = 2.04 × 10⁻³ mm²/s). Two sites had image void artifacts. The ACR phantom can be used to compare key quality measures of diffusion images acquired from multiple vendors at multiple sites.
Electron Impact Multiple Ionization Cross Sections for Solar Physics
NASA Astrophysics Data System (ADS)
Hahn, M.; Savin, D. W.; Mueller, A.
2017-12-01
We have compiled a set of electron-impact multiple ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas. For example, it can be important if there is a rapid change in the electron temperature, as in solar flares or in nanoflare coronal heating. EIMI is also likely to be significant when the electron energy distribution is non-thermal, such as if the electrons follow a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae to the available experimental EIMI cross section data. Based on this comparison, we have interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for thousands of EIMI cross sections. We also highlight some outstanding issues that remain to be resolved.
Kasbawati; Gunawan, Agus Yodi; Sidarto, Kuntjoro Adjie
2017-07-01
An unstructured model for the growth of yeast cells on glucose due to growth inhibition by substrate, products, and cell density is discussed. The proposed model describes the dynamical behavior of a fermentation system that shows multiple steady states for a certain regime of operating parameters such as inlet glucose and dilution rate. Two types of steady-state solutions are found, namely washout and non-washout solutions. Furthermore, varying the two parameters numerically reveals three regimes for the non-washout solution: a unique locally stable non-washout solution, a unique locally stable non-washout solution towards which other nearby solutions exhibit damped oscillations, and multiple non-washout solutions where one is locally stable while the other is unstable. An optimal inlet glucose concentration is also found which produces the highest cell and ethanol concentrations. Copyright © 2017 Elsevier Inc. All rights reserved.
Range data description based on multiple characteristics
NASA Technical Reports Server (NTRS)
Al-Hujazi, Ezzet; Sood, Arun
1988-01-01
An algorithm for describing range images based on mean curvature (H) and Gaussian curvature (K) is presented. Range images are unique in that they directly approximate the physical surfaces of a real-world 3-D scene. The curvature parameters are derived from the fundamental theorems of differential geometry and provide view-invariant pixel labels that can be used to characterize the scene. The signs of H and K can be used to classify each pixel into one of eight possible surface types. Due to the sensitivity of these parameters to noise, the resulting HK-sign map does not directly identify surfaces in the range images and must be further processed. A region-growing algorithm based on modeling the scene points with a Markov Random Field (MRF) of variable neighborhood size and edge models is suggested. This approach allows the integration of information from multiple characteristics in an efficient way. The performance of the proposed algorithm on a number of synthetic and real range images is discussed.
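The HK-sign classification mentioned above is conventionally tabulated as eight fundamental surface types from the signs of H and K (in the style of Besl and Jain). A small sketch, with the sign convention assumed here (H < 0, K > 0 for a peak); note the combination H = 0, K > 0 cannot occur since K ≤ H²:

```python
def surface_type(H, K, eps=1e-6):
    """Classify a pixel into one of the eight fundamental surface types
    from the signs of mean (H) and Gaussian (K) curvature."""
    h = 0 if abs(H) < eps else (1 if H > 0 else -1)
    k = 0 if abs(K) < eps else (1 if K > 0 else -1)
    table = {
        (-1,  1): "peak",    (-1, 0): "ridge",  (-1, -1): "saddle ridge",
        ( 0,  1): "(none)",  ( 0, 0): "flat",   ( 0, -1): "minimal surface",
        ( 1,  1): "pit",     ( 1, 0): "valley", ( 1, -1): "saddle valley",
    }
    return table[(h, k)]

print(surface_type(-0.2, 0.05))   # -> "peak"
```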
Design and optimization of an energy degrader with a multi-wedge scheme based on Geant4
NASA Astrophysics Data System (ADS)
Liang, Zhikai; Liu, Kaifeng; Qin, Bin; Chen, Wei; Liu, Xu; Li, Dong; Xiong, Yongqian
2018-05-01
A proton therapy facility based on an isochronous superconducting cyclotron is under construction in Huazhong University of Science and Technology (HUST). To meet the clinical requirements, an energy degrader is essential in the beamline to modulate the fixed beam energy extracted from the cyclotron. Because of the multiple Coulomb scattering in the degrader, the beam emittance and the energy spread will be considerably increased during the energy degradation process. Therefore, a set of collimators is designed to restrict the increase in beam emittance after the energy degradation. The energy spread will be reduced in the following beam line which is not discussed in this paper. In this paper, the design considerations of an energy degrader and collimators are introduced, and the properties of the degrader material, degrader structure and the initial beam parameters are discussed using the Geant4 Monte-Carlo toolkit, with the main purpose of improving the overall performance of the degrader by multiple parameter optimization.
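For a first-order feel of the multiple Coulomb scattering that drives the emittance growth, the Highland/PDG formula estimates the RMS scattering angle from the material thickness in radiation lengths. This closed form is a rough check on, not a substitute for, the Geant4 simulations; the example material thickness and beam energy are illustrative.

```python
import numpy as np

def highland_theta0(p_MeV, beta, x_over_X0, z=1.0):
    """Highland/PDG estimate of the RMS multiple-Coulomb-scattering angle (rad):
    theta0 = (13.6 MeV / (beta*c*p)) * z * sqrt(x/X0) * [1 + 0.038*ln(x/X0)]."""
    return (13.6 / (beta * p_MeV)) * z * np.sqrt(x_over_X0) \
        * (1.0 + 0.038 * np.log(x_over_X0))

# A 200 MeV kinetic-energy proton (p ~ 644 MeV/c, beta ~ 0.566) passing
# through a degrader slab of ~0.2 radiation lengths.
print(highland_theta0(p_MeV=644.0, beta=0.566, x_over_X0=0.2))
```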
NASA Astrophysics Data System (ADS)
Gabrielse, C.; Angelopoulos, V.; Artemyev, A.; Runov, A.; Harris, C.
2016-12-01
We study energetic electron injections using an analytical model that self-consistently describes electric and magnetic field perturbations of transient, localized dipolarizing flux bundles (DFBs). Previous studies using THEMIS, Van Allen Probes, and the Magnetospheric Multiscale Mission have shown that injections can occur on short (minutes) or long (10s of minutes) timescales. These studies suggest that the short timescale injections correspond to a single DFB, whereas long timescale injections are likely caused by an aggregate of multiple DFBs, each incrementally heating the particle population. We therefore model the effects of multiple DFBs on the electron population using multi-spacecraft observations of the fields and particle fluxes to constrain the model parameters. The analytical model is the first of its kind to model multiple dipolarization fronts in order to better understand the transport and acceleration process throughout the plasma sheet. It can reproduce most injection signatures at multiple locations simultaneously, reaffirming earlier findings that multiple earthward-traveling DFBs can both transport and accelerate electrons to suprathermal energies, and can thus be considered the injections' primary driver.
A highly scalable information system as extendable framework solution for medical R&D projects.
Holzmüller-Laue, Silke; Göde, Bernd; Stoll, Regina; Thurow, Kerstin
2009-01-01
For research projects in preventive medicine, a flexible information management system is needed that offers free planning and documentation of project-specific examinations. The system should allow simple, preferably automated data acquisition from several distributed sources (e.g., mobile sensors, stationary diagnostic systems, questionnaires, manual inputs) as well as effective data management, data use and analysis. An information system fulfilling these requirements has been developed at the Center for Life Science Automation (celisca). This system combines data of multiple investigations and multiple devices and displays them on a single screen. The integration of mobile sensor systems for comfortable, location-independent capture of time-based physiological parameters and the possibility of observing these measurements directly in this system allow new scenarios. The web-based information system presented in this paper is configurable by user interfaces. It covers medical process descriptions, operative process data visualization, user-friendly process data processing, modern online interfaces (databases, web services, XML) as well as comfortable support of extended data analysis with third-party applications.
Low-cost and high-speed optical mark reader based on an intelligent line camera
NASA Astrophysics Data System (ADS)
Hussmann, Stephan; Chan, Leona; Fung, Celine; Albrecht, Martin
2003-08-01
Optical Mark Recognition (OMR) is thoroughly reliable and highly efficient provided that high standards are maintained at both the planning and implementation stages. It is necessary to ensure that OMR forms are designed with due attention to data integrity checks, that the best use is made of features built into the OMR reader, that data integrity is checked before the data is processed, and that data is validated before it is processed. This paper describes the design and implementation of an OMR prototype system for marking multiple-choice tests automatically. Parameter testing was carried out before the platform and the multiple-choice answer sheet were designed. Position recognition and position verification methods have been developed and implemented in an intelligent line scan camera. The position recognition process is implemented in a Field Programmable Gate Array (FPGA), whereas the verification process is implemented in a micro-controller. The verified results are then sent to the Graphical User Interface (GUI) for answer checking and statistical analysis. At the end of the paper the proposed OMR system is compared with commercially available systems on the market.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At the local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At the regional scale, joint inversion of gravity and magnetic data is applied for the estimation of the lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At the planetary scale, the Earth mantle temperature and element composition are inferred from seismic travel-time and geodetic data.
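In symbols, the two factorizations described above can be written as follows; the notation (three hierarchical layers, K independent surveys) is assumed for illustration rather than taken from the abstract.

```latex
% Posterior over model components m = (m_1, m_2, m_3) given K independent
% surveys d = (d^{(1)}, \dots, d^{(K)}):
\sigma(m \mid d) \;\propto\;
  \underbrace{\rho(m_1)\,\rho(m_2 \mid m_1)\,\rho(m_3 \mid m_2)}_{\text{hierarchical prior}}
  \;\prod_{k=1}^{K} L_k\!\bigl(d^{(k)} \mid m\bigr)
```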
NASA Astrophysics Data System (ADS)
Pitts, James Daniel
Rotary ultrasonic machining (RUM), a hybrid process combining ultrasonic machining and diamond grinding, was created to increase material removal rates for the fabrication of hard and brittle workpieces. The objective of this research was to experimentally derive empirical equations for the prediction of multiple machined surface roughness parameters for helically pocketed, rotary ultrasonic machined Zerodur glass-ceramic workpieces by means of a systematic statistical experimental approach. A Taguchi parametric screening design of experiments was employed to systematically determine the RUM process parameters with the largest effect on mean surface roughness. Next, empirically determined equations for the seven common surface quality metrics were developed via Box-Behnken surface response experimental trials. Validation trials were conducted, with predicted and experimental surface roughness values showing varying levels of agreement. The reductions in cutting force and tool wear associated with RUM, reported by previous researchers, were experimentally verified to also extend to helical pocketing of Zerodur glass-ceramic.
The Use of Video-Tacheometric Technology for Documenting and Analysing Geometric Features of Objects
NASA Astrophysics Data System (ADS)
Woźniak, Marek; Świerczyńska, Ewa; Jastrzębski, Sławomir
2015-12-01
This paper analyzes selected aspects of the use of video-tacheometric technology for inventorying and documenting geometric features of objects. Data was collected with the use of the video-tacheometer Topcon Image Station IS-3 and the professional camera Canon EOS 5D Mark II. During the field work and the processing of data the following experiments were performed: multiple determinations of the camera interior orientation parameters and distortion parameters of five lenses with different focal lengths, and reflectorless measurements of profiles for the elevation and inventory of the decorative surface wall of the Warsaw Ballet School building. During the research, the process of acquiring and integrating video-tacheometric data was analysed, as well as the process of combining the "point cloud" acquired by the video-tacheometer in the scanning process with independent photographs taken by a digital camera. On the basis of the tests performed, the utility of video-tacheometric technology for geodetic surveys of the geometric features of buildings has been established.
Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection
NASA Astrophysics Data System (ADS)
Brasche, L. J. H.; Lopez, R.; Eisenmann, D.
2006-03-01
Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multiple-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections involving the use of dry powder (form d) usually applied using either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness and recording of the UVA image, and in some cases formal probability of detection (POD) studies, were used to assess the developer application methods. Key conclusions and initial recommendations are provided.
Low Dose Radiation Cancer Risks: Epidemiological and Toxicological Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
David G. Hoel, PhD
2012-04-19
The basic purpose of this one-year research grant was to extend the two-stage clonal expansion model (TSCE) of carcinogenesis to exposures other than the usual single acute exposure. The two-stage clonal expansion model of carcinogenesis incorporates the biological process of carcinogenesis, which involves two mutations and the clonal proliferation of the intermediate cells, in a stochastic, mathematical way. The current TSCE model serves the general purpose of acute exposure models but requires numerical computation of both the survival and hazard functions. The primary objective of this research project was to develop the analytical expressions for the survival function and the hazard function of the occurrence of the first cancer cell for acute, continuous and multiple exposure cases within the framework of the piecewise-constant-parameter two-stage clonal expansion model of carcinogenesis. For acute exposure and multiple exposures in an acute series, either only the first mutation rate is allowed to vary with the dose, or all the parameters are dose dependent; for multiple continuous exposures, all the parameters are allowed to vary with the dose. With these analytical functions, it becomes easy to evaluate the risks of cancer, and one can deal with the various exposure patterns in cancer risk assessment. A second objective was to apply the TSCE model with varying continuous exposures from the cancer studies of inhaled plutonium in beagle dogs. Using step functions to estimate the retention functions of the pulmonary exposure of plutonium, the multiple-exposure versions of the TSCE model were to be used to estimate the beagle dog lung cancer risks. The mathematical equations of the multiple-exposure versions of the TSCE model were developed. A draft manuscript which is attached provides the results of this mathematical work. The application work using the beagle dog data from plutonium exposure has not been completed due to the fact that the research project did not continue beyond its first year.
Monterial, Mateusz; Marleau, Peter; Paff, Marc; ...
2017-01-20
Here, we present the results from the first measurements of the Time-Correlated Pulse-Height (TCPH) distributions from a 4.5 kg sphere of α-phase weapons-grade plutonium metal in five configurations: bare, reflected by 1.27 cm and 2.54 cm of tungsten, and 2.54 cm and 7.62 cm of polyethylene. A new method for characterizing source multiplication and shielding configuration is also demonstrated. The method relies on solving for the underlying fission chain timing distribution that drives the spreading of the measured TCPH distribution. We found that a gamma distribution fits the fission chain timing distribution well and that the fit parameters correlate with both multiplication (rate parameter) and shielding material types (shape parameter). The source-to-detector distance was another free parameter that we were able to optimize, and proved to be the most well constrained parameter. MCNPX-PoliMi simulations were used to complement the measurements and help illustrate trends in these parameters and their relation to multiplication and the amount and type of material coupled to the subcritical assembly.
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
NASA Astrophysics Data System (ADS)
Monterial, Mateusz; Marleau, Peter; Paff, Marc; Clarke, Shaun; Pozzi, Sara
2017-04-01
We present the results from the first measurements of the Time-Correlated Pulse-Height (TCPH) distributions from 4.5 kg sphere of α-phase weapons-grade plutonium metal in five configurations: bare, reflected by 1.27 cm and 2.54 cm of tungsten, and 2.54 cm and 7.62 cm of polyethylene. A new method for characterizing source multiplication and shielding configuration is also demonstrated. The method relies on solving for the underlying fission chain timing distribution that drives the spreading of the measured TCPH distribution. We found that a gamma distribution fits the fission chain timing distribution well and that the fit parameters correlate with both multiplication (rate parameter) and shielding material types (shape parameter). The source-to-detector distance was another free parameter that we were able to optimize, and proved to be the most well constrained parameter. MCNPX-PoliMi simulations were used to complement the measurements and help illustrate trends in these parameters and their relation to multiplication and the amount and type of material coupled to the subcritical assembly.
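As a small illustration of the characterization step, the sketch below fits a gamma distribution to synthetic chain-timing samples with scipy; the "true" parameters are invented, and in the actual method the timing distribution is first inferred from the measured TCPH distribution rather than sampled directly.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for inferred fission-chain timing samples (ns).
rng = np.random.default_rng(5)
chain_times = stats.gamma.rvs(a=2.5, scale=8.0, size=5000, random_state=rng)

# Fit shape and scale; the paper reports the rate (1/scale) tracking
# multiplication and the shape tracking the reflector material.
a_hat, loc, scale_hat = stats.gamma.fit(chain_times, floc=0.0)
print(f"shape = {a_hat:.2f}, rate = {1.0/scale_hat:.3f} 1/ns")
```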
Development of Parallel Architectures for Sensor Array Processing. Volume 1
1993-08-01
required for the DOA estimation [1-7]. The Multiple Signal Classification (MUSIC) [1] and the Estimation of Signal Parameters by Rotational... manifold and the estimated subspace. Although MUSIC is a high-resolution algorithm, it has several drawbacks, including the fact that complete knowledge of... thoroughly, the MUSIC algorithm was selected to develop special-purpose hardware for real-time computation. A summary of the MUSIC algorithm is as follows
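The record is truncated at this point. As a hedged sketch of what such a summary covers, the following implements the standard narrowband MUSIC pseudospectrum for a uniform linear array; the array geometry, SNR, and source angles are invented for the example.

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array (element spacing d
    in wavelengths), from the sample covariance R (n_elements x n_elements)."""
    n = R.shape[0]
    w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, : n - n_sources]             # noise subspace
    k = np.arange(n)
    out = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * k * np.sin(th))    # steering vector
        out.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(out)                   # peaks at the source DOAs

# Two sources at -10 and 25 degrees on an 8-element half-wavelength array.
rng = np.random.default_rng(6)
n, snaps = 8, 500
A = np.stack([np.exp(2j*np.pi*0.5*np.arange(n)*np.sin(np.deg2rad(t)))
              for t in (-10.0, 25.0)], axis=1)
S = rng.standard_normal((2, snaps)) + 1j*rng.standard_normal((2, snaps))
X = A @ S + 0.1*(rng.standard_normal((n, snaps)) + 1j*rng.standard_normal((n, snaps)))
R = X @ X.conj().T / snaps

grid = np.linspace(-90, 90, 361)
P = music_spectrum(R, 2, grid)
idx = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
idx = sorted(idx, key=lambda i: -P[i])[:2]
print("estimated DOAs (deg):", sorted(grid[i] for i in idx))
```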
Anarjan, Navideh; Jafarizadeh-Malmiri, Hoda; Nehdi, Imededdine Arbi; Sbihi, Hassen Mohamed; Al-Resayes, Saud Ibrahim; Tan, Chin Ping
2015-01-01
Nanodispersion systems allow incorporation of lipophilic bioactives, such as astaxanthin (a fat soluble carotenoid) into aqueous systems, which can improve their solubility, bioavailability, and stability, and widen their uses in water-based pharmaceutical and food products. In this study, response surface methodology was used to investigate the influences of homogenization time (0.5–20 minutes) and speed (1,000–9,000 rpm) in the formation of astaxanthin nanodispersions via the solvent-diffusion process. The product was characterized for particle size and astaxanthin concentration using laser diffraction particle size analysis and high performance liquid chromatography, respectively. Relatively high determination coefficients (ranging from 0.896 to 0.969) were obtained for all suggested polynomial regression models. The overall optimal homogenization conditions were determined by multiple response optimization analysis to be 6,000 rpm for 7 minutes. In vitro cellular uptake of astaxanthin from the suggested individual and multiple optimized astaxanthin nanodispersions was also evaluated. The cellular uptake of astaxanthin was found to be considerably increased (by more than five times) as it became incorporated into optimum nanodispersion systems. The lack of a significant difference between predicted and experimental values confirms the suitability of the regression equations connecting the response variables studied to the independent parameters. PMID:25709435
Jung, Hyung-Sup; Lee, Won-Jin; Zhang, Lei
2014-01-01
Precise along-track displacement measurements can be made with multiple-aperture interferometry (MAI). The empirical accuracies of the MAI measurements are about 6.3 and 3.57 cm for ERS and ALOS data, respectively. However, the estimated empirical accuracies cannot be generalized to any interferometric pair because they largely depend on the processing parameters and coherence of the SAR data used. A theoretical formula is given to calculate the expected MAI measurement accuracy according to the system and processing parameters and the interferometric coherence. In this paper, we have investigated the expected MAI measurement accuracy on the basis of the theoretical formula for the existing X-, C- and L-band satellite SAR systems. The similarity between the expected and empirical MAI measurement accuracies has been tested as well. Expected accuracies of about 2–3 cm and 3–4 cm (γ = 0.8) are calculated for the X- and L-band SAR systems, respectively. For the C-band systems, the expected accuracy of Radarsat-2 ultra-fine is about 3–4 cm and that of Sentinel-1 IW is about 27 cm (γ = 0.8). The results indicate that the expected MAI measurement accuracy of a given interferometric pair can be easily calculated by using the theoretical formula. PMID:25251408
Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-01-01
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters’ outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results. PMID:26694385
Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping
2015-12-14
A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.
NASA Astrophysics Data System (ADS)
Rafiq Abuturab, Muhammad
2018-01-01
A new asymmetric multiple-information cryptosystem based on a chaotic spiral phase mask (CSPM) and random spectrum decomposition is put forward. In the proposed system, each channel of a secret color image is first modulated with a CSPM and then gyrator transformed. The gyrator spectrum is randomly divided into two complex-valued masks. The same procedure is applied to multiple secret images to get their corresponding first and second complex-valued masks. Finally, the first and second masks of each channel are independently added to produce the first and second complex ciphertexts, respectively. The main feature of the proposed method is that the different secret images are encrypted by different CSPMs whose parameters serve as the sensitive decryption/private keys, completely unknown to unauthorized users. Consequently, the proposed system is resistant to potential attacks. Moreover, the CSPMs are easier to position in the decoding process owing to their own centering mark on the axial focal ring. The retrieved secret images are free from cross-talk noise effects. The decryption process can be implemented by optical experiment. Numerical simulation results demonstrate the viability and security of the proposed method.
Nguyen, Dinh Duc; Yoon, Yong Soo; Bui, Xuan Thanh; Kim, Sung Su; Chang, Soon Woong; Guo, Wenshan; Ngo, Huu Hao
2017-11-01
Performance of an electrocoagulation (EC) process in batch and continuous operating modes was thoroughly investigated and evaluated for enhancing wastewater phosphorus removal under various operating conditions, individually or combined with initial phosphorus concentration, wastewater conductivity, current density, and electrolysis times. The results revealed excellent phosphorus removal (72.7-100%) for both processes within 3-6 min of electrolysis, with relatively low energy requirements, i.e., less than 0.5 kWh/m³ of treated wastewater. However, the removal efficiency of phosphorus in the continuous EC operation mode was better than that in batch mode within the scope of the study. Additionally, the rate and efficiency of phosphorus removal strongly depended on operational parameters, including wastewater conductivity, initial phosphorus concentration, current density, and electrolysis time. Based on experimental data, statistical model verification of the response surface methodology (RSM) (multiple factor optimization) was also established to provide further insights and accurately describe the interactive relationship between the process variables, thus optimizing the EC process performance. The EC process using iron electrodes is promising for improving wastewater phosphorus removal efficiency, and RSM can be a sustainable tool for predicting the performance of the EC process and explaining the influence of the process variables.
Modulation and synchronization technique for MF-TDMA system
NASA Technical Reports Server (NTRS)
Faris, Faris; Inukai, Thomas; Sayegh, Soheil
1994-01-01
This report addresses modulation and synchronization techniques for a multi-frequency time division multiple access (MF-TDMA) system with onboard baseband processing. The types of synchronization techniques analyzed are asynchronous (conventional) TDMA, preambleless asynchronous TDMA, bit synchronous timing with a preamble, and preambleless bit synchronous timing. Among these alternatives, preambleless bit synchronous timing simplifies onboard multicarrier demultiplexer/demodulator designs (about 2:1 reduction in mass and power), requires smaller onboard buffers (10:1 to approximately 3:1 reduction in size), and provides better frame efficiency as well as lower onboard processing delay. Analysis and computer simulation illustrate that this technique can support a bit rate of up to 10 Mbit/s (or higher) with proper selection of design parameters. High bit rate transmission may require Doppler compensation and multiple phase error measurements. The recommended modulation technique for bit synchronous timing is coherent QPSK with differential encoding for the uplink and coherent QPSK for the downlink.
Characterization and production of multifunctional cationic peptides derived from rice proteins.
Taniguchi, Masayuki; Ochiai, Akihito
2017-04-01
Food proteins have been identified as a source of bioactive peptides. These peptides are inactive within the sequence of the parent protein and must be released during gastrointestinal digestion, fermentation, or food processing. Among bioactive peptides, multifunctional cationic peptides are more useful than peptides with a single specific activity for promoting health and/or treating disease. We have identified and characterized cationic peptides from rice enzymes and proteins that possess multiple functions, including antimicrobial, endotoxin-neutralizing, arginine gingipain-inhibitory, and/or angiogenic activities. In particular, we have elucidated the contribution of the cationic amino acids (arginine and lysine) in the peptides to their bioactivities. Further, we have discussed the critical parameters, particularly proteinase preparations and fractionation or purification, in the enzymatic hydrolysis process for producing bioactive peptides from food proteins. Using an ampholyte-free isoelectric focusing (autofocusing) technique as a tool for fractionation, we successfully prepared fractions containing cationic peptides with multiple functions.
Mixed-order phase transition of the contact process near multiple junctions.
Juhász, Róbert; Iglói, Ferenc
2017-02-01
We have studied the phase transition of the contact process near a multiple junction of M semi-infinite chains by Monte Carlo simulations. As opposed to the continuous transitions of the translationally invariant (M=2) and semi-infinite (M=1) system, the local order parameter is found to be discontinuous for M>2. Furthermore, the temporal correlation length diverges algebraically as the critical point is approached, but with different exponents on the two sides of the transition. In the active phase, the estimate is compatible with the bulk value, while in the inactive phase it exceeds the bulk value and increases with M. The unusual local critical behavior is explained by a scaling theory with an irrelevant variable, which becomes dangerous in the inactive phase. Quenched spatial disorder is found to make the transition continuous in agreement with earlier renormalization group results.
Optimal growth trajectories with finite carrying capacity.
Caravelli, F; Sindoni, L; Caccioli, F; Ududec, C
2016-08-01
We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these findings to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of the carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
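For the pure geometric-Brownian-motion case mentioned above (no carrying capacity), the Kelly result can be checked numerically: investing a fraction f gives long-run growth rate g(f) = f·mu - (f·sigma)²/2, maximized at f* = mu/sigma². A sketch with illustrative parameters:

```python
import numpy as np

mu, sigma, dt, steps = 0.05, 0.30, 1e-3, 2_000_000
f_star = mu / sigma**2                     # Kelly fraction
rng = np.random.default_rng(7)

def growth_rate(f):
    # Simulate log-wealth increments under constant fractional investment f.
    dW = rng.standard_normal(steps) * np.sqrt(dt)
    dlogw = (f*mu - 0.5*(f*sigma)**2) * dt + f*sigma*dW
    return dlogw.sum() / (steps * dt)

for f in (0.25, f_star, 1.0):
    print(f"f = {f:.3f}: simulated g = {growth_rate(f):+.4f}, "
          f"analytic g = {f*mu - 0.5*(f*sigma)**2:+.4f}")
```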
Investigation on Multiple-Pulse Propulsion Performance for a Parabolic Nozzle with Inlet Slit
NASA Astrophysics Data System (ADS)
Wen, Ming; Hong, Yanji; Song, Junling
2011-11-01
The multiple-pulse impulse coupling coefficient Cm is lower than the single-pulse one with the same laser parameters. The usual explanation is that air recovery in the nozzle does not complete in time. Three kinds of parabolic nozzles are employed in the experiments and simulations to improve air recovery, with inlet slits of 1 mm and 2 mm width in their side walls. Thrust curves and the evolution of the flow field are presented to study the slit effects on Cm at a 20 Hz pulse frequency. The results show that an inlet slit can accelerate the air-breathing process in the nozzle, so that Cm for each pulse exhibits little variation; a lower Cm is obtained with the larger slit due to the increased energy loss; and the flat-roofed nozzle achieves a higher Cm than the others.
Methods for consistent forewarning of critical events across multiple data channels
Hively, Lee M.
2006-11-21
This invention teaches further method improvements to forewarn of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves conversion of time-serial data into equiprobable symbols. A second improvement is a method to maximize the channel-consistent total-true rate of forewarning from a plurality of data channels over multiple data sets from the same patient or process. This total-true rate requires resolution of the forewarning indications into true positives, true negatives, false positives and false negatives. A third improvement is the use of various objective functions, as derived from the phase-space dissimilarity measures, to give the best forewarning indication. A fourth improvement uses various search strategies over the phase-space analysis parameters to maximize said objective functions. A fifth improvement shows the usefulness of the method for various biomedical and machine applications.
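The first improvement, conversion of time-serial data into equiprobable symbols, amounts to quantile binning. A minimal sketch (the symbol count and test data are illustrative):

```python
import numpy as np

def equiprobable_symbols(x, n_symbols):
    """Convert a time series to integer symbols 0..n_symbols-1 with
    (near-)equal occupancy by cutting at empirical quantiles."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.searchsorted(edges, x)

x = np.random.default_rng(8).standard_normal(10_000)
s = equiprobable_symbols(x, 5)
print(np.bincount(s) / len(s))   # each bin holds ~20% of the samples
```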
Optimal growth trajectories with finite carrying capacity
NASA Astrophysics Data System (ADS)
Caravelli, F.; Sindoni, L.; Caccioli, F.; Ududec, C.
2016-08-01
We consider the problem of finding optimal strategies that maximize the average growth rate of multiplicative stochastic processes. For a geometric Brownian motion, the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics. We generalize these finding to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance. We formulate the problem in terms of a stochastic process with multiplicative noise and a nonlinear drift term that is determined by the specific functional form of carrying capacity. We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases we compute the optimal trajectories of the control parameter. We further test the validity of our analytical results using numerical simulations.
The Overgrid Interface for Computational Simulations on Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; Kwak, Dochan (Technical Monitor)
2002-01-01
Computational simulations using overset grids typically involve multiple steps and a variety of software modules. A graphical interface called OVERGRID has been specially designed for such purposes. Data required and created by the different steps include geometry, grids, domain connectivity information and flow solver input parameters. The interface provides a unified environment for the visualization, processing, generation and diagnosis of such data. General modules are available for the manipulation of structured grids and unstructured surface triangulations. Modules more specific to the overset approach include surface curve generators, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, Cartesian box grid generators, and domain connectivity pre-processing tools. An interface provides automatic selection and viewing of flow solver boundary conditions, and various other flow solver inputs. For problems involving multiple components in relative motion, a module is available to build the component/grid relationships and to prescribe and animate the dynamics of the different components.
A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps
NASA Astrophysics Data System (ADS)
Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.
2017-06-01
The chaotic maps possess high parameter sensitivity, random-like behavior and one-way computations, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hash functions. The message is divided into four parts, and each part is processed by a different 1D chaotic map unit yielding an intermediate hash code. The four codes are concatenated into two blocks, then each block is processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple, efficient and holds comparable capabilities when compared with some recent chaos-based hash algorithms.
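A toy analogue of the four-branch construction is sketched below. The map choices, key values, and 32-bit digest are invented for illustration, and a second logistic pass substitutes for the paper's 2D chaotic map units; it is meant only to show the split-process-combine structure, not the actual scheme.

```python
def logistic_iterate(x, key, n=64):
    # 1-D logistic map x <- r*x*(1-x); r near 4 keeps the chaotic regime.
    r = 3.99 + 0.0099 * key            # key perturbs the control parameter
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def chaotic_hash(message: bytes, keys=(0.1, 0.3, 0.5, 0.7), bits=32):
    """Toy four-branch chaotic hash: split the message into four parts,
    drive one chaotic unit per part, then combine pairwise."""
    parts = [message[i::4] for i in range(4)]        # four interleaved parts
    codes = []
    for part, key in zip(parts, keys):
        x = 0.23456                                   # fixed initial state
        for byte in part:                             # absorb one byte per step
            x = logistic_iterate((x + byte / 256.0) % 1.0 or 0.1, key, n=8)
        codes.append(x)
    # Combine the four intermediate codes into two blocks, then one digest.
    h1 = logistic_iterate((codes[0] + codes[1]) % 1.0 or 0.1, keys[0])
    h2 = logistic_iterate((codes[2] + codes[3]) % 1.0 or 0.1, keys[1])
    digest = int(((h1 + h2) % 1.0) * 2**bits)
    return f"{digest:0{bits // 4}x}"

print(chaotic_hash(b"hello world"))
print(chaotic_hash(b"hello worle"))   # small change -> very different hash
```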
NASA Astrophysics Data System (ADS)
Hohimer, Cameron J.; Petrossian, Gayaneh; Ameli, Amir; Mo, Changki; Pötschke, Petra
2018-03-01
Additive manufacturing (AM) is an emerging field experiencing rapid growth. This paper presents a feasibility study of using fused-deposition modeling (FDM) techniques with smart materials to fabricate objects with sensing and actuating capabilities. The fabrication of objects with sensing typically requires the integration and assembly of multiple components. Incorporating sensing elements into a single FDM process has the potential to significantly simplify manufacturing. The integration of multiple materials, especially smart materials and those with multi-functional properties, into the FDM process is challenging and still requires further development. Previous work by the authors demonstrated good printability of thermoplastic polyurethane/multiwall carbon nanotube (TPU/MWCNT) composites while maintaining conductivity and piezoresistive response. This research explores the effects of layer height, nozzle temperature, and bed temperature on the electrical conductivity and piezoresistive response of printed TPU/MWCNT nanocomposites. An impedance analyzer was used to determine the conductivity of samples printed under different conditions over the range 5 Hz to 13 MHz. The samples were then tested under compression loads to measure the piezoresistive response. Results show that the conductivity and piezoresistive response are only slightly affected by the print parameters and can be considered largely independent of the print conditions within the examined ranges. This behavior simplifies the printing process design for complex TPU/MWCNT structures. This work demonstrates the possibility of manufacturing embedded and multidirectional flexible strain sensors using an inexpensive and versatile method, with potential applications in soft robotics, flexible electronics, and health monitoring.
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbation in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters allowed to perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbation. The last method manages uncertainty by restricting the perturbation of parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other nonlinear. The methodology is compared with a prior method using multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper yields better performance.
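For the second and third approaches, the central computational step is a regularized least-squares solve. The sketch below shows only the Tikhonov-style normal equations that the abstract alludes to; the exact worst-case robust least-squares solution involves an additional scalar (secular) equation, and the matrix sizes and regularization weights here are illustrative.

```python
import numpy as np

def regularized_least_squares(A, b, rho):
    """Solve min_x ||A x - b||^2 + rho * ||x||^2, the Tikhonov-style
    problem that bounds sensitivity to perturbations in A or b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.1 * rng.normal(size=50)
for rho in (0.0, 1.0, 10.0):   # larger rho trades fit for robustness
    print(rho, regularized_least_squares(A, b, rho))
```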
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated to P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint and (densely) defined on the space of functions over the (lower) half-space that are square-integrable with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2016-12-01
There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multi-objective water prediction systems. Distributed raster-based models suffer from an over-representation of topography, leading to wasted computational effort that increases uncertainty through greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolution where spatial variability is low and fine resolution where required. Model uncertainty is reduced by decreasing the number of computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open source libraries and high-performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements while preserving the spatial heterogeneity of predictions.
Simms, Laura E.; Engebretson, Mark J.; Pilipenko, Viacheslav; ...
2016-04-07
The daily maximum relativistic electron flux at geostationary orbit can be predicted well with a set of daily averaged predictor variables including previous day's flux, seed electron flux, solar wind velocity and number density, AE index, IMF Bz, Dst, and ULF and VLF wave power. As predictor variables are intercorrelated, we used multiple regression analyses to determine which are the most predictive of flux when other variables are controlled. Empirical models produced from regressions of flux on measured predictors from 1 day previous were reasonably effective at predicting novel observations. Adding previous flux to the parameter set improves the prediction of the peak of the increases but delays its anticipation of an event. Previous day's solar wind number density and velocity, AE index, and ULF wave activity are the most significant explanatory variables; however, the AE index, measuring substorm processes, shows a negative correlation with flux when other parameters are controlled. This may be due to the triggering of electromagnetic ion cyclotron waves by substorms that cause electron precipitation. VLF waves show lower, but significant, influence. The combined effect of ULF and VLF waves shows a synergistic interaction, where each increases the influence of the other on flux enhancement. Correlations between observations and predictions for this 1 day lag model ranged from 0.71 to 0.89 (average: 0.78). Furthermore, a path analysis of correlations between predictors suggests that solar wind and IMF parameters affect flux through intermediate processes such as ring current (Dst), AE, and wave activity.
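The modeling step described here is an ordinary multiple regression of day-t flux on day-(t-1) predictors. A minimal sketch on synthetic stand-in data follows; the predictor set and coefficients are invented, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500
# Hypothetical daily-averaged predictors (synthetic stand-ins for,
# e.g., solar wind velocity, number density, AE index, ULF power)
X = rng.normal(size=(n, 4))
flux = 0.8 * X[:, 0] - 0.3 * X[:, 2] + 0.4 * X[:, 3] + 0.2 * rng.normal(size=n)

# Regress day-t flux on day-(t-1) predictors plus the previous day's flux
X_lag = np.column_stack([X[:-1], flux[:-1]])
y = flux[1:]
model = LinearRegression().fit(X_lag, y)
print("coefficients:", model.coef_)
print("R^2:", model.score(X_lag, y))
```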
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and use a decomposition of Fisher's information matrix to partition the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single-parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
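A sketch of the decomposition as described: with independent studies the total Fisher information is F = sum_i F_i and the variance matrix is V = F^-1, so study i's percentage weight toward parameter j can be taken as the j-th diagonal entry of V @ F_i @ V relative to the j-th diagonal of V. The example information matrices below are hypothetical.

```python
import numpy as np

def percentage_weights(F_list):
    """Percentage study weights from a decomposition of the total Fisher
    information F = sum_i F_i: study i's share of Var(theta_j) is the
    j-th diagonal of V @ F_i @ V relative to V_jj, with V = inv(F)."""
    F = sum(F_list)
    V = np.linalg.inv(F)
    contrib = np.array([np.diag(V @ Fi @ V) for Fi in F_list])
    return 100.0 * contrib / np.diag(V)

# Three hypothetical studies, two parameters (e.g., effect + interaction)
F1 = np.array([[8.0, 1.0], [1.0, 2.0]])
F2 = np.array([[4.0, 0.5], [0.5, 6.0]])
F3 = np.array([[2.0, 0.2], [0.2, 1.0]])
w = percentage_weights([F1, F2, F3])
print(w)
print(w.sum(axis=0))   # columns sum to 100% per parameter
```

Because the study contributions V @ F_i @ V sum to V, the weights for each parameter automatically sum to 100%.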
Particle size analysis of some water/oil/water multiple emulsions.
Ursica, L; Tita, D; Palici, I; Tita, B; Vlaia, V
2005-04-29
Particle size analysis gives useful information about the structure and stability of multiple emulsions, which are important characteristics of these systems. It also enables observation of the growth of particles dispersed in multiple emulsions and, accordingly, the evolution of their size over time. The size of multiple particles in seven water/oil/water (W/O/W) emulsions was determined by measuring the particle sizes observed during microscopic examination. To describe the particle size distribution, the values of two defining parameters were calculated: the arithmetic mean diameter and the median diameter. The results of the particle size analysis of the seven W/O/W multiple emulsions studied are presented as histograms of the distribution density immediately, 1 month, and 3 months after the preparation of each emulsion, as well as through the mean and median particle diameters. The comparative study of the distribution histograms and of the mean and median diameters of the W/O/W multiple particles indicates that the prepared emulsions are fine and very fine dispersions, stable, with the aforementioned diameters growing over the course of the study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radhakrishnan, Balasubramaniam; Fattebert, Jean-Luc; Gorti, Sarma B.
Additive Manufacturing (AM) refers to a process by which digital three-dimensional (3-D) design data is converted to build up a component by depositing material layer by layer. United Technologies Corporation (UTC) is currently involved in fabrication and certification of several AM aerospace structural components made from aerospace materials. This is accomplished by using optimized process parameters determined through numerous design-of-experiments (DOE)-based studies. Certification of these components is broadly recognized as a significant challenge, with long lead times, very expensive new product development cycles and very high energy consumption. Because of these challenges, United Technologies Research Center (UTRC), together with UTC business units, has been developing and validating an advanced physics-based process model. The specific goal is to develop a physics-based framework of an AM process and reliably predict fatigue properties of built-up structures based on detailed solidification microstructures. Microstructures are predicted using process control parameters including energy source power, scan velocity, deposition pattern, and powder properties. The multi-scale multi-physics model requires solution and coupling of the governing physics to allow prediction of the thermal field and enable solution at the microstructural scale. The state-of-the-art approach to these problems requires a huge computational framework, and this kind of resource is only available within academia and national laboratories. The project utilized the parallel phase-field codes at Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL), along with the high-performance computing (HPC) capabilities existing at the two labs, to demonstrate the simulation of multiple dendrite growth in three dimensions (3-D). The LLNL code AMPE was used to implement the UTRC phase-field model previously developed for a model binary alloy, and the simulation results were compared against the UTRC simulation results, followed by extension of the UTRC model to simulate multiple dendrite growth in 3-D. The ORNL MEUMAPPS code was used to simulate dendritic growth in a model ternary alloy with the same equilibrium solidification range as the Ni-base alloy 718 using realistic model parameters, including thermodynamic integration with a Calphad-based model for the ternary alloy. Implementation of the UTRC model in AMPE met with several numerical and parametric issues that were resolved, and good agreement between the simulation results obtained by the two codes was demonstrated for two-dimensional (2-D) dendrites. 3-D dendrite growth was then demonstrated with the AMPE code using nondimensional parameters obtained in 2-D simulations. Multiple dendrite growth in 2-D and 3-D was demonstrated using ORNL's MEUMAPPS code with simple thermal boundary conditions. MEUMAPPS was then modified to incorporate the complex, time-dependent thermal boundary conditions obtained by UTRC's thermal modeling of single-track AM experiments to drive the phase-field simulations. The results were in good agreement with UTRC's experimental measurements.
Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
The real-time interactions among the nodes of a wireless sensor network (WSN) that cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish the stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled by the MVPPs using a set of concatenated design parameters and associated resource levels. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to the determination of closed-form solutions, versus explicit iterative solutions, for ad hoc WSN controls are examined.
NASA Technical Reports Server (NTRS)
Poosti, Sassaneh; Akopyan, Sirvard; Sakurai, Regina; Yun, Hyejung; Saha, Pranjit; Strickland, Irina; Croft, Kevin; Smith, Weldon; Hoffman, Rodney; Koffend, John;
2006-01-01
TES Level 2 Subsystem is a set of computer programs that performs functions complementary to those of the program summarized in the immediately preceding article. TES Level-2 data pertain to retrieved species (or temperature) profiles, and errors thereof. Geolocation, quality, and other data (e.g., surface characteristics for nadir observations) are also included. The subsystem processes gridded meteorological information and extracts parameters that can be interpolated to the appropriate latitude, longitude, and pressure level based on the date and time. Radiances are simulated using the aforementioned meteorological information for initial guesses, and spectroscopic-parameter tables are generated. At each step of the retrieval, a nonlinear-least-squares-solving routine is run over multiple iterations, retrieving a subset of atmospheric constituents, and error analysis is performed. Scientific TES Level-2 data products are written in a format known as Hierarchical Data Format Earth Observing System 5 (HDF-EOS 5) for public distribution.
Magnesium degradation as determined by artificial neural networks.
Willumeit, Regine; Feyerabend, Frank; Huber, Norbert
2013-11-01
Magnesium degradation under physiological conditions is a highly complex process in which temperature, the use of cell culture growth medium and the presence of CO2, O2 and proteins can influence the corrosion rate and the composition of the resulting corrosion layer. Due to the complexity of this process it is almost impossible to predict which parameters are most important and whether some parameters have a synergistic effect on the corrosion rate. Artificial neural networks are a mathematical tool that can be used to approximate and analyse nonlinear problems with multiple inputs. In this work we present the first analysis of such corrosion data using this method, which reveals that CO2 and the composition of the buffer system play a crucial role in the corrosion of magnesium, whereas O2, proteins and temperature play a less prominent role. Copyright © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
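An artificial neural network for this kind of multi-input, nonlinear regression can be set up in a few lines. The sketch below trains a small multilayer perceptron on synthetic data whose structure loosely mirrors the reported finding (CO2 and buffer dominating); the feature set, network size, and coefficients are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 200
# Hypothetical inputs: temperature, CO2, O2, protein content, buffer
X = rng.uniform(size=(n, 5))
# Synthetic corrosion rate dominated by CO2 and buffer, echoing the paper
y = 2.0 * X[:, 1] + 1.5 * X[:, 4] + 0.2 * X[:, 0] + 0.1 * rng.normal(size=n)

Xs = StandardScaler().fit_transform(X)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(Xs, y)
print("training R^2:", net.score(Xs, y))
```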
Emotion, Cognition, and Mental State Representation in Amygdala and Prefrontal Cortex
Salzman, C. Daniel; Fusi, Stefano
2011-01-01
Neuroscientists have often described cognition and emotion as separable processes implemented by different regions of the brain, such as the amygdala for emotion and the prefrontal cortex for cognition. In this framework, functional interactions between the amygdala and prefrontal cortex mediate emotional influences on cognitive processes such as decision-making, as well as the cognitive regulation of emotion. However, neurons in these structures often have entangled representations, whereby single neurons encode multiple cognitive and emotional variables. Here we review studies using anatomical, lesion, and neurophysiological approaches to investigate the representation and utilization of cognitive and emotional parameters. We propose that these mental state parameters are inextricably linked and represented in dynamic neural networks composed of interconnected prefrontal and limbic brain structures. Future theoretical and experimental work is required to understand how these mental state representations form and how shifts between mental states occur, a critical feature of adaptive cognitive and emotional behavior. PMID:20331363
NASA Astrophysics Data System (ADS)
Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele
2017-10-01
In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
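For orientation, the sketch below estimates time-domain GC at the native scale (scale 1) on a simulated bivariate AR process, as the log-ratio of restricted to full residual variances. The paper's actual contribution, exact GC at coarser time scales via a state space representation of the filtered-and-downsampled ARMA process, is not reproduced here; the AR coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):                       # y drives x with lag 1
    y[t] = 0.5 * y[t-1] + rng.normal()
    x[t] = 0.6 * x[t-1] + 0.4 * y[t-1] + rng.normal()

def ar_resid_var(target, regressors):
    """Residual variance of an order-1 least-squares autoregression."""
    Z = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return np.var(target - Z @ beta)

# GC(y -> x) = ln( restricted residual variance / full residual variance )
full = ar_resid_var(x[1:], [x[:-1], y[:-1]])
restricted = ar_resid_var(x[1:], [x[:-1]])
print("GC y->x:", np.log(restricted / full))    # clearly positive
print("GC x->y:", np.log(
    ar_resid_var(y[1:], [y[:-1]]) / ar_resid_var(y[1:], [y[:-1], x[:-1]])))
```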
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Sin, Gürkan; Gernaey, Krist V
2017-03-01
A mechanistic model-based soft sensor is developed and validated for 550 L filamentous fungus fermentations operated at Novozymes A/S. The soft sensor comprises a parameter estimation block based on a stoichiometric balance, coupled to a dynamic process model. The on-line parameter estimation block models the changing rates of formation of product, biomass, and water, and the rate of consumption of feed, using standard, available on-line measurements. This parameter estimation block is coupled to a mechanistic process model, which solves the current states of biomass, product, substrate, dissolved oxygen and mass, as well as other process parameters including kLa, viscosity and the partial pressure of CO2. State estimation at this scale requires a robust mass model including evaporation, a factor not often considered at smaller scales of operation. The model is developed using a historical data set of 11 batches from the fermentation pilot plant (550 L) at Novozymes A/S. The model is then implemented on-line in 550 L fermentation processes operated at Novozymes A/S in order to validate the state estimator on 14 new batches utilizing a new strain. The product concentration in the validation batches was predicted with an average root mean sum of squared error (RMSSE) of 16.6%. In addition, calculation of the Janus coefficient for the validation batches shows a suitably calibrated model. The robustness of the model prediction is assessed with respect to the accuracy of the input data, and parameter estimation uncertainty is also quantified. The application of this on-line state estimator allows for on-line monitoring of pilot scale batches, including real-time estimates of multiple parameters that cannot be monitored on-line directly. With a soft sensor successfully applied at this scale, process monitoring is improved, and further possibilities open up for on-line control algorithms utilizing these on-line model outputs. Biotechnol. Bioeng. 2017;114: 589-599. © 2016 Wiley Periodicals, Inc.
Hodge, N. E.; Ferencz, R. M.; Vignes, R. M.
2016-05-30
Selective laser melting (SLM) is an additive manufacturing process in which multiple, successive layers of metal powder are heated via laser in order to build a part. Modeling of SLM requires consideration of the complex interaction between heat transfer and solid mechanics. The present work describes the authors' initial efforts to validate their first-generation model. In particular, a comparison of model-generated solid mechanics results, including both deformation and stresses, is presented. Additionally, results of various perturbations of the process parameters and modeling strategies are discussed.
NASA Astrophysics Data System (ADS)
Farahmand, Parisa; Kovacevic, Radovan
2014-12-01
In laser cladding, the performance of deposited layers subjected to severe working conditions (e.g., wear and high temperatures) depends on the mechanical properties, the metallurgical bond to the substrate, and the percentage of dilution. The clad geometry and mechanical characteristics of the deposited layer are greatly influenced by the type of laser used as a heat source and by the process parameters. The quality of coatings fabricated by laser cladding and the efficiency of the process have improved thanks to the development of high-power diode lasers with power up to 10 kW. In this study, laser cladding by a high-power direct diode laser (HPDDL) as a new heat source was investigated in detail. High-alloy tool steel (AISI H13) feedstock was deposited on mild steel (ASTM A36) by an HPDDL with up to 8 kW of laser power and a newly designed lateral feeding nozzle. The influences of the main process parameters (laser power, powder flow rate, and scanning speed) on the clad-bead geometry (specifically layer height and depth of the heat-affected zone) and clad microhardness were studied. Multiple regression analysis was used to develop analytical models for the desired output properties as functions of the input process parameters, and analysis of variance (ANOVA) was applied to check the accuracy of the developed models. The response surface methodology (RSM) and a desirability function were used for multi-criteria optimization of the cladding process. In order to investigate the effect of process parameters on the molten pool evolution, in-situ monitoring was utilized. Finally, the validation results for the optimized process conditions show that the predictions were in good agreement with measured values. The multi-criteria optimization makes it possible to acquire an efficient process for combined control of the clad's geometrical and mechanical characteristics.
Compact touchless fingerprint reader based on digital variable-focus liquid lens
NASA Astrophysics Data System (ADS)
Tsai, C. W.; Wang, P. J.; Yeh, J. A.
2014-09-01
Identity certification in the cyberworld has always been troublesome when critical information and financial transactions must be processed. Biometric identification is the most effective measure to circumvent identity issues on mobile devices. Due to their bulky and pricey optical designs, conventional optical fingerprint readers have been discarded for mobile applications. In this paper, a digital variable-focus liquid lens was adopted for capturing a floating finger via fast focus-plane scanning. Simply placing a finger in front of the camera completes the fingerprint identification process. The prototyped fingerprint reader scans multiple focal planes from 30 mm to 15 mm in 0.2 seconds. From the multiple images captured at various focuses, one image is chosen for extraction of the fingerprint minutiae used for identity certification. In the optical design, a digital liquid lens atop a webcam with a fixed-focus lens module fast-scans a floating finger at preset focal planes. The distance, rolling angle and pitching angle of the finger are stored as crucial parameters during the matching of fingerprint minutiae. This innovative compact touchless fingerprint reader could be packed into a minute 9.8 × 9.8 × 5 mm volume once the optical design and multiple focus-plane scan function are optimized.
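Choosing the best frame from a focal stack is commonly done with a sharpness metric such as the variance of the Laplacian; the sketch below illustrates that generic heuristic on synthetic images. It is an assumption that a comparable focus measure is used in this prototype, and the frame sizes and blur levels are invented.

```python
import numpy as np
from scipy import ndimage

def focus_measure(img: np.ndarray) -> float:
    """Variance of the Laplacian: higher means sharper (more in focus)."""
    return float(ndimage.laplace(img.astype(float)).var())

def pick_sharpest(stack):
    """Return the index of the sharpest image in a focal stack."""
    return int(np.argmax([focus_measure(im) for im in stack]))

# Synthetic focal stack: one frame with fine detail, two blurred copies
rng = np.random.default_rng(5)
sharp = rng.uniform(size=(64, 64))
stack = [ndimage.gaussian_filter(sharp, 3), sharp,
         ndimage.gaussian_filter(sharp, 1)]
print("sharpest frame:", pick_sharpest(stack))   # expect index 1
```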
Digital Beamforming Synthetic Aperture Radar Developments at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Rincon, Rafael; Fatoyinbo, Temilola; Osmanoglu, Batuhan; Lee, Seung Kuk; Du Toit, Cornelis F.; Perrine, Martin; Ranson, K. Jon; Sun, Guoqing; Deshpande, Manohar; Beck, Jaclyn;
2016-01-01
Advanced Digital Beamforming (DBF) Synthetic Aperture Radar (SAR) technology is an area of research and development pursued at the NASA Goddard Space Flight Center (GSFC). Advanced SAR architectures enhance radar performance and open up a new set of capabilities in radar remote sensing. DBSAR-2 and EcoSAR are two state-of-the-art radar systems recently developed and tested. These new instruments employ multiple-input multiple-output (MIMO) architectures characterized by multi-mode operation, software-defined waveform generation, digital beamforming, and configurable radar parameters. The instruments have been developed to support several disciplines in Earth and planetary sciences. This paper describes the radars' advanced features and reports on the latest SAR processing and calibration efforts.
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, building on MMAE under the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow: adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples; furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
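Latin hypercube sampling of a parameter space is straightforward with SciPy's quasi-Monte Carlo module; the sketch below draws 10 samples from a hypothetical 4-dimensional space. The dimension, sample count, and bounds are illustrative, not GRAPE's actual settings.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of a 4-dimensional parameter space; the model
# count stays fixed as dimensions grow, unlike a full grid
sampler = qmc.LatinHypercube(d=4, seed=0)
unit = sampler.random(n=10)                  # 10 samples in [0, 1)^4
lower = np.array([0.1, 1.0, -5.0, 0.0])      # hypothetical parameter bounds
upper = np.array([0.9, 10.0, 5.0, 2.0])
samples = qmc.scale(unit, lower, upper)
print(samples)
```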
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This strategy is best demonstrated for two input parameters but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
Electromagnetic Dissociation Cross Sections for High LET Fragments
NASA Technical Reports Server (NTRS)
Norbury, John
2016-01-01
Nuclear interaction cross sections are used in space radiation transport codes to calculate the probability of fragment emission in high energy nucleus-nucleus collisions. Strong interactions usually dominate in these collisions, but electromagnetic (EM) interactions can also sometimes be important. Strong interactions typically occur when the projectile nucleus hits a target nucleus with a small impact parameter. For impact parameters larger than the sum of the nuclear radii, EM reactions dominate, and the process is called electromagnetic dissociation (EMD) if one of the nuclei undergoes fragmentation. Previous models of EMD have been used to calculate single proton (p) production, single neutron (n) production or light ion production, where a light ion is defined as an isotope of hydrogen (H) or helium (He), such as a deuteron (2H), a triton (3H), a helion (3He) or an alpha particle (4He). A new model is described which can also account for multiple nucleon production, such as 2p, 2n, 1p1n, 2p1n, 2p2n, etc., in addition to light ion production. Such processes are important to include for the following reasons. Consider, for example, the EMD reaction 56Fe + Al --> 52Cr + X + Al, for a 56Fe projectile impacting Al, which produces the high linear energy transfer (LET) fragment 52Cr. In this reaction, the most probable particles representing X are either 2p2n or 4He. Therefore, production of the high LET fragment 52Cr must include the multiple nucleon production of 2p2n in addition to the light ion production of 4He. Previous models, such as the NUCFRG3 model, could only account for the 4He production process in this reaction and could not account for 2p2n. The new EMD model presented in this work accounts for both the light ion and multiple nucleon processes, and is therefore able to correctly account for the production of high LET products such as 52Cr. The model will be described and calculations will be presented that show the importance of light ion and multiple nucleon production. The work will also show that EMD reactions contribute most to those fragments with the highest LET.
A new method of hybrid frequency hopping signals selection and blind parameter estimation
NASA Astrophysics Data System (ADS)
Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian
2018-04-01
Frequency hopping communication is widely used in military communications worldwide. In the case of single-channel reception, few methods can process multiple frequency hopping signals effectively and simultaneously. A method for hybrid FH signal sorting and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and PRI transformation theory to realize the sorting and parameter estimation of the components in a hybrid frequency hopping signal. The simulation results show that this method can correctly sort the frequency hopping component signals; at an SNR of 10 dB, the estimation error of the hop period is about 5% and the estimation error of the hop frequency is less than 1%. However, the performance of this method deteriorates seriously at low SNR.
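Spectral entropy, one of the building blocks named above, distinguishes narrowband hop segments from broadband noise. A minimal sketch follows; the frame length, sample rate, and signal parameters are arbitrary choices for illustration.

```python
import numpy as np

def spectral_entropy(frame: np.ndarray) -> float:
    """Normalized Shannon entropy of a frame's power spectrum; low values
    indicate a narrowband (tone-like) segment such as a single FH hop."""
    psd = np.abs(np.fft.rfft(frame)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(len(psd)))

fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(6)
tone = np.sin(2 * np.pi * 1200 * t[:1024])    # hop-like narrowband frame
noise = rng.normal(size=1024)                 # broadband noise frame
print("tone:", spectral_entropy(tone), "noise:", spectral_entropy(noise))
```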
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for applications such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering, and the important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which required multiple segmentations and visualizations during the radiologists' workflow. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
Guo, Chaohua; Wei, Mingzhen; Liu, Hong
2018-01-01
Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process, including desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most existing studies use well-testing methods, which involve many unrealistic simplifications and assumptions; no systematic work has considered all relevant transport mechanisms; and very few studies examine the sensitivity of uncertain parameters over realistic ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified with available field data from the Barnett Shale. The following mechanisms were considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of the matrix parameters were reviewed, and sensitivity analyses were conducted on the production performance of SGRs with MsFHW. Through comparison, hydraulic fracture parameters were found to be more sensitive than reservoir parameters: reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters have a significant effect on gas production from the early period. The results of this study can be used to improve the efficiency of the history matching process and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs.
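The desorption term uses the Langmuir isotherm, which has a simple closed form. A one-function sketch follows; the Langmuir volume and pressure values are hypothetical, not the Barnett-calibrated ones.

```python
import numpy as np

def langmuir_volume(p, V_L, p_L):
    """Langmuir isotherm: adsorbed gas volume at pressure p, where V_L is
    the maximum (Langmuir) volume and p_L is the pressure at V_L / 2."""
    return V_L * p / (p_L + p)

p = np.linspace(0.0, 30.0, 7)      # MPa, illustrative pressure range
V_L, p_L = 3.0e-3, 4.0             # hypothetical parameter values
print(langmuir_volume(p, V_L, p_L))
```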
ERIC Educational Resources Information Center
Hamadneh, Iyad Mohammed
2015-01-01
This study aimed at investigating the impact of changing the position of the escape alternative in a multiple-choice test on the psychometric properties of the test and its item parameters (difficulty, discrimination and guessing), as well as on the estimation of examinee ability. To achieve the study objectives, a 4-alternative multiple-choice achievement test…
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
Reconstruction of Porous Media with Multiple Solid Phases
Losic; Thovert; Adler
1997-02-15
A process is proposed to generate three-dimensional multiphase porous media with fixed phase probabilities and an overall correlation function. By varying the parameters, a specific phase can be located either at the interface between two phases or within a single phase. When the interfacial phase has a relatively small probability, its shape can be chosen as granular or lamellar. The influence of a third phase on the macroscopic conductivity of a medium is illustrated.
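One common way to realize such a reconstruction, offered here only as an illustrative stand-in for the paper's process, is to threshold a correlated Gaussian random field at quantiles matching the target phase probabilities; the smoothing length controls the correlation structure, and all sizes and probabilities below are assumptions.

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(7)
shape = (64, 64, 64)

# Correlated Gaussian field: smooth white noise, then re-standardize
field = ndimage.gaussian_filter(rng.normal(size=shape), sigma=3.0)
field /= field.std()

# Threshold into three phases with target probabilities 0.6 / 0.3 / 0.1
probs = [0.6, 0.3, 0.1]
t1 = stats.norm.ppf(probs[0])
t2 = stats.norm.ppf(probs[0] + probs[1])
phase = np.digitize(field, [t1, t2])    # labels 0, 1, 2
print([float(np.mean(phase == k)) for k in range(3)])
```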
Optimization of an optically implemented on-board FDMA demultiplexer
NASA Technical Reports Server (NTRS)
Fargnoli, J.; Riddle, L.
1991-01-01
Performance of a 30 GHz frequency division multiple access (FDMA) uplink to a processing satellite is modelled for the case where the onboard demultiplexer is implemented optically. Included in the performance model are the effects of adjacent channel interference, intersymbol interference, and spurious signals associated with the optical implementation. Demultiplexer parameters are optimized to provide the minimum bit error probability at a given bandwidth efficiency when filtered QPSK modulation is employed.
Application of multi response optimization with grey relational analysis and fuzzy logic method
NASA Astrophysics Data System (ADS)
Winarni, Sri; Wahyu Indratno, Sapto
2018-01-01
Multi-response optimization is an optimization process that considers multiple responses simultaneously. The purpose of this research is to find the optimum point in a multi-response optimization process using the grey relational analysis and fuzzy logic method. The optimum point is determined from the Fuzzy-GRG (Grey Relational Grade) variable, which is converted from the signal-to-noise ratios of the responses involved. The case study used in this research is the optimization of electrical process parameters in electrical discharge machining. It was found that the combination of treatments resulting in optimum MRR and SR was a gap voltage of 70 V, a peak current of 9 A and a duty factor of 0.8.
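The grey relational grade (GRG) computation itself is short: normalize each response, measure each experiment's distance from the ideal sequence, convert to relational coefficients, and average. The sketch below assumes the responses have already been converted to larger-is-better S/N ratios; the example values and the distinguishing coefficient zeta = 0.5 are illustrative.

```python
import numpy as np

def grey_relational_grade(Y, zeta=0.5):
    """Grey relational analysis: min-max normalize each response column
    (larger-is-better), compute relational coefficients against the
    ideal sequence, then average into one grade per experiment."""
    Yn = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0))
    delta = np.abs(1.0 - Yn)                      # distance from ideal (= 1)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)

# Hypothetical S/N ratios for two responses (e.g., MRR and SR) in 4 runs
Y = np.array([[32.1, -4.2], [35.6, -3.1], [30.2, -5.0], [34.0, -2.8]])
grade = grey_relational_grade(Y)
print("best run:", int(np.argmax(grade)), grade)
```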
A systematic petri net approach for multiple-scale modeling and simulation of biochemical processes.
Chen, Ming; Hu, Minjie; Hofestädt, Ralf
2011-06-01
A method to exploit hybrid Petri nets for modeling and simulating biochemical processes in a systematic way is introduced. Both molecular biology and biochemical engineering aspects are addressed. With discrete and continuous elements, hybrid Petri nets can easily handle biochemical factors such as metabolite concentrations and kinetic behaviors. It is possible to translate both molecular biological behavior and biochemical process workflows into hybrid Petri nets in a natural manner. As an example, a penicillin production bioprocess is modeled to illustrate the concepts of the methodology. The dynamics of production parameters in the bioprocess were simulated and observed diagrammatically. Current problems and post-genomic perspectives are also discussed.
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
Testing and Performance Analysis of the Multichannel Error Correction Code Decoder
NASA Technical Reports Server (NTRS)
Soni, Nitin J.
1996-01-01
This report provides the test results and performance analysis of the multichannel error correction code decoder (MED) system for a regenerative satellite with asynchronous, frequency-division multiple access (FDMA) uplink channels. It discusses the system performance relative to various critical parameters: the coding length, data pattern, unique word value, unique word threshold, and adjacent-channel interference. Testing was performed under laboratory conditions and used a computer control interface with specifically developed control software to vary these parameters. Needed technologies - the high-speed Bose Chaudhuri-Hocquenghem (BCH) codec from Harris Corporation and the TRW multichannel demultiplexer/demodulator (MCDD) - were fully integrated into the mesh very small aperture terminal (VSAT) onboard processing architecture and were demonstrated.
Engineering Digestion: Multiscale Processes of Food Digestion.
Bornhorst, Gail M; Gouseti, Ourania; Wickham, Martin S J; Bakalis, Serafim
2016-03-01
Food digestion is a complex, multiscale process that has recently become of interest to the food industry due to the developing links between food and health or disease. Food digestion can be studied by using either in vitro or in vivo models, each having certain advantages or disadvantages. The recent interest in food digestion has resulted in a large number of studies in this area, yet few have provided an in-depth, quantitative description of digestion processes. To provide a framework to develop these quantitative comparisons, a summary is given here of the parallels between digestion processes and unit operations in the food and chemical industry. Characterization parameters and phenomena are suggested for each step of digestion. In addition to the quantitative characterization of digestion processes, the multiscale aspect of digestion must also be considered. In both food systems and the gastrointestinal tract, multiple length scales are involved in food breakdown, mixing, and absorption. These different length scales influence digestion processes independently as well as through interrelated mechanisms. To facilitate optimized development of functional food products, a multiscale, engineering approach may be taken to describe food digestion processes. A framework for this approach is described in this review, as well as examples that demonstrate the importance of process characterization and of the multiple, interrelated length scales in the digestion process. © 2016 Institute of Food Technologists®
NASA Astrophysics Data System (ADS)
Li, S.; Rupp, D. E.; Hawkins, L.; Mote, P.; McNeall, D. J.; Sarah, S.; Wallom, D.; Betts, R. A.
2017-12-01
This study investigates the potential to reduce known summer hot/dry biases over the Pacific Northwest in the UK Met Office's atmospheric model (HadAM3P) by simultaneously varying multiple model parameters. The bias-reduction process is done through a series of steps: 1) generation of a perturbed physics ensemble (PPE) through the volunteer computing network weather@home; 2) using machine learning to train "cheap" and fast statistical emulators of the climate model, to rule out regions of parameter space that lead to model variants that do not satisfy observational constraints, where the observational constraints (e.g., top-of-atmosphere energy flux, magnitude of the annual temperature cycle, summer/winter temperature and precipitation) are introduced sequentially; 3) designing a new PPE by "pre-filtering" using the emulator results. Steps 1) through 3) are repeated until the results are considered satisfactory (three times in our case). The process includes a sensitivity analysis to find dominant parameters for various model output metrics, which reduces the number of parameters to be perturbed with each new PPE. Relative to observational uncertainty, we achieve regional improvements without introducing large biases in other parts of the globe. Our results illustrate the potential of using machine learning to train cheap and fast statistical emulators of a climate model, in combination with PPEs, for systematic model improvement.
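The pre-filtering step can be sketched in a few lines: train an inexpensive emulator on an existing PPE, then screen a large candidate parameter set against an observational constraint before any expensive model runs. Everything below (parameter count, ranges, the synthetic metric, and the tolerance) is an invented stand-in for illustration, not the study's setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stage 1: an existing PPE -- parameter vectors and one output metric
# (here a synthetic stand-in for, e.g., a regional temperature bias).
X = rng.uniform(0.0, 1.0, size=(500, 6))          # 6 perturbed parameters
y = (X[:, 0] - 0.3) ** 2 + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(500)

# Stage 2: train a cheap, fast statistical emulator of the model metric.
emu = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Stage 3: pre-filter a large candidate set against an observational
# constraint (metric below a tolerance) before running the real model.
cand = rng.uniform(0.0, 1.0, size=(100_000, 6))
keep = cand[np.abs(emu.predict(cand)) < 0.3]
print(f"{len(keep)} of {len(cand)} candidates pass the constraint")
```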
Feasibility of Rapid Multitracer PET Tumor Imaging
NASA Astrophysics Data System (ADS)
Kadrmas, D. J.; Rust, T. C.
2005-10-01
Positron emission tomography (PET) can characterize different aspects of tumor physiology using various tracers. PET scans are usually performed using only one tracer since there is no explicit signal for distinguishing multiple tracers. We tested the feasibility of rapidly imaging multiple PET tracers using dynamic imaging techniques, where the signals from each tracer are separated based upon differences in tracer half-life, kinetics, and distribution. Time-activity curve populations for FDG, acetate, ATSM, and PTSM were simulated using appropriate compartment models, and noisy dual-tracer curves were computed by shifting and adding the single-tracer curves. Single-tracer components were then estimated from dual-tracer data using two methods: principal component analysis (PCA)-based fits of single-tracer components to multitracer data, and parallel multitracer compartment models estimating single-tracer rate parameters from multitracer time-activity curves. The PCA found sufficient information content for separating multitracer data, and showed that tracer separability depends upon tracer kinetics, injection order, and timing. Multitracer compartment modeling recovered rate parameters for individual tracers with good accuracy but somewhat higher statistical uncertainty than single-tracer results when the injection delay was >10 min. These approaches to processing rapid multitracer PET data may potentially provide a new tool for characterizing multiple aspects of tumor physiology in vivo.
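The "shift and add" construction, and its inversion by least squares, can be illustrated with synthetic kinetics. The exponential time-activity shapes and the 10-minute injection delay below are invented placeholders; the paper uses full compartment models for FDG, acetate, ATSM, and PTSM:

```python
import numpy as np

t = np.linspace(0, 60, 241)                       # minutes
basis1 = np.exp(-t / 30)                          # stand-in tracer-1 kinetic
late = t >= 10                                    # tracer 2 injected at 10 min
basis2 = np.where(late, 1 - np.exp(-(t - 10) / 5), 0.0)

truth = 3.0 * basis1 + 1.5 * basis2               # shifted-and-added curve
noisy = truth + 0.05 * np.random.default_rng(1).standard_normal(t.size)

# Least-squares estimate of the two single-tracer amplitudes from the
# combined dual-tracer curve.
A = np.column_stack([basis1, basis2])
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)
print(coef)   # ~ [3.0, 1.5]
```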
Pervez, Hifsa; Mozumder, Mohammad S.; Mourad, Abdel-Hamid I.
2016-01-01
The current study presents an investigation on the optimization of injection molding parameters of HDPE/TiO2 nanocomposites using grey relational analysis with the Taguchi method. Four control factors, namely filler concentration (i.e., TiO2), barrel temperature, residence time and holding time, were chosen at three levels each. Mechanical properties, such as yield strength, Young's modulus and elongation, were selected as the performance targets. Nine experimental runs were carried out based on the Taguchi L9 orthogonal array, and the data were processed according to the grey relational steps. The optimal process parameters were found based on the average responses of the grey relational grades, and the ideal operating conditions were found to be a filler concentration of 5 wt % TiO2, a barrel temperature of 225 °C, a residence time of 30 min and a holding time of 20 s. Moreover, analysis of variance (ANOVA) has also been applied to identify the most significant factor, and the percentage of TiO2 nanoparticles was found to have the most significant effect on the properties of the HDPE/TiO2 nanocomposites fabricated through the injection molding process. PMID:28773830
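The grey relational steps reduce multiple responses to a single grade per run: normalize each response, compute deviations from the ideal sequence, convert to relational coefficients, and average. A minimal sketch with invented L9 response values (the real yield strength, modulus, and elongation data are in the paper):

```python
import numpy as np

def grey_relational_grade(y, larger_is_better=True, zeta=0.5):
    """Grey relational grades for a runs-x-responses matrix y, following
    the usual normalize / deviate / coefficient / average steps."""
    y = np.asarray(y, dtype=float)
    if larger_is_better:
        norm = (y - y.min(0)) / (y.max(0) - y.min(0))
    else:
        norm = (y.max(0) - y) / (y.max(0) - y.min(0))
    delta = 1.0 - norm                   # deviation from the ideal sequence
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)             # equal-weight grade per run

# Hypothetical L9 responses: yield strength, Young's modulus, elongation
y = np.array([[22, 900, 12], [24, 950, 10], [23, 920, 11],
              [25, 980,  9], [21, 880, 13], [26, 990,  8],
              [24, 940, 10], [22, 910, 12], [25, 970,  9]])
print(grey_relational_grade(y).round(3))  # pick the run with the top grade
```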
Simulated discharge trends indicate robustness of hydrological models in a changing climate
NASA Astrophysics Data System (ADS)
Addor, Nans; Nikolova, Silviya; Seibert, Jan
2016-04-01
Assessing the robustness of hydrological models under contrasting climatic conditions should be part of any hydrological model evaluation. Robust models are particularly important for climate impact studies, as models performing well under current conditions are not necessarily capable of correctly simulating hydrological perturbations caused by climate change. A pressing issue is the usually assumed stationarity of parameter values over time. Modeling experiments using conceptual hydrological models revealed that assuming transposability of parameter values under changing climatic conditions can lead to significant biases in discharge simulations. This raises the question of whether parameter values should be modified over time to reflect changes in hydrological processes induced by climate change. Such a question denotes a focus on the contribution of internal processes (i.e., catchment processes) to discharge generation. Here we adopt a different perspective and explore the contribution of external forcing (i.e., changes in precipitation and temperature) to changes in discharge. We argue that in a robust hydrological model, discharge variability should be induced by changes in the boundary conditions, and not by changes in parameter values. In this study, we explore how well the conceptual hydrological model HBV captures transient changes in hydrological signatures over the period 1970-2009. Our analysis focuses on research catchments in Switzerland undisturbed by human activities. The precipitation and temperature forcing are extracted from recently released 2-km gridded data sets. We use a genetic algorithm to calibrate HBV for the whole 40-year period and for the eight successive 5-year periods to assess possible trends in parameter values. Model calibration is run multiple times to account for parameter uncertainty. We find that in alpine catchments showing a significant increase of winter discharge, this trend can be captured reasonably well with constant parameter values over the whole reference period. Further, preliminary results suggest that some trends in parameter values do not reflect changes in hydrological processes, as reported by others previously, but instead might stem from a modeling artifact related to the parameterization of evapotranspiration, which is overly sensitive to temperature increase. We adopt a trading-space-for-time approach to better understand whether robust relationships between parameter values and forcing can be established, and to critically explore the rationale behind time-dependent parameter values in conceptual hydrological models.
Sensitivity Analysis of Cf-252 (sf) Neutron and Gamma Observables in CGMF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Austin Lewis; Talou, Patrick; Stetcu, Ionel
CGMF is a Monte Carlo code that simulates the decay of primary fission fragments by emission of neutrons and gamma rays, according to the Hauser-Feshbach equations. As the CGMF code was recently integrated into the MCNP6.2 transport code, great emphasis has been placed on providing optimal parameters to CGMF such that many different observables are accurately represented. Of these observables, the prompt neutron spectrum, prompt neutron multiplicity, prompt gamma spectrum, and prompt gamma multiplicity are crucial for accurate transport simulations of criticality and nonproliferation applications. This contribution to the ongoing efforts to improve CGMF presents a study of the sensitivity of various neutron and gamma observables to several input parameters for Californium-252 spontaneous fission. Among the most influential parameters are those that affect the input yield distributions in fragment mass and total kinetic energy (TKE). A new scheme for representing Y(A,TKE) was implemented in CGMF using three fission modes, S1, S2 and SL. The sensitivity profiles were calculated for 17 total parameters, which show that the neutron multiplicity distribution is strongly affected by the TKE distribution of the fragments. The total excitation energy (TXE) of the fragments is shared according to a parameter RT, which is defined as the ratio of the light to heavy initial temperatures. The sensitivity profile of the neutron multiplicity shows a second-order effect of RT on the mean neutron multiplicity. A final sensitivity profile was produced for the parameter alpha, which affects the spin of the fragments. Higher values of alpha lead to higher fragment spins, which inhibit the emission of neutrons. Understanding the sensitivity of the prompt neutron and gamma observables to the many CGMF input parameters provides a platform for the optimization of these parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Liancheng
The effects of graphene on the optical properties of an active system, e.g., InGaN/GaN multiple quantum wells, are thoroughly investigated and clarified. Here, we have investigated the mechanisms accounting for the photoluminescence reduction in the graphene-covered GaN/InGaN multiple quantum well hybrid structure. Compared to the bare multiple quantum wells, the photoluminescence intensity of graphene-covered multiple quantum wells showed a 39% decrease after excluding the graphene absorption losses. The responsible mechanisms have been identified with the following factors: (1) the graphene two-dimensional hole gas intensifies the polarization field in the multiple quantum wells, thus steepening the quantum well band profile and causing hole-electron pairs to further separate; (2) the lower affinity of graphene compared to air leads to a weaker capability to confine the excited hot electrons in the multiple quantum wells; and (3) exciton transfer occurs through a non-radiative energy transfer process. These factors are theoretically analysed based on advanced physical models of semiconductor device calculations and experimentally verified by varying structural parameters, such as the indium fraction in the multiple quantum wells and the thickness of the last GaN quantum barrier spacer layer.
Modeling and Tool Wear in Routing of CFRP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliescu, D.; Fernandez, A.; Gutierrez-Orrantia, M. E.
2011-01-17
This paper presents the prediction and evaluation of feed force in routing of carbon composite material. In order to extend tool life and improve quality of the machined surface, a better understanding of uncoated and coated tool behaviors is required. This work describes (1) the optimization of the geometry of multiple-teeth tools minimizing the tool wear and the feed force, (2) the optimization of tool coating and (3) the development of a phenomenological model between the feed force, the routing parameters and the tool wear. The experimental results indicate that the feed rate, the cutting speed and the tool wear are the most significant factors affecting the feed force. In the case of multiple-teeth tools, a particular geometry with 14 teeth right helix right cut and 11 teeth left helix right cut gives the best results. A thick AlTiN coating or a diamond coating can dramatically improve the tool life while minimizing the axial force, roughness and delamination. A wear model has then been developed based on an abrasive behavior of the tool. The model links the feed force to the tool geometry parameters (tool diameter), to the process parameters (feed rate, cutting speed and depth of cut) and to the wear. The model presented has been verified by experimental tests.
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indices require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters, and therefore model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially, to approximately 25 for each of the three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
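Elementary-Effects screening ranks parameters by the mean absolute change in a model output per unit parameter perturbation, at a fraction of the cost of Sobol indices. A radial one-at-a-time sketch on a toy function (the function, ranges, and repetition count are illustrative assumptions; the study's sequential screening method is more elaborate):

```python
import numpy as np

def elementary_effects(f, lo, hi, r=20, seed=0):
    """Mean absolute elementary effect per parameter over r random base
    points (radial one-at-a-time design on the box [lo, hi])."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    delta = 0.1 * (hi - lo)
    ee = np.zeros((r, lo.size))
    for k in range(r):
        x = rng.uniform(lo, hi - delta)      # leave room for the +delta step
        y0 = f(x)
        for i in range(lo.size):
            x2 = x.copy()
            x2[i] += delta[i]
            ee[k, i] = abs(f(x2) - y0) / delta[i]
    return ee.mean(axis=0)                   # large value -> informative

# Toy output with one dominant, two interacting, and one 'hidden' parameter
f = lambda x: 5 * x[0] + x[1] * x[2] + 0.01 * x[3]
print(elementary_effects(f, lo=[0, 0, 0, 0], hi=[1, 1, 1, 1]).round(3))
```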
Systematic study of rapidity dispersion parameter in high energy nucleus-nucleus interactions
NASA Astrophysics Data System (ADS)
Bhattacharyya, Swarnapratim; Haiduc, Maria; Neagu, Alina Tania; Firu, Elena
2014-03-01
A systematic study of rapidity dispersion parameter as a quantitative measure of clustering of particles has been carried out in the interactions of 16O, 28Si and 32S projectiles at 4.5 A GeV/c with heavy (AgBr) and light (CNO) groups of targets present in the nuclear emulsion. For all the interactions, the total ensemble of events has been divided into four overlapping multiplicity classes depending on the number of shower particles. For all the interactions and for each multiplicity class, the rapidity dispersion parameter values indicate the occurrence of clusterization during the multiparticle production at Dubna energy. The measured rapidity dispersion parameter values are found to decrease with the increase of average multiplicity for all the interactions. The dependence of rapidity dispersion parameter on the average multiplicity can be successfully described by a relation D(η) = a + b
Interface circuit for a multiple-beam tuning-fork gyroscope with high quality factors
NASA Astrophysics Data System (ADS)
Wang, Ren
This research work presents the design, theoretical analysis, fabrication, interface electronics, and experimental results of a Silicon-On-Insulator (SOI) based Multiple-Beam Tuning-Fork Gyroscope (MB-TFG). Based on a numerical model of Thermo-Elastic Damping (TED), a Multiple-Beam Tuning-Fork Structure (MB-TFS) is designed with high Quality factors (Qs) in its two operation modes. A comprehensive theoretical analysis of the MB-TFG design is conducted to relate the design parameters to its operation parameters and further performance parameters. In conjunction with a mask that defines the device through trenches to alleviate severe fabrication effects on anchor loss, a simple one-mask fabrication process is employed to implement this MB-TFG design on SOI wafers. The fabricated MB-TFGs are tested with PCB-level interface electronics, and a thorough comparison between the experimental results and a theoretical analysis is conducted to verify the MB-TFG design and accurately interpret the measured performance. The highest measured Qs of the fabricated MB-TFGs in vacuum are 255,000 in the drive mode and 103,000 in the sense mode, at a frequency of 15.7 kHz. Under a frequency difference of 4 Hz between the two modes (operation frequency of 16.8 kHz) and a drive-mode vibration amplitude of 3.0 µm, the measured rate sensitivity is 80 mVpp/°/s with an equivalent impedance of 6 MΩ. The calculated overall rate resolution of this device is 0.37 °/hr/√Hz, while the measured Angle Random Walk (ARW) and bias instability are 6.67 °/√hr and 95 °/hr, respectively.
Simic, Vladimir
2016-06-01
As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on the management of ELVs is necessary in order to more successfully tackle this important environmental challenge. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicle management under rigorous environmental regulations. The proposed model can incorporate various uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. Particularly, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potential and applicability of the proposed model. Various constraint-violation probability levels are examined in detail. Influences of parameter uncertainty on model solutions are thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating system constraints. The formulated model is able to tackle a hard ELV management problem involving uncertainty. The presented model has advantages in providing a basis for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Dudkin, V. E.; Kovalev, E. E.; Nefedov, N. A.; Antonchik, V. A.; Bogdanov, S. D.; Kosmach, V. F.; Likhachev, A. YU.; Benton, E. V.; Crawford, H. J.
1995-01-01
A method is proposed for finding the dependence of mean multiplicities of secondaries on the nucleus-collision impact parameter from the data on the total interaction ensemble. The impact parameter has been shown to completely define the mean characteristics of an individual interaction event. A difference has been found between experimental results and the data calculated in terms of the cascade-evaporation model at impact-parameter values below 3 fm.
Optimization Methods for Spiking Neurons and Networks
Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph
2011-01-01
Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron's output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single-neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron, we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas-Niebur neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate-and-fire-with-adaptation neurons. The GA approach is demonstrated both in software simulation and in a hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265
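The GA idea, evolving a population of parameter vectors toward a target firing behavior, can be sketched for a single leaky integrate-and-fire neuron (a simpler model than the adapting neurons in the paper; the parameter ranges, mutation scales, and target rate are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_spike_count(params, t_end=1.0, dt=1e-3, i_in=1.5):
    """Spike count of a leaky integrate-and-fire neuron over t_end seconds,
    params = (membrane time constant tau, firing threshold v_th)."""
    tau, v_th = params
    v, n = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt * (i_in - v) / tau
        if v >= v_th:
            v, n = 0.0, n + 1          # reset and count the spike
    return n

target = 40                                          # desired spikes/s
pop = rng.uniform([0.005, 0.5], [0.05, 1.2], size=(30, 2))
fitness = lambda p: -abs(lif_spike_count(p) - target)
for gen in range(25):                                # tiny generational GA
    order = np.argsort([fitness(p) for p in pop])
    parents = pop[order[-10:]]                       # truncation selection
    children = parents[rng.integers(0, 10, 20)] \
        + rng.normal(0, [0.002, 0.02], (20, 2))      # Gaussian mutation
    pop = np.vstack([parents, np.clip(children, [1e-3, 0.1], None)])
best = max(pop, key=fitness)
print(best, lif_spike_count(best))
```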
NASA Astrophysics Data System (ADS)
Cunningham, Ross; Narra, Sneha P.; Ozturk, Tugce; Beuth, Jack; Rollett, A. D.
2016-03-01
Electron beam melting (EBM) is one of the subsets of direct metal additive manufacturing (AM), an emerging manufacturing method that fabricates metallic parts directly from a three-dimensional (3D) computer model by the successive melting of powder layers. This family of technologies has seen significant growth in recent years due to its potential to manufacture complex components with shorter lead times, reduced material waste and minimal post-processing as a "near-net-shape" process, making it of particular interest to the biomedical and aerospace industries. The popular titanium alloy Ti-6Al-4V has been the focus of multiple studies due to its importance to these two industries, which can be attributed to its high strength to weight ratio and corrosion resistance. While previous research has found that most tensile properties of EBM Ti-6Al-4V meet or exceed conventional manufacturing standards, fatigue properties have been consistently inferior due to a significant presence of porosity. Studies have shown that adjusting processing parameters can reduce overall porosity; however, they frequently utilize methods that give insufficient information to properly characterize the porosity (e.g., Archimedes' method). A more detailed examination of the result of process parameter adjustments on the size and spatial distribution of gas porosity was performed utilizing synchrotron-based x-ray microtomography with a minimum feature resolution of 1.5 µm. Cross-sectional melt pool area was varied systematically via process mapping. Increasing melt pool area through the speed function variable was observed to significantly reduce porosity in the part.
Prosperini, Luca; Fanelli, Fulvia; Petsas, Nikolaos; Sbardella, Emilia; Tona, Francesca; Raz, Eytan; Fortuna, Deborah; De Angelis, Floriana; Pozzilli, Carlo; Pantano, Patrizia
2014-11-01
To determine if high-intensity, task-oriented, visual feedback training with a video game balance board (Nintendo Wii) induces significant changes in diffusion-tensor imaging (DTI) parameters of cerebellar connections and other supratentorial associative bundles, and if these changes are related to clinical improvement in patients with multiple sclerosis. The protocol was approved by the local ethics committee; each participant provided written informed consent. In this 24-week, randomized, two-period crossover pilot study, 27 patients underwent static posturography and brain magnetic resonance (MR) imaging at study entry, after the first 12-week period, and at study termination. Thirteen patients started a 12-week training program followed by a 12-week period without any intervention, while 14 patients received the intervention in reverse order. Fifteen healthy subjects also underwent MR imaging once and underwent static posturography. Virtual dissection of white matter tracts was performed with streamline tractography; values of DTI parameters were then obtained for each dissected tract. Repeated measures analyses of variance were performed to evaluate whether DTI parameters significantly changed after intervention, with false discovery rate correction for multiple hypothesis testing. There were relevant differences between patients and healthy control subjects in postural sway and DTI parameters (P < .05). Significant main effects of time-by-group interaction for fractional anisotropy and radial diffusivity of the left and right superior cerebellar peduncles were found (F2,23 range, 5.555-3.450; P = .036-.088 after false discovery rate correction). These changes correlated with objective measures of balance improvement detected at static posturography (r = -0.381 to 0.401, P < .05). However, both clinical and DTI changes did not persist beyond 12 weeks after training. Despite the low statistical power (35%) due to the small sample size, the results showed that training with the balance board system modified the microstructure of the superior cerebellar peduncles. The clinical improvement observed after training might be mediated by enhanced myelination-related processes, suggesting that high-intensity, task-oriented exercises could induce favorable microstructural changes in the brains of patients with multiple sclerosis.
Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos
2016-01-01
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
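The core of such consensus labeling is weighted voting: each warped atlas proposes a label per voxel, and votes are weighted by the local similarity between the atlas and the target image. A toy sketch of the fusion step (array shapes and similarity values invented; MUSE's actual ranking and boundary modulation are more involved):

```python
import numpy as np

def weighted_label_fusion(labels, similarities):
    """Fuse candidate labelings (n_atlases x n_voxels, integer labels)
    using per-atlas, per-voxel similarity weights of the same shape."""
    n_labels = labels.max() + 1
    votes = np.zeros((n_labels, labels.shape[1]))
    cols = np.arange(labels.shape[1])
    for lab, sim in zip(labels, similarities):
        votes[lab, cols] += sim          # each atlas casts weighted votes
    return votes.argmax(axis=0)          # consensus label per voxel

# Toy example: 4 warped atlases labeling 6 voxels into 3 regions (0-2)
labels = np.array([[0, 1, 1, 2, 2, 0],
                   [0, 1, 2, 2, 2, 0],
                   [1, 1, 1, 2, 0, 0],
                   [0, 2, 1, 2, 2, 1]])
sims = np.array([[.9, .8, .7, .9, .6, .9],
                 [.8, .9, .4, .8, .9, .8],
                 [.3, .7, .8, .9, .2, .7],
                 [.7, .2, .9, .8, .8, .1]])
print(weighted_label_fusion(labels, sims))
```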
Leong, Wai Fun; Che Man, Yaakob B; Lai, Oi Ming; Long, Kamariah; Misran, Misni; Tan, Chin Ping
2009-09-23
The purpose of this study was to optimize the parameters involved in the production of water-soluble phytosterol microemulsions for use in the food industry. In this study, response surface methodology (RSM) was employed to model and optimize four of the processing parameters, namely, the number of cycles of high-pressure homogenization (1-9 cycles), the pressure used for high-pressure homogenization (100-500 bar), the evaporation temperature (30-70 degrees C), and the concentration ratio of microemulsions (1-5). All responses (particle size (PS), polydispersity index (PDI), and percent ethanol residual (%ER)) were well fit by a reduced cubic model obtained by multiple regression after manual elimination. The coefficient of determination (R(2)) and absolute average deviation (AAD) values for PS, PDI, and %ER were 0.9628 and 0.5398%, 0.9953 and 0.7077%, and 0.9989 and 1.0457%, respectively. The optimized processing parameters were 4.88 (approximately 5) cycles of high-pressure homogenization, a homogenization pressure of 400 bar, an evaporation temperature of 44.5 degrees C, and a microemulsion concentration ratio of 2.34 (approximately 2). The corresponding responses for the optimized preparation condition were a minimal particle size of 328 nm, a minimal polydispersity index of 0.159, and <0.1% ethanol residual. The chi-square test verified the model, whereby the experimental values of PS, PDI, and %ER agreed with the predicted values at a 0.05 level of significance.
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Multi-scale modularity and motif distributional effect in metabolic networks.
Gao, Shang; Chen, Alan; Rahmani, Ali; Zeng, Jia; Tan, Mehmet; Alhajj, Reda; Rokne, Jon; Demetrick, Douglas; Wei, Xiaohui
2016-01-01
Metabolism is a set of fundamental processes that play important roles in a plethora of biological and medical contexts. It is understood that the topological information of reconstructed metabolic networks, such as modular organization, has crucial implications on biological functions. Recent interpretations of modularity in network settings provide a view of multiple network partitions induced by different resolution parameters. Here we ask the question: How do multiple network partitions affect the organization of metabolic networks? Since network motifs are often interpreted as the super families of evolved units, we further investigate their impact under multiple network partitions and investigate how the distribution of network motifs influences the organization of metabolic networks. We studied Homo sapiens, Saccharomyces cerevisiae and Escherichia coli metabolic networks; we analyzed the relationship between different community structures and motif distribution patterns. Further, we quantified the degree to which motifs participate in the modular organization of metabolic networks.
NASA Astrophysics Data System (ADS)
Tang, Jiafu; Liu, Yang; Fung, Richard; Luo, Xinggang
2008-12-01
Manufacturers are legally accountable for dealing with the industrial waste generated from their production processes in order to avoid pollution. Along with advances in waste recovery techniques, manufacturers may adopt various recycling strategies in dealing with industrial waste. With reuse strategies and technologies, byproducts or wastes can be returned to production processes in the iron and steel industry, and some waste can be recycled back to base material for reuse in other industries. This article focuses on a recovery-strategy optimization problem for a typical class of industrial waste recycling processes in order to maximize profit. There are multiple strategies for waste recycling available to generate multiple byproducts; these byproducts are then further transformed into several types of chemical products via different production patterns. A mixed integer programming model is developed to determine which recycling strategy and which production pattern should be selected, with what quantity of chemical products corresponding to this strategy and pattern, in order to yield maximum marginal profits. The sales profits of chemical products and the set-up costs of these strategies and patterns, as well as the operation costs of production, are considered. A simulated annealing (SA) based heuristic algorithm is developed to solve the problem. Finally, an experiment is designed to verify the effectiveness and feasibility of the proposed method. By comparing a single strategy to multiple strategies in an example, it is shown that the total sales profit of chemical products can be increased by around 25% through the simultaneous use of multiple strategies. This illustrates the superiority of combinatorial multiple strategies. Furthermore, the effects of the model parameters on profit are discussed to help manufacturers organize their waste recycling networks.
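A simulated annealing heuristic for this kind of binary strategy-selection problem needs only a feasibility-aware objective, a neighborhood move, and a cooling schedule. A minimal sketch with invented profits, set-up costs, and a single budget constraint (the paper's model has far richer structure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: marginal profit of 8 strategy/pattern combinations
# and their set-up costs against a shared budget (all values illustrative).
profit = rng.uniform(10, 50, 8)
setup = rng.uniform(5, 20, 8)
budget = 60.0

def objective(x):                      # total profit of a feasible selection
    return profit @ x if setup @ x <= budget else -np.inf

x = np.zeros(8)                        # start with no strategy selected
best, best_val = x.copy(), objective(x)
temp = 10.0
for step in range(5000):               # flip one strategy on/off per move
    y = x.copy()
    i = rng.integers(8)
    y[i] = 1 - y[i]
    d = objective(y) - objective(x)    # accept improvements, or worse moves
    if d >= 0 or rng.random() < np.exp(d / temp):   # with Boltzmann prob.
        x = y
        if objective(x) > best_val:
            best, best_val = x.copy(), objective(x)
    temp *= 0.999                      # geometric cooling schedule
print(best, best_val)
```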
Zepeda-Mendoza, Marie Lisandra; Bohmann, Kristine; Carmona Baez, Aldo; Gilbert, M Thomas P
2016-05-03
DNA metabarcoding is an approach for identifying multiple taxa in an environmental sample using specific genetic loci and taxa-specific primers. When combined with high-throughput sequencing it enables the taxonomic characterization of large numbers of samples in a relatively time- and cost-efficient manner. One recent laboratory development is the addition of 5'-nucleotide tags to both primers, producing double-tagged amplicons, and the use of multiple PCR replicates to filter erroneous sequences. However, there is currently no available toolkit for the straightforward analysis of datasets produced in this way. We present DAMe, a toolkit for the processing of datasets generated by double-tagged amplicons from multiple PCR replicates derived from an unlimited number of samples. Specifically, DAMe can be used to (i) sort amplicons by tag combination, (ii) evaluate PCR replicate dissimilarity, and (iii) filter sequences derived from sequencing/PCR errors, chimeras, and contamination. This is attained by calculating the following parameters: (i) sequence content similarity between the PCR replicates from each sample, (ii) reproducibility of each unique sequence across the PCR replicates, and (iii) copy number of the unique sequences in each PCR replicate. We showcase the insights that can be obtained using DAMe prior to taxonomic assignment by applying it to two real datasets that vary in their complexity regarding number of samples, sequencing libraries, PCR replicates, and used tag combinations. Finally, we use a third mock dataset to demonstrate the impact and importance of filtering the sequences with DAMe. DAMe allows the user-friendly manipulation of amplicons derived from multiple samples with PCR replicates built into a single sequencing library or multiple ones. It allows the user to: (i) collapse amplicons into unique sequences and sort them by tag combination while retaining the sample identifier and copy number information, (ii) identify sequences carrying unused tag combinations, (iii) evaluate the comparability of PCR replicates of the same sample, and (iv) filter tagged amplicons from a number of PCR replicates using parameters of minimum length, copy number, and reproducibility across the PCR replicates. This enables an efficient analysis of complex datasets, and ultimately increases the ease of handling datasets from large-scale studies.
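The tag-sorting and replicate-reproducibility filters amount to grouping reads by tag combination and keeping only sequences seen often enough across enough PCR replicates. A toy sketch of that logic in plain Python (the reads, tags, and thresholds are invented; DAMe itself is the tool to use in practice):

```python
from collections import Counter, defaultdict

# Toy reads: (pcr_replicate, tag_pair, sequence); tag pairs identify samples.
reads = [(1, ("A", "A"), "ACGT"), (1, ("A", "A"), "ACGT"),
         (2, ("A", "A"), "ACGT"), (3, ("A", "A"), "ACGT"),
         (1, ("A", "A"), "ACGG"),                     # low-copy, 1 replicate
         (1, ("A", "B"), "ACGT")]                     # unused tag combination

valid_tags = {("A", "A")}                             # tags actually used
min_copies, min_replicates = 2, 2

counts = defaultdict(Counter)                         # seq -> replicate counts
for rep, tag, seq in reads:
    if tag in valid_tags:                             # drop unused-tag reads
        counts[seq][rep] += 1

kept = [seq for seq, reps in counts.items()
        if sum(reps.values()) >= min_copies and len(reps) >= min_replicates]
print(kept)   # ['ACGT'] -- reproducible across PCR replicates
```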
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.
1983-01-01
An investigation was conducted to continue the development of perturbation procedures and associated computational codes for rapidly determining approximations to nonlinear flow solutions, with the purpose of establishing a method for minimizing computational requirements associated with parametric design studies of transonic flows in turbomachines. The results reported here concern the extension of the previously developed successful method for single parameter perturbations to simultaneous multiple-parameter perturbations, and the preliminary application of the multiple-parameter procedure in combination with an optimization method to blade design/optimization problem. In order to provide as severe a test as possible of the method, attention is focused in particular on transonic flows which are highly supercritical. Flows past both isolated blades and compressor cascades, involving simultaneous changes in both flow and geometric parameters, are considered. Comparisons with the corresponding exact nonlinear solutions display remarkable accuracy and range of validity, in direct correspondence with previous results for single-parameter perturbations.
Thermal regulation in multiple-source arc welding involving material transformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doumanidis, C.C.
1995-06-01
This article addresses regulation of the thermal field generated during arc welding, as the cause of solidification, heat-affected zone and cooling rate related metallurgical transformations affecting the final microstructure and mechanical properties of various welded materials. This temperature field is described by a dynamic real-time process model, consisting of an analytical composite conduction expression for the solid region and a lumped-state, double-stream circulation model for the weld pool, integrated with a Gaussian heat input and calibrated experimentally through butt-joint GMAW tests on plain steel plates. This model serves as the basis of an in-process thermal control system employing feedback of part surface temperatures measured by infrared pyrometry, and real-time identification of the model parameters with a multivariable adaptive control strategy. Multiple heat inputs and continuous power distributions are implemented by a single time-multiplexed torch, scanning the weld surface to ensure independent, decoupled control of several thermal characteristics. Their regulation is experimentally obtained in longitudinal GTAW of stainless steel pipes, despite the presence of several geometrical, thermal and process condition disturbances of arc welding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, V P; Komyshan, A O; Smaliuk, A P
2013-12-31
It is shown that reabsorption of the luminescence radiation in the range of its overlap with the absorption spectrum, and the subsequent reemission to a longer-wavelength range, may noticeably affect the process of stimulated Raman scattering (SRS) in polymethine dyes in multiple scattering media (MSM). This is related to the fact that SRS in such media occurs jointly with random lasing (RL), which favors SRS and forms with it a unified nonlinear process. Reemission into the long-wavelength spectral range amplified in MSM causes the RL spectrum to shift to longer wavelengths and initiates the long-wavelength band of RL, in which the main part of the lasing energy is concentrated. This weakens or completely stops the SRS if the band is beyond the range of possible spectral localisation of the Stokes lines. This process depends on the efficiency of light scattering, dye concentration, temperature and pump intensity; hence, there exist optimal values of these parameters for obtaining SRS in MSM.
Kamendi, Harriet; Barthlow, Herbert; Lengel, David; Beaudoin, Marie-Eve; Snow, Debra; Mettetal, Jerome T; Bialecki, Russell A
2016-10-01
While the molecular pathways of baclofen toxicity are understood, the relationships between baclofen-mediated perturbation of individual target organs and systems involved in cardiovascular regulation are not clear. Our aim was to use an integrative approach to measure multiple cardiovascular-relevant parameters [CV: mean arterial pressure (MAP), systolic BP, diastolic BP, pulse pressure, heart rate (HR); CNS: EEG; renal: chemistries and biomarkers of injury] in tandem with the pharmacokinetic properties of baclofen to better elucidate the site(s) of baclofen activity. Han-Wistar rats were administered vehicle or ascending doses of baclofen (3, 10 and 30 mg·kg(-1) , p.o.) at 4 h intervals and baclofen-mediated changes in parameters recorded. A pharmacokinetic-pharmacodynamic model was then built by implementing an existing mathematical model of BP in rats. Final model fits resulted in reasonable parameter estimates and showed that the drug acts on multiple homeostatic processes. In addition, the models testing a single effect on HR, total peripheral resistance or stroke volume alone did not describe the data. A final population model was constructed describing the magnitude and direction of the changes in MAP and HR. The systems pharmacology model developed fits baclofen-mediated changes in MAP and HR well. The findings correlate with known mechanisms of baclofen pharmacology and suggest that similar models using limited parameter sets may be useful to predict the cardiovascular effects of other pharmacologically active substances. © 2016 The British Pharmacological Society.
Fast dictionary generation and searching for magnetic resonance fingerprinting.
Jun Xie; Mengye Lyu; Jian Zhang; Hui, Edward S; Wu, Ed X; Ze Wang
2017-07-01
A super-fast dictionary generation and searching (DGS) algorithm was developed for MR parameter quantification using magnetic resonance fingerprinting (MRF). MRF is a new technique for simultaneously quantifying multiple MR parameters using one temporally resolved MR scan. However, it has a multiplicative computational complexity, resulting in a heavy burden of dictionary generation, storage, and retrieval that can easily become intractable for state-of-the-art computers. Based on a retrospective analysis of the dictionary matching objective function, a multi-scale, ZOOM-like DGS algorithm, dubbed MRF-ZOOM, was proposed. MRF ZOOM is quasi-parameter-separable, so the multiplicative computational complexity is broken into an additive one. Evaluations showed that MRF ZOOM was hundreds or thousands of times faster than the original MRF parameter quantification method, even without counting dictionary generation time. Using real data, it yielded nearly the same results as the original method. MRF ZOOM provides a super-fast solution for MR parameter quantification.
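The key idea, breaking a multiplicative grid search into sequential coarse-to-fine one-dimensional searches, can be sketched as follows. The two-parameter signal model, search ranges, and initial guess are stand-ins for illustration, not the actual MRF sequence or the MRF-ZOOM schedule:

```python
import numpy as np

# Stand-in signal model with quasi-separable parameters (T1, T2).
t = np.linspace(0.05, 3.0, 60)
model = lambda T1, T2: (1 - np.exp(-t / T1)) * np.exp(-t / T2)

truth = model(1.2, 0.08)
obs = truth + 0.01 * np.random.default_rng(2).standard_normal(t.size)

def zoom_1d(f, lo, hi, levels=4, n=11):
    """Coarse-to-fine 1-D search: evaluate n points, zoom around the best."""
    for _ in range(levels):
        grid = np.linspace(lo, hi, n)
        k = np.argmax([f(g) for g in grid])
        lo, hi = grid[max(k - 1, 0)], grid[min(k + 1, n - 1)]
    return 0.5 * (lo + hi)

# Normalized inner product, the usual MRF matching score.
corr = lambda a, b: np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
T2 = zoom_1d(lambda x: corr(obs, model(1.0, x)), 0.02, 0.5)  # fix a T1 guess
T1 = zoom_1d(lambda x: corr(obs, model(x, T2)), 0.3, 3.0)    # then refine T1
print(round(T1, 3), round(T2, 3))
```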
Chen, Xiaocheng; Cao, Gang; Jiang, Jianping
2014-01-01
Objective: The present study examined the pharmacokinetic profiles of two iridoid glycosides named morroniside and loganin in rat plasma after oral administration of crude and processed Cornus officinalis. Materials and Methods: A rapid, selective and specific high-performance liquid chromatography/electrospray ionization tandem mass spectrometry method with multiple reaction monitoring mode was developed to simultaneously investigate the pharmacokinetic profiles of morroniside and loganin in rat plasma after oral administration of crude C. officinalis and its jiuzhipin. Results: The morroniside and loganin in crude and processed C. officinalis could be simultaneously determined within 7.4 min. Linear calibration curves were obtained over the concentration range of 45.45-4800 ng/mL for all the analytes. The intra- and inter-day precisions (relative standard deviation) were less than 2.84% and 4.12%, respectively. Conclusion: The pharmacokinetic parameters of the two iridoid glucosides were also compared systematically between crude and processed C. officinalis. This paper provides theoretical proof for further explaining the processing mechanism of Traditional Chinese Medicines. PMID:24914290
NASA Astrophysics Data System (ADS)
Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong
2018-06-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. By using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to soil moisture content status rather than to the moisture function parameter. The nitrification process becomes more important at low moisture content and low temperature. However, the changing importance of nitrification activity with respect to temperature change relies strongly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution while reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); and the empirical constant in the moisture response function (m) is the least important one. The vertical distribution of soil moisture, but not temperature, plays the predominant role in controlling nitrogen reactions. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers are faced with multiple ways of establishing nitrogen models.
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds, and is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g., the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (i.e., MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures the variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including the results of basin-level calibration for the NHM using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
Zhang, Wei; Shmuylovich, Leonid; Kovacs, Sandor J
2009-01-01
Using a simple harmonic oscillator model (the PDF formalism), every early filling E-wave can be uniquely described by a set of parameters (x(0), c, and k). Parameter c in the PDF formalism is a damping or relaxation parameter that measures the energy loss during the filling process. Based on Bernoulli's equation and kinematic modeling, we derived a causal correlation between the relaxation parameter c in the PDF formalism and a feature of the pressure contour during filling: the pressure recovery ratio, defined by the left ventricular pressure difference between diastasis and minimum pressure, normalized to the pressure difference between a fiducial pressure and minimum pressure [PRR = (P(Diastasis)-P(Min))/(P(Fiducial)-P(Min))]. We analyzed multiple heart beats from one human subject to validate the correlation. Further validation among more patients is warranted. PRR is the invasive causal analogue of the noninvasive E-wave relaxation parameter c. PRR has the potential to be calculated using automated methodology in the catheterization lab in real time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qian; University of the Chinese Academy of Sciences, Beijing 100039; Li, Bincheng, E-mail: bcli@uestc.ac.cn
2015-12-07
In this paper, the photocarrier radiometry (PCR) technique with multiple pump beam sizes is employed to determine simultaneously the electronic transport parameters (the carrier lifetime, the carrier diffusion coefficient, and the front surface recombination velocity) of silicon wafers. By employing multiple pump beam sizes, the influence of the instrumental frequency response on the multi-parameter estimation is totally eliminated. A nonlinear PCR model is developed to interpret the PCR signal. Theoretical simulations are performed to investigate the uncertainties of the estimated parameter values by investigating the dependence of a mean square variance on the corresponding transport parameters, and the results are compared to those obtained by the conventional frequency-scan method, in which only the frequency dependences of the PCR amplitude and phase are recorded at a single pump beam size. Simulation results show that the proposed multiple-pump-beam-size method can improve significantly the accuracy of the determination of the electronic transport parameters. Comparative experiments with a p-type silicon wafer with resistivity 0.1-0.2 Ω·cm are performed, and the electronic transport properties are determined simultaneously. The estimated uncertainties of the carrier lifetime, diffusion coefficient, and front surface recombination velocity are approximately ±10.7%, ±8.6%, and ±35.4% by the proposed multiple-pump-beam-size method, which is much improved over ±15.9%, ±29.1%, and >±50% by the conventional frequency-scan method. The transport parameters determined by the proposed multiple-pump-beam-size PCR method are in good agreement with those obtained by a steady-state PCR imaging technique.
Plasma effect on fast-electron-impact-ionization from 2p state of hydrogen-like ions
NASA Astrophysics Data System (ADS)
Qi, Y. Y.; Ning, L. N.; Wang, J. G.; Qu, Y. Z.
2013-12-01
Plasma effects on the high-energy electron-impact ionization process from the 2p orbital of hydrogen-like ions embedded in weakly coupled plasmas are investigated in the first Born approximation. The plasma screening of the Coulomb interaction between charged particles is represented by the Debye-Hückel model. The screening of the Coulomb interactions decreases the ionization energies and modifies the wave functions of not only the bound orbital but also the continuum. For a fixed ratio ɛ/I2p (where I2p is the ionization energy of the 2p state and ɛ is the energy of the continuum electron), the number of angular-momentum states contributing to the summation in the generalized oscillator strength densities is reduced as the plasma screening becomes stronger, so the contribution from the lower-angular-momentum states dominates the generalized oscillator strength densities. The threshold phenomena in the generalized oscillator strength densities and the double differential cross sections are therefore remarkable: additional minima, pronounced enhancements, and resonance peaks emerge in a certain energy region, whose position and width are related to the closeness of the screening parameter δ to the critical value δnlc, corresponding to the plasma condition at which the bound state |nl⟩ just merges into the continuum. Multiple virtual-state enhancements and multiple shape resonances in a certain energy domain also appear in the single differential cross section whenever the plasma screening parameter passes through a critical value δnlc. This behavior is similar to that in the photo-ionization process, but with a difference: only the dipole transition occurs in photo-ionization, whereas multipole transitions occur in the electron-impact ionization process, so the multiple virtual-state enhancements and multiple shape resonances appear more frequently than in photo-ionization.
Solar Data Mining at Georgia State University
NASA Astrophysics Data System (ADS)
Angryk, R.; Martens, P. C.; Schuh, M.; Aydin, B.; Kempton, D.; Banda, J.; Ma, R.; Naduvil-Vadukootu, S.; Akkineni, V.; Küçük, A.; Filali Boubrahimi, S.; Hamdi, S. M.
2016-12-01
In this talk we give an overview of research projects related to solar data analysis that are conducted at Georgia State University. We will provide an update on multiple advances made by our research team in the analysis of image parameters, spatio-temporal pattern mining, temporal data analysis, and our experiences with big, heterogeneous solar data visualization, analysis, processing, and storage. We will also discuss up-to-date data mining methodologies and their importance for big data-driven solar physics research.
Consensus Classification Using Non-Optimized Classifiers.
Brownfield, Brett; Lemos, Tony; Kalivas, John H
2018-04-03
Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method; that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured at 13 unique chemical and physical variables. In all cases, fusion of nonoptimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
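As a minimal illustration of the fusion step named above (majority vote over the collected classifications, not the Article's full non-optimized scheme), consider the following sketch; the classifier predictions are hypothetical placeholders:

    import numpy as np

    def majority_vote(labels):
        """Fuse per-classifier labels (n_classifiers x n_samples)
        into one label per sample by majority vote."""
        labels = np.asarray(labels)
        fused = []
        for sample_labels in labels.T:
            values, counts = np.unique(sample_labels, return_counts=True)
            fused.append(values[np.argmax(counts)])
        return np.array(fused)

    # Hypothetical predictions from three classifiers for five samples
    preds = [[0, 1, 2, 1, 0],
             [0, 1, 1, 1, 0],
             [2, 1, 2, 0, 0]]
    print(majority_vote(preds))  # -> [0 1 2 1 0]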
NASA Astrophysics Data System (ADS)
Peng, Lanfang; Liu, Paiyu; Feng, Xionghan; Wang, Zimeng; Cheng, Tao; Liang, Yuzhen; Lin, Zhang; Shi, Zhenqing
2018-03-01
Predicting the kinetics of heavy metal adsorption and desorption in soil requires consideration of multiple heterogeneous soil binding sites and variations of reaction chemistry conditions. Although chemical speciation models have been developed for predicting the equilibrium of metal adsorption on soil organic matter (SOM) and important mineral phases (e.g. Fe and Al (hydr)oxides), there is still a lack of modeling tools for predicting the kinetics of metal adsorption and desorption reactions in soil. In this study, we developed a unified model for the kinetics of heavy metal adsorption and desorption in soil based on the equilibrium models WHAM 7 and CD-MUSIC, which specifically consider metal kinetic reactions with multiple binding sites of SOM and soil minerals simultaneously. For each specific binding site, metal adsorption and desorption rate coefficients were constrained by the local equilibrium partition coefficients predicted by WHAM 7 or CD-MUSIC, and, for each metal, the desorption rate coefficients of various binding sites were constrained by their metal binding constants with those sites. The model had only one fitting parameter for each soil binding phase, and all other parameters were derived from WHAM 7 and CD-MUSIC. A stirred-flow method was used to study the kinetics of Cd, Cu, Ni, Pb, and Zn adsorption and desorption in multiple soils under various pH and metal concentrations, and the model successfully reproduced most of the kinetic data. We quantitatively elucidated the significance of different soil components and important soil binding sites during the adsorption and desorption kinetic processes. Our model has provided a theoretical framework to predict metal adsorption and desorption kinetics, which can be further used to predict the dynamic behavior of heavy metals in soil under various natural conditions by coupling other important soil processes.
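A minimal sketch of the constraint described above, for a single hypothetical binding site: the desorption rate coefficient is derived from the adsorption rate coefficient and a local equilibrium partition coefficient (the K_eq value below is made up, standing in for a WHAM 7 or CD-MUSIC prediction), so only one rate parameter remains to be fitted:

    from scipy.integrate import solve_ivp

    K_eq = 50.0           # hypothetical equilibrium partition coefficient (L/g)
    k_ads = 0.2           # adsorption rate coefficient to be fitted (L/g/h)
    k_des = k_ads / K_eq  # desorption rate constrained by local equilibrium (1/h)

    def kinetics(t, y, solid_g_per_L=1.0):
        # y = [aqueous metal (mg/L), adsorbed metal (mg/g)]
        c_aq, q = y
        rate = k_ads * c_aq - k_des * q          # net adsorption per gram of solid
        return [-rate * solid_g_per_L, rate]

    sol = solve_ivp(kinetics, (0.0, 48.0), [1.0, 0.0])
    print(sol.y[:, -1])   # q/c_aq approaches K_eq at long times

In the paper's multi-site setting the same construction is repeated for each SOM and mineral binding site, with each site's partition coefficient supplied by the equilibrium model.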
Image Tiling for Profiling Large Objects
NASA Technical Reports Server (NTRS)
Venkataraman, Ajit; Schock, Harold; Mercer, Carolyn R.
1992-01-01
Three-dimensional surface measurements of large objects are required in a variety of industrial processes. The nature of these measurements is changing as optical instruments are beginning to replace conventional contact probes scanned over the objects. A common characteristic of optical surface profilers is the trade-off between measurement accuracy and field of view. In order to measure a large object with high accuracy, multiple views are required. An accurate transformation between the different views is needed to bring about their registration. In this paper, we demonstrate how the transformation parameters can be obtained precisely by choosing control points which lie in the overlapping regions of the images. A good starting point for the transformation parameters is obtained from knowledge of the scanner position. The selection of the control points is independent of the object geometry. By successively recording multiple views and obtaining transformations with respect to a single coordinate system, a complete physical model of an object can be obtained. Since all data are in the same coordinate system, they can be used for building automatic models of free-form surfaces.
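A common way to obtain such inter-view transformation parameters from matched control points in the overlap regions is least-squares rigid registration (the Kabsch/Procrustes solution via SVD); the sketch below is a generic version of that computation under a rotation-plus-translation model, not necessarily the paper's exact procedure:

    import numpy as np

    def rigid_transform(A, B):
        """Least-squares rotation R and translation t with B ≈ A @ R.T + t,
        from matched control points A, B of shape (n, 3)."""
        cA, cB = A.mean(axis=0), B.mean(axis=0)
        H = (A - cA).T @ (B - cB)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cB - cA @ R.T
        return R, t

    # Hypothetical matched control points in the overlap of two views
    rng = np.random.default_rng(0)
    A = rng.random((8, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    B = A @ R_true.T + np.array([0.5, -0.2, 1.0])
    R, t = rigid_transform(A, B)
    print(np.allclose(A @ R.T + t, B))  # True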
Drop impact into a deep pool: vortex shedding and jet formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agbaglah, G.; Thoraval, M. -J.; Thoroddsen, S. T.
2015-02-01
One of the simplest splashing scenarios results from the impact of a single drop on a deep pool. The traditional understanding of this process is that the impact generates an axisymmetric sheet-like jet that later breaks up into secondary droplets. Recently it was shown that even this simplest of scenarios is more complicated than expected because multiple jets can be generated from a single impact event and there are transitions in the multiplicity of jets as the experimental parameters are varied. Here, we use experiments and numerical simulations of a single drop impacting on a deep pool to examine the transition from impacts that produce a single jet to those that produce two jets. Using high-speed X-ray imaging methods we show that vortex separation within the drop leads to the formation of a second jet long after the formation of the ejecta sheet. Using numerical simulations we develop a phase diagram for this transition and show that the capillary number is the most appropriate order parameter for the transition.
Automated carbon dioxide cleaning system
NASA Technical Reports Server (NTRS)
Hoppe, David T.
1991-01-01
Solidified CO2 pellets are an effective blast media for the cleaning of a variety of materials. CO2 is obtained from the waste gas streams generated from other manufacturing processes and therefore does not contribute to the greenhouse effect, depletion of the ozone layer, or the environmental burden of hazardous waste disposal. The system is capable of removing as much as 90 percent of the contamination from a surface in one pass, or of reaching a high cleanliness level after multiple passes. Although the system is packaged and designed for manual hand-held cleaning processes, the nozzle can easily be attached to the end effector of a robot for automated cleaning of predefined and known geometries. Specific tailoring of cleaning parameters is required to optimize the process for each individual geometry. Using optimum cleaning parameters, the CO2 systems were shown to be capable of cleaning to molecular levels below 0.7 mg/sq ft. The systems were effective for removing a variety of contaminants such as lubricating oils, cutting oils, grease, alcohol residue, biological films, and silicone. The system was effective on steel, aluminum, and carbon phenolic substrates.
Khalil, Mohamed H.; Shebl, Mostafa K.; Kosba, Mohamed A.; El-Sabrout, Karim; Zaki, Nesma
2016-01-01
Aim: This research was conducted to determine the parameters most affecting the hatchability of indigenous and improved local chickens' eggs. Materials and Methods: Five parameters were studied (fertility, early and late embryonic mortalities, shape index, egg weight, and egg weight loss) on four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine the most influential one on hatchability. Results: The results showed significant differences in commercial and scientific hatchability among strains. The Alexandria strain had the highest significant commercial hatchability (80.70%). Among the studied strains, highly significant differences in hatching chick weight were observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. Conclusion: A prediction of hatchability using multiple regression analysis could be a good tool to improve hatchability percentage in chickens. PMID:27651666
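One common way to obtain percent contributions of this kind is from the magnitudes of standardized regression coefficients; the sketch below works under that assumption (the paper's exact contribution measure is not stated here) and uses synthetic data:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((120, 5))     # stand-ins for fertility, mortalities, shape index, ...
    hatch = 0.7 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.random(120)

    # Standardize, fit by least squares, express |beta| as percent contributions
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (hatch - hatch.mean()) / hatch.std()
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(ys)), Xs]), ys, rcond=None)
    contrib = 100 * np.abs(beta[1:]) / np.abs(beta[1:]).sum()
    print(np.round(contrib, 1))  # percent contribution of each parameter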
NASA Astrophysics Data System (ADS)
Wang, Hong; Ren, Bao-Cang; Alzahrani, Faris; Hobiny, Aatef; Deng, Fu-Guo
2017-10-01
Hyperentanglement has significant applications in quantum information processing. Here we present an efficient hyperentanglement concentration protocol (hyper-ECP) for partially hyperentangled Bell states simultaneously entangled in polarization, spatial-mode and time-bin degrees of freedom (DOFs) with the parameter-splitting method, where the parameters of the partially hyperentangled Bell states are known to the remote parties. In this hyper-ECP, only one remote party is required to perform some local operations on the three DOFs of a photon, only the linear optical elements are considered, and the success probability can achieve the maximal value. Our hyper-ECP can be easily generalized to concentrate the N-photon partially hyperentangled Greenberger-Horne-Zeilinger states with known parameters, where the multiple DOFs have largely improved the channel capacity of long-distance quantum communication. All of these make our hyper-ECP more practical and useful in high-capacity long-distance quantum communication.
NASA Astrophysics Data System (ADS)
Bíró, Gábor; Barnaföldi, Gergely Gábor; Biró, Tamás Sándor; Shen, Keming
2018-02-01
The latest high-accuracy identified hadron spectra measurements in high-energy nuclear collisions led us to the investigation of the strongly interacting particles and collective effects in small systems. Since microscopical processes result in a statistical Tsallis-Pareto distribution, the fit parameters q and T are well suited for identifying system size scalings and initial conditions. Moreover, the parameter values provide information on the deviation from the extensive Boltzmann-Gibbs statistics in finite volumes. We apply here the fit procedure developed in our earlier study for proton-proton collisions [1, 2]. The observed mass and center-of-mass energy trends in the hadron production are compared to RHIC dAu and LHC pPb data in different centrality/multiplicity classes. Here we present new results on mass hierarchy in pp and pA from light to heavy hadrons.
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets, and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
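The kernel being accelerated is the solution of Kepler's equation M = E - e sin E for the eccentric anomaly E; a plain CPU-side Newton iteration (the GPU version vectorizes the same update across systems and observations) can be sketched as:

    import numpy as np

    def solve_kepler(M, e, tol=1e-10, max_iter=50):
        """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method."""
        E = np.where(e < 0.8, M, np.pi * np.ones_like(M))  # common starting guess
        for _ in range(max_iter):
            dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E -= dE
            if np.max(np.abs(dE)) < tol:
                break
        return E

    M = np.linspace(0.0, 2.0 * np.pi, 5)
    print(solve_kepler(M, e=0.3))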
McGrory, Sarah; Taylor, Adele M; Kirin, Mirna; Corley, Janie; Pattie, Alison; Cox, Simon R; Dhillon, Baljean; Wardlaw, Joanna M; Doubal, Fergus N; Starr, John M; Trucco, Emanuele; MacGillivray, Thomas J; Deary, Ian J
2017-01-01
Aim: To examine the relationship between retinal vascular morphology and cognitive abilities in a narrow-age cohort of community-dwelling older people. Methods: Digital retinal images taken at age ∼73 years from 683 participants of the Lothian Birth Cohort 1936 (LBC1936) were analysed with Singapore I Vessel Assessment (SIVA) software. Multiple regression models were applied to determine cross-sectional associations between retinal vascular parameters and general cognitive ability (g), memory, processing speed, visuospatial ability, crystallised cognitive ability and change in IQ from childhood to older age. Results: After adjustment for cognitive ability at age 11 years and cardiovascular risk factors, venular length-to-diameter ratio was nominally significantly associated with processing speed (β=−0.116, p=0.01) and g (β=−0.079, p=0.04). Arteriolar length-to-diameter ratio was associated with visuospatial ability (β=0.092, p=0.04). Decreased arteriolar junctional exponent deviation and increased arteriolar branching coefficient values were associated with less relative decline in IQ between childhood and older age (arteriolar junctional exponent deviation: β=−0.101, p=0.02; arteriolar branching coefficient: β=0.089, p=0.04). Data are presented as standardised β coefficients (β) reflecting change in cognitive domain score associated with an increase of 1 SD unit in retinal parameter. None of these nominally significant associations remained significant after correction for multiple statistical testing. Conclusions: Retinal parameters contributed <1% of the variance in the majority of associations observed. Whereas retinal analysis may have potential for early detection of some types of age-related cognitive decline and dementia, our results present little evidence that retinal vascular features are associated with non-pathological cognitive ageing. PMID:28400371
TVA-based assessment of visual attentional functions in developmental dyslexia
Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca
2014-01-01
There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). Especially performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying the impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the "theory of visual attention" (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short-term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of a slight leftward bias that is typical of unimpaired adult readers. PMID:25360129
Phosphatidylcholine Membrane Fusion Is pH-Dependent.
Akimov, Sergey A; Polynkin, Michael A; Jiménez-Munguía, Irene; Pavlov, Konstantin V; Batishchev, Oleg V
2018-05-03
Membrane fusion mediates multiple vital processes in cell life. Specialized proteins mediate the fusion process, and a substantial part of their energy is used for topological rearrangement of the membrane lipid matrix. Therefore, the elastic parameters of lipid bilayers are of crucial importance for fusion processes and for determination of the energy barriers that have to be crossed for the process to take place. In the case of fusion of enveloped viruses (e.g., influenza) with endosomal membrane, the interacting membranes are in an acidic environment, which can affect the membrane's mechanical properties. This factor is often neglected in the analysis of virus-induced membrane fusion. In the present work, we demonstrate that even for membranes composed of zwitterionic lipids, changes of the environmental pH in the physiologically relevant range of 4.0 to 7.5 can affect the rate of the membrane fusion notably. Using a continual model, we demonstrated that the key factor defining the height of the energy barrier is the spontaneous curvature of the lipid monolayer. Changes of this parameter are likely to be caused by rearrangements of the polar part of lipid molecules in response to changes of the pH of the aqueous solution bathing the membrane.
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
A theoretical study of electron multiplication coefficient in a cold-cathode Penning ion generator
NASA Astrophysics Data System (ADS)
Noori, H.; Ranjbar, A. H.; Rahmanipour, R.
2017-11-01
The discharge mechanism of a Penning ion generator (PIG) is seriously influenced by the electron ionization process. A theoretical approach has been proposed to formulate the electron multiplication coefficient, M, of a PIG as a function of the axial magnetic field and the applied voltage. A numerical simulation was used to adjust the free parameters of the expression for M. Using the coefficient M, the values of the effective secondary electron emission coefficient, γeff, were found to range from 0.09 to 0.22. In comparison to the experimental results, the average value of γeff differs from the secondary coefficient of clean and dirty metals by factors of 1.4 and 0.5, respectively.
Manufacture of Sparse-Spectrum Optical Microresonators
NASA Technical Reports Server (NTRS)
Savchenkov, Anatoliy; Iltchenko, Vladimir; Maleki, Lute; Kossakovski, Dimitri
2006-01-01
An alternative design for dielectric optical microresonators and a relatively simple process to fabricate them have been proposed. The proposed microresonators would exploit the same basic physical phenomena as those of microtorus optical resonators and of the microsphere optical resonators described elsewhere. The resonances in such devices are associated with the propagation of electromagnetic waves along circumferential paths in "whispering-gallery" modes. The main advantage afforded by the proposal is that the design and the fabrication process are expected to be amenable to production of multiple microresonators having reproducible spectral parameters -- including, most notably, high values of the resonance quality factor (Q) and reproducible resonance frequencies.
FAST TRACK COMMUNICATION: Attosecond correlation dynamics during electron tunnelling from molecules
NASA Astrophysics Data System (ADS)
Walters, Zachary B.; Smirnova, Olga
2010-08-01
In this communication, we present an analytical theory of strong-field ionization of molecules, which takes into account the rearrangement of multiple interacting electrons during the ionization process. We show that such rearrangement offers an alternative pathway to the ionization of orbitals more deeply bound than the highest occupied molecular orbital. This pathway is not subject to the full exponential suppression characteristic of direct tunnel ionization from the deeper orbitals. The departing electron produces an 'attosecond correlation pulse' which controls the rearrangement during the tunnelling process. The shape and duration of this pulse are determined by the electronic structure of the relevant states, molecular orientation and laser parameters.
Systems Analyze Water Quality in Real Time
NASA Technical Reports Server (NTRS)
2010-01-01
A water analyzer developed under Small Business Innovation Research (SBIR) contracts with Kennedy Space Center now monitors treatment processes at water and wastewater facilities around the world. Originally designed to provide real-time detection of nutrient levels in hydroponic solutions for growing plants in space, the ChemScan analyzer, produced by ASA Analytics Inc., of Waukesha, Wisconsin, utilizes spectrometry and chemometric algorithms to automatically analyze multiple parameters in the water treatment process with little need for maintenance, calibration, or operator intervention. The company has experienced a compound annual growth rate of 40 percent over its 15-year history as a direct result of the technology's success.
Aungkulanon, Pasura; Luangpaiboon, Pongchanun
2016-01-01
Response surface methods via first- or second-order models are important in manufacturing processes. This study, however, proposes differently structured mechanisms of vertical transportation systems (VTS) embedded in a shuffled frog leaping-based approach. There are three VTS scenarios: a motion reaching a normal operating velocity, and motions both reaching and not reaching a transitional motion. These variants were applied to simultaneously inspect multiple responses affected by machining parameters in multi-pass turning processes. The numerical results of two machining optimisation problems demonstrated the high performance measures of the proposed methods when compared to other optimisation algorithms for an actual deep cut design.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
2004-01-01
The systems engineering description of a wideband communications channel is provided which is based upon the fundamental propagation aspects of the problem. In particular, the well known time variant description of a channel is formulated from the basic multiple scattering processes that occur in a random propagation medium. Such a connection is required if optimal processing methods are to be applied to mitigate the deleterious random fading and multipathing of the channel. An example is given which demonstrates how the effective bandwidth of the channel is diminished due to atmospheric propagation impairments.
Processing circuit with asymmetry corrector and convolutional encoder for digital data
NASA Technical Reports Server (NTRS)
Pfiffner, Harold J. (Inventor)
1987-01-01
A processing circuit is provided for correcting for input parameter variations, such as data and clock signal symmetry, phase offset and jitter, noise and signal amplitude, in incoming data signals. An asymmetry corrector circuit performs the correcting function and furnishes the corrected data signals to a convolutional encoder circuit. The corrector circuit further forms a regenerated clock signal from clock pulses in the incoming data signals and another clock signal at a multiple of the incoming clock signal. These clock signals are furnished to the encoder circuit so that encoded data may be furnished to a modulator at a high data rate for transmission.
Analysis of translucent and opaque photocathodes.
Sizelove, J R; Love III, J A
1966-09-01
By an analysis of the photodetection process, the response of photodetectors to wide band, noncoherent light and guidelines for its improvement are determined. In this paper, the phenomenon of multiple reflections within the emitter of a reflecting-translucent and a reflecting-opaque photocathode is analyzed. Geometrical and optical configurations and solid state parameters are evaluated in terms of their effect on the photodetection process. The quantum yield, the percent of incident light absorbed, and the collection efficiency are determined as functions of the thickness of the emitting layer. These results are then employed to suggest areas of improvement in the use of state-of-the-art photocathodes.
NASA Astrophysics Data System (ADS)
An, Li-sha; Liu, Chun-jiao; Liu, Ying-wen
2018-05-01
In the polysilicon chemical vapor deposition (CVD) reactor, the operating parameters affect the polysilicon output in a complex, coupled manner. Therefore, it is very important to address the coupling of multiple parameters and solve the optimization in a computationally efficient manner. Here, we adopted Response Surface Methodology (RSM) to analyze the complex coupling effects of different operating parameters on the silicon deposition rate (R) and further achieve effective optimization of the silicon CVD system. Based on finite numerical experiments, an accurate RSM regression model is obtained and applied to predict R for different operating parameters, including temperature (T), pressure (P), inlet velocity (V), and inlet mole fraction of H2 (M). An analysis of variance is conducted to describe the rationality of the regression model and examine the statistical significance of each factor. Consequently, the optimum combination of operating parameters for the silicon CVD reactor is: T = 1400 K, P = 3.82 atm, V = 3.41 m/s, M = 0.91. The validation tests and optimum solution show that the results are in good agreement with those from the CFD model and that the deviations of the predicted values are less than 4.19%. This work provides theoretical guidance for operating the polysilicon CVD process.
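An RSM regression model of this kind is typically a full second-order polynomial in the coded factors, fitted by least squares; a generic sketch for two of the four factors, using a hypothetical central composite design and made-up responses (not the paper's data):

    import numpy as np

    # Hypothetical central composite design in two coded factors (T, P)
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414], [0, 0]])
    R = np.array([1.00, 1.40, 1.20, 1.90, 0.90, 1.70, 1.10, 1.50, 1.60])

    T, P = X[:, 0], X[:, 1]
    # Full second-order model: R ≈ b0 + b1*T + b2*P + b12*T*P + b11*T^2 + b22*P^2
    D = np.column_stack([np.ones_like(T), T, P, T * P, T**2, P**2])
    coeffs, *_ = np.linalg.lstsq(D, R, rcond=None)
    print(coeffs)   # fitted response-surface coefficients

The fitted surface can then be searched (analytically or numerically) for the optimum factor combination, as done in the study for all four operating parameters.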
An extended harmonic balance method based on incremental nonlinear control parameters
NASA Astrophysics Data System (ADS)
Khodaparast, Hamed Haddad; Madinei, Hadi; Friswell, Michael I.; Adhikari, Sondipon; Coggon, Simon; Cooper, Jonathan E.
2017-02-01
A new formulation for calculating the steady-state responses of multiple-degree-of-freedom (MDOF) non-linear dynamic systems due to harmonic excitation is developed. This is aimed at solving multi-dimensional nonlinear systems using linear equations. Nonlinearity is parameterised by a set of 'non-linear control parameters' such that the dynamic system is effectively linear for zero values of these parameters and nonlinearity increases with increasing values of these parameters. Two sets of linear equations which are formed from a first-order truncated Taylor series expansion are developed. The first set of linear equations provides the summation of sensitivities of linear system responses with respect to non-linear control parameters and the second set are recursive equations that use the previous responses to update the sensitivities. The obtained sensitivities of steady-state responses are then used to calculate the steady state responses of non-linear dynamic systems in an iterative process. The application and verification of the method are illustrated using a non-linear Micro-Electro-Mechanical System (MEMS) subject to a base harmonic excitation. The non-linear control parameters in these examples are the DC voltages that are applied to the electrodes of the MEMS devices.
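Schematically (our notation, not the authors'), the first-order truncated Taylor expansion underlying the method writes the steady-state harmonic response x as

    \mathbf{x}(\boldsymbol{\varepsilon}) \approx \mathbf{x}(\mathbf{0}) + \sum_{k} \varepsilon_{k}\, \left.\frac{\partial \mathbf{x}}{\partial \varepsilon_{k}}\right|_{\boldsymbol{\varepsilon}=\mathbf{0}}

where the ε_k are the non-linear control parameters (e.g., the DC voltages of the MEMS device) and x(0) is the response of the effectively linear system; the sensitivities ∂x/∂ε_k are then updated recursively from the previous responses during the iteration.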
Multiple frequency method for operating electrochemical sensors
Martin, Louis P [San Ramon, CA
2012-05-15
A multiple frequency method for the operation of a sensor to measure a parameter of interest using calibration information, including the steps of: exciting the sensor at a first frequency, providing a first sensor response; exciting the sensor at a second frequency, providing a second sensor response; using the second sensor response at the second frequency and the calibration information to produce a calculated concentration of the interfering parameters; and using the first sensor response at the first frequency, the calculated concentration of the interfering parameters, and the calibration information to measure the parameter of interest.
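Under the simplest reading of the claim, with linear calibration, the second-frequency response isolates the interferent, whose contribution is then subtracted at the first frequency; a toy sketch with made-up calibration constants (not the patent's actual calibration procedure):

    # Toy two-frequency scheme: at f2 the sensor responds (mostly) to the
    # interferent alone; at f1 it responds to both target and interferent.
    s2_per_interferent = 0.50   # calibration: response at f2 per unit interferent
    s1_per_interferent = 0.20   # calibration: response at f1 per unit interferent
    s1_per_target = 0.80        # calibration: response at f1 per unit target

    def measure_target(r1, r2):
        interferent = r2 / s2_per_interferent              # step 1: level from f2
        corrected = r1 - s1_per_interferent * interferent  # step 2: remove from f1
        return corrected / s1_per_target                   # step 3: target value

    print(measure_target(r1=1.0, r2=0.25))  # -> (1.0 - 0.2*0.5)/0.8 = 1.125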
Digital Detection and Processing of Multiple Quadrature Harmonics for EPR Spectroscopy
Ahmad, R.; Som, S.; Kesselring, E.; Kuppusamy, P.; Zweier, J.L.; Potter, L.C.
2010-01-01
A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. PMID:20971667
Digital detection and processing of multiple quadrature harmonics for EPR spectroscopy.
Ahmad, R; Som, S; Kesselring, E; Kuppusamy, P; Zweier, J L; Potter, L C
2010-12-01
A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. Copyright © 2010 Elsevier Inc. All rights reserved.
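The quadrature harmonic extraction at the heart of such a receiver amounts to a digital lock-in: multiply the digitized signal by a complex reference at each modulation harmonic and low-pass (here, average over an integer number of periods). A generic numpy sketch, not the authors' implementation:

    import numpy as np

    def quadrature_harmonics(signal, fs, f_mod, n_harmonics=3):
        """In-phase/out-of-phase components of the first few modulation
        harmonics by digital lock-in detection."""
        t = np.arange(len(signal)) / fs
        out = []
        for n in range(1, n_harmonics + 1):
            ref = np.exp(-2j * np.pi * n * f_mod * t)
            out.append(2.0 * np.mean(signal * ref))  # complex: I + jQ at harmonic n
        return np.array(out)

    # Synthetic test: 1st and 2nd harmonics with known amplitudes/phases
    fs, f_mod = 1.0e6, 10.0e3
    t = np.arange(100000) / fs
    sig = 1.0 * np.cos(2 * np.pi * f_mod * t) + 0.3 * np.sin(2 * np.pi * 2 * f_mod * t)
    print(np.round(quadrature_harmonics(sig, fs, f_mod), 3))  # ≈ [1+0j, -0.3j, 0+0j]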
Nie, Yifan; Liang, Chaoping; Cha, Pil-Ryung; Colombo, Luigi; Wallace, Robert M; Cho, Kyeongjae
2017-06-07
Controlled growth of crystalline solids is critical for device applications, and atomistic modeling methods have been developed for bulk crystalline solids. The kinetic Monte Carlo (KMC) simulation method provides detailed atomic-scale processes during solid growth over realistic time scales, but its application to the growth modeling of van der Waals (vdW) heterostructures has not yet been developed. Specifically, the growth of single-layered transition metal dichalcogenides (TMDs) is currently facing tremendous challenges, and a detailed understanding based on KMC simulations would provide critical guidance to enable controlled growth of vdW heterostructures. In this work, a KMC simulation method is developed for growth modeling of the vdW epitaxy of TMDs. The KMC method introduces full material parameters for TMDs in bottom-up synthesis: metal and chalcogen adsorption/desorption/diffusion on the substrate and the grown TMD surface, TMD stacking sequence, chalcogen/metal ratio, flake edge diffusion, and vacancy diffusion. The KMC processes result in multiple kinetic behaviors associated with various growth behaviors observed in experiments. Different phenomena observed during the vdW epitaxy process are analysed in terms of complex competitions among multiple kinetic processes. The KMC method is used in the investigation and prediction of growth mechanisms, which provide qualitative suggestions to guide experimental study.
Predicting dual-task performance with the Multiple Resources Questionnaire (MRQ).
Boles, David B; Bursk, Jonathan H; Phillips, Jeffrey B; Perdelwitz, Jason R
2007-02-01
The objective was to assess the validity of the Multiple Resources Questionnaire (MRQ) in predicting dual-task interference. Subjective workload measures such as the Subjective Workload Assessment Technique (SWAT) and NASA Task Load Index are sensitive to single-task parameters and dual-task loads but have not attempted to measure workload in particular mental processes. An alternative is the MRQ. In Experiment 1, participants completed simple laboratory tasks and the MRQ after each. Interference between tasks was then correlated to three different task similarity metrics: profile similarity, based on r² between ratings; overlap similarity, based on summed minima; and overall demand, based on summed ratings. Experiment 2 used similar methods but more complex computer-based games. In Experiment 1 the MRQ moderately predicted interference (r = +.37), with no significant difference between metrics. In Experiment 2 the metric effect was significant, with overlap similarity excelling in predicting interference (r = +.83). Mean ratings showed high diagnosticity in identifying specific mental processing bottlenecks. The MRQ shows considerable promise as a cognitive-process-sensitive workload measure. Potential applications of the MRQ include the identification of dual-processing bottlenecks as well as process overloads in single tasks, preparatory to redesign in areas such as air traffic management, advanced flight displays, and medical imaging.
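The three similarity metrics can be stated compactly; a sketch under the assumption that each task yields a vector of MRQ ratings over the same resource scales (the numbers below are hypothetical):

    import numpy as np

    def profile_similarity(a, b):
        """r^2 between the two rating profiles."""
        return np.corrcoef(a, b)[0, 1] ** 2

    def overlap_similarity(a, b):
        """Sum of the elementwise minima of the two profiles."""
        return np.minimum(a, b).sum()

    def overall_demand(a, b):
        """Sum of all ratings across both tasks."""
        return a.sum() + b.sum()

    # Hypothetical MRQ ratings (0-100) for two tasks over five resource scales
    task_a = np.array([80, 10, 40, 60, 0])
    task_b = np.array([70, 20, 50, 10, 0])
    print(profile_similarity(task_a, task_b),
          overlap_similarity(task_a, task_b),
          overall_demand(task_a, task_b))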
NASA Astrophysics Data System (ADS)
Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah
2017-04-01
Radio Frequency Identification (RFID) systems have multiple benefits which can improve the operational efficiency of an organization. The advantages are the ability to record data systematically and quickly, reducing human and system errors, and updating the database automatically and efficiently. Often, multiple readers are needed for installation purposes in an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works perfectly. The planning process is also considered an optimization and power adjustment process, because the coordinates of each RFID reader must be determined. Therefore, algorithms inspired by nature are often used. In this study, the PSO algorithm is used because it has a small number of parameters, a fast simulation time, and is easy to use and very practical. However, the PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may disrupt performance and degrade the results of the PSO optimization. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization. The parameters studied are the swarm size and the iteration number. In addition, the study recommends the most effective settings for both parameters, namely 200 for the number of iterations and 800 for the swarm size. Finally, the results of this study will enable PSO to operate more efficiently when optimizing an RFID network planning system.
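A minimal PSO skeleton showing where the two studied parameters enter (the recommended values from this study are used; the fitness function is a toy placeholder for the actual RFID tag-coverage objective, and the inertia/acceleration constants are typical defaults, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n_swarm, n_iter, dim = 800, 200, 2        # recommended settings; dim = reader (x, y)
    w, c1, c2 = 0.7, 1.5, 1.5                 # typical inertia/acceleration constants

    def fitness(pos):                          # placeholder for RFID tag coverage
        return -np.sum((pos - 5.0) ** 2, axis=1)  # toy objective: optimum at (5, 5)

    x = rng.uniform(0, 10, (n_swarm, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), fitness(x)
    gbest = pbest[np.argmax(pbest_f)]

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = fitness(x)
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmax(pbest_f)]

    print(gbest)  # ≈ [5, 5] for the toy objective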
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
Lawson, Daniel J; Holtrop, Grietje; Flint, Harry
2011-07-01
Process models specified by non-linear dynamic differential equations contain many parameters, which often must be inferred from a limited amount of data. We discuss a hierarchical Bayesian approach combining data from multiple related experiments in a meaningful way, which permits more powerful inference than treating each experiment as independent. The approach is illustrated with a simulation study and example data from experiments replicating the aspects of the human gut microbial ecosystem. A predictive model is obtained that contains prediction uncertainty caused by uncertainty in the parameters, and we extend the model to capture situations of interest that cannot easily be studied experimentally. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Dralle, D.; Karst, N.; Thompson, S. E.
2015-12-01
Multiple competing theories suggest that power law behavior governs the observed first-order dynamics of streamflow recessions - the important process by which catchments dry out via the stream network, altering the availability of surface water resources and in-stream habitat. Frequently modeled as dq/dt = -aq^b, recessions typically exhibit a high degree of variability, even within a single catchment, as revealed by significant shifts in the values of "a" and "b" across recession events. One potential source of this variability lies in underlying, hard-to-observe fluctuations in how catchment water storage is partitioned amongst distinct storage elements, each having different discharge behaviors. Testing this and competing hypotheses with widely available streamflow time series, however, has been hindered by a power law scaling artifact that obscures meaningful covariation between the recession parameters "a" and "b". Here we briefly outline a technique that removes this artifact, revealing intriguing new patterns in the joint distribution of recession parameters. Using long-term flow data from catchments in Northern California, we explore temporal variations and find that the "a" parameter varies strongly with catchment wetness. We then explore how the "b" parameter changes with "a" and find that measures of its variation are maximized at intermediate "a" values. We propose an interpretation of this pattern based on statistical mechanics, in which "b" can be viewed as an indicator of the catchment "microstate" - i.e., the partitioning of storage - and "a" as a measure of the catchment macrostate (i.e., the total storage). In statistical mechanics, entropy (i.e., microstate variance, here the variance of "b") is maximized for intermediate values of extensive variables (i.e., wetness, "a"), as observed in the recession data. This interpretation of "a" and "b" was supported by model runs using a multiple-reservoir catchment toy model, and lends support to the hypothesis that power law streamflow recession dynamics, and their variations, have their origin in the multiple modalities of storage partitioning.
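In practice, per-event recession parameters are often estimated by regressing log(-dq/dt) on log(q); a minimal sketch of that step for one synthetic event (the authors' artifact-removal technique is not reproduced here):

    import numpy as np

    # Synthetic recession: dq/dt = -a*q**b with a=0.05, b=1.5 (daily flows)
    a_true, b_true = 0.05, 1.5
    q = [10.0]
    for _ in range(30):
        q.append(q[-1] - a_true * q[-1] ** b_true)
    q = np.array(q)

    # Estimate a and b from log(-dq/dt) = log(a) + b*log(q)
    dqdt = np.diff(q)                 # one-day differences, negative in recession
    qm = 0.5 * (q[1:] + q[:-1])       # midpoint flows
    b_est, log_a_est = np.polyfit(np.log(qm), np.log(-dqdt), 1)
    print(b_est, np.exp(log_a_est))   # ≈ 1.5 and 0.05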
Targeted versus statistical approaches to selecting parameters for modelling sediment provenance
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick
2017-04-01
One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps for this approach is selecting the appropriate suite of parameters or fingerprints used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting will be reviewed. Thereafter, opportunities and limitations of these approaches and some future research directions will be presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behavior is characterized by constancy in sediment properties, where the properties of sediment sources remain constant, or at the very least, any variation in these properties should occur in a predictable and measurable way. Therefore, properties selected for sediment source fingerprinting should remain constant through sediment detachment, transportation and deposition processes, or vary in a predictable and measurable way. One approach to select conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of a discrimination statistic (e.g. Kruskal-Wallis H-test, Mann-Whitney U-test) and a parameter selection statistic (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward, and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. Moving forward, it would be beneficial if researchers tested their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results, and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment
2013-10-01
presented below. Quantitative output provides an extensive set of data but we have chosen to present the most relevant parameters that are reflected in... multiple parameters. Most samples have been mechanically tested and data extracted for multiple parameters. Histological evaluation of a subset of... Sumner, D. R. Saline Irrigation Does Not Affect Bone Formation or Fixation Strength of Hydroxyapatite/Tricalcium Phosphate-Coated Implants in a Rat Model
Briand, Catherine; Sablier, Juliette; Therrien, Julie-Anne; Charbonneau, Karine; Pelletier, Jean-François; Weiss-Lambrou, Rhoda
2018-07-01
This study aimed to test the feasibility of using a mobile device (Apple technology: iPodTouch®, iPhone® or iPad®) among people with severe mental illness (SMI) in a rehabilitation and recovery process and to document the parameters to be taken into account and the issues involved in implementing this technology in living environments and mental health care settings. A qualitative multiple case study design and multiple data sources were used to understand each case in depth. A clinical and comprehensive analysis of 11 cases was conducted with exploratory and descriptive aims (and the beginnings of explanation building). The multiple-case analysis brought out four typical profiles to illustrate the extent of integration of a personal digital assistant (PDA) as a tool to support mental health rehabilitation and recovery. Each profile highlights four categories of variables identified as determining factors in this process: (1) state of health and related difficulties (cognitive or functional); (2) relationship between comfort level with technology, motivation and personal effort deployed; (3) relationship between support required and support received; and (4) the living environment and follow-up context. This study allowed us to consider the contexts and conditions to be put in place for the successful integration of mobile technology in a mental health rehabilitation and recovery process.
Valdés, Pablo A.; Kim, Anthony; Leblond, Frederic; Conde, Olga M.; Harris, Brent T.; Paulsen, Keith D.; Wilson, Brian C.; Roberts, David W.
2011-01-01
Biomarkers are indicators of biological processes and hold promise for the diagnosis and treatment of disease. Gliomas represent a heterogeneous group of brain tumors with marked intra- and inter-tumor variability. The extent of surgical resection is a significant factor influencing post-surgical recurrence and prognosis. Here, we used fluorescence and reflectance spectral signatures for in vivo quantification of multiple biomarkers during glioma surgery, with fluorescence contrast provided by exogenously-induced protoporphyrin IX (PpIX) following administration of 5-aminolevulinic acid. We performed light-transport modeling to quantify multiple biomarkers indicative of tumor biological processes, including the local concentration of PpIX and associated photoproducts, total hemoglobin concentration, oxygen saturation, and optical scattering parameters. We developed a diagnostic algorithm for intra-operative tissue delineation that accounts for the combined tumor-specific predictive capabilities of these quantitative biomarkers. Tumor tissue delineation achieved accuracies of up to 94% (specificity = 94%, sensitivity = 94%) across a range of glioma histologies beyond current state-of-the-art optical approaches, including state-of-the-art fluorescence image guidance. This multiple biomarker strategy opens the door to optical methods for surgical guidance that use quantification of well-established neoplastic processes. Future work would seek to validate the predictive power of this proof-of-concept study in a separate larger cohort of patients. PMID:22112112
NASA Astrophysics Data System (ADS)
Valdés, Pablo A.; Kim, Anthony; Leblond, Frederic; Conde, Olga M.; Harris, Brent T.; Paulsen, Keith D.; Wilson, Brian C.; Roberts, David W.
2011-11-01
Biomarkers are indicators of biological processes and hold promise for the diagnosis and treatment of disease. Gliomas represent a heterogeneous group of brain tumors with marked intra- and inter-tumor variability. The extent of surgical resection is a significant factor influencing post-surgical recurrence and prognosis. Here, we used fluorescence and reflectance spectral signatures for in vivo quantification of multiple biomarkers during glioma surgery, with fluorescence contrast provided by exogenously-induced protoporphyrin IX (PpIX) following administration of 5-aminolevulinic acid. We performed light-transport modeling to quantify multiple biomarkers indicative of tumor biological processes, including the local concentration of PpIX and associated photoproducts, total hemoglobin concentration, oxygen saturation, and optical scattering parameters. We developed a diagnostic algorithm for intra-operative tissue delineation that accounts for the combined tumor-specific predictive capabilities of these quantitative biomarkers. Tumor tissue delineation achieved accuracies of up to 94% (specificity = 94%, sensitivity = 94%) across a range of glioma histologies beyond current state-of-the-art optical approaches, including state-of-the-art fluorescence image guidance. This multiple biomarker strategy opens the door to optical methods for surgical guidance that use quantification of well-established neoplastic processes. Future work would seek to validate the predictive power of this proof-of-concept study in a separate larger cohort of patients.
NASA Technical Reports Server (NTRS)
Allen, C. P.; Martin, C. F.
1977-01-01
The SEAHT program is designed to process multiple passes of altimeter data with intersecting ground tracks, estimating corrections for orbital errors for each pass such that the data have the best overall agreement at the crossover points. Orbit error for each pass is modeled as a polynomial in time, with optional orders of 0, 1, or 2. One or more passes may be constrained in the adjustment process, thus allowing passes with the best orbits to provide the overall level and orientation of the estimated sea surface heights. Intersections which disagree by more than an input edit level are not used in the error parameter estimation. In the program implementation, passes are grouped into South-North passes and North-South passes, with the North-South passes partitioned out for the estimation of orbit error parameters. Computer core utilization is thus dependent on the number of parameters estimated for the set of South-North arcs, but is independent of the number of North-South passes. Estimated corrections for each pass are applied to the data at its input data rate, and an output tape is written which contains the corrected data.
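The essence of such a crossover adjustment is a linear least-squares problem: with a polynomial error model per adjustable pass (here order 1, h_i(t) = c_i0 + c_i1*t) and the reference passes held fixed, the coefficients are chosen to absorb the crossover height differences. A toy sketch of that computation, not the SEAHT implementation:

    import numpy as np

    # Crossovers: (adjustable_pass_index, time_on_pass, height_difference),
    # where height_difference = h_adjustable - h_reference at the crossover.
    xovers = [(0, 10.0, 0.42), (0, 60.0, 0.71), (1, 5.0, -0.18), (1, 40.0, -0.05)]
    n_pass, order = 2, 1                    # two adjustable passes, linear model

    A = np.zeros((len(xovers), n_pass * (order + 1)))
    d = np.zeros(len(xovers))
    for row, (p, t, dh) in enumerate(xovers):
        for k in range(order + 1):
            A[row, p * (order + 1) + k] = t ** k   # bias and tilt terms for pass p
        d[row] = dh

    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
    print(coeffs.reshape(n_pass, order + 1))  # per-pass [c0, c1]; subtract c0 + c1*t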
Decomposing ADHD-Related Effects in Response Speed and Variability
Karalunas, Sarah L.; Huang-Pollock, Cynthia L.; Nigg, Joel T.
2012-01-01
Objective: Slow and variable reaction times (RTs) on fast tasks are such a prominent feature of Attention Deficit Hyperactivity Disorder (ADHD) that any theory must account for them. However, this has proven difficult because the cognitive mechanisms responsible for this effect remain unexplained. Although speed and variability are typically correlated, it is unclear whether single or multiple mechanisms are responsible for group differences in each. RTs are a result of several semi-independent processes, including stimulus encoding, rate of information processing, speed-accuracy trade-offs, and motor response, which have not been previously well characterized. Method: A diffusion model was applied to RTs from a forced-choice RT paradigm in two large, independent case-control samples (N = 214 for Cohort 1 and N = 172 for Cohort 2). The decomposition measured three validated parameters that account for the full RT distribution, and assessed reproducibility of ADHD effects. Results: In both samples, group differences in traditional RT variables were explained by slow information processing speed, and were unrelated to speed-accuracy trade-offs or non-decisional processes (e.g., encoding, motor response). Conclusions: RT speed and variability in ADHD may be explained by a single information processing parameter, potentially simplifying explanations that assume different mechanisms are required to account for group differences in the mean and variability of RTs. PMID:23106115
NASA Astrophysics Data System (ADS)
Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun
2016-03-01
Signal transductions including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI) play critical roles in cell proliferation and differentiation that are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and a long processing time. The "microchannel for multiple-parameter analysis of proteins in single-complex (mMAPS)" that we proposed can reduce the processing time and sample volume because this system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we will present an automated mMAPS including an integrated microfluidic device, automated stage, and electrical relay for high-throughput clinical screening. Based on this result, we estimated that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical application to analyze tissue samples in a clinical setting.
Design and experimental verification for optical module of optical vector-matrix multiplier.
Zhu, Weiwei; Zhang, Lei; Lu, Yangyang; Zhou, Ping; Yang, Lin
2013-06-20
Optical computing is a new method to implement signal processing functions. The multiplication between a vector and a matrix is an important arithmetic algorithm in the signal processing domain. The optical vector-matrix multiplier (OVMM) is an optoelectronic system to carry out this operation, which consists of an electronic module and an optical module. In this paper, we propose an optical module for the OVMM. To eliminate cross talk and make full use of the optical elements, an elaborately designed structure that involves spherical lenses and cylindrical lenses is utilized in this optical system. The optical design software package ZEMAX is used to optimize the parameters and simulate the whole system. Finally, experimental data are obtained to evaluate the overall performance of the system. The results of both simulation and experiment indicate that the constructed system can successfully implement the multiplication between a matrix with dimensions of 16 by 16 and a vector with a dimension of 16.
NASA Astrophysics Data System (ADS)
Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.
2014-07-01
In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.
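A toy numerical check of the kind of extinction condition described above, under an assumed SIS-like update rule (not the authors' exact equations): when the convincement probability times the largest eigenvalue of the connectivity matrix stays below 1, the opinion dies out; above 1, it persists.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
A = (rng.random((n, n)) < 0.02).astype(float)        # random links
A = np.maximum(A, A.T)                               # symmetrise the network
lam_max = np.max(np.linalg.eigvalsh(A))              # largest connectivity eigenvalue

for p in (0.05, 0.5):                                # convincement probability
    x = 0.5 * rng.random(n)                          # initial probabilities of holding the opinion
    for _ in range(300):
        x = 1.0 - np.prod(1.0 - p * A * x, axis=1)   # assumed probabilistic update rule
    print(f"p*lam_max = {p * lam_max:.2f} -> mean final state {x.mean():.4f}")
```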
The aging process of optical couplers by gamma irradiation
NASA Astrophysics Data System (ADS)
Bednarek, Lukas; Marcinka, Ondrej; Perecar, Frantisek; Papes, Martin; Hajek, Lukas; Nedoma, Jan; Vasinek, Vladimir
2015-08-01
Scientists have recently discovered that optical elements age faster than originally anticipated, mainly because of the multiple-fold increase of optical power in optical components, the introduction of wavelength division multiplexers and, overall, the increased flow of traffic in optical communications. This article examines the ageing process of optical couplers, focusing on their performance parameters. It describes the measurement procedure followed by the evaluation of the measurement results. To accelerate the ageing process, gamma irradiation from 60Co was used. The results of the measurements of the optical coupler with one input and eight outputs (1:8) were summarized, and the results obtained for the optical coupler with one input and four outputs (1:4) as well as for the optical couplers with one input and two outputs (1:2) with different split ratios were also processed. The optical powers were measured at the input and at the outputs of each branch of each optical coupler at the wavelengths of 1310 nm and 1550 nm. The parameters of the optical couplers were subsequently calculated according to the appropriate formulas: the insertion loss of the individual branches, the split ratio, the total losses, the homogeneity of the losses, and the directivity (cross-talk) between the individual output branches. The gathered data were summarized before and after the first irradiation for the 1:8 and 1:4 coupler configurations, and after the third irradiation for the 1:2 configuration.
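A minimal sketch of the standard coupler figures of merit named above, computed from measured input/output powers; the power values are invented for illustration, but the formulas are the conventional decibel definitions.

```python
import numpy as np

def dB(ratio):
    return 10.0 * np.log10(ratio)

p_in = 1.00e-3                                           # W, at the coupler input
p_out = np.array([0.22e-3, 0.21e-3, 0.20e-3, 0.19e-3])   # W, branches of a 1:4 coupler

insertion_loss = dB(p_in / p_out)                        # per-branch insertion loss [dB]
total_loss = dB(p_in / p_out.sum())                      # total loss [dB]
split_ratio = 100.0 * p_out / p_out.sum()                # split ratio [%]
homogeneity = insertion_loss.max() - insertion_loss.min()  # loss uniformity [dB]

print(insertion_loss.round(2), total_loss.round(2), split_ratio.round(1), homogeneity.round(2))
```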
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, so a model is needed that represents the actual conditions. Erosion models are complex because they involve uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and nonlinear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the values of the network input parameters, i.e. the number of hidden layers, the learning rate, the momentum, and the target RMS error. This study tested the capability of artificial neural networks in predicting erosion risk, using multiple simulations with different input parameters to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on accuracy compared to the other parameters; a small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, obtained in simulation ANN 14 with a parameter combination of 1 hidden layer, learning rate 0.01, momentum 0.5, target RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, i.e. the input dataset (erosion factors) and data dimensions; rather, it was determined by changes in the network parameters.
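A hedged sketch of this kind of parameter sweep, using scikit-learn's MLPClassifier as a stand-in for the authors' ANN software; the feature matrix and labels are placeholders, and the iteration counts are scaled down from the paper's 15000 to keep the example quick.

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.random((200, 8))                       # placeholder erosion-factor inputs
y = (X.sum(axis=1) > 4.0).astype(int)          # placeholder erosion-risk classes

results = []
for lr, momentum, iters in product([0.01, 0.1], [0.5, 0.9], [500, 2000]):
    net = MLPClassifier(hidden_layer_sizes=(10,),      # one hidden layer
                        solver="sgd", learning_rate_init=lr, momentum=momentum,
                        max_iter=iters, tol=1e-4, random_state=0)
    net.fit(X, y)
    results.append((net.score(X, y), lr, momentum, iters))

print(sorted(results, reverse=True)[0])        # best training accuracy and its settings
```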
Explanation of the cw operation of the Er3+ 3-μm crystal laser
NASA Astrophysics Data System (ADS)
Pollnau, M.; Graf, Th.; Balmer, J. E.; Lüthy, W.; Weber, H. P.
1994-05-01
A computer simulation of the Er3+ 3-μm crystal laser considering the full rate-equation scheme up to the 4F7/2 level has been performed. The influence of the important system parameters on lasing, and the interaction of these parameters, has been clarified with multiple-parameter variations. Stimulated emission is fed mainly by up-conversion from the lower laser level and in many cases is reduced by the quenching of the lifetime of this level. However, even without up-conversion a set of parameters can be found that allows lasing. Up-conversion from the upper laser level is detrimental to stimulated emission but may be compensated by cross relaxation from the 4S3/2 level. For a typical experimental situation we started with the parameters of Er3+:LiYF4. In addition, the host materials Y3Al5O12 (YAG), YAlO3, Y3Sc2Ga3O12 (YSGG), and BaY2F8, as well as the possibilities of codoping, are discussed. Considering all excited levels up to 4F7/2, all lifetimes and branching ratios, ground-state depletion, excited-state absorption, three up-conversion processes as well as their inverse processes, stimulated emission, and a realistic resonator design, this is, to our knowledge, the most detailed investigation of the Er3+ 3-μm crystal laser performed so far.
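Not the paper's full scheme (which tracks all levels up to 4F7/2); a deliberately reduced two-level sketch, with arbitrary rate constants, showing the mechanism highlighted above: an up-conversion term that consumes pairs of ions in the long-lived lower laser level and recycles one of them upward, relieving the lower-level bottleneck.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y, w_up):
    R, tau_upper, tau_lower = 1e3, 1e-4, 1e-2      # arbitrary illustrative values
    n_lower, n_upper = y
    up = w_up * n_lower**2                          # two lower-level ions -> one promoted
    dn_upper = R - n_upper / tau_upper + up
    dn_lower = n_upper / tau_upper - n_lower / tau_lower - 2.0 * up
    return [dn_lower, dn_upper]

for w_up in (0.0, 10.0):                            # without vs. with up-conversion
    sol = solve_ivp(rates, (0.0, 0.1), [0.0, 0.0], args=(w_up,), method="LSODA")
    n_lower, n_upper = sol.y[:, -1]
    print(f"w_up={w_up}: lower level {n_lower:.2f}, upper level {n_upper:.3f}")
```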
A method for computing ion energy distributions for multifrequency capacitive discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Alan C. F.; Lieberman, M. A.; Verboncoeur, J. P.
2007-03-01
The ion energy distribution (IED) at a surface is an important parameter for processing in multiple radio frequency driven capacitive discharges. An analytical model is developed for the IED in a low pressure discharge based on a linear transfer function that relates the time-varying sheath voltage to the time-varying ion energy response at the surface. This model is in good agreement with particle-in-cell simulations over a wide range of single, dual, and triple frequency driven capacitive discharge excitations.
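A rough illustration of the transfer-function idea, under an assumed first-order low-pass form rather than the paper's derived response: ions follow slow sheath-voltage components but average out components that are fast compared with the inverse ion transit time.

```python
import numpy as np

fs = 1e9                                        # sample rate [Hz]
t = np.arange(0.0, 2e-6, 1.0 / fs)
v_sheath = (800.0 + 300.0 * np.sin(2 * np.pi * 2e6 * t)     # low-frequency drive
            + 200.0 * np.sin(2 * np.pi * 60e6 * t))         # high-frequency drive

tau_ion = 1e-7                                  # assumed ion transit time [s]
V = np.fft.rfft(v_sheath)
f = np.fft.rfftfreq(t.size, 1.0 / fs)
H = 1.0 / (1.0 + 1j * 2 * np.pi * f * tau_ion)  # assumed first-order low-pass response
response = np.fft.irfft(H * V, n=t.size)        # ion energy response vs. time

print(f"sheath swing {np.ptp(v_sheath):.0f} V -> ion response swing {np.ptp(response):.0f} V")
```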
2008-10-01
...and UTCHEM (Clement et al., 1998). While all four of these software packages use conservation of mass as the basic principle for tracking NAPL... simulate dissolution of a single NAPL component. UTCHEM can be used to simulate dissolution of multiple NAPL components using either linear or first-... parameters. No. UTCHEM a/: 3D model, general-purpose NAPL simulator. Yes. Virulo a/: probabilistic model for predicting leaching of viruses in unsaturated...
Stages of physical dependence in New Zealand smokers: Prevalence and correlates.
Walton, Darren; Newcombe, Rhiannon; Li, Judy; Tu, Danny; DiFranza, Joseph R
2016-12-01
Physically dependent smokers experience symptoms of wanting, craving or needing to smoke when too much time has passed since the last cigarette. There is interest in whether wanting, craving and needing represent variations in the intensity of a single physiological parameter or whether multiple physiological processes may be involved in the developmental progression of physical dependence. Our aim was to determine how a population of cigarette smokers is distributed across the wanting, craving and needing stages of physical dependence. A nationwide survey of 2594 New Zealanders aged 15 years and over was conducted in 2014. The stage of physical dependence was assessed using the Levels of Physical Dependence measure. Ordinal logistic regression analysis was used to assess relations between physical dependence and other variables. Among 590 current smokers (weighted 16.2% of the sample), 22.3% had no physical dependence, 23.5% were in the Wanting stage, 14.4% in the Craving stage, and 39.8% in the Needing stage. The stage of physical dependence was predicted by daily cigarette consumption and the time to first cigarette, but not by age, gender, ethnicity or socioeconomic status. Fewer individuals were in the craving stage than either the wanting or needing stages. The resulting inverted U-shaped curve with concentrations at either extreme is difficult to explain as a variation of a single biological parameter. The data support an interpretation that progression through the stages of wanting, craving and needing may involve more than one physiological process. Physical dependence on tobacco develops through a characteristic sequence of wanting, craving and needing which corresponds to changes in addiction pathways in the brain. It is important for neuroscience research to determine whether the development of physical dependence involves changes in a single brain process or multiple processes. Our data suggest that more than one physiologic process is involved in the progression of physical dependence. Copyright © 2016 Elsevier Ltd. All rights reserved.
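The reported analysis style can be sketched with statsmodels' OrderedModel; the column names below are assumptions for illustration, not the survey's actual variables, and the data frame is assumed given.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_stage_model(df: pd.DataFrame):
    # stage: 0=none, 1=wanting, 2=craving, 3=needing (ordered outcome)
    stage = pd.Series(pd.Categorical(df["stage"], categories=[0, 1, 2, 3], ordered=True))
    X = df[["cigs_per_day", "minutes_to_first_cig", "age"]]  # assumed predictors
    model = OrderedModel(stage, X, distr="logit")             # ordinal logistic regression
    return model.fit(method="bfgs", disp=False)
```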
NASA Astrophysics Data System (ADS)
Dasgupta, S.; Mukherjee, S.
2016-09-01
One of the most significant factors in metal cutting is tool life. In this research work, the effects of machining parameters on tool life under a wet machining environment were studied. Tool life characteristics of a brazed carbide cutting tool machining mild steel were examined, and the machining parameters were optimized based on Taguchi design of experiments. The experiments were conducted using three factors, spindle speed, feed rate and depth of cut, each having three levels. Nine experiments were performed on a high-speed semi-automatic precision central lathe. ANOVA was used to determine the level of importance of the machining parameters on tool life. The optimum machining parameter combination was obtained by the analysis of the S/N ratio. A mathematical model based on multiple regression analysis was developed to predict the tool life. Taguchi's orthogonal array analysis revealed the optimal combination of parameters at the lower levels of spindle speed, feed rate and depth of cut, namely 550 rpm, 0.2 mm/rev and 0.5 mm, respectively; the Main Effects plot confirmed the same. The variation of tool life with different process parameters has been plotted. Feed rate has the most significant effect on tool life, followed by spindle speed and depth of cut.
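A minimal sketch of the larger-the-better S/N ratio used to rank factor levels in a Taguchi analysis; the tool-life values and level assignments below are placeholders, not the paper's measurements.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better signal-to-noise ratio: -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

life = np.array([42.0, 38.5, 35.0, 30.2, 28.9, 27.5, 22.1, 20.8, 19.0])  # min, one per L9 run
speed_level = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])                       # factor level per run

for lvl in (0, 1, 2):   # mean S/N per spindle-speed level; the highest wins
    print(lvl, round(sn_larger_is_better(life[speed_level == lvl]), 2))
```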
A trust-based recommendation method using network diffusion processes
NASA Astrophysics Data System (ADS)
Chen, Ling-Jiao; Gao, Jian
2018-09-01
A variety of rating-based recommendation methods have been extensively studied, including the well-known collaborative filtering approaches and some network diffusion-based methods; however, social trust relations are not sufficiently considered when making recommendations. In this paper, we contribute to the literature by proposing a trust-based recommendation method, named CosRA+T, which integrates the information of trust relations into the resource-redistribution process. Specifically, a tunable parameter is used to scale the resources received by trusted users before the redistribution back to the objects. Interestingly, we find an optimal scaling parameter for the proposed CosRA+T method to achieve its best recommendation accuracy, and the optimal value seems to be universal under several evaluation metrics across different datasets. Moreover, results of extensive experiments on two real-world rating datasets with trust relations, Epinions and FriendFeed, suggest that CosRA+T yields a remarkable improvement in overall accuracy, diversity and novelty. Our work takes a step towards designing better recommendation algorithms by employing multiple resources of social network information.
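A simplified sketch of the ingredients described above (mass diffusion on the user-object bipartite graph, with one tunable parameter lam scaling the resource held by the target user's trusted neighbours); this is schematic, not the published CosRA+T algorithm.

```python
import numpy as np

def recommend(A, trust, u, lam=1.5):
    """A: users x objects 0/1 rating matrix; trust: users x users 0/1 matrix."""
    k_obj = A.sum(axis=0).clip(min=1)             # object degrees
    k_usr = A.sum(axis=1).clip(min=1)             # user degrees
    f = A[u].astype(float)                        # unit resource on u's rated objects
    r_user = (A * f / k_obj).sum(axis=1)          # step 1: objects -> users
    r_user[trust[u] > 0] *= lam                   # scale resource of users trusted by u
    score = (A.T * r_user / k_usr).sum(axis=1)    # step 2: users -> objects
    score[A[u] > 0] = -np.inf                     # mask items u already rated
    return np.argsort(-score)                     # ranked recommendation list

A = np.array([[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 1]])
trust = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
print(recommend(A, trust, u=0))
```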
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2011-12-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km resolution, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
NASA Astrophysics Data System (ADS)
Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.
2012-04-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km resolution, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2012-03-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis of data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained at 25-km resolution, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e. the North America monsoon region). These results suggest that benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
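A schematic of the Very Fast Simulated Annealing mechanics underlying the MVFSA sampler described above, run on a toy five-parameter problem; the function skill_error stands in for the precipitation-based skill score that drives the real calibration, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def skill_error(p):
    # Toy objective with a known optimum; the real case evaluates WRF against observations.
    return np.sum((p - np.array([0.4, 0.1, 0.7, 0.3, 0.9])) ** 2)

def mvfsa_like(n_iter=3000, t0=1.0, c=0.5):
    p = rng.random(5)                                  # parameters scaled to [0, 1]
    e = skill_error(p)
    best_p, best_e = p.copy(), e
    for k in range(n_iter):
        T = t0 * np.exp(-c * k ** 0.2)                 # very-fast-annealing cooling schedule
        u = rng.random(5)
        step = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2 * u - 1) - 1.0)
        q = np.clip(p + step, 0.0, 1.0)                # temperature-shaped proposal
        eq = skill_error(q)
        if eq < e or rng.random() < np.exp((e - eq) / T):
            p, e = q, eq                               # Metropolis acceptance
        if e < best_e:
            best_p, best_e = p.copy(), e
    return best_p, best_e

print(mvfsa_like())
```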
Limited Bandwidth Recognition of Collective Behaviors in Bio-Inspired Swarms
2014-05-09
collective? Some swarm models exhibit multiple emergent behaviors from the same parameters. This provides increased expressivity at the cost of...swarms, namely, how do you know what the swarm is doing if you can’t ob- serve every agent in the collective? Some swarm models exhibit multiple ...flocking [15, 21, 12] or cyclic behavior [8, 7], and in some cases can exhibit multiple group behaviors depending on the model parameters used [6, 3, 17
A Web-Based Interactive Platform for Co-Clustering Spatio-Temporal Data
NASA Astrophysics Data System (ADS)
Wu, X.; Poorthuis, A.; Zurita-Milla, R.; Kraak, M.-J.
2017-09-01
Since current studies on clustering analysis mainly focus on exploring spatial or temporal patterns separately, a co-clustering algorithm is utilized in this study to enable the concurrent analysis of spatio-temporal patterns. To allow users to adopt and adapt the algorithm for their own analyses, it is integrated into the server side of an interactive web-based platform. The client side of the platform, running within any modern browser, is a graphical user interface (GUI) with multiple linked visualizations that facilitates the understanding, exploration and interpretation of the raw dataset and the co-clustering results. Users can also upload their own datasets and adjust clustering parameters within the platform. To illustrate the use of this platform, an annual temperature dataset from 28 weather stations over 20 years in the Netherlands is used. After the dataset is loaded, it is visualized in a set of linked visualizations: a geographical map, a timeline and a heatmap. This aids the user in understanding the nature of the dataset and the appropriate selection of co-clustering parameters. Once the dataset is processed by the co-clustering algorithm, the results are visualized in small multiples, a heatmap and a timeline to provide various views for better understanding and further interpretation. Since the visualization and analysis are integrated in a seamless platform, the user can explore different sets of co-clustering parameters and instantly view the results in order to do iterative, exploratory data analysis. As such, this interactive web-based platform allows users to analyze spatio-temporal data using the co-clustering method and helps them understand the results using multiple linked visualizations.
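A hedged stand-in for the server-side step: co-clustering a stations-by-years temperature matrix with scikit-learn's SpectralCoclustering. The platform's own algorithm may differ, and the matrix below holds random placeholder values with the dimensions quoted in the abstract.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(4)
data = rng.random((28, 20))                  # 28 stations x 20 years (placeholder values)

model = SpectralCoclustering(n_clusters=4, random_state=0)
model.fit(data)
print(model.row_labels_)                     # cluster label per station
print(model.column_labels_)                  # cluster label per year
```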
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system, while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale at some components of the system. It is also a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. The paper aims to raise some critical issues regarding the entire modelling approach for such hydrosystems. For this, two alternative modelling strategies are examined that reflect two modelling approaches or philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down and holistic approach based on generalized information. Critical options are examined, which codify the differences between the two strategies: the representation of surface, groundwater and water management processes, the schematization and parameterization concepts and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, which is dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization and hybrid manual-automatic parameter optimization based on multiple objectives. A test case is examined in a hydrosystem in Greece with high complexities, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea and anthropogenic intervention with unmeasured abstractions both from surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where no balanced attention is paid to all components of human-modified hydrosystems and the related information. Also, sources of error are identified and their combined effect is evaluated.
Maggio, Marcello; De Vita, Francesca; Fisichella, Alberto; Lauretani, Fulvio; Ticinesi, Andrea; Ceresini, Graziano; Cappola, Anne; Ferrucci, Luigi; Ceda, Gian Paolo
2015-01-01
Anemia is a multifactorial condition whose prevalence increases in both sexes after the fifth decade of life. It is a highly represented phenomenon in older adults and in one-third of cases is “unexplained.” The ageing process is also characterized by a “multiple hormonal dysregulation” with disruption of the gonadal, adrenal, and somatotropic axes. Experimental studies suggest that anabolic hormones such as testosterone, IGF-1, and thyroid hormones are able to increase erythroid mass, erythropoietin synthesis, and iron bioavailability, underlining a potential role of multiple hormonal changes in the anemia of aging. Epidemiological data more consistently support an association between lower testosterone and anemia in adult-older individuals. Low IGF-1 has been especially associated with anemia in the pediatric population and in a wide range of disorders. There is also evidence of an association between thyroid hormones and abnormalities in hematological parameters under overt thyroid and euthyroid conditions, with limited data on subclinical statuses. Although RCTs have shown beneficial effects, stronger for testosterone and the GH-IGF-1 axis and less evident for thyroid hormones, in improving different hematological parameters, there is no clear evidence for the usefulness of hormonal treatment in improving anemia in older subjects. Thus, more clinical and research efforts are needed to investigate the hormonal contribution to anemia in older individuals. PMID:26779261
Maggio, Marcello; De Vita, Francesca; Fisichella, Alberto; Lauretani, Fulvio; Ticinesi, Andrea; Ceresini, Graziano; Cappola, Anne; Ferrucci, Luigi; Ceda, Gian Paolo
2015-01-01
Anemia is a multifactorial condition whose prevalence increases in both sexes after the fifth decade of life. It is a highly represented phenomenon in older adults and in one-third of cases is "unexplained." The ageing process is also characterized by a "multiple hormonal dysregulation" with disruption of the gonadal, adrenal, and somatotropic axes. Experimental studies suggest that anabolic hormones such as testosterone, IGF-1, and thyroid hormones are able to increase erythroid mass, erythropoietin synthesis, and iron bioavailability, underlining a potential role of multiple hormonal changes in the anemia of aging. Epidemiological data more consistently support an association between lower testosterone and anemia in adult-older individuals. Low IGF-1 has been especially associated with anemia in the pediatric population and in a wide range of disorders. There is also evidence of an association between thyroid hormones and abnormalities in hematological parameters under overt thyroid and euthyroid conditions, with limited data on subclinical statuses. Although RCTs have shown beneficial effects, stronger for testosterone and the GH-IGF-1 axis and less evident for thyroid hormones, in improving different hematological parameters, there is no clear evidence for the usefulness of hormonal treatment in improving anemia in older subjects. Thus, more clinical and research efforts are needed to investigate the hormonal contribution to anemia in older individuals.
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
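A minimal sketch of the "calibration as estimation" view described above: choose the parameter values that minimise the discrepancy between model outputs and calibration targets. The model form and target values are illustrative stand-ins.

```python
import numpy as np
from scipy.optimize import least_squares

targets = np.array([0.30, 0.12, 0.05])          # e.g. observed rates at three horizons

def model_outputs(theta):
    base, decline = theta                        # two unknown model parameters
    return base * np.exp(-decline * np.array([1.0, 5.0, 10.0]))

fit = least_squares(lambda th: model_outputs(th) - targets,
                    x0=[0.5, 0.1], bounds=([0.0, 0.0], [1.0, 1.0]))
print(fit.x, fit.cost)                           # calibrated parameters and residual cost
```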
Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander
2017-01-01
Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error =3.22%, R2=0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD. PMID:28176905
Kazemi, Pezhman; Khalid, Mohammad Hassan; Pérez Gago, Ana; Kleinebudde, Peter; Jachowicz, Renata; Szlęk, Jakub; Mendyk, Aleksander
2017-01-01
Dry granulation using roll compaction is a typical unit operation for producing solid dosage forms in the pharmaceutical industry. Dry granulation is commonly used if the powder mixture is sensitive to heat and moisture and has poor flow properties. The output of roll compaction is compacted ribbons that exhibit different properties based on the adjusted process parameters. These ribbons are then milled into granules and finally compressed into tablets. The properties of the ribbons directly affect the granule size distribution (GSD) and the quality of final products; thus, it is imperative to study the effect of roll compaction process parameters on GSD. The understanding of how the roll compactor process parameters and material properties interact with each other will allow accurate control of the process, leading to the implementation of quality by design practices. Computational intelligence (CI) methods have a great potential for being used within the scope of quality by design approach. The main objective of this study was to show how the computational intelligence techniques can be useful to predict the GSD by using different process conditions of roll compaction and material properties. Different techniques such as multiple linear regression, artificial neural networks, random forest, Cubist and k-nearest neighbors algorithm assisted by sevenfold cross-validation were used to present generalized models for the prediction of GSD based on roll compaction process setting and material properties. The normalized root-mean-squared error and the coefficient of determination (R2) were used for model assessment. The best fit was obtained by Cubist model (normalized root-mean-squared error = 3.22%, R2 = 0.95). Based on the results, it was confirmed that the material properties (true density) followed by compaction force have the most significant effect on GSD.
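A sketch of the model-comparison workflow with the metrics named above (sevenfold cross-validation scored by NRMSE and R2). Random forest and k-NN stand in for the full model set, since Cubist has no scikit-learn implementation, and the data below are synthetic placeholders for the process settings, material properties, and GSD summary.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(8)
X = rng.random((48, 6))                                    # placeholder settings + material properties
y = X @ rng.random(6) + 0.1 * rng.standard_normal(48)      # placeholder GSD summary value

def evaluate(model):
    pred = cross_val_predict(model, X, y, cv=7)            # sevenfold cross-validation
    nrmse = np.sqrt(mean_squared_error(y, pred)) / (y.max() - y.min())
    return nrmse, r2_score(y, pred)

print("random forest:", evaluate(RandomForestRegressor(random_state=0)))
print("k-NN:", evaluate(KNeighborsRegressor(n_neighbors=5)))
```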
NASA Technical Reports Server (NTRS)
Perez-Peraza, J.; Alvarez, M.; Laville, A.; Gallegos, A.
1985-01-01
The study of the charge-changing cross sections of fast ions colliding with matter provides the fundamental basis for the analysis of the charge states produced in such interactions. Given the high degree of complexity of the phenomena, there is no theoretical treatment able to give a comprehensive description. In fact, the processes involved depend strongly on the basic parameters of the projectile, such as velocity, charge state, and atomic number, and on the target parameters: the physical state (molecular, atomic or ionized matter) and density. The target velocity may also have an influence on the process, through the temperature of the traversed medium. In addition, multiple electron transfer in single collisions complicates the phenomena further. In simplified cases, such as protons moving through atomic hydrogen, considerable agreement has been obtained between theory and experiment. However, in general the available theoretical approaches have only limited validity in restricted regions of the basic parameters. Since most measurements of charge-changing cross sections are performed in atomic matter at ambient temperature, models are commonly based on the assumption of targets at rest; however, at astrophysical scales, temperature spans a wide range in atomic and ionized matter. Therefore, due to the lack of experimental data, an attempt is made here to quantify temperature-dependent cross sections on the basis of somewhat arbitrary, but physically reasonable, assumptions.
NASA Astrophysics Data System (ADS)
Naderi, D.; Pahlavani, M. R.; Alavi, S. A.
2013-05-01
Using the Langevin dynamical approach, the neutron multiplicity and the anisotropy of the angular distribution of fission fragments in heavy-ion fusion-fission reactions were calculated. We applied one- and two-dimensional Langevin equations to study the decay of a hot excited compound nucleus. The influence of the level-density parameter on the neutron multiplicity and the anisotropy of the angular distribution of fission fragments was investigated. We used level-density parameters based on the liquid drop model with two different prescriptions, the Bartel approach and the Pomorska approach. Our calculations show that the anisotropy and neutron multiplicity are affected by the level-density parameter and the neck thickness. The calculations were performed for the 16O+208Pb and 20Ne+209Bi reactions. The results obtained with the two-dimensional Langevin equations and the level-density parameter of Bartel and co-workers are in better agreement with experimental data.
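A one-dimensional Euler-Maruyama sketch of the stochastic dynamics underlying such calculations, with a toy double-well potential and invented friction and diffusion constants rather than the paper's multidimensional model with realistic transport coefficients.

```python
import numpy as np

rng = np.random.default_rng(5)

def fission_fraction(n_traj=500, n_steps=5000, beta=4.0, D=0.25, dt=1e-3):
    q = np.full(n_traj, -1.0)                  # start near the compound-nucleus well
    crossed = np.zeros(n_traj, dtype=bool)
    for _ in range(n_steps):
        drift = -(q**3 - q) / beta             # force from a double-well potential, damped
        q = q + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)
        crossed |= q > 1.5                     # crude scission criterion
    return crossed.mean()

print(f"fraction of trajectories reaching scission: {fission_fraction():.2f}")
```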
Fast automated analysis of strong gravitational lenses with convolutional neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
NASA Astrophysics Data System (ADS)
Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias
2015-07-01
Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation provides a feedback system and enables safe ablation towards anatomical structures that would usually be at high risk of damage. This study is based on the combined working volumes of optical coherence tomography (OCT) and an Er:YAG cutting laser. A high level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems, a cross-like incision is ablated with the Er:YAG laser and segmented with OCT at three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters, including discrete and specific ablation rates as the ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle, consisting of image processing, path planning, ablation, and moistening of tissue, the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone were conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT-controlled laser bone ablation with minimal process time.
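A toy, self-contained simulation of the coarse-to-fine control cycle described above: each pass at a given ablation rate stands in for one loop of OCT measurement, path planning, and ablation, and the loop falls through to the next (finer) parameter set once the remaining depth error is smaller than that rate. All rates and values are invented for illustration.

```python
def ablate_to_target(target_depth_um=500.0, rates_um=(50.0, 10.0, 2.0), tol_um=2.0):
    depth = 0.0
    passes = 0
    for rate in rates_um:                        # laser parameter sets, coarse to fine
        while target_depth_um - depth >= rate:   # a corrective path can still be set
            depth += rate * 0.95                 # ablation model: slightly under nominal rate
            passes += 1                          # (depth re-measured by "OCT" each cycle)
    return depth, passes, abs(target_depth_um - depth) <= tol_um

print(ablate_to_target())                        # final depth, pass count, within tolerance?
```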
Fast automated analysis of strong gravitational lenses with convolutional neural networks
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-30
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
NASA Astrophysics Data System (ADS)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
2017-08-01
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
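A hedged sketch of the approach: a small convolutional network regressing the five SIE parameters directly from an image. The architecture and sizes below are illustrative and far smaller than the networks used in the paper.

```python
import torch
import torch.nn as nn

class LensNet(nn.Module):
    def __init__(self, n_params=5):              # e.g. Einstein radius, ellipticity (2), centre x/y
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 16, 128),
                                  nn.ReLU(), nn.Linear(128, n_params))

    def forward(self, x):
        return self.head(self.features(x))        # direct parameter regression

net = LensNet()
fake_batch = torch.randn(8, 1, 96, 96)             # stand-in for simulated lensed images
print(net(fake_batch).shape)                       # torch.Size([8, 5])
```

Trained on simulated lensed images with known parameters, such a regressor replaces the per-system downhill optimization with a single forward pass, which is the source of the quoted speed-up.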
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
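A sketch of the separable least-squares idea on a toy two-exponential model y(t) = a1*exp(-b1*t) + a2*exp(-b2*t): for any candidate nonlinear pair (b1, b2), the linear amplitudes are recovered in closed form, so the nonlinear search runs over two dimensions instead of four. This is illustrative, not the multi-tracer PET equations themselves.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
t = np.linspace(0.1, 10.0, 60)
y = 3.0 * np.exp(-0.4 * t) + 1.0 * np.exp(-2.5 * t) + 0.02 * rng.standard_normal(t.size)

def residual_norm(b):
    basis = np.exp(-np.outer(t, b))                  # columns exp(-b_k * t)
    a, *_ = np.linalg.lstsq(basis, y, rcond=None)    # linear amplitudes in closed form
    return np.sum((basis @ a - y) ** 2)

fit = minimize(residual_norm, x0=[0.1, 1.0], method="Nelder-Mead")
print(fit.x)                                         # recovered nonlinear rates (b1, b2)
```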
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large volumes of data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on varying the quantization parameter (QP) values for different video contents to deduce their influence on the sequence to be transmitted; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled with the encoder parameter and an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving the transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of our proposed method in terms of PSNR and mean opinion score.
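The objective metric quoted above, sketched directly: PSNR between a reference frame and its decoded version, assuming 8-bit samples (the frames below are random placeholders).

```python
import numpy as np

def psnr(reference, decoded, peak=255.0):
    mse = np.mean((reference.astype(float) - decoded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

ref = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
noisy = np.clip(ref + np.random.normal(0.0, 2.0, ref.shape), 0, 255).astype(np.uint8)
print(f"{psnr(ref, noisy):.1f} dB")
```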
Yang, Hong; Xue, Xuejia; Li, Huan; Tay-Chan, Su Chin; Ong, Seng Poon; Tian, Edmund Feng
2017-08-15
In this work, we established a new methodology to simultaneously assess the relative reaction rates of multiple antioxidant compounds in one experimental set-up. This methodology hypothesizes that the competition among antioxidant compounds towards a limiting amount of free radical (in this article, DPPH) reflects their relative reaction rates. In contrast with the conventional detection of the DPPH decrease at 515 nm on a spectrophotometer, the depletion of antioxidant compounds treated with a series of DPPH concentrations was monitored instead using liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (LC-QTOF). A new parameter, namely the relative antioxidant activity (RAA), has been proposed to rank these antioxidants according to their reaction rate constants. We investigated the applicability of the RAA using pre-mixed standard phenolic compounds, and also extended the application to two food products, i.e. red wine and green tea. It was found that the RAA correlates well with the reported k values. This new parameter provides a new perspective in evaluating antioxidant compounds present in food and herbal matrices: it not only realistically reflects the antioxidant activity of compounds co-existing with competitive constituents, but could also speed up the discovery process in the search for potent yet rare antioxidants from herbs of food or medicinal origin. Copyright © 2017 Elsevier Ltd. All rights reserved.
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.
2016-03-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.
2015-11-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
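A hedged sketch of an automated experiment following the pattern in the pynoddy tutorials (load a history file, perturb an event parameter, recompute): the input file name, the event index, and the property key are all assumptions that depend on the particular model, so this is schematic rather than copy-paste ready.

```python
import pynoddy
import pynoddy.history
import pynoddy.output

# "base_model.his" and the fault event at index 2 are assumptions for illustration.
his = pynoddy.history.NoddyHistory("base_model.his")
for i, slip in enumerate([500.0, 1000.0, 2000.0]):     # vary a kinematic parameter
    his.events[2].properties["Slip"] = slip            # event 2 assumed to be a fault
    his.write_history(f"run_{i}.his")
    pynoddy.compute_model(f"run_{i}.his", f"run_{i}")  # run the kinematic simulation
    out = pynoddy.output.NoddyOutput(f"run_{i}")       # gridded model for further analysis
```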
Application of separable parameter space techniques to multi-tracer PET compartment modeling
NASA Astrophysics Data System (ADS)
Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.
2016-02-01
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
Estimation of forest biomass using remote sensing
NASA Astrophysics Data System (ADS)
Sarker, Md. Latifur Rahman
Forest biomass estimation is essential for greenhouse gas inventories, terrestrial carbon accounting and climate change modelling studies. The availability of new SAR (C-band RADARSAT-2 and L-band PALSAR) and optical sensors (SPOT-5 and AVNIR-2) has opened new possibilities for biomass estimation because these new SAR sensors can provide data with varying polarizations, incidence angles and fine spatial resolutions. Therefore, this study investigated the potential of two SAR sensors (RADARSAT-2 with C-band and PALSAR with L-band) and two optical sensors (SPOT-5 and AVNIR-2) for the estimation of biomass in Hong Kong. Three common major processing steps were used for data processing, namely (i) spectral reflectance/intensity, (ii) texture measurements and (iii) polarization or band ratios of texture parameters. Simple linear and stepwise multiple regression models were developed to establish a relationship between the image parameters and the biomass of field plots. The results demonstrate the ineffectiveness of raw data. However, significant improvements in performance (r2) (RADARSAT-2=0.78; PALSAR=0.679; AVNIR-2=0.786; SPOT-5=0.854; AVNIR-2 + SPOT-5=0.911) were achieved using texture parameters of all sensors. The performances were further improved and very promising performances (r2) were obtained using the ratio of texture parameters (RADARSAT-2=0.91; PALSAR=0.823; PALSAR two-date=0.921; AVNIR-2=0.899; SPOT-5=0.916; AVNIR-2 + SPOT-5=0.939). These performances suggest four main contributions arising from this research, namely (i) biomass estimation can be significantly improved by using texture parameters, (ii) further improvements can be obtained using the ratio of texture parameters, (iii) multisensor texture parameters and their ratios have more potential than texture from a single sensor, and (iv) biomass can be accurately estimated far beyond the previously perceived saturation levels of SAR and optical data using texture parameters or the ratios of texture parameters. A further important contribution resulting from the fusion of SAR and optical images produced accuracies (r2) of 0.706 and 0.77 from the simple fusion and the texture processing of the fused image, respectively. Although these performances were not as attractive as those obtained from the other four processing steps, the wavelet fusion procedure improved the saturation level of the optical (AVNIR-2) image very significantly after fusion with the SAR image. Keywords: biomass, climate change, SAR, optical, multisensors, RADARSAT-2, PALSAR, AVNIR-2, SPOT-5, texture measurement, ratio of texture parameters, wavelets, fusion, saturation
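As a hedged illustration of the final regression step, the sketch below fits biomass against a ratio of texture measures by ordinary least squares; the texture arrays are synthetic placeholders standing in for features extracted from the SAR and optical imagery.

```python
# Hedged sketch of the final modelling step: ordinary least squares relating
# plot biomass to a ratio of texture measures. Feature extraction (e.g. GLCM
# texture from the image bands) is assumed to have been done already.
import numpy as np

rng = np.random.default_rng(0)
n_plots = 50
tex_hh = rng.uniform(0.2, 1.0, n_plots)      # texture measure, channel 1
tex_hv = rng.uniform(0.2, 1.0, n_plots)      # texture measure, channel 2
biomass = 120.0 * (tex_hh / tex_hv) + rng.normal(0, 5, n_plots)

# Design matrix using the ratio of texture parameters as the predictor.
X = np.column_stack([np.ones(n_plots), tex_hh / tex_hv])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
print(coef, r2)
```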
Structured plant metabolomics for the simultaneous exploration of multiple factors.
Vasilev, Nikolay; Boccard, Julien; Lang, Gerhard; Grömping, Ulrike; Fischer, Rainer; Goepfert, Simon; Rudaz, Serge; Schillberg, Stefan
2016-11-17
Multiple factors act simultaneously on plants to establish complex interaction networks involving nutrients, elicitors and metabolites. Metabolomics offers a better understanding of complex biological systems, but evaluating the simultaneous impact of different parameters on metabolic pathways that have many components is a challenging task. We therefore developed a novel approach that combines experimental design, untargeted metabolic profiling based on multiple chromatography systems and ionization modes, and multiblock data analysis, facilitating the systematic analysis of metabolic changes in plants caused by different factors acting at the same time. Using this method, target geraniol compounds produced in transgenic tobacco cell cultures were grouped into clusters based on their response to different factors. We hypothesized that our novel approach may provide more robust data for process optimization in plant cell cultures producing any target secondary metabolite, based on the simultaneous exploration of multiple factors rather than varying one factor at a time. The suitability of our approach was verified by confirming several previously reported examples of elicitor-metabolite crosstalk. However, unravelling all factor-metabolite networks remains challenging because it requires the identification of all biochemically significant metabolites in the metabolomics dataset.
Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.
Wang, Dang-wei; Ma, Xiao-yan; Su, Yi
2010-05-01
This paper presents a system model and method for 2-D imaging applications using a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. The imaging formulation for our method is developed through Fourier integral processing, and the parameters of the antenna array, including the cross-range resolution, required size, and sampling interval, are also examined. Different from the spatial sequential procedure of sampling the scattered echoes during multiple snapshot illuminations in inverse synthetic aperture radar (ISAR) imaging, the proposed method uses a spatial parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation of ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter bands can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided to test the proposed method.
Impact of processing parameters on the haemocompatibility of Bombyx mori silk films.
Seib, F Philipp; Maitz, Manfred F; Hu, Xiao; Werner, Carsten; Kaplan, David L
2012-02-01
Silk has traditionally been used for surgical sutures due to its lasting strength and durability; however, the use of purified silk proteins as a scaffold material for vascular tissue engineering goes beyond traditional use and requires application-orientated biocompatibility testing. For this study, a library of Bombyx mori silk films was generated and exposed to various solvents and treatment conditions to reflect current silk processing techniques. The films, along with clinically relevant reference materials, were exposed to human whole blood to determine silk blood compatibility. All substrates showed an initial inflammatory response comparable to polylactide-co-glycolide (PLGA), and a low to moderate haemostasis response similar to polytetrafluoroethylene (PTFE) substrates. In particular, samples that were water annealed at 25 °C for 6 h demonstrated the best blood compatibility based on haemostasis parameters (e.g. platelet decay, thrombin-antithrombin complex, platelet factor 4, granulocytes-platelet conjugates) and inflammatory parameters (e.g. C3b, C5a, CD11b, surface-associated leukocytes). Multiple factors such as treatment temperature and solvent influenced the biological response, though no single physical parameter such as β-sheet content, isoelectric point or contact angle accurately predicted blood compatibility. These findings, when combined with prior in vivo data on silk, support a viable future for silk-based vascular grafts. Copyright © 2011 Elsevier Ltd. All rights reserved.
Ferguson, B G; Lo, K W
2000-10-01
Flight parameter estimation methods for an airborne acoustic source can be divided into two categories, depending on whether the narrow-band lines or the broadband component of the received signal spectrum is processed to estimate the flight parameters. This paper provides a common framework for the formulation and test of two flight parameter estimation methods: one narrow band, the other broadband. The performances of the two methods are evaluated by applying them to the same acoustic data set, which is recorded by a planar array of passive acoustic sensors during multiple transits of a turboprop fixed-wing aircraft and two types of rotary-wing aircraft. The narrow-band method, which is based on a kinematic model that assumes the source travels in a straight line at constant speed and altitude, requires time-frequency analysis of the acoustic signal received by a single sensor during each aircraft transit. The broadband method is based on the same kinematic model, but requires observing the temporal variation of the differential time of arrival of the acoustic signal at each pair of sensors that comprises the planar array. Generalized cross correlation of each pair of sensor outputs using a cross-spectral phase transform prefilter provides instantaneous estimates of the differential times of arrival of the signal as the acoustic wavefront traverses the array.
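The broadband step lends itself to a compact sketch: generalized cross correlation with a phase transform (GCC-PHAT) prefilter estimating the differential time of arrival between one sensor pair. The signals below are synthetic; only the GCC-PHAT mechanics follow the description above.

```python
# Sketch of the broadband step: generalized cross correlation with a phase
# transform (PHAT) prefilter to estimate the differential time of arrival
# between one pair of sensors. Signal contents are synthetic.
import numpy as np

def gcc_phat(x, y, fs):
    """Return the delay (s) of y relative to x via GCC-PHAT."""
    n = 2 * max(len(x), len(y))              # zero-pad to avoid wrap-around
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    S = X * np.conj(Y)
    S /= np.maximum(np.abs(S), 1e-12)        # PHAT: keep phase, whiten magnitude
    cc = np.fft.irfft(S, n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))   # recentre lags
    lag = np.argmax(np.abs(cc)) - n // 2
    return -lag / fs                         # positive when y lags x

fs = 8000.0
rng = np.random.default_rng(2)
x = rng.normal(size=4096)
y = np.roll(x, 25)                           # y lags x by 25 samples
print(gcc_phat(x, y, fs) * fs)               # ~25
```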
Statistical Modeling of Single Target Cell Encapsulation
Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan
2011-01-01
High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging technology in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
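Cell loading per droplet in such systems is commonly treated as Poisson, so the probability of encapsulating exactly one target cell can be written in terms of the mean loading and the target-cell fraction; the sketch below illustrates that standard model, not necessarily the paper's exact formulation.

```python
# Illustrative Poisson encapsulation model: the probability of a droplet
# holding exactly one target cell and zero non-target cells. Parameter
# values are examples only.
import math

def p_single_target(mean_cells_per_drop, target_fraction):
    """P(exactly one target cell and zero non-target cells per droplet)."""
    lam_t = mean_cells_per_drop * target_fraction        # target-cell rate
    lam_n = mean_cells_per_drop * (1 - target_fraction)  # non-target rate
    return lam_t * math.exp(-lam_t) * math.exp(-lam_n)   # Pois(1) * Pois(0)

for lam in (0.1, 0.5, 1.0, 2.0):
    print(lam, p_single_target(lam, target_fraction=0.1))
```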
Mask patterning process using the negative tone chemically amplified resist TOK OEBR-CAN024
NASA Astrophysics Data System (ADS)
Irmscher, Mathias; Beyer, Dirk; Butschke, Joerg; Hudek, Peter; Koepernik, Corinna; Plumhoff, Jason; Rausa, Emmanuel; Sato, Mitsuru; Voehringer, Peter
2004-08-01
Optimized process parameters using the TOK OEBR-CAN024 resist for high-chrome-load patterning have been determined. A tight linearity tolerance for opaque and clear features, independent of the local pattern density, was the goal of our process integration work. For this purpose we evaluated a new correction method taking into account electron scattering and process influences. The method is based on matching the measured pattern geometry by iterative back-simulation using multiple Gaussian and/or exponential functions. The resulting control function acts as input for the proximity correction software PROXECCO. Approaches with different pattern oversizes and two Cr thicknesses were carried out, and the results are reported. Isolated opaque and clear lines could be realized within a very tight linearity range. The increasing line width of small dense lines, induced by the etching process, could be corrected only partially.
NASA Astrophysics Data System (ADS)
Siegert, Stefan
2017-04-01
Initialised climate forecasts on seasonal time scales, run several months or even years ahead, are now an integral part of the battery of products offered by climate services world-wide. The availability of seasonal climate forecasts from various modeling centres gives rise to multi-model ensemble forecasts. Post-processing such seasonal-to-decadal multi-model forecasts is challenging: 1) the cross-correlation structure between multiple models and observations can be complicated, 2) the amount of training data to fit the post-processing parameters is very limited, and 3) the forecast skill of numerical models tends to be low on seasonal time scales. In this talk I will review new statistical post-processing frameworks for multi-model ensembles. I will focus particularly on Bayesian hierarchical modelling approaches, which are flexible enough to capture commonly made assumptions about collective and model-specific biases of multi-model ensembles. Despite the advances in statistical methodology, it turns out to be very difficult to outperform the simplest post-processing method, which simply recalibrates the multi-model ensemble mean by linear regression. I will discuss reasons for this, which are closely linked to the specific characteristics of seasonal multi-model forecasts, and explore possible directions for improvement, for example using informative priors on the post-processing parameters, and jointly modelling forecasts and observations.
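The benchmark mentioned above, recalibrating the multi-model ensemble mean against observations by linear regression, reduces to a few lines; the data here are synthetic placeholders under assumed model biases.

```python
# Minimal sketch of the benchmark post-processing method: recalibrate the
# multi-model ensemble mean against observations by simple linear regression
# over a short training period. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_years, n_models = 30, 5
truth = rng.normal(0, 1, n_years)
# Each model sees the truth with its own bias and noise.
ens = truth[:, None] + rng.normal(0.5, 1.0, (n_years, n_models))
ens_mean = ens.mean(axis=1)

# Fit obs = a + b * ensemble_mean on the training years.
A = np.column_stack([np.ones(n_years), ens_mean])
a, b = np.linalg.lstsq(A, truth, rcond=None)[0]
recalibrated = a + b * ens_mean
print(a, b)
```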
Fabrication of large area woodpile structure in polymer
NASA Astrophysics Data System (ADS)
Gupta, Jaya Prakash; Dutta, Neilanjan; Yao, Peng; Sharkawy, Ahmed S.; Prather, Dennis W.
2009-02-01
A fabrication process for three-dimensional woodpile photonic crystals based on multilayer photolithography of the commercially available photoresist SU-8 has been demonstrated. A 6-layer, 2 mm × 2 mm woodpile has been fabricated. Different factors that influence the spin thickness in multiple resist applications have been studied. The fabrication method removes the problem of intermixing and is more repeatable and robust than the previously reported multilayer fabrication techniques for three-dimensional photonic crystal structures. Each layer is developed before the next layer of photoresist is spun, instead of developing the whole structure in the final step as in the usual multilayer process. The desired thickness for each layer is achieved by calibration of the spin speed and use of different photoresist compositions. Deep-UV exposure confinement has been the defining parameter in this process. Layer uniformity for every layer is independent of the previously developed layers and depends on the photoresist planarizing capability, spin parameters and baking conditions. The intermixing problem, which results from uncross-linked photoresist left in the previous layers, is completely removed in this process because the previous layers are fully developed, avoiding any intermixing between the newly spun and previous layers. This process also gives the freedom to redo every spin any number of times without affecting the previously made structure, which is not possible in other multilayer processes where intermediate development is not performed.
Patwardhan, Ketaki; Asgarzadeh, Firouz; Dassinger, Thomas; Albers, Jessica; Repka, Michael A
2015-05-01
In this study, the principles of quality by design (QbD) have been uniquely applied to a pharmaceutical melt extrusion process for an immediate-release formulation with a low-melting model drug, ibuprofen. Two qualitative risk assessment tools - a Fishbone diagram and failure mode effect analysis - were utilized to strategically narrow down the most influential parameters. Selected variables were further assessed using a Plackett-Burman screening study, which was upgraded to a response surface design consisting of the critical factors to study the interactions between the study variables. In-process torque, glass transition temperature (Tg) of the extrudates, assay, dissolution and phase change were measured as responses to evaluate the critical quality attributes (CQAs) of the extrudates. The effect of each study variable on the measured responses was analysed using multiple regression for the screening design and partial least squares for the optimization design. Experimental limits for formulation and process parameters to attain optimum processing have been outlined. A design space plot describing the domain of experimental variables within which the CQAs remained unchanged was developed. A comprehensive approach to melt extrusion product development based on the QbD methodology has been demonstrated. Drug loading concentrations between 40-48% w/w and extrusion temperatures in the range of 90-130°C were found to be the most optimum. © 2015 Royal Pharmaceutical Society.
Simplex GPS and InSAR Inversion Software
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Pierce, Marlon E.
2012-01-01
Changes in the shape of the Earth's surface can be routinely measured with precisions better than centimeters. Processes below the surface often drive these changes and as a result, investigators require models with inversion methods to characterize the sources. Simplex inverts any combination of GPS (global positioning system), UAVSAR (uninhabited aerial vehicle synthetic aperture radar), and InSAR (interferometric synthetic aperture radar) data simultaneously for elastic response from fault and fluid motions. It can be used to solve for multiple faults and parameters, all of which can be specified or allowed to vary. The software can be used to study long-term tectonic motions and the faults responsible for those motions, or can be used to invert for co-seismic slip from earthquakes. Solutions involving estimation of fault motion and changes in fluid reservoirs such as magma or water are possible. Any arbitrary number of faults or parameters can be considered. Simplex specifically solves for any of location, geometry, fault slip, and expansion/contraction of a single or multiple faults. It inverts GPS and InSAR data for elastic dislocations in a half-space. Slip parameters include strike slip, dip slip, and tensile dislocations. It includes a map interface for both setting up the models and viewing the results. Results, including faults, and observed, computed, and residual displacements, are output in text format, a map interface, and can be exported to KML. The software interfaces with the QuakeTables database allowing a user to select existing fault parameters or data. Simplex can be accessed through the QuakeSim portal graphical user interface or run from a UNIX command line.
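The general inversion pattern, adjusting source parameters to minimize the misfit between observed and modelled surface displacements, can be sketched with a downhill-simplex (Nelder-Mead) search; the point-source forward model below is a toy stand-in for the elastic half-space dislocation code actually used.

```python
# Hedged sketch of the inversion pattern: adjust source parameters to
# minimize the misfit between observed and modelled surface displacements
# via a downhill-simplex search. The "Mogi-like" point source is a toy
# stand-in for the elastic half-space dislocation forward model.
import numpy as np
from scipy.optimize import minimize

def forward(params, xy):
    """Toy vertical displacement from a buried point source."""
    x0, y0, depth, strength = params
    r2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2 + depth ** 2
    return strength * depth / r2 ** 1.5

xy = np.random.default_rng(4).uniform(-10, 10, (40, 2))   # station locations
true = np.array([1.0, -2.0, 3.0, 50.0])
obs = forward(true, xy) + np.random.default_rng(5).normal(0, 1e-3, 40)

misfit = lambda p: np.sum((obs - forward(p, xy)) ** 2)
sol = minimize(misfit, x0=[0, 0, 5, 10], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-12})
print(sol.x)   # recovered source location, depth and strength
```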
W/O/W multiple emulsions with diclofenac sodium.
Lindenstruth, Kai; Müller, Bernd W
2004-11-01
The disperse oil droplets of W/O/W multiple emulsions contain small water droplets in which drugs can be incorporated, but the structure of these emulsions is also a source of potential instability. Because the middle oil phase acts as a 'semipermeable' membrane, water can pass across the oil phase. Moreover, since the emulsions are produced in a two-step production process, not only the leakage of encapsulated drug molecules out of the inner water phase during storage but also a production-induced reduction of the encapsulation rate should be considered. The aim of this study was to ascertain how far the production-induced reduction of the encapsulation rate relates to the size of the inner water droplets, and to evaluate the relevance of multiple emulsions as a drug carrier for diclofenac sodium. Multiple emulsions were therefore produced according to a central composite design. During the second production step it was observed that the parameters pressure and temperature influence the size of the oil droplets in the W/O/W multiple emulsions. Further experiments with different W/O emulsions resulted in W/O/W multiple emulsions with different encapsulation rates of diclofenac sodium, due to the different sizes of the inner water droplets obtained in the first production step.
Effects of multiple scattering on time- and depth-resolved signals in airborne lidar systems
NASA Technical Reports Server (NTRS)
Punjabi, A.; Venable, D. D.
1986-01-01
A semianalytic Monte Carlo radiative transfer model (SALMON) is employed to probe the effects of multiple-scattering events on the time- and depth-resolved lidar signals from homogeneous aqueous media. The effective total attenuation coefficients in the single-scattering approximation are determined as functions of dimensionless parameters characterizing the lidar system and the medium. Results show that single-scattering events dominate when these parameters are close to their lower bounds and that when their values exceed unity multiple-scattering events dominate.
Multi-chain Markov chain Monte Carlo methods for computationally expensive models
NASA Astrophysics Data System (ADS)
Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.
2017-12-01
Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
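A minimal sketch of the multi-chain idea: several Metropolis chains started from different points, with a Gelman-Rubin R-hat used to judge convergence. The Gaussian target is an assumption standing in for an expensive forward model's posterior.

```python
# Several Metropolis chains from deliberately suboptimal starting points,
# with a Gelman-Rubin R-hat convergence diagnostic. The Gaussian target
# stands in for an expensive forward model's posterior.
import numpy as np

def metropolis(logpost, x0, n_steps, step, rng):
    x, lp = x0, logpost(x0)
    out = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + rng.normal(0, step)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

def gelman_rubin(chains):
    """Potential scale reduction factor over an array (n_chains, n_samples)."""
    m, n = chains.shape
    means = chains.mean(axis=1)
    B = n * means.var(ddof=1)                 # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

logpost = lambda x: -0.5 * x ** 2             # standard normal target
rng = np.random.default_rng(6)
starts = [-10.0, -3.0, 3.0, 10.0]
chains = np.array([metropolis(logpost, s, 2000, 1.0, rng) for s in starts])
print(gelman_rubin(chains[:, 1000:]))         # ~1 once the chains have mixed
```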
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was first used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, into the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets, relative to the likely costs of obtaining the data sets themselves.
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Ranjan, Suman; Mandal, Sanjoy
2017-12-01
Modeling of a triple asymmetrical optical micro ring resonator (TAOMRR) in the z-domain as a 2 × 2 input-output system, with a detailed design of its waveguide configuration using the finite-difference time-domain (FDTD) method, is presented. The transfer function in the z-domain is determined for different input and output ports using the delay-line signal processing technique. The frequency response analysis is carried out in MATLAB, where group delay and dispersion characteristics are also determined. The electric field analysis is done using FDTD. The work proposes a new methodology to design and draw multiple configurations of coupled ring resonators having multiple input and output ports. Important parameters such as the coupling coefficients and FSR are also determined.
NASA Astrophysics Data System (ADS)
Ranjan, Suman; Mandal, Sanjoy
2018-02-01
Modeling of a triple asymmetrical optical micro ring resonator (TAOMRR) in the z-domain as a 2 × 2 input-output system, with a detailed design of its waveguide configuration using the finite-difference time-domain (FDTD) method, is presented. The transfer function in the z-domain is determined for different input and output ports using the delay-line signal processing technique. The frequency response analysis is carried out in MATLAB, where group delay and dispersion characteristics are also determined. The electric field analysis is done using FDTD. The work proposes a new methodology to design and draw multiple configurations of coupled ring resonators having multiple input and output ports. Important parameters such as the coupling coefficients and FSR are also determined.
Wang, Xu; Le, Anh-Thu; Yu, Chao; Lucchese, R. R.; Lin, C. D.
2016-01-01
We discuss a scheme to retrieve transient conformational molecular structure information using photoelectron angular distributions (PADs) that have been averaged over partial alignments of isolated molecules. The photoelectron is pulled out from a localized inner-shell molecular orbital by an X-ray photon. We show that a transient change in the atomic positions from their equilibrium will lead to a sensitive change in the alignment-averaged PADs, which can be measured and used to retrieve the former. Exploiting the experimental convenience of changing the photon polarization direction, we show that it is advantageous to use PADs obtained from multiple photon polarization directions. A simple single-scattering model is proposed and benchmarked to describe the photoionization process and to perform the retrieval using a multiple-parameter fitting method. PMID:27025410
NASA Astrophysics Data System (ADS)
Wang, Xu; Le, Anh-Thu; Yu, Chao; Lucchese, R. R.; Lin, C. D.
2016-03-01
We discuss a scheme to retrieve transient conformational molecular structure information using photoelectron angular distributions (PADs) that have been averaged over partial alignments of isolated molecules. The photoelectron is pulled out from a localized inner-shell molecular orbital by an X-ray photon. We show that a transient change in the atomic positions from their equilibrium will lead to a sensitive change in the alignment-averaged PADs, which can be measured and used to retrieve the former. Exploiting the experimental convenience of changing the photon polarization direction, we show that it is advantageous to use PADs obtained from multiple photon polarization directions. A simple single-scattering model is proposed and benchmarked to describe the photoionization process and to perform the retrieval using a multiple-parameter fitting method.
NASA Astrophysics Data System (ADS)
Kanal, Florian; Kahmann, Max; Tan, Chuong; Diekamp, Holger; Jansen, Florian; Scelle, Raphael; Budnicki, Aleksander; Sutter, Dirk
2017-02-01
The matchless properties of ultrashort laser pulses, such as the enabling of cold processing and non-linear absorption, pave the way to numerous novel applications. Ultrafast lasers arrived in the last decade at a level of reliability suitable for the industrial environment. Within the next years many industrial manufacturing processes in several markets will be replaced by laser-based processes due to their well-known benefits: non-contact wear-free processing, higher process accuracy, an increase of processing speed and often improved economic efficiency compared to conventional processes. Furthermore, new processes will arise with novel sources, addressing previously unsolved challenges. One technical requirement for these exciting new applications will be to optimize the large number of available parameters to the requirements of the application. In this work we present an ultrafast laser system distinguished by its capability to combine high flexibility and real-time process-inherent adjustment of the parameters with industry-ready reliability. This reliability is ensured by long experience in designing and building ultrashort-pulse lasers, combined with rigorous optimization of the mechanical construction, the optical components and the entire laser head for continuous performance. By introducing a new generation of mechanical design in the last few years, TRUMPF enabled its ultrashort-laser platforms to fulfill the very demanding requirements for passively coupling high-energy single-mode radiation into a hollow-core transport fiber. The laser architecture presented here is based on the all-fiber MOPA (master oscillator power amplifier) CPA (chirped pulse amplification) technology. The pulses are generated in a high-repetition-rate mode-locked fiber oscillator that also enables flexible pulse bursts (groups of multiple pulses) with 20 ns intra-burst pulse separation. An external acousto-optic modulator (XAOM) enables linearization and multi-level quad-loop stabilization of the output power of the laser. In addition to the well-established platform, the latest developments addressed single-pulse energies up to 50 μJ and made femtosecond pulse durations available for the TruMicro Series 2000. Beyond these stabilization aspects, this laser architecture together with other optical modules and smart laser control software enables process-driven adjustment of the parameters (e.g. repetition rate, multi-pulse functionalities, pulse energy, pulse duration) by external signals, which will be presented in this work.
NASA Astrophysics Data System (ADS)
Malekabadi, Ali; Paoloni, Claudio
2016-09-01
A microfabrication process based on UV LIGA (German acronym for lithography, electroplating and molding) is proposed for the fabrication of relatively high-aspect-ratio sub-terahertz (100-1000 GHz) metal waveguides, to be used as slow wave structures in sub-THz vacuum electron devices. The high accuracy and tight tolerances required to properly support frequencies in the sub-THz range can only be achieved by a stable process with full parameter control. The proposed process, based on SU-8 photoresist, has been developed to satisfy the highly planar surface requirements of metal sub-THz waveguides. It is demonstrated that, for a given thickness, it is more effective to stack a number of thinner SU-8 layers than to use a single thick layer obtained at a lower spin rate. The multiple-layer approach provides the planarity and surface quality required for electroforming of ground planes or assembly surfaces and for assuring low ohmic losses in the waveguides. A systematic procedure is provided to calculate soft-bake and post-bake times to produce a highly homogeneous SU-8 multiple-layer coating as a mold for very high quality metal waveguides. A double corrugated waveguide designed for a 0.3 THz operating frequency, to be used in vacuum electronic devices, was fabricated as a test structure. The proposed UV LIGA process will enable low-cost production of high-accuracy sub-THz 3D waveguides. This is fundamental for producing a new generation of affordable sub-THz vacuum electron devices, to fill the technological gap that still prevents the wide diffusion of numerous applications based on THz radiation.
NASA Astrophysics Data System (ADS)
Melnikova, I.; Mukai, S.; Vasilyev, A.
Data from remote measurements of reflected radiance with the POLDER instrument on board the ADEOS satellite are used to retrieve the optical thickness, single scattering albedo and phase function parameter of the cloudy and clear atmosphere. A perceptron neural network is used which, from input values of multiangle radiance and solar incident angle, yields the surface albedo, optical thickness, single scattering albedo and phase function parameter in the clear-sky case. The last two parameters are determined as optical averages for the atmospheric column. Solar radiance calculations with the MODTRAN-3 code, taking multiple scattering into account, were performed for neural network training, with all of the above parameters varied randomly on the basis of statistical models of possible variations in the measured parameters. Results of processing one frame of remote observations consisting of 150,000 pixels are presented. The methodology allows operational determination of the optical characteristics of both cloudy and clear atmospheres. Further interpretation of these results makes it possible to extract information about the total content of atmospheric aerosols and absorbing gases and to create models of the real cloudiness. An analytical method of interpretation based on asymptotic formulas of multiple scattering theory is applied to remote observations of reflected radiance for cloudy pixels. Details of the methodology and an error analysis were published and discussed earlier; here we present results of data processing for a pixel size of 6x6 km. In many studies the optical thickness has been evaluated under the assumption of conservative scattering, but in the case of true absorption in clouds large errors in the obtained parameter are possible. The simultaneous retrieval of two parameters at every wavelength independently is an advantage compared with earlier studies. The analytical methodology is based on inverting asymptotic formulas of transfer theory for optically thick stratus clouds. The model of a horizontally infinite layer is considered, with slight horizontal heterogeneity taken into account approximately. Formulas containing only the measured values of two-direction radiance and functions of the solar and view angles were derived earlier, and six azimuth harmonics of the reflection function are taken into account. A simple approximation of cloud-top heterogeneity is used: clouds projecting above the cloud-top plane increase the diffuse component of the incident flux, which is essential for calculating radiative characteristics that depend on illumination conditions. The escape and reflection functions describe this dependence for reflected radiance, and the local albedo of a semi-infinite medium describes it for irradiance; thus the functions depending on solar incident angle are replaced by their modified counterparts. First, the optical thickness of every pixel is obtained with a simple formula assuming conservative scattering for all available view directions. Deviations between the obtained values may be taken as a measure of the deviation of the cloud top from a plane, and a special parameter is derived that takes the shadowing effect into account. Then the single scattering albedo and optical thickness (allowing for true absorption) are obtained for pairs of view directions with equal optical thickness. Finally, the obtained values are averaged and the relative errors evaluated over all viewing directions of every pixel. The procedure is repeated for all wavelengths and pixels independently.
Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity
Englehardt, James D.
2015-01-01
Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
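Under the assumption of AR(1)-style coupling between exponential compounding factors, a small simulation illustrates the convergence described above: outcomes formed as products of autocorrelated increments are well fit by a Weibull distribution. The coupling scheme and parameter values are illustrative, not the paper's exact kinetic model.

```python
# Illustrative simulation: outcomes as products of autocorrelated,
# exponentially distributed compounding factors, with a Weibull fit to the
# resulting magnitudes. The AR(1)-style coupling is an assumption used
# purely to demonstrate the shape convergence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_outcomes, n_increments, rho = 20000, 8, 0.5

u = rng.exponential(1.0, (n_outcomes, n_increments))
rates = np.empty_like(u)
rates[:, 0] = u[:, 0]
for j in range(1, n_increments):              # autocorrelate the increments
    rates[:, j] = rho * rates[:, j - 1] + (1 - rho) * u[:, j]

outcomes = rates.prod(axis=1)                 # multiplicative causes
shape, loc, scale = stats.weibull_min.fit(outcomes, floc=0.0)
print(shape, scale)                           # Weibull exponent and scale
```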
Van Schuerbeek, Peter; Baeken, Chris; De Mey, Johan
2016-01-01
Concerns are rising about the large variability in reported correlations between gray matter morphology and affective personality traits such as 'Harm Avoidance' (HA). A recent review study (Mincic 2015) indicated that this variability could stem from methodological differences between studies. In order to achieve more robust results by standardizing the data processing procedure, as a first step, we repeatedly analyzed data from healthy females while changing the processing settings (voxel-based morphometry (VBM) or region-of-interest (ROI) labeling, smoothing filter width, nuisance parameters included in the regression model, brain atlas and multiple comparisons correction method). The heterogeneity in the obtained results clearly illustrates the dependency of the study outcome on the chosen analysis settings. Based on our results and the existing literature, we recommend the use of VBM over ROI labeling for whole brain analyses, with a small or intermediate smoothing filter (5-8 mm) and a model variable selection step included in the processing procedure. Additionally, it is recommended that ROI labeling only be used in combination with a clear hypothesis, and authors are encouraged to report their results uncorrected for multiple comparisons as supplementary material to aid review studies. PMID:27096608
Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs.
Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua
2018-01-13
The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Secondly, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the rationality of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters.
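The parameter-identification step can be sketched with a one-dimensional EKF that treats the residual of an aerodynamic derivative as a random-walk state updated from noisy observations; the scalar measurement model below is an assumed simplification of the full aircraft equations.

```python
# Compact sketch of the estimation step: treat the residual of an aerodynamic
# derivative as a random-walk state and update it with an EKF from noisy
# observations. The scalar measurement model is an illustrative
# simplification of the full aircraft dynamics.
import numpy as np

def ekf_parameter(obs, h, h_jac, q, r, x0, p0):
    """1-D EKF: the state x is the unknown parameter residual."""
    x, p = x0, p0
    history = []
    for z in obs:
        p = p + q                       # predict (random-walk parameter)
        H = h_jac(x)
        k = p * H / (H * p * H + r)     # Kalman gain
        x = x + k * (z - h(x))          # measurement update
        p = (1 - k * H) * p
        history.append(x)
    return np.array(history)

true_residual = 0.3
rng = np.random.default_rng(8)
h = lambda x: 2.0 * x                   # linearised observation model
h_jac = lambda x: 2.0
z = h(true_residual) + rng.normal(0, 0.05, 200)
est = ekf_parameter(z, h, h_jac, q=1e-6, r=0.05 ** 2, x0=0.0, p0=1.0)
print(est[-1])                          # converges towards ~0.3
```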
Text vectorization based on character recognition and character stroke modeling
NASA Astrophysics Data System (ADS)
Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao
2014-03-01
In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that for a particular character, its font glyphs may have different shapes but often share the same stroke structures. Like many other methods, the proposed algorithm contains two procedures: dominant point determination and data fitting. The first partitions the outlines into segments, and the second fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points, and for each dominant point the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters can be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected by a maximization process as specified by the parameter set. For minor points, an additional step can be performed to test competing hypotheses and detect degenerate cases.
Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs
Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua
2018-01-01
The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Secondly, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the rationality of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters. PMID:29342856
The potential of multiparametric MRI of the breast
Pinker, Katja; Helbich, Thomas H
2017-01-01
MRI is an essential tool in breast imaging, with multiple established indications. Dynamic contrast-enhanced MRI (DCE-MRI) is the backbone of any breast MRI protocol and has an excellent sensitivity and good specificity for breast cancer diagnosis. DCE-MRI provides high-resolution morphological information, as well as some functional information about neoangiogenesis as a tumour-specific feature. To overcome limitations in specificity, several other functional MRI parameters have been investigated and the application of these combined parameters is defined as multiparametric MRI (mpMRI) of the breast. MpMRI of the breast can be performed at different field strengths (1.5–7 T) and includes both established (diffusion-weighted imaging, MR spectroscopic imaging) and novel MRI parameters (sodium imaging, chemical exchange saturation transfer imaging, blood oxygen level-dependent MRI), as well as hybrid imaging with positron emission tomography (PET)/MRI and different radiotracers. Available data suggest that multiparametric imaging using different functional MRI and PET parameters can provide detailed information about the underlying oncogenic processes of cancer development and progression and can provide additional specificity. This article will review the current and emerging functional parameters for mpMRI of the breast for improved diagnostic accuracy in breast cancer. PMID:27805423
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ammon Williams; Supathorn Phongikaroon; Michael Simpson
A parametric study has been conducted to identify the effects of several parameters on the separation of CsCl from molten LiCl-KCl salt via a melt crystallization process. A reverse vertical Bridgman technique was used to grow the salt crystals. The investigated parameters were: (1) the advancement rate, (2) the crucible lid configuration, (3) the amount of salt mixture, (4) the initial composition of CsCl, and (5) the temperature difference between the high and low furnace zones. From each grown crystal, samples were taken axially and analyzed using inductively coupled plasma mass spectrometry (ICP-MS). Results show that CsCl concentrations at the top of the crystals were low and increased to a maximum at the bottom of the salt. Salt (LiCl-KCl) recycle percentages for the experiments ranged from 50% to 75%, and the CsCl composition in the waste salt was low. To increase the recycle percentage and the concentration of CsCl in the waste form, the possibility of using multiple crystallization stages was explored to further optimize the process. Results show that multiple crystallization stages are practical and that the process should be operated at an advancement rate of 5.0 mm/hr, with a lid configuration and a temperature difference of 200 °C, for a total of five crystallization stages. Under these conditions, up to 88% of the salt can be recycled.
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (-1.96 dB) when comparing to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
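A hedged sketch of the joint estimation problem: a known pilot waveform is hard-limited with an unknown offset, and the channel gain and threshold are estimated together by maximizing a probit-type likelihood. This follows the general setup described above, not the paper's exact algorithm.

```python
# Joint ML estimation from 1-bit samples of a known pilot: the sign of a
# noisy, offset-shifted signal yields a probit-type likelihood in the channel
# gain and the unknown threshold. Illustrative setup, not the paper's method.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(9)
n, sigma = 2000, 1.0
x = rng.choice([-1.0, 1.0], n)                # known pilot symbols
gain_true, tau_true = 0.7, 0.2
y = np.sign(gain_true * x + rng.normal(0, sigma, n) - tau_true)

def neg_loglik(theta):
    gain, tau = theta
    # P(y=+1) = Phi((gain*x - tau)/sigma), so log-lik sums logcdf(y*(...)).
    return -np.sum(norm.logcdf(y * (gain * x - tau) / sigma))

est = minimize(neg_loglik, x0=[0.1, 0.0], method="Nelder-Mead")
print(est.x)                                  # approaches [0.7, 0.2]
```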
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD Bias values are based on state of the art mask manufacturing data and other variables changes are speculated, highlighting the need for improved metrology and awareness.
Biogeochemical and hydrological constraints on concentration-discharge curves
NASA Astrophysics Data System (ADS)
Moatar, Florentina; Abbott, Ben; Minaudo, Camille; Curie, Florence; Pinay, Gilles
2017-04-01
The relationship between concentration and discharge (C-Q) can give insight into the location, abundance, rate of production or consumption, and transport dynamics of elements in coupled terrestrial-aquatic ecosystems. Consequently, the investigation of C-Q relationships for multiple elements at multiple spatial and temporal scales can be a powerful tool to address three of ecohydrology's fundamental questions: where does water come from, how long does it stay, and what happens to the solutes and particulates it carries along the way. We analyzed long-term water quality data from 300 monitoring stations covering nearly half of France to investigate how elemental properties, catchment characteristics, and hydrological parameters influence C-Q. Based on previous work, we segmented the hydrograph, calculating independent C-Q slopes for flows above and below the median discharge. We found that most elements only expressed two of the nine possible C-Q modalities, indicating strong elemental control of C-Q shape. Catchment characteristics including land use and human population had a strong impact on concentration but typically did not influence the C-Q slopes, also suggesting inherent constraints on elemental production and transport. Biological processes appeared to regulate the C-Q slope at low flows for biologically-reactive elements, but at high flows these processes became unimportant, and most parameters expressed chemostatic behavior. This study provides a robust description of possible C-Q shapes for a wide variety of catchments and elements and demonstrates the value of low-frequency, long-term data collected by water quality agencies.
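The segmented C-Q analysis reduces to fitting separate log-log slopes for flows below and above the median discharge; the sketch below uses synthetic dilution-like data in place of the monitoring records.

```python
# Minimal sketch of the segmented C-Q analysis: separate log-log slopes for
# flows below and above the median discharge. The arrays are placeholders
# for paired concentration-discharge observations at one station.
import numpy as np

rng = np.random.default_rng(10)
q = rng.lognormal(0.0, 1.0, 500)                       # discharge
c = 2.0 * q ** -0.3 * np.exp(rng.normal(0, 0.1, 500))  # dilution-like solute

def cq_slope(qs, cs):
    """Slope of log(C) versus log(Q) by least squares."""
    A = np.column_stack([np.ones(qs.size), np.log(qs)])
    return np.linalg.lstsq(A, np.log(cs), rcond=None)[0][1]

qm = np.median(q)
low, high = q <= qm, q > qm
print(cq_slope(q[low], c[low]), cq_slope(q[high], c[high]))   # both ~ -0.3
```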
Cloud and Radiation Mission with Active and Passive Sensing from the Space Station
NASA Technical Reports Server (NTRS)
Spinhirne, James D.
1998-01-01
A cloud and aerosol radiative forcing and physical process study involving active laser and radar profiling with a combination of passive radiometric sounders and imagers would use the space station as an observation platform. The objectives are to observe the full three-dimensional cloud and aerosol structure and the associated physical parameters, leading to a complete measurement of radiation forcing processes. The instruments would include specialized radar and lidar for cloud and aerosol profiling; visible, infrared and microwave imaging radiometers with comprehensive channels for cloud and aerosol observation; and specialized sounders. The low altitude, available power and servicing capability of the space station are significant advantages for the active sensors and multiple passive instruments.
A method for determining the conversion efficiency of multiple-cell photovoltaic devices
NASA Astrophysics Data System (ADS)
Glatfelter, Troy; Burdick, Joseph
A method for accurately determining the conversion efficiency of any multiple-cell photovoltaic device under any arbitrary reference spectrum is presented. This method makes it possible to obtain not only the short-circuit current, but also the fill factor, the open-circuit voltage, and hence the conversion efficiency of a multiple-cell device under any reference spectrum. Results are presented which allow a comparison of the I-V parameters of two-terminal, two- and three-cell tandem devices measured under a multiple-source simulator with the same parameters measured under different reference spectra. It is determined that the uncertainty in the conversion efficiency of a multiple-cell photovoltaic device obtained with this method is less than +/-3 percent.
Morrison, Kathryn T; Shaddick, Gavin; Henderson, Sarah B; Buckeridge, David L
2016-08-15
This paper outlines a latent process model for forecasting multiple health outcomes arising from a common environmental exposure. Traditionally, surveillance models in environmental health do not link health outcome measures, such as morbidity or mortality counts, to measures of exposure, such as air pollution. Moreover, different measures of health outcomes are treated as independent, while it is known that they are correlated with one another over time as they arise in part from a common underlying exposure. We propose modelling an environmental exposure as a latent process, and we describe the implementation of such a model within a hierarchical Bayesian framework and its efficient computation using integrated nested Laplace approximations. Through a simulation study, we compare distinct univariate models for each health outcome with a bivariate approach. The bivariate model outperforms the univariate models in bias and coverage of parameter estimation, in forecast accuracy and in computational efficiency. The methods are illustrated with a case study using healthcare utilization and air pollution data from British Columbia, Canada, 2003-2011, where seasonal wildfires produce high levels of air pollution, significantly impacting population health. Copyright © 2016 John Wiley & Sons, Ltd.
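To make the latent-process structure concrete, here is a minimal simulation of two correlated count series driven by one shared AR(1) exposure; the coefficients, series names, and Poisson link are illustrative assumptions, and the paper's actual hierarchical Bayesian fitting with INLA is not reproduced here.

```python
import numpy as np

def simulate_shared_exposure(T=365, phi=0.9, sigma=0.3,
                             base=(2.0, 1.0), load=(0.8, 0.5), seed=0):
    """Two health-outcome count series driven by a common latent AR(1)
    exposure; their correlation arises only through the shared process."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)   # latent exposure
    y1 = rng.poisson(np.exp(base[0] + load[0] * x))     # e.g. physician visits
    y2 = rng.poisson(np.exp(base[1] + load[1] * x))     # e.g. dispensations
    return x, y1, y2
```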
Circuit QED with qutrits: Coupling three or more atoms via virtual-photon exchange
NASA Astrophysics Data System (ADS)
Zhao, Peng; Tan, Xinsheng; Yu, Haifeng; Zhu, Shi-Liang; Yu, Yang
2017-10-01
We present a model to describe a generic circuit QED system which consists of multiple artificial three-level atoms, namely qutrits, strongly coupled to a cavity mode. When the state transitions of the atoms disobey the selection rules, processes that do not conserve the number of excitations can occur deterministically. Therefore, we can realize coherent exchange interaction among three or more atoms mediated by the exchange of virtual photons. In addition, we generalize the one-cavity-mode-mediated interactions to the multicavity situation, providing a method to entangle atoms located in different cavities. Using experimentally feasible parameters, we investigate the dynamics of the model including three cyclic-transition three-level atoms, for which the two lowest energy levels can be treated as qubits. We find that two qubits can jointly exchange excitation with one qubit in a coherent and reversible way. In the whole process, the population in the third level of the atoms is negligible and the cavity photon number is far smaller than 1. Our model provides a feasible scheme to couple multiple distant atoms together, which may find applications in quantum information processing.
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
Using AVIRIS data and multiple-masking techniques to map urban forest tree species
Q. Xiao; S.L. Ustin; E.G. McPherson
2004-01-01
Tree type and species information are critical parameters for urban forest management, benefit-cost analysis and urban planning. However, traditionally, these parameters have been derived from limited field samples in urban forest management practice. In this study we used high-resolution Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data and multiple-...
Unciti-Broceta, Juan D; Cano-Cortés, Victoria; Altea-Manzano, Patricia; Pernagallo, Salvatore; Díaz-Mochón, Juan J; Sánchez-Martín, Rosario M
2015-05-15
Engineered nanoparticles (eNPs) for biological and biomedical applications are produced from functionalised nanoparticles (NPs) after undergoing multiple handling steps, giving rise to an inevitable loss of NPs. Herein we present a practical method to quantify the number of NPs per volume in an aqueous suspension using standard spectrophotometers and minute amounts of the suspensions (as little as 1 μL). This method makes it possible, for the first time, to analyse cellular uptake by reporting the number of NPs added per cell, as opposed to current methods, which report the solid content (w/V) of NPs. In analogy to the parameter used in viral infection assays (multiplicity of infection), we propose to name this novel parameter multiplicity of nanofection.
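A rough sketch of the bookkeeping implied by the new parameter is given below. The Beer-Lambert-style conversion from absorbance to particle number is an assumption for illustration (the per-particle extinction cross-section would come from a calibration), and it is not claimed to be the paper's exact procedure; the per-cell ratio is simply the proposed multiplicity of nanofection.

```python
import math

def np_per_microliter(absorbance, sigma_ext_cm2, path_cm=1.0):
    """Particle number concentration from attenuance, assuming
    A = sigma_ext * N * L / ln(10), so N = A * ln(10) / (sigma_ext * L).
    sigma_ext_cm2 is an assumed, size-dependent calibration constant."""
    n_per_cm3 = absorbance * math.log(10) / (sigma_ext_cm2 * path_cm)
    return n_per_cm3 * 1e-3  # 1 uL = 1e-3 cm^3

def multiplicity_of_nanofection(nps_added, n_cells):
    """NPs added per cell, by analogy with multiplicity of infection (MOI)."""
    return nps_added / n_cells
```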
Impact of dynamic distribution of floc particles on flocculation effect.
Nan, Jun; He, Weipeng; Song, Xinin; Li, Guibai
2009-01-01
Polyaluminum chloride (PAC) was used as the coagulant for kaolin suspensions in water. Online instruments, including a turbidimeter and a particle counter, were used to monitor the flocculation process. An evaluation model for quantifying the impact on the flocculation effect was established based on the multiple linear regression analysis method. The index weight of each channel quantitatively described how the variation of the floc particle population in different size ranges causes the decrease in turbidity. The study showed that floc particles in different size ranges contributed differently to the decrease in turbidity and that the index weight of a channel could reliably indicate the degree of impact of the dynamic distribution of floc particles on the flocculation effect. Therefore, the parameter may significantly benefit the development of coagulation and sedimentation techniques as well as optimal coagulant selection.
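A minimal sketch of the regression step described above: the channel index weights fall out of an ordinary least-squares fit of the turbidity decrement on per-channel particle counts. The variable names and data layout are assumptions, not the authors' implementation.

```python
import numpy as np

def channel_weights(counts, dturb):
    """Least-squares index weights w such that dturb ≈ counts @ w.
    counts[i, j]: particle count change in size channel j for sample i;
    dturb[i]: measured turbidity decrement for sample i (hypothetical data)."""
    w, *_ = np.linalg.lstsq(counts, dturb, rcond=None)
    return w
```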
Variability Search in GALFACTS
NASA Astrophysics Data System (ADS)
Kania, Joseph; Wenger, Trey; Ghosh, Tapasi; Salter, Christopher J.
2015-01-01
The Galactic ALFA Continuum Transit Survey (GALFACTS) is an all-Arecibo-sky survey using the seven-beam Arecibo L-band Feed Array (ALFA). The Survey is centered at 1.375 GHz with 300-MHz bandwidth, and measures all four Stokes parameters. We are looking for compact sources that vary in intensity or polarization on timescales of about a month via intra-survey comparisons, and long-term variations through comparisons with the NRAO VLA Sky Survey. Data processing includes locating and rejecting radio frequency interference, recognizing sources, two-dimensional Gaussian fitting to multiple cuts through the same source, and gain corrections. Our Python code is being used on the calibration sources observed in conjunction with the survey measurements to determine the calibration parameters that will then be applied to data for the main field.
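Since the abstract mentions Python-based Gaussian fitting to cuts through a source, here is a hedged sketch of that core step for a single one-dimensional cut; the synthetic data, parameter names, and flat baseline are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, x0, fwhm, base):
    """Gaussian with a flat baseline, fit to a single drift-scan cut."""
    sigma = fwhm / 2.3548
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + base

# x: offsets along the cut; y: calibrated intensity (hypothetical data)
x = np.linspace(-10, 10, 200)
y = gauss(x, 1.0, 0.3, 3.4, 0.05) + np.random.default_rng(1).normal(0, 0.02, x.size)
popt, pcov = curve_fit(gauss, x, y, p0=[y.max(), 0.0, 3.0, 0.0])
amp, x0, fwhm, base = popt  # peak, position, beam width, baseline level
```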
Bistability and delay-induced stability switches in a cancer network with the regulation of microRNA
NASA Astrophysics Data System (ADS)
Song, Yongli; Cao, Xin; Zhang, Tonghua
2018-01-01
In this paper, we are concerned with a cancer network comprising a protein module and a corresponding microRNA cluster that inhibits the synthesis of the proteins. The existence of multiple steady states and their stability, depending on the parameters, are first determined. Bistability and its dependence on the parameters are analyzed, and Hopf bifurcations together with their properties, such as direction and stability, are determined by computing the normal form on the center manifold. Then, the role of the delay in the process of synthesis of the protein is investigated. We show that the delay can stabilize the unstable equilibrium and destabilize the stable equilibrium. Simulations are carried out to numerically illustrate the obtained theoretical results. Finally, the biological interpretation of the theoretical results is discussed.
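The delay-induced stability switch can be illustrated numerically with a generic delayed negative-feedback loop, dx/dt = beta/(1 + x(t - tau)^n) - gamma*x, integrated by Euler's method with a history buffer. This stand-in equation and all parameter values are assumptions for intuition, not the paper's model.

```python
import numpy as np

def simulate_delayed_repression(tau, beta=4.0, gamma=1.0, n=4,
                                T=100.0, dt=1e-3, x0=0.5):
    """Euler integration of dx/dt = beta/(1 + x(t - tau)**n) - gamma*x,
    a generic delayed repression loop (illustrative only)."""
    steps = int(T / dt)
    lag = int(tau / dt)
    x = np.full(steps + 1, x0)
    for t in range(steps):
        x_lag = x[t - lag] if t >= lag else x0
        x[t + 1] = x[t] + dt * (beta / (1 + x_lag ** n) - gamma * x[t])
    return x

# Increasing tau past a critical value destabilizes the equilibrium into
# sustained oscillations (a delay-induced stability switch):
for tau in (0.5, 2.0):
    x = simulate_delayed_repression(tau)
    print(tau, x[-5000:].std())  # small std ~ stable; large ~ oscillatory
```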
Tan, Xiao-Fei; Liu, Shao-Bo; Liu, Yun-Guo; Gu, Yan-Ling; Zeng, Guang-Ming; Hu, Xin-Jiang; Wang, Xin; Liu, Shao-Heng; Jiang, Lu-Hua
2017-03-01
There is growing interest in the scientific community in the production of activated carbon using biochar, pyrolyzed from biomass wastes, as a potentially sustainable precursor. Physical activation and chemical activation are the main methods applied in the activation process. These methods can have significantly beneficial effects on the chemical/physical properties of biochar, making it suitable for multiple applications including water pollution treatment, CO2 capture, and energy storage. The feedstock composition, pyrolysis conditions, and activation parameters of biochar have significant influences on the properties of the resultant activated carbon. Compared with traditional activated carbon, activated biochar appears to be a new, potentially cost-effective and environmentally friendly carbon material with great application prospects in many fields. This review not only summarizes information from current analyses of activated biochar and its multiple applications for further optimization and understanding, but also offers new directions for the development of activated biochar. Copyright © 2016 Elsevier Ltd. All rights reserved.
Forest Attributes from Radar Interferometric Structure and its Fusion with Optical Remote Sensing
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.; Law, Beverly E.; Asner, Gregory P.
2004-01-01
The possibility of global, three-dimensional remote sensing of forest structure with interferometric synthetic aperture radar (InSAR) bears on important forest ecological processes, particularly the carbon cycle. InSAR supplements two-dimensional remote sensing with information in the vertical dimension. Its strength in potential for global coverage complements that of lidar (light detection and ranging), which has the potential for high-accuracy vertical profiles over small areas. InSAR derives its sensitivity to forest vertical structure from the differences in signals received by two spatially separated radar receivers. Estimation of parameters describing vertical structure requires multiple-polarization, multiple-frequency, or multiple-baseline InSAR. Combining InSAR with complementary remote sensing techniques, such as hyperspectral optical imaging and lidar, can enhance vertical-structure estimates and consequent biophysical quantities of importance to ecologists, such as biomass. Future InSAR experiments will supplement recent airborne and spaceborne demonstrations, and together with inputs from ecologists regarding structure, they will suggest designs for future spaceborne strategies for measuring global vegetation structure.
Development of a low-cost multiple diode PIV laser for high-speed flow visualization
NASA Astrophysics Data System (ADS)
Bhakta, Raj; Hargather, Michael
2017-11-01
Particle imaging velocimetry (PIV) is an optical visualization technique that typically incorporates a single high-powered laser to illuminate seeded particles in a fluid flow. Standard PIV lasers are extremely costly and have low repetition rates that severely limit their capability in high-speed, time-resolved imaging. The development of a multiple-diode laser system consisting of continuous lasers allows for flexible high-speed imaging with a wider range of test parameters. The developed laser system was fabricated from off-the-shelf parts for approximately $500. A series of experimental tests was conducted to compare the laser apparatus to a standard Nd:YAG double-pulsed PIV laser. Steady and unsteady flows were processed to compare the two systems and validate the accuracy of the multiple-laser design. PIV results indicate good correlation between the two laser systems and verify the construction of a precise laser instrument. The key technical obstacle to this approach was laser calibration and positioning, which will be discussed. HDTRA1-14-1-0070.
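The core PIV processing operation common to both laser systems is cross-correlation of interrogation windows between consecutive frames; a minimal FFT-based sketch is below. The window contents and pixel-to-velocity conversion are assumptions, not the authors' processing chain.

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Estimate the dominant particle displacement between two
    interrogation windows via FFT-based cross-correlation."""
    fa = frame_a - frame_a.mean()
    fb = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(fa).conj() * np.fft.fft2(fb)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices above Nyquist to negative shifts
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return shift  # (dy, dx) in pixels; velocity = shift * px_size / dt
```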
NASA Astrophysics Data System (ADS)
Zou, X. J.; Zheng, G. G.; Chen, Y. Y.; Xu, L. H.; Lai, M.
2018-04-01
A multi-band absorber constructed from a prism-incorporated one-dimensional photonic crystal (1D-PhC) containing graphene defects is demonstrated theoretically in the visible and near-infrared (vis-NIR) spectral range. By means of the transfer matrix method (TMM), the effect of structural parameters on the optical response of the structure has been investigated. It is possible to achieve multi-peak and complete optical absorption. The simulations reveal that the light intensity is enhanced at the graphene plane, and that the resonant wavelength and the absorption intensity can be tuned by tilting the incidence angle of the impinging light. In particular, multiple graphene sheets are embedded in the arrays without any need for a manufacturing process to cut them into periodic patterns. The proposed concept can be extended to other two-dimensional (2D) materials and engineered for promising applications, including selective or multiplex filters, multiple-channel sensors, and photodetectors.
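The TMM computation named in the abstract can be sketched at normal incidence with the standard characteristic-matrix formulation; the layer stack, indices, and the treatment of graphene as a thin lossy film are assumptions for illustration, and the paper's prism coupling and angle dependence are not reproduced.

```python
import numpy as np

def multilayer_rta(n_layers, d_layers, lam, n_in=1.0, n_sub=1.5):
    """Characteristic-matrix TMM at normal incidence. n_layers may be
    complex (absorbing defect layers); d_layers and lam share one unit."""
    B, C = 1.0 + 0j, n_sub + 0j               # start from the substrate side
    for n, d in zip(reversed(n_layers), reversed(d_layers)):
        delta = 2 * np.pi * n * d / lam        # layer phase thickness
        M = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                      [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([B, C])
    r = (n_in * B - C) / (n_in * B + C)
    t = 2 * n_in / (n_in * B + C)
    R = abs(r) ** 2
    T = np.real(n_sub) * abs(t) ** 2 / n_in
    return R, T, 1.0 - R - T                   # absorption A = 1 - R - T

# e.g. a Bragg-pair stack with one thin absorbing defect (assumed values):
# R, T, A = multilayer_rta([2.3, 1.45] * 8 + [2.6 + 1.4j],
#                          [65, 103] * 8 + [0.34], lam=600)
```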
Induced subgraph searching for geometric model fitting
NASA Astrophysics Data System (ADS)
Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi
2017-11-01
In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs that includes the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.
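For intuition about the maximum-consensus objective, here is a deliberately simplified sketch that replaces the paper's induced-subgraph search with a greedy heuristic for multi-line fitting: sample hypotheses from point pairs, then repeatedly keep the hypothesis covering the most not-yet-claimed points. Everything here (line models, thresholds, greedy selection) is an assumption, not the authors' algorithm.

```python
import numpy as np

def fit_multiple_lines(pts, n_hyp=500, tol=0.05, min_inliers=30, seed=0):
    """Greedy maximum-consensus heuristic for multi-structure 2D line
    fitting; pts is an (N, 2) array possibly corrupted by outliers."""
    rng = np.random.default_rng(seed)
    hyps = []
    for _ in range(n_hyp):                     # hypotheses from point pairs
        i, j = rng.choice(len(pts), 2, replace=False)
        d = pts[j] - pts[i]
        nvec = np.array([-d[1], d[0]])
        norm = np.linalg.norm(nvec)
        if norm > 1e-9:
            nvec = nvec / norm
            hyps.append((nvec, nvec @ pts[i]))
    unclaimed = np.ones(len(pts), dtype=bool)
    models = []
    while True:                                # greedily maximize vertex union
        best, best_mask = None, None
        for nvec, c in hyps:
            mask = unclaimed & (np.abs(pts @ nvec - c) < tol)
            if best_mask is None or mask.sum() > best_mask.sum():
                best, best_mask = (nvec, c), mask
        if best_mask is None or best_mask.sum() < min_inliers:
            break
        models.append(best)
        unclaimed &= ~best_mask
    return models  # each (normal, offset) defines one recovered structure
```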
NASA Astrophysics Data System (ADS)
Eleiwi, Fadi; Laleg-Kirati, Taous Meriem
2018-06-01
An observer-based perturbation extremum seeking control is proposed for a direct-contact membrane distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D advection-diffusion equation model which has pump flow rates as process inputs. The objective of the controller is to optimise the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analysed, and simulations based on real DCMD process parameters for each control input are provided.
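Perturbation extremum seeking, the control scheme named above, can be sketched in a few lines for a static cost map: inject a sinusoidal dither, demodulate the measured cost with the same sinusoid to estimate the gradient, and integrate. The toy objective, gains, and single-input setting are assumptions; the paper's observer and DCMD dynamics are not reproduced.

```python
import numpy as np

def extremum_seeking(J, u0=0.0, a=0.05, w=5.0, k=5.0, dt=1e-3, T=50.0):
    """Discrete-time perturbation extremum seeking on a static cost J(u).
    The demodulated term k * J(u_hat + a*sin(wt)) * sin(wt) averages to
    (k*a/2) * dJ/du, giving gradient ascent on J."""
    u_hat, hist = u0, []
    for t in np.arange(0.0, T, dt):
        s = np.sin(w * t)
        y = J(u_hat + a * s)        # perturbed measurement of the cost
        u_hat += dt * k * y * s      # integrate the demodulated signal
        hist.append(u_hat)
    return np.array(hist)

# converges to the maximizer u* = 1 of this toy trade-off objective:
traj = extremum_seeking(lambda u: -(u - 1.0) ** 2)
print(traj[-1])
```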
Nesbitt, Gene H; Freeman, Lisa M; Hannah, Steven S
2004-01-01
Seventy-two pruritic dogs were fed one of four diets controlled for n-6:n-3 fatty acid ratios and total dietary intake of fatty acids. Multiple parameters were evaluated, including clinical and cytological findings, aeroallergen testing, microbial sampling techniques, and effects of an anti-fungal/antibacterial shampoo and ear cleanser. Significant correlations were observed between many clinical parameters, anatomical sampling sites, and microbial counts when data from the diet groups was combined. There were no statistically significant differences between individual diets for any of the clinical parameters. The importance of total clinical management in the control of pruritus was demonstrated.
Aad, G.; Abbott, B.; Abdallah, J.; ...
2015-10-01
The paper presents studies of Bose–Einstein Correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range pT > 100 MeV and |η| < 2.5 in proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb⁻¹, 190 μb⁻¹ and 12.4 nb⁻¹ for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source size parameter is observed using the high-multiplicity 7 TeV data sample. Finally, the dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
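To see numerically why combining sources tightens the estimate, a naive inverse-variance fusion of the two single-source results lands close to the reported combined value. This is emphatically not the authors' joint spatial capture-recapture model, only a back-of-the-envelope illustration.

```python
import numpy as np

def precision_weighted(means, sds):
    """Inverse-variance fusion of independent estimates (illustrative)."""
    w = 1.0 / np.square(sds)
    mean = np.sum(w * np.asarray(means)) / w.sum()
    sd = np.sqrt(1.0 / w.sum())
    return mean, sd

print(precision_weighted([12.02, 6.65], [3.02, 2.37]))
# ~ (8.7, 1.9) tigers/100 km^2, close to the reported 8.5 ± 1.95
```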
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-07-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Rate decline curves analysis of multiple-fractured horizontal wells in heterogeneous reservoirs
NASA Astrophysics Data System (ADS)
Wang, Jiahang; Wang, Xiaodong; Dong, Wenxiu
2017-10-01
In heterogeneous reservoirs with multiple-fractured horizontal wells (MFHWs), owing to the high-density network of artificial hydraulic fractures, the fluid flow around fracture tips behaves like non-linear flow. Moreover, the production behaviors of different artificial hydraulic fractures also differ. A rigorous semi-analytical model for MFHWs in heterogeneous reservoirs is presented by combining the source function with the boundary element method. The model is first validated against both an analytical model and a simulation model. Then new Blasingame type curves are established. Finally, the effects of critical parameters on the rate decline characteristics of MFHWs are discussed. The results show that heterogeneity has a significant influence on the rate decline characteristics of MFHWs; parameters related to the MFHWs, such as fracture conductivity and length, can also affect the rate characteristics. One novelty of this model is that it considers the elliptical flow around artificial hydraulic fracture tips; the model can therefore be used to predict rate performance more accurately for MFHWs in heterogeneous reservoirs. The other novelty is the ability to model the different production behavior at different fracture stages. Compared to numerical and analytical methods, this model not only reduces extensive computational processing but also shows high accuracy.
Density estimation in tiger populations: combining information for strong inference.
Gopalaswamy, Arjun M; Royle, J Andrew; Delampady, Mohan; Nichols, James D; Karanth, K Ullas; Macdonald, David W
2012-07-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture-recapture data. The model, which combined information, provided the most precise estimate of density (8.5 +/- 1.95 tigers/100 km2 [posterior mean +/- SD]) relative to a model that utilized only one data source (photographic, 12.02 +/- 3.02 tigers/100 km2 and fecal DNA, 6.65 +/- 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
Das, Raibatak; Cairo, Christopher W.; Coombs, Daniel
2009-01-01
The extraction of hidden information from complex trajectories is a continuing problem in single-particle and single-molecule experiments. Particle trajectories are the result of multiple phenomena, and new methods for revealing changes in molecular processes are needed. We have developed a practical technique that is capable of identifying multiple states of diffusion within experimental trajectories. We model single particle tracks for a membrane-associated protein interacting with a homogeneously distributed binding partner and show that, with certain simplifying assumptions, particle trajectories can be regarded as the outcome of a two-state hidden Markov model. Using simulated trajectories, we demonstrate that this model can be used to identify the key biophysical parameters for such a system, namely the diffusion coefficients of the underlying states, and the rates of transition between them. We use a stochastic optimization scheme to compute maximum likelihood estimates of these parameters. We have applied this analysis to single-particle trajectories of the integrin receptor lymphocyte function-associated antigen-1 (LFA-1) on live T cells. Our analysis reveals that the diffusion of LFA-1 is indeed approximately two-state, and is characterized by large changes in cytoskeletal interactions upon cellular activation. PMID:19893741
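A compact sketch of the two-state hidden Markov idea described above: simulate a 2D track that switches between two diffusive states, then evaluate the likelihood of the squared displacements with the forward algorithm and maximize it. The emission model uses the standard fact that in 2D the per-frame squared displacement is exponential with mean 4*D*dt; the frame interval, diffusion coefficients, and optimizer settings are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dt = 0.01  # frame interval in seconds (assumed)

def simulate(n, D=(0.05, 0.5), p_stay=(0.98, 0.97)):
    """2D track switching between two diffusive states (Markov chain)."""
    states = np.zeros(n, dtype=int)
    for t in range(1, n):
        s = states[t - 1]
        states[t] = s if rng.random() < p_stay[s] else 1 - s
    sigma = np.sqrt(2 * np.array(D)[states] * dt)
    return rng.normal(0, sigma[:, None], (n, 2)), states

def negloglik(theta, sq_disp):
    """Forward algorithm: r^2 | state ~ Exponential(mean 4*D*dt)."""
    D1, D2, p11, p22 = theta
    means = np.array([4 * D1 * dt, 4 * D2 * dt])
    P = np.array([[p11, 1 - p11], [1 - p22, p22]])
    alpha, ll = np.array([0.5, 0.5]), 0.0
    for s in sq_disp:
        alpha = (alpha @ P) * (np.exp(-s / means) / means)
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c
    return -ll

steps, _ = simulate(5000)
sq = (steps ** 2).sum(axis=1)
res = minimize(negloglik, x0=[0.02, 0.3, 0.9, 0.9], args=(sq,),
               bounds=[(1e-4, 5), (1e-4, 5), (0.5, 0.999), (0.5, 0.999)])
print(res.x)  # estimated D1, D2 and the state stay-probabilities
```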
NASA Astrophysics Data System (ADS)
Rivera-López, F.; Babu, P.; Basavapoornima, Ch.; Jayasankar, C. K.; Lavín, V.
2011-06-01
Efficient Nd³⁺→Yb³⁺ resonant and phonon-assisted energy transfer processes have been observed in phosphate glasses and have been studied using steady-state and time-resolved optical spectroscopies. Results indicate that the energy transfer occurs via nonradiative electric dipole-dipole processes and is enhanced with the concentration of Yb³⁺ acceptor ions, having an efficiency higher than 75% for the glass doped with 1 mol% of Nd₂O₃ and 4 mol% of Yb₂O₃. The luminescence decay curves show a nonexponential character, and the energy transfer microscopic parameter calculated with the Inokuti-Hirayama model gives a value of 240 × 10⁻⁴⁰ cm⁶ s⁻¹, one of the highest reported in the literature for Nd³⁺-Yb³⁺ co-doped matrices. From the steady-state experimental absorption and emission cross-sections, a general expression for estimating the microscopic energy transfer parameter is proposed based upon the theoretical methods developed by Miyakawa and Dexter and by Tarelho et al. This expression takes into account all the resonant mechanisms involved in an energy transfer process together with other phonon-assisted nonvanishing overlaps. The value of the Nd³⁺→Yb³⁺ energy transfer microscopic parameter has been calculated to be 200 × 10⁻⁴⁰ cm⁶ s⁻¹, which is in good agreement with that obtained from the Inokuti-Hirayama fitting. These results show the importance of the nonresonant phonon-assisted Nd³⁺→Yb³⁺ energy transfer processes and the great potential of these glasses as active matrices in the development of multiple-pump-channel Yb³⁺ lasers.
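The Inokuti-Hirayama fitting step can be sketched with the standard dipole-dipole (s = 6) decay law, I(t) = I0 exp(-t/tau0 - gamma*sqrt(t)), where gamma = (4/3)*pi^(3/2)*N_A*sqrt(C_DA) links the fitted parameter to the microscopic transfer parameter C_DA. The initial guesses and the commented data arrays are assumptions; only the functional form is standard.

```python
import numpy as np
from scipy.optimize import curve_fit

def ih_decay(t, i0, tau0, gamma):
    """Inokuti-Hirayama donor decay for dipole-dipole (s = 6) transfer."""
    return i0 * np.exp(-t / tau0 - gamma * np.sqrt(t))

# t in seconds, decay in arbitrary units (hypothetical digitized data):
# popt, _ = curve_fit(ih_decay, t, decay, p0=[1.0, 350e-6, 50.0])

def c_da(gamma, n_acceptors_cm3):
    """Invert gamma = (4/3) * pi**1.5 * N_A * sqrt(C_DA) for C_DA,
    with the acceptor density N_A in cm^-3; result in cm^6 s^-1."""
    g = (4.0 / 3.0) * np.pi ** 1.5 * n_acceptors_cm3
    return (gamma / g) ** 2
```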
Study of magnetized accretion flow with variable Γ equation of state
NASA Astrophysics Data System (ADS)
Singh, Kuldeep; Chattopadhyay, Indranil
2018-05-01
We present here the solutions of magnetized accretion flow on to a compact object with a hard surface, such as a neutron star. The magnetic field of the central star is assumed to be dipolar, and the magnetic axis is assumed to be aligned with the rotation axis of the star. We have used an equation of state for the accreting fluid in which the adiabatic index depends on the temperature and composition of the flow. We have also included cooling processes such as bremsstrahlung and cyclotron emission in the accretion flow. We found all possible accretion solutions. All accretion solutions terminate with a shock very near to the stellar surface, and the height of this primary shock does not vary much with either the spin period or the Bernoulli parameter of the flow, although the strength of the shock may vary with the period. For a moderately rotating central star, multiple sonic points can form in the flow, and therefore a second shock far away from the stellar surface may also form. However, the second shock is much weaker than the primary one near the surface. We found that if the rotation period is below a certain value P*, then multiple critical points or multiple shocks are not possible, and P* depends upon the composition of the flow. We also found that cooling dominates after the shock and that the cyclotron and bremsstrahlung cooling processes should be considered to obtain a consistent accretion solution.
NASA Astrophysics Data System (ADS)
Johnson, Kristina Mary
In 1973 the computerized tomography (CT) scanner revolutionized medical imaging. This machine can isolate and display, in two-dimensional cross-sections, internal lesions and organs previously impossible to visualize. The possibility of three-dimensional imaging, however, is not yet exploited by present tomographic systems. Using multiple-exposure holography, three-dimensional displays can be synthesized from two-dimensional CT cross-sections. A multiple-exposure hologram is an incoherent superposition of many individual holograms. Intuitively it is expected that holograms recorded with equal energy will reconstruct images with equal brightness. It is found, however, that holograms recorded first are brighter than holograms recorded later in the superposition. This phenomenon is called Holographic Reciprocity Law Failure (HRLF). Computer simulations of latent image formation in multiple-exposure holography are one of the methods used to investigate HRLF. These simulations indicate that it is the time between individual exposures in the multiple-exposure hologram that is responsible for HRLF. This physical parameter introduces an asymmetry into the latent image formation process that favors the signal of previously recorded holograms over holograms recorded later in the superposition. The origin of this asymmetry lies in the dynamics of latent image formation, and in particular in the decay of single-atom latent image specks, which have lifetimes that are short compared to typical times between exposures. An analytical model is developed for a double-exposure hologram that predicts a decrease in the brightness of the second exposure, as compared to the first exposure, as the time between exposures increases. These results are consistent with the computer simulations. Experiments investigating the influence of this parameter on the diffraction efficiency of reconstructed images in a double-exposure hologram are also found to be consistent with the computer simulations and analytical results. From this information, two techniques are presented that correct for HRLF and succeed in reconstructing multiple holographic images of CT cross-sections with equal brightness. The multiple multiple-exposure hologram is a new hologram that increases the number of equally bright images that can be superimposed on one photographic plate.
NASA Astrophysics Data System (ADS)
Mai, Juliane; Cuntz, Matthias; Shafii, Mahyar; Zink, Matthias; Schäfer, David; Thober, Stephan; Samaniego, Luis; Tolson, Bryan
2016-04-01
Hydrologic models are traditionally calibrated against observed streamflow. Recent studies have shown, however, that only a few global model parameters are constrained using this kind of integral signal. They can be identified using prior screening techniques. Since different objectives might constrain different parameters, it is advisable to use multiple sources of information to calibrate such models. One common approach is to combine these multiple objectives (MO) into one single-objective (SO) function, allowing the use of an SO optimization algorithm. Another strategy is to consider the different objectives separately and apply an MO Pareto optimization algorithm. In this study, two major research questions are addressed: 1) How do multi-objective calibrations compare with corresponding single-objective calibrations? 2) How much do calibration results deteriorate when the number of calibrated parameters is reduced by a prior screening technique? The hydrologic model employed in this study is a distributed hydrologic model (mHM) with 52 model parameters, i.e., transfer coefficients. The model uses grid cells as the primary hydrologic unit and accounts for processes like snow accumulation and melting, soil moisture dynamics, infiltration, surface runoff, evapotranspiration, subsurface storage, and discharge generation. The model is applied in three distinct catchments over Europe. The SO calibrations are performed using the Dynamically Dimensioned Search (DDS) algorithm with a fixed budget, while the MO calibrations are achieved using the Pareto Dynamically Dimensioned Search (PA-DDS) algorithm with the same budget. The two objectives used here are the Nash-Sutcliffe Efficiency (NSE) of the simulated streamflow and the NSE of its logarithmic transformation. It is shown that the SO DDS results are located close to the edges of the Pareto fronts of the PA-DDS. The MO calibrations are hence preferable, since they supply multiple equivalent solutions from which the user can choose according to specific needs. The sequential single-objective parameter screening was employed prior to the calibrations, reducing the number of parameters by at least 50% in the different catchments and for the different single objectives. The single-objective calibrations led to a faster convergence of the objectives and are hence beneficial when using DDS on single objectives. The above-mentioned parameter screening technique is generalized for multiple objectives and applied before calibration using the PA-DDS algorithm. Two different alternatives of this MO screening are tested. The comparison of the calibration results using all parameters and using only screened parameters shows, for both alternatives, that the PA-DDS algorithm does not profit in terms of trade-off size and function evaluations required to achieve converged Pareto fronts. This is because the PA-DDS algorithm automatically reduces the search space as the calibration run progresses. This automatic reduction may differ for other search algorithms. It is therefore hypothesized that prior screening can, but need not, be beneficial for parameter estimation, depending on the chosen optimization algorithm.
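The two calibration objectives named above have standard definitions; a minimal sketch (the epsilon guard against zero flows is an assumption, not from the study):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs observed streamflow."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_nse(sim, obs, eps=1e-6):
    """NSE on log-transformed flows, which emphasizes low-flow periods."""
    return nse(np.log(np.asarray(sim, float) + eps),
               np.log(np.asarray(obs, float) + eps))
```

Because the plain NSE weights high-flow errors and the log-NSE weights low-flow errors, the two objectives genuinely pull the calibration toward different parameter sets, which is what makes the Pareto framing informative.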
Implementation of quality by design toward processing of food products.
Rathore, Anurag S; Kapoor, Gautam
2017-05-28
Quality by design (QbD) is a systematic approach that begins with predefined objectives and emphasizes product and process understanding and process control. It is an approach based on principles of sound science and quality risk management. As the food processing industry continues to embrace the idea of in-line, online, and/or at-line sensors and real-time characterization for process monitoring and control, the existing gaps in our ability to monitor the multiple parameters/variables associated with the manufacturing process will be alleviated over time. Investments made in the development of tools and approaches that facilitate high-throughput analytical and process development, process analytical technology, design of experiments, risk analysis, knowledge management, and enhancement of process/product understanding would pave the way for operational and economic benefits later in the commercialization process and across other product pipelines. This article aims to achieve two major objectives: first, to review the progress that has been made in recent years on the topic of QbD implementation in the processing of food products; and second, to present a case study that illustrates the benefits of such QbD implementation.
Star formation history: Modeling of visual binaries
NASA Astrophysics Data System (ADS)
Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.
2018-05-01
Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.
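The random-pairing scenario mentioned above is the simplest to sketch: draw both component masses independently from an assumed initial mass function and record the mass-ratio distribution. The single-power-law (Salpeter-like) IMF, its mass limits, and the bin choices are illustrative assumptions, not the study's full pairing machinery.

```python
import numpy as np

def salpeter_sample(n, alpha=2.35, m_lo=0.1, m_hi=10.0, seed=0):
    """Inverse-transform sampling from a single power-law IMF,
    dN/dm ~ m**(-alpha), on [m_lo, m_hi] (solar masses)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a = 1.0 - alpha
    return (m_lo ** a + u * (m_hi ** a - m_lo ** a)) ** (1.0 / a)

# random pairing: both components drawn independently from the IMF;
# other scenarios (primary-constrained, split-core) constrain the draws
m = salpeter_sample(200_000).reshape(-1, 2)
q = m.min(axis=1) / m.max(axis=1)             # mass ratio q = m2/m1 <= 1
hist, edges = np.histogram(q, bins=20, range=(0, 1), density=True)
```

Comparing such simulated q distributions, after applying evolutionary and selection effects, against the observed visual-binary statistics is what discriminates between the pairing scenarios.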