Sample records for experimentally estimated efficiency

  1. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.
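
The adaptive idea behind such a procedure can be conveyed with a stripped-down sketch. This toy is an illustration, not the toolbox's actual algorithm: it assumes a logistic psychometric function with known slope, guess, and lapse rates, updates a gridded posterior over the threshold after every trial, and places the next trial at the posterior mean.

```python
import math
import random

def pf(x, alpha, beta=2.0, gamma=0.5, lam=0.02):
    """Logistic psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1.0 - gamma - lam) / (1.0 + math.exp(-beta * (x - alpha)))

def run_adaptive_threshold(true_alpha=0.5, n_trials=200, seed=0):
    """Adaptive maximum-likelihood threshold estimation on a grid (toy version)."""
    rng = random.Random(seed)
    grid = [i * 0.05 - 3.0 for i in range(121)]   # candidate thresholds in [-3, 3]
    logp = [0.0] * len(grid)                      # flat prior, log scale
    x = 0.5                                       # first stimulus level
    for _ in range(n_trials):
        correct = rng.random() < pf(x, true_alpha)   # simulated observer
        # Bayesian update of the threshold posterior after the trial.
        for i, a in enumerate(grid):
            p = pf(x, a)
            logp[i] += math.log(p if correct else 1.0 - p)
        # Place the next trial at the posterior-mean threshold.
        m = max(logp)
        w = [math.exp(v - m) for v in logp]
        x = sum(a * wi for a, wi in zip(grid, w)) / sum(w)
    return x
```

A full implementation would also estimate the slope and lapse rate jointly, as the UML procedure does.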

  2. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure

    PubMed Central

    Richards, V. M.; Dai, W.

    2014-01-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given. PMID:24671826

  3. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique to obtain the coarse frequency estimate (locating the peak of the FFT amplitude) is more efficient than conventional searching methods. Thus, the proposed algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
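
As one plausible reading of the coarse-plus-fine idea, here is a minimal toy estimator. Everything here is an assumption, not the authors' algorithm: a naive DFT peak provides the coarse estimate and linearly interpolated zero crossings refine it, on a clean single-tone signal without the harmonic components the paper handles.

```python
import math

def coarse_fine_freq(x, fs):
    """Two-step frequency estimate: coarse DFT-peak search, zero-crossing refinement."""
    n = len(x)
    # Coarse step: locate the peak of the DFT magnitude (naive DFT, positive bins).
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    coarse = best_k * fs / n
    # Fine step: interpolate zero-crossing times and average the half-periods.
    crossings = []
    for t in range(n - 1):
        if x[t] * x[t + 1] < 0:
            crossings.append((t + x[t] / (x[t] - x[t + 1])) / fs)
    fine = (len(crossings) - 1) / (2.0 * (crossings[-1] - crossings[0]))
    return coarse, fine
```

The coarse estimate is limited by the bin spacing fs/n; the zero-crossing step recovers the sub-bin remainder on a clean signal.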

  4. On the Efficiency of Particle Injection into the Damping Ring of the Budker Institute of Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Balakin, V. V.; Vorobev, N. S.; Berkaev, D. V.; Glukhov, S. A.; Gornostaev, P. B.; Dorokhov, V. L.; Chao, Ma Xiao; Meshkov, O. I.; Nikiforov, D. A.; Shashkov, E. V.; Emanov, F. A.; Astrelina, K. V.; Blinov, M. F.; Borin, V. M.

    2018-03-01

    The efficiency of injection from a linear accelerator into the damping ring of the BINP injection complex has been studied experimentally. The estimates of the injection efficiency are in good agreement with the experimental results. Our method of increasing the capture efficiency can enhance the productivity of the injection complex by a factor of 1.5-2.

  5. Real-Time PCR Quantification Using A Variable Reaction Efficiency Model

    PubMed Central

    Platts, Adrian E.; Johnson, Graham D.; Linnemann, Amelia K.; Krawetz, Stephen A.

    2008-01-01

    Quantitative real-time PCR remains a cornerstone technique in gene expression analysis and sequence characterization. Despite the importance of the approach to experimental biology, the confident assignment of reaction efficiency to the early cycles of real-time PCR reactions remains problematic. Considerable noise may be generated where few cycles in the amplification are available to estimate peak efficiency. An alternate approach that uses data from beyond the log-linear amplification phase is explored, with the aim of reducing noise and adding confidence to efficiency estimates. PCR reaction efficiency is regressed to estimate the per-cycle profile of an asymptotically departed peak efficiency, even when this is not closely approximated in the measurable cycles. The process can be repeated over replicates to develop a robust estimate of peak reaction efficiency. This leads to an estimate of the maximum reaction efficiency that may be considered primer-design specific. Using a series of biological scenarios we demonstrate that this approach can provide an accurate estimate of initial template concentration. PMID:18570886
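
The regression idea can be sketched with a hypothetical toy model in which per-cycle efficiency declines linearly with accumulated fluorescence; the intercept of that regression at zero fluorescence then recovers the peak efficiency. This illustrates the general approach, not the authors' exact procedure.

```python
def peak_efficiency(fluor):
    """Estimate peak PCR efficiency by regressing per-cycle efficiency on
    fluorescence; the intercept at zero fluorescence is the peak value."""
    # Observed per-cycle efficiency: fold-change minus one.
    eff = [fluor[c + 1] / fluor[c] - 1.0 for c in range(len(fluor) - 1)]
    f = fluor[:-1]
    n = len(f)
    mf = sum(f) / n
    me = sum(eff) / n
    slope = (sum((a - mf) * (b - me) for a, b in zip(f, eff))
             / sum((a - mf) ** 2 for a in f))
    return me - slope * mf  # regression intercept: efficiency as fluorescence -> 0
```

Because late-cycle (post-log-linear) points span a wide fluorescence range, they anchor the extrapolation back to the unobserved peak.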

  6. A new approach to estimate the geometrical factors, solid angle approximation, geometrical efficiency and their use in basic interaction cross section measurements

    NASA Astrophysics Data System (ADS)

    Rao, D. V.; Cesareo, R.; Brunetti, A.; Gigante, G. E.; Takeda, T.; Itai, Y.; Akatsuka, T.

    2002-10-01

    A new approach is developed to estimate the geometrical factors, solid angle approximation, and geometrical efficiency for an experimental arrangement that uses an X-ray tube and a secondary target as the excitation source, producing nearly monoenergetic Kα radiation to excite the sample. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work.

  7. Experimental investigation of the influence of internal frames on the vibroacoustic behavior of a stiffened cylindrical shell using wavenumber analysis

    NASA Astrophysics Data System (ADS)

    Meyer, V.; Maxit, L.; Renou, Y.; Audoly, C.

    2017-09-01

    The understanding of the influence of non-axisymmetric internal frames on the vibroacoustic behavior of a stiffened cylindrical shell is of high interest for the naval and aeronautic industries. Several numerical studies have shown that non-axisymmetric internal frames can increase the radiation efficiency significantly in the case of a mechanical point force. However, less attention has been paid to experimental verification of this statement. This paper therefore compares the radiation efficiency estimated experimentally for a stiffened cylindrical shell with and without internal frames. The experimental process is based on scanning laser vibrometer measurements of the vibrations on the surface of the shell. A transform of the vibratory field to the wavenumber domain is then performed, which allows the far-field radiated pressure to be estimated with the stationary phase theorem. An increase of the radiation efficiency is observed at low frequencies. Analysis of the velocity field in the physical and wavenumber spaces highlights the coupling of circumferential orders at the origin of the increase in radiation efficiency.

  8. Stated Choice design comparison in a developing country: recall and attribute nonattendance

    PubMed Central

    2014-01-01

    Background Experimental designs constitute a vital component of all Stated Choice (aka discrete choice experiment) studies. However, there exists limited empirical evaluation of the statistical benefits of Stated Choice (SC) experimental designs that employ non-zero prior estimates in constructing non-orthogonal constrained designs. This paper statistically compares the performance of contrasting SC experimental designs. In so doing, the effect of respondent literacy on patterns of Attribute non-Attendance (ANA) across fractional factorial orthogonal and efficient designs is also evaluated. The study uses a ‘real’ SC design to model consumer choice of primary health care providers in rural north India. A total of 623 respondents were sampled across four villages in Uttar Pradesh, India. Methods Comparison of orthogonal and efficient SC experimental designs is based on several measures. Appropriate comparison of each design’s respective efficiency measure is made using D-error results. Standardised Akaike Information Criteria are compared between designs and across recall periods. Comparisons control for stated and inferred ANA. Coefficient and standard error estimates are also compared. Results The added complexity of the efficient SC design, theorised elsewhere, is reflected in higher estimated amounts of ANA among illiterate respondents. However, controlling for ANA using stated and inferred methods consistently shows that the efficient design performs statistically better. Modelling SC data from the orthogonal and efficient designs shows that the model fit of the efficient design outperforms that of the orthogonal design when using a 14-day recall period. The performance of the orthogonal design, with respect to standardised AIC model fit, is better when longer recall periods of 30 days, 6 months and 12 months are used.
Conclusions The effect of the efficient design’s cognitive demand is apparent among literate and illiterate respondents, although more pronounced among illiterate respondents. This study empirically confirms that relaxing the orthogonality constraint of SC experimental designs increases the information collected in choice tasks, subject to the accuracy of the non-zero priors in the design and the correct specification of a ‘real’ SC recall period. PMID:25386388

  9. Efficient implementation of a real-time estimation system for thalamocortical hidden Parkinsonian properties

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.

    2017-01-01

    Real-time estimation of the dynamical characteristics of thalamocortical (TC) cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field-programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is applied to a conductance-based TC neuron model. Since the complexity of the TC neuron model restricts its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. While the proposed method is applied here to estimate the hidden properties of the thalamus and to explore the mechanism of the Parkinsonian state, it can also be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering, and brain-machine interface studies.

  10. On Patarin's Attack against the lIC Scheme

    NASA Astrophysics Data System (ADS)

    Ogura, Naoki; Uchiyama, Shigenori

    In 2007, Ding et al. proposed an attractive scheme called the l-Invertible Cycles (lIC) scheme. lIC is one of the most efficient multivariate public-key cryptosystems (MPKCs); such schemes would be suitable for use under limited computational resources. In 2008, an efficient attack against lIC using Gröbner basis algorithms was proposed by Fouque et al. However, they only estimated the complexity of their attack based on their experimental results. On the other hand, Patarin had proposed an efficient attack against some multivariate public-key cryptosystems, which we call Patarin's attack. The complexity of Patarin's attack can be estimated by finding relations corresponding to each scheme. In this paper, we propose another practical attack against the lIC encryption/signature scheme. We estimate the complexity of our attack (not experimentally) by adapting Patarin's attack. The attack can also be applied to the lIC- scheme. Moreover, we show some experimental results of a practical attack against the lIC/lIC- schemes. This is the first implementation of both our proposed attack and an attack based on Gröbner basis algorithms for the even case, that is, when the parameter l is even.

  11. Increase in the thermodynamic efficiency of the working process of spark-ignited engines on natural gas with the addition of hydrogen

    NASA Astrophysics Data System (ADS)

    Mikhailovna Smolenskaya, Natalia; Vladimirovich Smolenskii, Victor; Vladimirovich Korneev, Nicholas

    2018-02-01

    This work is devoted to the substantiation and practical implementation of a new approach for estimating the change in internal energy from pressure and volume. The pressure is measured with a calibrated sensor. The change in volume inside the cylinder is determined from the position of the piston, which is precisely determined by the angle of rotation of the crankshaft. On the basis of the proposed approach, the thermodynamic efficiency of the working process of spark-ignition engines running on natural gas with the addition of hydrogen was estimated. Experimental studies were carried out on a single-cylinder UIT-85 unit. Their analysis showed an increase in the thermodynamic efficiency of the working process with the addition of hydrogen to compressed natural gas (CNG). The results obtained make it possible to determine the heat-release characteristic from the analysis of experimental data. The effect of hydrogen addition on the CNG combustion process is estimated.

  12. Computation of full energy peak efficiency for nuclear power plant radioactive plume using remote scintillation gamma-ray spectrometry.

    PubMed

    Grozdov, D S; Kolotov, V P; Lavrukhin, Yu E

    2016-04-01

    A method for estimating the full energy peak efficiency in the space around a scintillation detector, including the presence of a collimator, has been developed. It is based on a mathematical convolution of the experimental results with subsequent data extrapolation. The efficiency data showed an average uncertainty of less than 10%. Software to calculate the integral efficiency for a nuclear power plant plume was developed. The paper also provides results of nuclear power plant plume height estimation by analysis of the spectral data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. This work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, such as the Fourier transform followed by optimization, estimation of signal parameters via rotational invariance techniques (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF), in HIM-operator-based phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.

  14. Drainage estimation to aquifer and water use irrigation efficiency in semi-arid zone for a long period of time

    NASA Astrophysics Data System (ADS)

    Jiménez-Martínez, J.; Molinero-Huguet, J.; Candela, L.

    2009-04-01

    Water requirements for different crop types, according to soil type and climate conditions, play an important role not only in efficient agricultural production but also in water resources management and the control of pollutants in drainage water. The key issue in attaining these objectives is the irrigation efficiency. Application of computer codes for irrigation simulation constitutes a fast and inexpensive approach to studying optimal agricultural management practices. To simulate the daily water balance in the soil, vadose zone, and aquifer, the VisualBALAN V. 2.0 code was applied to an experimental area under irrigation characterized by its aridity. The test was carried out in three experimental plots of annual row crops (lettuce and melon), perennial vegetables (artichoke), and fruit trees (citrus) under common open-air agricultural practices from October 1999 to September 2008. Drip irrigation was applied to crop production due to the scarcity of water resources and the need for water conservation. The water level change was monitored in the top unconfined aquifer for each experimental plot. Results of the water balance modelling show good agreement between observed and estimated water level values. For the study period, mean drainage values were 343 mm, 261 mm and 205 mm for lettuce and melon, artichoke, and citrus, respectively. Assessment of water use efficiency was based on the IE indicator proposed by the ASCE Task Committee. For the modelled period, water use efficiency was estimated as 73, 71 and 78% of the applied dose (irrigation + precipitation) for lettuce and melon, artichoke, and citrus, respectively.

  15. Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.

    PubMed

    Houpt, Joseph W; Bittner, Jennifer L

    2018-07-01

    Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual- and group-level inferences. Copyright © 2018 Elsevier Ltd. All rights reserved.
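
For concreteness, the efficiency construct itself is simple to compute once performance scores are in hand. This minimal sketch assumes the common convention that efficiency is the squared ratio of human to ideal sensitivity (d'), with d' = sqrt(2)·z(PC) in a two-alternative forced-choice task; it illustrates the construct only, not the paper's hierarchical Bayesian model. (`statistics.NormalDist` requires Python 3.8+.)

```python
import math
from statistics import NormalDist

def efficiency_2afc(pc_human, pc_ideal):
    """Observer efficiency as the squared ratio of human to ideal d',
    with d' = sqrt(2) * z(proportion correct) for a 2AFC task."""
    z = NormalDist().inv_cdf
    d_human = math.sqrt(2.0) * z(pc_human)
    d_ideal = math.sqrt(2.0) * z(pc_ideal)
    return (d_human / d_ideal) ** 2
```

Efficiency is 1 when human and ideal performance coincide and falls toward 0 as human performance lags the ideal observer.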

  16. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
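
A bare-bones version of the GP regression at the heart of such a calibration model can be sketched as follows. All simplifications here are assumptions relative to the paper's procedure: one input dimension, a fixed RBF kernel, a zero prior mean, and no hyperparameter fitting.

```python
import math

def gp_predict(xs, ys, xq, length=1.0, noise=1e-6):
    """Posterior mean of a 1-D Gaussian process (RBF kernel, zero prior mean)."""
    k = lambda a, b: math.exp(-(a - b) ** 2 / (2.0 * length ** 2))
    n = len(xs)
    # Kernel matrix with a small noise term on the diagonal.
    K = [[k(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    # Solve K * alpha = ys by Gaussian elimination with partial pivoting.
    A = [row[:] + [y] for row, y in zip(K, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    alpha = [0.0] * n
    for r in range(n - 1, -1, -1):
        alpha[r] = (A[r][n] - sum(A[r][c] * alpha[c] for c in range(r + 1, n))) / A[r][r]
    # Predictive mean at the query point.
    return sum(k(xq, xs[i]) * alpha[i] for i in range(n))
```

In the paper's setting the exposure condition is multi-dimensional (concentration, temperature, humidity) and the kernel hyperparameters are fit to calibration data; both are omitted here.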

  17. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on three- and four-time-point designs was evaluated in terms of percent prediction error, design number, coverage of individual and joint confidence intervals for the parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while inter-animal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of the overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
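
The simulation logic for scoring one sampling design can be sketched as follows. All parameter values are hypothetical, and a simple per-animal log-linear fit stands in for the population estimation method used in such studies.

```python
import math
import random

def simulate_design(times, n_animals=200, d=100.0, cl=2.0, v=10.0,
                    omega=0.2, sigma=0.1, seed=1):
    """Percent prediction error of the mean clearance estimate for one sampling
    design (one-compartment model, IV bolus, log-linear fit per animal)."""
    rng = random.Random(seed)
    ests = []
    for _ in range(n_animals):
        cli = cl * math.exp(rng.gauss(0.0, omega))   # inter-animal variability
        vi = v * math.exp(rng.gauss(0.0, omega))
        # Log concentrations with residual intra-animal noise.
        lc = [math.log(d / vi) - (cli / vi) * t + rng.gauss(0.0, sigma)
              for t in times]
        mt = sum(times) / len(times)
        ml = sum(lc) / len(lc)
        slope = (sum((t - mt) * (l - ml) for t, l in zip(times, lc))
                 / sum((t - mt) ** 2 for t in times))
        vhat = d / math.exp(ml - slope * mt)   # intercept -> volume estimate
        ests.append(-slope * vhat)             # elimination rate * volume -> CL
    mean_cl = sum(ests) / len(ests)
    return 100.0 * (mean_cl - cl) / cl
```

Running this for alternative time-point arrangements (e.g. `[0.5, 2.0, 6.0]` versus `[0.5, 1.0, 8.0]`) is the kind of comparison the abstract describes.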

  18. Measuring landscape esthetics: the scenic beauty estimation method

    Treesearch

    Terry C. Daniel; Ron S. Boster

    1976-01-01

    The Scenic Beauty Estimation Method (SBE) provides quantitative measures of esthetic preferences for alternative wildland management systems. Extensive experimentation and testing with user, interest, and professional groups validated the method. SBE shows promise as an efficient and objective means for assessing the scenic beauty of public forests and wildlands, and...

  19. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  20. Classes of Split-Plot Response Surface Designs for Equivalent Estimation

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey

    2006-01-01

    When planning an experimental investigation, we are frequently faced with factors that are difficult or time consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and others that are relatively easy to change, and provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus. Several industrial and scientific examples are presented to illustrate design considerations encountered in the restricted randomization context. In this paper, we propose classes of split-plot response surface designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that allow for equivalent estimation are presented, enabling design construction strategies to transform completely randomized Box-Behnken, equiradial, and small composite designs into a split-plot structure.

  1. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments using an existing experimental procedure, for two reasons. First, existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial.
Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly during an experiment based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.

  2. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
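
The rejection-ABC utility evaluation can be illustrated on a toy problem. Everything here is an assumption for illustration (a Binomial model with a uniform prior, a coin-flip simulator, an absolute-count tolerance), not the paper's epidemic or macroparasite models.

```python
import random

def design_utility(n_trials, n_sims=5000, n_obs=100, tol=1, seed=0):
    """Expected precision (1/variance) of the ABC-rejection posterior for the
    success probability p of a Binomial(n_trials, p) design, p ~ Uniform(0, 1).
    Simulations are pre-computed once and reused for every pseudo-observation."""
    rng = random.Random(seed)
    # Pre-compute (parameter, simulated data) pairs from the prior.
    sims = []
    for _ in range(n_sims):
        p = rng.random()
        y = sum(rng.random() < p for _ in range(n_trials))
        sims.append((p, y))
    # Average posterior precision over pseudo-observed data from the prior predictive.
    total, count = 0.0, 0
    for _, y_obs in sims[:n_obs]:
        kept = [p for p, y in sims if abs(y - y_obs) <= tol]   # ABC rejection
        if len(kept) > 1:
            m = sum(kept) / len(kept)
            var = sum((p - m) ** 2 for p in kept) / (len(kept) - 1)
            total += 1.0 / var
            count += 1
    return total / count
```

Comparing the returned utilities for two candidate designs (say, 10 versus 40 trials) should show the larger design yielding the sharper ABC posterior.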

  3. Trade-offs in experimental designs for estimating post-release mortality in containment studies

    USGS Publications Warehouse

    Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L

    2014-01-01

    Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to the development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance for the experimental design of PRM studies with containers is lacking. We used simulations to evaluate trade-offs in the number of containers (replicates) employed versus the number of fish per container when estimating tagging mortality. We also investigated effects of control fish survival and how among-container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among-container variability in control fish mortality exceeded 10% of the mean. We provide programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
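
The replicate-versus-fish trade-off can be explored with a toy simulation in the same spirit. All parameter values are hypothetical, and the detection rule here is a crude sign comparison, not the authors' analysis.

```python
import random

def detection_rate(n_containers, fish_per, base=0.10, added=0.10,
                   container_sd=0.05, n_sims=500, seed=0):
    """Fraction of simulated experiments in which observed treatment mortality
    exceeds control mortality -- a crude proxy for power to detect the
    additive treatment effect."""
    rng = random.Random(seed)

    def group_mortality(p):
        deaths = 0
        for _ in range(n_containers):
            # Among-container variability in the true mortality rate.
            pc = min(max(rng.gauss(p, container_sd), 0.0), 1.0)
            deaths += sum(rng.random() < pc for _ in range(fish_per))
        return deaths / (n_containers * fish_per)

    hits = sum(group_mortality(base + added) > group_mortality(base)
               for _ in range(n_sims))
    return hits / n_sims
```

With the same total effort per container, adding replicate containers suppresses both the binomial and the among-container components of variance, so detection rates rise.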

  4. Estimation of the Thermodynamic Efficiency of a Solid-State Cooler Based on the Multicaloric Effect

    NASA Astrophysics Data System (ADS)

    Starkov, A. S.; Pakhomov, O. V.; Rodionov, V. V.; Amirov, A. A.; Starkov, I. A.

    2018-03-01

    The thermodynamic efficiency of using the multicaloric effect (μCE) in solid-state cooler systems has been studied in comparison with single-component caloric effects. The approach is illustrated by the example of the Brayton cycle for the μCE and the magnetocaloric effect (MCE). Based on the results of experiments with an Fe48Rh52-PbZr0.53Ti0.47O3 two-layer ferroic composite, the temperature dependence of the relative efficiency is determined and the temperature range in which the μCE is advantageous over the MCE is estimated. The proposed theory of the μCE is compared with experimental data.

  5. DOSESCREEN: a computer program to aid dose placement

    Treesearch

    Kimberly C. Smith; Jacqueline L. Robertson

    1984-01-01

    Careful selection of an experimental design for a bioassay substantially improves the precision of effective dose (ED) estimates. Design considerations typically include determination of sample size, dose selection, and allocation of subjects to doses. DOSESCREEN is a computer program written to help investigators select an efficient design for the estimation of an...

  6. Optimal designs for copula models

    PubMed Central

    Perrone, E.; Müller, W.G.

    2016-01-01

    Copula modelling has in the past decade become a standard tool in many areas of applied statistics. However, a largely neglected aspect concerns the design of related experiments, particularly whether the estimation of copula parameters can be enhanced by optimizing experimental conditions, and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Some examples illustrate that in practical situations considerable gains in design efficiency can be achieved. A natural comparison between different copula models with respect to design efficiency is provided as well. PMID:27453616

  7. The method of constant stimuli is inefficient

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Fitzhugh, Andrew

    1990-01-01

    Simpson (1988) has argued that the method of constant stimuli is as efficient as adaptive methods of threshold estimation and has supported this claim with simulations. It is shown that Simpson's simulations are not a reasonable model of the experimental process and that more plausible simulations confirm that adaptive methods are much more efficient than the method of constant stimuli.

  8. Measurements of Atomic Rayleigh Scattering Cross-Sections: A New Approach Based on Solid Angle Approximation and Geometrical Efficiency

    NASA Astrophysics Data System (ADS)

    Rao, D. V.; Takeda, T.; Itai, Y.; Akatsuka, T.; Seltzer, S. M.; Hubbell, J. H.; Cesareo, R.; Brunetti, A.; Gigante, G. E.

    Atomic Rayleigh scattering cross-sections for low-, medium- and high-Z atoms are measured in vacuum using an X-ray tube with a secondary target as an excitation source instead of radioisotopes. Monoenergetic Kα radiation emitted from a single secondary target is compared with monoenergetic radiation produced using two secondary targets with filters coupled to the X-ray tube. The Kα radiation from the second target of the system is used to excite the sample. The background is reduced considerably and the monochromaticity is improved. Elastic scattering of the secondary target's Kα X-ray line energies by the sample is recorded with HPGe and Si(Li) detectors. A new approach is developed to estimate the solid angle approximation and geometrical efficiency for an experimental arrangement using an X-ray tube and a secondary target. The variation of the solid angle is studied by changing the radius and length of the collimators towards and away from the source and sample. From these values the variation of the total solid angle and geometrical efficiency is deduced, and the optimum value is used for the experimental work. The efficiency is larger because the X-ray fluorescent source acts as a converter. Experimental results based on this system are compared with theoretical estimates, and good agreement is observed between them.

  9. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of efficiency of an aged high purity germanium (HPGe) detector for gaseous sources have been presented in the paper. X-ray radiography of the detector has been performed to get detector dimensions for computational purposes. The dead layer thickness of HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken for obtaining energy dependant efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined from the present work have been used to estimate activity of cover gas sample of a fast reactor.

  10. Plant disease severity assessment - How rater bias, assessment method and experimental design affect hypothesis testing and resource use efficiency

    USDA-ARS?s Scientific Manuscript database

    The impact of rater bias and assessment method on hypothesis testing was studied for different experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed ‘balanced’, and those ...

  11. A comparison of two experimental design approaches in applying conjoint analysis in patient-centered outcomes research: a randomized trial.

    PubMed

    Kinter, Elizabeth T; Prior, Thomas J; Carswell, Christopher I; Bridges, John F P

    2012-01-01

    While the application of conjoint analysis and discrete-choice experiments in health are now widely accepted, a healthy debate exists around competing approaches to experimental design. There remains, however, a paucity of experimental evidence comparing competing design approaches and their impact on the application of these methods in patient-centered outcomes research. Our objectives were to directly compare the choice-model parameters and predictions of an orthogonal and a D-efficient experimental design using a randomized trial (i.e., an experiment on experiments) within an application of conjoint analysis studying patient-centered outcomes among outpatients diagnosed with schizophrenia in Germany. Outpatients diagnosed with schizophrenia were surveyed and randomized to receive choice tasks developed using either an orthogonal or a D-efficient experimental design. The choice tasks elicited judgments from the respondents as to which of two patient profiles (varying across seven outcomes and process attributes) was preferable from their own perspective. The results from the two survey designs were analyzed using the multinomial logit model, and the resulting parameter estimates and their robust standard errors were compared across the two arms of the study (i.e., the orthogonal and D-efficient designs). The predictive performances of the two resulting models were also compared by computing their percentage of survey responses classified correctly, and the potential for variation in scale between the two designs of the experiments was tested statistically and explored graphically. The results of the two models were statistically identical. No difference was found using an overall chi-squared test of equality for the seven parameters (p = 0.69) or via uncorrected pairwise comparisons of the parameter estimates (p-values ranged from 0.30 to 0.98). 
The D-efficient design resulted in directionally smaller standard errors for six of the seven parameters, of which only two were statistically significant, and no difference was found in the observed D-efficiencies of the two designs (p = 0.62). The D-efficient design resulted in poorer predictive performance, but not significantly so (p = 0.73); there was some evidence that the parameters of the D-efficient design were biased marginally towards the null. While no statistical difference in scale was detected between the two designs (p = 0.74), the D-efficient design had a higher relative scale (1.06), which could be observed when the parameters were explored graphically, as the D-efficient parameters were lower. Our results indicate that the orthogonal and D-efficient experimental designs produced statistically equivalent results. That said, we identified several qualitative differences that might have reached statistical significance in a larger sample. While more comparative studies of the statistical efficiency of competing design strategies are needed, a more pressing research problem is to document the impact of experimental design on respondent efficiency.

  12. Use of multi-temporal UAV-derived imagery for estimating individual tree growth in Pinus pinea stands

    Treesearch

    Juan Guerra-Hernández; Eduardo González-Ferreiro; Vicente Monleon; Sonia Faias; Margarida Tomé; Ramón Díaz-Varela

    2017-01-01

    High spatial resolution imagery provided by unmanned aerial vehicles (UAVs) can yield accurate and efficient estimation of tree dimensions and canopy structural variables at the local scale. We flew a low-cost, lightweight UAV over an experimental Pinus pinea L. plantation (290 trees distributed over 16 ha with different fertirrigation treatments)...

  13. Damage classification and estimation in experimental structures using time series analysis and pattern recognition

    NASA Astrophysics Data System (ADS)

    de Lautour, Oliver R.; Omenzetter, Piotr

    2010-07-01

    Developed for studying long sequences of regularly sampled data, time series analysis methods are being increasingly investigated for use in Structural Health Monitoring (SHM). In this research, Autoregressive (AR) models were used to fit the acceleration time histories obtained from two experimental structures, a 3-storey bookshelf structure and the ASCE Phase II Experimental SHM Benchmark Structure, in the undamaged state and in a limited number of damaged states. The coefficients of the AR models were treated as damage-sensitive features and used as input to an Artificial Neural Network (ANN). The ANN was trained to classify damage cases or estimate remaining structural stiffness. The results showed that the combination of AR models and ANNs is an efficient tool for damage classification and estimation, and that it performs well using a small number of damage-sensitive features and a limited number of sensors.
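    The feature-extraction step can be sketched as follows: fit an AR model to an acceleration record by least squares and use the coefficient vector as the damage-sensitive feature. The synthetic signals, AR order, and least-squares fit below are illustrative assumptions, not the paper's setup (which feeds such features to an ANN).

```python
import numpy as np

def ar_coefficients(x, order):
    """Least-squares fit of an AR(order) model
    x[t] = a_1*x[t-1] + ... + a_p*x[t-p] + e[t];
    the coefficient vector serves as a damage-sensitive feature."""
    x = np.asarray(x, float)
    # column k holds lag-(k+1) values aligned with targets x[order:]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# two synthetic 'structures' with different dynamics (illustrative only)
rng = np.random.default_rng(0)
e = rng.normal(size=500)
healthy = np.zeros(500)
damaged = np.zeros(500)
for t in range(2, 500):
    healthy[t] = 1.5 * healthy[t - 1] - 0.9 * healthy[t - 2] + e[t]
    damaged[t] = 1.2 * damaged[t - 1] - 0.8 * damaged[t - 2] + e[t]

print(ar_coefficients(healthy, 2))  # close to [1.5, -0.9]
print(ar_coefficients(damaged, 2))  # close to [1.2, -0.8]
```

    Because the fitted coefficients track the underlying dynamics, a classifier (the paper's ANN) can separate damage states from these short feature vectors.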

  14. PREMIX: PRivacy-preserving EstiMation of Individual admiXture.

    PubMed

    Chen, Feng; Dow, Michelle; Ding, Sijie; Lu, Yao; Jiang, Xiaoqian; Tang, Hua; Wang, Shuang

    2016-01-01

    In this paper we propose a framework, PRivacy-preserving EstiMation of Individual admiXture (PREMIX), using Intel Software Guard Extensions (SGX). SGX is a suite of software and hardware architectures that enables efficient and secure computation over confidential data. PREMIX enables multiple sites to collaborate securely on estimating individual admixture within a secure enclave inside Intel SGX. We implemented a feature selection module to identify the most discriminative Single Nucleotide Polymorphisms (SNPs) based on informativeness, and an Expectation Maximization (EM)-based maximum-likelihood estimator to identify the individual admixture. Experimental results based on both simulated and 1000 Genomes data demonstrated the efficiency and accuracy of the proposed framework. PREMIX ensures a high level of security, as all operations on sensitive genomic data are conducted within a secure enclave using SGX.

  15. Energy Losses Estimation During Pulsed-Laser Seam Welding

    NASA Astrophysics Data System (ADS)

    Sebestova, Hana; Havelkova, Martina; Chmelickova, Hana

    2014-06-01

    The finite-element tool SYSWELD (ESI Group, Paris, France) was adapted to simulate pulsed-laser seam welding. Besides the temperature field distribution, one of the possible outputs of the welding simulation is the amount of absorbed power necessary to melt the required material volume, including energy losses. By comparing the absorbed or melting energy with the applied laser energy, welding efficiencies can be calculated. This article presents results of welding efficiency estimation based on the assimilation of both experimental and simulation output data from pulsed Nd:YAG laser bead-on-plate welding of 0.6-mm-thick AISI 304 stainless steel sheets using different beam powers.

  16. Local deformation for soft tissue simulation

    PubMed Central

    Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2016-01-01

    This paper presents a new methodology to localize the deformation range in order to improve the computational efficiency of soft tissue simulation. The methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method based on elastic theory is used to estimate the stress in soft tissues according to depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining modeling realism. PMID:27286482

  17. Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data

    NASA Astrophysics Data System (ADS)

    Stegmeir, Matthew; Kassen, Dan

    2016-11-01

    As Particle Image Velocimetry (PIV) has continued to mature, it has developed into a robust and flexible velocimetry technique used by expert and non-expert users alike. While historical estimates of PIV accuracy have typically relied heavily on rules of thumb and analysis of idealized synthetic images, increased emphasis has recently been placed on quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Real-world experimental conditions often complicate the collection of "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work uses the results of PIV uncertainty quantification techniques to develop a framework in which PIV users apply estimated confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures for efficient sampling towards converged statistics, leveraging estimated PIV confidence intervals, are provided.
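    A concrete (assumed) form of such a convergence criterion: the confidence-interval half-width on a mean velocity shrinks as z·σ/√N, so the number of independent samples needed for a target tolerance follows directly. This is a textbook normal-statistics sketch, not the specific framework of the abstract.

```python
import math

def samples_for_converged_mean(sigma_u, tolerance, confidence_z=1.96):
    """Independent velocity samples needed so the confidence-interval
    half-width on the mean, z * sigma_u / sqrt(N), falls below `tolerance`."""
    return math.ceil((confidence_z * sigma_u / tolerance) ** 2)

# e.g. fluctuating velocity sigma_u = 0.5 m/s, mean wanted within 0.05 m/s at 95%
print(samples_for_converged_mean(0.5, 0.05))  # 385
```

    In practice σ_u is itself estimated from the data, so the sample count is updated as the measurement proceeds until the criterion is met.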

  18. 3D shape reconstruction of specular surfaces by using phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan

    2016-10-01

    Existing methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry often have complex shapes and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry. Modal estimation is first applied to derive coarse height information for the measured surface as initial iteration values. The real shape is then recovered using a modified Zonal wave-front reconstruction algorithm. By combining the advantages of the Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulation and experimental measurement demonstrate the validity and efficiency of the proposed method. For an experimentally measured spherical mirror reconstructed at 391×529 pixels, the computation time decreases by approximately 74.92% compared with the Zonal estimation, and the surface error is about 6.68 μm. In general, the method converges quickly and achieves high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.

  19. Experimental demonstration of OFDM/OQAM transmission with DFT-based channel estimation for visible laser light communications

    NASA Astrophysics Data System (ADS)

    He, Jing; Shi, Jin; Deng, Rui; Chen, Lin

    2017-08-01

    Recently, visible light communication (VLC) based on light-emitting diodes (LEDs) has been considered a candidate technology for fifth-generation (5G) communications: VLC is free of electromagnetic interference and can simplify integration into heterogeneous wireless networks. Because the data rate of LED-based VLC systems is limited by low pumping efficiency, small output power and narrow modulation bandwidth, visible laser light communication (VLLC) systems using laser diodes (LDs) have attracted increasing attention. In addition, orthogonal frequency division multiplexing/offset quadrature amplitude modulation (OFDM/OQAM) is currently attracting attention in optical communications: because it requires no cyclic prefix (CP) and uses pulse shapes that are well localized in the time-frequency domain, it can achieve high spectral efficiency. Moreover, OFDM/OQAM has lower out-of-band power leakage, which increases robustness against inter-carrier interference (ICI) and frequency offset. In this paper, a Discrete Fourier Transform (DFT)-based channel estimation scheme combined with the interference approximation method (IAM) is proposed and experimentally demonstrated for a VLLC OFDM/OQAM system. The performance of the VLLC OFDM/OQAM system with and without DFT-based channel estimation is investigated. Moreover, the proposed DFT-based channel estimation scheme is compared with the intra-symbol frequency-domain averaging (ISFA)-based method for the VLLC OFDM/OQAM system. The experimental results show that the EVM performance using the DFT-based channel estimation scheme is improved by about 3 dB compared with the conventional IAM method. In addition, the DFT-based channel estimation scheme resists channel noise more effectively than the ISFA-based method.

  20. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features, which are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both the orientation and scaling estimations.

  1. Congestion estimation technique in the optical network unit registration process.

    PubMed

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can thus obtain the congestion level among the ONUs to be registered, and this information may be exploited to change the size of the quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.

  2. Estimating the number of people in crowded scenes

    NASA Astrophysics Data System (ADS)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on the multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using PETS 2009 dataset.
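    Step (3) can be sketched as a regression from interest-point counts to people counts. The training data below are fabricated for illustration, and a single predictor stands in for the paper's multiple regression over several crowd-region features.

```python
import numpy as np

# illustrative training pairs: (space-time interest points in the crowd
# region, ground-truth people count) -- values are made up for the sketch
points = np.array([120, 250, 400, 610, 800], float)
counts = np.array([10, 21, 33, 52, 68], float)

# fit the regression count ~ b0 + b1 * points  (step 3 of the method)
A = np.column_stack([np.ones_like(points), points])
b, *_ = np.linalg.lstsq(A, counts, rcond=None)

# predict the people count for a new frame with 500 interest points
estimate = b[0] + b[1] * 500
print(round(estimate, 1))
```

    The full method would use one such regression per crowd region, with the interest-point extraction of steps (1)-(2) supplying the predictor values.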

  3. Estimation of some transducer parameters in a broadband piezoelectric transmitter by using an artificial intelligence technique.

    PubMed

    Ruíz, A; Ramos, A; San Emeterio, J L

    2004-04-01

    An estimation procedure to efficiently find approximate values of internal parameters in ultrasonic transducers intended for broadband operation would be a valuable tool for discovering internal construction data. This information is necessary for modelling and simulating the acoustic and electrical behaviour of ultrasonic systems containing commercial transducers. There is no general solution to this problem of parameter estimation for broadband piezoelectric probes. In this paper, the general problem is briefly analysed for broadband conditions, and the viability of applying an artificial intelligence technique based on modelling of the transducer's internal components is studied. A genetic algorithm (GA) procedure is presented and applied to the estimation of different parameters for two transducers working as pulsed transmitters. The efficiency of this GA technique is studied, considering the influence of the number and variation range of the estimated parameters. The estimation results are ratified experimentally.

  4. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
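    The KDE component can be sketched as follows. Silverman's rule-of-thumb bandwidth stands in for the paper's kernel-trick bandwidth estimator, which is not reproduced here; the Gaussian kernel and the synthetic data are also assumptions.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth (a common stand-in; the paper's
    kernel-trick bandwidth estimator is not reproduced here)."""
    x = np.asarray(x, float)
    return 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)

def kde_pdf(x, grid, h):
    """Gaussian kernel density estimate evaluated at the points in `grid`."""
    x = np.asarray(x, float)[:, None]
    u = (grid[None, :] - x) / h
    return np.exp(-0.5 * u ** 2).sum(axis=0) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 1000)
h = silverman_bandwidth(data)
grid = np.linspace(-4, 4, 81)
pdf = kde_pdf(data, grid, h)
print(h, pdf[40])  # density near x=0 should approach 1/sqrt(2*pi) ~ 0.399
```

    In the coding scheme, such an estimated density would drive the MMSE predictor for each coding block; the choice of h is the BE problem the abstract highlights.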

  5. Results of design studies and wind tunnel tests of an advanced high lift system for an Energy Efficient Transport

    NASA Technical Reports Server (NTRS)

    Oliver, W. R.

    1980-01-01

    The development of an advanced technology high lift system for an energy efficient transport incorporating a high aspect ratio supercritical wing is described. This development is based on the results of trade studies to select the high lift system, analysis techniques utilized to design the high lift system, and results of a wind tunnel test program. The program included the first experimental low speed, high Reynolds number wind tunnel test for this class of aircraft. The experimental results include the effects on low speed aerodynamic characteristics of various leading and trailing edge devices, nacelles and pylons, aileron, spoilers, and Mach and Reynolds numbers. Results are discussed and compared with the experimental data and the various aerodynamic characteristics are estimated.

  6. Internal quantum efficiency enhancement of GaInN/GaN quantum-well structures using Ag nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iida, Daisuke (Department of Photonics Engineering, Technical University of Denmark, 2800 Lyngby; Faculty of Science and Technology, Meijo University, 1-501 Shiogamaguchi Tempaku, 468-8502 Nagoya)

    2015-09-15

    We report internal quantum efficiency enhancement of a thin p-GaN green quantum-well structure using self-assembled Ag nanoparticles. Temperature-dependent photoluminescence measurements are conducted to determine the internal quantum efficiency, and the impact of excitation power density on the enhancement factor is investigated. We obtain an internal quantum efficiency enhancement by a factor of 2.3 at 756 W/cm² and a factor of 8.1 at 1 W/cm². A Purcell enhancement up to a factor of 26 is estimated by fitting the experimental results to a theoretical model for the efficiency enhancement factor.

  7. Web application for automatic prediction of gene translation elongation efficiency.

    PubMed

    Sokolov, Vladimir; Zuraev, Bulat; Lashin, Sergei; Matushkin, Yury

    2015-09-03

    Expression efficiency is one of the major characteristics describing genes in many modern investigations. Gene expression is regulated at various stages: transcription, translation, post-translational protein modification and others. In this study, the EloE (Elongation Efficiency) web application is described. EloE sorts an organism's genes in descending order of the theoretical rate of the elongation stage of translation, based on analysis of their nucleotide sequences. The theoretical data obtained correlate significantly with available experimental data on gene expression in various organisms. In addition, the program identifies preferential codons in the organism's genes and characterizes the distribution of potential secondary-structure energy in the 5′ and 3′ regions of mRNA. EloE can be useful for preliminary estimation of translation elongation efficiency for genes for which experimental data are not yet available. Some results can also be used, for instance, in programs that model artificial genetic structures in genetic engineering experiments.

  8. Plant Disease Severity Assessment-How Rater Bias, Assessment Method, and Experimental Design Affect Hypothesis Testing and Resource Use Efficiency.

    PubMed

    Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe

    2016-12-01

    The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. 
These results suggest that choice of assessment method, optimizing sample number and number of replicate estimates, and using a balanced experimental design are important criteria to consider to maximize the power of hypothesis tests for comparing treatments using disease severity estimates.
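    The balanced-versus-unbalanced comparison can be sketched with a Monte-Carlo power calculation. The normal severity model, the two-sample z-type test, and all parameter values below are illustrative assumptions rather than the paper's full simulation (which also models rater bias and assessment scales).

```python
import numpy as np

rng = np.random.default_rng(42)

def power(n1, n2, mean1, mean2, sd, n_sim=4000, z_crit=1.96):
    """Monte-Carlo power of a two-sample test comparing mean disease
    severity estimates between two treatments."""
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(mean1, sd, n1)
        b = rng.normal(mean2, sd, n2)
        se = np.sqrt(a.var(ddof=1) / n1 + b.var(ddof=1) / n2)
        if abs(a.mean() - b.mean()) / se > z_crit:
            hits += 1
    return hits / n_sim

# same total sample size, balanced vs unbalanced allocation of specimens
print(power(15, 15, 20.0, 28.0, 8.0))   # balanced
print(power(20, 10, 20.0, 28.0, 8.0))   # unbalanced
```

    At a fixed total sample size the balanced allocation minimizes the standard error of the difference, so its power is at least that of the unbalanced design, matching the abstract's conclusion.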

  9. Adaptive correlation filter-based video stabilization without accumulative global motion estimation

    NASA Astrophysics Data System (ADS)

    Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil

    2014-12-01

    We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information from past input frames and can track the original stabilization position. Because of this model, the proposed method does not need accumulative global motion estimation and can recover the original position even if interframe motion estimation fails. It can also intelligently handle damaged or interrupted video sequences. Moreover, because the method is simple and well suited to parallel schemes, we readily implemented it on a commercial field-programmable gate array and on a graphics processing unit board with the compute unified device architecture. Experimental results show that the proposed approach is both fast and robust.

  10. Mortality estimation from carcass searches using the R-package carcass: a tutorial

    USGS Publications Warehouse

    Korner-Nievergelt, Fränzi; Behr, Oliver; Brinkmann, Robert; Etterson, Matthew A.; Huso, Manuela M. P.; Dalthorp, Daniel; Korner-Nievergelt, Pius; Roth, Tobias; Niermann, Ivo

    2015-01-01

    This article is a tutorial for the R-package carcass. It starts with a short overview of common methods used to estimate mortality based on carcass searches. Then, it guides step by step through a simple example. First, the proportion of animals that fall into the search area is estimated. Second, carcass persistence time is estimated based on experimental data. Third, searcher efficiency is estimated. Fourth, these three estimated parameters are combined to obtain the probability that an animal killed is found by an observer. Finally, this probability is used together with the observed number of carcasses found to obtain an estimate for the total number of killed animals together with a credible interval.
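    The final combination step reduces to a Horvitz-Thompson-style estimator: divide the observed carcass count by the overall probability that a killed animal is found. A minimal sketch follows; the function and parameter names are illustrative, not the carcass package API, and the three probabilities are assumed independent.

```python
def estimated_total_killed(carcasses_found, p_in_search_area,
                           p_persists_to_search, searcher_efficiency):
    """Horvitz-Thompson-style estimate: observed count divided by the
    overall probability that a killed animal is found by an observer."""
    p_found = p_in_search_area * p_persists_to_search * searcher_efficiency
    return carcasses_found / p_found

# e.g. 6 carcasses found; 70% fall in the search area, 80% persist to the
# search, and searchers detect 75% of those present
print(estimated_total_killed(6, 0.70, 0.80, 0.75))  # ~14.3
```

    The carcass package additionally propagates the uncertainty in each estimated probability to produce a credible interval around this point estimate.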

  11. Link-state-estimation-based transmission power control in wireless body area networks.

    PubMed

    Kim, Seungku; Eom, Doo-Seop

    2014-07-01

    This paper presents a novel transmission power control protocol to extend the lifetime of sensor nodes and to increase the link reliability in wireless body area networks (WBANs). We first experimentally investigate the properties of the link states using the received signal strength indicator (RSSI). We then propose a practical transmission power control protocol based on both short- and long-term link-state estimations. The short- and long-term link-state estimations enable the transceiver to adapt the transmission power level and the target RSSI threshold range, respectively, to simultaneously satisfy the requirements of energy efficiency and link reliability. Finally, the performance of the proposed protocol is experimentally evaluated in two experimental scenarios (body posture change and dynamic body motion) and compared with typical WBAN transmission power control protocols: a real-time reactive scheme and a dynamic postural position inference mechanism. From the experimental results, it is found that the proposed protocol increases the lifetime of the sensor nodes by a maximum of 9.86% and enhances the link reliability by reducing packet loss by a maximum of 3.02%.

  12. Relative Navigation for Formation Flying of Spacecraft

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Du, Ju-Young; Hughes, Declan; Junkins, John L.; Crassidis, John L.

    2001-01-01

    This paper presents a robust and efficient approach for relative navigation and attitude estimation of spacecraft flying in formation. This approach uses measurements from a new optical sensor that provides a line of sight vector from the master spacecraft to the secondary satellite. The overall system provides a novel, reliable, and autonomous relative navigation and attitude determination system, employing relatively simple electronic circuits with modest digital signal processing requirements and is fully independent of any external systems. Experimental calibration results are presented, which are used to achieve accurate line of sight measurements. State estimation for formation flying is achieved through an optimal observer design. Also, because the rotational and translational motions are coupled through the observation vectors, three approaches are suggested to separate both signals just for stability analysis. Simulation and experimental results indicate that the combined sensor/estimator approach provides accurate relative position and attitude estimates.

  13. Estimating the distance separating fluorescent protein FRET pairs

    PubMed Central

    van der Meer, B. Wieb; Blank, Paul S.

    2014-01-01

    Förster resonance energy transfer (FRET) describes a physical phenomenon widely applied in biomedical research to estimate separations between biological molecules. Routinely, genetic engineering is used to incorporate spectral variants of the green fluorescent protein (GFP) into cellularly expressed proteins. The transfer efficiency or rate of energy transfer between the donor and acceptor FPs is then assayed. As appreciable FRET occurs only when donors and acceptors are in close proximity (1–10 nm), the presence of FRET may indicate that the engineered proteins associate as interacting species. For a homogeneous population of FRET pairs, the separations between FRET donors and acceptors can be estimated from a measured FRET efficiency if it is assumed that donors and acceptors are randomly oriented and rotate extensively during their excited state (the dynamic regime). Unlike typical organic fluorophores, the rotational correlation times of FPs are typically much longer than their fluorescence lifetime; accordingly, FPs are virtually static during their excited state. Thus, estimating separations between FP FRET pairs is problematic. To overcome this obstacle, we present here a simple method for estimating separations between FPs using the experimentally measured average FRET efficiency. This approach assumes that donor and acceptor fluorophores are randomly oriented but do not rotate during their excited state (the static regime). It utilizes a look-up table generated by Monte-Carlo simulation that allows one to estimate the separation, normalized to the Förster distance, from the average FRET efficiency. Assuming a dynamic regime overestimates the separation significantly (by 10% near 0.5 and 30% near 0.75 efficiency) compared to assuming a static regime, which is more appropriate for estimates of separations between FPs. PMID:23811334
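
    A static-regime look-up table of the kind described can be generated with a short Monte-Carlo simulation. This is a sketch, not the paper's code: it assumes the standard fixed-orientation efficiency E = 1 / (1 + (2/(3κ²))(r/R0)⁶), which follows when R0 is defined with the dynamic-average orientation factor κ² = 2/3.

```python
# Monte-Carlo sketch of a static-regime FRET look-up table:
# sample random frozen dipole orientations, average the efficiency at
# each normalized separation x = r/R0, then invert by interpolation.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

d = random_unit_vectors(N)        # donor dipole directions
a = random_unit_vectors(N)        # acceptor dipole directions
r_hat = random_unit_vectors(N)    # donor-acceptor separation directions
kappa = (d * a).sum(axis=1) - 3.0 * (d * r_hat).sum(axis=1) * (a * r_hat).sum(axis=1)
k2 = kappa ** 2                   # orientation factor; isotropic mean is 2/3

x = np.linspace(0.3, 2.0, 200)    # separation normalized to R0
E_static = np.array([np.mean(1.0 / (1.0 + (2.0 / (3.0 * k2)) * xi ** 6))
                     for xi in x])

def separation_from_efficiency(E_meas):
    """Invert the look-up table (E_static decreases with x)."""
    return float(np.interp(E_meas, E_static[::-1], x[::-1]))

# At a measured average efficiency of 0.5, the static-regime inversion
# gives a smaller separation than the dynamic-regime formula
# x = (1/E - 1)**(1/6), which gives exactly 1.0.
print(separation_from_efficiency(0.5))   # < 1.0
```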

  14. Impact of reduced marker set estimation of genomic relationship matrices on genomic selection for feed efficiency in Angus cattle.

    PubMed

    Rolf, Megan M; Taylor, Jeremy F; Schnabel, Robert D; McKay, Stephanie D; McClure, Matthew C; Northcutt, Sally L; Kerley, Monty S; Weaber, Robert L

    2010-04-19

    Molecular estimates of breeding value are expected to increase selection response due to improvements in the accuracy of selection and a reduction in generation interval, particularly for traits that are difficult or expensive to record or are measured late in life. Several statistical methods for incorporating molecular data into breeding value estimation have been proposed; however, most studies have utilized simulated data in which the generated linkage disequilibrium may not represent the targeted livestock population. A genomic relationship matrix was developed for 698 Angus steers and 1,707 Angus sires using 41,028 single nucleotide polymorphisms, and breeding values were estimated using feed efficiency phenotypes (average daily feed intake, residual feed intake, and average daily gain) recorded on the steers. The number of SNPs needed to accurately estimate a genomic relationship matrix was evaluated in this population. Results were compared to estimates produced from pedigree-based mixed model analysis of 862 Angus steers with 34,864 identified paternal relatives but no female ancestors. Estimates of additive genetic variance and breeding value accuracies were similar for AFI and RFI using the numerator and genomic relationship matrices, despite fewer animals in the genomic analysis. Bootstrap analyses indicated that 2,500-10,000 markers are required for robust estimation of genomic relationship matrices in cattle. This research shows that breeding values and their accuracies may be estimated for commercially important sires for traits recorded in experimental populations, without the need for pedigree data to establish identity by descent between members of the commercial and experimental populations, when at least 2,500 SNPs are available for the generation of a genomic relationship matrix.
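
    A genomic relationship matrix can be built from a SNP genotype matrix in a few lines. The sketch below uses VanRaden's first method, a standard construction; the abstract does not state which construction this study used, so treat it as an illustrative assumption with synthetic genotypes.

```python
# Sketch of a genomic relationship matrix (VanRaden's first method):
# center the 0/1/2 genotype matrix by allele frequency and scale so the
# diagonal is near 1 for unrelated animals in Hardy-Weinberg equilibrium.
import numpy as np

def genomic_relationship_matrix(genotypes):
    """genotypes: (n_animals, n_snps) array of 0/1/2 allele counts."""
    p = genotypes.mean(axis=0) / 2.0          # estimated allele frequencies
    Z = genotypes - 2.0 * p                   # center each SNP column
    denom = 2.0 * np.sum(p * (1.0 - p))       # scales G toward unit diagonal
    return Z @ Z.T / denom

rng = np.random.default_rng(1)
freq = rng.uniform(0.1, 0.9, size=5000)                 # per-SNP frequencies
M = rng.binomial(2, freq, size=(50, 5000))              # synthetic genotypes
G = genomic_relationship_matrix(M)
print(G.shape)                                          # (50, 50)
print(round(float(G.diagonal().mean()), 2))             # close to 1 here
```

    The study's bootstrap question (how many SNPs are enough) amounts to recomputing G from random marker subsets and checking how much its entries move.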

  15. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of methods of spectral-spatial classification of similar-looking types of vegetation on the basis of hyperspectral remote sensing data of the Earth, which take into account local neighborhoods of the analyzed image pixels, is experimentally studied. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large-size hyperspectral image and for its test fragment with different methods of training set construction are reported. The classification accuracy in all cases is estimated through comparisons of ground-truth data and the classification maps formed by the compared methods. The reasons for the differences in these estimates are discussed.

  16. Simulated maximum likelihood method for estimating kinetic rates in gene expression.

    PubMed

    Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin

    2007-01-01

    The kinetic rate in gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed for evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from the discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcriptional factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density, based on the discrete nature of stochastic simulations. A genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
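
    The core idea (approximate the transition density of a stochastic model by a kernel-smoothed distribution of simulated end states, then maximize the resulting likelihood) can be shown on a toy one-species birth-death model of mRNA. This is a sketch under stated assumptions: a Gaussian kernel stands in for the paper's non-parametric densities, and a small grid search stands in for the genetic-algorithm optimizer.

```python
# Toy simulated maximum likelihood: recover the mRNA degradation rate
# k_d of a birth-death process (synthesis rate K_S known) from a time
# series, using Gillespie simulation plus a kernel transition density.
import numpy as np

rng = np.random.default_rng(2)
K_S = 5.0          # known synthesis rate (molecules per time unit)
TRUE_K_D = 0.1     # degradation rate to be recovered
DT = 1.0           # observation interval

def gillespie_step(x, k_s, k_d, t_end):
    """Simulate the birth-death process from state x for t_end time units."""
    t = 0.0
    while True:
        total = k_s + k_d * x
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            return x
        x = x + 1 if rng.random() < k_s / total else x - 1

# Synthetic single-cell time series observed at interval DT.
obs = [50]
for _ in range(20):
    obs.append(gillespie_step(obs[-1], K_S, TRUE_K_D, DT))

def sim_log_likelihood(k_d, n_rep=100, h=3.0):
    """Kernel estimate of each transition density from n_rep simulations."""
    ll = 0.0
    for x0, x1 in zip(obs[:-1], obs[1:]):
        ends = np.array([gillespie_step(x0, K_S, k_d, DT) for _ in range(n_rep)])
        dens = np.mean(np.exp(-0.5 * ((x1 - ends) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
        ll += np.log(dens + 1e-12)
    return ll

candidates = [0.02, 0.05, 0.1, 0.2, 0.5]
best = max(candidates, key=sim_log_likelihood)
print(best)   # expected to be near the true rate 0.1
```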

  17. Assessment of inlet efficiency through a 3D simulation: numerical and experimental comparison.

    PubMed

    Gómez, Manuel; Recasens, Joan; Russo, Beniamino; Martínez-Gomariz, Eduardo

    2016-10-01

    Inlet efficiency is required for characterizing the flow transfer between surface and sewer flow during rain events. The dual drainage approach is based on the joint analysis of both the upper and lower drainage levels, and the flow transfer is one of the relevant elements for properly defining this joint behaviour. This paper presents the results of an experimental and numerical investigation of the inlet efficiency definition. A full-scale (1:1) test platform located at the Technical University of Catalonia (UPC) reproduces both the runoff process in streets and the water entering the inlet. Data from tests performed on this platform allow the inlet efficiency to be estimated as a function of significant hydraulic and geometrical parameters. A reproduction of these tests through a numerical three-dimensional code (Flow-3D) has been carried out, simulating this type of flow by solving the RANS equations. The aim of the work was to reproduce the hydraulic performance of a previously tested grated inlet under several flow and geometric conditions using Flow-3D as a virtual laboratory. This will allow inlet efficiencies to be obtained without prior experimental tests. Moreover, the 3D model allows a better understanding of the hydraulics of flow interception and the flow patterns approaching the inlet.

  18. Efficient design and inference for multistage randomized trials of individualized treatment policies.

    PubMed

    Dawson, Ree; Lavori, Philip W

    2012-01-01

    Clinical demand for individualized "adaptive" treatment policies in diverse fields has spawned the development of clinical trial methodology for their experimental evaluation via multistage designs, building upon methods intended for the analysis of naturalistically observed strategies. Because there is often no need to parametrically smooth multistage trial data (in contrast to observational data for adaptive strategies), it is possible to establish direct connections among different methodological approaches. We show by algebraic proof that the maximum likelihood (ML) and optimal semiparametric (SP) estimators of the population mean of the outcome of a treatment policy, and of its standard error, are equal under certain experimental conditions. This result is used to develop a unified and efficient approach to design and inference for multistage trials of policies that adapt treatment according to discrete responses. We derive a sample size formula expressed in terms of a parametric version of the optimal SP population variance. Nonparametric (sample-based) ML estimation performed well in simulation studies, in terms of achieved power, for scenarios most likely to occur in real studies, even though sample sizes were based on the parametric formula. ML outperformed the SP estimator; differences in achieved power predominantly reflected differences in their estimates of the population mean (rather than estimated standard errors). Neither methodology could mitigate the potential for overestimated sample sizes when strong nonlinearity was purposely simulated for certain discrete outcomes; however, such departures from linearity may not be an issue for many clinical contexts that make evaluation of competitive treatment policies meaningful.

  19. Effects of human running cadence and experimental validation of the bouncing ball model

    NASA Astrophysics Data System (ADS)

    Bencsik, László; Zelei, Ambrus

    2017-05-01

    The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by a few fundamental parameters, such as step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity. Furthermore, it shows that a higher cadence implies a lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.

  20. Experimental Clocking of Nanomagnets with Strain for Ultralow Power Boolean Logic.

    PubMed

    D'Souza, Noel; Salehi Fashami, Mohammad; Bandyopadhyay, Supriyo; Atulasimha, Jayasimha

    2016-02-10

    Nanomagnetic implementations of Boolean logic have attracted attention because of their nonvolatility and the potential for unprecedented overall energy-efficiency. Unfortunately, the large dissipative losses that occur when nanomagnets are switched with a magnetic field or spin-transfer torque severely compromise the energy-efficiency. Recently, there have been experimental reports of utilizing the spin Hall effect for switching magnets, and theoretical proposals for strain-induced switching of single-domain magnetostrictive nanomagnets, that might reduce the dissipative losses significantly. Here, we experimentally demonstrate, for the first time, that strain-induced switching of single-domain magnetostrictive nanomagnets of lateral dimensions ∼200 nm fabricated on a piezoelectric substrate can implement a nanomagnetic Boolean NOT gate and steer bit information unidirectionally in dipole-coupled nanomagnet chains. On the basis of the experimental results with bulk PMN-PT substrates, we estimate that the energy dissipation for logic operations in a reasonably scaled system using thin films will be a mere ∼1 aJ/bit.

  1. Assessment of exposure to composite nanomaterials and development of a personal respiratory deposition sampler for nanoparticles

    NASA Astrophysics Data System (ADS)

    Cena, Lorenzo

    2011-12-01

    The overall goals of this doctoral dissertation are to provide knowledge of workers' exposure to nanomaterials and to assist in the development of standard methods to measure personal exposure to nanomaterials in workplace environments. To achieve the first goal, a field study investigated airborne particles generated from the weighing of bulk carbon nanotubes (CNTs) and the manual sanding of epoxy test samples reinforced with CNTs. This study also evaluated the effectiveness of three local exhaust ventilation (LEV) conditions (no LEV, custom fume hood and biosafety cabinet) for control of exposure to particles generated during sanding of CNT-epoxy nanocomposites. Particle number and respirable mass concentrations were measured with direct-read instruments, and particle morphology was determined by electron microscopy. Sanding of CNT-epoxy nanocomposites released respirable-size airborne particles with protruding CNTs, very different in morphology from bulk CNTs, which tended to remain in clusters (>1 μm). Respirable mass concentrations in the operator's breathing zone were significantly greater when sanding took place in the custom hood (p < 0.0001) compared to the other LEV conditions. This study found that workers' exposure was to particles containing protruding CNTs rather than to bulk CNT particles. Particular attention should be placed on the design and selection of hoods to minimize exposure. Two laboratory studies were conducted to realize the second goal. Collection efficiency of submicrometer particles was evaluated for nylon mesh screens with three pore sizes (60, 100 and 180 μm) at three flow rates (2.5, 4, and 6 Lpm). Single-fiber efficiency of nylon mesh screens was then calculated and compared to a theoretical estimation expression. The effects of particle morphology on collection efficiency were also experimentally measured. The collection efficiency of the screens was found to vary by less than 4% regardless of particle morphology. 
Single-fiber efficiency of the screens calculated from experimental data was in good agreement with that estimated from theory for particles between 40 and 150 nm but deviated from theory for particles outside of this range. New coefficients for the single-fiber efficiency model were identified that minimized the sum of square error (SSE) between the experimental values and those estimated with the model. Compared to the original theory, the SSE calculated using the modified theory was at least threefold lower for all screens and flow rates. Since nylon fibers produce no significant spectral interference when ashed for spectrometric examination, the ability to accurately estimate collection efficiency of submicrometer particles makes nylon mesh screens an attractive collection substrate for nanoparticles. In the third study, laboratory experiments were conducted to develop a novel nanoparticle respiratory deposition (NRD) sampler that selectively collects nanoparticles in a worker's breathing zone apart from larger particles. The NRD sampler consists of a respirable cyclone fitted with an impactor and a diffusion stage containing eight nylon-mesh screens. A sampling criterion for nano-particulate matter (NPM) was developed and set as the target for the collection efficiency of the NRD sampler. The sampler operates at 2.5 Lpm and fits on a worker's lapel. The cut-off diameter of the impactor was experimentally measured to be 300 nm with a sharpness of 1.53. Loading at typical workplace levels was found to have no significant effect (2-way ANOVA, p=0.257) on the performance of the impactor. The effective deposition of particles onto the diffusion stage was found to match the NPM criterion, showing that a sample collected with the NRD sampler represents the concentration of nanoparticles deposited in the human respiratory system.
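
    The coefficient-refitting step described above (minimizing the SSE between measured single-fiber efficiencies and a theoretical expression) can be sketched with a one-parameter model. The diffusion-regime form η = a·Pe^(-2/3) and the data below are illustrative assumptions, not the dissertation's actual model or measurements.

```python
# Hedged sketch: refit the leading coefficient `a` in a diffusion-regime
# single-fiber efficiency model, eta = a * Pe**(-2/3), by least squares
# (i.e., minimizing the sum of squared errors, SSE). Data are synthetic.
import numpy as np

Pe = np.array([50.0, 100.0, 300.0, 1000.0, 3000.0])   # Peclet numbers
eta_exp = 3.1 * Pe ** (-2.0 / 3.0)                    # stand-in "experimental" data

def fit_coefficient(Pe, eta):
    # The model is linear in `a`, so least squares has a closed form:
    # a = sum(x*y) / sum(x*x), with x = Pe**(-2/3).
    x = Pe ** (-2.0 / 3.0)
    return float(np.dot(x, eta) / np.dot(x, x))

a = fit_coefficient(Pe, eta_exp)
sse = float(np.sum((eta_exp - a * Pe ** (-2.0 / 3.0)) ** 2))
print(round(a, 2))   # 3.1, recovering the coefficient used to build the data
print(sse)           # effectively zero for this synthetic data
```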

  2. Fatigue Level Estimation of Bill Based on Acoustic Signal Feature by Supervised SOM

    NASA Astrophysics Data System (ADS)

    Teranishi, Masaru; Omatu, Sigeru; Kosaka, Toshihisa

    Fatigued bills have a harmful influence on the daily operation of Automated Teller Machines (ATMs). To make fatigued-bill classification more efficient, the development of an automatic classification method is desired. We propose a new method to estimate the bending rigidity of a bill from the acoustic signal features of banking machines. The estimated bending rigidities are used as a continuous fatigue level for the classification of fatigued bills. By using the supervised Self-Organizing Map (supervised SOM), we estimate the bending rigidity effectively from the acoustic energy pattern alone. Experimental results with real bill samples show the effectiveness of the proposed method.

  3. The impact of interface bonding efficiency on high-burnup spent nuclear fuel dynamic performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hao; Wang, Jy-An John; Wang, Hong

    Finite element analysis (FEA) was used to investigate the impact of interfacial bonding efficiency at pellet-pellet and pellet-clad interfaces of high-burnup (HBU) spent nuclear fuel (SNF) on system dynamic performance. Bending moments M were applied to the FEA model to evaluate the system responses. From the bending curvature κ, the flexural rigidity EI can be estimated as EI = M/κ. The FEA simulation results were benchmarked with experimental results from cyclic integrated reversal bending fatigue tests (CIRFT) of HBR fuel rods. The consequence of interface debonding between fuel pellets and cladding is a redistribution of the loads carried by the fuel pellets to the clad, which results in a reduction in composite rod system flexural rigidity. Furthermore, the interface bonding efficiency at the pellet-pellet and pellet-clad interfaces can significantly dictate the SNF system dynamic performance. With the consideration of interface bonding efficiency, the HBU SNF fuel property was estimated with CIRFT test data.
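
    The relation EI = M/κ used above is a one-line computation; the numbers below are illustrative, not CIRFT data.

```python
# Worked example of the flexural-rigidity relation EI = M / kappa
# with illustrative values (not CIRFT measurements).

def flexural_rigidity(moment_nm, curvature_per_m):
    """EI in N*m^2 from applied bending moment M and measured curvature kappa."""
    return moment_nm / curvature_per_m

EI = flexural_rigidity(moment_nm=15.0, curvature_per_m=0.5)
print(EI)   # 30.0 N*m^2
```

    Interface debonding shows up in such data as a drop in the measured EI at a given bending moment.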

  4. The impact of interface bonding efficiency on high-burnup spent nuclear fuel dynamic performance

    DOE PAGES

    Jiang, Hao; Wang, Jy-An John; Wang, Hong

    2016-09-26

    Finite element analysis (FEA) was used to investigate the impact of interfacial bonding efficiency at pellet-pellet and pellet-clad interfaces of high-burnup (HBU) spent nuclear fuel (SNF) on system dynamic performance. Bending moments M were applied to the FEA model to evaluate the system responses. From the bending curvature κ, the flexural rigidity EI can be estimated as EI = M/κ. The FEA simulation results were benchmarked with experimental results from cyclic integrated reversal bending fatigue tests (CIRFT) of HBR fuel rods. The consequence of interface debonding between fuel pellets and cladding is a redistribution of the loads carried by the fuel pellets to the clad, which results in a reduction in composite rod system flexural rigidity. Furthermore, the interface bonding efficiency at the pellet-pellet and pellet-clad interfaces can significantly dictate the SNF system dynamic performance. With the consideration of interface bonding efficiency, the HBU SNF fuel property was estimated with CIRFT test data.

  5. A web application for automatic prediction of gene translation elongation efficiency.

    PubMed

    Sokolov, Vladimir S; Zuraev, Bulat S; Lashin, Sergei A; Matushkin, Yury G

    2015-03-01

    Expression efficiency is one of the major characteristics describing genes in various modern investigations. Expression efficiency of genes is regulated at various stages: transcription, translation, posttranslational protein modification and others. In this study, a special EloE (Elongation Efficiency) web application is described. The EloE sorts an organism's genes in descending order of the theoretical rate of the elongation stage of translation, based on the analysis of their nucleotide sequences. The obtained theoretical data have a significant correlation with available experimental data on gene expression in various organisms. In addition, the program identifies preferential codons in the organism's genes and defines the distribution of potential secondary-structure energies in the 5′ and 3′ regions of mRNA. The EloE can be useful in preliminary estimation of translation elongation efficiency for genes for which experimental data are not yet available. Some results can be used, for instance, in other programs modeling artificial genetic structures in genetic engineering experiments. The EloE web application is available at http://www-bionet.sscc.ru:7780/EloE.

  6. Evaluation of flow hydrodynamics in a pilot-scale dissolved air flotation tank: a comparison between CFD and experimental measurements.

    PubMed

    Lakghomi, B; Lawryshyn, Y; Hofmann, R

    2015-01-01

    Computational fluid dynamics (CFD) models of dissolved air flotation (DAF) have shown formation of stratified flow (back and forth horizontal flow layers at the top of the separation zone) and its impact on improved DAF efficiency. However, there has been a lack of experimental validation of CFD predictions, especially in the presence of solid particles. In this work, for the first time, both two-phase (air-water) and three-phase (air-water-solid particles) CFD models were evaluated at pilot scale using measurements of residence time distribution, bubble layer position and bubble-particle contact efficiency. The pilot-scale results confirmed the accuracy of the CFD model for both two-phase and three-phase flows, but showed that the accuracy of the three-phase CFD model would partly depend on the estimation of bubble-particle attachment efficiency.

  7. Resource-Efficient Measurement-Device-Independent Entanglement Witness

    DOE PAGES

    Verbanis, E.; Martin, A.; Rosset, D.; ...

    2016-05-09

    Imperfections in experimental measurement schemes can lead to falsely identifying, or overestimating, entanglement in a quantum system. A recent solution to this is to define schemes that are robust to measurement imperfections: the measurement-device-independent entanglement witness (MDI-EW). This approach can be adapted to witness all entangled qubit states for a wide range of physical systems and does not depend on detection efficiencies or classical communication between devices. In this paper, we extend the theory to remove the necessity of prior knowledge about the two-qubit states to be witnessed. Moreover, we tested this model via a novel experimental implementation for MDI-EW that significantly reduces the experimental complexity. Finally, by applying it to a bipartite Werner state, we demonstrate the robustness of this approach against noise by witnessing entanglement down to an entangled state fraction close to 0.4.
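
    For context on the final figure (witnessing down to an entangled fraction near 0.4): the two-qubit Werner state ρ = f·|ψ⁻⟩⟨ψ⁻| + (1−f)·I/4 is entangled exactly when f > 1/3 by the positive-partial-transpose (PPT) criterion. The check below is that textbook criterion, not the MDI-EW protocol itself.

```python
# PPT check for the two-qubit Werner state: entangled iff the partial
# transpose has a negative eigenvalue, which happens for f > 1/3.
import numpy as np

psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)   # singlet state

def werner(f):
    return f * np.outer(psi_minus, psi_minus) + (1.0 - f) * np.eye(4) / 4.0

def min_pt_eigenvalue(rho):
    """Smallest eigenvalue of the partial transpose over the second qubit."""
    r = rho.reshape(2, 2, 2, 2)                   # indices: i, j, i', j'
    pt = r.transpose(0, 3, 2, 1).reshape(4, 4)    # swap j <-> j'
    return float(np.linalg.eigvalsh(pt).min())

print(min_pt_eigenvalue(werner(0.40)) < 0)   # True: entangled
print(min_pt_eigenvalue(werner(0.30)) < 0)   # False: PPT, hence separable
```

    Analytically the smallest eigenvalue is (1 − 3f)/4, so a witnessed fraction of 0.4 sits close above the f = 1/3 entanglement boundary.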

  8. Online gas composition estimation in solid oxide fuel cell systems with anode off-gas recycle configuration

    NASA Astrophysics Data System (ADS)

    Dolenc, B.; Vrečko, D.; Juričić, Ð.; Pohjoranta, A.; Pianese, C.

    2017-03-01

    Degradation and poisoning of solid oxide fuel cell (SOFC) stacks are continuously shortening the lifespan of SOFC systems. Poisoning mechanisms, such as carbon deposition, form a coating layer, hence rapidly decreasing the efficiency of the fuel cells. The gas composition of the inlet gases is known to have a great impact on the rate of coke formation. Therefore, monitoring of these variables can be of great benefit for the overall management of SOFCs. Although measuring the gas composition of the gas stream is feasible, it is too costly for commercial applications. This paper proposes three distinct approaches for the design of gas composition estimators for an SOFC system in anode off-gas recycle configuration which are (i) accurate and (ii) easy to implement on a programmable logic controller. Firstly, a classical approach is briefly revisited and problems related to implementation complexity are discussed. Secondly, the model is simplified and adapted for easy implementation. Furthermore, an alternative data-driven approach for gas composition estimation is developed. Finally, a hybrid estimator employing experimental data and first principles is proposed. Despite the structural simplicity of the estimators, the experimental validation shows high precision for all of the approaches. Experimental validation is performed on a 10 kW SOFC system.

  9. Piezo-optic, photoelastic, and acousto-optic properties of SrB4O7 crystals.

    PubMed

    Mytsyk, Bohdan; Demyanyshyn, Natalia; Martynyuk-Lototska, Irina; Vlokh, Rostyslav

    2011-07-20

    On the basis of studies of the piezo-optic effect, it has been shown that SrB4O7 crystals can be used as efficient acousto-optic materials in the vacuum ultraviolet spectral range. The full matrices of piezo-optic and photoelastic coefficients have been experimentally obtained for these crystals. The acousto-optic figure of merit and the diffraction efficiency have been estimated for both the visible and deep ultraviolet spectral ranges. © 2011 Optical Society of America
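
    The acousto-optic figure of merit mentioned above is conventionally M2 = n⁶p²/(ρv³). The input values below are illustrative placeholders, not the measured SrB4O7 constants reported in the paper.

```python
# Standard acousto-optic figure of merit, M2 = n**6 * p**2 / (rho * v**3),
# evaluated with illustrative (non-SrB4O7) numbers.

def acousto_optic_m2(n, p, rho, v):
    """n: refractive index, p: effective photoelastic coefficient,
    rho: density (kg/m^3), v: acoustic velocity (m/s). Returns s^3/kg."""
    return n ** 6 * p ** 2 / (rho * v ** 3)

m2 = acousto_optic_m2(n=1.74, p=0.1, rho=4000.0, v=7000.0)
print(f"{m2:.3e}")   # ~2e-16 s^3/kg for these illustrative inputs
```

    The strong n⁶ and v³ dependencies are why high-index, low-sound-velocity crystals dominate acousto-optic applications.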

  10. Effect of ultrasonication in synthesis of gold nano fluid for thermal applications

    NASA Astrophysics Data System (ADS)

    Nath, G.; Giri, R.

    2018-02-01

    Ultrasonically synthesized nanofluids are efficient coolant and heat-exchanger materials that have demonstrated their potential in various fields of thermal engineering. Different acoustical parameters computed from ultrasonic velocity data of gold nanofluids are used in the estimation of thermal conductivity. The computed and experimentally measured values of thermal conductivity agree well. The results show that ultrasonic synthesis of gold nanofluids is an economical and efficient technology for explaining the increase in the thermal conductivity of nanofluids under suitable optimum conditions.

  11. Evaluation of the Thorax of Manduca sexta for Flapping Wing Micro Air Vehicle Applications

    DTIC Science & Technology

    2012-03-01

    …input (Pi) by the muscle efficiency (Em). Estimates for muscular efficiency in insects are based on measurements of oxygen consumption… "Effects of Operating Frequency and Temperature on Mechanical Power Output from Moth Flight Muscle," Journal of Experimental Biology 149 (1990): 61…

  12. An efficient algorithm for measurement of retinal vessel diameter from fundus images based on directional filtering

    NASA Astrophysics Data System (ADS)

    Wang, Xuchu; Niu, Yanmin

    2011-02-01

    Automatic measurement of vessels from fundus images is a crucial step for assessing vessel anomalies in the ophthalmological community, where changes in retinal vessel diameters are believed to be indicative of the risk level of diabetic retinopathy. In this paper, a new retinal vessel diameter measurement method combining vessel orientation estimation and filter response is proposed. Its interesting characteristics include: (1) different from methods that only fit the vessel profiles, the proposed method extracts a more stable and accurate vessel diameter by casting this problem as a maximal-response problem of a variation of the Gabor filter; (2) the proposed method can directly and efficiently estimate the vessel's orientation, which is usually captured by time-consuming multi-orientation fitting techniques in many existing methods. Experimental results show that the proposed method both retains computational simplicity and achieves stable and accurate estimation results.

  13. Experimental research of UWB over fiber system employing 128-QAM and ISFA-optimized scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Xiang, Changqing; Long, Fengting; Chen, Zuo

    2018-05-01

    In this paper, an optimized intra-symbol frequency-domain averaging (ISFA) scheme is proposed and experimentally demonstrated in an intensity-modulation and direct-detection (IMDD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system. According to the channel responses of the three MB-OFDM UWB sub-bands, the optimal ISFA window size for each sub-band is investigated. After 60-km standard single-mode fiber (SSMF) transmission, the experimental results show that, at the bit error rate (BER) of 3.8 × 10⁻³, the receiver sensitivity of 128-quadrature amplitude modulation (QAM) can be improved by 1.9 dB using the proposed enhanced ISFA combined with a training sequence (TS)-based channel estimation scheme, compared with conventional TS-based channel estimation. Moreover, the spectral efficiency (SE) is up to 5.39 bit/s/Hz.
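
    The basic ISFA operation is a sliding average of per-subcarrier channel estimates over adjacent subcarriers, which suppresses noise wherever the channel response is smooth across the window. The sketch below shows the plain operation on synthetic data; the paper's contribution is choosing the window size per sub-band, which this sketch does not reproduce.

```python
# Minimal intra-symbol frequency-domain averaging (ISFA): average the
# complex channel estimate of each subcarrier with its neighbours.
import numpy as np

def isfa(channel_est, window=5):
    """channel_est: complex per-subcarrier estimates; odd window size."""
    half = window // 2
    n = len(channel_est)
    out = np.empty_like(channel_est)
    for k in range(n):
        lo, hi = max(0, k - half), min(n, k + half + 1)
        out[k] = channel_est[lo:hi].mean()   # window shrinks at the edges
    return out

rng = np.random.default_rng(3)
true_h = np.exp(2j * np.pi * 0.001 * np.arange(128))   # slowly varying channel
noisy = true_h + 0.3 * (rng.normal(size=128) + 1j * rng.normal(size=128))
err_raw = float(np.mean(np.abs(noisy - true_h) ** 2))
err_isfa = float(np.mean(np.abs(isfa(noisy, 5) - true_h) ** 2))
print(err_isfa < err_raw)   # True: averaging reduces the estimation error
```

    The trade-off behind the per-sub-band window optimization: a larger window averages away more noise but biases the estimate where the channel varies quickly in frequency.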

  14. Gene expression programming approach for the estimation of moisture ratio in herbal plants drying with vacuum heat pump dryer

    NASA Astrophysics Data System (ADS)

    Dikmen, Erkan; Ayaz, Mahir; Gül, Doğan; Şahin, Arzu Şencan

    2017-07-01

    The determination of the drying behavior of herbal plants is a complex process. In this study, a gene expression programming (GEP) model was used to determine the drying behavior of herbal plants, namely fresh sweet basil, parsley and dill leaves. Time and drying temperature are the input parameters for the estimation of the moisture ratio of herbal plants. The results of the GEP model are compared with experimental drying data. Statistical measures such as the mean absolute percentage error, root-mean-squared error and R-square are used to quantify the difference between the values predicted by the GEP model and the values actually observed in the experimental study. It was found that the results of the GEP model and the experimental study are in moderately good agreement. The results have shown that the GEP model can be considered an efficient modelling technique for the prediction of the moisture ratio of herbal plants.
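
    The three fit statistics named above are computed directly from predicted and observed values. The moisture-ratio numbers below are illustrative, not the study's data.

```python
# Mean absolute percentage error (MAPE), root-mean-squared error (RMSE)
# and R-square, as used to compare model predictions with experiments.
import numpy as np

def fit_statistics(y_true, y_pred):
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    mape = 100.0 * float(np.mean(np.abs((y_true - y_pred) / y_true)))
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return mape, rmse, r2

# Illustrative moisture ratios: experimental vs. model-predicted.
mape, rmse, r2 = fit_statistics([1.0, 0.8, 0.5, 0.3], [0.95, 0.82, 0.48, 0.33])
print(round(mape, 1), round(rmse, 3), round(r2, 3))   # 5.4 0.032 0.986
```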

  15. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    PubMed

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows us to discriminate between competing model hypotheses and to provide guaranteed outer estimates of the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is achieved by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by an efficient strategy for balancing solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first shows that our approach can conclusively rule out wrong model hypotheses. The second focuses on parameter estimation and shows that the proposed method allows evaluation of the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
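    The overlap test at the heart of set-based invalidation can be illustrated on a toy rational model: a parameter box is kept only if the guaranteed output interval it induces can intersect the uncertain measurement interval, so discarded boxes are proven invalid while the kept boxes form a guaranteed outer estimate. Everything below (the model dx/dt = k1 - k2*x with steady state k1/k2, the measurement bounds, and the grid) is an illustrative stand-in, far simpler than the relaxations used for general biochemical networks:

    ```python
    import numpy as np

    # steady state of dx/dt = k1 - k2*x is x_ss = k1/k2 (rational in k1, k2)
    meas_lo, meas_hi = 1.8, 2.2     # uncertain measurement interval for x_ss

    # partition the parameter box [0.1,5]x[0.1,5] into sub-boxes and keep
    # only those whose guaranteed output interval can meet the measurement
    edges = np.linspace(0.1, 5.0, 50)
    consistent = []
    for i in range(len(edges) - 1):
        for j in range(len(edges) - 1):
            k1_lo, k1_hi = edges[i], edges[i + 1]
            k2_lo, k2_hi = edges[j], edges[j + 1]
            out_lo, out_hi = k1_lo / k2_hi, k1_hi / k2_lo  # bounds of k1/k2 on the box
            if out_hi >= meas_lo and out_lo <= meas_hi:    # interval overlap test
                consistent.append((k1_lo, k1_hi, k2_lo, k2_hi))
    ```

    Every parameter pair truly consistent with the data lies in a kept box (the estimate is outer), while boxes whose entire output range misses the measurement are conclusively invalidated.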

  16. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks

    PubMed Central

    2010-01-01

    Background Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. Results In this work we present a set-based framework that allows us to discriminate between competing model hypotheses and to provide guaranteed outer estimates of the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is achieved by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by an efficient strategy for balancing solution accuracy and computational effort. Conclusions The practicability of our approach is illustrated with two case studies. The first shows that our approach can conclusively rule out wrong model hypotheses. The second focuses on parameter estimation and shows that the proposed method allows evaluation of the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates. PMID:20500862

  17. Quantifying the effect of experimental design choices for in vitro scratch assays.

    PubMed

    Johnston, Stuart T; Ross, Joshua V; Binder, Benjamin J; Sean McElwain, D L; Haridas, Parvathi; Simpson, Matthew J

    2016-07-07

    Scratch assays are often used to investigate potential drug treatments for chronic wounds and cancer. Interpreting these experiments with a mathematical model allows us to estimate the cell diffusivity, D, and the cell proliferation rate, λ. However, the influence of the experimental design on the estimates of D and λ is unclear. Here we apply an approximate Bayesian computation (ABC) parameter inference method, which produces a posterior distribution of D and λ, to new sets of synthetic data, generated from an idealised mathematical model, and experimental data for a non-adhesive mesenchymal population of fibroblast cells. The posterior distribution allows us to quantify the amount of information obtained about D and λ. We investigate two types of scratch assay, as well as varying the number and timing of the experimental observations captured. Our results show that a scrape assay, involving one cell front, provides more precise estimates of D and λ, and is more computationally efficient to interpret than a wound assay, with two opposing cell fronts. We find that recording two observations, after making the initial observation, is sufficient to estimate D and λ, and that the final observation time should correspond to the time taken for the cell front to move across the field of view. These results provide guidance for estimating D and λ, while simultaneously minimising the time and cost associated with performing and interpreting the experiment. Copyright © 2016 Elsevier Ltd. All rights reserved.
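    The ABC rejection scheme underlying such inference is short: draw parameters from the prior, simulate the experiment, and keep draws whose simulated summary lands within a tolerance of the observation. The toy model below (a Fisher-type front displacement ~ 2*sqrt(D*λ) with Gaussian noise) is an illustrative stand-in for the cell-migration model, not the authors' simulator:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # toy stand-in for the scratch-assay model: the observed summary is the
    # front displacement, growing like a Fisher wave speed 2*sqrt(D*lam)
    def front_summary(D, lam, noise):
        return 2.0 * np.sqrt(D * lam) + noise

    observed = front_summary(1.0, 0.25, 0.0)   # data from "true" D=1.0, lam=0.25

    # ABC rejection: sample from uniform priors, keep near-matching draws
    n_draws, tol = 20000, 0.05
    D_prior = rng.uniform(0.1, 2.0, n_draws)
    lam_prior = rng.uniform(0.05, 0.5, n_draws)
    sims = front_summary(D_prior, lam_prior, rng.normal(0.0, 0.05, n_draws))
    accepted = np.abs(sims - observed) < tol
    D_post, lam_post = D_prior[accepted], lam_prior[accepted]
    ```

    With only this single summary, D and λ are identified only through their product (the accepted sample is a ridge), which mirrors the paper's point that the number and timing of observations controls how much information is gained about each parameter.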

  18. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate the unknown parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
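    For a linear-Gaussian toy version of the design problem, the expected relative entropy from prior to posterior has the closed form 0.5*log(prior variance / posterior variance), so ranking candidate sampling locations becomes a one-liner; the transport sensitivity below is a made-up surrogate, not the contaminant transport equation of the study:

    ```python
    import numpy as np

    # unknown source strength s with a N(0, prior_var) prior; a measurement
    # at candidate well x returns y = g(x)*s + noise, where g(x) is a
    # sensitivity coefficient from a (here toy) transport model
    prior_var = 1.0
    noise_var = 0.05
    candidates = np.linspace(0.5, 5.0, 10)    # candidate well locations (km)
    g = np.exp(-0.5 * candidates)             # toy transport sensitivity

    # linear-Gaussian case: posterior variance and the expected relative
    # entropy (information gain) are available in closed form
    post_var = 1.0 / (1.0 / prior_var + g ** 2 / noise_var)
    exp_gain = 0.5 * np.log(prior_var / post_var)
    best = candidates[np.argmax(exp_gain)]
    ```

    For nonlinear transport models the expected gain has no closed form, which is where Monte Carlo estimation over the surrogate model becomes necessary.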

  19. High throughput light absorber discovery, Part 1: An algorithm for automated tauc analysis

    DOE PAGES

    Suram, Santosh K.; Newhouse, Paul F.; Gregoire, John M.

    2016-09-23

    High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. Here, the applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by the automated algorithm for 60 optical spectra.
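    A bare-bones version of automated Tauc analysis for a direct-allowed transition: transform the spectrum to (αhν)² versus hν, fit the steepest linear segment, and extrapolate it to the energy axis. The synthetic spectrum and the segment-selection rule below are illustrative simplifications of the published algorithm:

    ```python
    import numpy as np

    # synthetic direct-allowed absorption spectrum with band gap Eg = 2.1 eV
    Eg_true = 2.1
    hv = np.linspace(1.5, 3.5, 400)            # photon energy (eV)
    alpha = np.where(hv > Eg_true, np.sqrt(hv - Eg_true) / hv, 0.0) + 0.005

    # Tauc transform for a direct-allowed transition: (alpha*hv)^2 vs hv
    y = (alpha * hv) ** 2

    # fit the steepest linear segment and extrapolate to the hv axis
    win = 30
    fits = [np.polyfit(hv[i:i + win], y[i:i + win], 1)
            for i in range(len(hv) - win)]
    best = int(np.argmax([s for s, _ in fits]))
    slope, intercept = fits[best]
    Eg_est = -intercept / slope   # x-intercept of the linear fit
    ```

    The expert judgment the paper automates lies mostly in choosing which segment to fit; "steepest window" is one crude proxy for that choice.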

  20. Marginal Structural Models with Counterfactual Effect Modifiers.

    PubMed

    Zheng, Wenjing; Luo, Zhehui; van der Laan, Mark J

    2018-06-08

    In health and social sciences, research questions often involve systematic assessment of the modification of treatment causal effect by patient characteristics. In longitudinal settings, time-varying or post-intervention effect modifiers are also of interest. In this work, we investigate the robust and efficient estimation of the Counterfactual-History-Adjusted Marginal Structural Model (van der Laan MJ, Petersen M. Statistical learning of origin-specific statically optimal individualized treatment rules. Int J Biostat. 2007;3), which models the conditional intervention-specific mean outcome given a counterfactual modifier history in an ideal experiment. We establish the semiparametric efficiency theory for these models, and present a substitution-based, semiparametric efficient and doubly robust estimator using the targeted maximum likelihood estimation methodology (TMLE, e.g. van der Laan MJ, Rubin DB. Targeted maximum likelihood learning. Int J Biostat. 2006;2, van der Laan MJ, Rose S. Targeted learning: causal inference for observational and experimental data, 1st ed. Springer Series in Statistics. Springer, 2011). To facilitate implementation in applications where the effect modifier is high dimensional, our third contribution is a projected influence function (and the corresponding projected TMLE estimator), which retains most of the robustness of its efficient peer and can be easily implemented in applications where the use of the efficient influence function becomes taxing. We compare the projected TMLE estimator with an Inverse Probability of Treatment Weighted estimator (e.g. Robins JM. Marginal structural models. In: Proceedings of the American Statistical Association. Section on Bayesian Statistical Science, 1-10. 1997a, Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. 2000;11:561-570), and a non-targeted G-computation estimator (Robins JM. 
A new approach to causal inference in mortality studies with sustained exposure periods - application to control of the healthy worker survivor effect. Math Modell. 1986;7:1393-1512.). The comparative performance of these estimators is assessed in a simulation study. The use of the projected TMLE estimator is illustrated in a secondary data analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial, where the effect modifiers are subject to missingness at random.

  1. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational predictions. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the description of structural dynamic models.

  2. Practical experimental certification of computational quantum gates using a twirling procedure.

    PubMed

    Moussa, Osama; da Silva, Marcus P; Ryan, Colm A; Laflamme, Raymond

    2012-08-17

    Because of the technical difficulty of building large quantum computers, it is important to be able to estimate how faithful a given implementation is to an ideal quantum computer. The common approach of completely characterizing the computation process via quantum process tomography requires an exponential amount of resources, and thus is not practical even for relatively small devices. We solve this problem by demonstrating that twirling experiments, previously used to efficiently characterize the average fidelity of quantum memories, can be easily adapted to estimate the average fidelity of the experimental implementation of important quantum computation processes, such as unitaries in the Clifford group, in a practical and efficient manner applicable to current quantum devices. Using this procedure, we demonstrate state-of-the-art coherent control of an ensemble of magnetic moments of nuclear spins in a single-crystal solid by implementing the encoding operation for a 3-qubit code with only a 1% degradation in average fidelity, discounting preparation and measurement errors. We also highlight one of the advances that was instrumental in achieving such high-fidelity control.

  3. Direct volume estimation without segmentation

    NASA Astrophysics Data System (ADS)

    Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.

    2015-03-01

    Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including those of the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart disease. Conventional methods depend on an intermediate segmentation step obtained either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible, while automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods that avoid segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the additional segmentation step and can naturally handle various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either the joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation against segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimates of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation not only provides a convenient and mature clinical tool for cardiac volume estimation but also enables the diagnosis of cardiac disease to be conducted more efficiently and reliably.
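    The essence of segmentation-free ("direct") estimation is a regression from image-derived features straight to the volume. A minimal sketch with synthetic features, using ridge regression as a stand-in for the state-of-the-art learners the group uses (features, noise levels, and regularisation are all illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 300
    volume = rng.uniform(50, 200, n)            # ground-truth volumes (ml)

    # synthetic "image features" that carry volume information with noise
    feats = np.column_stack([
        volume * 0.01 + rng.normal(0, 0.05, n),   # normalised area-like feature
        np.sqrt(volume) + rng.normal(0, 0.2, n),  # perimeter-like feature
    ])

    # direct estimation: ridge regression from features straight to volume,
    # with no intermediate segmentation step
    X = np.column_stack([np.ones(n), feats])
    lam = 1e-3
    w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ volume)
    pred = X @ w
    ```

    Real direct methods learn far richer image representations, but the pipeline shape is the same: features in, volume out, no contour ever drawn.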

  4. Age and gender estimation using Region-SIFT and multi-layered SVM

    NASA Astrophysics Data System (ADS)

    Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun

    2018-04-01

    In this paper, we propose an age and gender estimation framework using region-SIFT features and a multi-layered SVM classifier. The suggested framework entails three steps. The first is landmark-based face alignment. The second is feature extraction, in which we introduce a region-SIFT feature extraction method based on facial landmarks: we first define sub-regions of the face and then extract SIFT features from each sub-region. To reduce the dimensionality of the features, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using multi-layered Support Vector Machines (SVMs) for efficient classification. Rather than performing gender estimation and age estimation independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age conditioned on gender. Moreover, we collected a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was carried out on the FERET, CACD, and DGIST_C databases. The experimental results demonstrate that the proposed approach performs age and gender estimation very efficiently and accurately.

  5. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
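    The naive percentile bootstrap mentioned for the confidence intervals and Wald-type tests is short to implement: resample the data with replacement, recompute the statistic, and take empirical percentiles. A generic sketch (the sample and statistic are illustrative, not the paper's repeated-measures data):

    ```python
    import numpy as np

    def percentile_bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=0):
        """Naive percentile bootstrap confidence interval for a statistic."""
        rng = np.random.default_rng(seed)
        n = len(data)
        boot = np.array([stat(data[rng.integers(0, n, n)])
                         for _ in range(n_boot)])
        lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi

    rng = np.random.default_rng(42)
    sample = rng.normal(loc=10.0, scale=2.0, size=40)
    lo, hi = percentile_bootstrap_ci(sample, np.mean)
    ```

    The bias-corrected and accelerated (BCa) variant the paper also considers adjusts these percentiles for bias and skewness of the bootstrap distribution rather than reading them off directly.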

  6. Contributions of the secondary jet to the maximum tangential velocity and to the collection efficiency of the fixed guide vane type axial flow cyclone dust collector

    NASA Astrophysics Data System (ADS)

    Ogawa, Akira; Anzou, Hideki; Yamamoto, So; Shimagaki, Mituru

    2015-11-01

    In order to control the maximum tangential velocity Vθm (m/s) of the turbulent rotational air flow and the collection efficiency ηc (%) for fly ash with mean diameter XR50 = 5.57 µm, two secondary jet nozzles were installed in the body of an axial-flow cyclone dust collector with body diameter D1 = 99 mm. To estimate Vθm, the conservation of angular momentum flux was applied together with the Ogawa combined vortex model. The estimated values of Vθm agreed well with measurements made using a cylindrical Pitot tube. The collection efficiencies ηcth (%) estimated from the cut size Xc (µm), which was calculated using the estimated Vθm and the particle size distribution R(Xp), were slightly higher than the experimental results due to re-entrainment of the collected dust. The most effective way to adjust ηc via the secondary jet flow is to apply the centrifugal effect Φc (1). These results are described in detail.
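    The step from a cut size Xc to an overall collection efficiency amounts to averaging a grade-efficiency curve over the particle size distribution R(Xp). The grade-efficiency formula, the geometric spread, and the cut size below are illustrative assumptions, not the paper's model:

    ```python
    import numpy as np

    # hypothetical grade-efficiency model built around the cut size Xc:
    # a particle of diameter Xp is captured with probability
    # eta(Xp) = (Xp/Xc)^2 / (1 + (Xp/Xc)^2), i.e. 50 % capture at Xp = Xc
    def grade_efficiency(xp, xc):
        r = (xp / xc) ** 2
        return r / (1.0 + r)

    # log-normal particle size distribution around the stated mean diameter
    median, sigma_g = 5.57, 1.8          # µm; geometric std is assumed
    rng = np.random.default_rng(2)
    xp = median * np.exp(np.log(sigma_g) * rng.standard_normal(100000))

    xc = 2.0                             # assumed cut size, µm
    overall_eff = grade_efficiency(xp, xc).mean()
    ```

    Re-entrainment, the effect blamed for the over-prediction in the abstract, would enter as a correction reducing the grade efficiency for already-collected dust.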

  7. Sensor-less force-reflecting macro-micro telemanipulation systems by piezoelectric actuators.

    PubMed

    Amini, H; Farzaneh, B; Azimifar, F; Sarhan, A A D

    2016-09-01

    This paper establishes a novel control strategy for a nonlinear bilateral macro-micro teleoperation system with time delay. Besides position and velocity signals, force signals are additionally utilized in the control scheme. This modification significantly improves the poor transparency during contact with the environment. To eliminate external force measurement, a force estimation algorithm is proposed for the master and slave robots. The closed-loop stability of the nonlinear macro-micro teleoperation system with the proposed control scheme is investigated employing Lyapunov theory. The experimental results verify the efficiency of the new control scheme in free motion and during collision between the slave robot and the environment, as well as the efficiency of the force estimation algorithm. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Balanced Codon Usage Optimizes Eukaryotic Translational Efficiency

    PubMed Central

    Qian, Wenfeng; Yang, Jian-Rong; Pearson, Nathaniel M.; Maclean, Calum; Zhang, Jianzhi

    2012-01-01

    Cellular efficiency in protein translation is an important fitness determinant in rapidly growing organisms. It is widely believed that synonymous codons are translated with unequal speeds and that translational efficiency is maximized by the exclusive use of rapidly translated codons. Here we estimate the in vivo translational speeds of all sense codons from the budding yeast Saccharomyces cerevisiae. Surprisingly, preferentially used codons are not translated faster than unpreferred ones. We hypothesize that this phenomenon is a result of codon usage in proportion to cognate tRNA concentrations, the optimal strategy in enhancing translational efficiency under tRNA shortage. Our predicted codon–tRNA balance is indeed observed from all model eukaryotes examined, and its impact on translational efficiency is further validated experimentally. Our study reveals a previously unsuspected mechanism by which unequal codon usage increases translational efficiency, demonstrates widespread natural selection for translational efficiency, and offers new strategies to improve synthetic biology. PMID:22479199

  9. Experimental procedures characterizing firebrand generation in wildland fires

    Treesearch

    Mohamad El Houssami; Eric Mueller; Alexander Filkov; Jan C Thomas; Nicholas Skowronski; Michael R Gallagher; Ken Clark; Robert Kremens; Albert Simeoni

    2016-01-01

    This study aims to develop a series of robust and efficient methodologies, which can be applied to understand and estimate firebrand generation and to evaluate firebrand showers close to a fire front. A field scale high intensity prescribed fire was conducted in the New Jersey Pine Barrens in March 2013. Vegetation was characterised with field and remotely sensed data...

  10. Applying Propensity Score Methods in Medical Research: Pitfalls and Prospects

    PubMed Central

    Luo, Zhehui; Gardiner, Joseph C.; Bradley, Cathy J.

    2012-01-01

    The authors review experimental and nonexperimental causal inference methods, focusing on assumptions for the validity of instrumental variables and propensity score (PS) methods. They provide guidance in four areas for the analysis and reporting of PS methods in medical research and selectively evaluate mainstream medical journal articles from 2000 to 2005 in the four areas, namely, examination of balance, overlapping support description, use of estimated PS for evaluation of treatment effect, and sensitivity analyses. In spite of the many pitfalls, when appropriately evaluated and applied, PS methods can be powerful tools in assessing average treatment effects in observational studies. Appropriate PS applications can create experimental conditions using observational data when randomized controlled trials are not feasible and, thus, lead researchers to an efficient estimator of the average treatment effect. PMID:20442340
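    The closing claim, that an appropriately used propensity score yields an efficient estimator of the average treatment effect, can be illustrated with inverse-probability-of-treatment weighting on simulated confounded data. The logistic propensity model, Newton fitting, and all numbers below are synthetic, for illustration only:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 5000
    x = rng.normal(size=n)                                 # confounder
    t = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.8 * x)))    # treatment depends on x
    y = 2.0 * t + 1.5 * x + rng.normal(size=n)             # outcome; true ATE = 2.0

    # fit the propensity score P(t=1|x) by logistic regression (Newton's method)
    X = np.column_stack([np.ones(n), x])
    beta = np.zeros(2)
    for _ in range(25):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (t - p)                         # score of the log-likelihood
        hess = -(X * (p * (1 - p))[:, None]).T @ X   # Hessian (negative definite)
        beta -= np.linalg.solve(hess, grad)
    ps = 1.0 / (1.0 + np.exp(-X @ beta))

    # inverse-probability-of-treatment-weighted estimate of the ATE
    ate_ipw = np.mean(t * y / ps - (1 - t) * y / (1 - ps))
    # naive difference in means is confounded by x
    ate_naive = y[t == 1].mean() - y[t == 0].mean()
    ```

    The weighting removes the confounding that biases the naive comparison, which is the "experimental conditions from observational data" idea in miniature; the balance checks and overlap diagnostics the authors call for guard against the cases where this breaks down.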

  11. Adaptive vehicle motion estimation and prediction

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
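    An adaptive IMM tracker mixes several motion models, but its basic building block is a Kalman filter for a single model. A minimal constant-velocity filter on synthetic position measurements (the time step, noise covariances, and trajectory are illustrative, not the paper's configuration):

    ```python
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity transition
    H = np.array([[1.0, 0.0]])                  # position-only measurement
    Q = 0.01 * np.array([[dt**3 / 3, dt**2 / 2],
                         [dt**2 / 2, dt]])      # process-noise covariance
    R = np.array([[0.25]])                      # measurement-noise covariance

    def kf_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    rng = np.random.default_rng(3)
    true_v = 2.0
    ts = np.arange(200) * dt
    true_pos = true_v * ts
    meas = true_pos + rng.normal(0, 0.5, len(ts))

    x, P = np.zeros(2), np.eye(2) * 10.0
    for z in meas:
        x, P = kf_step(x, P, np.array([z]))
    ```

    An IMM runs several such filters (e.g. constant velocity and constant acceleration) in parallel and blends their estimates by model likelihood; the "adaptive" elements in the paper additionally tune the models, sampling, and switching probabilities online.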

  12. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control needed to render a reliable copy number estimate with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR based transgene copy number determination. Three experimental designs and four data-quality-control integrated statistical models are presented. In the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimate. Simple linear regression and two-group t-test procedures were combined to model the data from this design. In the second experimental design, standard curves are generated for both an internal reference gene and the transgene, and the copy number of the transgene is compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models are proposed to analyze the data, based on two different approaches to amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. 
These statistical methods make real-time PCR-based transgene copy number estimation more reliable and precise, and proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, they can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
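    The standard-curve designs reduce to fitting Ct against log10(template amount) and inverting the fit; the slope also yields the amplification efficiency via E = 10^(-1/slope) - 1 (about 1.0, i.e. 100 %, for perfect doubling). The Ct values below are idealized for illustration:

    ```python
    import numpy as np

    # serial-dilution standard curve: Ct versus log10(template amount)
    log10_template = np.array([1, 2, 3, 4, 5, 6], dtype=float)
    ct_standards   = np.array([33.1, 29.8, 26.5, 23.2, 19.9, 16.6])

    slope, intercept = np.polyfit(log10_template, ct_standards, 1)

    # amplification efficiency: E = 10^(-1/slope) - 1 (1.0 means 100 %)
    efficiency = 10 ** (-1.0 / slope) - 1.0

    def estimate_quantity(ct):
        """Invert the standard curve to recover the template quantity."""
        return 10 ** ((ct - intercept) / slope)

    # relative copy number of transgene vs a single-copy reference gene
    ct_transgene, ct_reference = 24.1, 25.1
    copy_ratio = estimate_quantity(ct_transgene) / estimate_quantity(ct_reference)
    ```

    The statistical treatment the paper argues for sits on top of this arithmetic: regression diagnostics on the standard curve and confidence intervals on the ratio, so that a point estimate near 2 can be distinguished from 1 or 3 copies with stated uncertainty.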

  13. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    PubMed

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a single disease-related target than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods for computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weights of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than complete inhibition of a single target. We developed a novel approach that integrates affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From network efficiency calculations for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method, which combines network efficiency with molecular docking scores, was applied to estimate the anticoagulant activities of a series of argatroban intermediates and of eight natural products. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. 
This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds, combining network efficiency analysis with scoring functions from molecular docking.
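    Network efficiency and node fragility, the quantities used to single out factor Xa and thrombin, can be computed with plain breadth-first search: global efficiency is the mean inverse shortest-path distance over ordered node pairs, and a node's fragility is the efficiency drop after its removal. The small directed graph below is a toy illustration, not the clotting cascade:

    ```python
    from collections import deque

    def shortest_paths(graph, src):
        """BFS distances from src in an unweighted directed graph."""
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def global_efficiency(graph, nodes):
        """Mean of 1/d(s,t) over ordered pairs; unreachable pairs add 0."""
        total, pairs = 0.0, 0
        for s in nodes:
            dist = shortest_paths(graph, s)
            for t in nodes:
                if t != s:
                    pairs += 1
                    if t in dist:
                        total += 1.0 / dist[t]
        return total / pairs if pairs else 0.0

    # toy cascade-like network (a real analysis would use the clotting cascade)
    graph = {"A": ["B"], "B": ["C"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
    nodes = set(graph) | {v for vs in graph.values() for v in vs}

    def efficiency_without(node):
        """Global efficiency of the graph with one node deleted."""
        sub = {u: [v for v in vs if v != node]
               for u, vs in graph.items() if u != node}
        return global_efficiency(sub, nodes - {node})

    # fragility of a node = drop in efficiency caused by its removal
    fragility = {n: global_efficiency(graph, nodes) - efficiency_without(n)
                 for n in nodes}
    most_fragile = max(fragility, key=fragility.get)
    ```

    Here the bottleneck node "C" comes out most fragile, the same logic by which the cascade analysis flags factor Xa and thrombin; the paper then correlates docking scores against such efficiency drops.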

  14. A Network-Based Multi-Target Computational Estimation Scheme for Anticoagulant Activities of Compounds

    PubMed Central

    Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-01-01

    Background Traditional virtual screening pays more attention to the predicted binding affinity between a drug molecule and a single target related to a disease than to phenotypic data describing the molecule's effect on the disease system, and is therefore often less effective for discovering drugs against complex diseases. Virtual screening against a complex disease by network estimation has become feasible with the development of network biology and systems biology. More effective methods for computationally estimating the overall efficacy of a compound in a complex disease system are needed, given that different targets carry different weights in a biological process and that partial inhibition of several targets can be more efficient than complete inhibition of a single target. Methodology We developed a novel approach that integrates affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. Network efficiency calculations for the human clotting cascade identified factor Xa and thrombin as the two most fragile enzymes, and the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa as the two most fragile biological steps in the system. The method combining network efficiency with molecular docking scores was then applied to estimate the anticoagulant activities of a series of argatroban intermediates and of eight natural products. The good correlation (r = 0.671) between the experimental data and the decrease in network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of the anticoagulant activities of compounds in drug discovery.
    Conclusions This article proposes a network-based multi-target computational estimation method for the anticoagulant activities of compounds that combines network efficiency analysis with scoring functions from molecular docking. PMID:21445339
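    As an illustration of the network-efficiency idea, the global efficiency of a directed reaction graph can be computed as the mean inverse shortest-path length over ordered node pairs, and a node's "fragility" as the efficiency lost when that node is removed. The sketch below uses a hypothetical toy graph, not the paper's weighted clotting-cascade model:

```python
from collections import deque

def shortest_paths(adj, src):
    # BFS distances from src over an unweighted directed graph
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(nodes, adj):
    # E = (1 / (N * (N - 1))) * sum over ordered pairs (i, j) of 1 / d(i, j)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for i in nodes:
        dist = shortest_paths(adj, i)
        for j in nodes:
            if j != i and j in dist:
                total += 1.0 / dist[j]
    return total / (n * (n - 1))

def node_fragility(nodes, adj, node):
    # Drop in global efficiency when `node` is removed from the network
    rest = [v for v in nodes if v != node]
    sub = {u: [v for v in vs if v != node] for u, vs in adj.items() if u != node}
    return global_efficiency(nodes, adj) - global_efficiency(rest, sub)
```

    Ranking nodes by fragility then singles out the elements whose removal degrades the network most, which is the sense in which factor Xa and thrombin are called "fragile" above.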

  15. On the use of LiF:Mg,Ti thermoluminescence dosemeters in space--a critical review.

    PubMed

    Horowitz, Y S; Satinger, D; Fuks, E; Oster, L; Podpalov, L

    2003-01-01

    The use of LiF:Mg,Ti thermoluminescence dosemeters (TLDs) in space radiation fields is reviewed. It is demonstrated in the context of modified track structure theory and microdosimetric track structure theory that there is no unique correlation between the relative thermoluminescence (TL) efficiency of heavy charged particles or neutrons of all energies and linear energy transfer (LET). Many experimental measurements dating back more than two decades also demonstrate the multivalued, non-universal relationship between relative TL efficiency and LET. It is further demonstrated that the relative intensities of the dosimetric peaks, and especially the high-temperature structure, depend on a large number of variables, some controllable, some not. It is concluded that TL techniques employing the concept of LET (e.g. measurement of total dose, the high-temperature ratio (HTR) methods and other combinations of the relative TL efficiency of the various peaks used to estimate average Q or simulate Q-LET relationships) should be regarded as lacking a sound theoretical basis, as highly prone to error, and as lacking reproducibility/universality owing to the absence of a standardised experimental protocol essential to reliable experimental methodology.

  16. Modeling of submicrometer aerosol penetration through sintered granular membrane filters.

    PubMed

    Marre, Sonia; Palmeri, John; Larbot, André; Bertrand, Marielle

    2004-06-01

    We present a deep-bed aerosol filtration model that can be used to estimate the efficiency of sintered granular membrane filters in the region of the most penetrating particle size. In this region the capture of submicrometer aerosols, much smaller than the filter pore size, takes place mainly via Brownian diffusion and direct interception acting in synergy. By modeling the disordered sintered grain packing of such filters as a simple cubic lattice, and mapping the corresponding 3D connected pore volume onto a discrete cylindrical pore network, the efficiency of a granular filter can be estimated, using new analytical results for the efficiency of cylindrical pores. This model for aerosol penetration in sintered granular filters includes flow slip and the kinetics of particle capture by the pore surface. With a unique choice for two parameters, namely the structural tortuosity and effective kinetic coefficient of particle adsorption, this semiempirical model can account for the experimental efficiency of a new class of "high-efficiency particulate air" ceramic membrane filters as a function of particle size over a wide range of filter thickness and texture (pore size and porosity) and operating conditions (face velocity).

  17. Partition method and experimental validation for impact dynamics of flexible multibody system

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Liu, Z. Y.; Hong, J. Z.

    2018-06-01

    The impact problem of a flexible multibody system is a non-smooth, high-transient, strongly nonlinear dynamic process with variable boundaries. How to model the contact/impact process accurately and efficiently is one of the main difficulties in many engineering applications. The numerical approaches widely used in impact analysis come mainly from two fields: multibody system dynamics (MBS) and computational solid mechanics (CSM). Approaches based on MBS provide a more efficient yet less accurate analysis of contact/impact problems, while approaches based on CSM are well suited to particularly high accuracy needs yet require very high computational effort. To bridge the gap between accuracy and efficiency in the dynamic simulation of a flexible multibody system with contacts/impacts, a partition method is presented in which the contact body is divided into two parts, an impact region and a non-impact region. The impact region is modeled using the finite element method to guarantee local accuracy, while the non-impact region is modeled using the modal reduction approach to raise global efficiency. A three-dimensional rod-plate impact experiment is designed and performed to validate the numerical results. A principle for partitioning the contact bodies is proposed: the maximum radius of the impact region can be estimated by an analytical method, and the modal truncation order of the non-impact region can be estimated from the highest frequency of the measured signal. The simulation results using the presented method are in good agreement with the experimental results, showing that the method is an effective formulation with respect to both accuracy and efficiency. Moreover, a more complicated multibody impact problem, a crank-slider mechanism, is investigated to strengthen this conclusion.

  18. Quantum control and quantum tomography on neutral atom qudits

    NASA Astrophysics Data System (ADS)

    Sosa Martinez, Hector

    Neutral atom systems are an appealing platform for the development and testing of quantum control and measurement techniques. This dissertation presents experimental investigations of control and measurement tools using as a testbed the 16-dimensional hyperfine manifold associated with the electronic ground state of cesium atoms. On the control side, we present an experimental realization of a protocol to implement robust unitary transformations in the presence of static and dynamic perturbations. We also present an experimental realization of inhomogeneous quantum control. Specifically, we demonstrate our ability to perform two different unitary transformations on atoms that see different light shifts from an optical addressing field. On the measurement side, we present experimental realizations of quantum state and process tomography. The state tomography project encompasses a comprehensive evaluation of several measurement strategies and state estimation algorithms. Our experimental results show that in the presence of experimental imperfections, there is a clear tradeoff between accuracy, efficiency and robustness in the reconstruction. The process tomography project involves an experimental demonstration of efficient reconstruction by using a set of intelligent probe states. Experimental results show that we are able to reconstruct unitary maps in Hilbert spaces with dimension ranging from d=4 to d=16. To the best of our knowledge, this is the first time that a unitary process in d=16 is successfully reconstructed in the laboratory.

  19. Estimation of reflectance from camera responses by the regularized local linear model.

    PubMed

    Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye

    2011-10-01

    Because of the limited approximation capability of fixed basis functions, the performance of reflectance estimation obtained by traditional linear models is not optimal. We propose an approach based on the regularized local linear model. Our approach performs efficiently and requires no knowledge of the spectral power distribution of the illuminant or the spectral sensitivities of the camera. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America
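    A minimal sketch of a regularized local linear estimator is given below, under assumed notation (`C_train` for training camera responses, `R_train` for the corresponding reflectances); the paper's exact neighborhood selection and regularization scheme may differ:

```python
import numpy as np

def local_linear_estimate(C_train, R_train, c_test, k=20, lam=1e-3):
    """Estimate a reflectance spectrum from a camera response.

    C_train: (n, m) training camera responses; R_train: (n, w) reflectances.
    A ridge-regularized linear map is fit on the k training samples whose
    responses are nearest to c_test (the "local" part), then applied to c_test.
    """
    d = np.linalg.norm(C_train - c_test, axis=1)
    idx = np.argsort(d)[:k]                    # k nearest training responses
    C, R = C_train[idx], R_train[idx]
    m = C.shape[1]
    # W solves min ||C W - R||^2 + lam ||W||^2  =>  W = (C^T C + lam I)^-1 C^T R
    W = np.linalg.solve(C.T @ C + lam * np.eye(m), C.T @ R)
    return c_test @ W
```

    Only response/reflectance training pairs are needed, which matches the abstract's point that illuminant and sensitivity knowledge is not required.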

  20. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variation in body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues alone, we propose a dynamic Bayesian network system (DBNS) to exploit the complementary nature of both static and motion cues. To verify the proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.

  1. Design of Low-Cost Vehicle Roll Angle Estimator Based on Kalman Filters and an Iot Architecture.

    PubMed

    Garcia Guzman, Javier; Prieto Gonzalez, Lisardo; Pajares Redondo, Jonatan; Sanz Sanchez, Susana; Boada, Beatriz L

    2018-06-03

    In recent years, there have been many advances in vehicle technologies based on the efficient use of real-time data provided by embedded sensors. Some of these technologies can help avoid a crash or reduce its severity, such as Roll Stability Control (RSC) systems for commercial vehicles. In RSC, several critical variables, such as sideslip and roll angle, can only be directly measured using expensive equipment, and such devices would increase the price of commercial vehicles. Nevertheless, sideslip and roll angle values can be estimated using MEMS sensors in combination with data fusion algorithms. The objectives of this research work are to integrate roll angle estimators based on linear and unscented Kalman filters, to evaluate the precision of the results obtained, and to determine whether the hard real-time processing constraints are fulfilled when these estimators are embedded in IoT architectures based on low-cost equipment deployable in commercial vehicles. An experimental testbed composed of a van with two low-cost kits was set up, one including a Raspberry Pi 3 Model B and the other an Intel Edison System on Chip. This experimental environment was tested under different conditions for comparison. The results obtained from the low-cost experimental kits, based on IoT architectures and including estimators based on Kalman filters, provide accurate roll angle estimation. These results also show that the processing time needed to acquire the data and execute the Kalman filter estimations fulfills hard real-time constraints.
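    For intuition, a scalar linear Kalman filter that fuses a gyroscope roll rate (prediction step) with a noisy accelerometer-derived roll angle (update step) can be sketched as follows; the noise variances `q` and `r` are illustrative, and the paper's filters of course handle a fuller vehicle model:

```python
import numpy as np

def kalman_roll(gyro_rate, accel_angle, dt, q=1e-5, r=0.04):
    """Scalar Kalman filter: state = roll angle (rad).

    gyro_rate: roll-rate measurements (rad/s), used in the predict step;
    accel_angle: noisy roll-angle measurements (rad), used in the update step;
    q, r: process and measurement noise variances (illustrative values).
    """
    phi, P = 0.0, 1.0
    out = []
    for w, z in zip(gyro_rate, accel_angle):
        # predict: integrate the gyro rate
        phi += w * dt
        P += q
        # update with the (noisy) accelerometer-derived angle
        K = P / (P + r)
        phi += K * (z - phi)
        P *= (1 - K)
        out.append(phi)
    return out
```

    The low per-sample cost (a handful of multiply-adds) is what makes such estimators plausible candidates for hard real-time execution on boards like the Raspberry Pi.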

  2. Application of independent component analysis for speech-music separation using an efficient score function estimation

    NASA Astrophysics Data System (ADS)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using blind source separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. This requires estimating the score function from samples of the observation signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results on speech-music separation, compared with a separation algorithm based on the Minimum Mean Square Error estimator, indicate that the presented algorithm achieves better performance and less processing time.

  3. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data

    PubMed Central

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key model parameters by incorporating experimental data, whereas a differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing an ABM to mimic the multi-scale immune system with its various phenotypes and cell types, and by using the input and output of the ABM to build a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set, and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also infer the key parameters as accurately as a DE model. Therefore, this study developed a modeling framework that can simulate the complicated immune system in detail, as an ABM does, while validating the reliability and efficiency of the model by fitting experimental data, as a DE model does. PMID:26535589

  4. Error vector magnitude based parameter estimation for digital filter back-propagation mitigating SOA distortions in 16-QAM.

    PubMed

    Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A

    2013-08-26

    We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method for DFBP parameter estimation based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method has negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types in 80 km propagation of a 16-QAM signal at 22 Gbaud.
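    EVM itself is simple to compute, and the coarse estimation can be sketched as sweeping a compensation parameter and keeping the value that minimizes EVM. The toy phase nonlinearity and compensator below are hypothetical stand-ins for the SOA model and DFBP, only the selection logic is the point:

```python
import numpy as np

def evm_percent(rx, ref):
    # RMS error vector normalized by RMS reference power, in percent
    rx, ref = np.asarray(rx), np.asarray(ref)
    return 100 * np.sqrt(np.mean(np.abs(rx - ref) ** 2) /
                         np.mean(np.abs(ref) ** 2))

def best_param(rx, ref, grid, compensate):
    # coarse sweep: keep the parameter value giving the lowest EVM
    return min(grid, key=lambda a: evm_percent(compensate(rx, a), ref))
```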

  5. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Estimating human affective states from direct observations and from facial, vocal, gestural, physiological, and central nervous signals through computational models such as multivariate linear regression, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method obtains correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal originate in the brain's motivational circuits. Thus, the proposed method can serve as a novel, efficient way of estimating human affective states.

  6. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    PubMed Central

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    Estimating human affective states from direct observations and from facial, vocal, gestural, physiological, and central nervous signals through computational models such as multivariate linear regression, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method obtains correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal originate in the brain's motivational circuits. Thus, the proposed method can serve as a novel, efficient way of estimating human affective states. PMID:26996254
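    A multivariable polynomial regression of the kind described can be sketched with ordinary least squares over monomial features; the data here are synthetic, not the skin-conductance patterns used in the study:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """All monomials of the input columns up to `degree`, plus a bias term."""
    n = X.shape[0]
    cols = [np.ones(n)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, idx], axis=1))   # e.g. x0*x1, x0**2, ...
    return np.column_stack(cols)

def fit_poly(X, y, degree=2):
    # ordinary least squares on the polynomial feature matrix
    A = poly_features(X, degree)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_poly(X, coef, degree=2):
    return poly_features(X, degree) @ coef
```

    Raising `degree` captures the nonlinearities linear models miss, at the cost of more coefficients, which is the accuracy/simplicity trade the abstract describes.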

  7. Quantitative optical imaging and sensing by joint design of point spread functions and estimation algorithms

    NASA Astrophysics Data System (ADS)

    Quirin, Sean Albert

    The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF which was engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single-molecules and micro-tubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for application to quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical-efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive-ranging, extended Depth-of-Field imaging and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs---a potentially significant constraint. 
Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically-efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.

  8. Development and experimental validation of downlink multiuser MIMO-OFDM in gigabit wireless LAN systems

    NASA Astrophysics Data System (ADS)

    Ishihara, Koichi; Asai, Yusuke; Kudo, Riichi; Ichikawa, Takeo; Takatori, Yasushi; Mizoguchi, Masato

    2013-12-01

    Multiuser multiple-input multiple-output (MU-MIMO) has been proposed as a means to improve spectrum efficiency for various future wireless communication systems. This paper reports indoor experimental results obtained for a newly developed and implemented downlink (DL) MU-MIMO orthogonal frequency division multiplexing (OFDM) transceiver for gigabit wireless local area network systems in the microwave band. In the transceiver, the channel state information (CSI) is estimated at each user and fed back to an access point (AP) on a real-time basis. At the AP, the estimated CSI is used to calculate the transmit beamforming weight for DL MU-MIMO transmission. This paper also proposes a recursive inverse matrix computation scheme for computing the transmit weight in real time. Experiments with the developed transceiver demonstrate its feasibility in a number of indoor scenarios. The experimental results clarify that DL MU-MIMO-OFDM transmission can achieve a 972-Mbit/s transmission data rate with simple digital signal processing of single-antenna users in an indoor environment.
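    The transmit-weight computation can be illustrated with the standard zero-forcing form W = H^H (H H^H)^-1 for single-antenna users; the paper's contribution is a recursive scheme for computing the matrix inverse in real time, which this batch sketch does not reproduce:

```python
import numpy as np

def zf_weights(H):
    """Zero-forcing transmit weights for downlink MU-MIMO.

    H: (n_users, n_tx) channel matrix, one row per single-antenna user
    (the estimated CSI fed back to the access point). Returns (n_tx, n_users)
    weights with unit-norm columns; H @ W is diagonal, i.e. no inter-user
    interference under the estimated channel.
    """
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W, axis=0, keepdims=True)
```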

  9. A new device to estimate abundance of moist-soil plant seeds

    USGS Publications Warehouse

    Penny, E.J.; Kaminski, R.M.; Reinecke, K.J.

    2006-01-01

    Methods to sample the abundance of moist-soil seeds efficiently and accurately are critical for evaluating management practices and determining food availability. We adapted a portable, gasoline-powered vacuum to estimate abundance of seeds on the surface of a moist-soil wetland in east-central Mississippi and evaluated the sampler by simulating conditions that researchers and managers may experience when sampling moist-soil areas for seeds. We measured the percent recovery of known masses of seeds by the vacuum sampler in relation to 4 experimentally controlled factors (i.e., seed-size class, sample mass, soil moisture class, and vacuum time) with 2-4 levels per factor. We also measured processing time of samples in the laboratory. Across all experimental factors, seed recovery averaged 88.4% and varied little (CV = 0.68%, n = 474). Overall, mean time to process a sample was 30.3 ± 2.5 min (SE, n = 417). Our estimate of seed recovery rate (88%) may be used to adjust estimates for incomplete seed recovery, or project-specific correction factors may be developed by investigators. Our device was effective for estimating surface abundance of moist-soil plant seeds after dehiscence and before habitats were flooded.
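    Applying the reported mean recovery rate as a correction factor is a one-line adjustment (88.4% is the study's mean; project-specific factors may replace it):

```python
def adjusted_seed_mass(recovered_g, recovery_rate=0.884):
    """Correct a vacuum-sampler seed mass for incomplete recovery:
    true mass ~= recovered mass / recovery rate (88.4% mean recovery)."""
    return recovered_g / recovery_rate
```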

  10. A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty

    NASA Astrophysics Data System (ADS)

    Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl

    2012-05-01

    The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results compared with the simulation results performed on ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
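    A minimal non-intrusive polynomial chaos expansion for a scalar response with one standard-normal uncertain parameter can be sketched with probabilists' Hermite polynomials; the quadratic response used in the test is a hypothetical surrogate, not the ANVEL vehicle model:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def pce_coeffs(g, order, nquad=40):
    """Project g(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials:
    c_k = E[g(xi) He_k(xi)] / k!, via Gauss-Hermite quadrature."""
    x, w = hermegauss(nquad)        # nodes/weights for weight exp(-x^2/2)
    w = w / sqrt(2 * pi)            # renormalize to the standard normal pdf
    gx = np.asarray(g(x), float)
    return [(w * gx * hermeval(x, [0] * k + [1])).sum() / factorial(k)
            for k in range(order + 1)]

def pce_mean_var(coeffs):
    # mean is the 0th coefficient; variance follows from He_k orthogonality
    mean = coeffs[0]
    var = sum(factorial(k) * c ** 2 for k, c in enumerate(coeffs[1:], start=1))
    return mean, var
```

    Once the coefficients are computed from a handful of deterministic model runs, the mean and variance come for free, which is where the speedup over plain Monte Carlo sampling originates.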

  11. Genetic background in partitioning of metabolizable energy efficiency in dairy cows.

    PubMed

    Mehtiö, T; Negussie, E; Mäntysaari, P; Mäntysaari, E A; Lidauer, M H

    2018-05-01

    The main objective of this study was to assess the genetic differences in metabolizable energy efficiency and efficiency in partitioning metabolizable energy in different pathways: maintenance, milk production, and growth in primiparous dairy cows. Repeatability models for residual energy intake (REI) and metabolizable energy intake (MEI) were compared and the genetic and permanent environmental variations in MEI were partitioned into its energy sinks using random regression models. We proposed 2 new feed efficiency traits: metabolizable energy efficiency (MEE), which is formed by modeling MEI fitting regressions on energy sinks [metabolic body weight (BW^0.75), energy-corrected milk, body weight gain, and body weight loss] directly; and partial MEE (pMEE), where the model for MEE is extended with regressions on energy sinks nested within additive genetic and permanent environmental effects. The data used were collected from Luke's experimental farms Rehtijärvi and Minkiö between 1998 and 2014. There were altogether 12,350 weekly MEI records on 495 primiparous Nordic Red dairy cows from wk 2 to 40 of lactation. Heritability estimates for REI and MEE were moderate, 0.33 and 0.26, respectively. The estimate of the residual variance was smaller for MEE than for REI, indicating that analyzing weekly MEI observations simultaneously with energy sinks is preferable. Model validation based on Akaike's information criterion showed that pMEE models fitted the data even better and also resulted in smaller residual variance estimates. However, models that included random regression on BW^0.75 converged slowly. The resulting genetic standard deviation estimate from the pMEE coefficient for milk production was 0.75 MJ of MEI/kg of energy-corrected milk.
The derived partial heritabilities for energy efficiency in maintenance, milk production, and growth were 0.02, 0.06, and 0.04, respectively, indicating that some genetic variation may exist in the efficiency of using metabolizable energy for different pathways in dairy cows. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. Experimental evaluation of a mathematical model for predicting transfer efficiency of a high volume-low pressure air spray gun.

    PubMed

    Tan, Y M; Flynn, M R

    2000-10-01

    The transfer efficiency of a spray-painting gun is defined as the amount of coating applied to the workpiece divided by the amount sprayed. Characterizing this transfer process allows for accurate estimation of the overspray generation rate, which is important for determining a spray painter's exposure to airborne contaminants. This study presents an experimental evaluation of a mathematical model for predicting the transfer efficiency of a high volume-low pressure spray gun. The effects of gun-to-surface distance and nozzle pressure on the agreement between the transfer efficiency measurement and prediction were examined. Wind tunnel studies and non-volatile vacuum pump oil in place of commercial paint were used to determine transfer efficiency at nine gun-to-surface distances and four nozzle pressure levels. The mathematical model successfully predicts transfer efficiency within the uncertainty limits. The least squares regression between measured and predicted transfer efficiency has a slope of 0.83 and an intercept of 0.12 (R² = 0.98). Two correction factors were determined to improve the mathematical model. At higher nozzle pressure settings, 6.5 psig and 5.5 psig, the correction factor is a function of both gun-to-surface distance and nozzle pressure level. At lower nozzle pressures, 4 psig and 2.75 psig, gun-to-surface distance slightly influences the correction factor, while nozzle pressure has no discernible effect.

  13. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to the individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts, which turned out to be of limited applicability due to highly overestimated migration results. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as of other model inputs. With respect to a functional barrier, the most important parameters are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and applies Monte Carlo methods, i.e. random sampling from the input distributions of the relevant parameters (diffusion coefficient and layer thickness), predicts migration results with the related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis; (2) functional barrier efficiency; and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and the related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, and associated migration risks and potential safety concerns can be identified at an early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
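    The Monte Carlo idea can be sketched for a single barrier layer under a short-time Fickian approximation, migrated fraction ~= 2*sqrt(D*t/pi)/L; the distributions and parameter values below are illustrative choices, not values from the article:

```python
import numpy as np

def migrated_fraction_mc(logD_mu, logD_sigma, L_mean_cm, L_rel_sd, t_s,
                         n=100_000, seed=0):
    """Sample diffusion coefficient D (log-normal, cm^2/s) and layer
    thickness L (normal, cm), propagate each draw through the short-time
    Fickian estimate frac = 2*sqrt(D*t/pi)/L, and return the median and
    95th-percentile migrated fraction."""
    rng = np.random.default_rng(seed)
    D = rng.lognormal(logD_mu, logD_sigma, n)
    L = rng.normal(L_mean_cm, L_rel_sd * L_mean_cm, n)
    frac = np.clip(2 * np.sqrt(D * t_s / np.pi) / L, 0.0, 1.0)
    return np.percentile(frac, 50), np.percentile(frac, 95)
```

    Reporting a high percentile rather than a worst-case upper limit is exactly what lets the probabilistic approach avoid the gross overestimation the abstract attributes to upper-limit concepts.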

  14. Final Design and Experimental Validation of the Thermal Performance of the LHC Lattice Cryostats

    NASA Astrophysics Data System (ADS)

    Bourcey, N.; Capatina, O.; Parma, V.; Poncet, A.; Rohmig, P.; Serio, L.; Skoczen, B.; Tock, J.-P.; Williams, L. R.

    2004-06-01

    The recent commissioning and operation of the LHC String 2 have given a first experimental validation of the global thermal performance of the LHC lattice cryostat at nominal cryogenic conditions. The cryostat, designed to minimize heat inleak from ambient temperature, houses under vacuum and thermally protects the cold mass, which contains the LHC twin-aperture superconducting magnets operating at 1.9 K in superfluid helium. Mechanical components linking the cold mass to the vacuum vessel, such as support posts and insulation vacuum barriers, are designed with efficient thermalisations for heat interception to minimise heat conduction. Heat inleak by radiation is reduced by employing multilayer insulation (MLI) wrapped around the cold mass and around an aluminium thermal shield cooled to about 60 K. Measurement of the total helium vaporization rate in String 2 gives, after subtraction of supplementary heat loads and end effects, an estimate of the total thermal load of a standard LHC cell (107 m) comprising two Short Straight Sections and six dipole cryomagnets. Temperature sensors installed at critical locations provide a temperature mapping that allows validation of the calculated and estimated thermal performance of the cryostat components, including the efficiency of the heat interceptions.

  15. Automatic estimation of voice onset time for word-initial stops by applying random forest to onset detection.

    PubMed

    Lin, Chi-Yueh; Wang, Hsiao-Chuan

    2011-07-01

    The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among the many research topics concerning VOT, one that has been studied for years is how VOTs can be measured efficiently. Manual annotation is feasible, but it becomes time-consuming when the corpus is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. First, forced alignment is applied to identify the locations of stop consonants. Then a random-forest-based onset detector searches each stop segment for its burst and voicing onsets to estimate a VOT. The proposed onset detection can detect the onsets efficiently and accurately with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimations deviate by less than 10 ms from their manually labeled values, and 96.5% deviate by less than 20 ms. Some factors that influence the proposed estimation method, such as the place of articulation, the voicing of the stop consonant, and the quality of the succeeding vowel, were also investigated. © 2011 Acoustical Society of America

  16. A Drive Method of Permanent Magnet Synchronous Motor Using Torque Angle Estimation without Position Sensor

    NASA Astrophysics Data System (ADS)

    Tanaka, Takuro; Takahashi, Hisashi

    In some motor applications, it is very difficult to attach a position sensor to the motor housing. One example of such an application is the dental handpiece motor. In these designs, the motor must be driven with high efficiency at low speed and under variable load without a position sensor. We developed a method to control a motor efficiently and smoothly at low speed without a position sensor. In this paper, a method in which a permanent magnet synchronous motor is controlled smoothly and efficiently by using torque angle control in synchronized operation is presented. Its usefulness is confirmed by experimental results. In conclusion, the proposed sensorless control method achieves highly efficient and smooth operation.

  17. On-site monitoring of atomic density number for an all-optical atomic magnetometer based on atomic spin exchange relaxation.

    PubMed

    Zhang, Hong; Zou, Sheng; Chen, Xiyuan; Ding, Ming; Shan, Guangcun; Hu, Zhaohui; Quan, Wei

    2016-07-25

    We present a method for on-site monitoring of the atomic number density based on atomic spin-exchange relaxation. When the spin polarization P ≪ 1, the atomic number density can be estimated by measuring the magnetic resonance linewidth in an applied DC magnetic field using an all-optical atomic magnetometer. The density measurements showed that the experimental results and the theoretical predictions were consistent in trend over the investigated temperature range from 413 K to 463 K, although the experimental values were approximately 1.5 ∼ 2 times lower than the theoretical predictions estimated from the saturated vapor pressure curve. These deviations were mainly induced by the radiative heat transfer efficiency, which inevitably led to a lower temperature in the cell than the set temperature.

  18. Witnessing eigenstates for quantum simulation of Hamiltonian spectra

    PubMed Central

    Santagati, Raffaele; Wang, Jianwei; Gentile, Antonio A.; Paesani, Stefano; Wiebe, Nathan; McClean, Jarrod R.; Morley-Short, Sam; Shadbolt, Peter J.; Bonneau, Damien; Silverstone, Joshua W.; Tew, David P.; Zhou, Xiaoqi; O’Brien, Jeremy L.; Thompson, Mark G.

    2018-01-01

    The efficient calculation of Hamiltonian spectra, a problem often intractable on classical machines, can find application in many fields, from physics to chemistry. We introduce the concept of an “eigenstate witness” and, through it, provide a new quantum approach that combines variational methods and phase estimation to approximate eigenvalues for both ground and excited states. This protocol is experimentally verified on a programmable silicon quantum photonic chip, a mass-manufacturable platform, which embeds entangled state generation, arbitrary controlled unitary operations, and projective measurements. Both ground and excited states are experimentally found with fidelities >99%, and their eigenvalues are estimated with 32 bits of precision. We also investigate and discuss the scalability of the approach and study its performance through numerical simulations of more complex Hamiltonians. This result shows promising progress toward quantum chemistry on quantum computers. PMID:29387796

  19. Charge Transfer Inefficiency in Pinned Photodiode CMOS image sensors: Simple Montecarlo modeling and experimental measurement based on a pulsed storage-gate method

    NASA Astrophysics Data System (ADS)

    Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart

    2016-11-01

    The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Montecarlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing the transfer efficiency performance of different pixel geometries.
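    A toy version of such a Monte Carlo transfer model can be written as a biased 1-D random walk with an optional trapping site standing in for a potential obstacle; all parameters are illustrative, not those of the paper's model.

```python
import random
import statistics

random.seed(7)

# Each "electron" performs a 1-D random walk (diffusion) with a small drift
# toward the transfer gate; a trapping site mimics a potential pocket along
# the transfer path. All parameters are illustrative.
L = 100                     # photodiode length, in lattice steps
DRIFT = 0.55                # probability of stepping toward the gate (>0.5: fringing field)
TRAP_AT, TRAP_P = 60, 0.1   # obstacle position and trapping probability

def transfer_time():
    x, t = 0, 0
    while x < L:
        t += 1
        x += 1 if random.random() < DRIFT else -1
        x = max(x, 0)                           # reflecting boundary at the far end
        if x == TRAP_AT and random.random() < TRAP_P:
            t += 50                             # dwell time at the potential pocket
    return t

times = [transfer_time() for _ in range(500)]
print(f"mean transfer time: {statistics.mean(times):.0f} steps")
```

    A histogram of `times` would give the simulated transfer-time distribution; the drift strength and trap parameters control its tail, which is what limits temporal resolution.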

  20. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
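    For reference, the four-parameter CTMI model identified above can be written down directly; the sketch below uses illustrative placeholder values for the cardinal parameters, not estimates from the paper.

```python
def ctmi_growth_rate(T, mu_opt=1.2, T_min=5.0, T_opt=37.0, T_max=45.0):
    """Cardinal Temperature Model with Inflection (Rosso et al., 1993).

    Parameter values here are illustrative placeholders, not fitted estimates.
    Returns the specific growth rate mu(T); zero outside (T_min, T_max).
    """
    if T <= T_min or T >= T_max:
        return 0.0
    num = (T - T_max) * (T - T_min) ** 2
    den = (T_opt - T_min) * (
        (T_opt - T_min) * (T - T_opt)
        - (T_opt - T_max) * (T_opt + T_min - 2.0 * T)
    )
    return mu_opt * num / den

print(ctmi_growth_rate(37.0))  # equals mu_opt at the optimum temperature
```

    The OED/PE strategies in the paper differ only in how experiments are designed to estimate the four parameters mu_opt, T_min, T_opt and T_max of this curve.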

  1. Heterogeneous losses of externally generated I atoms for OIL

    NASA Astrophysics Data System (ADS)

    Torbin, A. P.; Mikheyev, P. A.; Ufimtsev, N. I.; Voronov, A. I.; Azyazov, V. N.

    2012-01-01

    Usage of an external iodine atom generator can improve the energy efficiency of the oxygen-iodine laser (OIL) and expand its range of operation parameters. However, a noticeable fraction of the iodine atoms may recombine or undergo chemical bonding during transportation from the generator to the injection point. Experimental results reported in this paper showed that uncoated aluminum surfaces readily bind iodine atoms, while nickel, stainless steel, Teflon and Plexiglas do not. Estimates based on the experimental results showed that the upper bound on the probability of surface iodine atom recombination for Teflon, Plexiglas, nickel and stainless steel is γrec ≤ 10⁻⁵.

  2. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  3. Motion estimation in the frequency domain using fuzzy c-planes clustering.

    PubMed

    Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E

    2001-01-01

    A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are estimated independently from the locations of the peaks of the respective periodogram analyses and are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion-component pairing task and hence eliminates the problems of that pairing method. The method described here uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency-domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method.
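    The core geometric idea, that the motion components appear as the slopes of a plane through the origin in the 3-D frequency domain, can be illustrated with a single least-squares plane fit. The full method uses fuzzy c-planes clustering to separate multiple motions; this sketch covers only the one-motion special case on synthetic peak data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for periodogram peaks: for a single translational motion,
# the peaks lie on a plane w = a*u + b*v through the origin, whose slopes
# (a, b) are the motion components. Values are illustrative.
a_true, b_true = 2.0, -1.5
uv = rng.uniform(-np.pi, np.pi, size=(200, 2))
w = a_true * uv[:, 0] + b_true * uv[:, 1] + rng.normal(0, 0.01, 200)

coef, *_ = np.linalg.lstsq(uv, w, rcond=None)   # fit the plane through the origin
a_est, b_est = coef
print(f"estimated motion: ({a_est:.3f}, {b_est:.3f})")
```

    With multiple moving objects the data contain several such planes, which is exactly what the FCP clustering step disentangles.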

  4. Real-Time Measurement of Machine Efficiency during Inertia Friction Welding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tung, Daniel Joseph; Mahaffey, David; Senkov, Oleg

    Process efficiency is a crucial parameter for inertia friction welding (IFW) that is largely unknown at the present time. A new method has been developed to determine the transient profile of the IFW process efficiency by comparing the workpiece torque used to heat and deform the joint region to the total torque. Particularly, the former is measured by a torque load cell attached to the non-rotating workpiece while the latter is calculated from the deceleration rate of flywheel rotation. The experimentally-measured process efficiency for IFW of AISI 1018 steel rods is validated independently by the upset length estimated from an analytical equation of heat balance and the flash profile calculated from a finite element based thermal stress model. The transient behaviors of torque and efficiency during IFW are discussed based on the energy loss to machine bearings and the bond formation at the joint interface.
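    The torque bookkeeping described above amounts to dividing the load-cell torque by the torque inferred from flywheel deceleration. A sketch with synthetic data follows; all numbers are illustrative, and the load-cell trace is faked as a fixed fraction of the total torque.

```python
import numpy as np

# Total torque from flywheel deceleration: tau_total = -I * domega/dt.
# The workpiece torque would come from the load cell; here it is synthetic.
I = 2.5                                  # flywheel moment of inertia, kg*m^2 (illustrative)
t = np.linspace(0.0, 4.0, 401)           # time, s
omega = 150.0 * np.exp(-t / 2.0)         # spindle speed, rad/s (synthetic decay)
tau_total = -I * np.gradient(omega, t)   # torque decelerating the flywheel, N*m
tau_weld = 0.8 * tau_total               # pretend load-cell trace: 80% reaches the joint

eta = tau_weld / tau_total               # transient process efficiency
print(f"mean efficiency: {eta.mean():.2f}")
```

    With real data, `eta` would vary over the weld cycle as bearing losses and bond formation evolve, which is the transient behavior the paper discusses.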

  5. CO 2 laser cutting of MDF . 2. Estimation of power distribution

    NASA Astrophysics Data System (ADS)

    Ng, S. L.; Lum, K. C. P.; Black, I.

    2000-02-01

    Part 2 of this paper details an experimentally-based method to evaluate the power distribution for both CW and PM cutting. Variations in power distribution with different cutting speeds, material thickness and pulse ratios are presented. The paper also provides information on both the cutting efficiency and absorptivity index for MDF, and comments on the beam dispersion characteristics after the cutting process.

  6. A millimeter wave quasi-optical mixer and multiplier

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The results of an experimental study of a biconical quasi-optical Schottky barrier diode mount design which could be used for mixing and multiplying in the frequency range 200-1000 GHz are reported. The biconical mount is described and characteristics measured at 185 GHz are presented. The use of the mount for quasi-optical frequency doubling from 56 to 112 GHz is described and efficiency estimates are given.

  7. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    PubMed

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. Techniques that rely on estimating the distance from samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. Experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
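    A minimal version of RSS-based localization can be sketched as below. Unlike the paper's approach, this sketch assumes a known log-distance path-loss model and uses a brute-force grid search; it only illustrates how a small set of non-uniform RSS samples pins down the source location.

```python
import numpy as np

rng = np.random.default_rng(1)

def rss_model(ap, pos, p0=-40.0, n=2.5):
    """Log-distance path-loss model; p0 and n are assumed known (illustrative)."""
    d = np.maximum(np.linalg.norm(pos - ap, axis=-1), 0.1)
    return p0 - 10.0 * n * np.log10(d)

ap_true = np.array([6.0, 4.0])                    # hypothetical AP position, m
robots = rng.uniform(0, 10, size=(25, 2))         # sample points visited by the swarm
measured = rss_model(ap_true, robots) + rng.normal(0, 0.5, 25)  # noisy RSS, dB

# Brute-force search for the position minimizing the squared RSS residuals.
xs, ys = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
best, best_err = None, np.inf
for cand in np.stack([xs.ravel(), ys.ravel()], axis=1):
    err = np.sum((rss_model(cand, robots) - measured) ** 2)
    if err < best_err:
        best, best_err = cand, err

print("estimated AP position:", best)
```

    The collaborative aspect of the paper lies in how the robots choose where to sample; here the sample points are simply drawn at random.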

  8. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    PubMed Central

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-01

    Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. Techniques that rely on estimating the distance from samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. Experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner. PMID:29385042

  9. FIBER AND INTEGRATED OPTICS, LASER APPLICATIONS, AND OTHER PROBLEMS IN QUANTUM ELECTRONICS: Optical components for the analysis and formation of the transverse mode composition

    NASA Astrophysics Data System (ADS)

    Golub, M. A.; Sisakyan, I. N.; Soĭfer, V. A.; Uvarov, G. V.

    1989-04-01

    Theoretical and experimental investigations are reported of new mode optical components (elements) which are analogs of sinusoidal phase diffraction gratings with a variable modulation depth. Expressions are derived for the nonlinear predistortion and depth of modulation, which are essential for effective operation of amplitude and phase mode optical components in devices used for the analysis and formation of the transverse mode composition of coherent radiation. An estimate is obtained of the energy efficiency of phase and amplitude mode optical components, and a comparison is made with the results of an experimental investigation of a set of phase optical components matched to Gauss-Laguerre modes. It is shown that the improvement in the energy efficiency of phase mode components, compared with amplitude components, is the same as the improvement achieved using a phase diffraction grating compared with an amplitude grating of the same depth of modulation.

  10. A Compact Energy Harvesting System for Outdoor Wireless Sensor Nodes Based on a Low-Cost In Situ Photovoltaic Panel Characterization-Modelling Unit

    PubMed Central

    Antolín, Diego; Calvo, Belén; Martínez, Pedro A.

    2017-01-01

    This paper presents a low-cost high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units, characterization and modelling, are controlled by the same low-cost microcontroller, providing a complete solution which can be understood as a virtual pilot cell with identical characteristics to those of the specific small solar cell installed on the sensor node, and which in addition allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions. PMID:28777330

  11. A Compact Energy Harvesting System for Outdoor Wireless Sensor Nodes Based on a Low-Cost In Situ Photovoltaic Panel Characterization-Modelling Unit.

    PubMed

    Antolín, Diego; Medrano, Nicolás; Calvo, Belén; Martínez, Pedro A

    2017-08-04

    This paper presents a low-cost high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units, characterization and modelling, are controlled by the same low-cost microcontroller, providing a complete solution which can be understood as a virtual pilot cell with identical characteristics to those of the specific small solar cell installed on the sensor node, and which in addition allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions.

  12. Perspective on the prospects of a carrier multiplication nanocrystal solar cell.

    PubMed

    Nair, Gautham; Chang, Liang-Yi; Geyer, Scott M; Bawendi, Moungi G

    2011-05-11

    This article presents a perspective on the experimental and theoretical work to date on the efficiency of carrier multiplication (CM) in colloidal semiconductor nanocrystals (NCs). Early reports on CM in NCs suggested large CM efficiency enhancements. However, recent experiments have shown that CM in nanocrystalline samples is not significantly stronger, and often is weaker, than in the parent bulk when compared on an absolute photon energy basis. This finding is supported by theoretical consideration of the CM process and the competing intraband relaxation. We discuss the experimental artifacts that may have led to the apparently strong CM estimated in early reports. The finding of bulklike CM in NCs suggests that the main promise of quantum confinement is to boost the photovoltage at which carriers can be extracted. With this in mind, we discuss research directions that may result in effective use of CM in a solar cell.

  13. BGFit: management and automated fitting of biological growth curves.

    PubMed

    Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana

    2013-09-25

    Existing tools for modelling cell growth curves do not offer a flexible, integrative approach to managing large datasets and automatically estimating parameters. With the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters efficiently is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, and further allows the results to be managed efficiently in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data, and to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and users can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to high numbers of projects, data and model complexity.
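    As an example of one of the dynamic models listed above, the modified Gompertz curve (Zwietering parameterisation) can be evaluated directly; the parameter values below are placeholders, and BGFit's contribution is fitting such models to data automatically rather than merely evaluating them.

```python
import math

def gompertz(t, A=2.0, mu_m=0.5, lam=3.0):
    """Modified Gompertz growth curve (Zwietering parameterisation):
    asymptote A, maximum specific growth rate mu_m, lag time lam.
    All parameter values here are illustrative placeholders."""
    return A * math.exp(-math.exp(mu_m * math.e / A * (lam - t) + 1.0))

ts = [i * 0.5 for i in range(61)]        # 0 .. 30 h
ys = [gompertz(t) for t in ts]
print(f"y(30) = {ys[-1]:.4f} (asymptote A = 2.0)")
```

    Fitting would minimise the squared residuals between `ys` and measured growth data over (A, mu_m, lam), which is the step BGFit automates across many experiments at once.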

  14. Experimental demonstration of a 16.9 Gb/s link for coherent OFDM PON robust to frequency offset and timing error

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Liu, Yu; Xiang, Yuanjiang

    2018-07-01

    Due to its merits of flexible bandwidth allocation and robustness to fiber transmission impairments, coherent optical orthogonal frequency division multiplexing (CO-OFDM) technology has drawn much attention for passive optical networks (PON). However, a CO-OFDM system is vulnerable to frequency offsets between the modulated optical signals and the optical local oscillators (OLO). This is particularly serious for low-cost PONs where low-cost lasers are used. Thus, it is of great interest to develop efficient algorithms for frequency synchronization in CO-OFDM systems. Frequency synchronization in CO-OFDM systems is usually performed by detecting the phase shift in the time domain; in such approaches, there is a trade-off between estimation accuracy and range. Considering that the integer frequency offset (IFO) contributes the major part of the frequency offset, a more efficient method to estimate the IFO is in demand. By detecting the IFO-induced circular channel rotation (CCR), the frequency offset can be estimated directly after the fast Fourier transform (FFT). In this paper, a circular acquisition offset frequency and timing synchronization (CAO-FTS) scheme is proposed. A specially designed frequency-domain pseudo-noise (PN) sequence is used for CCR detection and timing synchronization. Full-range frequency offset compensation and non-plateau timing synchronization are experimentally demonstrated in the presence of fiber dispersion. Based on CAO-FTS, a 16.9 Gb/s CO-OFDM signal is successfully delivered over a span of 80-km single mode fiber.
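    The CCR idea can be reproduced in a few lines: an integer frequency offset of k subcarriers circularly rotates the demodulated spectrum, so correlating against circular shifts of the known PN sequence recovers k. The sketch below is a simplified baseband simulation (no noise, dispersion, or fractional offset), not the CAO-FTS implementation itself.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 64
pn = rng.choice([-1.0, 1.0], size=N)        # known frequency-domain PN sequence
tx_time = np.fft.ifft(pn)                   # transmitted OFDM symbol

k_true = 5                                  # integer frequency offset, in subcarriers
n = np.arange(N)
rx_time = tx_time * np.exp(2j * np.pi * k_true * n / N)  # IFO applied in time domain
rx_freq = np.fft.fft(rx_time)               # the IFO now appears as a circular rotation

# CCR detection: correlate against every circular shift of the PN sequence.
scores = [abs(np.vdot(np.roll(pn, k), rx_freq)) for k in range(N)]
k_est = int(np.argmax(scores))
print("estimated IFO:", k_est)
```

    Because the search is over all N shifts, the estimation range covers the full subcarrier span, which is the "full-range" property claimed above.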

  15. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
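    A minimal bootstrap particle filter conveys the Sequential Monte Carlo data assimilation step; here the "model" is just a scalar random walk in occupant count observed through a noisy count sensor, which is a drastic simplification of the graph-based agent-oriented model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic truth and sensor data (all parameters illustrative).
T, N = 50, 1000
truth = np.cumsum(rng.integers(-2, 3, size=T)) + 50       # true occupancy over time
obs = truth + rng.normal(0, 3.0, size=T)                  # noisy sensor counts

particles = rng.normal(50, 10, size=N)                    # initial ensemble
estimates = []
for z in obs:
    particles += rng.normal(0, 2.0, size=N)               # propagate (motion model)
    w = np.exp(-0.5 * ((z - particles) / 3.0) ** 2)       # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]     # resample
    estimates.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print(f"RMSE of assimilated estimate: {rmse:.2f} occupants")
```

    In the paper, the propagation step runs the graph-based agent-oriented simulation for each particle instead of a random walk, but the weight-and-resample structure is the same.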

  16. Cell openness manipulation of low density polyurethane foam for efficient sound absorption

    NASA Astrophysics Data System (ADS)

    Hyuk Park, Ju; Suh Minn, Kyung; Rae Lee, Hyeong; Hyun Yang, Sei; Bin Yu, Cheng; Yeol Pak, Seong; Sung Oh, Chi; Seok Song, Young; June Kang, Yeon; Ryoun Youn, Jae

    2017-10-01

    Achieving satisfactory sound absorption with a low-mass-density foam is highly desirable for improving the fuel efficiency of vehicles. This issue has been addressed through manipulation of the microcellular geometry. In this study, we demonstrate the relationship between the cell openness of polyurethane (PU) foam and its sound absorption behavior, both theoretically and experimentally. The objective of this work is to mitigate the mass-density threshold by producing a sound absorber with satisfactory performance. The cell openness giving the best sound absorption performance in all cases considered was estimated as 15% by numerical simulation. The cell openness of PU foam was experimentally manipulated into the desired ranges by adjusting rheological properties during the foaming reaction. Microcellular structures of the fabricated PU foams were observed and sound absorption coefficients were measured using a B&K impedance tube. The fabricated PU foam with the best cell openness showed better sound absorption performance than a foam with double the mass density. We envisage that this study can help low-mass-density sound-absorbing foams be manufactured more efficiently and economically.

  17. Aspects of numerical and representational methods related to the finite-difference simulation of advective and dispersive transport of freshwater in a thin brackish aquifer

    USGS Publications Warehouse

    Merritt, M.L.

    1993-01-01

    The simulation of the transport of injected freshwater in a thin brackish aquifer, overlain and underlain by confining layers containing more saline water, is shown to be influenced by the choice of the finite-difference approximation method, the algorithm for representing vertical advective and dispersive fluxes, and the values assigned to parametric coefficients that specify the degree of vertical dispersion and molecular diffusion that occurs. Computed potable water recovery efficiencies will differ depending upon the choice of algorithm and approximation method, as will dispersion coefficients estimated based on the calibration of simulations to match measured data. A comparison of centered and backward finite-difference approximation methods shows that substantially different transition zones between injected and native waters are depicted by the different methods, and computed recovery efficiencies vary greatly. Standard and experimental algorithms and a variety of values for molecular diffusivity, transverse dispersivity, and vertical scaling factor were compared in simulations of freshwater storage in a thin brackish aquifer. Computed recovery efficiencies vary considerably, and appreciable differences are observed in the distribution of injected freshwater in the various cases tested. The results demonstrate both a qualitatively different description of transport using the experimental algorithms and the interrelated influences of molecular diffusion and transverse dispersion on simulated recovery efficiency. When simulating natural aquifer flow in cross-section, flushing of the aquifer occurred for all tested coefficient choices using both standard and experimental algorithms. © 1993.
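    The sensitivity to the finite-difference approximation can be reproduced in one dimension: for pure advection of a sharp front, a backward (upwind) difference smears the front (numerical dispersion) but stays monotone, while the explicit centered difference develops growing oscillations. Grid, Courant number, and pulse are illustrative choices, not the aquifer model's values.

```python
import numpy as np

nx, nt = 200, 150
c = 0.5                                   # Courant number v*dt/dx (illustrative)
u0 = np.where((np.arange(nx) > 40) & (np.arange(nx) < 60), 1.0, 0.0)  # sharp pulse

up, cen = u0.copy(), u0.copy()
for _ in range(nt):
    # Backward (upwind) difference: monotone, but numerically dispersive.
    up = up - c * (up - np.roll(up, 1))
    # Explicit centered difference: unstable for pure advection, so the
    # oscillations around the front grow with every step.
    cen = cen - 0.5 * c * (np.roll(cen, -1) - np.roll(cen, 1))

print(f"upwind range:   [{up.min():.3f}, {up.max():.3f}]")
print(f"centered range: [{cen.min():.3f}, {cen.max():.3f}]")
```

    The upwind solution stays within the physical bounds of the initial pulse while smearing the transition zone, which is exactly the trade-off the abstract describes for the two approximation methods.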

  18. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

    Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
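    The "convenient estimate of the approximation error" is the GP's predictive variance. Below is a minimal numpy sketch of GP interpolation of a cheap 1-D stand-in for the expensive model, with the variance computed alongside the mean; the kernel and its hyperparameters are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.5, s2=1.0):
    """Squared-exponential kernel; hyperparameters are illustrative."""
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def model(x):
    """Cheap stand-in for the expensive groundwater model."""
    return np.sin(3 * x) + 0.5 * x

X = np.linspace(0, 3, 12)          # design points (original model runs)
y = model(X)

jitter = 1e-8                      # numerical stabilisation of the solve
K = rbf(X, X) + jitter * np.eye(len(X))
alpha = np.linalg.solve(K, y)

Xs = np.linspace(0, 3, 61)         # prediction grid
Ks = rbf(Xs, X)
mean = Ks @ alpha                                   # surrogate prediction
var = np.maximum(1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T)), 0.0)

err_train = np.max(np.abs(rbf(X, X) @ alpha - y))   # interpolation sanity check
print(f"max predictive std: {np.sqrt(var.max()):.4f}, train error: {err_train:.1e}")
```

    In the paper's adaptive scheme, new model runs are added where this variance is large relative to the measurement likelihood, which keeps the surrogate locally accurate where the posterior concentrates.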

  19. Energy budget for yearling lake trout, Salvelinus namaycush

    USGS Publications Warehouse

    Rottiers, Donald V.

    1993-01-01

    Components of the energy budget of yearling lake trout (Salvelinus namaycush) were derived from data gathered in laboratory growth and metabolism studies; values for energy lost as waste were estimated with previously published equations. Because the total caloric value of food consumed by experimental lake trout was significantly different during the two years in which the studies were done, separate annual energy budgets were formulated. The gross conversion efficiency in yearling lake trout fed ad libitum rations of alewives at 10 °C was 26.6% to 41%. The distribution of energy with temperature was similar for each component of the energy budget. Highest conversion efficiencies were observed in fish fed less than ad libitum rations; fish fed an amount of food equivalent to about 4% of their body weight at 10 °C had a conversion efficiency of 33% to 45.1%. Physiologically useful energy was 76.1-80.1% of the total energy consumed. Estimated growth for age-I and -II lake fish was near that observed for laboratory fish held at lake temperatures and fed reduced rations.

  20. An empirical model of human aspiration in low-velocity air using CFD investigations.

    PubMed

    Anthony, T Renée; Anderson, Kimberly R

    2015-01-01

    Computational fluid dynamics (CFD) modeling was performed to investigate the aspiration efficiency of the human head in low velocities to examine whether the current inhaled particulate mass (IPM) sampling criterion matches the aspiration efficiency of an inhaling human in airflows common to worker exposures. Data from both mouth and nose inhalation, averaged to assess omnidirectional aspiration efficiencies, were compiled and used to generate a unifying model to relate particle size to aspiration efficiency of the human head. Multiple linear regression was used to generate an empirical model to estimate human aspiration efficiency and included particle size as well as breathing and freestream velocities as dependent variables. A new set of simulated mouth and nose breathing aspiration efficiencies was generated and used to test the fit of empirical models. Further, empirical relationships between test conditions and CFD estimates of aspiration were compared to experimental data from mannequin studies, including both calm-air and ultra-low velocity experiments. While a linear relationship between particle size and aspiration is reported in calm air studies, the CFD simulations identified a more reasonable fit using the square of particle aerodynamic diameter, which better addressed the shape of the efficiency curve's decline toward zero for large particles. The ultimate goal of this work was to develop an empirical model that incorporates real-world variations in critical factors associated with particle aspiration to inform low-velocity modifications to the inhalable particle sampling criterion.
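
    The regression form described, aspiration efficiency against the square of aerodynamic diameter plus the breathing and freestream velocities, is easy to reproduce on synthetic data. The coefficients and variable ranges below are invented for illustration and are not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for CFD results: aerodynamic diameter d (µm),
# freestream velocity U (m/s), breathing velocity V (m/s).
d = rng.uniform(10.0, 100.0, 300)
U = rng.uniform(0.1, 0.4, 300)
V = rng.uniform(1.0, 4.0, 300)
A = 1.0 - 8e-5 * d**2 + 0.1 * U - 0.02 * V + rng.normal(0.0, 0.01, 300)

# Regressing on d**2 rather than d lets the fitted curve bend toward zero
# for large particles, matching the shape argument in the abstract.
X = np.column_stack([np.ones_like(d), d**2, U, V])
beta, *_ = np.linalg.lstsq(X, A, rcond=None)
```

    With enough simulated cases the least-squares fit recovers the generating coefficients closely, and the `d**2` term is what bends the curve downward at large particle sizes.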

  1. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer for discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for the performance evaluation of estimation techniques over wireless networks under realistic radio channel conditions.

  2. Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.

    PubMed

    Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves

    2016-09-01

    This paper develops a localization method to estimate the depth of a target in the context of active sonar at long ranges. The target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position (range and depth) are derived for a bilinear sound-speed profile. The influence of the sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte-Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. The method is finally validated on data acquired in an experimental tank.

  3. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy for lithium-polymer batteries (LiPBs) is proposed based on the integration of the Taguchi method (TM) and state of charge (SOC) estimation. The TM is applied to search for an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC, which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same type of LiPBs with different capacities and cycle lives. The proposed charging strategy also provides a much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.

  4. Experimental and theoretical studies of Schiff bases as corrosion inhibitors.

    PubMed

    Jamil, Dalia M; Al-Okbi, Ahmed K; Al-Baghdadi, Shaimaa B; Al-Amiery, Ahmed A; Kadhim, Abdulhadi; Gaaz, Tayser Sumer; Kadhum, Abdul Amir H; Mohamad, Abu Bakar

    2018-02-05

    Relatively inexpensive, stable Schiff bases, namely 3-((4-hydroxybenzylidene)amino)-2-methylquinazolin-4(3H)-one (BZ3) and 3-((4-(dimethylamino)benzylidene)amino)-2-methylquinazolin-4(3H)-one (BZ4), were employed as highly efficient inhibitors of mild steel corrosion in corrosive acid. The inhibition efficiencies were estimated using the weight loss method. Moreover, scanning electron microscopy was used to investigate the inhibition mechanism. The synthesized Schiff bases were characterized by Fourier transform infrared spectroscopy, nuclear magnetic resonance spectroscopy and micro-elemental analysis. The inhibition efficiency depends on three factors: the amount of nitrogen in the inhibitor, the inhibitor concentration and the inhibitor molecular weight. Inhibition efficiencies of 96% and 92% were achieved with BZ4 and BZ3, respectively, at the maximum tested concentration. Density functional theory calculations of BZ3 and BZ4 were performed to compare the effects of the hydroxyl and N,N-dimethylamino substituents on the inhibition efficiency, providing insight for designing new molecular structures that exhibit enhanced inhibition efficiencies.

  5. Design of Supersonic Transport Flap Systems for Thrust Recovery at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Mann, Michael J.; Carlson, Harry W.; Domack, Christopher S.

    1999-01-01

    A study of the subsonic aerodynamics of hinged flap systems for supersonic cruise commercial aircraft has been conducted using linear attached-flow theory that has been modified to include an estimate of attainable leading edge thrust and an approximate representation of vortex forces. Comparisons of theoretical predictions with experimental results show that the theory gives a reasonably good and generally conservative estimate of the performance of an efficient flap system and provides a good estimate of the leading and trailing-edge deflection angles necessary for optimum performance. A substantial reduction in the area of the inboard region of the leading edge flap has only a minor effect on the performance and the optimum deflection angles. Changes in the size of the outboard leading-edge flap show that performance is greatest when this flap has a chord equal to approximately 30 percent of the wing chord. A study was also made of the performance of various combinations of individual leading and trailing-edge flaps, and the results show that aerodynamic efficiencies as high as 85 percent of full suction are predicted.

  6. Validation of a pair of computer codes for estimation and optimization of subsonic aerodynamic performance of simple hinged-flap systems for thin swept wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1988-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of linearized theory attached flow methods for the estimation and optimization of the aerodynamic performance of simple hinged flap systems. Use of attached flow methods is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. A variety of swept wing configurations are considered ranging from fighters to supersonic transports, all with leading- and trailing-edge flaps for enhancement of subsonic aerodynamic efficiency. The results indicate that linearized theory attached flow computer code methods provide a rational basis for the estimation and optimization of flap system aerodynamic performance at subsonic speeds. The analysis also indicates that vortex flap design is not an opposing approach but is closely related to attached flow design concepts. The successful vortex flap design actually suppresses the formation of detached vortices to produce a small vortex which is restricted almost entirely to the leading edge flap itself.

  7. Basic Research on Seismic and Infrasonic Monitoring of the European Arctic

    DTIC Science & Technology

    2008-09-01

    characteristics as well as the inherent variability among these signals. We have used available recordings both from the Apatity infrasound array and from... experimentally attempt to generate an infrasonic event bulletin using only the estimated azimuths and detection times of infrasound phases recorded by... detection. Our studies have shown a remarkably efficient wave propagation from events near Novaya Zemlya across the Barents Sea. Significant signal

  8. Aerial Refueling Simulator Validation Using Operational Experimentation and Response Surface Methods with Time Series Responses

    DTIC Science & Technology

    2013-03-21

    10 2.3 Time Series Response Data ... 12 2.4 Comparison of Response... to 12 evaluating the efficiency of the parameter estimates. In the past, the most popular form of response surface design used the D-optimality... as well. A model can refer to almost anything in math, statistics, or computer science. It can be any “physical, mathematical, or logical

  9. Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water

    PubMed Central

    Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong

    2017-01-01

    This paper presents a joint estimation method of source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the information on the arrival angle of the direct (D) path in the space domain and the interference characteristic of the D and surface-reflected (SR) paths in the frequency domain. The former is related to a ray tracing technique to backpropagate the rays and produces an ambiguity surface of source range. The latter utilizes Lloyd’s mirror principle to obtain an ambiguity surface of source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical simulations and experimental verification show that the proposed method is a good candidate for initial coarse estimation of source position. PMID:28590442
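
    The Lloyd's-mirror part of the method can be sketched for an idealized isovelocity case: the D and SR paths interfere with a path difference of about 2·z·sin(θ), so spectral notches appear every c/(2·z·sin θ) Hz, and scanning candidate depths against the measured interference pattern yields a depth ambiguity surface. The sound speed, angle, and depth below are hypothetical, and the paper's actual method uses ray back-propagation in a bilinear profile.

```python
import numpy as np

c = 1500.0                      # isovelocity sound speed (m/s), assumed
theta = np.deg2rad(20.0)        # D-path angle at the source (hypothetical)
z_true = 60.0                   # source depth to recover (m)

# Lloyd's mirror interference envelope: |sin(2*pi*f*z*sin(theta)/c)|,
# with notches spaced c / (2*z*sin(theta)) Hz apart.
f = np.linspace(50.0, 1000.0, 4000)
measured = np.abs(np.sin(2.0 * np.pi * f * z_true * np.sin(theta) / c))

# Depth ambiguity surface: correlate the measured pattern with the
# pattern predicted for each candidate depth.
depths = np.arange(10.0, 200.5, 0.5)
amb = np.array([
    np.corrcoef(measured,
                np.abs(np.sin(2.0 * np.pi * f * z * np.sin(theta) / c)))[0, 1]
    for z in depths
])
z_hat = depths[np.argmax(amb)]
```

    In this noiseless sketch the ambiguity surface peaks exactly at the true depth; with real data the peak broadens according to the bounds the paper derives.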

  10. Fatigue level estimation of monetary bills based on frequency band acoustic signals with feature selection by supervised SOM

    NASA Astrophysics Data System (ADS)

    Teranishi, Masaru; Omatu, Sigeru; Kosaka, Toshihisa

    Fatigued monetary bills adversely affect the daily operation of automated teller machines (ATMs). In order to make the classification of fatigued bills more efficient, the development of an automatic fatigued monetary bill classification method is desirable. We propose a new method by which to estimate the fatigue level of monetary bills from the feature-selected frequency band acoustic energy pattern of banking machines. By using a supervised self-organizing map (SOM), we effectively estimate the fatigue level using only the feature-selected frequency band acoustic energy pattern. Furthermore, the feature-selected frequency band acoustic energy pattern improves the estimation accuracy of the fatigue level of monetary bills by adding frequency domain information to the acoustic energy pattern. The experimental results with real monetary bill samples reveal the effectiveness of the proposed method.

  11. NOEC and LOEC as merely concessive expedients: two unambiguous alternatives and some criteria to maximize the efficiency of dose-response experimental designs.

    PubMed

    Murado, M A; Prieto, M A

    2013-09-01

    NOEC and LOEC (no and lowest observed effect concentrations, respectively) are toxicological concepts derived from analysis of variance (ANOVA), a not very sensitive method that produces ambiguous results and does not provide confidence intervals (CI) for its estimates. Despite the abundant criticism that these concepts have raised, the field of ecotoxicology has long been reluctant to abandon them (two possible reasons will be discussed), citing the difficulty of finding clear alternatives. However, this work proves that a debugged dose-response (DR) modelling approach, through explicit algebraic equations, enables two simple options for accurately calculating the CI of doses substantially lower than the NOEC. Both ANOVA and DR analyses are affected by the experimental error, the response profile, the number of observations and the experimental design. The study of these effects, analytically complex and experimentally unfeasible, was carried out using systematic simulations with realistic data, including different error levels. The results revealed the weakness of the NOEC and LOEC notions, confirmed the feasibility of the proposed alternatives, and allowed discussion of the (often violated) conditions that minimize the CI of the parametric estimates from DR assays. In addition, a table was developed providing the experimental design that minimizes the parametric CI for a given set of working conditions. This makes it possible to reduce the experimental effort and to avoid the inconclusive results that are frequently obtained from intuitive experimental plans. Copyright © 2013 Elsevier B.V. All rights reserved.
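
    The alternative the authors advocate, explicit DR modelling with confidence intervals for low-effect doses, can be sketched with a two-parameter log-logistic fit. The model, design, noise level, and the use of `scipy.optimize.curve_fit` plus a parametric bootstrap are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def loglogistic(d, ec50, b):
    # Two-parameter log-logistic dose-response, response scaled to [0, 1].
    return 1.0 / (1.0 + (d / ec50)**b)

doses = np.repeat([0.5, 1, 2, 4, 8, 16, 32], 4).astype(float)  # toy design
resp = loglogistic(doses, 6.0, 1.8) + rng.normal(0.0, 0.03, doses.size)

popt, pcov = curve_fit(loglogistic, doses, resp, p0=[5.0, 1.0])

# Parametric bootstrap CI for the EC10, the dose causing a 10% effect,
# i.e. the kind of low-effect estimate for which NOEC/LOEC offer no interval.
draws = rng.multivariate_normal(popt, pcov, 2000)
ec10 = draws[:, 0] * (1.0 / 9.0)**(1.0 / draws[:, 1])
lo, hi = np.percentile(ec10, [2.5, 97.5])
```

    Unlike a NOEC, the EC10 comes with an explicit interval (lo, hi), and the same machinery applies to any effect level of interest.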

  12. A Game Map Complexity Measure Based on Hamming Distance

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Pan; Li, Wenliang

    With the booming PC game market, game AI has attracted more and more research. The interest and difficulty of a game are related to the map used in its scenarios, and the path-finding efficiency in a game is also affected by the complexity of the map. In this paper, a novel complexity measure based on Hamming distance, called the Hamming complexity, is introduced. This measure estimates the complexity of a binary tileworld. We experimentally demonstrate that Hamming complexity is strongly correlated with the efficiency of the A* algorithm, and it is therefore a useful reference for the designer when developing a game map.
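
    The abstract does not give the exact formula, but one plausible reading, the mean Hamming distance between adjacent rows and columns of the binary tile map, takes only a few lines; treat this definition as an assumption rather than the paper's.

```python
import numpy as np

def hamming_complexity(grid):
    # Mean Hamming distance between adjacent rows and columns of a binary
    # tile map (0 = walkable, 1 = blocked). NOTE: a plausible reading of
    # the abstract, not necessarily the paper's exact formula.
    g = np.asarray(grid)
    row_d = np.sum(g[1:, :] != g[:-1, :], axis=1)   # row-to-row distances
    col_d = np.sum(g[:, 1:] != g[:, :-1], axis=0)   # column-to-column
    return (row_d.sum() + col_d.sum()) / (row_d.size + col_d.size)

uniform = np.zeros((8, 8), dtype=int)             # trivial open map
checker = np.indices((8, 8)).sum(axis=0) % 2      # maximally "busy" map
```

    A uniform map scores 0 and an 8×8 checkerboard scores the maximum of 8, with realistic obstacle layouts falling in between.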

  13. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.

  14. Optimization of multi-environment trials for genomic selection based on crop models.

    PubMed

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials so as to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling via crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method for optimizing the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim and was evaluated on simulated and real data, with wheat phenology as the example. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. In terms of the quality of the parameter estimates, a MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.

  15. Experimental investigation of non-planar sheared outboard wing planforms

    NASA Technical Reports Server (NTRS)

    Naik, D. A.; Ostowari, C.

    1988-01-01

    The outboard planforms of wings have been found to be of prime importance in studies of induced drag reduction. This conclusion is based on an experimental and theoretical study of the aerodynamic characteristics of planar and nonplanar outboard wing forms. Six different configurations (baseline rectangular, planar sheared, sheared with dihedral, sheared with anhedral, rising arc, and drooping arc) were investigated for two different spans. Span efficiencies as much as 20 percent greater than baseline can be realized with nonplanar wing forms. Optimization studies show that this advantage can be achieved along with a bending moment benefit. Parasite drag and lateral stability estimations were not included in the analysis.

  16. Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.

    PubMed

    Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2014-11-25

    In order to capture the short-term clock variations of GNSS satellites, clock corrections must be estimated and updated at a high rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation alone, as a great number of ambiguities need to be estimated simultaneously. However, better estimates are expected when more stations are included, and satellites from different GNSS systems must be processed together for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are commonly employed, in which the ambiguities are eliminated. Because the epoch-differenced method derives only temporal clock changes, which must then be aligned to the absolute clocks in a rather complicated way, this paper proposes an efficient method for high-rate clock estimation using the concept of "carrier-range", realized by means of PPP with integer ambiguity resolution. Processing procedures are developed for both post-processing and real-time processing. The experimental validation shows that the computation time can be reduced to about one sixth of that of existing methods in post-processing, and to less than 1 s for processing a single epoch of a network of about 200 stations in real-time mode once all ambiguities are fixed. This confirms that the proposed strategy enables high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode.
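
    The "carrier-range" idea can be sketched for one satellite observed by a network: once PPP ambiguity resolution has fixed the integers, subtracting the ambiguity term turns each phase series into an unambiguous carrier-range, and the epoch clock becomes a direct per-epoch average, with no ambiguity parameters left to solve and no epoch-differenced alignment step. The geometry, noise levels, and single-satellite/known-range simplifications below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_sta, n_ep = 50, 10
lam = 0.19                                    # carrier wavelength (m), approx.
geom = rng.uniform(2.0e7, 2.5e7, (n_sta, 1)) # known geometric ranges (m)
clk = np.cumsum(rng.normal(0.0, 0.01, n_ep)) # true satellite clock series (m)
N = rng.integers(-1000, 1000, (n_sta, 1))    # integer carrier ambiguities

# Carrier-phase observations in metres: geometry + clock + ambiguity + noise.
phase = geom + clk + lam * N + rng.normal(0.0, 0.003, (n_sta, n_ep))

# With the integers fixed (assumed resolved by PPP-AR), the phase becomes an
# unambiguous "carrier-range"; each epoch's clock is then a trivial average.
carrier_range = phase - lam * N
clk_est = (carrier_range - geom).mean(axis=0)
```

    Receiver clocks, troposphere, and multi-satellite geometry are deliberately omitted; the point is only that the per-epoch solve contains no ambiguity unknowns once carrier-ranges are available.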

  17. A Statistical Guide to the Design of Deep Mutational Scanning Experiments

    PubMed Central

    Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia

    2016-01-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
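
    The core estimation step, fitness from relative abundance across time points, reduces to a log-linear regression per mutant. The counts, sequencing depth, and Poisson sampling below are a toy stand-in for a real bulk-competition dataset.

```python
import numpy as np

rng = np.random.default_rng(3)

s_true = 0.08                                   # hypothetical selection coeff.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])   # sampling times (generations)
depth = 500_000                                 # reads per time point
p0 = 0.01                                       # initial mutant frequency

# Mutant frequency under constant relative fitness, then Poisson sampling
# to mimic finite sequencing depth.
p = p0 * np.exp(s_true * t) / (p0 * np.exp(s_true * t) + (1.0 - p0))
mut = rng.poisson(depth * p)
wt = depth - mut

# ln(mutant/reference counts) is approximately linear in time with slope s.
s_hat = np.polyfit(t, np.log(mut / wt), 1)[0]
```

    The abstract's design advice maps directly onto this regression: more time points, and points clustered at the start and end, shrink the variance of the fitted slope faster than extra sequencing depth does.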

  18. Efficient amplitude-modulated pulses for triple- to single-quantum coherence conversion in MQMAS NMR.

    PubMed

    Colaux, Henri; Dawson, Daniel M; Ashbrook, Sharon E

    2014-08-07

    The conversion between multiple- and single-quantum coherences is integral to many nuclear magnetic resonance (NMR) experiments of quadrupolar nuclei. This conversion is relatively inefficient when effected by a single pulse, and many composite pulse schemes have been developed to improve this efficiency. To provide the maximum improvement, such schemes typically require time-consuming experimental optimization. Here, we demonstrate an approach for generating amplitude-modulated pulses to enhance the efficiency of the triple- to single-quantum conversion. The optimization is performed using the SIMPSON and MATLAB packages and results in efficient pulses that can be used without experimental reoptimisation. Most significant signal enhancements are obtained when good estimates of the inherent radio-frequency nutation rate and the magnitude of the quadrupolar coupling are used as input to the optimization, but the pulses appear robust to reasonable variations in either parameter, producing significant enhancements compared to a single-pulse conversion, and also comparable or improved efficiency over other commonly used approaches. In all cases, the ease of implementation of our method is advantageous, particularly for cases with low sensitivity, where the improvement is most needed (e.g., low gyromagnetic ratio or high quadrupolar coupling). Our approach offers the potential to routinely improve the sensitivity of high-resolution NMR spectra of nuclei and systems that would, perhaps, otherwise be deemed "too challenging".

  19. Efficient Amplitude-Modulated Pulses for Triple- to Single-Quantum Coherence Conversion in MQMAS NMR

    PubMed Central

    2014-01-01

    The conversion between multiple- and single-quantum coherences is integral to many nuclear magnetic resonance (NMR) experiments of quadrupolar nuclei. This conversion is relatively inefficient when effected by a single pulse, and many composite pulse schemes have been developed to improve this efficiency. To provide the maximum improvement, such schemes typically require time-consuming experimental optimization. Here, we demonstrate an approach for generating amplitude-modulated pulses to enhance the efficiency of the triple- to single-quantum conversion. The optimization is performed using the SIMPSON and MATLAB packages and results in efficient pulses that can be used without experimental reoptimisation. Most significant signal enhancements are obtained when good estimates of the inherent radio-frequency nutation rate and the magnitude of the quadrupolar coupling are used as input to the optimization, but the pulses appear robust to reasonable variations in either parameter, producing significant enhancements compared to a single-pulse conversion, and also comparable or improved efficiency over other commonly used approaches. In all cases, the ease of implementation of our method is advantageous, particularly for cases with low sensitivity, where the improvement is most needed (e.g., low gyromagnetic ratio or high quadrupolar coupling). Our approach offers the potential to routinely improve the sensitivity of high-resolution NMR spectra of nuclei and systems that would, perhaps, otherwise be deemed “too challenging”. PMID:25047226

  20. Estimation of rate constant for VE excitation of the C2(D1Σ) state in He-CO-O2 discharge plasma

    NASA Astrophysics Data System (ADS)

    Grigorian, G.; Cenian, Adam

    2013-01-01

    The paper discusses experimental results pointing to an efficient channel of CO vibrational to C2 electronic energy transfer. The D1Σu - X1Σg radiation spectra, known as Mulliken bands, are investigated, and the relation of their kinetics to the vibrational excitation of CO molecules in the He-CO-O2 plasma is discussed. The rate constant for the VE process ( CO(v >= 25) + C2 → CO(v - 25) + C2(D1Σu) ) is estimated as kVE ~ 10^-14 cm^3/s.

  1. Modeling air concentration over macro roughness conditions by Artificial Intelligence techniques

    NASA Astrophysics Data System (ADS)

    Roshni, T.; Pagliara, S.

    2018-05-01

    Aeration in rivers is improved by the turbulence created in flows over macro and intermediate roughness conditions, which are generated by flows over block ramps or rock chutes. The measurements are taken in the uniform flow region. Applications of soft computing methods to modeling such hydraulic parameters are not common so far. In this study, the modeling efficiencies of the MPMR and FFNN models are evaluated for estimating the air concentration over block ramps under macro roughness conditions. The experimental data are used for the training and testing phases. The potential capability of the MPMR and FFNN models in estimating air concentration is demonstrated through this study.

  2. Efficient sparse matrix multiplication scheme for the CYBER 203

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.

    1984-01-01

    This work has been directed toward the development of an efficient algorithm for performing sparse matrix multiplication on the CYBER-203. The desire to provide software which gives the user the choice between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of three types of storage is selected for each diagonal. For each storage type, an initialization subroutine estimates the CPU and storage requirements based upon results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the resources. The three storage types employed were chosen to be efficient on the CYBER-203 for diagonals which are sparse, moderately sparse, or dense; however, for many densities, no single type is most efficient with respect to both resource requirements. The user-supplied weights dictate the choice.
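
    The selection logic described, per-diagonal cost estimates reweighted by user priorities, can be sketched generically; the cost functions below are invented placeholders, whereas the original subroutine used costs calibrated by timing experiments on the CYBER-203.

```python
# Hypothetical (CPU, storage) cost models per storage type, as functions of
# a diagonal's density d in [0, 1]; placeholders, not the calibrated values.
def costs(d):
    return {
        "sparse":   (5.0 * d + 0.5, 2.0 * d),   # index/value pairs
        "moderate": (2.0 * d + 1.0, 1.2 * d),
        "dense":    (1.0,           1.0),       # full diagonal stored
    }

def pick_storage(d, w_cpu=1.0, w_mem=1.0):
    # User-supplied weights shift the choice toward CPU time or storage.
    table = costs(d)
    return min(table, key=lambda k: w_cpu * table[k][0] + w_mem * table[k][1])
```

    A nearly empty diagonal selects "sparse", a dense one "dense", and a memory-only weighting can flip a middling diagonal to "moderate", mirroring the observation that no single type wins on both resources.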

  3. Experimental Investigation of Pulsed Nanosecond Streamer Discharges for CO2 Reforming

    NASA Astrophysics Data System (ADS)

    Pachuilo, Michael; Levko, Dima; Raja, Laxminarayan; Varghese, Philip

    2016-09-01

    Rapid global industrialization has led to an increase in atmospheric greenhouse gases, specifically carbon dioxide levels. Plasmas present great potential for the efficient reforming of greenhouse gases. Several plasma discharges have been reported for the reforming process: dielectric barrier discharges (DBD), microwave discharges, and glide-arcs. Microwave discharges have CO2 conversion energy efficiencies of up to 40% at atmospheric conditions, while glide-arcs reach 43% and DBDs 2-10%. In our study, we analyze a single nanosecond pulsed cathode-directed streamer discharge in CO2 at atmospheric pressure and temperature. We have conducted time-resolved imaging with spectral bandpass filters of a streamer discharge with an applied negative-polarity pulse. The image sequences have been correlated with the applied voltage and current pulses. From the spectral filters we can determine where, spatially and temporally, excited species are formed. In this talk we report on spectroscopic studies of the discharge and estimate plasma properties such as the temperature and density of excited species and electrons. Furthermore, we report on the effects of pulse polarity, as well as of anodic streamer discharges, on the CO2 conversion efficiency. Finally, we will focus on the effects of vibrational excitation on the carbon dioxide reforming efficiency of streamer discharges. Our experimental results will be compared with an accompanying plasma computational model study.

  4. Experimental studies and simulations of hydrogen pellet ablation in the stellarator TJ-II

    NASA Astrophysics Data System (ADS)

    Panadero, N.; McCarthy, K. J.; Koechl, F.; Baldzuhn, J.; Velasco, J. L.; Combs, S. K.; de la Cal, E.; García, R.; Hernández Sánchez, J.; Silvagni, D.; Turkin, Y.; TJ-II Team; W7-X Team

    2018-02-01

    Plasma core fuelling is a key issue for the development of steady-state scenarios in large magnetically-confined fusion devices, in particular for helical-type machines. At present, cryogenic pellet injection is the most promising technique for efficient fuelling. Here, pellet ablation and fuelling efficiency experiments, using a compact pellet injector, are carried out in electron cyclotron resonance and neutral beam injection heated plasmas of the stellarator TJ-II. Ablation profiles are reconstructed from light emissions collected by silicon photodiodes and a fast-frame camera system, under the assumptions that such emissions are loosely related to the ablation rate and that pellet radial acceleration is negligible. In addition, pellet particle deposition and fuelling efficiency are determined using density profiles provided by a Thomson scattering system. Furthermore, experimental results are compared with ablation and deposition profiles provided by the HPI2 pellet code, which is adapted here for the stellarators Wendelstein 7-X (W7-X) and TJ-II. Finally, the HPI2 code is used to simulate ablation and deposition profiles for pellets of different sizes and velocities injected into relevant W7-X plasma scenarios, while estimating the plasmoid drift and the fuelling efficiency of injections made from two W7-X ports.

  5. Modeling non-harmonic behavior of materials from experimental inelastic neutron scattering and thermal expansion measurements

    NASA Astrophysics Data System (ADS)

    Bansal, Dipanshu; Aref, Amjad; Dargush, Gary; Delaire, Olivier

    2016-09-01

    Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials, which are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally-derived quantities are valuable to benchmark first-principles theoretical predictions of harmonic and non-harmonic thermal behaviors using perturbation theory, ab initio molecular-dynamics, or Monte-Carlo simulations. We illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperature. Results agree well with previous data in the literature and provide an efficient approach to estimate anharmonic effects in materials.
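The harmonic contribution above follows the standard phonon-DOS expression for vibrational entropy, which is straightforward to evaluate numerically. A minimal sketch, assuming a normalized Debye density of states with an aluminum-like Debye temperature (the numbers are illustrative, not the paper's data):

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.0545718e-34   # reduced Planck constant, J s

def harmonic_entropy(T, omega, g):
    """Vibrational entropy per atom (J/K) from a normalized phonon DOS g(omega).

    Standard harmonic expression:
    S = 3 kB * integral of g(w) [ (n+1) ln(n+1) - n ln n ] dw,
    with Bose-Einstein occupation n = 1 / (exp(hbar w / kB T) - 1).
    """
    x = hbar * omega / (kB * T)
    n = 1.0 / np.expm1(x)
    integrand = g * ((n + 1.0) * np.log(n + 1.0) - n * np.log(n))
    # Trapezoidal integration over the frequency grid
    return 3.0 * kB * float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(omega)) / 2.0)

# Hypothetical Debye DOS with Debye temperature ~430 K (aluminum-like)
theta_D = 430.0
w_D = kB * theta_D / hbar
omega = np.linspace(1e-3 * w_D, w_D, 2000)
g = 3.0 * omega**2 / w_D**3          # normalized: integral of g dw = 1

S_100 = harmonic_entropy(100.0, omega, g)
S_300 = harmonic_entropy(300.0, omega, g)
```

As expected for a harmonic solid, the entropy is positive and increases with temperature; the dilational and anharmonic corrections discussed in the abstract would be added on top of this baseline.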

  6. Theoretical basis, principles of design, and experimental study of the prototype of perfect AFCS transmitting signals without coding

    NASA Astrophysics Data System (ADS)

    Platonov, A.; Zaitsev, Ie.; Opalski, L. J.

    2017-08-01

    The paper presents an overview of the design methodology and results of experiments with a prototype of a highly efficient optimal adaptive feedback communication system (AFCS), transmitting low-frequency analog signals without coding. The paper emphasizes the role of forward-transmitter saturation as the factor that blocked implementation of the theoretical results of the pioneering (1960s-1970s) and later research on FCS. A deeper analysis of the role of the statistical fitting condition in the adequate formulation and solution of the AFCS optimization task is given. The solution of the task, the optimal transmission/reception algorithms, is presented in a form useful for elaboration of the hardware/software prototype. A notable particularity of the prototype is the absence of encoding/decoding units, whose functions are realized by the adaptive pulse amplitude modulator (PAM) of the forward transmitter (FT) and the estimating/controlling algorithm in the receiver of the base station (BS). Experiments confirm that the prototype transmits signals from the FT to the BS "perfectly": with a bit rate equal to the capacity of the system, and with limiting energy [J/bit] and spectral [bps/Hz] efficiency. Another, no less important and experimentally confirmed, particularity of AFCS is its capability to adjust the parameters of the FT and BS to the characteristics of the application scenario and maintain the ideal regime of transmission, including spectral-energy efficiency. AFCS adjustment can be made using BS estimates of the mean square error (MSE). The concluding part of the paper discusses the presented results, stressing the capability of AFCS to solve problems appearing in the development of dense wireless networks.

  7. Alpha particle and proton relative thermoluminescence efficiencies in LiF:Mg,Cu,P: is track structure theory up to the task?

    PubMed

    Horowitz, Y S; Siboni, D; Oster, L; Livingstone, J; Guatelli, S; Rosenfeld, A; Emfietzoglou, D; Bilski, P; Obryk, B

    2012-07-01

    Low-energy alpha particle and proton heavy charged particle (HCP) relative thermoluminescence (TL) efficiencies are calculated for the major dosimetric glow peak in LiF:Mg,Cu,P (MCP-N) in the framework of track structure theory (TST). The calculations employ previously published TRIPOS-E Monte Carlo track segment values of the radial dose in condensed-phase LiF calculated at the Instituto Nacional de Investigaciones Nucleares (Mexico) and experimentally measured normalised (60)Co gamma-induced TL dose-response functions, f(D), carried out at the Institute of Nuclear Physics (Poland). The motivation for the calculations is to test the validity of TST in a TL system in which f(D) is not supralinear (f(D) > 1) and is not significantly dependent on photon energy, contrary to the behaviour of the dose-response of composite peak 5 in the glow curve of LiF:Mg,Ti (TLD-100). The calculated HCP relative efficiencies in LiF:MCP-N are 23-87% lower than the experimentally measured values, indicating a weakness in the major premise of TST, which exclusively relates HCP effects to the radiation action of the secondary electrons liberated by the HCP slowing down. However, an analysis of the uncertainties involved in the TST calculations and experiments (i.e. experimental measurement of f(D) at high levels of dose, sample light self-absorption and accuracy in the estimation of D(r), especially towards the end of the HCP track) indicates that these may be too large to enable a definite conclusion. More accurate estimation of sample light self-absorption, improved measurements of f(D) and full-track Monte Carlo calculations of D(r) incorporating improvements in low-energy electron transport are indicated in order to reduce uncertainties and enable a final conclusion.

  8. Density functional theory and phytochemical study of Pistagremic acid

    NASA Astrophysics Data System (ADS)

    Ullah, Habib; Rauf, Abdur; Ullah, Zakir; Fazl-i-Sattar; Anwar, Muhammad; Shah, Anwar-ul-Haq Ali; Uddin, Ghias; Ayub, Khurshid

    2014-01-01

    We report here, for the first time, a comparative theoretical and experimental study of Pistagremic acid (P.A). We have developed a theoretical model for obtaining the electronic and spectroscopic properties of P.A. The simulated data correlate well with the experimental data. The geometric and electronic properties were simulated at the B3LYP/6-31G(d,p) level of density functional theory (DFT). The optimized geometric parameters of P.A were found to be consistent with those from the X-ray crystal structure. Differences of about 0.01-0.15 Å in bond lengths and 0.19-1.30° in bond angles were observed between the experimental and theoretical data. The theoretical vibrational bands of P.A were found to correlate with the experimental IR spectrum after a common scaling factor of 0.963. The experimental and predicted UV-Vis spectra (at B3LYP/6-31+G(d,p)) differ by 36 nm. This difference from the experimental results arises because of the condensed-phase nature of P.A. Electronic properties such as the ionization potential (I.P), electron affinity (E.A), and the coefficients of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) of P.A were estimated for the first time; however, no correlation could be made with experiment. Intermolecular interaction and its effect on the vibrational (IR), electronic, and geometric parameters were simulated using formic acid as a model for hydrogen bonding in P.A.

  9. Challenges to inferring causality from viral information dispersion in dynamic social networks

    NASA Astrophysics Data System (ADS)

    Ternovski, John

    2014-06-01

    Understanding the mechanism behind large-scale information dispersion through complex networks has important implications for a variety of industries ranging from cyber-security to public health. With the unprecedented availability of public data from online social networks (OSNs) and the low cost nature of most OSN outreach, randomized controlled experiments, the "gold standard" of causal inference methodologies, have been used with increasing regularity to study viral information dispersion. And while these studies have dramatically furthered our understanding of how information disseminates through social networks by isolating causal mechanisms, there are still major methodological concerns that need to be addressed in future research. This paper delineates why modern OSNs are markedly different from traditional sociological social networks and why these differences present unique challenges to experimentalists and data scientists. The dynamic nature of OSNs is particularly troublesome for researchers implementing experimental designs, so this paper identifies major sources of bias arising from network mutability and suggests strategies to circumvent and adjust for these biases. This paper also discusses the practical considerations of data quality and collection, which may adversely impact the efficiency of the estimator. The major experimental methodologies used in the current literature on virality are assessed at length, and their strengths and limits identified. Other, as-yet-unsolved threats to the efficiency and unbiasedness of causal estimators--such as missing data--are also discussed. This paper integrates methodologies and lessons from a variety of fields under an experimental and data science framework in order to systematically consolidate and identify current methodological limitations of randomized controlled experiments conducted in OSNs.

  10. Optimal ciliary beating patterns

    NASA Astrophysics Data System (ADS)

    Vilfan, Andrej; Osterman, Natan

    2011-11-01

    We introduce a measure for the energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q²/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)²/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium, optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.
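The two efficiency measures defined above can be written down directly; note that the array measure reduces to ρ times the single-cilium measure for fixed per-cilium flow and power, consistent with the observation that efficiency grows with ciliary density. A minimal sketch (function names are ours):

```python
def cilium_efficiency(Q, P):
    """Energetic efficiency of a single cilium: Q^2 / P, where Q is the
    volume flow rate and P the dissipated power."""
    return Q * Q / P

def array_efficiency(rho, Q, P):
    """Efficiency of a ciliated surface: (rho*Q)^2 / (rho*P), where rho is
    the surface density of cilia. Equals rho * (Q^2 / P)."""
    return (rho * Q) ** 2 / (rho * P)
```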

  11. Modelling the cancer growth process by Stochastic Differential Equations with the effect of Chondroitin Sulfate (CS) as anticancer therapeutics

    NASA Astrophysics Data System (ADS)

    Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina

    2017-09-01

    A stochastic model is introduced to describe the growth of cancer affected by the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood function. The Euler-Maruyama numerical method is employed to solve the model numerically. The efficiency of the stochastic model is measured by comparing the simulated result with the experimental data.
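The Euler-Maruyama scheme mentioned above can be sketched generically. The stochastic logistic growth model below (drift r X (1 - X/K), multiplicative noise σ X) is a stand-in for illustration, not the paper's fitted CS model, and all parameter values are made up:

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, T, n_steps, rng):
    """Simulate one path of dX = drift(X) dt + diffusion(X) dW
    with the Euler-Maruyama discretization."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))       # Brownian increment
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dW
    return x

# Hypothetical stochastic logistic tumour-growth model:
# dX = r X (1 - X/K) dt + sigma X dW
r, K, sigma = 0.8, 100.0, 0.05
rng = np.random.default_rng(42)
path = euler_maruyama(x0=5.0,
                      drift=lambda x: r * x * (1 - x / K),
                      diffusion=lambda x: sigma * x,
                      T=10.0, n_steps=1000, rng=rng)
```

With these parameters the path grows from the initial tumour size toward the carrying capacity K, and the simulated trajectory could then be compared against experimental growth data as the abstract describes.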

  12. Free Energy Computations by Minimization of Kullback-Leibler Divergence: An Efficient Adaptive Biasing Potential Method for Sparse Representations

    DTIC Science & Technology

    2011-10-14

    landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy ... experimentally, to characterize global changes as well as investigate relative stabilities. In most applications, a brute-force computation based on

  13. Edge Length and Surface Area of a Blank: Experimental Assessment of Measures, Size Predictions and Utility

    PubMed Central

    Dogandžić, Tamara; Braun, David R.; McPherron, Shannon P.

    2015-01-01

    Blank size and form represent one of the main sources of variation in lithic assemblages. They reflect economic properties of blanks and factors such as efficiency and use life. These properties require reliable measures of size, namely edge length and surface area. These measures, however, are not easily captured with calipers. Most attempts to quantify these features employ estimates; however, the efficacy of these estimations for measuring critical features such as blank surface area and edge length has never been properly evaluated. In addition, these parameters are even more difficult to acquire for retouched implements, as their original size, and hence any indication of their previous utility, has been lost. It has been suggested, in controlled experimental conditions, that two platform variables, platform thickness and exterior platform angle, are crucial in determining blank size and shape, meaning that knappers can control the interaction between size and efficiency by selecting specific core angles and controlling where fracture is initiated. The robustness of these models has rarely been tested and confirmed in contexts other than controlled experiments. In this paper, we evaluate which currently employed caliper measurement methods result in the most accurate size estimations of blanks, and we evaluate how platform variables can be used to indirectly infer aspects of size on retouched artifacts. Furthermore, we investigate measures of different platform management strategies that control the shape and size of artifacts. To investigate these questions, we created an experimental lithic assemblage and digitized images to calculate 2D surface area and edge length, which are used as a point of comparison for the caliper measurements and additional analyses. The analysis of aspects of size determination and the utility of blanks contributes to our understanding of the technological strategies of prehistoric knappers and the economic decisions they made during the process of blank production. PMID:26332773
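For a digitized outline, computing 2D surface area and edge length reduces to elementary computational geometry. A minimal sketch using the shoelace formula (the polygon below is illustrative, not a blank from the study's assemblage):

```python
import math

def polygon_area_perimeter(points):
    """Area (shoelace formula) and perimeter of a closed polygon given as an
    ordered list of (x, y) vertices, e.g. a digitized blank outline."""
    n = len(points)
    area2 = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]     # wrap around to close the outline
        area2 += x1 * y2 - x2 * y1       # twice the signed area
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area2) / 2.0, perim

# Sanity check on a 2 x 2 square: area 4, perimeter 8
area, edge_length = polygon_area_perimeter([(0, 0), (2, 0), (2, 2), (0, 2)])
```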

  14. Quantifying the flow efficiency in constant-current capacitive deionization.

    PubMed

    Hawks, Steven A; Knipe, Jennifer M; Campbell, Patrick G; Loeb, Colin K; Hubert, McKenzie A; Santiago, Juan G; Stadermann, Michael

    2018-02-01

    Here we detail a previously unappreciated loss mechanism inherent to capacitive deionization (CDI) cycling operation that has a substantial role in determining performance. This mechanism reflects the fact that desalinated water inside a cell is partially lost to re-salination if desorption is carried out immediately after adsorption. We describe such effects by a parameter called the flow efficiency, and show that this efficiency is distinct from, and yet multiplicative with, other highly studied adsorption efficiencies. Flow losses can be minimized by flowing more feed solution through the cell during desalination; however, this also results in less effluent concentration reduction. While the rationale outlined here is applicable to all CDI cell architectures that rely on cycling, we validate our model with a flow-through electrode CDI device operated in constant-current mode. We find excellent agreement between flow efficiency model predictions and experimental results, thus giving researchers simple equations by which they can estimate this distinct loss process for their operation.

  15. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
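The point estimate in interleaved randomized benchmarking is computed from the two fitted decay parameters of the reference and interleaved sequences; the standard estimator is r = (d - 1)(1 - p_C/p_ref)/d with d = 2^n. A minimal sketch (the decay values below are made up for illustration):

```python
def interleaved_gate_error(p_ref, p_interleaved, n_qubits=1):
    """Point estimate of the average error of an interleaved gate:
    r = (d - 1) * (1 - p_C / p_ref) / d,  d = 2**n_qubits,
    where p_ref and p_C are the depolarizing parameters fitted to the
    reference and interleaved randomized-benchmarking decay curves."""
    d = 2 ** n_qubits
    return (d - 1) * (1 - p_interleaved / p_ref) / d

# Illustrative single-qubit decay parameters (hypothetical, not from the paper)
r = interleaved_gate_error(p_ref=0.9975, p_interleaved=0.9945)
```

The theoretical bounds quoted in the abstract (e.g. [0, 0.016]) account for how the interleaved gate's error can combine with the average Clifford error, and are reported alongside this point estimate.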

  16. Parameters estimation for reactive transport: A way to test the validity of a reactive model

    NASA Astrophysics Data System (ADS)

    Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme

    The chemical parameters used in reactive transport models are not known accurately due to the complexity and the heterogeneous conditions of a real domain. We present an efficient algorithm for estimating the chemical parameters using a Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
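A Monte-Carlo parameter search of this kind can be sketched as sample-and-keep-best. The one-parameter Langmuir-type sorption model below is a hypothetical stand-in for the surface complexation models, not the authors' chemistry, and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def model(c, K):
    """Hypothetical Langmuir-type sorbed fraction with unknown affinity K."""
    return K * c / (1.0 + K * c)

# Synthetic "observations" at seven conditions (cf. the seven pHs), with noise
c_obs = np.linspace(0.1, 5.0, 7)
K_true = 2.0
y_obs = model(c_obs, K_true) + rng.normal(0.0, 0.005, c_obs.size)

# Monte-Carlo estimation: sample candidate parameters uniformly over a prior
# range and keep the candidate with the smallest least-squares misfit.
candidates = rng.uniform(0.0, 5.0, 5000)
misfits = [np.sum((model(c_obs, K) - y_obs) ** 2) for K in candidates]
K_best = candidates[int(np.argmin(misfits))]
```

Comparing the best attainable misfit across competing model structures is then what allows one to reject a proposed adsorption model, as the abstract concludes for TBT.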

  17. Independent-Trajectory Thermodynamic Integration: a practical guide to protein-drug binding free energy calculations using distributed computing.

    PubMed

    Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew

    2012-01-01

    The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.
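Pooling the independent trajectories into a single estimate is straightforward. A minimal sketch of combining independent TI runs into a mean and standard error (the free-energy values below are made up for illustration):

```python
import statistics

def it_ti_estimate(delta_g_runs):
    """Combine independent TI free-energy estimates: sample mean and
    standard error of the mean."""
    mean = statistics.fmean(delta_g_runs)
    se = statistics.stdev(delta_g_runs) / len(delta_g_runs) ** 0.5
    return mean, se

# Illustrative binding free energies from five independent runs (kcal/mol)
mean_dg, se_dg = it_ti_estimate([-9.1, -8.7, -9.4, -8.9, -9.2])
```

The spread across runs quantifies the finite-sampling uncertainty that a single TI prediction hides, which is the central point of the IT-TI approach.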

  18. A theoretical framework for whole-plant carbon assimilation efficiency based on metabolic scaling theory: a test case using Picea seedlings.

    PubMed

    Wang, Zhiqiang; Ji, Mingfei; Deng, Jianming; Milne, Richard I; Ran, Jinzhi; Zhang, Qiang; Fan, Zhexuan; Zhang, Xiaowei; Li, Jiangtao; Huang, Heng; Cheng, Dongliang; Niklas, Karl J

    2015-06-01

    Simultaneous and accurate measurements of whole-plant instantaneous carbon-use efficiency (ICUE) and annual total carbon-use efficiency (TCUE) are difficult to make, especially for trees. One usually estimates ICUE based on the net photosynthetic rate or the assumed proportional relationship between growth efficiency and ICUE. However, thus far, protocols for easily estimating annual TCUE remain problematic. Here, we present a theoretical framework (based on the metabolic scaling theory) to predict whole-plant annual TCUE by directly measuring instantaneous net photosynthetic and respiratory rates. This framework makes four predictions, which were evaluated empirically using seedlings of nine Picea taxa: (i) the flux rates of CO(2) and energy will scale isometrically as a function of plant size, (ii) whole-plant net and gross photosynthetic rates and the net primary productivity will scale isometrically with respect to total leaf mass, (iii) these scaling relationships will be independent of ambient temperature and humidity fluctuations (as measured within an experimental chamber) regardless of the instantaneous net photosynthetic rate or dark respiratory rate, or overall growth rate and (iv) TCUE will scale isometrically with respect to instantaneous efficiency of carbon use (i.e., the latter can be used to predict the former) across diverse species. These predictions were experimentally verified. We also found that the ranking of the nine taxa based on net photosynthetic rates differed from ranking based on either ICUE or TCUE. In addition, the absolute values of ICUE and TCUE significantly differed among the nine taxa, with both ICUE and temperature-corrected ICUE being highest for Picea abies and lowest for Picea schrenkiana. 
Nevertheless, the data are consistent with the predictions of our general theoretical framework, which can be used to access annual carbon-use efficiency of different species at the level of an individual plant based on simple, direct measurements. Moreover, we believe that our approach provides a way to cope with the complexities of different ecosystems, provided that sufficient measurements are taken to calibrate our approach to that of the system being studied. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    PubMed

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications, due to the variability of eye images, and hence, to date, have required manual intervention for fine-tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.

  20. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84% and reduces the computational complexity by 54.54%.
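The redundant-point removal described above can be sketched as set-based deduplication of the per-PU search points. The diamond pattern and PU start positions below are simplified illustrations, not the paper's exact search geometry:

```python
def zonal_points(start, strides=(1, 2, 4, 8)):
    """Diamond-pattern search points around one PU's start position
    (a simplified stand-in for the TZ zonal search pattern)."""
    x, y = start
    pts = []
    for s in strides:
        pts += [(x + s, y), (x - s, y), (x, y + s), (x, y - s)]
    return pts

# Motion-vector predictors (start positions) for several PU partitions of one
# CU; the values are illustrative.
pu_starts = [(0, 0), (0, 0), (1, 0), (0, 1)]

all_pts = [p for s in pu_starts for p in zonal_points(s)]
unique_pts = set(all_pts)   # redundant points removed before cost evaluation
```

Motion costs are then evaluated once per unique point, and each PU picks its best point from the shared results, which is what enables the parallel, hardware-friendly structure.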

  1. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
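The basic random walk on spheres step referenced above (jump to a uniform point on the largest inscribed sphere, repeat until within ε of the boundary) can be sketched for a plain Laplace problem on the unit disk, where the harmonic solution is known in closed form. This is the textbook algorithm only, without the paper's partial-reflection and replacement techniques:

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, rng, eps=1e-3):
    """One walk-on-spheres sample of the harmonic solution at (x, y) in the
    unit disk with Dirichlet boundary data boundary_value(bx, by)."""
    while True:
        d = 1.0 - math.hypot(x, y)          # distance to the unit circle
        if d < eps:
            r = math.hypot(x, y)
            return boundary_value(x / r, y / r)   # project onto the boundary
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(theta)            # jump to a uniform point on the
        y += d * math.sin(theta)            # largest inscribed circle

# g(x, y) = x on the circle has harmonic extension u(x, y) = x, so the
# Monte Carlo estimate at (0.3, 0.2) should be close to 0.3.
rng = random.Random(0)
n_walks = 4000
u = sum(walk_on_spheres(0.3, 0.2, lambda bx, by: bx, rng)
        for _ in range(n_walks)) / n_walks
```

Each walk is independent, which is what makes the estimator embarrassingly parallel: the walks can be distributed across workers and averaged afterwards.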

  2. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not suitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches only a few candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method. PMID:25873987

  3. Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not suitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches only a few candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.
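A bare-bones firefly optimizer of the kind underlying the IFA can be sketched as follows. The smooth quadratic objective stands in for the block-matching cost surface (its minimum plays the role of the true motion vector), and all algorithm parameters are illustrative, not the paper's settings:

```python
import numpy as np

def firefly_minimize(cost, n_fireflies=15, n_iter=60, bounds=(-7.0, 7.0),
                     beta0=1.0, gamma=0.01, alpha=0.5, alpha_decay=0.95, seed=1):
    """Minimal firefly algorithm for a 2-D cost (lower cost = brighter)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, 2))
    f = np.array([cost(v) for v in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                       # move i toward brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, 2)
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = cost(x[i])
        alpha *= alpha_decay                          # shrink the random step
    best = int(np.argmin(f))
    return x[best], f[best]

# Toy cost surface minimized at the "true" motion vector (3, -2)
mv, best_cost = firefly_minimize(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 2.0) ** 2)
```

Only a handful of candidate positions are evaluated per iteration, which is the source of the efficiency gain over exhaustively evaluating every displacement in the search window as the IFSA does.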

  4. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images and define a fitness function to measure relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples using simulated, experimental, and patient data collected with the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
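The core idea, building a polynomial interpolant of the fitness function and taking its maximizer as the permittivity estimate, can be sketched with an ordinary quadratic fit. The fitness samples below are made up, and the paper's adaptive stochastic-collocation sampling is not reproduced here:

```python
import numpy as np

# Hypothetical fitness-vs-permittivity samples; the underlying fitness peaks
# at relative permittivity eps_r = 9 (values are illustrative).
eps_samples = np.array([6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0])
fitness = -(eps_samples - 9.0) ** 2 + 0.03 * np.sin(eps_samples)

# Fit a quadratic interpolant and take its maximizer as the estimate.
a, b, c = np.polyfit(eps_samples, fitness, 2)   # a*x^2 + b*x + c
eps_best = -b / (2.0 * a)                       # vertex of the parabola
```

In practice the interpolant lets one locate the most likely permittivity from a handful of (expensive) image reconstructions rather than a dense sweep over the permittivity range.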

  5. Robust Characterization of Loss Rates

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2015-08-01

    Many physical implementations of qubits—including ion traps, optical lattices and linear optics—suffer from loss. A nonzero probability of irretrievably losing a qubit can be a substantial obstacle to fault-tolerant methods of processing quantum information, requiring new techniques to safeguard against loss that introduce an additional overhead that depends upon the loss rate. Here we present a scalable and platform-independent protocol for estimating the average loss rate (averaged over all input states) resulting from an arbitrary Markovian noise process, as well as an independent estimate of detector efficiency. Moreover, we show that our protocol gives an additional constraint on estimated parameters from randomized benchmarking that improves the reliability of the estimated error rate and provides a new indicator for non-Markovian signatures in the experimental data. We also derive a bound for the state-dependent loss rate in terms of the average loss rate.

  6. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis

    PubMed Central

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Because wind turbines are used over prolonged periods, they must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems leads to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed, which enables us to estimate the probability of the minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of the simulation results has been focused on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining a given daily or annual energy output depending on wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of the power produced at different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speeds. PMID:26167524

  7. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.

    PubMed

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Because wind turbines are subject to prolonged use, they must be characterized by high reliability. This can be achieved through rigorous design, appropriate simulation and testing, and proper construction. Reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed, which enables us to estimate the probability of minimum and maximum parameters. Triangular distributions were used in the simulation process. The analysis of the simulation results focused on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output as a function of wind speed. The experimental research consists of estimating the reliability and unreliability functions and the hazard rate of the helical vertical-axis wind turbine designed and patented for the climatic conditions of Romanian regions. The variation of power produced at different wind speeds, the Weibull distribution of wind probability, and the power generated were also determined. The analysis of the experimental results indicates that this type of wind turbine is efficient at low wind speeds.

  8. Determination of the Maximum Temperature in a Non-Uniform Hot Zone by Line-of-Sight Absorption Spectroscopy with a Single Diode Laser.

    PubMed

    Liger, Vladimir V; Mironenko, Vladimir R; Kuritsyn, Yurii A; Bolshov, Mikhail A

    2018-05-17

    A new algorithm is developed for estimating the maximum temperature in a non-uniform hot zone with a sensor based on absorption spectrometry with a diode laser. The algorithm is based on fitting the absorption spectrum of a test molecule in a non-uniform zone by a linear combination of two single-temperature spectra simulated using spectroscopic databases. The proposed algorithm allows one to better estimate the maximum temperature of a non-uniform zone and can be useful when only the maximum temperature, rather than a precise temperature profile, is of primary interest. The efficiency and specificity of the algorithm are demonstrated in numerical experiments and proven experimentally using an optical cell with two sections, in which temperatures and water vapor concentrations could be regulated independently. The best fit was found using a correlation technique. A distributed feedback (DFB) diode laser in the spectral range around 1.343 µm was used in the experiments. Because of the significant differences between the temperature dependences of the experimental and theoretical absorption spectra in the temperature range 300-1200 K, a database was constructed from experimentally detected single-temperature spectra. Using the developed algorithm, the maximum temperature in the two-section cell was estimated with an accuracy better than 30 K.
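
The fitting idea, representing a non-uniform-path spectrum as a linear combination of two single-temperature spectra and picking the hot-section temperature that maximizes the correlation with the measurement, can be sketched as follows. The toy three-line spectrum model, the Boltzmann-like line strengths, and all temperatures are assumptions for illustration, not the paper's database or molecule.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = np.linspace(0.0, 1.0, 400)          # normalized frequency axis

# Toy single-temperature spectrum: three lines whose Boltzmann-weighted
# strengths depend on temperature (lower-state "energies" in kelvin).
E = np.array([0.0, 1000.0, 2000.0])
pos = np.array([0.2, 0.5, 0.8])

def spectrum(T):
    lines = np.exp(-((nu[:, None] - pos) / 0.02) ** 2)   # Gaussian profiles
    return lines @ np.exp(-E / T)

# Synthetic "non-uniform path": a cold (400 K) and a hot (1100 K) section
measured = 0.6 * spectrum(400.0) + 0.4 * spectrum(1100.0)
measured += rng.normal(0.0, 0.002, nu.size)

# Fit by a linear combination of two single-temperature spectra,
# scanning the hot-section temperature (cold section fixed at 400 K)
best_corr, best_T = -1.0, None
for T_hot in range(600, 1401, 50):
    A = np.column_stack([spectrum(400.0), spectrum(float(T_hot))])
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
    corr = np.corrcoef(A @ coef, measured)[0, 1]
    if corr > best_corr:
        best_corr, best_T = corr, T_hot

print(f"estimated maximum temperature ~ {best_T} K (corr {best_corr:.5f})")
```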

  9. Numerical sensitivity analysis of a variational data assimilation procedure for cardiac conductivities

    NASA Astrophysics Data System (ADS)

    Barone, Alessandro; Fenton, Flavio; Veneziani, Alessandro

    2017-09-01

    An accurate estimation of cardiac conductivities is critical in computational electrocardiology, yet experimental results in the literature disagree significantly on the values and on the ratios between longitudinal and tangential coefficients. These are known to have a strong impact on the propagation of potential, particularly during defibrillation shocks. Data assimilation is a procedure for merging experimental data and numerical simulations in a rigorous way. In particular, variational data assimilation relies on the least-squares minimization of the misfit between simulations and experiments, constrained by the underlying mathematical model, which in this study is represented by the classical Bidomain system or its common simplification, the Monodomain problem. Operating on the conductivity tensors as control variables of the minimization, we obtain a parameter estimation procedure. As the theory of this approach currently provides only an existence proof and is not informative for practical experiments, we present an extensive numerical simulation campaign to assess practical critical issues, such as the size and location of the measurement sites needed, for in silico test cases of potential experimental and realistic settings. This will be finalized with a real validation of the variational data assimilation procedure. Results indicate the presence of lower and upper bounds for the number of sites which guarantee an accurate and minimally redundant parameter estimation, the location of the sites being generally noncritical for properly designed experiments. An effective combination of parameter estimation based on the Monodomain and Bidomain models is tested for the sake of computational efficiency. Parameter estimation based on the Monodomain equation potentially leads to the accurate computation of the transmembrane potential in realistic settings.
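
The variational principle described above, minimizing a least-squares misfit between simulated and measured values with the physical parameter as the control variable, can be illustrated with a deliberately simple stand-in for the forward model (an exponential decay in place of a Bidomain/Monodomain solve; the model, measurement sites, and parameter range are assumptions):

```python
import math

# Toy forward model: potential decays with distance, with the
# conductivity-like parameter sigma controlling the decay length.
def forward(sigma, sites):
    return [math.exp(-x / sigma) for x in sites]

sigma_true = 2.0
sites = [0.2 * k for k in range(1, 11)]        # measurement locations
data = forward(sigma_true, sites)              # noiseless "measurements"

# Variational estimate: minimize the least-squares misfit J(sigma)
def misfit(sigma):
    return sum((s - d) ** 2 for s, d in zip(forward(sigma, sites), data))

# Simple golden-section search on the interval [0.5, 5]
lo, hi = 0.5, 5.0
phi = (math.sqrt(5) - 1) / 2
for _ in range(60):
    a = hi - phi * (hi - lo)
    b = lo + phi * (hi - lo)
    if misfit(a) < misfit(b):
        hi = b
    else:
        lo = a
sigma_hat = (lo + hi) / 2
print(f"estimated sigma ~ {sigma_hat:.4f}")
```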

  10. Transport of biocolloids in water saturated columns packed with sand: Effect of grain size and pore water velocity

    NASA Astrophysics Data System (ADS)

    Syngouna, Vasiliki I.; Chrysikopoulos, Constantinos V.

    2012-03-01

    The main objective of this study was to evaluate the combined effects of grain size and pore water velocity on the transport in water-saturated porous media of three waterborne fecal indicator organisms (Escherichia coli, MS2, and ΦX174) in laboratory-scale columns packed with clean quartz sand. Three grain sizes and three pore water velocities were examined, and the attachment behavior of Escherichia coli, MS2, and ΦX174 onto quartz sand was evaluated. The mass recoveries of the biocolloids examined were highest for Escherichia coli and lowest for MS2. However, no obvious relationships between mass recoveries and water velocity or grain size could be established from the experimental results. The observed mean dispersivity values for each sand grain size were smaller for the bacteria than for the coliphages, but higher for MS2 than for ΦX174. The single-collector removal and collision efficiencies were quantified using classical colloid filtration theory. Furthermore, theoretical collision efficiencies were estimated, for E. coli only, by the Interaction-Force-Boundary-Layer and Maxwell approximations. Better agreement between the experimental and the Maxwell theoretical collision efficiencies was observed.
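
For reference, classical colloid filtration theory relates the normalized breakthrough concentration to a collision (attachment) efficiency alpha via alpha = -2 d_c ln(C/C0) / (3 (1 - theta) L eta0). A minimal sketch with illustrative column parameters (none taken from this study):

```python
import math

# Illustrative values (not from the study)
d_c = 3.0e-4       # collector (sand grain) diameter, m
theta = 0.40       # porosity
L = 0.30           # column length, m
eta0 = 5.0e-3      # single-collector contact efficiency (from a correlation)
C_over_C0 = 0.55   # normalized breakthrough concentration

# Classical colloid filtration theory: collision (attachment) efficiency
alpha = -(2.0 * d_c) / (3.0 * (1.0 - theta) * L * eta0) * math.log(C_over_C0)
print(f"collision efficiency alpha ~ {alpha:.3f}")
```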

  11. Transport of biocolloids in water saturated columns packed with sand: Effect of grain size and pore water velocity

    NASA Astrophysics Data System (ADS)

    Syngouna, Vasiliki I.; Chrysikopoulos, Constantinos V.

    2011-11-01

    The main objective of this study was to evaluate the combined effects of grain size and pore water velocity on the transport in water-saturated porous media of three waterborne fecal indicator organisms (Escherichia coli, MS2, and ΦX174) in laboratory-scale columns packed with clean quartz sand. Three grain sizes and three pore water velocities were examined, and the attachment behavior of Escherichia coli, MS2, and ΦX174 onto quartz sand was evaluated. The mass recoveries of the biocolloids examined were highest for Escherichia coli and lowest for MS2. However, no obvious relationships between mass recoveries and water velocity or grain size could be established from the experimental results. The observed mean dispersivity values for each sand grain size were smaller for the bacteria than for the coliphages, but higher for MS2 than for ΦX174. The single-collector removal and collision efficiencies were quantified using classical colloid filtration theory. Furthermore, theoretical collision efficiencies were estimated, for E. coli only, by the Interaction-Force-Boundary-Layer and Maxwell approximations. Better agreement between the experimental and the Maxwell theoretical collision efficiencies was observed.

  12. The Efficiency of Different Salts to Screen Charge Interactions in Proteins: A Hofmeister Effect?

    PubMed Central

    Perez-Jimenez, Raul; Godoy-Ruiz, Raquel; Ibarra-Molero, Beatriz; Sanchez-Ruiz, Jose M.

    2004-01-01

    Understanding the screening by salts of charge-charge interactions in proteins is important for at least two reasons: a), screening by intracellular salt concentration may modulate the stability and interactions of proteins in vivo; and b), the in vitro experimental estimation of the contributions from charge-charge interactions to molecular processes involving proteins is generally carried out on the basis of the salt effect on process energetics, under the assumption that these interactions are screened out by moderate salt concentrations. Here, we explore experimentally the extent to which the screening efficiency depends on the nature of the salt. To this end, we have carried out an energetic characterization of the effect of NaCl (a nondenaturing salt), guanidinium chloride (a denaturing salt), and guanidinium thiocyanate (a stronger denaturant) on the stability of the wild-type form and a T14K variant of Escherichia coli thioredoxin. Our results suggest that the efficiency of different salts to screen charge-charge interactions correlates with their denaturing strength and with the position of the constituent ions in the Hofmeister rankings. This result appears consistent with the plausible relation of the Hofmeister rankings with the extent of solute accumulation/exclusion from protein surfaces. PMID:15041679

  13. Efficient Numerical Methods for Nonlinear-Facilitated Transport and Exchange in a Blood-Tissue Exchange Unit

    PubMed Central

    Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.

    2010-01-01

    The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single-capillary BTEX model. A facilitated-transporter model is used to describe mass transfer between the plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange, and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
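
The MacCormack predictor-corrector scheme named above can be sketched on the simplest convection-dominated problem, linear advection with periodic boundaries (grid size, CFL number, and initial profile are arbitrary choices for illustration):

```python
import numpy as np

# MacCormack predictor-corrector for linear advection u_t + a u_x = 0
nx, a = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / a                        # CFL number 0.4
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)      # smooth initial profile

nsteps = int(round(0.25 / (a * dt)))     # advect by 0.25 on a periodic domain
lam = a * dt / dx
for _ in range(nsteps):
    # predictor: forward difference
    up = u - lam * (np.roll(u, -1) - u)
    # corrector: backward difference on the predicted field
    u = 0.5 * (u + up - lam * (up - np.roll(up, 1)))

# compare with the exact translated profile
err = np.max(np.abs(u - np.exp(-200.0 * (x - 0.55) ** 2)))
print(f"max error after advection: {err:.4f}")
```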

  14. Obtaining mathematical models for assessing efficiency of dust collectors using integrated system of analysis and data management STATISTICA Design of Experiments

    NASA Astrophysics Data System (ADS)

    Azarov, A. V.; Zhukova, N. S.; Kozlovtseva, E. Yu; Dobrinsky, D. R.

    2018-05-01

    The article considers obtaining mathematical models to assess the efficiency of dust collectors using the integrated analysis and data management system STATISTICA Design of Experiments. The procedure for obtaining mathematical models and processing the data is illustrated by laboratory studies on a mounted installation containing a dust collector in counter-swirling flows (CSF), using gypsum dust of various fractions. The experimental studies were planned in order to reduce the number of experiments and the cost of the experimental research. A second-order Box-Behnken design was used, which reduced the number of trials from 81 to 27. The procedure for statistical analysis of the Box-Behnken design data using standard tools of the integrated analysis and data management system STATISTICA Design of Experiments is considered. Results of the statistical data processing, with significance estimates for the coefficients and the adequacy of the mathematical models, are presented.
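
The reduction from 81 to 27 trials follows from the structure of the Box-Behnken design for four three-level factors: C(4,2) = 6 factor pairs at the four (+/-1, +/-1) combinations with the other factors held at the center, plus center points. A minimal generator in coded units (three center points assumed):

```python
from itertools import combinations

def box_behnken(k, center_points=3):
    """Box-Behnken design for k factors in coded units (-1, 0, +1)."""
    runs = []
    for i, j in combinations(range(k), 2):
        for si in (-1, 1):
            for sj in (-1, 1):
                row = [0] * k
                row[i], row[j] = si, sj
                runs.append(row)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

design = box_behnken(4)
print(f"full 3^4 factorial: 81 runs; Box-Behnken: {len(design)} runs")
```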

  15. Particle collection by a pilot plant venturi scrubber downstream from a pilot plant electrostatic precipitator

    NASA Astrophysics Data System (ADS)

    Sparks, L. E.; Ramsey, G. H.; Daniel, B. E.

    The results of pilot plant experiments on particulate collection by a venturi scrubber downstream from an electrostatic precipitator (ESP) are presented. The data, which cover a range of scrubber operating conditions and ESP efficiencies, show that particle collection by the venturi scrubber is not affected by the upstream ESP; i.e., for a given scrubber pressure drop, particle collection efficiency as a function of particle diameter is the same with the ESP on and off. The experimental results are in excellent agreement with theoretical predictions. Order-of-magnitude cost estimates indicate that particle collection by ESP-scrubber systems may be economically attractive when scrubbers must be used for SOx control.

  16. Market Model for Resource Allocation in Emerging Sensor Networks with Reinforcement Learning

    PubMed Central

    Zhang, Yue; Song, Bin; Zhang, Ying; Du, Xiaojiang; Guizani, Mohsen

    2016-01-01

    Emerging sensor networks (ESNs) are an inevitable trend in the development of the Internet of Things (IoT) and are intended to connect almost every intelligent device. It is therefore critical to study resource allocation in such an environment, due to efficiency concerns, especially when resources are limited. By viewing ESNs as multi-agent environments, we model them with an agent-based modelling (ABM) method and deal with resource allocation problems with market models, after describing users’ patterns. Reinforcement learning methods are introduced to estimate users’ patterns and verify the outcomes in our market models. Experimental results show the efficiency of our methods, which are also capable of guiding topology management. PMID:27916841

  17. Importance of the accuracy of experimental data in the nonlinear chromatographic determination of adsorption energy distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanley, B.J.; Guiochon, G.

    1994-11-01

    Adsorption energy distributions (AEDs) are calculated from the classical, fundamental integral equation of adsorption using adsorption isotherms and the expectation-maximization method of parameter estimation. The adsorption isotherms are calculated from nonlinear elution profiles obtained from gas chromatographic data using the characteristic points method of finite concentration chromatography. Porous layer open tubular capillary columns are used to support the adsorbent. The performance of these columns is compared to that of packed columns in terms of their ability to supply accurate isotherm data and AEDs. The effect of the finite column efficiency and the limited loading factor on the accuracy of the estimated energy distributions is presented. This accuracy decreases with decreasing efficiency, and approximately 5000 theoretical plates are needed when the loading factor, L_f, equals 0.56 for sampling of a unimodal Gaussian distribution. Increasing L_f further increases the contribution of finite efficiency to the AED and, if too high, causes a divergence at the low-energy endpoint. This occurs as the retention time approaches the holdup time. Data are presented for diethyl ether adsorption on porous silica and its C-18-bonded derivative. 36 refs., 8 figs., 2 tabs.

  18. Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.

    PubMed

    Maani, Ehsan; Katsaggelos, Aggelos K

    2009-09-01

    The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.

  19. Computing rank-revealing QR factorizations of dense matrices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science

    1998-06-01

    We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS/6000 and SGI R8000 platforms show that this approach performs up to three times faster than the less reliable QR factorization with column pivoting as currently implemented in LAPACK, and comes within 15% of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in many circumstances where numerical rank deficiency cannot be ruled out but currently is ignored because of the computational cost of dealing with it.
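
A compact sketch of the rank-revealing idea, pivoting on the largest remaining column norm and stopping when the pivot norm drops below a tolerance, here in a simple modified Gram-Schmidt form rather than the blocked Householder form the paper develops:

```python
import numpy as np

def numerical_rank_rrqr(A, tol=1e-8):
    """Column-pivoted QR (modified Gram-Schmidt variant) returning the
    estimated numerical rank. Illustrative sketch, not blocked/optimized."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    rank = 0
    for k in range(min(m, n)):
        # pivot: the remaining column with the largest residual norm
        norms = np.linalg.norm(A[:, k:], axis=0)
        p = int(np.argmax(norms))
        if norms[p] <= tol:
            break                      # numerical rank reached
        A[:, [k, k + p]] = A[:, [k + p, k]]
        # orthogonalize the trailing columns against the pivot direction
        q = A[:, k] / norms[p]
        A[:, k + 1:] -= np.outer(q, q @ A[:, k + 1:])
        rank += 1
    return rank

rng = np.random.default_rng(1)
B = rng.normal(size=(10, 3)) @ rng.normal(size=(3, 6))   # rank-3 matrix
print(f"estimated numerical rank: {numerical_rank_rrqr(B)}")
```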

  20. Performance estimation of a Venturi scrubber using a computational model for capturing dust particles with liquid spray.

    PubMed

    Pak, S I; Chang, K S

    2006-12-01

    A Venturi scrubber involves a dispersed three-phase flow of gas, dust, and liquid. Atomization of the liquid jet and interactions between the phases have a large effect on the performance of Venturi scrubbers. In this study, a computational model for the interacting three-phase flow in a Venturi scrubber has been developed to estimate pressure drop and collection efficiency. The Eulerian-Lagrangian method is used to solve the model numerically. The gas flow is solved in the Eulerian frame using the Navier-Stokes equations, and the motion of dust and liquid droplets, described by the Basset-Boussinesq-Oseen (B-B-O) equation, is solved in the Lagrangian frame. The model includes interaction between gas and droplets, atomization of the liquid jet, droplet deformation, breakup and collision of droplets, and capture of dust by droplets. A circular Pease-Anthony Venturi scrubber was simulated numerically with this new model. The numerical results were compared with earlier experimental data for pressure drop and collection efficiency, and showed good agreement.

  1. Exciton Transport Simulations in Phenyl Cored Thiophene Dendrimers

    NASA Astrophysics Data System (ADS)

    Kim, Kwiseon; Erkan Kose, Muhammet; Graf, Peter; Kopidakis, Nikos; Rumbles, Garry; Shaheen, Sean E.

    2009-03-01

    Phenyl-cored 3-arm and 4-arm thiophene dendrimers are promising materials for use in photovoltaic devices. It is important to understand the energy transfer mechanisms in these molecules to guide the synthesis of novel dendrimers with improved efficiency. A method is developed to estimate the exciton diffusion lengths for the dendrimers and similar chromophores in amorphous films. The approach exploits Fermi's Golden Rule to estimate the energy transfer rates for an ensemble of bimolecular complexes in random orientations. Using Poisson's equation to evaluate the Coulomb integrals led to efficient calculation of the excitonic couplings between the transition densities. Monte Carlo simulations revealed the dynamics of energy transport in the dendrimers. Experimental exciton diffusion lengths of the dendrimers range from 10 to 20 nm, increasing with the size of the dendrimer. Simulated diffusion lengths correlate well with experiments. The chemical structure of the chromophore, the shape of the transition densities, and the exciton lifetime are found to be the most important factors determining the exciton diffusion length in amorphous films.
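
A stripped-down kinetic Monte Carlo sketch of the diffusion-length estimate: an exciton hops on a cubic lattice with a fixed hop rate and decays with its lifetime, and the diffusion length is the RMS displacement at decay. The rates and hop distance are illustrative assumptions, not the computed Fermi's-Golden-Rule rates of the paper.

```python
import math
import random

random.seed(7)

# Assumed hop and decay rates (s^-1) and hop distance (nm), illustrative
k_hop, k_dec = 1e12, 1e9
a = 1.0

def walk():
    """One exciton: hop until decay, return final displacement (nm)."""
    x = y = z = 0.0
    while True:
        if random.random() < k_dec / (k_hop + k_dec):    # decay event
            return math.sqrt(x * x + y * y + z * z)
        dx, dy, dz = random.choice(
            [(a, 0, 0), (-a, 0, 0), (0, a, 0), (0, -a, 0), (0, 0, a), (0, 0, -a)])
        x += dx; y += dy; z += dz

n = 1000
mean_sq = sum(walk() ** 2 for _ in range(n)) / n
L_D = math.sqrt(mean_sq)                  # RMS displacement at decay
print(f"simulated exciton diffusion length ~ {L_D:.1f} nm")
```

With roughly k_hop/k_dec = 1000 hops per lifetime, the RMS displacement is near sqrt(1000) * a, about 32 nm for these assumed rates.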

  2. An efficient approach to ARMA modeling of biological systems with multiple inputs and delays

    NASA Technical Reports Server (NTRS)

    Perrott, M. H.; Cohen, R. J.

    1996-01-01

    This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time-invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and to experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here, a linearization of this mapping is derived, which leads to a simple procedure for approximating the confidence bounds.
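
For a fixed order and delay, the ARX parameters are a linear least-squares fit of the output on lagged outputs and delayed inputs. A minimal sketch on simulated data (a first-order system with a two-sample input delay; the paper's automatic model-order search is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a first-order ARX system with an input delay of 2 samples:
#   y[t] = a*y[t-1] + b*u[t-2] + e[t]
a_true, b_true, delay = 0.8, 0.5, 2
N = 500
u = rng.normal(size=N)
y = np.zeros(N)
for t in range(delay, N):
    y[t] = a_true * y[t - 1] + b_true * u[t - delay] + 0.01 * rng.normal()

# Least-squares ARX estimation: regress y[t] on y[t-1] and u[t-2]
Phi = np.column_stack([y[delay - 1:N - 1], u[:N - delay]])
theta, *_ = np.linalg.lstsq(Phi, y[delay:], rcond=None)
print(f"estimated a ~ {theta[0]:.3f}, b ~ {theta[1]:.3f}")
```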

  3. High-throughput sample adaptive offset hardware architecture for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin

    2018-03-01

    A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity of the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K × 4K video format at 132 fps.

  4. Experimental study and thermodynamic modeling for determining the effect of non-polar solvent (hexane)/polar solvent (methanol) ratio and moisture content on the lipid extraction efficiency from Chlorella vulgaris.

    PubMed

    Malekzadeh, Mohammad; Abedini Najafabadi, Hamed; Hakim, Maziar; Feilizadeh, Mehrzad; Vossoughi, Manouchehr; Rashtchian, Davood

    2016-02-01

    In this research, an organic solvent composed of hexane and methanol was used for lipid extraction from dry and wet biomass of Chlorella vulgaris. The results indicated that the lipid and fatty acid extraction yields decreased with increasing moisture content of the biomass. However, the maximum extraction efficiency was attained by applying an equivolume mixture of hexane and methanol for both dry and wet biomass. Thermodynamic modeling was employed to estimate the effect of the hexane/methanol ratio and moisture content on fatty acid extraction yield. The Hansen solubility parameter was used in adjusting the interaction parameters of the model, which reduced the number of tuning parameters from 6 to 2. The results indicated that the model can accurately estimate the fatty acid recovery, with an average absolute deviation percentage (AAD%) of 13.90% and 15.00% for the two cases of using 6 and 2 adjustable parameters, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Tunable φ Josephson junction ratchet.

    PubMed

    Menditto, R; Sickinger, H; Weides, M; Kohlstedt, H; Koelle, D; Kleiner, R; Goldobin, E

    2016-10-01

    We demonstrate experimentally the operation of a deterministic Josephson ratchet with tunable asymmetry. The ratchet is based on a φ Josephson junction with a ferromagnetic barrier operating in the underdamped regime. The system is also probed under the action of an additional dc current, which acts as a counterforce trying to stop the ratchet. Under these conditions the ratchet works against the counterforce, thus producing a nonzero output power. Finally, we estimate the efficiency of the φ Josephson junction ratchet.

  6. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators, which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used; i.e., only 'successful' underlying estimators (those with no failures in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables application to arbitrary arrays. A higher-order extension of our approach is also presented, in which the cumulant-based MUSIC estimator is exploited as the basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
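
A minimal sketch of the underlying MUSIC estimator that the bank resamples: eigendecompose the sample covariance, project candidate steering vectors onto the noise subspace, and read DOAs off the spectrum peaks. A uniform linear array is assumed, and all scenario parameters (element count, spacing, source angles, SNR) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, n_src = 8, 0.5, 2                  # sensors, spacing (wavelengths), sources
true_deg = np.array([-10.0, 20.0])       # assumed source directions
T = 200                                  # snapshots

def steer(theta_rad):
    """ULA steering vector for one candidate direction."""
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta_rad))

# Simulate snapshots: two uncorrelated sources plus white noise
A = np.column_stack([steer(t) for t in np.deg2rad(true_deg)])
S = rng.normal(size=(n_src, T)) + 1j * rng.normal(size=(n_src, T))
noise = 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
X = A @ S + noise

# MUSIC: noise-subspace projection of candidate steering vectors
R = X @ X.conj().T / T                   # sample covariance
w, V = np.linalg.eigh(R)                 # eigenvalues ascending
En = V[:, : M - n_src]                   # noise subspace
grid = np.linspace(-90.0, 90.0, 1801)
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(np.deg2rad(g))) ** 2
              for g in grid])

# local maxima of the MUSIC spectrum, strongest two first
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
peaks.sort(key=lambda i: P[i], reverse=True)
est = np.sort(grid[peaks[:2]])
print(f"estimated DOAs ~ {est[0]:.1f} deg, {est[1]:.1f} deg")
```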

  7. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively, and efficiently produce a very appropriate CF estimate for the SinMod method when the specified tagging parameters are unknown, on account of two key techniques: (1) the well-known mean-shift algorithm, which provides accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which further enhances the accuracy and robustness of the CF estimation. Several other available CF estimation algorithms are brought in for comparison. Several validation approaches that can work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
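
The first key technique, mean-shift mode seeking, is easy to sketch in one dimension: repeatedly replace the current estimate by the mean of the samples inside a window around it. The flat kernel, bandwidth, starting point, and synthetic data below are illustrative assumptions, not the paper's spectral data.

```python
import random

random.seed(5)

# Samples concentrated around a dominant "center frequency" plus clutter
data = ([random.gauss(0.30, 0.02) for _ in range(300)] +
        [random.uniform(0.0, 1.0) for _ in range(100)])

def mean_shift_1d(points, start, bandwidth=0.05, iters=50):
    """Mean-shift mode seeking with a flat kernel (illustrative)."""
    x = start
    for _ in range(iters):
        window = [p for p in points if abs(p - x) <= bandwidth]
        if not window:
            break
        x_new = sum(window) / len(window)   # shift to the local mean
        if abs(x_new - x) < 1e-9:
            break
        x = x_new
    return x

cf = mean_shift_1d(data, start=0.35)
print(f"estimated center frequency ~ {cf:.3f}")
```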

  8. Photoionization pathways and thresholds in generation of Lyman-α radiation by resonant four-wave mixing in Kr-Ar mixture

    NASA Astrophysics Data System (ADS)

    Louchev, Oleg A.; Saito, Norihito; Oishi, Yu; Miyazaki, Koji; Okamura, Kotaro; Nakamura, Jumpei; Iwasaki, Masahiko; Wada, Satoshi

    2016-09-01

    We develop a set of analytical approximations for estimating the combined effect of the various photoionization processes involved in the resonant four-wave-mixing generation of ns-pulsed Lyman-α (L-α) radiation using 212.556 nm and 820-845 nm laser radiation pulses in a Kr-Ar mixture: (i) multi-photon ionization; (ii) step-wise (2+1)-photon ionization via resonant 2-photon excitation of Kr followed by 1-photon ionization; and (iii) laser-induced avalanche ionization produced by the generated free electrons. The developed expressions, validated by order-of-magnitude estimates and available experimental data, allow us to identify the operating region at high input laser intensities that avoids the onset of a full-scale discharge, loss of efficiency, and inhibition of the generated L-α radiation. The calculations reveal an opportunity for scaling up the output energy of the experimentally generated pulsed L-α radiation without significant enhancement of photoionization.

  9. High-efficiency silicon solar-cell design and practical barriers

    NASA Technical Reports Server (NTRS)

    Mokashi, A.

    1985-01-01

    A numerical evaluation technique is used to study the impact of practical barriers, such as heavy-doping effects (Auger recombination, band-gap narrowing), surface recombination, shadowing losses, and minority-carrier lifetime (Tau), on high-efficiency silicon solar cell performance. Assuming a high Tau of 1 ms, the efficiency of the hypothetical silicon solar cell is estimated to be around 29%. This is comparable with the (detailed-balance-limit) maximum efficiency of a p-n junction solar cell of 30%. The value of Tau is varied from 1 second to 20 microseconds. Heavy-doping effects and realizable values of surface recombination velocities and shadowing are then considered in succession, and their influence on cell efficiency is evaluated and quantified. These practical barriers reduce the cell efficiency from the maximum value of 29% to the experimentally achieved value of about 19%. Improvement in the open-circuit voltage Voc is required to achieve cell efficiencies greater than 20%. An increased value of Tau reduces the reverse saturation current and hence improves Voc. Control of surface recombination losses becomes critical at higher Voc. Substantial improvement in Tau and considerable reduction in surface recombination velocities are essential to achieve cell efficiencies greater than 20%.

  10. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
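
The encode/normalize/decode loop described above can be sketched as follows; the RMS gain estimator and the tiny codebook are illustrative stand-ins, not the paper's optimized estimators or design technique:

```python
import numpy as np

# Toy gain-normalized codebook of unit-RMS codevectors (assumed, for
# illustration; the paper derives the codebook with a dedicated design).
codebook = np.array([[1.0, 1.0, -1.0, -1.0],
                     [1.0, -1.0, 1.0, -1.0],
                     [1.0, 1.0, 1.0, 1.0],
                     [1.0, -1.0, -1.0, 1.0]])

def encode(x, codebook, eps=1e-12):
    """Forward gain adaptation: estimate the gain (RMS here), normalize the
    input, then pick the nearest gain-normalized codevector."""
    gain = np.sqrt(np.mean(x ** 2)) + eps
    normalized = x / gain                      # reduced dynamic range
    errors = np.sum((codebook - normalized) ** 2, axis=1)
    return int(np.argmin(errors)), gain

def decode(index, gain, codebook):
    """Receiver: scale the selected codevector by the transmitted gain."""
    return gain * codebook[index]

x = 7.5 * np.array([1.0, 0.9, -1.1, -1.0])    # high-level input vector
idx, g = encode(x, codebook)
x_hat = decode(idx, g, codebook)
```

Because the codebook only has to cover gain-normalized shapes, the same small codebook serves both quiet and loud input vectors.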

  11. A Statistical Guide to the Design of Deep Mutational Scanning Experiments.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia

    2016-09-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.
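
As a toy illustration of estimating a selection coefficient from time-sampled counts in a bulk competition, a simple log-linear regression (not the authors' full statistical model, which also treats sampling noise and measurement error):

```python
import numpy as np

def estimate_selection_coefficient(mutant_counts, wildtype_counts, times):
    """Slope of log(mutant/wildtype) against time approximates the selection
    coefficient s in a bulk competition (simple log-linear estimator)."""
    log_ratio = np.log(np.asarray(mutant_counts, dtype=float) /
                       np.asarray(wildtype_counts, dtype=float))
    slope, _intercept = np.polyfit(times, log_ratio, 1)
    return slope

# Noise-free synthetic competition with true s = -0.1 per unit time.
times = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
wildtype = np.full_like(times, 1.0e5)
mutant = 1.0e4 * np.exp(-0.1 * times)
s_hat = estimate_selection_coefficient(mutant, wildtype, times)
```

In this regression view, clustering sampled time points at the beginning and end of the experiment maximizes the spread of the regressor and hence the precision of the slope, consistent with the abstract's recommendation.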

  12. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either by conducting experiments or from information available in the literature. However, it is not easy to obtain all the appropriate parameters for the models in this way, and important information, such as diffusion coefficients and rate constants of ions, has in some cases not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate the parameters of physics-based redox battery models is proposed. Furthermore, this paper demonstrates that the proposed approach can be used to study and analyze capacity loss/fade, kinetics, and transport phenomena of the RFB system.
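
The core of such parameter estimation is fitting a model's predictions to measured data. A generic stand-in for that step, recovering a first-order rate constant from synthetic decay data by least squares (purely illustrative; the paper fits full physics-based battery models):

```python
import numpy as np

def fit_rate_constant(times, concentrations):
    """Estimate a first-order rate constant k from C(t) = C0 * exp(-k t) by
    linear regression on log-concentration: a generic stand-in for the
    model-based parameter estimation described in the abstract."""
    slope, _intercept = np.polyfit(times, np.log(concentrations), 1)
    return -slope

times = np.linspace(0.0, 100.0, 11)       # s
conc = 1.0 * np.exp(-0.02 * times)        # synthetic data with k = 0.02 1/s
k_hat = fit_rate_constant(times, conc)
```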

  13. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE PAGES

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.; ...

    2018-03-27

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either by conducting experiments or from information available in the literature. However, it is not easy to obtain all the appropriate parameters for the models in this way, and important information, such as diffusion coefficients and rate constants of ions, has in some cases not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate the parameters of physics-based redox battery models is proposed. Furthermore, this paper demonstrates that the proposed approach can be used to study and analyze capacity loss/fade, kinetics, and transport phenomena of the RFB system.

  14. Energy awareness for supercapacitors using Kalman filter state-of-charge tracking

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Hassanalieragh, Moeen; Sharma, Gaurav; Soyata, Tolga

    2015-11-01

    Among energy buffering alternatives, supercapacitors can provide unmatched efficiency and durability. Additionally, the direct relation between a supercapacitor's terminal voltage and stored energy can improve energy awareness. However, a simple capacitive approximation cannot adequately represent the stored energy in a supercapacitor. It is shown that the three branch equivalent circuit model provides more accurate energy awareness. This equivalent circuit uses three capacitances and associated resistances to represent the supercapacitor's internal SOC (state-of-charge). However, the SOC cannot be determined from one observation of the terminal voltage, and must be tracked over time using inexact measurements. We present: 1) a Kalman filtering solution for tracking the SOC; 2) an on-line system identification procedure to efficiently estimate the equivalent circuit's parameters; and 3) experimental validation of both parameter estimation and SOC tracking for 5 F, 10 F, 50 F, and 350 F supercapacitors. Validation is done within the operating range of a solar powered application and the associated power variability due to energy harvesting. The proposed techniques are benchmarked against the simple capacitive model and prior parameter estimation techniques, and provide a 67% reduction in root-mean-square error for predicting usable buffered energy.
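
The SOC-tracking idea can be sketched with a single-branch RC model and a scalar Kalman filter; the paper's three-branch model adds two more states, and all circuit values below are assumed examples:

```python
import numpy as np

def kalman_track(measurements, dt, r_series, c_main, i_load,
                 process_var=1e-6, meas_var=1e-4):
    """Scalar Kalman filter tracking a supercapacitor's internal capacitor
    voltage (an SOC proxy) from noisy terminal-voltage readings.
    Single-branch RC model for illustration only."""
    v_est, p_est = measurements[0], 1.0           # initial state and covariance
    estimates = []
    for z in measurements:
        # Predict: constant-current discharge of the main capacitance.
        v_pred = v_est - i_load * dt / c_main
        p_pred = p_est + process_var
        # Update: predicted terminal voltage is internal voltage minus I*R.
        innovation = z - (v_pred - i_load * r_series)
        gain = p_pred / (p_pred + meas_var)
        v_est = v_pred + gain * innovation
        p_est = (1.0 - gain) * p_pred
        estimates.append(v_est)
    return np.array(estimates)

# Synthetic discharge at 0.5 A (assumed component values).
rng = np.random.default_rng(1)
dt, c_main, r_series, i_load = 1.0, 50.0, 0.05, 0.5   # s, F, ohm, A
steps = np.arange(200)
true_internal = 2.5 - i_load * dt * steps / c_main     # linear voltage droop
measured = true_internal - i_load * r_series + rng.normal(0.0, 0.01, steps.size)
tracked = kalman_track(measured, dt, r_series, c_main, i_load)
```

The filter separates the internal capacitor voltage from the I*R drop at the terminals, which a direct terminal-voltage reading cannot do during load transients.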

  15. Real-time yield estimation based on deep learning

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Sheppard, Clay

    2017-05-01

    Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation based on manual counting of fruits is a very time-consuming and expensive process, and it is not practical for big fields. Robotic systems, including Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields, but efficient analysis of those data is still a challenging task. Computer vision approaches currently face several challenges in the automatic counting of fruits or flowers, including occlusion caused by leaves, branches, or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results in comparison to the state of the art show the effectiveness of our algorithm.

  16. Development of neutron measurement in high gamma field using new nuclear emulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawarabayashi, J.; Ishihara, K.; Takagi, K.

    2011-07-01

    To precisely measure the neutron emissions from a spent fuel assembly of a fast breeder reactor, we formed nuclear emulsions based on a non-sensitized Oscillation Project with Emulsion tracking Apparatus (OPERA) film with AgBr grain sizes of 60, 90, and 160 nm. The efficiency of the new emulsion for {sup 252}Cf neutron detection was calculated to be 0.7 x 10{sup -4}, which corresponded to an energy range from 0.3 to 2 MeV and was consistent with a preliminary estimate based on experimental results. The sensitivity of the new emulsion was also experimentally estimated by irradiating with 565 keV and 14 MeV neutrons. The emulsion with an AgBr grain size of 60 nm had the lowest sensitivity among the three emulsions but was still sensitive enough to detect protons. Furthermore, the experimental data suggested that there was a threshold linear energy transfer of 15 keV/{mu}m for the new emulsion, below which no silver clusters developed. Further development of a nuclear emulsion with an AgBr grain size of a few tens of nanometers will be the next stage of the present study. (authors)

  17. Increased Uptake of Chelated Copper Ions by Lolium perenne Attributed to Amplified Membrane and Endodermal Damage

    PubMed Central

    Johnson, Anthea; Singhal, Naresh

    2015-01-01

    The contributions of mechanisms by which chelators influence metal translocation to plant shoot tissues are analyzed using a combination of numerical modelling and physical experiments. The model distinguishes between apoplastic and symplastic pathways of water and solute movement. It also includes the barrier effects of the endodermis and plasma membrane. Simulations are used to assess transport pathways for free and chelated metals, identifying mechanisms involved in chelate-enhanced phytoextraction. Hypothesized transport mechanisms and parameters specific to amendment treatments are estimated, with simulated results compared to experimental data. Parameter values for each amendment treatment are estimated based on literature and experimental values, and used for model calibration and simulation of amendment influences on solute transport pathways and mechanisms. Modeling indicates that chelation alters the pathways for Cu transport. For free ions, Cu transport to leaf tissue can be described using purely apoplastic or transcellular pathways. For strong chelators (ethylenediaminetetraacetic acid (EDTA) and diethylenetriaminepentaacetic acid (DTPA)), transport by the purely apoplastic pathway is insufficient to represent measured Cu transport to leaf tissue. Consistent with experimental observations, increased membrane permeability is required for simulating translocation in EDTA and DTPA treatments. Increasing the membrane permeability is key to enhancing phytoextraction efficiency. PMID:26512647

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbanis, E.; Martin, A.; Rosset, D.

    Imperfections in experimental measurement schemes can lead to falsely identifying, or overestimating, entanglement in a quantum system. A recent solution is to define schemes that are robust to measurement imperfections: the measurement-device-independent entanglement witness (MDI-EW). This approach can be adapted to witness all entangled qubit states for a wide range of physical systems and does not depend on detection efficiencies or classical communication between devices. In this paper, we extend the theory to remove the necessity of prior knowledge about the two-qubit states to be witnessed. Moreover, we tested this model via a novel experimental implementation of MDI-EW that significantly reduces the experimental complexity. Finally, by applying it to a bipartite Werner state, we demonstrate the robustness of this approach against noise by witnessing entanglement down to an entangled-state fraction close to 0.4.
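
The closing claim can be put in context numerically: a two-qubit Werner state with singlet fraction p is entangled exactly for p > 1/3 (Peres-Horodecki criterion), so witnessing entanglement down to a fraction near 0.4 approaches that bound. A small sketch:

```python
import numpy as np

def werner_state(p):
    """Two-qubit Werner state: p |psi-><psi-| + (1 - p) I/4."""
    psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
    return p * np.outer(psi_minus, psi_minus) + (1.0 - p) * np.eye(4) / 4.0

def is_entangled_ppt(rho):
    """Peres-Horodecki test: a two-qubit state is entangled iff its partial
    transpose (here on the second qubit) has a negative eigenvalue."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return bool(np.min(np.linalg.eigvalsh(rho_pt)) < -1e-12)
```

For the Werner state the minimum partial-transpose eigenvalue is (1 - 3p)/4, which crosses zero at p = 1/3.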

  19. Modeling non-harmonic behavior of materials from experimental inelastic neutron scattering and thermal expansion measurements

    DOE PAGES

    Bansal, Dipanshu; Aref, Amjad; Dargush, Gary; ...

    2016-07-20

    Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials, which are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally-derived quantities are valuable to benchmark first-principles theoretical predictions of harmonic and non-harmonic thermal behaviors using perturbation theory, ab initio molecular-dynamics, or Monte-Carlo simulations. In this study, we illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperature. Our results agree well with previous data in the literature and provide an efficient approach to estimate anharmonic effects in materials.
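
The harmonic contribution mentioned above is a direct integral over the phonon density of states. A sketch using a Debye-model DOS as an assumed stand-in for a DOS measured by inelastic neutron scattering:

```python
import numpy as np

def harmonic_entropy(omega, dos, temperature):
    """Harmonic vibrational entropy per atom, in units of k_B, from a phonon
    DOS g(omega) normalized to unity:
        S / k_B = 3 * Int g(w) [ (n+1) ln(n+1) - n ln(n) ] dw,
    with n(w, T) the Bose-Einstein occupation."""
    hbar = 1.0545718e-34       # J s
    k_b = 1.380649e-23         # J/K
    x = hbar * omega / (k_b * temperature)
    n = 1.0 / np.expm1(x)
    integrand = dos * ((n + 1.0) * np.log1p(n) - n * np.log(n))
    # Trapezoidal integration over the DOS grid.
    return 3.0 * float(np.sum(0.5 * (integrand[1:] + integrand[:-1])
                              * np.diff(omega)))

# Debye-model DOS, g(w) = 3 w^2 / w_D^3 for w <= w_D (illustrative only).
omega_debye = 5.0e13                            # rad/s (~382 K Debye temperature)
omega = np.linspace(1e9, omega_debye, 5000)
dos = 3.0 * omega ** 2 / omega_debye ** 3
s_300 = harmonic_entropy(omega, dos, 300.0)
s_600 = harmonic_entropy(omega, dos, 600.0)
```

Dilational and anharmonic contributions are then corrections on top of this harmonic baseline, evaluated from the thermal expansion data.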

  20. Modeling non-harmonic behavior of materials from experimental inelastic neutron scattering and thermal expansion measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bansal, Dipanshu; Aref, Amjad; Dargush, Gary

    Based on thermodynamic principles, we derive expressions quantifying the non-harmonic vibrational behavior of materials, which are rigorous yet easily evaluated from experimentally available data for the thermal expansion coefficient and the phonon density of states. These experimentally-derived quantities are valuable to benchmark first-principles theoretical predictions of harmonic and non-harmonic thermal behaviors using perturbation theory, ab initio molecular-dynamics, or Monte-Carlo simulations. In this study, we illustrate this analysis by computing the harmonic, dilational, and anharmonic contributions to the entropy, internal energy, and free energy of elemental aluminum and the ordered compound FeSi over a wide range of temperature. Our results agree well with previous data in the literature and provide an efficient approach to estimate anharmonic effects in materials.

  1. An improved hybrid of particle swarm optimization and the gravitational search algorithm to produce a kinetic parameter estimation of aspartate biochemical pathways.

    PubMed

    Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal

    2017-12-01

    Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. 
Nevertheless, the proposed algorithm is only expected to work well in small scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
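
A minimal hybrid of the two update rules, a PSO social pull toward the global best combined with a GSA-style gravitational acceleration toward better-performing agents, can be sketched as below. This is a generic illustration on a sphere test function, not the authors' exact IPSOGSA:

```python
import numpy as np

def hybrid_psogsa(objective, bounds, n_agents=20, n_iters=200, seed=0):
    """Generic PSO/GSA hybrid sketch: velocities mix a PSO global-best term
    with a GSA gravitational term whose masses grow with fitness."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    dim = low.size
    x = rng.uniform(low, high, (n_agents, dim))
    v = np.zeros((n_agents, dim))
    for it in range(n_iters):
        f = np.apply_along_axis(objective, 1, x)
        best, worst = f.min(), f.max()
        gbest = x[f.argmin()].copy()
        m = (worst - f) / (worst - best + 1e-12)   # better fitness -> larger mass
        m = m / (m.sum() + 1e-12)
        grav = 1.0 * np.exp(-5.0 * it / n_iters)   # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            diff = x - x[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = np.sum((grav * m / dist)[:, None] * diff
                            * rng.random((n_agents, 1)), axis=0)
        inertia = 0.9 - 0.5 * it / n_iters
        v = (inertia * v
             + 1.5 * rng.random((n_agents, dim)) * (gbest - x)  # PSO social term
             + acc)                                             # GSA term
        x = np.clip(x + v, low, high)
    f = np.apply_along_axis(objective, 1, x)
    return x[f.argmin()], f.min()

sphere = lambda p: float(np.sum(p ** 2))
best_x, best_f = hybrid_psogsa(sphere, (np.full(3, -5.0), np.full(3, 5.0)))
```

The gravitational term supplies broad exploration early on and decays over iterations, leaving the PSO term to drive exploitation, which is the general intuition behind such hybrids.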

  2. Biomass thermochemical gasification: Experimental studies and modeling

    NASA Astrophysics Data System (ADS)

    Kumar, Ajay

    The overall goals of this research were to study biomass thermochemical gasification using experimental and modeling techniques, and to evaluate the cost of industrial gas production and combined heat and power generation. This dissertation includes an extensive review of progress in biomass thermochemical gasification. Product gases from biomass gasification can be converted to biopower, biofuels, and chemicals; the study summarizes the technical challenges in gasification and downstream processing of the product gas that stand in the way of viable commercial applications. Corn stover and dried distillers grains with solubles (DDGS), a non-fermentable byproduct of ethanol production, were used as the biomass feedstocks. One objective was to determine selected physical and chemical properties of corn stover related to thermochemical conversion. The parameters of the reaction kinetics for weight loss were obtained. The next objective was to investigate the effects of temperature, steam-to-biomass ratio, and equivalence ratio on gas composition and efficiencies. DDGS gasification was performed on a lab-scale fluidized-bed gasifier with steam and air as the fluidizing and oxidizing agents. Increasing the temperature resulted in increases in hydrogen and methane contents and efficiencies. A model was developed to simulate the performance of a lab-scale gasifier using Aspen Plus(TM) software. Mass balance, energy balance, and minimization of Gibbs free energy were applied to determine the product gas composition. The final objective was to optimize the process by maximizing the net energy efficiency, and to estimate the cost of industrial gas and combined heat and power (CHP) at a biomass feedrate of 2000 kg/h. The selling price of gas was estimated to be 11.49/GJ for corn stover and 13.08/GJ for DDGS. For CHP generation, the electrical and net efficiencies were 37% and 86%, respectively, for corn stover, and 34% and 78%, respectively, for DDGS. For corn stover, the selling price of electricity was 0.1351/kWh; for DDGS, it was 0.1287/kWh.

  3. Pixel-by-Pixel Estimation of Scene Motion in Video

    NASA Astrophysics Data System (ADS)

    Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.

    2017-05-01

    The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame; these vectors form a shift-vector field. As the estimated parameters of the vectors, the paper studies their projections and polar parameters. It considers two methods for estimating the shift-vector field. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left; subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses the correlation between rows to increase processing efficiency. It processes rows one after the other, changing direction after each row, and uses the obtained values to form the resulting estimate. The paper studies two criteria for forming this estimate: minimum of the gradient estimate and maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object, and of estimating a moving object's trajectory using the shift-vector field.
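
The recurrent stochastic-gradient idea can be illustrated with a one-parameter toy: estimating a constant inter-frame shift of a 1-D signal by per-sample gradient steps. The paper estimates a full per-pixel vector field; this sketch only shows the recursive update itself:

```python
import numpy as np

def estimate_shift_sgd(frame0, frame1, mu=0.5, n_passes=4):
    """Recurrent stochastic-gradient shift estimation: at each sample, step
    the shift estimate down the gradient of the local squared frame
    difference (single-parameter toy analogue of the per-pixel field)."""
    def sample(signal, pos):
        # Linear interpolation with edge clamping.
        pos = float(np.clip(pos, 0.0, signal.size - 1.001))
        i = int(pos)
        frac = pos - i
        return (1.0 - frac) * signal[i] + frac * signal[i + 1]

    shift = 0.0
    for _ in range(n_passes):
        for x in range(8, frame0.size - 8):          # skip frame borders
            err = sample(frame1, x + shift) - frame0[x]
            deriv = (sample(frame1, x + shift + 0.5)
                     - sample(frame1, x + shift - 0.5))
            shift -= mu * err * deriv                # one SGD step per sample
    return shift

xs = np.arange(256, dtype=float)
frame0 = np.sin(2.0 * np.pi * xs / 16.0)
frame1 = np.sin(2.0 * np.pi * (xs - 3.0) / 16.0)     # scene shifted by 3 samples
shift_hat = estimate_shift_sgd(frame0, frame1)
```

The inertia the abstract mentions is visible here too: the estimate trails the data early in each pass, which is what the bidirectional row processing in the paper is designed to compensate.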

  4. Dynamics of in vivo power output and efficiency of Nasonia asynchronous flight muscle.

    PubMed

    Lehmann, Fritz-Olaf; Heymann, Nicole

    2006-06-25

    By simultaneously measuring aerodynamic performance, wing kinematics, and metabolic activity, we have estimated the in vivo limits of mechanical power production and efficiency of the asynchronous flight muscle (IFM) in three species of ectoparasitoid wasps of the genus Nasonia (N. giraulti, N. longicornis, and N. vitripennis). The 0.6 mg animals were flown under tethered flight conditions in a flight simulator that allowed modulation of power production by employing an open-loop visual stimulation technique. At maximum locomotor capacity, the flight muscles of Nasonia are capable of sustaining 72.2 +/- 18.3 W kg(-1) muscle mechanical power at a chemo-mechanical conversion efficiency of approximately 9.8 +/- 0.9%. Within the working range of the locomotor system, the profile power requirement for flight dominates the induced power requirement, suggesting that the cost of overcoming wing drag places the primary limit on overall flight performance. Since inertial power is only approximately 25% of the sum of the induced and profile power requirements, Nasonia spp. may not benefit from elastic energy storage during wing deceleration phases. A comparison between wing-size-polymorphic males revealed that wing size reduction is accompanied by a decrease in total flight muscle volume, muscle mass-specific mechanical power production, and total flight efficiency. In animals with small wings, maximum total flight efficiency is below 0.5%. The aerodynamic and power estimates reported here for Nasonia are comparable to values reported previously for the fruit fly Drosophila flying under similar experimental conditions, while the muscle efficiency of the tiny wasp is at the lower end of values published for various other insects.

  5. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

    PubMed Central

    Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael

    2014-01-01

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage, and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the most computationally demanding aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. PMID:25068489
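
For intuition about the quantity being estimated, here is a plug-in transfer entropy estimator for binary sequences with history length 1. The paper's estimator is nearest-neighbor based and ensemble-aware, so this is only a conceptual sketch of the definition TE(X -> Y) = I(y_{t+1}; x_t | y_t):

```python
import numpy as np

def transfer_entropy_binary(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for binary sequences,
    using history length 1 and simple frequency estimates."""
    triples = np.stack([y[1:], y[:-1], x[:-1]], axis=1)  # (y_{t+1}, y_t, x_t)
    te = 0.0
    for y1 in (0, 1):
        for y0 in (0, 1):
            for x0 in (0, 1):
                p_all = np.mean((triples == (y1, y0, x0)).all(axis=1))
                if p_all == 0:
                    continue
                p_past = np.mean((triples[:, 1:] == (y0, x0)).all(axis=1))
                p_y0 = np.mean(triples[:, 1] == y0)
                p_y1y0 = np.mean((triples[:, :2] == (y1, y0)).all(axis=1))
                te += p_all * np.log2(p_all * p_y0 / (p_past * p_y1y0))
    return te

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 20000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]          # y copies x with a one-step lag: X drives Y
te_xy = transfer_entropy_binary(x, y)
te_yx = transfer_entropy_binary(y, x)
```

Since y simply copies x with a lag, TE(X -> Y) approaches 1 bit while TE(Y -> X) stays near zero, showing the directionality that transfer entropy captures.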

  6. Effect of air flow on tubular solar still efficiency

    PubMed Central

    2013-01-01

    Background An experimental work was reported to estimate the increase in distillate yield for a compound parabolic concentrator-concentric tubular solar still (CPC-CTSS). The CPC dramatically increases the heating of the saline water. A novel idea was proposed to study the characteristic features of the CPC for desalination to produce a large quantity of distillate. A rectangular copper basin of dimensions 2 m × 0.025 m × 0.02 m was fabricated and placed at the focus of the CPC. This basin is covered by two cylindrical glass tubes of length 2 m with two different diameters of 0.02 m and 0.03 m. Findings The experimental study was operated in two modes: without and with air flow between the inner and outer tubes. The rate of air flow was fixed throughout the experiment at 4.5 m/s. Conclusions On the basis of the performance results, the water collection rate was 1445 ml/day without air flow and 2020 ml/day with air flow, and the efficiencies were 16.2% and 18.9%, respectively. PMID:23587020

  7. Effect of air flow on tubular solar still efficiency.

    PubMed

    Thirugnanasambantham, Arunkumar; Rajan, Jayaprakash; Ahsan, Amimul; Kandasamy, Vinothkumar

    2013-01-01

    An experimental work was reported to estimate the increase in distillate yield for a compound parabolic concentrator-concentric tubular solar still (CPC-CTSS). The CPC dramatically increases the heating of the saline water. A novel idea was proposed to study the characteristic features of the CPC for desalination to produce a large quantity of distillate. A rectangular copper basin of dimensions 2 m × 0.025 m × 0.02 m was fabricated and placed at the focus of the CPC. This basin is covered by two cylindrical glass tubes of length 2 m with two different diameters of 0.02 m and 0.03 m. The experimental study was operated in two modes: without and with air flow between the inner and outer tubes. The rate of air flow was fixed throughout the experiment at 4.5 m/s. On the basis of the performance results, the water collection rate was 1445 ml/day without air flow and 2020 ml/day with air flow, and the efficiencies were 16.2% and 18.9%, respectively.
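
The reported efficiencies follow the usual solar-still definition: latent heat carried off by the distillate over the solar energy incident on the aperture. The aperture area, insolation, and collection window below are assumed for illustration, since the abstract does not give them:

```python
def still_efficiency(distillate_kg, latent_heat_j_per_kg,
                     insolation_w_m2, aperture_m2, seconds):
    """Thermal efficiency of a solar still: evaporation energy in the
    collected distillate divided by incident solar energy."""
    return ((distillate_kg * latent_heat_j_per_kg)
            / (insolation_w_m2 * aperture_m2 * seconds))

# Illustrative numbers only: 2.02 kg/day distillate, 2.26 MJ/kg latent heat,
# 700 W/m^2 average insolation on a 1 m^2 aperture over a 10-h window.
eta = still_efficiency(2.02, 2.26e6, 700.0, 1.0, 10 * 3600.0)
```

With these assumed inputs the efficiency lands near 18%, i.e. in the range the study reports, which shows how the percentage figures relate to the ml/day yields.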

  8. Atmospheric chemistry of carboxylic acids: microbial implication versus photochemistry

    NASA Astrophysics Data System (ADS)

    Vaïtilingom, M.; Charbouillot, T.; Deguillaume, L.; Maisonobe, R.; Parazols, M.; Amato, P.; Sancelme, M.; Delort, A.-M.

    2011-08-01

    The objective of this work was to compare experimentally the contributions of photochemistry and microbial activity to the degradation of carboxylic acids present in cloud water. For this, we selected 17 strains representative of the microflora existing in real clouds and worked with two distinct artificial cloud media that reproduce marine and continental cloud chemical compositions. Photodegradation experiments with hydrogen peroxide (H2O2) as a source of hydroxyl radicals were performed under the same microcosm conditions using two irradiation systems. Biodegradation and photodegradation rates of acetate, formate, oxalate, and succinate were measured in both media at 5 °C and 17 °C and were shown to be of the same order of magnitude (around 10^(-10)-10^(-11) M s^(-1)). The chemical composition (marine or continental origin) had little influence on the photodegradation and biodegradation rates, while the temperature shift from 17 °C to 5 °C decreased biodegradation rates by a factor of 2 to 5. In order to test other photochemical scenarios, theoretical photodegradation rates were calculated considering hydroxyl (OH) radical concentrations in cloud water estimated by cloud chemistry modelling studies and the available reaction rate constants of carboxylic compounds with both hydroxyl and nitrate radicals. Considering a high OH concentration ([OH] = 1 × 10^(-12) M) led to no significant contribution of microbial activity to the destruction of carboxylic acids. On the contrary, for a lower OH concentration (at noon, [OH] = 1 × 10^(-14) M), microorganisms could efficiently compete with photochemistry, with contributions similar to those estimated by our experimental approach. Combining these two approaches (experimental and theoretical), our results led to the following conclusions: oxalate was only photodegraded; the photodegradation of formate was usually more efficient than its biodegradation; and the biodegradation of acetate and succinate seemed to exceed their photodegradation.
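
The theoretical photodegradation rates discussed above are simple second-order kinetics, R = k_OH [OH][acid]. With illustrative values (orders of magnitude only, not the paper's exact constants), the two OH scenarios differ by the factor of 100 that drives the conclusion:

```python
def photodegradation_rate(k_oh, oh_conc, acid_conc):
    """Second-order loss rate of a carboxylic acid by OH radicals,
    R = k_OH * [OH] * [acid], in M/s. All values below are assumed
    order-of-magnitude examples."""
    return k_oh * oh_conc * acid_conc

# With k_OH ~ 1e8 1/(M s) and [acid] ~ 1e-5 M:
r_high_oh = photodegradation_rate(1e8, 1e-12, 1e-5)  # high-OH scenario
r_low_oh = photodegradation_rate(1e8, 1e-14, 1e-5)   # low-OH (noon) scenario
```

Under these assumptions the low-OH rate falls in the 10^(-11) M/s range of the measured biodegradation rates, so microbes can compete, while the high-OH rate outpaces them by two orders of magnitude.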

  9. Unveiling the Atomic-Level Determinants of Acylase-Ligand Complexes: An Experimental and Computational Study.

    PubMed

    Mollica, Luca; Conti, Gianluca; Pollegioni, Loredano; Cavalli, Andrea; Rosini, Elena

    2015-10-26

    The industrial production of higher-generation semisynthetic cephalosporins starts from 7-aminocephalosporanic acid (7-ACA), which is obtained by deacylation of the naturally occurring antibiotic cephalosporin C (CephC). The enzymatic process in which CephC is directly converted into 7-ACA by a cephalosporin C acylase has attracted industrial interest because of the prospects of simplifying the process and reducing costs. We recently enhanced the catalytic efficiency on CephC of a glutaryl acylase from Pseudomonas N176 (named VAC) by a protein engineering approach and solved the crystal structures of wild-type VAC and the H57βS-H70βS VAC double variant. In the present work, experimental measurements on several CephC derivatives and six VAC variants were carried out, and the binding of ligands into the VAC active site was investigated at an atomistic level by means of molecular docking and molecular dynamics simulations and analyzed on the basis of the molecular geometry of encounter complex formation and protein-ligand potential of mean force profiles. The observed significant correlation between the experimental data and estimated binding energies highlights the predictive power of our computational method to identify the ligand binding mode. The present experimental-computational study is well-suited both to provide deep insight into the reaction mechanism of cephalosporin C acylase and to improve the efficiency of the corresponding industrial process.

  10. MIDAS: a practical Bayesian design for platform trials with molecularly targeted agents.

    PubMed

    Yuan, Ying; Guo, Beibei; Munsell, Mark; Lu, Karen; Jazaeri, Amir

    2016-09-30

    Recent success of immunotherapy and other targeted therapies in cancer treatment has led to an unprecedented surge in the number of novel therapeutic agents that need to be evaluated in clinical trials. Traditional phase II clinical trial designs were developed for evaluating one candidate treatment at a time and thus not efficient for this task. We propose a Bayesian phase II platform design, the multi-candidate iterative design with adaptive selection (MIDAS), which allows investigators to continuously screen a large number of candidate agents in an efficient and seamless fashion. MIDAS consists of one control arm, which contains a standard therapy as the control, and several experimental arms, which contain the experimental agents. Patients are adaptively randomized to the control and experimental agents based on their estimated efficacy. During the trial, we adaptively drop inefficacious or overly toxic agents and 'graduate' the promising agents from the trial to the next stage of development. Whenever an experimental agent graduates or is dropped, the corresponding arm opens immediately for testing the next available new agent. Simulation studies show that MIDAS substantially outperforms the conventional approach. The proposed design yields a significantly higher probability for identifying the promising agents and dropping the futile agents. In addition, MIDAS requires only one master protocol, which streamlines trial conduct and substantially decreases the overhead burden. Copyright © 2016 John Wiley & Sons, Ltd.

  11. MIDAS: A Practical Bayesian Design for Platform Trials with Molecularly Targeted Agents

    PubMed Central

    Yuan, Ying; Guo, Beibei; Munsell, Mark; Lu, Karen; Jazaeri, Amir

    2016-01-01

    Recent success of immunotherapy and other targeted therapies in cancer treatment has led to an unprecedented surge in the number of novel therapeutic agents that need to be evaluated in clinical trials. Traditional phase II clinical trial designs were developed for evaluating one candidate treatment at a time, and thus not efficient for this task. We propose a Bayesian phase II platform design, the Multi-candidate Iterative Design with Adaptive Selection (MIDAS), which allows investigators to continuously screen a large number of candidate agents in an efficient and seamless fashion. MIDAS consists of one control arm, which contains a standard therapy as the control, and several experimental arms, which contain the experimental agents. Patients are adaptively randomized to the control and experimental agents based on their estimated efficacy. During the trial, we adaptively drop inefficacious or overly toxic agents and “graduate” the promising agents from the trial to the next stage of development. Whenever an experimental agent graduates or is dropped, the corresponding arm opens immediately for testing the next available new agent. Simulation studies show that MIDAS substantially outperforms the conventional approach. The proposed design yields a significantly higher probability for identifying the promising agents and dropping the futile agents. In addition, MIDAS requires only one master protocol, which streamlines trial conduct and substantially decreases the overhead burden. PMID:27112322
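The adaptive randomization step described in these two records can be sketched with conjugate Beta-Binomial posteriors. This is a hedged, Thompson-style illustration of allocating patients in proportion to each arm's estimated probability of being best, not MIDAS's exact algorithm; the prior and draw counts are assumptions:

```python
import random

def adaptive_randomization_probs(successes, trials, prior=(0.5, 0.5),
                                 n_draws=4000, seed=1):
    """Estimate P(arm is best) per arm by sampling Beta posteriors.

    successes/trials: per-arm response counts; Beta(prior) conjugate prior.
    Returned probabilities can drive the adaptive randomization ratio."""
    rng = random.Random(seed)
    wins = [0] * len(trials)
    for _ in range(n_draws):
        draws = [rng.betavariate(prior[0] + s, prior[1] + t - s)
                 for s, t in zip(successes, trials)]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]

# Control arm plus two experimental arms; the third arm (9/20) looks best
probs = adaptive_randomization_probs([4, 5, 9], [20, 20, 20])
print(probs)
```

In a platform design these probabilities would be recomputed at each interim look, with futility and toxicity rules deciding which arms are dropped or graduated.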

  12. Characterization of Fissile Assemblies Using Low-Efficiency Detection Systems

    DOE PAGES

    Chapline, George F.; Verbeke, Jerome M.

    2017-02-02

    Here, we have investigated the possibility that the amount, chemical form, multiplication, and shape of the fissile material in an assembly can be passively assayed using scintillator detection systems by only measuring the fast neutron pulse height distribution and the distribution of time intervals Δt between fast neutrons. We have previously demonstrated that the alpha-ratio can be obtained from the observed pulse height distribution for fast neutrons. In this paper we report that when the distribution of time intervals is plotted as a function of logΔt, the position of the correlated neutron peak is nearly independent of detector efficiency and determines the internal relaxation rate for fast neutrons. If this information is combined with knowledge of the alpha-ratio, then the position of the minimum between the correlated and uncorrelated peaks can be used to rapidly estimate the mass, multiplication, and shape of fissile material. This method does not require a priori knowledge of either the efficiency for neutron detection or the alpha-ratio. Although our method neglects 3-neutron correlations, we have used previously obtained experimental data for metallic and oxide forms of Pu to demonstrate that our method yields good estimates for multiplications as large as 2, and that the only constraint on detector efficiency/observation time is that a peak in the interval time distribution due to correlated neutrons is visible.
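The two-peak structure in logΔt that this record relies on can be reproduced with a toy simulation: uncorrelated background arrivals produce one peak set by the count rate, while correlated pairs produce a second peak set by the (much faster) internal relaxation time. The rates and relaxation time below are illustrative assumptions, not the paper's data:

```python
import math, random

rng = random.Random(0)
times = []
t = 0.0
for _ in range(5000):
    t += rng.expovariate(100.0)        # uncorrelated background, ~100 counts/s
    times.append(t)
    if rng.random() < 0.3:             # some events spawn a correlated partner
        times.append(t + rng.expovariate(1e6))  # ~1 us internal relaxation time
times.sort()
log_dt = [math.log10(b - a) for a, b in zip(times, times[1:]) if b > a]

# Coarse histogram over log10(dt): a correlated peak near -6 (the relaxation
# time) and an uncorrelated peak near -2 (the inverse count rate)
counts = {}
for v in log_dt:
    counts[round(v)] = counts.get(round(v), 0) + 1
print(sorted(counts.items()))
```

Lowering the detection efficiency thins both peaks and shifts the uncorrelated one (a lower observed count rate means longer gaps), but the correlated peak position stays put, which is the property the assay method exploits.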

  13. Digital signal processing techniques for coherent optical communication

    NASA Astrophysics Data System (ADS)

    Goldfarb, Gilad

    Coherent detection with subsequent digital signal processing (DSP) is developed, analyzed theoretically and numerically and experimentally demonstrated in various fiber-optic transmission scenarios. The use of DSP in conjunction with coherent detection unleashes the benefits of coherent detection, which rely on the preservation of full information of the incoming field. These benefits include high receiver sensitivity, the ability to achieve high spectral efficiency and the use of advanced modulation formats. With the immense advancements in DSP speeds, many of the problems hindering the use of coherent detection in optical transmission systems have been eliminated. Most notably, DSP alleviates the need for hardware phase-locking and polarization tracking, which can now be achieved in the digital domain. The complexity previously associated with coherent detection is hence significantly diminished and coherent detection is once again considered a feasible detection alternative. In this thesis, several aspects of coherent detection (with or without subsequent DSP) are addressed. Coherent detection is presented as a means to extend the dispersion limit of a duobinary signal using an analog decision-directed phase-lock loop. Analytical bit-error ratio estimation for quadrature phase-shift keying signals is derived. To validate the promise for high spectral efficiency, the orthogonal-wavelength-division multiplexing scheme is suggested. In this scheme the WDM channels are spaced at the symbol rate, thus achieving the spectral efficiency limit. Theory, simulation and experimental results demonstrate the feasibility of this approach. Infinite impulse response filtering is shown to be an efficient alternative to finite impulse response filtering for chromatic dispersion compensation. Theory, design considerations, simulation and experimental results relating to this topic are presented.
Interaction between fiber dispersion and nonlinearity remains the last major challenge deterministic effects pose for long-haul optical data transmission. Experimental results which demonstrate the possibility to digitally mitigate both dispersion and nonlinearity are presented. Impairment compensation is achieved using backward propagation by implementing the split-step method. Efficient realizations of the dispersion compensation operator used in this implementation are considered. Infinite-impulse response and wavelet-based filtering are both investigated as a means to reduce the required computational load associated with signal backward-propagation. Possible future research directions conclude this dissertation.
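One stage of the split-step backward propagation described above can be sketched as an inverse linear (dispersion) operator applied in the frequency domain followed by an inverse lumped Kerr phase rotation. This is a minimal single-step sketch; the fiber parameters are illustrative, and NLSE sign conventions vary between references (here the forward linear step is taken as exp(-j(β2/2)ω²dz)):

```python
import numpy as np

def backward_propagate(field, dz, beta2, gamma, dt):
    """One split-step stage of digital backward propagation:
    undo dispersion in the frequency domain, then undo the Kerr phase."""
    omega = 2 * np.pi * np.fft.fftfreq(field.size, d=dt)
    # Inverse of the forward linear step exp(-1j*(beta2/2)*omega^2*dz)
    lin = np.fft.ifft(np.fft.fft(field) * np.exp(1j * (beta2 / 2) * omega**2 * dz))
    # Inverse of the forward nonlinear step exp(+1j*gamma*|A|^2*dz)
    return lin * np.exp(-1j * gamma * np.abs(lin)**2 * dz)

# Round-trip demo: distort a Gaussian pulse with the matching forward step,
# then recover it by backward propagation
t = np.linspace(-5, 5, 256)
dt = t[1] - t[0]
pulse = np.exp(-t**2).astype(complex)
om = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)
fwd = np.fft.ifft(np.fft.fft(pulse * np.exp(1j * 1.3 * np.abs(pulse)**2 * 0.5))
                  * np.exp(-1j * (-21.0 / 2) * om**2 * 0.5))
recovered = backward_propagate(fwd, dz=0.5, beta2=-21.0, gamma=1.3, dt=dt)
print(np.abs(recovered - pulse).max())
```

A real receiver would chain many such steps over the link length; the IIR and wavelet filters mentioned in the record aim to cheapen the dispersion operator inside each step.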

  14. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253

  15. Experimental study of the continuous casting slab solidification microstructure by the dendrite etching method

    NASA Astrophysics Data System (ADS)

    Yang, X. G.; Xu, Q. T.; Wu, C. L.; Chen, Y. S.

    2017-12-01

    The relationship between the microstructure of the continuous casting slab (CCS) and quality defects of the steel products, as well as evolution and characteristics of the fine equiaxed, columnar, equiaxed zones and crossed dendrites of CCS were systematically investigated in this study. Different microstructures of various CCS samples were revealed. The dendrite etching method was proved to be quite efficient for the analysis of solidified morphologies, which are essential to estimate the material characteristics, especially the CCS microstructure defects.

  16. Estimation of optimum density and temperature for maximum efficiency of tin ions in Z discharge extreme ultraviolet sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masnavi, Majid; Nakajima, Mitsuo; Hotta, Eiki

    Extreme ultraviolet (EUV) discharge-based lamps for EUV lithography need to generate extremely high power in the narrow spectrum band of 13.5 ± 0.135 nm. A simplified collisional-radiative model and radiative transfer solution for an isotropic medium were utilized to investigate the wavelength-integrated light outputs in tin (Sn) plasma. Detailed calculations using the Hebrew University-Lawrence Livermore atomic code were employed for determination of the necessary atomic data of the Sn4+ to Sn13+ charge states. The result of the model is compared with experimental spectra from a Sn-based discharge-produced plasma. The analysis reveals that a considerably larger efficiency compared to the so-called efficiency of a black-body radiator is formed for an electron density of approximately 10^18 cm^-3. For higher electron density, the spectral efficiency of Sn plasma reduces due to the saturation of resonance transitions.

  17. Optical determination of Shockley-Read-Hall and interface recombination currents in hybrid perovskites

    PubMed Central

    Sarritzu, Valerio; Sestu, Nicola; Marongiu, Daniela; Chang, Xueqing; Masi, Sofia; Rizzo, Aurora; Colella, Silvia; Quochi, Francesco; Saba, Michele; Mura, Andrea; Bongiovanni, Giovanni

    2017-01-01

    Metal-halide perovskite solar cells rival the best inorganic solar cells in power conversion efficiency, providing the outlook for efficient, cheap devices. In order for the technology to mature and approach the ideal Shockley-Queisser efficiency, experimental tools are needed to diagnose what processes limit performance, beyond simply measuring electrical characteristics often affected by parasitic effects and difficult to interpret. Here we study the microscopic origin of recombination currents causing photoconversion losses with an all-optical technique, measuring the electron-hole free energy as a function of the exciting light intensity. Our method allows assessing the ideality factor and breaks down the electron-hole recombination current into bulk defect and interface contributions, providing an estimate of the limit photoconversion efficiency, without any real charge current flowing through the device. We identify Shockley-Read-Hall recombination as the main decay process in insulated perovskite layers and quantify the additional performance degradation due to interface recombination in heterojunctions. PMID:28317883

  18. The effect of life-cycle cost disclosure on consumer behavior

    NASA Astrophysics Data System (ADS)

    Deutsch, Matthias

    For more than 20 years, analysts have reported on the so-called "energy paradox" or the "energy efficiency gap", referring to the fact that economic agents could in principle lower their total cost at current prices by using more energy-efficient technology but, nevertheless, often decide not to do so. Theory suggests that providing information in a simplified way could potentially reduce this "efficiency gap". Such simplification may be achieved by providing the estimated monetary operating cost and life-cycle cost (LCC) of a given appliance---which has been a recurring theme within the energy policy and efficiency labeling community. Yet, little is known so far about the causal effects of LCC disclosure on consumer action because of the gap between the acquisition of efficiency information and consumer purchasing behavior in the real marketplace. This dissertation bridges the gap by experimentally integrating LCC disclosure into two major German commercial websites---a price comparison engine for cooling appliances, and an online shop for washing machines. Internet users arriving on these websites were randomly assigned to two experimental groups, and the groups were exposed to different visual stimuli. The control group received regular product price information, whereas the treatment group was, in addition, offered information about operating cost and total LCC. Click-stream data of consumers' shopping behavior was evaluated with multiple regression analysis by controlling for several product characteristics. This dissertation finds that LCC disclosure reduces the mean energy use of chosen cooling appliances by 2.5% (p<0.01), and the energy use of chosen washing machines by 0.8% (p<0.001). For the latter, it also reduces the mean water use by 0.7% (p<0.05). These effects suggest a potential role for public policy in promoting LCC disclosure. 
While I do not attempt to estimate the costs of such a policy, a simple quantification shows that the benefits amount to 100 to 200 thousand Euros per year for Germany, given current predictions regarding the price of tradable permits for CO2, and not counting other potential benefits. Future research should strive for increasing external validity, using better instruments, and evaluating the effectiveness of different information formats for LCC disclosure.
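The life-cycle cost shown to the treatment group in this study is, in essence, the purchase price plus the present value of future operating costs. A minimal sketch of that calculation; the discount rate, lifetime, and appliance figures below are illustrative assumptions, not the dissertation's data:

```python
def life_cycle_cost(price, annual_kwh, eur_per_kwh, years, discount_rate):
    """Purchase price plus present value of electricity costs over the lifetime."""
    annual_cost = annual_kwh * eur_per_kwh
    present_value = sum(annual_cost / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    return price + present_value

# Two fridges: cheaper to buy vs. cheaper to run (illustrative numbers)
lcc_a = life_cycle_cost(price=450, annual_kwh=300, eur_per_kwh=0.30,
                        years=15, discount_rate=0.03)
lcc_b = life_cycle_cost(price=550, annual_kwh=150, eur_per_kwh=0.30,
                        years=15, discount_rate=0.03)
print(round(lcc_a, 2), round(lcc_b, 2))
```

Disclosing both totals makes the efficient appliance's advantage visible at purchase time, which is exactly the simplification the "efficiency gap" literature argues consumers otherwise fail to perform.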

  19. Algal cell disruption using microbubbles to localize ultrasonic energy

    PubMed Central

    Krehbiel, Joel D.; Schideman, Lance C.; King, Daniel A.; Freund, Jonathan B.

    2015-01-01

    Microbubbles were added to an algal solution with the goal of improving cell disruption efficiency and the net energy balance for algal biofuel production. Experimental results showed that disruption increases with increasing peak rarefaction ultrasound pressure over the range studied: 1.90 to 3.07 MPa. Additionally, ultrasound cell disruption increased by up to 58% by adding microbubbles, with peak disruption occurring in the range of 10^8 microbubbles/ml. The localization of energy in space and time provided by the bubbles improves efficiency: energy requirements for such a process were estimated to be one-fourth of the available heat of combustion of algal biomass and one-fifth of that of currently used cell disruption methods. This increase in energy efficiency could make microbubble-enhanced ultrasound viable for bioenergy applications and is expected to integrate well with current cell harvesting methods based upon dissolved air flotation. PMID:25311188

  20. Steam engine research for solar parabolic dish

    NASA Technical Reports Server (NTRS)

    Demler, R. L.

    1981-01-01

    The parabolic dish solar concentrator provides an opportunity to generate high grade energy in a modular system. Most of the capital is projected to be in the dish and its installation. Assurance of a high production demand for a standard dish could lead to dramatic cost reductions. High production volume in turn depends upon maximum application flexibility by providing energy output options, e.g., heat, electricity, chemicals and combinations thereof. Subsets of these options include energy storage and combustion assist. A steam engine design and experimental program are described which investigate the efficiency potential of a small 25-kW compound reheat-cycle piston engine. An engine efficiency of 35 percent is estimated for a 700 °C steam temperature from the solar receiver.

  1. Super-Nyquist shaping and processing technologies for high-spectral-efficiency optical systems

    NASA Astrophysics Data System (ADS)

    Jia, Zhensheng; Chien, Hung-Chang; Zhang, Junwen; Dong, Ze; Cai, Yi; Yu, Jianjun

    2013-12-01

    The implementations of super-Nyquist pulse generation, either digitally using a digital-to-analog converter (DAC) or with an optical filter at the transmitter side, are introduced. Three corresponding receiver-side signal processing algorithms are presented and compared for high-spectral-efficiency (SE) optical systems employing spectral prefiltering. These algorithms are designed to mitigate the inter-symbol interference (ISI) and inter-channel interference (ICI) impairments caused by the bandwidth constraint: a 1-tap constant modulus algorithm (CMA) with 3-tap maximum likelihood sequence estimation (MLSE), a regular CMA and digital filter with 2-tap MLSE, and a constant multi-modulus algorithm (CMMA) with 2-tap MLSE. The principles and prefiltering tolerance are given through numerical and experimental results.
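The 1-tap CMA stage named above adapts a single complex coefficient so that the equalized output approaches a constant modulus, blind to the data. A minimal sketch under assumed values for the step size, target modulus, and channel (a complex gain), not the paper's full equalizer:

```python
import cmath, random

def cma_1tap(samples, radius2=1.0, mu=1e-2):
    """One-tap constant modulus algorithm: y = w*x, with w updated by a
    stochastic gradient step driving |y|^2 toward radius2."""
    w = 1.0 + 0.0j
    out = []
    for x in samples:
        y = w * x
        out.append(y)
        err = radius2 - abs(y) ** 2         # CMA error term
        w += mu * err * y * x.conjugate()   # gradient update (phase-blind)
    return out, w

# QPSK symbols through an unknown complex channel gain
rng = random.Random(7)
channel = 0.5 * cmath.exp(1j * 0.7)
syms = [cmath.exp(1j * (cmath.pi / 4 + cmath.pi / 2 * rng.randrange(4)))
        for _ in range(3000)]
out, w = cma_1tap([channel * s for s in syms])
print(abs(out[0]), abs(out[-1]))   # modulus converges from 0.5 toward 1
```

Since CMA is phase-blind, a carrier-phase recovery stage (or, as in the record, an MLSE stage handling the prefiltering-induced ISI) would follow in a complete receiver.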

  2. Cell-stimulation therapy of lateral epicondylitis with frequency-modulated low-intensity electric current.

    PubMed

    Aliyev, R M; Geiger, G

    2012-03-01

    In addition to the routine therapy, the patients with lateral epicondylitis included into experimental group were subjected to a 12-week cell-stimulation therapy with low-intensity frequency-modulated electric current. The control group received the same routine therapy and sham stimulation (the therapeutic apparatus was not energized). The efficiency of this microcurrent therapy was estimated by comparing medical indices before therapy and at the end of a 12-week therapeutic course using a 10-point pain severity numeric rating scale (NRS) and Roles-Maudsley pain score. The study revealed high therapeutic efficiency of cell-stimulation with low-intensity electric current resulting probably from up-regulation of intracellular transmitters, interleukins, and prostaglandins playing the key role in the regulation of inflammation.

  3. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
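For an orthonormal polynomial chaos basis, the two response moments mentioned above drop directly out of the expansion coefficients: E[u] = c_0 and Var[u] = Σ_{i≥1} c_i². A hedged sketch verifying this against Monte Carlo for a toy expansion in probabilists' Hermite polynomials (the coefficients are illustrative, not from the paper):

```python
import math, random

def pce_moments(coeffs):
    """Moments of u = sum_i c_i * psi_i for an orthonormal PC basis:
    E[u] = c_0,  Var[u] = sum_{i>=1} c_i^2."""
    return coeffs[0], sum(c * c for c in coeffs[1:])

# Orthonormal Hermite basis for x ~ N(0,1):
# psi_0 = 1, psi_1 = x, psi_2 = (x^2 - 1)/sqrt(2)
def u(x):
    return 2.0 + 3.0 * x + 0.5 * (x * x - 1.0) / math.sqrt(2)

rng = random.Random(3)
samples = [u(rng.gauss(0.0, 1.0)) for _ in range(200000)]
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / len(samples)

mean, var = pce_moments([2.0, 3.0, 0.5])
print(mean, var, round(mc_mean, 3), round(mc_var, 3))
```

These closed-form moments are what make surrogate-based failure-probability estimates cheap: no further model evaluations are needed once the coefficients are fitted on the sequentially designed training points.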

  4. Sequential experimental design based generalised ANOVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  5. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme of high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine if biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Through experimental results, the proposed method showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in view of encoding time, number of function calls, and memory access.

  6. Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities.

    PubMed

    Cruz Bournazou, M N; Barz, T; Nickel, D B; Lopez Cárdenas, D C; Glauche, F; Knepper, A; Neubauer, P

    2017-03-01

    We present an integrated framework for the online optimal experimental re-design applied to parallel nonlinear dynamic processes that aims to precisely estimate the parameter set of macro kinetic growth models with minimal experimental effort. This provides a systematic solution for rapid validation of a specific model to new strains, mutants, or products. In biosciences, this is especially important as model identification is a long and laborious process which is continuing to limit the use of mathematical modeling in this field. The strength of this approach is demonstrated by fitting a macro-kinetic differential equation model for Escherichia coli fed-batch processes after 6 h of cultivation. The system includes two fully-automated liquid handling robots; one containing eight mini-bioreactors and another used for automated at-line analyses, which allows for the immediate use of the available data in the modeling environment. As a result, the experiment can be continually re-designed while the cultivations are running using the information generated by periodical parameter estimations. The advantages of an online re-computation of the optimal experiment are proven by a 50-fold lower average coefficient of variation on the parameter estimates compared to the sequential method (4.83% instead of 235.86%). The success obtained in such a complex system is a further step towards a more efficient computer aided bioprocess development. Biotechnol. Bioeng. 2017;114: 610-619. © 2016 Wiley Periodicals, Inc.

  7. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    USGS Publications Warehouse

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

    We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
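The emulator idea in this record can be sketched in two steps: decompose an ensemble of model runs with an SVD, then regress the right-singular-vector weights on the input parameter. The toy model and quadratic regression basis below are assumptions for illustration, not the paper's statistical model:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)

def model(theta):
    """Toy 'computer model': a 50-point output curve for parameter theta."""
    return theta * np.sin(2 * np.pi * grid) + theta**2 * np.cos(2 * np.pi * grid)

thetas = rng.uniform(0.5, 1.5, size=30)
Y = np.array([model(t) for t in thetas])      # 30 runs x 50 outputs

# SVD of the run matrix: rows of Vt are output-space modes,
# U*S are the per-run weights of those modes
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
k = 2
weights = U[:, :k] * S[:k]                    # per-run mode coefficients

# First-order emulator: regress each weight on (1, theta, theta^2)
X = np.column_stack([np.ones_like(thetas), thetas, thetas**2])
B, *_ = np.linalg.lstsq(X, weights, rcond=None)

def emulate(theta):
    w = np.array([1.0, theta, theta**2]) @ B  # predicted mode weights
    return w @ Vt[:k]

err = np.abs(emulate(1.0) - model(1.0)).max()
print(err)
```

In an inference setting this cheap emulator replaces the mechanistic model inside the likelihood, which is what buys the computational efficiency the abstract refers to.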

  8. Simulation of high SNR photodetector with L-C coupling and transimpedance amplifier circuit and its verification

    NASA Astrophysics Data System (ADS)

    Wang, Shaofeng; Xiang, Xiao; Zhou, Conghua; Zhai, Yiwei; Quan, Runai; Wang, Mengmeng; Hou, Feiyan; Zhang, Shougang; Dong, Ruifang; Liu, Tao

    2017-01-01

    In this paper, a model for simulating the optical response and noise performances of photodetectors with L-C coupling and transimpedance amplification circuit is presented. To verify the simulation, two kinds of photodetectors, which are based on the same printed-circuit-board (PCB) design and PIN photodiode but different operational amplifiers, are developed and experimentally investigated. Through the comparisons between the numerical simulation results and the experimentally obtained data, excellent agreement is achieved, which shows that the model provides a highly efficient guide for the development of a high signal-to-noise-ratio photodetector. Furthermore, the parasitic capacitances on the developed PCB, which are difficult to measure directly but have a non-negligible influence on the photodetectors' performance, are estimated.

  9. Simulation of high SNR photodetector with L-C coupling and transimpedance amplifier circuit and its verification.

    PubMed

    Wang, Shaofeng; Xiang, Xiao; Zhou, Conghua; Zhai, Yiwei; Quan, Runai; Wang, Mengmeng; Hou, Feiyan; Zhang, Shougang; Dong, Ruifang; Liu, Tao

    2017-01-01

    In this paper, a model for simulating the optical response and noise performances of photodetectors with L-C coupling and transimpedance amplification circuit is presented. To verify the simulation, two kinds of photodetectors, which are based on the same printed-circuit-board (PCB) design and PIN photodiode but different operational amplifiers, are developed and experimentally investigated. Through the comparisons between the numerical simulation results and the experimentally obtained data, excellent agreement is achieved, which shows that the model provides a highly efficient guide for the development of a high signal-to-noise-ratio photodetector. Furthermore, the parasitic capacitances on the developed PCB, which are difficult to measure directly but have a non-negligible influence on the photodetectors' performance, are estimated.

  10. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown source parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, with the optimal observation location at the maximum; posterior marginal probability densities of the unknown parameters for the designed location (thick solid black lines) versus seven randomly chosen locations, true values marked by vertical lines; the parameters are estimated better with the designed location.]
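The design criterion in this record, the expected relative entropy (information gain) of a candidate sampling location, can be estimated by nested Monte Carlo over prior draws of the source parameter. The Gaussian plume stand-in below plays the role of the transport surrogate and is an assumption for illustration only:

```python
import math, random

rng = random.Random(2)

def forward(theta, x):
    """Stand-in surrogate: concentration at location x from source strength theta
    (an assumed Gaussian plume centered at x = 2, not the paper's model)."""
    return theta * math.exp(-(x - 2.0) ** 2)

def expected_information_gain(x, sigma=0.1, n=300):
    """EIG(x) = E[ log p(y|theta,x) - log p(y|x) ], by nested Monte Carlo.
    Prior: theta ~ N(1, 0.5^2); measurement noise: N(0, sigma^2)."""
    thetas = [rng.gauss(1.0, 0.5) for _ in range(n)]
    total = 0.0
    for th in thetas:
        y = forward(th, x) + rng.gauss(0.0, sigma)
        log_lik = -0.5 * ((y - forward(th, x)) / sigma) ** 2
        # Marginal likelihood p(y|x): average the likelihood over prior draws
        evid = sum(math.exp(-0.5 * ((y - forward(t, x)) / sigma) ** 2)
                   for t in thetas) / n
        total += log_lik - math.log(evid)
    return total / n

eig_near = expected_information_gain(2.0)  # on the plume: informative
eig_far = expected_information_gain(8.0)   # far away: measurement is pure noise
print(eig_near, eig_far)
```

Sweeping x and picking the maximizer reproduces the "contours of expected information gain" design step; the MCMC estimation stage would then condition on data collected at that location.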

  11. Application of Adaptive Neuro-Fuzzy Inference System for Prediction of Neutron Yield of IR-IECF Facility in High Voltages

    NASA Astrophysics Data System (ADS)

    Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.

    2013-09-01

    This paper presents a soft computing based artificial intelligent technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model has achieved good agreement with the experimental results. In comparison to the experimental data, the proposed ANFIS model has an MRE% below 1.53% and 2.85% for training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR in the IR-IECF device.
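The four performance measures listed above are standard and easy to state explicitly; a minimal sketch with a tiny illustrative data set (the numbers are not from the paper):

```python
import math

def performance_measures(y_true, y_pred):
    """Correlation coefficient, MAE, MRE%, and RMSE between measured and
    predicted values."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
    r = cov / math.sqrt(sum((a - mt) ** 2 for a in y_true)
                        * sum((b - mp) ** 2 for b in y_pred))
    mae = sum(abs(a - b) for a, b in zip(y_true, y_pred)) / n
    mre = 100.0 * sum(abs(a - b) / abs(a) for a, b in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n)
    return r, mae, mre, rmse

r, mae, mre, rmse = performance_measures([1.0, 2.0, 4.0], [1.1, 1.9, 4.2])
print(r, mae, mre, rmse)
```

Note that MRE% weights errors by the measured value, so it penalizes misses at low neutron yields more heavily than RMSE does, which is relevant when a model must hold over wide voltage ranges.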

  12. Structural Heterogeneity and Quantitative FRET Efficiency Distributions of Polyprolines through a Hybrid Atomistic Simulation and Monte Carlo Approach

    PubMed Central

    Hoefling, Martin; Lima, Nicola; Haenni, Dominik; Seidel, Claus A. M.; Schuler, Benjamin; Grubmüller, Helmut

    2011-01-01

    Förster Resonance Energy Transfer (FRET) experiments probe molecular distances via distance-dependent energy transfer from an excited donor dye to an acceptor dye. Single molecule experiments not only probe average distances, but also distance distributions or even fluctuations, and thus provide a powerful tool to study biomolecular structure and dynamics. However, the measured energy transfer efficiency depends not only on the distance between the dyes, but also on their mutual orientation, which is typically inaccessible to experiments. Thus, assumptions on the orientation distributions and averages are usually made, limiting the accuracy of the distance distributions extracted from FRET experiments. Here, we demonstrate that by combining single molecule FRET experiments with the mutual dye orientation statistics obtained from Molecular Dynamics (MD) simulations, improved estimates of distances and distributions are obtained. From the simulated time-dependent mutual orientations, FRET efficiencies are calculated and the full statistics of individual photon absorption, energy transfer, and photon emission events is obtained from subsequent Monte Carlo (MC) simulations of the FRET kinetics. All recorded emission events are collected into bursts from which efficiency distributions are calculated in close resemblance to the actual FRET experiment, taking shot noise fully into account. Using polyproline chains with attached Alexa 488 and Alexa 594 dyes as a test system, we demonstrate the feasibility of this approach by direct comparison to experimental data. We identified cis-isomers and different static local environments as sources of the experimentally observed heterogeneity. Reconstructions of distance distributions from experimental data at different levels of theory demonstrate how the respective underlying assumptions and approximations affect the obtained accuracy. Our results show that dye fluctuations obtained from MD simulations, combined with MC single-photon kinetics, provide a versatile tool to improve the accuracy of distance distributions that can be extracted from measured single molecule FRET efficiencies. PMID:21629703
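    The shot-noise part of the Monte Carlo photon kinetics can be illustrated with a minimal burst simulation: each absorbed photon independently transfers with probability E, so per-burst apparent efficiencies scatter binomially around the mean. The Förster radius and burst size below are assumed values, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def fret_efficiency(r, r0=5.4):
    """Mean transfer efficiency for donor-acceptor distance r (nm); R0 is
    the Foerster radius (5.4 nm is an assumed value)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def simulate_bursts(r, n_bursts=2000, photons_per_burst=50):
    """Shot-noise-limited efficiency histogram: the number of acceptor
    photons per burst is binomial with success probability E."""
    e = fret_efficiency(r)
    acceptor = rng.binomial(photons_per_burst, e, size=n_bursts)
    return acceptor / photons_per_burst    # per-burst apparent efficiency

eff = simulate_bursts(5.4)   # at r = R0 the mean efficiency is exactly 0.5
```

    The simulated histogram's width is the shot-noise floor; any extra broadening seen in experiment points to real heterogeneity, which is how the cis-isomer and local-environment subpopulations are separated above.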

  13. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  14. Crowdsourcing for Cognitive Science – The Utility of Smartphones

    PubMed Central

    Brown, Harriet R.; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A.; McNab, Fiona; Rutledge, Robb B.; Dolan, Raymond J.

    2014-01-01

    By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations. PMID:25025865

  15. Crowdsourcing for cognitive science--the utility of smartphones.

    PubMed

    Brown, Harriet R; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A; McNab, Fiona; Rutledge, Robb B; Dolan, Raymond J

    2014-01-01

    By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations.

  16. Active media for up-conversion diode-pumped lasers

    NASA Astrophysics Data System (ADS)

    Tkachuk, Alexandra M.

    1996-03-01

    In this work, we consider different methods of populating the initial and final working levels of laser transitions in TR-doped crystals under selective 'up-conversion' and 'avalanche' diode-laser pumping. On the basis of estimates of the rates of competing non-radiative energy-transfer processes, obtained from experimental data and theoretical calculations, we estimated the efficiency of up-conversion pumping and the self-quenching of the upper TR3+ states excited by laser-diode emission. The effects of host composition, dopant concentration, and temperature on the output characteristics and up-conversion processes in YLF:Er, BaY2F8:Er, BaY2F8:Er,Yb, and BaY2F8:Yb,Ho are determined.

  17. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    NASA Astrophysics Data System (ADS)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently. Parallel detection is available, and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be estimated quickly using an integral-image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
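    The integral-image operator mentioned above is what makes the local clutter statistics fast: any rectangular window sum costs four lookups, independent of window size. A minimal sketch of the window mean (the paper additionally fits a G0 distribution, which is not shown here):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row / left column so that windowed
    sums need no boundary special cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_mean(ii, r0, c0, r1, c1):
    """Mean of img[r0:r1, c0:c1] in O(1) from the integral image."""
    s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return s / ((r1 - r0) * (c1 - c0))

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
m = window_mean(ii, 1, 1, 3, 3)   # mean of the central 2x2 block
```

    In a CA-CFAR-style detector this local mean (and, with a squared integral image, the local variance) sets the adaptive threshold for each pixel's clutter window.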

  18. Type-I frequency-doubling characteristics of high-power, ultrafast fiber laser in thick BIBO crystal.

    PubMed

    Chaitanya N, Apurv; Aadhi, A; Singh, R P; Samanta, G K

    2014-09-15

    We report on the experimental realization of the optimum focusing condition for type-I second-harmonic generation (SHG) of a high-power, ultrafast laser in a "thick" nonlinear crystal. Using single-pass frequency doubling of a 5 W Yb-fiber laser with a pulse width of ~260 fs at a repetition rate of 78 MHz in a 5-mm-long bismuth triborate (BIBO) crystal, we observed that the optimum focusing condition depends more on the birefringence of the crystal than on its group-velocity mismatch (GVM). A theoretical fit to our experimental results reveals that, even in the presence of GVM, the optimum focusing condition matches the theoretical model of Boyd and Kleinman predicted for continuous-wave and long-pulse SHG. Using a focusing factor of ξ=1.16, close to the estimated optimum value of ξ=1.72 for our experimental conditions, we generated 2.25 W of green radiation with a pulse width of 176 fs and a single-pass conversion efficiency as high as 46.5%. Our study also verifies the effects of pulse narrowing and broadening of the angular phase-matching bandwidth of SHG at tighter focusing. This study signifies the advantage of SHG in a "thick" crystal for controlling the SH pulse width by changing the focusing lens while accessing high conversion efficiency and a broad angular phase-matching bandwidth.

  19. Unbiased estimation in seamless phase II/III trials with unequal treatment effect variances and hypothesis-driven selection rules.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2016-09-30

    Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  20. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    PubMed

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to improve sparse representation performance. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to minimize the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
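    The FFT/Hadamard speed-up rests on the identity that multiplying by a DFT matrix equals taking an FFT, turning O(n²) matrix-vector products into O(n log n) transforms plus elementwise products. A minimal check of the identity (sizes illustrative):

```python
import numpy as np

n = 64
x = np.random.default_rng(2).normal(size=n) + 0j

# Explicit DFT matrix: F[j, k] = exp(-2*pi*i*j*k/n), matching np.fft.fft's
# sign and (unnormalized) scaling convention.
j = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(j, j) / n)

y_mat = F @ x          # O(n^2) matrix-vector product
y_fft = np.fft.fft(x)  # O(n log n) FFT, numerically the same vector
err = float(np.max(np.abs(y_mat - y_fft)))
```

    In the Bayesian ISAR iterations above, every multiplication by the (partial) Fourier dictionary is replaced by such an FFT, and diagonal-matrix products become Hadamard (elementwise) products.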

  1. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
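    The reflected binary Gray code used above is simply n XOR (n >> 1). A minimal sketch of why it improves bit-plane correlation: numerically close values (e.g. a pixel and its side-information estimate) can differ in many binary bits but differ in only one Gray-code bit across a power-of-two boundary:

```python
def gray_encode(n: int) -> int:
    """Reflected binary Gray code of n: successive integers differ in
    exactly one bit."""
    return n ^ (n >> 1)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two integers."""
    return bin(a ^ b).count("1")

# 7 and 8 are adjacent values, yet their binary codes (0111 vs 1000) differ
# in all four bits; their Gray codes (0100 vs 1100) differ in only one.
g7, g8 = gray_encode(7), gray_encode(8)
d_binary = hamming(7, 8)
d_gray = hamming(g7, g8)
```

    Fewer flipped bits between correlated values means each extracted bit plane agrees more often with its side information, which is exactly the correlation gain exploited by the Slepian-Wolf stage.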

  2. A learning framework for age rank estimation based on face images with scattering transform.

    PubMed

    Chang, Kuang-Yu; Chen, Chu-Song

    2015-03-01

    This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregation performance. In addition, we give a theoretical analysis on designing the cost of each individual binary classifier so that the misranking cost can be bounded by the total misclassification cost. An efficient descriptor, the scattering transform, which scatters the Gabor coefficients and pools them with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bio-inspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms state-of-the-art age estimation approaches.
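    The rank-by-aggregation idea can be sketched without the cost-sensitivity weights: the predicted rank is one plus the number of binary "older than k?" classifiers that fire. The scores below are hypothetical, not from the paper:

```python
import numpy as np

def age_rank(binary_scores, threshold=0.0):
    """Aggregate K binary 'is age > k?' decisions into an ordinal rank:
    rank = 1 + number of classifiers voting 'older'. (Toy aggregation
    without the paper's cost-sensitive weighting.)"""
    return 1 + int(np.sum(np.asarray(binary_scores) > threshold))

# Hypothetical decision values from K = 5 ordinal hyperplanes for one face;
# for a consistent ranker they decrease monotonically with k.
scores = [2.1, 1.3, 0.4, -0.7, -1.9]
rank = age_rank(scores)   # three positive votes -> rank 4
```

    The cost-sensitive version weights each binary decision so that a flipped vote far from the true age costs more than one near it, which is what bounds the misranking cost by the total misclassification cost.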

  3. Is effective force application in handrim wheelchair propulsion also efficient?

    PubMed

    Bregman, D J J; van Drongelen, S; Veeger, H E J

    2009-01-01

    Efficiency in manual wheelchair propulsion is low, as is the fraction of the propulsion force that contributes to the moment of propulsion of the wheelchair. In this study we tested the hypothesis that a tangential propulsion force direction leads to an increase in physiological cost, due to (1) the sub-optimal use of elbow flexors and extensors, and/or (2) the necessity of preventing glenohumeral subluxation. Five able-bodied individuals and 11 individuals with a spinal cord injury propelled a wheelchair while kinematics and kinetics were collected. The results were used to perform inverse dynamical simulations with input of (1) the experimentally obtained propulsion force, and (2) only the tangential component of that force. In the tangential force condition the physiological cost was over 30% higher, while the tangential propulsion force was only 75% of the total experimental force. According to model estimations, the tangential force condition led to more co-contraction around the elbow and a higher power production around the shoulder joint. The tangential propulsion force led to a significant but small 4% increase in the necessity for the model to compensate for glenohumeral subluxation, which indicates that this is not a likely cause of the decrease in efficiency. The present findings support the hypothesis that the observed force direction in wheelchair propulsion is a compromise between efficiency and the constraints imposed by the wheelchair-user system. This implies that training should not be aimed at optimization of the propulsion force, because this may be less efficient and more straining for the musculoskeletal system.
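    The "fraction of the propulsion force that contributes to the moment" is a simple projection onto the handrim tangent. A 2-D sketch with made-up vectors, not the study's measured data:

```python
import numpy as np

def tangential_fraction(force, contact_point, wheel_center):
    """Fraction of an applied handrim force that acts tangentially, i.e.
    the part producing a propulsion moment about the wheel axle (2-D toy)."""
    r = np.asarray(contact_point, float) - np.asarray(wheel_center, float)
    r /= np.linalg.norm(r)
    tangent = np.array([-r[1], r[0]])     # unit vector perpendicular to radius
    f = np.asarray(force, float)
    return abs(f @ tangent) / np.linalg.norm(f)

# Force applied at the top of the handrim, partly directed radially inward:
frac = tangential_fraction(force=[3.0, -4.0],
                           contact_point=[0.0, 1.0],
                           wheel_center=[0.0, 0.0])
```

    The study's finding that the measured tangential fraction is about 75% corresponds to values like `frac` being well below 1: the remaining, "ineffective" radial component is apparently what keeps the physiological cost down.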

  4. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to the automatic registration of terrestrial laser scanning (TLS) point clouds using a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which the transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and a lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
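    The conditional-sampling idea can be sketched on a toy 2-D line-fitting problem: pick the minimal hypothesis set from the currently most probable inliers, then update every point's inlier probability with a simplified Bayes rule. The prior, likelihood values, and tolerance below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 30 points on y = 2x + 1, with the first six made gross outliers.
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0
y[:6] += rng.normal(0.0, 2.0, 6)
p = np.full(len(x), 0.5)     # prior inlier probabilities
tol = 0.05                    # residual tolerance for "consistent with model"

for _ in range(10):
    # Conditional sampling: hypothesis set = the 2 most probable inliers
    # (stable sort keeps ties deterministic).
    i0, i1 = np.argsort(p, kind="stable")[-2:]
    slope = (y[i1] - y[i0]) / (x[i1] - x[i0])
    resid = np.abs(y - (y[i0] + slope * (x - x[i0])))
    consistent = resid < tol
    # Simplified Bayes update: consistent points gain probability, others lose.
    like = np.where(consistent, 0.9, 0.1)
    p = like * p / (like * p + (1.0 - like) * (1.0 - p))

inlier_mask = p > 0.5
```

    Because the hypothesis set is chosen deterministically from the highest-probability points rather than at random, contaminated hypotheses become rare after a few updates, which is the source of the iteration savings over RANSAC reported above.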

  5. Exact comprehensive equations for the photon management properties of silicon nanowire

    PubMed Central

    Li, Yingfeng; Li, Meicheng; Li, Ruike; Fu, Pengfei; Wang, Tai; Luo, Younan; Mbengue, Joseph Michel; Trevor, Mwenya

    2016-01-01

    Unique photon management (PM) properties of silicon nanowire (SiNW) make it an attractive building block for a host of nanowire photonic devices including photodetectors, chemical and gas sensors, waveguides, optical switches, solar cells, and lasers. However, the lack of efficient equations for the quantitative estimation of the SiNW’s PM properties limits the rational design of such devices. Herein, we establish comprehensive equations to evaluate several important performance features of the PM properties of SiNW, based on theoretical simulations. First, the relationships between the resonant wavelengths (RW), where SiNW can harvest light most effectively, and the size of SiNW are formulated. Then, equations for the light-harvesting efficiency at the RW, which determines the single-frequency performance limit of SiNW-based photonic devices, are established. Finally, equations for the light-harvesting efficiency of SiNW over the full spectrum, which are of great significance in photovoltaics, are established. Furthermore, using these equations, we derive four extra formulas to estimate the optimal size of SiNW for light-harvesting. These equations reproduce the majority of the reported experimental and theoretical results with deviations of only ~5%. Our study fills a gap in quantitatively predicting the SiNW’s PM properties, which will contribute significantly to its practical applications. PMID:27103087

  6. How to assess the efficiency of synchronization experiments in tokamaks

    NASA Astrophysics Data System (ADS)

    Murari, A.; Craciunescu, T.; Peluso, E.; Gelfusa, M.; Lungaroni, M.; Garzotti, L.; Frigione, D.; Gaudio, P.; Contributors, JET

    2016-07-01

    Control of instabilities such as ELMs and sawteeth is considered an important ingredient in the development of reactor-relevant scenarios. Various forms of ELM pacing have been tried in the past to influence their behavior using external perturbations. One of the main problems with these synchronization experiments resides in the fact that ELMs are periodic or quasi-periodic in nature. Therefore, after any pulsed perturbation, if one waits long enough, an ELM is always bound to occur. To evaluate the effectiveness of ELM pacing techniques, it is crucial to determine an appropriate interval over which they can have a real influence and an effective triggering capability. In this paper, three independent statistical methods are described to address this issue: Granger causality, transfer entropy and recurrence plots. The obtained results for JET with the ITER-like wall (ILW) indicate that the proposed techniques agree very well and provide much better estimates than the traditional heuristic criteria reported in the literature. Moreover, their combined use allows for the improvement of the time resolution of the assessment and determination of the efficiency of the pellet triggering in different phases of the same discharge. Therefore, the developed methods can be used to provide a quantitative and statistically robust estimate of the triggering efficiency of ELM pacing under realistic experimental conditions.
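    Of the three methods, Granger causality is the simplest to sketch: the pacing signal is "causal" if adding its past improves prediction of the ELM signal beyond the ELM signal's own past. A toy version on synthetic data (the lag structure and coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic driver x and response y, where x drives y with a one-step lag.
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(target, *regressors):
    """Residual sum of squares of a least-squares fit (no intercept;
    the toy signals are zero-mean)."""
    A = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    r = target - A @ beta
    return float(r @ r)

rss_restricted = rss(y[2:], y[1:-1])            # y's own past only
rss_full = rss(y[2:], y[1:-1], x[1:-1])         # own past + driver's past
improvement = 1.0 - rss_full / rss_restricted   # large -> x Granger-causes y
```

    A real analysis would add a significance test (e.g. an F-test on the RSS reduction) and, as in the paper, restrict the comparison to a window short enough that a spontaneous quasi-periodic ELM could not explain the match.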

  7. An MRF-based device for the torque stiffness control of all movable vertical tails

    NASA Astrophysics Data System (ADS)

    Ameduri, Salvatore; Concilio, Antonio; Gianvito, Antonio; Lemme, Manuel

    2005-05-01

    The efficiency of aerodynamic control surfaces is among the major parameters defining the performance of a generic aircraft and is strongly affected by geometric and stiffness characteristics. A target of the '3AS' European project is to estimate the potential benefits of adaptive control of the torque rigidity of the vertical tail of the EuRAM wind-tunnel model. The specific role of CIRA inside the project is the design of a device based on the "Smart Structures and Materials" concept, able to produce the required stiffness variations. Numerical and experimental investigations pointed out that wide excursions of the tail torque rigidity may assure higher efficiency for several flight regimes. Stiffness variations may be obtained through both classical mechanical-hydraulic and smart systems; in that choice, the attainable weight and reliability level may be the deciding parameters. For this reason, CIRA also focused its efforts on the design of devices without heavy mechanical parts. The device described in this work consists schematically of linear springs linked in a suitable way to the tail shaft. The required stiffness variations are achieved by selectively locking one or more springs through an MRF-based hydraulic system. An optimisation process was performed to find the spring features maximising the achievable stiffness range; the hydraulic MRF design was then addressed. Finally, based on numerical predictions, a prototype was manufactured and an experimental campaign was performed to estimate the device's static and dynamic behaviour.

  8. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies.

    PubMed

    Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted estimator and the combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient. However, it becomes inconsistent when homogeneity fails. The proposed shrinkage estimator, on the other hand, remains efficient: its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate the application of these methods by estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
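    The shrinkage idea can be sketched in an uncensored toy version: pull each site's median toward the pooled median, shrinking more when the sites look homogeneous. The weight rule below is an illustrative choice of mine; the paper's estimator additionally handles right-censoring and uses a hypothesis-based weight:

```python
import numpy as np

def shrink_medians(samples, c=1.0):
    """Toy shrinkage of per-site medians toward the pooled median.
    w -> 1 (use the combined estimate) when site medians agree;
    w -> 0 (fall back to per-site estimates) when they disagree."""
    site = np.array([np.median(s) for s in samples])
    pooled = np.median(np.concatenate(samples))
    spread = float(np.mean((site - pooled) ** 2))
    w = c / (c + spread)
    return w * pooled + (1.0 - w) * site

# Homogeneous sites: estimates collapse onto the pooled median.
est_homog = shrink_medians([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
# Heterogeneous sites: estimates stay close to the per-site medians.
est_heter = shrink_medians([[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]])
```

    This mirrors the trade-off in the abstract: the combined estimator wins under homogeneity but is inconsistent without it, while the shrinkage estimator adapts between the two regimes.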

  9. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies

    PubMed Central

    Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C.

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted estimator and the combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient. However, it becomes inconsistent when homogeneity fails. The proposed shrinkage estimator, on the other hand, remains efficient: its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as good as that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate the application of these methods by estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study. PMID:29772007

  10. FIBER AND INTEGRATED OPTICS: Efficiency of nonstationary transformation of the spatial coherence of pulsed laser radiation in a multimode optical fibre upon self-phase modulation

    NASA Astrophysics Data System (ADS)

    Kitsak, M. A.; Kitsak, A. I.

    2007-08-01

    The model scheme of the nonlinear mechanism of transformation (decrease) of the spatial coherence of a pulsed laser field in an extended multimode optical fibre upon nonstationary interaction with the fibre core is analysed theoretically. The case is considered in which the spatial statistics of the input radiation are caused by phase fluctuations. An analytic expression is obtained which relates the number of spatially coherent radiation modes to the spatial and energy parameters of the initial radiation and to the fibre parameters. The efficiency of decorrelation of the radiation upon excitation of the thermal and electrostriction nonlinearities in the fibre is estimated. Experimental studies are performed which reveal the basic properties of the transformation of the spatial coherence of a laser beam in a multimode fibre. The experimental results are compared with the predictions of the radiation-transfer model proposed in the paper. It is found that the spatial decorrelation of a light beam in a silica multimode fibre is mainly restricted by stimulated Raman scattering.

  11. Design and simulation of different multilayer solar selective coatings for solar thermal applications

    NASA Astrophysics Data System (ADS)

    El-Mahallawy, Nahed; Atia, Mostafa R. A.; Khaled, Amany; Shoeib, Madiha

    2018-04-01

    Recent research has focused on improving the efficiency and durability of solar collectors by coating their surfaces with special selective coatings. The selectivity of a coating is governed by the ratio of its absorptivity in the UV range to its emissivity in the IR range. There is therefore a need for simulation software to estimate the effect of different elements and compounds on the optical properties before moving to experimental analysis. Several studies have discussed the stability and durability of coatings under high-temperature conditions, since it has been shown that coating efficiency increases at high temperature, i.e. the coating becomes more selective. This research approaches the simulation of different metal (M) / metal oxide (MOx) based tandems in order to obtain promising selective properties that can be taken into further experimental investigation. Five metals and six metal oxides, chosen on the basis of previous literature, were simulated using the open-source OpenFilters software, and the results were analysed. Oxides of tungsten, copper, and silicon showed superior selective results, through different layering techniques, compared with the other candidates.
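    The selectivity figure of merit described above is a ratio of band-averaged absorptance to band-averaged emittance. A minimal sketch with flat spectral weighting and made-up spectra; a realistic calculation would weight by the AM1.5 solar spectrum and the blackbody spectrum at the operating temperature:

```python
import numpy as np

def selectivity(absorptance_solar, emittance_ir):
    """Ratio of band-averaged solar absorptance to band-averaged thermal
    emittance, assuming both spectra are sampled on uniform wavelength
    grids (flat spectral weighting, for simplicity)."""
    return float(np.mean(absorptance_solar) / np.mean(emittance_ir))

# Idealized flat spectra for a good selective coating: strong absorption in
# the solar band, weak emission in the thermal IR band.
absorptance = np.full(100, 0.95)   # sampled over, e.g., 0.3-2.5 um
emittance = np.full(100, 0.05)     # sampled over, e.g., 2.5-25 um
sel = selectivity(absorptance, emittance)   # ~19 for this idealized coat
```

    Simulated transmittance/reflectance spectra from a thin-film code such as OpenFilters can be converted to absorptance (A = 1 - R - T) and fed through the same averaging to rank candidate multilayer stacks.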

  12. Efficient experimental design for uncertainty reduction in gene regulatory networks.

    PubMed

    Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R

    2015-01-01

    An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.
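    The MOCU bookkeeping can be sketched on a toy discrete uncertainty class: a few candidate networks with prior weights, a few interventions, and a cost table. MOCU is the expected excess cost of the robust intervention over each model's own optimum. The numbers below are made up for illustration:

```python
import numpy as np

# Toy uncertainty class: 3 candidate network models with prior weights,
# 2 candidate interventions, and a hypothetical cost table
# cost[model, intervention].
prior = np.array([0.5, 0.3, 0.2])
cost = np.array([[1.0, 3.0],
                 [2.0, 1.0],
                 [4.0, 2.0]])

expected_cost = prior @ cost                # expected cost of each intervention
robust = int(np.argmin(expected_cost))      # best intervention on average
best_per_model = cost.min(axis=1)           # each model's own optimal cost
mocu = float(prior @ (cost[:, robust] - best_per_model))
```

    Experimental design then scores each candidate experiment by the expected remaining MOCU after observing its outcome (which prunes models from the class and reweights the prior); the paper's contribution is approximating that score cheaply on reduced networks.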

  13. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based on the objective for which gene regulatory networks are modeled, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one that leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method performs nearly as well as the optimal method at a lower computational cost, and significantly outperforms the random selection policy. A MATLAB implementation of the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515

  14. 16 CFR 305.5 - Determinations of estimated annual energy consumption, estimated annual operating cost, and...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... consumption, estimated annual operating cost, and energy efficiency rating, and of water use rate. 305.5... energy efficiency rating, and of water use rate. (a) Procedures for determining the estimated annual energy consumption, the estimated annual operating costs, the energy efficiency ratings, and the efficacy...

  15. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function.

    PubMed

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A; Lu, Zhong-Lin; Myung, Jay I

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures address the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the quick contrast sensitivity function method (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. The results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias.
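    The entropy-based stimulus selection underlying both the standard procedure and HADO can be illustrated with a one-parameter toy example. This is a hedged sketch, not the published implementation: the logistic psychometric function, grids, and priors below are invented. The second call mimics HADO's key idea by replacing the diffuse prior with an informative one, which shifts the selected stimulus toward the prior mode.

```python
import numpy as np

def psychometric(x, threshold, slope=1.0):
    # probability of a positive response (logistic psychometric function)
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

def next_stimulus(prior, thresholds, candidates):
    """Pick the candidate stimulus minimizing the expected posterior entropy."""
    best_x, best_h = None, np.inf
    for x in candidates:
        p_resp = psychometric(x, thresholds)        # P(yes | each threshold value)
        p_yes = np.sum(prior * p_resp)
        h = 0.0
        for resp_p, p_r in ((p_resp, p_yes), (1.0 - p_resp, 1.0 - p_yes)):
            post = prior * resp_p
            post = post / post.sum()                # posterior after this response
            h += p_r * -np.sum(post * np.log(post + 1e-12))
        if h < best_h:
            best_x, best_h = x, h
    return best_x

thresholds = np.linspace(-3.0, 3.0, 61)
candidates = np.linspace(-3.0, 3.0, 13)

diffuse = np.ones_like(thresholds) / thresholds.size       # noninformative prior
x_diffuse = next_stimulus(diffuse, thresholds, candidates)

informative = np.exp(-(thresholds - 1.5) ** 2 / (2 * 0.3 ** 2))  # HADO-style prior
informative /= informative.sum()
x_informed = next_stimulus(informative, thresholds, candidates)
```

    Under the diffuse prior the most informative stimulus sits near the center of the threshold grid; under the informative prior it moves toward the region the prior already favors, so trials are spent where they discriminate best.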

  16. A hierarchical Bayesian approach to adaptive vision testing: A case study with the contrast sensitivity function

    PubMed Central

    Gu, Hairong; Kim, Woojae; Hou, Fang; Lesmes, Luis Andres; Pitt, Mark A.; Lu, Zhong-Lin; Myung, Jay I.

    2016-01-01

    Measurement efficiency is of concern when a large number of observations are required to obtain reliable estimates for parametric models of vision. The standard entropy-based Bayesian adaptive testing procedures address the issue by selecting the most informative stimulus in sequential experimental trials. Noninformative, diffuse priors were commonly used in those tests. Hierarchical adaptive design optimization (HADO; Kim, Pitt, Lu, Steyvers, & Myung, 2014) further improves the efficiency of the standard Bayesian adaptive testing procedures by constructing an informative prior using data from observers who have already participated in the experiment. The present study represents an empirical validation of HADO in estimating the human contrast sensitivity function. The results show that HADO significantly improves the accuracy and precision of parameter estimates, and therefore requires many fewer observations to obtain reliable inference about contrast sensitivity, compared to the quick contrast sensitivity function method (Lesmes, Lu, Baek, & Albright, 2010), which uses the standard Bayesian procedure. The improvement with HADO was maintained even when the prior was constructed from heterogeneous populations or a relatively small number of observers. The results of this case study support the conclusion that HADO can be used in Bayesian adaptive testing by replacing noninformative, diffuse priors with statistically justified informative priors without introducing unwanted bias. PMID:27105061

  17. Optical coherence tomography retinal image reconstruction via nonlocal weighted sparse representation

    NASA Astrophysics Data System (ADS)

    Abbasi, Ashkan; Monadjemi, Amirhassan; Fang, Leyuan; Rabbani, Hossein

    2018-03-01

    We present a nonlocal weighted sparse representation (NWSR) method for reconstruction of retinal optical coherence tomography (OCT) images. To reconstruct high signal-to-noise-ratio, high-resolution OCT images, efficient denoising and interpolation algorithms are necessary, especially when the original data were subsampled during acquisition. However, OCT images suffer from a high level of noise, which makes estimating sparse representations a difficult task. Thus, the proposed NWSR method merges sparse representations of multiple similar noisy and denoised patches to better estimate a sparse representation for each patch. First, the sparse representation of each patch is independently computed over an overcomplete dictionary, and then a nonlocal weighted sparse coefficient is computed by averaging the representations of similar patches. Since sparsity can reveal relevant information from noisy patches, combining the representations of noisy and denoised patches is beneficial to obtain a more robust estimate of the unknown sparse representation. The denoised patches are obtained by applying an off-the-shelf image denoising method, and our method provides an efficient way to exploit information from both noisy and denoised patches' representations. The experimental results on denoising and interpolation of spectral-domain OCT images demonstrate the effectiveness of the proposed NWSR method over existing state-of-the-art methods.
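    The core merging step — averaging the sparse codes of similar patches with nonlocal-means-style similarity weights — can be sketched as follows. This is a toy illustration, not the authors' NWSR code: it uses an identity dictionary (so each patch's "code" is the patch itself) and invented data, purely to show how weighted averaging of codes suppresses noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlocal_weighted_codes(codes, patches, ref, h):
    """Merge coefficient vectors of patches similar to patch `ref` using
    nonlocal-means-style weights exp(-||p_i - p_ref||^2 / h^2)."""
    d2 = np.sum((patches - patches[ref]) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)
    w /= w.sum()
    return w @ codes                      # weighted average of coefficient vectors

# toy setup: 20 noisy observations of one clean 8-pixel patch; with an
# identity dictionary the sparse code of a patch is just the patch itself
clean = rng.normal(size=8)
patches = clean + 0.3 * rng.normal(size=(20, 8))
codes = patches.copy()

merged = nonlocal_weighted_codes(codes, patches, ref=0, h=10.0)
err_single = np.linalg.norm(codes[0] - clean)     # one noisy code alone
err_merged = np.linalg.norm(merged - clean)       # nonlocal weighted merge
```

    The merged code lies closer to the clean patch than any single noisy code, which is the intuition behind combining representations of similar patches.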

  18. Comparison of simulated and experimental results of temperature distribution in a closed two-phase thermosyphon cooling system

    NASA Astrophysics Data System (ADS)

    Shaanika, E.; Yamaguchi, K.; Miki, M.; Ida, T.; Izumi, M.; Murase, Y.; Oryu, T.; Yanamoto, T.

    2017-12-01

    Superconducting generators offer numerous advantages over conventional generators of the same rating: they are lighter, smaller and more efficient. Amongst a host of methods for cooling HTS machinery, thermosyphon-based cooling systems have been employed due to their high heat transfer rate and near-isothermal operating characteristics. To use them optimally, it is essential to study the thermal characteristics of these cryogenic thermosyphons. To this end, a stand-alone neon thermosyphon cooling system with a topology resembling an HTS rotating machine was studied. Heat load tests were conducted on the neon thermosyphon cooling system by applying a series of heat loads to the evaporator at different filling ratios. The temperature at selected points of the evaporator, adiabatic tube and condenser, as well as the total heat leak, were measured. A further study involving a computer thermal model was conducted to gain insight into the estimated temperature distribution of the thermosyphon components and the heat leak of the cooling system. The model employed boundary conditions from the heat load test data. This work presents a comparison between estimated (by model) and experimental (measured) temperature distributions in a two-phase cryogenic thermosyphon cooling system. The simulated temperature distribution and heat leak compared generally well with experimental data.

  19. Nanonewton thrust measurement of photon pressure propulsion using semiconductor laser

    NASA Astrophysics Data System (ADS)

    Iwami, K.; Akazawa, Taku; Ohtsuka, Tomohiro; Nishida, Hiroyuki; Umeda, Norihiro

    2011-09-01

    To evaluate the thrust produced by photon pressure emitted from a 100 W class continuous-wave semiconductor laser, a precise torsion-balance thrust stand is designed and tested. Photon emission propulsion using semiconductor light sources attracts interest as a possible candidate for deep-space propellant-less propulsion and attitude control systems. However, measuring a thrust of only several tens of nanonewtons, as produced by photon emission, requires a precise thrust stand. A resonant method is adopted to enhance the sensitivity of the bifilar torsional-spring thrust stand. The torsional spring constant and resonant frequency of the stand are 1.245 × 10⁻³ N·m/rad and 0.118 Hz, respectively. The experimental results showed good agreement with the theoretical estimation. The thrust efficiency for photon propulsion was also defined. A maximum thrust of 499 nN was produced by the laser at 208 W input power (75 W of optical output), corresponding to a thrust efficiency of 36.7%. The minimum detectable thrust of the stand was estimated to be 2.62 nN under oscillation at a frequency close to resonance.

  20. Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network

    NASA Astrophysics Data System (ADS)

    Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea

    Currently deployed mobile networks, including High Speed Downlink Packet Access (HSDPA), offer only best-effort Quality of Service (QoS). In wireless best-effort networks, bandwidth variation is a critical problem, especially for mobile devices with small buffers, because it leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of available bandwidth (AB) estimation for the HSDPA network and transmission rate control to prevent buffer overflows/underflows. In the proposed method, the client estimates the AB, and the estimate is fed back to the server through real-time transport control protocol (RTCP) packets. The server then adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network, providing higher video quality and lower transmission delay.
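    The client-side AB estimation and server-side rate control loop described above can be sketched as follows. This is an illustrative sketch, not the authors' algorithm: the exponentially weighted moving average (EWMA) estimator, the proportional buffer controller, and all parameter values (smoothing factor, target buffer, gain) are invented for the example.

```python
import numpy as np

def estimate_bandwidth(samples_kbps, alpha=0.3):
    """Client side: EWMA of per-feedback-interval throughput samples (kbps)."""
    est = float(samples_kbps[0])
    for s in samples_kbps[1:]:
        est = alpha * s + (1.0 - alpha) * est
    return est

def next_send_rate(ab_kbps, buffer_ms, target_ms=2000.0, gain=0.5):
    """Server side: throttle below the estimated AB when the client buffer is
    above target (overflow risk); otherwise send at the full AB to refill it."""
    correction = gain * (target_ms - buffer_ms) / target_ms
    return float(np.clip(ab_kbps * (1.0 + correction), 0.0, ab_kbps))

ab = estimate_bandwidth([800, 750, 400, 420, 900])   # throughput samples (kbps)
rate_low = next_send_rate(ab, buffer_ms=1000.0)      # buffer low  -> send at AB
rate_high = next_send_rate(ab, buffer_ms=3000.0)     # buffer high -> throttle
```

    The EWMA damps the large sample-to-sample bandwidth swings typical of 3.5G links, while the buffer-state correction steers the client buffer back toward its target between RTCP reports.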

  1. Online estimation of internal stack temperatures in solid oxide fuel cell power generating units

    NASA Astrophysics Data System (ADS)

    Dolenc, B.; Vrečko, D.; Juričić, Ɖ.; Pohjoranta, A.; Pianese, C.

    2016-12-01

    Thermal stress is one of the main factors affecting the degradation rate of solid oxide fuel cell (SOFC) stacks. In order to mitigate the possibility of fatal thermal stress, stack temperatures and the corresponding thermal gradients need to be continuously controlled during operation. Due to the fact that in future commercial applications the use of temperature sensors embedded within the stack is impractical, the use of estimators appears to be a viable option. In this paper we present an efficient and consistent approach to data-driven design of the estimator for maximum and minimum stack temperatures intended (i) to be of high precision, (ii) to be simple to implement on conventional platforms like programmable logic controllers, and (iii) to maintain reliability in spite of degradation processes. By careful application of subspace identification, supported by physical arguments, we derive a simple estimator structure capable of producing estimates with 3% error irrespective of the evolving stack degradation. The degradation drift is handled without any explicit modelling. The approach is experimentally validated on a 10 kW SOFC system.
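    A minimal data-driven temperature estimator in the spirit of the paper can be sketched with least-squares ARX identification. This is a hedged sketch: the authors use subspace identification on a real 10 kW system, whereas the synthetic load signal, model orders, and coefficients below are invented to show the identify-then-simulate workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]."""
    n = max(na, nb)
    Phi = np.array([np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
                    for k in range(n, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta

def predict_arx(theta, u, y_init, na=2, nb=2):
    """Free-running simulation of the identified model from initial outputs."""
    y = list(y_init)
    for k in range(len(y_init), len(u)):
        phi = np.concatenate([np.array(y[k - na:k][::-1]), u[k - nb:k][::-1]])
        y.append(float(phi @ theta))
    return np.array(y)

# synthetic "maximum stack temperature" responding to a load input u
u = rng.uniform(0.0, 1.0, 300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = 0.6 * y[k - 1] + 0.2 * y[k - 2] + 0.5 * u[k - 1] + 0.1 * u[k - 2]

theta = fit_arx(u, y)                       # identified coefficients
y_hat = predict_arx(theta, u, y[:2])        # sensorless temperature estimate
rel_err = np.max(np.abs(y_hat - y)) / np.max(np.abs(y))
```

    Such an estimator runs from operating measurements alone, which matches the paper's motivation of avoiding temperature sensors embedded within the stack; handling degradation drift, as the paper does, would require a structure robust to slow parameter change.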

  2. A Balanced Approach to Adaptive Probability Density Estimation.

    PubMed

    Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy

    2017-01-01

    Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
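    The idea of adapting the smoothing level at each point can be sketched with a kernel density estimator whose per-sample bandwidth is set by a nearest-neighbor distance, so sparse regions get more smoothing than dense ones. This is an illustrative adaptive KDE, not the published BADE algorithm; the sample mixture and the parameters k and alpha are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def adaptive_kde(x_eval, samples, k=10, alpha=1.0):
    """Gaussian KDE with a per-sample bandwidth proportional to the distance
    to the k-th nearest neighbour (balloon-style adaptive smoothing)."""
    samples = np.asarray(samples, dtype=float)
    d = np.abs(samples[:, None] - samples[None, :])
    d.sort(axis=1)
    h = alpha * d[:, k]                       # column 0 is the point itself
    z = (x_eval[:, None] - samples[None, :]) / h
    dens = np.exp(-0.5 * z ** 2) / (h * np.sqrt(2.0 * np.pi))
    return dens.mean(axis=1)

# a very uneven sample: a broad component plus a small, tight one
samples = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(6.0, 0.5, 100)])
xs = np.linspace(-4.0, 8.0, 241)
pdf = adaptive_kde(xs, samples)
area = pdf.sum() * (xs[1] - xs[0])            # should integrate to about 1
```

    A fixed-bandwidth KDE would either oversmooth the tight component or undersmooth the broad one; the nearest-neighbor bandwidth resolves both, which is the kind of behavior BADE optimizes for.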

  3. Fragmentation efficiency of explosive volcanic eruptions: A study of experimentally generated pyroclasts

    NASA Astrophysics Data System (ADS)

    Kueppers, Ulrich; Scheu, Bettina; Spieler, Oliver; Dingwell, Donald B.

    2006-05-01

    Products of magma fragmentation can pose a severe threat to health, infrastructure, environment, and aviation. Systematic evaluation of the mechanisms and consequences of volcanic fragmentation is very difficult, as the processes involved cannot be observed directly and their deposits undergo transport-related sorting. However, enhanced knowledge is required for hazard assessment and risk mitigation. Laboratory experiments on natural samples allow the precise characterization of the generated pyroclasts and open the possibility for substantial advances in the quantification of fragmentation processes. They hold the promise of precise characterization and quantification of fragmentation efficiency and its dependence on changing material properties and the physical conditions at fragmentation. We performed a series of rapid decompression experiments on three sets of natural samples from Unzen volcano, Japan. The analysis comprised grain-size and surface-area measurements. The grain-size analysis was performed by dry sieving for particles larger than 250 μm and by wet laser refraction for smaller particles. For all three sets of samples, the grain size of the most abundant fraction decreases, and the weight fraction of newly generated ash particles (up to 40 wt.%) increases, with experimental pressure/potential energy for fragmentation. This energy can be estimated from the volume of the gas fraction and the applied pressure. The surface area was determined through argon adsorption. The fragmentation efficiency is described by the degree of fine-particle generation. Results show that the fragmentation efficiency and the generated surface correlate positively with the applied energy.
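    The potential energy for fragmentation, estimated "from the volume of the gas fraction and the applied pressure", can be sketched by assuming adiabatic expansion of an ideal gas from the applied pressure down to atmospheric pressure. This is an illustrative assumption, not the authors' exact formulation, and the sample volume, porosity, pressure, and heat-capacity ratio below are invented example values.

```python
def expansion_energy(p0, v_gas, p_atm=1.0e5, gamma=1.4):
    """Work (J) released by adiabatic expansion of an ideal gas occupying
    volume v_gas (m^3) at pressure p0 (Pa), expanding down to p_atm."""
    return (p0 * v_gas / (gamma - 1.0)) * \
        (1.0 - (p_atm / p0) ** ((gamma - 1.0) / gamma))

# e.g. a 60 cm^3 sample with 30% porosity pressurized to 15 MPa
e_joules = expansion_energy(15.0e6, 0.3 * 60.0e-6)
```

    The energy grows with both the applied pressure and the connected gas volume, consistent with the observed increase in fine-ash generation at higher experimental pressures.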

  4. A Spherical Aerial Terrestrial Robot

    NASA Astrophysics Data System (ADS)

    Dudley, Christopher J.

    This thesis focuses on the design of a novel, ultra-lightweight spherical aerial terrestrial robot (ATR). The ATR has the ability to fly through the air or roll on the ground, for applications that include search and rescue, mapping, surveillance, environmental sensing, and entertainment. The design centers on a micro-quadcopter encased in a lightweight spherical exoskeleton that can rotate about the quadcopter. The spherical exoskeleton offers agile ground locomotion while maintaining the characteristics of a basic aerial robot in flying mode. A model of the system dynamics for both modes of locomotion is presented and utilized in simulations to generate potential trajectories for aerial and terrestrial locomotion. Details of the quadcopter and exoskeleton design and fabrication are discussed, including the robot's turning characteristic over ground and the spring-steel exoskeleton with carbon fiber axle. The capabilities of the ATR are experimentally tested and are in good agreement with model-simulated performance. An energy analysis is presented to validate the overall efficiency of the robot in both modes of locomotion. Experimentally supported estimates show that the ATR can roll along the ground for over 12 minutes and cover a distance of 1.7 km, or it can fly for 4.82 minutes and travel 469 m, on a single 350 mAh battery. Compared to a traditional flying-only robot traveling the same distance, the ATR in rolling mode is 2.63 times more efficient, and in flying mode it is only 39 percent less efficient. Experimental results also demonstrate the ATR's transition from rolling to flying mode.

  5. Closed-Loop Estimation of Retinal Network Sensitivity by Local Empirical Linearization

    PubMed Central

    2018-01-01

    Understanding how sensory systems process information depends crucially on identifying which features of the stimulus drive the response of sensory neurons, and which ones leave their response invariant. This task is made difficult by the many nonlinearities that shape sensory processing. Here, we present a novel perturbative approach to understand information processing by sensory neurons, where we linearize their collective response locally in stimulus space. We added small perturbations to reference stimuli and tested if they triggered visible changes in the responses, adapting their amplitude according to the previous responses with closed-loop experiments. We developed a local linear model that accurately predicts the sensitivity of the neural responses to these perturbations. Applying this approach to the rat retina, we estimated the optimal performance of a neural decoder and showed that the nonlinear sensitivity of the retina is consistent with an efficient encoding of stimulus information. Our approach can be used to characterize experimentally the sensitivity of neural systems to external stimuli locally, quantify experimentally the capacity of neural networks to encode sensory information, and relate their activity to behavior. PMID:29379871

  6. An Experimental Study of Energy Consumption in Buildings Providing Ancillary Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yashen; Afshari, Sina; Wolfe, John

    Heating, ventilation, and air conditioning (HVAC) systems in commercial buildings can provide ancillary services (AS) to the power grid, but by providing AS their energy consumption may increase. This inefficiency is evaluated using round-trip efficiency (RTE), which is defined as the ratio between the decrease and the increase in the HVAC system's energy consumption compared to the baseline consumption as a result of providing AS. This paper evaluates the RTE of a 30,000 m2 commercial building providing AS. We propose two methods to estimate the HVAC system's settling time after an AS event based on temperature and air flow measurements from the building. Experimental data gathered over a 4-month period are used to calculate the RTE for AS signals of various waveforms, magnitudes, durations, and polarities. The results indicate that the settling time estimation algorithm based on the air flow measurements obtains more accurate results compared to the temperature-based algorithm. Further, we study the impact of the AS signal shape parameters on the RTE and discuss the practical implications of our findings.

  7. Time-dependent, multimode interaction analysis of the gyroklystron amplifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swati, M. V., E-mail: swati.mv.ece10@iitbhu.ac.in; Chauhan, M. S.; Jain, P. K.

    2016-08-15

    In this paper, a time-dependent multimode nonlinear analysis for the gyroklystron amplifier has been developed by extending the self-consistent analysis of gyrotron oscillators. The nonlinear analysis developed here has been validated against the reported experimental results for a 32.3 GHz, three-cavity, second-harmonic gyroklystron operating in the TE02 mode. The analysis has been used to estimate the temporal RF growth in the operating mode as well as in the nearby competing modes. Device gain and bandwidth have been computed for different drive powers and frequencies. The effect of various beam parameters, such as beam voltage, beam current, and pitch factor, has also been studied. The computational results estimate a gyroklystron saturated RF power of ∼319 kW at 32.3 GHz with efficiency ∼23% and gain ∼26.3 dB, with a device bandwidth of ∼0.027% (8 MHz), for a 70 kV, 20 A electron beam. The computed results are found to be in agreement with the experimental values within 10%.

  8. Absolute Paleointensity Estimates using Combined Shaw and Pseudo-Thellier Experimental Protocols

    NASA Astrophysics Data System (ADS)

    Foucher, M. S.; Smirnov, A. V.

    2016-12-01

    Data on the long-term evolution of Earth's magnetic field intensity have great potential to advance our understanding of many aspects of the Earth's evolution. However, paleointensity determination is one of the most challenging aspects of paleomagnetic research, so the quantity and quality of existing paleointensity data remain limited, especially for older epochs. While the Thellier double-heating method remains the most commonly used paleointensity technique, its applicability is limited for many rocks that undergo magneto-mineralogical alteration during the successive heating steps required by the method. In order to reduce the probability of alteration, several alternative methods that involve a limited number of heating steps, or none at all, have been proposed. However, continued efforts are needed to better understand the physical foundations and relative efficiency of reduced/non-heating methods in recovering the true paleofield strength, and to better constrain their calibration factors. We will present the results of our investigation of synthetic and natural magnetite-bearing samples using a combination of the LTD-DHT Shaw and pseudo-Thellier experimental protocols for absolute paleointensity estimation.

  9. Uncertainties in the estimation of specific absorption rate during radiofrequency alternating magnetic field induced non-adiabatic heating of ferrofluids

    NASA Astrophysics Data System (ADS)

    Lahiri, B. B.; Ranoo, Surojit; Philip, John

    2017-11-01

    Magnetic fluid hyperthermia (MFH) is becoming a viable cancer treatment methodology in which the alternating magnetic field induced heating of a magnetic fluid is utilized for ablating cancerous cells or making them more susceptible to conventional treatments. The heating efficiency in MFH is quantified in terms of the specific absorption rate (SAR), which is defined as the heating power generated per unit mass. In the majority of experimental studies, SAR is evaluated from temperature rise curves obtained under non-adiabatic experimental conditions, which are prone to various thermodynamic uncertainties. A proper understanding of the experimental uncertainties and their remedies is a prerequisite for obtaining accurate and reproducible SAR. Here, we study the thermodynamic uncertainties associated with peripheral heating, delayed heating, heat loss from the sample, and spatial variation in the temperature profile within the sample. Using first-order approximations, an adiabatic reconstruction protocol for the measured temperature rise curves is developed for SAR estimation, which is found to be in good agreement with results obtained from the computationally intense slope-corrected method. Our experimental findings clearly show that the peripheral and delayed heating are due to radiation heat transfer from the heating coils and the slower response time of the sensor, respectively. Our results suggest that the peripheral heating is linearly proportional to the sample area-to-volume ratio and the coil temperature. It is also observed that peripheral heating decreases in the presence of a non-magnetic insulating shield. The delayed heating is found to contribute up to ~25% uncertainty in SAR values. As SAR values are very sensitive to the initial slope determination method, explicit mention of the range used for linear regression analysis is necessary to reproduce the results. The effect of the sample volume-to-area ratio on the linear heat loss rate is systematically studied and the results are compared using a lumped-system thermal model. The various uncertainties involved in SAR estimation are categorized as material uncertainties, thermodynamic uncertainties and parametric uncertainties. The adiabatic reconstruction is found to decrease the uncertainties in SAR measurement by approximately three times. Additionally, a set of experimental guidelines for accurate SAR estimation using the adiabatic reconstruction protocol is recommended. These results warrant a universal experimental and data analysis protocol for SAR measurements during field-induced heating of magnetic fluids under non-adiabatic conditions.
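    The sensitivity of initial-slope SAR estimates to the regression range can be illustrated with a toy saturating temperature curve. This is a hedged sketch, not the authors' adiabatic reconstruction protocol: the exponential rise, specific heat, mass fraction, and window choices are invented illustrative values.

```python
import numpy as np

def sar_initial_slope(t, temp, mass_fraction, c_p=4186.0, window=(0.0, 30.0)):
    """SAR (W per kg of magnetic material) from the initial slope of a
    temperature-rise curve: SAR = c_p * (dT/dt) / mass_fraction, with dT/dt
    obtained by linear regression over an explicitly stated time window."""
    mask = (t >= window[0]) & (t <= window[1])
    dT_dt = np.polyfit(t[mask], temp[mask], 1)[0]   # K/s
    return c_p * dT_dt / mass_fraction

# toy non-adiabatic rise: heat loss makes the curve saturate over time
t = np.linspace(0.0, 120.0, 241)                    # s
temp = 25.0 + 5.0 * (1.0 - np.exp(-t / 100.0))      # true initial slope 0.05 K/s
sar_short = sar_initial_slope(t, temp, 0.01, window=(0.0, 10.0))
sar_long = sar_initial_slope(t, temp, 0.01, window=(0.0, 60.0))
```

    The longer window underestimates the initial slope because the curve has already begun to saturate, which is exactly why the abstract recommends always reporting the regression range.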

  10. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can be possibly attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
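    For a model that is linear in its parameters with Gaussian noise, the CRB computation reduces to inverting the Fisher information matrix; the sketch below also shows how perfect a priori knowledge of one parameter tightens the bound on another, as discussed above. The two-parameter "reflectance" model, band centres, and noise level are invented toy values, not the bio-optical models of the paper.

```python
import numpy as np

def crb_diag(J, C):
    """Cramer-Rao bounds for unbiased estimators in Gaussian noise:
    diagonal of inv(F), with Fisher information F = J' C^-1 J."""
    F = J.T @ np.linalg.inv(C) @ J
    return np.diag(np.linalg.inv(F))

# toy linear reflectance model r(lam) = a + b*lam, white noise per band
lam = np.linspace(0.4, 0.7, 6)                     # band centres (um)
J = np.column_stack([np.ones_like(lam), lam])      # dr/da, dr/db
C = 0.01 * np.eye(6)                               # noise covariance
var_a, var_b = crb_diag(J, C)                      # minimum estimation variances

# perfect a priori knowledge of b shrinks the bound on a
var_a_b_known = 1.0 / (J[:, 0] @ np.linalg.inv(C) @ J[:, 0])
```

    No inversion of the forward model is performed: the bounds come directly from the model Jacobian and the noise covariance, which is what makes the CRB approach cheap to evaluate for any sensor configuration.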

  11. In situ measurement of gold nanoparticle production

    NASA Astrophysics Data System (ADS)

    Affandi, Mohd Syafiq; Bidin, Noriah; Abdullah, Mundzir; Aziz, Muhammad Safuan Abd.; Al-Azawi, Mohammed; Nugroho, Waskito

    2015-01-01

    The closeness of the experimental and theoretical values enables the development of an in situ characterization technique to monitor and analyze the production of gold nanoparticles (NPs), overcoming the need for high-end and expensive instrumentation. Gold NPs below a radius of 10 nm were successfully synthesized under several working parameters of the pulsed laser ablation in liquid technique. In this report, the size, shape, concentration, and aggregation properties of the gold NPs were estimated by the Mie-Gans model based on reliable and interactive real-time absorption spectroscopy. These features provide an important means of determining efficient process settings, the productivity of the generated gold NPs, and the mass ablation rate. The accuracy of the measurement is confirmed via transmission electron microscopy analysis.

  12. Nanocluster metal films as thermoelectric material for radioisotope mini battery unit

    NASA Astrophysics Data System (ADS)

    Borisyuk, P. V.; Krasavin, A. V.; Tkalya, E. V.; Lebedinskii, Yu. Yu.; Vasiliev, O. S.; Yakovlev, V. P.; Kozlova, T. I.; Fetisov, V. V.

    2016-10-01

    The paper is devoted to studying the thermoelectric and structural properties of films based on metal nanoclusters (Au, Pd, Pt). Experimental results on the tunneling conductance of single nanoclusters, obtained with scanning tunneling spectroscopy, are presented. The obtained data allowed us to evaluate the thermoelectric power of a thin film consisting of densely packed individual nanoclusters. It is shown that such thin films can operate as highly efficient thermoelectric materials. A scheme for a miniature thermoelectric radioisotope power source based on the thorium-228 isotope is proposed. The efficiency of the radioisotope battery using thermoelectric converters based on nanocluster metal films is shown to reach values up to 1.3%. The estimated characteristics of the device are comparable with the parameters of up-to-date radioisotope batteries based on nickel-63.

  13. Quantifying Impact of Chromosome Copy Number on Recombination in Escherichia coli.

    PubMed

    Reynolds, T Steele; Gill, Ryan T

    2015-07-17

    The ability to precisely and efficiently recombineer synthetic DNA into organisms of interest in a quantitative manner is a key requirement in genome engineering. Even though considerable effort has gone into the characterization of recombination in Escherichia coli, there is still substantial variability in reported recombination efficiencies. We hypothesized that this observed variability could, in part, be explained by the variability in chromosome copy number as well as the location of the replication forks relative to the recombination site. During rapid growth, E. coli cells may contain several pairs of open replication forks. While recombineered forks are resolving and segregating within the population, changes in apparent recombineering efficiency should be observed. In the case of dominant phenotypes, we predicted and then experimentally confirmed that the apparent recombination efficiency declined during recovery until complete segregation of recombineered and wild-type genomes had occurred. We observed the reverse trend for recessive phenotypes. The observed changes in apparent recombination efficiency were found to be in agreement with mathematical calculations based on our proposed mechanism. We also provide a model that can be used to estimate the total segregated recombination efficiency based on an initial efficiency and growth rate. These results emphasize the importance of employing quantitative strategies in the design of genome-scale engineering efforts.

  14. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting

    PubMed Central

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2015-01-01

    Summary The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function, or both, but their performance can be poor at practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions, without resorting to direct estimation of the propensity score or the outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression function. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982

  15. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function, or both, but their performance can be poor at practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions, without resorting to direct estimation of the propensity score or the outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression function. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
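
    As a concrete illustration of one special case named above, the following sketch computes exponential-tilting calibration weights that force a group's weighted covariate means to match given target moments via Newton's method. It is an assumption-laden toy (random data, two covariates), not the authors' estimator.

```python
import numpy as np

# Toy sketch of exponential-tilting calibration: find weights
# w_i ∝ exp(lam @ x_i), sum(w) = 1, such that X.T @ w equals `target`.
# Newton's method on the convex dual f(lam) = logsumexp(X @ lam) - lam @ target.

def exponential_tilting_weights(X, target, n_iter=50):
    n, p = X.shape
    lam = np.zeros(p)
    for _ in range(n_iter):
        u = np.exp(X @ lam)
        w = u / u.sum()
        m = X.T @ w                       # current weighted moments
        grad = m - target                 # moment imbalance
        H = (X * w[:, None]).T @ X - np.outer(m, m)   # weighted covariance
        lam -= np.linalg.solve(H + 1e-10 * np.eye(p), grad)
    u = np.exp(X @ lam)
    return u / u.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))             # invented covariates
target = np.array([0.3, -0.2])            # invented target moments
w = exponential_tilting_weights(X, target)
print(np.round(X.T @ w, 3))               # weighted means match the target
```

    Empirical likelihood and generalized regression would swap in a different weight functional, but the moment-balancing constraint is the same.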

  16. High Throughput 600 Watt Hall Effect Thruster for Space Exploration

    NASA Technical Reports Server (NTRS)

    Szabo, James; Pote, Bruce; Tedrake, Rachel; Paintal, Surjeet; Byrne, Lawrence; Hruby, Vlad; Kamhawi, Hani; Smith, Tim

    2016-01-01

    A nominal 600-Watt Hall Effect Thruster was developed to propel unmanned space vehicles. Both xenon and iodine compatible versions were demonstrated. With xenon, peak measured thruster efficiency is 46-48% at 600-W, with specific impulse from 1400 s to 1700 s. Evolution of the thruster channel due to ion erosion was predicted through numerical models and calibrated with experimental measurements. Estimated xenon throughput is greater than 100 kg. The thruster is well sized for satellite station keeping and orbit maneuvering, either by itself or within a cluster.

  17. Product code optimization for determinate state LDPC decoding in robust image transmission.

    PubMed

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  18. Optical nonlinearity in gelatin layer film containing Au nanoparticles

    NASA Astrophysics Data System (ADS)

    Hirose, Tomohiro; Arisawa, Michiko; Omatsu, Takashige; Kuge, Ken'ichi; Hasegawa, Akira; Tateda, Mitsuhiro

    2002-09-01

    We demonstrate a novel technique to fabricate a gelatin film containing Au nanoparticles. The technique is based on silver halide photographic development. We investigated the third-order nonlinearity of the film by a forward four-wave mixing technique. Peak absorption appeared at a wavelength of 560 nm. Self-diffraction from a third-order nonlinear grating formed by intense picosecond pulses was observed. The experimental diffraction efficiency was proportional to the square of the pump intensity. The third-order susceptibility χ(3) of the film was estimated to be 1.8 × 10^-7 esu.

  19. Experimental characterization of plasma formation and shockwave propagation induced by high power pulsed underwater electrical discharge.

    PubMed

    Claverie, A; Deroy, J; Boustie, M; Avrillaud, G; Chuvatin, A; Mazanchenko, E; Demol, G; Dramane, B

    2014-06-01

    High power pulsed electrical discharges into liquids are investigated for new industrial applications based on the efficiency of controlled shock waves. We present new experimental data obtained by a combination of detailed high-speed imaging equipment, which allows visualization of the very first instants of plasma discharge formation and then of the pulsations of the gaseous bubble, with accurate timing of events. The time history of the expansion/compression of this bubble leads to an estimation of the energy effectively transferred to the water during the discharge. Finally, the consecutive shock generation driven by this pulsating bubble is optically monitored by shadowgraph and schlieren setups. These data provide essential information about the geometrical pattern and chronometry associated with shock wave generation and propagation.

  20. An experimental study of solar desalination using free jets and an auxiliary hot air stream

    NASA Astrophysics Data System (ADS)

    Eid, Eldesouki I.; Khalaf-Allah, Reda A.; Dahab, Mohamed A.

    2018-04-01

    An experimental study of a solar desalination system based on jet humidification with an auxiliary perpendicular hot air stream was carried out at Suez city, Egypt (29.9668°N, 32.5498°E). The tests were done from May to October 2016. The effects of nozzle position and nozzle diameter, with and without the hot air stream, on fresh water productivity were monitored. The results show that lateral and downward jets from narrow nozzles yield higher productivity than the other configurations, and that the hot air stream must be set to a suitable flow rate to obtain high productivity. The system productivity is 5.6 L/m², the estimated cost is 0.030063 / L, and the efficiency is 32.8%.

  1. Experimental Detection of Quantum Channel Capacities.

    PubMed

    Cuevas, Álvaro; Proietti, Massimiliano; Ciampini, Mario Arnolfo; Duranti, Stefano; Mataloni, Paolo; Sacchi, Massimiliano F; Macchiavello, Chiara

    2017-09-08

    We present an efficient experimental procedure that certifies nonvanishing quantum capacities for qubit noisy channels. Our method is based on the use of a fixed bipartite entangled state, where the system qubit is sent to the channel input. A particular set of local measurements is performed at the channel output and on the ancilla qubit, obtaining lower bounds to the quantum capacities for any unknown channel with no need of quantum process tomography. The entangled qubits have a Bell state configuration and are encoded in photon polarization. The lower bounds are found by estimating the Shannon and von Neumann entropies at the output using an optimized basis, whose statistics are obtained by measuring only the three observables σ_x⊗σ_x, σ_y⊗σ_y, and σ_z⊗σ_z.

  2. Inferring phase equations from multivariate time series.

    PubMed

    Tokuda, Isao T; Jain, Swati; Kiss, István Z; Hudson, John L

    2007-08-10

    An approach is presented for extracting phase equations from multivariate time series data recorded from a network of weakly coupled limit cycle oscillators. Our aim is to estimate important properties of the phase equations including natural frequencies and interaction functions between the oscillators. Our approach requires the measurement of an experimental observable of the oscillators; in contrast with previous methods it does not require measurements in isolated single or two-oscillator setups. This noninvasive technique can be advantageous in biological systems, where extraction of few oscillators may be a difficult task. The method is most efficient when data are taken from the nonsynchronized regime. Applicability to experimental systems is demonstrated by using a network of electrochemical oscillators; the obtained phase model is utilized to predict the synchronization diagram of the system.
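
    The estimation idea can be sketched for the simplest case of two oscillators. The sketch below is illustrative only (a synthetic Kuramoto-style pair with invented parameters, not the authors' electrochemical data): the phase velocity of one oscillator is regressed on a first-order Fourier expansion of the phase difference, recovering the natural frequency and coupling strength from the nonsynchronized regime.

```python
import numpy as np

# Illustrative sketch: fit dphi1/dt = omega + a*sin(dphi) + b*cos(dphi)
# by least squares, where dphi = phi2 - phi1.

def fit_phase_model(phi1, phi2, dt):
    dphi1 = np.diff(phi1) / dt                 # finite-difference phase velocity
    d = (phi2 - phi1)[:-1]                     # phase difference
    A = np.column_stack([np.ones_like(d), np.sin(d), np.cos(d)])
    coef, *_ = np.linalg.lstsq(A, dphi1, rcond=None)
    return coef                                # [omega, a, b]

# Synthetic data (invented parameters): phi1' = 1.0 + 0.2*sin(phi2 - phi1),
# phi2' = 1.3; frequency mismatch exceeds coupling, so no synchronization.
dt, T = 0.01, 5000
phi1 = np.zeros(T); phi2 = np.zeros(T)
for t in range(T - 1):
    phi1[t + 1] = phi1[t] + dt * (1.0 + 0.2 * np.sin(phi2[t] - phi1[t]))
    phi2[t + 1] = phi2[t] + dt * 1.3
omega, a, b = fit_phase_model(phi1, phi2, dt)
print(round(omega, 2), round(a, 2))            # recovers ≈ 1.0 and 0.2
```

    Because the phase difference drifts through all values in the nonsynchronized regime, the regression design is well conditioned, which is exactly why the method works best away from synchronization.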

  3. Estimating power capability of aged lithium-ion batteries in presence of communication delays

    NASA Astrophysics Data System (ADS)

    Fridholm, Björn; Wik, Torsten; Kuusisto, Hannes; Klintberg, Anton

    2018-04-01

    Efficient control of electrified powertrains requires accurate estimation of the power capability of the battery for the next few seconds into the future. When implemented in a vehicle, the power estimation is part of a control loop that may contain several networked controllers, which introduces time delays that may jeopardize stability. In this article, we present and evaluate an adaptive power estimation method that can robustly handle uncertain health status and time delays. A theoretical analysis shows that stability of the closed-loop system can be lost if the resistance of the model is under-estimated. Stability can, however, be restored by filtering the estimated power at the expense of a slightly reduced bandwidth of the signal. The adaptive algorithm is experimentally validated in lab tests using an aged lithium-ion cell subject to a high-power load profile at temperatures from -20 to +25 °C. The upper voltage limit was set to 4.15 V and the lower voltage limit to 2.6 V, where significant non-linearities occur and the validity of the model is limited. After an initial transient, during which the model parameters are adapted, the prediction accuracy is within ±2% of the actually available power.

  4. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging.

    PubMed

    Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J

    2006-09-01

    A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS), and their constrained counterparts) are established through their respective objective functions and the higher-order derivatives of these objective functions, i.e., their Hessian matrices. These theoretical connections provide new insights into designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel full Newton-type algorithms for NLS and CNLS estimation, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percentage of relative error in estimating the trace and a lower reduced χ² value than the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion-weighted signals is orientation-dependent when the signal-to-noise ratio (SNR) is low.

  5. An upper limit to the abundance of lightning-produced amino acids in the Jovian water clouds

    NASA Astrophysics Data System (ADS)

    Bar-Nun, A.; Noy, N.; Podolak, M.

    1984-08-01

    The effect of excess hydrogen on the synthesis of amino acids by high-temperature shock waves in a hydrogen/methane/ammonia/water vapor mixture was studied experimentally. The energy efficiency results, together with the best estimate of the lightning energy dissipation rate on Jupiter from the Voyager data, were used to calculate an upper limit to the rate of amino acid production by lightning in the Jovian water clouds. Using reasonable values for the eddy diffusion coefficients within and below the water clouds, the column abundance of lightning-produced amino acids in the clouds was estimated to be 6.2 × 10^-6 cm-am. Hence, the concentration of amino acids in water droplets would be 8 × 10^-8 mol/L.

  6. Efficient Estimation of the Standardized Value

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    2009-01-01

    We derive an estimator of the standardized value which, under the standard assumptions of normality and homoscedasticity, is more efficient than the established (asymptotically efficient) estimator and discuss its gains for small samples. (Contains 1 table and 3 figures.)

  7. Bayesian framework for modeling diffusion processes with nonlinear drift based on nonlinear and incomplete observations.

    PubMed

    Wu, Hao; Noé, Frank

    2011-03-01

    Diffusion processes are relevant for a variety of phenomena in the natural sciences, including diffusion of cells or biomolecules within cells, diffusion of molecules on a membrane or surface, and diffusion of a molecular conformation within a complex energy landscape. Many experimental tools now exist to track such diffusive motions in single cells or molecules, including high-resolution light microscopy, optical tweezers, fluorescence quenching, and Förster resonance energy transfer (FRET). Experimental observations are most often indirect and incomplete: (1) they do not directly reveal the potential or diffusion constants that govern the diffusion process, (2) they have limited time and space resolution, and (3) the highest-resolution experiments do not track the motion directly but rather probe it stochastically by recording single events, such as photons, whose properties depend on the state of the system under investigation. Here, we propose a general Bayesian framework to model diffusion processes with nonlinear drift based on incomplete observations as generated by various types of experiments. A maximum penalized likelihood estimator is given, as well as a Gibbs sampling method that allows estimation of the trajectories that caused the measurements, the nonlinear drift or potential function, and the noise or diffusion matrices, together with uncertainty estimates of these properties. The approach is illustrated on numerical simulations of FRET experiments, where it is shown that trajectories, potentials, and diffusion constants can be efficiently and reliably estimated even in cases with little statistics or nonequilibrium measurement conditions.

  8. Melanoma Is Skin Deep: A 3D Reconstruction Technique for Computerized Dermoscopic Skin Lesion Classification

    PubMed Central

    Satheesha, T. Y.; Prasad, M. N. Giri; Dhruve, Kashyap D.

    2017-01-01

    Melanoma mortality rates are the highest amongst skin cancer patients. Melanoma is life-threatening when it grows beyond the dermis of the skin; hence, depth is an important factor in diagnosing melanoma. This paper introduces a non-invasive computerized dermoscopy system that considers the estimated depth of skin lesions for diagnosis. A 3-D skin lesion reconstruction technique using the estimated depth obtained from regular dermoscopic images is presented. On the basis of the 3-D reconstruction, depth and 3-D shape features are extracted. In addition to 3-D features, regular color, texture, and 2-D shape features are also extracted. Feature extraction is critical to achieving accurate results. Apart from melanoma and in-situ melanoma, the proposed system is designed to diagnose basal cell carcinoma, blue nevus, dermatofibroma, haemangioma, seborrhoeic keratosis, and normal mole lesions. For experimental evaluation, the PH2, ISIC: Melanoma Project, and ATLAS dermoscopy data sets are considered. Different feature set combinations are considered and performance is evaluated. Significant performance improvement is reported following the inclusion of estimated depth and 3-D features. Good classification scores of sensitivity = 96% and specificity = 97% on the PH2 data set, and sensitivity = 98% and specificity = 99% on the ATLAS data set, are achieved. Experiments conducted to estimate tumor depth from the 3-D lesion reconstruction are presented. The experimental results demonstrate that the proposed computerized dermoscopy system is efficient and can be used to diagnose varied skin lesion dermoscopy images. PMID:28512610

  9. Catch of channel catfish with tandem-set hoop nets and gill nets in lentic systems of Nebraska

    USGS Publications Warehouse

    Richters, Lindsey K.; Pope, Kevin L.

    2011-01-01

    Twenty-six Nebraska water bodies representing two ecosystem types (small standing waters and large standing waters) were surveyed during 2008 and 2009 with tandem-set hoop nets and experimental gill nets to determine if similar trends existed in catch rates and size structures of channel catfish Ictalurus punctatus captured with these gears. Gear efficiency was assessed as the number of sets (nets) that would be required to capture 100 channel catfish given observed catch per unit effort (CPUE). Efficiency of gill nets was not correlated with efficiency of hoop nets for capturing channel catfish. Small sample sizes prohibited estimation of proportional size distributions in most surveys; in the four surveys for which sample size was sufficient to quantify length-frequency distributions of captured channel catfish, distributions differed between gears. The CPUE of channel catfish did not differ between small and large water bodies for either gear. While catch rates of hoop nets were lower than rates recorded in previous studies, this gear was more efficient than gill nets at capturing channel catfish. However, comparisons of size structure between gears may be problematic.

  10. Turbulent FEL theory and experiment on ELSA at Bruyeres-le-Chatel

    NASA Astrophysics Data System (ADS)

    Chaix, P.; Guimbal, P.

    1995-04-01

    We consider the asymptotic behaviour of long pulse high current Compton free electron laser oscillators. It is known that if the current is high enough and the cavity losses low enough, sideband instabilities and non-linear mode couplings eventually lead to a strong broadening of the radiated spectrum, and to a strong efficiency enhancement. In this “post-sideband” regime, the electron dynamics along the wiggler is intrinsically stochastic, and the efficiency is due to chaotic diffusion of the electrons toward lower energies, rather than to standard synchrotron oscillations. This results in new scaling laws for saturation properties. We have obtained simple analytical estimates for the extracted efficiency and for the spectral width, in very good agreement with numerical simulations. The infrared ELSA free electron laser at Bruyères-le-Châtel has been used to obtain experimental evidence for these new scaling laws. In particular it has been verified that in the post-sideband regime, the ratio of the extracted efficiency to the relative spectral width is independent of the operating parameters, and close to 3/3 as predicted by theory.

  11. Modeling recombination processes and predicting energy conversion efficiency of dye sensitized solar cells from first principles

    NASA Astrophysics Data System (ADS)

    Ma, Wei; Meng, Sheng

    2014-03-01

    We present a set of algorithms, based solely on first-principles calculations, to accurately calculate key properties of a DSC device, including sunlight harvesting, electron injection, electron-hole recombination, and open-circuit voltage. Two series of D-π-A dyes are adopted as sample dyes. The short-circuit current can be predicted by calculating the dyes' photoabsorption, and the electron injection and recombination lifetimes using real-time time-dependent density functional theory (TDDFT) simulations. The open-circuit voltage can be reproduced by calculating the energy difference between the quasi-Fermi level of electrons in the semiconductor and the electrolyte redox potential, taking into account the influence of electron recombination. Based on timescales obtained from real-time TDDFT dynamics for excited states, the estimated power conversion efficiency of the DSC agrees well with experiment, with deviations below 1-2%. The light harvesting efficiency, incident photon-to-electron conversion efficiency, and current-voltage characteristics can also be well reproduced. The predicted efficiency can serve either as an ideal limit for optimizing the photovoltaic performance of a given dye, or as a virtual device closely mimicking the performance of a real device under different experimental settings.
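
    The final efficiency assembly is ordinary photovoltaic arithmetic: once the short-circuit current density, open-circuit voltage, and fill factor are known, the power conversion efficiency follows directly. A minimal helper, with illustrative numbers rather than values from the paper, and assuming the standard AM1.5 input power of 100 mW/cm²:

```python
# Power conversion efficiency eta = Jsc * Voc * FF / P_in, returned in
# percent. Jsc in mA/cm^2, Voc in V, FF dimensionless, P_in in mW/cm^2.

def pce(jsc_mA_cm2, voc_V, ff, p_in_mW_cm2=100.0):
    return 100.0 * jsc_mA_cm2 * voc_V * ff / p_in_mW_cm2

# Invented example values for a hypothetical dye:
print(round(pce(18.0, 0.75, 0.72), 1))   # → 9.7
```

    The first-principles machinery of the paper supplies the three inputs; this last step is model-independent.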

  12. An efficient sparse matrix multiplication scheme for the CYBER 205 computer

    NASA Technical Reports Server (NTRS)

    Lambiotte, Jules J., Jr.

    1988-01-01

    This paper describes the development of an efficient algorithm for computing the product of a matrix and a vector on a CYBER 205 vector computer. The desire to provide software which allows the user to choose between the often conflicting goals of minimizing central processing unit (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of four types of storage is selected for each diagonal. The candidate storage types were chosen to be efficient on the CYBER 205 for diagonals whose nonzero structure is dense, moderately sparse, very sparse and short, or very sparse and long; however, for many densities, no single storage type is most efficient with respect to both resource requirements, and a trade-off must be made. For each diagonal, an initialization subroutine estimates the CPU time and storage required for each storage type, based on results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the two resources. The adjusted resource requirements are then compared to select the most efficient storage and computational scheme.
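
    The per-diagonal selection logic can be sketched in a few lines. The sketch below is a hypothetical reconstruction in Python (the original is CYBER 205 vector code, and the cost numbers here are invented): each candidate storage type carries estimated CPU and storage costs, which are combined using the user-supplied importance weights before picking the cheapest.

```python
# Hypothetical sketch of weighted storage-type selection: per diagonal,
# score each candidate as w_cpu * cpu_cost + w_mem * mem_cost and take
# the minimum.

def select_storage(costs, w_cpu=1.0, w_mem=1.0):
    """costs: dict mapping storage type -> (cpu_estimate, mem_estimate)."""
    def score(item):
        cpu, mem = item[1]
        return w_cpu * cpu + w_mem * mem
    return min(costs.items(), key=score)[0]

# Invented cost estimates for one diagonal:
diag_costs = {"dense": (1.0, 8.0), "sparse": (3.0, 2.0), "list": (6.0, 1.0)}
print(select_storage(diag_costs, w_cpu=1.0, w_mem=0.1))   # CPU-dominated → dense
print(select_storage(diag_costs, w_cpu=0.1, w_mem=1.0))   # memory-dominated → list
```

    The two calls show the trade-off the paper describes: the same diagonal gets a different storage type depending on which resource the user weights more heavily.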

  13. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods.

    PubMed

    Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke; Palsbøll, Per J

    2012-09-06

    Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method for telomere length estimation is one of the most common methods for telomere length estimation, but has received recent critique for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and choice of using singleplex or multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method, however this variation only affected telomere length estimates in the worst performing assays. Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments.
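
    To see why amplification efficiency matters so much, consider the efficiency-corrected relative T/S ratio commonly used in qPCR telomere studies (a Pfaffl-style calculation; the Ct values and efficiencies below are invented): the efficiency enters as the base of an exponential, so a mis-estimated efficiency biases the telomere length estimate multiplicatively per cycle.

```python
# Efficiency-corrected relative T/S ratio of a sample against a reference
# sample. e_* are amplification efficiencies (2.0 = perfect doubling),
# ct_* are quantification-cycle values for the telomere and the
# single-copy-gene (SCG) assays. Illustrative values only.

def ts_ratio(e_telo, ct_telo_ref, ct_telo, e_scg, ct_scg_ref, ct_scg):
    return (e_telo ** (ct_telo_ref - ct_telo)) / (e_scg ** (ct_scg_ref - ct_scg))

perfect = ts_ratio(2.0, 15.0, 14.0, 2.0, 20.0, 20.0)
biased = ts_ratio(1.8, 15.0, 14.0, 2.0, 20.0, 20.0)   # lower telomere-assay efficiency
print(round(perfect, 2), round(biased, 2))             # → 2.0 1.8
```

    A one-cycle Ct difference already yields a 10% shift in the estimate here; with larger Ct spans, the distortion compounds, which is why the inter-assay efficiency differences of up to 40% reported above can be so damaging.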

  14. ESTIMATION OF CONSTANT AND TIME-VARYING DYNAMIC PARAMETERS OF HIV INFECTION IN A NONLINEAR DIFFERENTIAL EQUATION MODEL.

    PubMed

    Liang, Hua; Miao, Hongyu; Wu, Hulin

    2010-03-01

    Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. 
We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
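
    The class of models referred to here can be illustrated with the basic target-cell-limited system of three coupled nonlinear ODEs. The sketch below integrates it with a forward Euler step; all parameter values are invented for illustration and are not estimates from the paper.

```python
import numpy as np

# Basic target-cell-limited HIV model (illustrative parameters only):
#   dT/dt  = lam - rho*T - k*V*T       (uninfected target cells)
#   dTs/dt = k*V*T - delta*Ts          (infected cells)
#   dV/dt  = N*delta*Ts - c*V          (free virus)

def simulate(lam=1e4, rho=0.01, k=8e-7, delta=0.7, N=100.0, c=13.0,
             T0=1e6, Ts0=0.0, V0=1.0, dt=0.001, days=30):
    T, Ts, V = T0, Ts0, V0
    out = []
    for _ in range(int(days / dt)):
        dT = lam - rho * T - k * V * T
        dTs = k * V * T - delta * Ts
        dV = N * delta * Ts - c * V
        T += dt * dT; Ts += dt * dTs; V += dt * dV
        out.append((T, Ts, V))
    return np.array(out)

traj = simulate()
print(f"peak viral load ≈ {traj[:, 2].max():.2e}")
```

    Parameter estimation as in the paper treats quantities like k, delta and the (time-varying) infection rate as unknowns to be fitted to sampled viral-load data, rather than fixing them as done here.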

  15. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods

    PubMed Central

    2012-01-01

    Background Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method for telomere length estimation is one of the most common methods for telomere length estimation, but has received recent critique for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Results Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and choice of using singleplex or multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method, however this variation only affected telomere length estimates in the worst performing assays. Conclusion Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. 
We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments. PMID:22954451

  16. Effect of the load size on the efficiency of microwave heating under stop flow and continuous flow conditions.

    PubMed

    Patil, Narendra G; Rebrov, Evgeny V; Eränen, Kari; Benaskar, Faysal; Meuldijk, Jan; Mikkola, Jyri-Pekka; Hessel, Volker; Hulshof, Lumbertus A; Murzin, Dmitry Yu; Schouten, Jaap C

    2012-01-01

    A novel heating efficiency analysis of microwave-heated stop-flow (i.e., stagnant liquid) and continuous-flow reactors is presented. Thermal losses to the surrounding air by natural convection have been taken into account in the heating efficiency calculation of the microwave heating process. The effect of load diameter in the range of 4-29 mm on the heating efficiency of ethylene glycol was studied in a single-mode microwave cavity under continuous-flow and stop-flow conditions. The variation of the microwave-absorbing properties of the load with temperature was estimated. Under stop-flow conditions, the heating efficiency depends on the load diameter; the highest heating efficiency was observed at a load diameter close to half the wavelength of the electromagnetic field in the corresponding medium. Under continuous-flow conditions, the heating efficiency increased linearly with load diameter; however, microwave leakage above the propagation diameter prevented experimentation at larger load diameters. Contrary to the stop-flow case, the load temperature did not rise monotonically from inlet to outlet under continuous-flow conditions, owing to convective heat fluxes lagging behind the volumetric heating. This severely disturbs the uniformity of the electromagnetic field in the axial direction and creates areas of high and low field intensity along the load length, decreasing the heating efficiency compared to stop-flow conditions.

  17. Reduced voltage losses yield 10% efficient fullerene free organic solar cells with >1 V open circuit voltages.

    PubMed

    Baran, D; Kirchartz, T; Wheeler, S; Dimitrov, S; Abdelsamie, M; Gorman, J; Ashraf, R S; Holliday, S; Wadsworth, A; Gasparini, N; Kaienburg, P; Yan, H; Amassian, A; Brabec, C J; Durrant, J R; McCulloch, I

    2016-12-01

    Optimization of the energy levels at the donor-acceptor interface of organic solar cells has driven their efficiencies to above 10%. However, further improvements towards efficiencies comparable with inorganic solar cells remain challenging because of high recombination losses, which empirically limit the open-circuit voltage (Voc) to typically less than 1 V. Here we show that this empirical limit can be overcome using non-fullerene acceptors blended with the low-band-gap polymer PffBT4T-2DT, leading to efficiencies approaching 10% (9.95%). We achieve Voc up to 1.12 V, which corresponds to a loss of only Eg/q - Voc = 0.5 ± 0.01 V between the optical bandgap Eg of the polymer and Voc. This high Voc is shown to be associated with remarkably low non-geminate and non-radiative recombination losses in these devices. Suppression of non-radiative recombination implies high external electroluminescence quantum efficiencies, orders of magnitude higher than those of equivalent devices employing fullerene acceptors. Using the experimentally achieved balance between reduced recombination losses and good photocurrent generation efficiencies as a baseline for simulations of the efficiency potential of organic solar cells, we estimate that efficiencies of up to 20% are achievable if band gaps and fill factors are further optimized.

  18. Fundamental Escherichia coli biochemical pathways for biomass and energy production: creation of overall flux states.

    PubMed

    Carlson, Ross; Srienc, Friedrich

    2004-04-20

    We have previously shown that the metabolism for most efficient cell growth can be realized by a combination of two types of elementary modes. One mode produces biomass while the second mode generates only energy. The identity of the four most efficient biomass and energy pathway pairs changes, depending on the degree of oxygen limitation. The identification of such pathway pairs for different growth conditions offers a pathway-based explanation of maintenance energy generation. For a given growth rate, experimental aerobic glucose consumption rates can be used to estimate the contribution of each pathway type to the overall metabolic flux pattern. All metabolic fluxes are then completely determined by the stoichiometries of involved pathways defining all nutrient consumption and metabolite secretion rates. We present here equations that permit computation of network fluxes on the basis of unique pathways for the case of optimal, glucose-limited Escherichia coli growth under varying levels of oxygen stress. Predicted glucose and oxygen uptake rates and some metabolite secretion rates are in remarkable agreement with experimental observations supporting the validity of the presented approach. The entire most efficient, steady-state, metabolic rate structure is explicitly defined by the developed equations without need for additional computer simulations. The approach should be generally useful for analyzing and interpreting genomic data by predicting concise, pathway-based metabolic rate structures. Copyright 2004 Wiley Periodicals, Inc.
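
    The two-mode decomposition described above can be sketched as a small calculation: the overall flux vector is a weighted sum of a biomass-producing mode and an energy-only mode, with the weights fixed by the measured growth and glucose-uptake rates. The mode stoichiometries below (per mole of glucose) are hypothetical placeholders, not the paper's elementary modes.

```python
import numpy as np

# Hypothetical mode stoichiometries, per mol glucose consumed:
#                          glucose  oxygen  biomass  CO2
mode_biomass = np.array([-1.0, -1.2, 0.08, 2.0])  # produces biomass
mode_energy  = np.array([-1.0, -6.0, 0.00, 6.0])  # generates only energy

def overall_fluxes(mu, q_glc):
    """Combine modes so biomass output equals mu and glucose uptake q_glc."""
    w_bio = mu / mode_biomass[2]   # glucose routed to the biomass mode
    w_eng = q_glc - w_bio          # remaining glucose burned for energy
    return w_bio * mode_biomass + w_eng * mode_energy

fluxes = overall_fluxes(mu=0.4, q_glc=6.0)
print(dict(zip(["glucose", "oxygen", "biomass", "co2"], fluxes)))
```

    Once the two weights are known, every exchange flux (here including oxygen uptake and CO2 secretion) follows from the mode stoichiometries alone, which is the sense in which the rate structure is "explicitly defined" without further simulation.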

  19. Surface Engineering of PAMAM-SDB Chelating Resin with Diglycolamic Acid (DGA) Functional Group for Efficient Sorption of U(VI) and Th(IV) from Aqueous Medium.

    PubMed

    Ilaiyaraja, P; Deb, A K Singha; Ponraju, D; Ali, Sk Musharaf; Venkatraman, B

    2017-04-15

    A novel chelating resin was obtained via growth of PAMAM dendrons on the surface of styrene-divinylbenzene resin beads, followed by diglycolamic acid functionalization of the dendrimer terminals. Batch experiments were conducted to study the effects of pH, nitric acid concentration, amount of adsorbent, shaking time, initial metal ion concentration and temperature on U(VI) and Th(IV) adsorption efficiency. The diglycolamic acid-terminated PAMAM dendrimer-functionalized styrene-divinylbenzene chelating resin (DGA-PAMAM-SDB) is found to be an efficient candidate for the removal of U(VI) and Th(IV) ions from aqueous (pH >4) and nitric acid (>3 M) media. Sorption equilibrium could be reached within 60 min, and the experimental data fit a pseudo-second-order model. The Langmuir sorption isotherm model correlates well with the sorption equilibrium data. The maximum U(VI) and Th(IV) sorption capacities onto DGA-PAMAM(G5)-SDB were estimated to be about 682 and 544.2 mg g-1, respectively, at 25°C. The interaction of the actinides with the chelating resin is reversible; hence, the resin can be regenerated and reused. DFT calculations on the interaction of U(VI) and Th(IV) ions with the chelating resin validate the experimental findings. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Potential Use of BEST® Sediment Trap in Splash - Saltation Transport Process by Simultaneous Wind and Rain Tests.

    PubMed

    Basaran, Mustafa; Uzun, Oguzhan; Cornelis, Wim; Gabriels, Donald; Erpul, Gunay

    2016-01-01

    Research on the wind-driven rain (WDR) transport process of splash-saltation has increased over the last twenty years as wind tunnel experimental studies provide new insights into the mechanisms of simultaneous wind and rain transport. The present study investigated the efficiency of BEST® sediment traps in catching sand particles transported through the splash-saltation process under WDR conditions. Experiments were conducted in a wind tunnel rainfall simulator facility with water sprayed through sprinkler nozzles and free-flowing wind at different velocities to simulate WDR conditions. In addition to the vertical sediment distribution, a series of experimental tests of the horizontal distribution of sediments was performed using BEST® collectors to obtain the actual total sediment mass flow by splash-saltation in the center of the wind tunnel test section. Total mass transport (kg m-2) was estimated by analytically integrating the exponential functional relationship fitted to the sediment amounts measured at the set trap heights for every run. Results revealed that the integrated efficiencies of the BEST® traps at 6, 9, 12 and 15 m s-1 wind velocities under 55.8, 50.5, 55.0 and 50.5 mm h-1 rain intensities were, respectively, 83, 106, 105, and 102%. Results also showed that the efficiencies of BEST® did not change much compared with those under rainless wind conditions.
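
    The integration step described (fitting an exponential profile to the trap catches and integrating it analytically over height) can be sketched as follows; the trap heights and catch values in the usage example are hypothetical, not the study's data.

```python
import math

def fit_exponential(heights, fluxes):
    """Least-squares fit of ln(q) = ln(a) - b*z to trap catches q(z)."""
    n = len(heights)
    ln_q = [math.log(q) for q in fluxes]
    zbar = sum(heights) / n
    ybar = sum(ln_q) / n
    b = -sum((z - zbar) * (y - ybar) for z, y in zip(heights, ln_q)) / \
        sum((z - zbar) ** 2 for z in heights)
    a = math.exp(ybar + b * zbar)
    return a, b

def total_transport(a, b, z_max):
    """Analytic integral of a*exp(-b*z) from z = 0 to z_max."""
    return a / b * (1.0 - math.exp(-b * z_max))

# Hypothetical catches following q(z) = 2*exp(-10*z):
heights = [0.05, 0.1, 0.2, 0.4]
fluxes = [2.0 * math.exp(-10.0 * z) for z in heights]
a, b = fit_exponential(heights, fluxes)
print(total_transport(a, b, 0.5))
```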

  1. Hydrodynamic investigation of a self-propelled robotic fish based on a force-feedback control method.

    PubMed

    Wen, L; Wang, T M; Wu, G H; Liang, J H

    2012-09-01

    We implement a mackerel (Scomber scombrus) body-shaped robot, programmed to display the three most typical body/caudal fin undulatory kinematics (i.e. anguilliform, carangiform and thunniform), in order to biomimetically investigate hydrodynamic issues not easily tackled experimentally with live fish. The robotic mackerel, mounted on a servo towing system and initially at rest, can determine its self-propelled speed by measuring the external force acting upon it and allowing for the simultaneous measurement of power, flow field and self-propelled speed. Experimental results showed that the robotic swimmer with thunniform kinematics achieved a faster final swimming speed (St = 0.424) relative to those with carangiform (St = 0.43) and anguilliform kinematics (St = 0.55). The thrust efficiency, estimated from a digital particle image velocimetry (DPIV) flow field, showed that the robotic swimmer with thunniform kinematics is more efficient (47.3%) than those with carangiform (31.4%) and anguilliform kinematics (26.6%). Furthermore, the DPIV measurements illustrate that the large-scale characteristics of the flow pattern generated by the robotic swimmer with both anguilliform and carangiform kinematics were wedge-like, double-row wake structures. Additionally, a typical single-row reverse Karman vortex was produced by the robotic swimmer using thunniform kinematics. Finally, we discuss this novel force-feedback-controlled experimental method, and review the relative self-propelled hydrodynamic results of the robot when utilizing the three types of undulatory kinematics.
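
    The Strouhal numbers quoted can be recomputed from the standard definition St = fA/U (tail-beat frequency times peak-to-peak tail amplitude over swimming speed); the frequency and amplitude in the example are hypothetical values chosen only to illustrate the calculation.

```python
def strouhal(freq_hz, amplitude_m, speed_m_s):
    """Strouhal number St = f*A/U for an undulatory swimmer."""
    return freq_hz * amplitude_m / speed_m_s

# e.g. a hypothetical 2 Hz tail beat, 0.06 m amplitude, 0.283 m/s speed:
print(round(strouhal(2.0, 0.06, 0.283), 3))  # 0.424
```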

  2. Increase of efficiency of finishing-cleaning and hardening processing of details based on rotor-screw technological systems

    NASA Astrophysics Data System (ADS)

    Lebedev, V. A.; Serga, G. V.; Khandozhko, A. V.

    2018-03-01

    The article proposes technical solutions for increasing the efficiency of finishing-cleaning and hardening processing of parts on the basis of rotor-screw technological systems. The design features and technological capabilities of a rotor-screw technological system with a rotating container are described; the rotating container expands the range of attainable displacement vectors for the granules of the abrasive medium and the processed parts. Ways of intensifying the processing by vibration activation, providing a combined effect on the loaded mass of large- and small-amplitude low-frequency oscillations, are proposed. The results of experimental studies of the movement of bulk materials in a screw container are presented, which showed that Kv = 0.5-0.6 can be considered the optimal value of the container filling factor. An estimate of the application efficiency of screw containers, based on their design features, is given.

  3. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

    Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors, and quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect in correcting mid-spatial-frequency errors: some used the same pitch tool at different spinning speeds, and others the same spinning speed with different tools. Shu's model was adopted and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out after processing in spin motion, and the number of smoothing passes can be estimated by the model before processing. The method was also applied to smooth an aspherical component with an obvious mid-spatial-frequency error left by magnetorheological finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.

  4. Numerical model for the locomotion of spirilla.

    PubMed

    Ramia, M

    1991-11-01

    The swimming of trailing, leading, and bipolar spirilla (with realistic flagellar centerline geometries) is considered. A boundary element method is used to predict the instantaneous swimming velocity, counter-rotation angular velocity, and power dissipation of a given organism as functions of time and the geometry of the organism. Based on such velocities, swimming trajectories have been deduced enabling a realistic definition of mean swimming speeds. The power dissipation normalized in terms of the square of the mean swimming speed is considered to be a measure of hydrodynamic efficiency. In addition, kinematic efficiency is defined as the extent of deviation of the swimming motion from that of a previously proposed ideal corkscrew mechanism. The dependence of these efficiencies on the organism's geometry is examined giving estimates of its optimum dimensions. It is concluded that appreciable correlation exists between the two alternative definitions for many of the geometrical parameters considered. Furthermore, the organism having the deduced optimum dimensions closely resembles the real organism as experimentally observed.

  5. Numerical model for the locomotion of spirilla

    PubMed Central

    Ramia, M.

    1991-01-01

    The swimming of trailing, leading, and bipolar spirilla (with realistic flagellar centerline geometries) is considered. A boundary element method is used to predict the instantaneous swimming velocity, counter-rotation angular velocity, and power dissipation of a given organism as functions of time and the geometry of the organism. Based on such velocities, swimming trajectories have been deduced enabling a realistic definition of mean swimming speeds. The power dissipation normalized in terms of the square of the mean swimming speed is considered to be a measure of hydrodynamic efficiency. In addition, kinematic efficiency is defined as the extent of deviation of the swimming motion from that of a previously proposed ideal corkscrew mechanism. The dependence of these efficiencies on the organism's geometry is examined giving estimates of its optimum dimensions. It is concluded that appreciable correlation exists between the two alternative definitions for many of the geometrical parameters considered. Furthermore, the organism having the deduced optimum dimensions closely resembles the real organism as experimentally observed. PMID:19431804

  6. Analysis of twelve-month degradation in three polycrystalline photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Lai, T.; Potter, B. G.; Simmons-Potter, K.

    2016-09-01

    Polycrystalline silicon photovoltaic (PV) modules have the advantage of lower manufacturing cost compared to their monocrystalline counterparts, but generally exhibit both lower initial module efficiencies and more significant early-stage efficiency degradation than similar monocrystalline PV modules. For both technologies, noticeable deterioration in power conversion efficiency typically occurs over the first two years of usage. Estimating PV lifetime by examining the performance degradation behavior under given environmental conditions is, therefore, one of the continuing goals of experimental research and economic analysis. In the present work, accelerated lifecycle testing (ALT) of three polycrystalline PV technologies was performed in a full-scale, industrial-standard environmental chamber equipped with single-sun irradiance capability, providing an illumination uniformity of 98% over a 2 × 1.6 m area. In order to investigate environmental aging effects, time-dependent PV performance (I-V characteristic) was evaluated over a recurring, compressed day-night cycle, which simulated local daily solar insolation for the southwestern United States, followed by dark (night) periods. During a total test time of just under 4 months, corresponding to a one-year-equivalent exposure for a fielded module, the temperature and humidity varied in ranges from 3°C to 40°C and 5% to 85%, based on annual weather profiles for Tucson, AZ. Removing the temperature de-rating effect that was clearly seen in the data enabled computation of the normalized efficiency degradation with time and environmental exposure. Results confirm the impact of environmental conditions on long-term module performance. Overall, more than 2% efficiency degradation in the first year of usage was observed for all three polycrystalline Si solar modules. The average 5-year degradation of each PV technology was estimated based on the determined degradation rates.

  7. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
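
    The core of the method is a standard Viterbi decode over interpolation-function states along the sequence of missing-pixel positions; a minimal sketch follows, in which the toy initial, transition, and emission scores are hypothetical stand-ins for the paper's probabilistic model.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """MAP state path. log_emit has shape (T, S): per-position state scores."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)   # backpointers (row 0 unused)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # (prev state, next state)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):              # trace back the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy usage: 2 interpolation functions, 3 pixel positions; scores favour
# state 0 at the first two positions, then state 1.
log_init = np.array([0.0, -10.0])
log_trans = np.array([[0.0, -1.0], [-1.0, 0.0]])
log_emit = np.array([[0.0, -5.0], [0.0, -5.0], [-5.0, 0.0]])
print(viterbi(log_init, log_trans, log_emit))  # [0, 0, 1]
```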

  8. Venus Surface Power and Cooling System Design

    NASA Technical Reports Server (NTRS)

    Landis, Geoffrey A.; Mellott, Kenneth D.

    2004-01-01

    A radioisotope power and cooling system is designed to provide electrical power for a probe operating on the surface of Venus. Most foreseeable electronics devices and sensors simply cannot operate at the 450 C ambient surface temperature of Venus. Because the mission duration is substantially long and the use of thermal mass to maintain an operable temperature range is likely impractical, some type of active refrigeration may be required to keep certain components at a temperature below ambient. The fundamental cooling requirements comprise the cold-sink temperature, the hot-sink temperature, and the amount of heat to be removed. In this instance, it is anticipated that electronics would have a nominal operating temperature of 300 C. Because of the highly convective nature of the high-density atmosphere, the hot-sink margin was assumed to be 50 C, giving a 500 C temperature for the cooler's heat rejector relative to the ambient atmosphere. The majority of the heat load on the cooler comes from the high-temperature ambient surface environment on Venus. Assuming a 5 cm radial thickness of ceramic blanket insulation, the ambient heat load was estimated at approximately 77 watts. With an estimated 10 watts of heat generation from electronics and sensors, and to accommodate some level of uncertainty, the total heat-load requirement was rounded up to an even 100 watts. For the radioisotope Stirling power converter configuration designed, the Sage model predicts a thermodynamic power output capacity of 478.1 watts, which slightly exceeds the required 469.1 watts. The hot-sink temperature is 1200 C and the cold-sink temperature is 500 C. The required heat input is 1740 watts, giving a thermodynamic efficiency of 27.48%. The maximum theoretically obtainable efficiency is 47.52%.
    The mechanical efficiency of the power converter design is estimated to be on the order of 85%, based on experimental measurements from 500-watt-class, laboratory-tested Stirling engines at GRC. The overall efficiency is calculated to be 23.36%, and the mass of the power converter is estimated at approximately 21.6 kg.
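
    The efficiency figures quoted follow directly from the stated temperatures and power levels; a quick check (all numbers taken from the abstract, with sink temperatures converted to kelvin for the Carnot limit):

```python
T_HOT_C, T_COLD_C = 1200.0, 500.0   # Stirling hot/cold sinks, deg C
P_OUT, Q_IN = 478.1, 1740.0         # converter output and heat input, W

carnot = 1.0 - (T_COLD_C + 273.15) / (T_HOT_C + 273.15)
thermo_eff = P_OUT / Q_IN           # thermodynamic efficiency
overall = thermo_eff * 0.85         # stated ~85% mechanical efficiency

print(f"Carnot limit:       {carnot:.2%}")      # ~47.5%
print(f"Thermodynamic eff.: {thermo_eff:.2%}")  # ~27.5%
print(f"Overall eff.:       {overall:.2%}")     # ~23.4%
```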

  9. Ingestive Behavior of Ovine Fed with Marandu Grass Silage Added with Naturally Dehydrated Brewery Residue

    PubMed Central

    Lima de Souza, Alexandre; Divino Ribeiro, Marinaldo; Mattos Negrão, Fagton; Castro, Wanderson José Rodrigues; Valério Geron, Luiz Juliano; de Azevedo Câmara, Larissa Rodrigues

    2016-01-01

    The objective was to evaluate the ingestive behavior of ovine fed marandu grass silage with added dehydrated brewery residue. The experiment had a completely randomized design with five treatments and four repetitions, the treatments being inclusion levels of 0, 10, 20, 30, and 40% (natural matter) of brewery residue, naturally dehydrated for 36 hours, in the marandu grass silage. Twenty ovines were used, and the experimental period was 21 days, 15 of which were for adaptation to the diets. The brewery byproduct produced a quadratic effect (P < 0.05) on dry matter consumption, with the maximum estimated at an additive inclusion of 23.25%. Ingestion efficiency and rumination efficiency of dry matter (g DM/hour) were significant (P < 0.05), with quadratic behavior, while NDF ingestion and rumination efficiency showed increasing linear behavior. DM and NDF consumption expressed in kg/meal and in minutes/kg were also significant (P < 0.05), showing quadratic behavior. Rumination activity expressed in g DM and NDF/bolus was influenced (P < 0.05) by the addition of brewery residue to marandu grass silage in a quadratic way, with a maximum estimated value of 1.57 g DM/bolus chewed at an additive inclusion of 24.72% in the grass silage. The conclusion is that intermediate inclusion levels of 20 to 25% dehydrated brewery residue affect certain parameters of ingestive behavior. PMID:27547811

  10. Optimized efficient liver T1ρ mapping using limited spin lock times

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Zhao, Feng; Griffith, James F.; Chan, Queenie; Wang, Yi-Xiang J.

    2012-03-01

    T1ρ relaxation has recently been found to be sensitive to liver fibrosis and has potential to be used for early detection of liver fibrosis and grading. Liver T1ρ imaging and accurate mapping are challenging because of the long scan time, respiration motion and high specific absorption rate. Reduction and optimization of spin lock times (TSLs) are an efficient way to reduce scan time and radiofrequency energy deposition of T1ρ imaging, but maintain the near-optimal precision of T1ρ mapping. This work analyzes the precision in T1ρ estimation with limited, in particular two, spin lock times, and explores the feasibility of using two specific operator-selected TSLs for efficient and accurate liver T1ρ mapping. Two optimized TSLs were derived by theoretical analysis and numerical simulations first, and tested experimentally by in vivo rat liver T1ρ imaging at 3 T. The simulation showed that the TSLs of 1 and 50 ms gave optimal T1ρ estimation in a range of 10-100 ms. In the experiment, no significant statistical difference was found between the T1ρ maps generated using the optimized two-TSL combination and the maps generated using the six TSLs of [1, 10, 20, 30, 40, 50] ms according to one-way ANOVA analysis (p = 0.1364 for liver and p = 0.8708 for muscle).
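
    With a monoexponential decay S(TSL) = S0·exp(-TSL/T1ρ), two spin-lock times give a closed-form T1ρ estimate, which is the basis of the two-TSL approach; a minimal sketch, in which the TSLs of 1 and 50 ms follow the abstract but the signal values are synthetic:

```python
import math

def t1rho_two_point(tsl1, tsl2, s1, s2):
    """Closed-form T1rho from two spin-lock times (same units as the TSLs)."""
    return (tsl2 - tsl1) / math.log(s1 / s2)

# Synthetic signals from a tissue with T1rho = 40 ms:
s0, t1rho_true = 100.0, 40.0
s1 = s0 * math.exp(-1.0 / t1rho_true)
s2 = s0 * math.exp(-50.0 / t1rho_true)
print(t1rho_two_point(1.0, 50.0, s1, s2))  # recovers 40.0 ms
```

    The precision analysis in the paper concerns how noise in s1 and s2 propagates through this expression, which is what makes the choice of the two TSLs matter.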

  11. Isomer ratios for products of photonuclear reactions on 121Sb

    NASA Astrophysics Data System (ADS)

    Bezshyyko, Oleg; Dovbnya, Anatoliy; Golinka-Bezshyyko, Larisa; Kadenko, Igor; Vodin, Oleksandr; Olejnik, Stanislav; Tuller, Gleb; Kushnir, Volodymyr; Mitrochenko, Viktor

    2017-09-01

    Over the past several years, various preequilibrium model approaches for nuclear reactions have been developed. Detailed and diverse experimental data in the medium excitation-energy region of the nucleus are needed for a reasonable selection among these theoretical models, and the lack of experimental data in this region essentially limits the possibilities for analysis and comparison of different preequilibrium models. For photonuclear reactions this energy region extends over bremsstrahlung energies of roughly 30-100 MeV. Experimental measurements and estimations of isomer ratios for products of photonuclear reactions with multiple particle escape on antimony have been performed using bremsstrahlung with end-point energies of 38, 43 and 53 MeV. The induced-activity measurement method was applied. For acquisition of gamma spectra we used an HPGe spectrometer with 20% efficiency and an energy resolution of 1.9 keV at the 1332 keV gamma line of 60Co. The LU-40 electron linear accelerator served as the bremsstrahlung source; the energy resolution of the electron beam was about 1%, and the mean current was within 3.8-5.3 μA.

  12. Experimental scheme for qubit and qutrit symmetric informationally complete positive operator-valued measurements using multiport devices

    NASA Astrophysics Data System (ADS)

    Tabia, Gelo Noel M.

    2012-12-01

    It is crucial for various quantum information processing tasks that the state of a quantum system can be determined reliably and efficiently from general quantum measurements. One important class of measurements for this purpose is symmetric informationally complete positive operator-valued measurements (SIC-POVMs). SIC-POVMs have the advantage of providing an unbiased estimator for the quantum state with the minimal number of outcomes needed for full tomography. By virtue of Naimark's dilation theorem, any POVM can always be realized with a suitable coupling between the system and an auxiliary system and by performing a projective measurement on the joint system. In practice, finding the appropriate coupling is rather nontrivial. Here we propose an experimental design for directly implementing SIC-POVMs using multiport devices and path-encoded qubits and qutrits, the utility of which has recently been demonstrated by several experimental groups around the world. Furthermore, we describe how these multiports can be attained in practice with an integrated photonic system composed of nested linear optical elements.
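
    The defining qubit SIC-POVM properties (four subnormalized projectors summing to the identity, with equal pairwise overlaps of 1/3) can be verified numerically using the standard Bloch-sphere tetrahedron; this sketch illustrates the measurement itself, not the paper's multiport construction.

```python
import numpy as np

# Pauli matrices and the four tetrahedral Bloch vectors.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
eye = np.eye(2, dtype=complex)

bloch = np.array([[1, 1, 1], [1, -1, -1],
                  [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
projectors = [(eye + r[0] * sx + r[1] * sy + r[2] * sz) / 2 for r in bloch]

# POVM elements E_i = P_i / d (d = 2) must sum to the identity ...
povm = sum(projectors) / 2
print(np.allclose(povm, eye))
# ... and all pairwise overlaps Tr(P_i P_j), i != j, must equal 1/3.
overlaps = [np.trace(projectors[i] @ projectors[j]).real
            for i in range(4) for j in range(4) if i < j]
print(np.allclose(overlaps, 1 / 3))
```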

  13. Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System

    PubMed Central

    Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei

    2018-01-01

    The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to be distinguished under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
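
    The 2D tangent-space update for a gravity vector of known magnitude can be sketched as follows: the direction is perturbed within the plane orthogonal to the current estimate, then re-projected onto the sphere of radius ||g||. The basis construction below is one common choice and is not taken from the paper's code.

```python
import numpy as np

def tangent_basis(g_hat):
    """Two orthonormal vectors spanning the plane orthogonal to g_hat."""
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(g_hat[0]) > 0.9:          # avoid a near-parallel helper vector
        tmp = np.array([0.0, 1.0, 0.0])
    b1 = np.cross(g_hat, tmp)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(g_hat, b1)
    return b1, b2

def apply_delta(g, delta):
    """Retract a 2D error-state update onto the sphere of radius ||g||."""
    mag = np.linalg.norm(g)
    g_hat = g / mag
    b1, b2 = tangent_basis(g_hat)
    g_new = g_hat + delta[0] * b1 + delta[1] * b2
    return mag * g_new / np.linalg.norm(g_new)
```

    Keeping the update two-dimensional is what enforces the known-magnitude constraint: the optimizer can change the direction of gravity but never its norm.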

  14. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and an internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions of markedly non-normal covariates. © The Author(s) 2013.

  15. Kinetics and efficiency of ozone for treatment of landfill leachate including the effect of previous microbiological treatment.

    PubMed

    Lovato, María; Buffelli, José Real; Abrile, Mariana; Martín, Carlos

    2018-03-19

    The application of conventional physicochemical and microbiological techniques for the removal of organic pollutants has limitations for wastewaters such as landfill leachates because of their high concentration of poorly biodegradable organic compounds. Ozone-based technologies are an alternative and complementary treatment for this type of wastewater. This paper reports a study of the degradation of landfill leachates from different stages of a treatment plant using ozone and ozone + UV. The experimental work included determining the temporal evolution of COD, TOC, UV254, and color; along the experimental runs, the instantaneous off-gas ozone concentration was measured. The reaction kinetics follows a global second-order expression with respect to COD and ozone concentrations. A kinetic model that takes into account gas-liquid mass transfer coupled with the chemical reaction was developed, and the corresponding parameters of the reacting system were determined. The mathematical model is able to simulate COD and ozone concentrations appropriately, but exhibits limitations when the leachate type is varied. The potential application of ozone was verified, although the estimated efficiencies for COD removal and ozone consumption, as well as the effect of UV radiation, show variations in their trends. In this sense, it is interesting to note that the relative ozone yield oscillates significantly as the reaction proceeds. Finally, the set of experimental results demonstrates the crucial importance of the selection of process conditions to improve ozone efficiencies; this selection should consider variations in the ozone supply to minimize losses, as well as the design of exhaustion methods such as multiple-stage reactors using chemical engineering design tools.
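
    The kinetic structure described (a rate second-order in COD and dissolved ozone, with gas-liquid mass transfer feeding the dissolved-ozone balance) can be sketched as a simple explicit time integration; all parameter values below are hypothetical placeholders, not the fitted values from the study.

```python
def simulate(cod0, k2, kla, o3_sat, stoich, t_end, dt=0.1):
    """Explicit Euler integration of the coupled COD / dissolved-O3 balances.

    d[COD]/dt = -k2 [COD][O3]
    d[O3]/dt  = kLa ([O3]* - [O3]) - stoich * k2 [COD][O3]
    """
    cod, o3, t = cod0, 0.0, 0.0
    while t < t_end:
        r = k2 * cod * o3                       # second-order rate
        cod -= r * dt
        o3 += (kla * (o3_sat - o3) - stoich * r) * dt
        t += dt
    return cod, o3

# Hypothetical run: COD 1000 mg/L, saturation ozone 10 mg/L, 10 min.
print(simulate(1000.0, 1e-4, 0.01, 10.0, 1.5, 600.0))
```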

  16. Parameter estimation for lithium ion batteries

    NASA Astrophysics Data System (ADS)

    Santhanagopalan, Shriram

    With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. 
For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of road conditions is important, and an algorithm that can predict the SOC in time intervals as small as 5 ms is in high demand. In such cases, the conventional non-linear estimation procedure is not time-effective. Methodologies exist in the literature, such as those based on fuzzy logic; however, these techniques require a lot of computational storage space. Consequently, it is not possible to implement such techniques on a micro-chip for integration as part of a real-time device. The Extended Kalman Filter (EKF) based approach presented in this work is a first step towards developing an efficient method to predict, online, the State of Charge of a lithium ion cell based on an electrochemical model. The final part of the dissertation focuses on incorporating uncertainty in parameter values into electrochemical models using the polynomial chaos theory (PCT).
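The EKF-based SOC prediction described above can be sketched with a deliberately simplified battery model. The equivalent-circuit stand-in below (a nonlinear open-circuit voltage minus an ohmic drop) is NOT the electrochemical model of the dissertation; the capacity, resistance, OCV curve, and noise values are all illustrative assumptions.

```python
import math

# Minimal EKF sketch for SOC tracking. The battery model is a simple
# equivalent-circuit stand-in (nonlinear OCV minus ohmic drop), NOT the
# electrochemical model of the dissertation; all constants are
# hypothetical.

CAP_AS = 3600.0   # cell capacity in ampere-seconds (hypothetical 1 Ah)
R_INT = 0.05      # internal resistance, ohms (hypothetical)

def ocv(soc):
    """Hypothetical open-circuit voltage curve (V)."""
    return 3.0 + 1.2 * soc - 0.3 * math.sin(math.pi * soc)

def docv(soc):
    """Derivative of the OCV curve, used as the EKF measurement Jacobian."""
    return 1.2 - 0.3 * math.pi * math.cos(math.pi * soc)

def ekf_step(soc, p, current, v_meas, dt, q=1e-7, r=1e-4):
    # Predict: coulomb counting (discharge current positive).
    soc_pred = soc - current * dt / CAP_AS
    p_pred = p + q
    # Update: linearize the voltage measurement around the prediction.
    h = docv(soc_pred)
    v_pred = ocv(soc_pred) - R_INT * current
    k = p_pred * h / (h * p_pred * h + r)
    soc_new = soc_pred + k * (v_meas - v_pred)
    p_new = (1.0 - k * h) * p_pred
    return soc_new, p_new

# One 5 ms filter step: true SOC is 0.8, the initial guess is a poor 0.6.
v_obs = ocv(0.8) - R_INT * 1.0
soc_est, p_est = ekf_step(0.6, 0.1, current=1.0, v_meas=v_obs, dt=0.005)
```

Each step is a handful of scalar operations, which is the property that makes EKF-style estimators attractive for millisecond-rate, microcontroller-resident SOC tracking.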

  17. Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case

    PubMed Central

    Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique

    2017-01-01

    This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
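The covariance-bound triggering idea can be illustrated with a scalar toy model: the error covariance is propagated open-loop, and a sensor measurement is requested only when it exceeds an upper bound, after which a Kalman update shrinks it again. The random-walk model and noise values below are illustrative, not those of the paper.

```python
# Sketch of event-triggered sensing: propagate the estimation-error
# covariance open-loop and request a measurement only when it exceeds
# a bound. Scalar random-walk model; Q, R, and BOUND are hypothetical.

Q = 0.03    # process noise added per prediction step
R = 0.05    # measurement noise variance
BOUND = 0.2  # upper bound on the error covariance

def simulate(steps, p0=0.01):
    p = p0
    requests = 0
    for _ in range(steps):
        p += Q                      # covariance prediction
        if p > BOUND:               # event: uncertainty too large
            requests += 1
            p = p * R / (p + R)     # Kalman update shrinks covariance
    return requests

n_requests = simulate(100)
```

The point of the scheme is visible in the count: the sensor is queried far fewer times than the number of control steps, while the covariance stays below the chosen bound immediately after each request.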

  18. Estimating means and variances: The comparative efficiency of composite and grab samples.

    PubMed

    Brumelle, S; Nemetz, P; Casey, D

    1984-03-01

    This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period, which are then pooled and analyzed as a single sample. We review the well-known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than grab sampling in the context of estimating population means.
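The superiority of compositing for estimating the mean is easy to check by simulation. The sketch below compares the period-to-period spread of grab analyses (one unit each) against composite analyses (the pooled average of n units) for an illustrative Gaussian population; it addresses only the mean, not the variance-estimation subtleties discussed above.

```python
import random
import statistics

# Simulation of the well-known result that compositing improves the
# estimate of the mean: per period, a grab sample analyzes one unit,
# while a composite pools n units and analyzes their average.
# Population parameters and sizes are illustrative.

random.seed(0)

def period_estimates(periods=2000, n=10, mu=5.0, sigma=2.0):
    grab = [random.gauss(mu, sigma) for _ in range(periods)]
    comp = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
            for _ in range(periods)]
    return grab, comp

grab, comp = period_estimates()
var_grab = statistics.variance(grab)   # should be near sigma^2
var_comp = statistics.variance(comp)   # should be near sigma^2 / n
```

Each composite analysis behaves like a mean of n units, so its sampling variance is roughly sigma²/n versus sigma² for a grab analysis, which is exactly the "superior estimate of the mean" the abstract reviews.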

  19. Monitoring the aeration efficiency and carbon footprint of a medium-sized WWTP: experimental results on oxidation tank and aerobic digester.

    PubMed

    Caivano, Marianna; Bellandi, Giacomo; Mancini, Ignazio M; Masi, Salvatore; Brienza, Rosanna; Panariello, Simona; Gori, Riccardo; Caniani, Donatella

    2017-03-01

    The efficiency of aeration systems should be monitored to guarantee suitable biological processes. Among the available tools for evaluating aeration efficiency, the off-gas method is one of the most useful. Increasing interest in reducing greenhouse gas (GHG) emissions from biological processes has led researchers to use this method to quantify N2O and CO2 concentrations in the off-gas. Experimental measurements of direct GHG emissions from aerobic digesters (AeDs) are not yet available in the literature. In this study, the floating hood technique was used for the first time to monitor AeDs. The floating hood technique was used to evaluate oxygen transfer rates in an activated sludge (AS) tank of a medium-sized municipal wastewater treatment plant located in Italy. Very low values of oxygen transfer efficiency were found, confirming that small-to-medium-sized plants are often scarcely monitored and wrongly managed. Average CO2 and N2O emissions from the AS tank were 0.14 kg CO2/kg bCOD and 0.007 kg CO2,eq/kg bCOD, respectively. For the AeD, direct CO2 emissions of 3 × 10^-10 kg CO2/kg bCOD were measured, while CO2,eq emissions from N2O were 4 × 10^-9 kg CO2,eq/kg bCOD. The results for the AS tank and the AeD were used to estimate the net carbon and energy footprint of the entire plant.

  20. Cascaded Kalman and particle filters for photogrammetry based gyroscope drift and robot attitude estimation.

    PubMed

    Sadaghzadeh, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin

    2014-03-01

    A cascaded Kalman-Particle Filtering method for gyroscope drift and robot attitude estimation is proposed in this paper. Because MEMS gyroscope measurements are noisy and erroneous, the gyroscope is combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, augmented with the gyroscope drift dynamics, are employed as the system state-space model. Nonlinear attitude kinematics, drift, and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear, high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman Filtering (KF) is employed for the linear and Gaussian subsystem consisting of angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as input of the second, Particle Filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system are used to demonstrate the efficiency of the method.

  1. Methodologies for Verification and Validation of Space Launch System (SLS) Structural Dynamic Models

    NASA Technical Reports Server (NTRS)

    Coppolino, Robert N.

    2018-01-01

    Responses to challenges associated with verification and validation (V&V) of Space Launch System (SLS) structural dynamics models are presented in this paper. Four methodologies addressing specific requirements for V&V are discussed. (1) Residual Mode Augmentation (RMA), which has gained acceptance by various principals in the NASA community, defines efficient and accurate FEM modal sensitivity models that are useful in test-analysis correlation and reconciliation and parametric uncertainty studies. (2) Modified Guyan Reduction (MGR) and Harmonic Reduction (HR, introduced in 1976), developed to remedy difficulties encountered with the widely used Classical Guyan Reduction (CGR) method, are presented. MGR and HR are particularly relevant for estimation of "body dominant" target modes of shell-type SLS assemblies that have numerous "body", "breathing" and local component constituents. Realities associated with configuration features and "imperfections" cause "body" and "breathing" mode characteristics to mix resulting in a lack of clarity in the understanding and correlation of FEM- and test-derived modal data. (3) Mode Consolidation (MC) is a newly introduced procedure designed to effectively "de-feature" FEM and experimental modes of detailed structural shell assemblies for unambiguous estimation of "body" dominant target modes. Finally, (4) Experimental Mode Verification (EMV) is a procedure that addresses ambiguities associated with experimental modal analysis of complex structural systems. Specifically, EMV directly separates well-defined modal data from spurious and poorly excited modal data employing newly introduced graphical and coherence metrics.

  2. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu-invariant-moment contour information with feature point detection, aiming to solve problems in traditional image stitching algorithms such as a time-consuming feature point extraction process, overload of redundant invalid information, and general inefficiency. First, the neighborhood of each pixel is used to extract contour information, and the Hu invariant moment is employed as a similarity measure to extract SIFT feature points in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve initial matching efficiency and reduce mismatched points, after which the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
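The Euclidean-to-Hellinger swap mentioned above can be illustrated on descriptor histograms. For non-negative, L1-normalized vectors, the Hellinger kernel H(x, y) = Σ_i sqrt(x_i · y_i) equals a plain dot product after an element-wise square root (the trick popularized as "RootSIFT"); the toy histograms below stand in for real SIFT descriptors.

```python
import math

# Hellinger kernel on non-negative histograms, computed two ways:
# directly, and as a dot product of square-rooted, L1-normalized
# vectors. The 4-bin descriptors are toy stand-ins for 128-D SIFT.

def hellinger_kernel(x, y):
    sx, sy = sum(x), sum(y)
    return sum(math.sqrt((a / sx) * (b / sy)) for a, b in zip(x, y))

def root_map(x):
    """L1-normalize, then take element-wise square roots."""
    s = sum(x)
    return [math.sqrt(a / s) for a in x]

d1 = [4.0, 1.0, 3.0, 2.0]
d2 = [3.0, 2.0, 3.0, 2.0]
k_direct = hellinger_kernel(d1, d2)
k_rooted = sum(a * b for a, b in zip(root_map(d1), root_map(d2)))
```

Because the mapping is applied once per descriptor, existing dot-product (or Euclidean-distance) matching machinery can be reused unchanged while effectively comparing descriptors under the Hellinger kernel.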

  3. Side-detecting optical fiber coated with Zn(OH)2 nanorods for ultraviolet sensing applications

    NASA Astrophysics Data System (ADS)

    Azad, S.; Parvizi, R.; Sadeghi, E.

    2017-09-01

    This paper presents improved coupling efficiency and side detection of UV radiation, induced by the light scattering and luminescent features of Zn(OH)2 nanorod-coated multimode optical fibers. Uniform, high-density Zn(OH)2 nanorods were grown hydrothermally on the core of chemically etched multimode optical fibers. The prepared samples were characterized through X-ray diffraction, scanning electron microscopy, and photoluminescence spectroscopy. The detection technique was based on the intensity modulation of the side-coupled light through the Zn(OH)2 nanorods. A simple and cost-effective UV radiation detection setup was designed. The experimentally estimated coupling efficiency of the proposed setup was near 11%. The proposed device exhibited stable and reversible responses, with fast rise and decay times of about 1.4 s and 0.85 s, respectively.

  4. Further study of inversion layer MIS solar cells

    NASA Technical Reports Server (NTRS)

    Ho, Fat Duen

    1992-01-01

    Many inversion layer metal-insulator-semiconductor (IL/MIS) solar cells have been fabricated. As of today, the best cell fabricated by us has a 9.138 percent AMO efficiency, with FF = 0.641, V(sub OC) = 0.557 V, and I(sub SC) = 26.9 micro A. Efforts made to fabricate an IL/MIS solar cell with reasonable efficiency are reported. More accurate control of the thickness of the thin oxide layer between the aluminum and silicon of the MIS contacts has been achieved by using two different process methods. A comparison of these two thin-oxide processes is reported. The effects of the annealing time of the sample are discussed. The range of the resistivity of the substrates used in the IL cell fabrication is experimentally estimated. A theoretical study of the MIS contacts under dark conditions is addressed.

  5. An assessment of the liquid-gas partitioning behavior of major wastewater odorants using two comparative experimental approaches: liquid sample-based vaporization vs. impinger-based dynamic headspace extraction into sorbent tubes.

    PubMed

    Iqbal, Mohammad Asif; Kim, Ki-Hyun; Szulejko, Jan E; Cho, Jinwoo

    2014-01-01

    The gas-liquid partitioning behavior of major odorants (acetic acid, propionic acid, isobutyric acid, n-butyric acid, i-valeric acid, n-valeric acid, hexanoic acid, phenol, p-cresol, indole, skatole, and toluene (as a reference)) commonly found in microbially digested wastewaters was investigated by two experimental approaches. Firstly, a simple vaporization method was applied to measure the target odorants dissolved in liquid samples with the aid of sorbent tube/thermal desorption/gas chromatography/mass spectrometry. As an alternative method, an impinger-based dynamic headspace sampling method was also explored to measure the partitioning of target odorants between the gas and liquid phases with the same detection system. The relative extraction efficiency (in percent) of the odorants by dynamic headspace sampling was estimated against the calibration results derived by the vaporization method. Finally, the concentrations of the major odorants in real digested wastewater samples were also analyzed using both analytical approaches. Through a parallel application of the two experimental methods, we intended to develop an experimental approach to be able to assess the liquid-to-gas phase partitioning behavior of major odorants in a complex wastewater system. The relative sensitivity of the two methods expressed in terms of response factor ratios (RFvap/RFimp) of liquid standard calibration between vaporization and impinger-based calibrations varied widely from 981 (skatole) to 6,022 (acetic acid). Comparison of this relative sensitivity thus highlights the rather low extraction efficiency of the highly soluble and more acidic odorants from wastewater samples in dynamic headspace sampling.

  6. Modeling urban expansion in Yangon, Myanmar using Landsat time-series and stereo GeoEye Images

    NASA Astrophysics Data System (ADS)

    Sritarapipat, Tanakorn; Takeuchi, Wataru

    2016-06-01

    This research proposes a methodology to model urban expansion with a dynamic statistical model using Landsat and GeoEye images. Landsat time series from 1978 to 2010 were used to extract land cover from the past to the present. Stereo GeoEye images were employed to obtain building heights. The class translation was obtained by observing land cover change from the past to the present. Building height can be used to detect the centers of the urban area (mainly commercial areas). It was assumed that the class translation, the distances to the multiple centers of the urban area, and the distance from roads affect urban growth. The urban expansion model based on the dynamic statistical model was therefore defined in terms of three factors: (1) the class translation, (2) the distances to the multiple centers of the urban areas, and (3) the distance from roads. Estimation and prediction of urban expansion using this model are formulated and presented in this research. The experimental area was Yangon, Myanmar, the country's major economic center, with a population of more than five million and rapidly expanding urban areas. The experimental results indicated that the model estimated urban growth efficiently in both the estimation and prediction steps.

  7. ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES

    PubMed Central

    van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.

    2014-01-01

    SUMMARY In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298

  8. A Model Parameter Extraction Method for Dielectric Barrier Discharge Ozone Chamber using Differential Evolution

    NASA Astrophysics Data System (ADS)

    Amjad, M.; Salam, Z.; Ishaque, K.

    2014-04-01

    In order to design an efficient resonant power supply for ozone gas generator, it is necessary to accurately determine the parameters of the ozone chamber. In the conventional method, the information from Lissajous plot is used to estimate the values of these parameters. However, the experimental setup for this purpose can only predict the parameters at one operating frequency and there is no guarantee that it results in the highest ozone gas yield. This paper proposes a new approach to determine the parameters using a search and optimization technique known as Differential Evolution (DE). The desired objective function of DE is set at the resonance condition and the chamber parameter values can be searched regardless of experimental constraints. The chamber parameters obtained from the DE technique are validated by experiment.

  9. Growth factor of Fe-doped semi-insulating InP by LP-MOCVD

    NASA Astrophysics Data System (ADS)

    Yan, Xuejin; Zhu, Hongliang; Wang, Wei; Xu, Guoyang; Zhou, Fan; Ma, Chaohua; Wang, Xiaojie; Tian, Huijiang; Zhang, Jingyuan; Wu, Rong Han; Wang, Qiming

    1998-08-01

    Semi-insulating InP has been grown by low-pressure MOCVD using ferrocene as the dopant source. Fe-doped semi-insulating InP with a resistivity of 2.0 × 10^8 Ω·cm and a breakdown field greater than 4.0 × 10^4 V·cm^-1 has been achieved. It is found that the resistivity increases with growth pressure when the TMIn, PH3, and ferrocene [Fe(C5H5)2] flows are kept constant at a growth temperature of 620 degrees Celsius. Moreover, experimental results showing how the resistivity varies with ferrocene mole fraction are given. By comparison of calculated and experimental results, the active Fe doping efficiency, η, is estimated to be 8.7 × 10^-4 at 20 mbar growth pressure and 620 degrees Celsius growth temperature.

  10. A tesselation-based model for intensity estimation and laser plasma interactions calculations in three dimensions

    NASA Astrophysics Data System (ADS)

    Colaïtis, A.; Chapman, T.; Strozzi, D.; Divol, L.; Michel, P.

    2018-03-01

    A three-dimensional laser propagation model for computation of laser-plasma interactions is presented. It is focused on indirect-drive geometries in inertial confinement fusion and formulated for use at large temporal and spatial scales. A modified tessellation-based estimator and a relaxation scheme are used to estimate the intensity distribution in plasma from geometrical optics rays. Comparisons with reference solutions show that this approach is well suited to reproduce realistic 3D intensity field distributions of beams smoothed by phase plates. It is shown that the method requires a reduced number of rays compared to traditional rigid-scale intensity estimation. Using this field estimator, we have implemented laser refraction, inverse-bremsstrahlung absorption, and steady-state crossed-beam energy transfer with a linear kinetic model in the numerical code Vampire. Probe beam amplification and laser spot shapes are compared with experimental results and pf3d paraxial simulations. These results are promising for the efficient and accurate computation of laser intensity distributions in hohlraums, which is of importance for determining the capsule implosion shape and the risks of laser-plasma instabilities such as hot electron generation and backscatter in multi-beam configurations.

  11. A Hybrid Method in Vegetation Height Estimation Using PolInSAR Images of Campaign BioSAR

    NASA Astrophysics Data System (ADS)

    Dehnavi, S.; Maghsoudi, Y.

    2015-12-01

    Recently, there has been plenty of research on the retrieval of forest height from PolInSAR data. This paper evaluates a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The objective of this research is firstly to describe each interferometric cross correlation as a sum of contributions corresponding to single-bounce, double-bounce, and volume scattering processes. Then, an ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is implemented to determine the interferometric phase of each local scatterer (ground and canopy). Secondly, the canopy height is estimated by the phase differencing method, according to the RVOG (Random Volume Over Ground) concept. The applied model-based decomposition method is unrivaled in that it is not limited to a specific type of vegetation, unlike previous decomposition techniques. In fact, the use of a generalized probability density function based on the nth power of a cosine-squared function, characterized by two parameters, makes this method useful for different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.

  12. New Hybrid Algorithms for Estimating Tree Stem Diameters at Breast Height Using a Two Dimensional Terrestrial Laser Scanner

    PubMed Central

    Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli

    2015-01-01

    In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic-means method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, the proposed method improves the accuracy of tree diameter estimation significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that this method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
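The algebraic circle-fitting stage can be sketched as follows. The code uses a Kasa least-squares fit in Cartesian form as a simple stand-in for the paper's polar-form fit with Levenberg-Marquardt refinement; the trunk cross-section points are synthetic and noise-free.

```python
import math

# Kasa algebraic circle fit: minimize sum (x^2 + y^2 + a*x + b*y + c)^2
# over (a, b, c), then recover center (-a/2, -b/2) and radius. A simple
# Cartesian stand-in for the paper's polar-form fit with LM refinement.

def solve3(m, v):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [vi] for row, vi in zip(m, v)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [a - f * b for a, b in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def kasa_fit(pts):
    """Return (cx, cy, r) of the least-squares circle through pts."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); syy = sum(y * y for _, y in pts)
    sxy = sum(x * y for x, y in pts)
    sz = sum(x * x + y * y for x, y in pts)
    szx = sum((x * x + y * y) * x for x, y in pts)
    szy = sum((x * x + y * y) * y for x, y in pts)
    # Normal equations for [a, b, c]:
    a, b, c = solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]],
                     [-szx, -szy, -sz])
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - c)

# Synthetic trunk cross-section: radius 0.15 m, center (1.0, 2.0),
# seen as a partial arc as a single-sided scan would produce.
pts = [(1.0 + 0.15 * math.cos(t), 2.0 + 0.15 * math.sin(t))
       for t in [0.1 * k for k in range(20)]]
cx, cy, r = kasa_fit(pts)
dbh = 2.0 * r
```

An algebraic fit like this is typically used to seed the nonlinear (Levenberg-Marquardt) refinement, since it is closed-form and cheap even for partial arcs.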

  13. Development of PARMA: PHITS-based analytical radiation model in the atmosphere.

    PubMed

    Sato, Tatsuhiko; Yasuda, Hiroshi; Niita, Koji; Endo, Akira; Sihver, Lembit

    2008-08-01

    Estimation of cosmic-ray spectra in the atmosphere has been essential for the evaluation of aviation doses. We therefore calculated these spectra by performing Monte Carlo simulation of cosmic-ray propagation in the atmosphere using the PHITS code. The accuracy of the simulation was well verified by experimental data taken under various conditions, even near sea level. Based on a comprehensive analysis of the simulation results, we proposed an analytical model for estimating the cosmic-ray spectra of neutrons, protons, helium ions, muons, electrons, positrons and photons applicable to any location in the atmosphere at altitudes below 20 km. Our model, named PARMA, enables us to calculate the cosmic radiation doses rapidly with a precision equivalent to that of the Monte Carlo simulation, which requires much more computational time. With these properties, PARMA is capable of improving the accuracy and efficiency of the cosmic-ray exposure dose estimations not only for aircrews but also for the public on the ground.

  14. Efficient estimation of three-dimensional covariance and its application in the analysis of heterogeneous samples in cryo-electron microscopy

    PubMed Central

    Liao, Hstau Y.; Hashem, Yaser; Frank, Joachim

    2015-01-01

    Single-particle cryogenic electron microscopy (cryo-EM) is a powerful tool for the study of macromolecular structures at high resolution. Classification allows multiple structural states to be extracted and reconstructed from the same sample. One classification approach is via the covariance matrix, which captures the correlation between every pair of voxels. Earlier approaches employ computing-intensive resampling and estimate only the eigenvectors of the matrix, which are then used in a separate fast classification step. We propose an iterative scheme to explicitly estimate the covariance matrix in its entirety. In our approach, the flexibility in choosing the solution domain allows us to examine a part of the molecule in greater detail. 3D covariance maps obtained in this way from experimental data (cryo-EM images of the eukaryotic pre-initiation complex) prove to be in excellent agreement with conclusions derived by using traditional approaches, revealing in addition the interdependencies of ligand bindings and structural changes. PMID:25982529

  15. Efficient estimation of three-dimensional covariance and its application in the analysis of heterogeneous samples in cryo-electron microscopy.

    PubMed

    Liao, Hstau Y; Hashem, Yaser; Frank, Joachim

    2015-06-02

    Single-particle cryogenic electron microscopy (cryo-EM) is a powerful tool for the study of macromolecular structures at high resolution. Classification allows multiple structural states to be extracted and reconstructed from the same sample. One classification approach is via the covariance matrix, which captures the correlation between every pair of voxels. Earlier approaches employ computing-intensive resampling and estimate only the eigenvectors of the matrix, which are then used in a separate fast classification step. We propose an iterative scheme to explicitly estimate the covariance matrix in its entirety. In our approach, the flexibility in choosing the solution domain allows us to examine a part of the molecule in greater detail. Three-dimensional covariance maps obtained in this way from experimental data (cryo-EM images of the eukaryotic pre-initiation complex) prove to be in excellent agreement with conclusions derived by using traditional approaches, revealing in addition the interdependencies of ligand bindings and structural changes.

  16. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.

  17. Evaluation of a Zirconium Recycle Scrubber System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Barry B.; Bruffey, Stephanie H.

    2017-04-01

    A hot-cell demonstration of the zirconium recycle process is planned as part of the Materials Recovery and Waste Forms Development (MRWFD) campaign. The process treats Zircaloy® cladding recovered from used nuclear fuel with chlorine gas to recover the zirconium as volatile ZrCl4. This releases radioactive tritium trapped in the alloy, converting it to volatile tritium chloride (TCl). To meet regulatory requirements governing radioactive emissions from nuclear fuel treatment operations, the capture and retention of a portion of this TCl may be required prior to discharge of the off-gas stream to the environment. In addition to demonstrating tritium removal from a synthetic zirconium recycle off-gas stream, the recovery and quantification of tritium may refine estimates of the amount of tritium present in the Zircaloy cladding of used nuclear fuel. To support these objectives, a bubbler-type scrubber was fabricated to remove the TCl from the zirconium recycle off-gas stream. The scrubber was fabricated from glass and polymer components that are resistant to chlorine and hydrochloric acid solutions. Because of concerns that the scrubber efficiency is not quantitative, tests were performed using DCl as a stand-in to experimentally measure the scrubbing efficiency of this unit. Scrubbing efficiency was ~108% ± 3% with water as the scrubber solution. Variations were noted when 1 M NaOH scrub solution was used, with values ranging from 64% to 130%. The reason for the variations is not known. It is recommended that the equipment be operated with water as the scrubbing solution. Scrubbing efficiency is estimated at 100%.

  18. Effective channel estimation and efficient symbol detection for multi-input multi-output underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Ling, Jun

    Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. 
The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.
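
    The fast implementation mentioned above rests on a standard identity: a circulant matrix is diagonalized by the DFT, so a matrix-vector product reduces to FFTs. A minimal numpy sketch of that identity (illustrative only, not the dissertation's code):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by x using FFTs.

    A circulant matrix is diagonalized by the DFT, C = F^H diag(F c) F,
    so C @ x costs O(n log n) instead of O(n^2).
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Check against the explicit dense product.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(circulant_matvec(c, x), C @ x)
```

    Replacing the O(n^2) explicit product with the O(n log n) FFT route is what keeps the per-iteration cost of such detectors low.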

  19. Estimating the SCS runoff curve number in forest catchments of Korea

    NASA Astrophysics Data System (ADS)

    Choi, Hyung Tae; Kim, Jaehoon; Lim, Hong-geun

    2016-04-01

    Estimating flood runoff discharge is an important task in the design of many hydraulic structures in streams, rivers and lakes, such as dams, bridges and culverts, and many researchers have therefore tried to develop better methods for estimating it. The SCS runoff curve number is an empirical parameter determined by analysis of runoff from small catchments and hillslope plots monitored by the USDA. The method is an efficient way to determine the approximate amount of runoff from a rainfall event in a particular area and is very widely used all around the world. However, conditions in Korea differ considerably from those in the USA in topography, geology and land use. The adaptability of the SCS runoff curve number therefore needs to be examined in order to raise the accuracy of runoff prediction with this method. The purpose of this study is to find the SCS runoff curve number based on the analysis of observed data from several experimental forest catchments monitored by the National Institute of Forest Science (NIFOS), as a pilot study to modify the SCS runoff curve number for forest lands in Korea. Rainfall and runoff records observed in the Gwangneung coniferous and broadleaf forests and the Sinwol, Hwasoon, Gongju and Gyeongsan catchments were selected to analyze the variability of flood runoff coefficients during the last 5 years. This study shows that runoff curve numbers of the experimental forest catchments range from 55 to 65. The SCS runoff curve number method is widely used for estimating design discharge for small ungauged watersheds, so this study can help to estimate the discharge for forest watersheds in Korea more accurately.
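
    The curve number method is a closed-form formula, so the CN range of 55-65 reported above translates directly into runoff depths. A minimal sketch in customary US units (inches), with the standard initial-abstraction ratio lambda = 0.2 (illustrative, not NIFOS code):

```python
def scs_runoff(P, CN, lam=0.2):
    """SCS curve number direct runoff depth Q for storm rainfall P (inches).

    S  = 1000/CN - 10 : potential maximum retention after runoff begins
    Ia = lam * S      : initial abstraction (lam = 0.2 in the standard method)
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    S = 1000.0 / CN - 10.0
    Ia = lam * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# For the CN range 55-65 found in the Korean forest catchments,
# a 5-inch storm yields roughly 1.0 to 1.7 inches of direct runoff.
print(scs_runoff(5.0, 55), scs_runoff(5.0, 65))
```

    The strong sensitivity of Q to CN is why calibrating CN to local forest conditions matters for design discharge.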

  20. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that, for the ℓ₁/ℓ₂ norm, give solutions in a few seconds, making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
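
    The two-level ℓ₁/ℓ₂ prior and the core update of such accelerated first-order schemes, group soft-thresholding (the proximal operator of the mixed norm), are compact enough to sketch. Rows index source locations, columns index time samples; this is an illustrative sketch, not the authors' solver:

```python
import math

def l21_norm(X):
    """Two-level l1/l2 mixed norm: sum over rows (source locations) of the
    l2 norm over columns (time samples)."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in X)

def prox_l21(X, t):
    """Proximal operator of t * l21 norm: group soft-thresholding.

    Each row is shrunk toward zero by t in l2 norm; rows whose norm is
    below t are zeroed entirely, which yields row-sparse solutions.
    """
    out = []
    for row in X:
        nrm = math.sqrt(sum(v * v for v in row))
        scale = max(0.0, 1.0 - t / nrm) if nrm > 0.0 else 0.0
        out.append([scale * v for v in row])
    return out

X = [[3.0, 4.0], [0.1, 0.0]]
assert abs(l21_norm(X) - 5.1) < 1e-12
assert prox_l21(X, 1.0)[1] == [0.0, 0.0]  # weak row is zeroed wholesale
```

    Zeroing whole rows at once is exactly how the prior promotes spatially focal sources with smooth time courses.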

  1. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that, for the ℓ1/ℓ2 norm, give solutions in a few seconds, making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  2. Optimal Learning for Efficient Experimentation in Nanotechnology and Biochemistry

    DTIC Science & Technology

    2015-12-22

    AFRL-AFOSR-VA-TR-2016-0018. Optimal Learning for Efficient Experimentation in Nanotechnology and Biochemistry. Principal Investigator: Warren B. Powell, Trustees of Princeton University, Department of Operations Research and ... Dates covered: 01-07-2012 to 30-09-2015.

  3. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation typically requires many SPICE simulations, and circuit SPICE simulation accounts for the largest share of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over both the design variables and the process variables. A set of sample points is first obtained from SPICE simulations, and the mixture surrogate model is then trained on these points with the lasso algorithm. Experimental results show that the proposed model calculates the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on this model, a further accelerated algorithm is developed to enhance the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
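
    For a sense of why plain Monte Carlo is expensive here: a failure rate near 1e-3 needs on the order of 1e5 to 1e6 simulations for a stable estimate. Mean-shift importance sampling, the family of fast methods cited above, concentrates samples in the failure region and reweights them. A toy 1-D sketch on a standard-normal process variable (illustrative only; no SPICE or surrogate model involved):

```python
import math
import random

def is_tail_prob(threshold, shift, n, seed=1):
    """Mean-shift importance sampling for p = P(X > threshold), X ~ N(0, 1).

    Samples are drawn from N(shift, 1) so that 'failures' are frequent,
    and each sample is reweighted by the likelihood ratio
    phi(x) / phi(x - shift) to keep the estimator unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(-x * x / 2.0) / math.exp(-(x - shift) ** 2 / 2.0)
    return total / n

# P(X > 3) is about 1.35e-3; with the sampler centered on the failure
# boundary, 2e4 samples already give a tight estimate.
p_hat = is_tail_prob(3.0, 3.0, 20000)
assert 0.0011 < p_hat < 0.0016
```

    Centering the sampling distribution near the failure boundary is the standard choice; plain Monte Carlo with the same 2e4 samples would see only a few dozen failures.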

  4. Proton exchange membrane fuel cells cold startup global strategy for fuel cell plug-in hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Henao, Nilson; Kelouwani, Sousso; Agbossou, Kodjo; Dubé, Yves

    2012-12-01

    This paper investigates the Proton Exchange Membrane Fuel Cell (PEMFC) cold startup problem within the specific context of Plug-in Hybrid Electric Vehicles (PHEV). A global strategy is proposed that aims at providing an efficient method to minimize the energy consumption during the startup of a PEMFC. The overall control system is based on a supervisory architecture in which the Energy Management System (EMS) plays the role of the power flow supervisor. The EMS estimates in advance the time to start the fuel cell (FC), based upon the battery energy usage during the trip. Given this estimation and the amount of additional energy required, the fuel cell temperature management strategy computes the most appropriate time to start heating the stack in order to reduce heat loss through natural convection. As the cell temperature rises, the PEMFC is started and the reaction heat is used as a self-heating power source to further increase the stack temperature. A time-optimal self-heating approach based on the Pontryagin minimum principle is proposed and tested. The experimental results show that the proposed approach is efficient and can be implemented in real time on FC-PHEVs.

  5. Computationally efficient modeling of proprioceptive signals in the upper limb for prostheses: a simulation study

    PubMed Central

    Williams, Ian; Constandinou, Timothy G.

    2014-01-01

    Accurate models of proprioceptive neural patterns could one day play an important role in the creation of an intuitive proprioceptive neural prosthesis for amputees. This paper looks at combining efficient implementations of biomechanical and proprioceptor models in order to generate signals that mimic human muscular proprioceptive patterns for future experimental work in prosthesis feedback. A neuro-musculoskeletal model of the upper limb with 7 degrees of freedom and 17 muscles is presented and generates real-time estimates of muscle spindle and Golgi tendon organ neural firing patterns. Unlike previous neuro-musculoskeletal models, muscle activation and excitation levels are unknowns in this application, and an inverse dynamics tool (static optimization) is integrated to estimate these variables. A proprioceptive prosthesis will need to be portable, which is incompatible with the computationally demanding nature of standard biomechanical and proprioceptor modeling. This paper uses and proposes a number of approximations and optimizations to make real-time operation on portable hardware feasible. Finally, technical obstacles to mimicking natural feedback for an intuitive proprioceptive prosthesis, as well as issues and limitations with existing models, are identified and discussed. PMID:25009463

  6. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    PubMed

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

    Calcium imaging has become a routine technique in neuroscience for subcellular to network level investigations. Rapid progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A background-subtracted fluorescence transients estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Experimental estimation of migration and transfer of organic substances from consumer articles to cotton wipes: Evaluation of underlying mechanisms.

    PubMed

    Clausen, Per Axel; Spaan, Suzanne; Brouwer, Derk H; Marquart, Hans; le Feber, Maaike; Engel, Roel; Geerts, Lieve; Jensen, Keld Alstrup; Kofoed-Sørensen, Vivi; Hansen, Brian; De Brouwere, Katleen

    2016-01-01

    The aim of this work was to identify the key mechanisms governing transport of organic chemical substances from consumer articles to cotton wipes. The results were used to establish a mechanistic model to improve assessment of dermal contact exposure. Four types of PVC flooring, 10 types of textiles and one type of inkjet-printed paper were used to establish the mechanisms and the model. Kinetic extraction studies in methanol demonstrated the existence of matrix diffusion and indicated the presence of a substance surface layer on some articles. Consequently, the proposed substance transfer model considers mechanical transport from a surface film and matrix diffusion in an article with a known initial total substance concentration. The estimated chemical substance transfer values to cotton wipes were comparable to the literature data (relative transfer ∼2%), whereas relative transfer efficiencies from spiked substrates were high (∼50%). For consumer articles, high correlation (r² = 0.92) was observed between predicted and measured transfer efficiencies, but concentrations were overpredicted by a factor of 10. Adjusting the relative transfer from about 50% used in the model to about 2.5% removed the overprediction. Further studies are required to confirm the model for generic use.

  8. Performance Estimation for Two-Dimensional Brownian Rotary Ratchet Systems

    NASA Astrophysics Data System (ADS)

    Tutu, Hiroki; Horita, Takehiko; Ouchi, Katsuya

    2015-04-01

    Within the context of the Brownian ratchet model, a molecular rotary system that can perform unidirectional rotations induced by linearly polarized ac fields and produce positive work under loads was studied. The model is based on the Langevin equation for a particle in a two-dimensional (2D) three-tooth ratchet potential of threefold symmetry. The performance of the system is characterized by the coercive torque, i.e., the strength of the load competing with the torque induced by the ac driving field, and the energy efficiency in force conversion from the driving field to the torque. We propose a master equation for coarse-grained states, which takes into account the boundary motion between states, and develop a kinetic description to estimate the mean angular momentum (MAM) and powers relevant to the energy balance equation. The framework of analysis incorporates several 2D characteristics and is applicable to a wide class of models of smooth 2D ratchet potential. We confirm that the obtained expressions for MAM, power, and efficiency of the model can enable us to predict qualitative behaviors. We also discuss the usefulness of the torque/power relationship for experimental analyses, and propose a characteristic for 2D ratchet systems.
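
    The model's starting point, a Langevin equation for a particle in a ratchet potential, can be illustrated with the standard overdamped Euler-Maruyama integrator. The sketch below is 1-D and omits the paper's three-tooth 2-D potential, ac driving field, and load torque; only the integrator skeleton is shown:

```python
import math
import random

def langevin_traj(force, gamma, kT, dt, steps, x0=0.0, seed=0):
    """Overdamped Langevin dynamics, Euler-Maruyama discretization:

        gamma * dx/dt = F(x, t) + xi(t),

    where xi is Gaussian white noise whose per-step variance 2*gamma*kT*dt
    follows from the fluctuation-dissipation relation.
    """
    rng = random.Random(seed)
    x = x0
    traj = [x0]
    sigma = math.sqrt(2.0 * kT * dt / gamma)
    for k in range(steps):
        x += force(x, k * dt) * dt / gamma + sigma * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# Sanity check: at kT = 0 a harmonic force relaxes the particle to 0.
traj = langevin_traj(lambda x, t: -x, gamma=1.0, kT=0.0, dt=0.01, steps=1000, x0=1.0)
assert abs(traj[-1]) < 0.01
```

    Quantities such as the mean angular momentum of the 2-D model are obtained by averaging observables over many such noisy trajectories.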

  9. Note: Fast neutron efficiency in CR-39 nuclear track detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavallaro, S.

    2015-03-15

    CR-39 samples are commonly employed for fast neutron detection in fusion reactors and in inertial confinement fusion experiments. The efficiencies reported in the literature depend strongly on experimental conditions and are, in some cases, highly dispersed. The present note analyses the dependence of the efficiency on various parameters and experimental conditions in both the radiator-assisted and the stand-alone CR-39 configurations. Comparisons of literature experimental data with Monte Carlo calculations and optimized efficiency values are shown and discussed.

  10. An evaluation of multipass electrofishing for estimating the abundance of stream-dwelling salmonids

    Treesearch

    James T. Peterson; Russell F. Thurow; John W. Guzevich

    2004-01-01

    Failure to estimate capture efficiency, defined as the probability of capturing individual fish, can introduce a systematic error or bias into estimates of fish abundance. We evaluated the efficacy of multipass electrofishing removal methods for estimating fish abundance by comparing estimates of capture efficiency from multipass removal estimates to capture...

  11. An energy- and resource-saving technology for utilizing the sludge from thermal power station water treatment facilities

    NASA Astrophysics Data System (ADS)

    Nikolaeva, L. A.; Khusaenova, A. Z.

    2014-05-01

    A method for utilizing production wastes is considered, and a process circuit arrangement is proposed for utilizing a mixture of activated silt and sludge from chemical water treatment by incinerating it with possible heat recovery. The sorption capacity of the products from combusting a mixture of activated silt and sludge with respect to gaseous emissions is experimentally determined. A periodic-duty adsorber charged with a fixed bed of sludge is calculated, and the heat-recovery boiler efficiency is estimated together with the technical-economic indicators of the proposed utilization process circuit arrangement.

  12. Efficient Density Functional Approximation for Electronic Properties of Conjugated Systems

    NASA Astrophysics Data System (ADS)

    Caldas, Marília J.; Pinheiro, José Maximiano, Jr.; Blum, Volker; Rinke, Patrick

    2014-03-01

    There is on-going discussion about reliable prediction of electronic properties of conjugated oligomers and polymers, such as the ionization potential IP and the energy gap. Several exchange-correlation (XC) functionals are being used by the density functional theory community, with different success for different properties. In this work we follow a recent proposal: a fraction α of exact exchange is added to the semi-local PBE XC functional, aiming at consistency, for a given property, with the results obtained by many-body perturbation theory within the G0W0 approximation. We focus on the IP, taken as the negative of the highest occupied molecular orbital energy. We choose α from a study of the prototype family trans-acetylene, and apply this same α to a set of oligomers for which experimental data are available (acenes, phenylenes and others). Our results indicate we can obtain excellent estimates, within 0.2 eV mean absolute deviation from the experimental values, better than through complete E(N-1) - E(N) total-energy calculations with the starting PBE functional. We also obtain good estimates for the electrical gap and orbital energies close to the band edge. Work supported by FAPESP, CNPq, and CAPES, Brazil, and DAAD, Germany.

  13. Designing and Interpreting Limiting Dilution Assays: General Principles and Applications to the Latent Reservoir for Human Immunodeficiency Virus-1.

    PubMed

    Rosenbloom, Daniel I S; Elliott, Oliver; Hill, Alison L; Henrich, Timothy J; Siliciano, Janet M; Siliciano, Robert F

    2015-12-01

    Limiting dilution assays are widely used in infectious disease research. These assays are crucial for current human immunodeficiency virus (HIV)-1 cure research in particular. In this study, we offer new tools to help investigators design and analyze dilution assays based on their specific research needs. Limiting dilution assays are commonly used to measure the extent of infection, and in the context of HIV they represent an essential tool for studying latency and potential curative strategies. Yet standard assay designs may not discern whether an intervention reduces an already minuscule latent infection. This review addresses challenges arising in this setting and in the general use of dilution assays. We illustrate the major statistical method for estimating frequency of infectious units from assay results, and we offer an online tool for computing this estimate. We recommend a procedure for customizing assay design to achieve desired sensitivity and precision goals, subject to experimental constraints. We consider experiments in which no viral outgrowth is observed and explain how using alternatives to viral outgrowth may make measurement of HIV latency more efficient. Finally, we discuss how biological complications, such as probabilistic growth of small infections, alter interpretations of experimental results.
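
    The major statistical method referred to is maximum likelihood under a Poisson single-hit model: a well seeded with n cells is negative with probability exp(-f*n) when infectious units occur at frequency f. For a single dilution the estimator has a closed form; the sketch below covers only this special case (the authors' online tool fits all dilutions jointly):

```python
import math

def infectious_frequency(cells_per_well, wells, negative_wells):
    """ML estimate of infectious-unit frequency from one dilution.

    Poisson single-hit model: P(well negative) = exp(-f * n), hence
    f_hat = -ln(negatives / wells) / n.
    """
    if negative_wells == 0 or negative_wells == wells:
        raise ValueError("need a mix of positive and negative wells")
    return -math.log(negative_wells / wells) / cells_per_well

# 5 of 10 wells negative at 1e6 cells per well -> ln(2) per 1e6 cells,
# i.e. about 0.69 infectious units per million cells.
f = infectious_frequency(1e6, 10, 5)
assert abs(f * 1e6 - math.log(2)) < 1e-12
```

    When all wells are positive (or all negative) the frequency is not identifiable from that dilution alone, which is one reason multi-dilution designs are recommended.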

  14. Bell nonlocality and fully entangled fraction measured in an entanglement-swapping device without quantum state tomography

    NASA Astrophysics Data System (ADS)

    Bartkiewicz, Karol; Lemr, Karel; Černoch, Antonín; Miranowicz, Adam

    2017-03-01

    We propose and experimentally implement an efficient procedure based on entanglement swapping to determine the Bell nonlocality measure of Horodecki et al. [Phys. Lett. A 200, 340 (1995), 10.1016/0375-9601(95)00214-N] and the fully entangled fraction of Bennett et al. [Phys. Rev. A 54, 3824 (1996), 10.1103/PhysRevA.54.3824] of an arbitrary two-qubit polarization-encoded state. The nonlocality measure corresponds to the amount of the violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality optimized over all measurement settings. By using simultaneously two copies of a given state, we measure directly only six parameters. This is an experimental determination of these quantities without quantum state tomography or continuous monitoring of all measurement bases in the usual CHSH inequality tests. We analyze how well the measured degrees of Bell nonlocality and other entanglement witnesses (including the fully entangled fraction and a nonlinear entropic witness) of an arbitrary two-qubit state can estimate its entanglement. In particular, we measure these witnesses and estimate the negativity of various two-qubit Werner states. Our approach could especially be useful for quantum communication protocols based on entanglement swapping.
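
    The Horodecki measure has a compact closed form: the maximal CHSH value attainable on a two-qubit state ρ is 2√(m₁+m₂), where m₁ ≥ m₂ are the two largest eigenvalues of TᵀT and T is the correlation matrix T_ij = Tr[ρ(σ_i ⊗ σ_j)]. A numpy sketch of that formula (illustrative; the experiment determines these quantities from measured moments rather than from a known ρ):

```python
import numpy as np

def chsh_max(rho):
    """Horodecki criterion: maximal CHSH value of a two-qubit state rho,
    2 * sqrt(m1 + m2), with m1 >= m2 the two largest eigenvalues of T^T T,
    where T_ij = Tr[rho (sigma_i kron sigma_j)]."""
    sig = [np.array([[0, 1], [1, 0]], dtype=complex),
           np.array([[0, -1j], [1j, 0]]),
           np.array([[1, 0], [0, -1]], dtype=complex)]
    T = np.array([[np.real(np.trace(rho @ np.kron(sig[i], sig[j])))
                   for j in range(3)] for i in range(3)])
    m = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2.0 * np.sqrt(m[-1] + m[-2])

# The singlet state saturates Tsirelson's bound 2*sqrt(2).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho_singlet = np.outer(psi, psi)
assert abs(chsh_max(rho_singlet) - 2.0 * np.sqrt(2.0)) < 1e-9
```

    States with chsh_max at most 2 admit a local hidden-variable description for the CHSH scenario; product states give exactly 2.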

  15. SIMPLE estimate of the free energy change due to aliphatic mutations: superior predictions based on first principles.

    PubMed

    Bueno, Marta; Camacho, Carlos J; Sancho, Javier

    2007-09-01

    The bioinformatics revolution of the last decade has been instrumental in the development of empirical potentials to quantitatively estimate protein interactions for modeling and design. Although computationally efficient, these potentials hide most of the relevant thermodynamics in 5-to-40 parameters that are fitted against a large experimental database. Here, we revisit this longstanding problem and show that a careful consideration of the change in hydrophobicity, electrostatics, and configurational entropy between the folded and unfolded state of aliphatic point mutations predicts 20-30% fewer false positives and yields more accurate predictions than any published empirical energy function. This significant improvement is achieved with essentially no free parameters, validating past theoretical and experimental efforts to understand the thermodynamics of protein folding. Our first-principles analysis strongly suggests that both the solute-solute van der Waals interactions in the folded state and the electrostatics free energy change of exposed aliphatic mutations are almost completely compensated by similar interactions operating in the unfolded ensemble. Not surprisingly, the problem of properly accounting for the solvent contribution to the free energy of polar and charged group mutations, as well as of mutations that disrupt the protein backbone, remains open. 2007 Wiley-Liss, Inc.

  16. Diagnostics of Cold-Sprayed Particle Velocities Approaching Critical Deposition Conditions

    NASA Astrophysics Data System (ADS)

    Mauer, G.; Singh, R.; Rauwald, K.-H.; Schrüfer, S.; Wilson, S.; Vaßen, R.

    2017-10-01

    In cold spraying, the impact particle velocity plays a key role for successful deposition. It is well known that only those particles can achieve successful bonding which have an impact velocity exceeding a particular threshold. This critical velocity depends on the thermomechanical properties of the impacting particles at impact temperature. The latter depends on the gas temperature in the torch but also on stand-off distance and gas pressure. In the past, some semiempirical approaches have been proposed to estimate particle impact and critical velocities. Besides that, only a limited number of studies on particle velocity measurements in cold spraying are available. In the present work, particle velocity measurements were performed using a cold spray meter, in which a laser beam illuminates the particles to ensure sufficiently detectable radiant signal intensities. Measurements were carried out for INCONEL® alloy 718-type powders with different particle sizes. These experimental investigations mainly covered subcritical spray parameters for this material, in order to examine the conditions of initial deposition more closely. The critical velocities were identified by evaluating the deposition efficiencies and correlating them to the measured particle velocity distributions. In addition, the experimental results were compared with values estimated by model calculations.

  17. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases.
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.

  18. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows one to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. 
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112

  19. Measurement of acoustic velocity components in a turbulent flow using LDV and high-repetition rate PIV

    NASA Astrophysics Data System (ADS)

    Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank

    2017-06-01

    The present study provides theoretical details and experimental validation results for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring amplitudes and phases of acoustic velocity components (AVC), i.e., the waveform parameters of each velocity component induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results support that the proposed turbulence rejection method, based on the estimation of cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, provides asymptotically efficient estimators with respect to the number of samples. Furthermore, it is shown that the estimator uncertainties can be simply estimated, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted in order to validate the acoustic velocity estimation approach and the derived uncertainty estimates. While in previous studies estimates were obtained using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition rate particle image velocimetry (PIV) can also be employed successfully. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.
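
    At its core the turbulence-rejection idea is coherent demodulation: the tonal acoustic component survives projection onto the tone frequency while broadband turbulence averages out over a long record. A minimal single-tone, single-record sketch (illustrative; the paper's estimator additionally uses cross power spectra with a wall-pressure reference and block averaging):

```python
import cmath
import math
import random

def acoustic_amp_phase(v, freq, fs):
    """Lock-in style estimate of the amplitude and phase of the tonal
    (acoustic) component of velocity record v at frequency freq.

    The record is demodulated at the tone; broadband turbulence is
    incoherent with the tone and averages toward zero as the record grows.
    """
    n = len(v)
    z = sum(vk * cmath.exp(-2j * math.pi * freq * k / fs)
            for k, vk in enumerate(v)) * 2.0 / n
    return abs(z), cmath.phase(z)

# Tone of amplitude 0.5 and phase -0.3 buried in Gaussian 'turbulence'.
fs, n = 1000.0, 1000
rng = random.Random(42)
v = [0.5 * math.cos(2.0 * math.pi * 100.0 * k / fs - 0.3) + rng.gauss(0.0, 0.2)
     for k in range(n)]
amp, ph = acoustic_amp_phase(v, 100.0, fs)
assert abs(amp - 0.5) < 0.05 and abs(ph + 0.3) < 0.1
```

    The residual error shrinks roughly as 1/sqrt(n) for broadband noise, which is why long records (or many averaged blocks) yield the tight uncertainties reported above.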

  20. Stochastic spectral projection of electrochemical thermal model for lithium-ion cell state estimation

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin

    2017-03-01

    A novel approach for integrating a pseudo-two-dimensional electrochemical thermal (P2D-ECT) model and a data assimilation algorithm is presented for lithium-ion cell state estimation. This approach refrains from making any simplifications in the P2D-ECT model while making it amenable to online state estimation. Although the P2D-ECT model is deterministic, uncertainty in the initial states induces stochasticity. This stochasticity is resolved by spectrally projecting the stochastic P2D-ECT model on a set of orthogonal multivariate Hermite polynomials. Volume averaging in the stochastic dimensions is proposed for efficient numerical solution of the resultant model. A state estimation framework is developed using a transformation of the orthogonal basis to assimilate the measurables with this system of equations. The effectiveness of the proposed method is first demonstrated by assimilating cell voltage and temperature data generated using a synthetic test bed. The validated method is then used with experimentally observed cell voltage and temperature data for state estimation at different operating conditions and drive cycle protocols. The results show increased prediction accuracy when the data are assimilated every 30 s. The high accuracy of the estimated states is exploited to infer the temperature-dependent behavior of the lithium-ion cell.
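Spectral projection onto Hermite polynomials can be illustrated on a scalar toy problem: an uncertain state propagated through a nonlinear response, with the mean and variance recovered from the expansion coefficients. The stand-in response g and all numbers are illustrative; the paper projects the full multivariate P2D-ECT state rather than a scalar.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

mu, sigma = 1.0, 0.2               # uncertain state: X = mu + sigma*xi, xi ~ N(0,1)
g = lambda x: np.exp(-x)           # stand-in nonlinear model response

order = 6                          # truncation order of the Hermite expansion
xi, w = hermegauss(order + 1)      # Gauss-Hermite nodes/weights, weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)       # normalize to a probability measure

# spectral coefficients c_k = E[g(X) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
c = []
for k in range(order + 1):
    He_k = hermeval(xi, [0.0] * k + [1.0])
    c.append(np.sum(w * g(mu + sigma * xi) * He_k) / math.factorial(k))

mean_pc = c[0]                                                   # mean from the 0th coefficient
var_pc = sum(math.factorial(k) * c[k] ** 2 for k in range(1, order + 1))

# exact lognormal moments for comparison
mean_ex = math.exp(-mu + sigma ** 2 / 2)
var_ex = math.exp(-2 * mu + sigma ** 2) * (math.exp(sigma ** 2) - 1)
print(mean_pc, mean_ex, var_pc, var_ex)
```

The low-order truncation already matches the exact moments to many digits here, which is the efficiency argument for spectral projection over brute-force sampling.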

  1. Efficiency assessment of using satellite data for crop area estimation in Ukraine

    NASA Astrophysics Data System (ADS)

    Gallego, Francisco Javier; Kussul, Nataliia; Skakun, Sergii; Kravchenko, Oleksii; Shelestov, Andrii; Kussul, Olga

    2014-06-01

    The knowledge of the crop area is a key element for the estimation of the total crop production of a country and, therefore, for the management of agricultural commodities markets. Satellite data and derived products can be effectively used for stratification purposes and for a-posteriori correction of area estimates from ground observations. This paper presents the main results and conclusions of a study conducted in 2010 to explore the feasibility and efficiency of crop area estimation in Ukraine assisted by optical satellite remote sensing images. The study was carried out on three oblasts in Ukraine with a total area of 78,500 km2. The efficiency of using images acquired by several satellite sensors (MODIS, Landsat-5/TM, AWiFS, LISS-III, and RapidEye), combined with a field survey on a stratified sample of square segments, for crop area estimation in Ukraine is assessed. The main criteria used for the efficiency analysis are: (i) relative efficiency, which quantifies the factor by which the error of area estimates can be reduced with satellite images, and (ii) cost-efficiency, which quantifies the factor by which the costs of ground surveys for crop area estimation can be reduced with satellite images. These criteria are applied to each satellite image type separately, i.e., no integration of images acquired by different sensors is made, to select the optimal dataset. The study found that only MODIS and Landsat-5/TM reached the cost-efficiency thresholds, while AWiFS, LISS-III, and RapidEye images, owing to their high price, were not cost-efficient for crop area estimation in Ukraine at the oblast level.
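A hedged sketch of the two criteria follows, using the classical survey-sampling result that a regression estimator combining ground segments with a satellite-classified covariate divides the variance by 1/(1 - rho^2), where rho is the segment-level correlation between ground and image crop areas. The exact definitions and all numbers below are illustrative assumptions, not the study's figures.

```python
def relative_efficiency(rho):
    # factor by which the variance of the area estimate is divided
    # when a satellite covariate with correlation rho is used
    return 1.0 / (1.0 - rho ** 2)

def cost_efficiency(rho, ground_cost_per_segment, image_cost_total, n_segments):
    # factor by which total survey cost is divided for the SAME target variance:
    # with images, only n/RE ground segments are needed, plus the image cost
    re = relative_efficiency(rho)
    cost_without = ground_cost_per_segment * n_segments
    cost_with = ground_cost_per_segment * n_segments / re + image_cost_total
    return cost_without / cost_with

# hypothetical example: rho = 0.8 from a classification, 500 segments at
# 100 EUR each, imagery plus processing 15,000 EUR
print(relative_efficiency(0.8))              # ~2.78
print(cost_efficiency(0.8, 100, 15_000, 500))
```

Cost-efficiency exceeds 1 only when the variance gain outweighs the image price, which is exactly why the cheaper MODIS and Landsat-5/TM imagery passed the threshold and the high-priced sensors did not.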

  2. Neuro-fuzzy computing for vibration-based damage localization and severity estimation in an experimental wind turbine blade with superimposed operational effects

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2016-04-01

    Fueled by the increasing demand for carbon-neutral energy, the erection of ever larger wind turbines (WTs), whose blades (WTBs) have higher flexibility and lower buckling capacity, leads to increasing operation and maintenance costs. This can be counteracted with efficient structural health monitoring (SHM), which allows maintenance actions to be scheduled according to the structural state and prevents dramatic failures. The present study proposes a novel multi-step approach for vibration-based structural damage localization and severity estimation for application in operating WTs. First, partial autocorrelation coefficients (PACCs) are estimated from vibrational responses. Second, principal component analysis is applied to PACCs from the healthy structure in order to calculate scores. Then, the scores are ranked with respect to their ability to differentiate between damage scenarios. This ranking information is used for constructing hierarchical adaptive neuro-fuzzy inference systems (HANFISs), where cross-validation is used to identify optimal numbers of hierarchy levels. Different HANFISs are created for the purposes of structural damage localization and severity estimation. To demonstrate the applicability of the approach, experimental data are superimposed with signals from numerical simulations to account for the characteristics of operational noise. For the physical experiments, a small-scale WTB is excited with a domestic fan and damage scenarios are introduced non-destructively by attaching small masses. Numerical simulations are also performed for a representative fully functional small WT operating in turbulent wind. The obtained results are promising for future applications of vibration-based SHM to facilitate improved safety and reliability of WTs at lower costs.

  3. Molecular system identification for enzyme directed evolution and design

    NASA Astrophysics Data System (ADS)

    Guan, Xiangying; Chakrabarti, Raj

    2017-09-01

    The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.

  4. Bayesian approach to the analysis of neutron Brillouin scattering data on liquid metals

    NASA Astrophysics Data System (ADS)

    De Francesco, A.; Guarini, E.; Bafile, U.; Formisano, F.; Scaccia, L.

    2016-08-01

    When the dynamics of liquids and disordered systems at the mesoscopic level is investigated by means of inelastic scattering (e.g., neutron or x-ray), spectra are often characterized by a poor definition of the excitation lines and spectroscopic features in general, and one important issue is to establish how many of these lines need to be included in the modeling function and to estimate their parameters. Furthermore, when strongly damped excitations are present, commonly used and widespread fitting algorithms are particularly sensitive to the choice of initial parameter values. An inadequate choice may lead to an inefficient exploration of the parameter space, resulting in the algorithm getting stuck in a local minimum. In this paper, we present a Bayesian approach to the analysis of neutron Brillouin scattering data in which the number of excitation lines is treated as unknown and estimated along with the other model parameters. We propose a joint estimation procedure based on a reversible-jump Markov chain Monte Carlo algorithm, which efficiently explores the parameter space, producing a probabilistic measure to quantify the uncertainty in the number of excitation lines as well as reliable parameter estimates. The proposed method could prove of great importance in extracting physical information from experimental data, especially when the detection of spectral features is complicated not only by the properties of the sample but also by the limited instrumental resolution and count statistics. The approach is tested on a generated data set and then applied to real experimental spectra of neutron Brillouin scattering from a liquid metal, previously analyzed in a more traditional way.

  5. Comparison of modeled estimates of inhalation exposure to aerosols during use of consumer spray products.

    PubMed

    Park, Jihoon; Yoon, Chungsik; Lee, Kiyoung

    2018-05-30

    In the field of exposure science, various exposure assessment models have been developed to complement experimental measurements; however, few studies have been published on their validity. This study compares the estimated inhaled aerosol doses of several inhalation exposure models to experimental measurements of aerosols released from consumer spray products, and then compares deposited doses within different parts of the human respiratory tract according to deposition models. Exposure models, including the European Center for Ecotoxicology of Chemicals Targeted Risk Assessment (ECETOC TRA), the Consumer Exposure Model (CEM), SprayExpo, ConsExpo Web and ConsExpo Nano, were used to estimate the inhaled dose under various exposure scenarios, and modeled and experimental estimates were compared. The deposited dose in different respiratory regions was estimated using the International Commission on Radiological Protection model and multiple-path particle dosimetry models under the assumption of polydispersed particles. The modeled estimates of the inhaled doses were accurate in the short term, i.e., within 10 min of the initial spraying, with differences from the experimental estimates ranging from 0 to 73% among the models. However, the estimates for long-term exposure, i.e., exposure times of several hours, deviated significantly from the experimental estimates in the absence of ventilation. The differences between the experimental and modeled estimates of particle number and surface area were constant over time under ventilated conditions. ConsExpo Nano, as a nano-scale model, showed stable estimates of short-term exposure, with a difference from the experimental estimates of less than 60% for all metrics. The deposited particle estimates were similar among the deposition models, particularly in the nanoparticle range for the head airway and alveolar regions.
In conclusion, the results showed that the inhalation exposure models tested in this study are suitable for estimating short-term aerosol exposure (within half an hour), but not for estimating long-term exposure. Copyright © 2018 Elsevier GmbH. All rights reserved.

  6. Simulation study of amplitude-modulated (AM) harmonic motion imaging (HMI) for stiffness contrast quantification with experimental validation.

    PubMed

    Maleke, Caroline; Luo, Jianwen; Gamarnik, Viktor; Lu, Xin L; Konofagou, Elisa E

    2010-07-01

    The objective of this study is to show that Harmonic Motion Imaging (HMI) can be used as a reliable tumor-mapping technique based on the tumor's distinct stiffness at the early onset of disease. HMI is a radiation-force-based imaging method that generates a localized vibration deep inside the tissue to estimate the relative tissue stiffness based on the resulting displacement amplitude. In this paper, a finite-element model (FEM) study is presented, followed by an experimental validation in tissue-mimicking polyacrylamide gels and excised human breast tumors ex vivo. This study compares the resulting tissue motion in simulations and experiments at four different gel stiffnesses and three distinct spherical inclusion diameters. The elastic moduli of the gels were separately measured using mechanical testing. Identical transducer parameters were used in both the FEM and experimental studies, i.e., a 4.5-MHz single-element focused ultrasound (FUS) transducer and a 7.5-MHz diagnostic (pulse-echo) transducer. In the simulation, an acoustic pressure field was used as the input stimulus to generate a localized vibration inside the target. Radiofrequency (rf) signals were then simulated using a 2D convolution model. A one-dimensional cross-correlation technique was applied to the simulated and experimental rf signals to estimate the axial displacement resulting from the harmonic radiation force. In order to measure the reliability of the displacement profiles in estimating the tissue stiffness distribution, the contrast-transfer efficiency (CTE) was calculated. For tumor mapping ex vivo, a harmonic radiation force was applied using a 2D raster-scan technique. The 2D HMI images of the breast tumor ex vivo could detect a malignant tumor (20 × 10 mm²) surrounded by glandular and fat tissues.
The FEM and experimental results from both gels and breast tumors ex vivo demonstrated that HMI was capable of detecting and mapping the tumor or stiff inclusion with various diameters or stiffnesses. HMI may thus constitute a promising technique in tumor detection (>3 mm in diameter) and mapping based on its distinct stiffness.
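The 1D cross-correlation displacement-estimation step can be sketched on synthetic rf signals. The window size, lag range, and the parabolic sub-sample interpolation below are common choices for this technique, not necessarily those used in the paper, and the shift is applied synthetically rather than by a radiation force.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
rf_pre = rng.standard_normal(n * 4)            # synthetic scatterer signal
kern = np.hanning(16); kern /= kern.sum()      # mild band-limiting so the
rf_pre = np.convolve(rf_pre, kern, mode="same")  # correlation peak is smooth

shift_true = 3                                 # axial displacement in samples
rf_post = np.roll(rf_pre, shift_true)          # "post-push" rf line

# windowed 1D cross-correlation over a small lag range
lags = np.arange(-10, 11)
w = slice(50, 50 + n)
xc = np.array([np.dot(rf_pre[w], np.roll(rf_post, -l)[w]) for l in lags])
best = np.argmax(xc)

# parabolic interpolation around the peak for sub-sample precision
y0, y1, y2 = xc[best - 1], xc[best], xc[best + 1]
delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
shift_hat = lags[best] + delta
print(shift_hat)   # close to 3 samples
```

In an actual HMI pipeline the sample shift would be converted to micrometres using the rf sampling frequency and the speed of sound, and tracked over time to recover the harmonic displacement amplitude.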

  7. Transmission and dose–response experiments for social animals: a reappraisal of the colonization biology of Campylobacter jejuni in chickens

    PubMed Central

    Conlan, Andrew J. K.; Line, John E.; Hiett, Kelli; Coward, Chris; Van Diemen, Pauline M.; Stevens, Mark P.; Jones, Michael A.; Gog, Julia R.; Maskell, Duncan J.

    2011-01-01

    Dose–response experiments characterize the relationship between infectious agents and their hosts. These experiments are routinely used to estimate the minimum effective infectious dose for an infectious agent, which is most commonly characterized by the dose at which 50 per cent of challenged hosts become infected—the ID50. In turn, the ID50 is often used to compare between different agents and quantify the effect of treatment regimes. The statistical analysis of dose–response data typically makes the assumption that hosts within a given dose group are independent. For social animals, in particular avian species, hosts are routinely housed together in groups during experimental studies. For experiments with non-infectious agents, this poses no practical or theoretical problems. However, transmission of infectious agents between co-housed animals will modify the observed dose–response relationship with implications for the estimation of the ID50 and the comparison between different agents and treatments. We derive a simple correction to the likelihood for standard dose–response models that allows us to estimate dose–response and transmission parameters simultaneously. We use this model to show that: transmission between co-housed animals reduces the apparent value of the ID50 and increases the variability between replicates leading to a distinctive all-or-nothing response; in terms of the total number of animals used, individual housing is always the most efficient experimental design for ascertaining dose–response relationships; estimates of transmission from previously published experimental data for Campylobacter spp. in chickens suggest that considerable transmission occurred, greatly increasing the uncertainty in the estimates of dose–response parameters reported in the literature. Furthermore, we demonstrate that accounting for transmission in the analysis of dose–response data for Campylobacter spp. 
challenges our current understanding of the differing response of chickens with respect to host-age and in vivo passage of bacteria. Our findings suggest that the age-dependence of transmissibility between hosts—rather than their susceptibility to colonization—is the mechanism behind the ‘lag-phase’ reported in commercial flocks, which are typically found to be Campylobacter free for the first 14–21 days of life. PMID:21593028
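For the simplest case of independently housed animals, i.e., without the transmission correction derived in the paper, the ID50 can be obtained by maximum likelihood under an exponential dose-response model p(d) = 1 - exp(-r d). The challenge data and dose levels below are synthetic, and the 1-D search is a plain golden-section routine to avoid external dependencies.

```python
import math

# synthetic challenge data: (dose, number challenged, number colonized)
data = [(1e2, 10, 0), (1e3, 10, 1), (1e4, 10, 6), (1e5, 10, 10)]

def nll(log_r):
    # negative binomial log-likelihood for the exponential dose-response model
    r = math.exp(log_r)
    ll = 0.0
    for dose, n, k in data:
        p = 1.0 - math.exp(-r * dose)
        p = min(max(p, 1e-12), 1 - 1e-12)   # guard the logarithms
        ll += k * math.log(p) + (n - k) * math.log(1 - p)
    return -ll

# golden-section search over log r
lo, hi = math.log(1e-8), math.log(1e-1)
for _ in range(200):
    m1 = lo + 0.382 * (hi - lo)
    m2 = lo + 0.618 * (hi - lo)
    if nll(m1) < nll(m2):
        hi = m2
    else:
        lo = m1

r_hat = math.exp(0.5 * (lo + hi))
id50 = math.log(2) / r_hat          # dose at which half the animals are colonized
print(r_hat, id50)
```

The paper's point is precisely that for co-housed birds this independence assumption fails: within-group transmission flattens the apparent dose response and biases the ID50 downward, which their corrected likelihood accounts for.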

  8. An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Moser, D.E.; Suggs, Robb M.; Cooke, W.J.

    2010-01-01

    The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, η, is a unique function of velocity, with an extremely large variation over the laboratory range of under 8 km/s but a necessarily small variation with velocity over the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process, which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed, which differs from the usual polynomial models of crater formation. The model is valid for the early-time portion of the process and focuses on the deposition of internal energy into the regolith. The model compares successfully with experimental luminous efficiency data from laboratory impacts and from astronomical determinations, and scaling factors are estimated. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors.

  9. Highly efficient lithium composite anode with hydrophobic molten salt in seawater

    NASA Astrophysics Data System (ADS)

    Zhang, Yancheng; Urquidi-Macdonald, Mirna

    A lithium composite anode (lithium/1-butyl-3-methyl-imidazolium hexafluorophosphate (BMI+PF6-)/4-VLZ) for primary lithium/seawater semi-fuel cells is proposed to reduce the lithium-water parasitic reaction and, hence, increase the lithium anodic efficiency up to 100%. The lithium composite anode was activated when in contact with artificial seawater (3% NaCl solution), and the output was a stable anodic current density of 0.2 mA/cm², which lasted about 10 h under potentiostatic polarization at +0.5 V versus the open circuit potential (OCP); the anodic efficiency was indirectly measured to be 100%. With time, small traces of water diffused through the hydrophobic molten salt, BMI+PF6-, reached the lithium interface and formed a double-layer film (LiH/LiOH). Accordingly, the current density decreased and the anodic efficiency was estimated to be 90%. The hypothesis of small traces of water penetrating the molten salt and reaching the lithium anode after several hours of operation is supported by the collected experimental current density and hydrogen evolution data, electrochemical impedance spectrum analysis, and non-mechanistic interface film modeling of the lithium/BMI+PF6- interface.

  10. Leveraging genome-wide datasets to quantify the functional role of the anti-Shine-Dalgarno sequence in regulating translation efficiency.

    PubMed

    Hockenberry, Adam J; Pah, Adam R; Jewett, Michael C; Amaral, Luís A N

    2017-01-01

    Studies dating back to the 1970s established that sequence complementarity between the anti-Shine-Dalgarno (aSD) sequence on prokaryotic ribosomes and the 5' untranslated region of mRNAs helps to facilitate translation initiation. The optimal location of aSD sequence binding relative to the start codon, the full extent of the aSD sequence, and the functional form of the relationship between aSD sequence complementarity and translation efficiency have not been fully resolved. Here, we investigate these relationships by leveraging the sequence diversity of endogenous genes and recently available genome-wide estimates of translation efficiency. We show that, after accounting for predicted mRNA structure, aSD sequence complementarity increases the translation of endogenous mRNAs by roughly 50%. Further, we observe that this relationship is nonlinear, with translation efficiency maximized for mRNAs with intermediate levels of aSD sequence complementarity. The mechanistic insights that we report are highly robust: we find nearly identical results in multiple datasets spanning three distantly related bacteria. Further, we verify our main conclusions by re-analysing a controlled experimental dataset. © 2017 The Authors.

  11. A Novel Approach To Improve the Efficiency of Block Freeze Concentration Using Ice Nucleation Proteins with Altered Ice Morphology.

    PubMed

    Jin, Jue; Yurkow, Edward J; Adler, Derek; Lee, Tung-Ching

    2017-03-22

    Freeze concentration is a separation process noted for the high quality of its products. The remaining challenge is to achieve high efficiency at low cost. This study aims to evaluate the potential of ice nucleation proteins (INPs) as an effective means of improving the efficiency of block freeze concentration, while also exploring the underlying mechanism of ice morphology. Our results show that INPs significantly improve the efficiency of block freeze concentration in a desalination model. Using this experimental system, we estimate that approximately 50% of the energy cost can be saved by the inclusion of INPs in desalination cycles while still meeting the EPA standard for drinking water (<500 ppm). Our investigative tools for ice morphology include optical microscopy and X-ray computed tomography imaging analysis. Their use indicates that INPs promote the development of a lamellar-structured ice matrix with larger hydraulic diameters, which facilitates brine drainage and entraps less brine compared to control samples. These results suggest great potential for applying INPs to develop an energy-saving freeze concentration method via the alteration of ice morphology.

  12. Potential Use of BEST® Sediment Trap in Splash - Saltation Transport Process by Simultaneous Wind and Rain Tests

    PubMed Central

    Basaran, Mustafa; Uzun, Oguzhan; Cornelis, Wim; Gabriels, Donald; Erpul, Gunay

    2016-01-01

    Research on the wind-driven rain (WDR) transport process of splash-saltation has increased over the last twenty years, as wind tunnel experimental studies provide new insights into the mechanisms of simultaneous wind and rain transport. The present study was conducted to investigate the efficiency of the BEST® sediment traps in catching the sand particles transported through the splash-saltation process under WDR conditions. Experiments were conducted in a wind tunnel rainfall simulator facility with water sprayed through sprinkler nozzles and free-flowing wind at different velocities to simulate the WDR conditions. In addition to the vertical sediment distribution, a series of experimental tests of the horizontal distribution of sediments was performed using BEST® collectors to obtain the actual total sediment mass flow by splash-saltation in the center of the wind tunnel test section. Total mass transport (kg m-2) was estimated by analytically integrating the exponential functional relationship fitted to the sediment amounts measured at the set trap heights for every run. Results revealed that the integrated efficiencies of the BEST® traps at 6, 9, 12 and 15 m s-1 wind velocities under 55.8, 50.5, 55.0 and 50.5 mm h-1 rain intensities were 83, 106, 105, and 102%, respectively. The results also showed that the efficiencies of BEST® did not change much compared with those under rainless wind conditions. PMID:27898716
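The total-mass computation described above, fitting an exponential profile to the trap catches at several heights and integrating it analytically over height, can be sketched as follows. The heights and flux values are illustrative, not the measured data.

```python
import numpy as np

# trap heights (m) and measured sediment catch at each height (kg m^-2);
# illustrative numbers in the spirit of the vertical BEST catches
z = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
q = np.array([2.10, 1.55, 0.80, 0.33, 0.14])

# least-squares fit of ln q = ln a - b*z, i.e. q(z) = a * exp(-b*z)
slope, ln_a = np.polyfit(z, np.log(q), 1)
a, b = np.exp(ln_a), -slope

# analytic integral of a*exp(-b*z) from 0 to infinity:
# total mass transport per unit width = a / b
total = a / b
print(a, b, total)
```

Fitting in log space keeps the regression linear; integrating the fitted curve rather than summing the discrete catches is what lets the per-run efficiencies above be expressed against an "actual" total.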

  13. Parameterizing ecosystem light use efficiency and water use efficiency to estimate maize gross primary production and evapotranspiration using MODIS EVI

    USDA-ARS?s Scientific Manuscript database

    Quantifying global carbon and water balances requires accurate estimation of gross primary production (GPP) and evapotranspiration (ET), respectively, across space and time. Models that are based on the theory of light use efficiency (LUE) and water use efficiency (WUE) have emerged as efficient met...

  14. Free energy simulations with the AMOEBA polarizable force field and metadynamics on GPU platform.

    PubMed

    Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Guohui

    2016-03-05

    The free energy calculation library PLUMED has been incorporated into the OpenMM simulation toolkit, with the purpose of performing enhanced-sampling MD simulations using the AMOEBA polarizable force field on a GPU platform. Two examples, (I) the free energy profile of water pair separation and (II) the alanine dipeptide dihedral angle free energy surface in explicit solvent, are provided here to demonstrate the accuracy and efficiency of our implementation. Converged free energy profiles could be obtained within an affordable MD simulation time when the AMOEBA polarizable force field is employed. Moreover, the free energy surfaces estimated using the AMOEBA polarizable force field are in agreement with those calculated from experimental data and ab initio methods. Hence, the implementation in this work is reliable and can be used to study more complicated biological phenomena in both an accurate and efficient way. © 2015 Wiley Periodicals, Inc.

  15. Microcomponents manufacturing for precise devices by copper vapor laser

    NASA Astrophysics Data System (ADS)

    Gorny, Sergey; Nikonchuk, Michail O.; Polyakov, Igor V.

    2001-06-01

    This paper presents results of an investigation of drilling metal microcomponents with a copper vapor laser. The laser consists of a master oscillator - spatial filter - amplifier system, electronic switching with digital control of the laser pulse repetition rate and the number of pulses, and an x-y stage with a computer control system. The mass of metal removed by one laser pulse is measured and determined from the diameter and depth of the holes. The interaction of subsequent pulses with the drilled material is discussed. The difference between the light absorption and metal evaporation processes is considered for drilling and cutting. Drilling efficiency is estimated as the ratio of the evaporation heat to the laser energy used. The maximum efficiency of steel cutting is calculated from the experimental drilling data. Applications of the copper vapor laser in manufacturing are illustrated by microcomponents such as pin guide plates for printers, stents for cardiac surgery, encoded disks for security systems, and multiple-slit masks for spectrophotometers.

  16. Photosensitized generation of singlet oxygen in porous silicon studied by simultaneous measurements of luminescence of nanocrystals and oxygen molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gongalsky, M. B.; Kharin, A. Yu.; Zagorodskikh, S. A.

    2011-07-01

    Photosensitization of singlet oxygen generation in porous silicon (PSi) was investigated by simultaneous measurements of the photoluminescence (PL) of silicon nanocrystals (nc-Si) and the infrared emission of the ¹Δ state of oxygen molecules at 1270 nm (0.98 eV) at room temperature. Photodegradation of the nc-Si PL properties was found to correlate with the efficiency of singlet oxygen generation. The quantum efficiency of singlet oxygen generation in PSi was estimated to be about 1%, while the lifetime of singlet oxygen was about 15 ms. The kinetics of the nc-Si PL intensity under cw excitation follows a power-law dependence, with the exponent dependent on the photon energy of luminescence. The experimental results are explained with a model of photodegradation controlled by the diffusion of singlet oxygen molecules in the disordered structure of porous silicon.

  17. Stability of Proteins in Carbohydrates and Other Additives during Freezing: The Human Growth Hormone as a Case Study.

    PubMed

    Arsiccio, Andrea; Pisano, Roberto

    2017-09-21

    Molecular dynamics is here used to elucidate the mechanism of protein stabilization by carbohydrates and other additives during freezing. More specifically, we used molecular dynamics simulations to obtain a quantitative estimation of the capability of various cryoprotectants to preserve a model protein, the human growth hormone, against freezing stresses. Three mechanisms were investigated: preferential exclusion, water replacement, and vitrification. Model simulations were finally validated against experimental data in terms of the ability of excipients to prevent protein aggregation. Overall, we found that the preferential exclusion and vitrification mechanisms are important during the whole freezing process, while water replacement becomes dominant only toward the end of the cryoconcentration phase. The disaccharides were found to be the most efficient excipients with regard to both preferential exclusion and water replacement. Moreover, sugars were in general more efficient than other excipients, such as glycine or sorbitol.

  18. A variational eigenvalue solver on a photonic quantum processor

    PubMed Central

    Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.

    2014-01-01

    Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
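The variational idea, a parametrized trial state whose measured energy is minimized by a classical outer loop, can be mimicked entirely classically for a toy single-qubit Hamiltonian. The Pauli coefficients and the single-parameter Ry ansatz below are illustrative, not the He-H+ Hamiltonian of the experiment.

```python
import numpy as np

# toy single-qubit Hamiltonian in its Pauli decomposition (coefficients illustrative)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = 0.4 * I + 0.6 * Z + 0.3 * X

def energy(theta):
    # "hardware-efficient" ansatz: |psi(theta)> = Ry(theta)|0>
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    # on a quantum processor this expectation value would be measured;
    # here it is computed directly
    return psi @ H @ psi

# classical outer loop: a dense parameter sweep stands in for the optimizer
thetas = np.linspace(0, 2 * np.pi, 2001)
e = min(energy(t) for t in thetas)

exact = np.linalg.eigvalsh(H).min()   # exact ground-state energy for comparison
print(e, exact)
```

Because only expectation values of short Pauli strings are needed at each step, the quantum coherence time required per measurement is far shorter than in full phase estimation, which is the advantage the paper demonstrates.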

  19. Ultrahigh capacity 2 × 2 MIMO RoF system at 60  GHz employing single-sideband single-carrier modulation.

    PubMed

    Lin, Chun-Ting; Ho, Chun-Hung; Huang, Hou-Tzu; Cheng, Yu-Hsuan

    2014-03-15

    This article proposes and experimentally demonstrates a radio-over-fiber system employing single-sideband single-carrier (SSB-SC) modulation at 60 GHz. SSB-SC modulation has a lower peak-to-average power ratio than orthogonal frequency division multiplexing (OFDM); therefore, SSB-SC signals provide superior nonlinear tolerance compared to OFDM signals. Moreover, multiple-input multiple-output (MIMO) technology was used to further enhance spectral efficiency. A least-mean-square-based equalizer was implemented, including MIMO channel estimation, frequency response equalization, and I/Q imbalance compensation, to recover the MIMO signals. Thus, using 2×2 MIMO technology and 64-QAM SSB-SC signals, we achieved the highest data rate of 84 Gbps with 12 bit/s/Hz spectral efficiency in the 7-GHz license-free band at 60 GHz.
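The least-mean-square equalization stage can be sketched for a single baseband channel. The channel taps, QPSK symbols, and step size below are illustrative; the paper's equalizer additionally handles the 2×2 MIMO channel estimation and I/Q imbalance compensation.

```python
import numpy as np

rng = np.random.default_rng(2)
# QPSK symbols through a mildly dispersive channel plus noise
syms = (rng.integers(0, 2, 5000) * 2 - 1) + 1j * (rng.integers(0, 2, 5000) * 2 - 1)
chan = np.array([1.0, 0.25 - 0.1j, 0.1j])           # illustrative channel taps
rx = np.convolve(syms, chan, mode="full")[:syms.size]
rx += 0.05 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

ntap, mu = 7, 0.01
w = np.zeros(ntap, complex)
w[0] = 1.0                                          # start from a pass-through equalizer
mse = []
for n in range(ntap, syms.size):
    x = rx[n - ntap + 1 : n + 1][::-1]              # tap-delay line, newest sample first
    y = w @ x                                       # equalizer output
    err = syms[n] - y                               # training error (known symbols)
    w += mu * np.conj(x) * err                      # complex LMS update
    mse.append(abs(err) ** 2)

print(np.mean(mse[:200]), np.mean(mse[-200:]))      # error falls as the taps adapt
```

In a real receiver the equalizer would switch from training symbols to decision-directed operation once the error has converged.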

  20. 520-µJ mid-infrared femtosecond laser at 2.8 µm by 1-kHz KTA optical parametric amplifier

    NASA Astrophysics Data System (ADS)

    He, Huijun; Wang, Zhaohua; Hu, Chenyang; Jiang, Jianwang; Qin, Shuang; He, Peng; Zhang, Ninghua; Yang, Peilong; Li, Zhiyuan; Wei, Zhiyi

    2018-02-01

    We report on a 520-µJ, 1-kHz mid-infrared femtosecond optical parametric amplifier system driven by a Ti:sapphire laser. The seed signal was generated from a white-light continuum in a YAG plate and then amplified in four non-collinear amplification stages; the idler, with a central wavelength of 2.8 µm and a bandwidth of 525 nm, was obtained in the last stage. To maximize the idler bandwidth, a theoretical method was developed to determine the optimum non-collinear angle and to estimate the conversion efficiency and output spectrum. Experimentally, pulse energies of up to 1.8 mJ for the signal wave and 520 µJ for the idler wave were obtained in the last stage under 10 mJ of pump energy, corresponding to a pump-to-idler conversion efficiency of 5.2%, in good agreement with the numerical calculation.

  1. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Owing to the increasing number of images stored in the cloud, external image similarities can be leveraged to compress images efficiently by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video-coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements over current image inter-coding solutions such as High Efficiency Video Coding (HEVC).

  2. Characterization of a 5-eV neutral atomic oxygen beam facility

    NASA Technical Reports Server (NTRS)

    Vaughn, J. A.; Linton, R. C.; Carruth, M. R., Jr.; Whitaker, A. F.; Cuthbertson, J. W.; Langer, W. D.; Motley, R. W.

    1991-01-01

    An experimental effort to characterize an existing 5-eV neutral atomic oxygen beam facility under development at the Princeton Plasma Physics Laboratory is described. This characterization effort includes atomic-oxygen flux and flux-distribution measurements using a catalytic probe, energy determination using a commercially designed quadrupole mass spectrometer (QMS), and the exposure of oxygen-sensitive materials in the beam facility. Comparisons were also drawn between the reaction efficiencies of materials exposed in plasma ashers and the reaction efficiencies previously estimated from space flight experiments. The results of this study show that the facility is capable of producing a directional beam of neutral atomic oxygen with the flux and energy needed to simulate low Earth orbit (LEO) conditions for real-time accelerated testing. The flux distribution in this facility is uniform to within ±6 percent of the peak flux over a beam diameter of 6 cm.

  3. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme is based on a unified image feature detection approach using Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then computed from the correlations of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  4. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although these are often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency derives from the estimator's close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove that the estimator attains the maximum finite-sample replacement breakdown point and full asymptotic efficiency for normal errors. Simulation evidence shows that, compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805

  5. Estimating the Size of a Large Network and its Communities from a Random Sample

    PubMed Central

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W.

    2017-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios. PMID:28867924
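
    The abstract does not give enough detail to reproduce PULSE itself, but the core idea, inferring total population size from sampled vertices' total degrees, can be illustrated with a simpler moment estimator under the same sampling model (a hedged sketch, not the authors' algorithm): each neighbor of a sampled vertex is itself sampled with probability (n-1)/(N-1), so N ≈ 1 + (n-1)·D_total/D_within.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an Erdos-Renyi graph (a one-block SBM) with N vertices.
N, p, n = 2000, 0.01, 400
A = rng.random((N, N)) < p
A = np.triu(A, 1)
A = A | A.T                                      # symmetric, no self-loops

sample = rng.choice(N, size=n, replace=False)
total_deg = A[sample].sum()                      # sum of total degrees in W
within_deg = A[np.ix_(sample, sample)].sum()     # sum of within-sample degrees

# E[within_deg] = total_deg * (n-1)/(N-1), giving the moment estimate:
N_hat = 1 + (n - 1) * total_deg / within_deg
```

    PULSE additionally exploits block memberships to estimate each community's size; this sketch only recovers the overall vertex count.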

  6. Estimating the Size of a Large Network and its Communities from a Random Sample.

    PubMed

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.

  7. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    NASA Astrophysics Data System (ADS)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data; therefore, they may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It should be emphasized that estimating the efficiency of an operating column must be distinguished from estimating that of a column being designed.

  8. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
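
    The idea of combining information across quantiles can be sketched for the simplest case: estimating a normal location parameter from several sample quantiles, each weighted by its inverse asymptotic variance. This is a toy illustration under assumed conditions (known unit scale, independence across quantiles ignored), not the authors' optimal combination, which accounts for cross-quantile covariances.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Data: normal with unknown location mu (true value 3.0), known scale 1.
mu_true, n = 3.0, 20000
x = rng.normal(mu_true, 1.0, size=n)

probs = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
z = norm.ppf(probs)
# Each sample quantile yields a location estimate x_hat_p - z_p.
ests = np.quantile(x, probs) - z

# Asymptotic variance of a sample p-quantile: p(1-p) / (n f(x_p)^2);
# weight each estimate by its inverse variance.
var = probs * (1 - probs) / (n * norm.pdf(z) ** 2)
w = (1 / var) / (1 / var).sum()
mu_hat = (w * ests).sum()
```

    Central quantiles, where the normal density is largest, receive the greatest weight, which is the intuition behind the efficiency gains described in the abstract.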

  9. Predicting phenotype from genotype: Improving accuracy through more robust experimental and computational modeling

    PubMed Central

    Gallion, Jonathan; Koire, Amanda; Katsonis, Panagiotis; Schoenegge, Anne‐Marie; Bouvier, Michel

    2017-01-01

    Computational prediction yields efficient and scalable initial assessments of how variants of unknown significance may affect human health. However, when discrepancies between these predictions and direct experimental measurements of functional impact arise, inaccurate computational predictions are frequently assumed to be the source. Here, we present a methodological analysis indicating that shortcomings in both computational and biological data can contribute to these disagreements. We demonstrate that incomplete assaying of multifunctional proteins can affect the strength of correlations between prediction and experiment; a variant's full impact on function is better quantified by considering multiple assays that probe an ensemble of protein functions. Additionally, predictions for many variants are sensitive to protein alignment construction and can be customized to maximize the relevance of predictions to a specific experimental question. We conclude that inconsistencies between computation and experiment can often be attributed to the fact that they do not test identical hypotheses. Aligning the design of the computational input with the design of the experimental output will require cooperation between computational and biological scientists, but will also lead to improved estimations of computational prediction accuracy and a better understanding of the genotype–phenotype relationship. PMID:28230923

  10. Predicting phenotype from genotype: Improving accuracy through more robust experimental and computational modeling.

    PubMed

    Gallion, Jonathan; Koire, Amanda; Katsonis, Panagiotis; Schoenegge, Anne-Marie; Bouvier, Michel; Lichtarge, Olivier

    2017-05-01

    Computational prediction yields efficient and scalable initial assessments of how variants of unknown significance may affect human health. However, when discrepancies between these predictions and direct experimental measurements of functional impact arise, inaccurate computational predictions are frequently assumed to be the source. Here, we present a methodological analysis indicating that shortcomings in both computational and biological data can contribute to these disagreements. We demonstrate that incomplete assaying of multifunctional proteins can affect the strength of correlations between prediction and experiment; a variant's full impact on function is better quantified by considering multiple assays that probe an ensemble of protein functions. Additionally, predictions for many variants are sensitive to protein alignment construction and can be customized to maximize the relevance of predictions to a specific experimental question. We conclude that inconsistencies between computation and experiment can often be attributed to the fact that they do not test identical hypotheses. Aligning the design of the computational input with the design of the experimental output will require cooperation between computational and biological scientists, but will also lead to improved estimations of computational prediction accuracy and a better understanding of the genotype-phenotype relationship. © 2017 The Authors. Human Mutation published by Wiley Periodicals, Inc.

  11. Refined method for predicting electrochemical windows of ionic liquids and experimental validation studies.

    PubMed

    Zhang, Yong; Shi, Chaojun; Brennecke, Joan F; Maginn, Edward J

    2014-06-12

    A combined classical molecular dynamics (MD) and ab initio MD (AIMD) method was developed for calculating the electrochemical windows (ECWs) of ionic liquids. In the method, the liquid phase of the ionic liquid is explicitly sampled using classical MD. The electrochemical window, estimated from the energy difference between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), is calculated at the density functional theory (DFT) level from snapshots taken from the classical MD trajectories. The snapshots were relaxed using AIMD and quenched to their local energy minima, which ensures that the HOMO/LUMO calculations are based on stable configurations on the same potential energy surface. The new procedure was applied to a group of ionic liquids whose ECWs were also measured experimentally in a self-consistent manner. The predicted ECWs not only agree very well with the experimental trend but are also quantitatively accurate. The proposed method provides an efficient way to compare the ECWs of ionic liquids in the same context, which has been difficult both experimentally and computationally because ECW values depend sensitively on the experimental setup and conditions.

  12. Characterizing Drainage Multiphase Flow in Heterogeneous Sandstones

    NASA Astrophysics Data System (ADS)

    Jackson, Samuel J.; Agada, Simeon; Reynolds, Catriona A.; Krevor, Samuel

    2018-04-01

    In this work, we analyze the characterization of drainage multiphase flow properties on heterogeneous rock cores using a rich experimental data set and mm-m scale numerical simulations. Along with routine multiphase flow properties, 3-D submeter scale capillary pressure heterogeneity is characterized by combining experimental observations and numerical calibration, resulting in a 3-D numerical model of the rock core. The uniqueness and predictive capability of the numerical models are evaluated by accurately predicting the experimentally measured relative permeability of N2-DI water and CO2-brine systems in two distinct sandstone rock cores across multiple fractional flow regimes and total flow rates. The numerical models are used to derive equivalent relative permeabilities, which are upscaled functions incorporating the effects of submeter scale capillary pressure. The functions are obtained across capillary numbers which span four orders of magnitude, representative of the range of flow regimes that occur in subsurface CO2 injection. Removal of experimental boundary artifacts allows the derivation of equivalent functions which are characteristic of the continuous subsurface. We also demonstrate how heterogeneities can be reorientated and restructured to efficiently estimate flow properties in rock orientations differing from the original core sample. This analysis shows how combined experimental and numerical characterization of rock samples can be used to derive equivalent flow properties from heterogeneous rocks.

  13. Core conditions for alpha heating attained in direct-drive inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, A.; Woo, K. M.; Betti, R.

    It is shown that direct-drive implosions on the OMEGA laser have achieved core conditions that would lead to significant alpha heating at incident energies available on the National Ignition Facility (NIF) scale. The extrapolation of the experimental results from OMEGA to NIF energy assumes only that the implosion hydrodynamic efficiency is unchanged at higher energies. This approach is independent of the uncertainties in the physical mechanism that degrade implosions on OMEGA, and relies solely on a volumetric scaling of the experimentally observed core conditions. It is estimated that the current best-performing OMEGA implosion [Regan et al., Phys. Rev. Lett. 117, 025001 (2016)] extrapolated to a 1.9 MJ laser driver with the same illumination configuration and laser-target coupling would produce 125 kJ of fusion energy with similar levels of alpha heating observed in current highest-performing indirect-drive NIF implosions.

  14. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive, and in this setting determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.

  15. The vacuum ultraviolet spectrum of krypton and xenon excimers excited in a cooled dc discharge

    NASA Astrophysics Data System (ADS)

    Gerasimov, G.; Krylov, B.; Loginov, A.; Zvereva, G.; Hallin, R.; Arnesen, A.; Heijkenskjöld, F.

    1998-01-01

    We present results of an experimental and theoretical study of the VUV spectra of krypton and xenon excimers excited by a dc discharge in a capillary tube cooled by liquid nitrogen. The studied spectral regions of 115-170 nm for krypton and 140-195 nm for xenon correspond to transitions between the lowest excited dimer states 1u, 0u+ and the weakly bound ground state 0g+. A semiempirical method was suggested and applied to describe the experimental spectra and to estimate the temperature of the radiating plasma volume. Electron impact, which transfers dimers from the ground state to the excited states, is shown to be an efficient excitation mechanism over the 100-850 hPa pressure and 10-50 mA discharge-current ranges. The spectra obtained, as well as the results of calculations, corroborate the high rate of this mechanism.

  16. Core conditions for alpha heating attained in direct-drive inertial confinement fusion

    DOE PAGES

    Bose, A.; Woo, K. M.; Betti, R.; ...

    2016-07-07

    It is shown that direct-drive implosions on the OMEGA laser have achieved core conditions that would lead to significant alpha heating at incident energies available on the National Ignition Facility (NIF) scale. The extrapolation of the experimental results from OMEGA to NIF energy assumes only that the implosion hydrodynamic efficiency is unchanged at higher energies. This approach is independent of the uncertainties in the physical mechanism that degrade implosions on OMEGA, and relies solely on a volumetric scaling of the experimentally observed core conditions. It is estimated that the current best-performing OMEGA implosion [Regan et al., Phys. Rev. Lett. 117, 025001 (2016)] extrapolated to a 1.9 MJ laser driver with the same illumination configuration and laser-target coupling would produce 125 kJ of fusion energy with similar levels of alpha heating observed in current highest-performing indirect-drive NIF implosions.

  17. An experimental and theoretical investigation on torrefaction of a large wet wood particle.

    PubMed

    Basu, Prabir; Sadhukhan, Anup Kumar; Gupta, Parthapratim; Rao, Shailendra; Dhungana, Alok; Acharya, Bishnu

    2014-05-01

    A competitive kinetic scheme representing primary and secondary reactions is proposed for the torrefaction of large wet wood particles. Drying and the diffusive, convective, and radiative modes of heat transfer are considered, including particle shrinkage during torrefaction. The model predictions compare well with the experimental results for both mass-fraction residue and temperature profiles of the biomass particles. The effects of temperature, residence time, and particle size on the torrefaction of cylindrical wood particles are investigated through model simulations. For large biomass particles, heat transfer is identified as one of the controlling factors for torrefaction. The optimum torrefaction temperature, residence time, and particle size are identified. The model may thus be integrated with CFD analysis to estimate the performance of an existing torrefier for a given feedstock. The performance analysis may also provide useful insight for the design and development of an efficient torrefier. Copyright © 2014 Elsevier Ltd. All rights reserved.
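
    A competitive kinetic scheme of the general kind described can be sketched with two parallel first-order pathways (wood → volatiles and wood → char) integrated by explicit Euler. The Arrhenius parameters below are assumed for illustration only, not the paper's fitted kinetics, and drying, heat transfer, and shrinkage are omitted.

```python
import numpy as np

# Assumed Arrhenius parameters for two competing first-order pathways.
R = 8.314                     # J/(mol K)
A_v, E_v = 1.0e6, 9.0e4       # wood -> volatiles (1/s, J/mol)
A_c, E_c = 1.0e4, 7.5e4       # wood -> char

T = 550.0                     # assumed torrefaction temperature, K
k_v = A_v * np.exp(-E_v / (R * T))
k_c = A_c * np.exp(-E_c / (R * T))

# Explicit Euler over a 1-hour residence time:
#   d(wood)/dt = -(k_v + k_c) * wood,   d(char)/dt = k_c * wood
dt, t_end = 0.5, 3600.0
wood, char = 1.0, 0.0
for _ in range(int(t_end / dt)):
    dv = k_v * wood * dt
    dc = k_c * wood * dt
    wood -= dv + dc
    char += dc

solid_yield = wood + char     # mass-fraction residue analog
```

    The branching ratio k_c/(k_v + k_c) fixes the ultimate char yield, which is why torrefaction temperature (through the two activation energies) controls the trade-off between mass loss and solid product.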

  18. Core conditions for alpha heating attained in direct-drive inertial confinement fusion.

    PubMed

    Bose, A; Woo, K M; Betti, R; Campbell, E M; Mangino, D; Christopherson, A R; McCrory, R L; Nora, R; Regan, S P; Goncharov, V N; Sangster, T C; Forrest, C J; Frenje, J; Gatu Johnson, M; Glebov, V Yu; Knauer, J P; Marshall, F J; Stoeckl, C; Theobald, W

    2016-07-01

    It is shown that direct-drive implosions on the OMEGA laser have achieved core conditions that would lead to significant alpha heating at incident energies available on the National Ignition Facility (NIF) scale. The extrapolation of the experimental results from OMEGA to NIF energy assumes only that the implosion hydrodynamic efficiency is unchanged at higher energies. This approach is independent of the uncertainties in the physical mechanism that degrade implosions on OMEGA, and relies solely on a volumetric scaling of the experimentally observed core conditions. It is estimated that the current best-performing OMEGA implosion [Regan et al., Phys. Rev. Lett. 117, 025001 (2016); doi:10.1103/PhysRevLett.117.025001] extrapolated to a 1.9 MJ laser driver with the same illumination configuration and laser-target coupling would produce 125 kJ of fusion energy with similar levels of alpha heating observed in current highest-performing indirect-drive NIF implosions.

  19. A comparison of empirical and experimental O7+, O8+, and O/H values, with applications to terrestrial solar wind charge exchange

    NASA Astrophysics Data System (ADS)

    Whittaker, Ian C.; Sembay, Steve

    2016-07-01

    Solar wind charge exchange occurs at Earth between the neutral planetary exosphere and highly charged ions of the solar wind. The main challenge in predicting the resultant photon flux in the X-ray energy bands is the interaction efficiency, known as the α value. This study produces experimental α values at the Earth for oxygen emission in the range of 0.5-0.7 keV. Thirteen years of data from the Advanced Composition Explorer are examined, comparing O7+ and O8+ abundances as well as O/H to other solar wind parameters, allowing all parameters in the α(O7+, O8+) calculation to be estimated from the solar wind velocity. Finally, a table is produced for a range of solar wind speeds giving average O7+ and O8+ abundances, O/H, and α(O7+, O8+) values.

  20. Weighted bi-prediction for light field image coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2017-09-01

    Light field imaging based on a single-tier camera equipped with a microlens array, also known as integral, holoscopic, and plenoptic imaging, has recently emerged as a practical and prospective approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion-compensated bi-prediction have suggested that further rate-distortion performance improvements are possible by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that the previous theoretical conclusions extend to light field image coding and show that the proposed adaptive weighting-coefficient selection yields bit savings of up to 5% compared to the previous self-similarity bi-prediction scheme.
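
    The benefit of adaptive weights over plain averaging can be seen from the closed-form least-squares solution for a single scalar weight between two predictor blocks. This is a sketch under an assumed SSD criterion with synthetic blocks; an actual codec selects quantized weights within the bitstream syntax rather than solving this directly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 8x8 target block T and two predictor blocks P0, P1.
P0 = rng.random((8, 8))
P1 = rng.random((8, 8))
T = 0.7 * P0 + 0.3 * P1 + 0.01 * rng.standard_normal((8, 8))

# Minimize ||T - (w*P0 + (1-w)*P1)||^2 over the scalar weight w:
#   w* = <T - P1, P0 - P1> / ||P0 - P1||^2
d = (P0 - P1).ravel()
w = float(d @ (T - P1).ravel() / (d @ d))

def ssd(w_):
    """Sum of squared differences for a candidate weight."""
    return np.sum((T - (w_ * P0 + (1 - w_) * P1)) ** 2)
```

    By construction the fitted weight beats the fixed 1/2-1/2 average used in plain bi-prediction whenever the two predictors contribute unequally.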

  1. Threat facilitates subsequent executive control during anxious mood.

    PubMed

    Birk, Jeffrey L; Dennis, Tracy A; Shin, Lisa M; Urry, Heather L

    2011-12-01

    The dual competition framework (DCF) posits that low-level threat may facilitate behavioral performance by influencing executive control functions. Anxiety is thought to strengthen this effect by enhancing the threat's affective significance. To test these ideas directly, we examined the effects of low-level threat and experimentally induced anxiety on one executive control function: the efficiency of response inhibition. In Study 1, briefly presented stimuli that were mildly threatening (i.e., fearful faces) relative to nonthreatening (i.e., neutral faces) led to facilitated executive control efficiency during experimentally induced anxiety. No such effect was observed during an equally arousing, experimentally induced happy mood state. In Study 2, we assessed the effects of low-level threat, experimentally induced anxiety, and individual differences in trait anxiety on executive control efficiency. Consistent with Study 1, fearful relative to neutral faces led to facilitated executive control efficiency during experimentally induced anxiety. No such effect was observed during an experimentally induced neutral mood state. Moreover, individual differences in trait anxiety did not moderate the effects of threat and anxiety on executive control efficiency. The findings are partially consistent with the predictions of DCF in that low-level threat improved executive control, at least during a state of anxiety. (c) 2011 APA, all rights reserved.

  2. Design and evaluation of an ultra-slim objective for in-vivo deep optical biopsy

    PubMed Central

    Landau, Sara M.; Liang, Chen; Kester, Robert T.; Tkaczyk, Tomasz S.; Descour, Michael R.

    2010-01-01

    An estimated 1.6 million breast biopsies are performed in the US each year. In order to provide real-time, in-vivo imaging with sub-cellular resolution for optical biopsies, we have designed an ultra-slim objective that fits inside the 1-mm-diameter hypodermic needles currently used for breast biopsies, to image tissue stained with the fluorescent probe proflavine. To ensure high-quality imaging performance, experimental tests were performed to characterize the fiber bundle's light-coupling efficiency, and simulations were performed to evaluate the impact of candidate lens materials' autofluorescence. A prototype of the NA = 0.4, 250-µm field-of-view, ultra-slim objective optics was built and tested, yielding diffraction-limited performance and an estimated resolution of 0.9 µm. When used in conjunction with a commercial coherent fiber bundle to relay the image formed by the objective, the measured resolution was 2.5 µm. PMID:20389489

  3. Calculations of rate constants for the three-body recombination of H2 in the presence of H2

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.

    1988-01-01

    A new global potential energy hypersurface for H2 + H2 is constructed and quasiclassical trajectory calculations are performed, using resonance complex theory and an energy-transfer mechanism, to estimate the rate of three-body recombination over the temperature range 100 to 5000 K. The new potential is a faithful representation of ab initio electronic structure calculations, is unchanged under the exchange of H atoms, and reproduces the accurate H3 potential as one H atom is pulled away. Included in the fitting procedure are geometries expected to be important when one H2 is near or above the dissociation limit. The dynamics calculations explicitly include the motion of all four atoms and are performed efficiently using a vectorized variable-stepsize integrator. The predicted rate constants are approximately a factor of two smaller than experimental estimates over a broad temperature range.

  4. Condition monitoring of an electro-magnetic brake using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Gofran, T.; Neugebauer, P.; Schramm, D.

    2017-10-01

This paper presents a data-driven approach to condition monitoring of electromagnetic brakes without the use of additional sensors. For safe and efficient operation of an electric motor, regular evaluation and replacement of the friction surface of the brake is required. One such evaluation method consists of direct or indirect sensing of the air-gap between the pressure plate and the magnet; a larger gap is generally indicative of worn surfaces. Traditionally this has been accomplished with additional sensors, making existing systems complex, cost-sensitive and difficult to maintain. In this work a feed-forward Artificial Neural Network (ANN) is trained on the electrical data of the brake by supervised learning to estimate the air-gap. The ANN model is optimized on the training set and validated using the test set. The experimental results, with an estimated air-gap accuracy of over 95%, demonstrate the validity of the proposed approach.
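The supervised air-gap regression described above can be illustrated with a minimal sketch: a small feed-forward network with one tanh hidden layer trained by gradient descent. The two input features and the linear target below are hypothetical stand-ins, not the paper's brake data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: two electrical features of the brake
# (e.g. normalized current rise time and steady-state current) mapped
# to a normalized air-gap; the linear target is a stand-in only.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (0.3 * X[:, 0] + 0.7 * X[:, 1]).reshape(-1, 1)

# One hidden layer of tanh units, linear output (regression).
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)             # forward pass
    pred = h @ W2 + b2
    err = pred - y                       # dMSE/dpred (up to a constant)
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

After training, the network's mean-squared error on the (synthetic) training set is small; in the paper's setting the same loop would run on measured brake currents with the air-gap as the label.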

  5. Channel Temperature Model for Microwave AlGaN/GaN HEMTs on SiC and Sapphire MMICs in High Power, High Efficiency SSPAs

    NASA Technical Reports Server (NTRS)

    Freeman, Jon C.

    2004-01-01

    A key parameter in the design trade-offs made during AlGaN/GaN HEMTs development for microwave power amplifiers is the channel temperature. An accurate determination can, in general, only be found using detailed software; however, a quick estimate is always helpful, as it speeds up the design cycle. This paper gives a simple technique to estimate the channel temperature of a generic microwave AlGaN/GaN HEMT on SiC or Sapphire, while incorporating the temperature dependence of the thermal conductivity. The procedure is validated by comparing its predictions with the experimentally measured temperatures in microwave devices presented in three recently published articles. The model predicts the temperature to within 5 to 10 percent of the true average channel temperature. The calculation strategy is extended to determine device temperature in power combining MMICs for solid-state power amplifiers (SSPAs).

  6. Forecasting Construction Cost Index based on visibility graph: A network approach

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong

    2018-03-01

Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI cause irrational estimations now and then. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are then forecast via link prediction. According to the experimental results, the proposed method shows satisfactory performance, with acceptable error measures. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. The results suggest that the proposed method provides considerably accurate CCI predictions, which can contribute to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
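The first step of the approach, converting the time series into a natural visibility graph, follows the standard criterion (two samples are connected when no intermediate sample blocks the straight line between them). A minimal sketch on toy data; the link-prediction forecasting step is not shown:

```python
def visibility_graph(series):
    """Build the natural visibility graph of a time series.

    Nodes are the time indices; an edge (a, b) exists when every
    intermediate point (c, y_c) lies strictly below the straight
    line connecting (a, y_a) and (b, y_b).
    """
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            visible = all(
                series[c] < yb + (ya - yb) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges

# The dip at index 1 does not block the two endpoints...
g = visibility_graph([3.0, 1.0, 2.0])
# ...but a peak between two valleys does.
g2 = visibility_graph([1.0, 5.0, 1.0])
```

In the paper's setting the nodes would be monthly CCI values, and future links (hence values) are predicted from the resulting graph structure.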

  7. Non-invasive heart rate monitoring system using giant magneto resistance sensor.

    PubMed

    Kalyan, Kubera; Chugh, Vinit Kumar; Anoop, C S

    2016-08-01

A simple heart rate (HR) monitoring system designed and developed using the Giant Magneto-Resistance (GMR) sensor is presented in this paper. The GMR sensor is placed on the wrist and provides the magneto-plethysmographic signal. This signal is processed by simple analog and digital instrumentation stages to render the heart rate indication. A prototype of the system has been built, and test results on 26 volunteers are reported. The error in HR estimation of the system is merely 1 beat per minute. The performance of the system when a layer of cloth is present between the sensor and the body is investigated. The capability of the system as an HR variability estimator has also been established through experimentation. The proposed technique can be used as an efficient alternative to conventional HR monitors and is well suited for remote and continuous monitoring of HR.
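Peak counting with a refractory period is one common way to turn a plethysmographic waveform into a heart-rate estimate. A minimal sketch on a synthetic 72 bpm signal; the sinusoid-plus-noise waveform is a stand-in for the GMR sensor output, and the threshold and refractory values are illustrative assumptions, not the paper's instrumentation:

```python
import numpy as np

fs = 100.0                                  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                # 10 s record
# Hypothetical magneto-plethysmographic signal: a 72 bpm pulse
# (modelled as a sinusoid) plus measurement noise.
rng = np.random.default_rng(5)
sig = np.sin(2 * np.pi * (72.0 / 60.0) * t) + 0.1 * rng.normal(size=t.size)

def estimate_hr(sig, fs, thresh=0.5, refractory=0.4):
    """Estimate heart rate (bpm) by counting peaks, enforcing a
    refractory period so one heartbeat yields one detection."""
    min_gap = int(refractory * fs)
    peaks, last = 0, -min_gap
    for i in range(1, len(sig) - 1):
        is_peak = sig[i] > thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]
        if is_peak and i - last >= min_gap:
            peaks += 1
            last = i
    return peaks / (len(sig) / fs / 60.0)

hr = estimate_hr(sig, fs)
```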

  8. Real-time caries diagnostics by optical PNC method

    NASA Astrophysics Data System (ADS)

    Masychev, Victor I.; Alexandrov, Michail T.

    2000-11-01

The results of research on hard tooth tissues by the optical PNC method under experimental and clinical conditions are presented. In the experiment, 90 test samples of tooth slices about 1 mm thick (enamel, dentine and cement) were examined. The results of the experiment were processed by correlation analysis. Clinical studies were carried out on the teeth of 210 patients. Regions of tooth tissue disease with initial, moderate and deep caries were investigated. Spectral characteristics of intact and pathologically changed tooth tissues are presented and their peculiar features are discussed. Results of applying the optical PNC method while processing carious tooth cavities are presented in order to estimate the efficiency of the mechanical and antiseptic processing of teeth. It is revealed that the PNC method can be used both for differential diagnosis of the stage of dental caries and for estimating how thoroughly a tooth cavity has been processed before filling.

  9. Express diagnostics of intact and pathological dental hard tissues by optical PNC method

    NASA Astrophysics Data System (ADS)

    Masychev, Victor I.; Alexandrov, Michail T.

    2000-03-01

The results of research on hard tooth tissues by the optical PNC method under experimental and clinical conditions are presented. In the experiment, 90 test samples of tooth slices about 1 mm thick (enamel, dentine and cement) were examined. The results of the experiment were processed by correlation analysis. Clinical studies were carried out on the teeth of 210 patients. Regions of tooth tissue disease with initial, moderate and deep caries were investigated. Spectral characteristics of intact and pathologically changed tooth tissues are presented and their peculiar features are discussed. Results of applying the optical PNC method while processing carious tooth cavities are presented in order to estimate the efficiency of the mechanical and antiseptic processing of teeth. It is revealed that the PNC method can be used both for differential diagnosis of the stage of dental caries and for estimating how thoroughly a tooth cavity has been processed before filling.

  10. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

Archived film is an indispensable part of the long history of human civilization, and digital repair of damaged film is now a mainstream trend. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our approach combines multiple frames to establish a simple correction model and includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it removes fading flicker efficiently.
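The second step, low-rank factorization of a matrix with missing entries, can be sketched with alternating least squares, one standard way to fit such a model. The synthetic rank-2 matrix and 60% observation mask below are illustrative, not the paper's correspondence data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observation matrix: rows = sparse color correspondences,
# columns = frames; entries with mask == False are missing.
U_true = rng.normal(size=(30, 2))
V_true = rng.normal(size=(2, 10))
M = U_true @ V_true
mask = rng.random(M.shape) < 0.6          # ~60% of entries observed

def als_complete(M, mask, rank=2, iters=50, reg=1e-3):
    """Rank-constrained completion by alternating least squares:
    solve for U with V fixed, then for V with U fixed, using only
    the observed entries (plus a small ridge term for stability)."""
    m, n = M.shape
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(rank, n))
    I = np.eye(rank)
    for _ in range(iters):
        for i in range(m):                # masked least squares per row
            w = mask[i]
            A = (V * w) @ V.T + reg * I
            U[i] = np.linalg.solve(A, (V * w) @ M[i])
        for j in range(n):                # masked least squares per column
            w = mask[:, j]
            A = U.T @ (U * w[:, None]) + reg * I
            V[:, j] = np.linalg.solve(A, U.T @ (w * M[:, j]))
    return U, V

U, V = als_complete(M, mask)
err = float(np.abs((U @ V - M)[mask]).max())
```

The fitted product U @ V reproduces the observed entries closely and fills in the missing ones, which is what allows per-frame correction parameters to be recovered from sparse correspondences.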

  11. Characterization and improvement of RNA-Seq precision in quantitative transcript expression profiling.

    PubMed

    Łabaj, Paweł P; Leparc, Germán G; Linggi, Bryan E; Markillie, Lye Meng; Wiley, H Steven; Kreil, David P

    2011-07-01

Measurement precision determines the power of any analysis to reliably identify significant signals, such as in screens for differential expression, independent of whether the experimental design incorporates replicates or not. With the compilation of large-scale RNA-Seq datasets with technical replicate samples, however, we can now, for the first time, perform a systematic analysis of the precision of expression level estimates from massively parallel sequencing technology. This then allows considerations for its improvement by computational or experimental means. We report on a comprehensive study of target identification and measurement precision, including their dependence on transcript expression levels, read depth and other parameters. In particular, an impressive recall of 84% of the estimated true transcript population could be achieved with 331 million 50 bp reads, with diminishing returns from longer read lengths and even smaller gains from increased sequencing depths. Most of the measurement power (75%) is spent on only 7% of the known transcriptome, however, making less strongly expressed transcripts harder to measure. Consequently, fewer than 30% of all transcripts could be quantified reliably with a relative error below 20%. Based on established tools, we then introduce a new approach for mapping and analysing sequencing reads that yields substantially improved performance in gene expression profiling, increasing the number of transcripts that can reliably be quantified to over 40%. Extrapolations to higher sequencing depths highlight the need for efficient complementary steps. In discussion we outline possible experimental and computational strategies for further improvements in quantification precision. rnaseq10@boku.ac.at

  12. Characterization and Uncertainty Analysis of a Reference Pressure Measurement System for Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley

    2004-01-01

This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environment effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard, to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.

  13. Spatiotemporal movement planning and rapid adaptation for manual interaction.

    PubMed

    Huber, Markus; Kupferberg, Aleksandra; Lenz, Claus; Knoll, Alois; Brandt, Thomas; Glasauer, Stefan

    2013-01-01

    Many everyday tasks require the ability of two or more individuals to coordinate their actions with others to increase efficiency. Such an increase in efficiency can often be observed even after only very few trials. Previous work suggests that such behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose we studied in several experiments a simple manual handover task concentrating on the action of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials, but strongly depends on the position of the handover. We then replaced the human deliverer by different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on movement kinematics and the robot's joint configuration. Modeling the task was based on the assumption that the receiver's decision to act is based on the accumulated evidence for a specific handover position. The evidence for this handover position is collected from observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with prior expectation that is updated over trials. The close match of model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a-priori expectation and online estimation.
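The trial-by-trial integration described above can be sketched with the standard Gaussian (precision-weighted) fusion rule, where each trial's posterior becomes the next trial's prior; the handover positions and variances below are hypothetical:

```python
def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Precision-weighted fusion of a Gaussian prior with a Gaussian
    sensory likelihood (the standard Bayesian cue-combination rule)."""
    w = prior_var / (prior_var + obs_var)
    post_mean = (1 - w) * prior_mean + w * obs_mean
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Trial-by-trial adaptation: the posterior of one trial becomes the
# prior of the next, so the expected handover position sharpens.
mean, var = 0.0, 1.0            # vague initial expectation
for obs in [0.48, 0.52, 0.50]:  # hypothetical observed handover positions
    mean, var = fuse(mean, var, obs, 0.1)
```

After three trials the expectation has moved close to the observed positions and its variance has shrunk, mirroring the decreasing reaction durations reported in the experiments.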

  14. Physical and electrical characteristics of Si/SiC quantum dot superlattice solar cells with passivation layer of aluminum oxide

    NASA Astrophysics Data System (ADS)

    Tsai, Yi-Chia; Li, Yiming; Samukawa, Seiji

    2017-12-01

    In this work, we numerically simulate the silicon (Si)/silicon carbide (SiC) quantum dot superlattice solar cell (SiC-QDSL) with aluminum oxide (Al2O3-QDSL) passivation. By exploiting the passivation layer of Al2O3, the high photocurrent and the conversion efficiency can be achieved without losing the effective bandgap. Based on the two-photon transition mechanism in an AM1.5 and a one sun illumination, the simulated short-circuit current (J sc) of 4.77 mA cm-2 is very close to the experimentally measured 4.75 mA cm-2, which is higher than those of conventional SiC-QDSLs. Moreover, the efficiency fluctuation caused by the structural variation is less sensitive by using the passivation layer. A high conversion efficiency of 17.4% is thus estimated by adopting the QD’s geometry used in the experiment; and, it can be further boosted by applying a hexagonal QD formation with an inter-dot spacing of 0.3 nm.

  15. Physical and electrical characteristics of Si/SiC quantum dot superlattice solar cells with passivation layer of aluminum oxide.

    PubMed

    Tsai, Yi-Chia; Li, Yiming; Samukawa, Seiji

    2017-12-01

In this work, we numerically simulate the silicon (Si)/silicon carbide (SiC) quantum dot superlattice solar cell (SiC-QDSL) with aluminum oxide (Al2O3-QDSL) passivation. By exploiting the passivation layer of Al2O3, the high photocurrent and the conversion efficiency can be achieved without losing the effective bandgap. Based on the two-photon transition mechanism in an AM1.5 and a one sun illumination, the simulated short-circuit current (J sc) of 4.77 mA cm-2 is very close to the experimentally measured 4.75 mA cm-2, which is higher than those of conventional SiC-QDSLs. Moreover, the efficiency fluctuation caused by the structural variation is less sensitive by using the passivation layer. A high conversion efficiency of 17.4% is thus estimated by adopting the QD's geometry used in the experiment; and, it can be further boosted by applying a hexagonal QD formation with an inter-dot spacing of 0.3 nm.

  16. A fixed tilt solar collector employing reversible vee-trough reflectors and vacuum tube receivers for solar heating and cooling systems

    NASA Technical Reports Server (NTRS)

    Selcuk, M. K.

    1977-01-01

The usefulness of vee-trough concentrators in improving the efficiency and reducing the cost of collectors assembled from evacuated tube receivers was studied in the vee-trough/vacuum tube collector (VTVTC) project. The VTVTC was analyzed rigorously and various mathematical models were developed to calculate the optical performance of the vee-trough concentrator and the thermal performance of the evacuated tube receiver. A test bed was constructed to verify the mathematical analyses and compare reflectors made out of glass, Alzak and aluminized FEP Teflon. Tests were run at temperatures ranging from 95 to 180 C. Vee-trough collector efficiencies of 35 to 40% were observed at an operating temperature of about 175 C. Test results compared well with the calculated values. Predicted daily useful heat collection and efficiency values are presented for a year's operation at temperatures ranging from 65 to 230 C. Estimated collector costs and resulting thermal energy costs are presented. Analytical and experimental results are discussed along with a complete economic evaluation.

  17. A Computationally-Efficient Inverse Approach to Probabilistic Strain-Based Damage Diagnosis

    NASA Technical Reports Server (NTRS)

    Warner, James E.; Hochhalter, Jacob D.; Leser, William P.; Leser, Patrick E.; Newman, John A

    2016-01-01

    This work presents a computationally-efficient inverse approach to probabilistic damage diagnosis. Given strain data at a limited number of measurement locations, Bayesian inference and Markov Chain Monte Carlo (MCMC) sampling are used to estimate probability distributions of the unknown location, size, and orientation of damage. Substantial computational speedup is obtained by replacing a three-dimensional finite element (FE) model with an efficient surrogate model. The approach is experimentally validated on cracked test specimens where full field strains are determined using digital image correlation (DIC). Access to full field DIC data allows for testing of different hypothetical sensor arrangements, facilitating the study of strain-based diagnosis effectiveness as the distance between damage and measurement locations increases. The ability of the framework to effectively perform both probabilistic damage localization and characterization in cracked plates is demonstrated and the impact of measurement location on uncertainty in the predictions is shown. Furthermore, the analysis time to produce these predictions is orders of magnitude less than a baseline Bayesian approach with the FE method by utilizing surrogate modeling and effective numerical sampling approaches.

  18. Characterization and modeling of microstructured chalcogenide fibers for efficient mid-infrared wavelength conversion.

    PubMed

    Xing, Sida; Grassani, Davide; Kharitonov, Svyatoslav; Billat, Adrien; Brès, Camille-Sophie

    2016-05-02

We experimentally demonstrate wavelength conversion in the 2 µm region by four-wave mixing in AsSe and GeAsSe chalcogenide photonic crystal fibers. A maximum conversion efficiency of -25.4 dB is measured for 112 mW of coupled continuous-wave pump in a 27 cm long fiber. We estimate the dispersion parameters and the nonlinear refractive indexes of the chalcogenide PCFs, establishing a good agreement with the values expected from simulations. The different fiber geometries and glass compositions are compared in terms of performance, showing that GeAsSe is a more suited candidate for nonlinear optics at 2 µm. Building from the fitted parameters, we then propose a new tapered GeAsSe PCF geometry to tailor the waveguide dispersion and lower the zero dispersion wavelength (ZDW) closer to the 2 µm pump wavelength. Numerical simulations show that the new design allows both increased conversion efficiency and bandwidth, and the generation of idler waves further in the mid-IR region, by tuning the pump wavelength in the vicinity of the fiber ZDW.

  19. Optimization and characterization of liposome formulation by mixture design.

    PubMed

    Maherani, Behnoush; Arab-tehrany, Elmira; Kheirolomoom, Azadeh; Reshetov, Vadzim; Stebe, Marie José; Linder, Michel

    2012-02-07

This study presents the application of the mixture design technique to develop an optimal liposome formulation by varying the type and percentage of lipids (DOPC, POPC and DPPC) in the liposome composition. Ten lipid mixtures were generated by the simplex-centroid design technique and liposomes were prepared by the extrusion method. Liposomes were characterized with respect to size, phase transition temperature, ζ-potential, lamellarity, fluidity and efficiency in loading calcein. The results were then applied to estimate the coefficients of the mixture design model and to find the optimal lipid composition with improved entrapment efficiency, size, transition temperature, fluidity and ζ-potential of liposomes. The response optimization of experiments was the liposome formulation with DOPC: 46%, POPC: 12% and DPPC: 42%. The optimal liposome formulation had an average diameter of 127.5 nm, a phase-transition temperature of 11.43 °C, a ζ-potential of -7.24 mV, a fluidity (1/P)TMA-DPH value of 2.87 and an encapsulation efficiency of 20.24%. The experimental results of characterization of the optimal liposome formulation were in good agreement with those predicted by the mixture design technique.
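A simplex-centroid design over k mixture components places one blend at each vertex, each edge midpoint, each subset centroid and the overall centroid, i.e. 2^k - 1 points for k components; the paper's ten mixtures presumably augment this base design. A minimal sketch for three lipids:

```python
from itertools import combinations

def simplex_centroid(k):
    """Simplex-centroid design: for every non-empty subset of the k
    components, one blend with equal proportions of that subset."""
    points = []
    for r in range(1, k + 1):
        for subset in combinations(range(k), r):
            p = [0.0] * k
            for i in subset:
                p[i] = 1.0 / r
            points.append(tuple(p))
    return points

design = simplex_centroid(3)   # 2**3 - 1 = 7 blends for 3 lipids
```

Each tuple is a (DOPC, POPC, DPPC)-style proportion vector summing to 1; a polynomial response model fitted at these points then predicts properties such as entrapment efficiency over the whole composition triangle.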

  20. Experimental Demonstration of a Cheap and Accurate Phase Estimation

    DOE PAGES

    Rudinger, Kenneth; Kimmel, Shelby; Lobser, Daniel; ...

    2017-05-11

We demonstrate an experimental implementation of robust phase estimation (RPE) to learn the phase of a single-qubit rotation on a trapped Yb⁺ ion qubit. Here, we show this phase can be estimated with an uncertainty below 4 × 10⁻⁴ rad using as few as 176 total experimental samples, and our estimates exhibit Heisenberg scaling. Unlike standard phase estimation protocols, RPE neither assumes perfect state preparation and measurement, nor requires access to ancillae. We cross-validate the results of RPE with the more resource-intensive protocol of gate set tomography.

  1. Functioning efficiency of intermediate coolers of multistage steam-jet ejectors of steam turbines

    NASA Astrophysics Data System (ADS)

    Aronson, K. E.; Ryabchikov, A. Yu.; Brodov, Yu. M.; Zhelonkin, N. V.; Murmanskii, I. B.

    2017-03-01

Designs of various types of intermediate coolers of multistage ejectors are analyzed, and the thermal effectiveness and gas-dynamic resistance of the coolers are estimated. Data on the quantity of steam condensed from the steam-air mixture in stage I of an ejector cooler were obtained on the basis of experimental results. It is established that the fraction of steam condensed in the cooler constitutes 0.6-0.7 and is almost independent of operating steam pressure (and, consequently, of steam flow) and of the air amount in the steam-air mixture. It is suggested to estimate the amount of condensed steam in a stage I cooler based on comparison of computed and experimental characteristics of stage II. Computation taking this hypothesis into account for the main types of mass-produced multistage ejectors shows that 0.60-0.85 of the steam should be condensed in the stage I cooler. For ejectors with "pipe-in-pipe" type coolers (EPO-3-200) and helical coolers (EO-30), the fraction of condensed steam may reach 0.93-0.98. Estimation of the gas-dynamic resistance of coolers shows that resistance from the steam side in coolers with built-in and remote pipe bundles constitutes 100-300 Pa. The gas-dynamic resistance of "pipe-in-pipe" and helical type coolers is significantly (3-6 times) higher compared with a pipe bundle. However, performance on "dry" (atmospheric) air is higher for ejectors with relatively high gas-dynamic resistance of coolers than for those with low resistance at approximately equal operating flow values of the ejectors.

  2. Individual phase constitutive properties of a TRIP-assisted QP980 steel from a combined synchrotron X-ray diffraction and crystal plasticity approach

    DOE PAGES

    Hu, Xiao Hua; Sun, X.; Hector, Jr., L. G.; ...

    2017-04-21

Here, microstructure-based constitutive models for multiphase steels require accurate constitutive properties of the individual phases for component forming and performance simulations. We address this requirement with a combined experimental/theoretical methodology which determines the critical resolved shear stresses and hardening parameters of the constituent phases in QP980, a TRIP-assisted steel subject to a two-step quenching and partitioning heat treatment. High energy X-ray diffraction (HEXRD) from a synchrotron source provided the average lattice strains of the ferrite, martensite, and austenite phases from the measured volume during in situ tensile deformation. The HEXRD data were then input to a computationally efficient, elastic-plastic self-consistent (EPSC) crystal plasticity model which estimated the constitutive parameters of different slip systems for the three phases via a trial-and-error approach. The EPSC-estimated parameters were then input to a finite element crystal plasticity (CPFE) model representing the QP980 tensile sample. The predicted lattice strains and global stress versus strain curves are found to be 8% lower than the EPSC-predicted values and the HEXRD measurements, respectively. This discrepancy, which is attributed to the stiff secant assumption in the EPSC formulation, is resolved with a second step in which CPFE is used to iteratively refine the EPSC-estimated parameters. Remarkably close agreement is obtained between the theoretically predicted and experimentally derived flow curves for the QP980 material.

  3. Individual phase constitutive properties of a TRIP-assisted QP980 steel from a combined synchrotron X-ray diffraction and crystal plasticity approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, X. H.; Sun, X.; Hector, L. G.

    2017-06-01

Microstructure-based constitutive models for multiphase steels require accurate constitutive properties of the individual phases for component forming and performance simulations. We address this requirement with a combined experimental/theoretical methodology which determines the critical resolved shear stresses and hardening parameters of the constituent phases in QP980, a TRIP-assisted steel subject to a two-step quenching and partitioning heat treatment. High energy X-ray diffraction (HEXRD) from a synchrotron source provided the average lattice strains of the ferrite, martensite, and austenite phases from the measured volume during in situ tensile deformation. The HEXRD data were then input to a computationally efficient, elastic-plastic self-consistent (EPSC) crystal plasticity model which estimated the constitutive parameters of different slip systems for the three phases via a trial-and-error approach. The EPSC-estimated parameters were then input to a finite element crystal plasticity (CPFE) model representing the QP980 tensile sample. The predicted lattice strains and global stress versus strain curves are found to be 8% lower than the EPSC-predicted values and the HEXRD measurements, respectively. This discrepancy, which is attributed to the stiff secant assumption in the EPSC formulation, is resolved with a second step in which CPFE is used to iteratively refine the EPSC-estimated parameters. Remarkably close agreement is obtained between the theoretically predicted and experimentally derived flow curves for the QP980 material.

  4. The Single Cigarette Economy in India--a Back of the Envelope Survey to Estimate its Magnitude.

    PubMed

    Lal, Pranay; Kumar, Ravinder; Ray, Shreelekha; Sharma, Narinder; Bhattarcharya, Bhaktimay; Mishra, Deepak; Sinha, Mukesh K; Christian, Anant; Rathinam, Arul; Singh, Gurbinder

    2015-01-01

Sale of single cigarettes is an important factor in early experimentation, initiation and persistence of tobacco use, and a vital factor in the smoking epidemic in India as it is globally. Single cigarettes also promote the sale of illicit cigarettes and neutralise the effect of pack warnings and effective taxation, making tobacco more accessible and affordable to minors. This is the first study to our knowledge which estimates the size of the single stick market in India. In February 2014, a 10-jurisdiction survey was conducted across India to estimate the sale of cigarettes in packs and sticks, by brand and price, over a full business day. We estimate that nearly 75% of all cigarettes are sold as single sticks annually, which translates to nearly half a billion US dollars, or 30 percent of India's excise revenues from all cigarettes. This is the price consumers pay that is not captured through taxation and therefore flows into an informal economy. Tracking the retail price of single cigarettes is an efficient way to determine the willingness to pay of cigarette smokers and is a possible method for determining tax rates in the absence of any other rationale.

  5. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm.

    PubMed

    Cuevas, Erik; Díaz, Margarita

    2015-01-01

    In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely random as it is the case of RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
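For reference, the plain RANSAC baseline that the harmony-search variant modifies samples minimal subsets uniformly at random and keeps the model with the largest consensus set. A minimal line-fitting sketch on synthetic correspondences; the HS-guided, quality-aware sampling itself is not shown, and the data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical correspondences: points on the line y = 2x + 1 plus outliers.
n_in, n_out = 80, 40
x = rng.uniform(0, 1, n_in)
inliers = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.01, n_in)])
outliers = rng.uniform(0, 3, (n_out, 2))
pts = np.vstack([inliers, outliers])

def ransac_line(pts, iters=200, tol=0.05):
    """Plain RANSAC: draw minimal 2-point samples uniformly at random
    and keep the line model with the largest consensus set."""
    best_model, best_count = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
        if abs(x2 - x1) < 1e-9:          # skip degenerate vertical pairs
            continue
        a = (y2 - y1) / (x2 - x1)        # slope
        b = y1 - a * x1                  # intercept
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        count = int(np.sum(resid < tol))
        if count > best_count:
            best_model, best_count = (a, b), count
    return best_model, best_count

(a, b), support = ransac_line(pts)
```

The paper's contribution replaces the uniform draw inside the loop with harmony-search improvisation, so each new candidate sample is informed by the quality of previous models rather than being purely random.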

  6. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by variations in the ground topography. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat areas. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angles, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  7. Designing a stable feedback control system for blind image deconvolution.

    PubMed

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems, with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to an undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which determine the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. A comparison of Monte Carlo-based Bayesian parameter estimation methods for stochastic models of genetic networks

    PubMed Central

    Zaikin, Alexey; Míguez, Joaquín

    2017-01-01

    We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al. (2007). By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo-based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method, and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study which shows that, while the three techniques can effectively solve the problem, there are significant differences in both estimation accuracy and computational efficiency. PMID:28797087
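As a point of reference for the simplest ingredient these samplers share, the following sketch runs a Metropolis-Hastings random walk on a one-parameter toy model (a linear trend in Gaussian noise, not the paper's multicellular clock model; data, prior, and proposal scale are all assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy observation model: y_t = theta * t + Gaussian noise, sigma = 1.
true_theta = 2.0
t = np.arange(1, 21)
y = true_theta * t + rng.normal(0, 1.0, t.size)

def log_post(theta):
    # Flat prior on theta > 0; Gaussian likelihood up to a constant.
    if theta <= 0:
        return -np.inf
    return -0.5 * np.sum((y - theta * t) ** 2)

samples, theta = [], 1.0
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05)   # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # MH accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

est = np.mean(samples[1000:])            # discard burn-in
print("posterior mean estimate:", round(est, 3))
```

PMH replaces the exact `log_post` with a particle-filter estimate of the likelihood; NPMC and ABC-SMC replace the single chain with weighted populations.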

  9. Gap Detection and Temporal Modulation Transfer Function as Behavioral Estimates of Auditory Temporal Acuity Using Band-Limited Stimuli in Young and Older Adults

    PubMed Central

    Shen, Yi

    2015-01-01

    Purpose Gap detection and the temporal modulation transfer function (TMTF) are 2 common methods to obtain behavioral estimates of auditory temporal acuity. However, the agreement between the 2 measures is not clear. This study compares results from these 2 methods and their dependencies on listener age and hearing status. Method Gap detection thresholds and the parameters that describe the TMTF (sensitivity and cutoff frequency) were estimated for young and older listeners who were naive to the experimental tasks. Stimuli were 800-Hz-wide noises with upper frequency limits of 2400 Hz, presented at 85 dB SPL. A 2-track procedure (Shen & Richards, 2013) was used for the efficient estimation of the TMTF. Results No significant correlation was found between gap detection threshold and the sensitivity or the cutoff frequency of the TMTF. No significant effect of age and hearing loss on either the gap detection threshold or the TMTF cutoff frequency was found, while the TMTF sensitivity improved with increasing hearing threshold and worsened with increasing age. Conclusion Estimates of temporal acuity using gap detection and TMTF paradigms do not seem to provide a consistent description of the effects of listener age and hearing status on temporal envelope processing. PMID:25087722

  10. Energy-efficient quantum frequency estimation

    NASA Astrophysics Data System (ADS)

    Liuzzo-Scorpo, Pietro; Correa, Luis A.; Pollock, Felix A.; Górecka, Agnieszka; Modi, Kavan; Adesso, Gerardo

    2018-06-01

    The problem of estimating the frequency of a two-level atom in a noisy environment is studied. Our interest is to minimise both the energetic cost of the protocol and the statistical uncertainty of the estimate. In particular, we prepare a probe in a ‘GHZ-diagonal’ state by means of a sequence of qubit gates applied on an ensemble of n atoms in thermal equilibrium. Noise is introduced via a phenomenological time-non-local quantum master equation, which gives rise to a phase-covariant dissipative dynamics. After an interval of free evolution, the n-atom probe is globally measured at an interrogation time chosen to minimise the error bars of the final estimate. We model explicitly a measurement scheme which becomes optimal in a suitable parameter range, and are thus able to calculate the total energetic expenditure of the protocol. Interestingly, we observe that scaling up our multipartite entangled probes offers no precision enhancement when the total available energy E is limited. This is in stark contrast with standard frequency estimation, where larger probes—more sensitive but also more ‘expensive’ to prepare—are always preferred. Replacing E by the resource that places the most stringent limitation on each specific experimental setup would thus help to formulate more realistic metrological prescriptions.

  11. Modelling and analysis of a direct ascorbic acid fuel cell

    NASA Astrophysics Data System (ADS)

    Zeng, Yingzhi; Fujiwara, Naoko; Yamazaki, Shin-ichi; Tanimoto, Kazumi; Wu, Ping

    L-Ascorbic acid (AA), also known as vitamin C, is an environmentally-benign and biologically-friendly compound that can be used as an alternative fuel for direct oxidation fuel cells. While direct ascorbic acid fuel cells (DAAFCs) have been studied experimentally, modelling and simulation of these devices have been overlooked. In this work, we develop a mathematical model to describe a DAAFC and validate it with experimental data. The model is formulated by integrating the mass and charge balances, and model parameters are estimated by best-fitting to experimental data of current-voltage curves. By comparing the transient voltage curves predicted by dynamic simulation and experiments, the model is further validated. Various parameters that affect the power generation are studied by simulation. The cathodic reaction is found to be the most significant determinant of power generation, followed by fuel feed concentration and the mass-transfer coefficient of ascorbic acid. These studies also reveal that the power density steadily increases with respect to the fuel feed concentration. The results may guide future development and operation of a more efficient DAAFC.

  12. Optimization of the synthesis process of an iron oxide nanocatalyst supported on activated carbon for the inactivation of Ascaris eggs in water using the heterogeneous Fenton-like reaction.

    PubMed

    Morales-Pérez, Ariadna A; Maravilla, Pablo; Solís-López, Myriam; Schouwenaars, Rafael; Durán-Moreno, Alfonso; Ramírez-Zamora, Rosa-María

    2016-01-01

    An experimental design methodology was used to optimize the synthesis of an iron-supported nanocatalyst as well as the inactivation of Ascaris eggs (Ae) using this material. A factor screening design was used to identify the significant experimental factors for the nanocatalyst support (supported Fe content (% w/w), calcination temperature, and calcination time) and for the inactivation process, the heterogeneous Fenton-like reaction (H2O2 dose, Fe/H2O2 mass ratio, pH, and reaction time). The optimization of the significant factors was carried out using a face-centered central composite design. The optimal operating conditions for both processes were estimated with a statistical model and implemented experimentally with five replicates. The predicted value of the Ae inactivation rate was close to the laboratory results. At the optimal operating conditions of nanocatalyst production and the Ae inactivation process, the Ascaris ova showed genomic damage to the point that no cell repair was possible, showing that this advanced oxidation process is highly efficient for inactivating this pathogen.

  13. A practical limit to trials needed in one-person randomized controlled experiments.

    PubMed

    Alemi, Roshan; Alemi, Farrokh

    2007-01-01

    Recently in this journal, J. Olsson and colleagues suggested the use of factorial experimental designs to guide a patient's efforts to choose among multiple interventions. These authors argue that factorial design, where every possible combination of the interventions is tried, is superior to sequential trial and error. Factorial design is efficient in identifying the effectiveness of interventions (factor effects). Most patients, however, care only about feeling better, not about why their condition is improving. If the goal of the patient is to get better and not to estimate the factor effects, then no control groups are needed. In this article, we show a modification of the factorial design of experiments proposed by Olsson and colleagues in which a full-factorial design is planned but experimentation is stopped when the patient's condition improves. With this modification, the number of trials is radically fewer than in the full-factorial design. For example, a patient trying out 4 different interventions with a median probability of success of .50 is expected to need 2 trials before stopping the experimentation, in comparison with 32 in a full-factorial design.
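Under the stop-on-improvement modification, the trial count is approximately geometric in the per-combination success probability, so its mean is about 1/p. A minimal simulation sketch (the cap of 16 = 2^4 combinations and the 0.5 success probability are assumptions for the example; the article quotes different full-factorial trial counts):

```python
import random

random.seed(1)

def trials_until_success(p=0.5, max_trials=16):
    """Number of intervention combinations tried before the patient
    improves, stopping early; capped at the full-factorial size."""
    for t in range(1, max_trials + 1):
        if random.random() < p:
            return t
    return max_trials

n_sim = 100_000
avg = sum(trials_until_success() for _ in range(n_sim)) / n_sim
print(f"average trials with early stopping: {avg:.2f}")
```

With p = 0.5 the average comes out near 2, matching the article's expected trial count for the early-stopping scheme.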

  14. Experimental designs for a Benign Paroxysmal Positional Vertigo model

    PubMed Central

    2013-01-01

    Background The pathology of Benign Paroxysmal Positional Vertigo (BPPV) is detected by a clinician through maneuvers consisting of a series of consecutive head turns that trigger the symptoms of vertigo in the patient. A statistical model based on a new maneuver has been developed in order to calculate the volume of endolymph displaced after the maneuver. Methods A simplification of the Navier‐Stokes problem from fluid theory has been used to construct the model. In addition, the same cubic splines that are commonly used in the kinematic control of robots were used to obtain an appropriate description of the different maneuvers. Experimental designs were then computed to obtain an optimal estimate of the model. Results D‐optimal and c‐optimal designs of experiments have been calculated. These experiments consist of a series of specific head turns of duration Δt and angle α to be performed by the clinician on the patient. The experimental designs obtained indicate the duration and angle of the maneuver to be performed, as well as the corresponding proportion of replicates. Thus, in the D‐optimal design for 100 experiments, the maneuver consisting of a positive 30° pitch from the upright position, followed by a positive 30° roll, both with a duration of one and a half seconds, is repeated 47 times. The maneuver with a 60°/6° pitch/roll during half a second is then repeated 16 times, and the maneuver with a 90°/90° pitch/roll during half a second is repeated 37 times. Other designs with significant differences are computed and compared. Conclusions A biomechanical model was derived to provide a quantitative basis for the detection of BPPV. The robustness study for the D‐optimal design, with respect to the choice of the nominal values of the parameters, shows high efficiencies for small variations and provides a guide to the researcher. Furthermore, c‐optimal designs give valuable assistance in checking how efficient the D‐optimal design is for the estimation of each of the parameters. The experimental designs provided in this paper allow the physician to validate the model. The authors of the paper have held consultations with an ENT consultant in order to align the outline more closely with practical scenarios. PMID:23509996

  15. A Quantitative Model for the Prediction of Sooting Tendency from Molecular Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St. John, Peter C.; Kairys, Paul; Das, Dhrubajyoti D.

    Particulate matter emissions negatively affect public health and global climate, yet newer fuel-efficient gasoline direct injection engines tend to produce more soot than their port-fuel injection counterparts. Fortunately, the search for sustainable biomass-based fuel blendstocks provides an opportunity to develop fuels that suppress soot formation in more efficient engine designs. However, as emissions tests are experimentally cumbersome and the search space for potential bioblendstocks is vast, new techniques are needed to estimate the sooting tendency of a diverse range of compounds. In this study, we develop a quantitative structure-activity relationship (QSAR) model of sooting tendency based on the experimental yield sooting index (YSI), which ranks molecules on a scale from n-hexane, 0, to benzene, 100. The model includes a rigorously defined applicability domain, and the predictive performance is checked using both internal and external validation. Model predictions for compounds in the external test set had a median absolute error of ~3 YSI units. An investigation of compounds that are poorly predicted by the model lends new insight into the complex mechanisms governing soot formation. Predictive models of soot formation can therefore be expected to play an increasingly important role in the screening and development of next-generation biofuels.

  16. Miniature open channel scrubbers for gas collection.

    PubMed

    Toda, Kei; Koga, Tomoko; Tanaka, Toshinori; Ohira, Shin-Ichi; Berg, Jordan M; Dasgupta, Purnendu K

    2010-10-15

    An open channel scrubber is proposed as a miniature fieldable gas collector. The device is 100 mm in length, 26 mm in width and 22 mm in thickness. The channel bottom is rendered hydrophilic and liquid flows as a thin layer on the bottom. The air sample flows atop the appropriately chosen flowing liquid film and analyte molecules are absorbed into the liquid. There is no membrane at the air-liquid interface: the two phases are in direct contact. Analyte species collected over a 10 min interval are determined by fluorometric flow analysis or ion chromatography. A calculation algorithm was developed to estimate the collection efficiency a priori; experimental and simulated results agreed well. The characteristics of the open channel scrubber are discussed in this paper from both theoretical and experimental points of view. In addition to superior collection efficiencies at relatively high sample air flow rates, this geometry is particularly attractive in that there is no change in collection performance due to membrane fouling. We demonstrate field use for the analysis of ambient SO2 near an active volcano. This is a basic investigation of a membraneless miniature scrubber and is expected to lead to the development of a micro-gas analysis system integrated with a detector for continuous measurements. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. An integrative computational approach for prioritization of genomic variants

    DOE PAGES

    Dubchak, Inna; Balasubramanian, Sandhya; Wang, Sheng; ...

    2014-12-15

    An essential step in the discovery of molecular mechanisms contributing to disease phenotypes and efficient experimental planning is the development of weighted hypotheses that estimate the functional effects of sequence variants discovered by high-throughput genomics. With the increasing specialization of bioinformatics resources, creating analytical workflows that seamlessly integrate data and bioinformatics tools developed by multiple groups becomes inevitable. Here we present a case study of the use of a distributed analytical environment integrating four complementary specialized resources, namely the Lynx platform, VISTA RViewer, the Developmental Brain Disorders Database (DBDB), and the RaptorX server, for the identification of high-confidence candidate genes contributing to the pathogenesis of spina bifida. The analysis resulted in the prediction and validation of deleterious mutations in the SLC19A placental transporter in mothers of affected children that cause narrowing of the outlet channel and therefore lead to a reduced folate permeation rate. The described approach also enabled the correct identification of several genes previously shown to contribute to the pathogenesis of spina bifida, and suggested additional genes for experimental validation. This study demonstrates that the seamless integration of bioinformatics resources enables fast and efficient prioritization and characterization of genomic factors and molecular networks contributing to the phenotypes of interest.

  18. Effect of Wind Flow on Convective Heat Losses from Scheffler Solar Concentrator Receivers

    NASA Astrophysics Data System (ADS)

    Nene, Anita Arvind; Ramachandran, S.; Suyambazhahan, S.

    2018-05-01

    The receiver is an important element of a solar concentrator system. In a Scheffler concentrator, solar rays are concentrated at the focus of a parabolic dish. While radiation losses are relatively predictable and calculable, since they are strongly related to receiver temperature, convective losses are difficult to estimate because of additional factors such as wind direction, wind speed, and receiver geometry, which were not addressed prior to the current work. An experimental investigation was carried out on two receiver geometries, cylindrical and conical, with a 2.7 m2 Scheffler concentrator to find the tilt condition giving the best efficiency. Experimental results showed that, compared to the cylindrical receiver, the conical receiver gave maximum efficiency at a 45° tilt angle. However, the effects of additional factors such as wind speed and wind direction, especially on convective losses, could not be observed separately. The current work investigates the same two geometries using computational fluid dynamics (FLUENT) to compute convective losses while considering all variables, namely receiver tilt angle, wind velocity, and wind direction. For the cylindrical receiver, the directional heat transfer coefficient (HTC) is remarkably sensitive to the tilt condition, meaning this geometry is critical to tilt and incurs higher convective heat losses. For the conical receiver, the directional average HTC is much less sensitive to the tilt condition, leading to lower convective heat loss.

  19. A fresh approach to forecasting in astroparticle physics and dark matter searches

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas D. P.; Weniger, Christoph

    2018-02-01

    We present a toolbox of new techniques and concepts for the efficient forecasting of experimental sensitivities. These are applicable to a large range of scenarios in (astro-)particle physics and are based on the Fisher information formalism. Fisher information provides an answer to the question 'what is the maximum extractable information from a given observation?'. It is a common tool for the forecasting of experimental sensitivities in many branches of science, but is rarely used in astroparticle physics or searches for particle dark matter. After briefly reviewing the Fisher information matrix of general Poisson likelihoods, we propose very compact expressions for estimating expected exclusion and discovery limits (the 'equivalent counts method'). We demonstrate by comparison with Monte Carlo results that they remain surprisingly accurate even deep in the Poisson regime. We show how correlated background systematics can be efficiently accounted for by a treatment based on Gaussian random fields. Finally, we introduce the novel concept of Fisher information flux. It can be thought of as a generalization of the commonly used signal-to-noise ratio, while accounting for the non-local properties and saturation effects of background and instrumental uncertainties. It is a powerful and flexible tool ready to be used as a core concept for informed strategy development in astroparticle physics and searches for particle dark matter.
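For independent Poisson bins with means mu_i(theta), the Fisher information matrix takes the simple form I_ab = sum_i (d mu_i / d theta_a)(d mu_i / d theta_b) / mu_i, and its inverse gives the Cramer-Rao forecast of the parameter covariance. A minimal sketch of this forecasting step (the templates and fiducial parameter values below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Poisson counting experiment with expected counts per bin
#   mu_i = theta_s * s_i + theta_b * b_i   (signal + background templates).
s = np.array([0.1, 0.5, 2.0, 5.0, 2.0, 0.5, 0.1])   # signal template
b = np.array([10., 10., 10., 10., 10., 10., 10.])   # flat background template
theta = np.array([1.0, 1.0])                        # fiducial (signal, bkg) strengths

def fisher(theta, s, b):
    mu = theta[0] * s + theta[1] * b
    grads = np.stack([s, b])        # d mu / d theta_a, shape (2, nbins)
    # I_ab = sum_i (d mu_i/d a)(d mu_i/d b) / mu_i
    return grads / mu @ grads.T

I = fisher(theta, s, b)
cov = np.linalg.inv(I)              # Cramer-Rao bound on the covariance
print("forecast 1-sigma error on signal strength:", np.sqrt(cov[0, 0]))
```

The off-diagonal entry of `cov` quantifies how much the background normalization degrades the signal forecast, which is the kind of degeneracy the toolbox's Fisher machinery handles in general.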

  20. A Quantitative Model for the Prediction of Sooting Tendency from Molecular Structure

    DOE PAGES

    St. John, Peter C.; Kairys, Paul; Das, Dhrubajyoti D.; ...

    2017-07-24

    Particulate matter emissions negatively affect public health and global climate, yet newer fuel-efficient gasoline direct injection engines tend to produce more soot than their port-fuel injection counterparts. Fortunately, the search for sustainable biomass-based fuel blendstocks provides an opportunity to develop fuels that suppress soot formation in more efficient engine designs. However, as emissions tests are experimentally cumbersome and the search space for potential bioblendstocks is vast, new techniques are needed to estimate the sooting tendency of a diverse range of compounds. In this study, we develop a quantitative structure-activity relationship (QSAR) model of sooting tendency based on the experimental yield sooting index (YSI), which ranks molecules on a scale from n-hexane, 0, to benzene, 100. The model includes a rigorously defined applicability domain, and the predictive performance is checked using both internal and external validation. Model predictions for compounds in the external test set had a median absolute error of ~3 YSI units. An investigation of compounds that are poorly predicted by the model lends new insight into the complex mechanisms governing soot formation. Predictive models of soot formation can therefore be expected to play an increasingly important role in the screening and development of next-generation biofuels.

  1. Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data

    PubMed Central

    CHEN, SHUAI; ZHAO, HONGWEI

    2013-01-01

    Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm originally used to explain the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has quite a small bias in many realistic settings. We conduct numerical studies to examine the finite-sample properties of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
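The classical redistribute-to-the-right principle the abstract builds on can be shown directly: each censored observation passes its probability mass equally to the observations on its right, and the mass remaining beyond a time point reproduces the Kaplan-Meier survival estimate. A minimal sketch with made-up follow-up data (this illustrates the principle, not the paper's cost estimators):

```python
import numpy as np

# Follow-up times (distinct, sorted) and event indicators (1 = event, 0 = censored).
t = np.array([2., 3., 4., 5., 7., 8., 9., 10.])
d = np.array([1,  0,  1,  1,  0,  1,  0,  1])
n = len(t)

# Redistribute-to-the-right: start with mass 1/n on each observation;
# each censored observation passes its mass equally to all later ones.
w = np.full(n, 1.0 / n)
for i in range(n):
    if d[i] == 0 and i < n - 1:
        w[i + 1:] += w[i] / (n - 1 - i)
        w[i] = 0.0

def S(x):
    """Survival estimate: probability mass remaining strictly beyond x."""
    return w[t > x].sum()

# Matches the Kaplan-Meier product-limit estimate between event times,
# e.g. S(4.5) = (7/8)*(5/6) = 35/48.
print("S(4.5) =", round(S(4.5), 5))
```

The paper's contribution is to run this redistribution on cost histories rather than survival times, yielding weighted, monotone survival estimators of costs.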

  2. Infrared upconversion for astronomical applications. [laser applications to astronomical spectroscopy of infrared spectra

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Kostiuk, T.; Ogilvie, K. W.

    1975-01-01

    The performance of an upconversion system is examined for observation of astronomical sources in the low to middle infrared spectral range. Theoretical values for the performance parameters of an upconversion system for astronomical observations are evaluated in view of the conversion efficiencies, spectral resolution, field of view, minimum detectable source brightness and source flux. Experimental results of blackbody measurements and molecular absorption spectrum measurements using a lithium niobate upconverter with an argon-ion laser as the pump are presented. Estimates of the expected optimum sensitivity of an upconversion device which may be built with the presently available components are given.

  3. Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation

    PubMed Central

    Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.

    2013-01-01

    Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
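The nonlinear relation referred to above is the implicit single-diode equation I = Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rsh, which normally requires an iterative solve. A sketch of the conventional fixed-point iteration for the I-V curve (the parameter values are illustrative assumptions, and this is the standard model rather than the paper's fast approximation):

```python
import numpy as np

# Illustrative single-diode parameters (assumed, not from the paper).
Iph, I0 = 5.0, 1e-9       # photocurrent, diode saturation current [A]
Rs, Rsh = 0.2, 300.0      # series / shunt resistance [ohm]
n_cells, ideality = 36, 1.3
Vt = 0.0257 * ideality * n_cells    # thermal voltage scaled over the cells

def current(V, iters=50):
    """Solve I = Iph - I0*(exp((V+I*Rs)/Vt)-1) - (V+I*Rs)/Rsh
    by fixed-point iteration on I."""
    I = Iph
    for _ in range(iters):
        I = Iph - I0 * np.expm1((V + I * Rs) / Vt) - (V + I * Rs) / Rsh
    return I

V = np.linspace(0.0, 24.0, 60)
I = np.array([current(v) for v in V])
P = V * I
Isc = current(0.0)
print("short-circuit current ~", round(Isc, 3), "A")
print("max power on grid ~", round(P.max(), 1), "W")
```

It is exactly this per-point iteration that an explicit approximate model, like the one proposed in the paper, is designed to avoid.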

  4. A note on windowing for the waveform relaxation

    NASA Technical Reports Server (NTRS)

    Zhang, Hong

    1994-01-01

    The technique of windowing has often been used in the implementation of waveform relaxation for solving ODEs or time-dependent PDEs. Its efficiency depends upon problem stiffness and operator splitting. Using model problems, estimates for the window length and convergence rate are derived. The effectiveness of windowing is then investigated for the non-stiff and stiff cases respectively. It is concluded that, for the former, windowing is highly recommended when a large discrepancy exists between the convergence rate on a time interval and the rates on its subintervals. For the latter, windowing does not provide any computational advantage if machine features are disregarded. The discussion is supported by experimental results.
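Waveform relaxation iterates on whole solution trajectories over a time window, freezing the coupling terms at the previous iterate; windowing then controls the interval length over which each sweep runs. A minimal Jacobi waveform-relaxation sketch on a single window for the model problem x' = Ax (the model problem, splitting, and step sizes are illustrative assumptions):

```python
import numpy as np

# Model problem x' = A x on [0, T]; Jacobi waveform relaxation splits
# A = D + R (diagonal + coupling) and iterates
#   x_{k+1}' = D x_{k+1} + R x_k,  x_{k+1}(0) = x0,
# sweeping the whole window with the previous waveform frozen.
A = np.array([[-1.0, 0.5], [0.5, -1.0]])
D = np.diag(np.diag(A))
R = A - D
x0 = np.array([1.0, 0.0])

def wr_solve(T, nt, sweeps):
    h = T / nt
    x = np.tile(x0, (nt + 1, 1))           # initial waveform guess
    for _ in range(sweeps):
        new = np.empty_like(x)
        new[0] = x0
        for i in range(nt):                # implicit Euler in the diagonal part
            rhs = new[i] + h * (R @ x[i + 1])
            new[i + 1] = rhs / (1.0 - h * np.diag(D))
        x = new
    return x

def exact(T):
    # Reference solution via eigendecomposition of the symmetric A.
    w, V = np.linalg.eigh(A)
    c = np.linalg.solve(V, x0)
    return V @ (c * np.exp(w * T))

x = wr_solve(T=1.0, nt=200, sweeps=8)
err = np.abs(x[-1] - exact(1.0)).max()
print("error after 8 sweeps on one window:", err)
```

The iteration error on a window of length T scales roughly like (||R|| T)^k / k!, which is why splitting a long interval into shorter windows can pay off in the non-stiff case discussed above.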

  5. OPTOELECTRONICS, FIBER OPTICS, AND OTHER ASPECTS OF QUANTUM ELECTRONICS: Effective matching of a microwave modulator to a laser diode in a selected band of gigahertz frequencies

    NASA Astrophysics Data System (ADS)

    Bliskavitskiĭ, A. A.; Vladimirov, Yu K.; Tambiev, Yu A.; Shelkov, N. V.

    1989-08-01

    Theoretical and experimental investigations were made of wide-band low-loss matching of an InGaAsP heterolaser to a microwave modulator in the gigahertz range. The results of panoramic measurements of the standing-wave ratio of the laser were used to estimate the components of the equivalent electrical circuit of the laser and to synthesize a passive microstrip matching circuit which increased by more than 10 dB the efficiency of modulation of the laser radiation intensity in a 2-3.4 GHz band of modulating frequencies.

  6. Investigation of the ignition of liquid hydrocarbon fuels with nanoadditives

    NASA Astrophysics Data System (ADS)

    Bakulin, V. N.; Velikodnyi, V. Yu.; Levin, Yu. K.; Popov, V. V.

    2017-12-01

    Our experimental studies showed that nanoparticle additives strongly promote stable ignition of hydrocarbon fuels and stabilize their combustion in a high-frequency high-voltage discharge. We detected the effects of jet deceleration, an increase in the volume of the combustible mixture, and a reduction in the ignition delay time. These effects have been estimated quantitatively by digitally processing video frames of the ignition of a bubbled kerosene jet with and without 0.5% graphene nanoparticle additives. The effect is explained by the influence of electrodynamic processes.

  7. Comparison of planar, PET and well-counter measurements of total tumor radioactivity in a mouse xenograft model.

    PubMed

    Green, Michael V; Seidel, Jurgen; Williams, Mark R; Wong, Karen J; Ton, Anita; Basuli, Falguni; Choyke, Peter L; Jagoda, Elaine M

    2017-10-01

    Quantitative small animal radionuclide imaging studies are often carried out with the intention of estimating the total radioactivity content of various tissues such as the radioactivity content of mouse xenograft tumors exposed to putative diagnostic or therapeutic agents. We show that for at least one specific application, positron projection imaging (PPI) and PET yield comparable estimates of absolute total tumor activity and that both of these estimates are highly correlated with direct well-counting of these same tumors. These findings further suggest that in this particular application, PPI is a far more efficient data acquisition and processing methodology than PET. Forty-one athymic mice were implanted with PC3 human prostate cancer cells transfected with prostate-specific membrane antigen (PSMA (+)) and one additional animal (for a total of 42) with a control blank vector (PSMA (-)). All animals were injected with [ 18 F] DCFPyl, a ligand for PSMA, and imaged for total tumor radioactivity with PET and PPI. The tumors were then removed, assayed by well counting for total radioactivity and the values between these methods intercompared. PET, PPI and well-counter estimates of total tumor radioactivity were highly correlated (R 2 >0.98) with regression line slopes near unity (0.95

  8. Retrieving high-resolution surface solar radiation with cloud parameters derived by combining MODIS and MTSAT data

    NASA Astrophysics Data System (ADS)

    Tang, Wenjun; Qin, Jun; Yang, Kun; Liu, Shaomin; Lu, Ning; Niu, Xiaolei

    2016-03-01

    Cloud parameters (cloud mask, effective particle radius, and liquid/ice water path) are important inputs for estimating surface solar radiation (SSR). These parameters can be derived from MODIS with high accuracy, but their temporal resolution is too low to obtain high-temporal-resolution SSR retrievals. In order to obtain hourly cloud parameters, an artificial neural network (ANN) is applied in this study to directly construct a functional relationship between MODIS cloud products and Multifunctional Transport Satellite (MTSAT) geostationary satellite signals. In addition, an efficient parameterization model for SSR retrieval is introduced; when driven with MODIS atmospheric and land products, its root mean square error (RMSE) is about 100 W m-2 for 44 Baseline Surface Radiation Network (BSRN) stations. Once the estimated cloud parameters and other information (such as aerosol, precipitable water, and ozone) are input to the model, we can derive SSR at high spatiotemporal resolution. The retrieved SSR is first evaluated against hourly radiation data at three experimental stations in the Haihe River basin of China. The mean bias error (MBE) and RMSE of the hourly SSR estimates are 12.0 W m-2 (or 3.5 %) and 98.5 W m-2 (or 28.9 %), respectively. The retrieved SSR is also evaluated against daily radiation data at 90 China Meteorological Administration (CMA) stations. The MBE is 9.8 W m-2 (or 5.4 %); the RMSEs of the daily and monthly mean SSR estimates are 34.2 W m-2 (or 19.1 %) and 22.1 W m-2 (or 12.3 %), respectively. The accuracy is comparable to or even higher than that of two other radiation products (GLASS and ISCCP-FD), and the present method is more computationally efficient and can produce hourly SSR data at a spatial resolution of 5 km.

  9. A novel estimating method for steering efficiency of the driver with electromyography signals

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Ji, Xuewu; Hayama, Ryouhei; Mizuno, Takahiro

    2014-05-01

Existing research on steering efficiency mainly focuses on the mechanical efficiency of the steering system, with the aim of designing and optimizing the steering mechanism. In the development of assist steering systems, and especially in the evaluation of their comfort, the efficiency of the driver's physiological output is usually not considered, because this output is difficult to measure or estimate; an objective evaluation of steering comfort from a movement-efficiency perspective therefore cannot be conducted. As a step toward such an objective evaluation, an estimating method for the driver's steering efficiency was developed based on the relationship between steering force and muscle activity. First, the steering forces in the steering-wheel plane and the electromyography (EMG) signals of the primary muscles were measured. These primary muscles, whose functions in the steering maneuver were identified in previous work, are the shoulder and upper-arm muscles that produce most of the steering torque. Next, based on multiple regressions of the steering force on the EMG signals, both the effective steering force and the driver's total force capacity during the steering maneuver were calculated. Finally, the driver's steering efficiency was estimated from the effective force and the total force capacity, which together characterize the physiological output of the primary muscles. This research develops a novel method for estimating the driver's steering efficiency in terms of physiological output, including the estimation of both the steering force and the force capacity of the primary muscles from EMG signals, and will help to evaluate steering comfort from an objective perspective.
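The regression-and-ratio core of the method can be sketched as follows, using synthetic stand-ins for the EMG channels and gains (the muscle set, regressors, and numbers are invented for illustration; antagonist channels are given negative gains so effective force and force capacity differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: rectified EMG of 4 primary muscles and measured steering force.
# Negative gains model antagonist muscles whose effort opposes the net steering torque.
emg = rng.uniform(0.0, 1.0, size=(200, 4))
true_gain = np.array([30.0, 22.0, -15.0, -10.0])          # N per unit EMG (assumed)
force = emg @ true_gain + rng.normal(0.0, 1.0, size=200)

# Multiple regression of steering force on the EMG channels
gain_hat, *_ = np.linalg.lstsq(emg, force, rcond=None)

effective_force = emg @ gain_hat          # net force that actually steers
force_capacity = emg @ np.abs(gain_hat)   # total muscular output, agonist + antagonist
efficiency = np.mean(np.abs(effective_force)) / np.mean(force_capacity)
print(f"estimated steering efficiency: {efficiency:.2f}")
```

The ratio falls below 1 whenever co-contraction spends muscular effort that does not contribute to the net steering force.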

  10. Effect of experimental design on the prediction performance of calibration models based on near-infrared spectroscopy for pharmaceutical applications.

    PubMed

    Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2012-12-01

Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental design on the prediction performance of quantitative models based on NIRS, using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (whose content remained constant). Partial least squares-based models relating acetaminophen content to spectral data were generated from the data of each experimental design. The effect of each experimental design was evaluated by testing the statistical significance of differences in the bias and standard error of prediction of the resulting model. The calibration model derived from the I-optimal design had prediction performance similar to that of the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggests that experimental-design selection for calibration-model development is critical, and that optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
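The designs compared above differ mainly in how many runs they spend on the two coded factors (acetaminophen content and the MCC-to-lactose ratio). A small sketch generating three of them in coded units (the levels and axial distance are standard textbook choices, not the paper's exact settings):

```python
from itertools import product

def full_factorial(levels, k):
    """All combinations of the given coded levels across k factors."""
    return list(product(levels, repeat=k))

def central_composite(k, alpha=1.414):
    """2^k corner points, 2k axial points at distance alpha, one center point."""
    corners = list(product((-1.0, 1.0), repeat=k))
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(tuple(pt))
    return corners + axial + [(0.0,) * k]

ff5 = full_factorial((-1, -0.5, 0, 0.5, 1), 2)   # 5-L FF: 25 runs
ff3 = full_factorial((-1, 0, 1), 2)              # 3-L FF: 9 runs
ccd = central_composite(2)                       # central composite: 9 runs
print(len(ff5), len(ff3), len(ccd))
```

The run-count gap (25 versus 9) is exactly the kind of saving the abstract attributes to the optimal designs; I- and D-optimal point sets would be chosen by an optimality criterion rather than enumerated.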

  11. Potential Organ-Donor Supply and Efficiency of Organ Procurement Organizations

    PubMed Central

    Guadagnoli, Edward; Christiansen, Cindy L.; Beasley, Carol L.

    2003-01-01

    The authors estimated the supply of organ donors in the U.S. and also according to organ procurement organizations (OPOs). They estimated the number of donors in the U.S. to be 16,796. Estimates of the number of potential donors for each OPO were used to calculate the level of donor efficiency (actual donors as a percent of potential donors). Overall, donor efficiency for OPOs was 35 percent; the majority was between 30- and 40-percent efficient. Although there is room to improve donor efficiency in the U.S., even a substantial improvement will not meet the Nation's demand for organs. PMID:14628403

  12. Potential organ-donor supply and efficiency of organ procurement organizations.

    PubMed

    Guadagnoli, Edward; Christiansen, Cindy L; Beasley, Carol L

    2003-01-01

    The authors estimated the supply of organ donors in the U.S. and also according to organ procurement organizations (OPOs). They estimated the number of donors in the U.S. to be 16,796. Estimates of the number of potential donors for each OPO were used to calculate the level of donor efficiency (actual donors as a percent of potential donors). Overall, donor efficiency for OPOs was 35 percent; the majority was between 30- and 40-percent efficient. Although there is room to improve donor efficiency in the U.S., even a substantial improvement will not meet the Nation's demand for organs.
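The efficiency metric in these two records is simple arithmetic; a quick check that reproduces the reported 35 percent figure (the actual-donor count below is back-calculated and purely illustrative):

```python
# Donor efficiency = actual donors as a percentage of estimated potential donors
actual_donors = 5879        # hypothetical national total, back-calculated for illustration
potential_donors = 16796    # estimated potential supply reported in the study

efficiency = 100.0 * actual_donors / potential_donors
print(f"donor efficiency: {efficiency:.0f}%")
```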

  13. Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications

    DOEpatents

    Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI

    2012-05-29

    A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
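The claim's decision logic can be illustrated with a toy sketch in which the "determined equation" is replaced by an assumed exponential decay of conversion efficiency (the functional form, rates, and threshold below are invented for illustration):

```python
def estimate_efficiency(eta_initial, decay_rate, hours):
    """Assumed exponential decay of NOx conversion efficiency over operating time."""
    return eta_initial * (1.0 - decay_rate) ** hours

def needs_regeneration(eta_initial, decay_rate, hours, threshold=0.60):
    """Trigger the regeneration mode once estimated efficiency falls below threshold."""
    return estimate_efficiency(eta_initial, decay_rate, hours) < threshold

# Illustrative numbers only: 90 % initial efficiency, 2 % loss per hour
print(needs_regeneration(0.90, 0.02, 5))    # early in operation
print(needs_regeneration(0.90, 0.02, 25))   # after prolonged operation
```

In the patented method the decay model itself is selected from the monitored temperature and fuel dosing rate rather than fixed in advance.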

  14. On-board monitoring of 2-D spatially-resolved temperatures in cylindrical lithium-ion batteries: Part I. Low-order thermal modelling

    NASA Astrophysics Data System (ADS)

    Richardson, Robert R.; Zhao, Shi; Howey, David A.

    2016-09-01

Estimating the temperature distribution within Li-ion batteries during operation is critical for safety and control purposes. Although existing control-oriented thermal models - such as thermal equivalent circuits (TEC) - are computationally efficient, they only predict average temperatures, and are unable to predict the spatially resolved temperature distribution throughout the cell. We present a low-order 2-D thermal model of a cylindrical battery based on a Chebyshev spectral-Galerkin (SG) method, capable of predicting the full temperature distribution with a similar efficiency to a TEC. The model accounts for transient heat generation, anisotropic heat conduction, and non-homogeneous convection boundary conditions. The accuracy of the model is validated through comparison with finite element simulations, which show that the 2-D temperature field (r, z) of a large format (64 mm diameter) cell can be accurately modelled with as few as 4 states. Furthermore, the performance of the model for a range of Biot numbers is investigated via frequency analysis. For larger cells or highly transient thermal dynamics, the model order can be increased for improved accuracy. The incorporation of this model in a state estimation scheme with experimental validation against thermocouple measurements is presented in the companion contribution (http://www.sciencedirect.com/science/article/pii/S0378775316308163).
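The spectral machinery behind such low-order models can be illustrated in one dimension: a standard Chebyshev collocation solve of a steady conduction analogue u'' = f with fixed boundary values (the paper uses a 2-D spectral-Galerkin formulation; this is only its simplest relative):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]           # second derivative on interior points (u(+-1) = 0)
f = np.exp(x[1:-1])                # source term
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, f)   # solve u'' = exp(x) with zero boundary values

exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
err = np.max(np.abs(u - exact))
print(f"max error with {N + 1} points: {err:.2e}")
```

The point of the spectral approach is visible here: near machine-precision accuracy with only a handful of unknowns, which is what lets the battery model run with as few as 4 states.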

  15. Bubble Proliferation or Dissolution of Cavitation Nuclei in the Beam Path of a Shock-Wave Lithotripter

    NASA Astrophysics Data System (ADS)

    Frank, Spencer; Lautz, Jaclyn; Sankin, Georgy N.; Szeri, Andrew J.; Zhong, Pei

    2015-03-01

It is hypothesized that the decreased treatment efficiency of contemporary shock-wave lithotripters is related to tensile wave attenuation due to cavitation in the prefocal beam path. Utilizing high-speed imaging of the beam path and focal pressure waveform measurements, tensile attenuation is associated with bubble proliferation. By systematically testing different combinations of pulse-repetition frequency and gas concentration, we modulate the bubble-dissolution time to identify which conditions lead to bubble proliferation, and show that reducing bubble proliferation in the beam path significantly improves acoustic transmission and stone comminution efficiency in vitro. In addition to the experiments, a bubble-proliferation model is developed that takes gas diffusion across the bubble wall and bubble fragmentation into account. By aligning the model with the experimental observations, the number of daughter bubbles produced after a single lithotripter bubble collapse is estimated to be in the range of 253-510. This finding agrees in order of magnitude with previous measurements of an isolated bubble collapse in a lithotripter field by Pishchalnikov, McAteer, and Williams [BJU Int. 102, 1681 (2008), 10.1111/j.1464-410X.2008.07896.x], and the estimate improves the general understanding of lithotripsy bubble dynamics in the beam path.

  16. Development of Analytical Algorithm for the Performance Analysis of Power Train System of an Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon

Power train system design is one of the key R&D areas in the development of a new automobile, because an engine of optimum size, with a power transmission adapted to the design requirements of the new vehicle, can be obtained through system design. Especially for electric vehicle design, a very reliable power-train design algorithm is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory of the simulation algorithm is conservation of energy, with several analytical and experimental inputs such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation results, the running resistance of a designed vehicle is obtained as a function of the operating conditions of the vehicle, such as road inclination angle and vehicle speed. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two calculation results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated, and it can be used to evaluate the suitability of the designed power train system for the vehicle.
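The resistance and traction balance described above can be sketched directly from the physics (all vehicle parameters below are generic assumptions, not values from the paper):

```python
import math

def running_resistance(v, grade_rad, mass=1500.0, c_rr=0.012,
                       rho=1.2, c_d=0.30, area=2.2, g=9.81):
    """Total resistance force [N]: rolling + grade climbing + aerodynamic drag."""
    rolling = mass * g * c_rr * math.cos(grade_rad)
    climbing = mass * g * math.sin(grade_rad)
    drag = 0.5 * rho * c_d * area * v ** 2
    return rolling + climbing + drag

def tractive_force(motor_torque, gear_ratio, final_drive, wheel_radius, efficiency=0.92):
    """Force at the wheels [N] for a given motor torque and overall reduction."""
    return motor_torque * gear_ratio * final_drive * efficiency / wheel_radius

# Illustrative operating point: 72 km/h on a 3 % grade
v = 20.0
resistance = running_resistance(v, math.atan(0.03))
traction = tractive_force(motor_torque=180.0, gear_ratio=3.0,
                          final_drive=3.5, wheel_radius=0.30)
print(f"resistance {resistance:.0f} N, traction {traction:.0f} N")
```

Driving performance follows from comparing the two curves over speed and grade: the vehicle can sustain any operating point where traction exceeds resistance.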

  17. An Efficient Deterministic Approach to Model-based Prediction Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the future inputs to the system. Prognostics algorithms must account for this inherent uncertainty. In addition, these algorithms never know exactly the state of the system at the desired time of prediction, or the exact model describing the future evolution of the system, which accumulates additional uncertainty into the predicted EOL. Prediction algorithms that do not account for these sources of uncertainty misrepresent the EOL and can lead to poor decisions based on their results. In this paper, we explore the impact of uncertainty in the prediction problem. We develop a general model-based prediction algorithm that incorporates these sources of uncertainty, and propose a novel approach to efficiently handle uncertainty in the future input trajectories of a system by using the unscented transformation. Using this approach, we are not only able to reduce the computational load but also to estimate the bounds of uncertainty in a deterministic manner, which can be useful during decision-making. Using a lithium-ion battery as a case study, we perform several simulation-based experiments to explore these issues, and validate the overall approach using experimental data from a battery testbed.
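The unscented transformation used to handle future-input uncertainty can be sketched generically: deterministic sigma points are pushed through a nonlinearity and reweighted to recover the mean and covariance. This is the textbook transform with common scaling defaults, not the authors' full EOL predictor:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through f using 2n+1 deterministic sigma points."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wm[0] = lam / (n + lam)
    Wc = Wm.copy()
    Wc[0] += 1.0 - alpha ** 2 + beta
    Y = np.array([f(s) for s in sigma])
    mean_y = Wm @ Y
    dY = Y - mean_y
    cov_y = (Wc[:, None] * dY).T @ dY
    return mean_y, cov_y

# Sanity check on a linear map, where the transform is exact: y = A x
A = np.array([[1.0, 2.0], [0.0, 1.0]])
mu = np.array([1.0, -1.0])
P = np.array([[0.5, 0.1], [0.1, 0.3]])
m_y, P_y = unscented_transform(mu, P, lambda x: A @ x)
print(np.allclose(m_y, A @ mu), np.allclose(P_y, A @ P @ A.T))
```

The appeal in a prognostics setting is that only 2n+1 deterministic samples are propagated, instead of the large random ensembles a Monte Carlo treatment of future inputs would require.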

  18. Continuous diffusion signal, EAP and ODF estimation via Compressive Sensing in diffusion MRI.

    PubMed

    Merlet, Sylvain L; Deriche, Rachid

    2013-07-01

In this paper, we exploit the ability of Compressed Sensing (CS) to recover the whole 3D Diffusion MRI (dMRI) signal from a limited number of samples while efficiently recovering important diffusion features such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF). Some attempts to use CS in estimating diffusion signals have been made recently. However, these were mainly experimental demonstrations of CS capabilities in dMRI, and the CS theory has not been fully exploited. In this work, we also propose to study the impact of sparsity, incoherence, and the RIP property on the reconstruction of diffusion signals. We show that an efficient use of CS theory enables a drastic reduction in the number of measurements commonly used in dMRI acquisitions. Only 20-30 measurements, optimally spread over several b-value shells, are shown to be necessary, which is fewer than in previous attempts to recover the diffusion signal using CS. This opens an attractive perspective for measuring diffusion signals in white matter within a reduced acquisition time, and shows that CS holds great promise and opens new and exciting perspectives in diffusion MRI (dMRI). Copyright © 2013 Elsevier B.V. All rights reserved.
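The recovery principle being exploited, reconstructing a sparse signal from far fewer random measurements than unknowns, can be sketched with plain iterative soft-thresholding (ISTA). This is a generic CS solver on synthetic data, not the paper's dMRI-specific sampling scheme or basis:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, k = 50, 25, 3                       # signal length, measurements, sparsity
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = np.array([1.5, -2.0, 1.0])

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
b = A @ x_true                            # compressed measurements

# ISTA: gradient step on 0.5*||Ax - b||^2 followed by soft-thresholding (l1 prox)
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    x = x - step * A.T @ (A @ x - b)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

recovered = set(np.argsort(np.abs(x))[-k:])
print(sorted(recovered), sorted(support.tolist()))
```

With 25 measurements of a 3-sparse signal in 50 dimensions, the l1 penalty picks out the true support; the analogous sparsity in dMRI lives in a carefully chosen signal basis.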

  19. Water scarcity, market-based incentives, and consumer response

    NASA Astrophysics Data System (ADS)

    Krause, K.; Chermak, J. M.; Brookshire, D. S.

    2003-04-01

Water is an increasingly scarce resource, and the future viability of many regions will depend in large part on how efficiently resources are utilized. A key factor in this success will be a thorough understanding of consumers and the characteristics that drive their water use. In this research we test, and find support for, the hypothesis that residential water consumers are heterogeneous. We combine experimental and survey responses to test for statistically significant consumer characteristics that are observable drivers of water demand. Significant factors include "stage of life" (i.e., student versus workforce versus retired), as well as various social and cultural factors including age, ethnicity, political affiliation, and religious affiliation. Identification of these characteristics allows us to econometrically estimate disaggregated water demand for a sample of urban water consumers in Albuquerque, New Mexico, USA. The results provide unique parameter estimates for different consumer types. Using these results we design an incentive-compatible, non-linear pricing program that allows individual consumers to choose a fixed-fee/commodity-charge combination from a menu, maximizing their utility while meeting the conservation goals of the program. We show that this program, with its attention to consumer differences, is more efficient than the traditional "one size fits all" programs commonly employed by many water utilities.

  20. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    PubMed

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  1. Sequential causal inference: Application to randomized trials of adaptive treatment strategies

    PubMed Central

    Dawson, Ree; Lavori, Philip W.

    2009-01-01

Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714

  2. IMU-based online kinematic calibration of robot manipulator.

    PubMed

    Du, Guanglong; Zhang, Ping

    2013-01-01

Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate the kinematic parameter errors. Using this orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  3. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    PubMed

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are obtained from statistics of the best individuals via a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.

  4. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    PubMed Central

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are obtained from statistics of the best individuals via a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on higher-dimensional problems, where FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA. PMID:24892059
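The Gaussian-EDA loop described in these two records can be reduced to a short sketch: sample from a Gaussian, select the best individuals, refit the Gaussian, and carry the elite forward (population size, selection fraction, and the sphere benchmark are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(X):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return np.sum(X ** 2, axis=1)

dim, pop, elite_frac, gens = 5, 100, 0.3, 80
mean = rng.uniform(-5.0, 5.0, size=dim)
std = np.full(dim, 5.0)
best_x, best_f = None, np.inf

for _ in range(gens):
    X = rng.normal(mean, std, size=(pop, dim))
    if best_x is not None:
        X[0] = best_x                     # elitism: keep the best individual found so far
    f = sphere(X)
    order = np.argsort(f)
    if f[order[0]] < best_f:
        best_f, best_x = f[order[0]], X[order[0]].copy()
    selected = X[order[: int(elite_frac * pop)]]
    mean = selected.mean(axis=0)          # refit the Gaussian to the selected individuals
    std = selected.std(axis=0) + 1e-12

print(f"best sphere value: {best_f:.3e}")
```

FEGEDA's "fast learning rule" is a more refined parameter update than the plain refit used here, but the sample/select/refit structure is the same.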

  5. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin

    2011-12-01

Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction-mapping condition. It is found that the values in phase space do not always converge on their initial values under sufficient backward iteration of the symbolic vectors, in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performance of the initial-vector estimation at different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.

  6. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

Infrared small target tracking plays an important role in applications including military reconnaissance, early warning, and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. Firstly, the strength of the SVD-based algorithm is that it exploits the global information of the image to obtain a background estimate of the infrared image. The dim target is enhanced by subtracting the continuously updated background estimate from the original image. Secondly, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the drift (excursion) problem. The GCF technique is adopted to preserve the edges and suppress the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated from a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
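The SVD background-estimation step can be sketched on a synthetic scene: a truncated SVD keeps the smooth, low-rank clutter as the background estimate, and subtracting it leaves the dim target dominant (image size, ranks, and amplitudes below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 64
u1 = np.sin(np.linspace(0.0, 3.0, n)); v1 = np.cos(np.linspace(0.0, 2.0, n))
u2 = np.linspace(0.0, 1.0, n);         v2 = np.ones(n)
background = np.outer(u1, v1) + np.outer(u2, v2)         # smooth, rank-2 clutter

image = background + rng.normal(0.0, 0.01, size=(n, n))  # sensor noise
image[30, 40] += 1.0                                     # dim point target

# Rank-r truncated SVD as the background estimate
U, s, Vt = np.linalg.svd(image, full_matrices=False)
r = 2
bg_est = (U[:, :r] * s[:r]) @ Vt[:r]

residual = image - bg_est                                # target-enhanced image
peak = np.unravel_index(np.argmax(residual), residual.shape)
print(peak)
```

Because the clutter occupies only the leading singular components, the residual concentrates the target energy, which is what makes the subsequent correlation-filter tracking tractable.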

  7. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  8. Tritium internal dose estimation from measurements with liquid scintillators.

    PubMed

    Pántya, A; Dálnoki, Á; Imre, A R; Zagyvai, P; Pázmándi, T

    2018-07-01

    Tritium may exist in several chemical and physical forms in workplaces, common occurrences are in vapor or liquid form (as tritiated water) and in organic form (e.g. thymidine) which can get into the body by inhalation or by ingestion. For internal dose assessment it is usually assumed that urine samples for tritium analysis are obtained after the tritium concentration inside the body has reached equilibrium following intake. Comparison was carried out for two types of vials, two efficiency calculation methods and two available liquid scintillation devices to highlight the errors of the measurements. The results were used for dose estimation with MONDAL-3 software. It has been shown that concerning the accuracy of the final internal dose assessment, the uncertainties of the assumptions used in the dose assessment (for example the date and route of intake, the physical and chemical form) can be more influential than the errors of the measured data. Therefore, the improvement of the experimental accuracy alone is not the proper way to improve the accuracy of the internal dose estimation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. QUANTIFYING ALTERNATIVE SPLICING FROM PAIRED-END RNA-SEQUENCING DATA.

    PubMed

    Rossell, David; Stephan-Otto Attolini, Camille; Kroiss, Manuel; Stöcker, Almond

    2014-03-01

RNA-sequencing has revolutionized biomedical research and, in particular, our ability to study gene alternative splicing. The problem has important implications for human health, as alternative splicing may be involved in malfunctions at the cellular level and in multiple diseases. However, the high-dimensional nature of the data and the existence of experimental biases pose serious data analysis challenges. We find that the standard data summaries used to study alternative splicing are severely limited, as they ignore a substantial amount of valuable information. Current data analysis methods are based on such summaries and are hence sub-optimal. Further, they have limited flexibility in accounting for technical biases. We propose novel data summaries and a Bayesian modeling framework that overcome these limitations and determine biases in a non-parametric, highly flexible manner. These summaries adapt naturally to the rapid improvements in sequencing technology. We provide efficient point estimates and uncertainty assessments. The approach allows the study of alternative splicing patterns for individual samples and can also serve as the basis for downstream analyses. In simulations we found a severalfold improvement in estimation mean squared error compared to popular approaches, and substantially higher consistency between replicates in experimental data. Our findings indicate the need to adjust the routine summarization and analysis of alternative-splicing RNA-seq studies. We provide a software implementation in the R package casper.

  10. Sonoporation generator design and performance evaluation

    NASA Astrophysics Data System (ADS)

    Svilainis, L.; Chaziachmetovas, A.; Jurkonis, R.; Kybartas, D.

    2012-05-01

We propose to perform sonoporation by direct excitation using a square-wave pulser. Adding an arbitrary waveform generator and a programmable high-voltage power supply to the pulser should allow for a more economical experimental arrangement. The excitation stage has to be capable of transmitting a high-voltage signal into a capacitive load. This paper reports the generator topology and experimental results of its performance evaluation. A transformer push-pull topology was chosen. Thanks to the proposed pulser structure, both unipolar and bipolar pulses can be obtained. Energy per pulse is suggested as the performance parameter: any achievable combination of burst duration and repetition frequency can then be estimated. A comparison of experimental results with PSpice modeling and the energy delivered to the load is presented. Energy per pulse at 300 V (600 Vpp) and 2.7 MHz into a 3000 pF load was 1.1 mJ. With 5 W power supplies, this would allow a 3 kHz repetition frequency for single pulses, or 40-pulse bursts at a 100 Hz repetition rate. A focused 2.7 MHz center-frequency transducer was targeted as the load. The transducer impedance was measured to estimate the load and the power-delivery efficiency; 5 Ω was found to be the optimal generator output impedance at 2.7 MHz. Using the 2.7 MHz transducer we were able to achieve 1 MPa peak negative pressure at a 250 V power supply.
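The quoted operating limits follow from a simple average-power budget, which can be checked directly from the numbers in the abstract:

```python
# Check the reported operating points against a 5 W supply budget.
energy_per_pulse = 1.1e-3          # J, measured at 300 V into 3000 pF (from the text)
supply_power = 5.0                 # W

max_single_pulse_prf = supply_power / energy_per_pulse    # Hz, budget-limited PRF
burst_power = 40 * 100 * energy_per_pulse                 # W, 40-pulse bursts at 100 Hz

print(f"max single-pulse PRF: {max_single_pulse_prf:.0f} Hz")
print(f"burst-mode average power: {burst_power:.1f} W")
```

Both quoted modes fit the budget: roughly 4.5 kHz is available for single pulses (so 3 kHz is feasible), and the burst mode averages 4.4 W against the 5 W supply.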

  11. Burst Ductility of Zirconium Clads: The Defining Role of Residual Stress

    NASA Astrophysics Data System (ADS)

    Kumar, Gulshan; Kanjarla, A. K.; Lodh, Arijit; Singh, Jaiveer; Singh, Ramesh; Srivastava, D.; Dey, G. K.; Saibaba, N.; Doherty, R. D.; Samajdar, Indradev

    2016-08-01

    Closed end burst tests, using room temperature water as pressurizing medium, were performed on a number of industrially produced zirconium (Zr) clads. A total of 31 samples were selected based on observed differences in burst ductility. The latter was represented as total circumferential elongation or TCE. The selected samples, with a range of TCE values (5 to 35 pct), did not show any correlation with mechanical properties along the axial direction, microstructural parameters, crystallographic textures, and outer tube-surface normal (σ11) and shear (τ13) components of the residual stress matrix. TCEs, however, had a clear correlation with hydrostatic residual stress (Ph), as estimated from tri-axial stress analysis on the outer tube surface. Estimated Ph also scaled with measured normal stress (σ33) at the tube cross section. An elastic-plastic finite element model with ductile damage failure criterion was developed to understand the burst mechanism of zirconium clads. Experimentally measured Ph gradients were imposed on a solid element continuum finite element (FE) simulation to mimic the residual stresses present prior to pressurization. Trends in experimental TCEs were also brought out with computationally efficient shell element-based FE simulations imposing the outer tube-surface Ph values. Suitable components of the residual stress matrix thus determined the burst performance of the Zr clads.
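    As a minimal illustration of the ductility measure used above (the function name is hypothetical), TCE is the relative increase in tube circumference after burst:

```python
def tce_percent(circumference_before_mm, circumference_after_mm):
    # total circumferential elongation (TCE), in percent
    return 100.0 * (circumference_after_mm - circumference_before_mm) / circumference_before_mm
```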

  12. Automatic Emboli Detection System for the Artificial Heart

    NASA Astrophysics Data System (ADS)

    Steifer, T.; Lewandowski, M.; Karwat, P.; Gawlikowski, M.

    In spite of the progress in material engineering and ventricular assist device construction, thromboembolism remains the most crucial problem in mechanical heart supporting systems. Therefore, the ability to monitor the patient's blood for clot formation should be considered an important factor in the development of heart supporting systems. The well-known methods for automatic embolus detection are based on monitoring the ultrasound Doppler signal. A working system utilizing ultrasound Doppler is being developed for the purpose of flow estimation and emboli detection in the clinical artificial heart ReligaHeart EXT. The system will be based on the existing dual channel multi-gate Doppler device with RF digital processing. A specially developed clamp-on cannula probe, equipped with 2-4 MHz piezoceramic transducers, enables easy system setup. We present the issues related to the development of automatic emboli detection via Doppler measurements. We consider several algorithms for flow estimation and emboli detection. We discuss their efficiency and assess them against the requirements of our experimental setup. Theoretical considerations are then compared with preliminary experimental findings from a) flow studies with blood mimicking fluid and b) in-vitro flow studies with animal blood. Finally, we discuss some more methodological issues - we consider several possible approaches to the problem of verifying the accuracy of the detection system.
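    A common baseline for Doppler embolus detection is an amplitude threshold against a running background estimate. The sketch below is a generic HITS-style criterion under that assumption, not the ReligaHeart system's actual algorithm; the window length and threshold are illustrative:

```python
import numpy as np

def detect_emboli(envelope, win=64, thresh_db=9.0):
    # Flag samples whose Doppler envelope exceeds a running background
    # median by more than thresh_db. A generic illustration only; real
    # detectors typically work on the short-time spectrogram.
    bg = np.array([np.median(envelope[max(0, i - win):i + 1])
                   for i in range(envelope.size)])
    ratio_db = 20.0 * np.log10(np.maximum(envelope, 1e-12) /
                               np.maximum(bg, 1e-12))
    return ratio_db > thresh_db
```

    A single high-intensity transient then stands out against the slowly varying background flow signal.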

  13. Evaluating the Small-World-Ness of a Sampled Network: Functional Connectivity of Entorhinal-Hippocampal Circuitry

    NASA Astrophysics Data System (ADS)

    She, Qi; Chen, Guanrong; Chan, Rosa H. M.

    2016-02-01

    The amount of publicly accessible experimental data has gradually increased in recent years, which makes it possible to reconsider many longstanding questions in neuroscience. In this paper, an efficient framework is presented for reconstructing functional connectivity using experimental spike-train data. A modified generalized linear model (GLM) with L1-norm penalty was used to investigate 10 datasets. These datasets contain spike-train data collected from the entorhinal-hippocampal region in the brains of rats performing different tasks. The analysis shows that the entorhinal-hippocampal network of well-trained rats demonstrated significant small-world features. It is found that the connectivity structure generated by distance-dependent models is responsible for the observed small-world features of the reconstructed networks. The models are utilized to simulate a subset of units recorded from a large biological neural network using multiple electrodes. Two metrics for quantifying the small-world-ness both suggest that the network reconstructed from the sampled nodes shows more prominent small-world-ness than the original unknown network when the number of recorded neurons is small. Finally, this study shows that it is feasible to adjust the estimated small-world-ness results based on the number of neurons recorded to provide a more accurate reference of the network property.
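    The two standard ingredients of small-world-ness metrics, the mean clustering coefficient C and the characteristic path length L, can be computed directly from an adjacency structure. This is a generic pure-Python sketch, not the paper's implementation:

```python
from collections import deque

def clustering(adj):
    # Mean local clustering coefficient of an undirected graph.
    # adj: dict mapping node -> set of neighbour nodes.
    cs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        cs.append(2.0 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

def char_path_length(adj):
    # Mean shortest-path length over connected node pairs (BFS per node).
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# One common small-world-ness metric is sigma = (C / C_rand) / (L / L_rand),
# where C_rand and L_rand come from matched random reference graphs.
```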

  14. Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and, at the same time, sufficient data for BPF image reconstruction can be secured from a single scan without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme were developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast were demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%.
    Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly including head-and-neck scans.
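    The core idea behind strip-blocker scatter correction is that detector samples in a blocker's shadow record, to first order, scatter only; that shadow signal can be interpolated across the open regions and subtracted. A minimal 1-D sketch under that assumption (not the authors' exact estimator):

```python
import numpy as np

def scatter_correct(projection, blocked_mask):
    # projection: 1-D detector profile; blocked_mask: True where a blocker
    # strip shadows the detector (those samples read mostly scatter).
    rows = np.arange(projection.size)
    scatter = np.interp(rows, rows[blocked_mask], projection[blocked_mask])
    corrected = np.clip(projection - scatter, 0.0, None)
    return corrected, scatter
```

    In the paper's setting the same interpolation idea operates in 2-D across the strip pattern, before the rebinned BPF reconstruction.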

  15. Bond energies of ThO+ and ThC+: A guided ion beam and quantum chemical investigation of the reactions of thorium cation with O2 and CO

    NASA Astrophysics Data System (ADS)

    Cox, Richard M.; Citir, Murat; Armentrout, P. B.; Battey, Samuel R.; Peterson, Kirk A.

    2016-05-01

    Kinetic energy dependent reactions of Th+ with O2 and CO are studied using a guided ion beam tandem mass spectrometer. The formation of ThO+ in the reaction of Th+ with O2 is observed to be exothermic and barrierless with a reaction efficiency at low energies of k/kLGS = 1.21 ± 0.24, similar to the efficiency observed in ion cyclotron resonance experiments. Formation of ThO+ and ThC+ in the reaction of Th+ with CO is endothermic in both cases. The kinetic energy dependent cross sections for formation of these product ions were evaluated to determine 0 K bond dissociation energies (BDEs) of D0(Th+-O) = 8.57 ± 0.14 eV and D0(Th+-C) = 4.82 ± 0.29 eV. The present value of D0(Th+-O) is within experimental uncertainty of previously reported experimental values, whereas this is the first report of D0(Th+-C). Both BDEs are observed to be larger than those of their transition metal congeners, TiL+, ZrL+, and HfL+ (L = O and C), believed to be a result of lanthanide contraction. Additionally, the reactions were explored by quantum chemical calculations, including a full Feller-Peterson-Dixon composite approach with correlation contributions up to coupled-cluster singles and doubles with iterative triples and quadruples (CCSDTQ) for ThC, ThC+, ThO, and ThO+, as well as more approximate CCSD with perturbative (triples) [CCSD(T)] calculations where a semi-empirical model was used to estimate spin-orbit energy contributions. Finally, the ThO+ BDE is compared to other actinide (An) oxide cation BDEs and a simple model utilizing An+ promotion energies to the reactive state is used to estimate AnO+ and AnC+ BDEs. For AnO+, this model yields predictions that are typically within experimental uncertainty and performs better than density functional theory calculations presented previously.
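    Threshold energies such as D0(Th+-C) are extracted by fitting an empirical threshold law to the measured cross sections, schematically sigma(E) = sigma0 (E - E0)^n / E above E0. A toy grid-search fit under that model is sketched below; real guided-ion-beam analyses also convolute the model over the experimental ion and neutral energy distributions:

```python
import numpy as np

def xs_model(E, sigma0, E0, n):
    # empirical threshold law: sigma(E) = sigma0 * (E - E0)^n / E above E0
    return np.where(E > E0, sigma0 * np.maximum(E - E0, 0.0) ** n / E, 0.0)

def fit_threshold(E, sigma, n=1.5):
    # Crude grid search over E0 with a closed-form least-squares amplitude;
    # a toy illustration only, not the authors' fitting procedure.
    best = (np.inf, None, None)
    for E0 in np.linspace(0.1, E.max() - 0.1, 400):
        basis = xs_model(E, 1.0, E0, n)
        denom = basis @ basis
        if denom == 0.0:
            continue
        s0 = (basis @ sigma) / denom          # least-squares sigma0
        err = np.sum((sigma - s0 * basis) ** 2)
        if err < best[0]:
            best = (err, s0, E0)
    return best[1], best[2]
```

    The fitted E0, extrapolated to 0 K, is interpreted as the bond dissociation energy of the product ion.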

  16. Rate of convergence of k-step Newton estimators to efficient likelihood estimators

    Treesearch

    Steve Verrill

    2007-01-01

    We make use of Cramér conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
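    A k-step Newton estimator starts from a root-n consistent initial estimate and applies k Newton iterations to the likelihood score. A toy sketch for the exponential-rate parameter (the distribution and starting value are illustrative, not from the paper):

```python
def k_step_newton_exponential(x, lam0, k=3):
    # k Newton iterations on the score of the exponential log-likelihood
    # l(lam) = n*log(lam) - lam*sum(x); the exact MLE is 1/mean(x).
    n, s = len(x), sum(x)
    lam = lam0
    for _ in range(k):
        score = n / lam - s        # l'(lam)
        hess = -n / lam ** 2       # l''(lam)
        lam -= score / hess
    return lam
```

    Because of local quadratic convergence, a handful of steps from any reasonable start already agrees with the full MLE to machine precision, which is what makes k-step estimators asymptotically efficient.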

  17. Trophic transfer efficiency of DDT to lake trout (Salvelinus namaycush) from their prey

    USGS Publications Warehouse

    Madenjian, C.P.; O'Connor, D.V.

    2004-01-01

    The objective of our study was to determine the efficiency with which lake trout retain DDT from their natural food. Our estimate of DDT assimilation efficiency would represent the most realistic estimate, to date, for use in risk assessment models.

  18. Effect of condensed tannins in rations of lactating dairy cows on production variables and nitrogen use efficiency.

    PubMed

    Gerlach, K; Pries, M; Tholen, E; Schmithausen, A J; Büscher, W; Südekum, K-H

    2018-01-08

    The objective of this study was to evaluate the effect of supplemented condensed tannins (CT) from the bark of the Black Wattle tree (Acacia mearnsii) on production variables and N use efficiency in high yielding dairy cows. A feeding trial with 96 lactating German Holstein cows was conducted for a total of 169 days, divided into four periods. The animals were allotted to two groups (control (CON) and experimental (EXP) group) according to milk yield in previous lactation, days in milk (98), number of lactations and BW. The trial started and finished with a period (period 1 and 4) where both groups received the same ration (total-mixed ration based on grass and maize silage, ensiled sugar beet pulp, lucerne hay, mineral premix and concentrate, calculated for 37 kg energy-corrected milk). In between, the ration of EXP cows was supplemented with 1% (CT1, period 2) and 3% of dry matter (DM) (CT3, period 3) of a commercial A. mearnsii extract (containing 0.203 g CT/g DM) which was mixed into the concentrate. In period 3, samples of urine and faeces were collected from 10 cows of each group and analyzed to estimate N excretion. Except for a tendency for a reduced milk urea concentration with CT1, there was no difference between groups in period 2 (CON v. CT1; P>0.05). The CT3 significantly reduced (P<0.05) milk protein yield, the apparent N efficiency (kg milk N/kg feed N) and milk urea concentration; but total milk yield and energy-corrected milk yield were not affected by treatment. Furthermore, as estimated from 10 cows per group and using urinary K as a marker to estimate the daily amount of urine voided, CT3 caused a minor shift of N compounds from urine to faeces, as urea-N in urine was reduced, whereas the N concentration in faeces increased. As an improvement in productivity was not achieved and N use efficiency was decreased by adding the CT product, it can be concluded that under the current circumstances its use in high yielding dairy cows is not advantageous.
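    Apparent N efficiency as used above is milk N over feed N, where milk N is conventionally milk crude protein divided by 6.38. A minimal sketch (the function name and the example figures are illustrative, not trial data):

```python
def apparent_n_efficiency(milk_protein_kg_per_day, n_intake_kg_per_day):
    # kg milk N per kg feed N; 6.38 is the standard factor converting
    # milk crude protein to milk N.
    milk_n = milk_protein_kg_per_day / 6.38
    return milk_n / n_intake_kg_per_day
```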

  19. Direction of Arrival Estimation Using a Reconfigurable Array

    DTIC Science & Technology

    2005-05-06

    civilian world. Keywords: direction-of-arrival estimation; MUSIC algorithm; reconfigurable array; experimental.
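    For reference, the MUSIC algorithm named in the keywords projects candidate steering vectors onto the noise subspace of the array covariance matrix. A generic numpy sketch for a uniform linear array (half-wavelength spacing assumed; not this report's implementation):

```python
import numpy as np

def music_spectrum(X, n_sources, spacing=0.5):
    # X: (n_sensors, n_snapshots) complex baseband snapshots from a ULA
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    _, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = V[:, : n_sensors - n_sources]         # noise-subspace eigenvectors
    angles = np.linspace(-90.0, 90.0, 361)
    m = np.arange(n_sensors)
    p = np.empty_like(angles)
    for i, th in enumerate(angles):
        a = np.exp(2j * np.pi * spacing * m * np.sin(np.radians(th)))
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2  # pseudo-spectrum
    return angles, p
```

    Peaks of the pseudo-spectrum give the direction-of-arrival estimates; steering vectors orthogonal to the noise subspace make the denominator vanish.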

  20. Experimental study of a 1 MW, 170 GHz gyrotron oscillator

    NASA Astrophysics Data System (ADS)

    Kimura, Takuji

    A detailed experimental study is presented of a 1 MW, 170 GHz gyrotron oscillator whose design is consistent with the ECH requirements of the International Thermonuclear Experimental Reactor (ITER) for bulk heating and current drive. This work is the first to demonstrate that megawatt power level at 170 GHz can be achieved in a gyrotron with high efficiency for plasma heating applications. Maximum output power of 1.5 MW is obtained at 170.1 GHz in 85 kV, 50 A operation for an efficiency of 35%. Although the experiment at MIT is conducted with short pulses (3 μs), the gyrotron is designed to be suitable for development by industry for continuous wave operation. The peak ohmic loss on the cavity wall for 1 MW of output power is calculated to be 2.3 kW/cm2, which can be handled using present cooling technology. Mode competition problems in a highly over-moded cavity are studied to maximize the efficiency. Various aspects of electron gun design are examined to obtain high quality electron beams with very low velocity spread. A triode magnetron injection gun is designed using the EGUN simulation code. A total perpendicular velocity spread of less than 8% is realized by designing a low- sensitivity, non-adiabatic gun. The RF power is generated in a short tapered cavity with an iris step. The operating mode is the TE28,8,1 mode. A mode converter is designed to convert the RF output to a Gaussian beam. Power and efficiency are measured in the design TE28,8,1 mode at 170.1 GHz as well as the TE27,8,1 mode at 166.6 GHz and TE29,8,1 mode at 173.5 GHz. Efficiencies between 34%-36% are consistently obtained over a wide range of operating parameters. These efficiencies agree with the highest values predicted by the multimode simulations. The startup scenario is investigated and observed to agree with the linear theory. The measured beam velocity ratio is consistent with EGUN simulation. 
Interception of the reflected beam by the mod-anode is measured as a function of velocity ratio, from which the beam velocity spreads are estimated. A preliminary test of the mode converter shows that the radiation from the dimpled wall launcher is a Gaussian-like beam. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
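    The quoted 35% efficiency is simply RF output power divided by electron-beam power; a one-line check (the function name is illustrative):

```python
def electronic_efficiency(p_out_w, beam_kv, beam_a):
    # RF output power over DC beam power (V_beam * I_beam)
    return p_out_w / (beam_kv * 1e3 * beam_a)

# 1.5 MW at 85 kV, 50 A gives about 0.353, matching the reported 35%.
eta = electronic_efficiency(1.5e6, 85.0, 50.0)
```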
