Sample records for estimated process efficiency

  1. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on the theory of asymptotically optimal estimating functions, can incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, because of the intense computation and high memory required to solve a large linear system. Motivated by the existence of geometrically regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only performs well in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we study the quasi-likelihood-type estimating function that is optimal within a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
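
    For orientation, the kind of first-order estimating function this literature builds on can be written down explicitly; the formula below is the standard composite-likelihood (Poisson) score for a parametric intensity function observed in a window W, quoted from the general point-process literature rather than from the dissertation itself.

```latex
u_1(\theta) \;=\; \sum_{x \in X \cap W} \frac{\nabla_\theta \lambda_\theta(x)}{\lambda_\theta(x)}
\;-\; \int_W \nabla_\theta \lambda_\theta(u)\,\mathrm{d}u \;=\; 0
```

    Quasi-likelihood and weighted variants of the kind described above replace the weight \(\nabla_\theta\lambda_\theta/\lambda_\theta\) by a function chosen to minimize the asymptotic variance of the resulting estimator.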

  2. Real-Time PCR Quantification Using A Variable Reaction Efficiency Model

    PubMed Central

    Platts, Adrian E.; Johnson, Graham D.; Linnemann, Amelia K.; Krawetz, Stephen A.

    2008-01-01

    Quantitative real-time PCR remains a cornerstone technique in gene expression analysis and sequence characterization. Despite the importance of the approach to experimental biology, the confident assignment of reaction efficiency to the early cycles of real-time PCR reactions remains problematic. Considerable noise may be generated where few cycles in the amplification are available to estimate peak efficiency. An alternate approach that uses data from beyond the log-linear amplification phase is explored with the aim of reducing noise and adding confidence to efficiency estimates. PCR reaction efficiency is regressed to estimate the per-cycle profile of an asymptotically departed peak efficiency, even when this is not closely approximated in the measurable cycles. The process can be repeated over replicates to develop a robust estimate of peak reaction efficiency. This leads to an estimate of the maximum reaction efficiency that may be considered primer-design specific. Using a series of biological scenarios, we demonstrate that this approach can provide an accurate estimate of initial template concentration. PMID:18570886
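
    A minimal sketch of the idea of regressing per-cycle efficiency toward an asymptotic peak value is given below; the logistic amplification model, parameter values, and simulated fluorescence data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_amplification(n, f_max, n_half, k, f0):
    """Four-parameter logistic model of raw fluorescence versus cycle number."""
    return f0 + f_max / (1.0 + np.exp(-(n - n_half) / k))

# Hypothetical raw fluorescence readings for cycles 1..40 of one replicate.
cycles = np.arange(1, 41, dtype=float)
rng = np.random.default_rng(0)
truth = logistic_amplification(cycles, 1000.0, 22.0, 1.6, 50.0)
fluorescence = truth + rng.normal(scale=5.0, size=cycles.size)

# Fit the amplification curve over all cycles, not just the log-linear phase.
popt, _ = curve_fit(logistic_amplification, cycles, fluorescence,
                    p0=(fluorescence.max(), 20.0, 2.0, fluorescence.min()))

# Per-cycle efficiency profile from the fitted curve, E_n = (F_n - F0) / (F_{n-1} - F0);
# its early-cycle limit exp(1/k) is the asymptotic per-cycle amplification factor,
# i.e. one plus the peak reaction efficiency for this primer design.
fitted = logistic_amplification(cycles, *popt)
eff_profile = (fitted[1:] - popt[3]) / (fitted[:-1] - popt[3])
print("fitted peak amplification factor ~", round(float(np.exp(1.0 / popt[2])), 3))
print("per-cycle factors, cycles 18-24:", np.round(eff_profile[17:24], 3))
```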

  3. Age-Dependent Relationships between Prefrontal Cortex Activation and Processing Efficiency

    PubMed Central

    Motes, Michael A.; Biswal, Bharat B.; Rypma, Bart

    2012-01-01

    fMRI was used in the present study to examine the neural basis for age-related differences in processing efficiency, particularly targeting prefrontal cortex (PFC). During scanning, older and younger participants completed a processing efficiency task in which they determined on each trial whether a symbol-number pair appeared in a simultaneously presented array of nine symbol-number pairs. Estimates of task-related BOLD signal-change were obtained for each participant. These estimates were then correlated with the participants’ performance on the task. For younger participants, BOLD signal-change within PFC decreased with better performance, but for older participants, BOLD signal-change within PFC increased with better performance. The results support the hypothesis that the availability and use of PFC resources mediates age-related changes in processing efficiency. PMID:22792129

  4. Age-Dependent Relationships between Prefrontal Cortex Activation and Processing Efficiency.

    PubMed

    Motes, Michael A; Biswal, Bharat B; Rypma, Bart

    2011-01-01

    fMRI was used in the present study to examine the neural basis for age-related differences in processing efficiency, particularly targeting prefrontal cortex (PFC). During scanning, older and younger participants completed a processing efficiency task in which they determined on each trial whether a symbol-number pair appeared in a simultaneously presented array of nine symbol-number pairs. Estimates of task-related BOLD signal-change were obtained for each participant. These estimates were then correlated with the participants' performance on the task. For younger participants, BOLD signal-change within PFC decreased with better performance, but for older participants, BOLD signal-change within PFC increased with better performance. The results support the hypothesis that the availability and use of PFC resources mediates age-related changes in processing efficiency.

  5. Determination of GTA Welding Efficiencies

    DTIC Science & Technology

    1993-03-01

    A method is developed for estimating welding efficiencies for moving arc GTAW processes. (List-of-figures excerpt: Figure 10, Miller welding equipment; Figure 11, GTAW torch setup for automatic welding.)

  6. [Administrative efficiency in the Mexican Fund for the Prevention of Catastrophic Expenditures in Health].

    PubMed

    Orozco-Núñez, Emanuel; Alcalde-Rabanal, Jaqueline; Navarro, Juan; Lozano, Rafael

    2016-01-01

    The objective was to show that the administrative regime of specialized hospitals influences the administrative processes used to operate the Mexican Fund for Catastrophic Expenditures in Health (FPGC, in Spanish), which finances health care for breast cancer, cervical cancer, and childhood leukemia. The variable used for estimating administrative efficiency was the time from case notification to reimbursement. To estimate it, semistructured interviews were conducted with key actors involved in the management of cancer care financed by the FPGC. Additionally, a group of experts was convened to make recommendations for improving processes. Specialized hospitals with a decentralized scheme required less time to complete the administrative process than hospitals dependent on State Health Services, where processing times and levels of intermediation were higher. The decentralized administrative scheme for specialized hospitals is more efficient, because these hospitals tend to be more autonomous.

  7. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  8. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE PAGES

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...

    2017-06-06

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.
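
    The lexicographic ordering of the two objectives can be illustrated with a toy sensor-selection problem; the efficiency and cost functions below are hypothetical stand-ins for the plant/estimator simulation and sensor price data used in the paper, and exhaustive enumeration replaces the genetic algorithm for the sake of a small example.

```python
from itertools import product

CANDIDATE_SENSORS = ["T1", "T2", "P1", "P2", "F1"]          # hypothetical sensor tags
SENSOR_COST = {"T1": 2.0, "T2": 2.0, "P1": 5.0, "P2": 5.0, "F1": 8.0}

def process_efficiency(selection):
    """Hypothetical surrogate: efficiency improves with sensor coverage, then saturates."""
    return 0.50 + 0.02 * min(sum(selection), 4)

def network_cost(selection):
    return sum(SENSOR_COST[s] for s, on in zip(CANDIDATE_SENSORS, selection) if on)

# Stage 1: maximize process efficiency over all 2^n candidate sensor subsets.
designs = list(product([0, 1], repeat=len(CANDIDATE_SENSORS)))
best_eff = max(process_efficiency(d) for d in designs)

# Stage 2: among designs within a small tolerance of the best efficiency,
# minimize sensor network cost (the lexicographic ordering of the two objectives).
feasible = [d for d in designs if process_efficiency(d) >= best_eff - 1e-9]
best = min(feasible, key=network_cost)

print("selected sensors:", [s for s, on in zip(CANDIDATE_SENSORS, best) if on],
      "| efficiency:", process_efficiency(best), "| cost:", network_cost(best))
```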

  9. Increase in the thermodynamic efficiency of the working process of spark-ignited engines on natural gas with the addition of hydrogen

    NASA Astrophysics Data System (ADS)

    Mikhailovna Smolenskaya, Natalia; Vladimirovich Smolenskii, Victor; Vladimirovich Korneev, Nicholas

    2018-02-01

    The work is devoted to the substantiation and practical implementation of a new approach for estimating the change in internal energy from pressure and volume. The pressure is measured with a calibrated sensor. The change in volume inside the cylinder is determined from the position of the piston, which is in turn determined precisely by the angle of rotation of the crankshaft. On the basis of the proposed approach, the thermodynamic efficiency of the working process of spark-ignition engines running on natural gas with the addition of hydrogen was estimated. Experimental studies were carried out on a single-cylinder UIT-85 unit. Their analysis showed an increase in the thermodynamic efficiency of the working process when hydrogen is added to compressed natural gas (CNG). The results obtained make it possible to determine the heat-release characteristic from the analysis of experimental data. The effect of hydrogen addition on the CNG combustion process is estimated.
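
    For reference, the standard single-zone first-law analysis expresses the apparent heat-release rate in terms of the measured cylinder pressure p, the cylinder volume V(θ) computed from crank angle θ, and the ratio of specific heats γ, with the internal-energy change taken as that of an ideal gas; the exact formulation used by the authors may differ.

```latex
\frac{\mathrm{d}Q}{\mathrm{d}\theta}
  = \frac{\mathrm{d}U}{\mathrm{d}\theta} + p\,\frac{\mathrm{d}V}{\mathrm{d}\theta}
  = \frac{\gamma}{\gamma-1}\, p\,\frac{\mathrm{d}V}{\mathrm{d}\theta}
  + \frac{1}{\gamma-1}\, V\,\frac{\mathrm{d}p}{\mathrm{d}\theta}
```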

  10. Fusiform Gyrus Dysfunction is Associated with Perceptual Processing Efficiency to Emotional Faces in Adolescent Depression: A Model-Based Approach.

    PubMed

    Ho, Tiffany C; Zhang, Shunan; Sacchet, Matthew D; Weng, Helen; Connolly, Colm G; Henje Blom, Eva; Han, Laura K M; Mobayed, Nisreen O; Yang, Tony T

    2016-01-01

    While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli, and what are the underlying neural correlates. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL) who completed an emotion identification task of dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of LBA parameters on the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., drift rate parameter of the LBA), the MDD group showed significantly reduced responses in left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency of affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed.

  11. Fusiform Gyrus Dysfunction is Associated with Perceptual Processing Efficiency to Emotional Faces in Adolescent Depression: A Model-Based Approach

    PubMed Central

    Ho, Tiffany C.; Zhang, Shunan; Sacchet, Matthew D.; Weng, Helen; Connolly, Colm G.; Henje Blom, Eva; Han, Laura K. M.; Mobayed, Nisreen O.; Yang, Tony T.

    2016-01-01

    While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli, and what are the underlying neural correlates. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL) who completed an emotion identification task of dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of LBA parameters on the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., drift rate parameter of the LBA), the MDD group showed significantly reduced responses in left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency of affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed. PMID:26869950

  12. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.

    PubMed

    Rad, Kamiar Rahnama; Paninski, Liam

    2010-01-01

    Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
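
    A minimal sketch of the underlying computation, using a plain squared-exponential Gaussian-process posterior mean on a grid with fixed hyperparameters (the paper's adaptive smoothing and marginal-likelihood fitting are not reproduced), might look as follows; the simulated spike counts are hypothetical.

```python
import numpy as np

def sq_exp_kernel(a, b, length=0.15, variance=25.0):
    """Squared-exponential covariance between two sets of 2-D locations."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length ** 2)

# Hypothetical binned spike counts: noisy observations of a smooth 2-D rate map.
rng = np.random.default_rng(1)
obs_xy = rng.uniform(0, 1, size=(200, 2))
true_rate = 5.0 + 10.0 * np.exp(-((obs_xy - 0.5) ** 2).sum(1) / 0.05)
counts = rng.poisson(true_rate).astype(float)

# GP-regression posterior mean on a grid (a Gaussian-noise stand-in for the
# point-process likelihood; hyperparameters fixed rather than optimized).
noise_var = 5.0
K = sq_exp_kernel(obs_xy, obs_xy) + noise_var * np.eye(len(obs_xy))
alpha = np.linalg.solve(K, counts - counts.mean())

gx, gy = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
grid = np.column_stack([gx.ravel(), gy.ravel()])
rate_map = counts.mean() + sq_exp_kernel(grid, obs_xy) @ alpha
print("rate map grid:", rate_map.reshape(30, 30).shape,
      "| peak estimate:", round(float(rate_map.max()), 2))
```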

  13. Investigating the Gap Between Estimated and Actual Energy Efficiency and Conservation Savings for Public Buildings Projects & Programs in United States

    NASA Astrophysics Data System (ADS)

    Qaddus, Muhammad Kamil

    The gap between estimated and actual savings in energy efficiency and conservation (EE&C) projects or programs forms the problem statement for the scope of public and government buildings. This gap has been analyzed first on impact and then on process-level. On the impact-level, the methodology leads to categorization of the gap as 'Realization Gap'. It then views the categorization of gap within the context of past and current narratives linked to realization gap. On process-level, the methodology leads to further analysis of realization gap on process evaluation basis. The process evaluation criterion, a product of this basis is then applied to two different programs (DESEU and NYC ACE) linked to the scope of this thesis. Utilizing the synergies of impact and process level analysis, it offers proposals on program development and its structure using our process evaluation criterion. Innovative financing and benefits distribution structure is thus developed and will remain part of the proposal. Restricted Stakeholder Crowd Financing and Risk-Free Incentivized return are the products of proposed financing and benefit distribution structure respectively. These products are then complimented by proposing an alternative approach in estimating EE&C savings. The approach advocates estimation based on range-allocation rather than currently utilized unique estimated savings approach. The Way Ahead section thus explores synergy between financial and engineering ranges of energy savings as a multi-discipline approach for future research. Moreover, it provides the proposed program structure with risk aversion and incentive allocation while dealing with uncertainty. This set of new approaches are believed to better fill the realization gap between estimated and actual energy efficiency savings.

  14. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.

  15. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737
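
    To make the idea concrete, a placement-value (DeLong-style) influence-curve variance for a single AUC estimate can be computed analytically as sketched below; this illustrates the general influence-curve route to a variance without bootstrapping, not the authors' cross-validated estimator itself, and the simulated predictions are hypothetical.

```python
import numpy as np

def auc_with_ic_variance(scores, labels):
    """AUC plus an influence-curve (DeLong-style placement value) variance estimate."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    m, n = len(pos), len(neg)

    # Pairwise comparison kernel: 1 if a positive outranks a negative, 0.5 on ties.
    psi = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
    auc = psi.mean()

    # Placement values act as per-observation influence-curve components.
    v_pos = psi.mean(axis=1)                  # one value per positive case
    v_neg = psi.mean(axis=0)                  # one value per negative case
    var_auc = v_pos.var(ddof=1) / m + v_neg.var(ddof=1) / n
    return auc, var_auc

# Hypothetical cross-validated predictions and true labels.
rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=500)
p = np.clip(0.3 * y + rng.normal(0.4, 0.2, size=500), 0.0, 1.0)
auc, var = auc_with_ic_variance(p, y)
print(f"AUC = {auc:.3f}, 95% CI half-width = {1.96 * np.sqrt(var):.3f}")
```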

  16. Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.

    PubMed

    Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2014-11-25

    In order to keep up with the short-term clock variations of GNSS satellites, clock corrections must be estimated and updated at a high rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation alone, as a great number of ambiguities need to be estimated simultaneously. However, on the one hand, better estimates are expected by including more stations, and on the other hand, satellites from different GNSS systems must be processed together for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are usually employed, in which the ambiguities are eliminated. As the epoch-differenced method can only derive temporal clock changes, which then have to be aligned to the absolute clocks in a rather complicated way, in this paper an efficient method for high-rate clock estimation is proposed using the concept of "carrier-range," realized by means of PPP with integer ambiguity resolution. Processing procedures for both post-processing and real-time processing are developed. The experimental validation shows that the computation time can be reduced to about one sixth of that of existing methods for post-processing, and to less than 1 s for processing a single epoch of a network of about 200 stations in real-time mode after all ambiguities are fixed. This confirms that the proposed processing strategy will enable high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode.

  17. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…

  18. Mineral Carbonation Potential of CO2 from Natural and Industrial-based Alkalinity Sources

    NASA Astrophysics Data System (ADS)

    Wilcox, J.; Kirchofer, A.

    2014-12-01

    Mineral carbonation is a Carbon Capture and Storage (CCS) technology where gaseous CO2 is reacted with alkaline materials (such as silicate minerals and alkaline industrial wastes) and converted into stable and environmentally benign carbonate minerals (Metz et al., 2005). Here, we present a holistic, transparent life cycle assessment model of aqueous mineral carbonation built using a hybrid process model and economic input-output life cycle assessment approach. We compared the energy efficiency and the net CO2 storage potential of various mineral carbonation processes based on different feedstock material and process schemes on a consistent basis by determining the energy and material balance of each implementation (Kirchofer et al., 2011). In particular, we evaluated the net CO2 storage potential of aqueous mineral carbonation for serpentine, olivine, cement kiln dust, fly ash, and steel slag across a range of reaction conditions and process parameters. A preliminary systematic investigation of the tradeoffs inherent in mineral carbonation processes was conducted and guidelines for the optimization of the life-cycle energy efficiency are provided. The life-cycle assessment of aqueous mineral carbonation suggests that a variety of alkalinity sources and process configurations are capable of net CO2 reductions. The maximum carbonation efficiency, defined as mass percent of CO2 mitigated per CO2 input, was 83% for cement kiln dust at ambient temperature and pressure conditions. In order of decreasing efficiency, the maximum carbonation efficiencies for the other alkalinity sources investigated were: olivine, 66%; steel slag, 64%; fly ash, 36%; and serpentine, 13%. For natural alkalinity sources, availability is estimated based on U.S. production rates of a) lime (18 Mt/yr) or b) sand and gravel (760 Mt/yr) (USGS, 2011). The low estimate assumes the maximum sequestration efficiency of the alkalinity source obtained in the current work and the high estimate assumes a sequestration efficiency of 85%. The total CO2 storage potential for the alkalinity sources considered in the U.S. ranges from 1.3% to 23.7% of U.S. CO2 emissions, depending on the assumed availability of natural alkalinity sources and efficiency of the mineral carbonation processes.

  19. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity.

    PubMed

    Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P

    2018-01-01

    Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA-model is validly applicable also under dual task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. 24 subjects of middle to higher age performed a continuous tapping task, and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing capacity and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.

  20. Dynamic control of remelting processes

    DOEpatents

    Bertram, Lee A.; Williamson, Rodney L.; Melgaard, David K.; Beaman, Joseph J.; Evans, David G.

    2000-01-01

    An apparatus and method of controlling a remelting process by providing measured process variable values to a process controller; estimating process variable values using a process model of a remelting process; and outputting estimated process variable values from the process controller. Feedback and feedforward control devices receive the estimated process variable values and adjust inputs to the remelting process. Electrode weight, electrode mass, electrode gap, process current, process voltage, electrode position, electrode temperature, electrode thermal boundary layer thickness, electrode velocity, electrode acceleration, slag temperature, melting efficiency, cooling water temperature, cooling water flow rate, crucible temperature profile, slag skin temperature, and/or drip short events are employed, as are parameters representing physical constraints of electroslag remelting or vacuum arc remelting, as applicable.
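
    The estimator-plus-feedback structure described here can be sketched as a scalar loop; the melt model, gains, and noise levels below are hypothetical placeholders for the electrode-gap and melt-rate dynamics of an actual remelting furnace, not values taken from the patent.

```python
import numpy as np

# Hypothetical scalar remelt model: x = electrode gap, u = electrode drive input.
a, b = 0.98, 0.05          # assumed process dynamics
q, r = 1e-4, 4e-4          # assumed process / measurement noise variances
setpoint, kp = 1.0, 1.5    # gap setpoint and proportional feedback gain

rng = np.random.default_rng(3)
x_true, x_hat, p_cov = 1.3, 1.0, 1.0
for step in range(200):
    # Feedback control computed from the *estimated* state, plus a feedforward
    # term that cancels the known steady-state drift of the assumed model.
    u = -kp * (x_hat - setpoint) + (1 - a) * setpoint / b

    # Plant evolves and produces a noisy measurement (e.g., a load-cell reading).
    x_true = a * x_true + b * u + rng.normal(0, np.sqrt(q))
    z = x_true + rng.normal(0, np.sqrt(r))

    # Kalman predictor/corrector supplies the estimated process variable.
    x_pred = a * x_hat + b * u
    p_pred = a * p_cov * a + q
    k_gain = p_pred / (p_pred + r)
    x_hat = x_pred + k_gain * (z - x_pred)
    p_cov = (1 - k_gain) * p_pred

print(f"final gap estimate {x_hat:.3f} vs. true {x_true:.3f} (setpoint {setpoint})")
```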

  1. How does spatial and temporal resolution of vegetation index impact crop yield estimation?

    USDA-ARS?s Scientific Manuscript database

    Timely and accurate estimation of crop yield before harvest is critical for food market and administrative planning. Remote sensing data have long been used in crop yield estimation for decades. The process-based approach uses light use efficiency model to estimate crop yield. Vegetation index (VI) ...

  2. Implementing a combined polar-geostationary algorithm for smoke emissions estimation in near real time

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Schmidt, C. C.; Hoffman, J.; Giglio, L.; Peterson, D. A.

    2013-12-01

    Polar and geostationary satellites are used operationally for fire detection and smoke source estimation by many near-real-time operational users, including operational forecast centers around the globe. The input satellite radiance data are processed by data providers to produce Level-2 and Level-3 fire detection products, but processing these data into spatially and temporally consistent estimates of fire activity requires a substantial amount of additional processing. The most significant processing steps are correction for variable coverage of the satellite observations, and correction for conditions that affect the detection efficiency of the satellite sensors. We describe a system developed by the Naval Research Laboratory (NRL) that uses the full raster information from the entire constellation to diagnose detection opportunities, calculate corrections for factors such as angular dependence of detection efficiency, and generate global estimates of fire activity at spatial and temporal scales suitable for atmospheric modeling. By incorporating these improved fire observations, smoke emissions products, such as NRL's FLAMBE, are able to produce improved estimates of global emissions. This talk provides an overview of the system, demonstrates the achievable improvement over older methods, and describes challenges for near-real-time implementation.

  3. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

    PubMed Central

    Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael

    2014-01-01

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. PMID:25068489
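
    As a point of reference, a toy discrete (histogram-based) transfer entropy estimator for binary sequences is sketched below; it fixes the quantity being estimated but does not reproduce the ensemble-over-trials pooling or the GPU kernel estimator described in the paper. The coupled test signals are hypothetical.

```python
import numpy as np

def transfer_entropy_binary(x, y):
    """Transfer entropy from x to y (in bits) for binary sequences, history length 1."""
    x, y = np.asarray(x, int), np.asarray(y, int)
    triples = np.stack([y[1:], y[:-1], x[:-1]], axis=1)   # (y_next, y_past, x_past)
    counts = np.zeros((2, 2, 2))
    for yn, yp, xp in triples:
        counts[yn, yp, xp] += 1
    p = counts / counts.sum()

    te = 0.0
    for yn in range(2):
        for yp in range(2):
            for xp in range(2):
                if p[yn, yp, xp] == 0:
                    continue
                p_joint = p[yn, yp, xp]
                p_cond_full = p_joint / p[:, yp, xp].sum()        # p(y_next | y_past, x_past)
                p_cond_past = p[yn, yp, :].sum() / p[:, yp, :].sum()  # p(y_next | y_past)
                te += p_joint * np.log2(p_cond_full / p_cond_past)
    return te

# Hypothetical coupled processes: y copies x with a one-step lag, flipped 10% of the time.
rng = np.random.default_rng(4)
x = rng.integers(0, 2, size=5000)
noise = rng.random(5000) < 0.1
y = np.empty_like(x)
y[0] = 0
y[1:] = np.where(noise[1:], 1 - x[:-1], x[:-1])
print(f"TE(x -> y) = {transfer_entropy_binary(x, y):.3f} bits")
```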

  4. Nonparametric Transfer Function Models

    PubMed Central

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584

  5. Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation

    PubMed Central

    Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan

    2015-01-01

    Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF’s functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182

  6. Deductive derivation and turing-computerization of semiparametric efficient estimation.

    PubMed

    Frangakis, Constantine E; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan

    2015-12-01

    Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. © 2015, The International Biometric Society.

  7. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
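
    The classical matrix pencil computation that the quantum algorithm accelerates can be sketched directly; the two-component damped-sinusoid signal below is hypothetical, and the quantum phase-estimation machinery is of course not shown.

```python
import numpy as np

def matrix_pencil(y, n_components, pencil=None):
    """Estimate poles z_i of y[k] = sum_i a_i * z_i**k via the matrix pencil method."""
    n = len(y)
    L = pencil if pencil is not None else n // 2
    # Hankel data matrices Y0 (columns 0..L-1) and Y1 (columns 1..L).
    Y = np.array([y[i:i + L + 1] for i in range(n - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    # Rank-truncated SVD keeps only the signal subspace before forming the pencil.
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    U, s, Vh = U[:, :n_components], s[:n_components], Vh[:n_components]
    pencil_matrix = np.diag(1.0 / s) @ U.conj().T @ Y1 @ Vh.conj().T
    return np.linalg.eigvals(pencil_matrix)

# Hypothetical signal: two exponentially damped sinusoids sampled at dt = 0.01 s.
dt = 0.01
t = np.arange(0, 2, dt)
y = (np.exp((-0.5 + 2j * np.pi * 7.0) * t) +
     0.6 * np.exp((-1.5 + 2j * np.pi * 18.0) * t))
poles = matrix_pencil(y, n_components=2)
freqs = np.angle(poles) / (2 * np.pi * dt)      # Hz
damping = -np.log(np.abs(poles)) / dt           # 1/s
print("frequencies:", np.sort(freqs), "| damping factors:", np.sort(damping))
```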

  8. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of work to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Here, numerical examples show that the proposed method outperforms the existing ones.

  9. Gaussian process models for reference ET estimation from alternative meteorological data sources

    USDA-ARS?s Scientific Manuscript database

    Accurate estimates of daily crop evapotranspiration (ET) are needed for efficient irrigation management, especially in arid and semi-arid regions where crop water demand exceeds rainfall. Daily grass or alfalfa reference ET values and crop coefficients are widely used to estimate crop water demand. ...

  10. Frontier-based techniques in measuring hospital efficiency in Iran: a systematic review and meta-regression analysis

    PubMed Central

    2013-01-01

    Background In recent years, there has been growing interest in measuring the efficiency of hospitals in Iran and several studies have been conducted on the topic. The main objective of this paper was to review studies in the field of hospital efficiency and examine the estimated technical efficiency (TE) of Iranian hospitals. Methods Persian and English databases were searched for studies related to measuring hospital efficiency in Iran. Ordinary least squares (OLS) regression models were applied for statistical analysis. The PRISMA guidelines were followed in the search process. Results A total of 43 efficiency scores from 29 studies were retrieved and used to approach the research question. Data envelopment analysis was the principal frontier efficiency method in the estimation of efficiency scores. The pooled estimate of mean TE was 0.846 (±0.134). There was a considerable variation in the efficiency scores between the different studies performed in Iran. There were no differences in efficiency scores between data envelopment analysis (DEA) and stochastic frontier analysis (SFA) techniques. The reviewed studies are generally similar and suffer from similar methodological deficiencies, such as no adjustment for case mix and quality of care differences. The results of OLS regression revealed that studies that included more variables and more heterogeneous hospitals generally reported higher TE. Larger sample size was associated with reporting lower TE. Conclusions The features of frontier-based techniques had a profound impact on the efficiency scores among Iranian hospital studies. These studies suffer from major methodological deficiencies and were of sub-optimal quality, limiting their validity and reliability. It is suggested that improving data collection and processing in Iranian hospital databases may have a substantial impact on promoting the quality of research in this field. PMID:23945011

  11. Congestion estimation technique in the optical network unit registration process.

    PubMed

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.

  12. Processing of Antenna-Array Signals on the Basis of the Interference Model Including a Rank-Deficient Correlation Matrix

    NASA Astrophysics Data System (ADS)

    Rodionov, A. A.; Turchin, V. I.

    2017-06-01

    We propose a new method of signal processing in antenna arrays, called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method yields a variance of the estimated arrival angle of the plane wave that is close to the Cramér-Rao lower bound, and that it is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be used efficiently for estimating the time dependence of the useful signal.

  13. Gaussian processes-based predictive models to estimate reference ET from alternative meteorological data sources for irrigation scheduling

    USDA-ARS?s Scientific Manuscript database

    Accurate estimates of daily crop evapotranspiration (ET) are needed for efficient irrigation management, especially in arid and semi-arid irrigated regions where crop water demand exceeds rainfall. The impact of inaccurate ET estimates can be tremendous in both irrigation cost and the increased dema...

  14. Parameterization of spectra

    NASA Technical Reports Server (NTRS)

    Cornish, C. R.

    1983-01-01

    Following reception and analog-to-digital (A/D) conversion, atmospheric radar backscatter echoes need to be processed so as to obtain the desired information about atmospheric processes and to eliminate or minimize contaminating contributions from other sources. Various signal processing techniques have been implemented at mesosphere-stratosphere-troposphere (MST) radar facilities to estimate parameters of interest from received spectra. Such estimation techniques need to be both accurate and sufficiently efficient to be within the capabilities of the particular data-processing system. The various techniques used to parameterize the spectra of received signals are reviewed herein. Noise estimation, electromagnetic interference, data smoothing, correlation, and the Doppler effect are among the specific points addressed.

  15. Academic Performance and Burnout: An Efficient Frontier Analysis of Resource Use Efficiency among Employed University Students

    ERIC Educational Resources Information Center

    Galbraith, Craig S.; Merrill, Gregory B.

    2015-01-01

    We examine the impact of university student burnout on academic achievement. With a longitudinal sample of working undergraduate university business and economics students, we use a two-step analytical process to estimate the efficient frontiers of student productivity given inputs of labour and capital and then analyse the potential determinants…

  16. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  17. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. A library of FORTRAN subroutines was developed to facilitate analyses of a variety of estimation problems. An easy-to-use, multi-purpose set of algorithms that is reasonably efficient and uses a minimal amount of computer storage is presented. Subroutine inputs, outputs, usage, and listings are given, along with examples of how these routines can be used. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.

  18. Efficient Research Design: Using Value-of-Information Analysis to Estimate the Optimal Mix of Top-down and Bottom-up Costing Approaches in an Economic Evaluation alongside a Clinical Trial.

    PubMed

    Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee

    2016-04-01

    In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £ 35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset. © The Author(s) 2016.

  19. Streamflow characteristics from modelled runoff time series: Importance of calibration criteria selection

    USGS Publications Warehouse

    Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan

    2017-01-01

    Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
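
    The kind of composite objective used for SFC-aware calibration can be sketched as follows; the particular streamflow characteristic (a 7-day minimum flow), the equal weighting, and the simulated flow series are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe model efficiency of a simulated runoff series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def seven_day_min(flow):
    """Example streamflow characteristic: minimum 7-day moving-average flow."""
    kernel = np.ones(7) / 7.0
    return np.convolve(flow, kernel, mode="valid").min()

def calibration_objective(sim, obs, w_sfc=0.5):
    """Combine NSE with the relative error of one SFC (higher is better)."""
    sfc_err = abs(seven_day_min(sim) - seven_day_min(obs)) / seven_day_min(obs)
    return (1.0 - w_sfc) * nse(sim, obs) + w_sfc * (1.0 - sfc_err)

# Hypothetical daily flows: one model run versus observations for one year.
rng = np.random.default_rng(5)
obs = 10.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.gamma(2.0, 1.0, 365)
sim = obs * rng.normal(1.0, 0.1, 365)
print(f"objective value: {calibration_objective(sim, obs):.3f}")
```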

  20. Advanced purification of petroleum refinery wastewater by catalytic vacuum distillation.

    PubMed

    Yan, Long; Ma, Hongzhu; Wang, Bo; Mao, Wei; Chen, Yashao

    2010-06-15

    In our work, a new process, catalytic vacuum distillation (CVD), was utilized for purification of petroleum refinery wastewater characterized by high chemical oxygen demand (COD) and salinity. Moreover, various common promoters, such as FeCl(3), kaolin, H(2)SO(4) and NaOH, were investigated to improve the purification efficiency of CVD. The purification efficiency was estimated by COD testing, electrolytic conductivity, UV-vis spectra, gas chromatography-mass spectrometry (GC-MS) and pH value. The results showed that NaOH-promoted CVD displayed higher efficiency in purification of refinery wastewater than the other systems: pellucid effluents with low salinity and high COD removal efficiency (99%) were obtained after treatment, and the corresponding pH values of the effluents varied from 7 to 9. Furthermore, an environmental assessment was also performed, and the results showed that the effluent had no influence on plant growth. Thus, based on the satisfactory removal of COD and salinity achieved simultaneously, the NaOH-promoted CVD process is an effective approach to purify petroleum refinery wastewater. Copyright 2010 Elsevier B.V. All rights reserved.

  1. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of spectral-spatial classification methods for similar-looking vegetation types in hyperspectral Earth remote sensing data, which take into account local neighborhoods of the analyzed image pixels, is studied experimentally. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large hyperspectral image and for a test fragment of it, with different methods of training set construction, are reported. The classification accuracy is in all cases estimated by comparing ground-truth data with the classification maps formed by the compared methods. The reasons for the differences in these estimates are discussed.

  2. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  3. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
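
    For readers who want to experiment with the time-domain approach described in the two records above, the following Python fragment is a generic illustration (not Langbein's filter-based reformulation): it builds a data covariance matrix for a combination of white and power-law noise from a fractional-integration filter and evaluates the Gaussian log-likelihood. The recursion for the filter coefficients, the parameter values and the placeholder residuals are all assumptions made only for this sketch.

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      def powerlaw_transform(n, alpha):
          """Lower-triangular filter T so that T @ w (w white) has a 1/f^alpha spectrum.
          Uses the standard fractional-integration recursion with d = alpha / 2."""
          d = alpha / 2.0
          h = np.zeros(n)
          h[0] = 1.0
          for k in range(1, n):
              h[k] = h[k - 1] * (k - 1 + d) / k
          T = np.zeros((n, n))
          for i in range(n):
              T[i, : i + 1] = h[: i + 1][::-1]      # Toeplitz lower-triangular filter matrix
          return T

      def log_likelihood(residuals, sigma_white, sigma_pl, alpha):
          """Gaussian log-likelihood of residuals under white + power-law noise."""
          n = residuals.size
          T = powerlaw_transform(n, alpha)
          C = sigma_white**2 * np.eye(n) + sigma_pl**2 * (T @ T.T)
          cf = cho_factor(C, lower=True)
          quad = residuals @ cho_solve(cf, residuals)
          logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
          return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

      # Example: evaluate the likelihood of a detrended position time series (placeholder data)
      rng = np.random.default_rng(0)
      r = rng.standard_normal(200)
      print(log_likelihood(r, sigma_white=1.0, sigma_pl=0.5, alpha=1.0))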

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, calibration methods based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
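
    A minimal sketch of the ordinary-least-squares style of calibration mentioned above: the computer-model parameter vector is chosen to minimize the squared discrepancy between physical observations and model output. The toy simulator, data and parameter names are purely illustrative; the L2 calibration itself involves additional nonparametric steps that are not reproduced here.

      import numpy as np
      from scipy.optimize import least_squares

      def computer_model(x, theta):
          """Hypothetical cheap simulator standing in for an expensive computer model."""
          return theta[0] * np.sin(x) + theta[1] * x

      # Simulated "physical" observations with measurement noise (illustrative only).
      rng = np.random.default_rng(1)
      x_obs = np.linspace(0.0, 3.0, 40)
      y_obs = 1.3 * np.sin(x_obs) + 0.7 * x_obs + 0.1 * rng.standard_normal(x_obs.size)

      # Ordinary-least-squares calibration of the model parameters.
      fit = least_squares(lambda th: computer_model(x_obs, th) - y_obs, x0=np.ones(2))
      print("calibrated parameters:", np.round(fit.x, 2))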

  5. Pixel-by-Pixel Estimation of Scene Motion in Video

    NASA Astrophysics Data System (ADS)

    Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.

    2017-05-01

    The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame; these vectors form a shift-vector field. The projections and polar parameters of the vectors are studied as the estimated parameters. Two methods for estimating the shift-vector field are considered. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e. from left to right and from right to left; subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses the correlation between rows to increase processing efficiency. It processes rows one after the other, changing direction after each row, and uses the obtained values to form the resulting estimate. Two criteria for forming this estimate are studied: the minimum of the gradient estimate and the maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object and of estimating the moving object's trajectory from the shift-vector field.
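
    The basic per-pixel recurrent update behind such algorithms can be sketched as follows; this simplified version estimates only a horizontal shift component in a single pass (no bidirectional row processing or estimate fusion), and the step size, iteration count and synthetic frames are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def estimate_row_shifts(frame1, frame2, mu=5.0, n_iter=500):
          """Per-pixel horizontal shift dx minimizing (frame2(y, x + dx) - frame1(y, x))^2
          by a stochastic-gradient-style recurrent update (single component for brevity)."""
          h, w = frame1.shape
          yy, xx = np.mgrid[0:h, 0:w].astype(float)
          dx = np.zeros((h, w))
          gx = np.gradient(frame2, axis=1)            # horizontal intensity gradient
          for _ in range(n_iter):
              coords = np.stack([yy, xx + dx])
              warped = map_coordinates(frame2, coords, order=1, mode='nearest')
              grad = map_coordinates(gx, coords, order=1, mode='nearest')
              err = warped - frame1                   # residual driving the update
              dx -= mu * err * grad
          return dx

      # Synthetic test: frame2 is frame1 shifted by one pixel to the right
      yy, xx = np.mgrid[0:64, 0:64]
      f1 = np.sin(xx / 5.0) + np.cos(yy / 7.0)
      f2 = np.roll(f1, 1, axis=1)
      dx = estimate_row_shifts(f1, f2)
      print("median estimated shift (true value 1):", round(float(np.median(dx)), 2))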

  6. Theoretical and observational assessments of flare efficiencies.

    PubMed

    Leahey, D M; Preston, K; Strosher, M

    2001-12-01

    Flaring of waste gases is a common practice in the processing of hydrocarbon (HC) materials. It is assumed that flaring achieves complete combustion with relatively innocuous byproducts such as CO2 and H2O. However, flaring is rarely successful in the attainment of complete combustion, because entrainment of air into the region of combusting gases restricts flame sizes to less than optimum values. The resulting flames are too small to dissipate the amount of heat associated with 100% combustion efficiency. Equations were employed to estimate flame lengths, areas, and volumes as functions of flare stack exit velocity, stoichiometric mixing ratio, and wind speed. Heats released as part of the combustion process were then estimated from a knowledge of the flame dimensions together with an assumed flame temperature of 1200 K. Combustion efficiencies were subsequently obtained by taking the ratio of estimated actual heat release values to those associated with 100% complete combustion. Results of the calculations showed that combustion efficiencies decreased rapidly as wind speed increased from 1 to 6 m/sec. As wind speeds increased beyond 6 m/sec, combustion efficiencies tended to level off at values between 10 and 15%. Propane and ethane tend to burn more efficiently than do methane or hydrogen sulfide because of their lower stoichiometric mixing ratios. Results of theoretical predictions were compared to nine values of local combustion efficiencies obtained as part of an observational study into flaring activity conducted by the Alberta Research Council (ARC). All values were obtained during wind speed conditions of less than 4 m/sec. There was generally good agreement between predicted and observed values. The mean and standard deviation of observed combustion efficiencies were 68 +/- 7%. Comparable predicted values were 69 +/- 7%.

  7. Quantitative basis for component factors of gas flow proportional counting efficiencies

    NASA Astrophysics Data System (ADS)

    Nichols, Michael C.

    This dissertation investigates the counting efficiency calibration of a gas flow proportional counter with beta-particle emitters in order to (1) determine by measurements and simulation the values of the component factors of beta-particle counting efficiency for a proportional counter, (2) compare the simulation results and measured counting efficiencies, and (3) determine the uncertainty of the simulation and measurements. Monte Carlo simulation results by the MCNP5 code were compared with measured counting efficiencies as a function of sample thickness for 14C, 89Sr, 90Sr, and 90Y. The Monte Carlo model simulated strontium carbonate with areal thicknesses from 0.1 to 35 mg cm^-2. The samples were precipitated as strontium carbonate with areal thicknesses from 3 to 33 mg cm^-2, mounted on membrane filters, and counted on a low background gas flow proportional counter. The estimated fractional standard deviation was 2-4% (except 6% for 14C) for efficiency measurements of the radionuclides. The Monte Carlo simulations have uncertainties estimated to be 5 to 6 percent for carbon-14 and 2.4 percent for strontium-89, strontium-90, and yttrium-90. The curves of simulated counting efficiency vs. sample areal thickness agreed within 3% of the curves of best fit drawn through the 25-49 measured points for each of the four radionuclides. Contributions from this research include development of uncertainty budgets for the analytical processes; evaluation of alternative methods for determining chemical yield critical to the measurement process; correction of a bias found in the MCNP normalization of beta spectra histograms; clarification of the interpretation of the commonly used ICRU beta-particle spectra for use by MCNP; and evaluation of instrument parameters as applied to the simulation model to obtain estimates of the counting efficiency from simulated pulse height tallies.

  8. Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.

    PubMed

    Haber, Aleksandar; Verhaegen, Michel

    2016-11-15

    We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
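
    The localized-reconstruction idea can be illustrated with the sketch below: a (placeholder) dense minimum-variance reconstructor is truncated to its dominant entries, stored sparsely, and the wavefront estimate then costs a single sparse matrix-vector product. The dense matrix here is synthetic; in practice it would be derived from the Hudgin or Fried geometry and the phase and noise covariances.

      import numpy as np
      from scipy.sparse import csr_matrix

      rng = np.random.default_rng(0)
      n_phase, n_slopes = 400, 800

      # Placeholder dense minimum-variance reconstructor; in practice it would be
      # derived from the sensor geometry and the phase/noise covariance matrices.
      decay = np.exp(-0.05 * np.abs(np.arange(n_phase)[:, None] - np.arange(n_slopes)[None, :] / 2.0))
      R_dense = rng.standard_normal((n_phase, n_slopes)) * decay

      # Localize: keep only the dominant entries, so each wavefront value is a linear
      # combination of slope measurements in its neighbourhood.
      threshold = 0.05 * np.abs(R_dense).max()
      R_sparse = csr_matrix(np.where(np.abs(R_dense) >= threshold, R_dense, 0.0))
      print(f"kept {R_sparse.nnz / R_dense.size:.1%} of the reconstructor entries")

      # Wavefront estimation now costs a single sparse matrix-vector multiplication.
      slopes = rng.standard_normal(n_slopes)        # measured x/y slopes (placeholder)
      wavefront_estimate = R_sparse @ slopes
      print(wavefront_estimate.shape)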

  9. NEW APPROACHES TO ESTIMATION OF SOLID-WASTE QUANTITY AND COMPOSITION

    EPA Science Inventory

    Efficient and statistically sound sampling protocols for estimating the quantity and composition of solid waste over a stated period of time in a given location, such as a landfill site or at a specific point in an industrial or commercial process, are essential to the design ...

  10. Real-Time Measurement of Machine Efficiency during Inertia Friction Welding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tung, Daniel Joseph; Mahaffey, David; Senkov, Oleg

    Process efficiency is a crucial parameter for inertia friction welding (IFW) that is largely unknown at the present time. A new method has been developed to determine the transient profile of the IFW process efficiency by comparing the workpiece torque used to heat and deform the joint region to the total torque. Particularly, the former is measured by a torque load cell attached to the non-rotating workpiece while the latter is calculated from the deceleration rate of flywheel rotation. The experimentally-measured process efficiency for IFW of AISI 1018 steel rods is validated independently by the upset length estimated from an analytical equation of heat balance and the flash profile calculated from a finite element based thermal stress model. The transient behaviors of torque and efficiency during IFW are discussed based on the energy loss to machine bearings and the bond formation at the joint interface.

  11. A new method of SC image processing for confluence estimation.

    PubMed

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina

    2017-10-01

    Stem cell images are a strong instrument in the estimation of confluency during culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, affect the image quality and, subsequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed algorithm for enhancing the image is based on coupling a novel image denoising method through the BM3D filter with an adaptive thresholding technique for correcting the uneven background. This algorithm provides a faster, easier, and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The present scheme proves to be valid for the prediction of the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method is capable of processing images of cells that already contain various defects due to either personnel mishandling or microscope limitations. Therefore, it provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.
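
    A rough sketch of this kind of pipeline is given below: denoise, segment against the uneven background with adaptive thresholding, and report confluency as the foreground fraction. OpenCV's Gaussian blur is used only as a lightweight stand-in for the BM3D filter, the thresholding parameters are illustrative, and the image path in the commented usage is a placeholder.

      import cv2
      import numpy as np

      def estimate_confluency(image_path, block_size=51, offset=5):
          """Return the fraction of the image area covered by cells (0..1)."""
          img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
          # Denoising step: a Gaussian blur is used here only as a lightweight
          # placeholder for the BM3D filter used in the paper.
          den = cv2.GaussianBlur(img, (5, 5), 0)
          # Adaptive thresholding compensates for the uneven illumination background
          # (THRESH_BINARY_INV assumes cells appear darker than the background).
          mask = cv2.adaptiveThreshold(den, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, block_size, offset)
          # Morphological opening removes small specks before measuring coverage.
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
          return float(np.count_nonzero(mask)) / mask.size

      # Hypothetical usage with a placeholder file name:
      # print(f"confluency: {estimate_confluency('stem_cell_culture.png'):.1%}")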

  12. Can a Satellite-Derived Estimate of the Fraction of PAR Absorbed by Chlorophyll (FAPAR(sub chl)) Improve Predictions of Light-Use Efficiency and Ecosystem Photosynthesis for a Boreal Aspen Forest?

    NASA Technical Reports Server (NTRS)

    Zhang, Qingyuan; Middleton, Elizabeth M.; Margolis, Hank A.; Drolet, Guillaume G.; Barr, Alan A.; Black, T. Andrew

    2009-01-01

    Gross primary production (GPP) is a key terrestrial ecophysiological process that links atmospheric composition and vegetation processes. Study of GPP is important to global carbon cycles and global warming. One of the most important of these processes, plant photosynthesis, requires solar radiation in the 0.4-0.7 micron range (also known as photosynthetically active radiation or PAR), water, carbon dioxide (CO2), and nutrients. A vegetation canopy is composed primarily of photosynthetically active vegetation (PAV) and non-photosynthetic vegetation (NPV; e.g., senescent foliage, branches and stems). A green leaf is composed of chlorophyll and various proportions of nonphotosynthetic components (e.g., other pigments in the leaf, primary/secondary/tertiary veins, and cell walls). The fraction of PAR absorbed by the whole vegetation canopy (FAPAR(sub canopy)) has been widely used in satellite-based Production Efficiency Models to estimate GPP (as a product of FAPAR(sub canopy) x PAR x LUE(sub canopy), where LUE(sub canopy) is light use efficiency at the canopy level). However, only the PAR absorbed by chlorophyll (a product of FAPAR(sub chl) x PAR) is used for photosynthesis. Therefore, remote sensing driven biogeochemical models that use FAPAR(sub chl) in estimating GPP (as a product of FAPAR(sub chl) x PAR x LUE(sub chl)) are more likely to be consistent with plant photosynthesis processes.
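
    The distinction drawn above comes down to which absorbed-PAR fraction multiplies the light-use-efficiency term; a minimal numerical comparison of the two formulations, with purely illustrative values and units, is shown below.

      # GPP from the canopy-level versus chlorophyll-level formulations (illustrative values).
      PAR = 400.0              # incident photosynthetically active radiation, W m^-2
      FAPAR_canopy = 0.85      # fraction of PAR absorbed by the whole canopy
      FAPAR_chl = 0.60         # fraction of PAR absorbed by chlorophyll only
      LUE_canopy = 0.015       # light use efficiency defined at the canopy level
      LUE_chl = 0.021          # light use efficiency defined with respect to chlorophyll

      gpp_canopy = FAPAR_canopy * PAR * LUE_canopy
      gpp_chl = FAPAR_chl * PAR * LUE_chl
      print(gpp_canopy, gpp_chl)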

  13. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
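
    The computational point, pushing the high-dimensional integral into the cumulative distribution function of a multivariate normal, can be illustrated in a few lines; the dimension, mean, covariance and integration limits below are arbitrary placeholders rather than quantities from the two-part model itself.

      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(2)

      d = 6                                        # dimension of the latent normal vector
      A = rng.standard_normal((d, d))
      cov = A @ A.T + d * np.eye(d)                # a valid covariance matrix
      mean = np.zeros(d)
      upper = rng.standard_normal(d)               # integration limits (placeholders)

      # P(X_1 <= u_1, ..., X_d <= u_d): the d-dimensional integral is evaluated by
      # the multivariate normal CDF routine instead of brute-force quadrature.
      prob = multivariate_normal(mean=mean, cov=cov).cdf(upper)
      print(prob)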

  14. A neuro-data envelopment analysis approach for optimization of uncorrelated multiple response problems with smaller the better type controllable factors

    NASA Astrophysics Data System (ADS)

    Bashiri, Mahdi; Farshbaf-Geranmayeh, Amir; Mogouie, Hamed

    2013-11-01

    In this paper, a new method is proposed to optimize a multi-response optimization problem based on the Taguchi method for processes where the controllable factors are smaller-the-better (STB)-type variables and the analyst wishes to find an optimal solution with smaller amounts of the controllable factors. In such processes, the overall output quality of the product should be maximized while the usage of the process inputs, the controllable factors, should be minimized. Since not all possible combinations of factor levels are considered in the Taguchi method, the response values of the unpracticed treatments are estimated using an artificial neural network (ANN). The neural network is tuned by a central composite design (CCD) and a genetic algorithm (GA). Then data envelopment analysis (DEA) is applied to determine the efficiency of each treatment. Although the important issue for the implementation of DEA is its philosophy, namely the maximization of outputs versus the minimization of inputs, this issue has been neglected in previous similar studies of multi-response problems. Finally, the most efficient treatment is determined using the maximin weight model approach. The performance of the proposed method is verified in a plastic molding process. Moreover, a sensitivity analysis has been performed using an efficiency-estimator neural network. The results show the efficiency of the proposed approach.
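
    As a sketch of the DEA step, the classical input-oriented CCR model in multiplier form can be solved for each treatment with a linear program: maximize the weighted outputs subject to weighted inputs equal to one and no unit exceeding unit efficiency. The input/output data below are placeholders, and this generic CCR formulation is not necessarily the exact DEA variant or weighting scheme used in the cited study.

      import numpy as np
      from scipy.optimize import linprog

      def ccr_efficiency(X, Y, o):
          """Input-oriented CCR efficiency of unit o.
          X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
          n, m = X.shape
          s = Y.shape[1]
          # Decision variables: output weights u (length s) followed by input weights v (length m).
          c = np.concatenate([-Y[o], np.zeros(m)])           # maximize u . y_o
          A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]
          b_eq = [1.0]                                       # v . x_o = 1
          A_ub = np.hstack([Y, -X])                          # u . y_j - v . x_j <= 0 for all j
          b_ub = np.zeros(n)
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                        bounds=[(0, None)] * (s + m), method="highs")
          return -res.fun

      # Placeholder data: 5 treatments, 2 inputs (controllable factors), 2 outputs (quality responses)
      X = np.array([[2.0, 3.0], [1.5, 2.5], [3.0, 4.0], [2.5, 2.0], [1.0, 3.5]])
      Y = np.array([[10.0, 8.0], [9.0, 9.5], [11.0, 7.0], [8.0, 8.5], [9.5, 9.0]])
      for o in range(len(X)):
          print(o, round(ccr_efficiency(X, Y, o), 3))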

  15. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis

    NASA Astrophysics Data System (ADS)

    Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.

    2016-07-01

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.

  16. Manufacture of ammonium sulfate fertilizer from gypsum-rich byproduct of flue gas desulfurization - A prefeasibility cost estimate

    USGS Publications Warehouse

    Chou, I.-Ming; Rostam-Abadi, M.; Lytle, J.M.; Achorn, F.P.

    1996-01-01

    Costs for constructing and operating a conceptual plant based on a proposed process that converts flue gas desulfurization (FGD) gypsum to ammonium sulfate fertilizer have been calculated and used to estimate a market price for the product. The average market price of granular ammonium sulfate ($138/ton) exceeds the rough estimated cost of ammonium sulfate from the proposed process ($111/ton) by 25 percent, if granular-size ammonium sulfate crystals of 1.2 to 3.3 millimeters in diameter can be produced by the proposed process. However, there was at least a ±30% margin in the cost estimate calculations. The additional costs for compaction, if needed to create granules of the required size, would make the process uneconomical unless considerable efficiency gains are achieved to balance the additional costs. This study suggests the need both to refine the crystallization process and to find potential markets for the calcium carbonate produced by the process.

  17. Decentralized cooperative TOA/AOA target tracking for hierarchical wireless sensor networks.

    PubMed

    Chen, Ying-Chih; Wen, Chih-Yu

    2012-11-08

    This paper proposes a distributed method for cooperative target tracking in hierarchical wireless sensor networks. The concept of leader-based information processing is adopted to achieve object positioning, considering a cluster-based network topology. Random timers and local information are applied to adaptively select a sub-cluster for the localization task. The proposed energy-efficient tracking algorithm allows each sub-cluster member to locally estimate the target position with a Bayesian filtering framework and a neural network model, and then performs estimation fusion in the leader node with the covariance intersection algorithm. This paper evaluates the merits and trade-offs of the protocol design towards developing more efficient and practical algorithms for object position estimation.
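
    The fusion step mentioned above can be sketched compactly: covariance intersection combines two estimates with unknown cross-correlation through a convex combination of their information matrices, with the weight chosen here by a coarse grid search that minimizes the trace of the fused covariance. The numerical values are placeholders.

      import numpy as np

      def covariance_intersection(x1, P1, x2, P2, n_grid=101):
          """Fuse two estimates with unknown cross-correlation (trace-minimizing omega)."""
          best = None
          for w in np.linspace(0.0, 1.0, n_grid):
              info = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
              P = np.linalg.inv(info)
              if best is None or np.trace(P) < np.trace(best[1]):
                  x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
                  best = (x, P, w)
          return best

      # Two local target-position estimates from sub-cluster members (placeholder values)
      x1, P1 = np.array([10.2, 4.9]), np.diag([0.5, 2.0])
      x2, P2 = np.array([9.8, 5.3]), np.diag([2.0, 0.4])
      x_fused, P_fused, omega = covariance_intersection(x1, P1, x2, P2)
      print(x_fused, omega)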

  18. Peer Review of March 2013 LDV Rebound Report By Small ...

    EPA Pesticide Factsheets

    The regulatory option of encouraging the adoption of advanced technologies for improving vehicle efficiency can result in significant fuel savings and GHG emissions benefits. At the same time, it is possible that some of these benefits might be offset by additional driving that is encouraged by the reduced costs of operating more efficient vehicles. This so-called “rebound effect”, the increased driving that results from an improvement in the energy efficiency of a vehicle, must be determined in order to reliably estimate the overall benefits of GHG regulations for light-duty vehicles. Dr. Ken Small, an economist at the Department of Economics, University of California at Irvine, with contributions by Dr. Kent Hymel, Department of Economics, California State University at Northridge, has developed a methodology to estimate the rebound effect for light-duty vehicles in the U.S. Specifically, rebound is estimated as the change in vehicle miles traveled (VMT) with respect to the change in per-mile fuel costs that can occur, for example, when vehicle operating efficiency is improved. The model analyzes aggregate personal motor-vehicle travel within a simultaneous model of aggregate VMT, fleet size, fuel efficiency, and congestion formation. The purpose of this peer review is to help assure that the methodologies considered by the U.S. EPA for estimating VMT rebound have been thoroughly examined.

  19. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.

  20. Optical correlation based pose estimation using bipolar phase grayscale amplitude spatial light modulators

    NASA Astrophysics Data System (ADS)

    Outerbridge, Gregory John, II

    Pose estimation techniques have been developed on both optical and digital correlator platforms to aid in the autonomous rendezvous and docking of spacecraft. This research has focused on the optical architecture, which utilizes high-speed bipolar-phase grayscale-amplitude spatial light modulators as the image and correlation filter devices. The optical approach has the primary advantage of optical parallel processing: an extremely fast and efficient way of performing complex correlation calculations. However, the constraints imposed on optically implementable filters make optical-correlator-based pose estimation technically incompatible with the popular weighted composite filter designs successfully used on the digital platform. This research employs a much simpler "bank of filters" approach to optical pose estimation that exploits the inherent efficiency of optical correlation devices. A novel logarithmically mapped optically implementable matched filter combined with a pose search algorithm resulted in sub-degree standard deviations in angular pose estimation error. These filters were extremely simple to generate, requiring no complicated training sets, and resulted in excellent performance even in the presence of significant background noise. Common edge detection and scaling of the input image was the only image pre-processing necessary for accurate pose detection at all alignment distances of interest.

  1. Theory and practical application of out of sequence measurements with results for multi-static tracking

    NASA Astrophysics Data System (ADS)

    Iny, David

    2007-09-01

    This paper addresses the out-of-sequence measurement (OOSM) problem associated with multiple platform tracking systems. The problem arises due to different transmission delays in communication of detection reports across platforms. Much of the literature focuses on the improvement to the state estimate by incorporating the OOSM. As the time lag increases, there is diminishing improvement to the state estimate. However, this paper shows that optimal processing of OOSMs may still be beneficial by improving data association as part of a multi-target tracker. This paper derives exact multi-lag algorithms with the property that the standard log likelihood track scoring is independent of the order in which the measurements are processed. The orthogonality principle is applied to generalize the method of Bar-Shalom in deriving the exact A1 algorithm for 1-lag estimation. Theory is also developed for optimal filtering of time averaged measurements and measurements correlated through periodic updates of a target aim-point. An alternative derivation of the multi-lag algorithms is also achieved using an efficient variant of the augmented state Kalman filter (AS-KF). This results in practical and reasonably efficient multi-lag algorithms. Results are compared to a well known ad hoc algorithm for incorporating OOSMs. Finally, the paper presents some simulated multi-target multi-static scenarios where there is a benefit to processing the data out of sequence in order to improve pruning efficiency.

  2. Analysis of the external and internal quantum efficiency of multi-emitter, white organic light emitting diodes

    NASA Astrophysics Data System (ADS)

    Furno, Mauro; Rosenow, Thomas C.; Gather, Malte C.; Lüssem, Björn; Leo, Karl

    2012-10-01

    We report on a theoretical framework for the efficiency analysis of complex, multi-emitter organic light emitting diodes (OLEDs). The calculation approach makes use of electromagnetic modeling to quantify the overall OLED photon outcoupling efficiency and a phenomenological description for electrical and excitonic processes. From the comparison of optical modeling results and measurements of the total external quantum efficiency, we obtain reliable estimates of internal quantum yield. As application of the model, we analyze high-efficiency stacked white OLEDs and comment on the various efficiency loss channels present in the devices.
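
    The back-calculation described above essentially divides the measured external quantum efficiency by the modelled photon outcoupling efficiency (further electrical and excitonic factors are folded into the internal term here); a minimal version with illustrative numbers:

      # Internal quantum yield inferred from a measured EQE and a modelled outcoupling
      # efficiency: EQE = eta_out * IQE  =>  IQE = EQE / eta_out (illustrative values).
      eqe_measured = 0.12          # external quantum efficiency from device measurement
      eta_outcoupling = 0.24       # photon outcoupling efficiency from optical modelling
      iqe_estimate = eqe_measured / eta_outcoupling
      print(f"estimated internal quantum efficiency: {iqe_estimate:.2f}")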

  3. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important for the processing of all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the test signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals, including voltage, current, and oil pressure, and maintain the dynamic characteristics of the signals favorably.
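
    A compressed sketch of a wavelet-threshold pipeline of this kind is given below: the noise level is estimated from the finest-scale detail coefficients, the detail coefficients are thresholded, and the signal is reconstructed. The robust median (MAD) noise estimator and the simple hard/soft blend used here are common stand-ins, not the two-state Gaussian-mixture classifier or the specific improved threshold function proposed in the paper; PyWavelets is assumed to be installed.

      import numpy as np
      import pywt

      def blended_threshold(c, thr, alpha=0.5):
          """Blend of hard and soft thresholding (alpha=0 -> hard, alpha=1 -> soft)."""
          soft = np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
          hard = np.where(np.abs(c) > thr, c, 0.0)
          return alpha * soft + (1.0 - alpha) * hard

      def wavelet_denoise(x, wavelet="db4", level=4, alpha=0.5):
          coeffs = pywt.wavedec(x, wavelet, level=level)
          # Noise standard deviation from the finest-scale detail coefficients (MAD estimator).
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(x)))          # universal threshold
          coeffs = [coeffs[0]] + [blended_threshold(c, thr, alpha) for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(x)]

      # Synthetic test: a noisy transient-rich signal
      t = np.linspace(0.0, 1.0, 2048)
      clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))
      noisy = clean + 0.3 * np.random.default_rng(3).standard_normal(t.size)
      denoised = wavelet_denoise(noisy)
      print("rms error before/after:", np.std(noisy - clean), np.std(denoised - clean))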

  4. Optimization and economic evaluation of industrial gas production and combined heat and power generation from gasification of corn stover and distillers grains.

    PubMed

    Kumar, Ajay; Demirel, Yasar; Jones, David D; Hanna, Milford A

    2010-05-01

    Thermochemical gasification is one of the most promising technologies for converting biomass into power, fuels and chemicals. The objectives of this study were to maximize the net energy efficiency of biomass gasification and to estimate the cost of producing industrial gas and combined heat and power (CHP) at a feed rate of 2000 kg/h. An Aspen Plus-based gasification model was combined with a CHP generation model and optimized using corn stover and dried distillers grains with solubles (DDGS) as the biomass feedstocks. The cold gas efficiencies for gas production were 57% and 52%, respectively, for corn stover and DDGS. The selling price of gas was estimated to be $11.49 and $13.08/GJ, respectively, for corn stover and DDGS. For CHP generation, the electrical and net efficiencies were as high as 37% and 88%, respectively, for corn stover and 34% and 78%, respectively, for DDGS. The selling price of electricity was estimated to be $0.1351 and $0.1287/kWh for corn stover and DDGS, respectively. Overall, high net energy efficiencies for gas and CHP production from biomass gasification can be achieved with optimized processing conditions. However, the economic feasibility of these conversion processes will depend on the relative local prices of fossil fuels. Copyright 2009 Elsevier Ltd. All rights reserved.
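
    The cold gas efficiency quoted above is, in essence, the chemical energy leaving in the product gas divided by the chemical energy entering with the feedstock; a minimal calculation with assumed heating values and gas yield (not taken from the paper) is shown below.

      # Cold gas efficiency of a gasifier (all numbers illustrative, not from the paper).
      feed_rate_kg_h = 2000.0          # biomass feed rate
      biomass_lhv_mj_kg = 17.0         # lower heating value of the feedstock (assumed)
      gas_yield_nm3_kg = 2.0           # product gas yield per kg of biomass (assumed)
      gas_lhv_mj_nm3 = 4.8             # lower heating value of the product gas (assumed)

      energy_in = feed_rate_kg_h * biomass_lhv_mj_kg                   # MJ/h entering with the feed
      energy_out = feed_rate_kg_h * gas_yield_nm3_kg * gas_lhv_mj_nm3  # MJ/h leaving in the cold gas
      cold_gas_efficiency = energy_out / energy_in
      print(f"cold gas efficiency: {cold_gas_efficiency:.0%}")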

  5. Binaural noise reduction via cue-preserving MMSE filter and adaptive-blocking-based noise PSD estimation

    NASA Astrophysics Data System (ADS)

    Azarpour, Masoumeh; Enzner, Gerald

    2017-12-01

    Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.

  6. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
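
    The ABC rejection step referred to above is straightforward to sketch: draw parameters from the prior, simulate data, and keep the draws whose simulated summary statistic falls within a tolerance of the observed one. The toy Poisson model, prior and tolerance below are illustrative stand-ins for the epidemic and macroparasite models.

      import numpy as np

      rng = np.random.default_rng(4)

      def simulate(rate, n=50):
          """Toy stochastic model standing in for an epidemic/macroparasite simulator."""
          return rng.poisson(rate, size=n)

      observed = simulate(3.0)                       # pretend these are the field data
      obs_summary = observed.mean()

      # ABC rejection: sample from the prior, keep parameters whose simulated
      # summary statistic is within epsilon of the observed one.
      n_draws, epsilon = 20000, 0.1
      prior_draws = rng.uniform(0.0, 10.0, size=n_draws)
      accepted = [theta for theta in prior_draws
                  if abs(simulate(theta).mean() - obs_summary) < epsilon]

      print(f"acceptance rate: {len(accepted) / n_draws:.3f}")
      print(f"ABC posterior mean: {np.mean(accepted):.2f}")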

  7. Hurst Estimation of Scale Invariant Processes with Stationary Increments and Piecewise Linear Drift

    NASA Astrophysics Data System (ADS)

    Modarresi, N.; Rezakhah, S.

    The characteristic feature of discrete scale invariant (DSI) processes is the invariance of their finite-dimensional distributions under dilation by a certain scaling factor. A DSI process with piecewise linear drift and stationary increments inside prescribed scale intervals is introduced and studied. To identify the structure of the process, we first determine the scale intervals and their linear drifts and eliminate them. Then, a new method for the estimation of the Hurst parameter of such DSI processes is presented and applied to some period of the Dow Jones indices. This method is based on a fixed number of equally spaced samples inside successive scale intervals. We also present an efficient method for estimating the Hurst parameter of self-similar processes with stationary increments. We compare the performance of this method with the celebrated FA, DFA and DMA on simulated data of fractional Brownian motion (fBm).
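
    As a point of reference for the estimators compared above, the classical aggregated-variance estimator of the Hurst parameter for a process with stationary increments is sketched below; it is not the fixed-sample scale-interval method proposed in the paper. Block means of the increment series have variance proportional to m^(2H-2), so H follows from a log-log regression.

      import numpy as np

      def hurst_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64, 128)):
          """Estimate H from the increments of x via the aggregated-variance method."""
          incr = np.diff(x)                      # stationary increments of the process
          log_m, log_var = [], []
          for m in block_sizes:
              n_blocks = len(incr) // m
              means = incr[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
              log_m.append(np.log(m))
              log_var.append(np.log(means.var()))
          slope, _ = np.polyfit(log_m, log_var, 1)
          return 1.0 + slope / 2.0               # Var ~ m^(2H - 2)

      # Quick check on ordinary Brownian motion (true H = 0.5)
      rng = np.random.default_rng(5)
      bm = np.cumsum(rng.standard_normal(20000))
      print(f"estimated H: {hurst_aggregated_variance(bm):.2f}")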

  8. The method of constant stimuli is inefficient

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Fitzhugh, Andrew

    1990-01-01

    Simpson (1988) has argued that the method of constant stimuli is as efficient as adaptive methods of threshold estimation and has supported this claim with simulations. It is shown that Simpson's simulations are not a reasonable model of the experimental process and that more plausible simulations confirm that adaptive methods are much more efficient than the method of constant stimuli.

  9. Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.

    ERIC Educational Resources Information Center

    Simpson, William A.

    In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…

  10. Light Extraction From Solution-Based Processable Electrophosphorescent Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Krummacher, Benjamin C.; Mathai, Mathew; So, Franky; Choulis, Stelios; Choong, Vi-En

    2007-06-01

    Molecular dye dispersed solution processable blue emitting organic light-emitting devices have been fabricated and the resulting devices exhibit efficiency as high as 25 cd/A. With down-conversion phosphors, white emitting devices have been demonstrated with peak efficiency of 38 cd/A and luminous efficiency of 25 lm/W. The high efficiencies have been a product of proper tuning of carrier transport, optimization of the location of the carrier recombination zone and, hence, microcavity effect, efficient down-conversion from blue to white light, and scattering/isotropic remission due to phosphor particles. An optical model has been developed to investigate all these effects. In contrast to the common misunderstanding that light out-coupling efficiency is about 22% and independent of device architecture, our device data and optical modeling results clearly demonstrated that the light out-coupling efficiency is strongly dependent on the exact location of the recombination zone. Estimating the device internal quantum efficiencies based on external quantum efficiencies without considering the device architecture could lead to erroneous conclusions.

  11. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the cycle of water resources, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE proceeds by searching the parameter space from low-likelihood regions to high-likelihood regions gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is ideal to incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling of NSE. The comparison results demonstrated that the improved NSE could improve the efficiency of marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from heavy instability. In addition, the heavy computational cost of a huge number of model executions is overcome by using adaptive sparse grid surrogates.
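
    A stripped-down nested sampling loop for a one-dimensional Gaussian toy problem is sketched below to make the evidence-accumulation mechanics concrete; the constrained random-walk sampler stands in for the M-H or DREAMzs local samplers discussed above, and all model and tuning choices are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)

      def log_like(theta):
          # Gaussian likelihood centred at 2.0 with sd 0.5 (toy model)
          return -0.5 * ((theta - 2.0) / 0.5) ** 2 - 0.5 * np.log(2 * np.pi * 0.5 ** 2)

      prior_lo, prior_hi = -10.0, 10.0                 # uniform prior on theta

      def constrained_sample(theta0, logl_min, n_steps=30, step=0.5):
          """Random-walk moves accepted only if they stay above the likelihood threshold."""
          theta = theta0
          for _ in range(n_steps):
              prop = theta + step * rng.standard_normal()
              if prior_lo <= prop <= prior_hi and log_like(prop) > logl_min:
                  theta = prop
          return theta

      n_live, n_iter = 200, 2000
      live = rng.uniform(prior_lo, prior_hi, n_live)
      live_logl = np.array([log_like(t) for t in live])
      log_z = -np.inf
      log_width = np.log(1.0 - np.exp(-1.0 / n_live))  # first prior-volume shell

      for _ in range(n_iter):
          worst = int(np.argmin(live_logl))
          log_z = np.logaddexp(log_z, log_width + live_logl[worst])   # accumulate evidence
          seed_idx = rng.choice(np.delete(np.arange(n_live), worst))  # copy a surviving live point
          live[worst] = constrained_sample(live[seed_idx], live_logl[worst])
          live_logl[worst] = log_like(live[worst])
          log_width -= 1.0 / n_live                    # prior volume shrinks each iteration
      # (the small contribution of the remaining live points is neglected in this sketch)

      print(f"nested sampling log-evidence: {log_z:.2f}  analytic value: {np.log(1.0 / 20.0):.2f}")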

  12. Adaptive MCMC in Bayesian phylogenetics: an application to analyzing partitioned data in BEAST.

    PubMed

    Baele, Guy; Lemey, Philippe; Rambaut, Andrew; Suchard, Marc A

    2017-06-15

    Advances in sequencing technology continue to deliver increasingly large molecular sequence datasets that are often heavily partitioned in order to accurately model the underlying evolutionary processes. In phylogenetic analyses, partitioning strategies involve estimating conditionally independent models of molecular evolution for different genes and different positions within those genes, requiring a large number of evolutionary parameters that have to be estimated, leading to an increased computational burden for such analyses. The past two decades have also seen the rise of multi-core processors, in both the central processing unit (CPU) and graphics processing unit (GPU) markets, enabling massively parallel computations that are not yet fully exploited by many software packages for multipartite analyses. We here propose a Markov chain Monte Carlo (MCMC) approach using an adaptive multivariate transition kernel to estimate in parallel a large number of parameters, split across partitioned data, by exploiting multi-core processing. Across several real-world examples, we demonstrate that our approach enables the estimation of these multipartite parameters more efficiently than standard approaches that typically use a mixture of univariate transition kernels. In one case, when estimating the relative rate parameter of the non-coding partition in a heterochronous dataset, MCMC integration efficiency improves by > 14-fold. Our implementation is part of the BEAST code base, a widely used open source software package to perform Bayesian phylogenetic inference. guy.baele@kuleuven.be. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
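
    The core of an adaptive multivariate transition kernel can be illustrated outside BEAST with a small Haario-style adaptive Metropolis sampler, in which the proposal covariance is learned on the fly from the accumulated samples; the correlated Gaussian target below is an arbitrary stand-in for a posterior over partition-specific parameters.

      import numpy as np

      rng = np.random.default_rng(7)

      # Arbitrary correlated Gaussian "posterior" over d parameters (placeholder target).
      d = 4
      A = rng.standard_normal((d, d))
      target_cov = A @ A.T + np.eye(d)
      target_prec = np.linalg.inv(target_cov)

      def log_post(theta):
          return -0.5 * theta @ target_prec @ theta

      n_samples, adapt_start = 20000, 500
      sd = 2.38 ** 2 / d                               # classical adaptive-Metropolis scaling
      chain = np.zeros((n_samples, d))
      theta, lp = np.zeros(d), log_post(np.zeros(d))
      prop_cov = 0.1 * np.eye(d)
      accepted = 0

      for i in range(1, n_samples):
          prop = rng.multivariate_normal(theta, prop_cov)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
              theta, lp = prop, lp_prop
              accepted += 1
          chain[i] = theta
          if i >= adapt_start and i % 50 == 0:         # adapt the multivariate kernel
              prop_cov = sd * np.cov(chain[: i + 1].T) + sd * 1e-6 * np.eye(d)

      print(f"acceptance rate: {accepted / (n_samples - 1):.2f}")
      print("posterior variance estimate:", np.round(np.var(chain[n_samples // 2 :], axis=0), 2))
      print("target variance:            ", np.round(np.diag(target_cov), 2))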

  13. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  14. Application of independent component analysis for speech-music separation using an efficient score function estimation

    NASA Astrophysics Data System (ADS)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using blind source separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. This requires estimating the score function from samples of the observed signals (the mixture of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results of the presented algorithm on speech-music separation, compared with a separation algorithm based on the minimum mean square error estimator, indicate that it achieves better performance and less processing time.
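
    A compact natural-gradient ICA update for a two-source mixture is sketched below; the tanh score function is the usual choice for super-Gaussian sources and is used here only as a simple stand-in for the Gaussian-mixture kernel-density score estimator proposed in the paper, and the synthetic sources and learning-rate settings are illustrative.

      import numpy as np

      rng = np.random.default_rng(8)

      # Two super-Gaussian synthetic sources standing in for speech and music excerpts.
      n = 20000
      S = rng.laplace(size=(2, n))
      A = np.array([[0.8, 0.4],
                    [0.3, 0.9]])                   # unknown mixing matrix
      X = A @ S                                    # observed mixture signals

      # Natural-gradient ICA with a tanh score function on mini-batches.
      W = np.eye(2)
      mu, batch = 0.02, 256
      for _ in range(3000):
          idx = rng.integers(0, n, size=batch)
          Y = W @ X[:, idx]
          phi = np.tanh(Y)                         # score function (stand-in for the GMM/KDE score)
          W += mu * (np.eye(2) - (phi @ Y.T) / batch) @ W   # natural gradient update

      Y = W @ X
      corr = np.corrcoef(np.vstack([S, Y]))[:2, 2:]
      print("source/estimate correlation matrix:\n", np.round(corr, 2))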

  15. Estimation of Qualitative and Quantitative Parameters of Air Cleaning by a Pulsed Corona Discharge Using Multicomponent Standard Mixtures

    NASA Astrophysics Data System (ADS)

    Filatov, I. E.; Uvarin, V. V.; Kuznetsov, D. L.

    2018-05-01

    The efficiency of removal of volatile organic impurities in air by a pulsed corona discharge is investigated using model mixtures. Based on the method of competing reactions, an approach to estimating the qualitative and quantitative parameters of the employed electrophysical technique is proposed. The concept of the "toluene coefficient", characterizing the relative reactivity of a component as compared to toluene, is introduced. It is proposed that the energy efficiency of the electrophysical method be estimated using the concept of diversified yield of the removal process. Such an approach makes it possible to substantially speed up the determination of the energy parameters of impurity removal and can also serve as a criterion for estimating the effectiveness of various methods in which a nonequilibrium plasma is used for air cleaning from volatile impurities.

  16. Gear and survey efficiency of patent tongs for oyster populations on restoration reefs.

    PubMed

    Schulte, David M; Lipcius, Romuald N; Burke, Russell P

    2018-01-01

    Surveys of restored oyster reefs need to produce accurate population estimates to assess the efficacy of restoration. Due to the complex structure of subtidal oyster reefs, one effective and efficient means to sample is by patent tongs, rather than SCUBA, dredges, or bottom cores. Restored reefs vary in relief and oyster density, either of which could affect survey efficiency. This study is the first to evaluate gear (the first full grab) and survey (which includes selecting a specific half portion of the first grab for further processing) efficiencies of hand-operated patent tongs as a function of reef height and oyster density on subtidal restoration reefs. In the Great Wicomico River, a tributary of lower Chesapeake Bay, restored reefs of high- and low-relief (25-45 cm, and 8-12 cm, respectively) were constructed throughout the river as the first large-scale oyster sanctuary reef restoration effort (sanctuary acreage > 20 ha at one site) in Chesapeake Bay. We designed a metal frame to guide a non-hydraulic mechanical patent tong repeatedly into the same plot on a restored reef until all oysters within the grab area were captured. Full capture was verified by an underwater remotely-operated vehicle. Samples (n = 19) were taken on nine different reefs, including five low- (n = 8) and four high-relief reefs (n = 11), over a two-year period. The gear efficiency of the patent tong was estimated to be 76% (± 5% standard error), whereas survey efficiency increased to 81% (± 10%) due to processing. Neither efficiency differed significantly between young-of-the-year oysters (spat) and adults, high- and low-relief reefs, or years. As this type of patent tong is a common and cost-effective tool to evaluate oyster restoration projects as well as population density on fished habitat, knowing the gear and survey efficiencies allows for accurate and precise population estimates.

  17. Online machining error estimation method of numerical control gear grinding machine tool based on data analysis of internal sensors

    NASA Astrophysics Data System (ADS)

    Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin

    2016-12-01

    This paper presents an online estimation method for cutting error based on the analysis of internal sensor readings. The internal sensors of the numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model for estimating the cutting error is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. In order to verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulation and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.

  18. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    Advances in digital signal processing have been progressing towards efficient use of memory and processing power. Both of these factors can be addressed by feasible techniques of image storage that compute a minimal representation of the image, which speeds up later processing. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the number of key points by matching their orientations and combining them across different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
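
    A short example of the store-descriptors-then-match idea described above, using OpenCV (cv2.SIFT_create is available in recent OpenCV builds): keypoint descriptors are computed once and kept in place of the full image, and recognition later reduces to descriptor matching with Lowe's ratio test. The file names in the commented usage are placeholders.

      import cv2

      def count_good_matches(reference_path, query_path, ratio=0.75):
          """Match stored SIFT descriptors of a reference image against a query image."""
          img1 = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
          img2 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(img1, None)     # compact descriptors to store
          kp2, des2 = sift.detectAndCompute(img2, None)
          matcher = cv2.BFMatcher(cv2.NORM_L2)
          matches = matcher.knnMatch(des1, des2, k=2)       # two nearest neighbours per descriptor
          good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe's ratio test
          return len(kp1), len(good)

      # Hypothetical usage with placeholder file names:
      # stored, matched = count_good_matches("reference.png", "query.png")
      # print(stored, "stored descriptors,", matched, "good matches")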

  19. Attention control learning in the decision space using state estimation

    NASA Astrophysics Data System (ADS)

    Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid

    2016-05-01

    The main goal of this paper is modelling attention while using it for efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing power limitations that inherently exist in a typical multi-sensor real-world robotic application. To efficiently recognise the environment under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method using estimated mixture-of-experts task and attention learning in perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space in a learning task problem, which is examined in the Webots simulator, is performed. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.

  20. Microcidal effects of a new pelleting process.

    PubMed

    Ekperigin, H E; McCapes, R H; Redus, R; Ritchie, W L; Cameron, W J; Nagaraja, K V; Noll, S

    1990-09-01

    The microcidal efficiency of a new pelleting process was evaluated in four trials. Also, different methods of measuring temperature and moisture were compared and attempts were made to determine the influence on efficiency of pH changes occurring during processing. In the new process, the traditional boiler-conditioner was replaced by an Anaerobic Pasteurizing Conditioning (APC) System. Microcidal efficiency of the APC System, by itself or in conjunction with a pellet mill, appeared to be 100% against Escherichia coli and nonlactose-fermenters, 99% against aerobic mesophiles, and 90% against fungi. These levels of efficiency were attained when the temperature and moisture of feed conditioned in the APC System for 4.6 +/- .5 min were 82.9 +/- 2.4 C and 14.9 +/- .3%, respectively. On-line temperature probes were reliable and provided quick, accurate estimates of feed temperature. The near infrared scanner and microwave oven methods of measuring moisture were much quicker but less accurate than the in vacuo method. There were no differences among the pH of samples of raw, conditioned, and pelleted feed.

  1. Energy-Saving Melting and Revert Reduction Technology (E-SMARRT): Final Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Thornton C

    2014-03-31

    Energy-Saving Melting and Revert Reduction Technology (E-SMARRT) is a balanced portfolio of R&D tasks that address energy-saving opportunities in the metalcasting industry. E-SMARRT was created to: • Improve important capabilities of castings • Reduce carbon footprint of the foundry industry • Develop new job opportunities in manufacturing • Significantly reduce metalcasting process energy consumption and includes R&D in the areas of: • Improvements in Melting Efficiency • Innovative Casting Processes for Yield Improvement/Revert Reduction • Instrumentation and Control Improvement • Material properties for Casting or Tooling Design Improvement The energy savings and process improvements developed under E-SMARRT have been made possible through the unique collaborative structure of the E-SMARRT partnership. The E-SMARRT team consisted of DOE’s Office of Industrial Technology, the three leading metalcasting technical associations in the U.S: the American Foundry Society; the North American Die Casting Association; and the Steel Founders’ Society of America; and SCRA Applied R&D, doing business as the Advanced Technology Institute (ATI), a recognized leader in distributed technology management. This team provided collaborative leadership to a complex industry composed of approximately 2,000 companies, 80% of which employ less than 100 people, and only 4% of which employ more than 250 people. Without collaboration, these new processes and technologies that enable energy efficiencies and environment-friendly improvements would have been slow to develop and had trouble obtaining a broad application. The E-SMARRT R&D tasks featured low-threshold energy efficiency improvements that are attractive to the domestic industry because they do not require major capital investment. The results of this portfolio of projects are significantly reducing metalcasting process energy consumption while improving the important capabilities of metalcastings. Through June 2014, the E-SMARRT program predicts an average annual estimated savings of 59 Trillion BTUs per year over a 10 year period through Advanced Melting Efficiencies and Innovative Casting Processes. Along with these energy savings, the estimated average annual CO2 reduction over a ten year period is 3.56 Million Metric Tons of Carbon Equivalent (MM TCE).

  2. Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Ju, Wenyun; Sun, Kai

    In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when considering the interdependence of different types of outages. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving the efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when there are no data for them.
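
    A minimal sketch of estimating an offspring mean matrix from cascade data and checking proximity to criticality through its largest eigenvalue is given below; the synthetic Poisson cascades and the simple ratio estimator shown here stand in for the EM-based estimation and Lagrange-Good inversion used in the paper.

      import numpy as np

      rng = np.random.default_rng(9)

      # True offspring mean matrix: expected number of type-j "children" (e.g. line
      # outages, load shed events) produced by one type-i parent.
      Lam_true = np.array([[0.6, 0.2],
                           [0.1, 0.5]])
      n_types = 2

      def simulate_cascade(n_initial=(5, 3), max_gen=20):
          """Return pooled parent counts and their offspring counts, by type."""
          parents = np.array(n_initial)
          parent_counts = np.zeros(n_types)
          offspring_counts = np.zeros((n_types, n_types))
          for _ in range(max_gen):
              if parents.sum() == 0:
                  break
              children = np.zeros(n_types, dtype=int)
              for i in range(n_types):
                  # Each type-i parent independently produces Poisson offspring of each type.
                  off = rng.poisson(Lam_true[i], size=(parents[i], n_types)).sum(axis=0)
                  parent_counts[i] += parents[i]
                  offspring_counts[i] += off
                  children += off
              parents = children
          return parent_counts, offspring_counts

      P = np.zeros(n_types)
      O = np.zeros((n_types, n_types))
      for _ in range(500):                          # pool many simulated cascades
          pc, oc = simulate_cascade()
          P += pc
          O += oc

      Lam_hat = O / P[:, None]                      # ratio estimator of the offspring means
      print("estimated offspring mean matrix:\n", np.round(Lam_hat, 2))
      print("largest eigenvalue (criticality index):",
            round(float(np.max(np.abs(np.linalg.eigvals(Lam_hat)))), 2))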

  3. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics.

    PubMed

    Madi, Mahmoud K; Karameh, Fadi N

    2017-01-01

    Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharper accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating a variety of models of biological phenomena where the underlying process dynamics unfold at time scales faster than those seen in collected measurements.
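
    For readers who want a concrete starting point, the following is a minimal discrete-time cubature Kalman filter step for a model with additive Gaussian noise (generic transition f and measurement h supplied by the caller). It is only a sketch of the standard CKF recursion; the continuous-discrete CD-CKF evaluated in the paper additionally integrates the process dynamics between samples and is not reproduced here.

    import numpy as np

    def cubature_points(mean, cov):
        """Generate the 2n cubature points of a Gaussian N(mean, cov)."""
        n = mean.size
        S = np.linalg.cholesky(cov)                  # cov = S @ S.T
        offsets = np.sqrt(n) * np.hstack([S, -S])    # n x 2n
        return mean[:, None] + offsets               # n x 2n

    def ckf_step(m, P, f, h, Q, R, y):
        """One predict/update cycle of a cubature Kalman filter for
        x_k = f(x_{k-1}) + w,  y_k = h(x_k) + v  (additive noise).
        f and h take and return 1-D numpy arrays."""
        n = m.size
        # Predict: propagate cubature points through the dynamics
        X = cubature_points(m, P)
        Xp = np.column_stack([f(X[:, i]) for i in range(2 * n)])
        m_pred = Xp.mean(axis=1)
        P_pred = (Xp - m_pred[:, None]) @ (Xp - m_pred[:, None]).T / (2 * n) + Q
        # Update: propagate fresh points through the measurement model
        Xu = cubature_points(m_pred, P_pred)
        Yu = np.column_stack([h(Xu[:, i]) for i in range(2 * n)])
        y_pred = Yu.mean(axis=1)
        Pyy = (Yu - y_pred[:, None]) @ (Yu - y_pred[:, None]).T / (2 * n) + R
        Pxy = (Xu - m_pred[:, None]) @ (Yu - y_pred[:, None]).T / (2 * n)
        K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
        m_new = m_pred + K @ (y - y_pred)
        P_new = P_pred - K @ Pyy @ K.T
        return m_new, P_new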

  4. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics

    PubMed Central

    2017-01-01

    Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, a sharper accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating a variety of models of biological phenomena where the underlying process dynamics unfold at time scales faster than those seen in collected measurements. PMID:28727850

  5. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    PubMed

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to standard Bayesian inference, which suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  6. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the event of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides better sampling efficiency by reusing all the generated samples.
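
    The sketch below illustrates the adaptation idea behind AMIS in a simplified form: a Gaussian proposal over a 2-D source location is repeatedly re-fitted to importance-weighted samples under a flat prior. The full AMIS recycling weights and the analytic treatment of the release rate are omitted, and the forward dispersion model (plume_model) and measured concentrations in the usage comment are hypothetical placeholders.

    import numpy as np
    from scipy.stats import multivariate_normal

    def adaptive_is_source_location(forward_model, observations, sigma_obs,
                                    n_iter=20, n_samples=500, seed=0):
        """Small adaptive Gaussian importance sampler for a 2-D source location
        under a flat prior. The proposal is re-fitted to the weighted samples at
        each iteration (the adaptation idea behind AMIS, without its full
        recycling of past samples)."""
        rng = np.random.default_rng(seed)
        mean, cov = np.zeros(2), 1.0e6 * np.eye(2)      # diffuse initial proposal
        for _ in range(n_iter):
            samples = rng.multivariate_normal(mean, cov, size=n_samples)
            # log importance weight = log likelihood - log proposal density
            loglike = np.array([-0.5 * np.sum((observations - forward_model(s)) ** 2)
                                / sigma_obs ** 2 for s in samples])
            logq = multivariate_normal.logpdf(samples, mean=mean, cov=cov)
            logw = loglike - logq
            w = np.exp(logw - logw.max())
            w /= w.sum()
            mean = w @ samples                           # weighted posterior mean
            centered = samples - mean
            cov = (centered * w[:, None]).T @ centered + 1e-6 * np.eye(2)
        return mean, cov

    # Hypothetical usage: plume_model(loc) would return the concentrations that a
    # source at `loc` produces at the sensor positions (e.g., a Gaussian plume model).
    # mean, cov = adaptive_is_source_location(plume_model, measured_concentrations, 0.1)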

  7. Compressive Channel Estimation and Tracking for Large Arrays in mm Wave Picocells

    DTIC Science & Technology

    2014-01-01

    abling sophisticated adaptation, including frequency-selective spatiotemporal processing (e.g., per subcarrier beamforming in OFDM systems). This approach...subarrays are certainly required for more advanced functionalities such as multiuser MIMO [17], spatial multiplexing [18], [19], [20], [21], [22], and...case, a regularly spaced 2D array), an estimate of the N2t,1D × N2r,1D MIMO channel matrix H can be efficiently arrived at by estimating the spatial

  8. Hapl-o-Mat: open-source software for HLA haplotype frequency estimation from ambiguous and heterogeneous data.

    PubMed

    Schäfer, Christian; Schmidt, Alexander H; Sauter, Jürgen

    2017-05-30

    Knowledge of HLA haplotypes is helpful in many settings, such as disease association studies, population genetics, or hematopoietic stem cell transplantation. Regarding the recruitment of unrelated hematopoietic stem cell donors, HLA haplotype frequencies of specific populations are used to optimize both donor searches for individual patients and strategic donor registry planning. However, the estimation of haplotype frequencies from HLA genotyping data is challenged by the large amount of genotype data, the complex HLA nomenclature, and the heterogeneous and ambiguous nature of typing records. To meet these challenges, we have developed the open-source software Hapl-o-Mat. It estimates haplotype frequencies from population data including an arbitrary number of loci using an expectation-maximization algorithm. Its key features are the processing of different HLA typing resolutions within a given population sample and the handling of ambiguities recorded via multiple allele codes or genotype list strings. Implemented in C++, Hapl-o-Mat facilitates efficient haplotype frequency estimation from large amounts of genotype data. We demonstrate its accuracy and performance on the basis of artificial and real genotype data. Hapl-o-Mat is versatile and efficient software for HLA haplotype frequency estimation. Its capability of processing various forms of HLA genotype data allows for a straightforward haplotype frequency estimation from typing records usually found in stem cell donor registries.
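
    A minimal sketch of the expectation-maximization step for haplotype frequencies is given below, assuming Hardy-Weinberg proportions and individuals already reduced to their sets of compatible haplotype pairs; it is not Hapl-o-Mat (which is written in C++ and handles multiple allele codes, genotype list strings, and mixed typing resolutions), just the core EM update.

    from collections import defaultdict

    def em_haplotype_freqs(individuals, haplotypes, n_iter=100):
        """individuals: list of lists of compatible (hapA, hapB) pairs per person.
        Returns a dict mapping haplotype -> estimated population frequency."""
        freq = {h: 1.0 / len(haplotypes) for h in haplotypes}
        for _ in range(n_iter):
            counts = defaultdict(float)
            # E-step: distribute each individual over its compatible pairs
            for pairs in individuals:
                probs = [freq[a] * freq[b] * (1 if a == b else 2) for a, b in pairs]
                total = sum(probs)
                for (a, b), p in zip(pairs, probs):
                    counts[a] += p / total
                    counts[b] += p / total
            # M-step: renormalize expected haplotype counts over all gametes
            n_gametes = 2 * len(individuals)
            freq = {h: counts[h] / n_gametes for h in haplotypes}
        return freq

    # Illustrative two-locus example with ambiguous phase
    haplotypes = ["A1-B1", "A1-B2", "A2-B1", "A2-B2"]
    individuals = [[("A1-B1", "A2-B2"), ("A1-B2", "A2-B1")],   # double heterozygote
                   [("A1-B1", "A1-B2")],
                   [("A1-B1", "A1-B1")]]
    print(em_haplotype_freqs(individuals, haplotypes))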

  9. Estimating multi-period global cost efficiency and productivity change of systems with network structures

    NASA Astrophysics Data System (ADS)

    Tohidnia, S.; Tohidi, G.

    2018-02-01

    The current paper develops three different ways to measure the multi-period global cost efficiency for homogeneous networks of processes when the prices of exogenous inputs are known at all time periods. A multi-period network data envelopment analysis model is presented to measure the minimum cost of the network system based on the global production possibility set. We show that there is a relationship between the multi-period global cost efficiency of the network system and that of its subsystems, as well as its processes. The proposed model is applied to compute the global cost Malmquist productivity index for measuring the productivity changes of the network system and of each of its processes between two time periods. This index is circular. Furthermore, we show that the productivity change of the network system can be defined as a weighted average of the process productivity changes. Finally, a numerical example is presented to illustrate the proposed approach.
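
    As a simplified, single-period analogue of the cost-efficiency measurement described above, the sketch below solves the classical cost-minimization DEA linear program with scipy and returns minimum cost divided by observed cost; the paper's multi-period, network-structured model and the global production possibility set are not reproduced.

    import numpy as np
    from scipy.optimize import linprog

    def cost_efficiency(X, Y, prices, o):
        """Single-period cost efficiency of DMU `o` under constant returns to scale.
        X: (m inputs x n DMUs), Y: (s outputs x n DMUs), prices: input prices of DMU o.
        Minimizes the priced input bundle subject to producing at least DMU o's outputs."""
        m, n = X.shape
        s = Y.shape[0]
        # decision variables: [x_1..x_m, lambda_1..lambda_n]
        c = np.concatenate([prices, np.zeros(n)])
        # X @ lam <= x   ->   -x + X @ lam <= 0
        A_inputs = np.hstack([-np.eye(m), X])
        b_inputs = np.zeros(m)
        # Y @ lam >= Y[:, o]   ->   -Y @ lam <= -Y[:, o]
        A_outputs = np.hstack([np.zeros((s, m)), -Y])
        b_outputs = -Y[:, o]
        res = linprog(c, A_ub=np.vstack([A_inputs, A_outputs]),
                      b_ub=np.concatenate([b_inputs, b_outputs]),
                      bounds=[(0, None)] * (m + n), method="highs")
        min_cost = res.fun
        actual_cost = prices @ X[:, o]
        return min_cost / actual_cost

    # Illustrative data: 2 inputs, 1 output, 3 DMUs
    X = np.array([[2.0, 4.0, 3.0], [3.0, 2.0, 5.0]])
    Y = np.array([[1.0, 1.0, 1.0]])
    print(cost_efficiency(X, Y, prices=np.array([1.0, 1.0]), o=2))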

  10. Process modelling of biomass conversion to biofuels with combined heat and power.

    PubMed

    Sharma, Abhishek; Shinde, Yogesh; Pareek, Vishnu; Zhang, Dongke

    2015-12-01

    A process model has been developed to study the pyrolysis of biomass to produce biofuel with heat and power generation. The gaseous and solid products were used to generate heat and electrical power, whereas the bio-oil was stored and supplied for other applications. The overall efficiency of the base case model was estimated for conversion of biomass into useable forms of bio-energy. It was found that the proposed design is not only significantly efficient but also potentially suitable for distributed operation of pyrolysis plants having centralised post-processing facilities for production of other biofuels and chemicals. It was further determined that the bio-oil quality improved using a multi-stage condensation system. However, recycling the flue gases coming from the combustor, instead of the non-condensable gases, into the pyrolyzer led to an increase in the overall efficiency of the process but degraded the bio-oil quality. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Combined Brayton-JT cycles with refrigerants for natural gas liquefaction

    NASA Astrophysics Data System (ADS)

    Chang, Ho-Myung; Park, Jae Hoon; Lee, Sanggyu; Choe, Kun Hyung

    2012-06-01

    Thermodynamic cycles for natural gas liquefaction with single-component refrigerants are investigated under a governmental project in Korea, aiming at new processes to meet the requirements on high efficiency, large capacity, and simple equipment. Based upon the optimization theory recently published by the present authors, it is proposed to replace the methane-JT cycle in conventional cascade process with a nitrogen-Brayton cycle. A variety of systems to combine nitrogen-Brayton, ethane-JT and propane-JT cycles are simulated with Aspen HYSYS and quantitatively compared in terms of thermodynamic efficiency, flow rate of refrigerants, and estimated size of heat exchangers. A specific Brayton-JT cycle is suggested with detailed thermodynamic data for further process development. The suggested cycle is expected to be more efficient and simpler than the existing cascade process, while still taking advantage of easy and robust operation with single-component refrigerants.

  12. Biocatalyzed processes for production of commodity chemicals: Assessment of future research advances for N-butanol production

    NASA Technical Reports Server (NTRS)

    Ingham, J. D.

    1984-01-01

    This report is a summary of assessments by Chem Systems Inc. and a further evaluation of the impacts of research advances on energy efficiency and the potential for future industrial production of acetone-butanol-ethanol (ABE) solvents and other products by biocatalyzed processes. Brief discussions of each of the assessments made by CSI, followed by estimates of minimum projected energy consumption and costs for production of solvents by ABE biocatalyzed processes are included. These assessments and further advances discussed in this report show that substantial decreases in energy consumption and costs are possible on the basis of specific research advances; therefore, it appears that a biocatalyzed process for ABE can be developed that will be competitive with conventional petrochemical processes for production of n-butanol and acetone. (In this work, the ABE process was selected and utilized only as an example for methodology development; other possible bioprocesses for production of commodity chemicals are not intended to be excluded.) It has been estimated that process energy consumption can be decreased by 50%, with a corresponding cost reduction of 15-30% (in comparison with a conventional petrochemical process) by increasing microorganism tolerance to n-butanol and efficient recovery of product solvents from the vapor phase.

  13. An Efficient Location Verification Scheme for Static Wireless Sensor Networks.

    PubMed

    Kim, In-Hwan; Kim, Bo-Sung; Song, JooSeok

    2017-01-24

    In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty in estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they are not enough to eliminate wrong location estimations in some situations. Location verification can be the solution to these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process and requirements such as special hardware, a dedicated verifier, and a trusted third party, which cause additional communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSN called mutually-shared region-based location verification (MSRLV), which reduces those overheads by utilizing the implicit involvement of sensors and eliminating several requirements. In order to achieve this, we use the mutually-shared region between location claimant and verifier for the location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, when compared with the other location verification schemes, in a single sensor verification. In addition, simulation results for the verification of the whole network show that MSRLV can detect over 90% of malicious sensors when sensors in the network have five or more neighbors.

  14. An Efficient Location Verification Scheme for Static Wireless Sensor Networks

    PubMed Central

    Kim, In-hwan; Kim, Bo-sung; Song, JooSeok

    2017-01-01

    In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty in estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they are not enough to eliminate wrong location estimations in some situations. Location verification can be the solution to these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process and requirements such as special hardware, a dedicated verifier, and a trusted third party, which cause additional communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSN called mutually-shared region-based location verification (MSRLV), which reduces those overheads by utilizing the implicit involvement of sensors and eliminating several requirements. In order to achieve this, we use the mutually-shared region between location claimant and verifier for the location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, when compared with the other location verification schemes, in a single sensor verification. In addition, simulation results for the verification of the whole network show that MSRLV can detect over 90% of malicious sensors when sensors in the network have five or more neighbors. PMID:28125007

  15. Spatio-Temporal Convergence of Maximum Daily Light-Use Efficiency Based on Radiation Absorption by Canopy Chlorophyll

    NASA Astrophysics Data System (ADS)

    Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian; Wu, Jin; Wu, Xiaocui; Gioli, Beniamino; Wohlfahrt, Georg; Cescatti, Alessandro; van der Tol, Christiaan; Zhou, Sha; Gough, Christopher M.; Gentine, Pierre; Zhang, Yongguang; Steinbrecher, Rainer; Ardö, Jonas

    2018-04-01

    Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production estimation. Here we use satellite-based solar-induced chlorophyll fluorescence as a proxy for photosynthetically active radiation absorbed by chlorophyll (APARchl) and derive an estimate of the fraction of APARchl (fPARchl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ɛmaxchl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPARchl, suggesting that the corresponding ɛmaxchl has less seasonal variation. This spatio-temporal convergence of LUE derived from fPARchl can be used to build simple but robust gross primary production models and to better constrain process-based models.
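
    The LUE identity assumed in this record can be written compactly as GPP = ɛmaxchl × fPARchl × PAR; the two small helper functions below simply encode that relation and its inversion for a tower-based maximum-LUE estimate, as a rough analogue of the cross-site comparison (the satellite retrieval of fPARchl itself is not reproduced).

    def gpp_from_chlorophyll_absorption(par, fpar_chl, eps_max_chl):
        """GPP from the LUE form used above: GPP = eps_max_chl * fPAR_chl * PAR,
        where APAR_chl = fPAR_chl * PAR is the radiation absorbed by chlorophyll."""
        return eps_max_chl * fpar_chl * par

    def max_lue_from_tower(gpp_tower, par, fpar_chl):
        """Invert the same relation to get a site-level maximum LUE estimate
        from eddy-covariance GPP under near-optimal conditions."""
        return gpp_tower / (fpar_chl * par)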

  16. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis.

    PubMed

    Jones, Reese E; Mandadapu, Kranthi K

    2012-04-21

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
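
    A bare-bones Green-Kubo estimate along the lines described above can be assembled from an autocorrelation function, a running time integral, and replica averaging, as in the sketch below; the stationarity checks and the Zwanzig-Ailawadi error bounds used in the paper are not reproduced, and the property-specific prefactor (e.g., V/(k_B T^2) for thermal conductivity) is left to the caller.

    import numpy as np

    def autocorrelation(x, max_lag):
        """Biased sample autocorrelation <x(0) x(t)> for lags 0..max_lag-1."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag)])

    def green_kubo(flux_replicas, dt, max_lag, prefactor=1.0):
        """Transport coefficient from an ensemble of flux time series.
        Returns the replica-mean running integral of the flux autocorrelation
        and a standard-error estimate from the spread across replicas."""
        integrals = []
        for flux in flux_replicas:
            acf = autocorrelation(flux, max_lag)
            running = prefactor * np.cumsum(acf) * dt   # simple rectangle rule
            integrals.append(running)
        integrals = np.array(integrals)
        mean = integrals.mean(axis=0)
        stderr = integrals.std(axis=0, ddof=1) / np.sqrt(len(flux_replicas))
        return mean, stderr

    # Illustrative use with synthetic, exponentially correlated flux data
    rng = np.random.default_rng(0)
    replicas = [np.convolve(rng.normal(size=20000), np.exp(-np.arange(200) / 20.0))
                for _ in range(8)]
    coeff, err = green_kubo(replicas, dt=1.0, max_lag=500)
    print("plateau estimate:", coeff[-1], "+/-", err[-1])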

  17. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    NASA Astrophysics Data System (ADS)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.

  18. Adaptive correlation filter-based video stabilization without accumulative global motion estimation

    NASA Astrophysics Data System (ADS)

    Koh, Eunjin; Lee, Chanyong; Jeong, Dong Gil

    2014-12-01

    We present a digital video stabilization approach that provides both robustness and efficiency for practical applications. In this approach, we adopt a stabilization model that efficiently maintains spatio-temporal information of past input frames and can track the original stabilization position. Because of the stabilization model, the proposed method does not need accumulative global motion estimation and can recover the original position even if there is a failure in interframe motion estimation. It can also intelligently handle damaged or interrupted video sequences. Moreover, because it is simple and well suited to parallel schemes, we readily implement it on a commercial field-programmable gate array and on a graphics processing unit board with compute unified device architecture. Experimental results show that the proposed approach is both fast and robust.

  19. Sensitivity analysis of the add-on price estimate for the silicon web growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1981-01-01

    The web growth process, a silicon-sheet technology option developed for the flat-plate solar array (FSA) project, was examined. Base case data for the technical and cost parameters for the technical and commercial readiness phases of the FSA project are projected. The process add-on price is analyzed using the base case data for cost parameters such as equipment, space, direct labor, materials, and utilities, and for production parameters such as growth rate and run length, with a computer program developed specifically to perform the sensitivity analysis with improved price estimation. Silicon price, sheet thickness, and cell efficiency are also discussed.

  20. Measuring Efficiency of Tunisian Schools in the Presence of Quasi-Fixed Inputs: A Bootstrap Data Envelopment Analysis Approach

    ERIC Educational Resources Information Center

    Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane

    2010-01-01

    The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…

  1. Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture

    PubMed Central

    Guzmán, Pablo; Díaz, Javier; Agís, Rodrigo; Ros, Eduardo

    2010-01-01

    The purpose of this study is to develop a motion sensor (delivering optical flow estimations) using a platform that includes the sensor itself, focal plane processing resources, and co-processing resources on a general purpose embedded processor. All this is implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection into the camera plane of the 3-D motion information presented at the world scenario. This motion representation is widely known and applied in the scientific community to solve a wide variety of problems. Most applications based on motion estimation require real-time operation; hence, this restriction must be taken into account. In this paper, we show an efficient approach to estimate the motion velocity vectors with an architecture based on a focal plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on the simplification of the original optical flow model and its efficient implementation in a platform that combines an analog (focal-plane) and digital (NIOS II) processor. The system is fully functional and is organized in different stages, where the early processing (focal plane) stage is mainly focused on pre-processing the input image stream to reduce the computational cost of the post-processing (NIOS II) stage. We present the employed co-design techniques and analyze this novel architecture. We evaluate the system’s performance and accuracy with respect to the different proposed approaches described in the literature. We also discuss the advantages of the proposed approach as well as the degree of efficiency which can be obtained from the focal plane processing capabilities of the system. The final outcome is a low-cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used for very diverse application domains. PMID:22319283

  2. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict concentration distributions accurately and efficiently. PSO and EM are applied for estimating the source parameters, which can effectively accelerate the process of convergence. The method is verified on the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
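
    The particle swarm component can be illustrated with the generic minimizer below, which would be pointed at the misfit between observed sensor concentrations and a trained surrogate of the dispersion model; the ANN training and the EM step of the paper are not shown, and surrogate_model and observed in the usage comment are hypothetical placeholders.

    import numpy as np

    def pso_minimize(objective, bounds, n_particles=30, n_iter=100, seed=0,
                     w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimization over a box `bounds` = [(lo, hi), ...].
        Returns the best position found and its objective value."""
        rng = np.random.default_rng(seed)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        dim = len(bounds)
        x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
        v = np.zeros_like(x)                               # velocities
        pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
        g = pbest[np.argmin(pbest_val)].copy()             # global best
        for _ in range(n_iter):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    # Hypothetical use: source parameters = (x, y, release rate); surrogate_model
    # stands in for the trained ANN and returns predicted sensor concentrations.
    # objective = lambda s: np.sum((observed - surrogate_model(s)) ** 2)
    # best_source, best_misfit = pso_minimize(objective, [(0, 5000), (0, 5000), (0, 10)])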

  3. Methodology for modeling the disinfection efficiency of fresh-cut leafy vegetables wash water applied on peracetic acid combined with lactic acid.

    PubMed

    Van Haute, S; López-Gálvez, F; Gómez-López, V M; Eriksson, Markus; Devlieghere, F; Allende, Ana; Sampers, I

    2015-09-02

    A methodology to i) assess the feasibility of water disinfection in fresh-cut leafy greens wash water and ii) compare the efficiency of water disinfectants was defined and applied to a combination of peracetic acid (PAA) and lactic acid (LA), with free chlorine used as a benchmark. Standardized process water, a watery suspension of iceberg lettuce, was used for the experiments. First, the combination of PAA+LA was evaluated for water recycling. In this case, disinfectant was added to standardized process water inoculated with Escherichia coli (E. coli) O157 (6 log CFU/mL). Regression models were constructed based on the batch inactivation data and validated in industrial process water obtained from fresh-cut leafy green processing plants. The UV254(F) was the best indicator for PAA decay and as such for the E. coli O157 inactivation with PAA+LA. The disinfection efficiency of PAA+LA increased with decreasing pH. Furthermore, PAA+LA efficacy was assessed as a process water disinfectant to be used within the washing tank, using a dynamic washing process with continuous influx of E. coli O157 and organic matter in the washing tank. The process water contamination in the dynamic process was adequately estimated by the developed model, which assumed that knowledge of the disinfectant residual was sufficient to estimate the microbial contamination, regardless of the physicochemical load. Based on the obtained results, PAA+LA seems to be better suited than chlorine for disinfecting process wash water with a high organic load, but a higher disinfectant residual is necessary due to the slower E. coli O157 inactivation kinetics when compared to chlorine. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
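
    A minimal scikit-learn version of the calibration idea is sketched below: a GP maps (sensor response, temperature, humidity) to analyte concentration, and its posterior standard deviation provides the uncertainty estimate. The batch-sequential experimental design of the paper is not reproduced, and the calibration data shown are placeholders.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Placeholder calibration data: columns = [sensor response, temperature, humidity]
    X_train = np.array([[0.12, 20.0, 30.0],
                        [0.45, 25.0, 50.0],
                        [0.78, 30.0, 40.0],
                        [0.95, 22.0, 60.0]])
    y_train = np.array([1.0, 4.0, 7.5, 9.0])   # analyte concentration (arbitrary units)

    # Anisotropic RBF kernel over the three exposure-condition factors plus a noise term
    kernel = RBF(length_scale=[0.5, 5.0, 10.0]) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

    # Predict the concentration and its uncertainty for a new exposure condition
    X_new = np.array([[0.60, 27.0, 45.0]])
    mean, std = gp.predict(X_new, return_std=True)
    print(f"estimated concentration: {mean[0]:.2f} +/- {2 * std[0]:.2f}")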

  5. A geostatistical approach to estimate mining efficiency indicators with flexible meshes

    NASA Astrophysics Data System (ADS)

    Freixas, Genis; Garriga, David; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2014-05-01

    Geostatistics is a branch of statistics developed originally to predict probability distributions of ore grades for mining operations by considering the attributes of a geological formation at unknown locations as a set of correlated random variables. Mining exploitations typically aim to maintain acceptable mineral grades to produce commercial products based upon demand. In this context, we present a new geostatistical methodology to estimate strategic efficiency maps that incorporate hydraulic test data, the evolution of concentrations with time obtained from chemical analysis (packer tests and production wells), as well as hydraulic head variations. The methodology is applied to a salt basin in South America. The exploitation is based on the extraction of brines through vertical and horizontal wells. Thereafter, brines are precipitated in evaporation ponds to obtain target potassium and magnesium salts of economic interest. Lithium carbonate is obtained as a byproduct of the production of potassium chloride. Aside from providing an assembly of traditional geostatistical methods, the strength of this study lies in the new methodology developed, which focuses on finding the best sites to exploit the brines while maintaining efficiency criteria. Thus, some strategic indicator efficiency maps have been developed under the specific criteria imposed by exploitation standards to incorporate new extraction wells in new areas that would allow production to be maintained or improved. Results show that the uncertainty quantification of the efficiency plays a dominant role and that the use of flexible meshes, which properly describe the curvilinear features associated with vertical stratification, provides a more consistent estimation of the geological processes. Moreover, we demonstrate that the vertical correlation structure at the given salt basin is essentially linked to variations in the formation thickness, which calls for flexible meshes and non-stationary stochastic processes.

  6. Evaluation of virus reduction efficiency in wastewater treatment unit processes as a credit value in the multiple-barrier system for wastewater reclamation and reuse.

    PubMed

    Ito, Toshihiro; Kato, Tsuyoshi; Hasegawa, Makoto; Katayama, Hiroyuki; Ishii, Satoshi; Okabe, Satoshi; Sano, Daisuke

    2016-12-01

    The virus reduction efficiency of each unit process is commonly determined based on the ratio of virus concentration in influent to that in effluent of a unit, but the virus concentration in wastewater has often fallen below the analytical quantification limit, which does not allow us to calculate the concentration ratio at each sampling event. In this study, left-censored datasets of norovirus (genogroups I and II) and adenovirus were used to calculate the virus reduction efficiency in unit processes of secondary biological treatment and chlorine disinfection. Virus concentrations in the influent, the effluent from the secondary treatment, and the chlorine-disinfected effluent of four municipal wastewater treatment plants were analyzed by a quantitative polymerase chain reaction (PCR) approach, and the probabilistic distributions of log reduction (LR) were estimated by a Bayesian estimation algorithm. The mean values of LR in the secondary treatment units ranged from 0.9 to 2.2, whereas those in the free chlorine disinfection units ranged from -0.1 to 0.5. The LR value in the secondary treatment was virus-type and unit-process dependent, which raises the importance of accumulating virus LR data applicable to the multiple-barrier system, a global concept of microbial risk management in wastewater reclamation and reuse.

  7. Estimating Mixture of Gaussian Processes by Kernel Smoothing

    PubMed Central

    Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin

    2014-01-01

    When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675

  8. Increase of efficiency of finishing-cleaning and hardening processing of details based on rotor-screw technological systems

    NASA Astrophysics Data System (ADS)

    Lebedev, V. A.; Serga, G. V.; Khandozhko, A. V.

    2018-03-01

    The article proposes technical solutions for increasing the efficiency of finishing-cleaning and hardening processing of parts on the basis of rotor-screw technological systems. The essence, design features, and technological capabilities of the rotor-screw technological system with a rotating container are described; this system expands the range of the resulting displacement vectors of the abrasive-medium granules and the processed parts. Ways of intensifying the processing by vibration activation, which provides a combined effect of large- and small-amplitude low-frequency oscillations on the loaded mass, are proposed. The results of experimental studies of the movement of bulk materials in a screw container are presented, which showed that Kv = 0.5-0.6 can be considered the optimal value of the container filling factor. An estimate of the application efficiency of screw containers, based on their design features, is given.

  9. Costing for the Future: Exploring Cost Estimation With Unmanned Autonomous Systems

    DTIC Science & Technology

    2016-04-30

    account for how cost estimating for autonomy is different than current methodologies and to suggest ways it can be addressed through the integration and...The Development stage involves refining the system requirements, creating a solution description, and building a system. 3. The Operational Test...parameter describes the extent to which efficient fabrication methodologies and processes are used, and the automation of labor-intensive operations

  10. From sunlight to phytomass: on the potential efficiency of converting solar radiation to phyto-energy.

    PubMed

    Amthor, Jeffrey S

    2010-12-01

    The relationship between solar radiation capture and potential plant growth is of theoretical and practical importance. The key processes constraining the transduction of solar radiation into phyto-energy (i.e. free energy in phytomass) were reviewed to estimate potential solar-energy-use efficiency. Specifically, the output:input stoichiometries of photosynthesis and photorespiration in C(3) and C(4) systems, mobilization and translocation of photosynthate, and biosynthesis of major plant biochemical constituents were evaluated. The maintenance requirement, an area of important uncertainty, was also considered. For a hypothetical C(3) grain crop with a full canopy at 30°C and 350 ppm atmospheric [CO(2)], theoretically potential efficiencies (based on extant plant metabolic reactions and pathways) were estimated at c. 0.041 J J(-1) incident total solar radiation, and c. 0.092 J J(-1) absorbed photosynthetically active radiation (PAR). At 20°C, the calculated potential efficiencies increased to 0.053 and 0.118 J J(-1) (incident total radiation and absorbed PAR, respectively). Estimates for a hypothetical C(4) cereal were c. 0.051 and c. 0.114 J J(-1), respectively. These values, which cannot be considered precise, are less than some previous estimates, and the reasons for the differences are considered. Field-based data indicate that exceptional crops may attain a significant fraction of potential efficiency. © The Author (2010). Journal compilation © New Phytologist Trust (2010).

  11. High-throughput process development: determination of dynamic binding capacity using microtiter filter plates filled with chromatography resin.

    PubMed

    Bergander, Tryggve; Nilsson-Välimaa, Kristina; Oberg, Katarina; Lacki, Karol M

    2008-01-01

    Steadily increasing demand for more efficient and more affordable biomolecule-based therapies puts a significant burden on biopharma companies to reduce the cost of R&D activities associated with introduction of a new drug to the market. Reducing the time required to develop a purification process would be one option to address the high cost issue. The reduction in time can be accomplished if more efficient methods/tools are available for process development work, including high-throughput techniques. This paper addresses the transition from traditional column-based process development to a modern high-throughput approach utilizing microtiter filter plates filled with a well-defined volume of chromatography resin. The approach is based on implementing the well-known batch uptake principle into microtiter plate geometry. Two variants of the proposed approach, allowing for either qualitative or quantitative estimation of dynamic binding capacity as a function of residence time, are described. Examples of quantitative estimation of dynamic binding capacities of human polyclonal IgG on MabSelect SuRe and of qualitative estimation of dynamic binding capacity of amyloglucosidase on a prototype of Capto DEAE weak ion exchanger are given. The proposed high-throughput method for determination of dynamic binding capacity significantly reduces time and sample consumption as compared to a traditional method utilizing packed chromatography columns, without sacrificing the accuracy of the data obtained.

  12. Scheduling job shop - A case study

    NASA Astrophysics Data System (ADS)

    Abas, M.; Abbas, A.; Khan, W. A.

    2016-08-01

    Scheduling in a job shop is important for efficient utilization of machines in the manufacturing industry. A number of scheduling algorithms are available, depending on the machine tools, indirect consumables, and jobs to be processed. In this paper, a case study is presented for scheduling jobs when parts are processed on the available machines. Through a time and motion study, setup time and operation time are measured as the total processing time for a variety of products having different manufacturing processes. Based on due dates, different levels of priority are assigned to the jobs, and the jobs are scheduled on the basis of priority. Using the measured processing times, the processing times of some new jobs are estimated, and an algorithm for efficient utilization of the available machines is proposed and validated.
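
    A minimal due-date-priority dispatch sketch is shown below: jobs are sorted by due date and each is assigned to the machine that becomes free first. It treats the shop as a set of parallel machines with a single combined processing time per job (setup plus operation), rather than a full job shop with routings, and the job data are illustrative.

    import heapq

    def schedule_by_due_date(jobs, n_machines):
        """jobs: list of (job_id, processing_time, due_date).
        Earliest-due-date priority; each job goes to the machine that frees up first.
        Returns a list of (job_id, machine, start, finish)."""
        queue = sorted(jobs, key=lambda j: j[2])          # priority by due date
        machines = [(0.0, m) for m in range(n_machines)]  # (available_time, machine_id)
        heapq.heapify(machines)
        schedule = []
        for job_id, p_time, _ in queue:
            available, m = heapq.heappop(machines)
            start, finish = available, available + p_time
            schedule.append((job_id, m, start, finish))
            heapq.heappush(machines, (finish, m))
        return schedule

    jobs = [("J1", 3.0, 10.0), ("J2", 2.0, 4.0), ("J3", 5.0, 12.0), ("J4", 1.5, 6.0)]
    for row in schedule_by_due_date(jobs, n_machines=2):
        print(row)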

  13. Pinhole induced efficiency variation in perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Agarwal, Sumanshu; Nair, Pradeep R.

    2017-10-01

    Process induced efficiency variation is a major concern for all thin film solar cells, including the emerging perovskite based solar cells. In this article, we address the effect of pinholes or process induced surface coverage aspects on the efficiency of such solar cells through detailed numerical simulations. Interestingly, we find that the pinhole size distribution affects the short circuit current and open circuit voltage in contrasting manners. Specifically, while the JSC is heavily dependent on the pinhole size distribution, surprisingly, the VOC seems to be only nominally affected by it. Further, our simulations also indicate that, with appropriate interface engineering, it is indeed possible to design a nanostructured device with efficiencies comparable to those of ideal planar structures. Additionally, we propose a simple technique based on terminal I-V characteristics to estimate the surface coverage in perovskite solar cells.

  14. Factors limiting device efficiency in organic photovoltaics.

    PubMed

    Janssen, René A J; Nelson, Jenny

    2013-04-04

    The power conversion efficiency of the most efficient organic photovoltaic (OPV) cells has recently increased to over 10%. To enable further increases, the factors limiting the device efficiency in OPV must be identified. In this review, the operational mechanism of OPV cells is explained and the detailed balance limit to photovoltaic energy conversion, as developed by Shockley and Queisser, is outlined. The various approaches that have been developed to estimate the maximum practically achievable efficiency in OPV are then discussed, based on empirical knowledge of organic semiconductor materials. Subsequently, approaches made to adapt the detailed balance theory to incorporate some of the fundamentally different processes in organic solar cells that originate from using a combination of two complementary, donor and acceptor, organic semiconductors using thermodynamic and kinetic approaches are described. The more empirical formulations to the efficiency limits provide estimates of 10-12%, but the more fundamental descriptions suggest limits of 20-24% to be reachable in single junctions, similar to the highest efficiencies obtained for crystalline silicon p-n junction solar cells. Closing this gap sets the stage for future materials research and development of OPV. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Automatic cloud coverage assessment of Formosat-2 image

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. Firstly, an un-supervised K-means method is used for automatically estimating the cloud statistic of the Formosat-2 image. Secondly, the cloud coverage of the Formosat-2 image is estimated by manual examination. A more accurate Automatic Cloud Coverage Assessment (ACCA) method clearly increases the efficiency of the second step by providing a good prediction of the cloud statistic. In this paper, mainly based on the research results from Chang et al, Irish, and Gotoh, we propose a modified Formosat-2 ACCA method which considers pre-processing and post-processing analysis. For the pre-processing analysis, the cloud statistic is determined using unsupervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel re-examination, and a cross-band filter method. The box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis and increase the efficiency of manual examination.
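
    The unsupervised K-means step of the pre-processing analysis can be sketched as below: pixel intensities are clustered and the brightest cluster is taken as cloud, its pixel share giving a rough coverage estimate. The Sobel, Otsu, re-examination, cross-band, and box-counting refinements of the proposed method are not reproduced, and the test scene is synthetic.

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_cloud_fraction(image, n_clusters=2, seed=0):
        """Rough cloud coverage estimate from a single-band image array.
        Clusters pixel intensities and treats the brightest cluster as cloud."""
        pixels = np.asarray(image, dtype=float).reshape(-1, 1)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
        cloud_cluster = np.argmax(km.cluster_centers_.ravel())
        return np.mean(km.labels_ == cloud_cluster)

    # Illustrative synthetic scene: dark ground with a bright "cloud" patch
    rng = np.random.default_rng(0)
    scene = rng.normal(0.2, 0.05, size=(100, 100))
    scene[30:60, 40:80] += 0.6
    print(f"estimated cloud fraction: {kmeans_cloud_fraction(scene):.2f}")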

  16. Quantitative optical imaging and sensing by joint design of point spread functions and estimation algorithms

    NASA Astrophysics Data System (ADS)

    Quirin, Sean Albert

    The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF which was engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single-molecules and micro-tubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for application to quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical-efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive-ranging, extended Depth-of-Field imaging and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs---a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically-efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.

  17. Saturation and energy-conversion efficiency of auroral kilometric radiation

    NASA Technical Reports Server (NTRS)

    Wu, C. S.; Tsai, S. T.; Xu, M. J.; Shen, J. W.

    1981-01-01

    A quasi-linear theory is used to study the saturation level of the auroral kilometric radiation. The investigation is based on the assumption that the emission is due to a cyclotron maser instability as suggested by Wu and Lee and Lee et al. The thermodynamic bound on the radiation energy is also estimated separately. The energy-conversion efficiency of the radiation process is discussed. The results are consistent with observations.

  18. Dynamics of ultrasonic additive manufacturing.

    PubMed

    Hehr, Adam; Dapino, Marcelo J

    2017-01-01

    Ultrasonic additive manufacturing (UAM) is a solid-state technology for joining similar and dissimilar metal foils near room temperature by scrubbing them together with ultrasonic vibrations under pressure. Structural dynamics of the welding assembly and work piece influence how energy is transferred during the process and ultimately, part quality. To understand the effect of structural dynamics during UAM, a linear time-invariant model is proposed to relate the inputs of shear force and electric current to resultant welder velocity and voltage. Measured frequency response and operating performance of the welder under no load is used to identify model parameters. Using this model and in-situ measurements, shear force and welder efficiency are estimated to be near 2000N and 80% when welding Al 6061-H18 weld foil, respectively. Shear force and welder efficiency have never been estimated before in UAM. The influence of processing conditions, i.e., welder amplitude, normal force, and weld speed, on shear force and welder efficiency are investigated. Welder velocity was found to strongly influence the shear force magnitude and efficiency while normal force and weld speed showed little to no influence. The proposed model is used to describe high frequency harmonic content in the velocity response of the welder during welding operations and coupling of the UAM build with the welder. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127

  20. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.

  1. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should locate a similar, unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
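
    For a Gaussian predictive distribution the CRPS has a closed form, so minimum-CRPS and maximum-likelihood fitting differ only in the objective being minimized. The sketch below illustrates the comparison on synthetic data; the data-generating model, link functions, and optimizer are assumptions, not the study's setup.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    # Minimal sketch: fit a non-homogeneous Gaussian regression (mean and spread both
    # depend on ensemble summaries) by minimum CRPS and by maximum likelihood, then
    # compare the coefficients.  The data-generating model is an illustrative assumption.

    rng = np.random.default_rng(0)
    n = 2000
    ens_mean = rng.normal(10.0, 3.0, n)          # ensemble mean forecast
    ens_sd = rng.uniform(0.5, 2.0, n)            # ensemble spread
    obs = 1.0 + 0.9 * ens_mean + rng.normal(0.0, 0.5 + 0.8 * ens_sd)

    def location_scale(params):
        a, b, c, d = params
        mu = a + b * ens_mean
        sigma = np.exp(c + d * np.log(ens_sd))   # log link keeps sigma positive
        return mu, sigma

    def crps_gaussian(params):
        mu, sigma = location_scale(params)
        z = (obs - mu) / sigma
        crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
        return crps.mean()

    def neg_log_lik(params):
        mu, sigma = location_scale(params)
        return -norm.logpdf(obs, loc=mu, scale=sigma).mean()

    start = np.array([0.0, 1.0, 0.0, 0.0])
    fit_crps = minimize(crps_gaussian, start, method="Nelder-Mead")
    fit_ml = minimize(neg_log_lik, start, method="Nelder-Mead")
    print("min-CRPS coefficients:", np.round(fit_crps.x, 3))
    print("max-lik coefficients: ", np.round(fit_ml.x, 3))
    ```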

  2. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive, and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to the calibration of flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
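
    Stage 1 produces a set of calibration alternatives, of which only the non-dominated (Pareto-optimal) ones matter for the later decision stages. The sketch below shows generic Pareto filtering of alternative scores; it is not the GOMORS surrogate optimizer itself, and the objective values are synthetic.

    ```python
    import numpy as np

    # Minimal sketch: given calibration alternatives scored on several (minimized)
    # objectives, keep only the non-dominated ones -- the Pareto set from which
    # Stages 2 and 3 would select a single calibration.

    def pareto_filter(objectives):
        """objectives: (n_alternatives, n_objectives) array, smaller is better."""
        n = objectives.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            if not keep[i]:
                continue
            dominated = np.all(objectives <= objectives[i], axis=1) & \
                        np.any(objectives < objectives[i], axis=1)
            if dominated.any():        # some other alternative dominates i
                keep[i] = False
        return keep

    rng = np.random.default_rng(1)
    # e.g. columns: 1 - NSE on flow peaks, 1 - NSE on low flows (both minimized)
    scores = rng.uniform(0.0, 1.0, size=(200, 2))
    front = pareto_filter(scores)
    print(f"{front.sum()} non-dominated alternatives out of {scores.shape[0]}")
    ```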

  3. Predictive information speeds up visual awareness in an individuation task by modulating threshold setting, not processing efficiency.

    PubMed

    De Loof, Esther; Van Opstal, Filip; Verguts, Tom

    2016-04-01

    Theories of visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral conditions compared to the incongruent condition. In the diffusion model analysis, processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
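
    The distinction drawn here can be made concrete with a simulated drift diffusion model: holding the drift rate (processing efficiency) fixed and raising only the decision threshold lengthens response times, which is the pattern attributed to incongruent cues. The parameter values below are illustrative assumptions, not the fitted estimates from the study.

    ```python
    import numpy as np

    # Minimal sketch of the drift-diffusion logic: same drift rate, different
    # decision thresholds, simulated by Euler steps until a boundary is crossed.

    def simulate_ddm(drift, threshold, n_trials=2000, dt=0.001, noise_sd=1.0, seed=0):
        rng = np.random.default_rng(seed)
        rts = np.empty(n_trials)
        for t in range(n_trials):
            evidence, time = 0.0, 0.0
            while abs(evidence) < threshold:
                evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
                time += dt
            rts[t] = time
        return rts

    drift = 1.5                                          # same processing efficiency
    rt_low = simulate_ddm(drift, threshold=0.8)          # congruent/neutral-like
    rt_high = simulate_ddm(drift, threshold=1.2)         # incongruent-like (raised threshold)
    print(f"mean RT, lower threshold:  {rt_low.mean():.3f} s")
    print(f"mean RT, higher threshold: {rt_high.mean():.3f} s")
    ```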

  4. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
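
    The GCV criterion itself is simple: choose the regularization parameter that minimizes the squared residual divided by the squared effective degrees of freedom. The sketch below applies plain GCV to a 1-D Tikhonov deblurring toy problem via the SVD; it does not include the Lanczos/Gauss-quadrature approximation that is the paper's contribution, and the blur model is an assumption.

    ```python
    import numpy as np

    # Minimal sketch: GCV-based selection of a Tikhonov regularization parameter for
    # a 1-D Gaussian deblurring problem, using the SVD filter-factor form.

    rng = np.random.default_rng(2)
    n = 200
    x_true = np.zeros(n); x_true[60:140] = 1.0          # simple 1-D "image"
    t = np.arange(n)
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)                   # normalized Gaussian blur
    y = A @ x_true + 0.01 * rng.standard_normal(n)      # blurred, noisy data

    U, s, Vt = np.linalg.svd(A)
    Uty = U.T @ y

    def gcv(lam):
        filt = s**2 / (s**2 + lam**2)                   # Tikhonov filter factors
        residual = np.sum(((1.0 - filt) * Uty) ** 2)
        dof = n - np.sum(filt)                          # trace(I - influence matrix)
        return residual / dof**2                        # GCV score (constants dropped)

    lams = np.logspace(-6, 1, 200)
    lam_best = lams[int(np.argmin([gcv(l) for l in lams]))]
    x_reg = Vt.T @ (s / (s**2 + lam_best**2) * Uty)     # regularized restoration
    rel_err = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
    print(f"GCV-selected lambda: {lam_best:.3e}, relative error: {rel_err:.3f}")
    ```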

  5. A structurally based analytic model for estimation of biomass and fuel loads of woodland trees

    Treesearch

    Robin J. Tausch

    2009-01-01

    Allometric/structural relationships in tree crowns are a consequence of the physical, physiological, and fluid conduction processes of trees, which control the distribution, efficient support, and growth of foliage in the crown. The structural consequences of these processes are used to develop an analytic model based on the concept of branch orders. A set of...

  6. An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.

    PubMed

    Liu, Jing; Huang, Kaiyu; Zhang, Guoxian

    2017-04-20

    We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to conventional distributed compressed sensing algorithms adopting a random sensing matrix, the proposed algorithm focuses on deterministic sensing matrices built directly on the real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts: the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected onto a subspace that is perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set will have no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm finds the true innovation support set correctly. Moreover, since the innovation support set estimation process is independent of the common support set estimation process, there is no requirement on the cardinality of either set; thus, the proposed DCSMP algorithm is capable of tackling the unknown sparsity problem successfully.
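
    The innovation-recovery step can be illustrated in a few lines: project the measurement vector onto the orthogonal complement of the columns indexed by the (already estimated) common support, then run OMP on the projected system. The sketch below assumes the common support is given and uses synthetic dimensions; it is not the full distributed DCSMP protocol.

    ```python
    import numpy as np

    # Minimal sketch of the projection + OMP stage for innovation support recovery.
    # Dimensions, signal model and the "estimated" common support are assumptions.

    rng = np.random.default_rng(3)
    n, m = 60, 200
    Phi = rng.standard_normal((n, m)) / np.sqrt(n)       # deterministic in practice

    common_support = [5, 40, 120]                        # assumed already estimated
    innovation_support = [17, 150]
    x = np.zeros(m)
    x[common_support] = rng.standard_normal(3)
    x[innovation_support] = rng.standard_normal(2)
    y = Phi @ x

    # Project out the common-support columns.
    C = Phi[:, common_support]
    P_perp = np.eye(n) - C @ np.linalg.pinv(C)
    y_p, Phi_p = P_perp @ y, P_perp @ Phi

    def omp(A, b, k):
        """Plain orthogonal matching pursuit selecting k atoms."""
        residual, support = b.copy(), []
        for _ in range(k):
            corr = np.abs(A.T @ residual)
            corr[support] = 0.0
            support.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ coef
        return sorted(support)

    print("recovered innovation support:", omp(Phi_p, y_p, k=2))
    ```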

  7. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations. Typically, yield calculation requires a large number of SPICE simulations, and circuit SPICE simulation accounts for the largest share of the time spent in yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The surrogate is constructed from a set of sample points obtained by SPICE simulation, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, a further accelerated algorithm is developed to further enhance the speed of yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.
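
    A minimal sketch of the surrogate idea follows: fit a sparse polynomial (lasso) model of a performance metric from a limited set of simulated samples, then Monte-Carlo the cheap surrogate to estimate the failure rate. The toy metric, spec limit, and sample counts are assumptions; real SRAM failure rates are far smaller and would additionally call for the fast Monte Carlo methods (e.g., importance sampling) mentioned in the abstract.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    # Minimal sketch: lasso-trained polynomial surrogate of an "expensive" metric,
    # then Monte Carlo on the surrogate to estimate the failure rate.

    rng = np.random.default_rng(8)

    def spice_like_metric(v):
        """Stand-in for an expensive SPICE-measured metric (e.g. read margin)."""
        return 0.30 + 0.05 * v[:, 0] - 0.04 * v[:, 1] + 0.02 * v[:, 0] * v[:, 2]

    n_train, n_vars = 300, 6                 # limited budget of simulated samples
    V_train = rng.standard_normal((n_train, n_vars))
    y_train = spice_like_metric(V_train) + 0.002 * rng.standard_normal(n_train)

    surrogate = make_pipeline(StandardScaler(),
                              PolynomialFeatures(degree=2, include_bias=False),
                              Lasso(alpha=1e-3, max_iter=50000))
    surrogate.fit(V_train, y_train)

    # Failure-rate estimation on the cheap surrogate (spec: metric must exceed 0.15).
    V_mc = rng.standard_normal((500_000, n_vars))
    fail_rate = np.mean(surrogate.predict(V_mc) < 0.15)
    print(f"estimated failure rate: {fail_rate:.2e}")
    ```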

  8. Practical experimental certification of computational quantum gates using a twirling procedure.

    PubMed

    Moussa, Osama; da Silva, Marcus P; Ryan, Colm A; Laflamme, Raymond

    2012-08-17

    Because of the technical difficulty of building large quantum computers, it is important to be able to estimate how faithful a given implementation is to an ideal quantum computer. The common approach of completely characterizing the computation process via quantum process tomography requires an exponential amount of resources, and thus is not practical even for relatively small devices. We solve this problem by demonstrating that twirling experiments previously used to characterize the average fidelity of quantum memories efficiently can be easily adapted to estimate the average fidelity of the experimental implementation of important quantum computation processes, such as unitaries in the Clifford group, in a practical and efficient manner with applicability in current quantum devices. Using this procedure, we demonstrate state-of-the-art coherent control of an ensemble of magnetic moments of nuclear spins in a single crystal solid by implementing the encoding operation for a 3-qubit code with only a 1% degradation in average fidelity discounting preparation and measurement errors. We also highlight one of the advances that was instrumental in achieving such high fidelity control.

  9. A Stream Tilling Approach to Surface Area Estimation for Large Scale Spatial Data in a Shared Memory System

    NASA Astrophysics Data System (ADS)

    Liu, Jiping; Kang, Xiaochen; Dong, Chun; Xu, Shenghua

    2017-12-01

    Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, the input/output (I/O) can easily become the bottleneck in parallelizing the algorithm, due to limited physical memory resources and the very slow disk transfer rate. In this paper, we propose a stream tilling approach to surface area estimation that first decomposes a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process is broken. Then, we realize a streaming framework for the scheduling of the I/O processes and computing units. Herein, each computing unit encapsulates an identical copy of the estimation algorithm, and multiple asynchronous computing units can work individually in parallel. Finally, the experiment performed demonstrates that our stream tilling estimation can efficiently alleviate the heavy pressure from I/O-bound work, and the measured speedups after optimization greatly outperform the directly parallel versions in shared memory systems with multi-core processors.

  10. Lean manufacturing analysis to reduce waste on production process of fan products

    NASA Astrophysics Data System (ADS)

    Siregar, I.; Nasution, A. A.; Andayani, U.; Sari, R. M.; Syahputri, K.; Anizar

    2018-02-01

    This research is based on a case study at an electrical company. One of the products studied is a fan; when running its production process there is time that is not value-added, including inefficient movement of raw materials and fan molding components. This study aims to reduce waste, i.e., non-value-added activities, and to shorten the total lead time by using Value Stream Mapping tools. Lean manufacturing methods are used to analyze and reduce the non-value-added activities, namely the value stream mapping analysis tools, process activity mapping with 5W1H, and the 5 whys tool. The analysis shows that non-value-added activities in the fan production process amount to 647.94 minutes of the total lead time of 725.68 minutes. The process cycle efficiency of the fan production process is therefore still very low, at 11%. Estimates for the improved process show the total lead time decreasing to 340.9 minutes and the process cycle efficiency increasing to 24%, indicating a better production process.
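
    The quoted figures follow from the usual definition, process cycle efficiency = value-added time / total lead time, as this tiny check shows (the improved-state figure assumes the value-added time itself is unchanged):

    ```python
    # Tiny check of the process-cycle-efficiency (PCE) figures quoted in the abstract.
    total_lead_time = 725.68          # minutes
    non_value_added = 647.94          # minutes
    value_added = total_lead_time - non_value_added

    pce_current = value_added / total_lead_time
    pce_improved = value_added / 340.9    # improved total lead time from the abstract

    print(f"value-added time: {value_added:.2f} min")
    print(f"current PCE:  {pce_current:.1%}")    # ~11%
    print(f"improved PCE: {pce_improved:.1%}")   # close to the 24% quoted
    ```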

  11. Estimating the Cross-Shelf Export of Riverine Materials: Part 2. Estimates of Global Freshwater and Nutrient Export

    NASA Astrophysics Data System (ADS)

    Izett, Jonathan G.; Fennel, Katja

    2018-02-01

    Rivers deliver large amounts of fresh water, nutrients, and other terrestrially derived materials to the coastal ocean. Where inputs accumulate on the shelf, harmful effects such as hypoxia and eutrophication can result. In contrast, where export to the open ocean is efficient, riverine inputs contribute to global biogeochemical budgets. Assessing the fate of riverine inputs is difficult on a global scale. Global ocean models are generally too coarse to resolve the relatively small-scale features of river plumes. High-resolution regional models have been developed for individual river plume systems, but it is impractical to apply this approach globally to all rivers. Recently, generalized parameterizations have been proposed to estimate the export of riverine fresh water to the open ocean (Izett & Fennel, 2018, https://doi.org/10.1002/2017GB005667; Sharples et al., 2017, https://doi.org/10.1002/2016GB005483). Here the relationships of Izett and Fennel (2018, https://doi.org/10.1002/2017GB005667) are used to derive global estimates of open-ocean export of fresh water and dissolved inorganic silicate, dissolved organic carbon, and dissolved organic and inorganic phosphorus and nitrogen. We estimate that only 15-53% of riverine fresh water reaches the open ocean directly in river plumes; nutrient export is even less efficient because of processing on continental shelves. Due to geographic differences in riverine nutrient delivery, dissolved silicate is the most efficiently exported to the open ocean (7-56.7%), while dissolved inorganic nitrogen is the least efficiently exported (2.8-44.3%). These results are consistent with previous estimates and provide a simple way to parameterize export to the open ocean in global models.

  12. Comparison between remote sensing and a dynamic vegetation model for estimating terrestrial primary production of Africa.

    PubMed

    Ardö, Jonas

    2015-12-01

    Africa is an important part of the global carbon cycle. It is also a continent facing potential problems due to increasing resource demand in combination with climate change-induced changes in resource supply. Quantifying the pools and fluxes constituting the terrestrial African carbon cycle is a challenge, because of uncertainties in meteorological driver data, lack of validation data, and potentially uncertain representation of important processes in major ecosystems. In this paper, terrestrial primary production estimates derived from remote sensing and a dynamic vegetation model are compared and quantified for major African land cover types. Continental gross primary production estimates derived from remote sensing were higher than corresponding estimates derived from a dynamic vegetation model. However, estimates of continental net primary production from remote sensing were lower than corresponding estimates from the dynamic vegetation model. Variation was found among land cover classes, and the largest differences in gross primary production were found in the evergreen broadleaf forest. Average carbon use efficiency (NPP/GPP) was 0.58 for the vegetation model and 0.46 for the remote sensing method. Validation versus in situ data of aboveground net primary production revealed significant positive relationships for both methods. A combination of the remote sensing method with the dynamic vegetation model did not strongly affect this relationship. Observed significant differences in estimated vegetation productivity may have several causes, including model design and temperature sensitivity. Differences in carbon use efficiency reflect underlying model assumptions. Integrating the realistic process representation of dynamic vegetation models with the high-resolution observational strength of remote sensing may support realistic estimation of components of the carbon cycle and enhance resource monitoring, provided that suitable validation data are available.

  13. Spatio-temporal Convergence of Maximum Daily Light-Use Efficiency Based on Radiation Absorption by Canopy Chlorophyll

    DOE PAGES

    Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian; ...

    2018-04-03

    Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production (GPP) estimation. Here we use satellite-based solar-induced chlorophyll fluorescence (SIF) as a proxy for photosynthetically active radiation absorbed by chlorophyll (APAR_chl) and derive an estimate of the fraction of APAR_chl (fPAR_chl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ε_max^chl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPAR_chl, suggesting that the corresponding ε_max^chl has less seasonal variation. Finally, this spatio-temporal convergence of LUE derived from fPAR_chl can be used to build simple but robust GPP models and to better constrain process-based models.

  14. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate the unknown parameters. In both the design and estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.
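
    The design criterion, expected relative entropy between posterior and prior (expected information gain), can be estimated with a nested Monte Carlo sum and then maximized over candidate sampling locations. The sketch below does this for a toy 1-D surrogate of a transport model; the forward model, priors, and noise level are illustrative assumptions, and the sparse-grid surrogate and MCMC estimation stages of the paper are omitted.

    ```python
    import numpy as np

    # Minimal sketch: rank candidate sampling locations by expected information gain
    # (expected relative entropy), estimated with a nested Monte Carlo estimator on a
    # toy forward model g(theta, d).  Not the contaminant transport model of the study.

    rng = np.random.default_rng(4)
    sigma = 0.05                                   # measurement noise std (assumed)

    def surrogate(theta, d):
        """Toy forward model: concentration at location d for (strength, position)."""
        strength, position = theta[..., 0], theta[..., 1]
        return strength * np.exp(-((d - position) ** 2) / 0.5)

    def expected_information_gain(d, n_outer=400, n_inner=400):
        theta_outer = rng.uniform([0.5, 0.0], [2.0, 5.0], size=(n_outer, 2))
        theta_inner = rng.uniform([0.5, 0.0], [2.0, 5.0], size=(n_inner, 2))
        y = surrogate(theta_outer, d) + sigma * rng.standard_normal(n_outer)
        log_lik = -0.5 * ((y - surrogate(theta_outer, d)) / sigma) ** 2
        # log evidence p(y|d), marginalized over an inner prior sample
        diff = (y[:, None] - surrogate(theta_inner, d)[None, :]) / sigma
        log_evid = np.log(np.mean(np.exp(-0.5 * diff**2), axis=1) + 1e-300)
        return np.mean(log_lik - log_evid)         # Gaussian constants cancel

    candidate_locations = np.linspace(0.0, 5.0, 11)
    eig = [expected_information_gain(d) for d in candidate_locations]
    best = candidate_locations[int(np.argmax(eig))]
    print("expected information gain per candidate location:", np.round(eig, 3))
    print("selected sampling location:", best)
    ```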

  15. Spatio-temporal Convergence of Maximum Daily Light-Use Efficiency Based on Radiation Absorption by Canopy Chlorophyll

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yao; Xiao, Xiangming; Wolf, Sebastian

    Light-use efficiency (LUE), which quantifies the plants' efficiency in utilizing solar radiation for photosynthetic carbon fixation, is an important factor for gross primary production (GPP) estimation. Here we use satellite-based solar-induced chlorophyll fluorescence (SIF) as a proxy for photosynthetically active radiation absorbed by chlorophyll (APAR_chl) and derive an estimate of the fraction of APAR_chl (fPAR_chl) from four remotely sensed vegetation indicators. By comparing maximum LUE estimated at different scales from 127 eddy flux sites, we found that the maximum daily LUE based on PAR absorption by canopy chlorophyll (ε_max^chl), unlike other expressions of LUE, tends to converge across biome types. The photosynthetic seasonality in tropical forests can also be tracked by the change of fPAR_chl, suggesting that the corresponding ε_max^chl has less seasonal variation. Finally, this spatio-temporal convergence of LUE derived from fPAR_chl can be used to build simple but robust GPP models and to better constrain process-based models.

  16. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    PubMed

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method, which combines network efficiency with molecular docking scores, was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The better correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of the anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by combining network efficiency analysis with scoring functions from molecular docking.
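
    The network part of the approach rests on the standard global-efficiency measure (the average inverse shortest-path length); a node's fragility can then be scored by the efficiency drop when it is removed. The sketch below demonstrates the calculation on a small toy graph that merely stands in for the clotting cascade network; the edge list is an assumption, not the network used in the paper.

    ```python
    import networkx as nx

    # Minimal sketch: global efficiency and node "fragility" (efficiency drop upon
    # node removal) on an illustrative stand-in graph of clotting factors.

    G = nx.Graph()
    G.add_edges_from([
        ("XII", "XI"), ("XI", "IX"), ("IX", "X"), ("VIII", "X"),
        ("VII", "X"), ("X", "II"), ("II", "I"), ("II", "XIII"),
    ])

    baseline = nx.global_efficiency(G)

    fragility = {}
    for node in list(G.nodes):
        H = G.copy()
        H.remove_node(node)
        fragility[node] = baseline - nx.global_efficiency(H)

    ranked = sorted(fragility.items(), key=lambda kv: kv[1], reverse=True)
    print(f"baseline global efficiency: {baseline:.3f}")
    for node, drop in ranked:
        print(f"removing factor {node}: efficiency drop {drop:.3f}")
    ```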

  17. A Network-Based Multi-Target Computational Estimation Scheme for Anticoagulant Activities of Compounds

    PubMed Central

    Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-01-01

    Background: Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of the different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. Methodology: We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From the results of the network efficiency calculation for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by the complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological processes in the human clotting cascade system. Furthermore, the method, which combines network efficiency with molecular docking scores, was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The better correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of the anticoagulant activities of compounds in drug discovery. Conclusions: This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by combining network efficiency analysis with scoring functions from molecular docking. PMID:21445339

  18. Neural network fusion capabilities for efficient implementation of tracking algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Amoozegar, Farid

    1996-05-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision-making is one of the major capabilities of trained multilayer neural networks that has been recognized in recent times. While the development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. In this paper we describe the capabilities and functionality of neural network algorithms for data fusion and implementation of nonlinear tracking filters. For a discussion of details, and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. Such an approach results in an overall nonlinear tracking filter which has several advantages over the popular efforts at designing nonlinear estimation algorithms for tracking applications, the principal one being the reduction of mathematical and computational complexities. A system architecture that efficiently integrates the processing capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described in this paper.

  19. The thermodynamic efficiency of ATP synthesis in oxidative phosphorylation.

    PubMed

    Nath, Sunil

    2016-12-01

    Since ATP is the chief energy source of eukaryotic cells, it is important to determine the thermodynamic efficiency of ATP synthesis in oxidative phosphorylation (OX PHOS). Previous estimates of the thermodynamic efficiency of this vital process have ranged from Lehninger's original back-of-the-envelope calculation of 38%, to the often-quoted value of 55-60% in current textbooks of biochemistry, to high values of 90% from recent information-theoretic considerations, and reports of realizations of close to ideal 100% efficiencies in single-molecule experiments. Hence this problem has been reinvestigated from first principles. The overall thermodynamic efficiency of ATP synthesis in the mitochondrial energy transduction OX PHOS process has been found to lie between 40 and 41% from four different approaches based on a) estimation using structural and biochemical data, b) fundamental nonequilibrium thermodynamic analysis, c) novel insights arising from Nath's torsional mechanism of energy transduction and ATP synthesis, and d) the overall balance of cellular energetics. The torsional mechanism also offers an explanation for the observation of a thermodynamic efficiency approaching 100% in some experiments. Applications of the unique, molecular machine mode of functioning of F1FO-ATP synthase, involving direct inter-conversion of chemical and mechanical energies, in the design and fabrication of novel, man-made mechanochemical devices have been envisaged, and some new ways to exorcise Maxwell's demon have been proposed. It is hoped that analysis of the fundamental problem of energy transduction in OX PHOS from a fresh perspective will catalyze new avenues of research in this interdisciplinary field. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Benefit-cost estimation for alternative drinking water maximum contaminant levels

    NASA Astrophysics Data System (ADS)

    Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.

    2001-08-01

    A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.

  1. A quantitative and spatially resolved analysis of the performance-bottleneck in high efficiency, planar hybrid perovskite solar cells

    DOE PAGES

    Draguta, Sergiu; Christians, Jeffrey A.; Morozov, Yurii V.; ...

    2018-01-01

    Hybrid perovskites represent a potential paradigm shift for the creation of low-cost solar cells. Current power conversion efficiencies (PCEs) exceed 22%. However, despite this, record PCEs are still far from their theoretical Shockley–Queisser limit of 31%. To increase these PCE values, there is a pressing need to understand, quantify and microscopically model charge recombination processes in full working devices. Here, we present a complete microscopic account of charge recombination processes in high efficiency (18–19% PCE) hybrid perovskite (mixed cation and methylammonium lead iodide) solar cells. We employ diffraction-limited optical measurements along with relevant kinetic modeling to establish, for the first time, local photoluminescence quantum yields, trap densities, trapping efficiencies, charge extraction efficiencies, quasi-Fermi-level splitting, and effective PCE estimates. Correlations between these spatially resolved parameters, in turn, allow us to conclude that intrinsic electron traps in the perovskite active layers limit the performance of these state-of-the-art hybrid perovskite solar cells.

  2. A quantitative and spatially resolved analysis of the performance-bottleneck in high efficiency, planar hybrid perovskite solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draguta, Sergiu; Christians, Jeffrey A.; Morozov, Yurii V.

    Hybrid perovskites represent a potential paradigm shift for the creation of low-cost solar cells. Current power conversion efficiencies (PCEs) exceed 22%. However, despite this, record PCEs are still far from their theoretical Shockley–Queisser limit of 31%. To increase these PCE values, there is a pressing need to understand, quantify and microscopically model charge recombination processes in full working devices. Here, we present a complete microscopic account of charge recombination processes in high efficiency (18–19% PCE) hybrid perovskite (mixed cation and methylammonium lead iodide) solar cells. We employ diffraction-limited optical measurements along with relevant kinetic modeling to establish, for the first time, local photoluminescence quantum yields, trap densities, trapping efficiencies, charge extraction efficiencies, quasi-Fermi-level splitting, and effective PCE estimates. Correlations between these spatially resolved parameters, in turn, allow us to conclude that intrinsic electron traps in the perovskite active layers limit the performance of these state-of-the-art hybrid perovskite solar cells.

  3. From Policy to Compliance: Federal Energy Efficient Product Procurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMates, Laurèn; Scodel, Anna

    Federal buyers are required to purchase energy-efficient products in an effort to minimize energy use in the federal sector, save the federal government money, and spur market development of efficient products. The Federal Energy Management Program (FEMP)'s Energy Efficient Product Procurement (EEPP) Program helps federal agencies comply with the requirement to purchase energy-efficient products by providing technical assistance and guidance and setting efficiency requirements for certain product categories. Past studies have estimated the savings potential of purchasing energy-efficient products at over $500 million per year in energy costs across federal agencies.1 Despite the strong policy support for EEPP and the resources available, energy-efficient product purchasing operates within complex decision-making processes and operational structures; implementation challenges exist that may hinder agencies' ability to comply with purchasing requirements. The shift to purchasing green products, including energy-efficient products, relies on "buy in" from a variety of potential actors throughout different purchasing pathways. Challenges may be especially high for EEPP relative to other sustainable acquisition programs, given that efficient products frequently have a higher first cost than non-efficient ones, which may be perceived as a conflict with fiscal responsibility, or more simply as problematic for agency personnel trying to stretch limited budgets. Federal buyers may also face challenges in determining whether a given product is subject to EEPP requirements. Previous analysis of agency compliance with EEPP, conducted by the Alliance to Save Energy (ASE), shows that federal agencies are getting better at purchasing energy-efficient products. ASE conducted two reviews of relevant solicitations for product and service contracts listed on Federal Business Opportunities (FBO), the centralized website where federal agencies are required to post procurements greater than $25,000. ASE estimated a compliance rate of 46% in 2010, up from an estimate of 12% in 2008. Our work updates and expands on ASE's 2010 analysis to gauge agency compliance with EEPP requirements.

  4. The effect of nonadiabaticity on the efficiency of quantum memory based on an optical cavity

    NASA Astrophysics Data System (ADS)

    Veselkova, N. G.; Sokolov, I. V.

    2017-07-01

    Quantum efficiency is an important characteristic of quantum memory devices, which are aimed at recording, storing, and reading the quantum state of light signals. In the case of memory based on an ensemble of cold atoms placed in an optical cavity, the efficiency is restricted, in particular, by relaxation processes in the system of active atomic levels. We show how the effect of the relaxation on the quantum efficiency can be determined in a regime of memory usage in which the evolution of signals in time is not arbitrarily slow on the scale of the field lifetime in the cavity, and when the frequently used approximation of adiabatic elimination of the quantized cavity mode field cannot be applied. Taking into account the effect of the nonadiabaticity on the memory quality is of interest in view of the fact that, in order to increase the field-medium coupling parameter, a higher cavity quality factor is required, whereas storing and processing of sequences of many signals in the memory implies that their duration is reduced. We consider the applicability of the well-known efficiency estimates via the system cooperativity parameter and derive an estimate of a more general form. In connection with the theoretical description of memory of the given type, we also discuss qualitative differences in the behavior of a random source introduced into the Heisenberg-Langevin equations for atomic variables in the cases of a large and a small number of atoms.

  5. An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Moser, D.E.; Suggs, Robb M.; Cooke, W.J.

    2010-01-01

    The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, η, is a unique function of velocity, with an extremely large variation in the laboratory range of under 8 km/s but a necessarily small variation with velocity in the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process, which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed which differs from the usual polynomial models of crater formation. The model is valid for the early-time portion of the process and focuses on the deposition of internal energy into the regolith. The model is successfully compared with experimental luminous efficiency data from laboratory impacts and from astronomical determinations, and scaling factors are estimated. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors.

  6. Mode extraction on wind turbine blades via phase-based video motion estimation

    NASA Astrophysics Data System (ADS)

    Sarrafi, Aral; Poozesh, Peyman; Niezrecki, Christopher; Mao, Zhu

    2017-04-01

    In recent years, image processing techniques have been applied more often for structural dynamics identification, characterization, and structural health monitoring. Although it is a non-contact and full-field measurement method, image processing still has a long way to go to outperform other conventional sensing instruments (i.e., accelerometers, strain gauges, laser vibrometers, etc.). However, the technologies associated with image processing are developing rapidly and gaining more attention in a variety of engineering applications, including structural dynamics identification and modal analysis. Among numerous motion estimation and image-processing methods, phase-based video motion estimation is considered one of the most efficient in terms of computational cost and noise robustness. In this paper, phase-based video motion estimation is adopted for structural dynamics characterization on a 2.3-meter-long Skystream wind turbine blade, and the modal parameters (natural frequencies, operating deflection shapes) are extracted. The phase-based video processing adopted in this paper provides reliable full-field 2-D motion information, which is beneficial for manufacturing certification and model updating at the design stage. The phase-based video motion estimation approach is demonstrated by processing data on a full-scale commercial structure (i.e., a wind turbine blade) with complex geometry and properties, and the results obtained have a good correlation with the modal parameters extracted from accelerometer measurements, especially for the first four bending modes, which have significant importance in blade characterization.
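
    The underlying principle is that, for a band-passed signal, a spatial shift appears as a proportional shift in local phase, so displacement can be read from the phase difference between frames divided by the filter's centre frequency. The 1-D sketch below illustrates this with a complex Gabor filter; the full method uses 2-D steerable filters on video frames, which is omitted here.

    ```python
    import numpy as np

    # Minimal 1-D illustration of phase-based motion estimation: band-pass two
    # "frames" with a complex Gabor filter and read the sub-pixel displacement from
    # the local phase difference, delta ~ (phi1 - phi2) / omega0.

    def gabor_response(signal, omega0=0.5, sigma=8.0):
        t = np.arange(-4 * sigma, 4 * sigma + 1)
        kernel = np.exp(-0.5 * (t / sigma) ** 2) * np.exp(1j * omega0 * t)
        return np.convolve(signal, kernel, mode="same")

    n = 512
    x = np.arange(n)
    true_shift = 0.35                                  # sub-pixel displacement
    frame1 = np.sin(0.5 * x) + 0.3 * np.sin(0.13 * x)
    frame2 = np.sin(0.5 * (x - true_shift)) + 0.3 * np.sin(0.13 * (x - true_shift))

    omega0 = 0.5
    r1, r2 = gabor_response(frame1, omega0), gabor_response(frame2, omega0)
    phase_diff = np.angle(r1 * np.conj(r2))            # phi1 - phi2 at each sample
    center = slice(100, 412)                           # ignore boundary effects
    estimated_shift = np.mean(phase_diff[center]) / omega0
    print(f"true shift: {true_shift}, estimated: {estimated_shift:.3f} pixels")
    ```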

  7. State road fund revenue collection processes : differences and opportunities of improved efficiency

    DOT National Transportation Integrated Search

    2001-07-01

    Research regarding the administration and collection of road fund revenues has focused on gaining an understanding of the motivations for tax evasion, methods of evasion, and estimates of the magnitude of evasion for individual states. To our knowled...

  8. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1) introduction with reference to background material; Section (2) examples and applications; Section (3) subroutine directory summary; Section (4) the subroutine directory user description with input, output, and usage explained; and Section (5) subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.

  9. Age and gender estimation using Region-SIFT and multi-layered SVM

    NASA Astrophysics Data System (ADS)

    Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun

    2018-04-01

    In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The suggested framework entails three processes. The first step is landmark-based face alignment. The second step is feature extraction. In this step, we introduce the region-SIFT feature extraction method based on facial landmarks. First, we define sub-regions of the face. We then extract SIFT features from each sub-region. In order to reduce the dimensionality of the features, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using multi-layered Support Vector Machines (SVMs) for efficient classification. Rather than performing gender estimation and age estimation independently, the use of the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collect a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was performed with the FERET database, CACD database, and DGIST_C database. The experimental results demonstrate that the proposed approach classifies age and gender very efficiently and accurately.
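
    The "multi-layered" classifier can be sketched as a first SVM that predicts gender followed by gender-specific SVMs that predict the age group. The example below uses synthetic descriptors in place of region-SIFT features and omits landmark alignment and LDA; the feature dimension, class counts, and injected signal are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Minimal sketch of a layered classifier: layer 1 predicts gender, layer 2 uses a
    # gender-specific model to predict the age group.  Data are synthetic stand-ins.

    rng = np.random.default_rng(5)
    n, dim = 600, 128                                   # stand-in descriptors
    X = rng.standard_normal((n, dim))
    gender = rng.integers(0, 2, n)                      # 0 = female, 1 = male
    age_group = rng.integers(0, 4, n)                   # four coarse age bins
    # inject some signal so the classifiers have something to learn
    X[:, 0] += 2.0 * gender
    X[:, 1] += 1.5 * age_group

    def svm():
        return make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf"))

    gender_clf = svm().fit(X, gender)
    age_clf = {g: svm().fit(X[gender == g], age_group[gender == g]) for g in (0, 1)}

    def predict(x):
        x = x.reshape(1, -1)
        g = int(gender_clf.predict(x)[0])               # layer 1: gender
        a = int(age_clf[g].predict(x)[0])               # layer 2: age, given gender
        return g, a

    print(predict(X[0]))
    ```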

  10. In-line agglomeration degree estimation in fluidized bed pellet coating processes using visual imaging.

    PubMed

    Mehle, Andraž; Kitak, Domen; Podrekar, Gregor; Likar, Boštjan; Tomaževič, Dejan

    2018-05-09

    Agglomeration of pellets in fluidized bed coating processes is an undesirable phenomenon that affects the yield and quality of the product. Within the scope of PAT guidance, we present a system that utilizes visual imaging for in-line monitoring of the agglomeration degree. Seven pilot-scale Wurster coating processes were executed under various process conditions, providing a wide spectrum of process outcomes. Images of pellets were acquired during the coating processes in a contactless manner through an observation window of the coating apparatus. Efficient image analysis methods were developed for automatic recognition of discrete pellets and agglomerates in the acquired images. The in-line obtained agglomeration degree trends revealed the agglomeration dynamics in distinct phases of the coating processes. We compared the in-line estimated agglomeration degree at the end point of each process to the results obtained by the off-line sieve analysis reference method. A strong positive correlation was obtained (coefficient of determination R² = 0.99), confirming the feasibility of the approach. The in-line estimated agglomeration degree enables early detection of agglomeration and provides a means for timely interventions to keep it within an acceptable range. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Application of square-root filtering for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain the necessary precision with efficient software, a six-state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
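
    The numerical core of such a filter is the square-root measurement update, which propagates a triangular factor of the covariance instead of the covariance itself for better conditioning. Below is a generic QR-based ("array") measurement update on a six-state example; the state layout, measurement model, and noise levels are illustrative assumptions, not the Landsat-D design.

    ```python
    import numpy as np

    # Minimal sketch of a QR-based (array) square-root Kalman measurement update.
    # P = S S^T is never formed explicitly; only its factor S is propagated.

    def sqrt_measurement_update(x, S, z, H, sqrtR):
        """x: state, S: lower-triangular sqrt of covariance, z: measurement."""
        n, m = S.shape[0], H.shape[0]
        pre = np.zeros((m + n, m + n))
        pre[:m, :m] = sqrtR
        pre[:m, m:] = H @ S
        pre[m:, m:] = S
        # Triangularize: post @ post.T == pre @ pre.T, with post lower triangular.
        post = np.linalg.qr(pre.T, mode="r").T
        S_y = post[:m, :m]                    # sqrt of innovation covariance
        K_bar = post[m:, :m]
        S_new = post[m:, m:]                  # updated covariance square root
        gain = K_bar @ np.linalg.inv(S_y)     # Kalman gain
        x_new = x + gain @ (z - H @ x)
        return x_new, S_new

    # Six-state example (e.g. three attitude errors + three gyro biases).
    rng = np.random.default_rng(6)
    n, m = 6, 2                               # two star-tracker angle measurements
    A = rng.standard_normal((n, n))
    S = np.linalg.cholesky(A @ A.T + n * np.eye(n))
    x = np.zeros(n)
    H = np.hstack([np.eye(m), np.zeros((m, n - m))])
    sqrtR = 0.01 * np.eye(m)
    z = np.array([0.002, -0.001])

    x_new, S_new = sqrt_measurement_update(x, S, z, H, sqrtR)
    P_new = S_new @ S_new.T                   # reconstructed updated covariance
    print(np.round(x_new, 5))
    print(np.round(np.diag(P_new), 5))
    ```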

  12. Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.

    PubMed

    Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J

    2016-03-01

    To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimal stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with a sampling intensity of 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%; SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal fat volume reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
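
    The point-counting estimator behind the method is the Cavalieri principle: volume ≈ slice thickness × area represented by each grid point × number of points hitting the structure, counted on systematically sampled slices. The sketch below compares it with full voxel counting (the planimetry analogue) on a synthetic phantom; the phantom, grid spacing, and slice sampling are assumptions.

    ```python
    import numpy as np

    # Minimal sketch of the Cavalieri point-counting estimator on a stack of binary
    # "fat" masks, compared against exhaustive voxel counting.

    voxel_mm = 1.0                       # in-plane voxel size (mm)
    slice_mm = 5.0                       # slice thickness (mm)
    nz, ny, nx = 40, 200, 200

    # Synthetic "fat" region: a sphere of radius 60 mm.
    z, y, x = np.mgrid[:nz, :ny, :nx]
    mask = ((slice_mm * (z - nz / 2)) ** 2 +
            (voxel_mm * (y - ny / 2)) ** 2 +
            (voxel_mm * (x - nx / 2)) ** 2) < 60.0 ** 2

    # Reference: full voxel counting ("manual planimetry" analogue).
    v_planimetry = mask.sum() * voxel_mm * voxel_mm * slice_mm

    # Stereology: every 2nd slice, point grid with 10 mm spacing (random offset).
    rng = np.random.default_rng(7)
    spacing = 10                                     # grid spacing in voxels (10 mm)
    area_per_point = (spacing * voxel_mm) ** 2
    oy, ox = rng.integers(0, spacing, 2)
    hits = mask[::2, oy::spacing, ox::spacing].sum()
    v_stereology = hits * area_per_point * slice_mm * 2   # *2 for slice sampling

    print(f"planimetry volume: {v_planimetry / 1000:.1f} cm^3")
    print(f"stereology volume: {v_stereology / 1000:.1f} cm^3")
    ```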

  13. Spatial Patterns in the Efficiency of the Biological Pump: What Controls Export Ratios at the Global Scale?

    NASA Astrophysics Data System (ADS)

    Moore, J. K.

    2016-02-01

    The efficiency of the biological pump is influenced by complex interactions between chemical, biological, and physical processes. The efficiency of export out of surface waters and down through the water column to the deep ocean has been linked to a number of factors including biota community composition, production of mineral ballast components, physical aggregation and disaggregation processes, and ocean oxygen concentrations. I will examine spatial patterns in the export ratio and the efficiency of the biological pump at the global scale using the Community Earth System Model (CESM). There are strong spatial variations in the export efficiency as simulated by the CESM, which are strongly correlated with new nutrient inputs to the euphotic zone and their impacts on phytoplankton community structure. I will compare CESM simulations that include dynamic, variable export ratios driven by the phytoplankton community structure, with simulations that impose a near-constant export ratio to examine the effects of export efficiency on nutrient and surface chlorophyll distributions. The model predicted export ratios will also be compared with recent satellite-based estimates.

  14. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    PubMed Central

    Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping

    2015-01-01

    A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters’ outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correction and efficiency of the proposed method are verified by computer simulation results. PMID:26694385

  15. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar.

    PubMed

    Yang, Shouguo; Li, Yong; Zhang, Kunhui; Tang, Weiping

    2015-12-14

    A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix estimation and only requiring a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitted and received geometrical configuration. The correction and efficiency of the proposed method are verified by computer simulation results.

  16. Software sensors for bioprocesses.

    PubMed

    Bogaerts, Ph; Vande Wouwer, A

    2003-10-01

    State estimation is a significant problem in biotechnological processes, due to the general lack of hardware sensor measurements of the variables describing the process dynamics. The objective of this paper is to review a number of software sensor design methods, including extended Kalman filters, receding-horizon observers, asymptotic observers, and hybrid observers, which can be efficiently applied to bioprocesses. These several methods are illustrated with simulation and real-life case studies.

  17. A system dynamic model to estimate hydrological processes and water use in a eucalypt plantation

    Treesearch

    Ying Ouyang; Daping Xu; Ted Leininger; Ningnan Zhang

    2016-01-01

    Eucalypts have been identified as one of the best feedstocks for bioenergy production due to their fast-growth rate and coppicing ability. However, their water use efficiency along with the adverse environmental impacts is still a controversial issue. In this study, a system dynamic model was developed to estimate the hydrological processes and water use in a eucalyptus...

  18. Robust Characterization of Loss Rates

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2015-08-01

    Many physical implementations of qubits—including ion traps, optical lattices and linear optics—suffer from loss. A nonzero probability of irretrievably losing a qubit can be a substantial obstacle to fault-tolerant methods of processing quantum information, requiring new techniques to safeguard against loss that introduce an additional overhead that depends upon the loss rate. Here we present a scalable and platform-independent protocol for estimating the average loss rate (averaged over all input states) resulting from an arbitrary Markovian noise process, as well as an independent estimate of detector efficiency. Moreover, we show that our protocol gives an additional constraint on estimated parameters from randomized benchmarking that improves the reliability of the estimated error rate and provides a new indicator for non-Markovian signatures in the experimental data. We also derive a bound for the state-dependent loss rate in terms of the average loss rate.

  19. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). Then we perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.
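
    A minimal sketch of the efficiency and scalability bookkeeping described above. The peak rate, flop counts, and run times are placeholder numbers standing in for HPM counter readings; they are not measurements from the study.

      # Efficiency = observed / peak performance; scalability = speedup vs. CPU count.
      PEAK_FLOPS_PER_CPU = 5.2e9          # assumed peak rate for one CPU (flop/s)

      def efficiency(flops_executed, elapsed_seconds, n_cpus):
          """Observed-to-peak ratio for a run on n_cpus processors."""
          observed = flops_executed / elapsed_seconds
          return observed / (PEAK_FLOPS_PER_CPU * n_cpus)

      def scalability(times_by_cpus):
          """Speedup and parallel efficiency relative to the smallest CPU count."""
          base_cpus = min(times_by_cpus)
          base_time = times_by_cpus[base_cpus]
          return {n: (base_time / t, (base_time / t) / (n / base_cpus))
                  for n, t in sorted(times_by_cpus.items())}

      # Placeholder counter readings (flops, seconds) for an RHS-like operator.
      print(efficiency(flops_executed=3.1e11, elapsed_seconds=120.0, n_cpus=2))
      print(scalability({2: 120.0, 8: 33.0, 16: 18.5, 32: 11.0}))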

  20. Energy efficiency and greenhouse gas emission intensity of petroleum products at U.S. refineries.

    PubMed

    Elgowainy, Amgad; Han, Jeongwoo; Cai, Hao; Wang, Michael; Forman, Grant S; DiVita, Vincent B

    2014-07-01

    This paper describes the development of (1) a formula correlating the variation in overall refinery energy efficiency with crude quality, refinery complexity, and product slate; and (2) a methodology for calculating energy and greenhouse gas (GHG) emission intensities and processing fuel shares of major U.S. refinery products. Overall refinery energy efficiency is the ratio of the energy present in all product streams to the energy in all input streams. Using linear programming (LP) modeling of the various refinery processing units, we analyzed 43 refineries that process 70% of total crude input to U.S. refineries and cover the largest four Petroleum Administration for Defense District (PADD) regions (I, II, III, V). Based on the allocation of process energy among products at the process unit level, the weighted-average product-specific energy efficiencies (and ranges) are estimated to be 88.6% (86.2%-91.2%) for gasoline, 90.9% (84.8%-94.5%) for diesel, 95.3% (93.0%-97.5%) for jet fuel, 94.5% (91.6%-96.2%) for residual fuel oil (RFO), and 90.8% (88.0%-94.3%) for liquefied petroleum gas (LPG). The corresponding weighted-average, production GHG emission intensities (and ranges) (in grams of carbon dioxide-equivalent (CO2e) per megajoule (MJ)) are estimated to be 7.8 (6.2-9.8) for gasoline, 4.9 (2.7-9.9) for diesel, 2.3 (0.9-4.4) for jet fuel, 3.4 (1.5-6.9) for RFO, and 6.6 (4.3-9.2) for LPG. The findings of this study are key components of the life-cycle assessment of GHG emissions associated with various petroleum fuels; such assessment is the centerpiece of legislation developed and promulgated by government agencies in the United States and abroad to reduce GHG emissions and abate global warming.
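
    The overall-efficiency definition above is a simple energy ratio. The sketch below applies it to invented stream energies and converts an assumed process-emission total into a per-MJ intensity, purely to illustrate the bookkeeping; it does not reproduce the LP-based allocation of the study.

      # Overall refinery energy efficiency: energy in products / energy in all inputs.
      product_energy_pj = {"gasoline": 420.0, "diesel": 230.0, "jet": 90.0,
                           "rfo": 40.0, "lpg": 30.0}          # assumed values, PJ
      input_energy_pj = 880.0                                  # crude + purchased energy, PJ

      efficiency = sum(product_energy_pj.values()) / input_energy_pj
      print("overall efficiency: %.1f%%" % (100 * efficiency))

      # GHG intensity of one product, allocated at the process-unit level (assumed numbers).
      gasoline_process_ghg_t = 3.2e6      # tonnes CO2e attributed to gasoline
      gasoline_energy_mj = 420.0e9        # 420 PJ expressed in MJ
      print("gasoline intensity: %.1f g CO2e/MJ"
            % (gasoline_process_ghg_t * 1e6 / gasoline_energy_mj))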

  1. Improved quantitative visualization of hypervelocity flow through wavefront estimation based on shadow casting of sinusoidal gratings.

    PubMed

    Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2016-08-01

    A simple noninterferometric optical probe is developed to estimate wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical-optics (eikonal) approximation, a bilinear approximation to the distorted wavefront is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, either with a windowed or global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme, for a full quantitative recovery of density distribution in the shock around the model, through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with the numerically estimated values. It is shown that, while processing a wavefront with a small space-bandwidth product (SBP), the FT inversion gave accurate results with computational efficiency; the computation-intensive WFT was needed for similar results when dealing with larger-SBP wavefronts.

  2. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  3. Reasoning and memory: People make varied use of the information available in working memory.

    PubMed

    Hardman, Kyle O; Cowan, Nelson

    2016-05-01

    Working memory (WM) is used for storing information in a highly accessible state so that other mental processes, such as reasoning, can use that information. Some WM tasks require that participants not only store information, but also reason about that information to perform optimally on the task. In this study, we used visual WM tasks that had both storage and reasoning components to determine both how ideally people are able to reason about information in WM and if there is a relationship between information storage and reasoning. We developed novel psychological process models of the tasks that allowed us to estimate for each participant both how much information they had in WM and how efficiently they reasoned about that information. Our estimates of information use showed that participants are not all ideal information users or minimal information users, but rather that there are individual differences in the thoroughness of information use in our WM tasks. However, we found that our participants tended to be more ideal than minimal. One implication of this work is that to accurately estimate the amount of information in WM, it is important to also estimate how efficiently that information is used. This new analysis contributes to the theoretical premise that human rationality may be bounded by the complexity of task demands. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Reasoning and memory: People make varied use of the information available in working memory

    PubMed Central

    Hardman, Kyle O.; Cowan, Nelson

    2015-01-01

    Working memory (WM) is used for storing information in a highly-accessible state so that other mental processes, such as reasoning, can use that information. Some WM tasks require that participants not only store information, but also reason about that information in order to perform optimally on the task. In this study, we used visual WM tasks that had both storage and reasoning components in order to determine both how ideally people are able to reason about information in WM and if there is a relationship between information storage and reasoning. We developed novel psychological process models of the tasks that allowed us to estimate for each participant both how much information they had in WM and how efficiently they reasoned about that information. Our estimates of information use showed that participants are not all ideal information users or minimal information users, but rather that there are individual differences in the thoroughness of information use in our WM tasks. However, we found that our participants tended to be more ideal than minimal. One implication of this work is that in order to accurately estimate the amount of information in WM, it is important to also estimate how efficiently that information is used. This new analysis contributes to the theoretical premise that human rationality may be bounded by the complexity of task demands. PMID:26569436

  5. What basic number processing measures in kindergarten explain unique variability in first-grade arithmetic proficiency?

    PubMed

    Bartelet, Dimona; Vaessen, Anniek; Blomert, Leo; Ansari, Daniel

    2014-01-01

    Relations between children's mathematics achievement and their basic number processing skills have been reported in both cross-sectional and longitudinal studies. Yet, some key questions are currently unresolved, including which kindergarten skills uniquely predict children's arithmetic fluency during the first year of formal schooling and the degree to which predictors are contingent on children's level of arithmetic proficiency. The current study assessed kindergarteners' non-symbolic and symbolic number processing efficiency. In addition, the contribution of children's underlying magnitude representations to differences in arithmetic achievement was assessed. Subsequently, in January of Grade 1, their arithmetic proficiency was assessed. Hierarchical regression analysis revealed that children's efficiency to compare digits, count, and estimate numerosities uniquely predicted arithmetic differences above and beyond the non-numerical factors included. Moreover, quantile regression analysis indicated that symbolic number processing efficiency was consistently a significant predictor of arithmetic achievement scores regardless of children's level of arithmetic proficiency, whereas their non-symbolic number processing efficiency was not. Finally, none of the task-specific effects indexing children's representational precision was significantly associated with arithmetic fluency. The implications of the results are 2-fold. First, the findings indicate that children's efficiency to process symbols is important for the development of their arithmetic fluency in Grade 1 above and beyond the influence of non-numerical factors. Second, the impact of children's non-symbolic number processing skills does not depend on their arithmetic achievement level given that they are selected from a nonclinical population. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. The Virtual Liver Project: Simulating Tissue Injury Through Molecular and Cellular Processes

    EPA Science Inventory

    Efficiently and humanely testing the safety of thousands of environmental chemicals is a challenge. The US EPA Virtual Liver Project (v-Liver™) is aimed at simulating the effects of environmental chemicals computationally in order to estimate the risk of toxic outcomes in humans...

  7. Improved Analysis of Time Series with Temporally Correlated Errors: An Algorithm that Reduces the Computation Time.

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2016-12-01

    Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
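
    To make the noise-model structure concrete, the sketch below builds a covariance matrix for white plus power-law noise by filtering white noise with a fractional-integration (Kasdin/Hosking-type) recursion and evaluates a Gaussian log-likelihood for a linear trend. It is a generic textbook construction for illustration, not the est_noise algorithm or its fast formulation; the parameter values and data are invented.

      import numpy as np
      from scipy.linalg import toeplitz, cho_factor, cho_solve

      def powerlaw_transform(n_obs, spectral_index):
          """Lower-triangular filter mapping white noise to 1/f^n power-law noise
          (fractional-integration recursion)."""
          h = np.zeros(n_obs)
          h[0] = 1.0
          for k in range(1, n_obs):
              h[k] = h[k - 1] * (k - 1 + spectral_index / 2.0) / k
          return np.tril(toeplitz(h))

      def neg_log_likelihood(params, t, y):
          """-log L for a linear trend plus white + power-law noise (illustrative)."""
          sigma_w, sigma_pl, spectral_index = params
          T = powerlaw_transform(len(y), spectral_index)
          C = sigma_w**2 * np.eye(len(y)) + sigma_pl**2 * T @ T.T
          # Generalised least-squares fit of intercept and rate under covariance C.
          A = np.column_stack([np.ones_like(t), t])
          cf = cho_factor(C)
          Ciy, CiA = cho_solve(cf, y), cho_solve(cf, A)
          beta = np.linalg.solve(A.T @ CiA, A.T @ Ciy)
          r = y - A @ beta
          logdet = 2 * np.sum(np.log(np.diag(cf[0])))
          return 0.5 * (logdet + r @ cho_solve(cf, r) + len(y) * np.log(2 * np.pi))

      # Toy usage: ~3 years of daily positions with assumed noise amplitudes.
      rng = np.random.default_rng(1)
      t = np.arange(1000) / 365.25
      T = powerlaw_transform(len(t), 1.0)           # flicker-like noise
      y = 2.0 * t + 1.0 * rng.normal(size=len(t)) + 0.5 * T @ rng.normal(size=len(t))
      print(neg_log_likelihood([1.0, 0.5, 1.0], t, y))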

  8. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies.

    PubMed

    Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve the efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted and combined estimators through extensive simulation studies. Our results indicate that performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but is expected to be as good as or equal to the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.

  9. Nonparametric estimation of median survival times with applications to multi-site or multi-center studies

    PubMed Central

    Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C.

    2018-01-01

    We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve the efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted and combined estimators through extensive simulation studies. Our results indicate that performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but is expected to be as good as or equal to the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study. PMID:29772007

  10. Platinum recycling in the United States in 1998

    USGS Publications Warehouse

    Hilliard, Henry E.

    2001-01-01

    In the United States, catalytic converters are the major source of secondary platinum for recycling. Other sources of platinum scrap include reforming and chemical process catalysts. The glass industry is a small but significant source of platinum scrap. In North America, it has been estimated that in 1998 more than 20,000 kilograms per year of platinum-group metals from automobile catalysts were available for recycling. In 1998, an estimated 7,690 kilograms of platinum were recycled in the United States. U.S. recycling efficiency was calculated to have been 76 percent in 1998; the recycling rate was estimated at 16 percent.

  11. Far-field DOA estimation and source localization for different scenarios in a distributed sensor network

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz

    Recent developments in integrated circuits and wireless communications not only open up many possibilities but also introduce challenging issues for the collaborative processing of signals for source localization and beamforming in an energy-constrained distributed sensor network. In signal processing, various sensor array processing algorithms and concepts have been adopted, but must be further tailored to match the communication and computational constraints. Sometimes the constraints are such that none of the existing algorithms would be an efficient option for the defined problem and, as a result, the necessity of developing a new algorithm becomes undeniable. In this dissertation, we present the theoretical and the practical issues of Direction-Of-Arrival (DOA) estimation and source localization using the Approximate-Maximum-Likelihood (AML) algorithm for different scenarios. We first investigate a robust algorithm design for coherent source DOA estimation in a limited reverberant environment. Then, we provide a least-squares (LS) solution for source localization based on our newly proposed virtual array model. In another scenario, we consider the determination of the location of a disturbance source which emits both wideband acoustic and seismic signals. We devise an enhanced AML algorithm to process the data collected at the acoustic sensors. For processing the seismic signals, two distinct algorithms are investigated to determine the DOAs. Then, we consider a basic algorithm for fusion of the results yielded by the acoustic and seismic arrays. We also investigate the theoretical and practical issues of DOA estimation in a three-dimensional (3D) scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. In this dissertation, for each scenario, efficient numerical implementations of the corresponding AML algorithm are derived and applied to a real-time sensor network testbed. Extensive simulations as well as experimental results are presented to verify the effectiveness of the proposed algorithms.

  12. Experimental investigation of the influence of internal frames on the vibroacoustic behavior of a stiffened cylindrical shell using wavenumber analysis

    NASA Astrophysics Data System (ADS)

    Meyer, V.; Maxit, L.; Renou, Y.; Audoly, C.

    2017-09-01

    The understanding of the influence of non-axisymmetric internal frames on the vibroacoustic behavior of a stiffened cylindrical shell is of high interest for the naval or aeronautic industries. Several numerical studies have shown that the non-axisymmetric internal frame can increase the radiation efficiency significantly in the case of a mechanical point force. However, less attention has been paid to the experimental verification of this statement. That is why this paper proposes to compare the radiation efficiency estimated experimentally for a stiffened cylindrical shell with and without internal frames. The experimental process is based on scanning laser vibrometer measurements of the vibrations on the surface of the shell. A transform of the vibratory field in the wavenumber domain is then performed. It allows estimating the far-field radiated pressure with the stationary phase theorem. An increase of the radiation efficiency is observed in the low frequencies. Analysis of the velocity field in the physical and wavenumber spaces allows highlighting the coupling of the circumferential orders at the origin of the increase in the radiation efficiency.

  13. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adapted this filter's RTL implementation to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
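
    The recursive idea behind such filters is that convolution with a two-sided exponential kernel can be approximated by one causal and one anti-causal first-order pass per image axis. The sketch below is a software reference model of that idea with an arbitrary decay parameter and test image; it is not the RTL design described in the paper.

      import numpy as np

      def exp_filter_1d(x, alpha):
          """Approximate convolution with a two-sided exponential kernel using two
          first-order recursive passes (causal + anti-causal), normalised to roughly
          unit DC gain."""
          fwd = np.empty_like(x, dtype=float)
          bwd = np.empty_like(x, dtype=float)
          fwd[0] = x[0]
          for i in range(1, len(x)):                   # causal pass
              fwd[i] = alpha * x[i] + (1 - alpha) * fwd[i - 1]
          bwd[-1] = x[-1]
          for i in range(len(x) - 2, -1, -1):          # anti-causal pass
              bwd[i] = alpha * x[i] + (1 - alpha) * bwd[i + 1]
          # Both passes include alpha*x[i] at the centre sample, so subtract it once.
          return (fwd + bwd - alpha * x) / (2 - alpha)

      def exp_filter_2d(img, alpha):
          """Separable filtering: rows first, then columns."""
          tmp = np.apply_along_axis(exp_filter_1d, 1, img, alpha)
          return np.apply_along_axis(exp_filter_1d, 0, tmp, alpha)

      # Toy usage on a random "depth map"; alpha controls the effective blur radius.
      img = np.random.default_rng(2).random((64, 64))
      print(exp_filter_2d(img, alpha=0.2).shape)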

  14. Efficiency of encounter-controlled reaction between diffusing reactants in a finite lattice: Non-nearest-neighbor effects

    NASA Astrophysics Data System (ADS)

    Bentz, Jonathan L.; Kozak, John J.; Nicolis, Gregoire

    2005-08-01

    The influence of non-nearest-neighbor displacements on the efficiency of diffusion-reaction processes involving one and two mobile diffusing reactants is studied. An exact analytic result is given for dimension d=1 from which, for large lattices, one can recover the asymptotic estimate reported 30 years ago by Lakatos-Lindenberg and Shuler. For dimensions d=2,3 we present numerically exact values for the mean time to reaction, as gauged by the mean walklength before reactive encounter, obtained via the theory of finite Markov processes and supported by Monte Carlo simulations. Qualitatively different results are found between processes occurring on d=1 versus d>1 lattices, and between results obtained assuming nearest-neighbor (only) versus non-nearest-neighbor displacements.
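
    For finite lattices, mean walklengths of the kind discussed above can be computed from the fundamental matrix of an absorbing Markov chain. The sketch below does this for a 1-D ring with a single deep trap and an assumed probability of next-nearest-neighbour hops; the lattice size and hop probability are arbitrary choices, not values from the paper.

      import numpy as np

      def mean_walklength_ring(n_sites, trap=0, p_nnn=0.0):
          """Mean number of steps to reach an absorbing trap on a 1-D periodic
          lattice, with probability p_nnn of a next-nearest-neighbour hop."""
          # Single-step transition matrix over all sites.
          P = np.zeros((n_sites, n_sites))
          for i in range(n_sites):
              for step, prob in ((1, (1 - p_nnn) / 2), (-1, (1 - p_nnn) / 2),
                                 (2, p_nnn / 2), (-2, p_nnn / 2)):
                  P[i, (i + step) % n_sites] += prob
          # Fundamental matrix over the transient (non-trap) states.
          keep = [i for i in range(n_sites) if i != trap]
          Q = P[np.ix_(keep, keep)]
          N = np.linalg.inv(np.eye(len(keep)) - Q)
          steps = N @ np.ones(len(keep))          # expected steps from each start site
          return steps.mean()                     # averaged over starting sites

      # Nearest-neighbour hops only vs. 10% next-nearest-neighbour hops, 51-site ring.
      print(mean_walklength_ring(51, p_nnn=0.0), mean_walklength_ring(51, p_nnn=0.1))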

  15. 16 CFR 305.5 - Determinations of estimated annual energy consumption, estimated annual operating cost, and...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... consumption, estimated annual operating cost, and energy efficiency rating, and of water use rate. 305.5... energy efficiency rating, and of water use rate. (a) Procedures for determining the estimated annual energy consumption, the estimated annual operating costs, the energy efficiency ratings, and the efficacy...

  16. Estimation of multiple accelerated motions using chirp-Fourier transform and clustering.

    PubMed

    Alexiadis, Dimitrios S; Sergiadis, George D

    2007-01-01

    Motion estimation in the spatiotemporal domain has been extensively studied and many methodologies have been proposed, which, however, cannot handle both time-varying and multiple motions. Extending previously published ideas, we present an efficient method for estimating multiple, linearly time-varying motions. It is shown that the estimation of accelerated motions is equivalent to the parameter estimation of superpositioned chirp signals. From this viewpoint, one can exploit established signal processing tools such as the chirp-Fourier transform. It is shown that accelerated motion results in energy concentration along planes in the 4-D space: spatial frequencies-temporal frequency-chirp rate. Using fuzzy c-planes clustering, we estimate the plane/motion parameters. The effectiveness of our method is verified on both synthetic as well as real sequences and its advantages are highlighted.
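
    A minimal one-dimensional illustration of the chirp-parameter view described above: constant acceleration yields a linear chirp whose rate can be estimated by dechirping with candidate rates and maximising spectral concentration. The signal model and rate grid are assumptions made for the example; the paper's full method, with its 4-D energy planes and fuzzy c-planes clustering, is not reproduced.

      import numpy as np

      def estimate_chirp(signal, fs, rate_grid):
          """Estimate (chirp rate, start frequency) of a single linear chirp by
          dechirping with candidate rates and picking the sharpest spectral peak."""
          n = len(signal)
          t = np.arange(n) / fs
          best = None
          for c in rate_grid:
              dechirped = signal * np.exp(-1j * np.pi * c * t**2)   # remove candidate rate
              spectrum = np.abs(np.fft.fft(dechirped))
              peak = spectrum.max()
              if best is None or peak > best[0]:
                  f0 = np.fft.fftfreq(n, 1 / fs)[int(spectrum.argmax())]
                  best = (peak, c, f0)
          return best[1], best[2]

      # Toy usage: fs = 1 kHz, true start frequency 50 Hz, true chirp rate 20 Hz/s.
      fs, n = 1000.0, 2048
      t = np.arange(n) / fs
      rng = np.random.default_rng(3)
      x = np.exp(1j * 2 * np.pi * (50 * t + 0.5 * 20 * t**2))
      x = x + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
      print(estimate_chirp(x, fs, rate_grid=np.linspace(0, 40, 81)))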

  17. [Technical efficiency of traditional hospitals and public enterprises in Andalusia (Spain)].

    PubMed

    Herrero Tabanera, Luis; Martín Martín, José Jesús; López del Amo González, Ma del Puerto

    2015-01-01

    To assess the technical efficiency of traditional public hospitals without their own legal identity and subject to administrative law, and that of public enterprise hospitals, with their own legal identities and partly governed by private law, all of them belonging to the taxpayer-funded health system of Andalusia during the period 2005-2008. The study included the 32 publicly-owned hospitals in Andalusia during the period 2005-2008. The method consisted of two stages. In the first stage, the indices of technical efficiency of the hospitals were calculated using Data Envelopment Analysis, and the change in total factor productivity was estimated using the Malmquist index. The results were compared according to perceived quality, and a sensitivity analysis was conducted through an auxiliary model and bootstrapping. In the second stage, a bivariate analysis was performed between hospital efficiency and organization type. Public enterprises were more efficient than traditional hospitals (on average by over 10%) in each of the study years. Nevertheless, a process of convergence was observed between the two types of organizations because, while the efficiency of traditional hospitals increased slightly (by 0.50%) over the study period, the performance of public enterprises declined by over 2%. The possible reasons for the greater efficiency of public enterprises include their greater budgetary and employment flexibility. However, the convergence process observed points to a process of mutual learning that is not necessarily efficient. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
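
    For readers unfamiliar with the first-stage method, the sketch below sets up a standard input-oriented CCR Data Envelopment Analysis envelopment problem with scipy.optimize.linprog. The hospital inputs and outputs are invented placeholders, and the Malmquist-index and bootstrap steps of the study are not reproduced.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_input(X, Y, unit):
          """Technical efficiency of one decision-making unit (input-oriented CCR).
          X: inputs (n_units x n_inputs), Y: outputs (n_units x n_outputs)."""
          n_units = X.shape[0]
          # Decision variables: [theta, lambda_1, ..., lambda_n]
          c = np.zeros(n_units + 1)
          c[0] = 1.0                                   # minimise theta
          # Inputs:  sum_j lambda_j * x_j <= theta * x_unit
          A_in = np.hstack([-X[[unit]].T, X.T])
          b_in = np.zeros(X.shape[1])
          # Outputs: sum_j lambda_j * y_j >= y_unit
          A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
          b_out = -Y[unit]
          res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                        b_ub=np.concatenate([b_in, b_out]),
                        bounds=[(None, None)] + [(0, None)] * n_units, method="highs")
          return res.fun                               # efficiency score in (0, 1]

      # Placeholder data: inputs = [beds, staff], outputs = [discharges, outpatient visits].
      X = np.array([[300, 900], [250, 700], [400, 1300], [150, 450]], dtype=float)
      Y = np.array([[20e3, 150e3], [18e3, 120e3], [22e3, 160e3], [12e3, 90e3]], dtype=float)
      print([round(dea_ccr_input(X, Y, u), 3) for u in range(len(X))])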

  18. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

    Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency for finishing precision optics. A series of experiments in spin motion is performed to study the smoothing effects of correcting mid-spatial-frequency errors. Some of them use the same pitch tool at different spinning speeds, and others use the same spinning speed with different tools. Shu's model is introduced and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. From the experimental results, the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the process in spin motion, and the number of smoothing passes can be estimated by the model before the process. Meanwhile, this method was also applied to smooth the aspherical component, which has an obvious mid-spatial-frequency error after Magnetorheological Finishing processing. As a result, a high-precision aspheric optical component was obtained with PV=0.1λ and RMS=0.01λ.

  19. Observations of Ocean Primary Productivity Using MODIS

    NASA Technical Reports Server (NTRS)

    Esaias, Wayne E.; Abbott, Mark R.; Koblinsky, Chester J. (Technical Monitor)

    2001-01-01

    Measuring the magnitude and variability of oceanic net primary productivity (NPP) represents a key advancement toward our understanding of the dynamics of marine ecosystems and the role of the ocean in the global carbon cycle. MODIS observations make two new contributions in addition to continuing the bio-optical time series begun with Orbview-2's SeaWiFS sensor. First, MODIS provides estimates of global ocean net primary productivity on weekly and annual time periods, and annual empirical estimates of carbon export production. Second, MODIS provides additional insight into the spatial and temporal variations in photosynthetic efficiency through the direct measurements of solar-stimulated chlorophyll fluorescence. The two different weekly productivity indexes (first developed by Behrenfeld & Falkowski and by Yoder, Ryan and Howard) are used to derive daily productivity as a function of chlorophyll biomass, incident daily surface irradiance, temperature, euphotic depth, and mixed layer depth. Comparisons between these two estimates using both SeaWiFS and MODIS data show significant model differences in spatial distribution after allowance for the different integration depths. Both estimates are strongly dependent on the accuracy of the chlorophyll determination. In addition, an empirical approach is taken on annual scales to estimate global NPP and export production. Estimates of solar-stimulated fluorescence efficiency from chlorophyll have been shown to be inversely related to photosynthetic efficiency by Abbott and co-workers. MODIS provides the first global estimates of oceanic chlorophyll fluorescence, providing an important proof of concept. MODIS observations are revealing spatial patterns of fluorescence efficiency which show expected variations with phytoplankton photo-physiological parameters as measured during in-situ surveys. This has opened the way for research into utilizing this information to improve our understanding of oceanic NPP variability. Deriving the ocean bio-optical properties places severe demands on instrument performance (especially band to band precision) and atmospheric correction. Improvements in MODIS instrument characterization and calibration over the first 16 mission months have greatly improved the accuracy of the chlorophyll input fields and FLH, and therefore the estimates of NPP and fluorescence efficiency. Annual estimates now show that oceanic NPP accounts for 40-50% of the global total NPP, with significant interannual variations related to large scale ocean processes. Spatial variations in ocean NPP, and exported production, have significant effects on exchange of CO2 between the ocean and atmosphere. Further work is underway to improve both the primary productivity model functions, and to refine our understanding of the relationships between fluorescence efficiency and NPP estimates. We expect that the MODIS instruments will prove extremely useful in assessing the time dependencies of oceanic carbon uptake and effects of iron enrichment, within the global carbon cycle.

  20. Reducing uncertainty in estimating virus reduction by advanced water treatment processes.

    PubMed

    Gerba, Charles P; Betancourt, Walter Q; Kitajima, Masaaki; Rock, Channah M

    2018-04-15

    Treatment of wastewater for potable reuse requires the reduction of enteric viruses to levels that pose no significant risk to human health. Advanced water treatment trains (e.g., chemical clarification, reverse osmosis, ultrafiltration, advanced oxidation) have been developed to provide reductions of viruses to differing levels of regulatory control depending upon the levels of human exposure and associated health risks. Important in any assessment is information on the concentration and types of viruses in the untreated wastewater, as well as the degree of removal by each treatment process. However, it is critical that the uncertainty associated with virus concentration and removal or inactivation by wastewater treatment be understood to improve these estimates and to identify research needs. We critically reviewed the literature to identify uncertainty in these estimates. Biological diversity within families and genera of viruses (e.g. enteroviruses, rotaviruses, adenoviruses, reoviruses, noroviruses) and specific virus types (e.g. serotypes or genotypes) creates the greatest uncertainty. These aspects affect the methods for detection and quantification of viruses and anticipated removal efficiency by treatment processes. Approaches to reduce uncertainty may include: 1) inclusion of a virus indicator for assessing efficiency of virus concentration and detection by molecular methods for each sample, 2) use of viruses most resistant to individual treatment processes (e.g. adenoviruses for UV light disinfection and reoviruses for chlorination), 3) data on ratio of virion or genome copies to infectivity in untreated wastewater, and 4) assessment of virus removal at field scale treatment systems to verify laboratory and pilot plant data for virus removal. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Linear Vector Quantisation and Uniform Circular Arrays based decoupled two-dimensional angle of arrival estimation

    NASA Astrophysics Data System (ADS)

    Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.

    2017-05-01

    Artificial neural network (ANN)-based models are efficient ways of performing source localisation. However, very large training sets are needed to precisely estimate two-dimensional Direction of Arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Linear Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.

  2. Journal: Efficient Hydrologic Tracer-Test Design for Tracer-Mass Estimation and Sample Collection Frequency, 1 Method Development

    EPA Science Inventory

    Hydrological tracer testing is the most reliable diagnostic technique available for the determination of basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowl...

  3. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Estimating human affective states from direct observations of facial, vocal, gestural, physiological, and central nervous signals, through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks, has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method named higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures in the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain’s motivational circuits. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states.

  4. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    PubMed Central

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    Estimating human affective states from direct observations of facial, vocal, gestural, physiological, and central nervous signals, through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks, has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method named higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures in the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain’s motivational circuits. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states. PMID:26996254
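
    As an illustration of the regression family named above, the following sketch fits a second-order multivariable polynomial in two predictors (standing in for skin-conductance features) by ordinary least squares. The feature names, simulated data, and polynomial order are assumptions made for the example, not the authors' model or data.

      import numpy as np
      from itertools import combinations_with_replacement

      def poly_design(features, order=2):
          """Design matrix with all monomials of the inputs up to a given total order."""
          n, d = features.shape
          cols = [np.ones(n)]
          for k in range(1, order + 1):
              for idx in combinations_with_replacement(range(d), k):
                  cols.append(np.prod(features[:, idx], axis=1))
          return np.column_stack(cols)

      def fit_poly(features, target, order=2):
          """Least-squares coefficients of the higher-order polynomial model."""
          A = poly_design(features, order)
          coef, *_ = np.linalg.lstsq(A, target, rcond=None)
          return coef

      # Toy data: two standardized "skin-conductance" features and a nonlinear score.
      rng = np.random.default_rng(5)
      Xf = rng.normal(size=(200, 2))
      y = (1.0 + 0.8 * Xf[:, 0] - 0.5 * Xf[:, 1]
           + 0.3 * Xf[:, 0] * Xf[:, 1] + 0.1 * rng.normal(size=200))
      coef = fit_poly(Xf, y, order=2)
      pred = poly_design(Xf, 2) @ coef
      print("correlation:", np.corrcoef(pred, y)[0, 1].round(3))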

  5. Evaluation strategy of regenerative braking energy for supercapacitor vehicle.

    PubMed

    Zou, Zhongyue; Cao, Junyi; Cao, Binggang; Chen, Wen

    2015-03-01

    In order to improve the efficiency of energy conversion and increase the driving range of electric vehicles, the regenerative energy captured during the braking process is stored in the energy storage devices and then re-used. Due to the high power density of supercapacitors, they are employed to withstand high current over a short time and essentially capture more regenerative energy. The measuring methods for regenerative energy should be investigated to estimate the energy conversion efficiency and performance of electric vehicles. Based on the analysis of the regenerative braking energy system of a supercapacitor vehicle, an evaluation system for energy recovery in the braking process is established using USB portable data-acquisition devices. Experiments under various braking conditions are carried out. The results verify the higher efficiency of the energy regeneration system using supercapacitors and the effectiveness of the proposed measurement method. It is also demonstrated that the maximum regenerative energy conversion efficiency can reach 88%. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Optical determination of Shockley-Read-Hall and interface recombination currents in hybrid perovskites

    PubMed Central

    Sarritzu, Valerio; Sestu, Nicola; Marongiu, Daniela; Chang, Xueqing; Masi, Sofia; Rizzo, Aurora; Colella, Silvia; Quochi, Francesco; Saba, Michele; Mura, Andrea; Bongiovanni, Giovanni

    2017-01-01

    Metal-halide perovskite solar cells rival the best inorganic solar cells in power conversion efficiency, providing the outlook for efficient, cheap devices. In order for the technology to mature and approach the ideal Shockley-Queisser efficiency, experimental tools are needed to diagnose what processes limit performances, beyond simply measuring electrical characteristics often affected by parasitic effects and difficult to interpret. Here we study the microscopic origin of recombination currents causing photoconversion losses with an all-optical technique, measuring the electron-hole free energy as a function of the exciting light intensity. Our method allows assessing the ideality factor and breaks down the electron-hole recombination current into bulk defect and interface contributions, providing an estimate of the limit photoconversion efficiency, without any real charge current flowing through the device. We identify Shockley-Read-Hall recombination as the main decay process in insulated perovskite layers and quantify the additional performance degradation due to interface recombination in heterojunctions. PMID:28317883

  7. Reversible thermodynamic cycle for AMTEC power conversion. [Alkali Metal Thermal-to-Electric Converter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vining, C.B.; Williams, R.M.; Underwood, M.L.

    1993-10-01

    An AMTEC cell may be described as performing two distinct energy conversion processes: (i) conversion of heat to mechanical energy via a sodium-based heat engine and (ii) conversion of mechanical energy to electrical energy by utilizing the special properties of the electrolyte material. The thermodynamic cycle appropriate to an alkali metal thermal-to-electric converter cell is discussed for both liquid- and vapor-fed modes of operation, under the assumption that all processes can be performed reversibly. In the liquid-fed mode, the reversible efficiency is greater than 89.6% of Carnot efficiency for heat input and rejection temperatures (900--1,300 and 400--800 K, respectively) typical of practical devices. Vapor-fed cells can approach the efficiency of liquid-fed cells. Quantitative estimates confirm that the efficiency is insensitive to either the work required to pressurize the sodium liquid or the details of the state changes associated with cooling the low pressure sodium gas to the heat rejection temperature.
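
    A quick numerical check of the Carnot-referenced figure quoted above; the 89.6% factor comes from the abstract, while the specific temperature pair is an arbitrary choice within the quoted ranges.

      def carnot(t_hot_k, t_cold_k):
          """Carnot efficiency between heat-input and heat-rejection temperatures."""
          return 1.0 - t_cold_k / t_hot_k

      t_hot, t_cold = 1100.0, 600.0            # K, within the ranges quoted above
      eta_carnot = carnot(t_hot, t_cold)
      eta_amtec = 0.896 * eta_carnot           # liquid-fed reversible bound from the abstract
      print("Carnot: %.1f%%, reversible liquid-fed AMTEC: >= %.1f%%"
            % (100 * eta_carnot, 100 * eta_amtec))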

  8. Optimal estimates of free energies from multistate nonequilibrium work data.

    PubMed

    Maragakis, Paul; Spichty, Martin; Karplus, Martin

    2006-03-17

    We derive the optimal estimates of the free energies of an arbitrary number of thermodynamic states from nonequilibrium work measurements; the work data are collected from forward and reverse switching processes and obey a fluctuation theorem. The maximum likelihood formulation properly reweights all pathways contributing to a free energy difference and is directly applicable to simulations and experiments. We demonstrate dramatic gains in efficiency by combining the analysis with parallel tempering simulations for alchemical mutations of model amino acids.

  9. Index to Estimate the Efficiency of an Ophthalmic Practice.

    PubMed

    Chen, Andrew; Kim, Eun Ah; Aigner, Dennis J; Afifi, Abdelmonem; Caprioli, Joseph

    2015-08-01

    A metric of efficiency, a function of the ratio of quality to cost per patient, will allow the health care system to better measure the impact of specific reforms and compare the effectiveness of each. To develop and evaluate an efficiency index that estimates the performance of an ophthalmologist's practice as a function of cost, number of patients receiving care, and quality of care. Retrospective review of 36 ophthalmology subspecialty practices from October 2011 to September 2012 at a university-based eye institute. The efficiency index (E) was defined as a function of the adjusted number of patients (N_a), total practice adjusted costs (C_a), and a preliminary measure of quality (Q). Constant b limits E between 0 and 1. Constant y modifies the influence of Q on E. Relative value units and geographic cost indices determined by the Centers for Medicare and Medicaid for 2012 were used to calculate adjusted costs. The efficiency index is expressed as the following: E = b(N_a/C_a)Q^y. Independent, masked auditors reviewed 20 random patient medical records for each practice and filled out 3 questionnaires to obtain a process-based quality measure. The adjusted number of patients, adjusted costs, quality, and efficiency index were calculated for 36 ophthalmology subspecialties. The median adjusted number of patients was 5516 (interquartile range, 3450-11,863), the median adjusted cost was 1.34 (interquartile range, 0.99-1.96), the median quality was 0.89 (interquartile range, 0.79-0.91), and the median value of the efficiency index was 0.26 (interquartile range, 0.08-0.42). The described efficiency index is a metric that provides a broad overview of performance for a variety of ophthalmology specialties as estimated by resources used and a preliminary measure of quality of care provided. The results of the efficiency index could be used in future investigations to determine its sensitivity to detect the impact of interventions on a practice such as training modules or practice restructuring.
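
    A direct transcription of the index defined above into code. The constants b and y and the example value of b are placeholders, since the abstract does not report the fitted constants; the median inputs are those quoted above.

      def efficiency_index(n_adjusted, cost_adjusted, quality, b=1.0, y=1.0):
          """E = b * (N_a / C_a) * Q**y, per the definition in the abstract.
          b scales E (here toward [0, 1]); y tunes the influence of quality Q."""
          return b * (n_adjusted / cost_adjusted) * quality ** y

      # Example with the reported median values (b and y are assumed, not reported).
      print(efficiency_index(n_adjusted=5516, cost_adjusted=1.34, quality=0.89,
                             b=1.0 / 20000, y=1.0))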

  10. Updated estimation of energy efficiencies of U.S. petroleum refineries.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palou-Rivera, I.; Wang, M. Q.

    2010-12-08

    Evaluation of life-cycle (or well-to-wheels, WTW) energy and emission impacts of vehicle/fuel systems requires energy use (or energy efficiencies) of energy processing or conversion activities. In most such studies, petroleum fuels are included. Thus, determination of energy efficiencies of petroleum refineries becomes a necessary step for life-cycle analyses of vehicle/fuel systems. Petroleum refinery energy efficiencies can then be used to determine the total amount of process energy use for refinery operation. Furthermore, since refineries produce multiple products, allocation of energy use and emissions associated with petroleum refineries to various petroleum products is needed for WTW analysis of individual fuels such as gasoline and diesel. In particular, GREET, the life-cycle model developed at Argonne National Laboratory with DOE sponsorship, compares energy use and emissions of various transportation fuels including gasoline and diesel. Energy use in petroleum refineries is a key component of well-to-pump (WTP) energy use and emissions of gasoline and diesel. In GREET, petroleum refinery overall energy efficiencies are used to determine petroleum product specific energy efficiencies. Argonne has developed petroleum refining efficiencies from LP simulations of petroleum refineries and EIA survey data of petroleum refineries up to 2006 (see Wang, 2008). This memo documents Argonne's most recent update of petroleum refining efficiencies.

  11. Estimation of a monotone percentile residual life function under random censorship.

    PubMed

    Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo

    2013-01-01

    In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units, which degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting

    PubMed Central

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2015-01-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982

  13. Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting.

    PubMed

    Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng

    2016-06-01

    The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
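
    To make the covariate-balancing idea concrete, the sketch below computes exponential-tilting calibration weights for a control group so that its weighted covariate means match the treated-group means. It illustrates one member of the class discussed above on simulated data; the exact three-way balance and the variance estimator of the paper are not implemented.

      import numpy as np
      from scipy.optimize import minimize

      def exponential_tilting_weights(X_control, target_means):
          """Calibration weights w_i proportional to exp(X_i . lam) on the control
          units, chosen so that the weighted control covariate means equal
          target_means (with sum w_i = 1)."""
          def dual(lam):
              scores = X_control @ lam
              m = scores.max()
              w = np.exp(scores - m)
              w /= w.sum()
              # Convex dual objective and its gradient (weighted mean minus target).
              value = m + np.log(np.exp(scores - m).sum()) - lam @ target_means
              grad = X_control.T @ w - target_means
              return value, grad
          res = minimize(dual, np.zeros(X_control.shape[1]), jac=True, method="BFGS")
          scores = X_control @ res.x
          w = np.exp(scores - scores.max())
          return w / w.sum()

      # Simulated observational data: two covariates shifted between treated and control.
      rng = np.random.default_rng(6)
      X_treated = rng.normal(loc=[0.5, 1.0], size=(300, 2))
      X_control = rng.normal(loc=[0.0, 0.0], size=(500, 2))
      w = exponential_tilting_weights(X_control, X_treated.mean(axis=0))
      print("weighted control means:", (w @ X_control).round(3))
      print("treated means:         ", X_treated.mean(axis=0).round(3))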

  14. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    PubMed

    Xie, Xianming

    2016-08-22

    A new phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust Bayesian methods in non-linear signal processing to date, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which simplifies, and can even eliminate, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator is used to efficiently and accurately obtain the phase gradient information from interferometric fringes that is needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps wrapped pixels along the path from the high-quality area to the low-quality area of wrapped phase images, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions with acceptable time consumption compared with some of the most widely used algorithms.

  15. 3D shape reconstruction of specular surfaces by using phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan

    2016-10-01

    The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry always have complex and large areas, consideration must be given to both the improvement of measurement accuracy and the acceleration of on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce in this paper an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry. The Modal estimation is first implemented to derive coarse height information of the measured surface as initial iteration values. The real shape is then recovered utilizing a modified Zonal wave-front reconstruction algorithm. By combining the advantages of the Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows a consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulation and experimental measurement demonstrate the validity and efficiency of the proposed improved method. According to the experimental results, the computation time decreases by approximately 74.92% in contrast to the Zonal estimation, and the surface error is about 6.68 μm for a reconstruction of 391×529 pixels of an experimentally measured spherical mirror. In general, this method achieves fast convergence and high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.

  16. On the calculation of dynamic and heat loads on a three-dimensional body in a hypersonic flow

    NASA Astrophysics Data System (ADS)

    Bocharov, A. N.; Bityurin, V. A.; Evstigneev, N. M.; Fortov, V. E.; Golovin, N. N.; Petrovskiy, V. P.; Ryabkov, O. I.; Teplyakov, I. O.; Shustov, A. A.; Solomonov, Yu S.

    2018-01-01

    We consider a three-dimensional body in a hypersonic flow at zero angle of attack. Our aim is to estimate the heat and aerodynamic loads on specific body elements. We use a previously developed code that solves the coupled heat- and mass-transfer problem; the change of the surface shape due to ablation of the wall material is taken into account through an iterative process. The solution is computed on a multi-graphics-processing-unit (multi-GPU) cluster. Five Mach numbers are considered, in the range M = 20-28. For each point we estimate the body shape after surface ablation, the heat loads on the surface, and the aerodynamic loads on the whole body and its elements, the latter obtained using a Gauss-type quadrature over the surface of the body. The results for the different Mach numbers are compared. We also assess the efficiency of the Navier-Stokes code on multi-GPU and central-processing-unit architectures for the coupled heat and mass transfer problem.
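    The "Gauss-type quadrature on the surface of the body" step can be illustrated with a toy example that integrates a modified-Newtonian pressure model over a sphere to obtain the axial force. The geometry, pressure model and parameter values below are assumptions chosen for illustration; the paper integrates the computed Navier-Stokes surface loads over the actual, ablating body geometry.

        import numpy as np

        def pressure_drag(p_func, R=1.0, n=24, vhat=np.array([1.0, 0.0, 0.0])):
            """Axial pressure force D = -surface integral of p * (n . vhat) on a
            sphere, using Gauss-Legendre quadrature in cos(theta) and a uniform
            midpoint rule in phi (a Gauss-type surface quadrature)."""
            mu, w_mu = np.polynomial.legendre.leggauss(n)   # mu = cos(theta)
            phi = (np.arange(2 * n) + 0.5) * np.pi / n
            w_phi = np.pi / n
            drag = 0.0
            for m, wm in zip(mu, w_mu):
                st = np.sqrt(1.0 - m * m)
                nx, ny, nz = st * np.cos(phi), st * np.sin(phi), m   # outward normal
                ndotv = nx * vhat[0] + ny * vhat[1] + nz * vhat[2]
                drag += -wm * w_phi * np.sum(p_func(ndotv) * ndotv) * R ** 2
            return drag

        def p_newtonian(ndotv, q=1.0):
            """Modified-Newtonian pressure: 2*q*cos^2(incidence) on the windward
            side, zero in the shadow region (q = dynamic pressure)."""
            return 2.0 * q * np.clip(-ndotv, 0.0, None) ** 2

        print(pressure_drag(p_newtonian))   # ~ pi, i.e. q * pi * R^2 for this model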

  17. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of the processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the implementation hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320 video sequences at 78 fps.
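    For reference, the arithmetic that such an architecture implements for the luma half-sample position is a fixed 8-tap filter. The plain Python sketch below shows that computation for one row of samples; the edge padding and 8-bit clipping are simplifying assumptions, and nothing of the paper's hardware reuse scheme is modelled.

        import numpy as np

        # HEVC 8-tap luma half-sample interpolation filter (taps sum to 64)
        HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

        def interp_half_pel_row(pixels):
            """Horizontal half-sample interpolation of one row of 8-bit luma samples."""
            padded = np.pad(pixels.astype(np.int32), (3, 4), mode='edge')
            out = np.empty(len(pixels), dtype=np.int32)
            for i in range(len(pixels)):
                out[i] = np.dot(HALF_PEL_TAPS, padded[i:i + 8])
            return np.clip((out + 32) >> 6, 0, 255).astype(np.uint8)   # round, shift, clip

        row = np.array([10, 20, 40, 80, 160, 200, 220, 230], dtype=np.uint8)
        print(interp_half_pel_row(row))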

  18. Journal: Efficient Hydrologic Tracer-Test Design for Tracer ...

    EPA Pesticide Factsheets

    Hydrological tracer testing is the most reliable diagnostic technique available for the determination of basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed to facilitate the design of tracer tests by root determination of the one-dimensional advection-dispersion equation (ADE) using a preset average tracer concentration, which provides a theoretical basis for an estimate of necessary tracer mass. The method uses basic measured field parameters (e.g., discharge, distance, cross-sectional area) that are combined in functional relationships that describe solute-transport processes related to flow velocity and time of travel. These initial estimates for time of travel and velocity are then applied to a hypothetical continuous stirred tank reactor (CSTR) as an analog for the hydrological-flow system to develop initial estimates for tracer concentration, tracer mass, and axial dispersion. Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be necessary for descri
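    The "root determination of the ADE using a preset average tracer concentration" step can be sketched as follows. All hydraulic values (reach length, velocity, dispersion coefficient, cross-sectional area) and the sampling window below are hypothetical, and the CSTR-based initial estimation that EHTD uses to obtain them is not reproduced; the sketch only shows how a required tracer mass can be backed out from a preset average concentration with a standard root finder.

        import numpy as np
        from scipy.optimize import brentq

        def ade_conc(M, x, t, v, D, A):
            """1-D advection-dispersion solution for an instantaneous release of
            mass M (plane source in a uniform flow field)."""
            return (M / (A * np.sqrt(4.0 * np.pi * D * t))
                    * np.exp(-(x - v * t) ** 2 / (4.0 * D * t)))

        def tracer_mass_for_average(C_avg, x, v, D, A, n=200):
            """Release mass whose time-averaged concentration at the sampling
            station equals the preset average concentration C_avg."""
            t_peak = x / v                                   # estimated time of travel
            t = np.linspace(0.2 * t_peak, 3.0 * t_peak, n)   # assumed sampling window
            avg = lambda M: np.trapz(ade_conc(M, x, t, v, D, A), t) / (t[-1] - t[0])
            return brentq(lambda M: avg(M) - C_avg, 1e-9, 1e9)

        # hypothetical field estimates: 500 m reach, v = 0.05 m/s, D = 0.5 m^2/s,
        # cross-section 2 m^2, preset average concentration 10 mg/m^3
        M = tracer_mass_for_average(C_avg=10.0, x=500.0, v=0.05, D=0.5, A=2.0)
        print(f"required tracer mass ~ {M:.1f} mg")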

  19. Tracer-Test Planning Using the Efficient Hydrologic Tracer ...

    EPA Pesticide Factsheets

    Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass. Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be

  20. EFFICIENT HYDROLOGICAL TRACER-TEST DESIGN (EHTD ...

    EPA Pesticide Factsheets

    Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass. Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to

  1. III-Vs at Scale: A PV Manufacturing Cost Analysis of the Thin Film Vapor-Liquid-Solid Growth Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Maxwell; Horowitz, Kelsey; Woodhouse, Michael

    The authors present a manufacturing cost analysis for producing thin-film indium phosphide modules by combining a novel thin-film vapor-liquid-solid (TF-VLS) growth process with a standard monolithic module platform. The example cell structure is ITO/n-TiO2/p-InP/Mo. The module cost is estimated to be $0.66/W(DC) for a benchmark scenario of 12% efficient modules and around $0.36/W(DC) at a long-term potential efficiency of 24%. The manufacturing cost for the TF-VLS growth portion is estimated to be ~$23/m2, a significant reduction compared with traditional metalorganic chemical vapor deposition. The analysis suggests the TF-VLS growth mode could enable lower-cost, high-efficiency III-V photovoltaics compared with manufacturing methods used today and open up possibilities for other optoelectronic applications as well.
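    As a back-of-envelope check on how areal cost and $/W relate under the standard 1000 W/m2 rating condition (a simplifying assumption; the paper uses a detailed bottom-up cost model), the reported benchmark numbers imply the following total module costs per unit area:

        def module_cost_per_watt(cost_per_m2, efficiency, rated_irradiance=1000.0):
            """$/W(DC) from areal module cost, assuming the standard 1000 W/m^2 rating."""
            return cost_per_m2 / (efficiency * rated_irradiance)

        # back-solve the areal cost implied by the reported $/W figures
        for eff, usd_per_w in [(0.12, 0.66), (0.24, 0.36)]:
            implied = usd_per_w * eff * 1000.0
            print(f"{eff:.0%} modules at ${usd_per_w}/W  ->  ~${implied:.0f}/m^2 module cost")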

  2. Working memory capacity and redundant information processing efficiency.

    PubMed

    Endres, Michael J; Houpt, Joseph W; Donkin, Chris; Finn, Peter R

    2015-01-01

    Working memory capacity (WMC) is typically measured by the amount of task-relevant information an individual can keep in mind while resisting distraction or interference from task-irrelevant information. The current research investigated the extent to which differences in WMC were associated with performance on a novel redundant memory probes (RMP) task that systematically varied the amount of to-be-remembered (targets) and to-be-ignored (distractor) information. The RMP task was designed to both facilitate and inhibit working memory search processes, as evidenced by differences in accuracy, response time, and Linear Ballistic Accumulator (LBA) model estimates of information processing efficiency. Participants (N = 170) completed standard intelligence tests and dual-span WMC tasks, along with the RMP task. As expected, accuracy, response-time, and LBA model results indicated memory search and retrieval processes were facilitated under redundant-target conditions, but also inhibited under mixed target/distractor and redundant-distractor conditions. Repeated measures analyses also indicated that, while individuals classified as high (n = 85) and low (n = 85) WMC did not differ in the magnitude of redundancy effects, groups did differ in the efficiency of memory search and retrieval processes overall. Results suggest that redundant information reliably facilitates and inhibits the efficiency or speed of working memory search, and these effects are independent of more general limits and individual differences in the capacity or space of working memory.

  3. Environmental Assessment of Different Cement Manufacturing ...

    EPA Pesticide Factsheets

    Due to its high environmental impact and energy intensive production, the cement industry needs to adopt more energy efficient technologies to reduce its demand for fossil fuels and impact on the environment. Bearing in mind that cement is the most widely used material for housing and modern infrastructure, the aim of this paper is to analyse the Emergy and Ecological Footprint of different cement manufacturing processes for a particular cement plant. There are several mitigation measures that can be incorporated in the cement manufacturing process to reduce the demand for fossil fuels and consequently reduce the CO2 emissions. The mitigation measures considered in this paper were the use of alternative fuels and a more energy efficient kiln process. In order to estimate the sustainability effect of the aforementioned measures, Emergy and Ecological Footprint were calculated for four different scenarios. The results show that Emergy, due to the high input mass of raw material needed for clinker production, stays at about the same level. However, for the Ecological Footprint, the results show that by combining the use of alternative fuels together with a more energy efficient kiln process, the environmental impact of the cement manufacturing process can be lowered. The research paper presents an analysis of the sustainability of cement production, a major contributor to carbon emissions, with respect to using alternative fuels and a more efficient kiln. It show

  4. Laccase from Pycnoporus cinnabarinus and phenolic compounds: can the efficiency of an enzyme mediator for delignifying kenaf pulp be predicted?

    PubMed

    Andreu, Glòria; Vidal, Teresa

    2013-03-01

    In this work, kenaf pulp was delignified by using laccase in combination with various redox mediators, and the efficiency of the different laccase–mediator systems was assessed in terms of the changes in pulp properties after bleaching. The oxidative ability of the individual mediators used (acetosyringone, syringaldehyde, p-coumaric acid, vanillin and acetovanillone) and the laccase–mediator systems was determined by monitoring the oxidation–reduction potential (ORP) during the process. The results confirmed the production of phenoxy radicals of variable reactivity and stressed the significant role of lignin structure in the enzymatic process. Although changes in ORP were correlated with the oxidative ability of the mediators, pulp properties as determined after the bleaching stage were also influenced by condensation and grafting reactions. As shown here, ORP measurements provide a first estimation of the delignification efficiency of a laccase–mediator system. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Design of production process main shaft process with lean manufacturing to improve productivity

    NASA Astrophysics Data System (ADS)

    Siregar, I.; Nasution, A. A.; Andayani, U.; Anizar; Syahputri, K.

    2018-02-01

    The object of this research is a manufacturing company that produces oil palm machinery parts. In the production process there are delays in the completion of the main shaft order. Delays in the completion of the order indicate the low productivity of the company in terms of resource utilization. This study aimed to obtain a proposed improvement of the production process that can raise productivity by identifying and eliminating activities that do not add value (non-value-added activities). One approach that can be used to reduce and eliminate non-value-added activity is Lean Manufacturing. This study focuses on the identification of non-value-added activity with value stream mapping analysis tools, while the elimination of non-value-added activity is done with the 5-whys tool and implementation of a pull demand system. The research shows that non-value-added activity in the production process of the main shaft amounts to 9,509.51 minutes out of a total lead time of 10,804.59 minutes. This shows that the level of efficiency (Process Cycle Efficiency) in the production of the main shaft is still very low, at 11.89%. The estimated improvements show the total lead time decreasing to 4,355.08 minutes and the process cycle efficiency rising to 29.73%, indicating that the process approaches the lean production concept.
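    The process cycle efficiency figures quoted above follow from the simple ratio of value-added time to total lead time; a quick check (the small gap from the reported 11.89% presumably reflects rounding in the source figures):

        # Process Cycle Efficiency (PCE) = value-added time / total lead time
        total_lead_time = 10_804.59           # minutes, as reported
        non_value_added = 9_509.51            # minutes, as reported
        value_added = total_lead_time - non_value_added
        print(f"PCE before improvement ~ {value_added / total_lead_time:.2%}")

        # after improvement the study reports a 4,355.08-minute lead time and a
        # PCE of 29.73%, implying roughly this much remaining value-added time:
        print(f"implied value-added time after ~ {0.2973 * 4_355.08:.1f} min")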

  6. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.

    PubMed

    Wolf, Elizabeth Skubak; Anderson, David F

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.

  7. Analysis of Water Use Efficiency derived from MODIS satellite image in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Park, J.; Jang, K.; Kang, S.

    2014-12-01

    Water Use Efficiency (WUE) is defined as the ratio of evapotranspiration (ET) to gross primary productivity (GPP). It can detect changes in ecosystem properties due to variability in environmental conditions and provides a way to understand the linkage between carbon and water processes in terrestrial ecosystems. In a changing climate, understanding ecosystem functional responses to climate variability is crucial for evaluating such effects; however, continental- or sub-continental-scale WUE analyses have been rare. In this study, WUE was estimated for Northeast Asia using satellite data from 2003 to 2010. ET and GPP were estimated using various MODIS products, and the estimates showed favorable agreement with flux tower observations. WUE in the study domain showed considerable variation according to plant functional type and climatic and elevational gradients. The results indicate that satellite remote sensing provides a useful tool for monitoring the variability of terrestrial ecosystem functions.

  8. Costing behavioral interventions: a practical guide to enhance translation.

    PubMed

    Ritzwoller, Debra P; Sukhanova, Anna; Gaglio, Bridget; Glasgow, Russell E

    2009-04-01

    Cost and cost effectiveness of behavioral interventions are critical parts of dissemination and implementation into non-academic settings. Due to the lack of indicative data and policy makers' increasing demands for both program effectiveness and efficiency, cost analyses can serve as valuable tools in the evaluation process. To stimulate and promote broader use of practical techniques that can be used to efficiently estimate the implementation costs of behavioral interventions, we propose a set of analytic steps that can be employed across a broad range of interventions. Intervention costs must be distinguished from research, development, and recruitment costs. The inclusion of sensitivity analyses is recommended to understand the implications of implementation of the intervention into different settings using different intervention resources. To illustrate these procedures, we use data from a smoking reduction practical clinical trial to describe the techniques and methods used to estimate and evaluate the costs associated with the intervention. Estimated intervention costs per participant were $419, with a range of $276 to $703, depending on the number of participants.

  9. Real-time caries diagnostics by optical PNC method

    NASA Astrophysics Data System (ADS)

    Masychev, Victor I.; Alexandrov, Michail T.

    2000-11-01

    The results of research on hard tooth tissues by the optical PNC method under experimental and clinical conditions are presented. In the experiment, 90 test samples of tooth slices about 1 mm thick (enamel, dentine and cement) were studied. The experimental results were processed by correlation analysis. Clinical studies were carried out on the teeth of 210 patients. Regions of tooth tissue disease with initial, moderate and deep caries were investigated. Spectral characteristics of intact and pathologically changed tooth tissues are presented and their distinctive features are discussed. The results of applying the optical PNC method during the processing of carious cavities are presented in order to estimate the efficiency of the mechanical and antiseptic processing of the teeth. It is shown that the PNC method can be used both for differential diagnostics of the stage of dental caries and for estimating how thoroughly a tooth cavity has been processed before filling.

  10. Express diagnostics of intact and pathological dental hard tissues by optical PNC method

    NASA Astrophysics Data System (ADS)

    Masychev, Victor I.; Alexandrov, Michail T.

    2000-03-01

    The results of research on hard tooth tissues by the optical PNC method under experimental and clinical conditions are presented. In the experiment, 90 test samples of tooth slices about 1 mm thick (enamel, dentine and cement) were studied. The experimental results were processed by correlation analysis. Clinical studies were carried out on the teeth of 210 patients. Regions of tooth tissue disease with initial, moderate and deep caries were investigated. Spectral characteristics of intact and pathologically changed tooth tissues are presented and their distinctive features are discussed. The results of applying the optical PNC method during the processing of carious cavities are presented in order to estimate the efficiency of the mechanical and antiseptic processing of the teeth. It is shown that the PNC method can be used both for differential diagnostics of the stage of dental caries and for estimating how thoroughly a tooth cavity has been processed before filling.

  11. Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.

    PubMed

    García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G

    2017-08-01

    The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view and thus the inference takes place directly in this space. To cope with the computational complexity that the use of Gaussian processes entails, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economy and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.

  12. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.

  13. Molecular dynamics simulations investigating consecutive nucleation, solidification and grain growth in a twelve-million-atom Fe-system

    NASA Astrophysics Data System (ADS)

    Okita, Shin; Verestek, Wolfgang; Sakane, Shinji; Takaki, Tomohiro; Ohno, Munekazu; Shibuta, Yasushi

    2017-09-01

    Continuous processes of homogeneous nucleation, solidification and grain growth are spontaneously achieved from an undercooled iron melt without any phenomenological parameter in a molecular dynamics (MD) simulation with 12 million atoms. The nucleation rate at the critical temperature is directly estimated from the atomistic configuration by cluster analysis to be of the order of 10^34 m^-3 s^-1. Moreover, the time evolution of the grain size distribution during grain growth is obtained by the combination of Voronoi and cluster analyses. The grain growth exponent is estimated to be around 0.3 from the geometric average of the grain size distribution. Comprehensive understanding of kinetic properties during continuous processes is achieved in the large-scale MD simulation by utilizing the high parallel efficiency of a graphics processing unit (GPU), which is shedding light on the fundamental aspects of production processes of materials from the atomistic viewpoint.
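    The grain-growth exponent mentioned above is the slope of a log-log fit of the (geometric-mean) grain size against time, d ~ t^n. A minimal sketch, with hypothetical grain-size samples rather than the MD data:

        import numpy as np

        def growth_exponent(times, grain_size_samples):
            """Grain-growth exponent n in d ~ t^n, from a log-log fit of the
            geometric-mean grain size against time."""
            geo_mean = [np.exp(np.mean(np.log(d))) for d in grain_size_samples]
            n, _ = np.polyfit(np.log(times), np.log(geo_mean), 1)
            return n

        times = np.array([1.0, 2.0, 4.0])                       # ns (hypothetical)
        samples = [np.array([3.0, 4.0, 3.5]),                   # nm (hypothetical)
                   np.array([3.8, 4.9, 4.3]),
                   np.array([4.6, 6.1, 5.3])]
        print(f"growth exponent ~ {growth_exponent(times, samples):.2f}")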

  14. Electrical power production from low-grade waste heat using a thermally regenerative ethylenediamine battery

    NASA Astrophysics Data System (ADS)

    Rahimi, Mohammad; D'Angelo, Adriana; Gorski, Christopher A.; Scialdone, Onofrio; Logan, Bruce E.

    2017-05-01

    Thermally regenerative ammonia-based batteries (TRABs) have been developed to harvest low-grade waste heat as electricity. To improve the power production and anodic coulombic efficiency, the use of ethylenediamine as an alternative ligand to ammonia was explored here. The power density of the ethylenediamine-based battery (TRENB) was 85 ± 3 W m^-2 of electrode area with 2 M ethylenediamine, and 119 ± 4 W m^-2 with 3 M ethylenediamine. This power density was 68% higher than that of TRAB. The energy density was 478 Wh m^-3 of anolyte, which was ∼50% higher than that produced by TRAB. The anodic coulombic efficiency of the TRENB was 77 ± 2%, which was more than twice that obtained using ammonia in a TRAB (35%). The higher anodic efficiency reduced the difference between the anode dissolution and cathode deposition rates, resulting in a process more suitable for closed-loop operation. The thermal-electric efficiency based on ethylenediamine separation using waste heat was estimated to be 0.52%, which was lower than that of TRAB (0.86%), mainly due to the more complex separation process. However, this energy recovery could likely be improved through optimization of the ethylenediamine separation process.

  15. Efficient Estimation of the Standardized Value

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    2009-01-01

    We derive an estimator of the standardized value which, under the standard assumptions of normality and homoscedasticity, is more efficient than the established (asymptotically efficient) estimator and discuss its gains for small samples. (Contains 1 table and 3 figures.)

  16. Practical Issues in Implementing Software Reliability Measurement

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.

    1999-01-01

    Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.

  17. Inertia Estimation of Spacecraft Based on Modified Law of Conservation of Angular Momentum

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hoon; Choi, Dae-Gyun; Oh, Hwa-Suk

    2010-12-01

    In general, information on the inertia properties is required to control a spacecraft. The inertia properties change with activities such as consumption of propellant, deployment of solar panels, and sloshing. Extensive estimation methods have been investigated to obtain precise inertia properties. The gyro-based attitude data, which include noise and bias, need to be compensated to improve attitude control accuracy. A modified estimation method based on the law of conservation of angular momentum is suggested to avoid inconveniences such as a filtering process for noise-effect compensation. The conventional method is modified, and the moments of inertia estimated beforehand are applied to improve the estimation efficiency for the products of inertia. The performance of the suggested method has been verified for the case of STSAT-3, the Korea Science and Technology Satellite.

  18. Optimal estimates of the diffusion coefficient of a single Brownian trajectory.

    PubMed

    Boyer, Denis; Dean, David S; Mejía-Monasterio, Carlos; Oshanin, Gleb

    2012-03-01

    Modern developments in microscopy and image processing are revolutionizing areas of physics, chemistry, and biology as nanoscale objects can be tracked with unprecedented accuracy. The goal of single-particle tracking is to determine the interaction between the particle and its environment. The price paid for having a direct visualization of a single particle is a consequent lack of statistics. Here we address the optimal way to extract diffusion constants from single trajectories for pure Brownian motion. It is shown that the maximum likelihood estimator is much more efficient than the commonly used least-squares estimate. Furthermore, we investigate the effect of disorder on the distribution of estimated diffusion constants and show that it increases the probability of observing estimates much smaller than the true (average) value.
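    A minimal sketch of the maximum-likelihood estimator for an idealized, noise-free Brownian trajectory (the paper's comparison with least-squares fitting of the mean-squared displacement and its analysis of disorder are not reproduced; all values below are simulated):

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_bm(D, dt, n, dim=2):
            """Pure Brownian trajectory: each increment ~ N(0, 2*D*dt) per coordinate."""
            steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n, dim))
            return np.vstack([np.zeros(dim), np.cumsum(steps, axis=0)])

        def mle_diffusion(traj, dt):
            """MLE of D from the squared increments of a single trajectory."""
            inc = np.diff(traj, axis=0)
            n, dim = inc.shape
            return np.sum(inc ** 2) / (2.0 * dim * n * dt)

        traj = simulate_bm(D=1.0, dt=0.01, n=1000)
        print(f"estimated D ~ {mle_diffusion(traj, 0.01):.3f}")   # close to 1.0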

  19. Coupled Modeling of Flow, Transport, and Deformation during Hydrodynamically Unstable Displacement in Fractured Rocks

    NASA Astrophysics Data System (ADS)

    Jha, B.; Juanes, R.

    2015-12-01

    Coupled processes of flow, transport, and deformation are important during production of hydrocarbons from oil and gas reservoirs. Effective design and implementation of enhanced recovery techniques such as miscible gas flooding and hydraulic fracturing requires modeling and simulation of these coupled processes in geologic porous media. We develop a computational framework to model the coupled processes of flow, transport, and deformation in heterogeneous fractured rock. We show that the hydrocarbon recovery efficiency during unstable displacement of a more viscous oil with a less viscous fluid in a fractured medium depends on the mechanical state of the medium, which evolves due to permeability alteration within and around fractures. We show that fully accounting for the coupling between the physical processes results in estimates of the recovery efficiency in agreement with observations in field and lab experiments.

  20. Vector Observation-Aided/Attitude-Rate Estimation Using Global Positioning System Signals

    NASA Technical Reports Server (NTRS)

    Oshman, Yaakov; Markley, F. Landis

    1997-01-01

    A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
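    The "exponentially autocorrelated stochastic process" used here for the angular acceleration is, in discrete time, a first-order Gauss-Markov model. A minimal sketch of such a process (the time constant, noise level and step size are arbitrary illustrative values, not those of the paper):

        import numpy as np

        def gauss_markov(tau, sigma, dt, n, seed=0):
            """First-order Gauss-Markov (exponentially autocorrelated) process
            with correlation time tau and steady-state standard deviation sigma."""
            rng = np.random.default_rng(seed)
            phi = np.exp(-dt / tau)
            q = sigma ** 2 * (1.0 - phi ** 2)     # keeps the stationary variance at sigma^2
            x = np.zeros(n)
            for k in range(1, n):
                x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
            return x

        acc = gauss_markov(tau=50.0, sigma=1e-4, dt=1.0, n=5000)
        print(acc.std())   # ~ sigma once the process has mixed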

  1. Wastewater treatment in the pulp-and-paper industry: A review of treatment processes and the associated greenhouse gas emission.

    PubMed

    Ashrafi, Omid; Yerushalmi, Laleh; Haghighat, Fariborz

    2015-08-01

    Pulp-and-paper mills produce various types of contaminants and a significant amount of wastewater depending on the type of processes used in the plant. Since the generated wastewaters can be potentially polluting and very dangerous, they should be treated in wastewater treatment plants before being released to the environment. This paper reviews different wastewater treatment processes used in the pulp-and-paper industry and compares them with respect to their contaminant removal efficiencies and the extent of greenhouse gas (GHG) emission. It also evaluates the impact of operating parameters on the performance of different treatment processes. Two mathematical models were used to estimate GHG emission in common biological treatment processes used in the pulp-and-paper industry. Nutrient removal processes and sludge treatment are discussed and their associated GHG emissions are calculated. Although both aerobic and anaerobic biological processes are appropriate for wastewater treatment, their combination known as hybrid processes showed a better contaminant removal capacity at higher efficiencies under optimized operating conditions with reduced GHG emission and energy costs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Online Updating of Statistical Inference in the Big Data Setting.

    PubMed

    Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui

    2016-01-01

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
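    A minimal sketch of the cumulative-statistics idea behind online updating for the linear model case (the estimating-equation extensions, predictive residual tests and the paper's specific estimators are not reproduced; all data below are simulated):

        import numpy as np

        class OnlineLinearModel:
            """Streamed least squares: accumulate X'X and X'y block by block so
            coefficients can be recovered without storing historical data."""
            def __init__(self, p):
                self.xtx = np.zeros((p, p))
                self.xty = np.zeros(p)

            def update(self, X_block, y_block):
                self.xtx += X_block.T @ X_block
                self.xty += X_block.T @ y_block

            def coef(self):
                # pinv tolerates rank deficiency in early blocks (rare covariates)
                return np.linalg.pinv(self.xtx) @ self.xty

        rng = np.random.default_rng(1)
        beta = np.array([2.0, -1.0, 0.5])
        model = OnlineLinearModel(p=3)
        for _ in range(100):                      # 100 arriving data blocks
            X = rng.normal(size=(50, 3))
            y = X @ beta + rng.normal(scale=0.1, size=50)
            model.update(X, y)
        print(model.coef())                       # ~ [2, -1, 0.5]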

  3. Multiview face detection based on position estimation over multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

    In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back to 3-D space for correspondence. However, the inevitable false face detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is described to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction on real video sequences even under serious occlusion.

  4. Investigation of optimum conditions and costs estimation for degradation of phenol by solar photo-Fenton process

    NASA Astrophysics Data System (ADS)

    Gar Alalm, Mohamed; Tawfik, Ahmed; Ookawara, Shinichi

    2017-03-01

    In this study, a solar photo-Fenton reaction using a compound parabolic collector reactor was assessed for the removal of phenol from aqueous solution. The effects of irradiation time, initial concentration, initial pH, and dosage of the Fenton reagent were investigated. H2O2 and aromatic intermediates (catechol, benzoquinone, and hydroquinone) were quantified during the reaction to study the pathways of the oxidation process. Complete degradation of phenol was achieved after 45 min of irradiation when the initial concentration was 100 mg/L; however, increasing the initial concentration up to 500 mg/L inhibited the degradation efficiency. The dosages of H2O2 and Fe2+ significantly affected the degradation efficiency of phenol. The observed optimum pH for the reaction was 3.1. Phenol degradation at different concentrations was fitted to pseudo-first-order kinetics according to the Langmuir-Hinshelwood model. A cost estimation for a large-scale reactor was performed; the total cost under the most economical conditions giving maximum phenol degradation is 2.54 €/m3.
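    The pseudo-first-order fit mentioned above amounts to a linear regression of ln(C0/C) against time; a minimal sketch with a hypothetical concentration profile (not the paper's data):

        import numpy as np

        def pseudo_first_order_k(t, C):
            """Apparent rate constant from the slope of ln(C0/C) versus time
            (the usual Langmuir-Hinshelwood pseudo-first-order simplification)."""
            y = np.log(C[0] / np.asarray(C, dtype=float))
            k_app, _ = np.polyfit(t, y, 1)
            return k_app

        t = np.array([0.0, 10.0, 20.0, 30.0, 45.0])        # min (hypothetical)
        C = np.array([100.0, 55.0, 30.0, 16.0, 5.0])       # mg/L (hypothetical)
        print(f"k_app ~ {pseudo_first_order_k(t, C):.3f} 1/min")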

  5. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for the nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR regression method is a robust alternative to the widely-used local polynomial method, and has been well studied in stationary time series. In this paper, we relax the stationarity restriction on the model, and allow that the regressors are generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in nonstationary case is slower than that in stationary case. Furthermore, a weighted type local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894

  6. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key quality parameter (Mooney viscosity) can only be obtained offline with a 4-6 h delay. It would therefore be quite helpful for industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing; however, they often do not function well because of the multi-phase and nonlinear nature of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, phase information is extracted during local modeling. Using a Gaussian local modeling method within the just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.

  7. Sodium Hydroxide Production from Seawater Desalination Brine: Process Design and Energy Efficiency.

    PubMed

    Du, Fengmin; Warsinger, David M; Urmi, Tamanna I; Thiel, Gregory P; Kumar, Amit; Lienhard V, John H

    2018-05-15

    The ability to increase pH is a crucial need for desalination pretreatment (especially in reverse osmosis) and for other industries, but processes used to raise pH often incur significant emissions and nonrenewable resource use. Alternatively, waste brine from desalination can be used to create sodium hydroxide, via appropriate concentration and purification pretreatment steps, for input into the chlor-alkali process. In this work, an efficient process train (with variations) is developed and modeled for sodium hydroxide production from seawater desalination brine using membrane chlor-alkali electrolysis. The integrated system includes nanofiltration, concentration via evaporation or mechanical vapor compression, chemical softening, further ion-exchange softening, dechlorination, and membrane electrolysis. System productivity, component performance, and energy consumption of the NaOH production process are highlighted, and their dependencies on electrolyzer outlet conditions and brine recirculation are investigated. The analysis of the process also includes assessment of the energy efficiency of major components, estimation of system operating expense and comparison with similar processes. The brine-to-caustic process is shown to be technically feasible while offering several advantages, that is, the reduced environmental impact of desalination through lessened brine discharge, and the increase in the overall water recovery ratio of the reverse osmosis facility. Additionally, best-use conditions are given for producing caustic not only for use within the plant, but also in excess amounts for potential revenue.

  8. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2015-12-01

    Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found to be the most suitable for accounting for data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.

  9. A Bayes linear Bayes method for estimation of correlated event rates.

    PubMed

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
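    The conjugate gamma-Poisson update is the single-rate building block underlying such models; a minimal sketch (the paper's actual contribution, propagating information between correlated rates through a Bayes linear adjustment with homogenization factors, is not reproduced here):

        def gamma_poisson_update(alpha, beta, events, exposure):
            """Homogeneous Poisson process rate with a Gamma(alpha, beta) prior:
            the posterior is Gamma(alpha + events, beta + exposure)."""
            return alpha + events, beta + exposure

        # prior mean 0.5 events per unit time, worth 2 units of exposure
        alpha, beta = 1.0, 2.0
        alpha, beta = gamma_poisson_update(alpha, beta, events=7, exposure=10.0)
        print(f"posterior mean rate {alpha / beta:.3f}, variance {alpha / beta ** 2:.4f}")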

  10. Utilization of high-frequency Rayleigh waves in near-surface geophysics

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.; Ivanov, J.; Tian, G.; Chen, C.

    2004-01-01

    Shear-wave velocities can be derived by inverting the dispersive phase velocity of surface waves. The multichannel analysis of surface waves (MASW) is one technique for inverting high-frequency Rayleigh waves. The process includes acquisition of high-frequency broad-band Rayleigh waves, efficient and accurate algorithms designed to extract Rayleigh-wave dispersion curves from the recorded Rayleigh waves, and stable and efficient inversion algorithms to obtain near-surface S-wave velocity profiles. MASW estimates S-wave velocity from multichannel vertical-component data and consists of data acquisition, dispersion-curve picking, and inversion.

  11. An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP

    PubMed Central

    Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin

    2015-01-01

    In this study, the effects of non-sidelooking airborne radar clutter dispersion on space-time adaptive processing (STAP) is considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. In order to reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the required parameters for compensating the clutter spectral center misalignment. Simulation results to demonstrate the effectiveness of the proposed algorithm are presented. PMID:26053755

  12. Energy conservation through more efficient lighting.

    PubMed

    Maya, J; Grossman, M W; Lagushenko, R; Waymouth, J F

    1984-10-26

    The efficiency of a mercury-rare gas electrical discharge, which forms the basis of a fluorescent lamp, can be increased about 5 percent simply by increasing the concentration of mercury-196 from 0.146 percent (natural) to about 3 percent. These findings can be implemented immediately without any significant change in the process of manufacturing of this widely used source of illumination, provided that mercury-196 can be obtained economically. The potential energy savings for the United States are estimated to be worth in excess of $200 million per year.

  13. An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation

    PubMed Central

    Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin

    2014-01-01

    In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving targets detection is proposed, which is achieved based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. Moreover, we will then present a knowledge-aided block-size detection algorithm that can discriminate between the moving targets and the clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results. PMID:25222035

  14. Super-Nyquist shaping and processing technologies for high-spectral-efficiency optical systems

    NASA Astrophysics Data System (ADS)

    Jia, Zhensheng; Chien, Hung-Chang; Zhang, Junwen; Dong, Ze; Cai, Yi; Yu, Jianjun

    2013-12-01

    Implementations of super-Nyquist pulse generation, either digitally using a digital-to-analog converter (DAC) or optically using an optical filter at the transmitter side, are introduced. Three corresponding receiver-side signal processing algorithms are presented and compared for high-spectral-efficiency (SE) optical systems employing spectral prefiltering. These algorithms are designed to mitigate the inter-symbol interference (ISI) and inter-channel interference (ICI) impairments caused by the bandwidth constraint, and comprise a 1-tap constant modulus algorithm (CMA) with 3-tap maximum likelihood sequence estimation (MLSE), a regular CMA and digital filter with 2-tap MLSE, and a constant multi-modulus algorithm (CMMA) with 2-tap MLSE. The principles and the prefiltering tolerance are examined through numerical and experimental results.

  15. Overarching framework for data-based modelling

    NASA Astrophysics Data System (ADS)

    Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco

    2014-02-01

    One of the main modelling paradigms for complex physical systems is the network. When estimating the network structure from measured signals, several assumptions, such as stationarity, are typically made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately and sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments on transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.

  16. Enhancement of Er optical efficiency through bismuth sensitization in yttrium oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarangella, Adriana; Reitano, Riccardo

    2015-07-27

    The process of energy transfer (ET) between optically active ions has been widely studied to improve the optical efficiency of systems for different applications, from lighting and photovoltaics to silicon microphotonics. In this work, we report the influence of Bi on the Er optical emission in erbium-yttrium oxide thin films synthesized by magnetron co-sputtering. We demonstrate that this host dissolves Er and Bi ions well, avoiding their clustering, and thus stabilizes the optically active Er^3+ and Bi^3+ valence states. In addition, we establish the occurrence of ET from Bi^3+ to Er^3+ through the observed decrease of the Bi^3+ photoluminescence (PL) emission and the simultaneous increase of the Er^3+ PL emission. This was further confirmed by the coincidence of the Er^3+ and Bi^3+ excitation bands, analyzed by PL excitation spectroscopy. On increasing the Bi content by two orders of magnitude inside the host, the occurrence of Bi-Bi interactions becomes deleterious for the Bi^3+ optical efficiency, but the ET process between Bi^3+ and Er^3+ still prevails. We estimate an ET efficiency of 70% for the optimized Bi:Er ratio of 1:3. Moreover, we demonstrate an enhancement of the Er^3+ effective excitation cross section by more than three orders of magnitude with respect to direct excitation, estimating a value of 5.3 × 10^-18 cm^2, similar to the expected Bi^3+ excitation cross section. This value is one of the highest obtained for Er in Si-compatible hosts. These results make this material very promising as an efficient emitter for Si-compatible photonic devices.

  17. Partition method and experimental validation for impact dynamics of flexible multibody system

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Liu, Z. Y.; Hong, J. Z.

    2018-06-01

    The impact problem of a flexible multibody system is a non-smooth, high-transient, and strong-nonlinear dynamic process with variable boundary. How to model the contact/impact process accurately and efficiently is one of the main difficulties in many engineering applications. The numerical approaches being used widely in impact analysis are mainly from two fields: multibody system dynamics (MBS) and computational solid mechanics (CSM). Approaches based on MBS provide a more efficient yet less accurate analysis of the contact/impact problems, while approaches based on CSM are well suited for particularly high accuracy needs, yet require very high computational effort. To bridge the gap between accuracy and efficiency in the dynamic simulation of a flexible multibody system with contacts/impacts, a partition method is presented considering that the contact body is divided into two parts, an impact region and a non-impact region. The impact region is modeled using the finite element method to guarantee the local accuracy, while the non-impact region is modeled using the modal reduction approach to raise the global efficiency. A three-dimensional rod-plate impact experiment is designed and performed to validate the numerical results. The principle for how to partition the contact bodies is proposed: the maximum radius of the impact region can be estimated by an analytical method, and the modal truncation orders of the non-impact region can be estimated by the highest frequency of the signal measured. The simulation results using the presented method are in good agreement with the experimental results. It shows that this method is an effective formulation considering both accuracy and efficiency. Moreover, a more complicated multibody impact problem of a crank slider mechanism is investigated to strengthen this conclusion.

  18. Wavelet Filter Banks for Super-Resolution SAR Imaging

    NASA Technical Reports Server (NTRS)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Because of the non-parametric nature of these methods, their resolution limitations and their dependence on observation time, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
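    As a generic illustration of wavelet filter-bank pre-processing (a simple 1-D soft-thresholding denoiser, assuming the PyWavelets package; it is not the specific filter-bank design of the paper):

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=3):
            """Decompose with a wavelet filter bank, soft-threshold the detail
            coefficients (universal threshold), and reconstruct."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
            thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        t = np.linspace(0.0, 1.0, 1024)
        clean = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
        noisy = clean + 0.4 * np.random.default_rng(3).normal(size=t.size)
        denoised = wavelet_denoise(noisy)
        print(np.std(noisy - clean), np.std(denoised - clean))   # residual noise drops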

  19. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

    Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves a speed-up of about 200 times compared to our previous work using two-stage MCMC.
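    A minimal sketch of the key idea of folding the GP approximation error into the likelihood (the adaptive refinement and experimental design parts of the paper are not reproduced; the one-parameter forward model, the kernel and all numbers below are assumptions):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def forward(theta):
            """Toy forward model standing in for a groundwater transport simulator."""
            return np.sin(3.0 * theta) + 0.5 * theta

        rng = np.random.default_rng(2)
        theta_train = rng.uniform(-2.0, 2.0, size=(20, 1))
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
        gp.fit(theta_train, forward(theta_train[:, 0]))

        obs = forward(0.7)          # synthetic observation at the "true" parameter
        sigma_obs = 0.05

        def log_likelihood(theta):
            """GP predictive variance is added to the observation variance, so
            surrogate error widens (rather than biases) the posterior."""
            mu, std = gp.predict(np.atleast_2d(theta), return_std=True)
            var = sigma_obs ** 2 + std[0] ** 2
            return -0.5 * ((obs - mu[0]) ** 2 / var + np.log(2.0 * np.pi * var))

        print(log_likelihood(0.7), log_likelihood(1.5))   # usable inside any MCMC sampler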

  20. Technical and scale efficiency of public community hospitals in Eritrea: an exploratory study

    PubMed Central

    2013-01-01

    Background Eritrean gross national income of Int$610 per capita is lower than the average for Africa (Int$1620) and considerably lower than the global average (Int$6977). It is therefore imperative that the country’s resources, including those specifically allocated to the health sector, are put to optimal use. The objectives of this study were (a) to estimate the relative technical and scale efficiency of public secondary level community hospitals in Eritrea, based on data generated in 2007, (b) to estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient, and (c) to estimate, using Tobit regression analysis, the impact of institutional and contextual/environmental variables on hospital inefficiencies. Methods A two-stage Data Envelopment Analysis (DEA) method is used to estimate the efficiency of hospitals and to explain the inefficiencies. In the first stage, the efficient frontier and the hospital-level efficiency scores are estimated using DEA. In the second stage, the estimated DEA efficiency scores are regressed on some institutional and contextual/environmental variables using a Tobit model. In 2007 there were a total of 20 secondary public community hospitals in Eritrea, nineteen of which generated data that could be included in the study. The input and output data were obtained from the Ministry of Health (MOH) annual health service activity report of 2007. Since our study employs data that are five years old, the results are not meant to uncritically inform current decision-making processes, but rather to illustrate the potential value of such efficiency analyses. Results The key findings were as follows: (i) the average constant returns to scale technical efficiency score was 90.3%; (ii) the average variable returns to scale technical efficiency score was 96.9%; and (iii) the average scale efficiency score was 93.3%. In 2007, the inefficient hospitals could have become more efficient by either increasing their outputs by 20,611 outpatient visits and 1,806 hospital discharges, or by transferring the excess 2.478 doctors (2.85%), 9.914 nurses and midwives (0.98%), 9.774 laboratory technicians (9.68%), and 195 beds (10.42%) to primary care facilities such as health centres, health stations, and maternal and child health clinics. In the Tobit regression analysis, the coefficient for OPDIPD (outpatient visits as a proportion of inpatient days) had a negative sign and was statistically significant; the coefficient for ALOS (average length of stay) had a positive sign and was statistically significant at the 5% level. Conclusions The findings from the first-stage analysis imply that 68% of the hospitals were variable returns to scale technically efficient, and only 42% of the hospitals achieved scale efficiency. On average, inefficient hospitals could have increased their outpatient visits by 5.05% and hospital discharges by 3.42% using the same resources. Our second-stage analysis shows that the ratio of outpatient visits to inpatient days and average length of inpatient stay are significantly correlated with hospital inefficiencies. This study shows that routinely collected hospital data in Eritrea can be used to identify relatively inefficient hospitals as well as the sources of their inefficiencies. PMID:23497525
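
    The first stage described above can be illustrated with a minimal input-oriented, constant-returns-to-scale DEA model solved as one linear program per hospital; the sketch below uses scipy and made-up inputs and outputs, and the second-stage Tobit regression is not shown.

        import numpy as np
        from scipy.optimize import linprog

        def dea_crs_input(X, Y):
            # Input-oriented CCR efficiency scores. X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
            n, m = X.shape
            s = Y.shape[1]
            scores = []
            for o in range(n):
                # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
                c = np.r_[1.0, np.zeros(n)]
                A_in = np.hstack([-X[o].reshape(-1, 1), X.T])    # sum_j lam_j * x_ij <= theta * x_io
                A_out = np.hstack([np.zeros((s, 1)), -Y.T])      # sum_j lam_j * y_rj >= y_ro
                res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                              b_ub=np.r_[np.zeros(m), -Y[o]],
                              bounds=[(None, None)] + [(0, None)] * n, method="highs")
                scores.append(res.x[0])
            return np.array(scores)

        # Illustrative data: 4 hospitals, inputs = (doctors, beds), outputs = (visits, discharges).
        X = np.array([[20.0, 100], [35, 150], [15, 80], [40, 220]])
        Y = np.array([[5000.0, 400], [7000, 500], [4500, 380], [7500, 600]])
        print(dea_crs_input(X, Y))   # scores of 1.0 indicate hospitals on the efficient frontier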

  1. On the Capacity of Attention: Its Estimation and Its Role in Working Memory and Cognitive Aptitudes

    PubMed Central

    Cowan, Nelson; Elliott, Emily M.; Saults, J. Scott; Morey, Candice C.; Mattox, Sam; Hismjatullina, Anna; Conway, Andrew R.A.

    2008-01-01

    Working memory (WM) is the set of mental processes holding limited information in a temporarily accessible state in service of cognition. We provide a theoretical framework to understand the relation between WM and aptitude measures. The WM measures that have yielded high correlations with aptitudes include separate storage and processing task components, on the assumption that WM involves both storage and processing. We argue that the critical aspect of successful WM measures is that rehearsal and grouping processes are prevented, allowing a clearer estimate of how many separate chunks of information the focus of attention circumscribes at once. Storage-and-processing tasks correlate with aptitudes, according to this view, largely because the processing task prevents rehearsal and grouping of items to be recalled. In a developmental study, we document that several scope-of-attention measures that do not include a separate processing component, but nevertheless prevent efficient rehearsal or grouping, also correlate well with aptitudes and with storage-and-processing measures. So does digit span in children too young to rehearse. PMID:16039935

  2. FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George Rizeq; Janice West; Arnaldo Frydman

    It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Energy and Environmental Research Corporation (GE EER) has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GE EER (prime contractor) was awarded a Vision 21 program from U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GE EER, Southern Illinois University at Carbondale (SIU-C), California Energy Commission (CEC), and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal/opportunity fuels and air are simultaneously converted into separate streams of (1) pure hydrogen that can be utilized in fuel cells, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure oxygen-depleted air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on process modeling work, has an estimated process efficiency of 68%, based on electrical and H{sub 2} energy outputs relative to the higher heating value of coal, and an estimated equivalent electrical efficiency of 60%. The Phase I R&D program will determine the operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates lab-, bench- and pilot-scale studies to demonstrate the UFP technology. This is the tenth quarterly technical progress report for the Vision 21 UFP program supported by U.S. DOE NETL (Contract No. DE-FC26-00FT40974). This report summarizes program accomplishments for the period starting January 1, 2003 and ending March 31, 2003. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab-scale experimental testing, pilot-scale assembly, and program management.

  3. FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George Rizeq; Janice West; Arnaldo Frydman

    It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Energy and Environmental Research Corporation (GE EER) has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GE EER was awarded a Vision 21 program from U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GE EER, California Energy Commission, Southern Illinois University at Carbondale, and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal/opportunity fuels and air are simultaneously converted into separate streams of (1) pure hydrogen that can be utilized in fuel cells, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure oxygen-depleted air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on process modeling work, has an estimated process efficiency of 68%, based on electrical and H{sub 2} energy outputs relative to the higher heating value of coal, and an estimated equivalent electrical efficiency of 60%. The Phase I R&D program will determine the operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates lab-, bench- and pilot-scale studies to demonstrate the UFP technology. This is the ninth quarterly technical progress report for the Vision 21 UFP program supported by U.S. DOE NETL (Contract No. DE-FC26-00FT40974). This report summarizes program accomplishments for the period starting October 1, 2002 and ending December 31, 2002. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab- and bench-scale experimental testing, pilot-scale design and assembly, and program management.

  4. FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George Rizeq; Janice West; Arnaldo Frydman

    It is expected that in the 21st century the Nation will continue to rely on fossil fuels for electricity, transportation, and chemicals. It will be necessary to improve both the process efficiency and environmental impact performance of fossil fuel utilization. GE Global Research (GEGR) has developed an innovative fuel-flexible Unmixed Fuel Processor (UFP) technology to produce H{sub 2}, power, and sequestration-ready CO{sub 2} from coal and other solid fuels. The UFP module offers the potential for reduced cost, increased process efficiency relative to conventional gasification and combustion systems, and near-zero pollutant emissions including NO{sub x}. GEGR (prime contractor) was awarded a Vision 21 program from U.S. DOE NETL to develop the UFP technology. Work on this Phase I program started on October 1, 2000. The project team includes GEGR, Southern Illinois University at Carbondale (SIU-C), California Energy Commission (CEC), and T. R. Miles, Technical Consultants, Inc. In the UFP technology, coal/opportunity fuels and air are simultaneously converted into separate streams of (1) pure hydrogen that can be utilized in fuel cells, (2) sequestration-ready CO{sub 2}, and (3) high temperature/pressure oxygen-depleted air to produce electricity in a gas turbine. The process produces near-zero emissions and, based on process modeling with best-case scenario assumptions, has an estimated process efficiency of 68%, based on electrical and H{sub 2} energy outputs relative to the higher heating value of coal, and an estimated equivalent electrical efficiency of 60%. The Phase I R&D program will determine the operating conditions that maximize separation of CO{sub 2} and pollutants from the vent gas, while simultaneously maximizing coal conversion efficiency and hydrogen production. The program integrates lab-, bench- and pilot-scale studies to demonstrate the UFP technology. This is the eleventh quarterly technical progress report for the Vision 21 UFP program supported by U.S. DOE NETL (Contract No. DE-FC26-00FT40974). This report summarizes program accomplishments for the period starting April 1, 2003 and ending June 30, 2003. The report includes an introduction summarizing the UFP technology, main program tasks, and program objectives; it also provides a summary of program activities and accomplishments covering progress in tasks including lab-scale experimental testing, pilot-scale assembly, and program management.

  5. Interval versions of statistical techniques with applications to environmental analysis, bioinformatics, and privacy in statistical databases

    NASA Astrophysics Data System (ADS)

    Kreinovich, Vladik; Longpre, Luc; Starks, Scott A.; Xiang, Gang; Beck, Jan; Kandathi, Raj; Nayak, Asis; Ferson, Scott; Hajagos, Janos

    2007-02-01

    In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t)--e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit (DL). We must, therefore, modify the existing statistical algorithms to process such interval data. Such a modification is also necessary to process data from statistical databases, where, in order to maintain privacy, we only keep interval ranges instead of the actual numeric data (e.g., a salary range instead of the actual salary). Most resulting computational problems are NP-hard--which means, crudely speaking, that in general, no computationally efficient algorithm can solve all particular cases of the corresponding problem. In this paper, we overview practical situations in which computationally efficient algorithms exist: e.g., situations when measurements are very accurate, or when all the measurements are done with one (or few) instruments. As a case study, we consider a practical problem from bioinformatics: to discover the genetic difference between the cancer cells and the healthy cells, we must process the measurements results and find the concentrations c and h of a given gene in cancer and in healthy cells. This is a particular case of a general situation in which, to estimate states or parameters which are not directly accessible by measurements, we must solve a system of equations in which coefficients are only known with interval uncertainty. We show that in general, this problem is NP-hard, and we describe new efficient algorithms for solving this problem in practically important situations.
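
    To make the interval-data setting concrete, the sketch below computes exact bounds on the sample mean (attained at the interval endpoints) and brute-forces the endpoint combinations of a tiny example for the variance; that enumeration is only feasible for a handful of intervals, consistent with the NP-hardness noted above. The detection-limit values are invented.

        import itertools
        import numpy as np

        # Two readings below a detection limit of 0.5 and two ordinary interval readings.
        intervals = [(0.0, 0.5), (1.2, 1.4), (0.0, 0.5), (2.0, 2.2)]
        lo = np.array([a for a, _ in intervals])
        hi = np.array([b for _, b in intervals])

        # Bounds on the sample mean are exact and attained at the endpoints.
        mean_lo, mean_hi = lo.mean(), hi.mean()

        # The maximum sample variance is attained at interval endpoints (variance is convex
        # in the data), so enumerating all 2^n endpoint patterns gives the exact upper bound.
        endpoint_vars = [np.var(np.choose(bits, [lo, hi]), ddof=1)
                         for bits in itertools.product([0, 1], repeat=len(intervals))]
        print(mean_lo, mean_hi, max(endpoint_vars))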

  6. Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity and Efficient Estimators

    DTIC Science & Technology

    2012-09-27

    particular, we require no entangling gates or ancillary systems for the procedure. In contrast with [19], our method is not restricted to processes that are...of states, such as those recently developed for use with permutation-invariant states [60], matrix product states [61] or multi-scale entangled states...process tomography: first prepare the Jamiołkowski state ρE (by adjoining an ancilla, preparing the maximally entangled state |ψ0, and applying E); then

  7. Combining Image Processing with Signal Processing to Improve Transmitter Geolocation Estimation

    DTIC Science & Technology

    2014-03-27

    transmitter by searching a grid of possible transmitter locations within the image region. At each evaluated grid point, theoretical TDOA values are computed...requires converting the image to a grayscale intensity image. This allows efficient manipulation of data and ease of comparison among pixel values . The...cluster of redundant y values along the top edge of an ideal rectangle. The same is true for the bottom edge, as well as for the x values along the

  8. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  9. CTER-rapid estimation of CTF parameters with error assessment.

    PubMed

    Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance both for initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new-generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the need for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, the bootstrap, that yields standard deviations of the estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. Copyright © 2014 Elsevier B.V. All rights reserved.
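
    A generic sketch of the bootstrap step mentioned above (not CTER's implementation): resample the data with replacement, re-apply the estimator, and report the spread of the resampled estimates as a standard deviation. The per-segment defocus values are hypothetical.

        import numpy as np

        def bootstrap_sd(data, estimator, n_boot=500, seed=0):
            # Nonparametric bootstrap standard deviation of an estimator.
            rng = np.random.default_rng(seed)
            n = len(data)
            stats = [estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
            return float(np.std(stats, ddof=1))

        # Hypothetical per-segment defocus estimates (micrometres) pooled over one micrograph.
        rng = np.random.default_rng(1)
        defocus = 2.1 + 0.05 * rng.standard_normal(200)
        print(bootstrap_sd(defocus, np.mean))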

  10. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    PubMed

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows one to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows one to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.

  11. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks

    PubMed Central

    2010-01-01

    Background Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. Results In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. Conclusions The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows one to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows one to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates. PMID:20500862

  12. Development, test and evaluation of a computerized procedure for using Landsat data to estimate spring small grains acreage

    NASA Technical Reports Server (NTRS)

    Mohler, R. R. J.; Palmer, W. F.; Smyrski, M. M.; Baker, T. C.; Nazare, C. V.

    1982-01-01

    A number of methods which can provide information concerning crop acreages on the basis of a utilization of multispectral scanner (MSS) data require for their implementation a comparatively large amount of labor. The present investigation is concerned with a project designed to improve the efficiency of analysis through increased automation. The Caesar technique was developed to realize this objective. The processability rates of the Caesar procedure versus the historical state-of-the-art proportion estimation procedures were determined in an experiment. Attention is given to the study site, the aggregation technology, the results of the aggregation test, and questions of error characterization. It is found that the Caesar procedure, which has been developed for the spring small grains region of North America, is highly efficient and provides accurate results.

  13. Estimation of diastolic intraventricular pressure gradients by Doppler M-mode echocardiography

    NASA Technical Reports Server (NTRS)

    Greenberg, N. L.; Vandervoort, P. M.; Firstenberg, M. S.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Previous studies have shown that small intraventricular pressure gradients (IVPG) are important for efficient filling of the left ventricle (LV) and as a sensitive marker for ischemia. Unfortunately, there has previously been no way of measuring these noninvasively, severely limiting their research and clinical utility. Color Doppler M-mode (CMM) echocardiography provides a spatiotemporal velocity distribution along the inflow tract throughout diastole, which we hypothesized would allow direct estimation of IVPG by using the Euler equation. Digital CMM images, obtained simultaneously with intracardiac pressure waveforms in six dogs, were processed by numerical differentiation for the Euler equation, then integrated to estimate IVPG and the total (left atrial to left ventricular apex) pressure drop. CMM-derived estimates agreed well with invasive measurements (IVPG: y = 0.87x + 0.22, r = 0.96, P < 0.001, standard error of the estimate = 0.35 mmHg). Quantitative processing of CMM data allows accurate estimation of IVPG and tracking of changes induced by beta-adrenergic stimulation. This novel approach provides unique information on LV filling dynamics in an entirely noninvasive way that has previously not been available for assessment of diastolic filling and function.
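
    A numerical sketch of the Euler-equation step described above, assuming a space-time velocity map v(t, s) along the scanline is available: dp/ds = -rho*(dv/dt + v*dv/ds), integrated along s. The synthetic velocity field, grid sizes, and blood-density value are illustrative assumptions, not the study's processing pipeline.

        import numpy as np

        def ivpg_from_cmm(v, ds, dt, rho=1060.0):
            # v: (n_t, n_s) velocities in m/s along the scanline; rho: blood density in kg/m^3.
            dv_dt = np.gradient(v, dt, axis=0)
            dv_ds = np.gradient(v, ds, axis=1)
            dp_ds = -rho * (dv_dt + v * dv_ds)
            # Integrate along the scanline: p(s) - p(0) at every time frame, converted to mmHg.
            return np.cumsum(dp_ds * ds, axis=1) / 133.322

        # Synthetic filling wave travelling along a 5 cm scanline over 0.2 s.
        t = np.linspace(0, 0.2, 200)[:, None]
        s = np.linspace(0, 0.05, 64)[None, :]
        v = 0.6 * np.exp(-((s - 0.02 - 0.1 * t) / 0.01) ** 2)
        ivpg = ivpg_from_cmm(v, ds=0.05 / 63, dt=0.2 / 199)
        print("peak base-to-apex gradient (mmHg):", ivpg[:, -1].max())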

  14. Color vision predicts processing modes of goal activation during action cascading.

    PubMed

    Jongkees, Bryant J; Steenbergen, Laura; Colzato, Lorenza S

    2017-09-01

    One of the most important functions of cognitive control is action cascading: the ability to cope with multiple response options when confronted with various task goals. A recent study implicates a key role for dopamine (DA) in this process, suggesting higher D1 efficiency shifts the action cascading strategy toward a more serial processing mode, whereas higher D2 efficiency promotes a shift in the opposite direction by inducing a more parallel processing mode (Stock, Arning, Epplen, & Beste, 2014). Given that DA is found in high concentration in the retina and modulation of retinal DA release displays characteristics of D2-receptors (Peters, Schweibold, Przuntek, & Müller, 2000), color vision discrimination might serve as an index of D2 efficiency. We used color discrimination, assessed with the Lanthony Desaturated Panel D-15 test, to predict individual differences (N = 85) in a stop-change paradigm that provides a well-established measure of action cascading. In this task it is possible to calculate an individual slope value for each participant that estimates the degree of overlap in task goal activation. When the stopping process of a previous task goal has not finished at the time the change process toward a new task goal is initiated (parallel processing), the slope value becomes steeper. In case of less overlap (more serial processing), the slope value becomes flatter. As expected, participants showing better color vision were more prone to activate goals in a parallel manner as indicated by a steeper slope. Our findings suggest that color vision might represent a predictor of D2 efficiency and the predisposed processing mode of goal activation during action cascading. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Modelling the energy costs of the wastewater treatment process: The influence of the aging factor.

    PubMed

    Castellet-Viciano, Lledó; Hernández-Chover, Vicent; Hernández-Sancho, Francesc

    2018-06-01

    Wastewater treatment plants (WWTPs) are aging, and the effects on the treatment process become more evident as time goes by. Due to the deterioration of the facilities, the efficiency of the treatment process decreases gradually. Within this framework, this paper demonstrates the increase in the energy consumption of WWTPs over time and finds differences among facility sizes. Accordingly, the paper aims to develop a dynamic energy cost function capable of predicting the energy cost of the process in the future. The time variable is used to introduce the aging effects into the energy cost estimation in order to increase its accuracy. For this purpose, the evolution of energy costs is assessed and modelled for a group of WWTPs using the methodology of cost functions. The results will be useful for the managers of the facilities in the decision-making process. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  17. Estimating means and variances: The comparative efficiency of composite and grab samples.

    PubMed

    Brumelle, S; Nemetz, P; Casey, D

    1984-03-01

    This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period, which are then pooled and analyzed as a single sample. We review the well known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e. f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
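
    The kurtosis condition above can be checked with a small Monte Carlo sketch: both schemes give unbiased estimates of the population variance, but their sampling variances differ with the population's kurtosis. The number of periods, pool size, and the choice of uniform (platykurtic) and Laplace (leptokurtic) populations are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        k, n, reps = 50, 4, 5000     # periods, samples pooled per composite, Monte Carlo repetitions

        def estimator_variances(draw):
            grab, comp = [], []
            for _ in range(reps):
                grab_obs = draw((k,))                      # one analysed sample per period
                comp_obs = draw((k, n)).mean(axis=1)       # n pooled samples analysed as one
                grab.append(np.var(grab_obs, ddof=1))      # both estimators are unbiased for sigma^2
                comp.append(n * np.var(comp_obs, ddof=1))
            return np.var(grab), np.var(comp)

        # Platykurtic population (uniform): grab sampling estimates the variance more efficiently.
        print(estimator_variances(lambda size: rng.uniform(-1, 1, size)))
        # Leptokurtic population (Laplace): composite sampling is the more efficient choice.
        print(estimator_variances(lambda size: rng.laplace(0.0, 1.0, size)))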

  18. Assessment of Energy Efficiency Improvement in the United States Petroleum Refining Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, William R.; Marano, John; Sathaye, Jayant

    2013-02-01

    Adoption of efficient process technologies is an important approach to reducing CO2 emissions, in particular those associated with combustion. In many cases, implementing energy efficiency measures is among the most cost-effective approaches that any refiner can take, improving productivity while reducing emissions. Therefore, careful analysis of the options and costs associated with efficiency measures is required to establish sound carbon policies addressing global climate change, and is the primary focus of LBNL’s current petroleum refining sector analysis for the U.S. Environmental Protection Agency. The analysis is aimed at identifying energy efficiency-related measures and developing energy abatement supply curves and CO2 emissions reduction potential for the U.S. refining industry. A refinery model has been developed for this purpose that is a notional aggregation of the U.S. petroleum refining sector. It consists of twelve processing units and accounts for the additional energy requirements from steam generation, hydrogen production, and water utilities required by each of the twelve processing units. The model is carbon and energy balanced such that crude oil inputs and major refinery sector outputs (fuels) are benchmarked to 2010 data. Estimates of the current penetration for the identified energy efficiency measures benchmark the energy requirements to those reported in U.S. DOE 2010 data. The remaining energy efficiency potential for each of the measures is estimated and compared to U.S. DOE fuel prices, resulting in estimates of cost-effective energy efficiency opportunities for each of the twelve major processes. A combined cost of conserved energy supply curve is also presented, along with the CO2 emissions abatement opportunities that exist in the U.S. petroleum refinery sector. Roughly 1,200 PJ per year of primary fuels savings and close to 500 GWh per year of electricity savings are potentially cost-effective given U.S. DOE fuel price forecasts. This represents roughly 70 million metric tonnes of CO2 emission reductions assuming the 2010 emissions factor for grid electricity. Energy efficiency measures resulting in an additional 400 PJ per year of primary fuels savings and close to 1,700 GWh per year of electricity savings, and an associated 24 million metric tonnes of CO2 emission reductions, are not cost-effective given the same assumptions with respect to fuel prices and electricity emissions factors. Compared to the modeled energy requirements for the U.S. petroleum refining sector, the cost-effective potential represents a 40% reduction in fuel consumption and a 2% reduction in electricity consumption. The non-cost-effective potential represents an additional 13% reduction in fuel consumption and an additional 7% reduction in electricity consumption. The relative energy reduction potentials are much higher for fuel consumption than for electricity consumption, largely because fuel is the primary type of energy consumed in refineries. Moreover, many cost-effective fuel savings measures would increase electricity consumption. The model also has the potential to be used to examine the costs and benefits of other CO2 mitigation options, such as combined heat and power (CHP), carbon capture, and the potential introduction of biomass feedstocks. However, these options are not addressed in this report, as it is focused on developing the modeling methodology and assessing fuel savings measures. These options could further reduce refinery sector CO2 emissions and are recommended for further research and analysis.

  19. Kuang's Semi-Classical Formalism for Calculating Electron Capture Cross Sections: A Space- Physics Application

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.

    2014-01-01

    Accurate estimates of electron-capture cross sections at energies relevant to the modeling of the transport, acceleration, and interaction of energetic neutral atoms (ENAs) in space (approximately a few MeV per nucleon), and especially for multi-electron ions, must rely on a detailed, but computationally expensive, quantum-mechanical description of the collision process. Kuang's semi-classical approach is an elegant and efficient way to arrive at these estimates. Motivated by ENA modeling efforts for space applications, we briefly present this approach along with sample applications and report on current progress.

  20. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
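
    The equilibrium-distribution models proposed in the paper are not reproduced here; as a hedged stand-in, the sketch below fits the classical Goel-Okumoto NHPP mean value function m(t) = a(1 - exp(-b*t)) to fault-detection times by maximum likelihood, which is the kind of parameter estimation such SRMs require. The fault times and starting values are invented.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(params, times, T):
            # NHPP log-likelihood: sum_i log lambda(t_i) - m(T), with lambda(t) = a*b*exp(-b*t).
            log_a, log_b = params                 # optimise on the log scale to keep a, b > 0
            a, b = np.exp(log_a), np.exp(log_b)
            return -(np.sum(np.log(a * b) - b * times) - a * (1.0 - np.exp(-b * T)))

        # Hypothetical fault-detection times (days) observed over a 60-day test phase.
        times = np.array([1.2, 3.5, 4.1, 7.8, 9.0, 13.4, 16.2, 21.0, 27.5, 35.1, 44.8, 55.0])
        res = minimize(neg_log_lik, x0=[np.log(20.0), np.log(0.05)], args=(times, 60.0))
        a_hat, b_hat = np.exp(res.x)
        print("expected total faults:", a_hat, "estimated remaining:", a_hat - len(times))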

  1. Some Calculated Research Results of the Working Process Parameters of the Low Thrust Rocket Engine Operating on Gaseous Oxygen-Hydrogen Fuel

    NASA Astrophysics Data System (ADS)

    Ryzhkov, V.; Morozov, I.

    2018-01-01

    The paper presents calculated results for the parameters of the combustion products in the flow path of a low-thrust rocket engine with thrust P ∼ 100 N. The article contains the following data: streamlines, the distribution of total temperature in the longitudinal section of the engine chamber, the static temperature distribution in the cross section of the engine chamber, the velocity distribution of the combustion products in the outlet section of the engine nozzle, and the static temperature near the inner wall of the engine. The presented parameters allow one to estimate the efficiency of the mixture-formation processes and the flow of combustion products in the engine chamber, and to assess the thermal state of the structure.

  2. Active media for up-conversion diode-pumped lasers

    NASA Astrophysics Data System (ADS)

    Tkachuk, Alexandra M.

    1996-03-01

    In this work, we consider different methods of populating the initial and final working levels of laser transitions in TR-doped crystals under selective 'up-conversion' and 'avalanche' diode-laser pumping. On the basis of estimates of the rates of competing non-radiative energy-transfer processes, obtained from experimental data and theoretical calculations, we estimated the efficiency of up-conversion pumping and the self-quenching of the upper TR3+ states excited by laser-diode emission. The effects of host composition, dopant concentration, and temperature on the output characteristics and up-conversion processes in YLF:Er, BaY2F8:Er, BaY2F8:Er,Yb, and BaY2F8:Yb,Ho are determined.

  3. Estimating the Volterra Series Transfer Function over coherent optical OFDM for efficient monitoring of the fiber channel nonlinearity.

    PubMed

    Shulkind, Gal; Nazarathy, Moshe

    2012-12-17

    We present an efficient method for system identification (nonlinear channel estimation) of third order nonlinear Volterra Series Transfer Function (VSTF) characterizing the four-wave-mixing nonlinear process over a coherent OFDM fiber link. Despite the seemingly large number of degrees of freedom in the VSTF (cubic in the number of frequency points) we identified a compressed VSTF representation which does not entail loss of information. Additional slightly lossy compression may be obtained by discarding very low power VSTF coefficients associated with regions of destructive interference in the FWM phased array effect. Based on this two-staged VSTF compressed representation, we develop a robust and efficient algorithm of nonlinear system identification (optical performance monitoring) estimating the VSTF by transmission of an extended training sequence over the OFDM link, performing just a matrix-vector multiplication at the receiver by a pseudo-inverse matrix which is pre-evaluated offline. For 512 (1024) frequency samples per channel, the VSTF measurement takes less than 1 (10) msec to complete with computational complexity of one real-valued multiply-add operation per time sample. Relative to a naïve exhaustive three-tone-test, our algorithm is far more tolerant of ASE additive noise and its acquisition time is orders of magnitude faster.
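
    A toy sketch of the identification principle above: build a design matrix of nonlinear regressors from a known training sequence, pre-compute its pseudo-inverse offline, and recover the kernel coefficients at run time with a single matrix-vector product. A memoryless third-order baseband nonlinearity stands in for the full frequency-domain VSTF, and all values are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy channel: y = h1*x + h3*|x|^2*x + noise, at complex baseband.
        h_true = np.array([1.0 + 0.2j, 0.05 - 0.08j])
        x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) / np.sqrt(2)  # training sequence
        basis = np.column_stack([x, np.abs(x) ** 2 * x])                               # linear and cubic regressors
        y = basis @ h_true + 0.01 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))

        # The pseudo-inverse of the known training design matrix is evaluated offline ...
        P = np.linalg.pinv(basis)
        # ... so identification at run time reduces to one matrix-vector multiplication.
        h_hat = P @ y
        print(h_hat)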

  4. Cost inefficiency under financial strain: a stochastic frontier analysis of hospitals in Washington State through the Great Recession.

    PubMed

    Izón, Germán M; Pardini, Chelsea A

    2017-06-01

    The importance of increasing cost efficiency for community hospitals in the United States has been underscored by the Great Recession and the ever-changing health care reimbursement environment. Previous studies have shown mixed evidence with regards to the relationship between linking hospitals' reimbursement to quality of care and cost efficiency. Moreover, current evidence suggests that not only inherently financially disadvantaged hospitals (e.g., safety-net providers), but also more financially stable providers, experienced declines to their financial viability throughout the recession. However, little is known about how hospital cost efficiency fared throughout the Great Recession. This study contributes to the literature by using stochastic frontier analysis to analyze cost inefficiency of Washington State hospitals between 2005 and 2012, with controls for patient burden of illness, hospital process of care quality, and hospital outcome quality. The quality measures included in this study function as central measures for the determination of recently implemented pay-for-performance programs. The average estimated level of hospital cost inefficiency before the Great Recession (10.4 %) was lower than it was during the Great Recession (13.5 %) and in its aftermath (14.1 %). Further, the estimated coefficients for summary process of care quality indexes for three health conditions (acute myocardial infarction, pneumonia, and heart failure) suggest that higher quality scores are associated with increased cost inefficiency.

  5. Permeability Estimation of Rock Reservoir Based on PCA and Elman Neural Networks

    NASA Astrophysics Data System (ADS)

    Shi, Ying; Jian, Shaoyong

    2018-03-01

    An intelligent method based on fuzzy neural networks combined with a PCA algorithm is proposed to estimate the permeability of rock reservoirs. First, the dimensionality of the characteristic parameters is reduced by the principal component analysis method. Then, the mapping relationship between the rock slice characteristic parameters and permeability is established through fuzzy neural networks. The validity and reliability of the estimation method were tested with practical data from the Yan’an region in the Ordos Basin. The results showed that the average relative error of the permeability estimates obtained with this method is 6.25%, and that the method has better convergence speed and higher accuracy than the alternatives. Therefore, by using inexpensive rock-slice information, the permeability of a rock reservoir can be estimated efficiently and accurately, with high reliability, practicability, and application prospects.
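
    A generic scikit-learn sketch of the two-stage idea (PCA for dimensionality reduction followed by a small neural-network regressor); the paper's fuzzy/Elman network is not reproduced, and the rock-slice features and permeability values below are synthetic.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for correlated rock-slice characteristic parameters (20 features).
        rng = np.random.default_rng(0)
        latent = rng.standard_normal((300, 3))
        X = latent @ rng.standard_normal((3, 20)) + 0.1 * rng.standard_normal((300, 20))
        perm = np.exp(latent @ np.array([0.8, -0.5, 0.3])) + 0.05 * rng.standard_normal(300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, perm, random_state=1)
        model = make_pipeline(StandardScaler(), PCA(n_components=3),
                              MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=1))
        model.fit(X_tr, y_tr)
        print("mean relative error:", np.mean(np.abs(model.predict(X_te) - y_te) / np.abs(y_te)))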

  6. Estimating residual fault hitting rates by recapture sampling

    NASA Technical Reports Server (NTRS)

    Lee, Larry; Gupta, Rajan

    1988-01-01

    For the recapture debugging design introduced by Nayak (1988) the problem of estimating the hitting rates of the faults remaining in the system is considered. In the context of a conditional likelihood, moment estimators are derived and are shown to be asymptotically normal and fully efficient. Fixed sample properties of the moment estimators are compared, through simulation, with those of the conditional maximum likelihood estimators. Properties of the conditional model are investigated such as the asymptotic distribution of linear functions of the fault hitting frequencies and a representation of the full data vector in terms of a sequence of independent random vectors. It is assumed that the residual hitting rates follow a log linear rate model and that the testing process is truncated when the gaps between the detection of new errors exceed a fixed amount of time.

  7. A Comparison of Two Fat Grafting Methods on Operating Room Efficiency and Costs.

    PubMed

    Gabriel, Allen; Maxwell, G Patrick; Griffin, Leah; Champaneria, Manish C; Parekh, Mousam; Macarios, David

    2017-02-01

    Centrifugation (Cf) is a common method of fat processing but may be time consuming, especially when processing large volumes. To determine the effects on fat grafting time, volume efficiency, reoperations, and complication rates of Cf vs an autologous fat processing system (Rv) that incorporates fat harvesting and processing in a single unit. We performed a retrospective cohort study of consecutive patients who underwent autologous fat grafting during reconstructive breast surgery with Rv or Cf. Endpoints measured were volume of fat harvested (lipoaspirate) and volume injected after processing, time to complete processing, reoperations, and complications. A budget impact model was used to estimate cost of Rv vs Cf. Ninety-eight patients underwent fat grafting with Rv, and 96 patients received Cf. Mean volumes of lipoaspirate (506.0 vs 126.1 mL) and fat injected (177.3 vs 79.2 mL) were significantly higher (P < .0001) in the Rv vs Cf group, respectively. Mean time to complete fat grafting was significantly shorter in the Rv vs Cf group (34.6 vs 90.1 minutes, respectively; P < .0001). Proportions of patients with nodule and cyst formation and/or who received reoperations were significantly less in the Rv vs Cf group. Based on these outcomes and an assumed per minute operating room cost, an average per patient cost savings of $2,870.08 was estimated with Rv vs Cf. Compared to Cf, the Rv fat processing system allowed for a larger volume of fat to be processed for injection and decreased operative time in these patients, potentially translating to cost savings. LEVEL OF EVIDENCE 3. © 2016 The American Society for Aesthetic Plastic Surgery, Inc.

  8. Performance of (in)active anodic materials for the electrooxidation of phenolic wastewaters from cashew-nut processing industry.

    PubMed

    Oliveira, Edna M S; Silva, Francisco R; Morais, Crislânia C O; Oliveira, Thiago Mielle B F; Martínez-Huitle, Carlos A; Motheo, Artur J; Albuquerque, Cynthia C; Castro, Suely S L

    2018-06-01

    This study investigated the anodic oxidation of phenolic wastewater generated by the cashew-nut processing industry (CNPI) using active (Ti/RuO2-TiO2) and inactive (boron-doped diamond, BDD) anodes. During electrochemical treatment, various operating parameters were investigated, such as current density, chemical oxygen demand (COD), total phenols, O2 production, temperature, pH, as well as current efficiency and energy consumption. After electrolysis under optimized working conditions, samples were evaluated by chromatography and toxicological tests against L. sativa. When both electrode materials were compared under the same operating conditions, higher COD removal efficiency was achieved for the BDD anode, with lower energy requirements than the values estimated for Ti/RuO2-TiO2. The presence of Cl- in the wastewater promoted the electrogeneration of strong oxidant species such as chlorine, hypochlorite, and mainly hypochlorous acid, increasing the efficiency of the degradation process. Regarding the temperature effect, BDD showed slower performance than that achieved for Ti/RuO2-TiO2. Chromatographic and phytotoxicity studies indicated the formation of some by-products after the electrolytic process, regardless of the anode evaluated, and a phytotoxic action of the effluent. The results encourage the applicability of the electrochemical method as a wastewater treatment process for the CNPI, reducing depuration time. Copyright © 2018. Published by Elsevier Ltd.

  9. Single nucleotide polymorphisms for feed efficiency and performance in crossbred beef cattle

    PubMed Central

    2014-01-01

    Background This study was conducted to: (1) identify new SNPs for residual feed intake (RFI) and performance traits within candidate genes identified in a genome wide association study (GWAS); (2) estimate the proportion of variation in RFI explained by the detected SNPs; (3) estimate the effects of detected SNPs on carcass traits to avoid undesirable correlated effects on these economically important traits when selecting for feed efficiency; and (4) map the genes to biological mechanisms and pathways. A total number of 339 SNPs corresponding to 180 genes were tested for association with phenotypes using a single locus regression (SLRM) and genotypic model on 726 and 990 crossbred animals for feed efficiency and carcass traits, respectively. Results Strong evidence of associations for RFI were located on chromosomes 8, 15, 16, 18, 19, 21, and 28. The strongest association with RFI (P = 0.0017) was found with a newly discovered SNP located on BTA 8 within the ELP3 gene. SNPs rs41820824 and rs41821600 on BTA 16 within the gene HMCN1 were strongly associated with RFI (P = 0.0064 and P = 0.0033, respectively). A SNP located on BTA 18 within the ZNF423 gene provided strong evidence for association with RFI (P = 0.0028). Genomic estimated breeding values (GEBV) from 98 significant SNPs were moderately correlated (0.47) to the estimated breeding values (EBVs) from a mixed animal model. The significant (P < 0.05) SNPs (98) explained 26% of the genetic variance for RFI. In silico functional analysis for the genes suggested 35 and 39 biological processes and pathways, respectively for feed efficiency traits. Conclusions This study identified several positional and functional candidate genes involved in important biological mechanisms associated with feed efficiency and performance. Significant SNPs should be validated in other populations to establish their potential utilization in genetic improvement programs. PMID:24476087

  10. Waste heat recovery systems in the sugar industry: An Indian perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madnaik, S.D.; Jadhav, M.G.

    1996-04-01

    This article identifies the key role of the sugar industry in the rural development of developing countries. The Indian sugar industry, already second largest among the country's processing industries, shows even greater potential, according to the Plan Documents (shown in a table). The potential of waste heat in sugar processing plants, which produce white crystal sugar using the double sulphitation clarification process, is estimated at 5757.9 KJ/kg of sugar. Efficient waste heat recovery (WHR) systems could help arrest the trend of increasing production costs. This would help the sugar industry not only in India, but in many other countries as well. The innovative methods suggested and discussed briefly in this article include dehydration of prepared cane, bagasse drying, and juice heating using waste heat. These methods can reduce the cost of energy in sugar production by at least 10% and improve efficiency and productivity.

  11. Experiment design for pilot identification in compensatory tracking tasks

    NASA Technical Reports Server (NTRS)

    Wells, W. R.

    1976-01-01

    A design criterion for input functions in laboratory tracking tasks resulting in efficient parameter estimation is formulated. The criterion is that the statistical correlations between pairs of parameters be reduced in order to minimize the problem of nonuniqueness in the extraction process. The effectiveness of the method is demonstrated for a lower order dynamic system.

  12. Eccentricity and fluting in young–growth western hemlock in Oregon.

    Treesearch

    Ryan Singleton; Dean S. DeBell; David D. Marshall; Barbara L. Gartner

    2004-01-01

    Stem irregularities can influence estimates of tree and stand attributes, efficiency of manufacturing processes, and quality of wood products. Eccentricity and fluting were characterized in young, managed western hemlock stands in the Oregon Coast Range. Sixty-one trees were selected from pure western hemlock stands across a range of age, site, and densities. The trees...

  13. [The experience with spectrophotometric method for determination of teeth colour changes in the process of tooth whitening by different systems].

    PubMed

    Porkhun, T V; Iakoviuk, I A

    2006-01-01

    The use of the VITA Easyshade spectrophotometer is described as an objective method for determining tooth colour, alongside the standard VITA shade scale. Results of an assessment of the efficiency and safety, and a comparative evaluation, of the domestic bleaching systems Aquafresh and LumaArch are given.

  14. Evaluation of a Zirconium Recycle Scrubber System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Barry B.; Bruffey, Stephanie H.

    2017-04-01

    A hot-cell demonstration of the zirconium recycle process is planned as part of the Materials Recovery and Waste Forms Development (MRWFD) campaign. The process treats Zircaloy® cladding recovered from used nuclear fuel with chlorine gas to recover the zirconium as volatile ZrCl4. This releases radioactive tritium trapped in the alloy, converting it to volatile tritium chloride (TCl). To meet regulatory requirements governing radioactive emissions from nuclear fuel treatment operations, the capture and retention of a portion of this TCl may be required prior to discharge of the off-gas stream to the environment. In addition to demonstrating tritium removal from a synthetic zirconium recycle off-gas stream, the recovery and quantification of tritium may refine estimates of the amount of tritium present in the Zircaloy cladding of used nuclear fuel. To support these objectives, a bubbler-type scrubber was fabricated to remove the TCl from the zirconium recycle off-gas stream. The scrubber was fabricated from glass and polymer components that are resistant to chlorine and hydrochloric acid solutions. Because of concerns that the scrubber efficiency is not quantitative, tests were performed using DCl as a stand-in to experimentally measure the scrubbing efficiency of this unit. Scrubbing efficiency was ~108% ± 3% with water as the scrubber solution. Variations were noted when 1 M NaOH scrub solution was used; values ranged from 64% to 130%. The reason for the variations is not known. It is recommended that the equipment be operated with water as the scrubbing solution. Scrubbing efficiency is estimated at 100%.

  15. Coherence in quantum estimation

    NASA Astrophysics Data System (ADS)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space in two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  16. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    NASA Astrophysics Data System (ADS)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
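
    For intuition, the following is a minimal consensus-ADMM sketch in Python under simplifying assumptions: a scalar Gaussian likelihood and a quadratic first-difference smoothing prior, so both proximal steps have closed forms. The paper carries out the "averaging" step with a Kalman smoother; the plain consensus average below stands in for that step, and all parameter values are illustrative only.

      import numpy as np

      # Consensus ADMM for a MAP split: minimize f(x) + g(x), where f is a Gaussian
      # negative log-likelihood and g is a quadratic smoothing prior (toy setting).
      rng = np.random.default_rng(0)
      T = 200
      truth = np.cumsum(rng.normal(0.0, 0.1, T))     # latent time series
      y = truth + rng.normal(0.0, 0.5, T)            # noisy observations
      sigma2, lam, rho = 0.25, 5.0, 1.0

      D = np.diff(np.eye(T), axis=0)                 # first-difference operator
      A_prior = lam * (D.T @ D) + rho * np.eye(T)    # matrix for the prior proximal step

      x_lik = np.zeros(T); u_lik = np.zeros(T)
      x_pri = np.zeros(T); u_pri = np.zeros(T)
      z = np.zeros(T)
      for _ in range(100):
          # likelihood block: argmin (1/(2*sigma2))||y - x||^2 + (rho/2)||x - (z - u)||^2
          x_lik = (y / sigma2 + rho * (z - u_lik)) / (1.0 / sigma2 + rho)
          # prior block: argmin (lam/2)||D x||^2 + (rho/2)||x - (z - u)||^2
          x_pri = np.linalg.solve(A_prior, rho * (z - u_pri))
          # consensus step; in the paper this "averaging" is done with a Kalman smoother
          z = 0.5 * (x_lik + u_lik + x_pri + u_pri)
          # dual updates
          u_lik += x_lik - z
          u_pri += x_pri - z
      # z now holds the MAP estimate for this toy model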

  17. Optical information-processing systems and architectures II; Proceedings of the Meeting, San Diego, CA, July 9-13, 1990

    NASA Astrophysics Data System (ADS)

    Javidi, Bahram

    The present conference discusses topics in the fields of neural networks, acoustooptic signal processing, pattern recognition, phase-only processing, nonlinear signal processing, image processing, optical computing, and optical information processing. Attention is given to the optical implementation of an inner-product neural associative memory, optoelectronic associative recall via motionless-head/parallel-readout optical disk, a compact real-time acoustooptic image correlator, a multidimensional synthetic estimation filter, and a light-efficient joint transform optical correlator. Also discussed are a high-resolution spatial light modulator, compact real-time interferometric Fourier-transform processors, a fast decorrelation algorithm for permutation arrays, the optical interconnection of optical modules, and carry-free optical binary adders.

  18. Multiple-exciton generation in lead selenide nanorod solar cells with external quantum efficiencies exceeding 120%

    PubMed Central

    Davis, Nathaniel J. L. K.; Böhm, Marcus L.; Tabachnyk, Maxim; Wisnivesky-Rocca-Rivarola, Florencia; Jellicoe, Tom C.; Ducati, Caterina; Ehrler, Bruno; Greenham, Neil C.

    2015-01-01

    Multiple-exciton generation—a process in which multiple charge-carrier pairs are generated from a single optical excitation—is a promising way to improve the photocurrent in photovoltaic devices and offers the potential to break the Shockley–Queisser limit. One-dimensional nanostructures, for example nanorods, have been shown spectroscopically to display increased multiple exciton generation efficiencies compared with their zero-dimensional analogues. Here we present solar cells fabricated from PbSe nanorods of three different bandgaps. All three devices showed external quantum efficiencies exceeding 100% and we report a maximum external quantum efficiency of 122% for cells consisting of the smallest bandgap nanorods. We estimate internal quantum efficiencies to exceed 150% at relatively low energies compared with other multiple exciton generation systems, and this demonstrates the potential for substantial improvements in device performance due to multiple exciton generation. PMID:26411283
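
    The internal quantum efficiency figure follows from standard bookkeeping rather than a device-specific formula; the sketch below shows that accounting in Python, with hypothetical numbers rather than the paper's measured absorptance.

      def internal_qe(eqe, absorptance):
          """Collected carrier pairs per absorbed photon: IQE = EQE / absorptance."""
          return eqe / absorptance

      # hypothetical values for illustration only
      print(internal_qe(eqe=1.22, absorptance=0.80))   # -> 1.525, i.e. an IQE above 150%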

  19. A Joint Gaussian Process Model for Active Visual Recognition with Expertise Estimation in Crowdsourcing

    PubMed Central

    Long, Chengjiang; Hua, Gang; Kapoor, Ashish

    2015-01-01

    We present a noise resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which, a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high quality labelers based on their estimated expertise to label the data. We apply the proposed model for four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from the Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show our extended model can not only preserve a higher accuracy, but also achieve a higher efficiency. PMID:26924892
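
    The active-selection step described here ranks unlabeled samples by predictive entropy. Below is a minimal sketch of that criterion, assuming class probabilities have already been produced by some classifier; the probability values are hypothetical.

      import numpy as np

      def predictive_entropy(probs):
          """Entropy of each row of an (n_samples, n_classes) probability matrix."""
          p = np.clip(probs, 1e-12, 1.0)
          return -(p * np.log(p)).sum(axis=1)

      def select_most_uncertain(probs, k=1):
          """Indices of the k samples with the most uncertain predictions."""
          return np.argsort(-predictive_entropy(probs))[:k]

      probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.70, 0.30]])  # hypothetical
      print(select_most_uncertain(probs, k=1))   # -> [1], the most ambiguous sample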

  20. An evaluation of multipass electrofishing for estimating the abundance of stream-dwelling salmonids

    Treesearch

    James T. Peterson; Russell F. Thurow; John W. Guzevich

    2004-01-01

    Failure to estimate capture efficiency, defined as the probability of capturing individual fish, can introduce a systematic error or bias into estimates of fish abundance. We evaluated the efficacy of multipass electrofishing removal methods for estimating fish abundance by comparing estimates of capture efficiency from multipass removal estimates to capture...
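
    As background on how capture efficiency enters abundance estimation, the sketch below implements the classic two-pass removal estimator rather than the authors' multipass evaluation; the catch numbers are hypothetical, and the formula is only valid when the second-pass catch is smaller than the first.

      def two_pass_removal(c1, c2):
          """Two-pass removal estimates: capture probability p = 1 - c2/c1 and
          abundance N = c1**2 / (c1 - c2), valid when c1 > c2."""
          if c1 <= c2:
              raise ValueError("removal estimator requires c1 > c2")
          return 1.0 - c2 / c1, c1 ** 2 / (c1 - c2)

      print(two_pass_removal(60, 20))   # hypothetical catches -> (0.667, 90.0)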

  1. Reference-material system for estimating health and environmental risks of selected material cycles and energy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowther, M.A.; Moskowitz, P.D.

    1981-07-01

    Sample analyses and detailed documentation are presented for a Reference Material System (RMS) to estimate health and environmental risks of different material cycles and energy systems. Data inputs described include: end-use material demands, efficiency coefficients, environmental emission coefficients, fuel demand coefficients, labor productivity estimates, and occupational health and safety coefficients. Application of this model permits analysts to estimate fuel use (e.g., Btu), occupational risk (e.g., fatalities), and environmental emissions (e.g., sulfur oxide) for specific material trajectories or complete energy systems. Model uncertainty is quantitatively defined by presenting a range of estimates for each data input. Systematic uncertainty not quantified relates to the boundaries chosen for analysis and reference system specification. Although the RMS can be used to analyze material system impacts for many different energy technologies, it was specifically used to examine the health and environmental risks of producing the following four types of photovoltaic devices: silicon n/p single-crystal cells produced by a Czochralski process; silicon metal/insulator/semiconductor (MIS) cells produced by a ribbon-growing process; cadmium sulfide/copper sulfide backwall cells produced by a spray deposition process; and gallium arsenide cells with 500X concentrator produced by a modified Czochralski process. Emission coefficients for particulates, sulfur dioxide and nitrogen dioxide; solid waste; total suspended solids in water; and, where applicable, air and solid waste residuals for arsenic, cadmium, gallium, and silicon are examined and presented. Where data are available the coefficients for particulates, sulfur oxides, and nitrogen oxides include both process and on-site fuel-burning emissions.

  2. Energy Conversion Alternatives Study (ECAS), General Electric Phase 1. Volume 3: Energy conversion subsystems and components. Part 3: Gasification, process fuels, and balance of plant

    NASA Technical Reports Server (NTRS)

    Boothe, W. A.; Corman, J. C.; Johnson, G. G.; Cassel, T. A. V.

    1976-01-01

    Results are presented of an investigation of gasification and clean fuels from coal. Factors discussed include: coal and coal transportation costs; clean liquid and gas fuel process efficiencies and costs; and cost, performance, and environmental intrusion elements of the integrated low-Btu coal gasification system. Cost estimates for the balance-of-plant requirements associated with advanced energy conversion systems utilizing coal or coal-derived fuels are included.

  3. Attitude/attitude-rate estimation from GPS differential phase measurements using integrated-rate parameters

    NASA Technical Reports Server (NTRS)

    Oshman, Yaakov; Markley, Landis

    1998-01-01

    A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
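
    The angular-acceleration model mentioned here, an exponentially autocorrelated (first-order Gauss-Markov) process, can be simulated in a few lines; the parameter values below are arbitrary illustrations rather than values from the paper.

      import numpy as np

      def gauss_markov(n, dt, tau, sigma, seed=0):
          """First-order Gauss-Markov process: x[k+1] = exp(-dt/tau)*x[k] + w[k],
          with Var(w) chosen so the steady-state variance is sigma**2."""
          rng = np.random.default_rng(seed)
          phi = np.exp(-dt / tau)
          q = sigma ** 2 * (1.0 - phi ** 2)
          x = np.zeros(n)
          for k in range(1, n):
              x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
          return x

      accel = gauss_markov(n=1000, dt=0.1, tau=5.0, sigma=0.01)  # hypothetical values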

  4. Online frequency estimation with applications to engine and generator sets

    NASA Astrophysics Data System (ADS)

    Manngård, Mikael; Böling, Jari M.

    2017-07-01

    Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper aims at presenting computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and the Goertzel filter is presented, and two filter banks, consisting of (i) sliding time window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results in both simulation studies and a case study using angular speed measurements of the crankshaft of a marine diesel engine-generator set.
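
    For reference, a minimal (non-sliding) Goertzel evaluation of a single DFT bin is sketched below; the paper's sliding-window and IIR band-pass filter banks build on this recursion, and the test signal here is purely illustrative.

      import numpy as np

      def goertzel_power(x, k):
          """Power in DFT bin k of the block x, equivalent to |X[k]|**2 but
          computed with one second-order recursion per sample."""
          n = len(x)
          coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
          s_prev, s_prev2 = 0.0, 0.0
          for sample in x:
              s = sample + coeff * s_prev - s_prev2
              s_prev2, s_prev = s_prev, s
          return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

      fs, n = 1000.0, 1000
      t = np.arange(n) / fs
      x = np.sin(2 * np.pi * 50.0 * t)      # 50 Hz test tone
      print(goertzel_power(x, k=50))        # bin 50 corresponds to 50 Hz here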

  5. Updated energy budgets for neural computation in the neocortex and cerebellum

    PubMed Central

    Howarth, Clare; Gleeson, Padraig; Attwell, David

    2012-01-01

    The brain's energy supply determines its information processing power, and generates functional imaging signals. The energy use on the different subcellular processes underlying neural information processing has been estimated previously for the grey matter of the cerebral and cerebellar cortex. However, these estimates need reevaluating following recent work demonstrating that action potentials in mammalian neurons are much more energy efficient than was previously thought. Using this new knowledge, this paper provides revised estimates for the energy expenditure on neural computation in a simple model for the cerebral cortex and a detailed model of the cerebellar cortex. In cerebral cortex, most signaling energy (50%) is used on postsynaptic glutamate receptors, 21% is used on action potentials, 20% on resting potentials, 5% on presynaptic transmitter release, and 4% on transmitter recycling. In the cerebellar cortex, excitatory neurons use 75% and inhibitory neurons 25% of the signaling energy, and most energy is used on information processing by non-principal neurons: Purkinje cells use only 15% of the signaling energy. The majority of cerebellar signaling energy use is on the maintenance of resting potentials (54%) and postsynaptic receptors (22%), while action potentials account for only 17% of the signaling energy use. PMID:22434069

  6. Spatio-temporal models of mental processes from fMRI.

    PubMed

    Janoos, Firdaus; Machiraju, Raghu; Singh, Shantanu; Morocz, Istvan Ákos

    2011-07-15

    Understanding the highly complex, spatially distributed and temporally organized phenomena entailed by mental processes using functional MRI is an important research problem in cognitive and clinical neuroscience. Conventional analysis methods focus on the spatial dimension of the data discarding the information about brain function contained in the temporal dimension. This paper presents a fully spatio-temporal multivariate analysis method using a state-space model (SSM) for brain function that yields not only spatial maps of activity but also its temporal structure along with spatially varying estimates of the hemodynamic response. Efficient algorithms for estimating the parameters along with quantitative validations are given. A novel low-dimensional feature-space for representing the data, based on a formal definition of functional similarity, is derived. Quantitative validation of the model and the estimation algorithms is provided with a simulation study. Using a real fMRI study for mental arithmetic, the ability of this neurophysiologically inspired model to represent the spatio-temporal information corresponding to mental processes is demonstrated. Moreover, by comparing the models across multiple subjects, natural patterns in mental processes organized according to different mental abilities are revealed. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. An estimate of the second law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS)

    NASA Technical Reports Server (NTRS)

    Chatterjee, Sharmista; Seagrave, Richard C.

    1993-01-01

    The objective of this paper is to present an estimate of the second law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for our analysis is obtained from a user interactive integrated model of an ECLSS. The model was developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long term mission should be the evaluation of existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of that work. In a previous study Brandhorst showed that the power consumption for partially closed and completely closed regenerable life support systems was estimated as 3.5 kW per individual and 10-12 kW per individual, respectively. With the increasing cost and scarcity of energy resources, our attention is drawn to evaluating the existing ECLSS technologies on the basis of their energy efficiency. In general the first law efficiency of a system is usually greater than 50 percent. From the literature, the second law efficiency is usually about 10 percent. The estimation of the second law efficiency of the system indicates the percentage of energy degraded as irreversibilities within the process. This estimate offers more room for improvement in the design of equipment. From another perspective, our objective is to keep the total entropy production of a life support system as low as possible and still ensure a positive entropy gradient between the system and the surroundings. The reason for doing so is that as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the system will gradually approach equilibrium with the surroundings until it reaches the point where the entropy gradient is zero. At this point no work can be extracted from the system. This is called the 'dead state' of the system.
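
    A minimal sketch of the bookkeeping behind a lost-work analysis follows, assuming the Gouy-Stodola relation and a work-consuming unit; the numbers are hypothetical and are not outputs of the ASPEN ECLSS model.

      def lost_work(t0_kelvin, entropy_generated):
          """Gouy-Stodola relation: work lost to irreversibility = T0 * S_gen."""
          return t0_kelvin * entropy_generated

      def second_law_efficiency(w_minimum, w_actual):
          """For a work-consuming unit: reversible (minimum) work over actual work."""
          return w_minimum / w_actual

      # hypothetical numbers for a single unit, illustration only
      print(lost_work(t0_kelvin=298.0, entropy_generated=0.5))
      print(second_law_efficiency(w_minimum=0.1, w_actual=1.0))   # -> 0.1, i.e. ~10%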

  8. Control of Complex Dynamic Systems by Neural Networks

    NASA Technical Reports Server (NTRS)

    Spall, James C.; Cristion, John A.

    1993-01-01

    This paper considers the use of neural networks (NN's) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance the efficiency over more standard stochastic approximation algorithms based on finite-difference gradient approximations.
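
    The simultaneous-perturbation gradient approximation at the core of this weight-estimation scheme needs only two loss evaluations per iteration, regardless of dimension. A small sketch on a toy noisy quadratic loss (standing in for the measured output error) is given below; the gain sequences and the loss itself are illustrative assumptions.

      import numpy as np

      def spsa_gradient(loss, theta, c, rng):
          """Two-evaluation estimate of the full gradient via a random +/-1 perturbation."""
          delta = rng.choice([-1.0, 1.0], size=theta.shape)
          return (loss(theta + c * delta) - loss(theta - c * delta)) / (2.0 * c * delta)

      rng = np.random.default_rng(1)
      loss = lambda th: np.sum((th - 3.0) ** 2) + rng.normal(0.0, 0.01)  # noisy toy loss
      theta = np.zeros(5)
      for k in range(1, 501):
          a_k, c_k = 0.1 / k ** 0.602, 0.1 / k ** 0.101   # standard SPSA gain decay
          theta -= a_k * spsa_gradient(loss, theta, c_k, rng)
      print(theta)   # approaches [3, 3, 3, 3, 3]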

  9. Semipermeability Evolution of Wakkanai Mudstones During Isotropic Compression

    NASA Astrophysics Data System (ADS)

    Takeda, M.; Manaka, M.

    2015-12-01

    Precise identification of major processes that influence groundwater flow system is of fundamental importance for the performance assessment of waste disposal in subsurface. In the characterization of groundwater flow system, gravity- and pressure-driven flows have been conventionally assumed as dominant processes. However, recent studies have suggested that argillites can act as semipermeable membranes and they can cause chemically driven flow, i.e., chemical osmosis, under salinity gradients, which may generate erratic pore pressures in argillaceous formations. In order to identify the possibility that chemical osmosis is involved in erratic pore pressure generations in argillaceous formations, it is essential to measure the semipermeability of formation media; however, in the measurements of semipermeability, little consideration has been given to the stresses that the formation media would have experienced in past geologic processes. This study investigates the influence of stress history on the semipermeability of an argillite by an experimental approach. A series of chemical osmosis experiments were performed on Wakkanai mudstones to measure the evolution of semipermeability during loading and unloading confining pressure cycles. The osmotic efficiency, which represents the semipermeability, was estimated at each confining pressure. The results show that the osmotic efficiency increases almost linearly with increasing confining pressure; however, the increased osmotic efficiency does not recover during unloading unless the confining pressure is almost relieved. The observed unrecoverable change in osmotic efficiency may have an important implication on the evaluation of chemical osmosis in argillaceous formations that have been exposed to large stresses in past geologic processes. If the osmotic efficiency increased by the past stress can remain unchanged to date, the osmotic efficiency should be measured at the past highest stress rather than the current in-situ stress. Otherwise, the effect of chemical osmosis on the pore pressure generation would be underestimated.

  10. ESARR: enhanced situational awareness via road sign recognition

    NASA Astrophysics Data System (ADS)

    Perlin, V. E.; Johnson, D. B.; Rohde, M. M.; Lupa, R. M.; Fiorani, G.; Mohammad, S.

    2010-04-01

    The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are detected and extracted from vehicle-mounted camera system, and preprocessed and read via a custom optical character recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene geometry estimation enables efficient and robust sign detection with low false alarm rates. Multi-level text processing coupled with GIS database validation enables effective interpretation even of extremely low resolution low contrast sign images. In this paper, ESARR development progress will be reported on, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based directional road-sign detection and interpretation system will be described along with the challenges and progress in overcoming them.

  11. Real-time image dehazing using local adaptive neighborhoods and dark-channel-prior

    NASA Astrophysics Data System (ADS)

    Valderrama, Jesus A.; Díaz-Ramírez, Víctor H.; Kober, Vitaly; Hernandez, Enrique

    2015-09-01

    A real-time algorithm for single image dehazing is presented. The algorithm is based on calculation of local neighborhoods of a hazed image inside a moving window. The local neighborhoods are constructed by computing rank-order statistics. Next the dark-channel-prior approach is applied to the local neighborhoods to estimate the transmission function of the scene. By using the suggested approach there is no need for applying a refining algorithm to the estimated transmission such as the soft matting algorithm. To achieve high-rate signal processing the proposed algorithm is implemented exploiting massive parallelism on a graphics processing unit (GPU). Computer simulation results are carried out to test the performance of the proposed algorithm in terms of dehazing efficiency and speed of processing. These tests are performed using several synthetic and real images. The obtained results are analyzed and compared with those obtained with existing dehazing algorithms.
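
    For orientation, the sketch below implements the standard dark-channel-prior transmission estimate with an ordinary minimum filter; it is not the rank-order local-neighborhood variant or the GPU implementation described here, and the window size and constants are commonly used defaults assumed for illustration.

      import numpy as np
      from scipy.ndimage import minimum_filter

      def dark_channel(img, patch=15):
          """Per-pixel minimum over the color channels and a local window."""
          return minimum_filter(img.min(axis=2), size=patch)

      def dehaze(img, patch=15, omega=0.95, t0=0.1):
          """Dark-channel-prior dehazing for an HxWx3 float image in [0, 1] (assumes A > 0)."""
          dc = dark_channel(img, patch)
          idx = np.argsort(dc.ravel())[-max(1, dc.size // 1000):]   # brightest 0.1%
          A = img.reshape(-1, 3)[idx].max(axis=0)                   # atmospheric light
          t = 1.0 - omega * dark_channel(img / A, patch)            # transmission estimate
          t = np.clip(t, t0, 1.0)[..., None]
          return (img - A) / t + A                                  # recovered radiance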

  12. Logistic regression for dichotomized counts.

    PubMed

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
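
    The link the two-part model exploits can be seen directly: under a Poisson count model with mean mu, the dichotomized outcome is positive with probability 1 - exp(-mu). The sketch below simply evaluates that mapping for hypothetical mean counts; it is not the shared-parameter hurdle estimator itself.

      import numpy as np

      def prob_positive_poisson(mu):
          """P(Y > 0) = 1 - exp(-mu) for a Poisson(mu) count."""
          return 1.0 - np.exp(-mu)

      def logit(p):
          return np.log(p / (1.0 - p))

      mu = np.array([0.2, 0.5, 1.0, 2.0])          # hypothetical mean counts
      print(prob_positive_poisson(mu))              # probability of a positive count
      print(logit(prob_positive_poisson(mu)))       # scale on which logistic effects act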

  13. Efficiency assessment of using satellite data for crop area estimation in Ukraine

    NASA Astrophysics Data System (ADS)

    Gallego, Francisco Javier; Kussul, Nataliia; Skakun, Sergii; Kravchenko, Oleksii; Shelestov, Andrii; Kussul, Olga

    2014-06-01

    The knowledge of the crop area is a key element for the estimation of the total crop production of a country and, therefore, the management of agricultural commodities markets. Satellite data and derived products can be effectively used for stratification purposes and a-posteriori correction of area estimates from ground observations. This paper presents the main results and conclusions of the study conducted in 2010 to explore the feasibility and efficiency of crop area estimation in Ukraine assisted by optical satellite remote sensing images. The study was carried out on three oblasts in Ukraine with a total area of 78,500 km2. The efficiency of using images acquired by several satellite sensors (MODIS, Landsat-5/TM, AWiFS, LISS-III, and RapidEye) combined with a field survey on a stratified sample of square segments for crop area estimation in Ukraine is assessed. The main criteria used for the efficiency analysis are as follows: (i) relative efficiency, which shows by what factor the error of area estimates can be reduced with satellite images, and (ii) cost-efficiency, which shows by what factor the costs of ground surveys for crop area estimation can be reduced with satellite images. These criteria are applied to each satellite image type separately, i.e., no integration of images acquired by different sensors is made, to select the optimal dataset. The study found that only MODIS and Landsat-5/TM reached cost-efficiency thresholds, while AWiFS, LISS-III, and RapidEye images, due to their high price, were not cost-efficient for crop area estimation in Ukraine at the oblast level.
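
    The two criteria reduce to simple ratios, sketched below with hypothetical numbers rather than values from the study: relative efficiency compares variances with and without the satellite-assisted estimator, and cost-efficiency compares survey costs at equal precision.

      def relative_efficiency(var_ground_only, var_with_satellite):
          """Factor by which the variance of the area estimate shrinks with imagery."""
          return var_ground_only / var_with_satellite

      def cost_efficiency(cost_ground_only, cost_with_satellite_same_precision):
          """Factor by which total cost drops for the same precision, imagery included."""
          return cost_ground_only / cost_with_satellite_same_precision

      print(relative_efficiency(4.0, 2.5))      # hypothetical -> 1.6
      print(cost_efficiency(100000, 80000))     # hypothetical -> 1.25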

  14. Nitrogen expander cycles for large capacity liquefaction of natural gas

    NASA Astrophysics Data System (ADS)

    Chang, Ho-Myung; Park, Jae Hoon; Gwak, Kyung Hyun; Choe, Kun Hyung

    2014-01-01

    Thermodynamic study is performed on nitrogen expander cycles for large capacity liquefaction of natural gas. In order to substantially increase the capacity, a Brayton refrigeration cycle with nitrogen expander was recently added to the cold end of the reputable propane pre-cooled mixed-refrigerant (C3-MR) process. Similar modifications with a nitrogen expander cycle are extensively investigated on a variety of cycle configurations. The existing and modified cycles are simulated with commercial process software (Aspen HYSYS) based on selected specifications. The results are compared in terms of thermodynamic efficiency, liquefaction capacity, and estimated size of heat exchangers. The combination of C3-MR with partial regeneration and pre-cooling of nitrogen expander cycle is recommended to have a great potential for high efficiency and large capacity.

  15. Enhanced diffusion on oscillating surfaces through synchronization

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Cao, Wei; Ma, Ming; Zheng, Quanshui

    2018-02-01

    The diffusion of molecules and clusters under nanoscale confinement or absorbed on surfaces is the key controlling factor in dynamical processes such as transport, chemical reaction, or filtration. Enhancing diffusion could benefit these processes by increasing their transport efficiency. Using a nonlinear Langevin equation with an extensive number of simulations, we find a large enhancement in diffusion through surface oscillation. For helium confined in a narrow carbon nanotube, the diffusion enhancement is estimated to be over three orders of magnitude. A synchronization mechanism between the kinetics of the particles and the oscillating surface is revealed. Interestingly, a highly nonlinear negative correlation between diffusion coefficient and temperature is predicted based on this mechanism, and further validated by simulations. Our results provide a general and efficient method for enhancing diffusion, especially at low temperatures.

  16. Pursuing Excellence: The Power of Selection Science to Provide Meaningful Data and Enhance Efficiency in Selecting Surgical Trainees.

    PubMed

    Gardner, Aimee K; Dunkin, Brian J

    2018-05-01

    As current screening methods for selecting surgical trainees are receiving increasing scrutiny, development of a more efficient and effective selection system is needed. We describe the process of creating an evidence-based selection system and examine its impact on screening efficiency, faculty perceptions, and improving representation of underrepresented minorities. The program partnered with an expert in organizational science to identify fellowship position requirements and associated competencies. Situational judgment tests, personality profiles, structured interviews, and technical skills assessments were used to measure these competencies. The situational judgment test and personality profiles were administered online and used to identify candidates to invite for on-site structured interviews and skills testing. A final rank list was created based on all data points and their respective importance. All faculty completed follow-up surveys regarding their perceptions of the process. Candidate demographic and experience data were pulled from the application website. Fifty-five of 72 applicants met eligibility requirements and were invited to take the online assessment, with 50 (91%) completing it. Average time to complete was 42 ± 12 minutes. Eighteen applicants (35%) were invited for on-site structured interviews and skills testing, a greater than 50% reduction in number of invites compared to prior years. Time estimates reveal that the process will result in a time savings of 68% for future iterations, compared to traditional methodologies. Fellowship faculty (N = 5) agreed on the value and efficiency of the process. Underrepresented minority candidates increased from an initial 70% to 92% being invited for an interview and ranked using the new screening tools. Applying selection science to the process of choosing surgical trainees is feasible, efficient, and well-received by faculty for making selection decisions.

  17. Dynamic Statistical Characterization of Variation in Source Processes of Microseismic Events

    NASA Astrophysics Data System (ADS)

    Smith-Boughner, L.; Viegas, G. F.; Urbancic, T.; Baig, A. M.

    2015-12-01

    During a hydraulic fracture, water is pumped at high pressure into a formation. A proppant, typically sand is later injected in the hope that it will make its way into a fracture, keep it open and provide a path for the hydrocarbon to enter the well. This injection can create micro-earthquakes, generated by deformation within the reservoir during treatment. When these injections are monitored, thousands of microseismic events are recorded within several hundred cubic meters. For each well-located event, many source parameters are estimated e.g. stress drop, Savage-Wood efficiency and apparent stress. However, because we are evaluating outputs from a power-law process, the extent to which the failure is impacted by fluid injection or stress triggering is not immediately clear. To better detect differences in source processes, we use a set of dynamic statistical parameters which characterize various force balance assumptions using the average distance to the nearest event, event rate, volume enclosed by the events, cumulative moment and energy from a group of events. One parameter, the Fracability index, approximates the ratio of viscous to elastic forcing and highlights differences in the response time of a rock to changes in stress. These dynamic parameters are applied to a database of more than 90 000 events in a shale-gas play in the Horn River Basin to characterize spatial-temporal variations in the source processes. In order to resolve these differences, a moving window, nearest neighbour approach was used. First, the center of mass of the local distribution was estimated for several source parameters. Then, a set of dynamic parameters, which characterize the response of the rock were estimated. These techniques reveal changes in seismic efficiency and apparent stress and often coincide with marked changes in the Fracability index and other dynamic statistical parameters. Utilizing these approaches allowed for the characterization of fluid injection related processes.

  18. Automatic vision-based grain optimization and analysis of multi-crystalline solar wafers using hierarchical region growing

    NASA Astrophysics Data System (ADS)

    Fan, Shu-Kai S.; Tsai, Du-Ming; Chuang, Wei-Che

    2017-04-01

    Solar power has become an attractive alternative source of energy. The multi-crystalline solar cell has been widely accepted in the market because it has a relatively low manufacturing cost. Multi-crystalline solar wafers with larger grain sizes and fewer grain boundaries are higher quality and convert energy more efficiently than mono-crystalline solar cells. In this article, a new image processing method is proposed for assessing the wafer quality. An adaptive segmentation algorithm based on region growing is developed to separate the closed regions of individual grains. Using the proposed method, the shape and size of each grain in the wafer image can be precisely evaluated. Two measures of average grain size are taken from the literature and modified to estimate the average grain size. The resulting average grain size estimate dictates the quality of the crystalline solar wafers and can be considered a viable quantitative indicator of conversion efficiency.
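
    As a simplified illustration of the segmentation idea, the sketch below grows a single grain region from a seed pixel using a basic intensity-tolerance rule; it is not the adaptive hierarchical region-growing algorithm of the paper, and the tolerance and 4-connectivity are assumptions.

      import numpy as np
      from collections import deque

      def region_grow(img, seed, tol=10.0):
          """Grow a region from `seed` by adding 4-connected neighbors whose intensity
          stays within `tol` of the running region mean."""
          h, w = img.shape
          mask = np.zeros((h, w), dtype=bool)
          mask[seed] = True
          total, count = float(img[seed]), 1
          queue = deque([seed])
          while queue:
              r, c = queue.popleft()
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                      if abs(float(img[rr, cc]) - total / count) <= tol:
                          mask[rr, cc] = True
                          total += float(img[rr, cc])
                          count += 1
                          queue.append((rr, cc))
          return mask   # grain size in pixels is mask.sum()

      # usage (hypothetical): mask = region_grow(gray_wafer_image, seed=(120, 200), tol=8.0)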

  19. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.

  20. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.

  1. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    In recent years, eco-efficiency, which considers the effect of the production process on the environment in determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, typically producing two types of outputs: economically desirable and environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable because the effects of uncertain data are not considered. In this case, the interval data approach has been found suitable for accounting for data uncertainty, as it is much simpler to model and needs less information regarding the underlying data distribution and membership function. The proposed model is an enhanced DEA model based on the DDF approach that incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs, and desirable outputs. Two separate slack-based interval DEA models were constructed for optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah. The results were then compared with those obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It is also found that the average efficiency value of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. The results are consistent with the hypothesis, since farmers in the optimistic scenario operate in the best production situation and those in the pessimistic scenario in the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.

  2. Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolf, Elizabeth Skubak, E-mail: ewolf@saintmarys.edu; Anderson, David F., E-mail: anderson@math.wisc.edu

    2015-01-21

    Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.

  3. Analytical solutions for efficient interpretation of single-well push-pull tracer tests

    NASA Astrophysics Data System (ADS)

    Huang, Junqi; Christ, John A.; Goltz, Mark N.

    2010-08-01

    Single-well push-pull tracer tests have been used to characterize the extent, fate, and transport of subsurface contamination. Analytical solutions provide one alternative for interpreting test results. In this work, an exact analytical solution to two-dimensional equations describing the governing processes acting on a dissolved compound during a modified push-pull test (advection, longitudinal and transverse dispersion, first-order decay, and rate-limited sorption/partitioning in steady, divergent, and convergent flow fields) is developed. The coupling of this solution with inverse modeling to estimate aquifer parameters provides an efficient methodology for subsurface characterization. Synthetic data for single-well push-pull tests are employed to demonstrate the utility of the solution for determining (1) estimates of aquifer longitudinal and transverse dispersivities, (2) sorption distribution coefficients and rate constants, and (3) non-aqueous phase liquid (NAPL) saturations. Employment of the solution to estimate NAPL saturations based on partitioning and non-partitioning tracers is designed to overcome limitations of previous efforts by including rate-limited mass transfer. This solution provides a new tool for use by practitioners when interpreting single-well push-pull test results.

  4. Lead Telluride Quantum Dot Solar Cells Displaying External Quantum Efficiencies Exceeding 120%

    PubMed Central

    2015-01-01

    Multiple exciton generation (MEG) in semiconducting quantum dots is a process that produces multiple charge-carrier pairs from a single excitation. MEG is a possible route to bypass the Shockley-Queisser limit in single-junction solar cells but it remains challenging to harvest charge-carrier pairs generated by MEG in working photovoltaic devices. Initial yields of additional carrier pairs may be reduced due to ultrafast intraband relaxation processes that compete with MEG at early times. Quantum dots of materials that display reduced carrier cooling rates (e.g., PbTe) are therefore promising candidates to increase the impact of MEG in photovoltaic devices. Here we demonstrate PbTe quantum dot-based solar cells, which produce extractable charge carrier pairs with an external quantum efficiency above 120%, and we estimate an internal quantum efficiency exceeding 150%. Resolving the charge carrier kinetics on the ultrafast time scale with pump–probe transient absorption and pump–push–photocurrent measurements, we identify a delayed cooling effect above the threshold energy for MEG. PMID:26488847

  5. Probabilistic cost-benefit analysis of disaster risk management in a development context.

    PubMed

    Kull, Daniel; Mechler, Reinhard; Hochrainer-Stigler, Stefan

    2013-07-01

    Limited studies have shown that disaster risk management (DRM) can be cost-efficient in a development context. Cost-benefit analysis (CBA) is an evaluation tool to analyse economic efficiency. This research introduces quantitative, stochastic CBA frameworks and applies them in case studies of flood and drought risk reduction in India and Pakistan, while also incorporating projected climate change impacts. DRM interventions are shown to be economically efficient, with integrated approaches more cost-effective and robust than singular interventions. The paper highlights that CBA can be a useful tool if certain issues are considered properly, including: complexities in estimating risk; data dependency of results; negative effects of interventions; and distributional aspects. The design and process of CBA must take into account specific objectives, available information, resources, and the perceptions and needs of stakeholders as transparently as possible. Intervention design and uncertainties should be qualified through dialogue, indicating that process is as important as numerical results. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  6. A new framework to increase the efficiency of large-scale solar power plants.

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Kleissl, Jan P.

    2015-11-01

    A new framework to estimate the spatio-temporal behavior of solar power is introduced, which predicts the statistical behavior of power output at utility-scale photovoltaic (PV) power plants. The framework is based on spatio-temporal Gaussian process regression (kriging) models, which incorporate satellite data with the UCSD version of the Weather Research and Forecasting model. The framework is designed to improve the efficiency of large-scale solar power plants. The results are validated against measurements from local pyranometer sensors, and improvements are observed in different scenarios.
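
    A one-dimensional Gaussian-process (kriging) regression sketch is given below as a stand-in for the spatio-temporal framework described here; the RBF kernel, hyperparameters, and toy data are assumptions, and the coupling to satellite data and the forecast model is not represented.

      import numpy as np

      def rbf(a, b, length=1.0, var=1.0):
          d = a[:, None] - b[None, :]
          return var * np.exp(-0.5 * (d / length) ** 2)

      def gp_predict(x_train, y_train, x_test, noise=1e-2, length=1.0, var=1.0):
          """Gaussian-process regression: posterior mean and variance at x_test."""
          K = rbf(x_train, x_train, length, var) + noise * np.eye(len(x_train))
          Ks = rbf(x_test, x_train, length, var)
          alpha = np.linalg.solve(K, y_train)
          mean = Ks @ alpha
          cov = rbf(x_test, x_test, length, var) - Ks @ np.linalg.solve(K, Ks.T)
          return mean, np.diag(cov)

      x = np.linspace(0.0, 10.0, 25)                       # toy "sensor" locations
      y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)
      mu, var = gp_predict(x, y, np.linspace(0.0, 10.0, 100))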

  7. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
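
    The contracting-grid idea itself is compact: evaluate the likelihood on a coarse grid, recenter on the best point, shrink the grid, and repeat. A 2-D sketch with a toy objective follows; the grid size, shrink factor, and objective are assumptions, and the hardware-parallel pipelines are not represented.

      import numpy as np

      def contracting_grid_search(objective, center, span, n=8, iters=6, shrink=0.5):
          """Maximize `objective` over a 2-D region by repeatedly evaluating an n x n
          grid, recentering on the best point, and shrinking the grid."""
          center = np.asarray(center, dtype=float)
          span = np.asarray(span, dtype=float)
          for _ in range(iters):
              xs = np.linspace(center[0] - span[0] / 2, center[0] + span[0] / 2, n)
              ys = np.linspace(center[1] - span[1] / 2, center[1] + span[1] / 2, n)
              vals = np.array([[objective(x, y) for y in ys] for x in xs])
              i, j = np.unravel_index(np.argmax(vals), vals.shape)
              center = np.array([xs[i], ys[j]])
              span *= shrink
          return center

      # toy log-likelihood peaked at (1.2, -0.7); real use would score detector signals
      ll = lambda x, y: -((x - 1.2) ** 2 + (y + 0.7) ** 2)
      print(contracting_grid_search(ll, center=(0.0, 0.0), span=(4.0, 4.0)))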

  8. Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei

    Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation, and Ice Sensor (LVIS). Compared to existing ground peak identification algorithms, FICA was tested in different land cover type plots and showed improved accuracy in ground detections of the vegetation plots and similar accuracy in developed area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore, exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by investigating the residual signal, which was generated by subtracting a fitted Gaussian function from the raw waveform. After the subtraction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining waveforms where the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition. They incorporated both waveform intensity represented by the area covered by a Gaussian function and its associated height, which was the centroid of the Gaussian function. By considering signal reflection of different vegetation layers, the developed metrics obtained better estimation accuracy in aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated the various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.
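
    As a generic illustration of the Gaussian-decomposition step that these metrics build on, the sketch below fits two Gaussians to a synthetic waveform and takes the later return as the ground peak; it is not the FICA or PCF algorithm, and the waveform and initial guesses are synthetic and purely illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def two_gaussians(t, a1, m1, s1, a2, m2, s2):
          """Sum of two Gaussian returns (e.g., canopy and ground)."""
          return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2) +
                  a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

      t = np.linspace(0.0, 100.0, 400)
      wave = (two_gaussians(t, 0.8, 40.0, 6.0, 0.5, 70.0, 3.0)
              + 0.01 * np.random.default_rng(0).normal(size=t.size))   # synthetic waveform

      params, _ = curve_fit(two_gaussians, t, wave, p0=(1, 35, 5, 1, 75, 5))
      ground_peak_time = max(params[1], params[4])   # later (lower) return taken as ground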

  9. Cadmium Recycling in the United States in 2000

    USGS Publications Warehouse

    Plachy, Jozef

    2003-01-01

    Recycling of cadmium is a young and growing industry that has been influenced by environmental concerns and regulatory constraints. Domestic recycling of cadmium began in 1989 as a byproduct of processing of spent nickel-cadmium batteries. In 1995, International Metals Reclamation Co. Inc. expanded its operations by building a dedicated cadmium recycling plant. In 2000, an estimated 13 percent of cadmium consumption in the United States was sourced from recycled cadmium, which is derived mainly from old scrap or, to lesser degree, new scrap. The easiest forms of old scrap to recycle are small spent nickel-cadmium batteries followed by flue dust generated during recycling of galvanized steel and small amounts of alloys that contain cadmium. Most of new scrap is generated during manufacturing processes, such as nickel-cadmium battery production. All other uses of cadmium are in low concentrations and, therefore, difficult to recycle. Consequently, much of this cadmium is dissipated and lost. The amount of cadmium in scrap that was unrecovered in 2000 was estimated to be 2,030 t, and an estimated 285 t was recovered. Recycling efficiency was estimated to be about 15 percent.

  10. Cadmium recycling in the United States in 2000

    USGS Publications Warehouse

    Plachy, Jozef

    2003-01-01

    Recycling of cadmium is a young and growing industry that has been influenced by environmental concerns and regulatory constraints. Domestic recycling of cadmium began in 1989 as a byproduct of processing of spent nickel-cadmium batteries. In 1995, International Metals Reclamation Co. Inc. expanded its operations by building a dedicated cadmium recycling plant. In 2000, an estimated 13 percent of cadmium consumption in the United States was sourced from recycled cadmium, which is derived mainly from old scrap or, to lesser degree, new scrap. The easiest forms of old scrap to recycle are small spent nickel-cadmium batteries followed by flue dust generated during recycling of galvanized steel and small amounts of alloys that contain cadmium. Most of new scrap is generated during manufacturing processes, such as nickel-cadmium battery production. All other uses of cadmium are in low concentrations and, therefore, difficult to recycle. Consequently, much of this cadmium is dissipated and lost. The amount of cadmium in scrap that was unrecovered in 2000 was estimated to be 2,030 metric tons, and an estimated 285 tons was recovered. Recycling efficiency was estimated to be about 15 percent.

  11. A new device to estimate abundance of moist-soil plant seeds

    USGS Publications Warehouse

    Penny, E.J.; Kaminski, R.M.; Reinecke, K.J.

    2006-01-01

    Methods to sample the abundance of moist-soil seeds efficiently and accurately are critical for evaluating management practices and determining food availability. We adapted a portable, gasoline-powered vacuum to estimate abundance of seeds on the surface of a moist-soil wetland in east-central Mississippi and evaluated the sampler by simulating conditions that researchers and managers may experience when sampling moist-soil areas for seeds. We measured the percent recovery of known masses of seeds by the vacuum sampler in relation to 4 experimentally controlled factors (i.e., seed-size class, sample mass, soil moisture class, and vacuum time) with 2-4 levels per factor. We also measured processing time of samples in the laboratory. Across all experimental factors, seed recovery averaged 88.4% and varied little (CV = 0.68%, n = 474). Overall, mean time to process a sample was 30.3 ± 2.5 min (SE, n = 417). Our estimate of seed recovery rate (88%) may be used to adjust estimates for incomplete seed recovery, or project-specific correction factors may be developed by investigators. Our device was effective for estimating surface abundance of moist-soil plant seeds after dehiscence and before habitats were flooded.

  12. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com

    The agricultural production process typically produces two types of outputs: economically desirable outputs and environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found to be the most suitable for accounting for data uncertainty, as it is much simpler to model and needs less information regarding the underlying distribution and membership function. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.

  13. Analysis of power sector efficiency improvements for an integrated utility planning process in Costa Rica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waddle, D.B.; MacDonald, J.M.

    1990-01-01

    In an effort to analyze and document the potential for power sector efficiency improvements from generation to end-use, the Agency for International Development and the Government of Costa Rica are jointly conducting an integrated power sector efficiency analysis. Potential for energy and cost savings in power plants, transmission and distribution, and demand-side management programs are being evaluated. The product of this study will be an integrated investment plan for the Instituto Costarricense de Electricidad, incorporating both supply and demand side investment options. This paper presents the methodology employed in the study, as well as preliminary estimates of the results of the study. 14 refs., 4 figs., 5 tabs.

  14. Learning Time-Varying Coverage Functions

    PubMed Central

    Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le

    2015-01-01

    Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data. PMID:25960624

  15. Learning Time-Varying Coverage Functions.

    PubMed

    Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le

    2014-12-08

    Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data.

  16. Integrating Efficiency of Industry Processes and Practices Alongside Technology Effectiveness in Space Transportation Cost Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Zapata, Edgar

    2012-01-01

    This paper presents past and current work in dealing with indirect industry and NASA costs when providing cost estimation or analysis for NASA projects and programs. Indirect costs, defined here as those project costs removed from the actual hands-on hardware or software labor, make up most of the costs of today's complex, large-scale NASA/industry projects. This appears to be the case across phases, from research to development to production and on to operation of the system. Space transportation is the case of interest here. Modeling and cost estimation as a process, rather than a product, will be emphasized. Analysis as a series of belief systems in play among decision makers and decision factors will also be emphasized to provide context.

  17. Self-Centered Management Skills and Knowledge Appropriation by Students in High Schools and Private Secondary Schools of the City of Maroua

    ERIC Educational Resources Information Center

    Oyono, Tadjuidje Michel

    2016-01-01

    The appropriation of knowledge requires the learner to mobilize an efficient management strategy of adapted competencies. In framing its problem, the present article presents the theoretical perspective of Desaunay (1985), which holds that three fundamental competencies (relational, technical and affective) have…

  18. Efficient and Robust Signal Approximations

    DTIC Science & Technology

    2009-05-01

    Remark: Permutation matrices are both orthogonal and doubly-stochastic [62]. We will now show how to further simplify the Robust Coding … Keywords: signal processing, image compression, independent component analysis, sparse …

  19. Pair production of helicity-flipped neutrinos in supernovae

    NASA Technical Reports Server (NTRS)

    Perez, Armando; Gandhi, Raj

    1989-01-01

    The emissivity was calculated for the pair production of helicity-flipped neutrinos, in a way that can be used in supernova calculations. Also presented are simple estimates which show that such a process can act as an efficient energy-loss mechanism in the shocked supernova core, and this fact is used to extract neutrino mass limits from the SN 1987A neutrino observations.

  20. Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds

    NASA Astrophysics Data System (ADS)

    Wesemann, Stefan; Marzetta, Thomas L.

    2017-12-01

    For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm, and illustrate its robustness against strong noise by means of simulations.
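
    The rank-1 SVD via power iteration mentioned in the abstract can be sketched in a few lines. The code below is a generic power iteration on a hypothetical noisy rank-1 matrix, not the authors' channel estimator; the matrix sizes and noise level are assumptions.

        import numpy as np

        def rank1_svd(Y, iters=50):
            """Dominant left/right singular vectors and singular value of Y by power iteration."""
            rng = np.random.default_rng(0)
            v = rng.standard_normal(Y.shape[1]) + 1j * rng.standard_normal(Y.shape[1])
            v /= np.linalg.norm(v)
            for _ in range(iters):
                u = Y @ v
                u /= np.linalg.norm(u)
                v = Y.conj().T @ u
                sigma = np.linalg.norm(v)
                v /= sigma
            return u, sigma, v

        # Noisy rank-1 test matrix (hypothetical 64 antennas x 32 pilot symbols).
        rng = np.random.default_rng(1)
        a = rng.standard_normal(64) + 1j * rng.standard_normal(64)
        b = rng.standard_normal(32) + 1j * rng.standard_normal(32)
        Y = np.outer(a, b) + 0.1 * (rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32)))
        u, s, v = rank1_svd(Y)
        print(abs(np.vdot(u, a)) / (np.linalg.norm(u) * np.linalg.norm(a)))  # close to 1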

  1. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation are the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
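
    As a minimal illustration of the interpolation step that sub-pixel block matching requires, the sketch below performs bilinear sampling of an image at non-integer coordinates in plain NumPy; the GPU-based implementation discussed in the paper is not reproduced.

        import numpy as np

        def bilinear(img, y, x):
            """Sample img at real-valued coordinates (y, x) with bilinear weights."""
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
            wy, wx = y - y0, x - x0
            return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
                    + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

        img = np.arange(16, dtype=float).reshape(4, 4)
        print(bilinear(img, 1.5, 2.25))   # value interpolated between the four neighbouring pixels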

  2. Common and Innovative Visuals: A sparsity modeling framework for video.

    PubMed

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

    Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework by CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.

  3. A Fuzzy analytical hierarchy process approach in irrigation networks maintenance

    NASA Astrophysics Data System (ADS)

    Riza Permana, Angga; Rintis Hadiani, Rr.; Syafi'i

    2017-11-01

    Ponorogo Regency has 440 irrigation areas with a total area of 17,950 ha. A limited budget and lack of maintenance have caused a decline in irrigation function. The aim of this study is to build an appropriate system for determining the weighted indices of the prioritization criteria for irrigation network maintenance using a fuzzy-based methodology. The criteria used are the physical condition of the irrigation networks, service area, estimated maintenance cost, and efficiency of irrigation water distribution. Twenty-six experts in the field of water resources at the Dinas Pekerjaan Umum were asked to fill out the questionnaire, and the results were used as a benchmark to determine the ranking of irrigation network maintenance priorities. The results demonstrate that the physical condition of irrigation networks criterion (W1) = 0.279 has the greatest impact on the assessment process. The service area (W2) = 0.270, efficiency of irrigation water distribution (W4) = 0.249, and estimated maintenance cost (W3) = 0.202 criteria rank next, respectively. The proposed methodology handles uncertain and vague data using triangular fuzzy numbers and, moreover, provides a comprehensive decision-making technique to assess maintenance priority on the irrigation network.
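
    A small sketch shows how the reported criterion weights could be applied to rank candidate schemes once each scheme has been scored on every criterion; the fuzzy pairwise-comparison stage that produces the weights is not reproduced, and the scheme names and scores below are hypothetical.

        # Criterion weights as reported in the abstract.
        weights = {"physical_condition": 0.279, "service_area": 0.270,
                   "water_distribution_efficiency": 0.249, "maintenance_cost": 0.202}

        schemes = {  # normalized 0-1 scores per criterion (hypothetical)
            "Scheme A": {"physical_condition": 0.4, "service_area": 0.9,
                         "water_distribution_efficiency": 0.6, "maintenance_cost": 0.7},
            "Scheme B": {"physical_condition": 0.8, "service_area": 0.5,
                         "water_distribution_efficiency": 0.7, "maintenance_cost": 0.4},
        }

        # Weighted sum gives a maintenance-priority score per scheme.
        priority = {name: sum(weights[c] * s[c] for c in weights) for name, s in schemes.items()}
        for name, p in sorted(priority.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {p:.3f}")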

  4. Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis

    NASA Astrophysics Data System (ADS)

    Che, E.; Olsen, M. J.

    2017-09-01

    Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.

  5. Synthesizing Equivalence Indices for the Comparative Evaluation of Technoeconomic Efficiency of Industrial Processes at the Design/Re-engineering Level

    NASA Astrophysics Data System (ADS)

    Fotilas, P.; Batzias, A. F.

    2007-12-01

    The equivalence indices synthesized for the comparative evaluation of technoeconomic efficiency of industrial processes are of critical importance since they serve as both (i) positive/analytic descriptors of the physicochemical nature of the process and (ii) measures of effectiveness, especially helpful for investigating competitiveness in the industrial/energy/environmental sector of the economy. In the present work, a new algorithmic procedure has been developed, which initially standardizes a real industrial process, then analyzes it as a compromise of two ideal processes, and finally synthesizes the index that can represent/reconstruct the real process as a result of the trade-off between the two ideal processes taken as parental prototypes. The same procedure performs fuzzy multicriteria ranking within a set of pre-selected industrial processes for two reasons: (a) to analyze the process most representative of the production/treatment under consideration, (b) to use the `second best' alternative as a dialectic pole in the absence of the two ideal processes mentioned above. An implementation of this procedure is presented, concerning a biological wastewater treatment facility with six alternatives: activated sludge through (i) continuous-flow incompletely-stirred tank reactors in series, (ii) a plug flow reactor with dispersion, (iii) an oxidation ditch, and biological processing through (iv) a trickling filter, (v) rotating contactors, (vi) shallow ponds. The criteria used for fuzzy (to account for uncertainty) ranking are capital cost, operating cost, environmental friendliness, reliability, flexibility, extendibility. Two complementary indices were synthesized for the (ii)-alternative ranked first and their quantitative expressions were derived, covering a variety of kinetic models as well as recycle/bypass conditions. Finally, an analysis of the estimation of the optimal values of these indices at maximum technoeconomic efficiency is presented, and the implications expected to be caused by exogenous and endogenous factors (e.g., changes in environmental standards and innovative energy savings/substitution, respectively) are discussed by means of marginal efficiency graphs.

  6. Global patterns and climate drivers of water-use efficiency in terrestrial ecosystems deduced from satellite-based datasets and carbon cycle models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yan; Piao, Shilong; Huang, Mengtian

    Our aim is to investigate how ecosystem water-use efficiency (WUE) varies spatially under different climate conditions, and how spatial variations in WUE differ from those of transpiration-based water-use efficiency (WUE t) and transpiration-based inherent water-use efficiency (IWUE t). Location: Global terrestrial ecosystems. We investigated spatial patterns of WUE using two datasets of gross primary productivity (GPP) and evapotranspiration (ET) and four biosphere model estimates of GPP and ET. Spatial relationships between WUE and climate variables were further explored through regression analyses. Global WUE estimated by the two satellite-based datasets is 1.9 ± 0.1 and 1.8 ± 0.6 g C m-2 mm-1, lower than the simulations from four process-based models (2.0 ± 0.3 g C m-2 mm-1) but comparable within the uncertainty of both approaches. In both satellite-based datasets and process models, precipitation is more strongly associated with spatial gradients of WUE for temperate and tropical regions, but temperature dominates north of 50 degrees N. WUE also increases with increasing solar radiation at high latitudes. The values of WUE from datasets and process-based models are systematically higher in wet regions (with higher GPP) than in dry regions. WUE t shows a lower precipitation sensitivity than WUE, which is contrary to leaf- and plant-level observations. IWUE t, the product of WUE t and water vapour deficit, is found to be rather conservative with spatially increasing precipitation, in agreement with leaf- and plant-level measurements. In conclusion, WUE, WUE t and IWUE t produce different spatial relationships with climate variables. In dry ecosystems, water losses from evaporation from bare soil, uncorrelated with productivity, tend to make WUE lower than in wetter regions. Yet canopy conductance is intrinsically efficient in those ecosystems and maintains a higher IWUE t. This suggests that the responses of each component flux of evapotranspiration should be analysed separately when investigating regional gradients in WUE, its temporal variability and its trends.

  7. Global patterns and climate drivers of water-use efficiency in terrestrial ecosystems deduced from satellite-based datasets and carbon cycle models

    DOE PAGES

    Sun, Yan; Piao, Shilong; Huang, Mengtian; ...

    2015-12-23

    Our aim is to investigate how ecosystem water-use efficiency (WUE) varies spatially under different climate conditions, and how spatial variations in WUE differ from those of transpiration-based water-use efficiency (WUE t) and transpiration-based inherent water-use efficiency (IWUE t). Location: Global terrestrial ecosystems. We investigated spatial patterns of WUE using two datasets of gross primary productivity (GPP) and evapotranspiration (ET) and four biosphere model estimates of GPP and ET. Spatial relationships between WUE and climate variables were further explored through regression analyses. Global WUE estimated by the two satellite-based datasets is 1.9 ± 0.1 and 1.8 ± 0.6 g C m-2 mm-1, lower than the simulations from four process-based models (2.0 ± 0.3 g C m-2 mm-1) but comparable within the uncertainty of both approaches. In both satellite-based datasets and process models, precipitation is more strongly associated with spatial gradients of WUE for temperate and tropical regions, but temperature dominates north of 50 degrees N. WUE also increases with increasing solar radiation at high latitudes. The values of WUE from datasets and process-based models are systematically higher in wet regions (with higher GPP) than in dry regions. WUE t shows a lower precipitation sensitivity than WUE, which is contrary to leaf- and plant-level observations. IWUE t, the product of WUE t and water vapour deficit, is found to be rather conservative with spatially increasing precipitation, in agreement with leaf- and plant-level measurements. In conclusion, WUE, WUE t and IWUE t produce different spatial relationships with climate variables. In dry ecosystems, water losses from evaporation from bare soil, uncorrelated with productivity, tend to make WUE lower than in wetter regions. Yet canopy conductance is intrinsically efficient in those ecosystems and maintains a higher IWUE t. This suggests that the responses of each component flux of evapotranspiration should be analysed separately when investigating regional gradients in WUE, its temporal variability and its trends.
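
    The water-use efficiency definitions compared in the two records above reduce to simple ratios, sketched below on hypothetical gridded inputs (GPP, ET, an assumed transpiration fraction, and vapour pressure deficit); no dataset or model output is reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        GPP = rng.uniform(200, 2500, (90, 180))   # hypothetical annual GPP grid (g C m-2)
        ET  = rng.uniform(200, 1200, (90, 180))   # annual evapotranspiration (mm)
        T   = 0.6 * ET                            # assumed transpiration fraction of ET
        VPD = rng.uniform(5, 25, (90, 180))       # vapour pressure deficit (hPa)

        WUE    = GPP / ET          # ecosystem water-use efficiency (g C m-2 mm-1)
        WUE_t  = GPP / T           # transpiration-based WUE
        IWUE_t = WUE_t * VPD       # inherent WUE: WUE_t times vapour pressure deficit

        print(f"grid-mean WUE = {WUE.mean():.2f} g C m-2 mm-1")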

  8. Parameterizing ecosystem light use efficiency and water use efficiency to estimate maize gross primary production and evapotranspiration using MODIS EVI

    USDA-ARS?s Scientific Manuscript database

    Quantifying global carbon and water balances requires accurate estimation of gross primary production (GPP) and evapotranspiration (ET), respectively, across space and time. Models that are based on the theory of light use efficiency (LUE) and water use efficiency (WUE) have emerged as efficient met...

  9. Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood

    PubMed Central

    Bondell, Howard D.; Stefanski, Leonard A.

    2013-01-01

    Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove the maximum attainable finite-sample replacement breakdown point and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805

  10. Likelihood-based inference for discretely observed birth-death-shift processes, with applications to evolution of mobile genetic elements.

    PubMed

    Xu, Jason; Guttorp, Peter; Kato-Maeda, Midori; Minin, Vladimir N

    2015-12-01

    Continuous-time birth-death-shift (BDS) processes are frequently used in stochastic modeling, with many applications in ecology and epidemiology. In particular, such processes can model evolutionary dynamics of transposable elements, which are important genetic markers in molecular epidemiology. Estimation of the effects of individual covariates on the birth, death, and shift rates of the process can be accomplished by analyzing patient data, but inferring these rates in a discretely and unevenly observed setting presents computational challenges. We propose a multi-type branching process approximation to BDS processes and develop a corresponding expectation maximization algorithm, where we use spectral techniques to reduce calculation of expected sufficient statistics to low-dimensional integration. These techniques yield an efficient and robust optimization routine for inferring the rates of the BDS process, and apply broadly to multi-type branching processes whose rates can depend on many covariates. After rigorously testing our methodology in simulation studies, we apply our method to study the intrapatient time evolution of the IS6110 transposable element, a genetic marker frequently used during estimation of epidemiological clusters of Mycobacterium tuberculosis infections. © 2015, The International Biometric Society.

  11. Simultaneously estimating evolutionary history and repeated traits phylogenetic signal: applications to viral and host phenotypic evolution

    PubMed Central

    Vrancken, Bram; Lemey, Philippe; Rambaut, Andrew; Bedford, Trevor; Longdon, Ben; Günthard, Huldrych F.; Suchard, Marc A.

    2014-01-01

    Phylogenetic signal quantifies the degree to which resemblance in continuously-valued traits reflects phylogenetic relatedness. Measures of phylogenetic signal are widely used in ecological and evolutionary research, and are recently gaining traction in viral evolutionary studies. Standard estimators of phylogenetic signal frequently condition on data summary statistics of the repeated trait observations and fixed phylogenetic trees, resulting in information loss and potential bias. To incorporate the observation process and phylogenetic uncertainty in a model-based approach, we develop a novel Bayesian inference method to simultaneously estimate the evolutionary history and phylogenetic signal from molecular sequence data and repeated multivariate traits. Our approach builds upon a phylogenetic diffusion framework that models continuous trait evolution as a Brownian motion process and incorporates Pagel’s λ transformation parameter to estimate dependence among traits. We provide a computationally efficient inference implementation in the BEAST software package. We evaluate the performance of the Bayesian estimator of phylogenetic signal against standard estimators on synthetic data, and demonstrate the use of our coherent framework to address several virus-host evolutionary questions, including virulence heritability for HIV, antigenic evolution in influenza and HIV, and Drosophila sensitivity to sigma virus infection. Finally, we discuss model extensions that will make useful contributions to our flexible framework for simultaneously studying sequence and trait evolution. PMID:25780554

  12. Extended reactance domain algorithms for DoA estimation onto an ESPAR antennas

    NASA Astrophysics Data System (ADS)

    Harabi, F.; Akkar, S.; Gharsallah, A.

    2016-07-01

    Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources through an electronically steerable parasitic array radiator (ESPAR) antenna. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows an important reduction in both computational cost and processing time as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation through only a few linear operations. The DoA estimation algorithms developed from this new approach exhibit good behaviour with lower calculation cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be reached, especially for very closely spaced sources and low source power, as compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviours of the proposed DoA estimators are analysed in various scenarios and compared with the Cramér-Rao bound (CRB). The conducted simulations testify to the high resolution of the developed algorithms and prove the efficiency of the proposed approach.

  13. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    NASA Astrophysics Data System (ADS)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. E.g., the method does not need to estimate tray interfacial area, which can be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any trays in distillation columns. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It is highly emphasized that estimating efficiency of an operating column has to be distinguished from that of a column being designed.

  14. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
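
    The periodic idea behind PARMA modelling can be sketched with a periodic AR(1) fitted by separate least squares for each month; the moving-average terms, Fourier expansion of parameters and likelihood-based estimation discussed in the abstract are omitted, and the streamflow series below is synthetic.

        import numpy as np

        def fit_par1(x, period=12):
            """Month-dependent AR(1) coefficient and residual sd, one pair per season."""
            phi, sigma = np.zeros(period), np.zeros(period)
            t_all = np.arange(1, len(x))
            for s in range(period):
                t = t_all[t_all % period == s]
                prev, cur = x[t - 1], x[t]
                phi[s] = (prev @ cur) / (prev @ prev)
                sigma[s] = np.std(cur - phi[s] * prev)
            return phi, sigma

        # Synthetic zero-mean monthly series with a seasonally varying AR coefficient.
        rng = np.random.default_rng(0)
        true_phi = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(12) / 12)
        x = np.zeros(50 * 12)
        for t in range(1, len(x)):
            x[t] = true_phi[t % 12] * x[t - 1] + rng.standard_normal()
        phi_hat, _ = fit_par1(x)
        print(np.round(phi_hat - true_phi, 2))   # per-month estimation errors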

  15. Efficient Regressions via Optimally Combining Quantile Information*

    PubMed Central

    Zhao, Zhibiao; Xiao, Zhijie

    2014-01-01

    We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
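
    The combination idea can be sketched by averaging slope estimates from quantile regressions at a few quantile levels; the paper derives optimal, data-dependent combination weights, so the equal weights and the statsmodels-based example below are placeholders for illustration only.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        x = rng.standard_normal(n)
        y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors
        X = sm.add_constant(x)

        quantiles = [0.25, 0.5, 0.75]
        # Slope estimate from a quantile regression at each level, then a naive equal-weight combination.
        slopes = [sm.QuantReg(y, X).fit(q=q).params[1] for q in quantiles]
        print("combined slope:", np.mean(slopes), "OLS slope:", sm.OLS(y, X).fit().params[1])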

  16. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks

    PubMed Central

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-01-01

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400

  17. Improved PPP Ambiguity Resolution Considering the Stochastic Characteristics of Atmospheric Corrections from Regional Networks.

    PubMed

    Li, Yihe; Li, Bofeng; Gao, Yang

    2015-11-30

    With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.
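
    The pseudo-observation idea described in the two records above can be sketched with a toy weighted least-squares adjustment in which an interpolated atmospheric correction enters as a direct observation of the delay parameter with a variance from its stochastic model; the geometry, parameter set and noise levels below are hypothetical and not a full PPP model.

        import numpy as np

        # Unknowns (toy single-epoch model): [receiver clock (m), zenith atmospheric delay (m)].
        A_code = np.array([[1.0, 2.0],     # two code observations with different mapping factors
                           [1.0, 1.2]])
        l_code = np.array([5.30, 4.55])    # observed-minus-computed values (m)
        var_code = np.array([0.3**2, 0.3**2])

        # Pseudo-observation of the atmospheric delay interpolated from the regional network.
        A_atm = np.array([[0.0, 1.0]])
        l_atm = np.array([2.10])
        var_atm = np.array([0.02**2])      # variance from the stochastic model of the correction

        A = np.vstack([A_code, A_atm])
        l = np.concatenate([l_code, l_atm])
        W = np.diag(1.0 / np.concatenate([var_code, var_atm]))

        # Weighted least-squares solution: the correction is weighted, not treated as error-free.
        x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
        print("clock, atmospheric delay:", np.round(x_hat, 3))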

  18. Efficiency and productivity assessment of public hospitals in Greece during the crisis period 2009-2012.

    PubMed

    Xenos, P; Yfantopoulos, J; Nektarios, M; Polyzos, N; Tinios, P; Constantopoulos, A

    2017-01-01

    This study is an initial effort to examine the dynamics of efficiency and productivity in Greek public hospitals during the first phase of the crisis, 2009-2012. Data were collected by the Ministry of Health after several quality controls ensuring comparability and validity of hospital inputs and outputs. Productivity is estimated using the Malmquist Indicator, decomposing the estimated values into efficiency and technological change. Hospital efficiency and productivity growth are calculated by bootstrapping the non-parametric Malmquist analysis. The advantage of this method is that efficiency and productivity are estimated with corresponding confidence intervals. Additionally, a Random-effects Tobit model is explored to investigate the impact of contextual factors on the magnitude of efficiency. Findings reveal substantial variations in hospital productivity over the period from 2009 to 2012. The economic crisis of 2009 had a negative impact on productivity. The average Malmquist Productivity Indicator (MPI) score is 0.72, with unity signifying stable production. Approximately 91% of the hospitals score lower than unity. A substantial increase is observed between 2010 and 2011, as indicated by the average MPI score, which rises to 1.52. Moreover, technology change scored above unity in more than 75% of hospitals. The last period (2011-2012) has shown stabilization in the expansionary process of productivity. The main factors contributing to overall productivity gains are increases in occupancy rates, and the type and size of the hospital. This paper attempts to offer insights into efficiency and productivity growth for public hospitals in Greece. The results suggest that the average hospital experienced substantial productivity growth between 2009 and 2012, as indicated by variations in MPI. Almost all of the productivity increase was due to technology change, which could be explained by the concurrent managerial and financing healthcare reforms. Hospitals operating under decreasing returns to scale could achieve higher efficiency rates by reducing their capacity. However, certain social objectives should also be considered. Emphasis should perhaps be placed on utilizing and advancing managerial and organizational reforms, so that the benefits of technological improvements will have a continuing positive impact in the future.

  19. Energy-Efficient Transmissions for Remote Wireless Sensor Networks: An Integrated HAP/Satellite Architecture for Emergency Scenarios

    PubMed Central

    Dong, Feihong; Li, Hongjun; Gong, Xiangwu; Liu, Quan; Wang, Jingchao

    2015-01-01

    A typical application scenario of remote wireless sensor networks (WSNs) is identified as an emergency scenario. One of the greatest design challenges for communications in emergency scenarios is energy-efficient transmission, due to scarce electrical energy in large-scale natural and man-made disasters. Integrated high altitude platform (HAP)/satellite networks are expected to optimally meet emergency communication requirements. In this paper, a novel integrated HAP/satellite (IHS) architecture is proposed, and three segments of the architecture are investigated in detail. The concept of link-state advertisement (LSA) is designed in a slow flat Rician fading channel. The LSA is received and processed by the terminal to estimate the link state information, which can significantly reduce the energy consumption at the terminal end. Furthermore, the transmission power requirements of the HAPs and terminals are derived using the gradient descent and differential equation methods. The energy consumption is modeled at both the source and system level. An innovative and adaptive algorithm is given for the energy-efficient path selection. The simulation results validate the effectiveness of the proposed adaptive algorithm. It is shown that the proposed adaptive algorithm can significantly improve energy efficiency when combined with the LSA and the energy consumption estimation. PMID:26404292

  20. Energy-Efficient Transmissions for Remote Wireless Sensor Networks: An Integrated HAP/Satellite Architecture for Emergency Scenarios.

    PubMed

    Dong, Feihong; Li, Hongjun; Gong, Xiangwu; Liu, Quan; Wang, Jingchao

    2015-09-03

    A typical application scenario of remote wireless sensor networks (WSNs) is identified as an emergency scenario. One of the greatest design challenges for communications in emergency scenarios is energy-efficient transmission, due to scarce electrical energy in large-scale natural and man-made disasters. Integrated high altitude platform (HAP)/satellite networks are expected to optimally meet emergency communication requirements. In this paper, a novel integrated HAP/satellite (IHS) architecture is proposed, and three segments of the architecture are investigated in detail. The concept of link-state advertisement (LSA) is designed in a slow flat Rician fading channel. The LSA is received and processed by the terminal to estimate the link state information, which can significantly reduce the energy consumption at the terminal end. Furthermore, the transmission power requirements of the HAPs and terminals are derived using the gradient descent and differential equation methods. The energy consumption is modeled at both the source and system level. An innovative and adaptive algorithm is given for the energy-efficient path selection. The simulation results validate the effectiveness of the proposed adaptive algorithm. It is shown that the proposed adaptive algorithm can significantly improve energy efficiency when combined with the LSA and the energy consumption estimation.

  1. Inference about species richness and community structure using species-specific occupancy models in the National Swiss Breeding Bird Survey MUB

    USGS Publications Warehouse

    Kery, M.; Royle, J. Andrew; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Species richness is the most widely used biodiversity measure. Virtually always, it cannot be observed but needs to be estimated because some species may be present but remain undetected. This fact is commonly ignored in ecology and management, although it will bias estimates of species richness and related parameters such as occupancy, turnover or extinction rates. We describe a species community modeling strategy based on species-specific models of occurrence, from which estimates of important summaries of community structure, e.g., species richness, occupancy, or measures of similarity among species or sites, are derived by aggregating indicators of occurrence for all species observed in the sample, and for the estimated complement of unobserved species. We use data augmentation for an efficient Bayesian approach to estimation and prediction under this model based on MCMC in WinBUGS. For illustration, we use the Swiss breeding bird survey (MHB) that conducts 2-3 territory-mapping surveys in a systematic sample of 267 1-km2 units on quadrat-specific routes averaging 5.1 km to obtain species-specific estimates of occupancy, and estimates of species richness of all diurnal species free of distorting effects of imperfect detectability. We introduce into our model species-specific covariates relevant to occupancy (elevation, forest cover, route length) and sampling (season, effort). From 1995 to 2004, 185 diurnal breeding bird species were known in Switzerland, and an additional 13 bred 1-3 times since 1900. 134 species were observed during MHB surveys in 254 quadrats surveyed in 2001, and our estimate of 169.9 (95% CI 151-195) therefore appeared sensible. The observed number of species ranged from 4 to 58 (mean 32.8), but with an estimated 0.7-11.2 (mean 2.6) further, unobserved species, the estimated proportion of detected species was 0.48-0.98 (mean 0.91). As is well known, species richness declined at higher elevation and fell above the timberline, and most species showed some preferred elevation. Route length had clear effects on occupancy, suggesting it is a proxy for the size of the effectively sampled area. Detection probability of most species showed clear seasonal patterns and increased with greater survey effort; these are important results for the planning of focused surveys. The main benefit of our model, and its implementation in WinBUGS for which we provide code, is its conceptual simplicity. Species richness is naturally expressed as the sum of occurrences of individual species. Information about species is combined across sites, which yields greater efficiency or may even enable estimation for sites with very few observed species in the first place. At the same time, species detections are clearly segregated into a true state process (occupancy) and an observation process (detection, given occupancy), and covariates can be readily introduced, which provides for efficient introduction of such additional information as well as sharp testing of such relationships.

  2. Estimating the costs of psychiatric hospital services at a public health facility in Nigeria.

    PubMed

    Ezenduka, Charles; Ichoku, Hyacinth; Ochonma, Ogbonnia

    2012-09-01

    Information on the cost of mental health services in Africa is very limited even though mental health disorders represent a significant public health concern, in terms of health and economic impact. Cost analysis is important for planning and for efficiency in the provision of hospital services. The study estimated the total and unit costs of psychiatric hospital services to guide policy and psychiatric hospital management efficiency in Nigeria. The study was exploratory and analytical, examining 2008 data. A standard costing methodology based on the ingredient approach was adopted, combining a top-down method with a step-down approach to allocate resources (overhead and indirect costs) to the final cost centers. Total and unit cost items related to the treatment of psychiatric patients (including the costs of personnel, overhead and the annualised costs of capital items) were identified and measured on the basis of outpatients' visits, inpatients' days and inpatients' admissions. The exercise reflected the input-output process of hospital services, where inputs were measured in terms of resource utilisation and output was measured by activities carried out at both the outpatient and inpatient departments. In the estimation process, total costs were calculated at every cost center/department and divided by a measure of corresponding patient output to produce the average cost per output. This followed a stepwise process of first allocating the direct costs of overhead to the intermediate and final cost centers and then from intermediate cost centers to final cost centers for the calculation of total and unit costs. Costs were calculated from the perspective of the healthcare facility, and converted to US dollars at the 2008 exchange rate. Personnel constituted the greatest resource input in all departments, averaging 80% of total hospital cost, reflecting the mix of capital and recurrent inputs. Cost per inpatient day, at $56, was equivalent to 1.4 times the cost per outpatient visit at $41, while cost per emergency visit was about two times the cost per outpatient visit. The cost of one psychiatric inpatient admission averaged $3,675, including the costs of drugs and laboratory services, which was equivalent to the cost of 90 outpatients' visits. Cost of drugs was about 4.4% of the total costs and each prescription averaged $7.48. The male ward was the most expensive cost center. Levels of subsidization for inpatient services were over 90%, while ancillary services were not subsidized and thus achieved full cost recovery. The hospital costs were driven by personnel, which reflected a mix of inputs that relied most on technical manpower. The unit cost estimates are significantly higher than the upper limit range for low income countries based on the WHO-CHOICE estimates. Findings suggest scope for improving efficiency of resource use given the high proportion of fixed costs, which indicates excess capacity. Adequate research is needed for effective comparisons and valid assessment of efficiency in psychiatric hospital services in Africa. The unit cost estimates will be useful in making projections for the total psychiatric hospital package and as a basis for determining the cost of specific neuropsychiatric cases.

  3. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as the multiplicative algebraic reconstruction technique (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
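
    The FFT-based deconvolution step can be sketched in 2-D with a Wiener filter and a known Gaussian PSF; the plenoptic PSF model and the 3-D focal-stack extension described in the paper are not reproduced, and the particle scene below is synthetic.

        import numpy as np

        def wiener_deconv(blurred, psf, k=1e-2):
            """Wiener deconvolution: division in the Fourier domain with regularization k."""
            H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
            G = np.fft.fft2(blurred)
            F_hat = np.conj(H) * G / (np.abs(H)**2 + k)
            return np.real(np.fft.ifft2(F_hat))

        # Toy scene: a few bright "particles" blurred by a centered Gaussian PSF.
        rng = np.random.default_rng(0)
        scene = np.zeros((128, 128))
        scene[rng.integers(0, 128, 20), rng.integers(0, 128, 20)] = 1.0
        y, x = np.mgrid[-64:64, -64:64]
        psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
        psf /= psf.sum()
        blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
        restored = wiener_deconv(blurred, psf)
        print("max abs reconstruction error:", np.max(np.abs(restored - scene)))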

  4. A cost-effective line-based light-balancing technique using adaptive processing.

    PubMed

    Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min

    2006-09-01

    The camera imaging system has been widely used; however, the displayed image often exhibits an unequal light distribution. This paper presents novel light-balancing techniques to compensate for uneven illumination based on adaptive signal processing. For text image processing, we first estimate the background level and then process each pixel with a nonuniform gain. This algorithm can balance the light distribution while keeping a high contrast in the image. For graph image processing, adaptive section control using a piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the light-balancing performance is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and the computational cost, making it applicable to real-time systems.
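
    The background-estimation and non-uniform-gain idea for text images can be sketched as follows: the slowly varying illumination is estimated with a large local mean filter and each pixel is scaled toward a target background level. The line-based, memory-efficient implementation of the paper is not reproduced; the window size and target level are assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def balance_light(img, window=51, target=230.0):
            """Estimate the background with a large mean filter and apply a per-pixel gain."""
            background = uniform_filter(img.astype(float), size=window)
            gain = target / np.maximum(background, 1.0)       # non-uniform, per-pixel gain
            return np.clip(img * gain, 0, 255).astype(np.uint8)

        # Synthetic text-like image with an illumination gradient and sparse dark strokes.
        h, w = 200, 300
        grad = np.linspace(120, 250, w)[None, :] * np.ones((h, 1))
        text = (np.random.default_rng(0).random((h, w)) < 0.02) * -100.0
        balanced = balance_light(np.clip(grad + text, 0, 255))
        print(balanced.mean(), balanced.std())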

  5. Estimation of Transpiration and Water Use Efficiency Using Satellite and Field Observations

    NASA Technical Reports Server (NTRS)

    Choudhury, Bhaskar J.; Quick, B. E.

    2003-01-01

    Structure and function of terrestrial plant communities bring about intimate relations between water, energy, and carbon exchange between land surface and atmosphere. Total evaporation, which is the sum of transpiration, soil evaporation and evaporation of intercepted water, couples water and energy balance equations. The rate of transpiration, which is the major fraction of total evaporation over most of the terrestrial land surface, is linked to the rate of carbon accumulation because functioning of stomata is optimized by both of these processes. Thus, quantifying the spatial and temporal variations of the transpiration efficiency (which is defined as the ratio of the rate of carbon accumulation and transpiration), and water use efficiency (defined as the ratio of the rate of carbon accumulation and total evaporation), and evaluation of modeling results against observations, are of significant importance in developing a better understanding of land surface processes. An approach has been developed for quantifying spatial and temporal variations of transpiration, and water-use efficiency based on biophysical process-based models, satellite and field observations. Calculations have been done using concurrent meteorological data derived from satellite observations and four dimensional data assimilation for four consecutive years (1987-1990) over an agricultural area in the Northern Great Plains of the US, and compared with field observations within and outside the study area. The paper provides substantive new information about interannual variation, particularly the effect of drought, on the efficiency values at a regional scale.

  6. Regional TEC dynamic modeling based on Slepian functions

    NASA Astrophysics Data System (ADS)

    Sharifi, Mohammad Ali; Farzaneh, Saeed

    2015-09-01

    In this work, the three-dimensional state of the ionosphere has been estimated by integrating spherical Slepian harmonic functions and the Kalman filter. The spherical Slepian harmonic functions have been used to establish the observation equations because of their properties in local modeling. Spherical harmonics are a poor choice for representing or analyzing geophysical processes without perfect global coverage, but the Slepian functions afford spatial and spectral selectivity. The Kalman filter has been utilized to perform the parameter estimation due to its suitable properties for processing GPS measurements in real-time mode. The proposed model has been applied to real data obtained from ground-based GPS observations across a portion of the IGS network in Europe. Results have been compared with the TECs estimated by the CODE, ESA and IGS centers and with the IRI-2012 model. The results indicate that the proposed model, which takes advantage of the Slepian basis and the Kalman filter, is efficient and allows for the generation of near-real-time regional TEC maps.
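
    The Kalman-filter step used to update the coefficient state from TEC observations can be sketched generically; in the paper the design matrix would contain Slepian basis functions evaluated at the ionospheric pierce points, whereas here the design matrix, state dimension and noise levels are hypothetical placeholders.

        import numpy as np

        def kalman_step(x, P, z, H, R, Q):
            """One predict (random-walk state) + update cycle of a linear Kalman filter."""
            x_pred, P_pred = x, P + Q                      # random-walk prediction
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        n_coef, n_obs = 15, 40                             # hypothetical sizes
        rng = np.random.default_rng(0)
        x, P = np.zeros(n_coef), np.eye(n_coef) * 10.0
        Q, R = np.eye(n_coef) * 0.01, np.eye(n_obs) * 1.0
        H = rng.standard_normal((n_obs, n_coef))           # stand-in for basis-function values
        z = H @ rng.standard_normal(n_coef) + rng.standard_normal(n_obs)
        x, P = kalman_step(x, P, z, H, R, Q)
        print(x[:3])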

  7. Market-driven emissions from recovery of carbon dioxide gas.

    PubMed

    Supekar, Sarang D; Skerlos, Steven J

    2014-12-16

    This article uses a market-based allocation method in a consequential life cycle assessment (LCA) framework to estimate the environmental emissions created by recovering carbon dioxide (CO2). We find that 1 ton of CO2 recovered as a coproduct of chemicals manufacturing leads to additional greenhouse gas emissions of 147-210 kg CO2-eq, while consuming 160-248 kWh of electricity, 254-480 MJ of heat, and 1836-4027 kg of water. The ranges depend on the initial and final purity of the CO2, particularly because higher purity grades require additional processing steps such as distillation, as well as higher temperature and flow rate of regeneration as needed for activated carbon treatment and desiccant beds. Higher purity also reduces process efficiency due to increased yield losses from regeneration gas and distillation reflux. Mass- and revenue-based allocation methods used in attributional LCA estimate that recovering CO2 leads to 19 and 11 times the global warming impact estimated from a market-based allocation used in consequential LCA.

  8. New charging strategy for lithium-ion batteries based on the integration of Taguchi method and state of charge estimation

    NASA Astrophysics Data System (ADS)

    Vo, Thanh Tu; Chen, Xiaopeng; Shen, Weixiang; Kapoor, Ajay

    2015-01-01

    In this paper, a new charging strategy of lithium-polymer batteries (LiPBs) has been proposed based on the integration of Taguchi method (TM) and state of charge estimation. The TM is applied to search an optimal charging current pattern. An adaptive switching gain sliding mode observer (ASGSMO) is adopted to estimate the SOC which controls and terminates the charging process. The experimental results demonstrate that the proposed charging strategy can successfully charge the same types of LiPBs with different capacities and cycle life. The proposed charging strategy also provides much shorter charging time, narrower temperature variation and slightly higher energy efficiency than the equivalent constant current constant voltage charging method.

  9. Coherent multiscale image processing using dual-tree quaternion wavelets.

    PubMed

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  10. Statistical processing of large image sequences.

    PubMed

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.

  11. Determining production level under uncertainty using fuzzy simulation and bootstrap technique, a case study

    NASA Astrophysics Data System (ADS)

    Hamidi, Mohammadreza; Shahanaghi, Kamran; Jabbarzadeh, Armin; Jahani, Ehsan; Pousti, Zahra

    2017-12-01

    In every production plant, it is necessary to have an estimate of the production level. Often, many parameters affect this estimate. In this paper, we try to find an appropriate estimate of the production level for an industrial factory called Barez in an uncertain environment. We consider a part of the production line that has different production times for different kinds of products, introducing both environmental and system uncertainty. To solve the problem we simulate the line and, because of the uncertainty in the processing times, use fuzzy simulation. The required fuzzy numbers are estimated using the bootstrap technique. The results have been used in the production planning process by factory experts, with satisfying consequences. The opinions of these experts about the efficiency of using this methodology are included.
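
    The bootstrap construction of a fuzzy processing time can be sketched as follows: resampled means of observed cycle times provide the support and mode of a triangular fuzzy number. The sample below is synthetic, and the full fuzzy simulation of the production line is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        observed_times = rng.gamma(shape=9.0, scale=1.1, size=30)   # hypothetical cycle times (min)

        # Bootstrap distribution of the mean processing time.
        boot_means = np.array([rng.choice(observed_times, size=len(observed_times),
                                          replace=True).mean() for _ in range(2000)])

        # Triangular fuzzy number (a, m, b) from the bootstrap percentiles and median.
        a = np.percentile(boot_means, 2.5)
        m = np.median(boot_means)
        b = np.percentile(boot_means, 97.5)
        print(f"triangular fuzzy processing time: ({a:.2f}, {m:.2f}, {b:.2f}) minutes")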

  12. Getting to the point: Rapid point selection and variable density InSAR time series for urban deformation monitoring

    NASA Astrophysics Data System (ADS)

    Spaans, K.; Hooper, A. J.

    2017-12-01

    The short revisit time and high data acquisition rates of current satellites have resulted in increased interest in the development of deformation monitoring and rapid disaster response capability, using InSAR. Fast, efficient data processing methodologies are required to deliver the timely results necessary for this, and also to limit computing resources required to process the large quantities of data being acquired. In contrast to volcano or earthquake applications, urban monitoring requires high resolution processing, in order to differentiate movements between buildings, or between buildings and the surrounding land. Here we present Rapid time series InSAR (RapidSAR), a method that can efficiently update high resolution time series of interferograms, and demonstrate its effectiveness over urban areas. The RapidSAR method estimates the coherence of pixels on an interferogram-by-interferogram basis. This allows for rapid ingestion of newly acquired images without the need to reprocess the earlier acquired part of the time series. The coherence estimate is based on ensembles of neighbouring pixels with similar amplitude behaviour through time, which are identified on an initial set of interferograms, and need to be re-evaluated only occasionally. By taking into account scattering properties of points during coherence estimation, a high quality coherence estimate is achieved, allowing point selection at full resolution. The individual point selection maximizes the amount of information that can be extracted from each interferogram, as no selection compromise has to be reached between high and low coherence interferograms. In other words, points do not have to be coherent throughout the time series to contribute to the deformation time series. We demonstrate the effectiveness of our method over urban areas in the UK. We show how the algorithm successfully extracts high density time series from full resolution Sentinel-1 interferograms, and distinguishes clearly between buildings and surrounding vegetation or streets. The fact that new interferograms can be processed separately from the remainder of the time series helps manage the high data volumes, both in space and time, generated by current missions.
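
    The per-interferogram coherence estimate over an ensemble of sibling pixels can be sketched as follows; the selection of pixels with similar amplitude behaviour and the removal of deterministic phase contributions are not reproduced, and the data below are synthetic.

        import numpy as np

        def coherence(slc1, slc2, ensemble):
            """|gamma| over an ensemble of pixel indices for one interferogram pair."""
            s1, s2 = slc1[ensemble], slc2[ensemble]
            num = np.abs(np.sum(s1 * np.conj(s2)))
            den = np.sqrt(np.sum(np.abs(s1)**2) * np.sum(np.abs(s2)**2))
            return num / den

        # Two synthetic single-look complex images sharing a common signal plus noise.
        rng = np.random.default_rng(0)
        n = 10000
        common = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        slc1 = common + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        slc2 = common + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        ensemble = np.arange(50)                      # hypothetical sibling-pixel indices
        print(round(coherence(slc1, slc2, ensemble), 2))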

  13. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
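
    The CRN idea can be sketched on a toy immigration-death process: run Gillespie's algorithm at the nominal and perturbed parameter values with the same random seed and take a finite difference. The model and numbers below are illustrative only, and the CRP coupling via the random time change representation is not shown.

    # Minimal sketch of common random numbers (CRN) for finite-difference parameter
    # sensitivity of a birth-death process: birth rate k, per-capita death rate g.
    import numpy as np

    def gillespie_birth_death(k, g, x0, t_end, rng):
        t, x = 0.0, x0
        while True:
            a1, a2 = k, g * x          # propensities: birth, death
            a0 = a1 + a2
            t += rng.exponential(1.0 / a0)
            if t > t_end:
                return x
            if rng.random() * a0 < a1:
                x += 1
            else:
                x -= 1

    def crn_sensitivity(k, g, x0, t_end, dk=0.05, n_runs=2000, seed=42):
        """d E[X(t_end)] / dk by forward difference, reusing seeds across the
        nominal and perturbed runs so their noise is positively correlated."""
        diffs = []
        for i in range(n_runs):
            x_nom = gillespie_birth_death(k,      g, x0, t_end, np.random.default_rng(seed + i))
            x_per = gillespie_birth_death(k + dk, g, x0, t_end, np.random.default_rng(seed + i))
            diffs.append((x_per - x_nom) / dk)
        diffs = np.asarray(diffs, dtype=float)
        return diffs.mean(), diffs.std(ddof=1) / np.sqrt(n_runs)

    est, se = crn_sensitivity(k=10.0, g=1.0, x0=0, t_end=5.0)
    # exact value for this toy model: (1 - exp(-g*t_end))/g ~ 0.993
    print(f"sensitivity ~ {est:.3f} +/- {se:.3f}")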

  14. An efficient solution of real-time data processing for multi-GNSS network

    NASA Astrophysics Data System (ADS)

    Gong, Xiaopeng; Gu, Shengfeng; Lou, Yidong; Zheng, Fu; Ge, Maorong; Liu, Jingnan

    2017-12-01

    Global navigation satellite systems (GNSS) are an indispensable tool for geodetic research and global monitoring of the Earth, and they have developed rapidly over the past few years with abundant GNSS networks, modern constellations, and significant improvements in the mathematical models used for data processing. However, due to the increasing number of satellites and stations, computational efficiency becomes a key issue and could hamper the further development of GNSS applications. In this contribution, this problem is addressed from the aspects of both dense linear algebra algorithms and GNSS processing strategy. First, in order to fully exploit the power of modern microprocessors, a square root information filter solution based on blocked QR factorization, employing as many matrix-matrix operations as possible, is introduced. In addition, the algorithmic complexity of GNSS data processing is further decreased by centralizing the carrier-phase observations and ambiguity parameters, as well as by performing real-time ambiguity resolution and elimination. Based on QR factorization of a simulated matrix, we conclude that, compared to unblocked QR factorization, blocked QR factorization improves processing efficiency by nearly two orders of magnitude on a personal computer with four 3.30 GHz cores. Then, with 82 globally distributed stations, the processing efficiency is further validated in multi-GNSS (GPS/BDS/Galileo) satellite clock estimation. The results suggest that the unblocked method takes about 31.38 s per epoch, while, without any loss of accuracy, our new algorithm takes only 0.50 and 0.31 s per epoch for the float and fixed clock solutions, respectively.
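
    The core of a QR-based square-root information filter measurement update can be sketched as below; numpy's qr dispatches to LAPACK's blocked factorization kernels, which is the kind of matrix-matrix-rich computation exploited above. This is a generic toy update, not the authors' multi-GNSS implementation.

    # Minimal sketch of a square-root information filter (SRIF) measurement update
    # via QR factorization of the stacked prior and (whitened) measurement rows.
    import numpy as np

    def srif_update(R_prior, z_prior, H, y):
        """Prior information in square-root form: R_prior @ x ~ z_prior.
        New (whitened) measurements: H @ x ~ y.
        Returns the updated square-root information pair (R_post, z_post)."""
        n = R_prior.shape[1]
        stacked = np.vstack([np.hstack([R_prior, z_prior[:, None]]),
                             np.hstack([H,       y[:, None]])])
        # An orthogonal transformation leaves the least-squares problem unchanged.
        _, T = np.linalg.qr(stacked)
        return T[:n, :n], T[:n, n]

    # Hypothetical toy problem: 3 parameters, a weak prior plus 5 new observations.
    rng = np.random.default_rng(7)
    x_true = np.array([1.0, -2.0, 0.5])
    R0 = np.eye(3) * 0.1
    z0 = R0 @ np.zeros(3)
    H = rng.standard_normal((5, 3))
    y = H @ x_true + 0.01 * rng.standard_normal(5)
    R1, z1 = srif_update(R0, z0, H, y)
    print(np.linalg.solve(R1, z1))            # close to x_true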

  15. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling is well recognized in economics, industrial reliability, etiology, epidemiological, genetic, and cancer screening studies. Length-biased right-censored data have a unique structure different from traditional survival data, and the nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to them. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and are more efficient than the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency of the estimators and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  16. Energy transfer and up-conversion in rare-earth doped dielectric crystals

    NASA Astrophysics Data System (ADS)

    Tkachuk, Alexandra M.

    1996-01-01

    In this work, we consider the prospects for developing visible and IR laser-diode-pumped lasers based on TR3+-doped double-fluoride crystals. On the basis of estimates of the probabilities of competing non-radiative energy-transfer processes, obtained from experiments and theoretical calculations, conclusions are drawn about the efficiency of up-conversion pumping and self-quenching of the upper TR3+ states excited by laser-diode emission. The effects of host composition, dopant concentration, and temperature on the efficiency of up-conversion processes are demonstrated using the YLF:Nd, YLF:Er, BaY2F8:Er, and BaY2F8:Er,Yb crystals. The transfer microparameters for the most important cross-relaxation transitions are determined and conclusions about the interaction mechanisms are drawn.

  17. Measuring the efficiency of a healthcare waste management system in Serbia with data envelopment analysis.

    PubMed

    Ratkovic, Branislava; Andrejic, Milan; Vidovic, Milorad

    2012-06-01

    In 2007, the Serbian Ministry of Health initiated specific activities towards establishing a workable model based on the existing administrative framework, which corresponds to the needs of healthcare waste management throughout Serbia. The objective of this research was to identify the reforms carried out and their outcomes by estimating the efficiencies of a sample of 35 healthcare facilities engaged in the process of collection and treatment of healthcare waste, using data envelopment analysis. Twenty-one (60%) of the 35 healthcare facilities analysed were found to be technically inefficient, with an average level of inefficiency of 13%. This fact indicates deficiencies in the process of collection and treatment of healthcare waste and the information obtained and presented in this paper could be used for further improvement and development of healthcare waste management in Serbia.
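
    An input-oriented, constant-returns-to-scale (CCR) DEA model of the kind used above can be solved as a small linear program; the sketch below uses scipy and hypothetical facility data rather than the 35 Serbian facilities from the study.

    # Minimal sketch: input-oriented CCR DEA efficiency scores with scipy's linprog.
    # Data are hypothetical (inputs: staff, budget; output: tonnes of waste treated).
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[8, 12, 10, 15, 9],        # input 1 per facility (e.g. staff)
                  [50, 80, 60, 110, 70]],    # input 2 per facility (e.g. budget)
                 dtype=float)
    Y = np.array([[120, 150, 140, 160, 100]], dtype=float)  # output per facility

    m, n = X.shape      # m inputs, n decision-making units (DMUs)
    s = Y.shape[0]      # s outputs

    def ccr_efficiency(o):
        """Efficiency of DMU o: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
        c = np.r_[1.0, np.zeros(n)]                       # minimise theta
        A_in = np.hstack([-X[:, [o]], X])                 # X@lam - theta*x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])         # -Y@lam <= -y_o
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[0]

    for o in range(n):
        print(f"facility {o}: efficiency = {ccr_efficiency(o):.3f}")  # 1.0 means on the frontier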

  18. Further study of inversion layer MIS solar cells

    NASA Technical Reports Server (NTRS)

    Ho, Fat Duen

    1992-01-01

    Many inversion layer metal-insulator-semiconductor (IL/MIS) solar cells have been fabricated. To date, the best cell fabricated by us has a 9.138 percent AM0 efficiency, with FF = 0.641, V_OC = 0.557 V, and I_SC = 26.9 µA. Efforts to fabricate an IL/MIS solar cell with reasonable efficiency are reported. More accurate control of the thickness of the thin oxide layer between the aluminum and silicon of the MIS contacts has been achieved by using two different process methods, and a comparison of these two thin-oxide processes is reported. The effects of the annealing time of the sample are discussed. The range of substrate resistivities used in the IL cell fabrication is estimated experimentally. A theoretical study of the MIS contacts under dark conditions is addressed.

  19. Robust functional regression model for marginal mean and subject-specific inferences.

    PubMed

    Cao, Chunzheng; Shi, Jian Qing; Lee, Youngjo

    2017-01-01

    We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student t-process. We propose efficient algorithms in estimating parameters for the marginal mean inferences and in predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals (PIs) for conditional mean curves. Numerical studies show that the proposed model provides a robust approach against data contamination or distribution misspecification, and the proposed PIs maintain the nominal confidence levels. A real data application is presented as an illustrative example.

  20. Copper vapor laser precision processing

    NASA Astrophysics Data System (ADS)

    Nikonchuk, Michail O.

    1991-05-01

    A copper vapor laser (CVL) system was designed on the basis of a master oscillator (MO) - spatial filter - amplifier (AMP) configuration placed in a thermostable volume. The material being processed is moved by a CNC system (GPM-AP-400) with +/- 5 micrometer accuracy. Several cutting parameters that define the quality and productivity of vaporization cutting are considered: efficiency, cut width, height of the upper and lower burr, roughness, and laser- and heat-affected zones. Estimates are made for some metals with thicknesses of 0.02 - 0.3 mm and cut widths of 0.01 - 0.03 mm. Examples of workpieces produced with the CVL are presented.

  1. Ore Reserve Estimation of Saprolite Nickel Using Inverse Distance Method in PIT Block 3A Banggai Area Central Sulawesi

    NASA Astrophysics Data System (ADS)

    Khaidir Noor, Muhammad

    2018-03-01

    Reserve estimation is one of the most important tasks in evaluating a mining project: it estimates the quality and quantity of the minerals present that have economic value. The reserve calculation method plays an important role in determining the efficiency of commercial exploitation of a deposit. This study calculates the ore reserves contained in the study area, specifically Pit Block 3A. Nickel ore reserves were estimated from detailed exploration data, processed in Surpac 6.2 using the Inverse Distance Weighting (squared power) estimation method. The estimate obtained from 30 drill holes was 76,453.5 tons of saprolite with a density of 1.5 ton/m3 and a COG (cut-off grade) of Ni ≥ 1.6%, while the overburden was 112,570.8 tons with a waste-rock density of 1.2 ton/m3. The stripping ratio (SR) was 1.47:1, smaller than the target stripping ratio of 1.60:1.

  2. Extending birthday paradox theory to estimate the number of tags in RFID systems.

    PubMed

    Shakiba, Masoud; Singh, Mandeep Jit; Sundararajan, Elankovan; Zavvari, Azam; Islam, Mohammad Tariqul

    2014-01-01

    The main objective of Radio Frequency Identification systems is to provide fast identification for tagged objects. However, there is always a chance of collision, when tags transmit their data to the reader simultaneously. Collision is a time-consuming event that reduces the performance of RFID systems. Consequently, several anti-collision algorithms have been proposed in the literature. Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular of these algorithms. DFSA dynamically modifies the frame size based on the number of tags. Since the real number of tags is unknown, it needs to be estimated. Therefore, an accurate tag estimation method has an important role in increasing the efficiency and overall performance of the tag identification process. In this paper, we propose a novel estimation technique for DFSA anti-collision algorithms that applies birthday paradox theory to estimate the number of tags accurately. The analytical discussion and simulation results prove that the proposed method increases the accuracy of tag estimation and, consequently, outperforms previous schemes.
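
    The occupancy ("birthday problem") model underlying such estimators can be sketched as below: simulate one DFSA frame and pick the tag count whose expected empty/singleton/collision slot counts best match the observed ones. This is a generic minimum-distance estimator in the same spirit, not the paper's exact birthday-paradox formulation.

    # Minimal sketch: estimate the tag count in a DFSA frame from the observed
    # numbers of empty / singleton / collision slots using the occupancy model.
    import numpy as np

    def expected_slot_counts(n, L):
        """Expected (empty, singleton, collision) slots with n tags in L slots."""
        e0 = L * (1 - 1 / L) ** n
        e1 = n * (1 - 1 / L) ** (n - 1)
        return e0, e1, L - e0 - e1

    def estimate_tags(c0, c1, ck, L, n_max=2000):
        """Pick the n whose expected slot outcome is closest to the observed one."""
        observed = np.array([c0, c1, ck], dtype=float)
        candidates = np.arange(c1 + 2 * ck, n_max)     # at least 2 tags per collision
        errors = [np.sum((np.array(expected_slot_counts(n, L)) - observed) ** 2)
                  for n in candidates]
        return int(candidates[int(np.argmin(errors))])

    # Simulate one frame to check the estimator (hypothetical frame size and tag count).
    rng = np.random.default_rng(3)
    L, n_true = 128, 300
    slots = rng.integers(0, L, size=n_true)
    counts = np.bincount(slots, minlength=L)
    c0, c1, ck = np.sum(counts == 0), np.sum(counts == 1), np.sum(counts >= 2)
    print("observed (empty, single, collision):", (int(c0), int(c1), int(ck)))
    print("estimated tags:", estimate_tags(c0, c1, ck, L), "true:", n_true)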

  3. Extending Birthday Paradox Theory to Estimate the Number of Tags in RFID Systems

    PubMed Central

    Shakiba, Masoud; Singh, Mandeep Jit; Sundararajan, Elankovan; Zavvari, Azam; Islam, Mohammad Tariqul

    2014-01-01

    The main objective of Radio Frequency Identification systems is to provide fast identification for tagged objects. However, there is always a chance of collision, when tags transmit their data to the reader simultaneously. Collision is a time-consuming event that reduces the performance of RFID systems. Consequently, several anti-collision algorithms have been proposed in the literature. Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular of these algorithms. DFSA dynamically modifies the frame size based on the number of tags. Since the real number of tags is unknown, it needs to be estimated. Therefore, an accurate tag estimation method has an important role in increasing the efficiency and overall performance of the tag identification process. In this paper, we propose a novel estimation technique for DFSA anti-collision algorithms that applies birthday paradox theory to estimate the number of tags accurately. The analytical discussion and simulation results prove that the proposed method increases the accuracy of tag estimation and, consequently, outperforms previous schemes. PMID:24752285

  4. Efficient sparse matrix multiplication scheme for the CYBER 203

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.

    1984-01-01

    This work has been directed toward the development of an efficient algorithm for performing this computation on the CYBER-203. The desire to provide software which gives the user the choice between the often conflicting goals of minimizing central processing (CPU) time or storage requirements has led to a diagonal-based algorithm in which one of three types of storage is selected for each diagonal. For each storage type, an initialization sub-routine estimates the CPU and storage requirements based upon results from previously performed numerical experimentation. These requirements are adjusted by weights provided by the user which reflect the relative importance the user places on the resources. The three storage types employed were chosen to be efficient on the CYBER-203 for diagonals which are sparse, moderately sparse, or dense; however, for many densities, no diagonal type is most efficient with respect to both resource requirements. The user-supplied weights dictate the choice.

  5. Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data

    PubMed Central

    CHEN, SHUAI; ZHAO, HONGWEI

    2013-01-01

    Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869

  6. High-efficiency cell concepts on low-cost silicon sheets

    NASA Technical Reports Server (NTRS)

    Bell, R. O.; Ravi, K. V.

    1985-01-01

    The limitations of sheet-growth material in terms of defect structure and minority carrier lifetime are discussed, and the effects of various defects on performance are estimated. Given these limitations, designs for a sheet-growth cell that make the best of the material characteristics are proposed, with the aim of achieving optimum synergy between base material quality and device processing variables. A strong coupling exists between material quality, the variables during crystal growth, and device processing variables. Two objectives are outlined: (1) optimization of this coupling for maximum performance at minimal cost; and (2) decoupling of materials from processing by improving base material quality so that it is less sensitive to processing variables.

  7. CO 2 laser cutting of MDF . 2. Estimation of power distribution

    NASA Astrophysics Data System (ADS)

    Ng, S. L.; Lum, K. C. P.; Black, I.

    2000-02-01

    Part 2 of this paper details an experimentally-based method to evaluate the power distribution for both CW and PM cutting. Variations in power distribution with different cutting speeds, material thickness and pulse ratios are presented. The paper also provides information on both the cutting efficiency and absorptivity index for MDF, and comments on the beam dispersion characteristics after the cutting process.

  8. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimating population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on three- and four-time-point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while inter-animal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of the overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
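
    A stripped-down version of such a Monte Carlo design evaluation is sketched below: a one-compartment IV bolus model with inter-animal variability, destructive sampling at three hypothetical time points, and a naive pooled fit. The paper's population analysis and design metrics are not reproduced; all parameter values are illustrative.

    # Minimal sketch: Monte Carlo evaluation of a destructive three-time-point design
    # for a one-compartment IV bolus model, scored by percent prediction error.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(11)
    dose = 100.0
    CL_pop, V_pop = 2.0, 10.0          # population clearance (L/h) and volume (L)
    omega, sigma = 0.2, 0.1            # inter-animal and residual variability (CV)
    times = np.array([0.5, 2.0, 6.0])  # hypothetical three-point design
    n_per_time = 6                     # animals sacrificed per time point

    def simulate_study():
        t_all, c_all = [], []
        for t in times:
            CL = CL_pop * np.exp(omega * rng.standard_normal(n_per_time))
            V = V_pop * np.exp(omega * rng.standard_normal(n_per_time))
            conc = dose / V * np.exp(-CL / V * t)
            conc *= np.exp(sigma * rng.standard_normal(n_per_time))   # residual error
            t_all.append(np.full(n_per_time, t)); c_all.append(conc)
        return np.concatenate(t_all), np.concatenate(c_all)

    def model(t, CL, V):
        return dose / V * np.exp(-CL / V * t)

    pe_CL, pe_V = [], []
    for _ in range(500):                              # Monte Carlo replicates
        t_obs, c_obs = simulate_study()
        (CL_hat, V_hat), _ = curve_fit(model, t_obs, c_obs, p0=[1.0, 5.0])
        pe_CL.append(100 * (CL_hat - CL_pop) / CL_pop)
        pe_V.append(100 * (V_hat - V_pop) / V_pop)

    print(f"CL percent prediction error: mean {np.mean(pe_CL):.1f}%, SD {np.std(pe_CL):.1f}%")
    print(f"V  percent prediction error: mean {np.mean(pe_V):.1f}%, SD {np.std(pe_V):.1f}%")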

  9. Effects of rainfall events on the occurrence and detection efficiency of viruses in river water impacted by combined sewer overflows.

    PubMed

    Hata, Akihiko; Katayama, Hiroyuki; Kojima, Keisuke; Sano, Shoichi; Kasuga, Ikuro; Kitajima, Masaaki; Furumai, Hiroaki

    2014-01-15

    Rainfall events can introduce large amounts of microbial contaminants, including human enteric viruses, into surface water through intermittent discharges from combined sewer overflows (CSOs). The present study aimed to investigate the effect of rainfall events on viral loads in surface waters impacted by CSOs and the reliability of molecular methods for the detection of enteric viruses. The reliability of virus detection in the samples was assessed by using process controls for the virus concentration, nucleic acid extraction, and reverse transcription (RT)-quantitative PCR (qPCR) steps, which allowed accurate estimation of virus detection efficiencies. Recovery efficiencies of poliovirus in river water samples collected during rainfall events (<10%) were lower than those during dry weather conditions (>10%). The log10-transformed virus concentration efficiency was negatively correlated with the suspended solids concentration (r(2)=0.86), which increased significantly during rainfall events. Efficiencies of the DNA extraction and qPCR steps, determined with adenovirus type 5 and a primer-sharing control, respectively, were lower in dry weather. However, no clear relationship was observed between organic water quality parameters and the efficiencies of these two steps. Observed concentrations of indigenous enteric adenoviruses, GII noroviruses, enteroviruses, and Aichi viruses increased during rainfall events even though the virus concentration efficiency was presumed to be lower than in dry weather. The present study highlights the importance of using appropriate process controls to evaluate accurately the concentrations of waterborne enteric viruses in natural waters impacted by wastewater discharge, stormwater, and CSOs.

  10. Improved system integration for integrated gasification combined cycle (IGCC) systems.

    PubMed

    Frey, H Christopher; Zhu, Yunhua

    2006-03-01

    Integrated gasification combined cycle (IGCC) systems are a promising technology for power generation. They include an air separation unit (ASU), a gasification system, and a gas turbine combined cycle power block, and feature competitive efficiency and lower emissions compared to conventional power generation technology. IGCC systems are not yet in widespread commercial use and opportunities remain to improve system feasibility via improved process integration. A process simulation model was developed for IGCC systems with alternative types of ASU and gas turbine integration. The model is applied to evaluate integration schemes involving nitrogen injection, air extraction, and combinations of both, as well as different ASU pressure levels. The optimal nitrogen injection only case in combination with an elevated pressure ASU had the highest efficiency and power output and approximately the lowest emissions per unit output of all cases considered, and thus is a recommended design option. The optimal combination of air extraction coupled with nitrogen injection had slightly worse efficiency, power output, and emissions than the optimal nitrogen injection only case. Air extraction alone typically produced lower efficiency, lower power output, and higher emissions than all other cases. The recommended nitrogen injection only case is estimated to provide annualized cost savings compared to a nonintegrated design. Process simulation modeling is shown to be a useful tool for evaluation and screening of technology options.

  11. A novel estimating method for steering efficiency of the driver with electromyography signals

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Ji, Xuewu; Hayama, Ryouhei; Mizuno, Takahiro

    2014-05-01

    Existing research on steering efficiency mainly focuses on the mechanical efficiency of the steering system, aiming at designing and optimizing the steering mechanism. In the development of assisted steering systems, and especially in the evaluation of their comfort, the steering efficiency of the driver's physiological output is usually not considered, because this output is difficult to measure or estimate; the objective evaluation of steering comfort therefore cannot be conducted from a movement-efficiency perspective. In order to take a further step toward the objective evaluation of steering comfort, an estimation method for the driver's steering efficiency was developed based on the relationship between steering force and muscle activity. First, the steering forces in the steering wheel plane and the electromyography (EMG) signals of the primary muscles were measured. These primary muscles are the muscles in the shoulder and upper arm that mainly produce the steering torque; their functions in the steering maneuver were identified previously. Next, based on multiple regression of the steering force and EMG signals, both the effective steering force and the driver's total force capacity in the steering maneuver were calculated. Finally, the driver's steering efficiency was estimated from the effective force and the total force capacity, which together represent the driver's physiological output from the primary muscles. This research develops a novel method for estimating the steering efficiency of the driver's physiological output, including the estimation of both the steering force and the force capacity of the primary muscles from EMG signals, and will help to evaluate steering comfort from an objective perspective.

  12. Quantifying Adoption Rates and Energy Savings Over Time for Advanced Manufacturing Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanes, Rebecca; Carpenter Petri, Alberta C; Riddle, Matt

    Energy-efficient manufacturing technologies can reduce energy consumption and lower operating costs for an individual manufacturing facility, but increased process complexity and the resulting risk of disruption means that manufacturers may be reluctant to adopt such technologies. In order to quantify potential energy savings at scales larger than a single facility, it is necessary to account for how quickly and how widely the technology will be adopted by manufacturers. This work develops a methodology for estimating energy-efficient manufacturing technology adoption rates using quantitative, objectively measurable technology characteristics, including energetic, economic and technical criteria. Twelve technology characteristics are considered, and each characteristic is assigned an importance weight that reflects its impact on the overall technology adoption rate. Technology characteristic data and importance weights are used to calculate the adoption score, a number between 0 and 1 that represents how quickly the technology is likely to be adopted. The adoption score is then used to estimate parameters for the Bass diffusion curve, which quantifies the change in the number of new technology adopters in a population over time. Finally, energy savings at the sector level are calculated over time by multiplying the number of new technology adopters at each time step with the technology's facility-level energy savings. The proposed methodology will be applied to five state-of-the-art energy-efficient technologies in the carbon fiber composites sector, with technology data obtained from the Department of Energy's 2016 bandwidth study. Because the importance weights used in estimating the Bass curve parameters are subjective, a sensitivity analysis will be performed on the weights to obtain a range of parameters for each technology. The potential energy savings for each technology and the rate at which each technology is adopted in the sector are quantified and used to identify the technologies which offer the greatest cumulative sector-level energy savings over a period of 20 years. Preliminary analysis indicates that relatively simple technologies, such as efficient furnaces, will be adopted more quickly and result in greater cumulative energy savings compared to more complex technologies that require process retrofitting, such as advanced control systems.
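
    The Bass-diffusion step can be sketched as follows; the mapping from adoption score to the Bass coefficients p and q, and all other numbers, are illustrative assumptions rather than the report's calibrated values.

    # Minimal sketch: Bass diffusion of adopters driven by an adoption score, and
    # cumulative sector-level energy savings over a fixed horizon.
    import numpy as np

    def bass_adopters(p, q, m, years):
        """Cumulative and new adopters per year for a Bass diffusion with
        innovation coefficient p, imitation coefficient q, and market size m."""
        t = np.arange(1, years + 1, dtype=float)
        F = (1 - np.exp(-(p + q) * t)) / (1 + (q / p) * np.exp(-(p + q) * t))
        cumulative = m * F
        new = np.diff(np.r_[0.0, cumulative])
        return cumulative, new

    def sector_energy_savings(adoption_score, m, savings_per_facility_gj, years=20):
        # Hypothetical linear mapping from adoption score to Bass parameters.
        p = 0.005 + 0.025 * adoption_score
        q = 0.10 + 0.40 * adoption_score
        cumulative, _ = bass_adopters(p, q, m, years)
        # Each adopter saves savings_per_facility_gj in every year after adopting.
        return np.sum(cumulative * savings_per_facility_gj)   # total GJ over the horizon

    # Example: a simple technology (high score) vs. a complex retrofit (low score).
    for name, score in [("efficient furnace", 0.8), ("advanced control retrofit", 0.3)]:
        total = sector_energy_savings(score, m=500, savings_per_facility_gj=2000.0)
        print(f"{name}: ~{total / 1e6:.2f} PJ cumulative over 20 years")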

  13. Potential Organ-Donor Supply and Efficiency of Organ Procurement Organizations

    PubMed Central

    Guadagnoli, Edward; Christiansen, Cindy L.; Beasley, Carol L.

    2003-01-01

    The authors estimated the supply of organ donors in the U.S. and also according to organ procurement organizations (OPOs). They estimated the number of donors in the U.S. to be 16,796. Estimates of the number of potential donors for each OPO were used to calculate the level of donor efficiency (actual donors as a percent of potential donors). Overall, donor efficiency for OPOs was 35 percent; the majority was between 30- and 40-percent efficient. Although there is room to improve donor efficiency in the U.S., even a substantial improvement will not meet the Nation's demand for organs. PMID:14628403

  14. Potential organ-donor supply and efficiency of organ procurement organizations.

    PubMed

    Guadagnoli, Edward; Christiansen, Cindy L; Beasley, Carol L

    2003-01-01

    The authors estimated the supply of organ donors in the U.S. and also according to organ procurement organizations (OPOs). They estimated the number of donors in the U.S. to be 16,796. Estimates of the number of potential donors for each OPO were used to calculate the level of donor efficiency (actual donors as a percent of potential donors). Overall, donor efficiency for OPOs was 35 percent; the majority was between 30- and 40-percent efficient. Although there is room to improve donor efficiency in the U.S., even a substantial improvement will not meet the Nation's demand for organs.

  15. Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications

    DOEpatents

    Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI

    2012-05-29

    A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.

  16. Developing software to use parallel processing effectively. Final report, June-December 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Center, J.

    1988-10-01

    This report describes the difficulties involved in writing efficient parallel programs and describes the hardware and software support currently available for generating software that utilizes parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing since electronic circuitry is coming up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power (far in excess of the maximum estimated 3 Gflops achievable by single-processor computers). For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer, such as the Cray.

  17. Solar photochemical process engineering for production of fuels and chemicals

    NASA Technical Reports Server (NTRS)

    Biddle, J. R.; Peterson, D. B.; Fujita, T.

    1984-01-01

    The engineering costs and performance of a nominal 25,000 scmd (883,000 scfd) photochemical plant to produce dihydrogen from water were studied. Two systems were considered, one based on flat-plate collector/reactors and the other on linear parabolic troughs. Engineering subsystems were specified including the collector/reactor, support hardware, field transport piping, gas compression equipment, and balance-of-plant (BOP) items. Overall plant efficiencies of 10.3 and 11.6% are estimated for the flat-plate and trough systems, respectively, based on assumed solar photochemical efficiencies of 12.9 and 14.6%. Because of the opposing effects of concentration ratio and operating temperature on efficiency, it was concluded that reactor cooling would be necessary with the trough system. Both active and passive cooling methods were considered. Capital costs and energy costs, for both concentrating and non-concentrating systems, were determined and their sensitivity to efficiency and economic parameters were analyzed. The overall plant efficiency is the single most important factor in determining the cost of the fuel.

  18. Solar photochemical process engineering for production of fuels and chemicals

    NASA Technical Reports Server (NTRS)

    Biddle, J. R.; Peterson, D. B.; Fujita, T.

    1985-01-01

    The engineering costs and performance of a nominal 25,000 scmd (883,000 scfd) photochemical plant to produce dihydrogen from water were studied. Two systems were considered, one based on flat-plate collector/reactors and the other on linear parabolic troughs. Engineering subsystems were specified including the collector/reactor, support hardware, field transport piping, gas compression equipment, and balance-of-plant (BOP) items. Overall plant efficiencies of 10.3 and 11.6 percent are estimated for the flat-plate and trough systems, respectively, based on assumed solar photochemical efficiencies of 12.9 and 14.6 percent. Because of the opposing effects of concentration ratio and operating temperature on efficiency, it was concluded that reactor cooling would be necessary with the trough system. Both active and passive cooling methods were considered. Capital costs and energy costs, for both concentrating and non-concentrating systems, were determined and their sensitivity to efficiency and economic parameters were analyzed. The overall plant efficiency is the single most important factor in determining the cost of the fuel.

  19. Quantifying the flow efficiency in constant-current capacitive deionization.

    PubMed

    Hawks, Steven A; Knipe, Jennifer M; Campbell, Patrick G; Loeb, Colin K; Hubert, McKenzie A; Santiago, Juan G; Stadermann, Michael

    2018-02-01

    Here we detail a previously unappreciated loss mechanism inherent to capacitive deionization (CDI) cycling operation that plays a substantial role in determining performance. This mechanism reflects the fact that desalinated water inside a cell is partially lost to re-salination if desorption is carried out immediately after adsorption. We describe such effects by a parameter called the flow efficiency, and show that this efficiency is distinct from, and yet multiplicative with, other highly studied adsorption efficiencies. Flow losses can be minimized by flowing more feed solution through the cell during desalination; however, this also results in less effluent concentration reduction. While the rationale outlined here is applicable to all CDI cell architectures that rely on cycling, we validate our model with a flow-through electrode CDI device operated in constant-current mode. We find excellent agreement between flow efficiency model predictions and experimental results, thus giving researchers simple equations by which they can estimate this distinct loss process for their operation.

  20. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.

  1. Development of the hybrid sulfur cycle for use with concentrated solar heat. I. Conceptual design

    DOE PAGES

    Gorensek, Maximilian B.; Corgnale, Claudio; Summers, William A.

    2017-07-27

    We propose a detailed conceptual design of a solar hybrid sulfur (HyS) cycle. Numerous design tradeoffs, including process operating conditions and strategies, methods of integration with solar energy sources, and solar design options, were considered. A baseline design was selected, and process flowsheets were developed. Pinch analyses were performed to establish the limiting energy efficiency. Detailed material and energy balances were completed, and a full stream table prepared. Design assumptions include: location in the southwest US desert, a falling particle concentrated solar receiver, indirect heat transfer via pressurized helium, continuous operation with thermal energy storage, a liquid-fed electrolyzer with a PBI membrane, and a bayonet-type acid decomposer. The thermochemical cycle efficiency for the HyS process was estimated to be 35.0% on an LHV basis, and the solar-to-hydrogen (STH) energy conversion ratio was 16.9%. This exceeds the Year 2015 DOE STCH target of STH > 10% and shows promise for meeting the Year 2020 target of 20%.

  2. Development of the hybrid sulfur cycle for use with concentrated solar heat. I. Conceptual design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorensek, Maximilian B.; Corgnale, Claudio; Summers, William A.

    We propose a detailed conceptual design of a solar hybrid sulfur (HyS) cycle. Numerous design tradeoffs, including process operating conditions and strategies, methods of integration with solar energy sources, and solar design options, were considered. A baseline design was selected, and process flowsheets were developed. Pinch analyses were performed to establish the limiting energy efficiency. Detailed material and energy balances were completed, and a full stream table prepared. Design assumptions include: location in the southwest US desert, a falling particle concentrated solar receiver, indirect heat transfer via pressurized helium, continuous operation with thermal energy storage, a liquid-fed electrolyzer with a PBI membrane, and a bayonet-type acid decomposer. The thermochemical cycle efficiency for the HyS process was estimated to be 35.0% on an LHV basis, and the solar-to-hydrogen (STH) energy conversion ratio was 16.9%. This exceeds the Year 2015 DOE STCH target of STH > 10% and shows promise for meeting the Year 2020 target of 20%.

  3. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351

  4. Joint scale-change models for recurrent events and failure time.

    PubMed

    Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun

    2017-01-01

    Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.

  5. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    PubMed

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EURTRY and EUR-HUF exchange rates each.

  6. Bayesian framework for modeling diffusion processes with nonlinear drift based on nonlinear and incomplete observations.

    PubMed

    Wu, Hao; Noé, Frank

    2011-03-01

    Diffusion processes are relevant for a variety of phenomena in the natural sciences, including diffusion of cells or biomolecules within cells, diffusion of molecules on a membrane or surface, and diffusion of a molecular conformation within a complex energy landscape. Many experimental tools exist now to track such diffusive motions in single cells or molecules, including high-resolution light microscopy, optical tweezers, fluorescence quenching, and Förster resonance energy transfer (FRET). Experimental observations are most often indirect and incomplete: (1) They do not directly reveal the potential or diffusion constants that govern the diffusion process, (2) they have limited time and space resolution, and (3) the highest-resolution experiments do not track the motion directly but rather probe it stochastically by recording single events, such as photons, whose properties depend on the state of the system under investigation. Here, we propose a general Bayesian framework to model diffusion processes with nonlinear drift based on incomplete observations as generated by various types of experiments. A maximum penalized likelihood estimator is given as well as a Gibbs sampling method that allows to estimate the trajectories that have caused the measurement, the nonlinear drift or potential function and the noise or diffusion matrices, as well as uncertainty estimates of these properties. The approach is illustrated on numerical simulations of FRET experiments where it is shown that trajectories, potentials, and diffusion constants can be efficiently and reliably estimated even in cases with little statistics or nonequilibrium measurement conditions.

  7. Characterization of emission factors related to source activity for trichloroethylene degreasing and chrome plating processes.

    PubMed

    Wadden, R A; Hawkins, J L; Scheff, P A; Franke, J E

    1991-09-01

    A study at an automotive parts fabrication plant evaluated four metal surface treatment processes during production conditions. The evaluation provides examples of how to estimate process emission factors from activity and air concentration data. The processes were open tank and enclosed tank degreasing with trichloroethylene (TCE), chromium conversion coating, and chromium electroplating. Area concentrations of TCE and chromium (Cr) were monitored for 1-hr periods at three distances from each process. Source activities at each process were recorded during each sampling interval. Emission rates were determined by applying appropriate mass balance models to the concentration patterns around each source. The emission factors obtained from regression analysis of the emission rate and activity data were 16.9 g TCE/basket of parts for the open-top degreaser; 1.0 g TCE/1000 parts for the enclosed degreaser; 1.48-1.64 mg Cr/1000 parts processed in the hot CrO3/HNO3 tank for the chrome conversion coating; and 5.35-9.17 mg Cr/rack of parts for chrome electroplating. The factors were also used to determine the efficiency of collection for the local exhaust systems serving each process. Although the number of observations were limited, these factors may be useful for providing initial estimates of emissions from similar processes in other settings.
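
    Deriving an emission factor as the slope of a through-the-origin regression of emission rate on source activity can be sketched in a few lines; the data below are hypothetical, as is the simplified collection-efficiency calculation at the end.

    # Minimal sketch: emission factor as a zero-intercept regression slope of
    # emission rate on activity, with made-up monitoring data.
    import numpy as np

    baskets_per_hour = np.array([2, 3, 4, 5, 6], dtype=float)        # source activity
    emission_g_per_hour = np.array([35.1, 49.8, 68.9, 83.0, 102.5])  # from mass-balance model

    # Least-squares slope forced through the origin: sum(x*y) / sum(x*x).
    factor = np.sum(baskets_per_hour * emission_g_per_hour) / np.sum(baskets_per_hour ** 2)
    print(f"emission factor ~ {factor:.1f} g TCE per basket of parts")

    # Simplified local-exhaust collection efficiency: captured mass flow divided by
    # captured plus fugitive mass flow (hypothetical captured value and activity).
    captured_g_per_hour = 55.0
    activity = 4.0
    fugitive_g_per_hour = factor * activity
    print(f"collection efficiency ~ {captured_g_per_hour / (captured_g_per_hour + fugitive_g_per_hour):.0%}")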

  8. Comparison of Modeling Approaches for Carbon Partitioning: Impact on Estimates of Global Net Primary Production and Equilibrium Biomass of Woody Vegetation from MODIS GPP

    NASA Astrophysics Data System (ADS)

    Ise, T.; Litton, C. M.; Giardina, C. P.; Ito, A.

    2009-12-01

    Plant partitioning of carbon (C) to above- vs. belowground, to growth vs. respiration, and to short vs. long lived tissues exerts a large influence on ecosystem structure and function with implications for the global C budget. Importantly, outcomes of process-based terrestrial vegetation models are likely to vary substantially with different C partitioning algorithms. However, controls on C partitioning patterns remain poorly quantified, and studies have yielded variable, and at times contradictory, results. A recent meta-analysis of forest studies suggests that the ratio of net primary production (NPP) and gross primary production (GPP) is fairly conservative across large scales. To illustrate the effect of this unique meta-analysis-based partitioning scheme (MPS), we compared an application of MPS to a terrestrial satellite-based (MODIS) GPP to estimate NPP vs. two global process-based vegetation models (Biome-BGC and VISIT) to examine the influence of C partitioning on C budgets of woody plants. Due to the temperature dependence of maintenance respiration, NPP/GPP predicted by the process-based models increased with latitude while the ratio remained constant with MPS. Overall, global NPP estimated with MPS was 17 and 27% lower than the process-based models for temperate and boreal biomes, respectively, with smaller differences in the tropics. Global equilibrium biomass of woody plants was then calculated from the NPP estimates and tissue turnover rates from VISIT. Since turnover rates differed greatly across tissue types (i.e., metabolically active vs. structural), global equilibrium biomass estimates were sensitive to the partitioning scheme employed. The MPS estimate of global woody biomass was 7-21% lower than that of the process-based models. In summary, we found that model output for NPP and equilibrium biomass was quite sensitive to the choice of C partitioning schemes. Carbon use efficiency (CUE; NPP/GPP) by forest biome and the globe. Values are means for 2001-2006.

  9. Implications of sampling design and sample size for national carbon accounting systems.

    PubMed

    Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

    2011-11-08

    Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, this information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, and 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error over total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
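
    The kind of cost-efficiency comparison described above can be sketched with textbook variance approximations for simple random sampling versus two-phase (double) sampling with a regression estimator; all costs, variances and the correlation below are hypothetical.

    # Minimal sketch: percent standard error achievable for a fixed budget under
    # simple random sampling vs. two-phase sampling with a regression estimator.
    import numpy as np

    mean_c = 120.0      # mean carbon stock (t/ha)
    S = 60.0            # population standard deviation (t/ha)
    rho = 0.8           # correlation between remote-sensing metric and field carbon
    cost_field = 500.0  # cost per field plot
    cost_rs = 5.0       # cost per remote-sensing sample unit
    budget = 100000.0

    # Simple random sampling: spend the whole budget on field plots.
    n_srs = budget / cost_field
    pct_se_srs = 100 * (S / np.sqrt(n_srs)) / mean_c

    # Two-phase sampling with a regression estimator:
    # Var ~ S^2 * [(1 - rho^2)/n_field + rho^2/n_rs]; cost-optimal allocation below.
    ratio = np.sqrt((1 - rho**2) / rho**2 * cost_rs / cost_field)   # n_field / n_rs
    n_rs = budget / (cost_rs + ratio * cost_field)
    n_field = ratio * n_rs
    var_2p = S**2 * ((1 - rho**2) / n_field + rho**2 / n_rs)
    pct_se_2p = 100 * np.sqrt(var_2p) / mean_c

    print(f"SRS:       {n_srs:.0f} field plots, percent SE = {pct_se_srs:.2f}%")
    print(f"Two-phase: {n_field:.0f} field plots + {n_rs:.0f} RS units, percent SE = {pct_se_2p:.2f}%")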

  10. Novel approach for computing photosynthetically active radiation for productivity modeling using remotely sensed images in the Great Plains, United States

    USGS Publications Warehouse

    Singh, Ramesh K.; Liu, Shu-Guang; Tieszen, Larry L.; Suyker, Andrew E.; Verma, Shashi B.

    2012-01-01

    Gross primary production (GPP) is a key indicator of ecosystem performance and informs many environmental decision-making processes. We used the Eddy Covariance-Light Use Efficiency (EC-LUE) model to estimate GPP in the Great Plains, United States, in order to evaluate the performance of this model. We developed a novel algorithm for computing photosynthetically active radiation (PAR) based on net radiation. A strong correlation (R2 = 0.94, N = 24) was found between daily PAR and Landsat-based mid-day instantaneous net radiation. Although the Moderate Resolution Imaging Spectroradiometer (MODIS)-based instantaneous net radiation was in better agreement (R2 = 0.98, N = 24) with the daily measured PAR, there was no statistically significant difference between Landsat-based PAR and MODIS-based PAR. The EC-LUE model validation also confirms the need to consider biological attributes (C3 versus C4 plants) for potential light use efficiency: a universal potential light use efficiency is unable to capture the spatial variation of GPP. It is therefore necessary to use a C3-versus-C4-based land use/land cover map when applying the EC-LUE model to estimate the spatiotemporal distribution of GPP.

  11. Long-term shifts in life-cycle energy efficiency and carbon intensity.

    PubMed

    Yeh, Sonia; Mishra, Gouri Shankar; Morrison, Geoff; Teter, Jacob; Quiceno, Raul; Gillingham, Kenneth; Riera-Palou, Xavier

    2013-03-19

    The quantity of primary energy needed to support global human activity is in large part determined by how efficiently that energy is converted to a useful form. We estimate the system-level life-cycle energy efficiency (EF) and carbon intensity (CI) across primary resources for 2005-2100. Our results underscore that although technological improvements at each energy conversion process will improve technology efficiency and lead to important reductions in primary energy use, market mediated effects and structural shifts toward less efficient pathways and pathways with multiple stages of conversion will dampen these efficiency gains. System-level life-cycle efficiency may decrease as mitigation efforts intensify, since low-efficiency renewable systems with high output have much lower GHG emissions than some high-efficiency fossil fuel systems. Climate policies accelerate both improvements in EF and the adoption of renewable technologies, resulting in considerably lower primary energy demand and GHG emissions. Life-cycle EF and CI of useful energy provide a useful metric for understanding dynamics of implementing climate policies. The approaches developed here reiterate the necessity of a combination of policies that target efficiency and decarbonized energy technologies. We also examine life-cycle exergy efficiency (ExF) and find that nearly all of the qualitative results hold regardless of whether we use ExF or EF.

  12. Enhanced analysis of real-time PCR data by using a variable efficiency model: FPK-PCR

    PubMed Central

    Lievens, Antoon; Van Aelst, S.; Van den Bulcke, M.; Goetghebeur, E.

    2012-01-01

    Current methodology in real-time Polymerase chain reaction (PCR) analysis performs well provided PCR efficiency remains constant over reactions. Yet, small changes in efficiency can lead to large quantification errors. Particularly in biological samples, the possible presence of inhibitors forms a challenge. We present a new approach to single reaction efficiency calculation, called Full Process Kinetics-PCR (FPK-PCR). It combines a kinetically more realistic model with flexible adaptation to the full range of data. By reconstructing the entire chain of cycle efficiencies, rather than restricting the focus on a ‘window of application’, one extracts additional information and loses a level of arbitrariness. The maximal efficiency estimates returned by the model are comparable in accuracy and precision to both the golden standard of serial dilution and other single reaction efficiency methods. The cycle-to-cycle changes in efficiency, as described by the FPK-PCR procedure, stay considerably closer to the data than those from other S-shaped models. The assessment of individual cycle efficiencies returns more information than other single efficiency methods. It allows in-depth interpretation of real-time PCR data and reconstruction of the fluorescence data, providing quality control. Finally, by implementing a global efficiency model, reproducibility is improved as the selection of a window of application is avoided. PMID:22102586
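
    The idea of reconstructing the entire chain of cycle efficiencies can be sketched on simulated, background-corrected fluorescence data, with efficiency taken here as the per-cycle fold change; this is a simplified illustration, not the FPK-PCR model itself.

    # Minimal sketch: reconstruct cycle-by-cycle amplification efficiency from
    # fluorescence readings F_c (efficiency = F_{c+1}/F_c, ~2 early, ~1 at plateau).
    import numpy as np

    rng = np.random.default_rng(0)
    n_cycles = 40
    x = 1e-6                                   # initial (relative) template amount
    fluor = np.empty(n_cycles)
    for c in range(n_cycles):
        eff = 1.0 + 0.95 / (1.0 + (x / 0.05) ** 2)   # ~1.95 early, ~1.0 at plateau
        x *= eff
        fluor[c] = x * (1.0 + 0.02 * rng.standard_normal())  # 2% multiplicative noise

    cycle_eff = fluor[1:] / fluor[:-1]          # reconstructed chain of efficiencies

    # A robust "maximal efficiency" estimate from clearly exponential cycles.
    exp_phase = cycle_eff[(fluor[:-1] > 1e-5) & (fluor[:-1] < 1e-2)]
    print("estimated maximal efficiency:", round(float(np.median(exp_phase)), 3))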

  13. Improving photometric redshift estimation using GPZ: size information, post processing, and improved photometry

    NASA Astrophysics Data System (ADS)

    Gomes, Zahra; Jarvis, Matt J.; Almosallam, Ibrahim A.; Roberts, Stephen J.

    2018-03-01

    The next generation of large-scale imaging surveys (such as those conducted with the Large Synoptic Survey Telescope and Euclid) will require accurate photometric redshifts in order to optimally extract cosmological information. Gaussian Process for photometric redshift estimation (GPZ) is a promising new method that has been proven to provide efficient, accurate photometric redshift estimations with reliable variance predictions. In this paper, we investigate a number of methods for improving the photometric redshift estimations obtained using GPZ (but which are also applicable to others). We use spectroscopy from the Galaxy and Mass Assembly Data Release 2 with a limiting magnitude of r < 19.4 along with corresponding Sloan Digital Sky Survey visible (ugriz) photometry and the UKIRT Infrared Deep Sky Survey Large Area Survey near-IR (YJHK) photometry. We evaluate the effects of adding near-IR magnitudes and angular size as features for the training, validation, and testing of GPZ and find that these improve the accuracy of the results by ˜15-20 per cent. In addition, we explore a post-processing method of shifting the probability distributions of the estimated redshifts based on their Quantile-Quantile plots and find that it improves the bias by ˜40 per cent. Finally, we investigate the effects of using more precise photometry obtained from the Hyper Suprime-Cam Subaru Strategic Program Data Release 1 and find that it produces significant improvements in accuracy, similar to the effect of including additional features.

  14. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both the parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to the parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
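
    A small Python illustration of the computational idea: for an inhomogeneous Poisson process, the continuous-time log-likelihood is the sum of log-intensities at the event times minus the integral of the intensity over the observation window, and that integral can be handled with a low-order Gauss-Legendre rule instead of a fine time discretization. The log-linear intensity, covariate, and simulated spikes are assumptions, not the paper's hippocampal model.

      # Continuous-time Poisson point-process MLE with Gauss-Legendre quadrature
      # (illustrative log-linear intensity; not the paper's neural model).
      import numpy as np
      from scipy.optimize import minimize

      T = 100.0                                       # observation window [0, T]
      cov = lambda t: np.sin(2 * np.pi * t / 25.0)    # assumed covariate

      def intensity(t, beta):
          return np.exp(beta[0] + beta[1] * cov(t))

      # Simulate spikes by thinning from a known ground truth.
      rng = np.random.default_rng(1)
      beta_true = np.array([-1.0, 0.8])
      lam_max = intensity(0, beta_true) * np.exp(2 * abs(beta_true[1]))   # crude upper bound
      cand = rng.uniform(0, T, rng.poisson(lam_max * T))
      spikes = cand[rng.uniform(0, lam_max, cand.size) < intensity(cand, beta_true)]

      q = 60                                          # quadrature order (cf. q = 60 in the abstract)
      nodes, weights = np.polynomial.legendre.leggauss(q)
      t_q = 0.5 * T * (nodes + 1.0)                   # map [-1, 1] -> [0, T]
      w_q = 0.5 * T * weights

      def neg_loglik(beta):
          integral = np.sum(w_q * intensity(t_q, beta))        # quadrature replaces n time bins
          return integral - np.sum(np.log(intensity(spikes, beta)))

      fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
      print("true beta:", beta_true, " estimate:", np.round(fit.x, 3))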

  15. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators, which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as the basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold; namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.

  16. Rate of convergence of k-step Newton estimators to efficient likelihood estimators

    Treesearch

    Steve Verrill

    2007-01-01

    We make use of Cramer conditions together with the well-known local quadratic convergence of Newton's method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
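
    As a concrete, purely illustrative instance of the k-step Newton idea, the Python sketch below starts from a root-n-consistent method-of-moments estimate of a gamma shape parameter and takes a fixed number of Newton steps on the profile log-likelihood; after one or two steps the estimate is already essentially the MLE. The gamma example is an assumption, not the setting studied in the report.

      # k-step Newton refinement of a method-of-moments estimate (gamma shape parameter).
      import numpy as np
      from scipy.special import digamma, polygamma

      rng = np.random.default_rng(2)
      x = rng.gamma(shape=3.0, scale=2.0, size=500)
      n, xbar, mlog = x.size, x.mean(), np.log(x).mean()

      # sqrt(n)-consistent starting value: method of moments.
      a = xbar**2 / x.var()

      for step in range(2):                       # k = 2 Newton steps
          score = n * (np.log(a) - digamma(a) - np.log(xbar) + mlog)   # profile score in the shape
          hess = n * (1.0 / a - polygamma(1, a))                        # derivative of the score
          a = a - score / hess
          print(f"step {step + 1}: shape estimate = {a:.4f}")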

  17. Efficient visual grasping alignment for cylinders

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.

  18. Efficient visual grasping alignment for cylinders

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.

  19. Real-time yield estimation based on deep learning

    NASA Astrophysics Data System (ADS)

    Rahnemoonfar, Maryam; Sheppard, Clay

    2017-05-01

    Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation, based on manual counting of fruits, is a very time-consuming and expensive process, and it is not practical for large fields. Robotic systems, including Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields, but efficient analysis of those data is still a challenging task. Computer vision approaches currently face difficult challenges in the automatic counting of fruits or flowers, including occlusion caused by leaves, branches, or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results in comparison to the state of the art show the effectiveness of our algorithm.

  20. Production mechanism of atomic nitrogen in atmospheric pressure pulsed corona discharge measured using two-photon absorption laser-induced fluorescence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teramoto, Yoshiyuki; Ono, Ryo; Oda, Tetsuji

    To study the production mechanism of atomic nitrogen, the temporal profile and spatial distribution of atomic nitrogen are measured in an atmospheric pressure pulsed positive corona discharge using two-photon absorption laser-induced fluorescence. The absolute atomic nitrogen density in the streamer filaments is estimated from the decay rate of atomic nitrogen in the N2 discharge. The results indicate that the absolute atomic nitrogen density is approximately constant with respect to discharge energy. When the discharge voltage is 21.5 kV, the production yield of atomic nitrogen produced by an N2 discharge pulse is estimated to be 2.9-9.8 × 10^13 atoms, and the energy efficiency of atomic nitrogen production is estimated to be about 1.8-6.1 × 10^16 atoms/J. The energy efficiency of atomic nitrogen production in the N2 discharge is constant with respect to the discharge energy, whereas that in the N2/O2 discharge increases with discharge energy. In the N2/O2 discharge, a two-step process of N2 dissociation plays a significant role in atomic nitrogen production.

  1. Trophic transfer efficiency of DDT to lake trout (Salvelinus namaycush) from their prey

    USGS Publications Warehouse

    Madenjian, C.P.; O'Connor, D.V.

    2004-01-01

    The objective of our study was to determine the efficiency with which lake trout retain DDT from their natural food. Our estimate of DDT assimilation efficiency would represent the most realistic estimate, to date, for use in risk assessment models.

  2. Augmented Topological Descriptors of Pore Networks for Material Science.

    PubMed

    Ushizima, D; Morozov, D; Weber, G H; Bianchi, A G C; Sethian, J A; Bethel, E W

    2012-12-01

    One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO2 in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity and permeability estimates, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating the permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction, making novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodispersive material.

  3. Remediating ethylbenzene-contaminated clayey soil by a surfactant-aided electrokinetic (SAEK) process.

    PubMed

    Yuan, Ching; Weng, Chih-Huang

    2004-10-01

    The objectives of this research are to investigate the remediation efficiency and electrokinetic behavior of ethylbenzene-contaminated clay treated by a surfactant-aided electrokinetic (SAEK) process under a potential gradient of 2 V cm^-1. Experimental results indicated that the type of processing fluid played a key role in determining the removal of ethylbenzene from clay in the SAEK process. A mixed surfactant system consisting of 0.5% SDS and 2.0% PANNOX 110 showed the best ethylbenzene removal performance in the SAEK system. The removal efficiency of ethylbenzene was 63-98% in the SAEK system, while only 40% was achieved in an electrokinetic system with tap water as the processing fluid. It was found that ethylbenzene accumulated in the vicinity of the anode in the electrokinetic system with tap water as the processing fluid, whereas the concentration front of ethylbenzene shifted toward the cathode in the SAEK system. The electroosmotic permeability and power consumption were 0.17 x 10^-6 to 3.01 x 10^-6 cm^2 V^-1 s^-1 and 52-123 kW h m^-3, respectively. The cost, including the expense of energy and surfactants, was estimated to be 5.15-12.65 USD m^-3 for the SAEK systems, which was 2.0-4.9 times greater than that of the electrokinetic system alone (2.6 USD m^-3). Nevertheless, taking both the remediation efficiency of ethylbenzene and the energy expenditure into account in the overall process performance evaluation, the SAEK system was still a cost-effective alternative treatment method.

  4. The charge transfer electronic coupling in diabatic perspective: A multi-state density functional theory study

    NASA Astrophysics Data System (ADS)

    Guo, Xinwei; Qu, Zexing; Gao, Jiali

    2018-01-01

    The multi-state density functional theory (MSDFT) provides a convenient way to estimate electronic coupling of charge transfer processes based on a diabatic representation. Its performance has been benchmarked against the HAB11 database with a mean unsigned error (MUE) of 17 meV between MSDFT and ab initio methods. The small difference may be attributed to different representations, diabatic from MSDFT and adiabatic from ab initio calculations. In this discussion, we conclude that MSDFT provides a general and efficient way to estimate the electronic coupling for charge-transfer rate calculations based on the Marcus-Hush model.

  5. Estimation of rate constant for VE excitation of the C2(D1Σ) state in He-CO-O2 discharge plasma

    NASA Astrophysics Data System (ADS)

    Grigorian, G.; Cenian, Adam

    2013-01-01

    The paper discusses experimental results pointing to an efficient channel of CO vibrational to C2 electronic energy transfer. The radiation spectra D1Σu - X1Σg, known as the Mulliken bands, are investigated, and the relation of their kinetics to the vibrational excitation of CO molecules in the He-CO-O2 plasma is discussed. The rate constant for the VE process (CO(v >= 25) + C2 → CO(v - 25) + C2(D1Σu)) is estimated as kVE ~ 10^-14 cm^3/s.

  6. Relative Navigation for Formation Flying of Spacecraft

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Du, Ju-Young; Hughes, Declan; Junkins, John L.; Crassidis, John L.

    2001-01-01

    This paper presents a robust and efficient approach for relative navigation and attitude estimation of spacecraft flying in formation. This approach uses measurements from a new optical sensor that provides a line-of-sight vector from the master spacecraft to the secondary satellite. The overall system provides a novel, reliable, and autonomous relative navigation and attitude determination system, employing relatively simple electronic circuits with modest digital signal processing requirements, and is fully independent of any external systems. Experimental calibration results are presented, which are used to achieve accurate line-of-sight measurements. State estimation for formation flying is achieved through an optimal observer design. Also, because the rotational and translational motions are coupled through the observation vectors, three approaches are suggested to separate the two signals for the purpose of stability analysis. Simulation and experimental results indicate that the combined sensor/estimator approach provides accurate relative position and attitude estimates.

  7. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.

  8. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for achieving coarse frequency estimation (locating the peak of the FFT amplitude spectrum). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
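
    A hedged Python sketch of the two-stage idea: a coarse frequency estimate is obtained cheaply from zero crossings of the record (standing in for an FFT peak search), and a fine estimate is obtained by maximizing the periodogram in a small neighborhood of the coarse value. The signal model and the refinement method are assumptions; the paper's modified zero-crossing technique is not reproduced here.

      # Two-stage frequency estimation: zero-crossing coarse estimate + local periodogram refinement.
      import numpy as np
      from scipy.optimize import minimize_scalar

      fs, n = 5000.0, 4096
      t = np.arange(n) / fs
      rng = np.random.default_rng(3)
      f_true = 123.4
      x = np.sin(2 * np.pi * f_true * t) + 0.3 * np.sin(2 * np.pi * 3 * f_true * t) \
          + 0.05 * rng.standard_normal(n)         # fundamental + harmonic + noise

      # Coarse stage: count sign changes of the zero-mean record.
      xc = x - x.mean()
      crossings = np.count_nonzero(np.diff(np.signbit(xc)))
      f_coarse = crossings / (2.0 * (n / fs))     # approximate cycles per second

      # Fine stage: maximize |DTFT| near the coarse estimate.
      def neg_power(f):
          phasor = np.exp(-2j * np.pi * f * t)
          return -np.abs(np.dot(x, phasor)) ** 2

      res = minimize_scalar(neg_power, bounds=(0.9 * f_coarse, 1.1 * f_coarse), method="bounded")
      print(f"coarse: {f_coarse:.2f} Hz, fine: {res.x:.4f} Hz, true: {f_true} Hz")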

  9. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.

  10. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    PubMed

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
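
    A small Monte Carlo sketch in the spirit of this comparison, written in Python with statsmodels: output is generated from a Cobb-Douglas frontier with symmetric noise and a half-normal inefficiency term, and a high quantile regression is fitted as a pseudo-frontier alongside OLS. The data-generating values, the 0.9 quantile, and the crude efficiency score are assumptions for illustration; this is not the authors' full DEA/SFA experiment.

      # Monte Carlo sketch: Cobb-Douglas frontier with inefficiency, OLS vs. quantile regression.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 500
      ln_x = rng.uniform(0.0, 3.0, n)                   # single input, log scale
      v = rng.normal(0.0, 0.1, n)                       # symmetric noise
      u = np.abs(rng.normal(0.0, 0.3, n))               # half-normal technical inefficiency
      b0, b1 = 1.0, 0.6                                 # assumed frontier parameters
      ln_y = b0 + b1 * ln_x + v - u                     # observed (log) output

      X = sm.add_constant(ln_x)
      ols = sm.OLS(ln_y, X).fit()
      qreg = sm.QuantReg(ln_y, X).fit(q=0.9)            # high quantile as a pseudo-frontier

      # Crude efficiency scores relative to the quantile frontier (illustrative only).
      frontier = qreg.predict(X)
      eff = np.exp(np.minimum(ln_y - frontier, 0.0))

      print("true slope:", b1)
      print("OLS slope:   %.3f" % ols.params[1])
      print("q=0.9 slope: %.3f" % qreg.params[1])
      print("mean efficiency score: %.3f" % eff.mean())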

  11. Response surface methodology for ozonation of trifluralin using advanced oxidation processes in an airlift photoreactor

    NASA Astrophysics Data System (ADS)

    Behin, J.; Farhadian, N.

    2017-10-01

    Degradation of trifluralin, as a wide used pesticide, was investigated by advance oxidation process comprising O3/UV/H2O2 in a concentric tube airlift photoreactor. Main and interactive effects of three independent factors including pH (5-9), superficial gas velocity (0.05-0.15 cm/s) and time (20-60 min) on the removal efficiency were assessed using central composite face-centered design and response surface method (RSM). The RSM allows to solve multivariable equations and to estimate simultaneously the relative importance of several contributing parameters even in the presence of complex interaction. Airlift photoreactor imposed a synergistic effect combining good mixing intensity merit with high ozone transfer rate. Mixing in the airlift photoreactor enhanced the UV light usage efficiency and its availability. Complete degradation of trifluralin was achieved under optimum conditions of pH 9 and superficial gas velocity 0.15 cm/s after 60 min of reaction time. Under these conditions, degradation of trifluralin was performed in a bubble column photoreactor of similar volume and a lower efficiency was observed.

  12. Nested sparse grid collocation method with delay and transformation for subsurface flow and transport problems

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-06-01

    In numerical modeling of subsurface flow and transport problems, formation properties may not be deterministically characterized, which leads to uncertainty in simulation results. In this study, we propose a sparse grid collocation method, which adopts nested quadrature rules with delay and transformation to quantify the uncertainty of model solutions. We show that the nested Kronrod-Patterson-Hermite quadrature is more efficient than the unnested Gauss-Hermite quadrature. We compare the convergence rates of various quadrature rules including the domain truncation and domain mapping approaches. To further improve accuracy and efficiency, we present a delayed process in selecting quadrature nodes and a transformed process for approximating unsmooth or discontinuous solutions. The proposed method is tested by an analytical function and in one-dimensional single-phase and two-phase flow problems with different spatial variances and correlation lengths. An additional example is given to demonstrate its applicability to three-dimensional black-oil models. It is found from these examples that the proposed method provides a promising approach for obtaining satisfactory estimation of the solution statistics and is much more efficient than the Monte-Carlo simulations.

  13. Online Denoising Based on the Second-Order Adaptive Statistics Model.

    PubMed

    Yi, Sheng-Lun; Jin, Xue-Bo; Su, Ting-Li; Tang, Zhen-Yun; Wang, Fa-Fa; Xiang, Na; Kong, Jian-Lei

    2017-07-20

    Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after they are collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed to process practical measurement data with colored noise; the characteristics of the colored noise are captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on the second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation is implemented via the Kalman filter in a recursive way, so the online purpose is attained. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. The results show that the proposed method not only deals with signals corrupted by colored noise but also achieves a tradeoff between efficiency and accuracy.
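
    A hedged Python sketch of the closed-loop idea: a scalar Kalman filter tracks the state while the colored-noise (AR) coefficient in the dynamic model is periodically re-estimated from recent filtered data with a lag-1 Yule-Walker relation. The AR(1) choice, window length, and noise levels are assumptions; the paper's second-order adaptive statistics model is not reproduced.

      # Online denoising sketch: scalar Kalman filter with an AR coefficient
      # re-estimated on the fly via a lag-1 Yule-Walker estimate.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 2000
      phi_true = 0.95
      signal = np.zeros(n)
      for k in range(1, n):                      # colored (AR(1)) process to be tracked
          signal[k] = phi_true * signal[k - 1] + 0.2 * rng.standard_normal()
      meas = signal + 0.5 * rng.standard_normal(n)

      phi, q, r = 0.5, 0.2**2, 0.5**2            # initial model; q and r assumed known here
      x_hat, p = 0.0, 1.0
      estimates, window = np.zeros(n), []

      for k in range(n):
          # Predict with the current adaptive AR parameter.
          x_pred, p_pred = phi * x_hat, phi * phi * p + q
          # Update with the new measurement.
          gain = p_pred / (p_pred + r)
          x_hat = x_pred + gain * (meas[k] - x_pred)
          p = (1.0 - gain) * p_pred
          estimates[k] = x_hat

          # Adapt phi every 100 samples from the filtered history (Yule-Walker, lag 1).
          window.append(x_hat)
          if k % 100 == 99:
              w = np.asarray(window[-500:]) - np.mean(window[-500:])
              phi = float(np.dot(w[1:], w[:-1]) / np.dot(w[:-1], w[:-1]))

      print(f"adapted AR coefficient: {phi:.3f} (true {phi_true})")
      print(f"rms error, raw vs. filtered: {np.std(meas - signal):.3f} vs. {np.std(estimates - signal):.3f}")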

  14. Cost studies for commercial fuselage crown designs

    NASA Technical Reports Server (NTRS)

    Walker, T. H.; Smith, P. J.; Truslove, G.; Willden, K. S.; Metschan, S. L.; Pfahl, C. L.

    1991-01-01

    Studies were conducted to evaluate the cost and weight potential of advanced composite design concepts in the crown region of a commercial transport. Two designs from each of three design families were developed using an integrated design-build team. A range of design concepts and manufacturing processes were included to allow isolation and comparison of cost centers. Detailed manufacturing/assembly plans were developed as the basis for cost estimates. Each of the six designs was found to have advantages over the 1995 aluminum benchmark in cost and weight trade studies. Large quadrant panels and cobonded frames were found to save significant assembly labor costs. Comparisons of high- and intermediate-performance fiber systems were made for skin and stringer applications. Advanced tow placement was found to be an efficient process for skin lay up. Further analysis revealed attractive processes for stringers and frames. Optimized designs were informally developed for each design family, combining the most attractive concepts and processes within that family. A single optimized design was selected as the most promising, and the potential for further optimization was estimated. Technical issues and barriers were identified.

  15. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    USGS Publications Warehouse

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
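
    A rough Python illustration of the two estimation ideas on simulated Pareto pool sizes: a rank-size (tail) regression whose slope gives the shape parameter, and a log-geometric estimate based on the ratios of counts in adjacent geometric size bins. The bin ratio of 2, the number of top fields used, and the regression details are simplifying assumptions and do not reproduce the authors' exact procedures.

      # Two illustrative Pareto shape estimators: rank-size regression and log-geometric bin ratios.
      import numpy as np

      rng = np.random.default_rng(6)
      alpha_true, x_min = 1.2, 1.0
      sizes = x_min * (1.0 - rng.uniform(size=400)) ** (-1.0 / alpha_true)   # Pareto sample

      # (1) Tail / rank-size regression: log(size) vs. log(rank) for the largest fields.
      top = np.sort(sizes)[::-1][:150]
      ranks = np.arange(1, top.size + 1)
      slope = np.polyfit(np.log(ranks), np.log(top), 1)[0]
      alpha_rank = -1.0 / slope

      # (2) Log-geometric: bin in size classes of ratio 2 and use adjacent count ratios.
      edges = x_min * 2.0 ** np.arange(0, 12)
      counts, _ = np.histogram(sizes, bins=edges)
      valid = (counts[1:] > 0) & (counts[:-1] > 0)
      ratios = counts[:-1][valid] / counts[1:][valid]      # for a Pareto tail, roughly 2**alpha
      alpha_geo = np.log2(ratios).mean()

      print(f"true alpha: {alpha_true}, rank-size: {alpha_rank:.2f}, log-geometric: {alpha_geo:.2f}")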

  16. Regularity of a renewal process estimated from binary data.

    PubMed

    Rice, John D; Strawderman, Robert L; Johnson, Brent A

    2017-10-09

    Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.

  17. Seasonal copepod lipid pump promotes carbon sequestration in the deep North Atlantic

    PubMed Central

    Jónasdóttir, Sigrún Huld; Visser, André W.; Richardson, Katherine; Heath, Michael R.

    2015-01-01

    Estimates of carbon flux to the deep oceans are essential for our understanding of global carbon budgets. Sinking of detrital material (“biological pump”) is usually thought to be the main biological component of this flux. Here, we identify an additional biological mechanism, the seasonal “lipid pump,” which is highly efficient at sequestering carbon into the deep ocean. It involves the vertical transport and metabolism of carbon rich lipids by overwintering zooplankton. We show that one species, the copepod Calanus finmarchicus overwintering in the North Atlantic, sequesters an amount of carbon equivalent to the sinking flux of detrital material. The efficiency of the lipid pump derives from a near-complete decoupling between nutrient and carbon cycling—a “lipid shunt,” and its direct transport of carbon through the mesopelagic zone to below the permanent thermocline with very little attenuation. Inclusion of the lipid pump almost doubles the previous estimates of deep-ocean carbon sequestration by biological processes in the North Atlantic. PMID:26338976

  18. Efficient Stochastic Rendering of Static and Animated Volumes Using Visibility Sweeps.

    PubMed

    von Radziewsky, Philipp; Kroes, Thomas; Eisemann, Martin; Eisemann, Elmar

    2017-09-01

    Stochastically solving the rendering integral (particularly visibility) is the de facto standard for physically based light transport, but it is computationally expensive, especially when displaying heterogeneous volumetric data. In this work, we present efficient techniques to speed up the rendering process via a novel visibility-estimation method in concert with unbiased importance sampling (involving environmental lighting and visibility inside the volume), filtering, and update techniques for both static and animated scenes. Our major contributions include a progressive estimate of partial occlusions based on a fast sweeping-plane algorithm. These occlusions are stored in an octahedral representation, which can be conveniently transformed into a quadtree-based hierarchy suited for joint importance sampling. Further, we propose sweep-space filtering, which suppresses the occurrence of fireflies, and we investigate different update schemes for animated scenes. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, animated volumes, and changing environmental lighting.

  19. Energy use in the New Zealand food system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patterson, M.G.; Earle, M.D.

    1985-03-01

    The study covered the total energy requirements of the production, processing, wholesale distribution, retailing, shopping, and household sectors of the food system in New Zealand. This included the direct energy requirements and the indirect energy requirements of supplying materials, buildings, and equipment. Data were collected from a wide range of literature sources and converted into the forms required for this research project. Data were also collected in supplementary sample surveys of the wholesale distribution, retailing, and shopping sectors. The details of these supplementary surveys are outlined in detailed survey reports fully referenced in the text. From these base data, the total energy requirements per unit product (MJ/kg) were estimated for a wide range of food chain steps. Some clear alternatives in terms of energy efficiency emerged from a comparison of these estimates. For example, it was found to be most energy efficient to use dehydrated vegetables, followed by fresh vegetables, freeze-dried vegetables, canned vegetables, and finally frozen vegetables.

  20. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The INVQR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.

  1. Using multi-level remote sensing and ground data to estimate forest biomass resources in remote regions: a case study in the boreal forests of interior Alaska

    Treesearch

    Hans-Erik Andersen; Strunk Jacob; Hailemariam Temesgen; Donald Atwood; Ken Winterberger

    2012-01-01

    The emergence of a new generation of remote sensing and geopositioning technologies, as well as increased capabilities in image processing, computing, and inferential techniques, have enabled the development and implementation of increasingly efficient and cost-effective multilevel sampling designs for forest inventory. In this paper, we (i) describe the conceptual...

  2. Efficient QR sequential least square algorithm for high frequency GNSS precise point positioning seismic application

    NASA Astrophysics Data System (ADS)

    Barbu, Alina L.; Laurent-Varin, Julien; Perosanz, Felix; Mercier, Flavien; Marty, Jean-Charles

    2018-01-01

    A more efficient filter had to be implemented in the GINS CNES geodetic software to satisfy users who want to compute high-rate GNSS PPP solutions. We selected the SRI (square-root information) approach and a QR factorization technique, including an innovative algorithm that optimizes the matrix reduction step. A full description of this algorithm is given for future users. The new capabilities of the software have been tested using a set of 1 Hz data from the Japanese GEONET network, including the Mw 9.0 2011 Tohoku earthquake. The station coordinate solutions agreed at the sub-decimeter level with previous publications as well as with solutions we computed with the Natural Resources Canada software. An additional benefit of the implementation of the SRI filter is the capability to estimate high-rate tropospheric parameters as well. As the CPU time to estimate a 1 Hz kinematic solution from 1 h of data is now less than 1 min, we could produce series of coordinates for the full 1300 stations of the Japanese network. The corresponding movie shows the impressive co-seismic deformation as well as the wave propagation along the island. The processing was straightforward using a cluster of PCs, which illustrates the new potential of the GINS software for massive-network high-rate PPP processing.
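
    A compact Python sketch of square-root-information-style sequential least squares with QR updates: the triangular factor R and transformed right-hand side z are carried forward, each new measurement block is stacked beneath them, and a fresh QR factorization restores the triangular form. The toy measurement model below is an assumption and has nothing to do with GNSS observables; it only shows the update mechanics that keep high-rate processing cheap.

      # Sequential least squares via QR (square-root information style) on a toy linear model.
      import numpy as np

      rng = np.random.default_rng(7)
      n_par = 4
      x_true = rng.standard_normal(n_par)

      # Prior information: R x = z with a weak diagonal prior.
      R = 1e-3 * np.eye(n_par)
      z = np.zeros(n_par)

      def qr_update(R, z, H, y):
          """Stack the new measurement block under (R, z) and re-triangularize with QR."""
          stacked = np.hstack([np.vstack([R, H]), np.concatenate([z, y])[:, None]])
          Rz = np.linalg.qr(stacked, mode="r")
          return Rz[:R.shape[1], :R.shape[1]], Rz[:R.shape[1], -1]

      for epoch in range(50):                    # e.g. one block per 1 Hz epoch
          H = rng.standard_normal((3, n_par))    # assumed design matrix for this epoch
          y = H @ x_true + 0.01 * rng.standard_normal(3)
          R, z = qr_update(R, z, H, y)
          x_hat = np.linalg.solve(R, z)          # current estimate, available at every epoch

      print("true:    ", np.round(x_true, 3))
      print("estimate:", np.round(x_hat, 3))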

  3. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient fully Bayesian approach is developed for optimal sampling-well location design and source parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gained from indirect concentration measurements in identifying unknown source parameters such as the release time, strength, and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximate likelihood can then be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of the approach are demonstrated through numerical case studies. Compared with traditional optimal design, which is based on a Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. (Figure captions: contours of the expected information gain, whose maximum marks the optimal observation location; posterior marginal probability densities of the unknown parameters for the designed location compared with randomly chosen locations, showing that the unknown parameters are estimated better with the designed location.)

  4. The evolution of an impact-generated atmosphere

    NASA Technical Reports Server (NTRS)

    Lange, M. A.; Ahrens, T. J.

    1982-01-01

    The minimum impact velocities and pressures required to form a primary H2O atmosphere during planetary accretion from chondrite-like planetesimals are determined by means of shock wave and thermodynamic data for rock-forming and volatile-bearing minerals. Attenuation of the impact-induced shock pressure is modelled so that the amount of released water can be estimated as a function of projectile radius, impact velocity, weight fraction of target water, target porosity, and dehydration efficiency. The two primary processes considered are the impact release of water bound in hydrous minerals such as serpentine, and the subsequent reincorporation of free water by hydration of forsterite and enstatite. These processes are described in terms of model calculations for the accretion of the earth. It is concluded that the concept of dehydration efficiency is of dominant importance in determining the degree to which an accreting planet acquires an atmosphere during its formation.

  5. Real-time depth processing for embedded platforms

    NASA Astrophysics Data System (ADS)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e. LiDAR, Infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications which are often constrained by power consumption, obtaining accurate results in real-time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
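
    For readers unfamiliar with the core computation, the Python sketch below is a plain CPU/NumPy block-matching stereo kernel: for each candidate disparity it computes a box-filtered sum of absolute differences between the left image and the shifted right image and keeps the disparity with the lowest cost per pixel. It is only a functional reference; the paper's contribution is an in-stream FPGA implementation, which this sketch does not attempt to model.

      # Reference SAD block-matching stereo (CPU sketch of the algorithm implemented in FPGA).
      import numpy as np
      from scipy.ndimage import uniform_filter

      def block_matching_disparity(left, right, max_disp=64, block=9):
          """Winner-take-all disparity map from block sums of absolute differences."""
          h, w = left.shape
          cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
          for d in range(max_disp):
              if d == 0:
                  shifted = right
              else:
                  shifted = np.empty_like(right)
                  shifted[:, d:] = right[:, :w - d]
                  shifted[:, :d] = right[:, :1]          # replicate the border column
              sad = np.abs(left - shifted)
              cost[d] = uniform_filter(sad, size=block)  # mean SAD over the block
          return np.argmin(cost, axis=0).astype(np.float32)

      # Tiny synthetic example: random texture shifted by a known disparity.
      rng = np.random.default_rng(8)
      right = rng.random((120, 160)).astype(np.float32)
      true_disp = 12
      left = np.roll(right, true_disp, axis=1)           # left view = right shifted by 12 px
      disp = block_matching_disparity(left, right, max_disp=32)
      print("median estimated disparity:", np.median(disp[:, true_disp + 10:]))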

  6. The Stochastic Parcel Model: A deterministic parameterization of stochastically entraining convection

    DOE PAGES

    Romps, David M.

    2016-03-01

    Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.

  7. Super-resolution photon-efficient imaging by nanometric double-helix point spread function localization of emitters (SPINDLE)

    PubMed Central

    Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael

    2012-01-01

    Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521

  8. Commercial Discount Rate Estimation for Efficiency Standards Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujita, K. Sydny

    2016-04-13

    Underlying each of the Department of Energy's (DOE's) federal appliance and equipment standards are a set of complex analyses of the projected costs and benefits of regulation. Any new or amended standard must be designed to achieve significant additional energy conservation, provided that it is technologically feasible and economically justified (42 U.S.C. 6295(o)(2)(A)). A proposed standard is considered economically justified when its benefits exceed its burdens, as represented by the projected net present value of costs and benefits. DOE performs multiple analyses to evaluate the balance of costs and benefits of commercial appliance and equipment efficiency standards, at the national and individual building or business level, each framed to capture different nuances of the complex impact of standards on the commercial end-user population. The Life-Cycle Cost (LCC) analysis models the combined impact of appliance first cost and operating cost changes on a representative commercial building sample in order to identify the fraction of customers achieving LCC savings or incurring net cost at the considered efficiency levels. Thus, the choice of commercial discount rate value(s) used to calculate the present value of energy cost savings within the Life-Cycle Cost model implicitly plays a key role in estimating the economic impact of potential standard levels. This report is intended to provide a more in-depth discussion of the commercial discount rate estimation process than can be readily included in standard rulemaking Technical Support Documents (TSDs).

  9. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    NASA Astrophysics Data System (ADS)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers, with very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as the mathematical foundation for representing the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the candidate position that is most likely to lie outside the required tolerance zone, and this point is added to update the model iteratively. Simulations on both the nominal surface and the manufactured nano-structured surface have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency when measuring large-area structured surfaces.
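
    A minimal Python sketch of the selection rule described above, using scikit-learn's GP regressor on a 1-D toy surface: at each iteration the candidate whose posterior distribution gives the highest probability of lying outside an assumed tolerance zone is measured next and added to the model. The kernel choice, tolerance band, and toy surface are illustrative assumptions, not the paper's setup.

      # Gaussian-process-based adaptive sampling: measure next where the surface is most
      # likely to violate the tolerance zone (1-D toy illustration).
      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      def surface(x):                               # assumed "true" structured surface profile
          return 0.05 * np.sin(8 * x) + 0.02 * x

      tol = 0.04                                    # assumed tolerance zone: |z| <= tol
      candidates = np.linspace(0.0, 1.0, 400)

      # Coarse initial scan.
      idx = list(np.linspace(0, 399, 6).astype(int))
      gp = GaussianProcessRegressor(RBF(0.1) + WhiteKernel(1e-8), normalize_y=True)

      for it in range(15):
          X = candidates[idx][:, None]
          y = surface(candidates[idx])
          gp.fit(X, y)
          mean, std = gp.predict(candidates[:, None], return_std=True)
          std = np.maximum(std, 1e-9)
          # Probability that the surface lies outside [-tol, +tol] at each candidate.
          p_out = norm.cdf((-tol - mean) / std) + 1.0 - norm.cdf((tol - mean) / std)
          p_out[idx] = -1.0                         # do not revisit measured locations
          idx.append(int(np.argmax(p_out)))         # "measure" the most suspicious point next

      print(f"sampled {len(idx)} points; largest remaining out-of-tolerance probability: "
            f"{p_out.max():.3f}")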

  10. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

    Due to aggressive environmental factors, variation of the dynamic load, degradation of material properties, and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model that can effectively handle time-variant uncertainties with limited information. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is a Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be efficiently evaluated, and the variation range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples, including a spring-mass-damper system and a shell structure.
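
    A simplified Python sketch of the MCM-CPE ingredient that makes the method cheap: the response of a single-degree-of-freedom system is evaluated at a handful of Chebyshev nodes of an interval parameter, a Chebyshev surrogate is fitted, and Monte Carlo samples of the surrogate bound the response range. A static interval (rather than a time-variant interval process) and the oscillator parameters are simplifying assumptions, and affine arithmetic is not included.

      # Chebyshev-surrogate Monte Carlo bounding of a dynamic response over an interval parameter.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      # Peak displacement of a damped SDOF oscillator with uncertain stiffness k in [k_lo, k_hi].
      m, c, f0 = 1.0, 0.4, 1.0
      k_lo, k_hi = 80.0, 120.0
      t = np.linspace(0, 10, 2000)

      def peak_response(k):
          wn = np.sqrt(k / m)
          zeta = c / (2.0 * np.sqrt(k * m))
          wd = wn * np.sqrt(1.0 - zeta**2)
          # Unit-step response of the SDOF system (standard closed form).
          x = (f0 / k) * (1.0 - np.exp(-zeta * wn * t)
                          * (np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t)))
          return x.max()

      # Fit a degree-6 Chebyshev surrogate at Chebyshev nodes of the interval.
      xi = C.chebpts1(7)                                  # nodes on [-1, 1]
      k_nodes = 0.5 * (k_hi + k_lo) + 0.5 * (k_hi - k_lo) * xi
      coef = C.chebfit(xi, [peak_response(k) for k in k_nodes], deg=6)

      # Monte Carlo on the cheap surrogate to estimate the response interval.
      u = np.random.default_rng(9).uniform(-1, 1, 100_000)
      samples = C.chebval(u, coef)
      exact = [peak_response(k) for k in np.linspace(k_lo, k_hi, 50)]
      print(f"surrogate bounds: [{samples.min():.5f}, {samples.max():.5f}]")
      print(f"reference bounds: [{min(exact):.5f}, {max(exact):.5f}]")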

  11. Micropollutant degradation, bacterial inactivation and regrowth risk in wastewater effluents: Influence of the secondary (pre)treatment on the efficiency of Advanced Oxidation Processes.

    PubMed

    Giannakis, Stefanos; Voumard, Margaux; Grandjean, Dominique; Magnet, Anoys; De Alencastro, Luiz Felippe; Pulgarin, César

    2016-10-01

    In this work, disinfection by five Advanced Oxidation Processes (AOPs) was preceded by three different secondary treatment systems present in the wastewater treatment plant of Vidy, Lausanne (Switzerland). The five AOPs, following two biological treatment methods (conventional activated sludge and a moving bed bioreactor) and a physicochemical process (coagulation-flocculation), were tested at laboratory scale. The dependence of AOP efficiency on the secondary (pre)treatment was estimated by following the bacterial concentration i) before secondary treatment, ii) after the different secondary treatment methods, and iii) after the various AOPs. Disinfection and post-treatment bacterial regrowth were the evaluation indicators. The order of efficiency was Moving Bed Bioreactor > Activated Sludge > Coagulation-Flocculation > Primary Treatment. As far as the different AOPs are concerned, the disinfection kinetics were: UVC/H2O2 > UVC and solar photo-Fenton > Fenton or solar light. The joint study of microorganisms and micropollutants in the effluents revealed that, for the UV-based processes, higher exposure times were necessary for complete micropollutant degradation than for microbial inactivation, whereas the opposite held for the Fenton-related processes. Nevertheless, in the Fenton-related systems, the nominal 80% removal of micropollutants required by Swiss legislation often took place before the elimination of the bacterial regrowth risk. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle

    PubMed Central

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
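
    A toy Python version of the triggering rule, with a linear Kalman filter standing in for the unscented filter in the paper: the filter propagates the covariance at every step, derives a DRMS value from the position block, and requests an external measurement only when that DRMS exceeds a threshold. The 1-D constant-velocity model, noise levels, and fixed threshold are assumptions.

      # Event-based state estimation: request a measurement only when the predicted
      # position DRMS exceeds a threshold (1-D constant-velocity toy model).
      import numpy as np

      dt, n_steps, drms_max = 0.1, 300, 0.5
      F = np.array([[1.0, dt], [0.0, 1.0]])
      Q = np.diag([1e-4, 1e-2])
      H = np.array([[1.0, 0.0]])
      R = np.array([[0.05]])

      rng = np.random.default_rng(10)
      x_true = np.array([0.0, 1.0])
      x_hat, P = np.zeros(2), np.eye(2)
      requests = 0

      for k in range(n_steps):
          # True vehicle motion with small process noise.
          x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)

          # Prediction step (always runs on board).
          x_hat = F @ x_hat
          P = F @ P @ F.T + Q

          # Trigger: position DRMS derived from the covariance matrix.
          drms = float(np.sqrt(P[0, 0]))
          if drms > drms_max:
              z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)   # external sensor reply
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
              P = (np.eye(2) - K @ H) @ P
              requests += 1

      print(f"measurements requested: {requests} of {n_steps} steps")
      print(f"final position error: {abs(x_hat[0] - x_true[0]):.3f}")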

  13. Comparison of empirical estimate of clinical pretest probability with the Wells score for diagnosis of deep vein thrombosis.

    PubMed

    Wang, Bo; Lin, Yin; Pan, Fu-shun; Yao, Chen; Zheng, Zi-Yu; Cai, Dan; Xu, Xiang-dong

    2013-01-01

    Wells score has been validated for estimation of pretest probability in patients with suspected deep vein thrombosis (DVT). In clinical practice, many clinicians prefer to use empirical estimation rather than Wells score. However, which method is better to increase the accuracy of clinical evaluation is not well understood. Our present study compared empirical estimation of pretest probability with the Wells score to investigate the efficiency of empirical estimation in the diagnostic process of DVT. Five hundred and fifty-five patients were enrolled in this study. One hundred and fifty patients were assigned to examine the interobserver agreement for Wells score between emergency and vascular clinicians. The other 405 patients were assigned to evaluate the pretest probability of DVT on the basis of the empirical estimation and Wells score, respectively, and plasma D-dimer levels were then determined in the low-risk patients. All patients underwent venous duplex scans and had a 45-day follow up. Weighted Cohen's κ value for interobserver agreement between emergency and vascular clinicians of the Wells score was 0.836. Compared with Wells score evaluation, empirical assessment increased the sensitivity, specificity, Youden's index, positive likelihood ratio, and positive and negative predictive values, but decreased negative likelihood ratio. In addition, the appropriate D-dimer cutoff value based on Wells score was 175 μg/l and 108 patients were excluded. Empirical assessment increased the appropriate D-dimer cutoff point to 225 μg/l and 162 patients were ruled out. Our findings indicated that empirical estimation not only improves D-dimer assay efficiency for exclusion of DVT but also increases clinical judgement accuracy in the diagnosis of DVT.

  14. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  15. Validation of abundance estimates from mark–recapture and removal techniques for rainbow trout captured by electrofishing in small streams

    USGS Publications Warehouse

    Rosenberger, Amanda E.; Dunham, Jason B.

    2005-01-01

    Estimation of fish abundance in streams using the removal model or the Lincoln–Petersen mark–recapture model is a common practice in fisheries. These models produce misleading results if their assumptions are violated. We evaluated the assumptions of these two models via electrofishing of rainbow trout Oncorhynchus mykiss in central Idaho streams. For one-, two-, three-, and four-pass sampling effort in closed sites, we evaluated the influences of fish size and habitat characteristics on sampling efficiency and the accuracy of removal abundance estimates. We also examined the use of models to generate unbiased estimates of fish abundance through adjustment of total catch or biased removal estimates. Our results suggested that the assumptions of the mark–recapture model were satisfied and that abundance estimates based on this approach were unbiased. In contrast, the removal model assumptions were not met. Decreasing sampling efficiencies over removal passes resulted in underestimated population sizes and overestimates of sampling efficiency. This bias decreased, but was not eliminated, with increased sampling effort. Biased removal estimates based on different levels of effort were highly correlated with each other but were less correlated with unbiased mark–recapture estimates. Stream size decreased sampling efficiency, and stream size and instream wood increased the negative bias of removal estimates. We found that reliable estimates of population abundance could be obtained from models of sampling efficiency for different levels of effort. Validation of abundance estimates requires extra attention to routine sampling considerations but can help fisheries biologists avoid pitfalls associated with biased data and facilitate standardized comparisons among studies that employ different sampling methods.
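
    The two estimators being validated have simple textbook forms, sketched below for a single closed site. The Chapman correction and the two-pass (Seber-Le Cren) removal formula are standard variants and the catch counts are hypothetical, so this is an illustration of the calculations rather than the study's exact models:

      def lincoln_petersen(marked_first_pass, caught_second_pass, recaptured):
          """Chapman's bias-corrected form of the Lincoln-Petersen estimator."""
          M, C, R = marked_first_pass, caught_second_pass, recaptured
          return (M + 1) * (C + 1) / (R + 1) - 1

      def two_pass_removal(c1, c2):
          """Seber-Le Cren two-pass removal estimator; requires declining catches
          (c1 > c2) and assumes equal capture probability on both passes."""
          if c1 <= c2:
              raise ValueError("removal estimator undefined when catches do not decline")
          p_capture = (c1 - c2) / c1
          n_hat = c1 ** 2 / (c1 - c2)
          return n_hat, p_capture

      # Hypothetical pass counts for one closed site:
      print(lincoln_petersen(marked_first_pass=60, caught_second_pass=55, recaptured=30))
      print(two_pass_removal(c1=70, c2=35))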

  16. A multi-objective framework to predict flows of ungauged rivers within regions of sparse hydrometeorologic observation

    NASA Astrophysics Data System (ADS)

    Alipour, M.; Kibler, K. M.

    2017-12-01

    Despite advances in flow prediction, managers of ungauged rivers located within broad regions of sparse hydrometeorologic observation still lack prescriptive methods robust to the data challenges of such regions. We propose a multi-objective streamflow prediction framework for regions of minimum observation to select models that balance runoff efficiency with choice of accurate parameter values. We supplement sparse observed data with uncertain or low-resolution information incorporated as `soft' a priori parameter estimates. The performance of the proposed framework is tested against traditional single-objective and constrained single-objective calibrations in two catchments in a remote area of southwestern China. We find that the multi-objective approach performs well with respect to runoff efficiency in both catchments (NSE = 0.74 and 0.72), within the range of efficiencies returned by other models (NSE = 0.67 - 0.78). However, soil moisture capacity estimated by the multi-objective model resonates with a priori estimates (parameter residuals of 61 cm versus 289 and 518 cm for maximum soil moisture capacity in one catchment, and 20 cm versus 246 and 475 cm in the other; parameter residuals of 0.48 versus 0.65 and 0.7 for soil moisture distribution shape factor in one catchment, and 0.91 versus 0.79 and 1.24 in the other). Thus, optimization to a multi-criteria objective function led to very different representations of soil moisture capacity as compared to models selected by single-objective calibration, without compromising runoff efficiency. These different soil moisture representations may translate into considerably different hydrological behaviors. The proposed approach thus offers a preliminary step towards greater process understanding in regions of severe data limitations. For instance, the multi-objective framework may be an adept tool to discern between models of similar efficiency to select models that provide the "right answers for the right reasons". Managers may feel more confident to utilize such models to predict flows in fully ungauged areas.
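
    The runoff-efficiency values quoted above are Nash-Sutcliffe efficiencies (NSE), which compare the squared error of the simulation against the variance of the observations. A minimal sketch of the metric, with made-up flow values, is given below:

      import numpy as np

      def nash_sutcliffe_efficiency(observed, simulated):
          """Nash-Sutcliffe efficiency (NSE): 1 is a perfect fit, 0 means the
          model is no better than the mean of the observations."""
          observed = np.asarray(observed, dtype=float)
          simulated = np.asarray(simulated, dtype=float)
          return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

      # Hypothetical daily flows (m^3/s) for illustration only:
      obs = np.array([12.0, 15.3, 30.1, 22.4, 18.0, 14.2])
      sim = np.array([11.5, 16.0, 27.8, 24.0, 17.1, 13.9])
      print(round(nash_sutcliffe_efficiency(obs, sim), 2))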

  17. The relationship between inadvertent ingestion and dermal exposure pathways: a new integrated conceptual model and a database of dermal and oral transfer efficiencies.

    PubMed

    Gorman Ng, Melanie; Semple, Sean; Cherrie, John W; Christopher, Yvette; Northage, Christine; Tielemans, Erik; Veroughstraete, Violaine; Van Tongeren, Martie

    2012-11-01

    Occupational inadvertent ingestion exposure is ingestion exposure due to contact between the mouth and contaminated hands or objects. Although individuals are typically oblivious to their exposure by this route, it is a potentially significant source of occupational exposure for some substances. Due to the continual flux of saliva through the oral cavity and the non-specificity of biological monitoring to routes of exposure, direct measurement of exposure by the inadvertent ingestion route is challenging; predictive models may be required to assess exposure. The work described in this manuscript has been carried out as part of a project to develop a predictive model for estimating inadvertent ingestion exposure in the workplace. As inadvertent ingestion exposure mainly arises from hand-to-mouth contact, it is closely linked to dermal exposure. We present a new integrated conceptual model for dermal and inadvertent ingestion exposure that should help to increase our understanding of ingestion exposure and our ability to simultaneously estimate exposure by the dermal and ingestion routes. The conceptual model consists of eight compartments (source, air, surface contaminant layer, outer clothing contaminant layer, inner clothing contaminant layer, hands and arms layer, perioral layer, and oral cavity) and nine mass transport processes (emission, deposition, resuspension or evaporation, transfer, removal, redistribution, decontamination, penetration and/or permeation, and swallowing) that describe event-based movement of substances between compartments. This conceptual model is intended to guide the development of predictive exposure models that estimate exposure from both the dermal and the inadvertent ingestion pathways. For exposure by these pathways, the efficiencies of transfer of materials between compartments (for example, from surfaces to hands or from hands to the mouth) are important determinants of exposure. A database of transfer efficiency data relevant for dermal and inadvertent ingestion exposure was developed, containing 534 empirically measured transfer efficiencies measured between 1980 and 2010 and reported in the peer-reviewed and grey literature. The majority of the reported transfer efficiencies (84%) relate to transfer between surfaces and hands, but the database also includes efficiencies for other transfer scenarios, including surface-to-glove, hand-to-mouth, and skin-to-skin. While the conceptual model can provide a framework for a predictive exposure assessment model, the database provides detailed information on transfer efficiencies between the various compartments. Together, the conceptual model and the database provide a basis for the development of a quantitative tool to estimate inadvertent ingestion exposure in the workplace.

  18. A testing-coverage software reliability model considering fault removal efficiency and error generation.

    PubMed

    Li, Qiuying; Pham, Hoang

    2017-01-01

    In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, and most of them share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes over time; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced in the meantime, which is referred to as the imperfect debugging phenomenon. In this study, a model that incorporates the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model the fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data and five criteria. The results show that the model gives better fitting and predictive performance.
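
    The proposed model extends the basic NHPP framework, in which the expected cumulative number of detected faults follows a mean value function m(t). The sketch below fits only the classical Goel-Okumoto form m(t) = a(1 - exp(-b t)) to hypothetical failure-count data as a baseline illustration of NHPP SRGM fitting; it does not implement the paper's coverage- and removal-efficiency-dependent model:

      import numpy as np
      from scipy.optimize import curve_fit

      def goel_okumoto(t, a, b):
          """Mean value function m(t) = a*(1 - exp(-b*t)) of the basic NHPP SRGM;
          the paper's model generalizes this with testing coverage, fault removal
          efficiency and error generation terms."""
          return a * (1.0 - np.exp(-b * t))

      # Hypothetical cumulative fault counts per week of testing:
      weeks = np.arange(1, 11, dtype=float)
      faults = np.array([12, 22, 30, 36, 41, 45, 48, 50, 52, 53], dtype=float)

      (a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, faults, p0=[60.0, 0.2])
      print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")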

  19. Using the entire history in the analysis of nested case cohort samples.

    PubMed

    Rivera, C L; Lumley, T

    2016-08-15

    Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios including a binary time-dependent variable, a continuous time-dependent variable, and the case including interactions in each. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency compared to case-cohort designs. Pseudolikelihood with calibrated weights yielded more efficient estimators than uncalibrated pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort sampling for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.

  1. Investigation of signal processing algorithms for an embedded microcontroller-based wearable pulse oximeter.

    PubMed

    Johnston, W S; Mendelson, Y

    2006-01-01

    Despite steady progress in the miniaturization of pulse oximeters over the years, significant challenges remain, since advanced signal processing must be implemented efficiently in real time by a relatively small wearable device. The goal of this study was to investigate several potential digital signal processing algorithms for computing arterial oxygen saturation (SpO(2)) and heart rate (HR) in a battery-operated wearable reflectance pulse oximeter that is being developed in our laboratory for use by medics and first responders in the field. We found that a differential measurement approach, combined with a low-pass filter (LPF), yielded the most suitable signal processing technique for estimating SpO(2), while a signal derivative approach produced the most accurate HR measurements.
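
    A common way to realize the two approaches mentioned above is the ratio-of-ratios computation for SpO2 on low-pass-filtered red and infrared channels, and peak detection on the derivative of the photoplethysmogram for HR. The sketch below is generic: the sampling rate, filter settings and the linear SpO2 calibration (110 - 25R) are illustrative assumptions, not the calibration of the device described in the study.

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      FS = 100.0  # sampling rate in Hz (assumed); inputs are a few seconds of samples

      def lowpass(x, cutoff_hz=5.0, order=4):
          b, a = butter(order, cutoff_hz / (FS / 2.0), btype="low")
          return filtfilt(b, a, x)

      def spo2_ratio_of_ratios(red, infrared):
          """Classic ratio-of-ratios estimate over one analysis window; the
          110 - 25*R line is a generic textbook calibration, not the device's."""
          red_f, ir_f = lowpass(red), lowpass(infrared)
          r = (np.ptp(red_f) / np.mean(red_f)) / (np.ptp(ir_f) / np.mean(ir_f))
          return 110.0 - 25.0 * r

      def heart_rate_from_derivative(ppg):
          """Heart rate from peaks of the first derivative of the filtered PPG."""
          d = np.diff(lowpass(ppg))
          peaks, _ = find_peaks(d, distance=int(0.4 * FS))  # at most ~150 bpm
          return 60.0 * FS / np.mean(np.diff(peaks))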

  2. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289
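
    The key design quantity described above is the phase-two selection probability as a function of the auxiliary variable and the measurement cost. A Neyman-type allocation, with selection probabilities proportional to the conditional standard deviation of Y given W divided by the square root of the per-unit cost, captures the general principle; the sketch below uses this simplified rule with hypothetical strata and is not the paper's exact optimality formula:

      import numpy as np

      def phase_two_probabilities(cond_sd, cost, counts, expected_n):
          """Neyman-type allocation sketch: phase-two sampling probabilities
          proportional to SD(Y | W = w) / sqrt(cost(w)), rescaled so that the
          expected phase-two sample size equals expected_n (clipping at 1 can
          perturb this slightly). Illustrative only."""
          raw = np.asarray(cond_sd, float) / np.sqrt(np.asarray(cost, float))
          scale = expected_n / np.sum(np.asarray(counts, float) * raw)
          return np.clip(raw * scale, 0.0, 1.0)

      # Hypothetical strata of the auxiliary variable W:
      sd_y_given_w = np.array([0.5, 1.0, 2.0])   # conditional SD of Y within stratum
      unit_cost    = np.array([1.0, 1.0, 4.0])   # cost of measuring Y in stratum
      n_phase_one  = np.array([400, 400, 200])   # phase-one counts per stratum
      probs = phase_two_probabilities(sd_y_given_w, unit_cost, n_phase_one, expected_n=150)
      print("selection probabilities:", probs.round(3))
      print("expected phase-two n per stratum:", (probs * n_phase_one).round(1))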

  4. Cost-Benefit of Improving the Efficiency of Room Air Conditioners (Inverter and Fixed Speed) in India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phadke, Amol; Shah, Nihar; Abhyankar, Nikit

    Improving efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerant, and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price compared to our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously considering the significant benefits to consumers, energy security, and the environment.
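
    The consumer payback calculation referred to above is a simple undiscounted ratio of the incremental purchase price to the annual electricity bill savings. The numbers in the sketch below are purely illustrative placeholders, not the report's inputs:

      def payback_years(incremental_price, annual_kwh_saved, tariff_per_kwh):
          """Simple (undiscounted) consumer payback: extra purchase price divided
          by the annual electricity bill savings."""
          return incremental_price / (annual_kwh_saved * tariff_per_kwh)

      # Purely illustrative numbers (not the report's inputs):
      print(round(payback_years(incremental_price=3000,   # currency units
                                annual_kwh_saved=350,     # kWh/year
                                tariff_per_kwh=8.0), 2), "years")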

  5. Improving primary health care facility performance in Ghana: efficiency analysis and fiscal space implications.

    PubMed

    Novignon, Jacob; Nonvignon, Justice

    2017-06-12

    Health centers in Ghana play an important role in health care delivery, especially in deprived communities. They usually serve as the first line of service and meet basic health care needs. Unfortunately, these facilities are faced with inadequate resources. While health policy makers seek to increase the resources committed to primary healthcare, it is important to understand the nature of the inefficiencies that exist in these facilities. Therefore, the objectives of this study are threefold: (i) estimate efficiency among primary health facilities (health centers), (ii) examine the potential fiscal space from improved efficiency and (iii) investigate the efficiency disparities in public and private facilities. Data were from the 2015 Access Bottlenecks, Cost and Equity (ABCE) project conducted by the Institute for Health Metrics and Evaluation. Stochastic frontier analysis (SFA) was used to estimate the efficiency of health facilities. Efficiency scores were then used to compute potential savings from improved efficiency. Outpatient visits were used as the output, while the number of personnel, hospital beds, and expenditure on other capital items and administration were used as inputs. Disparities in efficiency between public and private facilities were estimated using the Nopo matching decomposition procedure. The average efficiency score across all health centers included in the sample was estimated to be 0.51. Also, average efficiency was estimated to be about 0.65 and 0.50 for private and public facilities, respectively. Significant disparities in efficiency were identified across the various administrative regions. With regard to potential fiscal space, we found that, on average, facilities could save about GH₵11,450.70 (US$7633.80) if efficiency was improved. We also found that fiscal space from efficiency gains varies across rural/urban as well as private/public facilities, if best practices are followed. The matching decomposition showed an efficiency gap of 0.29 between private and public facilities. There is a need for primary health facility managers to improve productivity via effective and efficient resource use. Efforts to improve efficiency should focus on training health workers and improving the facility environment alongside effective monitoring and evaluation exercises.

  6. Monte Carlo simulation of prompt γ-ray emission in proton therapy using a specific track length estimator

    NASA Astrophysics Data System (ADS)

    El Kanawati, W.; Létang, J. M.; Dauvergne, D.; Pinto, M.; Sarrut, D.; Testa, É.; Freud, N.

    2015-10-01

    A Monte Carlo (MC) variance reduction technique is developed for prompt-γ emission calculations in proton therapy. Prompt-γ rays emitted through nuclear fragmentation reactions and exiting the patient during proton therapy could play an important role in helping to monitor the treatment. However, the estimation of the number and the energy of prompt-γ emitted per primary proton with MC simulations is a slow process. In order to estimate the local distribution of prompt-γ emission in a volume of interest for a given proton beam of the treatment plan, a MC variance reduction technique based on a specific track length estimator (TLE) has been developed. First, an elemental database of prompt-γ emission spectra is established over the clinical energy range of incident protons for all elements in the composition of human tissues. This database of prompt-γ spectra is built offline with high statistics. Regarding the implementation of the prompt-γ TLE MC tally, each proton deposits along its track the expectation of the prompt-γ spectra from the database according to the proton kinetic energy and the local material composition. A detailed statistical study shows that the relative efficiency mainly depends on the geometrical distribution of the track length. Benchmarking of the proposed prompt-γ TLE MC technique with respect to an analogous MC technique is carried out. A large relative efficiency gain is reported, of the order of 10^5.

  7. Efficient statistical mapping of avian count data

    USGS Publications Warehouse

    Royle, J. Andrew; Wikle, C.K.

    2005-01-01

    We develop a spatial modeling framework for count data that is efficient to implement in high-dimensional prediction problems. We consider spectral parameterizations for the spatially varying mean of a Poisson model. The spectral parameterization of the spatial process is very computationally efficient, enabling effective estimation and prediction in large problems using Markov chain Monte Carlo techniques. We apply this model to creating avian relative abundance maps from North American Breeding Bird Survey (BBS) data. Variation in the ability of observers to count birds is modeled as spatially independent noise, resulting in over-dispersion relative to the Poisson assumption. This approach represents an improvement over existing approaches to spatial modeling of BBS data, which are either inefficient for continental-scale modeling and prediction or fail to accommodate important distributional features of count data, thus leading to inaccurate accounting of prediction uncertainty.

  8. Improved recovery demonstration for Williston Basin carbonates. Annual report, June 10, 1995--June 9, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrell, L.A.; Sippel, M.A.

    1996-09-01

    The purpose of this project is to demonstrate targeted infill and extension drilling opportunities, better determinations of oil-in-place, methods for improved completion efficiency and the suitability of waterflooding in Red River and Ratcliffe shallow-shelf carbonate reservoirs in the Williston Basin, Montana, North Dakota and South Dakota. Improved reservoir characterization utilizing three-dimensional and multi-component seismic is being investigated for identification of structural and stratigraphic reservoir compartments. These seismic characterization tools are integrated with geological and engineering studies. Improved completion efficiency is being tested with extended-reach jetting lance and other ultra-short-radius lateral technologies. Improved completion efficiency, additional wells at closer spacing, and better estimates of oil in place will result in additional oil recovery by primary and enhanced recovery processes.

  9. Solid State Lasers from an Efficiency Perspective

    NASA Technical Reports Server (NTRS)

    Barnes, Norman P.

    2007-01-01

    Solid state lasers have remained a vibrant area of research because several major innovations expanded their capability. These major innovations are presented with emphasis on laser efficiency. A product-of-efficiencies approach is developed and applied to describe laser performance. Efficiency factors are presented in closed form where practical, and energy transfer effects are included where needed. In turn, the efficiency factors are used to estimate threshold and slope efficiency, allowing a facile estimate of performance. Spectroscopic, thermal, and mechanical data are provided for common solid state laser materials.
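
    The product-of-efficiencies bookkeeping can be illustrated with a short calculation: the slope efficiency is approximated as the product of the individual efficiency factors, and output power then follows the usual linear input-output relation above threshold. The factor names and values below are assumed for illustration and are not the data tabulated in the report:

      import math

      # Illustrative efficiency factors (assumed values, not material data):
      factors = {
          "pump_transfer":  0.90,   # pump light delivered to the gain medium
          "absorption":     0.85,   # fraction of pump absorbed
          "quantum_defect": 0.76,   # ratio of laser to pump photon energy
          "quantum_yield":  0.95,   # upper-level population per absorbed photon
          "extraction":     0.80,   # stored energy extracted by the resonator
      }

      slope_efficiency = math.prod(factors.values())

      def output_power(pump_power, threshold_power):
          """Linear laser input-output model: P_out = eta_slope * (P_pump - P_th)."""
          return max(0.0, slope_efficiency * (pump_power - threshold_power))

      print(f"slope efficiency ~ {slope_efficiency:.2f}")
      print(f"output at 20 W pump, 4 W threshold: {output_power(20.0, 4.0):.1f} W")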

  10. Overall effect of carbon production and nutrient release in sludge holding tank on mainstream biological nutrient removal efficiency.

    PubMed

    Jabari, Pouria; Yuan, Qiuyan; Oleszkiewicz, Jan A

    2017-09-11

    The potential of hydrolysis/fermentation of activated sludge in a sludge holding tank (SHT) to produce additional carbon for the biological nutrient removal (BNR) process was investigated. The study was conducted in anaerobic batch tests using BNR sludge (from a full-scale Westside process) and a mixture of BNR sludge with conventional non-BNR activated sludge (to obtain a higher biodegradable particulate chemical oxygen demand (bpCOD) in the sludge). BioWin 4.1 was used to simulate the anaerobic batch test of the BNR sludge. Also, the overall effect of FCOD production and nutrient release on the BNR efficiency of the Westside process was estimated. The experimental results showed that the phosphorus uptake of the sludge increased during the hydrolysis/fermentation condition up to the point when poly-P was completely utilized; afterwards, it decreased significantly. The BioWin simulation could not predict the loss of aerobic phosphorus uptake after poly-P was depleted. The results showed that in the case of activated sludge with relatively higher bpCOD (originating from plants with a short sludge retention time or without primary sedimentation), a beneficial effect of the SHT on BNR performance is feasible. In order to increase the potential of the SHT to enhance BNR efficiency, a relatively low retention time and a high sludge load are recommended.

  11. Ownership and technical efficiency of hospitals: evidence from Ghana using data envelopment analysis.

    PubMed

    Jehu-Appiah, Caroline; Sekidde, Serufusa; Adjuik, Martin; Akazili, James; Almeida, Selassi D; Nyonator, Frank; Baltussen, Rob; Asbu, Eyob Zere; Kirigia, Joses Muthuri

    2014-04-08

    In order to measure and analyse the technical efficiency of district hospitals in Ghana, the specific objectives of this study were to (a) estimate the relative technical and scale efficiency of government, mission, private and quasi-government district hospitals in Ghana in 2005; (b) estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient; and (c) use Tobit regression analysis to estimate the impact of ownership on hospital efficiency. In the first stage, we used data envelopment analysis (DEA) to estimate the efficiency of 128 hospitals comprising 73 government hospitals, 42 mission hospitals, 7 quasi-government hospitals and 6 private hospitals. In the second stage, the estimated DEA efficiency scores were regressed against a hospital ownership variable using a Tobit model. This was a retrospective study. In our DEA analysis, using the variable returns to scale model, out of 128 district hospitals, 31 (24.0%) were 100% efficient, 25 (19.5%) were very close to being efficient with efficiency scores ranging from 70% to 99.9%, and 71 (56.2%) had efficiency scores below 50%. The lowest-performing hospitals had efficiency scores ranging from 21% to 30%. Quasi-government hospitals had the highest mean efficiency score (83.9%), followed by public hospitals (70.4%), mission hospitals (68.6%) and private hospitals (55.8%). However, public hospitals also had the lowest mean technical efficiency scores (27.4%), implying they have some of the most inefficient hospitals. Regarding regional performance, Northern Region hospitals had the highest mean efficiency score (83.0%) and Volta Region hospitals had the lowest mean score (43.0%). From our Tobit regression, we found that while quasi-government ownership is positively associated with hospital technical efficiency, private ownership negatively affects hospital efficiency. It would be prudent for policy-makers to examine the least efficient hospitals to correct widespread inefficiency. This would include reconsidering the number of hospitals and their distribution, improving efficiency and reducing duplication by closing or scaling down hospitals with efficiency scores below a certain threshold. For private hospitals with inefficiency related to large size, there is a need to break down such hospitals into manageable sizes.

  12. Ownership and technical efficiency of hospitals: evidence from Ghana using data envelopment analysis

    PubMed Central

    2014-01-01

    Background In order to measure and analyse the technical efficiency of district hospitals in Ghana, the specific objectives of this study were to (a) estimate the relative technical and scale efficiency of government, mission, private and quasi-government district hospitals in Ghana in 2005; (b) estimate the magnitudes of output increases and/or input reductions that would have been required to make relatively inefficient hospitals more efficient; and (c) use Tobit regression analysis to estimate the impact of ownership on hospital efficiency. Methods In the first stage, we used data envelopment analysis (DEA) to estimate the efficiency of 128 hospitals comprising 73 government hospitals, 42 mission hospitals, 7 quasi-government hospitals and 6 private hospitals. In the second stage, the estimated DEA efficiency scores were regressed against a hospital ownership variable using a Tobit model. This was a retrospective study. Results In our DEA analysis, using the variable returns to scale model, out of 128 district hospitals, 31 (24.0%) were 100% efficient, 25 (19.5%) were very close to being efficient with efficiency scores ranging from 70% to 99.9%, and 71 (56.2%) had efficiency scores below 50%. The lowest-performing hospitals had efficiency scores ranging from 21% to 30%. Quasi-government hospitals had the highest mean efficiency score (83.9%), followed by public hospitals (70.4%), mission hospitals (68.6%) and private hospitals (55.8%). However, public hospitals also had the lowest mean technical efficiency scores (27.4%), implying they have some of the most inefficient hospitals. Regarding regional performance, Northern Region hospitals had the highest mean efficiency score (83.0%) and Volta Region hospitals had the lowest mean score (43.0%). From our Tobit regression, we found that while quasi-government ownership is positively associated with hospital technical efficiency, private ownership negatively affects hospital efficiency. Conclusions It would be prudent for policy-makers to examine the least efficient hospitals to correct widespread inefficiency. This would include reconsidering the number of hospitals and their distribution, improving efficiency and reducing duplication by closing or scaling down hospitals with efficiency scores below a certain threshold. For private hospitals with inefficiency related to large size, there is a need to break down such hospitals into manageable sizes. PMID:24708886
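
    The DEA analysis described above is an input-oriented model under variable returns to scale (the BCC model), which can be solved as one small linear program per hospital. The sketch below is a generic implementation with a tiny hypothetical data set; the input and output choices are illustrative and do not reproduce the study's variables:

      import numpy as np
      from scipy.optimize import linprog

      def dea_bcc_input_oriented(X, Y):
          """Input-oriented DEA efficiency under variable returns to scale (BCC).
          X: inputs (m x n), Y: outputs (s x n), one column per hospital (DMU).
          Returns an efficiency score in (0, 1] for each DMU."""
          m, n = X.shape
          s, _ = Y.shape
          scores = []
          for o in range(n):
              c = np.r_[1.0, np.zeros(n)]                   # minimize theta
              A_in  = np.c_[-X[:, [o]], X]                  # X @ lam <= theta * x_o
              A_out = np.c_[np.zeros((s, 1)), -Y]           # Y @ lam >= y_o
              A_ub = np.vstack([A_in, A_out])
              b_ub = np.r_[np.zeros(m), -Y[:, o]]
              A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS: sum(lam) == 1
              res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                            bounds=[(None, None)] + [(0, None)] * n, method="highs")
              scores.append(res.x[0])
          return np.array(scores)

      # Tiny hypothetical example: 2 inputs (staff, beds), 1 output (outpatient visits).
      X = np.array([[20, 35, 50, 25], [30, 60, 80, 30]], dtype=float)
      Y = np.array([[800, 900, 1500, 1000]], dtype=float)
      print(dea_bcc_input_oriented(X, Y).round(3))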

  13. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  14. Exploration, Sampling, And Reconstruction of Free Energy Surfaces with Gaussian Process Regression.

    PubMed

    Mones, Letif; Bernstein, Noam; Csányi, Gábor

    2016-10-11

    Practical free energy reconstruction algorithms involve three separate tasks: biasing, measuring some observable, and finally reconstructing the free energy surface from those measurements. In more than one dimension, adaptive schemes make it possible to explore only relatively low lying regions of the landscape by progressively building up the bias toward the negative of the free energy surface so that free energy barriers are eliminated. Most schemes use the final bias as their best estimate of the free energy surface. We show that large gains in computational efficiency, as measured by the reduction of time to solution, can be obtained by separating the bias used for dynamics from the final free energy reconstruction itself. We find that biasing with metadynamics, measuring a free energy gradient estimator, and reconstructing using Gaussian process regression can give an order of magnitude reduction in computational cost.
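
    The reconstruction step can be illustrated with a one-dimensional sketch: noisy pointwise free-energy estimates along a collective variable are smoothed by Gaussian process regression, which also returns an uncertainty estimate. Note that the paper reconstructs from free energy gradient estimators; fitting values directly, as below with a toy double-well surface, is a simplification for illustration:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)

      # Hypothetical 1D collective variable and a toy double-well free energy:
      s_obs = rng.uniform(-2.0, 2.0, size=40)[:, None]
      f_true = lambda s: (s ** 2 - 1.0) ** 2
      f_obs = f_true(s_obs).ravel() + rng.normal(scale=0.15, size=len(s_obs))  # noisy estimates

      # GP regression reconstructs a smooth surface (with uncertainty) from the
      # noisy pointwise estimates; the paper does this from gradient data instead.
      kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.05)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(s_obs, f_obs)

      s_grid = np.linspace(-2.0, 2.0, 9)[:, None]
      mean, std = gpr.predict(s_grid, return_std=True)
      print(np.c_[s_grid.ravel(), mean.round(2), std.round(2)])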

  15. Investigation of signal models and methods for evaluating structures of processing telecommunication information exchange systems under acoustic noise conditions

    NASA Astrophysics Data System (ADS)

    Kropotov, Y. A.; Belov, A. A.; Proskuryakov, A. Y.; Kolpakov, A. A.

    2018-05-01

    The paper considers models and methods for estimating signals during the transmission of information messages in telecommunication systems for audio exchange. One-dimensional probability distribution functions that can be used to separate useful signals from acoustic noise interference are presented. An approach to the estimation of the correlation and spectral functions of the parameters of acoustic signals is proposed, based on a parametric representation of the acoustic signals and of the noise components. The paper suggests an approach to improving the efficiency of interference cancellation and to extracting the necessary information when processing signals from telecommunication systems. In this case, the suppression of acoustic noise is based on methods of adaptive filtering and adaptive compensation. The work also describes models of echo signals and the structure of subscriber devices in operational command telecommunication systems.
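
    The adaptive compensation idea can be illustrated with the textbook LMS noise canceller: a reference noise channel is filtered so that it matches the noise component of the primary (speech) channel, and the residual is the enhanced signal. The sketch below is a generic illustration with synthetic data, not the paper's algorithm or parameter choices:

      import numpy as np

      def lms_noise_canceller(primary, noise_ref, n_taps=16, mu=0.01):
          """Textbook LMS canceller: adapts an FIR filter so that the filtered
          noise reference matches the noise component of the primary channel;
          the output is the enhanced (cleaned) signal estimate."""
          w = np.zeros(n_taps)
          out = np.zeros(len(primary))
          for n in range(n_taps, len(primary)):
              x = noise_ref[n - n_taps:n][::-1]   # most recent reference samples
              y = w @ x                           # estimated noise in the primary
              e = primary[n] - y                  # error = cleaned signal sample
              w += 2 * mu * e * x                 # LMS weight update
              out[n] = e
          return out

      # Synthetic demo: a tone buried in correlated noise.
      rng = np.random.default_rng(1)
      n = 4000
      noise_ref = rng.normal(size=n)
      noise_in_primary = np.convolve(noise_ref, [0.6, 0.3, 0.1], mode="same")
      signal = np.sin(2 * np.pi * 0.01 * np.arange(n))
      cleaned = lms_noise_canceller(signal + noise_in_primary, noise_ref)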

  16. A Reconstruction Method for the Estimation of Temperatures of Multiple Sources Applied for Nanoparticle-Mediated Hyperthermia.

    PubMed

    Steinberg, Idan; Tamir, Gil; Gannot, Israel

    2018-03-16

    Solid malignant tumors are one of the leading causes of death worldwide. In many cases complete removal is not possible and alternative methods such as focused hyperthermia are used. Precise control of the hyperthermia process is imperative for the successful application of such treatment. To that end, this research presents a fast method that enables the estimation of deep tissue heat distribution by capturing and processing the transient temperature at the boundary based on a bio-heat transfer model. The theoretical model is rigorously developed and thoroughly validated by a series of experiments. A 10-fold improvement in resolution and visibility is demonstrated on tissue-mimicking phantoms. The inverse problem is demonstrated as well, with a successful application of the model for imaging deep-tissue embedded heat sources. This allows the physician to dynamically evaluate the hyperthermia treatment efficiency in real time.

  17. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.

  18. An energy- and resource-saving technology for utilizing the sludge from thermal power station water treatment facilities

    NASA Astrophysics Data System (ADS)

    Nikolaeva, L. A.; Khusaenova, A. Z.

    2014-05-01

    A method for utilizing production wastes is considered, and a process circuit arrangement is proposed for utilizing a mixture of activated silt and sludge from chemical water treatment by incinerating it with possible heat recovery. The sorption capacity of the products from combusting a mixture of activated silt and sludge with respect to gaseous emissions is experimentally determined. A periodic-duty adsorber charged with a fixed bed of sludge is calculated, and the heat-recovery boiler efficiency is estimated together with the technical-economic indicators of the proposed utilization process circuit arrangement.

  19. Consistent and efficient processing of ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements regardless of the ADCP used to collect the data.

  20. Analysis and Modeling of Ground Operations at Hub Airports

    NASA Technical Reports Server (NTRS)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.

  1. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated using a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability distribution function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and the failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is conducted to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results and enhanced inference precision.
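
    The building blocks of such a model are easy to simulate: a latent Wiener degradation path with drift and diffusion, observations corrupted by measurement error, and a failure time defined as the first hitting time of a threshold (whose mean, for a positive drift, is threshold/drift). The sketch below simulates this simplified setting with assumed parameter values; it omits the unit-to-unit variation and transformed time scales of the full model:

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate_paths(n_units, n_steps, dt, drift, sigma_b, sigma_eps):
          """Latent Wiener degradation X(t) = drift*t + sigma_b*B(t); observations
          add iid measurement error with SD sigma_eps."""
          increments = drift * dt + sigma_b * np.sqrt(dt) * rng.normal(size=(n_units, n_steps))
          latent = np.cumsum(increments, axis=1)
          observed = latent + sigma_eps * rng.normal(size=latent.shape)
          return latent, observed

      def first_hitting_times(latent, dt, threshold):
          """First time each latent path crosses the failure threshold."""
          hit = latent >= threshold
          idx = np.argmax(hit, axis=1).astype(float)
          idx[~hit.any(axis=1)] = np.nan            # censored paths
          return (idx + 1) * dt

      latent, observed = simulate_paths(n_units=500, n_steps=400, dt=0.5,
                                        drift=0.8, sigma_b=1.0, sigma_eps=0.5)
      fht = first_hitting_times(latent, dt=0.5, threshold=60.0)
      print("empirical mean FHT:", np.nanmean(fht).round(1),
            "| theoretical (threshold/drift):", 60.0 / 0.8)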

  2. Forecasting jobs in the supply chain for investments in residential energy efficiency retrofits in Florida

    NASA Astrophysics Data System (ADS)

    Fobair, Richard C., II

    This research presents a model for forecasting the numbers of jobs created in the energy efficiency retrofit (EER) supply chain resulting from an investment in upgrading residential buildings in Florida. This investigation examined material supply chains stretching from mining to project installation for three product types: insulation, windows/doors, and heating, ventilating, and air conditioning (HVAC) systems. Outputs from the model are provided for the project, sales, manufacturing, and mining level. The model utilizes reverse-estimation to forecast the numbers of jobs that result from an investment. Reverse-estimation is a process that deconstructs a total investment into its constituent parts. In this research, an investment is deconstructed into profit, overhead, and hard costs for each level of the supply chain and over multiple iterations of inter-industry exchanges. The model processes an investment amount, the type of work and method of contracting into a prediction of the number of jobs created. The deconstruction process utilizes data from the U.S. Economic Census. At each supply chain level, the cost of labor is reconfigured into full-time equivalent (FTE) jobs (i.e. equivalent to 40 hours per week for 52 weeks) utilizing loaded labor rates and a typical employee mix. The model is sensitive to adjustable variables, such as percentage of work performed per type of product, allocation of worker time per skill level, annual hours for FTE calculations, wage rate, and benefits. This research provides several new insights into job creation. First, it provides definitions that can be used for future research on jobs in supply chains related to energy efficiency. Second, it provides a methodology for future investigators to calculate jobs in a supply chain resulting from an investment in energy efficiency upgrades to a building. The methodology used in this research is unique because it examines gross employment at the sub-industry level for specific commodities. Most research on employment examines the net employment change (job creation less job destruction) at levels for regions, industries, and the aggregate economy. Third, it provides a forecast of the numbers of jobs for an investment in energy efficiency over the entire supply chain for the selected industries and the job factors for major levels of the supply chain.
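
    The FTE conversion at the core of the model is a one-line calculation: labor dollars divided by the product of a loaded hourly rate and annual full-time hours. The sketch below uses assumed illustrative values, not the study's cost data:

      def fte_jobs(labor_cost, loaded_hourly_rate, annual_hours=2080):
          """Full-time-equivalent jobs supported by a given labor spend, using a
          loaded hourly rate (wages plus benefits and overhead burden).
          2080 hours = 40 hours/week * 52 weeks, as described in the model."""
          return labor_cost / (loaded_hourly_rate * annual_hours)

      # Hypothetical split of a retrofit investment at the project-installation level:
      investment = 1_000_000.0
      labor_share = 0.35             # assumed fraction of hard costs going to labor
      print(round(fte_jobs(investment * labor_share, loaded_hourly_rate=45.0), 1), "FTE")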

  3. A simulation study of turbofan engine deterioration estimation using Kalman filtering techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Heather H.

    1991-01-01

    Deterioration of engine components may cause off-normal engine operation. The result is an unnecessary loss of performance, because the fixed control schedules are designed to accommodate a wide range of engine health. These fixed control schedules may not be optimal for a deteriorated engine. This problem may be solved by including a measure of deterioration in determining the control variables. These engine deterioration parameters usually cannot be measured directly but can be estimated. A Kalman filter design is presented for estimating two performance parameters that account for engine deterioration: the high and low pressure turbine delta efficiencies. The delta efficiency parameters model variations of the high and low pressure turbine efficiencies from nominal values. The filter has a design condition of Mach 0.90, 30,000 ft altitude, and 47 deg power level angle (PLA). It was evaluated using a nonlinear simulation of the F100 engine model derivative (EMD) engine, at the design Mach number and altitude, over a PLA range of 43 to 55 deg. It was found that known high pressure turbine delta efficiencies of -2.5 percent and low pressure turbine delta efficiencies of -1.0 percent can be estimated with an accuracy of plus or minus 0.25 percent efficiency with a Kalman filter. If both the high and low pressure turbines are deteriorated, delta efficiencies of -2.5 percent for both turbines can be estimated with the same accuracy.
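
    A minimal version of such an estimator treats the two delta efficiencies as a slowly drifting (random-walk) state observed through a linearized measurement matrix and applies the standard Kalman filter recursion. The sensitivity matrix and noise covariances below are assumed for illustration; the study's filter is built on the linearized F100 EMD engine model rather than these toy values:

      import numpy as np

      rng = np.random.default_rng(3)

      # State: [delta_eta_HPT, delta_eta_LPT] in percent, modeled as a random walk.
      H = np.array([[1.2, 0.3],     # illustrative sensitivities of two engine
                    [0.4, 1.5]])    # measurements to the delta efficiencies (assumed)
      Q = np.diag([1e-6, 1e-6])     # slow drift of deterioration
      R = np.diag([0.05, 0.05])     # measurement noise

      x_true = np.array([-2.5, -1.0])          # known deterioration in the simulation
      x_hat, P = np.zeros(2), np.eye(2)

      for _ in range(200):
          z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)   # sensed outputs
          # Predict (identity dynamics for a random-walk deterioration state).
          P = P + Q
          # Update.
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x_hat = x_hat + K @ (z - H @ x_hat)
          P = (np.eye(2) - K @ H) @ P

      print("estimated delta efficiencies (%):", x_hat.round(2))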

  4. Modeling qRT-PCR dynamics with application to cancer biomarker quantification.

    PubMed

    Chervoneva, Inna; Freydin, Boris; Hyslop, Terry; Waldman, Scott A

    2017-01-01

    Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is widely used for molecular diagnostics and evaluating prognosis in cancer. The utility of mRNA expression biomarkers relies heavily on the accuracy and precision of quantification, which is still challenging for low abundance transcripts. The critical step for quantification is accurate estimation of efficiency needed for computing a relative qRT-PCR expression. We propose a new approach to estimating qRT-PCR efficiency based on modeling dynamics of polymerase chain reaction amplification. In contrast, only models for fluorescence intensity as a function of polymerase chain reaction cycle have been used so far for quantification. The dynamics of qRT-PCR efficiency is modeled using an ordinary differential equation model, and the fitted ordinary differential equation model is used to obtain effective polymerase chain reaction efficiency estimates needed for efficiency-adjusted quantification. The proposed new qRT-PCR efficiency estimates were used to quantify GUCY2C (Guanylate Cyclase 2C) mRNA expression in the blood of colorectal cancer patients. Time to recurrence and GUCY2C expression ratios were analyzed in a joint model for survival and longitudinal outcomes. The joint model with GUCY2C quantified using the proposed polymerase chain reaction efficiency estimates provided clinically meaningful results for association between time to recurrence and longitudinal trends in GUCY2C expression.
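
    The idea of modeling efficiency dynamics with an ODE can be illustrated with a generic saturating amplification model; the per-cycle efficiency implied by the fitted curve is then the cycle-to-cycle fluorescence ratio minus one. The logistic form below is a stand-in for illustration only and is not the specific ODE model proposed in the paper:

      import numpy as np
      from scipy.integrate import solve_ivp

      def fluorescence_ode(c, f, r, k):
          """Generic saturating amplification ODE dF/dc = r*F*(1 - F/K); a
          stand-in illustration, not the paper's ODE model."""
          return r * f * (1.0 - f / k)

      sol = solve_ivp(fluorescence_ode, t_span=(0, 40), y0=[1e-3],
                      args=(0.65, 10.0), t_eval=np.arange(41), rtol=1e-8)
      fluor = sol.y[0]

      # Per-cycle amplification efficiency implied by the fitted curve:
      # E_c = F(c+1)/F(c) - 1, high in the exponential phase, decaying toward 0 at plateau.
      efficiency = fluor[1:] / fluor[:-1] - 1.0
      print("cycle-10 efficiency ~", round(efficiency[10], 3),
            "| cycle-35 efficiency ~", round(efficiency[35], 3))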

  5. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem that involves minimizing the misfit between simulated and observed data. Many gradient- and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high-dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on the forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history match solutions in approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.

  6. Development of numerical processing in children with typical and dyscalculic arithmetic skills—a longitudinal study

    PubMed Central

    Landerl, Karin

    2013-01-01

    Numerical processing has been demonstrated to be closely associated with arithmetic skills; however, our knowledge of the development of the relevant cognitive mechanisms is limited. The present longitudinal study investigated the developmental trajectories of numerical processing in 42 children with age-adequate arithmetic development and 41 children with dyscalculia over a 2-year period from the beginning of Grade 2, when children were 7;6 years old, to the beginning of Grade 4. A battery of numerical processing tasks (dot enumeration, non-symbolic and symbolic comparison of one- and two-digit numbers, physical comparison, number line estimation) was given five times during the study (beginning and middle of each school year). Efficiency of numerical processing was a very good indicator of development in numerical processing, while within-task effects remained largely constant and showed low long-term stability before the middle of Grade 3. Children with dyscalculia showed less efficient numerical processing, reflected in specifically prolonged response times. Importantly, they showed consistently larger slopes for dot enumeration in the subitizing range, an atypically large compatibility effect when processing two-digit numbers, and they were consistently less accurate in placing numbers on a number line. Thus, we were able to identify parameters that can be used in future research to characterize numerical processing in typical and dyscalculic development. These parameters can also be helpful for the identification of children who struggle in their numerical development. PMID:23898310

  7. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    PubMed

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian are obtained from the statistical information of the best individuals via a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with the classical PID and GA approaches.

  8. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    PubMed Central

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian are obtained from the statistical information of the best individuals via a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially on the higher-dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used for PID controller optimization of a PMSM and compared with the classical PID and GA approaches. PMID:24892059
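
    The core loop shared by Gaussian EDAs is short: sample a population from a Gaussian model, select the best individuals, refit the model to them, and keep the elite. The sketch below shows this generic loop on a sphere benchmark; it uses a plain sample mean/covariance refit rather than the fast learning rule of FEGEDA, so it illustrates the family of algorithms, not the proposed variant itself:

      import numpy as np

      rng = np.random.default_rng(4)

      def sphere(x):
          """Benchmark objective (minimization)."""
          return np.sum(x ** 2, axis=1)

      def gaussian_eda(dim=10, pop_size=100, elite_frac=0.3, iters=60):
          """Minimal Gaussian EDA with elitism: sample from a Gaussian model, fit
          mean/covariance to the best individuals, and carry the best-so-far forward."""
          mean, cov = np.zeros(dim), np.eye(dim) * 4.0
          best_x, best_f = None, np.inf
          for _ in range(iters):
              pop = rng.multivariate_normal(mean, cov, size=pop_size)
              if best_x is not None:                 # elitism: reinsert best-so-far
                  pop[0] = best_x
              f = sphere(pop)
              order = np.argsort(f)
              if f[order[0]] < best_f:
                  best_f, best_x = f[order[0]], pop[order[0]].copy()
              elites = pop[order[: int(elite_frac * pop_size)]]
              mean = elites.mean(axis=0)             # refit the probability model
              cov = np.cov(elites, rowvar=False) + 1e-6 * np.eye(dim)
          return best_x, best_f

      x_star, f_star = gaussian_eda()
      print("best objective value:", round(f_star, 6))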

  9. An open-loop system design for deep space signal processing applications

    NASA Astrophysics Data System (ADS)

    Tang, Jifei; Xia, Lanhua; Mahapatra, Rabi

    2018-06-01

    A novel open-loop system design with high performance is proposed for space positioning and navigation signal processing. Divided by function, the system has four modules: a bandwidth-selectable data recorder, a narrowband signal analyzer, a time-delay difference of arrival estimator and an ANFIS supplement processor. A hardware-software co-design approach is taken to accelerate computing capability and improve system efficiency. Embedded with the proposed signal processing algorithms, the designed system is capable of handling tasks with high accuracy over long periods of continuous measurement. The experimental results show that the Doppler frequency tracking root mean square error during 3 h of observation is 0.0128 Hz, while the TDOA residual from the correlation power spectrum analysis is 0.1166 rad.

  10. Improving the thermal efficiency of a jaggery production module using a fire-tube heat exchanger.

    PubMed

    La Madrid, Raul; Orbegoso, Elder Mendoza; Saavedra, Rafael; Marcelo, Daniel

    2017-12-15

    Jaggery is a product obtained after heating and evaporation processes have been applied to sugar cane juice via the addition of thermal energy, followed by a crystallisation process through mechanical agitation. At present, jaggery production uses furnaces and pans that are designed empirically through trial-and-error procedures, which results in operation at low thermal efficiency. To rectify these deficiencies, this study proposes the use of fire-tube pans to increase heat transfer from the flue gases to the sugar cane juice. With the aim of increasing the thermal efficiency of a jaggery installation, a computational fluid dynamics (CFD)-based model was used as a numerical tool to design a fire-tube pan to replace the existing finned flat pan. For this purpose, the original configuration of the jaggery furnace was simulated via a pre-validated CFD model in order to calculate its current thermal performance. Then, the newly designed fire-tube pan was virtually substituted into the jaggery furnace with the aim of numerically estimating the thermal performance under the same operating conditions. A comparison of both simulations showed an increase of around 105% in the heat transfer rate during the heating/evaporation processes when the fire-tube pan replaced the original finned flat pan. This enhancement impacted the jaggery production installation, whose thermal efficiency increased from 31.4% to 42.8%. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Time delay estimation using new spectral and adaptive filtering methods with applications to underwater target detection

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammed A.

    1997-11-01

    In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from acoustic backscattered data using the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas of signal processing. Solving the time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets; accurate estimation of these time delays therefore helps determine the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data and is thus computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft-constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data sets for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigenstructure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived from these subspaces. The effectiveness of the method is demonstrated on the problem of detecting multiple specular components in acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates of the delays associated with the specular components. Simulation results on simulated and real shallow-water data are provided, which show the promise of this new scheme for target detection in a heavily cluttered environment.
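
    As context for the time-delay estimation problem described above, the sketch below shows the basic cross-correlation estimator of a single delay between an incident and a received waveform; it is only a baseline illustration, not the BFTF, MUSIC or wavelet-domain methods of the dissertation, and the sampling rate and signals are synthetic.

    ```python
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    def estimate_delay(received, incident, fs):
        """Estimate the delay (in seconds) of `incident` within `received`
        by locating the peak of their cross-correlation."""
        xcorr = correlate(received, incident, mode="full")
        lags = correlation_lags(len(received), len(incident), mode="full")
        return lags[np.argmax(np.abs(xcorr))] / fs

    # Synthetic example: an echo delayed by 120 samples plus noise.
    rng = np.random.default_rng(1)
    fs = 10_000.0
    incident = rng.standard_normal(256)
    received = np.zeros(2048)
    received[120:120 + 256] += incident
    received += 0.1 * rng.standard_normal(received.size)
    print(estimate_delay(received, incident, fs))   # ~0.012 s
    ```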

  12. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
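
    A toy version of the harmonic-grouping and peak-picking stage described above is sketched below using an ordinary FFT spectrum and harmonically weighted summation; the RTFI front end, thresholds and post-processing of the paper are not reproduced, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def pitch_candidates(signal, fs, fmin=50.0, fmax=1000.0, n_harm=5):
        """Toy pitch-candidate detector: sum harmonically weighted spectral
        magnitude on a grid of fundamentals, then pick peaks."""
        spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        f0_grid = np.arange(fmin, fmax, 1.0)
        weights = 1.0 / np.arange(1, n_harm + 1)      # de-emphasise octave errors
        salience = np.zeros_like(f0_grid)
        for i, f0 in enumerate(f0_grid):
            idx = np.searchsorted(freqs, f0 * np.arange(1, n_harm + 1))
            valid = idx < len(spec)
            salience[i] = np.sum(spec[idx[valid]] * weights[valid])
        peaks, _ = find_peaks(salience, height=0.5 * salience.max())
        return f0_grid[peaks]

    # A single 220 Hz note with a few decaying harmonics.
    fs = 8000
    t = np.arange(0, 1.0, 1.0 / fs)
    x = (np.sin(2 * np.pi * 220 * t) + 0.6 * np.sin(2 * np.pi * 440 * t)
         + 0.4 * np.sin(2 * np.pi * 660 * t))
    print(pitch_candidates(x, fs))   # expected: [220.]
    ```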

  13. Hypergolic oxidizer and fuel scrubber emissions

    NASA Technical Reports Server (NTRS)

    Parrish, Clyde F.; Barile, Ronald G.; Curran, Dan; Hodge, Tim; Lueck, Dale E.; Young, Rebecca C.

    1995-01-01

    Hypergolic fuels and oxidizer are emitted to the environment during fueling and deservicing of the shuttle and other spacecraft. Such emissions are difficult to measure due to the intermittent purge flow and to the presence of suspended scrubber liquor. A new method for emissions monitoring was introduced in a previous paper. This paper summarizes the results of a one-year study of shuttle launch pads and orbiter processing facilities (OPFs) which proved that emissions can be determined from field scrubbers without direct measurement of vent flow rate and hypergol concentration. This new approach is based on the scrubber efficiency, which was measured during normal operations, and on the accumulated weight of hypergol captured in the scrubber liquor, which is part of the routine monitoring data for scrubber liquors. To validate this concept, three qualification tests were performed, logs were prepared for each of 16 hypergol scrubbers at KSC, the efficiencies of KSC scrubbers were measured during normal operations, and an estimate of the annual emissions was made based on the efficiencies and the propellant buildup data. The results confirmed that the emissions from the KSC scrubbers can be monitored by measuring the buildup of hypergol propellant in the liquor and then using the appropriate efficiency to calculate the emissions. There was good agreement between the emissions calculated from outlet concentration and flow rate and those calculated from propellant buildup and efficiency. The efficiencies of 12 KSC scrubbers, measured under actual servicing operations and special test conditions, were assumed to be valid for all subsequent operations until a significant change in hardware occurred. An estimate of the total emissions from 16 scrubbers over three years showed that 0.3 kg/yr of fuel and 234 kg/yr of oxidizer were emitted.
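
    The buildup-and-efficiency bookkeeping described above amounts to simple mass-balance arithmetic: if the scrubber removal efficiency and the propellant mass captured in the liquor are known, the vented mass follows directly. A minimal sketch with made-up numbers (not the KSC measurements):

    ```python
    def emissions_from_buildup(captured_kg, efficiency):
        """Back-calculate vented mass from the mass captured in the scrubber
        liquor and the measured scrubber removal efficiency."""
        if not 0.0 < efficiency <= 1.0:
            raise ValueError("efficiency must be in (0, 1]")
        inlet_kg = captured_kg / efficiency   # total mass entering the scrubber
        return inlet_kg - captured_kg         # mass passing through to the vent

    # Hypothetical example: 500 kg of oxidizer captured at 92% efficiency.
    print(round(emissions_from_buildup(500.0, 0.92), 1))  # ~43.5 kg emitted
    ```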

  14. Efficiency in the Community College Sector: Stochastic Frontier Analysis

    ERIC Educational Resources Information Center

    Agasisti, Tommaso; Belfield, Clive

    2017-01-01

    This paper estimates technical efficiency scores across the community college sector in the United States. Using stochastic frontier analysis and data from the Integrated Postsecondary Education Data System for 2003-2010, we estimate efficiency scores for 950 community colleges and perform a series of sensitivity tests to check for robustness. We…

  15. Efficiency of thermoelectric conversion in ferroelectric film capacitive structures

    NASA Astrophysics Data System (ADS)

    Volpyas, V. A.; Kozyrev, A. B.; Soldatenkov, O. I.; Tepina, E. R.

    2012-06-01

    Thermal heating/cooling conditions for metal-insulator-metal structures based on barium strontium titanate ferroelectric films are studied by numerical methods with the aim of their application in capacitive thermoelectric converters. A correlation between the thermal and capacitive properties of thin-film ferroelectric capacitors is considered. The time of the temperature response and the rate of variation of the capacitive properties of the metal-insulator-metal structures are determined by analyzing the dynamics of thermal processes. Thermophysical calculations are carried out that take into consideration the real electrical properties of barium strontium titanate ferroelectric films and allow estimation of thermal modulation parameters and the efficiency of capacitive thermoelectric converters on their basis.

  16. Estimating occupancy and abundance using aerial images with imperfect detection

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.

    2017-01-01

    Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data on sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated the detection probability of sea otters to be 0.76, the same as for visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
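
    For orientation, the sketch below writes out the single-site binomial N-mixture likelihood that this kind of approach relies on, with a Poisson abundance prior; it is illustrative only (in practice many sites are pooled for identifiability), and the counts, truncation bound K and optimizer settings are assumptions rather than the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import poisson, binom
    from scipy.optimize import minimize

    def nmixture_nll(params, counts, K=200):
        """Negative log-likelihood of repeated counts at one site under an
        N-mixture model: N ~ Poisson(lam), y_t | N ~ Binomial(N, p)."""
        log_lam, logit_p = params
        lam, p = np.exp(log_lam), 1.0 / (1.0 + np.exp(-logit_p))
        N = np.arange(counts.max(), K + 1)                  # plausible abundances
        log_prior = poisson.logpmf(N, lam)
        log_obs = binom.logpmf(counts[:, None], N[None, :], p).sum(axis=0)
        return -np.logaddexp.reduce(log_prior + log_obs)    # marginalize over N

    # Repeated counts from, e.g., overlapping aerial images of one site.
    # With a single site the two parameters are weakly identified; real
    # analyses combine many sites.
    counts = np.array([23, 27, 25, 21])
    fit = minimize(nmixture_nll, x0=[np.log(30.0), 0.0], args=(counts,),
                   method="Nelder-Mead")
    lam_hat = np.exp(fit.x[0])
    p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
    print(lam_hat, p_hat)
    ```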

  17. A novel multireceiver communications system configuration based on optimal estimation theory

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1990-01-01

    A multireceiver configuration for the purpose of carrier arraying and/or signal arraying is presented. Such a problem arises, for example, in the NASA Deep Space Network, where the same data-modulated signal from a spacecraft is received by a number of geographically separated antennas and the data detection must be efficiently performed on the basis of the various received signals. The proposed configuration is arrived at by formulating the carrier and/or signal arraying problem as an optimal estimation problem. Two specific solutions are proposed. The first is to simultaneously and optimally estimate the various phase processes received at the different receivers with coupled phase-locked loops (PLLs), wherein the individual PLLs acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner. However, when the phase processes are relatively weakly correlated, and for relatively high values of the symbol energy-to-noise spectral density ratio, a novel configuration for combining the data-modulated, loop-output signals is proposed. The scheme can be extended to the low symbol energy-to-noise case by performing the combining/detection process over a multisymbol period. Such a configuration minimizes the effective radio loss at the combiner output, and thus maximizes the energy-per-bit to noise-power spectral density ratio.

  18. Trophic transfer efficiency of methylmercury and inorganic mercury to lake trout Salvelinus namaycush from its prey

    USGS Publications Warehouse

    Madenijian, C.P.; David, S.R.; Krabbenhoft, D.P.

    2012-01-01

    Based on a laboratory experiment, we estimated the net trophic transfer efficiency of methylmercury to lake trout Salvelinus namaycush from its prey to be equal to 76.6 %. Under the assumption that gross trophic transfer efficiency of methylmercury to lake trout from its prey was equal to 80 %, we estimated that the rate at which lake trout eliminated methylmercury was 0.000244 day−1. Our laboratory estimate of methylmercury elimination rate was 5.5 times lower than the value predicted by a published regression equation developed from estimates of methylmercury elimination rates for fish available from the literature. Thus, our results, in conjunction with other recent findings, suggested that methylmercury elimination rates for fish have been overestimated in previous studies. In addition, based on our laboratory experiment, we estimated that the net trophic transfer efficiency of inorganic mercury to lake trout from its prey was 63.5 %. The lower net trophic transfer efficiency for inorganic mercury compared with that for methylmercury was partly attributable to the greater elimination rate for inorganic mercury. We also found that the efficiency with which lake trout retained either methylmercury or inorganic mercury from their food did not appear to be significantly affected by the degree of their swimming activity.

  19. Performance and techno-economic assessment of several solid-liquid separation technologies for processing dilute-acid pretreated corn stover.

    PubMed

    Sievers, David A; Tao, Ling; Schell, Daniel J

    2014-09-01

    Solid-liquid separation of pretreated lignocellulosic biomass slurries is a critical unit operation employed in several different processes for production of fuels and chemicals. An effective separation process achieves good recovery of solute (sugars) and efficient dewatering of the biomass slurry. Dilute acid pretreated corn stover slurries were subjected to pressure and vacuum filtration and basket centrifugation to evaluate the technical and economic merits of these technologies. Experimental performance results were used to perform detailed process simulations and economic analysis using a 2000 tonne/day biorefinery model to determine differences between the various filtration methods and their process settings. The filtration processes were able to successfully separate pretreated slurries into liquor and solid fractions with estimated sugar recoveries of at least 95% using a cake washing process. A continuous vacuum belt filter produced the most favorable process economics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis

    PubMed Central

    Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.

    2015-01-01

    The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
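
    For comparison with the Bayesian estimator described above, the sketch below computes the conventional Harrell-type c-index restricted to informative pairs; it is a baseline illustration of what "informative pairs" means, not the paper's method, and the toy data are invented.

    ```python
    import numpy as np

    def harrell_c_index(time, event, marker):
        """Harrell's c-index: among informative pairs (the earlier time is an
        observed event), the fraction where the higher-risk marker value
        belongs to the subject who failed earlier. Ties count as 1/2."""
        time, event, marker = map(np.asarray, (time, event, marker))
        concordant = ties = n_pairs = 0
        n = len(time)
        for i in range(n):
            if not event[i]:
                continue                          # i must be an observed event
            for j in range(n):
                if j == i or time[j] <= time[i]:
                    continue                      # j must survive past t_i
                n_pairs += 1
                if marker[i] > marker[j]:
                    concordant += 1               # higher marker, earlier failure
                elif marker[i] == marker[j]:
                    ties += 1
        return (concordant + 0.5 * ties) / n_pairs

    # Toy data: higher marker values indicate higher risk.
    time = [5, 8, 12, 3, 9]
    event = [1, 0, 1, 1, 1]
    marker = [2.0, 1.1, 0.7, 2.5, 0.5]
    print(harrell_c_index(time, event, marker))   # 0.875 on this toy data
    ```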

  1. Energy-balanced algorithm for RFID estimation

    NASA Astrophysics Data System (ADS)

    Zhao, Jumin; Wang, Fangyuan; Li, Dengao; Yan, Lijuan

    2016-10-01

    RFID has been widely used in various commercial applications, ranging from inventory control and supply chain management to object tracking. It is often necessary to estimate the number of RFID tags deployed in a large area periodically and automatically. Most prior works use passive tags and focus on designing time-efficient algorithms that can estimate tens of thousands of tags in seconds. However, for an RFID reader to access tags in a large area, active tags are likely to be used due to their longer operational ranges, and these tags rely on their own batteries for energy. Hence, conserving energy for active tags becomes critical. Some prior works have studied how to reduce the energy expenditure of an RFID reader when it reads tag IDs. In this paper, we study how to reduce the amount of energy consumed by active tags during the process of estimating the number of tags in a system, and how to make the energy consumed by each tag approximately balanced. We design an energy-balanced estimation algorithm that achieves this goal.

  2. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  3. Cost-Benefit of Improving the Efficiency of Room Air Conditioners (Inverter and Fixed Speed) in India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Nihar; Abhyankar, Nikit; Park, Won Young

    Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerants and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. We assess several efficiency levels, two of which are summarized below in the report. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price compared to our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one star level) should be evaluated rigorously considering the significant benefits to consumers, energy security and the environment.
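
    The consumer payback calculation referred to above reduces to dividing the retail price premium by the annual bill savings; a minimal sketch with placeholder numbers (not the report's Indian market inputs):

    ```python
    def simple_payback_years(incremental_price, annual_kwh_saved, tariff_per_kwh):
        """Years needed for electricity bill savings to repay the extra
        purchase price of a more efficient air conditioner."""
        annual_savings = annual_kwh_saved * tariff_per_kwh
        return incremental_price / annual_savings

    # Hypothetical values: Rs 4,500 price premium, 550 kWh/yr saved, Rs 8/kWh.
    print(round(simple_payback_years(4500.0, 550.0, 8.0), 2))  # ~1.02 years
    ```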

  4. ADHD performance reflects inefficient but not impulsive information processing: a diffusion model analysis.

    PubMed

    Metin, Baris; Roeyers, Herbert; Wiersema, Jan R; van der Meere, Jaap J; Thompson, Margaret; Sonuga-Barke, Edmund

    2013-03-01

    Attention-deficit/hyperactivity disorder (ADHD) is associated with performance deficits across a broad range of tasks. Although individual tasks are designed to tap specific cognitive functions (e.g., memory, inhibition, planning, etc.), these deficits could also reflect general effects related to either inefficient or impulsive information processing or both. These two components cannot be isolated from each other on the basis of classical analysis in which mean reaction time (RT) and mean accuracy are handled separately. Seventy children with a diagnosis of combined type ADHD and 50 healthy controls (between 6 and 17 years) performed two tasks: a simple two-choice RT (2-CRT) task and a conflict control task (CCT) that required higher levels of executive control. RT and errors were analyzed using the Ratcliff diffusion model, which divides decisional time into separate estimates of information processing efficiency (called "drift rate") and speed-accuracy tradeoff (SATO, called "boundary"). The model also provides an estimate of general nondecisional time. Results were the same for both tasks independent of executive load. ADHD was associated with lower drift rate and less nondecisional time. The groups did not differ in terms of boundary parameter estimates. RT and accuracy performance in ADHD appears to reflect inefficient rather than impulsive information processing, an effect independent of executive function load. The results are consistent with models in which basic information processing deficits make an important contribution to the ADHD cognitive phenotype. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  5. Two essays on efficiency in the electric power industry: Measurement of technical and allocative efficiency

    NASA Astrophysics Data System (ADS)

    Gardiner, John Corby

    The electric power industry market structure has changed over the last twenty years since the passage of the Public Utility Regulatory Policies Act (PURPA). These changes include the entry by unregulated generator plants and, more recently, the deregulation of entry and price in the retail generation market. Such changes have introduced and expanded competitive forces on the incumbent electric power plants. Proponents of this deregulation argued that the enhanced competition would lead to a more efficient allocation of resources. Previous studies of power plant technical and allocative efficiency have failed to measure technical and allocative efficiency at the plant level. In contrast, this study uses panel data on 35 power plants over 59 years to estimate technical and allocative efficiency of each plant. By using a flexible functional form, which is not constrained by the assumption that regulation is constant over the 59 years sampled, the estimation procedure accounts for changes in both state and national regulatory/energy policies that may have occurred over the sample period. The empirical evidence presented shows that most of the power plants examined have operated more efficiently since the passage of PURPA and the resultant increase of competitive forces. Chapter 2 extends the model used in Chapter 1 and clarifies some issues in the efficiency literature by addressing the case where homogeneity does not hold. A more general model is developed for estimating both input and output inefficiency simultaneously. This approach reveals more information about firm inefficiency than the single estimation approach that has previously been used in the literature. Using the more general model, estimates are provided on the type of inefficiency that occurs as well as the cost of inefficiency by type of inefficiency. In previous studies, the ranking of firms by inefficiency has been difficult because of the cardinal and ordinal differences between different types of inefficiency estimates. However, using the general approach, this study illustrates that plants can be ranked by overall efficiency.

  6. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, i.e., the minimum norm quadratic unbiased estimation (MINQUE) method with 1 as all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses the parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects. Worked examples are given for the estimation of variance and covariance components and for the prediction of genetic merits.

  7. Modelling the cancer growth process by Stochastic Differential Equations with the effect of Chondroitin Sulfate (CS) as anticancer therapeutics

    NASA Astrophysics Data System (ADS)

    Syahidatul Ayuni Mazlan, Mazma; Rosli, Norhayati; Jauhari Arief Ichwan, Solachuddin; Suhaity Azmi, Nina

    2017-09-01

    A stochastic model is introduced to describe the growth of cancer affected by the anti-cancer therapeutic Chondroitin Sulfate (CS). The parameter values of the stochastic model are estimated via the maximum likelihood method. The Euler-Maruyama numerical method is employed to solve the model numerically. The efficiency of the stochastic model is measured by comparing the simulated results with the experimental data.
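
    The Euler-Maruyama scheme mentioned above can be sketched generically as below; the logistic drift with a treatment-rate term and all parameter values are illustrative assumptions, not the authors' CS model.

    ```python
    import numpy as np

    def euler_maruyama(drift, diffusion, x0, t_grid, rng):
        """Simulate dX = drift(X, t) dt + diffusion(X, t) dW on a time grid."""
        x = np.empty(len(t_grid))
        x[0] = x0
        for k in range(len(t_grid) - 1):
            dt = t_grid[k + 1] - t_grid[k]
            dw = rng.normal(0.0, np.sqrt(dt))
            x[k + 1] = (x[k] + drift(x[k], t_grid[k]) * dt
                        + diffusion(x[k], t_grid[k]) * dw)
        return x

    # Illustrative tumour-growth SDE: logistic growth damped by a treatment rate.
    r, K, delta, sigma = 0.3, 1.0e3, 0.05, 0.1
    drift = lambda x, t: r * x * (1 - x / K) - delta * x
    diffusion = lambda x, t: sigma * x
    t = np.linspace(0.0, 30.0, 3001)
    path = euler_maruyama(drift, diffusion, x0=50.0, t_grid=t,
                          rng=np.random.default_rng(42))
    print(path[-1])
    ```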

  8. Using Firn Air for Facility Cooling at the WAIS Divide Site

    DTIC Science & Technology

    2014-09-17

    reduce logistics costs at remote field camps where it is critical to maintain proper temperatures to preserve sensitive deep ice cores. We assessed the...feasibility of using firn air for cooling at the West Antarctic Ice Sheet (WAIS) Divide ice core drilling site as a means to adequately and...efficiently refrigerate ice cores during storage and processing. We used estimates of mean annual temperature, temperature variations, and firn

  9. Efficient Processing of Acoustic Signals for High Rate Information Transmission over Sparse Underwater Channels

    DTIC Science & Technology

    2016-09-02

    the fractionally-spaced channel estimators and the short feedforward equalizer filters. The receiver algorithm is applied to real data transmitted at 10...multichannel decision-feedback equalizer (DFE) [1]. This receiver consists of a bank of adaptive feedforward filters, one per array element, followed by a...decision-feedback filter. It has been implemented in the prototype high-rate acoustic modem developed at the Woods Hole Oceanographic Institution, and

  10. Estimating Velocities of Glaciers Using Sentinel-1 SAR Imagery

    NASA Astrophysics Data System (ADS)

    Gens, R.; Arnoult, K., Jr.; Friedl, P.; Vijay, S.; Braun, M.; Meyer, F. J.; Gracheva, V.; Hogenson, K.

    2017-12-01

    In an international collaborative effort, software has been developed to estimate the velocities of glaciers by using Sentinel-1 Synthetic Aperture Radar (SAR) imagery. The technique, initially designed by the University of Erlangen-Nuremberg (FAU), has been previously used to quantify spatial and temporal variabilities in the velocities of surging glaciers in the Pakistan Karakoram. The software estimates surface velocities by first co-registering image pairs to sub-pixel precision and then by estimating local offsets based on cross-correlation. The Alaska Satellite Facility (ASF) at the University of Alaska Fairbanks (UAF) has modified the software to make it more robust and also capable of migration into the Amazon Cloud. Additionally, ASF has implemented a prototype that offers the glacier tracking processing flow as a subscription service as part of its Hybrid Pluggable Processing Pipeline (HyP3). Since the software is co-located with ASF's cloud-based Sentinel-1 archive, processing of large data volumes is now more efficient and cost effective. Velocity maps are estimated for Single Look Complex (SLC) SAR image pairs and a digital elevation model (DEM) of the local topography. A time series of these velocity maps then allows the long-term monitoring of these glaciers. Due to the all-weather capabilities and the dense coverage of Sentinel-1 data, the results are complementary to optically generated ones. Together with the products from the Global Land Ice Velocity Extraction project (GoLIVE) derived from Landsat 8 data, glacier speeds can be monitored more comprehensively. Examples from Sentinel-1 SAR-derived results are presented along with optical results for the same glaciers.
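
    The offset-tracking step described above (co-register, then estimate local offsets by cross-correlation) can be illustrated with a minimal patch-matching sketch; it is not the FAU/ASF processing chain, and the pixel spacing and synthetic imagery are assumptions.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def patch_offset(ref_patch, search_win):
        """Return the (row, col) shift of `ref_patch` inside `search_win`
        found at the peak of their zero-mean cross-correlation surface."""
        ref = ref_patch - ref_patch.mean()
        win = search_win - search_win.mean()
        corr = fftconvolve(win, ref[::-1, ::-1], mode="valid")
        return np.unravel_index(np.argmax(corr), corr.shape)

    # Synthetic test: a textured patch shifted by (6, 3) pixels between scenes.
    rng = np.random.default_rng(0)
    scene = rng.standard_normal((128, 128))
    ref = scene[40:72, 40:72]
    search = scene[34:98, 37:101]        # patch now starts at (6, 3) in this window
    row, col = patch_offset(ref, search)
    displacement_m = np.hypot(row, col) * 10.0   # e.g., 10 m pixel spacing;
    # divide by the acquisition interval to obtain a velocity.
    print(row, col, displacement_m)
    ```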

  11. Building Energy Model Development for Retrofit Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chasar, David; McIlvaine, Janet; Blanchard, Jeremy

    2012-09-30

    Based on previous research conducted by Pacific Northwest National Laboratory and the Florida Solar Energy Center providing technical assistance to implement 22 deep energy retrofits across the nation, 6 homes were selected in Florida and Texas for detailed post-retrofit energy modeling to assess realized energy savings (Chandra et al, 2012). However, assessing realized savings can be difficult for some homes where pre-retrofit occupancy and energy performance are unknown. Initially, savings had been estimated using a HERS Index comparison for these homes; however, this does not account for confounding factors such as occupancy and weather. This research addresses a method to more reliably assess energy savings achieved in deep energy retrofits for which pre-retrofit utility bills or occupancy information are not available. A metered home, Riverdale, was selected as a test case for the development of a modeling procedure to account for occupancy and weather factors, potentially creating more accurate estimates of energy savings. This “true up” procedure was developed using Energy Gauge USA software and post-retrofit homeowner information and utility bills. The 12-step process adjusts the post-retrofit modeling results to correlate with post-retrofit utility bills and known occupancy information. The “trued” post-retrofit model is then used to estimate pre-retrofit energy consumption by changing the building efficiency characteristics to reflect the pre-retrofit condition, while keeping all weather and occupancy-related factors the same. This creates a pre-retrofit model that is more comparable to the post-retrofit energy use profile and can improve energy savings estimates. For this test case, a home for which pre- and post-retrofit utility bills were available was selected for comparison and assessment of the accuracy of the “true up” procedure. With the current method, this procedure is quite time-intensive; however, streamlined processing spreadsheets or incorporation into existing software tools would improve its efficiency. Retrofit activity appears to be gaining market share, and this would be a potentially valuable capability with relevance to marketing, program management, and retrofit success metrics.

  12. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966

  13. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  14. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.
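
    The three records above describe importance sampling layered on stochastic simulation. The sketch below shows only the generic weighted-SSA idea for a linear birth-death process: inflate one propensity when selecting reactions and carry the likelihood-ratio weight that corrects for it. The adaptive, look-ahead biasing that defines ABSIS is not implemented, and the rate constants and bias factor are illustrative.

    ```python
    import numpy as np

    def weighted_ssa_rare_event(n0=10, n_high=30, k_b=0.10, k_d=0.12,
                                bias=1.5, n_runs=10000, seed=0):
        """Estimate P(population hits n_high before extinction) for a linear
        birth-death process: the birth propensity is inflated by `bias` when
        choosing reactions, and each run carries the likelihood-ratio weight
        that corrects for the bias (generic weighted-SSA idea)."""
        rng = np.random.default_rng(seed)
        total_weight = 0.0
        for _ in range(n_runs):
            n, w = n0, 1.0
            while 0 < n < n_high:
                a_birth, a_death = k_b * n, k_d * n          # true propensities
                b_birth, b_death = bias * a_birth, a_death   # biased propensities
                a0, b0 = a_birth + a_death, b_birth + b_death
                if rng.random() < b_birth / b0:              # biased selection
                    n += 1
                    w *= (a_birth / a0) / (b_birth / b0)     # weight correction
                else:
                    n -= 1
                    w *= (a_death / a0) / (b_death / b0)
            if n == n_high:
                total_weight += w
        return total_weight / n_runs

    print(weighted_ssa_rare_event())   # gambler's-ruin value here is about 0.022
    ```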

  15. The Characterization of Atmospheric Boundary Layer Depth and Turbulence in a Mixed Rural and Urban Convective Environment

    NASA Astrophysics Data System (ADS)

    Hicks, Micheal M.

    A comprehensive analysis of surface-atmosphere flux exchanges over a mixed rural and urban convective environment is conducted at the Howard University Beltsville, MD, Research Campus. This heterogeneous site consists of rural, suburban and industrial surface covers to its south, east and west, within a 2 km radius of a flux sensor. The eddy covariance method is utilized to estimate surface-atmosphere flux exchanges of momentum, heat and moisture. The attributes of these surface flux exchanges are contrasted with those of classical homogeneous sites and assessed for accuracy, to evaluate the following: (I) their similarity to conventional convective boundary layer (CBL) processes and (II) their representativeness of the surrounding environment's turbulent properties. Both evaluations are performed as a function of upwind surface conditions. In particular, the flux estimates' obedience to spectral power laws and similarity-theory relationships is used for the first evaluation, and their ability to close the surface energy balance and accurately model CBL heights is used for the latter. An algorithm that estimates atmospheric boundary layer heights from observed lidar extinction backscatter was developed, tested and applied in this study. The derived lidar-based CBL heights compared well with those derived from balloon-borne soundings, with an overall Pearson correlation coefficient of 0.85 and a standard deviation of 223 m. This algorithm assisted in evaluating the response of CBL processes to surface heterogeneity by deriving CBL heights at high temporal resolution and using them as independent references for the surrounding area-averaged sensible heat fluxes. This study found that the heterogeneous site under evaluation was rougher than classical homogeneous sites, with slower dissipation rates of turbulent kinetic energy. Flux measurements downwind of the industrial complexes exhibited enhanced efficiency in surface-atmosphere momentum, heat, and moisture transport relative to their similarity-theory predictions. In addition, these enhanced heat flux estimates, when ingested into the CBL slab model, overestimated the observed CBL heights. More spatial flux observations are needed to better understand the role the industrial complexes play in enhancing the efficiency of turbulent processes, which may have important implications for the role humans are assuming in regional climate change.
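
    The eddy covariance estimates mentioned above reduce to the covariance of vertical wind and a scalar over an averaging period; a minimal sketch for sensible heat flux with synthetic data and illustrative constants:

    ```python
    import numpy as np

    RHO_AIR = 1.2      # air density, kg m^-3 (illustrative constant)
    CP_AIR = 1005.0    # specific heat of air, J kg^-1 K^-1

    def sensible_heat_flux(w, T):
        """Eddy-covariance sensible heat flux H = rho * cp * mean(w'T')
        over one averaging period, after removing the period means."""
        w_prime = w - w.mean()
        t_prime = T - T.mean()
        return RHO_AIR * CP_AIR * np.mean(w_prime * t_prime)

    # Synthetic 30-minute record at 10 Hz with correlated w and T fluctuations.
    rng = np.random.default_rng(7)
    n = 30 * 60 * 10
    common = rng.standard_normal(n)
    w = 0.3 * common + 0.1 * rng.standard_normal(n)          # m s^-1
    T = 300.0 + 0.5 * common + 0.2 * rng.standard_normal(n)  # K
    print(sensible_heat_flux(w, T))   # roughly rho*cp*0.15, about 180 W m^-2
    ```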

  16. [Criteria of estimation of state forensic-medical expert activity administration].

    PubMed

    Klevno, V A; Loban, I E

    2008-01-01

    Criteria for evaluating the administration of state forensic medical expert activity were systematized and their content was considered. A new integral index of administration efficiency, the index of technological efficiency, was developed and substantiated.

  17. Guidelines to indirectly measure and enhance detection efficiency of stationary PIT tag interrogation systems in streams

    USGS Publications Warehouse

    Connolly, Patrick J.; Wolf, Keith; O'Neal, Jennifer S.

    2010-01-01

    With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.

  18. Guidelines for calculating and enhancing detection efficiency of PIT tag interrogation systems

    USGS Publications Warehouse

    Connolly, Patrick J.

    2010-01-01

    With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.
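
    A simplified version of the idea behind such detection-efficiency estimators, consistent with conditioning on fish known to have passed an array because they were detected elsewhere, is sketched below; the two-array assumption and the tag IDs are hypothetical, and the Connolly et al. (2008) variance estimators are not reproduced.

    ```python
    def array_efficiency(detected_upstream, detected_downstream):
        """Estimate detection efficiency of the upstream array from PIT tag
        IDs, using fish detected downstream as the set known to have passed
        the upstream array (simplified two-array estimator)."""
        known_passed = set(detected_downstream)
        if not known_passed:
            raise ValueError("no fish known to have passed the upstream array")
        seen_at_both = known_passed & set(detected_upstream)
        return len(seen_at_both) / len(known_passed)

    # Hypothetical tag IDs recorded at two arrays in a stream.
    upstream = {"3D9.1A", "3D9.1B", "3D9.2C", "3D9.4F"}
    downstream = {"3D9.1A", "3D9.1B", "3D9.3E", "3D9.4F", "3D9.5A"}
    print(array_efficiency(upstream, downstream))   # 3 of 5 -> 0.6
    ```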

  19. Efficient estimation of Pareto model: Some modified percentile estimators.

    PubMed

    Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali

    2018-01-01

    The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. The modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations and varying sample sizes. The performance of the different estimators is assessed in terms of total mean square error and total relative deviation. The modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic is found to provide efficient and precise parameter estimates compared with the other estimators considered. The simulation results are further confirmed using two real-life examples in which maximum likelihood and moment estimators are also considered.
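
    For reference, the traditional percentile estimator that the modified versions are compared against can be sketched as below: match the Pareto CDF at two sample quantiles and solve for the shape and scale. The quantile levels and simulated data are illustrative, and the paper's modified estimators are not implemented here.

    ```python
    import numpy as np

    def pareto_percentile_fit(sample, p1=0.25, p2=0.75):
        """Classical percentile estimator for the Pareto(shape, scale)
        distribution: match the CDF 1 - (scale/x)**shape at two sample
        quantiles and solve for the two parameters."""
        q1, q2 = np.quantile(sample, [p1, p2])
        shape = np.log((1.0 - p1) / (1.0 - p2)) / np.log(q2 / q1)
        scale = q1 * (1.0 - p1) ** (1.0 / shape)
        return shape, scale

    # Check on data simulated from a Pareto with shape 3 and scale 2.
    rng = np.random.default_rng(3)
    true_shape, true_scale = 3.0, 2.0
    data = true_scale * (1.0 + rng.pareto(true_shape, size=5000))
    print(pareto_percentile_fit(data))   # roughly (3.0, 2.0)
    ```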

  20. Valuation Diagramming and Accounting of Transactive Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhmalbaf, Atefe; Hammerstrom, Donald J.; Huang, Qiuhua

    Transactive energy (TE) systems support both the economic and technical objectives of a power system, including efficiency and reliability. TE systems utilize value-driven mechanisms to coordinate and balance responsive supply and demand in the power system. The economic performance of TE systems cannot be assessed without estimating their value. Estimating the potential value of transactive energy systems requires a systematic valuation methodology that can capture value exchanges among different stakeholders (i.e., actors), estimate the impact of one TE design, and compare it against another. Such a methodology can help decision makers choose the alternative that results in preferred outcomes. This paper presents a valuation methodology developed to assess the value of TE systems. A TE use-case example is discussed, and metrics identified in the valuation process are quantified using a TE simulation program.
