Sample records for proposed cost function

  1. Doubly stochastic radial basis function methods

    NASA Astrophysics Data System (ADS)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution were determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O (n2) for function recovery problems with n basis. Numerical experiments confirm that the proposed method not only outperforms constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).

  2. Designing the Architecture of Hierachical Neural Networks Model Attention, Learning and Goal-Oriented Behavior

    DTIC Science & Technology

    1993-12-31

    19,23,25,26,27,28,32,33,35,41]) - A new cost function is postulated and an algorithm that employs this cost function is proposed for the learning of...updates the controller parameters from time to time [53]. The learning control algorithm consist of updating the parameter estimates as used in the...proposed cost function with the other learning type algorithms , such as based upon learning of iterative tasks [Kawamura-85], variable structure

  3. On a cost functional for H2/H(infinity) minimization

    NASA Technical Reports Server (NTRS)

    Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis

    1990-01-01

    A cost functional is proposed and investigated which is motivated by minimizing the energy in a structure using only collocated feedback. Defined for an H(infinity)-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency domain cost functional is shown to have a time domain representation in terms of a Stackelberg non-zero sum differential game.

  4. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illuminate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitiveness to abrupt load or input voltage parameter variations.

  5. Analyzing the Effect of Multi-fuel and Practical Constraints on Realistic Economic Load Dispatch using Novel Two-stage PSO

    NASA Astrophysics Data System (ADS)

    Chintalapudi, V. S.; Sirigiri, Sivanagaraju

    2017-04-01

    In power system restructuring, pricing the electrical power plays a vital role in cost allocation between suppliers and consumers. In optimal power dispatch problem, not only the cost of active power generation but also the costs of reactive power generated by the generators should be considered to increase the effectiveness of the problem. As the characteristics of reactive power cost curve are similar to that of active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering active and reactive power costs of generators. The formulated cost function is optimized by satisfying equality, in-equality and practical constraints using the proposed uniform distributed two-stage particle swarm optimization. The proposed algorithm is a combination of uniform distribution of control variables (to start the iterative process with good initial value) and two-stage initialization processes (to obtain best final value in less number of iterations) can enhance the effectiveness of convergence characteristics. Obtained results for the considered standard test functions and electrical systems indicate the effectiveness of the proposed algorithm and can obtain efficient solution when compared to existing methods. Hence, the proposed method is a promising method and can be easily applied to optimize the power system objectives.

  6. Practice expenses in the MFS (Medicare fee schedule): the service-class approach.

    PubMed

    Latimer, E A; Kane, N M

    1995-01-01

    The practice expense component of the Medicare fee schedule (MFS), which is currently based on historical charges and rewards physician procedures at the expense of cognitive services, is due to be changed by January 1, 1998. The Physician Payment Review Commission (PPRC) and others have proposed microcosting direct costs and allocating all indirect costs on a common basis, such as physician time or work plus direct costs. Without altering the treatment of direct costs, the service-class approach disaggregates indirect costs into six practice function costs. The practice function costs are then allocated to classes of services using cost-accounting and statistical methods. This approach would make the practice expense component more resource-based than other proposed alternatives.

  7. 2 CFR Appendix E to Part 225 - State and Local Indirect Cost Rate Proposals

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    .... 1. General. a. Where a governmental unit's department or agency has only one major function, or where all its major functions benefit from the indirect costs to approximately the same degree, the...'s department or agency has several major functions which benefit from its indirect costs in varying...

  8. 2 CFR Appendix E to Part 225 - State and Local Indirect Cost Rate Proposals

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    .... 1. General. a. Where a governmental unit's department or agency has only one major function, or where all its major functions benefit from the indirect costs to approximately the same degree, the...'s department or agency has several major functions which benefit from its indirect costs in varying...

  9. Low cost Ku-band earth terminals for voice/data/facsimile

    NASA Technical Reports Server (NTRS)

    Kelley, R. L.

    1977-01-01

    A Ku-band satellite earth terminal capable of providing two way voice/facsimile teleconferencing, 128 Kbps data, telephone, and high-speed imagery services is proposed. Optimized terminal cost and configuration are presented as a function of FDMA and TDMA approaches to multiple access. The entire terminal from the antenna to microphones, speakers and facsimile equipment is considered. Component cost versus performance has been projected as a function of size of the procurement and predicted hardware innovations and production techniques through 1985. The lowest cost combinations of components has been determined in a computer optimization algorithm. The system requirements including terminal EIRP and G/T, satellite size, power per spacecraft transponder, satellite antenna characteristics, and link propagation outage were selected using a computerized system cost/performance optimization algorithm. System cost and terminal cost and performance requirements are presented as a function of the size of a nationwide U.S. network. Service costs are compared with typical conference travel costs to show the viability of the proposed terminal.

  10. Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an Improved Evolutionary Programming (IEP) for solving the Optimal Power Flow (OPF) problem, which is considered as a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is regarded as an objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm with making use of various crossover techniques, normally applied in Real Coded Genetic Algorithm (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions; namely the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve superimposed by sine component. These three cost curves represent the generator fuel cost functions with a simplified model and more accurate models of a combined-cycle generating unit and a thermal unit with value-point loading effect respectively. The OPF solutions by the proposed method and Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP with better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in details.

  11. Low Cost Design of an Advanced Encryption Standard (AES) Processor Using a New Common-Subexpression-Elimination Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Chih; Hsiao, Shen-Fu

    In this paper, we propose an area-efficient design of Advanced Encryption Standard (AES) processor by applying a new common-expression-elimination (CSE) method to the sub-functions of various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method has significant area improvement compared with previous designs.

  12. Improving the quantum cost of reversible Boolean functions using reorder algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Taghreed; Younes, Ahmed; Elsayed, Ashraf

    2018-05-01

    This paper introduces a novel algorithm to synthesize a low-cost reversible circuits for any Boolean function with n inputs represented as a Positive Polarity Reed-Muller expansion. The proposed algorithm applies a predefined rules to reorder the terms in the function to minimize the multi-calculation of common parts of the Boolean function to decrease the quantum cost of the reversible circuit. The paper achieves a decrease in the quantum cost and/or the circuit length, on average, when compared with relevant work in the literature.

  13. 76 FR 18222 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-01

    ... proposed information collection for the proper performance of the agency's functions; (2) the accuracy of... currently approved collection; Title of Information Collection: Independent Renal Dialysis Facility Cost Report; Use: The Independent Renal Dialysis Facility Cost Report, is filed annually by providers...

  14. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  15. A flexible model for correlated medical costs, with application to medical expenditure panel survey data.

    PubMed

    Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A

    2016-03-15

    We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate unknown functions, and a modification to Akaike information criterion is proposed for selecting knots in spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.

  16. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    PubMed

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when the graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to the increase of data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that in the learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Last, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.

  17. Component Cost Reduction by Value Engineering: A Case Study

    NASA Astrophysics Data System (ADS)

    Kalluri, Vinayak; Kodali, Rambabu

    2017-04-01

    The concept value engineering (VE) acts to increase the value of a product through the improvement in existent functions without increasing their costs. In other words, VE is a function oriented, systematic team approach study to provide value in a product, system or service. The authors systematically explore VE through the six step framework proposed by SAVE and a case study is presented to address the concern of reduction in cost without compromising the function of a hydraulic steering cylinder through the aforementioned VE framework.

  18. Joint brain connectivity estimation from diffusion and functional MRI data

    NASA Astrophysics Data System (ADS)

    Chu, Shu-Hsien; Lenglet, Christophe; Parhi, Keshab K.

    2015-03-01

    Estimating brain wiring patterns is critical to better understand the brain organization and function. Anatomical brain connectivity models axonal pathways, while the functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of blood-oxygen-level dependent (BOLD) signal from functional MRI (fMRI) and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations. We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained for instance by independent component analysis (ICA) of fMRI data. The concept of information flow is introduced and used to model the propagation of information between GM areas through WM fiber bundles. The link capacity, i.e., ability to transfer information, is characterized by the relative strength of fiber bundles, e.g., fiber count gathered from the tractography of dMRI data. The node information demand is considered to be proportional to the correlation between neural activity at various cortical areas involved in a particular functional mode (e.g. visual, motor, etc.). These two properties lead to the link capacity and node demand constraints in the proposed model. Moreover, the information flow of a link cannot exceed the demand from either end node. This is captured by the feasibility constraints. Two different cost functions are considered in the optimization formulation in this paper. The first cost function, the reciprocal of fiber strength represents the unit cost for information passing through the link. In the second cost function, a min-max (minimizing the maximal link load) approach is used to balance the usage of each link. Optimizing the first cost function selects the pathway with strongest fiber strength for information propagation. In the second case, the optimization procedure finds all the possible propagation pathways and allocates the flow proportionally to their strength. Additionally, a penalty term is incorporated with both the cost functions to capture the possible missing and weak anatomical connections. 
With this set of constraints and the proposed cost functions, solving the network optimization problem recovers missing and weak anatomical connections supported by the functional information and provides the functional-associated anatomical subnetworks. Feasibility is demonstrated using realistic diffusion and functional MRI phantom data. It is shown that the proposed model recovers the maximum number of true connections, with fewest number of false connections when compared with the connectivity derived from a joint probabilistic model using the expectation-maximization (EM) algorithm presented in a prior work. We also apply the proposed method to data provided by the Human Connectome Project (HCP).

  19. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.

  20. 7 CFR 1709.117 - Application requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and the eligible extremely high energy cost communities to be served. (6) Project management. The... perform project management functions. If the applicant proposes to use the equipment or design.... Each application must include a narrative proposal describing the proposed project and addressing...

  1. 7 CFR 1709.117 - Application requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and the eligible extremely high energy cost communities to be served. (6) Project management. The... perform project management functions. If the applicant proposes to use the equipment or design.... Each application must include a narrative proposal describing the proposed project and addressing...

  2. Analysis of the proposed utilization of TDRS system by the HEAO-C satellite

    NASA Technical Reports Server (NTRS)

    Weathers, G.

    1974-01-01

    The primary function of the study was to assess the impact upon the HEAO telecommunications system of the proposed relay satellite-to-ground-link configuration. The system is designed to perform the function of most of the NASA ground tracking and communications network at a net cost savings for NASA.

  3. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorite results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to the previous work.

  4. Mentoring: A Typology of Costs for Higher Education Faculty

    ERIC Educational Resources Information Center

    Lunsford, Laura G.; Baker, Vicki; Griffin, Kimberly A.; Johnson, W. Brad

    2013-01-01

    In this theoretical paper, we apply a social exchange framework to understand mentors' negative experiences. We propose a typology of costs, categorized according to psychosocial and career mentoring functions. Our typology generates testable research propositions. Psychosocial costs of mentoring are burnout, anger, and grief or loss. Career…

  5. Regularization iteration imaging algorithm for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstruction images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast-iterative shrinkage thresholding algorithm is introduced to accelerate the convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving the reconstruction precision and robustness.

  6. Economic feasibility study for new technological alternatives in wastewater treatment processes: a review.

    PubMed

    Molinos-Senante, María; Hernández-Sancho, Francesc; Sala-Garrido, Ramón

    2012-01-01

    The concept of sustainability involves the integration of economic, environmental, and social aspects and this also applies in the field of wastewater treatment. Economic feasibility studies are a key tool for selecting the most appropriate option from a set of technological proposals. Moreover, these studies are needed to assess the viability of transferring new technologies from pilot-scale to full-scale. In traditional economic feasibility studies, the benefits that have no market price, such as environmental benefits, are not considered and are therefore underestimated. To overcome this limitation, we propose a new methodology to assess the economic viability of wastewater treatment technologies that considers internal and external impacts. The estimation of the costs is based on the use of cost functions. To quantify the environmental benefits from wastewater treatment, the distance function methodology is proposed to estimate the shadow price of each pollutant removed in the wastewater treatment. The application of this methodological approach by decision makers enables the calculation of the true costs and benefits associated with each alternative technology. The proposed methodology is presented as a useful tool to support decision making.

  7. Application of target costing in machining

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, Bhaskaran; Kokatnur, Ameet; Gupta, Deepak P.

    2004-11-01

    In today's intensely competitive and highly volatile business environment, consistent development of low cost and high quality products meeting the functionality requirements is a key to a company's survival. Companies continuously strive to reduce the costs while still producing quality products to stay ahead in the competition. Many companies have turned to target costing to achieve this objective. Target costing is a structured approach to determine the cost at which a proposed product, meeting the quality and functionality requirements, must be produced in order to generate the desired profits. It subtracts the desired profit margin from the company's selling price to establish the manufacturing cost of the product. Extensive literature review revealed that companies in automotive, electronic and process industries have reaped the benefits of target costing. However target costing approach has not been applied in the machining industry, but other techniques based on Geometric Programming, Goal Programming, and Lagrange Multiplier have been proposed for application in this industry. These models follow a forward approach, by first selecting a set of machining parameters, and then determining the machining cost. Hence in this study we have developed an algorithm to apply the concepts of target costing, which is a backward approach that selects the machining parameters based on the required machining costs, and is therefore more suitable for practical applications in process improvement and cost reduction. A target costing model was developed for turning operation and was successfully validated using practical data.

  8. A Cost-Effective Energy-Recovering Sustain Driving Circuit for ac Plasma Display Panels

    NASA Astrophysics Data System (ADS)

    Lim, Jae Kwang; Tae, Heung-Sik; Choi, Byungcho; Kim, Seok Gi

    A new sustain driving circuit, featuring an energy-recovering function with simple structure and minimal component count, is proposed as a cost-effective solution for driving plasma display panels during the sustaining period. Compared with existing solutions, the proposed circuit reduces the number of semiconductor switches and reactive circuit components without compromising the circuit performance and gas-discharging characteristics. In addition, the proposed circuit utilizes the harness wire as an inductive circuit component, thereby further simplifying the circuit structure. The performance of the proposed circuit is confirmed with a 42-inch plasma display panel.

  9. A robust and fast active contour model for image segmentation with intensity inhomogeneity

    NASA Astrophysics Data System (ADS)

    Ding, Keyan; Weng, Guirong

    2018-04-01

    In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing the local image intensities fitting functions before the evolution of curve, the proposed model can effectively segment images with intensity inhomogeneity. And the computation cost is low because the fitting functions do not need to be updated in each iteration. Experiments have shown that the proposed model has a higher segmentation efficiency compared to some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.

  10. A class of solution-invariant transformations of cost functions for minimum cost flow phase unwrapping.

    PubMed

    Hubig, Michael; Suchandt, Steffen; Adam, Nico

    2004-10-01

    Phase unwrapping (PU) represents an important step in synthetic aperture radar interferometry (InSAR) and other interferometric applications. Among the different PU methods, the so called branch-cut approaches play an important role. In 1996 M. Costantini [Proceedings of the Fringe '96 Workshop ERS SAR Interferometry (European Space Agency, Munich, 1996), pp. 261-272] proposed to transform the problem of correctly placing branch cuts into a minimum cost flow (MCF) problem. The crucial point of this new approach is to generate cost functions that represent the a priori knowledge necessary for PU. Since cost functions are derived from measured data, they are random variables. This leads to the question of MCF solution stability: How much can the cost functions be varied without changing the cheapest flow that represents the correct branch cuts? This question is partially answered: The existence of a whole linear subspace in the space of cost functions is shown; this subspace contains all cost differences by which a cost function can be changed without changing the cost difference between any two flows that are discharging any residue configuration. These cost differences are called strictly stable cost differences. For quadrangular nonclosed networks (the most important type of MCF networks for interferometric purposes) a complete classification of strictly stable cost differences is presented. Further, the role of the well-known class of node potentials in the framework of strictly stable cost differences is investigated, and information on the vector-space structure representing the MCF environment is provided.

  11. Hybrid Stochastic Search Technique based Suboptimal AGC Regulator Design for Power System using Constrained Feedback Control Strategy

    NASA Astrophysics Data System (ADS)

    Ibraheem, Omveer, Hasan, N.

    2010-10-01

    A new hybrid stochastic search technique is proposed to design of suboptimal AGC regulator for a two area interconnected non reheat thermal power system incorporating DC link in parallel with AC tie-line. In this technique, we are proposing the hybrid form of Genetic Algorithm (GA) and simulated annealing (SA) based regulator. GASA has been successfully applied to constrained feedback control problems where other PI based techniques have often failed. The main idea in this scheme is to seek a feasible PI based suboptimal solution at each sampling time. The feasible solution decreases the cost function rather than minimizing the cost function.

  12. Economic lot sizing in a production system with random demand

    NASA Astrophysics Data System (ADS)

    Lee, Shine-Der; Yang, Chin-Ming; Lan, Shu-Chuan

    2016-04-01

    An extended economic production quantity model that copes with random demand is developed in this paper. A unique feature of the proposed study is the consideration of transient shortage during the production stage, which has not been explicitly analysed in existing literature. The considered costs include set-up cost for the batch production, inventory carrying cost during the production and depletion stages in one replenishment cycle, and shortage cost when demand cannot be satisfied from the shop floor immediately. Based on renewal reward process, a per-unit-time expected cost model is developed and analysed. Under some mild condition, it can be shown that the approximate cost function is convex. Computational experiments have demonstrated that the average reduction in total cost is significant when the proposed lot sizing policy is compared with those with deterministic demand.

  13. MPC Design for Rapid Pump-Attenuation and Expedited Hyperglycemia Response to Treat T1DM with an Artificial Pancreas

    PubMed Central

    Gondhalekar, Ravi; Dassau, Eyal; Doyle, Francis J.

    2016-01-01

    The design of a Model Predictive Control (MPC) strategy for the closed-loop operation of an Artificial Pancreas (AP) for treating Type 1 Diabetes Mellitus (T1DM) is considered in this paper. The contribution of this paper is to propose two changes to the usual structure of the MPC problems typically considered for control of an AP. The first proposed change is to replace the symmetric, quadratic input cost function with an asymmetric, quadratic function, allowing negative control inputs to be penalized less than positive ones. This facilitates rapid pump-suspensions in response to predicted hypoglycemia, while simultaneously permitting the design of a conservative response to hyperglycemia. The second proposed change is to penalize the velocity of the predicted glucose level, where this velocity penalty is based on a cost function that is again asymmetric, but additionally state-dependent. This facilitates the accelerated response to acute, persistent hyperglycemic events, e.g., as induced by unannounced meals. The novel functionality is demonstrated by numerical examples, and the efficacy of the proposed MPC strategy verified using the University of Padova/Virginia metabolic simulator. PMID:28479660

  14. Noun-phrase anaphors and focus: the informational load hypothesis.

    PubMed

    Almor, A

    1999-10-01

    The processing of noun-phrase (NP) anaphors in discourse is argued to reflect constraints on the activation and processing of semantic information in working memory. The proposed theory views NP anaphor processing as an optimization process that is based on the principle that processing cost, defined in terms of activating semantic information, should serve some discourse function--identifying the antecedent, adding new information, or both. In a series of 5 self-paced reading experiments, anaphors' functionality was manipulated by changing the discourse focus, and their cost was manipulated by changing the semantic relation between the anaphors and their antecedents. The results show that reading times of NP anaphors reflect their functional justification: Anaphors were read faster when their cost had a better functional justification. These results are incompatible with any theory that treats NP anaphors as one homogeneous class regardless of discourse function and processing cost.

  15. 78 FR 67342 - Proposed Information Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ..., volunteer demographic information and level of psycho-social health and functioning. The survey is designed... relationship between various service activities and potential psycho-social health benefits. Lastly this study... minutes. Estimated Total Burden Hours: 785. Total Burden Cost (capital/startup): None. Total Burden Cost...

  16. Orbit Clustering Based on Transfer Cost

    NASA Technical Reports Server (NTRS)

    Gustafson, Eric D.; Arrieta-Camacho, Juan J.; Petropoulos, Anastassios E.

    2013-01-01

    We propose using cluster analysis to perform quick screening for combinatorial global optimization problems. The key missing component currently preventing cluster analysis from use in this context is the lack of a useable metric function that defines the cost to transfer between two orbits. We study several proposed metrics and clustering algorithms, including k-means and the expectation maximization algorithm. We also show that proven heuristic methods such as the Q-law can be modified to work with cluster analysis.

  17. Locally optimal control under unknown dynamics with learnt cost function: application to industrial robot positioning

    NASA Astrophysics Data System (ADS)

    Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric

    2017-01-01

    Recent methods of Reinforcement Learning have enabled to solve difficult, high dimensional, robotic tasks under unknown dynamics using iterative Linear Quadratic Gaussian control theory. These algorithms are based on building a local time-varying linear model of the dynamics from data gathered through interaction with the environment. In such tasks, the cost function is often expressed directly in terms of the state and control variables so that it can be locally quadratized to run the algorithm. If the cost is expressed in terms of other variables, a model is required to compute the cost function from the variables manipulated. We propose a method to learn the cost function directly from the data, in the same way as for the dynamics. This way, the cost function can be defined in terms of any measurable quantity and thus can be chosen more appropriately for the task to be carried out. With our method, any sensor information can be used to design the cost function. We demonstrate the efficiency of this method through simulating, with the V-REP software, the learning of a Cartesian positioning task on several industrial robots with different characteristics. The robots are controlled in joint space and no model is provided a priori. Our results are compared with another model free technique, consisting in writing the cost function as a state variable.

  18. Experimental demonstration of using divergence cost-function in SPGD algorithm for coherent beam combining with tip/tilt control.

    PubMed

    Geng, Chao; Luo, Wen; Tan, Yi; Liu, Hongmei; Mu, Jinbo; Li, Xinyang

    2013-10-21

    A novel approach of tip/tilt control by using divergence cost function in stochastic parallel gradient descent (SPGD) algorithm for coherent beam combining (CBC) is proposed and demonstrated experimentally in a seven-channel 2-W fiber amplifier array with both phase-locking and tip/tilt control, for the first time to our best knowledge. Compared with the conventional power-in-the-bucket (PIB) cost function for SPGD optimization, the tip/tilt control using divergence cost function ensures wider correction range, automatic switching control of program, and freedom of camera's intensity-saturation. Homemade piezoelectric-ring phase-modulator (PZT PM) and adaptive fiber-optics collimator (AFOC) are developed to correct piston- and tip/tilt-type aberrations, respectively. The PIB cost function is employed for phase-locking via maximization of SPGD optimization, while the divergence cost function is used for tip/tilt control via minimization. An average of 432-μrad of divergence metrics in open loop has decreased to 89-μrad when tip/tilt control implemented. In CBC, the power in the full width at half maximum (FWHM) of the main lobe increases by 32 times, and the phase residual error is less than λ/15.

  19. Fitting of full Cobb-Douglas and full VRTS cost frontiers by solving goal programming problem

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Madhusudhana Rao, B.

    2017-11-01

    The present research article first defines two popular production functions viz, Cobb-Douglas and VRTS production frontiers and their dual cost functions and then derives their cost limited maximal outputs. This paper tells us that the cost limited maximal output is cost efficient. Here the one side goal programming problem is proposed by which the full Cobb-Douglas cost frontier, full VRTS frontier can be fitted. This paper includes the framing of goal programming by which stochastic cost frontier and stochastic VRTS frontiers are fitted. Hasan et al. [1] used a parameter approach Stochastic Frontier Approach (SFA) to examine the technical efficiency of the Malaysian domestic banks listed in the Kuala Lumpur stock Exchange (KLSE) market over the period 2005-2010. AshkanHassani [2] exposed Cobb-Douglas Production Functions application in construction schedule crashing and project risk analysis related to the duration of construction projects. Nan Jiang [3] applied Stochastic Frontier analysis to a panel of New Zealand dairy forms in 1998/99-2006/2007.

  20. Simulation analysis of a microcomputer-based, low-cost Omega navigation system

    NASA Technical Reports Server (NTRS)

    Lilley, R. W.; Salter, R. J., Jr.

    1976-01-01

    The current status of research on a proposed micro-computer-based, low-cost Omega Navigation System (ONS) is described. The design approach emphasizes minimum hardware, maximum software, and the use of a low-cost, commercially-available microcomputer. Currently under investigation is the implementation of a low-cost navigation processor and its interface with an omega sensor to complete the hardware-based ONS. Sensor processor functions are simulated to determine how many of the sensor processor functions can be handled by innovative software. An input data base of live Omega ground and flight test data was created. The Omega sensor and microcomputer interface modules used to collect the data are functionally described. Automatic synchronization to the Omega transmission pattern is described as an example of the algorithms developed using this data base.

  1. Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.

    PubMed

    Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir

    2018-04-01

    In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.

  2. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) has been developed for the mode-mixing problem in Empirical Mode Decomposition (EMD) method. Compared to the ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need enough ensemble number to reduce the residue noise, and hence it would be too much computation cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and IMFs evaluation index are proposed with the aim of reducing the computational cost and select IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently with less computation cost, and the IMFs evaluation index can select the meaningful IMFs automatically.

  3. Multibody model reduction by component mode synthesis and component cost analysis

    NASA Technical Reports Server (NTRS)

    Spanos, J. T.; Mingori, D. L.

    1990-01-01

    The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.

  4. Event-Triggered Adaptive Dynamic Programming for Continuous-Time Systems With Control Constraints.

    PubMed

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2016-08-31

    In this paper, an event-triggered near optimal control structure is developed for nonlinear continuous-time systems with control constraints. Due to the saturating actuators, a nonquadratic cost function is introduced and the Hamilton-Jacobi-Bellman (HJB) equation for constrained nonlinear continuous-time systems is formulated. In order to solve the HJB equation, an actor-critic framework is presented. The critic network is used to approximate the cost function and the action network is used to estimate the optimal control law. In addition, in the proposed method, the control signal is transmitted in an aperiodic manner to reduce the computational and the transmission cost. Both the networks are only updated at the trigger instants decided by the event-triggered condition. Detailed Lyapunov analysis is provided to guarantee that the closed-loop event-triggered system is ultimately bounded. Three case studies are used to demonstrate the effectiveness of the proposed method.

  5. The specification of a hospital cost function. A comment on the recent literature.

    PubMed

    Breyer, F

    1987-06-01

    In the empirical estimation of hospital cost functions, two radically different types of specifications have been chosen to date, ad-hoc forms and flexible functional forms based on neoclassical production theory. This paper discusses the respective strengths and weaknesses of both approaches and emphasizes the apparently unreconcilable conflict between the goals of maintaining functional flexibility and keeping the number of variables manageable if at the same time patient heterogeneity is to be adequately reflected in the case mix variables. A new specification is proposed which strikes a compromise between these goals, and the underlying assumptions are discussed critically.

  6. An Analysis of Cost Analysis Methods Used during Contract Evaluation and Source Selection in Government Contracting.

    DTIC Science & Technology

    1986-12-01

    optimal value can be stated as, Marginal Productivity of Marginal Productivity of Good A Good B " Price of Good A Price of Good B This...contractor proposed production costs could be used. _11 i4 W Vi..:. II. CONTRACT PROPOSAL EVALUATION A. PRICE ANALYSIS Price analysis, in its broadest sense...enters the market with a supply function represented by line S2, then the new price will be reestablished at price OP2 and quantity OQ2. Price

  7. Multi-objective possibilistic model for portfolio selection with transaction cost

    NASA Astrophysics Data System (ADS)

    Jana, P.; Roy, T. K.; Mazumder, S. K.

    2009-06-01

    In this paper, we introduce the possibilistic mean value and variance of continuous distribution, rather than probability distributions. We propose a multi-objective Portfolio based model and added another entropy objective function to generate a well diversified asset portfolio within optimal asset allocation. For quantifying any potential return and risk, portfolio liquidity is taken into account and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.

  8. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do notmore » require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.« less

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do notmore » require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.« less

  10. A cost-effective methodology for the design of massively-parallel VLSI functional units

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Sriram, G.; Desouza, J.

    1993-01-01

    In this paper we propose a generalized methodology for the design of cost-effective massively-parallel VLSI Functional Units. This methodology is based on a technique of generating and reducing a massive bit-array on the mask-programmable PAcube VLSI array. This methodology unifies (maintains identical data flow and control) the execution of complex arithmetic functions on PAcube arrays. It is highly regular, expandable and uniform with respect to problem-size and wordlength, thereby reducing the communication complexity. The memory-functional unit interface is regular and expandable. Using this technique functional units of dedicated processors can be mask-programmed on the naked PAcube arrays, reducing the turn-around time. The production cost of such dedicated processors can be drastically reduced since the naked PAcube arrays can be mass-produced. Analysis of the the performance of functional units designed by our method yields promising results.

  11. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters have been estimated using particle swarm optimization (PSO), enhancing the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts, which include extraction and fine matching of interest points in the images, establishment of cost function, based on Kruppa equations and optimization of PSO using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method of maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix so that the new cost function (deduced from Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To overcome the issue of optimization pushed to a local optimum, LiDAR data was used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.

  12. Artificial neural networks using complex numbers and phase encoded weights.

    PubMed

    Michel, Howard E; Awwal, Abdul Ahad S

    2010-04-01

    The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.

  13. Optimal consensus algorithm integrated with obstacle avoidance

    NASA Astrophysics Data System (ADS)

    Wang, Jianan; Xin, Ming

    2013-01-01

    This article proposes a new consensus algorithm for the networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.

  14. Cost-benefit decision circuitry: proposed modulatory role for acetylcholine.

    PubMed

    Fobbs, Wambura C; Mizumori, Sheri J Y

    2014-01-01

    In order to select which action should be taken, an animal must weigh the costs and benefits of possible outcomes associate with each action. Such decisions, called cost-benefit decisions, likely involve several cognitive processes (including memory) and a vast neural circuitry. Rodent models have allowed research to begin to probe the neural basis of three forms of cost-benefit decision making: effort-, delay-, and risk-based decision making. In this review, we detail the current understanding of the functional circuits that subserve each form of decision making. We highlight the extensive literature by detailing the ability of dopamine to influence decisions by modulating structures within these circuits. Since acetylcholine projects to all of the same important structures, we propose several ways in which the cholinergic system may play a local modulatory role that will allow it to shape these behaviors. A greater understanding of the contribution of the cholinergic system to cost-benefit decisions will permit us to better link the decision and memory processes, and this will help us to better understand and/or treat individuals with deficits in a number of higher cognitive functions including decision making, learning, memory, and language. © 2014 Elsevier Inc. All rights reserved.

  15. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on the both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  16. Stochastic multi-objective model for optimal energy exchange optimization of networked microgrids with presence of renewable generation under risk-based strategies.

    PubMed

    Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad

    2018-02-01

    The inherent volatility and unpredictable nature of renewable generations and load demand pose considerable challenges for energy exchange optimization of microgrids (MG). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In so doing, three various risk-based strategies are distinguished by using conditional value at risk (CVaR) approach. The proposed model is specified as a two-distinct objective function. The first function minimizes the operation and maintenance costs, cost of power transaction between upstream network and MGs as well as power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, the stochastic scenario-based approach is incorporated into the approach in order to handle the uncertainty. Also, Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, non-dominated sorting genetic algorithm (NSGAII) is applied to minimize the objective functions simultaneously and the best solution is extracted by fuzzy satisfying method with respect to risk-based strategies. To indicate the performance of the proposed model, it is performed on the modified IEEE 33-bus distribution system and the obtained results show that the presented approach can be considered as an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve a better accuracy with known signal waveforms. However, the signal waveform is difficult to be completely known in the actual positioning process. To solve the problem, we proposed a DPD method for digital modulation signals based on improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained on symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is considered for the optimal symbol search. Simulations are carried out to show the higher position accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weight and population size. On the one hand, the proposed algorithm can take full advantage of the signal feature to improve the positioning accuracy. On the other hand, the improved PSO algorithm can improve the efficiency of symbol search by nearly one hundred times to achieve a global optimal solution.

  18. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there exist some demands, e.g. in reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is in the identification of the number of knots and their respective locations in non-uniform space in the most efficient computational cost. This paper presents a new strategy for fitting any forms of curve by B-spline functions via local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method to the existing methods in literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that, the proposed method can be applied for fitting any types of curves ranging from smooth ones to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.

  19. Optimal control of switching time in switched stochastic systems with multi-switching times and different costs

    NASA Astrophysics Data System (ADS)

    Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian

    2017-08-01

    In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variation, we derive the gradient of the cost functional with respect to the switching times on an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.

  20. A fuzzy cost-benefit function to select economical products for processing in a closed-loop supply chain

    NASA Astrophysics Data System (ADS)

    Pochampally, Kishore K.; Gupta, Surendra M.; Cullinane, Thomas P.

    2004-02-01

    The cost-benefit analysis of data associated with re-processing of used products often involves the uncertainty feature of cash-flow modeling. The data is not objective because of uncertainties in supply, quality and disassembly times of used products. Hence, decision-makers must rely on "fuzzy" data for analysis. The same parties that are involved in the forward supply chain often carry out the collection and re-processing of used products. It is therefore important that the cost-benefit analysis takes the data of both new products and used products into account. In this paper, a fuzzy cost-benefit function is proposed that is used to perform a multi-criteria economic analysis to select the most economical products to process in a closed-loop supply chain. Application of the function is detailed through an illustrative example.

  1. TH-CD-202-08: Feasibility Study of Planning Phase Optimization Using Patient Geometry-Driven Information for Better Dose Sparing of Organ at Risks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, S; Kim, D; Kim, T

    2016-06-15

    Purpose: To propose a simple and effective cost value function to search optimal planning phase (gating window) and demonstrated its feasibility for respiratory correlated radiation therapy. Methods: We acquired 4DCT of 10 phases for 10 lung patients who have tumor located near OARs such as esophagus, heart, and spinal cord (i.e., central lung cancer patients). A simplified mathematical optimization function was established by using overlap volume histogram (OVH) between the target and organ at risk (OAR) at each phase and the tolerance dose of selected OARs to achieve surrounding OARs dose-sparing. For all patients and all phases, delineation of themore » target volume and selected OARs (esophagus, heart, and spinal cord) was performed (by one observer to avoid inter-observer variation), then cost values were calculated for all phases. After the breathing phases were ranked according to cost value function, the relationship between score and dose distribution at highest and lowest cost value phases were evaluated by comparing the mean/max dose. Results: A simplified mathematical cost value function showed noticeable difference from phase to phase, implying it is possible to find optimal phases for gating window. The lowest cost value which may result in lower mean/max dose to OARs was distributed at various phases for all patients. The mean doses of the OARs significantly decreased about 10% with statistical significance for all 3 OARs at the phase with the lowest cost value. Also, the max doses of the OARs were decreased about 2∼5% at the phase with the lowest cost value compared to the phase with the highest cost value. Conclusion: It is demonstrated that optimal phases (in dose distribution perspective) for gating window could exist differently through each patient and the proposed cost value function can be a useful tool for determining such phases without performing dose optimization calculations. This research was supported by the Mid-career Researcher Program through NRF funded by the Ministry of Science, ICT & Future Planning of Korea (NRF-2014R1A2A1A10050270) and by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)« less

  2. Optimization Scheduling Model for Wind-thermal Power System Considering the Dynamic penalty factor

    NASA Astrophysics Data System (ADS)

    PENG, Siyu; LUO, Jianchun; WANG, Yunyu; YANG, Jun; RAN, Hong; PENG, Xiaodong; HUANG, Ming; LIU, Wanyu

    2018-03-01

    In this paper, a new dynamic economic dispatch model for power system is presented.Objective function of the proposed model presents a major novelty in the dynamic economic dispatch including wind farm: introduced the “Dynamic penalty factor”, This factor could be computed by using fuzzy logic considering both the variable nature of active wind power and power demand, and it could change the wind curtailment cost according to the different state of the power system. Case studies were carried out on the IEEE30 system. Results show that the proposed optimization model could mitigate the wind curtailment and the total cost effectively, demonstrate the validity and effectiveness of the proposed model.

  3. An Approach to Economic Dispatch with Multiple Fuels Based on Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Sriyanyong, Pichet

    2011-06-01

    Particle Swarm Optimization (PSO), a stochastic optimization technique, shows superiority to other evolutionary computation techniques in terms of less computation time, easy implementation with high quality solution, stable convergence characteristic and independent from initialization. For this reason, this paper proposes the application of PSO to the Economic Dispatch (ED) problem, which occurs in the operational planning of power systems. In this study, ED problem can be categorized according to the different characteristics of its cost function that are ED problem with smooth cost function and ED problem with multiple fuels. Taking the multiple fuels into account will make the problem more realistic. The experimental results show that the proposed PSO algorithm is more efficient than previous approaches under consideration as well as highly promising in real world applications.

  4. 77 FR 69539 - 30-Day Notice of Proposed Information Collection: Humphrey Evaluation Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-19

    ... comments to permit the Department to: Evaluate whether the proposed information collection is necessary for the proper functions of the Department. Evaluate the accuracy of our estimate of the time and cost... collection: This request for a new information collection will allow ECA/P/V to conduct a descriptive survey...

  5. The Inactivation Principle: Mathematical Solutions Minimizing the Absolute Work and Biological Implications for the Planning of Arm Movements

    PubMed Central

    Berret, Bastien; Darlot, Christian; Jean, Frédéric; Pozzo, Thierry; Papaxanthis, Charalambos; Gauthier, Jean Paul

    2008-01-01

    An important question in the literature focusing on motor control is to determine which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that among all possible movements the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits with experimental data. A second approach (generally considered as the more difficult) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations studying optimality principles of arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory related to both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, during an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. In accordance, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such types of optimality criteria may be applied to a large range of biological movements. PMID:18949023

  6. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost effective design values. However, previous cost-benefit based extreme flood estimation is based on stationary assumptions and analyze dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore time changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54-year observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.

  7. Embedded System Implementation of Sound Localization in Proximal Region

    NASA Astrophysics Data System (ADS)

    Iwanaga, Nobuyuki; Matsumura, Tomoya; Yoshida, Akihiro; Kobayashi, Wataru; Onoye, Takao

    A sound localization method in the proximal region is proposed, which is based on a low-cost 3D sound localization algorithm with the use of head-related transfer functions (HRTFs). The auditory parallax model is applied to the current algorithm so that more accurate HRTFs can be used for sound localization in the proximal region. In addition, head-shadowing effects based on rigid-sphere model are reproduced in the proximal region by means of a second-order IIR filter. A subjective listening test demonstrates the effectiveness of the proposed method. Embedded system implementation of the proposed method is also described claiming that the proposed method improves sound effects in the proximal region only with 5.1% increase of memory capacity and 8.3% of computational costs.

  8. A soft computing-based approach to optimise queuing-inventory control problem

    NASA Astrophysics Data System (ADS)

    Alaghebandha, Mohammad; Hajipour, Vahid

    2015-04-01

    In this paper, a multi-product continuous review inventory control problem within batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective function is to minimise summation of ordering, holding and shortage costs under warehouse space, service level and expected lost-sales shortage cost constraints from retailer and warehouse viewpoints. Since the proposed model is Non-deterministic Polynomial-time hard, an efficient imperialist competitive algorithm (ICA) is proposed to solve the model. To justify proposed ICA, both ganetic algorithm and simulated annealing algorithm are utilised. In order to determine the best value of algorithm parameters that result in a better solution, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.

  9. A Cognitive Engineering Analysis of the Vertical Navigation (VNAV) Function

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Feary, Michael; Polson, Peter; Mumaw, Randall; Palmer, Everett

    2001-01-01

    A cognitive engineering analysis of the Flight Management System (FMS) Vertical Navigation (VNAV) function has identified overloading of the VNAV button and overloading of the Flight Mode Annunciator (FMA) used by the VNAV function. These two types of overloading, resulting in modal input devices and ambiguous feedback, are well known sources of operator confusion, and explain, in part, the operational issues experienced by airline pilots using VNAV in descent and approach. A proposal to modify the existing VNAV design to eliminate the overloading is discussed. The proposed design improves pilot's situational awareness of the VNAV function, and potentially reduces the cost of software development and improves safety.

  10. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  11. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    PubMed

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  12. Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data

    PubMed Central

    CHEN, SHUAI; ZHAO, HONGWEI

    2013-01-01

    Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869

  13. A fuzzy inventory model with acceptable shortage using graded mean integration value method

    NASA Astrophysics Data System (ADS)

    Saranya, R.; Varadarajan, R.

    2018-04-01

    In many inventory models uncertainty is due to fuzziness and fuzziness is the closed possible approach to reality. In this paper, we proposed a fuzzy inventory model with acceptable shortage which is completely backlogged. We fuzzily the carrying cost, backorder cost and ordering cost using Triangular and Trapezoidal fuzzy numbers to obtain the fuzzy total cost. The purpose of our study is to defuzzify the total profit function by Graded Mean Integration Value Method. Further a numerical example is also given to demonstrate the developed crisp and fuzzy models.

  14. An Interactive Life Cycle Cost Forecasting Tool

    DTIC Science & Technology

    1990-03-01

    of Phase in period PO - Length of Phase out period PV - Present value viii AFIT/GOR/ENS/90M-17 Abstract A tool was developed for Monte Carlo...and B. Note that this is for a given configuration. The E represents effectiveness and is equated to some function of the quantity of systems A and B...purchased. Either strategy, maximizing effectiveness or minimizing cost, leads to some type of cost comparison among the proposed systems. The problem

  15. Nonparametric Discrete Survival Function Estimation with Uncertain Endpoints Using an Internal Validation Subsample

    PubMed Central

    Zee, Jarcy; Xie, Sharon X.

    2015-01-01

    Summary When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and more efficient with moderate missingness compared to the complete-case Kaplan-Meier survival function estimator, which uses only available true endpoints. Finally, we apply the proposed method to a dataset for estimating the risk of developing Alzheimer's disease from the Alzheimer's Disease Neuroimaging Initiative. PMID:25916510

  16. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.

  17. Investigating the performance of wavelet neural networks in ionospheric tomography using IGS data over Europe

    NASA Astrophysics Data System (ADS)

    Ghaffari Razin, Mir Reza; Voosoghi, Behzad

    2017-04-01

    Ionospheric tomography is a very cost-effective method which is used frequently to modeling of electron density distributions. In this paper, residual minimization training neural network (RMTNN) is used in voxel based ionospheric tomography. Due to the use of wavelet neural network (WNN) with back-propagation (BP) algorithm in RMTNN method, the new method is named modified RMTNN (MRMTNN). To train the WNN with BP algorithm, two cost functions is defined: total and vertical cost functions. Using minimization of cost functions, temporal and spatial ionospheric variations is studied. The GPS measurements of the international GNSS service (IGS) in the central Europe have been used for constructing a 3-D image of the electron density. Three days (2009.04.15, 2011.07.20 and 2013.06.01) with different solar activity index is used for the processing. To validate and better assess reliability of the proposed method, 4 ionosonde and 3 testing stations have been used. Also the results of MRMTNN has been compared to that of the RMTNN method, international reference ionosphere model 2012 (IRI-2012) and spherical cap harmonic (SCH) method as a local ionospheric model. The comparison of MRMTNN results with RMTNN, IRI-2012 and SCH models shows that the root mean square error (RMSE) and standard deviation of the proposed approach are superior to those of the traditional method.

  18. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.

  19. Reliable Adaptive Data Aggregation Route Strategy for a Trade-off between Energy and Lifetime in WSNs

    PubMed Central

    Guo, Wenzhong; Hong, Wei; Zhang, Bin; Chen, Yuzhong; Xiong, Naixue

    2014-01-01

    Mobile security is one of the most fundamental problems in Wireless Sensor Networks (WSNs). The data transmission path will be compromised for some disabled nodes. To construct a secure and reliable network, designing an adaptive route strategy which optimizes energy consumption and network lifetime of the aggregation cost is of great importance. In this paper, we address the reliable data aggregation route problem for WSNs. Firstly, to ensure nodes work properly, we propose a data aggregation route algorithm which improves the energy efficiency in the WSN. The construction process achieved through discrete particle swarm optimization (DPSO) saves node energy costs. Then, to balance the network load and establish a reliable network, an adaptive route algorithm with the minimal energy and the maximum lifetime is proposed. Since it is a non-linear constrained multi-objective optimization problem, in this paper we propose a DPSO with the multi-objective fitness function combined with the phenotype sharing function and penalty function to find available routes. Experimental results show that compared with other tree routing algorithms our algorithm can effectively reduce energy consumption and trade off energy consumption and network lifetime. PMID:25215944

  20. Brake System Design Optimization : Volume 2. Supplemental Data.

    DOT National Transportation Integrated Search

    1981-04-01

    Existing freight car braking systems, components, and subsystems are characterized both physically and functionally, and life-cycle costs are examined. Potential improvements to existing systems previously proposed or available are identified and des...

  1. Brake System Design Optimization. Volume II : Supplemental Data.

    DOT National Transportation Integrated Search

    1981-06-01

    Existing freight car braking systems, components, and subsystems are characterized both physically and functionally, and life-cycle costs are examined. Potential improvements to existing systems previously proposed or available are identified and des...

  2. Visual pathways from the perspective of cost functions and multi-task deep neural networks.

    PubMed

    Scholte, H Steven; Losch, Max M; Ramakrishnan, Kandan; de Haan, Edward H F; Bohte, Sander M

    2018-01-01

    Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost function. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks and this causes the emergence of a ventral pathway dedicated to vision for perception, and a dorsal pathway dedicated to vision for action. To evaluate the functional organization in multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks shows a decreasing degree of feature representation sharing towards higher-tier layers while the network trained on related tasks uniformly shows high degree of sharing. We conjecture that the method we propose can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical-units. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Active distribution network planning considering linearized system loss

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Wang, Mingqiang; Xu, Hao

    2018-02-01

    In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the location of DGs and the topology of the network are fixed. The proposed model optimizes the capacities of DG and the optimal distribution line capacity simultaneously by a cost/benefit analysis and the benefit is quantified by the reduction of the expected interruption cost. Besides, the network loss is explicitly analyzed in the paper. For simplicity, the network loss is appropriately simplified as a quadratic function of difference of voltage phase angle. Then it is further piecewise linearized. In this paper, a piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with elaborate linearization technique is tested on the IEEE 33-bus distribution network system.

  4. Planar junctionless phototransistor: A potential high-performance and low-cost device for optical-communications

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new junctionless optical controlled field effect transistor (JL-OCFET) and its comprehensive theoretical model is proposed to achieve high optical performance and low cost fabrication process. Exhaustive study of the device characteristics and comparison between the proposed junctionless design and the conventional inversion mode structure (IM-OCFET) for similar dimensions are performed. Our investigation reveals that the proposed design exhibits an outstanding capability to be an alternative to the IM-OCFET due to the high performance and the weak signal detection benefit offered by this design. Moreover, the developed analytical expressions are exploited to formulate the objective functions to optimize the device performance using Genetic Algorithms (GAs) approach. The optimized JL-OCFET not only demonstrates good performance in terms of derived drain current and responsivity, but also exhibits superior signal to noise ratio, low power consumption, high-sensitivity, high ION/IOFF ratio and high-detectivity as compared to the conventional IM-OCFET counterpart. These characteristics make the optimized JL-OCFET potentially suitable for developing low cost and ultrasensitive photodetectors for high-performance and low cost inter-chips data communication applications.

  5. Fuel feasibility study for Red River Army Depot boiler plant. Final report. [Economic breakeven points for conversion to fossil fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ables, L.D.

    This paper establishes economic breakeven points for the conversion to various fossil fuels as a function of time and pollution constraints for the main boiler plant at Red River Army Depot in Texarkana, Texas. In carrying out the objectives of this paper, the author develops what he considers to be the basic conversion costs and operating costs for each fossil fuel under investigation. These costs are analyzed by the use of the present worth comparison method, and the minimum cost difference between the present fuel and the proposed fuel which would justify the conversion to the proposed fuel is calculated.more » These calculated breakeven points allow a fast and easy method of determining the feasibility of a fuel by merely knowing the relative price difference between the fuels under consideration. (GRA)« less

  6. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies’ Functions

    PubMed Central

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-01

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies’ functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident’s origin and other indirect losses. In the valuation of damage to people’s life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water’s recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole. PMID:26805869

  7. Economic Estimation of the Losses Caused by Surface Water Pollution Accidents in China From the Perspective of Water Bodies' Functions.

    PubMed

    Yao, Hong; You, Zhen; Liu, Bo

    2016-01-22

    The number of surface water pollution accidents (abbreviated as SWPAs) has increased substantially in China in recent years. Estimation of economic losses due to SWPAs has been one of the focuses in China and is mentioned many times in the Environmental Protection Law of China promulgated in 2014. From the perspective of water bodies' functions, pollution accident damages can be divided into eight types: damage to human health, water supply suspension, fishery, recreational functions, biological diversity, environmental property loss, the accident's origin and other indirect losses. In the valuation of damage to people's life, the procedure for compensation of traffic accidents in China was used. The functional replacement cost method was used in economic estimation of the losses due to water supply suspension and loss of water's recreational functions. Damage to biological diversity was estimated by recovery cost analysis and damage to environmental property losses were calculated using pollutant removal costs. As a case study, using the proposed calculation procedure the economic losses caused by the major Songhuajiang River pollution accident that happened in China in 2005 have been estimated at 2263 billion CNY. The estimated economic losses for real accidents can sometimes be influenced by social and political factors, such as data authenticity and accuracy. Besides, one or more aspects in the method might be overestimated, underrated or even ignored. The proposed procedure may be used by decision makers for the economic estimation of losses in SWPAs. Estimates of the economic losses of pollution accidents could help quantify potential costs associated with increased risk sources along lakes/rivers but more importantly, highlight the value of clean water to society as a whole.

  8. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.

  9. A cost-function approach to rival penalized competitive learning (RPCL).

    PubMed

    Ma, Jinwen; Wang, Taijun

    2006-08-01

    Rival penalized competitive learning (RPCL) has been shown to be a useful tool for clustering on a set of sample data in which the number of clusters is unknown. However, the RPCL algorithm was proposed heuristically and is still in lack of a mathematical theory to describe its convergence behavior. In order to solve the convergence problem, we investigate it via a cost-function approach. By theoretical analysis, we prove that a general form of RPCL, called distance-sensitive RPCL (DSRPCL), is associated with the minimization of a cost function on the weight vectors of a competitive learning network. As a DSRPCL process decreases the cost to a local minimum, a number of weight vectors eventually fall into a hypersphere surrounding the sample data, while the other weight vectors diverge to infinity. Moreover, it is shown by the theoretical analysis and simulation experiments that if the cost reduces into the global minimum, a correct number of weight vectors is automatically selected and located around the centers of the actual clusters, respectively. Finally, we apply the DSRPCL algorithms to unsupervised color image segmentation and classification of the wine data.

  10. Brake System Design Optimization : Volume 1. A Survey and Assessment.

    DOT National Transportation Integrated Search

    1978-06-01

    Existing freight car braking systems, components, and subsystems are characterized both physically and functionally, and life-cycle costs are examined. Potential improvements to existing systems previously proposed or available are identified and des...

  11. Consideration of plant behaviour in optimal servo-compensator design

    NASA Astrophysics Data System (ADS)

    Moase, W. H.; Manzie, C.

    2016-07-01

    Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.

  12. Design Approach and Implementation of Application Specific Instruction Set Processor for SHA-3 BLAKE Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang

    This paper presents an Application Specific Instruction-set Processor (ASIP) for the SHA-3 BLAKE algorithm family by instruction set extensions (ISE) from an RISC (reduced instruction set computer) processor. With a design space exploration for this ASIP to increase the performance and reduce the area cost, we accomplish an efficient hardware and software implementation of BLAKE algorithm. The special instructions and their well-matched hardware function unit improve the calculation of the key section of the algorithm, namely G-functions. Also, relaxing the time constraint of the special function unit can decrease its hardware cost, while keeping the high data throughput of the processor. Evaluation results reveal the ASIP achieves 335Mbps and 176Mbps for BLAKE-256 and BLAKE-512. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycle per byte. In fact, both high throughput and low hardware cost achieved by this programmable processor are comparable to that of ASIC implementations.

  13. An IoT Reader for Wireless Passive Electromagnetic Sensors.

    PubMed

    Galindo-Romera, Gabriel; Carnerero-Cano, Javier; Martínez-Martínez, José Juan; Herraiz-Martínez, Francisco Javier

    2017-03-28

    In the last years, many passive electromagnetic sensors have been reported. Some of these sensors are used for measuring harmful substances. Moreover, the response of these sensors is usually obtained with laboratory equipment. This approach highly increases the total cost and complexity of the sensing system. In this work, a novel low-cost and portable Internet-of-Things (IoT) reader for passive wireless electromagnetic sensors is proposed. The reader is used to interrogate the sensors within a short-range wireless link avoiding the direct contact with the substances under test. The IoT functionalities of the reader allows remote sensing from computers and handheld devices. For that purpose, the proposed design is based on four functional layers: the radiating layer, the RF interface, the IoT mini-computer and the power unit. In this paper a demonstrator of the proposed reader is designed and manufactured. The demonstrator shows, through the remote measurement of different substances, that the proposed system can estimate the dielectric permittivity. It has been demonstrated that a linear approximation with a small error can be extracted from the reader measurements. It is remarkable that the proposed reader can be used with other type of electromagnetic sensors, which transduce the magnitude variations in the frequency domain.

  14. An IoT Reader for Wireless Passive Electromagnetic Sensors

    PubMed Central

    Galindo-Romera, Gabriel; Carnerero-Cano, Javier; Martínez-Martínez, José Juan; Herraiz-Martínez, Francisco Javier

    2017-01-01

    In the last years, many passive electromagnetic sensors have been reported. Some of these sensors are used for measuring harmful substances. Moreover, the response of these sensors is usually obtained with laboratory equipment. This approach highly increases the total cost and complexity of the sensing system. In this work, a novel low-cost and portable Internet-of-Things (IoT) reader for passive wireless electromagnetic sensors is proposed. The reader is used to interrogate the sensors within a short-range wireless link avoiding the direct contact with the substances under test. The IoT functionalities of the reader allows remote sensing from computers and handheld devices. For that purpose, the proposed design is based on four functional layers: the radiating layer, the RF interface, the IoT mini-computer and the power unit. In this paper a demonstrator of the proposed reader is designed and manufactured. The demonstrator shows, through the remote measurement of different substances, that the proposed system can estimate the dielectric permittivity. It has been demonstrated that a linear approximation with a small error can be extracted from the reader measurements. It is remarkable that the proposed reader can be used with other type of electromagnetic sensors, which transduce the magnitude variations in the frequency domain. PMID:28350356

  15. Replica Approach for Minimal Investment Risk with Cost

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.

  16. Global Network Alignment in the Context of Aging.

    PubMed

    Faisal, Fazle Elahi; Zhao, Han; Milenkovic, Tijana

    2015-01-01

    Analogous to sequence alignment, network alignment (NA) can be used to transfer biological knowledge across species between conserved network regions. NA faces two algorithmic challenges: 1) Which cost function to use to capture "similarities" between nodes in different networks? 2) Which alignment strategy to use to rapidly identify "high-scoring" alignments from all possible alignments? We "break down" existing state-of-the-art methods that use both different cost functions and different alignment strategies to evaluate each combination of their cost functions and alignment strategies. We find that a combination of the cost function of one method and the alignment strategy of another method beats the existing methods. Hence, we propose this combination as a novel superior NA method. Then, since human aging is hard to study experimentally due to long lifespan, we use NA to transfer aging-related knowledge from well annotated model species to poorly annotated human. By doing so, we produce novel human aging-related knowledge, which complements currently available knowledge about aging that has been obtained mainly by sequence alignment. We demonstrate significant similarity between topological and functional properties of our novel predictions and those of known aging-related genes. We are the first to use NA to learn more about aging.

  17. Ranked solutions to a class of combinatorial optimizations - with applications in mass spectrometry based peptide sequencing

    NASA Astrophysics Data System (ADS)

    Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo

    2006-03-01

    Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions - - find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the fininte number of high- ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks - - peptide sequencing using tandem mass spectrometry data.

  18. Seamless interworking architecture for WBAN in heterogeneous wireless networks with QoS guarantees.

    PubMed

    Khan, Pervez; Ullah, Niamat; Ullah, Sana; Kwak, Kyung Sup

    2011-10-01

    The IEEE 802.15.6 standard is a communication standard optimized for low-power and short-range in-body/on-body nodes to serve a variety of medical, consumer electronics and entertainment applications. Providing high mobility with guaranteed Quality of Service (QoS) to a WBAN user in heterogeneous wireless networks is a challenging task. A WBAN uses a Personal Digital Assistant (PDA) to gather data from body sensors and forwards it to a remote server through wide range wireless networks. In this paper, we present a coexistence study of WBAN with Wireless Local Area Networks (WLAN) and Wireless Wide Area Networks (WWANs). The main issue is interworking of WBAN in heterogenous wireless networks including seamless handover, QoS, emergency services, cooperation and security. We propose a Seamless Interworking Architecture (SIA) for WBAN in heterogenous wireless networks based on a cost function. The cost function is based on power consumption and data throughput costs. Our simulation results show that the proposed scheme outperforms typical approaches in terms of throughput, delay and packet loss rate.

  19. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.

  20. An Efficient Scheduling Scheme on Charging Stations for Smart Transportation

    NASA Astrophysics Data System (ADS)

    Kim, Hye-Jin; Lee, Junghoon; Park, Gyung-Leen; Kang, Min-Jae; Kang, Mikyung

    This paper proposes a reservation-based scheduling scheme for the charging station to decide the service order of multiple requests, aiming at improving the satisfiability of electric vehicles. The proposed scheme makes it possible for a customer to reduce the charge cost and waiting time, while a station can extend the number of clients it can serve. A linear rank function is defined based on estimated arrival time, waiting time bound, and the amount of needed power, reducing the scheduling complexity. Receiving the requests from the clients, the power station decides the charge order by the rank function and then replies to the requesters with the waiting time and cost it can guarantee. Each requester can decide whether to charge at that station or try another station. This scheduler can evolve to integrate a new pricing policy and services, enriching the electric vehicle transport system.

  1. Low cost and efficient kurtosis-based deflationary ICA method: application to MRS sources separation problem.

    PubMed

    Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L

    2016-08-01

    Improving the execution time and the numerical complexity of the well-known kurtosis-based maximization method, the RobustICA, is investigated in this paper. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation using the nonlinear Conjugate Gradient one is investigated also. Regarding the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and the considered implementations inherit the global plane search of the initial RobustICA method for which a better convergence speed for a given direction is still guaranteed. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches notably the quasi-Newton one using the BFGS method.

  2. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a set of samples is needed for implementing the proposed importance analysis by the proposed estimators, thus the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.

  3. Local Minima Free Parameterized Appearance Models

    PubMed Central

    Nguyen, Minh Hoai; De la Torre, Fernando

    2010-01-01

    Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at and only at the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750

  4. An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks

    PubMed Central

    Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed

    2016-01-01

    Body Area Networks (BANs) consist of various sensors which gather patient’s vital signs and deliver them to doctors. One of the most significant challenges faced, is the design of an energy-efficient next hop selection algorithm to satisfy Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm is proposed in multi-hop BANs. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and the link reliability of the neighboring nodes, which is used to balance the energy consumption and to satisfy QoS requirements in terms of end to end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end to end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586

  5. Superpixel Cut for Figure-Ground Image Segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Michael Ying; Rosenhahn, Bodo

    2016-06-01

    Figure-ground image segmentation has been a challenging problem in computer vision. Apart from the difficulties in establishing an effective framework to divide the image pixels into meaningful groups, the notions of figure and ground often need to be properly defined by providing either user inputs or object models. In this paper, we propose a novel graph-based segmentation framework, called superpixel cut. The key idea is to formulate foreground segmentation as finding a subset of superpixels that partitions a graph over superpixels. The problem is formulated as Min-Cut. Therefore, we propose a novel cost function that simultaneously minimizes the inter-class similarity while maximizing the intra-class similarity. This cost function is optimized using parametric programming. After a small learning step, our approach is fully automatic and fully bottom-up, which requires no high-level knowledge such as shape priors and scene content. It recovers coherent components of images, providing a set of multiscale hypotheses for high-level reasoning. We evaluate our proposed framework by comparing it to other generic figure-ground segmentation approaches. Our method achieves improved performance on state-of-the-art benchmark databases.

  6. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    PubMed Central

    Sánchez-Pérez, Inma; Ibern, Pere; Coderch, Jordi; Inoriza, José María

    2016-01-01

    Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia-Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals. PMID:28316542

  7. Abatement costs of soil conservation in China's Loess Plateau: balancing income with conservation in an agricultural system.

    PubMed

    Hou, Lingling; Hoag, Dana L K; Keske, Catherine M H

    2015-02-01

    This study proposes the use of marginal abatement cost curves to calculate environmental damages of agricultural systems in China's Loess Plateau. Total system costs and revenues, management characteristics and pollution attributes are imputed into a directional output distance function, which is then used to determine shadow prices and abatement cost curves for soil and nitrogen loss. Marginal abatement costs curves are an effective way to compare economic and conservation tradeoffs when field-specific data are scarce. The results show that sustainable agricultural practices can balance soil conservation and agricultural production; land need not be retired, as is current policy. Published by Elsevier Ltd.

  8. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency are used to derive a MLE discriminator function. The optimal value of the cost function is searched by an efficient Levenberg-Marquardt (LM) method iteratively. Its performance including Cramér-Rao bound (CRB), dynamic characteristics and computation burden are analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations both in pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and conventional method is designed to achieve the optimal performance both in weak and strong signal circumstances.

  9. Reducing a cost of traumatic insemination: female bedbugs evolve a unique organ.

    PubMed Central

    Reinhardt, Klaus; Naylor, Richard; Siva-Jothy, Michael T

    2003-01-01

    The frequent wounding of female bedbugs (Cimex lectularius: Cimicidae) during copulation has been shown to decrease their fitness, but how females have responded to this cost in evolutionary terms is unclear. The evolution of a unique anatomical structure found in female bedbugs, the spermalege, into which the male's intromittent organ passes during traumatic insemination, is a possible counteradaptation to harmful male traits. Several functions have been proposed for this organ, and we test two hypotheses related to its role in sexual conflict. We examine the hypotheses that the spermalege functions to (i) defend against pathogens introduced during traumatic insemination; and (ii) reduce the costs of wound healing during traumatic insemination. Our results support the 'defence against pathogens' hypothesis, suggesting that the evolution of this unique cimicid organ resulted, at least partly, from selection to reduce the costs of mating-associated infection. We found no evidence that the spermalege reduces the costs of wound healing. PMID:14667353

  10. [The equivalence and interchangeability of medical articles].

    PubMed

    Antonov, V S

    2013-11-01

    The information concerning the interchangeability of medical articles is highly valuable because it makes it possible to correlate most precisely medical articles with medical technologies and medical care standards and to optimize budget costs under public purchasing. The proposed procedure of determination of interchangeability is based on criteria of equivalence of prescriptions, functional technical and technological characteristics and effectiveness of functioning of medical articles.

  11. Biological filters and their use in potable water filtration systems in spaceflight conditions

    NASA Astrophysics Data System (ADS)

    Thornhill, Starla G.; Kumar, Manish

    2018-05-01

    Providing drinking water to space missions such as the International Space Station (ISS) is a costly requirement for human habitation. To limit the costs of water transport, wastewater is collected and purified using a variety of physical and chemical means. To date, sand-based biofilters have been designed to function against gravity, and biofilms have been shown to form in microgravity conditions. Development of a universal silver-recycling biological filter system that is able to function in both microgravity and full gravity conditions would reduce the costs incurred in removing organic contaminants from wastewater by limiting the energy and chemical inputs required. This paper aims to propose the use of a sand-substrate biofilter to replace chemical means of water purification on manned spaceflights.

  12. The use of locally optimal trajectory management for base reaction control of robots in a microgravity environment

    NASA Technical Reports Server (NTRS)

    Lin, N. J.; Quinn, R. D.

    1991-01-01

    A locally-optimal trajectory management (LOTM) approach is analyzed, and it is found that care should be taken in choosing the Ritz expansion and cost function. A modified cost function for the LOTM approach is proposed which includes the kinetic energy along with the base reactions in a weighted and scale sum. The effects of the modified functions are demonstrated with numerical examples for robots operating in two- and three-dimensional space. It is pointed out that this modified LOTM approach shows good performance, the reactions do not fluctuate greatly, joint velocities reach their objectives at the end of the manifestation, and the CPU time is slightly more than twice the manipulation time.

  13. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications.

    PubMed

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-05-12

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing O ( 2 N 2 ) degrees of freedom (DOF) with O ( N ) physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array.

  14. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications

    PubMed Central

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-01-01

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing O(2N2) degrees of freedom (DOF) with O(N) physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array. PMID:28498329

  15. Economic Analysis and Optimal Sizing for behind-the-meter Battery Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael CW; Yang, Tao

    This paper proposes methods to estimate the potential benefits and determine the optimal energy and power capacity for behind-the-meter BSS. In the proposed method, a linear programming is first formulated only using typical load profiles, energy/demand charge rates, and a set of battery parameters to determine the maximum saving in electric energy cost. The optimization formulation is then adapted to include battery cost as a function of its power and energy capacity in order to capture the trade-off between benefits and cost, and therefore to determine the most economic battery size. Using the proposed methods, economic analysis and optimal sizingmore » have been performed for a few commercial buildings and utility rate structures that are representative of those found in the various regions of the Continental United States. The key factors that affect the economic benefits and optimal size have been identified. The proposed methods and case study results cannot only help commercial and industrial customers or battery vendors to evaluate and size the storage system for behind-the-meter application, but can also assist utilities and policy makers to design electricity rate or subsidies to promote the development of energy storage.« less

  16. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  17. Graph cuts with invariant object-interaction priors: application to intervertebral disc segmentation.

    PubMed

    Ben Ayed, Ismail; Punithakumar, Kumaradevan; Garvin, Gregory; Romano, Walter; Li, Shuo

    2011-01-01

    This study investigates novel object-interaction priors for graph cut image segmentation with application to intervertebral disc delineation in magnetic resonance (MR) lumbar spine images. The algorithm optimizes an original cost function which constrains the solution with learned prior knowledge about the geometric interactions between different objects in the image. Based on a global measure of similarity between distributions, the proposed priors are intrinsically invariant with respect to translation and rotation. We further introduce a scale variable from which we derive an original fixed-point equation (FPE), thereby achieving scale-invariance with only few fast computations. The proposed priors relax the need of costly pose estimation (or registration) procedures and large training sets (we used a single subject for training), and can tolerate shape deformations, unlike template-based priors. Our formulation leads to an NP-hard problem which does not afford a form directly amenable to graph cut optimization. We proceeded to a relaxation of the problem via an auxiliary function, thereby obtaining a nearly real-time solution with few graph cuts. Quantitative evaluations over 60 intervertebral discs acquired from 10 subjects demonstrated that the proposed algorithm yields a high correlation with independent manual segmentations by an expert. We further demonstrate experimentally the invariance of the proposed geometric attributes. This supports the fact that a single subject is sufficient for training our algorithm, and confirms the relevance of the proposed priors to disc segmentation.

  18. Network formation: neighborhood structures, establishment costs, and distributed learning.

    PubMed

    Chasparis, Georgios C; Shamma, Jeff S

    2013-12-01

    We consider the problem of network formation in a distributed fashion. Network formation is modeled as a strategic-form game, where agents represent nodes that form and sever unidirectional links with other nodes and derive utilities from these links. Furthermore, agents can form links only with a limited set of neighbors. Agents trade off the benefit from links, which is determined by a distance-dependent reward function, and the cost of maintaining links. When each agent acts independently, trying to maximize its own utility function, we can characterize “stable” networks through the notion of Nash equilibrium. In fact, the introduced reward and cost functions lead to Nash equilibria (networks), which exhibit several desirable properties such as connectivity, bounded-hop diameter, and efficiency (i.e., minimum number of links). Since Nash networks may not necessarily be efficient, we also explore the possibility of “shaping” the set of Nash networks through the introduction of state-based utility functions. Such utility functions may represent dynamic phenomena such as establishment costs (either positive or negative). Finally, we show how Nash networks can be the outcome of a distributed learning process. In particular, we extend previous learning processes to so-called “state-based” weakly acyclic games, and we show that the proposed network formation games belong to this class of games.

  19. Distributed Optimal Dispatch of Distributed Energy Resources Over Lossy Communication Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Junfeng; Yang, Tao; Wu, Di

    In this paper, we consider the economic dispatch problem (EDP), where a cost function that is assumed to be strictly convex is assigned to each of distributed energy resources (DERs), over packet dropping networks. The goal of a standard EDP is to minimize the total generation cost while meeting total demand and satisfying individual generator output limit. We propose a distributed algorithm for solving the EDP over networks. The proposed algorithm is resilient against packet drops over communication links. Under the assumption that the underlying communication network is strongly connected with a positive probability and the packet drops are independentmore » and identically distributed (i.i.d.), we show that the proposed algorithm is able to solve the EDP. Numerical simulation results are used to validate and illustrate the main results of the paper.« less

  20. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system including the cost of renewable generation and storage subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated trough different scenarios. Using a compromise-solution method, the best compromise between the reliability and cost will be realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Tests System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.

  1. Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O

    2009-04-01

    This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recent developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both the examples show that our approach provides more extensive design results for the existing LMI approach.

  2. Efficient experimental design for uncertainty reduction in gene regulatory networks.

    PubMed

    Dehghannasiri, Roozbeh; Yoon, Byung-Jun; Dougherty, Edward R

    2015-01-01

    An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/.

  3. Efficient experimental design for uncertainty reduction in gene regulatory networks

    PubMed Central

    2015-01-01

    Background An accurate understanding of interactions among genes plays a major role in developing therapeutic intervention methods. Gene regulatory networks often contain a significant amount of uncertainty. The process of prioritizing biological experiments to reduce the uncertainty of gene regulatory networks is called experimental design. Under such a strategy, the experiments with high priority are suggested to be conducted first. Results The authors have already proposed an optimal experimental design method based upon the objective for modeling gene regulatory networks, such as deriving therapeutic interventions. The experimental design method utilizes the concept of mean objective cost of uncertainty (MOCU). MOCU quantifies the expected increase of cost resulting from uncertainty. The optimal experiment to be conducted first is the one which leads to the minimum expected remaining MOCU subsequent to the experiment. In the process, one must find the optimal intervention for every gene regulatory network compatible with the prior knowledge, which can be prohibitively expensive when the size of the network is large. In this paper, we propose a computationally efficient experimental design method. This method incorporates a network reduction scheme by introducing a novel cost function that takes into account the disruption in the ranking of potential experiments. We then estimate the approximate expected remaining MOCU at a lower computational cost using the reduced networks. Conclusions Simulation results based on synthetic and real gene regulatory networks show that the proposed approximate method has close performance to that of the optimal method but at lower computational cost. The proposed approximate method also outperforms the random selection policy significantly. A MATLAB software implementing the proposed experimental design method is available at http://gsp.tamu.edu/Publications/supplementary/roozbeh15a/. PMID:26423515

  4. How Can It Cost That Much? A Three-Year Study of Proposal Production Costs.

    ERIC Educational Resources Information Center

    Wiese, W. C.; Bowden, C. Mal

    1997-01-01

    Examines significant new business proposal efforts for United States Department of Defense contracts. Identifies six "pillars" of a contractor's proposal preparation costs. Derives a formula that characterizes proposal preparation costs. Demonstrates that a quick, accurate cost model can be developed for proposal publishing. (RS)

  5. A review method for UML requirements analysis model employing system-side prototyping.

    PubMed

    Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    User interface prototyping is an effective method for users to validate the requirements defined by analysts at an early stage of a software development. However, a user interface prototype system offers weak support for the analysts to verify the consistency of the specifications about internal aspects of a system such as business logic. As the result, the inconsistency causes a lot of rework costs because the inconsistency often makes the developers impossible to actualize the system based on the specifications. For verifying such consistency, functional prototyping is an effective method for the analysts, but it needs a lot of costs and more detailed specifications. In this paper, we propose a review method so that analysts can verify the consistency among several different kinds of diagrams in UML efficiently by employing system-side prototyping without the detailed model. The system-side prototype system does not have any functions to achieve business logic, but visualizes the results of the integration among the diagrams in UML as Web pages. The usefulness of our proposal was evaluated by applying our proposal into a development of Library Management System (LMS) for a laboratory. This development was conducted by a group. As the result, our proposal was useful for discovering the serious inconsistency caused by the misunderstanding among the members of the group.

  6. a Preliminary Work on Layout Slam for Reconstruction of Indoor Corridor Environments

    NASA Astrophysics Data System (ADS)

    Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.

    2017-09-01

    We propose a real time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption at indoor spaces and uses the detected single image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in the 3D space are successfully handled through matching vanishing directions of consecutive video frames on the Gaussian sphere. Using the single image based indoor layout features for initializing the system, permitted the proposed method to perform real time layout estimation and camera localization in indoor corridor areas. For layout structural corner points matching, we adopted features which are invariant under scale, translation, and rotation. We proposed a new feature matching cost function which considers both local and global context information. The cost function consists of a unary term, which measures pixel to pixel orientation differences of the matched corners, and a binary term, which measures the amount of angle differences between directly connected layout corner features. We have performed the experiments on real scenes at York University campus buildings and the available RAWSEEDS dataset. The incoming results depict that the proposed method robustly performs along with producing very limited position and orientation errors.

  7. Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines

    PubMed Central

    Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing

    2014-01-01

    m unrelated parallel machines scheduling problems with variable job processing times are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that dependents on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and total resource cost. If the number of machines is a given constant number, we propose a polynomial time algorithm to solve the problem. PMID:24982933

  8. Alzheimer Classification Using a Minimum Spanning Tree of High-Order Functional Network on fMRI Dataset

    PubMed Central

    Guo, Hao; Liu, Lei; Chen, Junjie; Xu, Yong; Jie, Xiang

    2017-01-01

    Functional magnetic resonance imaging (fMRI) is one of the most useful methods to generate functional connectivity networks of the brain. However, conventional network generation methods ignore dynamic changes of functional connectivity between brain regions. Previous studies proposed constructing high-order functional connectivity networks that consider the time-varying characteristics of functional connectivity, and a clustering method was performed to decrease computational cost. However, random selection of the initial clustering centers and the number of clusters negatively affected classification accuracy, and the network lost neurological interpretability. Here we propose a novel method that introduces the minimum spanning tree method to high-order functional connectivity networks. As an unbiased method, the minimum spanning tree simplifies high-order network structure while preserving its core framework. The dynamic characteristics of time series are not lost with this approach, and the neurological interpretation of the network is guaranteed. Simultaneously, we propose a multi-parameter optimization framework that involves extracting discriminative features from the minimum spanning tree high-order functional connectivity networks. Compared with the conventional methods, our resting-state fMRI classification method based on minimum spanning tree high-order functional connectivity networks greatly improved the diagnostic accuracy for Alzheimer's disease. PMID:29249926

  9. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on the analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither the parameter selection and thresholds estimation nor the process of cost function selection and complex gradient computation. Computer simulations have been carried out to suggest that the proposed algorithm is effective for attenuating SαS impulsive noise, and then the proposed algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme has good performance for SαS impulsive noise attenuation.

  10. 41 CFR 109-1.5204 - Review and approval of a designated contractor's personal property management system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... overhaul; and (2) An analysis of the cost to implement the overhaul within a year versus a proposed... be based on a formal comprehensive appraisal or a series of formal appraisals of the functional...

  11. A general framework for regularized, similarity-based image restoration.

    PubMed

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning zero vector when applied to a constant image. Our algorithm comprises of outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to render the spectral analysis for the solutions of the corresponding linear equations. In addition, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.

  12. Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.

    PubMed

    Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry

    2011-10-01

    An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not received a final answer yet, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results might suggest that movements are not the results of the minimization of single but rather of composite cost functions. In order to better clarify this last point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited a consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing the combination of two cost functions, nominally a mix between the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.

  13. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using the cascading top-k sorting when a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps. And in each step an optimization problem of the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step is dependent on that of the afterwards steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.

  14. Lung vessel segmentation in CT images using graph-cuts

    NASA Astrophysics Data System (ADS)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

    Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the amount of voxels in high resolution CT scans, the memory requirement and time consumption for building a graph structure is very high. In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 score 0.76 and 0.69. In the VESSEL12 data-set, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.

  15. A machine-learning graph-based approach for 3D segmentation of Bruch's membrane opening from glaucomatous SD-OCT volumes.

    PubMed

    Miri, Mohammad Saleh; Abràmoff, Michael D; Kwon, Young H; Sonka, Milan; Garvin, Mona K

    2017-07-01

    Bruch's membrane opening-minimum rim width (BMO-MRW) is a recently proposed structural parameter which estimates the remaining nerve fiber bundles in the retina and is superior to other conventional structural parameters for diagnosing glaucoma. Measuring this structural parameter requires identification of BMO locations within spectral domain-optical coherence tomography (SD-OCT) volumes. While most automated approaches for segmentation of the BMO either segment the 2D projection of BMO points or identify BMO points in individual B-scans, in this work, we propose a machine-learning graph-based approach for true 3D segmentation of BMO from glaucomatous SD-OCT volumes. The problem is formulated as an optimization problem for finding a 3D path within the SD-OCT volume. In particular, the SD-OCT volumes are transferred to the radial domain where the closed loop BMO points in the original volume form a path within the radial volume. The estimated location of BMO points in 3D are identified by finding the projected location of BMO points using a graph-theoretic approach and mapping the projected locations onto the Bruch's membrane (BM) surface. Dynamic programming is employed in order to find the 3D BMO locations as the minimum-cost path within the volume. In order to compute the cost function needed for finding the minimum-cost path, a random forest classifier is utilized to learn a BMO model, obtained by extracting intensity features from the volumes in the training set, and computing the required 3D cost function. The proposed method is tested on 44 glaucoma patients and evaluated using manual delineations. Results show that the proposed method successfully identifies the 3D BMO locations and has significantly smaller errors compared to the existing 3D BMO identification approaches. Published by Elsevier B.V.

  16. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine optimal path for flying vehicle to search an object is proposed. Background of the paper is controlling air vehicle to search an object. Optimal path determination is one of the most popular problem in optimization. This paper describe model of control design for a flying vehicle to search an object, and focus on the optimal path that used to search an object. In this paper, optimal control model is used to control flying vehicle to make the vehicle move in optimal path. If the vehicle move in optimal path, then the path to reach the searched object also optimal. The cost Functional is one of the most important things in optimal control design, in this paper the cost functional make the air vehicle can move as soon as possible to reach the object. The axis reference of flying vehicle uses N-E-D (North-East-Down) coordinate system. The result of this paper are the theorems which say that the cost functional make the control optimal and make the vehicle move in optimal path are proved analytically. The other result of this paper also shows the cost functional which used is convex. The convexity of the cost functional is use for guarantee the existence of optimal control. This paper also expose some simulations to show an optimal path for flying vehicle to search an object. The optimization method which used to find the optimal control and optimal path vehicle in this paper is Pontryagin Minimum Principle.

  17. Designing scalable product families by the radial basis function-high-dimensional model representation metamodelling technique

    NASA Astrophysics Data System (ADS)

    Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary

    2015-10-01

    Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.

  18. Market Mechanism Design for Renewable Energy based on Risk Theory

    NASA Astrophysics Data System (ADS)

    Yang, Wu; Bo, Wang; Jichun, Liu; Wenjiao, Zai; Pingliang, Zeng; Haobo, Shi

    2018-02-01

    Generation trading between renewable energy and thermal power is an efficient market means for transforming supply structure of electric power into sustainable development pattern. But the trading is hampered by the output fluctuations of renewable energy and the cost differences between renewable energy and thermal power at present. In this paper, the external environmental cost (EEC) is defined and the EEC is introduced into the generation cost. At same time, the incentive functions of renewable energy and low-emission thermal power are designed, which are decreasing functions of EEC. On these bases, for the market risks caused by the random variability of EEC, the decision-making model of generation trading between renewable energy and thermal power is constructed according to the risk theory. The feasibility and effectiveness of the proposed model are verified by simulation results.

  19. Application of a territorial-based filtering algorithm in turbomachinery blade design optimization

    NASA Astrophysics Data System (ADS)

    Bahrami, Salman; Khelghatibana, Maryam; Tribes, Christophe; Yi Lo, Suk; von Fellenberg, Sven; Trépanier, Jean-Yves; Guibault, François

    2017-02-01

    A territorial-based filtering algorithm (TBFA) is proposed as an integration tool in a multi-level design optimization methodology. The design evaluation burden is split between low- and high-cost levels in order to properly balance the cost and required accuracy in different design stages, based on the characteristics and requirements of the case at hand. TBFA is in charge of connecting those levels by selecting a given number of geometrically different promising solutions from the low-cost level to be evaluated in the high-cost level. Two test case studies, a Francis runner and a transonic fan rotor, have demonstrated the robustness and functionality of TBFA in real industrial optimization problems.

  20. Ranked solutions to a class of combinatorial optimizations—with applications in mass spectrometry based peptide sequencing and a variant of directed paths in random media

    NASA Astrophysics Data System (ADS)

    Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo

    2005-08-01

    Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions-find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks-peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.

  1. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.

    2016-01-15

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter image from the measured detector values of a DPC acquisition. The proposed penaltymore » comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further it could be illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.« less

  2. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    PubMed

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. Automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification of pipe cracks using images analysis and neuro-fuzzy algorithm is proposed. In the preprocessing step the scanned images of pipe are analyzed and crack features are extracted. In the classification step the neuro-fuzzy algorithm is developed that employs a fuzzy membership function and error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation of feature values and the backpropagation network, with its learning ability, will show good classification efficiency.

  3. Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Knox, Lenora A.

    The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges; which include, defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture where functionality to complete a mission is disseminated across multiple UAVs (distributed) opposed to being contained in a single UAV (monolithic). The case study based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.

  4. Reviving Campbell's paradigm for attitude research.

    PubMed

    Kaiser, Florian G; Byrka, Katarzyna; Hartig, Terry

    2010-11-01

    Because people often say one thing and do another, social psychologists have abandoned the idea of a simple or axiomatic connection between attitude and behavior. Nearly 50 years ago, however, Donald Campbell proposed that the root of the seeming inconsistency between attitude and behavior lies in disregard of behavioral costs. According to Campbell, attitude- behavior gaps are empirical chimeras. Verbal claims and other overt behaviors regarding an attitude object all arise from one "behavioral disposition." In this article, the authors present the constituents of and evidence for a paradigm for attitude research that describes individual behavior as a function of a person's attitude level and the costs of the specific behavior involved. In the authors' version of Campbell's paradigm, they propose a formal and thus axiomatic rather than causal relationship between an attitude and its corresponding performances. The authors draw implications of their proposal for mainstream attitude theory, empirical research, and applications concerning attitudes.

  5. When Reputation Enforces Evolutionary Cooperation in Unreliable MANETs.

    PubMed

    Tang, Changbing; Li, Ang; Li, Xiang

    2015-10-01

    In self-organized mobile ad hoc networks (MANETs), network functions rely on cooperation of self-interested nodes, where a challenge is to enforce their mutual cooperation. In this paper, we study cooperative packet forwarding in a one-hop unreliable channel which results from loss of packets and noisy observation of transmissions. We propose an indirect reciprocity framework based on evolutionary game theory, and enforce cooperation of packet forwarding strategies in both structured and unstructured MANETs. Furthermore, we analyze the evolutionary dynamics of cooperative strategies and derive the threshold of benefit-to-cost ratio to guarantee the convergence of cooperation. The numerical simulations verify that the proposed evolutionary game theoretic solution enforces cooperation when the benefit-to-cost ratio of the altruistic exceeds the critical condition. In addition, the network throughput performance of our proposed strategy in structured MANETs is measured, which is in close agreement with that of the full cooperative strategy.

  6. An effective and secure key-management scheme for hierarchical access control in E-medicine system.

    PubMed

    Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit

    2013-04-01

    Recently several hierarchical access control schemes are proposed in the literature to provide security of e-medicine systems. However, most of them are either insecure against 'man-in-the-middle attack' or they require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from large storage space for public parameters in public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to man-in-the-middle attack. In order to remedy this security weakness in Wu-Chen's scheme, they proposed a secure scheme which is again based on ECC (elliptic curve cryptography) and efficient one-way hash function. However, their scheme incurs huge computational cost for providing verification of public information in the public domain as their scheme uses ECC digital signature which is costly when compared to symmetric-key cryptosystem. In this paper, we propose an effective access control scheme in user hierarchy which is only based on symmetric-key cryptosystem and efficient one-way hash function. We show that our scheme reduces significantly the storage space for both public and private domains, and computational complexity when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through the informal and formal security analysis, we further show that our scheme is secure against different attacks and also man-in-the-middle attack. Moreover, dynamic access control problems in our scheme are also solved efficiently compared to other related schemes, making our scheme is much suitable for practical applications of e-medicine systems.

  7. Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.

    PubMed

    Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís

    2010-10-01

    Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytically standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.

  8. A wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Yang, Yanxi

    2018-05-01

    We present a new wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry (2-D WTP). First of all, the maximum value point is extracted from two-dimensional wavelet transform coefficient modulus, and the local extreme value points over 90% of maximum value are also obtained, they both constitute wavelet ridge candidates. Then, the gradient of rotate factor is introduced into the Abid's cost function, and the logarithmic Logistic model is used to adjust and improve the cost function weights so as to obtain more reasonable value estimation. At last, the dynamic programming method is used to accurately find the optimal wavelet ridge, and the wrapped phase can be obtained by extracting the phase at the ridge. Its advantage is that, the fringe pattern with low signal-to-noise ratio can be demodulated accurately, and its noise immunity will be better. Meanwhile, only one fringe pattern is needed to projected to measured object, so dynamic three-dimensional (3-D) measurement in harsh environment can be realized. Computer simulation and experimental results show that, for the fringe pattern with noise pollution, the 3-D surface recovery accuracy by the proposed algorithm is increased. In addition, the demodulation phase accuracy of Morlet, Fan and Cauchy mother wavelets are compared.

  9. A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication

    PubMed Central

    Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo

    2017-01-01

    Image interleaving has proven to be an effective solution to provide the robustness of image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based in Simulated Annealing (SA), including a cost function, which allows assessing the suitability of a packetization pattern, avoiding extensive simulations. In this work, we present a complementary study about the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm, as an alternative to the cited work, adopting the mentioned cost function, then comparing it to the SA approach and a torus automorphism interleaver. In addition, we engage the validation of the cost function and provide results attempting to conclude about its implication in the quality of reconstructed images. Several scenarios based on visual sensor networks applications were tested in a computer application. Results in terms of the selected cost function and image quality metric PSNR show that our algorithm presents similar results to the other approaches. Finally, we discuss the obtained results and comment about open research challenges. PMID:28452934

  10. 48 CFR 970.3102-05-18 - Independent research and development and bid and proposal costs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... development and bid and proposal costs. 970.3102-05-18 Section 970.3102-05-18 Federal Acquisition Regulations... Contract Cost Principles and Procedures 970.3102-05-18 Independent research and development and bid and proposal costs. (c) Independent Research and Development and Bid and Proposal costs are unallowable...

  11. The Time Dependent Propensity Function for Acceleration of Spatial Stochastic Simulation of Reaction-Diffusion Systems

    PubMed Central

    Wu, Sheng; Li, Hong; Petzold, Linda R.

    2015-01-01

    The inhomogeneous stochastic simulation algorithm (ISSA) is a fundamental method for spatial stochastic simulation. However, when diffusion events occur more frequently than reaction events, simulating the diffusion events by ISSA is quite costly. To reduce this cost, we propose to use the time dependent propensity function in each step. In this way we can avoid simulating individual diffusion events, and use the time interval between two adjacent reaction events as the simulation stepsize. We demonstrate that the new algorithm can achieve orders of magnitude efficiency gains over widely-used exact algorithms, scales well with increasing grid resolution, and maintains a high level of accuracy. PMID:26609185

  12. Optimisation by hierarchical search

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  13. On real-space Density Functional Theory for non-orthogonal crystal systems: Kronecker product formulation of the kinetic energy operator

    NASA Astrophysics Data System (ADS)

    Sharma, Abhiraj; Suryanarayana, Phanish

    2018-05-01

    We present an accurate and efficient real-space Density Functional Theory (DFT) framework for the ab initio study of non-orthogonal crystal systems. Specifically, employing a local reformulation of the electrostatics, we develop a novel Kronecker product formulation of the real-space kinetic energy operator that significantly reduces the number of operations associated with the Laplacian-vector multiplication, the dominant cost in practical computations. In particular, we reduce the scaling with respect to finite-difference order from quadratic to linear, thereby significantly bridging the gap in computational cost between non-orthogonal and orthogonal systems. We verify the accuracy and efficiency of the proposed methodology through selected examples.

  14. A real-space approach to the X-ray phase problem

    NASA Astrophysics Data System (ADS)

    Liu, Xiangan

    Over the past few decades, the phase problem of X-ray crystallography has been explored in reciprocal space in the so called direct methods . Here we investigate the problem using a real-space approach that bypasses the laborious procedure of frequent Fourier synthesis and peak picking. Starting from a completely random structure, we move the atoms around in real space to minimize a cost function. A Monte Carlo method named simulated annealing (SA) is employed to search the global minimum of the cost function which could be constructed in either real space or reciprocal space. In the hybrid minimal principle, we combine the dual space costs together. One part of the cost function monitors the probability distribution of the phase triplets, while the other is a real space cost function which represents the discrepancy between measured and calculated intensities. Compared to the single space cost functions, the dual space cost function has a greatly improved landscape and therefore could prevent the system from being trapped in metastable states. Thus, the structures of large molecules such as virginiamycin (C43H 49N7O10 · 3CH0OH), isoleucinomycin (C60H102N 6O18) and hexadecaisoleucinomycin (HEXIL) (C80H136 N8O24) can now be solved, whereas it would not be possible using the single cost function. When a molecule gets larger, the configurational space becomes larger, and the requirement of CPU time increases exponentially. The method of improved Monte Carlo sampling has demonstrated its capability to solve large molecular structures. The atoms are encouraged to sample the high density regions in space determined by an approximate density map which in turn is updated and modified by averaging and Fourier synthesis. This type of biased sampling has led to considerable reduction of the configurational space. It greatly improves the algorithm compared to the previous uniform sampling. Hence, for instance, 90% of computer run time could be cut in solving the complex structure of isoleucinomycin. Successful trial calculations include larger molecular structures such as HEXIL and a collagen-like peptide (PPG). Moving chemical fragment is proposed to reduce the degrees of freedom. Furthermore, stereochemical parameters are considered for geometric constraints and for a cost function related to chemical energy.

  15. Economic Model Predictive Control of Bihormonal Artificial Pancreas System Based on Switching Control and Dynamic R-parameter.

    PubMed

    Tang, Fengna; Wang, Youqing

    2017-11-01

    Blood glucose (BG) regulation is a long-term task for people with diabetes. In recent years, more and more researchers have attempted to achieve automated regulation of BG using automatic control algorithms, called the artificial pancreas (AP) system. In clinical practice, it is equally important to guarantee the treatment effect and reduce the treatment costs. The main motivation of this study is to reduce the cure burden. The dynamic R-parameter economic model predictive control (R-EMPC) is chosen to regulate the delivery rates of exogenous hormones (insulin and glucagon). It uses particle swarm optimization (PSO) to optimize the economic cost function and the switching logic between insulin delivery and glucagon delivery is designed based on switching control theory. The proposed method is first tested on the standard subject; the result is compared with the switching PID and the switching MPC. The effect of the dynamic R-parameter on improving the control performance is illustrated by comparing the results of the EMPC and the R-EMPC. Finally, the robustness tests on meal change (size and timing), hormone sensitivity (insulin and glucagon), and subject variability are performed. All results show that the proposed method can improve the control performance and reduce the economic costs. The simulation results verify the effectiveness of the proposed algorithm on improving the tracking performance, enhancing robustness, and reducing economic costs. The method proposed in this study owns great worth in practical application.

  16. A tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel independent brake from moderate driving to limit handling

    NASA Astrophysics Data System (ADS)

    Joa, Eunhyek; Park, Kwanwoo; Koh, Youngil; Yi, Kyongsu; Kim, Kilsoo

    2018-04-01

    This paper presents a tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel braking for enhanced performance from moderate driving to limit handling. The proposed algorithm adopted hierarchical structure: supervisor - desired motion tracking controller - optimisation-based control allocation. In the supervisor, by considering transient cornering characteristics, desired vehicle motion is calculated. In the desired motion tracking controller, in order to track desired vehicle motion, virtual control input is determined in the manner of sliding mode control. In the control allocation, virtual control input is allocated to minimise cost function. The cost function consists of two major parts. First part is a slip-based tyre friction utilisation quantification, which does not need a tyre force estimation. Second part is an allocation guideline, which guides optimally allocated inputs to predefined solution. The proposed algorithm has been investigated via simulation from moderate driving to limit handling scenario. Compared to Base and direct yaw moment control system, the proposed algorithm can effectively reduce tyre dissipation energy in the moderate driving situation. Moreover, the proposed algorithm enhances limit handling performance compared to Base and direct yaw moment control system. In addition to comparison with Base and direct yaw moment control, comparison the proposed algorithm with the control algorithm based on the known tyre force information has been conducted. The results show that the performance of the proposed algorithm is similar with that of the control algorithm with the known tyre force information.

  17. [Determination of cost-effective strategies in colorectal cancer screening].

    PubMed

    Dervaux, B; Eeckhoudt, L; Lebrun, T; Sailly, J C

    1992-01-01

    The object of the article is to implement particular methodologies in order to determine which strategies are cost-effective in the mass screening of colorectal cancer after a positive Hemoccult test. The first approach to be presented consists in proposing a method which enables all the admissible diagnostic strategies to be determined. The second approach enables a minimal cost function to be estimated using an adaptation of "Data Envelopment Analysis". This method proves to be particularly successful in cost-efficiency analysis, when the performance indicators are numerous and hard to aggregate. The results show that there are two cost-effective strategies after a positive Hemoccult test: coloscopy and sigmoidoscopy; they put into question the relevance of double contrast barium enema in the diagnosis of colo-rectal lesions.

  18. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    PubMed

    Yeh, Wei-Chang

    Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses the Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in ANN to evaluate SNRFs. According to the experimental results of the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach with at least 16.6% improvement in the median absolute deviation in the cost of extra 2 s on average for all experiments.Network reliability is an important index to the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses the Monte Carlo simulation to estimate the corresponding reliability of a given designed matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and activation functions of the hidden layer and the output layer in ANN to evaluate SNRFs. According to the experimental results of the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach with at least 16.6% improvement in the median absolute deviation in the cost of extra 2 s on average for all experiments.

  19. Neural-Network-Based Robust Optimal Tracking Control for MIMO Discrete-Time Systems With Unknown Uncertainty Using Adaptive Critic Design.

    PubMed

    Liu, Lei; Wang, Zhanshan; Zhang, Huaguang

    2018-04-01

    This paper is concerned with the robust optimal tracking control strategy for a class of nonlinear multi-input multi-output discrete-time systems with unknown uncertainty via adaptive critic design (ACD) scheme. The main purpose is to establish an adaptive actor-critic control method, so that the cost function in the procedure of dealing with uncertainty is minimum and the closed-loop system is stable. Based on the neural network approximator, an action network is applied to generate the optimal control signal and a critic network is used to approximate the cost function, respectively. In contrast to the previous methods, the main features of this paper are: 1) the ACD scheme is integrated into the controllers to cope with the uncertainty and 2) a novel cost function, which is not in quadric form, is proposed so that the total cost in the design procedure is reduced. It is proved that the optimal control signals and the tracking errors are uniformly ultimately bounded even when the uncertainty exists. Finally, a numerical simulation is developed to show the effectiveness of the present approach.

  20. Dynamic remedial action scheme using online transient stability analysis

    NASA Astrophysics Data System (ADS)

    Shrestha, Arun

    Economic pressure and environmental factors have forced the modern power systems to operate closer to their stability limits. However, maintaining transient stability is a fundamental requirement for the operation of interconnected power systems. In North America, power systems are planned and operated to withstand the loss of any single or multiple elements without violating North American Electric Reliability Corporation (NERC) system performance criteria. For a contingency resulting in the loss of multiple elements (Category C), emergency transient stability controls may be necessary to stabilize the power system. Emergency control is designed to sense abnormal conditions and subsequently take pre-determined remedial actions to prevent instability. Commonly known as either Remedial Action Schemes (RAS) or as Special/System Protection Schemes (SPS), these emergency control approaches have been extensively adopted by utilities. RAS are designed to address specific problems, e.g. to increase power transfer, to provide reactive support, to address generator instability, to limit thermal overloads, etc. Possible remedial actions include generator tripping, load shedding, capacitor and reactor switching, static VAR control, etc. Among various RAS types, generation shedding is the most effective and widely used emergency control means for maintaining system stability. In this dissertation, an optimal power flow (OPF)-based generation-shedding RAS is proposed. This scheme uses online transient stability calculation and generator cost function to determine appropriate remedial actions. For transient stability calculation, SIngle Machine Equivalent (SIME) technique is used, which reduces the multimachine power system model to a One-Machine Infinite Bus (OMIB) equivalent and identifies critical machines. Unlike conventional RAS, which are designed using offline simulations, online stability calculations make the proposed RAS dynamic and adapting to any power system configuration and operating state. The generation-shedding cost is calculated using pre-RAS and post-RAS OPF costs. The criteria for selecting generators to trip is based on the minimum cost rather than minimum amount of generation to shed. For an unstable Category C contingency, the RAS control action that results in stable system with minimum generation shedding cost is selected among possible candidate solutions. The RAS control actions update whenever there is a change in operating condition, system configuration, or cost functions. The effectiveness of the proposed technique is demonstrated by simulations on the IEEE 9-bus system, the IEEE 39-bus system, and IEEE 145-bus system. This dissertation also proposes an improved, yet relatively simple, technique for solving Transient Stability-Constrained Optimal Power Flow (TSC-OPF) problem. Using the SIME method, the sets of dynamic and transient stability constraints are reduced to a single stability constraint, decreasing the overall size of the optimization problem. The transient stability constraint is formulated using the critical machines' power at the initial time step, rather than using the machine rotor angles. This avoids the addition of machine steady state stator algebraic equations in the conventional OPF algorithm. A systematic approach to reach an optimal solution is developed by exploring the quasi-linear behavior of critical machine power and stability margin. The proposed method shifts critical machines active power based on generator costs using an OPF algorithm. 
Moreover, the transient stability limit is based on stability margin, and not on a heuristically set limit on OMIB rotor angle. As a result, the proposed TSC-OPF solution is more economical and transparent. The proposed technique enables the use of fast and robust commercial OPF tool and time-domain simulation software for solving large scale TSC-OPF problem, which makes the proposed method also suitable for real-time application.

  1. Economic evaluations of comprehensive geriatric assessment in surgical patients: a systematic review.

    PubMed

    Eamer, Gilgamesh; Saravana-Bawan, Bianka; van der Westhuizen, Brenden; Chambers, Thane; Ohinmaa, Arto; Khadaroo, Rachel G

    2017-10-01

    Seniors presenting with surgical disease face increased risk of postoperative morbidity and mortality and have increased treatment costs. Comprehensive Geriatric Assessment (CGA) is proposed to reduce morbidity, mortality, and the cost after surgery. A systematic review of CGA in emergency surgical patients was conducted. The primary outcome was cost-effectiveness; secondary outcomes were length of stay, return of function, and mortality. Inclusion and exclusion criteria were predefined. Systematic searches of MEDLINE, Embase, Cochrane, and National Health Service Economic Evaluation Database were performed. Text screening, bias assessment, and data extraction were performed by two authors. There were 560 articles identified; abstract review excluded 499 articles and full-text review excluded 53 articles. Eight studies were included; one nonorthopedic trauma and seven orthopedic trauma studies. Bias assessment revealed moderate to high risk of bias for all studies. Economic evaluation assessment identified two high-quality studies and six moderate or low quality studies. Pooled analysis from four studies assessed loss of function; loss of function decreased in the experimental arm (odds ratio 0.92, 95% confidence interval [CI]: 0.88-0.97). Pooled results for length of stay from five studies found a significant decrease (mean difference: -1.17, 95% CI: -1.63 to -0.71) after excluding the nonorthopedic trauma study. Pooled mortality was significantly decreased in seven studies (risk ratio: 0.78, 95% CI: 0.67-0.90). All studies decreased cost and improved health outcomes in a cost-effective manner. CGA improved return of function and mortality with reduced cost or improved utility. Our review suggests that CGA is economically dominant and the most cost-effective care model for orthogeriatric patients. Further research should examine other surgical fields. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Biological filters and their use in potable water filtration systems in spaceflight conditions.

    PubMed

    Thornhill, Starla G; Kumar, Manish

    2018-05-01

    Providing drinking water to space missions such as the International Space Station (ISS) is a costly requirement for human habitation. To limit the costs of water transport, wastewater is collected and purified using a variety of physical and chemical means. To date, sand-based biofilters have been designed to function against gravity, and biofilms have been shown to form in microgravity conditions. Development of a universal silver-recycling biological filter system that is able to function in both microgravity and full gravity conditions would reduce the costs incurred in removing organic contaminants from wastewater by limiting the energy and chemical inputs required. This paper aims to propose the use of a sand-substrate biofilter to replace chemical means of water purification on manned spaceflights. Copyright © 2018 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.

  3. Crystal structure prediction supported by incomplete experimental data

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji

    2018-05-01

    We propose an efficient theoretical scheme for structure prediction on the basis of the idea of combining methods, which optimize theoretical calculation and experimental data simultaneously. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function which is defined with partial experimental data totally insufficient for conventional structure analysis. In particular, we define the cost function using "crystallinity" formulated with only peak positions within the small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information of diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.

  4. Space Shuttle avionics upgrade - Issues and opportunities

    NASA Astrophysics Data System (ADS)

    Swaim, Richard A.; Wingert, William B.

    An overview is conducted of existing Space Shuttle avionics and the possibilities for upgrading the cockpit to reduce costs and increase functionability. The current avionics include five general-purpose computers fitted with multifunction displays, dedicated switches and indicators, and dedicated flight instruments. The operational needs of the Shuttle are reviewed in the light of the avionics and potential upgrades in the form of microprocessors and display systems. The use of better processors can provide hardware support for multitasking and memory management and can reduce the life-cycle cost for software. Some limitations of the current technology are acknowledged including the Shuttle's power budget and structural configuration. A phased infusion of upgraded avionics is proposed that provides a functionally transparent replacement of crew-interface equipment as well as the addition of interface enhancements and the migration of selected functions.

  5. 48 CFR 52.230-7 - Proposal Disclosure-Cost Accounting Practice Changes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Provisions and Clauses 52.230-7 Proposal Disclosure—Cost Accounting Practice Changes. As prescribed in 30.201-3(c), insert the following provision: Proposal Disclosure—Cost Accounting Practice Changes (APR 2005... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Proposal Disclosure-Cost...

  6. Long-term care financing through Federal tax incentives.

    PubMed

    Moran, D W; Weingart, J M

    1988-12-01

    Congress and the Administration are currently exploring various methods of promoting access to long-term care. In this article, an inventory of recent legislative proposals for using the Federal tax code to expand access to long-term care services is provided. Proposals are arrayed along a functional typology that includes tax mechanisms to encourage accumulation of funds, promote purchase of long-term care insurance, or induce the diversion of funds accumulated for another purpose (such as individual retirement accounts). The proposals are evaluated against the public policy objective of encouraging risk pooling to minimize social cost.

  7. Long-term care financing through Federal tax incentives

    PubMed Central

    Moran, Donald W.; Weingart, Janet M.

    1988-01-01

    Congress and the Administration are currently exploring various methods of promoting access to long-term care. In this article, an inventory of recent legislative proposals for using the Federal tax code to expand access to long-term care services is provided. Proposals are arrayed along a functional typology that includes tax mechanisms to encourage accumulation of funds, promote purchase of long-term care insurance, or induce the diversion of funds accumulated for another purpose (such as individual retirement accounts). The proposals are evaluated against the public policy objective of encouraging risk pooling to minimize social cost. PMID:10312964

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko

    Long-range corrected density functional theory (LC-DFT) attracts many chemists’ attentions as a quantum chemical method to be applied to large molecular system and its property calculations. However, the expensive time cost to evaluate the long-range HF exchange is a big obstacle to be overcome to be applied to the large molecular systems and the solid state materials. Upon this problem, we propose a linear-scaling method of the HF exchange integration, in particular, for the LC-DFT hybrid functional.

  9. A novel edge-preserving nonnegative matrix factorization method for spectral unmixing

    NASA Astrophysics Data System (ADS)

    Bao, Wenxing; Ma, Ruishi

    2015-12-01

    Spectral unmixing technique is one of the key techniques to identify and classify the material in the hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization(NMF) is presented in this paper. This paper used an edge-preserving function as hypersurface cost function to minimize the nonnegative matrix factorization. To minimize the hypersurface cost function, we constructed the updating functions for signature matrix of end-members and abundance fraction respectively. The two functions are updated alternatively. For evaluation purpose, synthetic data and real data have been used in this paper. Synthetic data is used based on end-members from USGS digital spectral library. AVIRIS Cuprite dataset have been used as real data. The spectral angle distance (SAD) and abundance angle distance(AAD) have been used in this research for assessment the performance of proposed method. The experimental results show that this method can obtain more ideal results and good accuracy for spectral unmixing than present methods.

  10. Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography

    PubMed Central

    Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier

    2015-01-01

    This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371

  11. 48 CFR 970.3102-05-18 - Independent research and development and bid and proposal costs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... proposal costs. (c) Independent Research and Development and Bid and Proposal costs are unallowable... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Independent research and development and bid and proposal costs. 970.3102-05-18 Section 970.3102-05-18 Federal Acquisition Regulations...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to take account of the Pauli Exclusion Principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in low field regime, where obtained electron distribution function takes values exceeding unity. Modified algorithms were already proposed and allow to correctly account for electron scattering on phonons or impurities. Present paper extends this approach and proposes improved simulation scheme allowing including Pauli exclusion principle for electron–electron (e–e) scattering into MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. Proposed algorithm is applied to study transport propertiesmore » of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering in the case of linear band dispersion relation. Hence, this part of the simulation algorithm is described in details.« less

  13. Shifting orders among suppliers considering risk, price and transportation cost

    NASA Astrophysics Data System (ADS)

    Revitasari, C.; Pujawan, I. N.

    2018-04-01

    Supplier order allocation is an important supply chain decision for an enterprise. It is related to the supplier’s function as a raw material provider and other supporting materials that will be used in production process. Most of works on order allocation has been based on costs and other supply chain performance, but very limited of them taking risks into consideration. In this paper we address the problem of order allocation of a single commodity sourced from multiple suppliers considering supply risks in addition to the attempt of minimizing transportation costs. The supply chain risk was investigated and a procedure was proposed in the risk mitigation phase as a form of risk profile. The objective including risk profile in order allocation is to maximize the product flow from a risky supplier to a relatively less risky supplier. The proposed procedure is applied to a sugar company. The result suggests that order allocations should be maximized to suppliers that have a relatively low risk and minimized to suppliers that have a relatively larger risks.

  14. Low cost satellite land mobile service for nationwide applications

    NASA Technical Reports Server (NTRS)

    Weiss, J. A.

    1978-01-01

    A satellite land mobile system using mobile radios in the UHF band, and Ku-band Communications Routing Terminals (earth stations) for a nationwide connection from any mobile location to any fixed or mobile location, and from any fixed location to any mobile location is proposed. The proposed nationwide satellite land mobile service provides: telephone network quality (1 out of 100 blockage) service, complete privacy for all the users, operation similar to the telephone network, alternatives for data services up to 32 Kbps data rates, and a cost effective and practical mobile radio compatible with system sizes ranging from 10,000 to 1,000,000 users. Seven satellite alternatives (ranging from 30 ft diameter dual beam antenna to 210 ft diameter 77 beam antenna) along with mobile radios having a sensitivity figure of merit (G/T) of -15 dB/deg K are considered. Optimized mobile radio user costs are presented as a function of the number of users with the satellite and mobile radio alternatives as system parameters.

  15. Alzheimer's disease detection via automatic 3D caudate nucleus segmentation using coupled dictionary learning with level set formulation.

    PubMed

    Al-Shaikhli, Saif Dawood Salman; Yang, Michael Ying; Rosenhahn, Bodo

    2016-12-01

    This paper presents a novel method for Alzheimer's disease classification via an automatic 3D caudate nucleus segmentation. The proposed method consists of segmentation and classification steps. In the segmentation step, we propose a novel level set cost function. The proposed cost function is constrained by a sparse representation of local image features using a dictionary learning method. We present coupled dictionaries: a feature dictionary of a grayscale brain image and a label dictionary of a caudate nucleus label image. Using online dictionary learning, the coupled dictionaries are learned from the training data. The learned coupled dictionaries are embedded into a level set function. In the classification step, a region-based feature dictionary is built. The region-based feature dictionary is learned from shape features of the caudate nucleus in the training data. The classification is based on the measure of the similarity between the sparse representation of region-based shape features of the segmented caudate in the test image and the region-based feature dictionary. The experimental results demonstrate the superiority of our method over the state-of-the-art methods by achieving a high segmentation (91.5%) and classification (92.5%) accuracy. In this paper, we find that the study of the caudate nucleus atrophy gives an advantage over the study of whole brain structure atrophy to detect Alzheimer's disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    NASA Astrophysics Data System (ADS)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper regards with the minimization of total cost of Greenhouse Gas (GHG) efficiency in Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on tax cost, penalty cost and discount cost of GHG emission of AS/RS. A two stage algorithm namely positive selection based clonal selection principle (PSBCSP) is used to find the optimal solution of the constructed model. In the first stage positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the later stage clonal selection principle is used to generate best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields a better result compared to others.

  17. A Method for Scheduling Air Traffic with Uncertain En Route Capacity Constraints

    NASA Technical Reports Server (NTRS)

    Arneson, Heather; Bloem, Michael

    2009-01-01

    A method for scheduling ground delay and airborne holding for flights scheduled to fly through airspace with uncertain capacity constraints is presented. The method iteratively solves linear programs for departure rates and airborne holding as new probabilistic information about future airspace constraints becomes available. The objective function is the expected value of the weighted sum of ground and airborne delay. In order to limit operationally costly changes to departure rates, they are updated only when such an update would lead to a significant cost reduction. Simulation results show a 13% cost reduction over a rough approximation of current practices. Comparison between the proposed as needed replanning method and a similar method that uses fixed frequency replanning shows a typical cost reduction of 1% to 2%, and even up to a 20% cost reduction in some cases.

  18. Technology Candidates for Air-to-Air and Air-to-Ground Data Exchange

    NASA Technical Reports Server (NTRS)

    Haynes, Brian D.

    2015-01-01

    Technology Candidates for Air-to-Air and Air-to-Ground Data Exchange is a two-year research effort to visualize the U. S. aviation industry at a point 50 years in the future, and to define potential communication solutions to meet those future data exchange needs. The research team, led by XCELAR, was tasked with identifying future National Airspace System (NAS) scenarios, determining requirements and functions (including gaps), investigating technical and business issues for air, ground, & air-to-ground interactions, and reporting on the results. The project was conducted under technical direction from NASA and in collaboration with XCELAR's partner, National Institute of Aerospace, and NASA technical representatives. Parallel efforts were initiated to define the information exchange functional needs of the future NAS, and specific communication link technologies to potentially serve those needs. Those efforts converged with the mapping of each identified future NAS function to potential enabling communication solutions; those solutions were then compared with, and ranked relative to, each other on a technical basis in a structured analysis process. The technical solutions emerging from that process were then assessed from a business case perspective to determine their viability from a real-world adoption and deployment standpoint. The results of that analysis produced a proposed set of future solutions and most promising candidate technologies. Gap analyses were conducted at two points in the process, the first examining technical factors, and the second as part of the business case analysis. In each case, no gaps or unmet needs were identified in applying the solutions evaluated to the requirements identified. The future communication solutions identified in the research comprise both specific link technologies and two enabling technologies that apply to most or all specific links. As a result, the research resulted in a new analysis approach, viewing the underlying architecture of ground-air and air-air communications as a whole, rather than as simple "link to function" paired solutions. For the business case analysis, a number of "reference architectures" were developed for both the future technologies and the current systems, based on three typical configurations of current aircraft. Current and future costs were assigned, and various comparisons made between the current and future architectures. In general, it was assumed that if a future architecture offers lower cost than the current typical architecture, while delivering equivalent or better performance, it is likely that the future solution will gain industry acceptance. Conversely, future architectures presenting higher costs than their current counterparts must present a compelling benefit case in other areas or risk a lack of industry acceptance. The business case analysis consistently indicated lower costs for the proposed future architectures, and in most cases, significantly so. The proposed future solutions were found to offer significantly greater functionality, flexibility, and growth potential over time, at lower cost, than current systems. This was true for overall, fleet-wide equipage for domestic and oceanic air carriers, as well as for single, General Aviation (GA) aircraft. The overall research results indicate that all identified requirements can be met by the proposed solutions with significant capacity for future growth. 
Results also illustrate that the majority of the future communication needs can be met using currently allocated aviation RF spectrum, if used in more effective ways than it is today. A combination of such optimized aviation-specific links and commercial communication systems meets all identified needs for the 50-year future and beyond, with the caveat that a new, overall function will be needed to manage all information exchange, individual links, security, cost, and other factors. This function was labeled "Delivery Manager" (DM) within this research. DM employs a distributed client/server architecture, for both airborne and ground communications architectures. Final research results included identifying the most promising candidate technologies for the future system, conclusions and recommendations, and identifying areas where further research should be considered.

  19. Wavefield reconstruction inversion with a multiplicative cost function

    NASA Astrophysics Data System (ADS)

    da Silva, Nuno V.; Yao, Gang

    2018-01-01

    We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave equation constraining term. The trade-off between the two terms is balanced by a scaling factor that balances the contributions of the data-misfit term and the constraining term to the value of the objective function. If this parameter is too large then it implies penalizing for the wave equation imposing a hard constraint in the inversion. If it is too small, then this leads to a poorly constrained solution as it is essentially penalizing for the data misfit and not taking into account the physics that explains the data. This paper introduces a new approach for the formulation of WRI recasting its formulation into a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function when the trade-off parameter is appropriately scaled in the latter, when adapting it throughout the iterations, and when the data is contaminated with Gaussian random noise. Thus this work contributes with a framework for a more automated application of WRI.

  20. Efficient three-dimensional Poisson solvers in open rectangular conducting pipe

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2016-06-01

    Three-dimensional (3D) Poisson solver plays an important role in the study of space-charge effects on charged particle beam dynamics in particle accelerators. In this paper, we propose three new 3D Poisson solvers for a charged particle beam in an open rectangular conducting pipe. These three solvers include a spectral integrated Green function (IGF) solver, a 3D spectral solver, and a 3D integrated Green function solver. These solvers effectively handle the longitudinal open boundary condition using a finite computational domain that contains the beam itself. This saves the computational cost of using an extra larger longitudinal domain in order to set up an appropriate finite boundary condition. Using an integrated Green function also avoids the need to resolve rapid variation of the Green function inside the beam. The numerical operational cost of the spectral IGF solver and the 3D IGF solver scales as O(N log(N)) , where N is the number of grid points. The cost of the 3D spectral solver scales as O(Nn N) , where Nn is the maximum longitudinal mode number. We compare these three solvers using several numerical examples and discuss the advantageous regime of each solver in the physical application.

  1. Reconstruction method for inversion problems in an acoustic tomography based temperature distribution measurement

    NASA Astrophysics Data System (ADS)

    Liu, Sha; Liu, Shi; Tong, Guowei

    2017-11-01

    In industrial areas, temperature distribution information provides a powerful data support for improving system efficiency, reducing pollutant emission, ensuring safety operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distribution where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to ameliorate the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables to mitigate the ill-posed nature of the AT inverse problem. By taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, the Adaboost.RT based BP neural network algorithm is developed for predicting the temperature distribution on the refined grid in accordance with the temperature distribution data estimated in the first phase. Numerical simulations and experiment measurement results validate the superiority of the proposed reconstruction algorithm in improving the robustness and RA.

  2. A Proposed Information Architecture for Telehealth System Interoperability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, R.L.; Funkhouser, D.R.; Gallagher, L.K.

    1999-04-20

    We propose an object-oriented information architecture for telemedicine systems that promotes secure `plug-and-play' interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a ''lego-like'' fashion to achieve the desired device or system functionality. Introduction Telemedicine systems today rely increasingly on distributed, collaborative information technology during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, most are custom-designed and do not interoperate with other commercial offerings. Users are limited to a set of functionality that amore » single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver en- tire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. This paper proposes a reference architecture for plug-and-play telemedicine systems that addresses these issues.« less

  3. Uncertainty in sample estimates and the implicit loss function for soil information.

    NASA Astrophysics Data System (ADS)

    Lark, Murray

    2015-04-01

    One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses costs of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.

  4. Fuzzy Multi-Objective Vendor Selection Problem with Modified S-CURVE Membership Function

    NASA Astrophysics Data System (ADS)

    Díaz-Madroñero, Manuel; Peidro, David; Vasant, Pandian

    2010-06-01

    In this paper, the S-Curve membership function methodology is used in a vendor selection (VS) problem. An interactive method for solving multi-objective VS problems with fuzzy goals is developed. The proposed method attempts simultaneously to minimize the total order costs, the number of rejected items and the number of late delivered items with reference to several constraints such as meeting buyers' demand, vendors' capacity, vendors' quota flexibility, vendors' allocated budget, etc. We compare in an industrial case the performance of S-curve membership functions, representing uncertainty goals and constraints in VS problems, with linear membership functions.

  5. A new single-particle basis for nuclear many-body calculations

    NASA Astrophysics Data System (ADS)

    Puddu, G.

    2017-10-01

    Predominantly, harmonic oscillator single-particle wave functions are the preferred choice for a basis in ab initio nuclear many-body calculations. These wave-functions, although very convenient in order to evaluate the matrix elements of the interaction in the laboratory frame, have too fast a fall-off at large distances. In the past, as an alternative to the harmonic oscillator, other single-particle wave functions have been proposed. In this work, we propose a new single-particle basis, directly linked to nucleon-nucleon interaction. This new basis is orthonormal and complete, has the proper asymptotic behavior at large distances and does not contain the continuum which would pose severe convergence problems in nuclear many body calculations. We consider the newly proposed NNLO-opt nucleon-nucleon interaction, without any renormalization. We show that, unlike other bases, this single-particle representation has a computational cost similar to the harmonic oscillator basis with the same space truncation and it gives lower energies for 6He and 6Li.

  6. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055

  7. Text-line extraction in handwritten Chinese documents based on an energy minimization framework.

    PubMed

    Koo, Hyung Il; Cho, Nam Ik

    2012-03-01

    Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and do not require the computation of its inverse). Analytical results are establishedmore » to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase performance and benefits of the algorithms.« less

  9. Heuristic Approach for Configuration of a Grid-Tied Microgrid in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Rodriguez, Miguel A.

    The high rates of cost of electricity that consumers are being charged by the utility grid in Puerto Rico have created an energy crisis around the island. This situation is due to the island's dependence on imported fossil fuels. In order to aid in the transition from fossil-fuel based electricity into electricity from renewable and alternative sources, this research work focuses on reducing the cost of electricity for Puerto Rico through means of finding the optimal microgrid configuration for a set number of consumers from the residential sector. The Hybrid Optimization Modeling for Energy Renewables (HOMER) software, developed by NREL, is utilized as an aid in determining the optimal microgrid setting. The problem is also approached via convex optimization; specifically, an objective function C(t) is formulated in order to be minimized. The cost function depends on the energy supplied by the grid, the energy supplied by renewable sources, the energy not supplied due to outages, as well as any excess energy sold to the utility in a yearly manner. A term for considering the social cost of carbon is also considered in the cost function. Once the microgrid settings from HOMER are obtained, those are evaluated via the optimized function C( t), which will in turn assess the true optimality of the microgrid configuration. A microgrid to supply 10 consumers is considered; each consumer can possess a different microgrid configuration. The cost function C( t) is minimized, and the Net Present Value and Cost of Electricity are computed for each configuration, in order to assess the true feasibility. Results show that the greater the penetration of components into the microgrid, the greater the energy produced by the renewable sources in the microgrid, the greater the energy not supplied due to outages. The proposed method demonstrates that adding large amounts of renewable components in a microgrid does not necessarily translates into economic benefits for the consumer; in fact, there is a trade back between cost and addition of elements that must be considered. Any configurations which consider further increases in microgrid components will result in increased NPV and increased costs of electricity, which deem the configurations as unfeasible.

  10. A Fuzzy Goal Programming for a Multi-Depot Distribution Problem

    NASA Astrophysics Data System (ADS)

    Nunkaew, Wuttinan; Phruksaphanrat, Busaba

    2010-10-01

    A fuzzy goal programming model for solving a Multi-Depot Distribution Problem (MDDP) is proposed in this research. This effective proposed model is applied for solving in the first step of Assignment First-Routing Second (AFRS) approach. Practically, a basic transportation model is firstly chosen to solve this kind of problem in the assignment step. After that the Vehicle Routing Problem (VRP) model is used to compute the delivery cost in the routing step. However, in the basic transportation model, only depot to customer relationship is concerned. In addition, the consideration of customer to customer relationship should also be considered since this relationship exists in the routing step. Both considerations of relationships are solved using Preemptive Fuzzy Goal Programming (P-FGP). The first fuzzy goal is set by a total transportation cost and the second fuzzy goal is set by a satisfactory level of the overall independence value. A case study is used for describing the effectiveness of the proposed model. Results from the proposed model are compared with the basic transportation model that has previously been used in this company. The proposed model can reduce the actual delivery cost in the routing step owing to the better result in the assignment step. Defining fuzzy goals by membership functions are more realistic than crisps. Furthermore, flexibility to adjust goals and an acceptable satisfactory level for decision maker can also be increased and the optimal solution can be obtained.

  11. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider an iterative design of independently operating local quantizers at nodes that should cooperate without interaction to achieve application objectives for distributed estimation systems. We suggest as a new cost function a probabilistic distance between the posterior distribution and its quantized one expressed as the Kullback Leibler (KL) divergence. We first present the analysis that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithmic quantized posterior distribution on the average which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize the simplified version of the posterior quantized distribution and discuss that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for a practical use of power-constrained nodes. We finally demonstrate through extensive experiments an obvious advantage of improved estimation performance as compared with the typical designs and the novel design techniques previously published.

  12. Design optimisation of powers-of-two FIR filter using self-organising random immigrants GA

    NASA Astrophysics Data System (ADS)

    Chandra, Abhijit; Chattopadhyay, Sudipta

    2015-01-01

    In this communication, we propose a novel design strategy of multiplier-less low-pass finite impulse response (FIR) filter with the aid of a recent evolutionary optimisation technique, known as the self-organising random immigrants genetic algorithm. Individual impulse response coefficients of the proposed filter have been encoded as sum of signed powers-of-two. During the formulation of the cost function for the optimisation algorithm, both the frequency response characteristic and the hardware cost of the discrete coefficient FIR filter have been considered. The role of crossover probability of the optimisation technique has been evaluated on the overall performance of the proposed strategy. For this purpose, the convergence characteristic of the optimisation technique has been included in the simulation results. In our analysis, two design examples of different specifications have been taken into account. In order to substantiate the efficiency of our proposed structure, a number of state-of-the-art design strategies of multiplier-less FIR filter have also been included in this article for the purpose of comparison. Critical analysis of the result unambiguously establishes the usefulness of our proposed approach for the hardware efficient design of digital filter.

  13. Gradient corrections to the exchange-correlation free energy

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-07

    We develop the first-order gradient correction to the exchange-correlation free energy of the homogeneous electron gas for use in finite-temperature density functional calculations. Based on this, we propose and implement a simple temperature-dependent extension for functionals beyond the local density approximation. These finite-temperature functionals show improvement over zero-temperature functionals, as compared to path-integral Monte Carlo calculations for deuterium equations of state, and perform without computational cost increase compared to zero-temperature functionals and so should be used for finite-temperature calculations. Furthermore, while the present functionals are valid at all temperatures including zero, non-negligible difference with zero-temperature functionals begins at temperatures abovemore » 10 000 K.« less

  14. A policy iteration approach to online optimal control of continuous-time constrained-input systems.

    PubMed

    Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L

    2013-09-01

    This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. Copyright © 2013 ISA. All rights reserved.

  15. Proposing a Mathematical Software Tool in Physics Secondary Education

    ERIC Educational Resources Information Center

    Baltzis, Konstantinos B.

    2009-01-01

    MathCad® is a very popular software tool for mathematical and statistical analysis in science and engineering. Its low cost, ease of use, extensive function library, and worksheet-like user interface distinguish it among other commercial packages. Its features are also well suited to educational process. The use of natural mathematical notation…

  16. Toward a Methodology for Conducting Social Impact Assessments Using Quality of Social Life Indicators.

    ERIC Educational Resources Information Center

    Olsen, Marvin E.; Merwin, Donna J.

    Broadly conceived, social impacts refer to all changes in the structure and functioning of patterned social ordering that occur in conjunction with an environmental, technological, or social innovation or alteration. Departing from the usual cost-benefit analysis approach, a new methodology proposes conducting social impact assessment grounded in…

  17. Improved patch-based learning for image deblurring

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng

    2015-05-01

    Most recent image deblurring methods only use valid information found in input image as the clue to fill the deblurring region. These methods usually have the defects of insufficient prior information and relatively poor adaptiveness. Patch-based method not only uses the valid information of the input image itself, but also utilizes the prior information of the sample images to improve the adaptiveness. However the cost function of this method is quite time-consuming and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of the Gaussian mixture model with different weights and normalize the weight values, which can optimize the cost function and reduce running time. On the other hand, a post processing method is proposed to solve the ringing artifacts produced by traditional patch-based method. Extensive experiments are performed. Experimental results verify that our method can effectively reduce the execution time, suppress the ringing artifacts effectively, and keep the quality of deblurred image.

  18. Actions to Align Defense Contract Management Agency and Defense Contract Audit Agency Functions

    DTIC Science & Technology

    2012-11-13

    1’he netual 5aviqg~ for all c:ost trPe proposal reviews are generally limited lo tlie negotiated ree percentages applfed to the costs questioned...such time as a businc:ss case analysts can suppon that any change lo Do() procurcmem and actiUisition pOlicy will tJrolc:ct the intdrests of the...mo.vc forward and If that analysis shows better uses for the DC AA re.<:l> uroes ·or benerways to-efficiently support contrncting officers. DPAP will

  19. An EOQ model for weibull distribution deterioration with time-dependent cubic demand and backlogging

    NASA Astrophysics Data System (ADS)

    Santhi, G.; Karthikeyan, K.

    2017-11-01

    In this article we introduce an economic order quantity model with weibull deterioration and time dependent cubic demand rate where holding costs as a linear function of time. Shortages are allowed in the inventory system are partially and fully backlogging. The objective of this model is to minimize the total inventory cost by using the optimal order quantity and the cycle length. The proposed model is illustrated by numerical examples and the sensitivity analysis is performed to study the effect of changes in parameters on the optimum solutions.

  20. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902

  1. Routine real-time cost-effectiveness monitoring of a web-based depression intervention: a risk-sharing proposal.

    PubMed

    Naveršnik, Klemen; Mrhar, Aleš

    2014-02-27

    A new health care technology must be cost-effective in order to be adopted. If evidence regarding cost-effectiveness is uncertain, then the decision maker faces two choices: (1) adopt the technology and run the risk that it is less effective in actual practice, or (2) reject the technology and risk that potential health is forgone. A new depression eHealth service was found to be cost-effective in a previously published study. The results, however, were unreliable because it was based on a pilot clinical trial. A conservative decision maker would normally require stronger evidence for the intervention to be implemented. Our objective was to evaluate how to facilitate service implementation by shifting the burden of risk due to uncertainty to the service provider and ensure that the intervention remains cost-effective during routine use. We propose a risk-sharing scheme, where the service cost depends on the actual effectiveness of the service in real-life setting. Routine efficacy data can be used as the input to the cost-effectiveness model, which employs a mapping function to translate a depression specific score into quality-adjusted life-years. The latter is the denominator in the cost-effectiveness ratio calculation, required by the health care decision maker. The output of the model is a "value graph", showing intervention value as a function of its observed (future) efficacy, using the €30,000 per quality-adjusted life-year (QALY) threshold. We found that the eHealth service should improve the patient's outcome by at least 11.9 points on the Beck Depression Inventory scale in order for the cost-effectiveness ratio to remain below the €30,000/QALY threshold. The value of a single point improvement was found to be between €200 and €700, depending on depression severity at treatment start. Value of the eHealth service, based on the current efficacy estimates, is €1900, which is significantly above its estimated cost (€200). The eHealth depression service is particularly suited to routine monitoring, since data can be gathered through the Internet within the service communication channels. This enables real-time cost-effectiveness evaluation and allows a value-based price to be established. We propose a novel pricing scheme where the price is set to a point in the interval between cost and value, which provides an economic surplus to both the payer and the provider. Such a business model will assure that a portion of the surplus is retained by the payer and not completely appropriated by the private provider. If the eHealth service were to turn out less effective than originally anticipated, then the price would be lowered in order to achieve the cost-effectiveness threshold and this risk of financial loss would be borne by the provider.

  2. Self-organizing maps for learning the edit costs in graph matching.

    PubMed

    Neuhaus, Michel; Bunke, Horst

    2005-06-01

    Although graph matching and graph edit distance computation have become areas of intensive research recently, the automatic inference of the cost of edit operations has remained an open problem. In the present paper, we address the issue of learning graph edit distance cost functions for numerically labeled graphs from a corpus of sample graphs. We propose a system of self-organizing maps (SOMs) that represent the distance measuring spaces of node and edge labels. Our learning process is based on the concept of self-organization. It adapts the edit costs in such a way that the similarity of graphs from the same class is increased, whereas the similarity of graphs from different classes decreases. The learning procedure is demonstrated on two different applications involving line drawing graphs and graphs representing diatoms, respectively.

  3. Efficient cost-sensitive human-machine collaboration for offline signature verification

    NASA Astrophysics Data System (ADS)

    Coetzer, Johannes; Swanepoel, Jacques; Sabourin, Robert

    2012-01-01

    We propose a novel strategy for the optimal combination of human and machine decisions in a cost-sensitive environment. The proposed algorithm should be especially beneficial to financial institutions where off-line signatures, each associated with a specific transaction value, require authentication. When presented with a collection of genuine and fraudulent training signatures, produced by so-called guinea pig writers, the proficiency of a workforce of human employees and a score-generating machine can be estimated and represented in receiver operating characteristic (ROC) space. Using a set of Boolean fusion functions, the majority vote decision of the human workforce is combined with each threshold-specific machine-generated decision. The performance of the candidate ensembles is estimated and represented in ROC space, after which only the optimal ensembles and associated decision trees are retained. When presented with a questioned signature linked to an arbitrary writer, the system first uses the ROC-based cost gradient associated with the transaction value to select the ensemble that minimises the expected cost, and then uses the corresponding decision tree to authenticate the signature in question. We show that, when utilising the entire human workforce, the incorporation of a machine streamlines the authentication process and decreases the expected cost for all operating conditions.

  4. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.

  5. Phytoplankton defence mechanisms: traits and trade-offs.

    PubMed

    Pančić, Marina; Kiørboe, Thomas

    2018-05-01

    In aquatic ecosystems, unicellular algae form the basis of the food webs. Theoretical and experimental studies have demonstrated that one of the mechanisms that maintain high diversity of phytoplankton is through predation and the consequent evolution of defence mechanisms. Proposed defence mechanisms in phytoplankton are diverse and include physiological (e.g. toxicity, bioluminescence), morphological (e.g. silica shell, colony formation), and behavioural (e.g. escape response) traits. However, the function of many of the proposed defence mechanisms remains elusive, and the costs and benefits (trade-offs) are often unquantified or undocumented. Here, we provide an overview of suggested phytoplankton defensive traits and review their experimental support. Wherever possible we quantify the trade-offs from experimental evidence and theoretical considerations. In many instances, experimental evidence suggests that defences are costless. However, we argue that (i) some costs materialize only under natural conditions, for example, sinking losses, or dependency on the availability of specific nutrients, and (ii) other costs become evident only under resource-deficient conditions where a rivalry for limiting resources between growth and defence occurs. Based on these findings, we suggest two strategies for quantifying the costs of defence mechanisms in phytoplankton: (i) for the evaluation of defence costs that are realized under natural conditions, a mechanistic understanding of the hypothesized component processes is required; and (ii) the magnitude of the costs (i.e. growth reduction) must be assessed under conditions of resource limitation. © 2018 Cambridge Philosophical Society.

  6. The two-sample problem with induced dependent censorship.

    PubMed

    Huang, Y

    1999-12-01

    Induced dependent censorship is a general phenomenon in health service evaluation studies in which a measure such as quality-adjusted survival time or lifetime medical cost is of interest. We investigate the two-sample problem and propose two classes of nonparametric tests. Based on consistent estimation of the survival function for each sample, the two classes of test statistics examine the cumulative weighted difference in hazard functions and in survival functions. We derive a unified asymptotic null distribution theory and inference procedure. The tests are applied to trial V of the International Breast Cancer Study Group and show that long duration chemotherapy significantly improves time without symptoms of disease and toxicity of treatment as compared with the short duration treatment. Simulation studies demonstrate that the proposed tests, with a wide range of weight choices, perform well under moderate sample sizes.

  7. Design of a magnetic-tunnel-junction-oriented nonvolatile lookup table circuit with write-operation-minimized data shifting

    NASA Astrophysics Data System (ADS)

    Suzuki, Daisuke; Hanyu, Takahiro

    2018-04-01

    A magnetic-tunnel-junction (MTJ)-oriented nonvolatile lookup table (LUT) circuit, in which a low-power data-shift function is performed by minimizing the number of write operations in MTJ devices is proposed. The permutation of the configuration memory cell for read/write access is performed as opposed to conventional direct data shifting to minimize the number of write operations, which results in significant write energy savings in the data-shift function. Moreover, the hardware cost of the proposed LUT circuit is small since the selector is shared between read access and write access. In fact, the power consumption in the data-shift function and the transistor count are reduced by 82 and 52%, respectively, compared with those in a conventional static random-access memory-based implementation using a 90 nm CMOS technology.

  8. Self-Organizing Hierarchical Particle Swarm Optimization with Time-Varying Acceleration Coefficients for Economic Dispatch with Valve Point Effects and Multifuel Options

    NASA Astrophysics Data System (ADS)

    Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc

    2011-06-01

    This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving economic dispatch (ED) problem with non-smooth functions including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is the new approach optimizer and good performance for solving ED problems. It can handle the premature convergence of the problem by re-initialization of velocity whenever particles are stagnated in the search space. To properly control both local and global explorations of the swarm during the optimization process, the performance of TVAC is included. The proposed method is tested in different ED problems with non-smooth cost functions and the obtained results are compared to those from many other methods in the literature. The results have revealed that the proposed SPSO with TVAC is effective in finding higher quality solutions for non-smooth ED problems than many other methods.

  9. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct to current approaches, the proposed probabilistic (DHP) AC method takes uncertainties of forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  10. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. But the challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which result in a reconstruction full of noise and streak artifacts with the conventional analytical algorithms. To address this problem, in this paper, we propose a motion compensated total variation regularization approach which tries to fully explore the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, of which the regularization function was imposed on. The regularization used in this work is the 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass, and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.

  11. Remediation of hydrocarbons polluted water by hydrophobic functionalized cellulose.

    PubMed

    Tursi, Antonio; Beneduci, Amerigo; Chidichimo, Francesco; De Vietro, Nicoletta; Chidichimo, Giuseppe

    2018-06-01

    Remediation of water bodies from petroleum hydrocarbons is of the utmost importance due to health risks related to the high toxicity, mutagenicity and carcinogenicity of the hydrocarbons components that may enter into the food chain. Though several methods were proposed to face up this challenge, they are generally not easily feasible at a contaminated site and quite costly. Here we propose a green, cost-effective technology based on hydrophobized Spanish Broom (SB) cellulose fiber. The natural cellulose fiber was extracted by alkaline digestion of the raw vegetable. The hydrophilic cellulose surface was transformed into a hydrophobic one by the reaction with 4,4'-diphenylmethane diisocyanate (MDI) forming a very stable urethane linkage with the hydroxyl groups of cellulose emerging from the fibers surface. Chemical functionalization was performed with a novel solvent-free technology based on a home-made still reactor were the fiber was kept under vortex stirring and the MDI reactant then spread onto the fiber surface by nebulizing it in form of micrometer-sized droplets. The functionalized fiber, characterized by means of WCA measurements, XPS and ATR-FTIR spectroscopy, shows fast adsorption kinetics adsorption capacity as high as 220 mg/g, among the highest ever reported so far in the literature for cellulosic materials. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Nonulcer dyspepsia.

    PubMed

    Scolapio, J S; Camilleri, M

    1996-03-01

    There is considerable confusion in the literature about the entity of nonulcer dyspepsia and its epidemiology, mechanisms, and management. In this review, we discuss the mechanisms and develop a strategy for diagnosis and management of nonulcer dyspepsia in the era of cost-containment. This analysis was based on a computerized literature search on epidemiology, pathophysiology, and management of nonulcer dyspepsia. Inconsistencies in the inclusion criteria of several studies result in disparities in the data from epidemiological and physiology-based studies. We propose that the inclusion criteria need to be unrestricted by the symptom of "pain," and that epidemiological features must be refined further because recent data used pain/discomfort as the dominant feature for identifying "dyspepsia." The interplay between three factors (impaired motor and sensory functions, psychosocial factors, and Helicobacter pylori infection) deserves further study. Advances in this field will follow rigorous reappraisal of the epidemiology using an unrestricted definition of the symptom complex and development of strategies in clinical practice that focus on either the cost-effective investigation of the mechanism and its treatment or an effective sequence of therapeutic trials. An algorithm proposed for patient evaluation needs to be tested, with emphasis on outcome (i.e., symptom control, cost efficacy, and societal costs).

  13. An Energy Balanced and Lifetime Extended Routing Protocol for Underwater Sensor Networks.

    PubMed

    Wang, Hao; Wang, Shilian; Zhang, Eryang; Lu, Luxi

    2018-05-17

    Energy limitation is an adverse problem in designing routing protocols for underwater sensor networks (UWSNs). To prolong the network lifetime with limited battery power, an energy balanced and efficient routing protocol, called energy balanced and lifetime extended routing protocol (EBLE), is proposed in this paper. The proposed EBLE not only balances traffic loads according to the residual energy, but also optimizes data transmissions by selecting low-cost paths. Two phases are operated in the EBLE data transmission process: (1) candidate forwarding set selection phase and (2) data transmission phase. In candidate forwarding set selection phase, nodes update candidate forwarding nodes by broadcasting the position and residual energy level information. The cost value of available nodes is calculated and stored in each sensor node. Then in data transmission phase, high residual energy and relatively low-cost paths are selected based on the cost function and residual energy level information. We also introduce detailed analysis of optimal energy consumption in UWSNs. Numerical simulation results on a variety of node distributions and data load distributions prove that EBLE outperforms other routing protocols (BTM, BEAR and direct transmission) in terms of network lifetime and energy efficiency.

  14. Ensemble of classifiers for ontology enrichment

    NASA Astrophysics Data System (ADS)

    Semenova, A. V.; Kureichik, V. M.

    2018-05-01

    A classifier is a basis of ontology learning systems. Classification of text documents is used in many applications, such as information retrieval, information extraction, definition of spam. A new ensemble of classifiers based on SVM (a method of support vectors), LSTM (neural network) and word embedding are suggested. An experiment was conducted on open data, which allows us to conclude that the proposed classification method is promising. The implementation of the proposed classifier is performed in the Matlab using the functions of the Text Analytics Toolbox. The principal difference between the proposed ensembles of classifiers is the high quality of classification of data at acceptable time costs.

  15. Group Management Method of RFID Passwords for Privacy Protection

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yuichi; Kuwana, Toshiyuki; Taniguchi, Yoji; Komoda, Norihisa

    When RFID tag is used in the whole item lifecycle including a consumer scene or a recycle scene, we have to protect consumer privacy in the state that RFID tag is stuck on an item. We use the low cost RFID tag that has the access control function using a password, and we propose a method which manages RFID tags by passwords identical to each group of RFID tags. This proposal improves safety of RFID system because the proposal method is able to reduce the traceability for a RFID tag, and hold down the influence for disclosure of RFID passwords in the both scenes.

  16. Fairness in optimizing bus-crew scheduling process.

    PubMed

    Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei

    2017-01-01

    This work proposes a model considering fairness in the problem of crew scheduling for bus drivers (CSP-BD) using a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions of this work are the following: (a) a valid approach for cases with a special cost structure and constraints considering the fairness of working time and idle time; (b) an improved algorithm incorporating Gamma heuristic function and selecting rules. The relationships of each cost are examined with ten bus lines collected from the Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. It shows that unfair cost is indirectly related to common cost, fixed cost and extra cost and also the unfair cost approaches to common and fixed cost when its coefficient is twice of common cost coefficient. Furthermore, the longest time for the tested bus line with 1108 pieces, 74 blocks is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for CSP-BD, especially with large scale problems.

  17. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE PAGES

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    2018-01-28

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to North America Western Interconnection for the planning year 2024, and it is shown that anmore » optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.« less

  18. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to North America Western Interconnection for the planning year 2024, and it is shown that anmore » optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.« less

  19. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.

  20. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to take account of the Pauli Exclusion Principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in low field regime, where obtained electron distribution function takes values exceeding unity. Modified algorithms were already proposed and allow to correctly account for electron scattering on phonons or impurities. Present paper extends this approach and proposes improved simulation scheme allowing including Pauli exclusion principle for electron-electron (e-e) scattering into MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. Proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering in the case of linear band dispersion relation. Hence, this part of the simulation algorithm is described in details.

  1. Global optimization algorithm for heat exchanger networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesada, I.; Grossmann, I.E.

    This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator that involves linear and nonlinear estimators for fractional and bilinear terms which provide a tight lower bound to the global optimum. This NLP problem ismore » used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method only requires few nodes in the branch and bound search.« less

  2. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiao; Dong, Jin; Djouadi, Seddik M

    2015-01-01

    The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air- Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ a constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, wheremore » the optimal control can be computed efficiently by Semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency by utilizing the proposed control approach.« less

  3. Low-cost coding of directivity information for the recording of musical instruments

    NASA Astrophysics Data System (ADS)

    Braasch, Jonas; Martens, William L.; Woszczyk, Wieslaw

    2004-05-01

    Most musical instruments radiate sound according to characteristic spatial directivity patterns. These patterns are usually not only strongly frequency dependent, but also time-variant functions of various parameters of the instrument, such as pitch and the playing technique applied (e.g., plucking versus bowing of string instruments). To capture the directivity information when recording an instrument, Warusfel and Misdariis (2001) proposed to record an instrument using four channels, one for the monopole and the others for three orthogonal dipole parts. In the new recording setup presented here, it is proposed to store one channel at a high sampling frequency, along with directivity information that is updated only every few milliseconds. Taking the binaural sluggishness of the human auditory system into account in this way provides a low-cost coding scheme for subsequent reproduction of time-variant directivity patterns.

  4. Stabilization for sampled-data neural-network-based control systems.

    PubMed

    Zhu, Xun-Lin; Wang, Youyi

    2011-02-01

    This paper studies the problem of stabilization for sampled-data neural-network-based control systems with an optimal guaranteed cost. Unlike previous works, the resulting closed-loop system with variable uncertain sampling cannot simply be regarded as an ordinary continuous-time system with a fast-varying delay in the state. By defining a novel piecewise Lyapunov functional and using a convex combination technique, the characteristic of sampled-data systems is captured. A new delay-dependent stabilization criterion is established in terms of linear matrix inequalities such that the maximal sampling interval and the minimal guaranteed cost control performance can be obtained. It is shown that the newly proposed approach can lead to less conservative and less complex results than the existing ones. Application examples are given to illustrate the effectiveness and the benefits of the proposed method.

  5. 48 CFR 9904.420 - Accounting for independent research and development costs and bid and proposal costs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.420 Accounting for independent research and development costs and bid and proposal costs. ...

  6. Cryptographically supported NFC tags in medication for better inpatient safety.

    PubMed

    Özcanhan, Mehmet Hilal; Dalkılıç, Gökhan; Utku, Semih

    2014-08-01

    Reliable sources report that errors in drug administration are increasing the number of harmed or killed inpatients, during healthcare. This development is in contradiction to patient safety norms. A correctly designed hospital-wide ubiquitous system, using advanced inpatient identification and matching techniques, should provide correct medicine and dosage at the right time. Researchers are still making grouping proof protocol proposals based on the EPC Global Class 1 Generation 2 ver. 1.2 standard tags, for drug administration. Analyses show that such protocols make medication unsecure and hence fail to guarantee inpatient safety. Thus, the original goal of patient safety still remains. In this paper, a very recent proposal (EKATE) upgraded by a cryptographic function is shown to fall short of expectations. Then, an alternative proposal IMS-NFC which uses a more suitable and newer technology; namely Near Field Communication (NFC), is described. The proposed protocol has the additional support of stronger security primitives and it is compliant to ISO communication and security standards. Unlike previous works, the proposal is a complete ubiquitous system that guarantees full patient safety; and it is based on off-the-shelf, new technology products available in every corner of the world. To prove the claims the performance, cost, security and scope of IMS-NFC are compared with previous proposals. Evaluation shows that the proposed system has stronger security, increased patient safety and equal efficiency, at little extra cost.

  7. Dopamine does double duty in motivating cognitive effort

    PubMed Central

    Westbrook, Andrew; Braver, Todd S.

    2015-01-01

    Cognitive control is subjectively costly, suggesting that engagement is modulated in relationship to incentive state. Dopamine appears to play key roles. In particular, dopamine may mediate cognitive effort by two broad classes of functions: 1) modulating the functional parameters of working memory circuits subserving effortful cognition, and 2) mediating value-learning and decision-making about effortful cognitive action. Here we tie together these two lines of research, proposing how dopamine serves “double duty”, translating incentive information into cognitive motivation. PMID:26889810

  8. Protein function prediction using neighbor relativity in protein-protein interaction network.

    PubMed

    Moosavi, Sobhan; Rahgozar, Masoud; Rahimi, Amir

    2013-04-01

    There is a large gap between the number of discovered proteins and the number of functionally annotated ones. Due to the high cost of determining protein function by wet-lab research, function prediction has become a major task for computational biology and bioinformatics. Some researches utilize the proteins interaction information to predict function for un-annotated proteins. In this paper, we propose a novel approach called "Neighbor Relativity Coefficient" (NRC) based on interaction network topology which estimates the functional similarity between two proteins. NRC is calculated for each pair of proteins based on their graph-based features including distance, common neighbors and the number of paths between them. In order to ascribe function to an un-annotated protein, NRC estimates a weight for each neighbor to transfer its annotation to the unknown protein. Finally, the unknown protein will be annotated by the top score transferred functions. We also investigate the effect of using different coefficients for various types of functions. The proposed method has been evaluated on Saccharomyces cerevisiae and Homo sapiens interaction networks. The performance analysis demonstrates that NRC yields better results in comparison with previous protein function prediction approaches that utilize interaction network. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Weihong; Sun, Kai; Qi, Junjian

    2015-01-01

    Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-busmore » system have validated that the new approach can quickly identify the boundary of feasible solutions in searching space and converge to the global optimal solution.« less

  10. Dynamic reserve selection: Optimal land retention with land-price feedbacks

    Treesearch

    Sandor F. Toth; Robert G. Haight; Luke W. Rogers

    2011-01-01

    Urban growth compromises open space and ecosystem functions. To mitigate the negative effects, some agencies use reserve selection models to identify conservation sites for purchase or retention. Existing models assume that conservation has no impact on nearby land prices. We propose a new integer program that relaxes this assumption via adaptive cost coefficients. Our...

  11. Modeling Pumped Thermal Energy Storage with Waste Heat Harvesting

    NASA Astrophysics Data System (ADS)

    Abarr, Miles L. Lindsey

    This work introduces a new concept for a utility scale combined energy storage and generation system. The proposed design utilizes a pumped thermal energy storage (PTES) system, which also utilizes waste heat leaving a natural gas peaker plant. This system creates a low cost utility-scale energy storage system by leveraging this dual-functionality. This dissertation first presents a review of previous work in PTES as well as the details of the proposed integrated bottoming and energy storage system. A time-domain system model was developed in Mathworks R2016a Simscape and Simulink software to analyze this system. Validation of both the fluid state model and the thermal energy storage model are provided. The experimental results showed the average error in cumulative fluid energy between simulation and measurement was +/- 0.3% per hour. Comparison to a Finite Element Analysis (FEA) model showed <1% error for bottoming mode heat transfer. The system model was used to conduct sensitivity analysis, baseline performance, and levelized cost of energy of a recently proposed Pumped Thermal Energy Storage and Bottoming System (Bot-PTES) that uses ammonia as the working fluid. This analysis focused on the effects of hot thermal storage utilization, system pressure, and evaporator/condenser size on the system performance. This work presents the estimated performance for a proposed baseline Bot-PTES. Results of this analysis showed that all selected parameters had significant effects on efficiency, with the evaporator/condenser size having the largest effect over the selected ranges. Results for the baseline case showed stand-alone energy storage efficiencies between 51 and 66% for varying power levels and charge states, and a stand-alone bottoming efficiency of 24%. The resulting efficiencies for this case were low compared to competing technologies; however, the dual-functionality of the Bot-PTES enables it to have higher capacity factor, leading to 91-197/MWh levelized cost of energy compared to 262-284/MWh for batteries and $172-254/MWh for Compressed Air Energy Storage.

  12. Fuzzy Multi-Objective Transportation Planning with Modified S-Curve Membership Function

    NASA Astrophysics Data System (ADS)

    Peidro, D.; Vasant, P.

    2009-08-01

    In this paper, the S-Curve membership function methodology is used in a transportation planning decision (TPD) problem. An interactive method for solving multi-objective TPD problems with fuzzy goals, available supply and forecast demand is developed. The proposed method attempts simultaneously to minimize the total production and transportation costs and the total delivery time with reference to budget constraints and available supply, machine capacities at each source, as well as forecast demand and warehouse space constraints at each destination. We compare in an industrial case the performance of S-curve membership functions, representing uncertainty goals and constraints in TPD problems, with linear membership functions.

  13. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid)

  14. 48 CFR 242.771 - Independent research and development and bid and proposal costs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES Indirect Cost Rates 242.771 Independent research and development and bid and proposal costs. ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Independent research and development and bid and proposal costs. 242.771 Section 242.771 Federal Acquisition Regulations System DEFENSE...

  15. A joint inventory policy under permissible delay in payment and stochastic demand (Case study: Pharmacy Department of Pariaman Hospital)

    NASA Astrophysics Data System (ADS)

    Jonrinaldi, Primadi, M. Yugo; Hadiguna, Rika Ampuh

    2017-11-01

    Inventory cannot be avoided by organizations. One of them is a hospital which has a functional unit to manage the drugs and other medical supplies such as disposable and laboratory material. The unit is called Pharmacy Department which is responsible to do all of pharmacy services in the hospital. The current problem in Pharmacy Department is that the level of drugs and medical supplies inventory is too high. Inventory is needed to keep the service level to customers but at the same time it increases the cost of holding the items, so there should be a policy to keep the inventory on an optimal condition. To solve such problem, this paper proposes an inventory policy in Pharmacy Department of Pariaman Hospital. The inventory policy is determined by using Economic Order Quantity (EOQ) model under condition of permissible delay in payment for multiple products considering safety stock to anticipate stochastic demand. This policy is developed based on the actual condition of the system studied where suppliers provided a certain period to Pharmacy Department to complete the payment of the order. Based on implementation using software Lingo 13.0, total inventory cost of proposed policy of IDR 137,334,815.34 is 37.4% lower than the total inventory cost of current policy of IDR 219,511,519.45. Therefore, the proposed inventory policy is applicable to the system to minimize the total inventory cost.

  16. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.

  17. A study of parameter identification

    NASA Technical Reports Server (NTRS)

    Herget, C. J.; Patterson, R. E., III

    1978-01-01

    A set of definitions for deterministic parameter identification ability were proposed. Deterministic parameter identificability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.

  18. Integrating QoS and security functions in an IP-VPN gateway

    NASA Astrophysics Data System (ADS)

    Fan, Kuo-Pao; Chang, Shu-Hsin; Lin, Kuan-Ming; Pen, Mau-Jy

    2001-10-01

    IP-based Virtual Private Network becomes more and more popular. It can not only reduce the enterprise communication cost but also increase the revenue of the service provider. The common IP-VPN application types include Intranet VPN, Extranet VPN, and remote access VPN. For the large IP-VPN market, some vendors develop dedicated IP-VPN devices; while some vendors add the VPN functions into their existing network equipment such as router, access gateway, etc. The functions in the IP-VPN device include security, QoS, and management. The common security functions supported are IPSec (IP Security), IKE (Internet Key Exchange), and Firewall. The QoS functions include bandwidth control and packet scheduling. In the management component, policy-based network management is under standardization in IETF. In this paper, we discuss issues on how to integrate the QoS and security functions in an IP-VPN Gateway. We propose three approaches to do this. They are (1) perform Qos first (2) perform IPSec first and (3) reserve fixed bandwidth for IPSec. We also compare the advantages and disadvantages of the three proposed approaches.

  19. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world application. Based on SRC, Yang et al. [10] devised a SRC steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle the data with highly nonlinear distribution. Kernel sparse representation-based classifier (KSRC) is a non-linear extension of SRC and can remedy the drawback of SRC. KSRC requires the use of a predetermined kernel function and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC only considers the within-class reconstruction residual while ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then we use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low dimension subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  20. Half-quadratic variational regularization methods for speckle-suppression and edge-enhancement in SAR complex image

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Wang, Guang-xin

    2008-12-01

    Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, the speckle is its inherent default, which affects badly the interpretation and recognition of the SAR targets. Conventional methods of removing the speckle is studied usually in real SAR image, which reduce the edges of the images at the same time as depressing the speckle. Morever, Conventional methods lost the information about images phase. Removing the speckle and enhancing the target and edge simultaneously are still a puzzle. To suppress the spckle and enhance the targets and the edges simultaneously, a half-quadratic variational regularization method in complex SAR image is presented, which is based on the prior knowledge of the targets and the edge. Due to the non-quadratic and non- convex quality and the complexity of the cost function, a half-quadratic variational regularization variation is used to construct a new cost function,which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model and the selection of the model peremeters are studied carefully. In the end, we validate the method using the real SAR data.Theoretic analysis and the experimental results illustrate the the feasibility of the proposed method. Further more, the proposed method can preserve the information about images phase.

  1. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.

  2. Strategy of arm movement control is determined by minimization of neural effort for joint coordination.

    PubMed

    Dounskaia, Natalia; Shimansky, Yury

    2016-06-01

    Optimality criteria underlying organization of arm movements are often validated by testing their ability to adequately predict hand trajectories. However, kinematic redundancy of the arm allows production of the same hand trajectory through different joint coordination patterns. We therefore consider movement optimality at the level of joint coordination patterns. A review of studies of multi-joint movement control suggests that a 'trailing' pattern of joint control is consistently observed during which a single ('leading') joint is rotated actively and interaction torque produced by this joint is the primary contributor to the motion of the other ('trailing') joints. A tendency to use the trailing pattern whenever the kinematic redundancy is sufficient and increased utilization of this pattern during skillful movements suggests optimality of the trailing pattern. The goal of this study is to determine the cost function minimization of which predicts the trailing pattern. We show that extensive experimental testing of many known cost functions cannot successfully explain optimality of the trailing pattern. We therefore propose a novel cost function that represents neural effort for joint coordination. That effort is quantified as the cost of neural information processing required for joint coordination. We show that a tendency to reduce this 'neurocomputational' cost predicts the trailing pattern and that the theoretically developed predictions fully agree with the experimental findings on control of multi-joint movements. Implications for future research of the suggested interpretation of the trailing joint control pattern and the theory of joint coordination underlying it are discussed.

  3. Solving bi-level optimization problems in engineering design using kriging models

    NASA Astrophysics Data System (ADS)

    Xia, Yi; Liu, Xiaojie; Du, Gang

    2018-05-01

    Stackelberg game-theoretic approaches are applied extensively in engineering design to handle distributed collaboration decisions. Bi-level genetic algorithms (BLGAs) and response surfaces have been used to solve the corresponding bi-level programming models. However, the computational costs for BLGAs often increase rapidly with the complexity of lower-level programs, and optimal solution functions sometimes cannot be approximated by response surfaces. This article proposes a new method, namely the optimal solution function approximation by kriging model (OSFAKM), in which kriging models are used to approximate the optimal solution functions. A detailed example demonstrates that OSFAKM can obtain better solutions than BLGAs and response surface-based methods, and at the same time reduce the workload of computation remarkably. Five benchmark problems and a case study of the optimal design of a thin-walled pressure vessel are also presented to illustrate the feasibility and potential of the proposed method for bi-level optimization in engineering design.

  4. H2-norm for mesh optimization with application to electro-thermal modeling of an electric wire in automotive context

    NASA Astrophysics Data System (ADS)

    Chevrié, Mathieu; Farges, Christophe; Sabatier, Jocelyn; Guillemard, Franck; Pradere, Laetitia

    2017-04-01

    In automotive application field, reducing electric conductors dimensions is significant to decrease the embedded mass and the manufacturing costs. It is thus essential to develop tools to optimize the wire diameter according to thermal constraints and protection algorithms to maintain a high level of safety. In order to develop such tools and algorithms, accurate electro-thermal models of electric wires are required. However, thermal equation solutions lead to implicit fractional transfer functions involving an exponential that cannot be embedded in a car calculator. This paper thus proposes an integer order transfer function approximation methodology based on a spatial discretization for this class of fractional transfer functions. Moreover, the H2-norm is used to minimize approximation error. Accuracy of the proposed approach is confirmed with measured data on a 1.5 mm2 wire implemented in a dedicated test bench.

  5. Modified crisis intervention for personality disorder.

    PubMed

    Rudnick, A

    1998-01-01

    This study proposes that the goal of crisis intervention for persons with personality disorders should be to return them to their pre-crisis level of functioning, even though this is maladaptive. This is contrasted with standard crisis intervention, which aims to return normal or neurotic persons to their pre-crisis normal or neurotic functioning, usually by means of few and short-term therapeutic encounters. The modification proposed costs more time and resources in persons with personality disorders in crisis and fits the intervention to the personality type. This is illustrated by the case of Eve, a patient in crisis, whose pre-crisis functioning was maladaptive because of a dependent personality disorder. The goal of (modified) crisis intervention in this case was to return the patient to her dependent lifestyle, by means of pharmacotherapy combined with intensive supportive psychotherapy during 3-4 months of partial (day) hospitalization. The special nature of crisis in personality disorders is discussed.

  6. New Concept for FES-Induced Movements

    NASA Astrophysics Data System (ADS)

    Ahmed, Mohammed; Huq, M. S.; Ibrahim, B. S. K. K.; Ahmed, Aisha; Ahmed, Zainab

    2016-11-01

    Functional Electrical Stimulation (FES) had become a viable option for movement restoration, therapy and rehabilitation in neurologically impaired subjects. Although the number of such subjects increase globally but only few orthosis devices combine with the technique are available and are costly. A factor resulting to this could be stringent requirement for such devices to have passed clinical acceptance. In that regard a new approach which utilize the patient wheelchair as support and also a novel control system to synchronize the stimulation such that the movement is accomplished safely was proposed. It is expected to improve well-being, social integration, independence, cost, and healthcare delivery.

  7. The technological raw material heating furnaces operation efficiency improving issue

    NASA Astrophysics Data System (ADS)

    Paramonov, A. M.

    2017-08-01

    The issue of fuel oil applying efficiency improving in the technological raw material heating furnaces by means of its combustion intensification is considered in the paper. The technical and economic optimization problem of the fuel oil heating before combustion is solved. The fuel oil heating optimal temperature defining method and algorithm analytically considering the correlation of thermal, operating parameters and discounted costs for the heating furnace were developed. The obtained optimization functionality provides the heating furnace appropriate thermal indices achievement at minimum discounted costs. The carried out research results prove the expediency of the proposed solutions using.

  8. A grid layout algorithm for automatic drawing of biochemical networks.

    PubMed

    Li, Weijiang; Kurata, Hiroyuki

    2005-05-01

    Visualization is indispensable in the research of complex biochemical networks. Available graph layout algorithms are not adequate for satisfactorily drawing such networks. New methods are required to visualize automatically the topological architectures and facilitate the understanding of the functions of the networks. We propose a novel layout algorithm to draw complex biochemical networks. A network is modeled as a system of interacting nodes on squared grids. A discrete cost function between each node pair is designed based on the topological relation and the geometric positions of the two nodes. The layouts are produced by minimizing the total cost. We design a fast algorithm to minimize the discrete cost function, by which candidate layouts can be produced efficiently. A simulated annealing procedure is used to choose better candidates. Our algorithm demonstrates its ability to exhibit cluster structures clearly in relatively compact layout areas without any prior knowledge. We developed Windows software to implement the algorithm for CADLIVE. All materials can be freely downloaded from http://kurata21.bio.kyutech.ac.jp/grid/grid_layout.htm; http://www.cadlive.jp/ http://kurata21.bio.kyutech.ac.jp/grid/grid_layout.htm; http://www.cadlive.jp/

  9. Management of unmanned moving sensors through human decision layers: a bi-level optimization process with calls to costly sub-processes

    NASA Astrophysics Data System (ADS)

    Dambreville, Frédéric

    2013-10-01

    While there is a variety of approaches and algorithms for optimizing the mission of an unmanned moving sensor, there are much less works which deal with the implementation of several sensors within a human organization. In this case, the management of the sensors is done through at least one human decision layer, and the sensors management as a whole arises as a bi-level optimization process. In this work, the following hypotheses are considered as realistic: Sensor handlers of first level plans their sensors by means of elaborated algorithmic tools based on accurate modelling of the environment; Higher level plans the handled sensors according to a global observation mission and on the basis of an approximated model of the environment and of the first level sub-processes. This problem is formalized very generally as the maximization of an unknown function, defined a priori by sampling a known random function (law of model error). In such case, each actual evaluation of the function increases the knowledge about the function, and subsequently the efficiency of the maximization. The issue is to optimize the sequence of value to be evaluated, in regards to the evaluation costs. There is here a fundamental link with the domain of experiment design. Jones, Schonlau and Welch proposed a general method, the Efficient Global Optimization (EGO), for solving this problem in the case of additive functional Gaussian law. In our work, a generalization of the EGO is proposed, based on a rare event simulation approach. It is applied to the aforementioned bi-level sensor planning.

  10. Sandia National Laboratories: Working with Sandia: Contract Audit

    Science.gov Websites

    Government Auditing Standards. Electronic Cost Claims Electronic Cost Claim (ECC) An Electronic Cost Claim is ) ECC-Cost Reimbursable Template and Instructions (MS Excel) ECC-University Template (MS Excel) ECC -Indirect Rates (Indirect Rate Cost Claim) (MS Excel) Electronic Cost Proposals Electronic Cost Proposal

  11. An opportunity cost model of subjective effort and task performance

    PubMed Central

    Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus

    2013-01-01

    Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775

  12. Optimality of cycle time and inventory decisions in a two echelon inventory system with exponential price dependent demand under credit period

    NASA Astrophysics Data System (ADS)

    Krugon, Seelam; Nagaraju, Dega

    2017-05-01

    This work describes and proposes an two echelon inventory system under supply chain, where the manufacturer offers credit period to the retailer with exponential price dependent demand. The model is framed as demand is expressed as exponential function of retailer’s unit selling price. Mathematical model is framed to demonstrate the optimality of cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The major objective of the paper is to provide trade credit concept from the manufacturer to the retailer with exponential price dependent demand. The retailer would like to delay the payments of the manufacturer. At the first stage retailer and manufacturer expressions are expressed with the functions of ordering cost, carrying cost, transportation cost. In second stage combining of the manufacturer and retailer expressions are expressed. A MATLAB program is written to derive the optimality of cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. From the optimality criteria derived managerial insights can be made. From the research findings, it is evident that the total cost of the supply chain is decreased with the increase in credit period under exponential price dependent demand. To analyse the influence of the model parameters, parametric analysis is also done by taking with help of numerical example.

  13. Rolling scheduling of electric power system with wind power based on improved NNIA algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.

    2017-11-01

    This paper puts forth a rolling modification strategy for day-ahead scheduling of electric power system with wind power, which takes the operation cost increment of unit and curtailed wind power of power grid as double modification functions. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for solution. The proposed rolling scheduling model has further improved the operation cost of system in the intra-day generation process, enhanced the system’s accommodation capacity of wind power, and modified the key transmission section power flow in a rolling manner to satisfy the security constraint of power grid. The improved NNIA algorithm has defined an antibody preference relation model based on equal incremental rate, regulation deviation constraints and maximum & minimum technical outputs of units. The model can noticeably guide the direction of antibody evolution, and significantly speed up the process of algorithm convergence to final solution, and enhance the local search capability.

  14. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    NASA Astrophysics Data System (ADS)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet required load demand so as their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization which named as Swarm based Mean-variance mapping optimization (MVMOS). The technique is the extension of the original single particle mean-variance mapping optimization (MVMO). Its features make it potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost function and the obtained results are compared with many other methods available in the literature. Test results have indicated that the proposed method can efficiently implement for solving economic dispatch.

  15. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.

  16. A two-stage approach to removing noise from recorded music

    NASA Astrophysics Data System (ADS)

    Berger, Jonathan; Goldberg, Maxim J.; Coifman, Ronald C.; Goldberg, Maxim J.; Coifman, Ronald C.

    2004-05-01

    A two-stage algorithm for removing noise from recorded music signals (first proposed in Berger et al., ICMC, 1995) is described and updated. The first stage selects the ``best'' local trigonometric basis for the signal and models noise as the part having high entropy [see Berger et al., J. Audio Eng. Soc. 42(10), 808-818 (1994)]. In the second stage, the original source and the model of the noise obtained from the first stage are expanded into dyadic trees of smooth local sine bases. The best basis for the source signal is extracted using a relative entropy function (the Kullback-Leibler distance) to compare the sum of the costs of the children nodes to the cost of their parent node; energies of the noise in corresponding nodes of the model noise tree are used as weights. The talk will include audio examples of various stages of the method and proposals for further research.

  17. Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA

    PubMed Central

    Ma, Xiaoqi

    2015-01-01

    A novel method is proposed to establish the pancreatic cancer classifier. Firstly, the concept of quantum and fruit fly optimal algorithm (FOA) are introduced, respectively. Then FOA is improved by quantum coding and quantum operation, and a new smell concentration determination function is defined. Finally, the improved FOA is used to optimize the parameters of support vector machine (SVM) and the classifier is established by optimized SVM. In order to verify the effectiveness of the proposed method, SVM and other classification methods have been chosen as the comparing methods. The experimental results show that the proposed method can improve the classifier performance and cost less time. PMID:26543867

  18. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  19. Design and Analysis of a Micromechanical Three-Component Force Sensor for Characterizing and Quantifying Surface Roughness

    NASA Astrophysics Data System (ADS)

    Liang, Q.; Wu, W.; Zhang, D.; Wei, B.; Sun, W.; Wang, Y.; Ge, Y.

    2015-10-01

    Roughness, which can represent the trade-off between manufacturing cost and performance of mechanical components, is a critical predictor of cracks, corrosion and fatigue damage. In order to measure polished or super-finished surfaces, a novel touch probe based on three-component force sensor for characterizing and quantifying surface roughness is proposed by using silicon micromachining technology. The sensor design is based on a cross-beam structure, which ensures that the system possesses high sensitivity and low coupling. The results show that the proposed sensor possesses high sensitivity, low coupling error, and temperature compensation function. The proposed system can be used to investigate micromechanical structures with nanometer accuracy.

  20. Hybrid optimal online-overnight charging coordination of plug-in electric vehicles in smart grid

    NASA Astrophysics Data System (ADS)

    Masoum, Mohammad A. S.; Nabavi, Seyed M. H.

    2016-10-01

    Optimal coordinated charging of plugged-in electric vehicles (PEVs) in smart grid (SG) can be beneficial for both consumers and utilities. This paper proposes a hybrid optimal online followed by overnight charging coordination of high and low priority PEVs using discrete particle swarm optimization (DPSO) that considers the benefits of both consumers and electric utilities. Objective functions are online minimization of total cost (associated with grid losses and energy generation) and overnight valley filling through minimization of the total load levels. The constraints include substation transformer loading, node voltage regulations and the requested final battery state of charge levels (SOCreq). The main challenge is optimal selection of the overnight starting time (toptimal-overnight,start) to guarantee charging of all vehicle batteries to the SOCreq levels before the requested plug-out times (treq) which is done by simultaneously solving the online and overnight objective functions. The online-overnight PEV coordination approach is implemented on a 449-node SG; results are compared for uncoordinated and coordinated battery charging as well as a modified strategy using cost minimizations for both online and overnight coordination. The impact of toptimal-overnight,start on performance of the proposed PEV coordination is investigated.

  1. How to Prepare an Indirect Cost Rate Proposal for a Non-profit Organization

    EPA Pesticide Factsheets

    The indirect cost rate proposal is the documentation prepared by a grantee organization, in accordance with applicable federal cost principles, to substantiate its claim for the reimbursement of indirect costs.

  2. New spatial clustering-based models for optimal urban facility location considering geographical obstacles

    NASA Astrophysics Data System (ADS)

    Javadi, Maryam; Shahrabi, Jamal

    2014-03-01

    The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for an organization or governments to best locate its resources and facilities and efficiently manage resources to ensure that all demand points are covered and all the needs are met. Most of the recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic results regarding location-allocation. In this article, new models were presented for location of urban facilities while considering geographical obstacles at the same time. In these models, three new distance functions were proposed. The first function was based on the analysis of shortest path in linear network, which was called SPD function. The other two functions, namely PD and P2D, were based on the algorithms that deal with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the visual basic programming language. These models were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distance from demand points to their corresponding facilities. Because of the distance between the demand points and facilities becoming more realistic in the proposed functions, results indicated desired quality of the proposed models in terms of quality of allocating points to centers and logistic cost. Obtained results show promising improvements of the allocation, the logistics costs and the response time. It can also be inferred from this study that the P2D-based model and the SPD-based model yield similar results in terms of the facility location and the demand allocation. It is noted that the P2D-based model showed better execution time than the SPD-based model. Considering logistic costs, facility location and response time, the P2D-based model was appropriate choice for urban facility location problem considering the geographical obstacles.

  3. Home Page

    Science.gov Websites

    Audit Manual Selected Area of Cost Guidebook: FAR 31.205 Cost Principles MRDs - Audit Guidance Memos CAS - Cost Accounting Standards FAR - Federal Acquisition Regulation FAR Cost Principles Guide DFARS Proposal Adequacy Checklist Forward Pricing Rate Proposal Adequacy Checklist Incurred Cost Submission

  4. Predicting Costs of Eastern National Forest Wildernesses.

    ERIC Educational Resources Information Center

    Guldin, Richard W.

    1981-01-01

    A method for estimating the total direct social costs for proposed wilderness areas is presented. A cost framework is constructed and equations are developed for cost components. To illustrate the study's method, social costs are estimated for a proposed wilderness area in New England. (Author/JN)

  5. Multiple kernel learning using single stage function approximation for binary classification problems

    NASA Astrophysics Data System (ADS)

    Shiju, S.; Sumitra, S.

    2017-12-01

    In this paper, the multiple kernel learning (MKL) is formulated as a supervised classification problem. We dealt with binary classification data and hence the data modelling problem involves the computation of two decision boundaries of which one related with that of kernel learning and the other with that of input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data and searching that function from the global RKHS, which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model had shown superior performance in comparison with that of existing two stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that single stage representation helps the knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which inturn boosts the generalisation capacity of the model.

  6. New routes to the functionalization patterning and manufacture of graphene-based materials for biomedical applications.

    PubMed

    De Sanctis, A; Russo, S; Craciun, M F; Alexeev, A; Barnes, M D; Nagareddy, V K; Wright, C D

    2018-06-06

    Graphene-based materials are being widely explored for a range of biomedical applications, from targeted drug delivery to biosensing, bioimaging and use for antibacterial treatments, to name but a few. In many such applications, it is not graphene itself that is used as the active agent, but one of its chemically functionalized forms. The type of chemical species used for functionalization will play a key role in determining the utility of any graphene-based device in any particular biomedical application, because this determines to a large part its physical, chemical, electrical and optical interactions. However, other factors will also be important in determining the eventual uptake of graphene-based biomedical technologies, in particular the ease and cost of manufacture of proposed device and system designs. In this work, we describe three novel routes for the chemical functionalization of graphene using oxygen, iron chloride and fluorine. We also introduce novel in situ methods for controlling and patterning such functionalization on the micro- and nanoscales. Our approaches are readily transferable to large-scale manufacturing, potentially paving the way for the eventual cost-effective production of functionalized graphene-based materials, devices and systems for a range of important biomedical applications.

  7. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worstcase functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  8. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worstcase functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  9. 2 CFR Appendix A to Part 225 - General Principles for Determining Allowable Costs

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... department or agency 15. Indirect cost rate proposal 16. Local government 17. Public assistance cost... plan, public assistance cost allocation plan, and indirect cost rate proposal. Each of these terms is..., or any agency or instrumentality of a local government. 17. “Public assistance cost allocation plan...

  10. Technology assessment and citizen action

    NASA Technical Reports Server (NTRS)

    Mottur, E. R.

    1972-01-01

    The importance of citizen participation in the assessment process is discussed, and a system for citizen assessment action is proposed. A national assessment system is outlined. Citizen participation is considered essential in the assessment process, and impediments to effective action taken by citizens are discussed. These impediments are finance, organization and motivation, and information. The establishment of citizens' assessment associations is proposed, whose functioning would be fostered and regulated by the Citizens' Assessment Administration. The organization, functions, and financing of these associations are described. The implications of citizen action are indicated as the extensive use of class action suits, the broad interpretation of associated costs of litigation, and the use of present scientific research as evidence to assert that it is reasonable to conclude that certain consequences are probable to occur in the future.

  11. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.

  12. Injured Brains and Adaptive Networks: The Benefits and Costs of Hyperconnectivity.

    PubMed

    Hillary, Frank G; Grafman, Jordan H

    2017-05-01

    A common finding in human functional brain-imaging studies is that damage to neural systems paradoxically results in enhanced functional connectivity between network regions, a phenomenon commonly referred to as 'hyperconnectivity'. Here, we describe the various ways that hyperconnectivity operates to benefit a neural network following injury while simultaneously negotiating the trade-off between metabolic cost and communication efficiency. Hyperconnectivity may be optimally expressed by increasing connections through the most central and metabolically efficient regions (i.e., hubs). While adaptive in the short term, we propose that chronic hyperconnectivity may leave network hubs vulnerable to secondary pathological processes over the life span due to chronically elevated metabolic stress. We conclude by offering novel, testable hypotheses for advancing our understanding of the role of hyperconnectivity in systems-level brain plasticity in neurological disorders. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Decision making under uncertainty: a quasimetric approach.

    PubMed

    N'Guyen, Steve; Moulin-Frier, Clément; Droulez, Jacques

    2013-01-01

    We propose a new approach for solving a class of discrete decision making problems under uncertainty with positive cost. This issue concerns multiple and diverse fields such as engineering, economics, artificial intelligence, cognitive science and many others. Basically, an agent has to choose a single or series of actions from a set of options, without knowing for sure their consequences. Schematically, two main approaches have been followed: either the agent learns which option is the correct one to choose in a given situation by trial and error, or the agent already has some knowledge on the possible consequences of his decisions; this knowledge being generally expressed as a conditional probability distribution. In the latter case, several optimal or suboptimal methods have been proposed to exploit this uncertain knowledge in various contexts. In this work, we propose following a different approach, based on the geometric intuition of distance. More precisely, we define a goal independent quasimetric structure on the state space, taking into account both cost function and transition probability. We then compare precision and computation time with classical approaches.

  14. A novel rail defect detection method based on undecimated lifting wavelet packet transform and Shannon entropy-improved adaptive line enhancer

    NASA Astrophysics Data System (ADS)

    Hao, Qiushi; Zhang, Xin; Wang, Yan; Shen, Yi; Makis, Viliam

    2018-07-01

    Acoustic emission (AE) technology is sensitive to subliminal rail defects, however strong wheel-rail contact rolling noise under high-speed condition has gravely impeded detecting of rail defects using traditional denoising methods. In this context, the paper develops an adaptive detection method for rail cracks, which combines multiresolution analysis with an improved adaptive line enhancer (ALE). To obtain elaborate multiresolution information of transient crack signals with low computational cost, lifting scheme-based undecimated wavelet packet transform is adopted. In order to feature the impulsive property of crack signals, a Shannon entropy-improved ALE is proposed as a signal enhancing approach, where Shannon entropy is introduced to improve the cost function. Then a rail defect detection plan based on the proposed method for high-speed condition is put forward. From theoretical analysis and experimental verification, it is demonstrated that the proposed method has superior performance in enhancing the rail defect AE signal and reducing the strong background noise, offering an effective multiresolution approach for rail defect detection under high-speed and strong-noise condition.

  15. Surveillance of a 2D Plane Area with 3D Deployed Cameras

    PubMed Central

    Fu, Yi-Ge; Zhou, Jie; Deng, Lei

    2014-01-01

    As the use of camera networks has expanded, camera placement to satisfy some quality assurance parameters (such as a good coverage ratio, an acceptable resolution constraints, an acceptable cost as low as possible, etc.) has become an important problem. The discrete camera deployment problem is NP-hard and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under some more realistic assumptions: (1) deploy the cameras in the 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to get a maximum visual coverage under more constraints, such as field of view (FOV) of the cameras and the minimum resolution constraints. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regulation item in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353

  16. Proposed Reliability/Cost Model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  17. Providing Coverage for the Unique Lifelong Health Care Needs of Living Kidney Donors Within the Framework of Financial Neutrality.

    PubMed

    Gill, J S; Delmonico, F; Klarenbach, S; Capron, A M

    2017-05-01

    Organ donation should neither enrich donors nor impose financial burdens on them. We described the scope of health care required for all living kidney donors, reflecting contemporary understanding of long-term donor health outcomes; proposed an approach to identify donor health conditions that should be covered within the framework of financial neutrality; and proposed strategies to pay for this care. Despite the Affordable Care Act in the United States, donors continue to have inadequate coverage for important health conditions that are donation related or that may compromise postdonation kidney function. Amendment of Medicare regulations is needed to clarify that surveillance and treatment of conditions that may compromise postdonation kidney function following donor nephrectomy will be covered without expense to the donor. In other countries lacking health insurance for all residents, sufficient data exist to allow the creation of a compensation fund or donor insurance policies to ensure appropriate care. Providing coverage for donation-related sequelae as well as care to preserve postdonation kidney function ensures protection against the financial burdens of health care encountered by donors throughout their lives. Providing coverage for this care should thus be cost-effective, even without considering the health care cost savings that occur for living donor transplant recipients. © 2016 The American Society of Transplantation and the American Society of Transplant Surgeons.

  18. Optimizing staffing, quality, and cost in home healthcare nursing: theory synthesis.

    PubMed

    Park, Claire Su-Yeon

    2017-08-01

    To propose a new theory pinpointing the optimal nurse staffing threshold delivering the maximum quality of care relative to attendant costs in home health care. Little knowledge exists on the theoretical foundation addressing the inter-relationship among quality of care, nurse staffing, and cost. Theory synthesis. Cochrane Library, PubMed, CINAHL, EBSCOhost Web and Web of Science (25 February - 26 April 2013; 20 January - 22 March 2015). Most of the existing theories/models lacked the detail necessary to explain the relationship among quality of care, nurse staffing and cost. Two notable exceptions are: 'Production Function for Staffing and Quality in Nursing Homes,' which describes an S-shaped trajectory between quality of care and nurse staffing and 'Thirty-day Survival Isoquant and Estimated Costs According to the Nurse Staff Mix,' which depicts a positive quadric relationship between nurse staffing and cost according to quality of care. A synthesis of these theories led to an innovative multi-dimensional econometric theory helping to determine the maximum quality of care for patients while simultaneously delivering nurse staffing in the most cost-effective way. The theory-driven threshold, navigated by Mathematical Programming based on the Duality Theorem in Mathematical Economics, will help nurse executives defend sufficient nurse staffing with scientific justification to ensure optimal patient care; help stakeholders set an evidence-based reasonable economical goal; and facilitate patient-centred decision-making in choosing the institution which delivers the best quality of care. A new theory to determine the optimum nurse staffing maximizing quality of care relative to cost was proposed. © 2017 The Author. Journal of Advanced Nursing © John Wiley & Sons Ltd.

  19. 48 CFR 931.205-18 - Independent research and development (IR&D) and bid and proposal (B&P) costs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... development (IR&D) and bid and proposal (B&P) costs. 931.205-18 Section 931.205-18 Federal Acquisition... bid and proposal (B&P) costs. (c)(2) IR&D costs are recoverable under DOE contracts to the extent they... the DOE program. The term “DOE program” encompasses the DOE total mission and its objectives. B&P...

  20. 48 CFR 931.205-18 - Independent research and development (IR&D) and bid and proposal (B&P) costs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... development (IR&D) and bid and proposal (B&P) costs. 931.205-18 Section 931.205-18 Federal Acquisition... bid and proposal (B&P) costs. (c)(2) IR&D costs are recoverable under DOE contracts to the extent they... the DOE program. The term “DOE program” encompasses the DOE total mission and its objectives. B&P...

  1. 48 CFR 931.205-18 - Independent research and development (IR&D) and bid and proposal (B&P) costs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... development (IR&D) and bid and proposal (B&P) costs. 931.205-18 Section 931.205-18 Federal Acquisition... bid and proposal (B&P) costs. (c)(2) IR&D costs are recoverable under DOE contracts to the extent they... the DOE program. The term “DOE program” encompasses the DOE total mission and its objectives. B&P...

  2. 48 CFR 931.205-18 - Independent research and development (IR&D) and bid and proposal (B&P) costs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... development (IR&D) and bid and proposal (B&P) costs. 931.205-18 Section 931.205-18 Federal Acquisition... bid and proposal (B&P) costs. (c)(2) IR&D costs are recoverable under DOE contracts to the extent they... the DOE program. The term “DOE program” encompasses the DOE total mission and its objectives. B&P...

  3. 48 CFR 931.205-18 - Independent research and development (IR&D) and bid and proposal (B&P) costs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... development (IR&D) and bid and proposal (B&P) costs. 931.205-18 Section 931.205-18 Federal Acquisition... bid and proposal (B&P) costs. (c)(2) IR&D costs are recoverable under DOE contracts to the extent they... the DOE program. The term “DOE program” encompasses the DOE total mission and its objectives. B&P...

  4. A Scalable Implementation of Van der Waals Density Functionals

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Gygi, Francois

    2010-03-01

    Recently developed Van der Waals density functionals[1] offer the promise to account for weak intermolecular interactions that are not described accurately by local exchange-correlation density functionals. In spite of recent progress [2], the computational cost of such calculations remains high. We present a scalable parallel implementation of the functional proposed by Dion et al.[1]. The method is implemented in the Qbox first-principles simulation code (http://eslab.ucdavis.edu/software/qbox). Application to large molecular systems will be presented. [4pt] [1] M. Dion et al. Phys. Rev. Lett. 92, 246401 (2004).[0pt] [2] G. Roman-Perez and J. M. Soler, Phys. Rev. Lett. 103, 096102 (2009).

  5. Cyborg beast: a low-cost 3d-printed prosthetic hand for children with upper-limb differences.

    PubMed

    Zuniga, Jorge; Katsavelis, Dimitrios; Peck, Jean; Stollberg, John; Petrykowski, Marc; Carson, Adam; Fernandez, Cristina

    2015-01-20

    There is an increasing number of children with traumatic and congenital hand amputations or reductions. Children's prosthetic needs are complex due to their small size, constant growth, and psychosocial development. Families' financial resources play a crucial role in the prescription of prostheses for their children, especially when private insurance and public funding are insufficient. Electric-powered (i.e., myoelectric) and body-powered (i.e., mechanical) devices have been developed to accommodate children's needs, but the cost of maintenance and replacement represents an obstacle for many families. Due to the complexity and high cost of these prosthetic hands, they are not accessible to children from low-income, uninsured families or to children from developing countries. Advancements in computer-aided design (CAD) programs, additive manufacturing, and image editing software offer the possibility of designing, printing, and fitting prosthetic hands devices at a distance and at very low cost. The purpose of this preliminary investigation was to describe a low-cost three-dimensional (3D)-printed prosthetic hand for children with upper-limb reductions and to propose a prosthesis fitting methodology that can be performed at a distance. No significant mean differences were found between the anthropometric and range of motion measurements taken directly from the upper limbs of subjects versus those extracted from photographs. The Bland and Altman plots show no major bias and narrow limits of agreements for lengths and widths and small bias and wider limits of agreements for the range of motion measurements. The main finding of the survey was that our prosthetic device may have a significant potential to positively impact quality of life and daily usage, and can be incorporated in several activities at home and in school. This investigation describes a low-cost 3D-printed prosthetic hand for children and proposes a distance fitting procedure. The Cyborg Beast prosthetic hand and the proposed distance-fitting procedures may represent a possible low-cost alternative for children in developing countries and those who have limited access to health care providers. Further studies should examine the functionality, validity, durability, benefits, and rejection rate of this type of low-cost 3D-printed prosthetic device.

  6. COST MODEL FOR LARGE URBAN SCHOOLS.

    ERIC Educational Resources Information Center

    O'BRIEN, RICHARD J.

    THIS DOCUMENT CONTAINS A COST SUBMODEL OF AN URBAN EDUCATIONAL SYSTEM. THIS MODEL REQUIRES THAT PUPIL POPULATION AND PROPOSED SCHOOL BUILDING ARE KNOWN. THE COST ELEMENTS ARE--(1) CONSTRUCTION COSTS OF NEW PLANTS, (2) ACQUISITION AND DEVELOPMENT COSTS OF BUILDING SITES, (3) CURRENT OPERATING EXPENSES OF THE PROPOSED SCHOOL, (4) PUPIL…

  7. 32 CFR 165.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... That includes costs of any engineering change proposals initiated before the date of calculations of...-house DoD effort. This includes costs of any engineering change proposal started before the date of... NONRECURRING COSTS ON SALES OF U.S. ITEMS § 165.3 Definitions. (a) Cost pool. Represents the total cost to be...

  8. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    PubMed

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in PDE dynamics. Novel tuning law is proposed to guarantee the boundedness of identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for near optimal RBN weights is created, such that the HJB equation error is minimized while the dynamics are identified and closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using the Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.

  9. Attractor neural networks with resource-efficient synaptic connectivity

    NASA Astrophysics Data System (ADS)

    Pehlevan, Cengiz; Sengupta, Anirvan

    Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with memory storage function to shape synaptic connectivity of attractor networks. We propose that given a set of memories, in the form of population activity patterns, the neural circuit choses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles, 1) connectivity is sparse, because synapses are costly, 2) bidirectional connections are overrepresented and 3) are stronger, because attractor states need strong recurrence.

  10. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.

  11. Failure Mode Identification Through Clustering Analysis

    NASA Technical Reports Server (NTRS)

    Arunajadai, Srikesh G.; Stone, Robert B.; Tumer, Irem Y.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Research has shown that nearly 80% of the costs and problems are created in product development and that cost and quality are essentially designed into products in the conceptual stage. Currently, failure identification procedures (such as FMEA (Failure Modes and Effects Analysis), FMECA (Failure Modes, Effects and Criticality Analysis) and FTA (Fault Tree Analysis)) and design of experiments are being used for quality control and for the detection of potential failure modes during the detail design stage or post-product launch. Though all of these methods have their own advantages, they do not give information as to what are the predominant failures that a designer should focus on while designing a product. This work uses a functional approach to identify failure modes, which hypothesizes that similarities exist between different failure modes based on the functionality of the product/component. In this paper, a statistical clustering procedure is proposed to retrieve information on the set of predominant failures that a function experiences. The various stages of the methodology are illustrated using a hypothetical design example.

  12. Beyond statistical inference: A decision theory for science

    PubMed Central

    KILLEEN, PETER R.

    2008-01-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests—which place all value on the replicability of an effect and none on its magnitude—as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute. PMID:17201351

  13. Beyond statistical inference: a decision theory for science.

    PubMed

    Killeen, Peter R

    2006-08-01

    Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests--which place all value on the replicability of an effect and none on its magnitude--as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute.

  14. Optimal trajectory generation for mechanical arms. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Iemenschot, J. A.

    1972-01-01

    A general method of generating optimal trajectories between an initial and a final position of an n degree of freedom manipulator arm with nonlinear equations of motion is proposed. The method is based on the assumption that the time history of each of the coordinates can be expanded in a series of simple time functions. By searching over the coefficients of the terms in the expansion, trajectories which minimize the value of a given cost function can be obtained. The method has been applied to a planar three degree of freedom arm.

  15. Chitosan in Molecularly-Imprinted Polymers: Current and Future Prospects.

    PubMed

    Xu, Long; Huang, Yun-An; Zhu, Qiu-Jin; Ye, Chun

    2015-08-07

    Chitosan is widely used in molecular imprinting technology (MIT) as a functional monomer or supporting matrix because of its low cost and high contents of amino and hydroxyl functional groups. The various excellent properties of chitosan, which include nontoxicity, biodegradability, biocompatibility, and attractive physical and mechanical performances, make chitosan a promising alternative to conventional functional monomers. Recently, chitosan molecularly-imprinted polymers have gained considerable attention and showed significant potential in many fields, such as curbing environmental pollution, medicine, protein separation and identification, and chiral-compound separation. These extensive applications are due to the polymers' desired selectivity, physical robustness, and thermal stability, as well as their low cost and easy preparation. Cross-linkers, which fix the functional groups of chitosan around imprinted molecules, play an important role in chitosan molecularly-imprinted polymers. This review summarizes the important cross-linkers of chitosan molecularly-imprinted polymers and illustrates the cross-linking mechanism of chitosan and cross-linkers based on the two glucosamine units. Finally, some significant attempts to further develop the application of chitosan in MIT are proposed.

  16. 75 FR 57272 - Proposed CERCLA Administrative Cost Recovery Settlement; Gilberts/Kedzie Site, Village of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-20

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9203-6] Proposed CERCLA Administrative Cost Recovery... hereby given of a proposed administrative settlement for recovery of past response costs concerning the... requires the settling parties to pay $3,000.00 to the Hazardous Substance Superfund and additional payments...

  17. 76 FR 296 - Periodic Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-04

    ... part would update the mail processing portion of the Parcel Select/Parcel Return Service cost models...) processing cost model that was filed as Proposal Seven on September 8, 2010. Proposal Thirteen at 1. These... develop the Standard Mail/non-flat machinable (NFM) mail processing cost model. It also proposes to use...

  18. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.

  19. Why not private health insurance? 1. Insurance made easy

    PubMed Central

    Deber, R; Gildiner, A; Baranek, P

    1999-01-01

    How realistic are proposals to expand the financing of Canadian health care through private insurance, either in a parallel stream or an expanded supplementary tier? Any successful business requires that revenues exceed expenditures. Under a voluntary health insurance plan those at highest risk would be the most likely to seek coverage; insurers working within a competitive market would have to limit their financial risk through such mechanisms as "risk selection" to avoid clients likely to incur high costs and/or imposing caps on the costs covered. It is unlikely that parallel private plans will have a market if a comprehensive public insurance system continues to exist and function well. Although supplementary plans are more congruous with insurance principles, they would raise costs for purchasers and would probably not provide full open-ended coverage to all potential clients. Insurance principles suggest that voluntary insurance plans that shift costs to the private sector would damage the publicly funded system and would be unable to cover costs for all services required. PMID:10497613

  20. Analysis of hospital costs as a basis for pricing services in Mali.

    PubMed

    Audibert, Martine; Mathonnat, Jacky; Pareil, Delphine; Kabamba, Raymond

    2007-01-01

    In a move to achieve a better equity in the funding of access to health care, particularly for the poor, a better efficiency of hospital functioning and a better financial balance, the analysis of hospital costs in Mali brings several key elements to improve the pricing of medical services. The method utilized is the classical step-down process which takes into consideration the entire set of direct and indirect costs borne by the hospital. Although this approach does not allow to estimate the economic cost of consultations, it is a useful contribution to assess the financial activity of the hospital and improve its performance, financially speaking, through a more relevant user fees policy. The study shows that there are possibilities of cross-subsidies within the hospital or within services which improve the recovery of some of the current costs. It also leads to several proposals of pricing care while taking into account the constraints, the level of the hospital its specific conditions and equity. Copyright (c) 2007 John Wiley & Sons, Ltd.

  1. Quantum generalisation of feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  2. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system ofmore » controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.« less

  3. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    A vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. The main objectives of the vehicle routing problem are to minimize the traveled distance, total traveling time, number of vehicles and cost function of transportation. Reducing these variables leads to decreasing the total cost and increasing the driver's satisfaction level. On the other hand, this satisfaction, which will decrease by increasing the service time, is considered as an important logistic problem for a company. The stochastic time dominated by a probability variable leads to variation of the service time, while it is ignored in classical routing problems. This paper investigates the problem of the increasing service time by using the stochastic time for each tour such that the total traveling time of the vehicles is limited to a specific limit based on a defined probability. Since exact solutions of the vehicle routing problem that belong to the category of NP-hard problems are not practical in a large scale, a hybrid algorithm based on simulated annealing with genetic operators was proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the related results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.

  4. Influence of cost functions and optimization methods on solving the inverse problem in spatially resolved diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.

    2017-03-01

    Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for in vivo characterizing pathological modifications in epithelial tissues such as cancer. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires to consider 3 components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performances in estimating OP depending on the choice made for each of the latter components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and TrustRegion-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performances are evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combination between the number of variables to be estimated, the nature of the forward model, the cost function to be minimized and the optimization method are discussed.

  5. A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging

    PubMed Central

    Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu

    2017-01-01

    This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely-used sparse regularization is considered as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. Alternatively, this paper discusses how to estimate the single parameter of Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with the conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. The superresolution results for the simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583

  6. A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction.

    PubMed

    Chen, C P; Wan, J Z

    1999-01-01

    A fast learning algorithm is proposed to find an optimal weights of the flat neural networks (especially, the functional-link network). Although the flat networks are used for nonlinear function approximation, they can be formulated as linear systems. Thus, the weights of the networks can be solved easily using a linear least-square method. This formulation makes it easier to update the weights instantly for both a new added pattern and a new added enhancement node. A dynamic stepwise updating algorithm is proposed to update the weights of the system on-the-fly. The model is tested on several time-series data including an infrared laser data set, a chaotic time-series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models in which more complex architectures and more costly training are needed. The results indicate that the proposed model is very attractive to real-time processes.

  7. Training Guide for the Management Analyst Industrial Engineer Technician

    DTIC Science & Technology

    1979-07-01

    comtemporary work operations, and blending traditional and modern organization concepts, the student devwlops the facility to analyze and create organization...training, the attendee will know the functions of a computer as it processes business data to produce information for improved management. He will...action which is most cost effective when considering proposed investments. Emphasis is placed on the adaption of general business practices to

  8. Opportunities in SMR Emergency Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moe, Wayne L.

    2014-10-01

    Using year 2014 cost information gathered from twenty different locations within the current commercial nuclear power station fleet, an assessment was performed concerning compliance costs associated with the offsite emergency Planning Standards contained in 10 CFR 50.47(b). The study was conducted to quantitatively determine the potential cost benefits realized if an emergency planning zone (EPZ) were reduced in size according to the lowered risks expected to accompany small modular reactors (SMR). Licensees are required to provide a technical basis when proposing to reduce the surrounding EPZ size to less than the 10 mile plume exposure and 50 mile ingestion pathwaymore » distances currently being used. To assist licensees in assessing the savings that might be associated with such an action, this study established offsite emergency planning costs in connection with four discrete EPZ boundary distances, i.e., site boundary, 2 miles, 5 miles and 10 miles. The boundary selected by the licensee would be based on where EPA Protective Action Guidelines are no longer likely to be exceeded. Additional consideration was directed towards costs associated with reducing the 50 mile ingestion pathway EPZ. The assessment methodology consisted of gathering actual capital costs and annual operating and maintenance costs for offsite emergency planning programs at the surveyed sites, partitioning them according to key predictive factors, and allocating those portions to individual emergency Planning Standards as a function of EPZ size. Two techniques, an offsite population-based approach and an area-based approach, were then employed to calculate the scaling factors which enabled cost projections as a function of EPZ size. Site-specific factors that influenced source data costs, such as the effects of supplemental funding to external state and local agencies for offsite response organization activities, were incorporated into the analysis to the extent those factors could be representatively apportioned.« less

  9. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  10. HealthMarts, HIPCs (health insurance purchasing cooperatives), MEWAs (multiple employee welfare arrangements), and AHPs (association health plans): a guide for the perplexed.

    PubMed

    Hall, M A; Wicks, E K; Lawlor, J S

    2001-01-01

    This paper considers how pending proposals to authorize new forms of group purchasing arrangements for health insurance would fit and function within the existing, highly complex market and regulatory landscape and whether these proposals are likely to meet their stated objectives and avoid unintended consequences. Cost savings are more likely to result from increased risk segmentation than through true market efficiencies. Thus, these proposals could erode previous market reforms whose goal is increased risk pooling. On the other hand, these proposals contain important enhancements, clarifications, and simplification of state and federal regulatory oversight of group purchasing vehicles. Also, they address some of the problems that have hampered the performance of purchasing cooperatives. On balance, although these proposals should receive cautious and careful consideration, they are not likely to produce a significant overall reduction in premiums or increase in coverage.

  11. Operation analysis of a Chebyshev-Pantograph leg mechanism for a single DOF biped robot

    NASA Astrophysics Data System (ADS)

    Liang, Conghui; Ceccarelli, Marco; Takeda, Yukio

    2012-12-01

    In this paper, operation analysis of a Chebyshev-Pantograph leg mechanism is presented for a single degree of freedom (DOF) biped robot. The proposed leg mechanism is composed of a Chebyshev four-bar linkage and a pantograph mechanism. In contrast to general fully actuated anthropomorphic leg mechanisms, the proposed leg mechanism has peculiar features like compactness, low-cost, and easy-operation. Kinematic equations of the proposed leg mechanism are formulated for a computer oriented simulation. Simulation results show the operation performance of the proposed leg mechanism with suitable characteristics. A parametric study has been carried out to evaluate the operation performance as function of design parameters. A prototype of a single DOF biped robot equipped with two proposed leg mechanisms has been built at LARM (Laboratory of Robotics and Mechatronics). Experimental test shows practical feasible walking ability of the prototype, as well as drawbacks are discussed for the mechanical design.

  12. Sieve estimation in semiparametric modeling of longitudinal data with informative observation times.

    PubMed

    Zhao, Xingqiu; Deng, Shirong; Liu, Li; Liu, Lei

    2014-01-01

    Analyzing irregularly spaced longitudinal data often involves modeling possibly correlated response and observation processes. In this article, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates, leaving patterns of the observation process to be arbitrary. For inference on the regression parameters and the baseline mean function, a spline-based least squares estimation approach is proposed. The consistency, rate of convergence, and asymptotic normality of the proposed estimators are established. Our new approach is different from the usual approaches relying on the model specification of the observation scheme, and it can be easily used for predicting the longitudinal response. Simulation studies demonstrate that the proposed inference procedure performs well and is more robust. The analyses of bladder tumor data and medical cost data are presented to illustrate the proposed method.

  13. Combining self-organizing mapping and supervised affinity propagation clustering approach to investigate functional brain networks involved in motor imagery and execution with fMRI measurements.

    PubMed

    Zhang, Jiang; Liu, Qi; Chen, Huafu; Yuan, Zhen; Huang, Jin; Deng, Lihua; Lu, Fengmei; Zhang, Junpeng; Wang, Yuqing; Wang, Mingwen; Chen, Liangyin

    2015-01-01

    Clustering analysis methods have been widely applied to identifying the functional brain networks of a multitask paradigm. However, the previously used clustering analysis techniques are computationally expensive and thus impractical for clinical applications. In this study a novel method, called SOM-SAPC that combines self-organizing mapping (SOM) and supervised affinity propagation clustering (SAPC), is proposed and implemented to identify the motor execution (ME) and motor imagery (MI) networks. In SOM-SAPC, SOM was first performed to process fMRI data and SAPC is further utilized for clustering the patterns of functional networks. As a result, SOM-SAPC is able to significantly reduce the computational cost for brain network analysis. Simulation and clinical tests involving ME and MI were conducted based on SOM-SAPC, and the analysis results indicated that functional brain networks were clearly identified with different response patterns and reduced computational cost. In particular, three activation clusters were clearly revealed, which include parts of the visual, ME and MI functional networks. These findings validated that SOM-SAPC is an effective and robust method to analyze the fMRI data with multitasks.

  14. Mitigation of epidemics in contact networks through optimal contact adaptation *

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2013-01-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined into a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimizing total infection cases and minimizing the reduction of contact weights. Using Pontryagin's maximum principle, the obtained solution is shown to be a unique candidate representing the dynamically weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics, based on a bang-bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results reveal the infection level at which the mitigation strategies are most effectively applied to the contact weights. PMID:23906209

  15. Mitigation of epidemics in contact networks through optimal contact adaptation.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2013-08-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined into a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimizing total infection cases and minimizing the reduction of contact weights. Using Pontryagin's maximum principle, the obtained solution is shown to be a unique candidate representing the dynamically weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics, based on a bang-bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results reveal the infection level at which the mitigation strategies are most effectively applied to the contact weights.

  16. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function, called bundling, that relaxes the requirement for large numbers of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and the computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and a script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on the predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
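
    The following is a minimal, hypothetical sketch of the bundling idea, not the paper's R tutorial: similar parametrizations are grouped (here with k-means), one forward-model run is performed per bundle, and every member of a bundle shares the resulting likelihood value. The names forward_model, n_bundles, and sigma are illustrative assumptions.

```python
# Hypothetical bundling sketch: one forward-model (FM) run per group of
# similar parametrizations, instead of one run per parametrization.
import numpy as np
from sklearn.cluster import KMeans

def bundled_likelihood(param_samples, observed, forward_model,
                       n_bundles=50, sigma=1.0):
    """Approximate per-sample Gaussian likelihoods with one FM call per bundle."""
    km = KMeans(n_clusters=n_bundles, n_init=10).fit(param_samples)
    likes = np.empty(len(param_samples))
    for b in range(n_bundles):
        members = np.where(km.labels_ == b)[0]
        predicted = forward_model(km.cluster_centers_[b])  # single FM call
        resid = observed - predicted
        likes[members] = np.exp(-0.5 * (resid @ resid) / sigma**2)
    return likes
```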

  17. 14 CFR 161.205 - Required analysis of proposed restriction and alternatives.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of the anticipated or actual costs and benefits of the proposed noise or access restriction; (2) A... not involve aircraft restrictions, and a comparison of the costs and benefits of such alternative measures to costs and benefits of the proposed noise or access restriction. (b) In preparing the analyses...

  18. 48 CFR 53.301-1437 - Settlement Proposal for Cost-Reimbursement Type Contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Settlement Proposal for Cost-Reimbursement Type Contracts. 53.301-1437 Section 53.301-1437 Federal Acquisition Regulations...-1437 Settlement Proposal for Cost-Reimbursement Type Contracts. ER09DE97.012 [62 FR 64951, Dec. 9, 1997] ...

  19. Novel parametric reduced order model for aeroengine blade dynamics

    NASA Astrophysics Data System (ADS)

    Yuan, Jie; Allegri, Giuliano; Scarpa, Fabrizio; Rajasekaran, Ramesh; Patsias, Sophoclis

    2015-10-01

    The work introduces a novel reduced order model (ROM) technique to describe the dynamic behavior of turbofan aeroengine blades. We introduce an equivalent 3D frame model to describe the coupled flexural/torsional mode shapes, with their relevant natural frequencies and associated modal masses. The frame configurations are identified through a structural identification approach based on a simulated annealing algorithm with stochastic tunneling. The cost functions are constituted by linear combinations of the relative errors associated with the resonance frequencies, the individual modal assurance criteria (MAC), and either overall static or modal masses. When static masses are considered, the optimized 3D frame can represent the blade dynamic behavior with an 8% error on the MAC, a 1% error on the associated modal frequencies, and a 1% error on the overall static mass. When using modal masses in the cost function the performance of the ROM is similar, but the overall error increases to 7%. The approach proposed in this paper is considerably more accurate than state-of-the-art blade ROMs based on traditional Timoshenko beams, and provides excellent accuracy at reduced computational time when compared against high fidelity FE models. A sensitivity analysis shows that the proposed model can adequately predict the global trends of the variations of the natural frequencies when lumped masses are used for mistuning analysis. The proposed ROM also follows extremely closely the sensitivity of the high fidelity finite element models when the material parameters are varied in the sensitivity analysis.
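
    A minimal sketch of the kind of identification cost function described, assuming frequencies and mode shapes are supplied as NumPy arrays: a linear combination of relative frequency errors, MAC deviation from unity, and the relative error on overall static mass. The weights and the dictionary layout are illustrative, not the authors' exact choices.

```python
# Illustrative ROM identification cost: frequencies + MAC + static mass.
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode-shape vectors."""
    return np.dot(phi_a, phi_b) ** 2 / (np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b))

def rom_cost(frame, target, w_f=1.0, w_mac=1.0, w_m=1.0):
    """frame/target: dicts with 'freqs' (array), 'modes' (list of arrays),
    'mass' (float). Lower is better; feed this to simulated annealing."""
    f_err = np.abs(frame["freqs"] - target["freqs"]) / target["freqs"]
    mac_err = [1.0 - mac(a, b) for a, b in zip(frame["modes"], target["modes"])]
    m_err = abs(frame["mass"] - target["mass"]) / target["mass"]
    return w_f * f_err.mean() + w_mac * np.mean(mac_err) + w_m * m_err
```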

  20. Dependence in Alzheimer's disease and service use costs, quality of life, and caregiver burden: the DADE study.

    PubMed

    Jones, Roy W; Romeo, Renee; Trigg, Richard; Knapp, Martin; Sato, Azusa; King, Derek; Niecko, Timothy; Lacey, Loretto

    2015-03-01

    Most models of how patient and caregiver characteristics and costs change with Alzheimer's disease (AD) progression focus on one aspect, for example, cognition. AD is inadequately defined by a single domain; tracking progression by focusing on a single aspect may mean other important aspects are insufficiently addressed. Dependence has been proposed as a better marker for following disease progression. This was a cross-sectional observational study (18 UK sites). Two hundred forty-nine community-dwelling or institutionalized patients with possible/probable AD and Mini-Mental State Examination scores of 3-26, each with a knowledgeable informant, participated. Significant associations were noted between dependence (Dependence Scale [DS]) and clinical measures of severity (cognition, function, and behavior). Bivariate and multivariate models demonstrated significant associations between DS and service use cost, patient quality of life, and caregiver-perceived burden. The construct of dependence may help translate the combined impact of changes in cognition, function, and behavior into a more readily interpretable form. The DS is useful for assessing patients with AD in clinical trials/research. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  1. Wind farm topology-finding algorithm considering performance, costs, and environmental impacts.

    PubMed

    Tazi, Nacef; Chatelet, Eric; Bouzidi, Youcef; Meziane, Rachid

    2017-06-05

    Finding the optimal installed capacity of a wind farm has become a pressing problem for investors and decision makers, since onshore wind farms are subject to performance, economic, and environmental constraints. The aim of this work is to define the best installed capacity (best topology) that maximizes performance and profits while also accounting for environmental impacts. This article continues recent work on a wind farm topology-finding algorithm. The proposed resolution technique is based on finding the topology of the system that maximizes wind farm performance (availability) under the constraints of costs and capital investments. The global warming potential of the wind farm is calculated and taken into account in the results. A case study is carried out using data and constraints similar to those collected from wind farm constructors, managers, and maintainers. Multi-state systems (MSS), the universal generating function (UGF), and wind and load charge functions are applied. An economic study was conducted to assess the wind farm investment: net present value (NPV) and levelized cost of energy (LCOE) were calculated for the best topologies found; a worked sketch of these two metrics follows.
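
    A minimal sketch of the two investment metrics named in the abstract, under their textbook definitions; the cash-flow figures and the discount rate are illustrative assumptions, not values from the study.

```python
# Textbook NPV and LCOE for a wind farm investment appraisal.
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def lcoe(rate, costs, energies):
    """Levelized cost of energy: discounted costs over discounted energy."""
    num = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    den = sum(e / (1.0 + rate) ** t for t, e in enumerate(energies))
    return num / den

rate = 0.06
costs = [10e6] + [0.9e6] * 20    # capital outlay, then yearly O&M
energy = [0.0] + [25e6] * 20     # kWh generated per year
cash = [-10e6] + [1.5e6] * 20    # net cash flow per year
print(f"NPV  = ${npv(rate, cash):,.0f}")        # > 0: topology is profitable
print(f"LCOE = ${lcoe(rate, costs, energy):.3f}/kWh")
```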

  2. Model of Management (Mo.Ma) for the patient with schizophrenia: crisis control, maintenance, relapse prevention, and recovery with long-acting injectable antipsychotics (LAIs).

    PubMed

    Brugnoli, Roberto; Rapinesi, Chiara; Kotzalidis, Georgios D; Marcellusi, Andrea; Mennini, Francesco S; De Filippis, Sergio; Carrus, Dario; Ballerini, Andrea; Francomano, Antonio; Ducci, Giuseppe; Del Casale, Antonio; Girardi, Paolo

    2016-01-01

    Schizophrenia is a severe mental disease that affects approximately 1% of the population, with a relevant chronic impact on social and occupational functioning and daily activities. People with schizophrenia are 2-2.5 times more likely to die early than the general population. Non-adherence to antipsychotic medications, in both chronic and first-episode schizophrenia, is one of the most important risk factors for relapse and hospitalization, which consequently contributes to increased costs due to psychiatric hospitalization. Atypical long-acting injectable (LAI) antipsychotics can improve treatment adherence and decrease re-hospitalization rates in patients with schizophrenia from its onset. The primary goals in the management of schizophrenia are directed not only at symptom reduction in the short and long term, but also at maintaining physical and mental functioning, improving quality of life, and promoting patient recovery. Our aims were to propose a scientific evidence-based integrated model that provides an algorithm for the recovery of patients with schizophrenia, and to investigate the effectiveness and safety of LAI antipsychotics in the treatment, maintenance, relapse prevention, and recovery of schizophrenia. After an accurate literature review, we identified, collected, and analyzed the crucial points in caring for patients with schizophrenia, from which we defined the steps of the management model and the choice of the best treatment option. In the proposed management model, the choice of a second-generation long-acting antipsychotic could allow better patient management from the earliest stages of illness, especially for young individuals with schizophrenia onset, better recovery, and significant reductions in relapse and health care costs. LAI formulations of antipsychotics are valuable because they help patients remain adherent to their medication through regular contact with healthcare professionals, and they prevent covert non-adherence. The proposed model of management could allow better patient management and recovery, in which treatment with an LAI formulation is a safe and effective therapeutic option. This new therapeutic approach could change the cost structure of schizophrenia by decreasing costs through efficient economic resource allocation guaranteed by efficient diagnostic and therapeutic pathways.

  3. Benefit-cost estimation for alternative drinking water maximum contaminant levels

    NASA Astrophysics Data System (ADS)

    Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.

    2001-08-01

    A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
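
    A minimal Monte Carlo sketch of the compliance logic described, under assumed lognormal occurrence parameters: simulate source-water arsenic concentrations, compare them with the MCL, and charge each non-compliant system the cheapest treatment option that reaches the standard. All numbers and the option list are illustrative, not the study's values.

```python
# Illustrative national compliance-cost simulation for an arsenic MCL.
import numpy as np

rng = np.random.default_rng(0)
MCL = 10.0  # ug/L, e.g., a revised arsenic standard

# (removal fraction, annualized cost) for a few stand-in treatment options
options = [(0.50, 40_000.0), (0.80, 90_000.0), (0.95, 150_000.0)]

def compliance_cost(n_systems=10_000, mu=1.2, sigma=1.0):
    raw = rng.lognormal(mu, sigma, n_systems)  # ug/L source-water arsenic
    total = 0.0
    for c in raw:
        if c <= MCL:
            continue                           # already compliant
        feasible = [cost for rem, cost in options if c * (1 - rem) <= MCL]
        # fall back to the most aggressive option if none fully complies
        total += min(feasible) if feasible else max(cost for _, cost in options)
    return total

print(f"aggregate annualized cost: ${compliance_cost():,.0f}")
```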

  4. A secure RFID authentication protocol adopting error correction code.

    PubMed

    Chen, Chien-Ming; Chen, Shuai-Min; Zheng, Xinying; Chen, Pei-Yu; Sun, Hung-Min

    2014-01-01

    RFID technology has become popular in many applications; however, most RFID products lack security-related functionality due to the hardware limitations of low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol for RFID that adopts an error correction code. In addition, we propose an advanced version of the protocol that provides key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol shows that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks, and replay attacks. We also compare our protocol with previous works in terms of performance.

  5. A Secure RFID Authentication Protocol Adopting Error Correction Code

    PubMed Central

    Zheng, Xinying; Chen, Pei-Yu

    2014-01-01

    RFID technology has become popular in many applications; however, most RFID products lack security-related functionality due to the hardware limitations of low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol for RFID that adopts an error correction code. In addition, we propose an advanced version of the protocol that provides key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol shows that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks, and replay attacks. We also compare our protocol with previous works in terms of performance. PMID:24959619

  6. Safety cost management in construction companies: A proposal classification.

    PubMed

    López-Alonso, M; Ibarrondo-Dávila, M P; Rubio, M C

    2016-06-16

    Estimating health and safety costs in the construction industry presents various difficulties, including the complexity of cost allocation, the inadequacy of data available to managers, and the absence of an accounting model designed specifically for safety cost management. Very often, the costs arising from accidents in the workplace are not fully identifiable due to the hidden costs involved. This paper reviews studies of occupational health and safety cost management and proposes a means of classifying these costs. We conducted an empirical study in which the health and safety costs of 40 construction worksites are estimated. A new classification of health and safety costs into two categories, safety costs and non-safety costs, is put forward. From this perspective, the costs of the company's health and safety policy should be included in the information provided by the accounting system, as a starting point for analysis and control.

  7. Hardware Neural Network for a Visual Inspection System

    NASA Astrophysics Data System (ADS)

    Chun, Seungwoo; Hayakawa, Yoshihiro; Nakajima, Koji

    The visual inspection of defects in products is heavily dependent on human experience and instinct, which makes it difficult to reduce production costs and to shorten the inspection time and hence the total process time. Consequently, people involved in this area desire an automatic inspection system. In this paper, we propose a hardware neural network that is expected to provide high-speed operation for the automatic inspection of products. Since neural networks can learn, this is a suitable method for self-adjustment of classification criteria. To achieve high-speed operation, we use parallel and pipelining techniques. Furthermore, we use a piecewise linear function instead of a conventional activation function in order to save hardware resources; a sketch of this substitution follows. Consequently, our proposed hardware neural network achieved 6 GCPS (giga connections per second) and 2 GCUPS (giga connection updates per second), which in our test sample proved to be sufficiently fast.
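
    A minimal sketch of the hardware-friendly idea named in the abstract: replacing a sigmoid with a piecewise linear activation so that only comparisons, shifts, and adds are needed. The breakpoints below are illustrative, not the authors' design values.

```python
# Piecewise linear stand-in for the logistic activation function.
import math

def pw_linear_sigmoid(x):
    """Piecewise linear approximation of the logistic function."""
    if x <= -4.0:
        return 0.0
    if x >= 4.0:
        return 1.0
    return 0.5 + x / 8.0   # single linear segment through (0, 0.5)

# Compare against the exact logistic function.
for x in (-5.0, -1.0, 0.0, 1.0, 5.0):
    exact = 1.0 / (1.0 + math.exp(-x))
    print(f"x={x:+.1f}  pwl={pw_linear_sigmoid(x):.3f}  exact={exact:.3f}")
```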

  8. Low cost estimation of the contribution of post-CCSD excitations to the total atomization energy using density functional theory calculations

    NASA Astrophysics Data System (ADS)

    Sánchez, H. R.; Pis Diez, R.

    2016-04-01

    Based on the Aλ diagnostic for multireference effects recently proposed [U.R. Fogueri, S. Kozuch, A. Karton, J.M. Martin, Theor. Chem. Acc. 132 (2013) 1], a simple method is presented for improving total atomization energies and reaction energies calculated at the CCSD level of theory. The method requires one CCSD calculation and two additional density functional theory calculations for the molecule. Two sets containing 139 and 51 molecules are used as training and validation sets, respectively, for total atomization energies. An appreciable decrease in the mean absolute error, from 7-10 kcal/mol for CCSD to about 2 kcal/mol for the present method, is observed. The present method provides atomization energies and reaction energies that compare favorably with relatively recent scaled CCSD methods.

  9. Information systems as a quality management tool in clinical laboratories

    NASA Astrophysics Data System (ADS)

    Schmitz, Vanessa; Rosecler Bez el Boukhari, Marta

    2007-11-01

    This article describes information systems as a quality management tool in clinical laboratories. The quality of laboratory analyses is of fundamental importance for health professionals in aiding appropriate diagnosis and treatment. Information systems allow the automation of internal quality management processes, using standard sample tests, Levey-Jennings charts and Westgard multirule analysis. This simplifies evaluation and interpretation of quality tests and reduces the possibility of human error. This study proposes the development of an information system with appropriate functions and costs for the automation of internal quality control in small and medium-sized clinical laboratories. To this end, it evaluates the functions and usability of two commercial software products designed for this purpose, identifying the positive features of each, so that these can be taken into account during the development of the proposed system.

  10. An analytic-numerical method for the construction of the reference law of operation for a class of mechanical controlled systems

    NASA Astrophysics Data System (ADS)

    Mizhidon, A. D.; Mizhidon, K. A.

    2017-04-01

    An analytic-numerical method is proposed for constructing a reference law of operation for a class of dynamic systems describing vibrations in controlled mechanical systems. By the reference law of operation of a system, we mean a law of motion of the system that satisfies all requirements for the quality and design features of the system under permanent external disturbances. As disturbances, we consider polyharmonic functions with known amplitudes and frequencies of the harmonics but unknown initial phases. To construct the reference law of motion, an auxiliary optimal control problem is solved in which the cost function depends on a weighting coefficient; the choice of the weighting coefficient ensures that the reference law meets the design requirements. Theoretical foundations of the proposed method are given.

  11. Determining optimal selling price and lot size with process reliability and partial backlogging considerations

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsu-Pang; Cheng, Mei-Chuan; Dye, Chung-Yuan; Ouyang, Liang-Yuh

    2011-01-01

    In this article, we extend the classical economic production quantity (EPQ) model by considering imperfect production processes and a quality-dependent unit production cost. The demand rate is described by any convex decreasing function of the selling price. In addition, we allow for shortages and a time-proportional backlogging rate. For any given selling price, we first prove that the optimal production schedule not only exists but is also unique. Next, we show that the total profit per unit time is a concave function of price when the production schedule is given. We then provide a simple algorithm to find the optimal selling price and production schedule for the proposed model. Finally, we use a couple of numerical examples to illustrate the algorithm, and we conclude with suggestions for possible future research.
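
    A minimal sketch of the search step this concavity result justifies: for a fixed production schedule the profit is concave in price, so a simple ternary search over the price interval converges to the optimum. The demand curve and cost figures below are illustrative assumptions, not the authors' model data.

```python
# Ternary search over price for a concave profit function.
def ternary_max(f, lo, hi, tol=1e-6):
    """Maximize a concave function f on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

def profit(p, unit_cost=4.0, a=100.0, b=2.0):
    demand = max(a - b * p, 0.0)       # convex decreasing demand
    return (p - unit_cost) * demand    # per-unit-time profit proxy

p_star = ternary_max(profit, 4.0, 50.0)
print(f"optimal price ~ {p_star:.2f}")  # ~27.00 for these toy values
```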

  12. An Authentication and Key Management Mechanism for Resource Constrained Devices in IEEE 802.11-based IoT Access Networks.

    PubMed

    Kim, Ki-Wook; Han, Youn-Hee; Min, Sung-Gi

    2017-09-21

    Many Internet of Things (IoT) services utilize an IoT access network to connect small devices with remote servers. They can share an access network with standard communication technology, such as IEEE 802.11ah. However, an authentication and key management (AKM) mechanism for resource-constrained IoT devices using IEEE 802.11ah has not yet been proposed. We therefore propose a new AKM mechanism for an IoT access network, based on IEEE 802.11 key management with the IEEE 802.1X authentication mechanism. The proposed AKM mechanism does not require any pre-configured security information between the access network domain and the IoT service domain. It considers the resource constraints of IoT devices, allowing them to delegate the burden of AKM processes to a powerful agent. The agent has sufficient power to support various authentication methods for the access point, and it performs cryptographic functions for the IoT devices. Performance analysis shows that the proposed mechanism greatly reduces the computation costs, network costs, and memory usage of a resource-constrained IoT device compared to the existing IEEE 802.11 key management with the IEEE 802.1X authentication mechanism.

  13. An Authentication and Key Management Mechanism for Resource Constrained Devices in IEEE 802.11-based IoT Access Networks

    PubMed Central

    Han, Youn-Hee; Min, Sung-Gi

    2017-01-01

    Many Internet of Things (IoT) services utilize an IoT access network to connect small devices with remote servers. They can share an access network with standard communication technology, such as IEEE 802.11ah. However, an authentication and key management (AKM) mechanism for resource-constrained IoT devices using IEEE 802.11ah has not yet been proposed. We therefore propose a new AKM mechanism for an IoT access network, based on IEEE 802.11 key management with the IEEE 802.1X authentication mechanism. The proposed AKM mechanism does not require any pre-configured security information between the access network domain and the IoT service domain. It considers the resource constraints of IoT devices, allowing them to delegate the burden of AKM processes to a powerful agent. The agent has sufficient power to support various authentication methods for the access point, and it performs cryptographic functions for the IoT devices. Performance analysis shows that the proposed mechanism greatly reduces the computation costs, network costs, and memory usage of a resource-constrained IoT device compared to the existing IEEE 802.11 key management with the IEEE 802.1X authentication mechanism. PMID:28934152

  14. Neural network-based optimal adaptive output feedback control of a helicopter UAV.

    PubMed

    Nodland, David; Zargarzadeh, Hassan; Jagannathan, Sarangapani

    2013-07-01

    Helicopter unmanned aerial vehicles (UAVs) are widely used for both military and civilian operations. Because the helicopter UAVs are underactuated nonlinear mechanical systems, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via an output feedback for trajectory tracking of a helicopter UAV, using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic and dynamic controllers and an NN observer. The online approximator-based dynamic controller learns the infinite-horizon Hamilton-Jacobi-Bellman equation in continuous time and calculates the corresponding optimal control input by minimizing a cost function, forward-in-time, without using the value and policy iterations. Optimal tracking is accomplished by using a single NN utilized for the cost function approximation. The overall closed-loop system stability is demonstrated using Lyapunov analysis. Finally, simulation results are provided to demonstrate the effectiveness of the proposed control design for trajectory tracking.

  15. Study on multimodal transport route under low carbon background

    NASA Astrophysics Data System (ADS)

    Liu, Lele; Liu, Jie

    2018-06-01

    Low-carbon environmental protection is a worldwide focus of attention, and researchers continually study both production and household carbon emissions. However, there is little literature, domestic or international, on multimodal transportation that accounts for carbon emissions. This paper first introduces the theory of multimodal transportation and analyzes multimodal transport models both with and without carbon emissions. On this basis, a multi-objective 0-1 programming model minimizing both total transportation cost and total carbon emission is proposed. Weights are applied within the ideal point method so that the multi-objective program is transformed into a single objective function; the optimal trade-off between carbon emission and transportation cost under different weights is then determined by this single objective function with variable weights, as illustrated in the sketch below. Based on the model and algorithm, an example is given and the results are analyzed.
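
    A minimal sketch of a weighted-deviation variant of the ideal point method described: the two objectives (total cost, total CO2) are reduced to one score by measuring the weighted normalized distance of each candidate route to the ideal point. The routes, objective values, and weights are illustrative assumptions.

```python
# Weighted ideal-point scalarization of a two-objective route choice.
def ideal_point_score(cost, co2, ideal, weights):
    """Weighted normalized deviation of a candidate from the ideal point."""
    wc, we = weights
    return wc * (cost - ideal[0]) / ideal[0] + we * (co2 - ideal[1]) / ideal[1]

routes = {                       # candidate route: (cost, CO2 emission)
    "rail+road": (8200.0, 410.0),
    "road only": (7600.0, 690.0),
    "water+rail": (8900.0, 300.0),
}
ideal = (min(c for c, _ in routes.values()), min(e for _, e in routes.values()))

for w in [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]:
    best = min(routes, key=lambda r: ideal_point_score(*routes[r], ideal, w))
    print(f"weights {w}: choose {best}")   # heavier CO2 weight favors water+rail
```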

  16. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its power and spectral efficiency together with its robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and a modified version of it are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
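
    A minimal sketch of a binary switching pass for constellation labeling: the labels of two constellation points are swapped whenever the swap lowers the cost, until no improving swap remains. The toy cost below simply penalizes Hamming distance between labels of nearby points; the paper's actual cost combines Euclidean-distance and mutual-information terms.

```python
# Greedy binary switching over a toy 8-PSK labeling.
import itertools, cmath

def hamming(a, b):
    return bin(a ^ b).count("1")

def labeling_cost(points, labels):
    """Hamming distance between labels, weighted by inverse point distance."""
    total = 0.0
    for i, j in itertools.combinations(range(len(points)), 2):
        total += hamming(labels[i], labels[j]) / abs(points[i] - points[j])
    return total

def binary_switching(points, labels):
    """Accept pairwise label swaps only when they lower the cost."""
    best = labeling_cost(points, labels)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            cost = labeling_cost(points, labels)
            if cost < best - 1e-12:
                best, improved = cost, True
            else:
                labels[i], labels[j] = labels[j], labels[i]  # undo the swap
    return labels, best

pts = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
print(binary_switching(pts, list(range(8))))
```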

  17. Vibration Damping Via Acoustic Treatment Attached To Vehicle Body Panels

    NASA Astrophysics Data System (ADS)

    Gambino, Carlo

    Currently, in the automotive industry, the control of noise and vibration is the subject of much research oriented towards the creation of innovative solutions that improve the comfort of the vehicle and reduce its cost and weight. This thesis fits into this framework, as it investigates the possibility of integrating the functions of sound absorption/insulation and vibration damping in a single component. At present, bituminous viscoelastic treatments bonded to the car body panels take charge of the vibration damping, while sound absorption and insulation are obtained by means of poroacoustic treatments. The solution proposed here consists of employing porous materials to perform both functions, thus allowing the partial or complete removal of the viscoelastic damping treatments from the car body. This should decrease the weight of the vehicle, reducing fuel consumption and emissions, and it might also benefit production costs.

  18. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion (FWI) of Ground Penetrating Radar (GPR) data is a promising strategy for estimating quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses time-domain FWI of 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data, and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on computing the full electromagnetic wave propagation. Moreover, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.
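
    A minimal sketch of the iterative structure just described: a least-squares misfit between observed and modeled traces drives repeated model updates. Here forward_model stands in for the full electromagnetic propagator and gradient for the adjoint-state gradient computation; both names, the step size, and the iteration count are illustrative assumptions.

```python
# Skeleton of a time-domain FWI loop with a least-squares data misfit.
import numpy as np

def misfit(observed, modeled):
    """Least-squares misfit summed over all receivers and time samples."""
    r = observed - modeled
    return 0.5 * np.sum(r * r)

def fwi_loop(model, observed, forward_model, gradient, step=1e-3, n_iter=50):
    for k in range(n_iter):
        modeled = forward_model(model)            # full EM wave propagation
        cost = misfit(observed, modeled)
        model = model - step * gradient(model, observed - modeled)
        print(f"iter {k}: cost={cost:.3e}")       # stop when the decrease stalls
    return model
```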

  19. An inverse model for a free-boundary problem with a contact line: Steady case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, Oleg; Protas, Bartosz

    2009-07-20

    This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system, and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way, which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.

  20. Design and Analysis of an Enhanced Patient-Server Mutual Authentication Protocol for Telecare Medical Information System.

    PubMed

    Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Obaidat, Mohammad S

    2015-11-01

    In order to access a remote medical server, patients generally use a smart card to log in to the server. It has been observed that most user (patient) authentication protocols suffer from the smart card stolen attack, meaning that an attacker can mount several common attacks after extracting the smart card information. Recently, Lu et al. proposed a session key agreement protocol between the patient and the remote medical server and claimed that it is secure against the relevant security attacks. However, this paper presents several security attacks on Lu et al.'s protocol, namely an identity trace attack, a new smart card issue attack, a patient impersonation attack, and a medical server impersonation attack. In order to fix the mentioned security pitfalls, including the smart card stolen attack, this paper proposes an efficient remote mutual authentication protocol using a smart card. We have simulated the proposed protocol using the widely accepted AVISPA simulation tool, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Moreover, rigorous security analysis proves that the proposed protocol provides strong protection against the relevant security attacks, including the smart card stolen attack. We compare the proposed scheme with several related schemes in terms of computation cost and communication cost as well as security functionalities, and observe that the proposed scheme is comparatively better than related existing schemes.

  1. 75 FR 7426 - Periodic Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-19

    ... change in transportation cost system sampling. The proposal involves distributing rail costs using inter...) sampling, and proposes instead to distribute rail costs using the Inter-BC highway distribution factors. \1... rail sampling, and to use the TRACS inter-BMC distribution in place of the Rail distribution key in Cost...

  2. 2 CFR 200.400 - Policy guide.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... educate and engage students in research, the dual role of students as both trainees and employees... indirect cost proposals, the cognizant agency for indirect costs should generally assure that the non... negotiation of indirect cost proposals. Where wide variations exist in the treatment of a given cost item by...

  3. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

    Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may yield an economical network configuration that is not a promising solution from a resilience point of view. Resilience of the water distribution network is one of the popular surrogate measures of a network's ability to withstand failure scenarios. To improve the resiliency of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the second objective. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics (see the sketch below). Two existing resilience indices and power efficiency are considered for the optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resiliency indices.
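
    A minimal sketch of combining two objectives with distinct metrics (capital cost in currency, resilience as a dimensionless index) into one normalized weighted objective. The normalization bounds, weight, and candidate designs are illustrative assumptions, not the paper's values.

```python
# Normalized weighted single objective for a cost/resilience trade-off.
def combined_objective(cost, resilience, bounds, w=0.5):
    c_min, c_max = bounds["cost"]
    r_min, r_max = bounds["resilience"]
    c_norm = (cost - c_min) / (c_max - c_min)        # 0 = cheapest design
    r_norm = (r_max - resilience) / (r_max - r_min)  # 0 = most resilient design
    return w * c_norm + (1.0 - w) * r_norm           # minimize this score

bounds = {"cost": (1.0e6, 5.0e6), "resilience": (0.2, 0.9)}
designs = [(2.1e6, 0.45), (3.0e6, 0.70), (4.6e6, 0.88)]
for cost, res in designs:
    print((cost, res), round(combined_objective(cost, res, bounds), 3))
```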

  4. Hybrid feedback feedforward: An efficient design of adaptive neural network control.

    PubMed

    Pan, Yongping; Liu, Yiqi; Xu, Bin; Yu, Haoyong

    2016-04-01

    This paper presents an efficient hybrid feedback feedforward (HFF) adaptive approximation-based control (AAC) strategy for a class of uncertain Euler-Lagrange systems. The control structure includes a proportional-derivative (PD) control term in the feedback loop and a radial-basis-function (RBF) neural network (NN) in the feedforward loop, which mimics the human motor learning control mechanism; a sketch of this structure follows. In the presence of discontinuous friction, a sigmoid-jump-function NN is incorporated to improve control performance. The major difference of the proposed HFF-AAC design from the traditional feedback AAC (FB-AAC) design is that only desired outputs, rather than both tracking errors and desired outputs, are applied as RBF-NN inputs. Yet, such a slight modification leads to several attractive properties of HFF-AAC, including the convenient choice of an approximation domain, a decrease in the number of RBF-NN inputs, and semiglobal practical asymptotic stability dominated by control gains. Compared with previous HFF-AAC approaches, the proposed approach possesses two distinctive features: (i) all the above attractive properties are achieved by a much simpler control scheme; (ii) the bounds of plant uncertainties are not required to be known. Consequently, the proposed approach guarantees a minimum configuration of the control structure and a minimum requirement of plant knowledge for the AAC design, which leads to a sharp decrease in implementation cost in terms of hardware selection, algorithm realization, and system debugging. Simulation results have demonstrated that the proposed HFF-AAC can perform as well as or even better than the traditional FB-AAC under a much simpler control synthesis and much lower computational cost. Copyright © 2015 Elsevier Ltd. All rights reserved.
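
    A minimal single-joint sketch of the hybrid structure described: a PD term acts on the tracking error in the feedback loop, while an RBF network fed only with the desired trajectory supplies the feedforward compensation. The gains, RBF widths, and the simple adaptation law are illustrative assumptions, not the paper's exact update law.

```python
# Illustrative PD feedback + RBF-NN feedforward controller (one joint).
import numpy as np

class HFFController:
    def __init__(self, centers, width=1.0, kp=20.0, kd=5.0, gamma=0.1):
        self.c, self.s = np.asarray(centers, float), width
        self.w = np.zeros(len(centers))            # RBF output weights
        self.kp, self.kd, self.gamma = kp, kd, gamma

    def _phi(self, qd):
        """Gaussian RBF features evaluated at the desired output only."""
        return np.exp(-((qd - self.c) ** 2) / (2 * self.s ** 2))

    def control(self, q, dq, qd, dqd, dt):
        e, de = qd - q, dqd - dq                   # tracking errors
        phi = self._phi(qd)
        u_ff = self.w @ phi                        # feedforward (RBF-NN) term
        u_fb = self.kp * e + self.kd * de          # feedback (PD) term
        self.w += self.gamma * phi * (e + de) * dt # simple adaptation law
        return u_fb + u_ff

ctrl = HFFController(centers=np.linspace(-1.0, 1.0, 9))
print(ctrl.control(q=0.0, dq=0.0, qd=0.3, dqd=0.1, dt=0.001))
```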

  5. Applying cost analyses to drive policy that protects children. Mercury as a case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leonardo Trasande; Clyde Schechter; Karla A. Haynes

    2006-09-15

    Exposure in prenatal life to methylmercury (MeHg) became the topic of intense debate in the United States after the Environmental Protection Agency (EPA) announced a proposal in 2004 to reverse strict controls on emissions of mercury from coal-fired power plants that had been in effect for the preceding 15 years. This proposal failed to incorporate any consideration of the health impacts on children that would result from increased mercury emissions. We assessed the impact of industrial mercury emissions on children's health and found that between 316,588 and 637,233 babies are born each year with mercury-related losses of cognitive function ranging from 0.2 to 5.13 IQ points. We calculated that decreased economic productivity resulting from diminished intelligence over a lifetime results in an aggregate economic cost in each annual birth cohort of $8.7 billion annually, of which $1.3 billion is attributable to mercury emitted from American coal-fired power plants. Downward shifts in intelligence quotient (IQ) are also associated with 1566 excess cases of mental retardation (MR) annually, accounting for 3.2% of MR cases in the United States. If the lifetime excess cost of a case of MR is $1,248,648 in 2000 dollars, then the cost of these excess cases is $2.0 billion annually. Preliminary data suggest that more stringent mercury policy options would prevent thousands of cases of MR and save billions of dollars over the next 25 years.
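
    As a quick worked check of the aggregate figure quoted: 1566 excess MR cases at a lifetime cost of $1,248,648 per case (2000 dollars) should indeed total about $2.0 billion.

```python
# Verify the quoted aggregate cost of excess MR cases.
cases = 1566
cost_per_case = 1_248_648            # 2000 dollars, lifetime excess cost
total = cases * cost_per_case
print(f"${total / 1e9:.2f} billion") # ~$1.96 billion, i.e., ~$2.0B annually
```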

  6. The costs of introducing new technologies into space systems

    NASA Technical Reports Server (NTRS)

    Dodson, E. N.; Partma, H.; Ruhland, W.

    1992-01-01

    A review is conducted of cost-research studies intended to provide guidelines for estimating the cost of integrating new technologies into existing satellite systems. Quantitative methods are described for determining the technological state of the art so that proposed programs can be evaluated accurately in terms of their contribution to technological development. The R&D costs associated with the proposed programs are then assessed, with attention given to the technological advances. Any reductions in the costs of production, operations, and support afforded by the advanced technologies are also incorporated quantitatively. The proposed model is employed in a satellite sizing and cost study in which a tradeoff between increased R&D costs and reduced production costs is examined. The technology/cost model provides a consistent yardstick for assessing the true relative economic impact of introducing novel techniques and technologies.

  7. 75 FR 34117 - Proposed CERCLA Section 122(h) Cost Recovery Settlement for the H.M. Quackenbush, Inc. Superfund...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-16

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9162-9] Proposed CERCLA Section 122(h) Cost Recovery Settlement for the H.M. Quackenbush, Inc. Superfund Site, Herkimer, Herkimer County, NY AGENCY: Environmental...''), Region II, of a proposed cost recovery settlement agreement pursuant to Section 122(h) of CERCLA, 42 U.S...

  8. LEGO Mindstorms NXT for elderly and visually impaired people in need: A platform.

    PubMed

    Al-Halhouli, Ala'aldeen; Qitouqa, Hala; Malkosh, Nancy; Shubbak, Alaa; Al-Gharabli, Samer; Hamad, Eyad

    2016-07-27

    This paper presents the employment of LEGO Mindstorms NXT robotics as the core component of a low-cost multidisciplinary platform for assisting elderly and visually impaired people. The LEGO Mindstorms system offers a plug-and-play programmable robotics toolkit, incorporating construction guides, microcontrollers, and sensors, all connected via a comprehensive programming language. It facilitates, without special training and at low cost, the use of such a device for interpersonal communication and for handling multiple tasks required by elderly and visually impaired people in need. The research project provides a model for larger-scale implementation, tackling the issue of creating additional functions to assist people in need. The new functions were built and programmed using MATLAB through a user-friendly Graphical User Interface (GUI). The power consumption problem and the integration of a WiFi connection have been resolved, and incorporating a GPS application on smartphones has enhanced the guiding and tracking functions. We believe the system can be developed and expanded to encompass a range of applications beyond the initial design schematics, easing the conduct of a limited number of pre-described protocols. However, the beneficiaries of the proposed research would be limited to elderly people who require assistance within their household, with the assistive robot facilitating a low-cost solution for a highly demanding health circumstance.

  9. Applying Cost-Sensitive Extreme Learning Machine and Dissimilarity Integration to Gene Expression Data Classification.

    PubMed

    Liu, Yanqiu; Lu, Huijuan; Yan, Ke; Xia, Haixia; An, Chunlin

    2016-01-01

    Embedding cost-sensitive factors into classifiers increases classification stability and reduces classification costs for large-scale, redundant, and imbalanced datasets, such as gene expression data. In this study, we extend our previous work, Dissimilar ELM (D-ELM), by introducing misclassification costs into the classifier; we name the proposed algorithm cost-sensitive D-ELM (CS-D-ELM). Furthermore, we embed a rejection cost into CS-D-ELM to increase the classification stability of the proposed algorithm. Experimental results show that the rejection-cost-embedded CS-D-ELM algorithm effectively reduces the average and overall cost of the classification process, while classification accuracy remains competitive. The proposed method can be extended to classification problems on other redundant and imbalanced data.
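
    A minimal sketch of cost-sensitive decisions with a rejection option, in the spirit of the scheme described (not the CS-D-ELM algorithm itself): predictions minimize expected cost under a cost matrix, and a sample is rejected when even the best class is costlier than the rejection cost. The cost matrix and rejection cost are illustrative assumptions.

```python
# Cost-sensitive prediction with a rejection option.
import numpy as np

def cost_sensitive_decide(probs, cost_matrix, reject_cost):
    """probs: (n, k) class posteriors; cost_matrix[i, j] is the cost of
    predicting class j when the true class is i. Returns -1 for 'reject'."""
    exp_cost = probs @ cost_matrix          # expected cost of each decision
    best = exp_cost.argmin(axis=1)
    best_cost = exp_cost.min(axis=1)
    best[best_cost > reject_cost] = -1      # rejecting is cheaper: abstain
    return best

probs = np.array([[0.90, 0.10],
                  [0.55, 0.45]])
C = np.array([[0.0, 5.0],                   # misclassifying class 0 costs 5
              [1.0, 0.0]])                  # misclassifying class 1 costs 1
print(cost_sensitive_decide(probs, C, reject_cost=0.3))  # [0, -1]
```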

  10. Monitoring Moving Queries inside a Safe Region

    PubMed Central

    Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan

    2014-01-01

    With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves, so monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region, the set of objects of interest to the query does not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region, the mobile device has to re-evaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known to be a difficult problem. To solve this problem, we propose a novel method that monitors the position of the query over time using a linear function based on the direction of the query, obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location at all times. This method reduces the costs associated with communication in a client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
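
    A minimal sketch of the linear-extrapolation idea: from two periodic position fixes the client estimates its velocity and predicts when it will cross the boundary of a circular safe region, so it needs to contact the server only around that time. The circular region and all parameters are illustrative assumptions.

```python
# Predict when a linearly moving query exits a circular safe region.
import math

def time_to_exit(pos, vel, center, radius):
    """Smallest t >= 0 with |pos + t*vel - center| = radius, or None."""
    px, py = pos[0] - center[0], pos[1] - center[1]
    vx, vy = vel
    a = vx * vx + vy * vy
    if a == 0.0:
        return None                       # stationary: never exits
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b + math.sqrt(disc)) / (2.0 * a)  # inside the circle implies c < 0
    return t if t >= 0.0 else None

# Two position fixes 1 s apart give the velocity estimate.
p0, p1 = (0.0, 0.0), (3.0, 4.0)
vel = (p1[0] - p0[0], p1[1] - p0[1])
print(time_to_exit(p1, vel, center=(0.0, 0.0), radius=50.0))  # 9.0 s
```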

  11. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations as it requires definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function to truly represent the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while the accuracy is decreased with an error of 24%.

  12. Maximal Neighbor Similarity Reveals Real Communities in Networks

    PubMed Central

    Žalik, Krista Rizman

    2015-01-01

    An important problem in the analysis of network data is the detection of groups of densely interconnected nodes, also called modules or communities. Community structure reveals the functions and organization of networks. Currently used algorithms for community detection in large-scale real-world networks are computationally expensive, require a priori information such as the number or sizes of communities, or are unable to produce the same partition over multiple runs. In this paper we investigate a simple and fast algorithm that uses the network structure alone and requires neither optimization of a pre-defined objective function nor information about the number of communities. We propose a bottom-up community detection algorithm in which, starting from communities consisting of adjacent pairs of nodes and their maximally similar neighbors, we find real communities. We show that the overall advantage of the proposed algorithm compared to other community detection algorithms is its simple nature, low computational cost, and very high accuracy in detecting communities of different sizes, also in networks with blurred modularity structure consisting of poorly separated communities. All communities identified by the proposed method for the Facebook network and the E. coli transcriptional regulatory network have strong structural and functional coherence. PMID:26680448
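
    The following is one illustrative reading of the bottom-up idea, not the authors' exact algorithm: each node is seeded with its most similar neighbor (by Jaccard similarity of closed neighborhoods), and overlapping seeds are merged until a fixed point. On a toy graph of two cliques joined by a bridge, the two cliques are recovered.

```python
# Seed communities from maximally similar neighbors, then merge overlaps.
def make_graph():
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),  # clique A
             (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7),  # clique B
             (3, 4)]                                          # bridge edge
    G = {}
    for u, v in edges:
        G.setdefault(u, set()).add(v)
        G.setdefault(v, set()).add(u)
    return G

def neighbor_similarity(G, u, v):
    """Jaccard similarity of the closed neighborhoods of u and v."""
    nu, nv = G[u] | {u}, G[v] | {v}
    return len(nu & nv) / len(nu | nv)

def seed_communities(G):
    seeds = [{u, max(G[u], key=lambda v: neighbor_similarity(G, u, v))}
             for u in G]
    merged = True
    while merged:                      # merge seeds sharing any member
        merged = False
        for i in range(len(seeds)):
            for j in range(i + 1, len(seeds)):
                if seeds[i] & seeds[j]:
                    seeds[i] |= seeds.pop(j)
                    merged = True
                    break
            if merged:
                break
    return seeds

print(seed_communities(make_graph()))  # [{0, 1, 2, 3}, {4, 5, 6, 7}]
```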

  13. Explicit optimization of plan quality measures in intensity-modulated radiation therapy treatment planning.

    PubMed

    Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn

    2017-06-01

    The purpose of this work is to formulate convex planning objectives for treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that relate more closely to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
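
    A minimal sketch of the mean-tail-dose quantity underlying the proposed objectives: the mean of the hottest (here, upper-tail) fraction of voxel doses, a convex counterpart of the dose-at-volume statistic. The dose values are illustrative.

```python
# Upper mean-tail-dose of a voxel dose distribution.
import numpy as np

def upper_mean_tail_dose(doses, v):
    """Mean dose of the hottest fraction v of voxels."""
    k = max(1, int(round(v * len(doses))))
    return float(np.sort(doses)[-k:].mean())

doses = np.array([10., 20., 30., 40., 50., 60., 70., 80., 90., 100.])
# At v = 0.2 this averages the two hottest voxels: (90 + 100) / 2 = 95.
print(upper_mean_tail_dose(doses, 0.2))
```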

  14. 40 CFR 35.937-6 - Cost and price considerations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Cost and price considerations. 35.937-6...) shall apply. (1) The candidate(s) selected for negotiation shall submit to the grantee for review...) Cost review. (1) The grantee shall review proposed subagreement costs. (2) As a minimum, proposed...

  15. 40 CFR 35.937-6 - Cost and price considerations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Cost and price considerations. 35.937-6...) shall apply. (1) The candidate(s) selected for negotiation shall submit to the grantee for review...) Cost review. (1) The grantee shall review proposed subagreement costs. (2) As a minimum, proposed...

  16. Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application

    PubMed Central

    Zhang, Ping; Li, Wenjun; Sun, Hua

    2016-01-01

    Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice. However, most existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, with a different aggregation strategy for each range. For raw data in the dominant range, SEDAR encodes them into well-defined vectors to provide value preservation and order preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: a Random-based SEDAR (REDAR) and a Compression-based SEDAR (CEDAR). Both can significantly reduce communication cost, at the price of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenes of real data, show that all of them have an excellent performance on cost and accuracy. PMID:27551747

  17. Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application.

    PubMed

    Zhang, Ping; Li, Wenjun; Sun, Hua

    2016-01-01

    Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice. However, most existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, with a different aggregation strategy for each range. For raw data in the dominant range, SEDAR encodes them into well-defined vectors to provide value preservation and order preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: a Random-based SEDAR (REDAR) and a Compression-based SEDAR (CEDAR). Both can significantly reduce communication cost, at the price of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenes of real data, show that all of them have an excellent performance on cost and accuracy.

  18. Scalable Functionalized Graphene Nano-platelets as Tunable Cathodes for High-performance Lithium Rechargeable Batteries

    PubMed Central

    Kim, Haegyeom; Lim, Hee-Dae; Kim, Sung-Wook; Hong, Jihyun; Seo, Dong-Hwa; Kim, Dae-chul; Jeon, Seokwoo; Park, Sungjin; Kang, Kisuk

    2013-01-01

    High-performance and cost-effective rechargeable batteries are key to the success of electric vehicles and large-scale energy storage systems. Extensive research has focused on the development of (i) new high-energy electrodes that can store more lithium or (ii) high-power nano-structured electrodes hybridized with carbonaceous materials. However, the current status of lithium batteries based on redox reactions of heavy transition metals still remains far below the demands required for the proposed applications. Herein, we present a novel approach using tunable functional groups on graphene nano-platelets as redox centers. The electrode can deliver high capacity of ~250 mAh g−1, power of ~20 kW kg−1 in an acceptable cathode voltage range, and provide excellent cyclability up to thousands of repeated charge/discharge cycles. The simple, mass-scalable synthetic route for the functionalized graphene nano-platelets proposed in this work suggests that the graphene cathode can be a promising new class of electrode. PMID:23514953

  19. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided, as are several numerical examples that illustrate the effectiveness of the approach.

  20. Distributed intrusion monitoring system with fiber link backup and on-line fault diagnosis functions

    NASA Astrophysics Data System (ADS)

    Xu, Jiwei; Wu, Huijuan; Xiao, Shunkun

    2014-12-01

    A novel multi-channel distributed optical fiber intrusion monitoring system with smart fiber link backup and on-line fault diagnosis functions was proposed. A 1×N optical switch was intelligently controlled by a peripheral interface controller (PIC) to expand the fiber link from one channel to several, lowering the cost of long and ultra-long distance intrusion monitoring and strengthening the intelligent link backup function. At the same time, a sliding-window auto-correlation method was presented to identify and locate broken or fault points of the cable. The experimental results showed that the proposed multi-channel system performed well, especially whenever a broken cable was detected: it could locate the broken or fault point accurately by itself and switch to its backup sensing link immediately, ensuring that the security system operated stably without a minute of idling. It was successfully applied in a field test for security monitoring of the 220-km national borderline in China.

  1. Minimization of annotation work: diagnosis of mammographic masses via active learning

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-06-01

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of labeling costs on the premise of guaranteed performance. Our proposed method differs from existing active learning methods designed for the general problem in that it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate this as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the Digital Database for Screening Mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression tasks fell to 33.8 and 19.8 percent of their original costs, respectively. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to other state-of-the-art active learning algorithms. By taking the particularities of mammographic images into account, the proposed active learning method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.

  2. Minimization of annotation work: diagnosis of mammographic masses via active learning.

    PubMed

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-05-22

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of labeling costs on the premise of guaranteed performance. Our proposed method differs from existing active learning methods designed for the general problem in that it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate this as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the Digital Database for Screening Mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression tasks fell to 33.8 and 19.8 percent of their original costs, respectively. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to other state-of-the-art active learning algorithms. By taking the particularities of mammographic images into account, the proposed active learning method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.

  3. Principles of operating room organization.

    PubMed

    Watkins, W D

    1997-01-01

    The changing health care climate has triggered important changes in the management of high-cost components of acute care facilities. By integrating and better managing various elements of the surgical process, health care institutions are able to rationally trim costs while maintaining high-quality services. The leadership that physicians can provide is crucial to the success of this undertaking (1). The use of primary data related to patient throughput and related resources should be strongly emphasized, for only when such data are converted to INFORMATION of functional value can participating healthcare personnel be reasonably expected to anticipate and respond to varying clinical demands with ever-limited resources. Despite the claims of specific commercial vendors, no single product will likely be sufficient to significantly change the perioperative process to the degree or for the duration demanded by healthcare reform. The most effective approach to achieving safety, cost-effectiveness, and a predictable process in the realm of surgical services will occur by appropriate application of the "best of breed" contributions of: (a) medical/patient safety practice/oversight; (b) information technology; (c) contemporary management; and (d) innovative and functional cost-accounting methodology. A "modified activity-based cost accounting method" can serve as the basis for acquiring true direct-cost information related to the perioperative process. The proposed overall management strategy emphasizes process and feedback, rather than a specific product, and although imposing initial demands and change on the traditional hospital setting, can advance the strongest competitive position in perioperative services. This comprehensive approach comprises a functional basis for important benchmarking activities among multiple surgical services. An active, comparative process of this type is of paramount importance in emphasizing patient care and safety as the highest priority while changing the process and cost of perioperative care. Additionally, this approach objectively defines the surgical process in terms by which the impact of new treatments, drugs, devices and process changes can be assessed rationally.

  4. A modified multi-objective particle swarm optimization approach and its application to the design of a deepwater composite riser

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Chen, J.

    2017-09-01

    A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Different from traditional multi-objective particle swarm optimization methods, Kriging meta-models and the trapezoid index are introduced and integrated into the traditional algorithm. Kriging meta-models are built to approximate expensive or black-box functions. By applying Kriging meta-models, the number of function evaluations is decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the total area of the trapezoids formed by the Pareto-optimal solutions and one objective axis. It can serve as a measure of whether the Pareto-optimal solutions converge to the Pareto front. Illustrative examples indicate that, to obtain Pareto-optimal solutions, the proposed method needs fewer function evaluations than the traditional multi-objective particle swarm optimization method and the non-dominated sorting genetic algorithm II (NSGA-II) method, and both the accuracy and the computational efficiency are improved. The proposed method is also applied to the design of a deepwater composite riser, in which the structural performances are calculated by numerical analysis. The design aim was to enhance the tensile strength and minimize the cost. Under the buckling constraint, the optimal trade-off between tensile strength and material volume is obtained. The results demonstrate that the proposed method can effectively deal with multi-objective optimization problems involving black-box functions.

  5. An automatic method to calculate heart rate from zebrafish larval cardiac videos.

    PubMed

    Kang, Chia-Pin; Tu, Hung-Chi; Fu, Tzu-Fun; Wu, Jhe-Ming; Chu, Po-Hsun; Chang, Darby Tien-Hao

    2018-05-09

    Zebrafish is a widely used model organism for studying heart development and cardiac-related pathogenesis. With its ability to survive without a functional circulation at larval stages, strong genetic similarity to mammals, prolific reproduction, and optically transparent embryos, zebrafish is powerful in modeling mammalian cardiac physiology and pathology as well as in large-scale high-throughput screening. However, an economical and convenient tool for rapid evaluation of fish cardiac function is still needed. There have been several image analysis methods to assess cardiac function in zebrafish embryos/larvae, but they can still be improved to reduce manual intervention in the entire process. This work developed a fully automatic method to calculate heart rate, an important parameter in analyzing cardiac function, from videos. It contains several filters to identify the heart region, reduce video noise, and calculate heart rates. The proposed method was evaluated on 32 zebrafish larval cardiac videos recorded at three days post-fertilization. The heart rate measured by the proposed method was comparable to that determined by manual counting. The experimental results show that the proposed method does not lose accuracy while largely reducing the labor cost and uncertainty of manual counting. With the proposed method, researchers do not have to manually select a region of interest before analyzing videos. Moreover, the filters designed to reduce video noise can alleviate background fluctuations during the video recording stage (e.g. shifting), which makes it easier to produce usable videos and therefore reduces manual effort during recording.

  6. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage at no additional cost, compared with the numerical approach that applies a fast Fourier transform to the kernel function. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.

  7. Third-party punishment as a costly signal of high continuation probabilities in repeated games.

    PubMed

    Jordan, Jillian J; Rand, David G

    2017-05-21

    Why do individuals pay costs to punish selfish behavior, even as third-party observers? A large body of research suggests that reputation plays an important role in motivating such third-party punishment (TPP). Here we focus on a recently proposed reputation-based account (Jordan et al., 2016) that invokes costly signaling. This account proposed that "trustworthy type" individuals (who are incentivized to cooperate with others) typically experience lower costs of TPP, and thus that TPP can function as a costly signal of trustworthiness. Specifically, it was argued that some but not all individuals face incentives to cooperate, making them high-quality and trustworthy interaction partners; and, because the same mechanisms that incentivize cooperation also create benefits for using TPP to deter selfish behavior, these individuals are likely to experience reduced costs of punishing selfishness. Here, we extend this conceptual framework by providing a concrete, "from-the-ground-up" model demonstrating how this process could work in the context of repeated interactions incentivizing both cooperation and punishment. We show how individual differences in the probability of future interaction can create types that vary in whether they find cooperation payoff-maximizing (and thus make high-quality partners), as well as in their net costs of TPP - because a higher continuation probability increases the likelihood of receiving rewards from the victim of the punished transgression (thus offsetting the cost of punishing). We also provide a simple model of dispersal that demonstrates how types that vary in their continuation probabilities can stably coexist, because the payoff from remaining in one's local environment (i.e. not dispersing) decreases with the number of others who stay. Together, this model demonstrates, from the ground up, how TPP can serve as a costly signal of trustworthiness arising from exposure to repeated interactions.

  8. Environmental limits to growth: physiological niche boundaries of corals along turbidity-light gradients.

    PubMed

    Anthony, Kenneth R N; Connolly, Sean R

    2004-11-01

    The physiological responses of organisms to resources and environmental conditions are important determinants of niche boundaries. In previous work, functional relationships between organism energetics and environment have been limited to energy intakes. However, energetic costs of maintenance may also depend on the supply of resources. In many mixotrophic organisms, two such resource types are light and particle concentration (turbidity). Using two coral species with contrasting abundances along light and turbidity gradients (Acropora valida and Turbinaria mesenterina), we incorporate the dual resource-stressor roles of these variables by calibrating functional responses of energy costs (respiration and loss of organic carbon) as well as energy intake (photosynthesis and particle feeding). This allows us to characterize physiological niche boundaries along light and turbidity gradients, identify species-specific differences in these boundaries, and assess the sensitivity of these differences to interspecific differences in particular functional response parameters. The turbidity-light niche of T. mesenterina was substantially larger than that of A. valida, consistent with its broader ecological distribution. As expected, the responses of photosynthesis, heterotrophic capacity, respiration, and organic carbon loss to light and turbidity varied between species. Niche boundaries were highly sensitive to the functional responses of energy costs to light and turbidity. Moreover, the study species' niche differences were almost entirely attributable to species-specific differences in one functional response: that of respiration to turbidity. These results demonstrate that functional responses of energy-loss processes are important determinants of species-specific physiological limits to growth, and thereby of niche differences in reef corals. Given that many resources can stress organisms when supply rates are high, we propose that the functional responses of energy losses will prove to be important determinants of niche differences in other systems as well.

  9. Code-modulated interferometric imaging system using phased arrays

    NASA Astrophysics Data System (ADS)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power-combine incoming signals prior to digitization, orthogonal code modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
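
    A small numerical sketch of the code-modulation idea, assuming an ideal two-element array and Sylvester-construction Walsh codes (amplitudes and code length are invented for illustration):

      # Orthogonal Walsh codes: a squared, power-combined signal can be
      # demultiplexed into the cross-correlation (visibility) of two inputs.
      import numpy as np
      from scipy.linalg import hadamard

      H = hadamard(8)          # rows are mutually orthogonal +/-1 codes
      c1, c2 = H[1], H[2]
      c3 = c1 * c2             # product of two Walsh codes is a third (H[3])

      s1, s2 = 0.8, 0.5        # per-antenna signal amplitudes (one snapshot)
      y = s1 * c1 + s2 * c2    # code-modulated, power-combined signal
      v = (y ** 2 * c3).mean() / 2   # square, demodulate with the product code
      print(v, s1 * s2)        # recovers the correlation s1*s2

    Because c1*c1 = c2*c2 = 1 and c3 is zero-mean, squaring leaves the cross term 2*s1*s2*c3 as the only component correlated with c3, which is why demodulating the squared output isolates the visibility.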

  10. Assembling and Using an LED-Based Detector to Monitor Absorbance Changes during Acid-Base Titrations

    ERIC Educational Resources Information Center

    Santos, Willy G.; Cavalheiro, E´der T. G.

    2015-01-01

    A simple photometric assembly based on an LED as a light source and a photodiode as a detector is proposed in order to follow the absorbance changes as a function of the titrant volume added during the course of acid-base titrations in the presence of a suitable visual indicator. The simplicity and low cost of the electronic device allow the…

  11. Energy Center Structure Optimization by using Smart Technologies in Process Control System

    NASA Astrophysics Data System (ADS)

    Shilkina, Svetlana V.

    2018-03-01

    The article deals with the practical application of fuzzy logic methods in process control systems. The control object - an agro-industrial greenhouse complex that includes its own energy center - is considered. The paper analyzes the object's power supply options, taking into account connection to external power grids and/or installation of its own power generating equipment with various layouts. The main problem of a greenhouse facility's basic process is extremely uneven power consumption, which forces the purchase of redundant generating equipment that idles most of the time and significantly reduces project profitability. Optimizing the energy center structure depends largely on how the object's process control system is built. To cut the investor's costs, it was proposed to optimize power consumption by building an energy-saving production control system based on a fuzzy logic controller. The developed algorithm for the automated process control system ensured more even electric and thermal energy consumption and allowed the object's energy center to be built with fewer units due to their more even utilization. As a result, it is shown how the practical use of a fuzzy control system for microclimate parameters leads to optimization of the agro-industrial complex's energy facility structure, which contributes to a significant reduction in construction and operation costs.
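
    A minimal Mamdani-style sketch of the fuzzy control idea with triangular membership functions; the rule base, setpoints, and output values here are invented for illustration and are not the deployed greenhouse controller:

      # Map greenhouse temperature error to a heater-power correction
      # with three fuzzy rules and centroid-of-singletons defuzzification.
      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def heater_correction(error_deg):
          # Fuzzify the error (setpoint minus measurement) into three sets.
          cold = tri(error_deg, 0.0, 5.0, 10.0)
          ok = tri(error_deg, -5.0, 0.0, 5.0)
          hot = tri(error_deg, -10.0, -5.0, 0.0)
          # Rules: cold -> +50% power, ok -> 0%, hot -> -50% power.
          num = cold * 50.0 + ok * 0.0 + hot * (-50.0)
          den = cold + ok + hot
          return num / den if den > 0 else 0.0

      print(heater_correction(3.0))   # +3 C too cold -> about +30% power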

  12. An effective and robust method for tracking multiple fish in video image based on fish head detection.

    PubMed

    Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu

    2016-06-23

    Fish tracking is an important step in video-based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequences is a highly challenging problem. Current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. In order to better overcome these problems, we propose a multiple-fish tracking method based on fish head detection. The shape and grayscale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, a global optimization method can be applied to associate targets between consecutive frames. Results show that our method can accurately detect the position and direction information of fish heads, and has good tracking performance for dozens of fish. The proposed method can successfully obtain the motion trajectories of dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior.
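
    A sketch of the association step under stated assumptions: a swimming cost that mixes head-position distance with heading change, minimized globally with the Hungarian algorithm (one standard global optimizer; the paper's exact cost weighting is not reproduced here):

      # Associate detected fish heads between consecutive frames by
      # minimizing a combined position/direction cost matrix.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def swim_cost(prev, curr, w_pos=1.0, w_dir=10.0):
          """prev, curr: arrays of (x, y, heading_rad) per detected head."""
          dx = prev[:, None, 0] - curr[None, :, 0]
          dy = prev[:, None, 1] - curr[None, :, 1]
          pos = np.hypot(dx, dy)
          # Wrap heading differences into [-pi, pi] before taking magnitudes.
          ddir = np.abs(np.angle(np.exp(1j * (prev[:, None, 2] - curr[None, :, 2]))))
          return w_pos * pos + w_dir * ddir

      prev = np.array([[10.0, 10.0, 0.0], [50.0, 40.0, 1.5]])
      curr = np.array([[52.0, 41.0, 1.4], [12.0, 11.0, 0.1]])
      rows, cols = linear_sum_assignment(swim_cost(prev, curr))
      print(list(zip(rows, cols)))   # [(0, 1), (1, 0)]: identities preserved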

  13. Task-switching deficits and repetitive behaviour in genetic neurodevelopmental disorders: data from children with Prader-Willi syndrome chromosome 15 q11-q13 deletion and boys with Fragile X syndrome.

    PubMed

    Woodcock, Kate A; Oliver, Chris; Humphreys, Glyn W

    2009-03-01

    Prader-Willi syndrome (PWS) and Fragile X syndrome (FraX) are associated with distinctive cognitive and behavioural profiles. We examined whether repetitive behaviours in the two syndromes were associated with deficits in specific executive functions. PWS, FraX, and typically developing (TD) children were assessed for executive functioning using the Test of Everyday Attention for Children and an adapted Simon spatial interference task. Relative to the TD children, children with PWS and FraX showed greater costs of attention switching on the Simon task, but after controlling for intellectual ability, these switching deficits were only significant in the PWS group. Children with PWS and FraX also showed significantly increased preference for routine and differing profiles of other specific types of repetitive behaviours. A measure of switch cost from the Simon task was positively correlated to scores on preference for routine questionnaire items and was strongly associated with scores on other items relating to a preference for predictability. It is proposed that a deficit in attention switching is a component of the endophenotypes of both PWS and FraX and is associated with specific behaviours. This proposal is discussed in the context of neurocognitive pathways between genes and behaviour.

  14. 48 CFR 3452.216-70 - Additional cost principles.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... scientific, cost, and other data needed to support the bids, proposals, and applications. Bid and proposal... practice is to treat these costs by some other method, they may be accepted if they are found to be...

  15. 48 CFR 3452.216-70 - Additional cost principles.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... scientific, cost, and other data needed to support the bids, proposals, and applications. Bid and proposal... practice is to treat these costs by some other method, they may be accepted if they are found to be...

  16. Towards the automatization of the Foucault knife-edge quantitative test

    NASA Astrophysics Data System (ADS)

    Rodríguez, G.; Villa, J.; Martínez, G.; de la Rosa, I.; Ivanov, R.

    2017-08-01

    Given the increasing need for simple, economical and reliable methods and instruments for performing quality tests of optical surfaces such as mirrors and lenses, in recent years we resumed the study of the long-forgotten Foucault knife-edge test from the point of view of physical optics, ultimately achieving a closed mathematical expression that directly relates the knife-edge position along the paraxial displacement axis with the observable irradiance pattern. This later allowed us to propose a quantitative methodology for estimating the wavefront error of an aspherical mirror with precision akin to interferometry. In this work, we present a further improved digital image processing algorithm in which the sigmoidal cost function for calculating the transient slope point of each associated intensity-illumination profile is replaced by a simplified version of it, making the whole process of estimating the wavefront gradient remarkably more stable and efficient. At the same time, the Fourier-based algorithm employed for gradient integration has been replaced by a regularized quadratic cost function that allows a considerably easier introduction of the region of interest (ROI); solved by means of a linear conjugate gradient method, it largely increases the overall accuracy and efficiency of the algorithm. This revised approach can be easily implemented and handled by most single-board microcontrollers on the market, enabling a fully integrated automated test apparatus and opening a realistic path toward a stand-alone optical mirror analyzer prototype.
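
    A sketch of the slope-point step, assuming the simplified cost function behaves like a logistic fit whose midpoint marks the transition (a generic stand-in, not the paper's exact expression):

      # Locate the transient slope point of a knife-edge intensity
      # profile by fitting a logistic (sigmoid) model and taking x0.
      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(x, lo, hi, x0, k):
          return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

      x = np.linspace(0, 10, 200)
      rng = np.random.default_rng(0)
      profile = sigmoid(x, 0.05, 0.95, 4.2, 3.0) + rng.normal(0, 0.01, x.size)

      popt, _ = curve_fit(sigmoid, x, profile, p0=[0.0, 1.0, 5.0, 1.0])
      print(f"transition point x0 = {popt[2]:.3f}")   # close to 4.2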

  17. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

    A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) must maximize the number of observable proposals and the overall scientific priority, and minimize the overall slew cost generated by telescope shifting, while respecting constraints that include the visibility of astronomical objects, user-defined observable times, and avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem; the optimal schedule can be found by any MCMF solution algorithm. Then, to minimize the slew cost of the generated schedule, we devise a detection method based on maximally-matchable edges to reduce the problem size, and propose a backtracking algorithm to find the perfect matching with minimum slew cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler can increase the usage of available times with high scientific priority and significantly reduce the slew cost in a very short time.
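
    A toy sketch of the MCMF formulation with an off-the-shelf solver (proposal names, priorities, and slots are invented; edge weights encode negated priority so a min-cost flow maximizes total priority among maximum schedules):

      # Schedule proposals to time slots as a min-cost max-flow problem.
      import networkx as nx

      G = nx.DiGraph()
      priorities = {"P1": 5, "P2": 3, "P3": 4}
      visible = {"P1": ["T1", "T2"], "P2": ["T2"], "P3": ["T2", "T3"]}

      for prop, pri in priorities.items():
          G.add_edge("src", prop, capacity=1, weight=0)
          for slot in visible[prop]:            # visibility constraints
              G.add_edge(prop, slot, capacity=1, weight=-pri)
      for slot in ["T1", "T2", "T3"]:
          G.add_edge(slot, "sink", capacity=1, weight=0)

      flow = nx.max_flow_min_cost(G, "src", "sink")
      for prop in priorities:
          for slot, f in flow[prop].items():
              if f:
                  print(prop, "->", slot)       # P1->T1, P2->T2, P3->T3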

  18. Costly third-party punishment in young children.

    PubMed

    McAuliffe, Katherine; Jordan, Jillian J; Warneken, Felix

    2015-01-01

    Human adults engage in costly third-party punishment of unfair behavior, but the developmental origins of this behavior are unknown. Here we investigate costly third-party punishment in 5- and 6-year-old children. Participants were asked to accept (enact) or reject (punish) proposed allocations of resources between a pair of absent, anonymous children. In addition, we manipulated whether subjects had to pay a cost to punish proposed allocations. Experiment 1 showed that 6-year-olds (but not 5-year-olds) punished unfair proposals more than fair proposals. However, children punished less when doing so was personally costly. Thus, while sensitive to cost, they were willing to sacrifice resources to intervene against unfairness. Experiment 2 showed that 6-year-olds were less sensitive to unequal allocations when they resulted from selfishness than generosity. These findings show that costly third-party punishment of unfair behavior is present in young children, suggesting that from early in development children show a sophisticated capacity to promote fair behavior.

  19. To Do or Not to Do: Dopamine, Affordability and the Economics of Opportunity.

    PubMed

    Beeler, Jeff A; Mourra, Devry

    2018-01-01

    Five years ago, we introduced the thrift hypothesis of dopamine (DA), suggesting that the primary role of DA in adaptive behavior is regulating behavioral energy expenditure to match the prevailing economic conditions of the environment. Here we elaborate that hypothesis with several new ideas. First, we introduce the concept of affordability, suggesting that costs must necessarily be evaluated with respect to the availability of resources to the organism, which computes a value not only for the potential reward opportunity, but also the value of resources expended. Placing both costs and benefits within the context of the larger economy in which the animal is functioning requires consideration of the different timescales against which to compute resource availability, or average reward rate. Appropriate windows of computation for tracking resources requires corresponding neural substrates that operate on these different timescales. In discussing temporal patterns of DA signaling, we focus on a neglected form of DA plasticity and adaptation, changes in the physical substrate of the DA system itself, such as up- and down-regulation of receptors or release probability. We argue that changes in the DA substrate itself fundamentally alter its computational function, which we propose mediates adaptations to longer temporal horizons and economic conditions. In developing our hypothesis, we focus on DA D2 receptors (D2R), arguing that D2R implements a form of "cost control" in response to the environmental economy, serving as the "brain's comptroller". We propose that the balance between the direct and indirect pathway, regulated by relative expression of D1 and D2 DA receptors, implements affordability. Finally, as we review data, we discuss limitations in current approaches that impede fully investigating the proposed hypothesis and highlight alternative, more semi-naturalistic strategies more conducive to neuroeconomic investigations on the role of DA in adaptive behavior.

  20. Remote Autonomous Sensor Networks: A Study in Redundancy and Life Cycle Costs

    NASA Astrophysics Data System (ADS)

    Ahlrichs, M.; Dotson, A.; Cenek, M.

    2017-12-01

    The remote nature of the United States-Canada border and its extreme seasonal shifts have made monitoring much of the area impossible using conventional techniques. Currently, the United States has large gaps in its ability to detect movement on an as-needed basis in remote areas. The proposed autonomous sensor network aims to meet that need by developing a product that is low cost, robust, and deployable on an as-needed basis for short-term monitoring events. This is accomplished by detecting radio-frequency and acoustic disturbances. This project aims to validate the proposed design and offer optimization strategies by building a redundancy model and performing a Life Cycle Assessment (LCA). The model will incorporate topological, meteorological, and land cover datasets to estimate sensor loss over a three-month period, ensuring that the remaining network does not have significant gaps in coverage that would prevent receiving and transmitting data. The LCA will investigate the materials used to create the sensors to estimate the total environmental energy utilized to create the network, and will suggest alternative materials and distribution methods that can lower this cost. This platform can function as a stand-alone monitoring network or provide additional spatial and temporal resolution to existing monitoring networks. This study aims to create the framework to determine whether a sensor's design and distribution are appropriate for the target environment. The incorporation of an LCA will seek to answer whether the data a proposed sensor network will collect outweigh the environmental damage that will result from its deployment. Furthermore, as the Arctic continues to thaw and economic development grows, the methodology described in this paper will function as a guidance document to ensure that future sensor networks have a minimal impact on these pristine areas.

  1. To Do or Not to Do: Dopamine, Affordability and the Economics of Opportunity

    PubMed Central

    Beeler, Jeff A.; Mourra, Devry

    2018-01-01

    Five years ago, we introduced the thrift hypothesis of dopamine (DA), suggesting that the primary role of DA in adaptive behavior is regulating behavioral energy expenditure to match the prevailing economic conditions of the environment. Here we elaborate that hypothesis with several new ideas. First, we introduce the concept of affordability, suggesting that costs must necessarily be evaluated with respect to the availability of resources to the organism, which computes a value not only for the potential reward opportunity, but also the value of resources expended. Placing both costs and benefits within the context of the larger economy in which the animal is functioning requires consideration of the different timescales against which to compute resource availability, or average reward rate. Appropriate windows of computation for tracking resources requires corresponding neural substrates that operate on these different timescales. In discussing temporal patterns of DA signaling, we focus on a neglected form of DA plasticity and adaptation, changes in the physical substrate of the DA system itself, such as up- and down-regulation of receptors or release probability. We argue that changes in the DA substrate itself fundamentally alter its computational function, which we propose mediates adaptations to longer temporal horizons and economic conditions. In developing our hypothesis, we focus on DA D2 receptors (D2R), arguing that D2R implements a form of “cost control” in response to the environmental economy, serving as the “brain’s comptroller”. We propose that the balance between the direct and indirect pathway, regulated by relative expression of D1 and D2 DA receptors, implements affordability. Finally, as we review data, we discuss limitations in current approaches that impede fully investigating the proposed hypothesis and highlight alternative, more semi-naturalistic strategies more conducive to neuroeconomic investigations on the role of DA in adaptive behavior. PMID:29487508

  2. A systematic approach for watershed ecological restoration strategy making: An application in the Taizi River Basin in northern China.

    PubMed

    Li, Mengdi; Fan, Juntao; Zhang, Yuan; Guo, Fen; Liu, Lusan; Xia, Rui; Xu, Zongxue; Wu, Fengchang

    2018-05-15

    Aiming to protect freshwater ecosystems, river ecological restoration has been brought into the research spotlight. However, it is challenging for decision makers to set appropriate objectives and select a combination of rehabilitation acts from numerous possible solutions to meet ecological, economic, and social demands. In this study, we developed a systematic approach to help make an optimal strategy for watershed restoration, which incorporated ecological security assessment and multi-objective optimization (MOO) into the planning process to enhance restoration efficiency and effectiveness. The river ecological security status was evaluated using a pressure-state-function-response (PSFR) assessment framework, and MOO was achieved by searching for Pareto optimal solutions via the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to balance tradeoffs between different objectives. Further, we clustered the searched solutions into three types in terms of their optimized objective function values in order to provide insightful information for decision makers. The proposed method was applied to an example rehabilitation project in the Taizi River Basin in northern China. The MOO result in the Taizi River Basin presented a set of Pareto optimal solutions that were classified into three types: I - high ecological improvement, high cost, and high economic benefits; II - medium ecological improvement, medium cost, and medium economic benefits; III - low ecological improvement, low cost, and low economic benefits. The proposed systematic approach can enhance the effectiveness of riverine ecological restoration projects and could provide a valuable reference for other ecological restoration planning.

  3. Distributed Nash Equilibrium Seeking for Generalized Convex Games with Shared Constraints

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Hu, Guoqiang

    2018-05-01

    In this paper, we deal with the problem of finding a Nash equilibrium for a generalized convex game. Each player is associated with a convex cost function and multiple shared constraints. Supposing that each player can exchange information with its neighbors via a connected undirected graph, the objective of this paper is to design a Nash equilibrium seeking law such that each agent minimizes its objective function in a distributed way. Consensus and singular perturbation theories are used to prove the stability of the system. A numerical example is given to show the effectiveness of the proposed algorithms.
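
    A simplified (centralized, disturbance-free) sketch of the underlying idea: each player descends the gradient of its own cost, and the joint iteration settles at the Nash equilibrium. The quadratic costs and step size here are invented for illustration; the paper's distributed, consensus-based law is more involved:

      # Gradient play on a two-player convex quadratic game.
      # J1(x) = (x1 - 1)^2 + 0.5*x1*x2,  J2(x) = (x2 + 2)^2 + 0.5*x1*x2.
      def grad_J1(x1, x2):
          return 2 * (x1 - 1) + 0.5 * x2

      def grad_J2(x1, x2):
          return 2 * (x2 + 2) + 0.5 * x1

      x1, x2, step = 0.0, 0.0, 0.1
      for _ in range(200):
          x1, x2 = x1 - step * grad_J1(x1, x2), x2 - step * grad_J2(x1, x2)
      print(round(x1, 2), round(x2, 2))   # unique Nash equilibrium (1.6, -2.4)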

  4. A miniature on-chip multi-functional ECG signal processor with 30 µW ultra-low power consumption.

    PubMed

    Liu, Xin; Zheng, Yuan Jin; Phyu, Myint Wai; Zhao, Bin; Je, Minkyu; Yuan, Xiao Jun

    2010-01-01

    In this paper, a miniature low-power electrocardiogram (ECG) signal processing application-specific integrated circuit (ASIC) chip is proposed. This chip provides multiple critical functions for ECG analysis using a systematic wavelet transform algorithm and a novel SRAM-based ASIC architecture, while achieving low cost and high performance. Using 0.18 µm CMOS technology and a 1 V power supply, this ASIC chip consumes only 29 µW and occupies an area of 3 mm(2). This on-chip ECG processor is highly suitable for reliable real-time cardiac status monitoring applications.

  5. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2016-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
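
    For reference, the classical jet-aircraft form of the Breguet range equation that such a break-even analysis builds on (textbook form; the paper's turboelectric variant folds electric-drive efficiency and added weight into these terms):

      R = \frac{V}{\mathrm{TSFC}} \cdot \frac{L}{D} \cdot \ln\frac{W_{\mathrm{initial}}}{W_{\mathrm{final}}}

    where V is cruise speed, TSFC is thrust-specific fuel consumption, L/D is the lift-to-drag ratio, and W_initial and W_final are the cruise start and end weights. Higher drive efficiency acts like a lower effective TSFC, while drive specific power determines how much W_initial grows for a given power rating.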

  6. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2015-01-01

    The purpose of this presentation is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.

  7. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2015-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.

  8. Design of an Inertial-Sensor-Based Data Glove for Hand Function Evaluation.

    PubMed

    Lin, Bor-Shing; Lee, I-Jung; Yang, Shu-Yu; Lo, Yi-Chiang; Lee, Junghsi; Chen, Jean-Lon

    2018-05-13

    Capturing hand motions for hand function evaluations is essential in the medical field. Various data gloves have been developed for rehabilitation and manual dexterity assessments. This study proposed a modular data glove with 9-axis inertial measurement units (IMUs) to obtain static and dynamic parameters during hand function evaluation. A sensor fusion algorithm is used to calculate the range of motion of joints. The data glove is designed to have low cost, easy wearability, and high reliability. Owing to the modular design, the IMU board is independent and extensible and can be used with various microcontrollers to realize more medical applications. This design greatly enhances the stability and maintainability of the glove.
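
    A generic complementary-filter sketch of the kind of sensor fusion used to turn IMU readings into a joint angle (assuming a single-axis gyro/accelerometer pair; the glove's actual 9-axis fusion algorithm is not specified here):

      # Fuse gyro rate (accurate short-term) with accelerometer tilt
      # (drift-free long-term) into one joint-angle estimate.
      import math

      def complementary_filter(samples, dt=0.01, alpha=0.98):
          """samples: iterable of (gyro_rate_dps, accel_x_g, accel_z_g)."""
          angle = 0.0
          for gyro, ax, az in samples:
              accel_angle = math.degrees(math.atan2(ax, az))  # gravity reference
              angle = alpha * (angle + gyro * dt) + (1 - alpha) * accel_angle
          return angle

      # A joint held still at 30 degrees: gyro reads ~0, accel sees the tilt.
      tilt = math.radians(30)
      samples = [(0.0, math.sin(tilt), math.cos(tilt))] * 500
      print(round(complementary_filter(samples), 1))   # converges to 30.0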

  9. DESIGN OF SUPERCONDUCTING COMBINED FUNCTION MAGNETS FOR THE 50 GEV PROTON BEAM LINE FOR THE J-PARC NEUTRINO EXPERIMENT.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WANDERER,P.; ET AL.

    2003-06-15

    Superconducting combined function magnets will be utilized for the 50GeV-750kW proton beam line for the J-PARC neutrino experiment and an R and D program has been launched at KEK. The magnet is designed to provide a combined function with a dipole field of 2.59 T and a quadrupole field of 18.7 T/m in a coil aperture of 173.4 mm. A single layer coil is proposed to reduce the fabrication cost and the coil arrangement in the 2-D cross-section results in left-right asymmetry. This paper reports the design study of the magnet.

  10. Object-oriented productivity metrics

    NASA Technical Reports Server (NTRS)

    Connell, John L.; Eller, Nancy

    1992-01-01

    Software productivity metrics are useful for sizing and costing proposed software and for measuring development productivity. Estimating and measuring source lines of code (SLOC) has proven to be a bad idea because it encourages writing more lines of code and using lower level languages. Function Point Analysis is an improved software metric system, but it is not compatible with newer rapid prototyping and object-oriented approaches to software development. A process is presented here for counting object-oriented effort points, based on a preliminary object-oriented analysis. It is proposed that this approach is compatible with object-oriented analysis, design, programming, and rapid prototyping. Statistics gathered on actual projects are presented to validate the approach.

  11. Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.

    PubMed

    Wang, Xinghu; Hong, Yiguang; Ji, Haibo

    2016-07-01

    The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve the optimal multiagent consensus based on local cost function information and neighboring information and meanwhile to reject local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design can solve the exact optimization problem with rejecting disturbances.

  12. Ultra-thin enhanced-absorption long-wave infrared detectors

    NASA Astrophysics Data System (ADS)

    Wang, Shaohua; Yoon, Narae; Kamboj, Abhilasha; Petluru, Priyanka; Zheng, Wanhua; Wasserman, Daniel

    2018-02-01

    We propose an architecture for enhanced absorption in ultra-thin strained layer superlattice detectors utilizing a hybrid optical cavity design. Our detector architecture utilizes a designer-metal doped semiconductor ground plane beneath the ultra-subwavelength thickness long-wavelength infrared absorber material, upon which we pattern metallic antenna structures. We demonstrate the potential for near 50% detector absorption in absorber layers with thicknesses of approximately λ0/50, using realistic material parameters. We investigate detector absorption as a function of wavelength and incidence angle, as well as detector geometry. The proposed device architecture offers the potential for high efficiency detectors with minimal growth costs and relaxed design parameters.

  13. CAD/CAM monolithic restorations and full-mouth adhesive rehabilitation to restore a patient with a past history of bulimia: the modified three-step technique.

    PubMed

    Vailati, Francesca; Carciofo, Sylvain

    2016-01-01

    Due to increasing awareness of dental erosion, many clinicians would like to propose treatments even at the initial stages of the disease. However, when the loss of tooth structure is visible only to the professional eye and has not affected the esthetics of the smile, affected patients do not usually accept a full-mouth rehabilitation. Reducing the cost of the therapy, simplifying the clinical steps, and proposing noninvasive adhesive techniques may promote patient acceptance. In this article, the treatment of an ex-bulimic patient is illustrated. A modified approach to the three-step technique was followed. The patient completed the therapy in five short visits, including the initial one. No tooth preparation was required, no anesthesia was delivered, and the overall (clinical and laboratory) costs were kept low. At the end of the treatment, the patient was very satisfied with the outcome from a biologic and functional point of view.

  14. A new technique based on Artificial Bee Colony Algorithm for optimal sizing of stand-alone photovoltaic system.

    PubMed

    Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M

    2014-05-01

    One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony (ABC) algorithm. The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life cycle cost (LCC), which is to be minimized. The analysis is performed based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and the Genetic Algorithm (GA) is made. Another location, Zagazig city, is selected to check the validity of the ABC algorithm elsewhere. The ABC algorithm finds better optima than the GA. The results encourage the use of PV systems to electrify rural sites in Egypt.

  15. A new technique based on Artificial Bee Colony Algorithm for optimal sizing of stand-alone photovoltaic system

    PubMed Central

    Mohamed, Ahmed F.; Elarini, Mahdi M.; Othman, Ahmed M.

    2013-01-01

    One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony (ABC) algorithm. The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life cycle cost (LCC), which is to be minimized. The analysis is performed based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and the Genetic Algorithm (GA) is made. Another location, Zagazig city, is selected to check the validity of the ABC algorithm elsewhere. The ABC algorithm finds better optima than the GA. The results encourage the use of PV systems to electrify rural sites in Egypt. PMID:25685507
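
    A compact sketch of the core ABC loop on a stand-in objective (a sphere function replaces the life-cycle-cost model, and the colony size, limit, and bounds are illustrative):

      # Artificial Bee Colony: employed bees, onlookers, and scouts.
      # Swap `cost` for an LCC model to size a stand-alone PV system.
      import random

      def cost(x):                       # placeholder objective to minimize
          return sum(v * v for v in x)

      DIM, SOURCES, LIMIT, ITERS = 2, 10, 20, 200
      lo, hi = -5.0, 5.0
      foods = [[random.uniform(lo, hi) for _ in range(DIM)] for _ in range(SOURCES)]
      trials = [0] * SOURCES

      def try_neighbor(i):
          """Perturb source i toward a random partner; keep if better."""
          k = random.choice([j for j in range(SOURCES) if j != i])
          d = random.randrange(DIM)
          v = foods[i][:]
          v[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
          v[d] = min(max(v[d], lo), hi)
          if cost(v) < cost(foods[i]):
              foods[i], trials[i] = v, 0
          else:
              trials[i] += 1

      for _ in range(ITERS):
          for i in range(SOURCES):                       # employed bees
              try_neighbor(i)
          fits = [1.0 / (1.0 + cost(f)) for f in foods]  # onlooker weights
          for _ in range(SOURCES):                       # onlooker bees
              try_neighbor(random.choices(range(SOURCES), weights=fits)[0])
          for i in range(SOURCES):                       # scouts
              if trials[i] > LIMIT:
                  foods[i] = [random.uniform(lo, hi) for _ in range(DIM)]
                  trials[i] = 0

      print(min(foods, key=cost))        # near the optimum [0, 0]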

  16. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two-dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small- and large-distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described, along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies of the algorithm's performance on example industrial circuits show that it is faster and gives better final placement results than uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed, based on the improved results obtained from parallelization of the simulated annealing algorithm.
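
    A serial sketch of the annealing core that the parallel algorithm distributes: Metropolis acceptance over proposed cell exchanges, with total Manhattan wirelength as a toy cost function (the hypercube mapping and tree broadcast are beyond this snippet):

      # Simulated annealing for cell placement by pairwise exchange.
      import math, random

      def wirelength(place, nets):
          """place: cell -> (x, y); nets: list of connected cell pairs."""
          return sum(abs(place[a][0] - place[b][0]) + abs(place[a][1] - place[b][1])
                     for a, b in nets)

      cells = ["c0", "c1", "c2", "c3"]
      place = dict(zip(cells, [(0, 0), (0, 1), (1, 0), (1, 1)]))
      nets = [("c0", "c3"), ("c1", "c2"), ("c0", "c1")]

      T = 5.0
      while T > 0.01:
          for _ in range(50):
              a, b = random.sample(cells, 2)        # propose a cell exchange
              old = wirelength(place, nets)
              place[a], place[b] = place[b], place[a]
              delta = wirelength(place, nets) - old
              # Metropolis rule: keep improvements; keep uphill moves with
              # probability exp(-delta/T) to escape local minima.
              if delta > 0 and random.random() >= math.exp(-delta / T):
                  place[a], place[b] = place[b], place[a]   # undo the swap
          T *= 0.9                                  # geometric cooling
      print(place, wirelength(place, nets))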

  17. Proposed Project Selection Method for Human Support Research and Technology Development (HSR&TD)

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2005-01-01

    The purpose of HSR&TD is to deliver human support technologies to the Exploration Systems Mission Directorate (ESMD) that will be selected for future missions. This requires identifying promising candidate technologies and advancing them in technology readiness until they are acceptable. HSR&TD must select an array of technology development projects, guide them, and either terminate or continue them, so as to maximize the resulting number of usable advanced human support technologies. This paper proposes an effective project scoring methodology to support managing the HSR&TD project portfolio. Researchers strongly disagree about which technology project selection methods are best, or even whether there are any proven ones. Technology development is risky, and outstanding achievements are rare and unpredictable. There is no simple formula for success. Organizations that are satisfied with their project selection approach typically use a mix of financial, strategic, and scoring methods in an open, established, explicit, formal process. This approach helps to build consensus and develop management insight. It encourages better project proposals by clarifying the desired project attributes. We propose a project scoring technique based on a method previously used in a federal laboratory and supported by recent research. Projects are ranked by their perceived relevance, risk, and return - a new 3 R's. Relevance is the degree to which the project objective supports the HSR&TD goal of developing usable advanced human support technologies. Risk is the estimated probability that the project will achieve its specific objective. Return is the reduction in mission life cycle cost obtained if the project is successful. If the project's objective technology performs a new function with no current cost, its return is the estimated cash value of performing the new function. The proposed project selection scoring method includes definitions of the criteria, a project evaluation questionnaire, and a scoring formula; a purely illustrative reading of such a formula is sketched below.
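
    One purely illustrative reading of the 3 R's as an expected-value score (the function, weights, and units below are hypothetical; the paper's questionnaire and exact formula are not reproduced here):

      # Hypothetical 3 R's score: relevance-weighted expected return.
      def project_score(relevance, risk, ret):
          """relevance: 0..1 fit to HSR&TD goals; risk: probability of
          success (0..1); ret: life cycle cost savings if successful ($M)."""
          return relevance * risk * ret    # expected savings, weighted by fit

      print(project_score(relevance=0.9, risk=0.5, ret=40.0))   # 18.0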

  18. HIV Treatment as Prevention: Modelling the Cost of Antiretroviral Treatment—State of the Art and Future Directions

    PubMed Central

    Meyer-Rath, Gesine; Over, Mead

    2012-01-01

    Policy discussions about the feasibility of massively scaling up antiretroviral therapy (ART) to reduce HIV transmission and incidence hinge on accurately projecting the cost of such scale-up in comparison to the benefits from reduced HIV incidence and mortality. We review the available literature on modelled estimates of the cost of providing ART to different populations around the world, and suggest alternative methods of characterising cost when modelling several decades into the future. In past economic analyses of ART provision, costs were often assumed to vary by disease stage and treatment regimen, but for treatment as prevention, in particular, most analyses assume a uniform cost per patient. This approach disregards variables that can affect unit cost, such as differences in factor prices (i.e., the prices of supplies and services) and the scale and scope of operations (i.e., the sizes and types of facilities providing ART). We discuss several of these variables, and then present a worked example of a flexible cost function used to determine the effect of scale on the cost of a proposed scale-up of treatment as prevention in South Africa. Adjusting previously estimated costs of universal testing and treatment in South Africa for diseconomies of small scale, i.e., more patients being treated in smaller facilities, adds 42% to the expected future cost of the intervention. PMID:22802731
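
    The scale effect described above can be illustrated with a toy unit-cost curve exhibiting economies of scale; the functional form and parameter values below are invented for the sketch and are not the fitted South African cost function.

```python
import numpy as np

# Illustrative-only unit cost with economies of scale:
# c(n) = c_min + k * n**(-alpha), falling as facility size n grows.
# Parameters are made up; the paper estimates its own cost function.

def unit_cost(n_patients, c_min=200.0, k=2000.0, alpha=0.7):
    return c_min + k * n_patients ** -alpha

for n in (50, 200, 1000, 5000):               # patients on ART per facility
    print(f"{n:5d} patients -> ${unit_cost(n):,.0f} per patient-year")

# Same population served in small vs. large facilities: diseconomies of
# small scale raise the total bill substantially.
pop = 100_000
print("all small facilities:", f"${pop * unit_cost(50):,.0f}")
print("all large facilities:", f"${pop * unit_cost(5000):,.0f}")
```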

  19. Development and Beam-Shape Analysis of an Integrated Fiber-Optic Confocal Probe for High-Precision Central Thickness Measurement of Small-Radius Lenses

    PubMed Central

    Sutapun, Boonsong; Somboonkaew, Armote; Amarit, Ratthasart; Chanhorm, Sataporn

    2015-01-01

    This work describes a new design of a fiber-optic confocal probe suitable for measuring the central thicknesses of small-radius optical lenses or similar objects. The proposed confocal probe utilizes an integrated camera that functions as a shape-encoded position-sensing device. The confocal signal for thickness measurement and beam-shape data for off-axis measurement can be simultaneously acquired using the proposed probe. Placing the probe’s focal point off-center relative to a sample’s vertex produces a non-circular image at the camera’s image plane that closely resembles an ellipse for small displacements. We were able to precisely position the confocal probe’s focal point relative to the vertex point of a ball lens with a radius of 2.5 mm, with a lateral resolution of 1.2 µm. The reflected beam shape based on partial blocking by an aperture was analyzed and verified experimentally. The proposed confocal probe offers a low-cost, high-precision technique, an alternative to a high-cost three-dimensional surface profiler, for tight quality control of small optical lenses during the manufacturing process. PMID:25871720

  20. Localization-Free Detection of Replica Node Attacks in Wireless Sensor Networks Using Similarity Estimation with Group Deployment Knowledge

    PubMed Central

    Ding, Chao; Yang, Lijun; Wu, Meng

    2017-01-01

    Due to the unattended nature and poor security guarantee of the wireless sensor networks (WSNs), adversaries can easily make replicas of compromised nodes, and place them throughout the network to launch various types of attacks. Such an attack is dangerous because it enables the adversaries to control large numbers of nodes and extend the damage of attacks to most of the network with quite limited cost. To stop the node replica attack, we propose a location similarity-based detection scheme using deployment knowledge. Compared with prior solutions, our scheme provides extra functionalities that prevent replicas from generating false location claims without deploying resource-consuming localization techniques on the resource-constrained sensor nodes. We evaluate the security performance of our proposal under different attack strategies through heuristic analysis, and show that our scheme achieves secure and robust replica detection by increasing the cost of node replication. Additionally, we evaluate the impact of the network environment on the proposed scheme through theoretic analysis and simulation experiments, and indicate that our scheme achieves effectiveness and efficiency with substantially lower communication, computational, and storage overhead than prior works under different situations and attack strategies. PMID:28098846

  1. Measurement of jaw motion: the proposal of a simple and accurate method.

    PubMed

    Pinheiro, A P; Pereira, A A; Andrade, A O; Bellomo, D

    2011-01-01

    The analysis of jaw movements has long been used as a measure for clinical diagnosis and assessment. A number of strategies are available for monitoring the trajectory; however, most of them rely on expensive tools, which are often not available to many clinics around the world. In this context, this research proposes a new tool capable of quantifying the opening/closing, protrusion and laterotrusion movements of the mandible. These movements are important for the clinical evaluation of both temporomandibular function and the muscles involved in mastication. The proposed system, unlike current commercial systems, employs a low-cost video camera and a computer program that reconstructs the trajectory of a reflective marker fixed on the jaw. To illustrate the application of the devised tool, a clinical trial was carried out investigating the jaw movements of 10 subjects. The results obtained in this study were compatible with those found in the literature, with the advantage of using a low-cost, simple, non-invasive and flexible solution customized for the practical needs of clinics. The average error of the system was less than 1.0%.
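
    The core image-processing step in such a low-cost marker tracker can be sketched in a few lines: threshold each frame and take the centroid of the bright reflective marker. The threshold, the synthetic frame, and the mm-per-pixel calibration below are assumptions, not the authors' implementation.

```python
import numpy as np

# Per-frame marker localization by thresholding; a synthetic frame stands
# in for camera input, and the pixel-to-mm scale is assumed known from a
# calibration object in the camera's view.

def marker_centroid(frame, thresh=200):
    ys, xs = np.nonzero(frame > thresh)
    return (xs.mean(), ys.mean()) if len(xs) else None

frame = np.zeros((480, 640), dtype=np.uint8)
frame[300:310, 200:210] = 255          # fake reflective marker
cx, cy = marker_centroid(frame)
mm_per_px = 0.25                       # hypothetical calibration factor
print(f"marker at ({cx * mm_per_px:.1f} mm, {cy * mm_per_px:.1f} mm)")
```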

  2. An energy and cost efficient majority-based RAM cell in quantum-dot cellular automata

    NASA Astrophysics Data System (ADS)

    Khosroshahy, Milad Bagherian; Moaiyeri, Mohammad Hossein; Navi, Keivan; Bagherzadeh, Nader

    Nanotechnologies, notably quantum-dot cellular automata (QCA), have attracted major attention for their prominent features as compared to conventional CMOS circuitry. Quantum-dot cellular automata, particularly owing to its considerable reduction in size, high switching speed and ultra-low energy consumption, is considered a potential alternative to CMOS technology. As the memory unit is one of the most essential components in a digital system, designing a well-optimized QCA random access memory (RAM) cell is an important area of research. In this paper, a new five-input majority gate is presented which is suitable for implementing efficient single-layer QCA circuits. In addition, a new RAM cell with set and reset capabilities is designed based on the proposed majority gate, which has an efficient and low-energy structure. The functionality, performance and energy consumption of the proposed designs are evaluated using the QCADesigner and QCAPro tools. According to the simulation results, the proposed RAM design leads to on average 38% lower total energy dissipation, 25% smaller area, 20% lower cell count, 28% lower delay and 60% lower QCA cost as compared to its previous counterparts.

  3. Localization-Free Detection of Replica Node Attacks in Wireless Sensor Networks Using Similarity Estimation with Group Deployment Knowledge.

    PubMed

    Ding, Chao; Yang, Lijun; Wu, Meng

    2017-01-15

    Due to the unattended nature and poor security guarantee of the wireless sensor networks (WSNs), adversaries can easily make replicas of compromised nodes, and place them throughout the network to launch various types of attacks. Such an attack is dangerous because it enables the adversaries to control large numbers of nodes and extend the damage of attacks to most of the network with quite limited cost. To stop the node replica attack, we propose a location similarity-based detection scheme using deployment knowledge. Compared with prior solutions, our scheme provides extra functionalities that prevent replicas from generating false location claims without deploying resource-consuming localization techniques on the resource-constrained sensor nodes. We evaluate the security performance of our proposal under different attack strategies through heuristic analysis, and show that our scheme achieves secure and robust replica detection by increasing the cost of node replication. Additionally, we evaluate the impact of the network environment on the proposed scheme through theoretic analysis and simulation experiments, and indicate that our scheme achieves effectiveness and efficiency with substantially lower communication, computational, and storage overhead than prior works under different situations and attack strategies.

  4. Efficient evaluation of the Coulomb force in the Gaussian and finite-element Coulomb method.

    PubMed

    Kurashige, Yuki; Nakajima, Takahito; Sato, Takeshi; Hirao, Kimihiko

    2010-06-28

    We propose an efficient method for evaluating the Coulomb force in the Gaussian and finite-element Coulomb (GFC) method, which is a linear-scaling approach for evaluating the Coulomb matrix and energy in large molecular systems. The efficient evaluation of the analytical gradient in the GFC is not as straightforward as the evaluation of the energy, because the SCF procedure with the Coulomb matrix does not give a variational solution for the Coulomb energy. Thus, an efficient approximate method is proposed instead, in which the Coulomb potential is expanded in the Gaussian and finite-element auxiliary functions as done in the GFC. To minimize the error in the gradient, not just in the energy, the derived functions of the original auxiliary functions of the GFC are used additionally for the evaluation of the Coulomb gradient. In fact, the use of the derived functions significantly improves the accuracy of this approach. Although these additional auxiliary functions enlarge the size of the discretized Poisson equation and thereby increase the computational cost, the method maintains the near-linear scaling of the GFC and does not affect the overall efficiency of the GFC approach.

  5. 48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... proposal instructions. (1) Submit proposal for other than cost factors as a separate part of the total proposal... these items, including estimated usage hours, rates, and total costs. (ii) If equipment purchases are... the total estimated contract dollar value or $100,000, whichever is less), the offeror shall include...

  6. An Enhanced Lightweight Anonymous Authentication Scheme for a Scalable Localization Roaming Service in Wireless Sensor Networks.

    PubMed

    Chung, Youngseok; Choi, Seokjin; Lee, Youngsook; Park, Namje; Won, Dongho

    2016-10-07

    More security concerns and complicated requirements arise in wireless sensor networks than in wired networks, due to the vulnerability caused by their openness. To address this vulnerability, anonymous authentication is an essential security mechanism for preserving privacy and providing security. Over recent years, various anonymous authentication schemes have been proposed. Most of them reveal both strengths and weaknesses in terms of security and efficiency. Recently, Farash et al. proposed a lightweight anonymous authentication scheme in ubiquitous networks, which remedies the security faults of previous schemes. However, their scheme still suffers from certain weaknesses. In this paper, we prove that Farash et al.'s scheme fails to provide anonymity, authentication, or password replacement. In addition, we propose an enhanced scheme that provides efficiency, as well as anonymity and security. Considering the limited capability of sensor nodes, we utilize only low-cost functions, such as one-way hash functions and bit-wise exclusive-OR operations. The security and lightness of the proposed scheme mean that it can be applied to roaming service in localized domains of wireless sensor networks, to provide anonymous authentication of sensor nodes.
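
    To illustrate the flavor of "low-cost functions only" (one-way hashes and bitwise XOR), here is a toy pseudonym exchange; it is not the authors' protocol, just a sketch of how a node might authenticate without ever transmitting its real identity.

```python
import hashlib
import os

# Toy anonymous authentication with hash and XOR only. The key setup,
# message layout, and checks are invented for illustration.

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

shared_key = os.urandom(32)            # assumed pre-shared with the gateway
node_id = b"sensor-17"

# Node side: send a fresh pseudonym instead of the real identity.
nonce = os.urandom(32)
pseudonym = xor(h(shared_key, nonce), h(node_id))
message = (nonce, pseudonym, h(shared_key, nonce, pseudonym))  # keyed tag

# Gateway side: recompute and check; the real ID never crosses the air.
nonce, pseudonym, tag = message
assert tag == h(shared_key, nonce, pseudonym)                  # integrity
assert xor(pseudonym, h(shared_key, nonce)) == h(node_id)      # identity
print("node authenticated anonymously")
```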

  7. An Enhanced Lightweight Anonymous Authentication Scheme for a Scalable Localization Roaming Service in Wireless Sensor Networks

    PubMed Central

    Chung, Youngseok; Choi, Seokjin; Lee, Youngsook; Park, Namje; Won, Dongho

    2016-01-01

    More security concerns and complicated requirements arise in wireless sensor networks than in wired networks, due to the vulnerability caused by their openness. To address this vulnerability, anonymous authentication is an essential security mechanism for preserving privacy and providing security. Over recent years, various anonymous authentication schemes have been proposed. Most of them reveal both strengths and weaknesses in terms of security and efficiency. Recently, Farash et al. proposed a lightweight anonymous authentication scheme in ubiquitous networks, which remedies the security faults of previous schemes. However, their scheme still suffers from certain weaknesses. In this paper, we prove that Farash et al.’s scheme fails to provide anonymity, authentication, or password replacement. In addition, we propose an enhanced scheme that provides efficiency, as well as anonymity and security. Considering the limited capability of sensor nodes, we utilize only low-cost functions, such as one-way hash functions and bit-wise exclusive-OR operations. The security and lightness of the proposed scheme mean that it can be applied to roaming service in localized domains of wireless sensor networks, to provide anonymous authentication of sensor nodes. PMID:27739417

  8. 75 FR 35864 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-23

    ... Effectiveness of Proposed Rule Change To Establish the Appointment Cost for Options on the iPath S&P 500 VIX... Proposed Rule Change CBOE proposes to amend Rule 8.3 to establish the appointment cost for options on the i... the appointment cost for options on the iPath S&P 500 VIX Short-Term Futures Index ETN (``VXX...

  9. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    PubMed

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
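
    The thermodynamic-perturbation correction step admits a compact sketch: reweight the classical-MD cluster populations by Boltzmann factors of the QM-minus-MM energy difference. All numbers below are made up for illustration.

```python
import numpy as np

# Perturbative reweighting of conformer populations. The MD populations
# and energy gaps are invented placeholders, not the cyanin results.

kT = 0.593  # kcal/mol at ~298 K

p_mm = np.array([0.50, 0.30, 0.20])   # conformer populations from MD clustering
dE   = np.array([0.0, -0.8, +0.5])    # relative E_QM - E_MM per conformer

w = p_mm * np.exp(-dE / kT)           # Boltzmann correction factors
p_qm = w / w.sum()                    # renormalized, QM-corrected populations
print("corrected populations:", np.round(p_qm, 3))
```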

  10. Efficient hyperspectral image segmentation using geometric active contour formulation

    NASA Astrophysics Data System (ADS)

    Albalooshi, Fatema A.; Sidike, Paheding; Asari, Vijayan K.

    2014-10-01

    In this paper, we present a new formulation of geometric active contours that embeds local hyperspectral image information for accurate object region and boundary extraction. We exploit a self-organizing map (SOM) unsupervised neural network to train our model. The segmentation process is achieved by the construction of a level set cost functional in which the dynamic variable is the best matching unit (BMU) coming from the SOM map. In addition, we use Gaussian filtering to discipline the deviation of the level set functional from a signed distance function, which helps to eliminate the computationally expensive re-initialization step. By using the collective computational ability and energy convergence capability of the active contour model (ACM) energy functional, our method optimizes the geometric ACM energy functional with lower computational time and a smoother level set function. The proposed algorithm starts with feature extraction from the raw hyperspectral images. In this step, the principal component analysis (PCA) transformation is employed, which helps in reducing dimensionality and selecting the best sets of significant spectral bands. Then the modified geometric level set functional based ACM is applied to the optimal number of spectral bands determined by the PCA. By introducing local significant spectral band information, our proposed method is capable of forcing the level set functional to stay close to a signed distance function, and therefore considerably reduces the need for the expensive re-initialization procedure. To verify the effectiveness of the proposed technique, we use real-life hyperspectral images and test our algorithm on varying textural regions. This framework can be easily adapted to different applications for object segmentation in aerial hyperspectral imagery.

  11. A non-stationary cost-benefit analysis approach for extreme flood estimation to explore the nexus of 'Risk, Cost and Non-stationarity'

    NASA Astrophysics Data System (ADS)

    Qi, Wei

    2017-11-01

    Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation relies on a stationarity assumption. This study develops a non-stationary cost-benefit based design flood estimation approach. The approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on the expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost and design life periods. Two basins, with 54-year and 104-year flood records respectively, are used to illustrate the application. It is found that the developed approach can effectively reveal changes in expected total cost and extreme floods over different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, reflecting the increase in cost required to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore a single design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' approach selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.
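
    A minimal sketch of the non-stationary expected-total-cost trade-off: choose the design flood that minimizes construction cost plus expected damage when the flood distribution drifts over the design life. The Gaussian flood model, the linear trend, and the cost coefficients are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from math import erf, sqrt

# Non-stationary flood model: the mean annual-maximum flood grows
# linearly with time. All parameters are invented for the sketch.

def exceedance_prob(q, year, mu0=100.0, trend=0.5, sigma=30.0):
    mu = mu0 + trend * year
    return 0.5 * (1 - erf((q - mu) / (sigma * sqrt(2))))

def expected_total_cost(q, life=50, build_cost=2.0, damage=100.0):
    # construction cost grows with the design flood q; damage is
    # incurred each year the flood exceeds the design level
    risk = sum(exceedance_prob(q, t) for t in range(life))
    return build_cost * q + damage * risk

qs = np.linspace(100, 400, 301)
costs = [expected_total_cost(q) for q in qs]
print("cost-optimal design flood:", qs[int(np.argmin(costs))])
```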

  12. The Cost-Effectiveness of Replacing the Bottom Quartile of Novice Teachers through Value-Added Teacher Assessment

    ERIC Educational Resources Information Center

    Yeh, Stuart S.; Ritter, Joseph

    2009-01-01

    A cost-effectiveness analysis was conducted of Gordon, Kane, and Staiger's (2006) proposal to raise student achievement by identifying and replacing the bottom quartile of novice teachers, using value-added assessment of teacher performance. The cost effectiveness of this proposal was compared to the cost effectiveness of voucher programs, charter…

  13. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
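
    A simplified stochastic BFGS loop on a synthetic least-squares problem conveys the idea; the crude secant regularization and curvature safeguard below are stand-ins for RES's eigenvalue regularization, and the data are randomly generated.

```python
import numpy as np

# Sketch: stochastic gradients drive both the step and the inverse-Hessian
# update. Not the RES algorithm itself; a toy illustration of the idea.

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 5)); x_true = np.arange(5.0)
y = A @ x_true + 0.1 * rng.normal(size=1000)

def stoch_grad(x, batch=32):
    idx = rng.integers(0, len(y), batch)
    Ai, yi = A[idx], y[idx]
    return Ai.T @ (Ai @ x - yi) / batch

x = np.zeros(5); B_inv = np.eye(5); g = stoch_grad(x)
for t in range(500):
    step = 1.0 / (10 + t)
    x_new = x - step * B_inv @ g
    g_new = stoch_grad(x_new)
    s, yv = x_new - x, g_new - g + 1e-3 * (x_new - x)  # regularized secant pair
    if s @ yv > 1e-10:                                  # curvature safeguard
        rho = 1.0 / (s @ yv)
        V = np.eye(5) - rho * np.outer(s, yv)
        B_inv = V @ B_inv @ V.T + rho * np.outer(s, s)  # BFGS inverse update
    x, g = x_new, g_new
print("estimate:", np.round(x, 2))
```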

  14. Lung tumor segmentation in PET images using graph cuts.

    PubMed

    Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan

    2013-03-01

    The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures than is possible with visual assessment alone, and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor, and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy c-means, region-based active contours and tumor-customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. Dikin-type algorithms for dextrous grasping force optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buss, M.; Faybusovich, L.; Moore, J.B.

    1998-08-01

    One of the central issues in dextrous robotic hand grasping is to balance external forces acting on the object and at the same time achieve grasp stability and minimum grasping effort. A companion paper shows that the nonlinear friction-force limit constraints on grasping forces are equivalent to the positive definiteness of a certain matrix subject to linear constraints. Further, compensation of the external object force is also a linear constraint on this matrix. Consequently, the task of grasping force optimization can be formulated as a problem with semidefinite constraints. In this paper, two versions of strictly convex cost functions, one of them self-concordant, are considered. These are twice-continuously differentiable functions that tend to infinity at the boundary of positive definiteness. For the general class of such cost functions, Dikin-type algorithms are presented. It is shown that the proposed algorithms guarantee convergence to the unique solution of the semidefinite programming problem associated with dextrous grasping force optimization. Numerical examples demonstrate the simplicity of implementation, the good numerical properties, and the optimality of the approach.

  16. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes with large deformation, a bidirectional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a subdivision strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized data and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  17. A quasi-dense matching approach and its calibration application with Internet photos.

    PubMed

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  18. A Well-Tempered Hybrid Method for Solving Challenging Time-Dependent Density Functional Theory (TDDFT) Systems.

    PubMed

    Kasper, Joseph M; Williams-Young, David B; Vecharynski, Eugene; Yang, Chao; Li, Xiaosong

    2018-04-10

    The time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) equations allow one to probe electronic resonances of a system quickly and inexpensively. However, the iterative solution of the eigenvalue problem can be challenging or impossible to converge, using standard methods such as the Davidson algorithm for spectrally dense regions in the interior of the spectrum, as are common in X-ray absorption spectroscopy (XAS). More robust solvers, such as the generalized preconditioned locally harmonic residual (GPLHR) method, can alleviate this problem, but at the expense of higher average computational cost. A hybrid method is proposed which adapts to the problem in order to maximize computational performance while providing the superior convergence of GPLHR. In addition, a modification to the GPLHR algorithm is proposed to adaptively choose the shift parameter to enforce a convergence of states above a predefined energy threshold.

  19. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE PAGES

    Paszyńska, A.; Paszyński, M.; Jopek, K.; ...

    2015-01-01

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(Ne log Ne), where Ne is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  20. Low-cost ultra-thin broadband terahertz beam-splitter.

    PubMed

    Ung, Benjamin S-Y; Fumeaux, Christophe; Lin, Hungyen; Fischer, Bernd M; Ng, Brian W-H; Abbott, Derek

    2012-02-27

    A low-cost terahertz beam-splitter is fabricated using ultra-thin LDPE plastic sheeting coated with a conducting silver layer. The beam splitting ratio is determined as a function of the thickness of the silver layer, so any required splitting ratio can be printed on demand with a suitable rapid prototyping technology. The low-cost aspect is a consequence of the fact that ultra-thin LDPE sheeting is readily obtainable, known more commonly as domestic plastic wrap or cling wrap. The proposed beam-splitter has numerous advantages over the float zone silicon wafers commonly used within the terahertz frequency range: low cost, ease of handling, ultra-thin profile, and the ability to fabricate any required beam splitting ratio. Furthermore, as the beam-splitter is ultra-thin, it presents low loss and does not suffer from Fabry-Pérot effects. Measurements performed on manufactured prototypes with different splitting ratios demonstrate a good agreement with our theoretical model in both P and S polarizations, exhibiting nearly frequency-independent splitting ratios in the terahertz frequency range.

  1. Activities identification for activity-based cost/management applications of the diagnostics outpatient procedures.

    PubMed

    Alrashdan, Abdalla; Momani, Amer; Ababneh, Tamador

    2012-01-01

    One of the most challenging problems facing healthcare providers is to determine the actual cost of their procedures, which is important for internal accounting and price justification to insurers. The objective of this paper is to find suitable categories to identify the diagnostic outpatient medical procedures and translate them from a functional orientation to a process orientation. A hierarchical task tree is developed based on a classification schema of procedural activities. Each procedure is seen as a process consisting of a number of activities. This provides a powerful foundation for activity-based cost/management implementation and supplies enough information to discover the value-added and non-value-added activities, which assists in process improvement and may eventually lead to cost reduction. Work measurement techniques are used to identify the standard time of each activity at the lowest level of the task tree. A real case study at a private hospital is presented to demonstrate the proposed methodology. © 2011 National Association for Healthcare Quality.

  2. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paszyńska, A.; Paszyński, M.; Jopek, K.

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(Ne log Ne), where Ne is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  3. Neurocomputational account of how the human brain decides when to have a break.

    PubMed

    Meyniel, Florent; Sergent, Claire; Rigoux, Lionel; Daunizeau, Jean; Pessiglione, Mathias

    2013-02-12

    No pain, no gain: cost-benefit trade-off has been formalized in classical decision theory to account for how we choose whether to engage effort. However, how the brain decides when to have breaks in the course of effort production remains poorly understood. We propose that decisions to cease and resume work are triggered by a cost evidence accumulation signal reaching upper and lower bounds, respectively. We developed a task in which participants are free to exert a physical effort knowing that their payoff would be proportional to their effort duration. Functional MRI and magnetoencephalography recordings conjointly revealed that the theoretical cost evidence accumulation signal was expressed in proprioceptive regions (bilateral posterior insula). Furthermore, the slopes and bounds of the accumulation process were adapted to the difficulty of the task and the money at stake. Cost evidence accumulation might therefore provide a dynamical mechanistic account of how the human brain maximizes benefits while preventing exhaustion.
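
    The proposed bounded-accumulation mechanism can be simulated in a few lines: cost evidence rises during effort and dissipates during rest, with bound crossings triggering breaks and resumptions. The bounds, slopes, and noise level below are illustrative, not the fitted parameters.

```python
import numpy as np

# Toy two-bound accumulator for work/rest switching. Crossing the upper
# bound triggers a break; decaying back to the lower bound resumes work.

rng = np.random.default_rng(1)
upper, lower = 1.0, 0.2
rise, decay, noise = 0.02, 0.03, 0.01

cost, working, log = 0.3, True, []
for _ in range(2000):
    drift = rise if working else -decay
    cost = max(0.0, cost + drift + noise * rng.normal())
    if working and cost >= upper:
        working = False            # exhausted: take a break
    elif not working and cost <= lower:
        working = True             # recovered: resume effort
    log.append(working)

bouts = sum(1 for a, b in zip(log, log[1:]) if not a and b) + log[0]
print(f"fraction of time working: {np.mean(log):.2f}, work bouts: {bouts}")
```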

  4. The graph neural network model.

    PubMed

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G, n) ∈ R^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
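
    A bare-bones sketch of the fixed-point computation behind tau(G, n) follows; the transition and output maps below are fixed random affine maps rather than the trained networks of the paper, so only the iteration structure is faithful.

```python
import numpy as np

# Iterate a local transition function over node states until they
# (approximately) stabilize, then read out an m-dimensional output per
# node. f and g are untrained stand-ins for the learned networks.

rng = np.random.default_rng(0)
n_nodes, d, m = 5, 8, 2
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

Wf = 0.1 * rng.normal(size=(d, d))    # transition map (small scale for stability)
Wg = rng.normal(size=(m, d))          # output map

def tau(adj, iters=50):
    x = np.zeros((n_nodes, d))
    for _ in range(iters):            # relaxation toward a fixed point
        x = np.tanh(np.stack([Wf @ sum(x[j] for j in adj[i])
                              for i in range(n_nodes)]))
    return x @ Wg.T                   # g(x_n) in R^m for every node n

print(np.round(tau(adj), 3))
```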

  5. Shifting and power sharing control of a novel dual input clutchless transmission for electric vehicles

    NASA Astrophysics Data System (ADS)

    Liang, Jiejunyi; Yang, Haitao; Wu, Jinglai; Zhang, Nong; Walker, Paul D.

    2018-05-01

    To improve the overall efficiency of electric vehicles and guarantee driving comfort and vehicle drivability, under the concept of simplifying mechanism complexity and minimizing manufacturing cost, this paper proposes a novel clutchless power-shifting transmission system with a shifting control strategy and a power sharing control strategy. The proposed shifting strategy takes advantage of the transmission architecture to achieve power-on shifting, which greatly improves driving comfort compared with a conventional automated manual transmission, using a bump-function-based shifting control method. To maximize overall efficiency, a real-time power sharing control strategy is designed to solve the power distribution problem between the two motors. A detailed mathematical model is built to verify the effectiveness of the proposed methods. The results demonstrate that the proposed strategies considerably improve overall efficiency while achieving non-interrupted power-on shifting and keeping the vehicle jerk during shifting below an acceptable threshold.

  6. Transient stability enhancement of modern power grid using predictive Wide-Area Monitoring and Control

    NASA Astrophysics Data System (ADS)

    Yousefian, Reza

    This dissertation presents a real-time Wide-Area Control (WAC) scheme designed on the basis of artificial intelligence for transient stability enhancement of large-scale modern power systems. Using the measurements available from Phasor Measurement Units (PMUs) at generator buses, the WAC monitors the global oscillations in the system and optimally augments the local excitation systems of the synchronous generators. The complexity of the power system stability problem, along with its uncertainties and nonlinearities, makes conventional modeling impractical or inaccurate. In this work, a Reinforcement Learning (RL) algorithm built on Neural Networks (NNs) is used to map the nonlinearities of the system in real-time. This method, unlike both centralized and decentralized control schemes, employs a number of semi-autonomous agents that collaborate with each other to perform optimal control, which is well-suited for WAC applications. Also, to handle the delays in Wide-Area Monitoring (WAM) and adapt the RL toward a robust control design, Temporal Difference (TD) learning is proposed as a solver for the RL problem, i.e., the optimal cost function. However, the main drawback of such a WAC design is that it is challenging to determine whether an offline-trained network remains valid for assessing the stability of the power system once the system has evolved to a different operating state or network topology. In order to address this generality issue of NNs, a value priority scheme is proposed in this work to design a hybrid of linear and nonlinear controllers. The algorithm, a so-called supervised RL based on a mixture of experts, is initialized with the linear controller and switches to the RL controller as the latter's performance and identification improve in real-time. This work also focuses on transient stability and develops Lyapunov energy functions for synchronous generators to monitor the stability stress of the system. Using such energies as a cost function guarantees convergence toward optimal post-fault solutions. These energy functions are developed from the inter-area oscillations of the system, identified online with Prony analysis. Finally, this work investigates the impacts of renewable energy resources, specifically Doubly Fed Induction Generator (DFIG)-based wind turbines, on power system transient stability and control. As the penetration of such resources in the transmission system increases, neglecting their impacts would make the WAC design unrealistic. An energy function is proposed for DFIGs based on their dynamic performance during transient disturbances. This energy is then added to the synchronous generators' energy as a global cost function, which is minimized by the WAC signals. We discuss the relative advantages and bottlenecks of each architecture and methodology using dynamic simulations of several test systems, including a 2-area 8-bus system, the IEEE 39-bus system, and the IEEE 68-bus system, in EMTP and real-time simulators. Being a nonlinear, fast, accurate, and non-model-based design, the proposed WAC system shows better transient and damping response when compared to conventional control schemes and local PSSs.

  7. A proposal for capital cost payment.

    PubMed

    Cleverley, W O

    1984-01-01

    This article proposes new bases for the payment of hospital capital costs. Separate distinctions between proprietary and voluntary hospitals are made based on their definition of capital and the requirements for capital maintenance. Replacement cost depreciation is suggested as the payment basis for voluntary hospitals.

  8. 14 CFR 151.127 - Accounting and audit.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) AIRPORTS FEDERAL AID TO AIRPORTS Rules and Procedures for Advance Planning and Engineering Proposals § 151... costs are also applicable to advance planning proposal costs. However, the requirement of segregating and grouping costs applies only to § 151.55(a) (5) and (7) classifications. ...

  9. 14 CFR 151.127 - Accounting and audit.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) AIRPORTS FEDERAL AID TO AIRPORTS Rules and Procedures for Advance Planning and Engineering Proposals § 151... costs are also applicable to advance planning proposal costs. However, the requirement of segregating and grouping costs applies only to § 151.55(a) (5) and (7) classifications. ...

  10. 14 CFR 151.127 - Accounting and audit.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) AIRPORTS FEDERAL AID TO AIRPORTS Rules and Procedures for Advance Planning and Engineering Proposals § 151... costs are also applicable to advance planning proposal costs. However, the requirement of segregating and grouping costs applies only to § 151.55(a) (5) and (7) classifications. ...

  11. 14 CFR 151.127 - Accounting and audit.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) AIRPORTS FEDERAL AID TO AIRPORTS Rules and Procedures for Advance Planning and Engineering Proposals § 151... costs are also applicable to advance planning proposal costs. However, the requirement of segregating and grouping costs applies only to § 151.55(a) (5) and (7) classifications. ...

  12. Health care reform: motivation for discrimination?

    PubMed

    Navin, J C; Pettit, M A

    1995-01-01

    One of the major issues in the health care reform debate is the requirement that employers pay a portion of their employees' health insurance premiums. This paper examines the method for calculating the employer share of the health care premiums, as specified in the President's health care reform proposal. The calculation of the firm's cost of providing employee health care benefits is a function of marital status as well as the incidence of two-earner households. This paper demonstrates that this method provides for lower-than-average premiums for married employees with no dependents in communities in which there is at least one married couple where both individuals participate in the labor market. This raises the non-wage labor costs of employing single individuals relative to individuals who are identical in every respect except their marital status. This paper explores the economic implications for hiring, as well as profits, for firms located in a perfectly competitive industry. The results of the theoretical model presented here are clear. Under this proposed version of health care reform, ceteris paribus, firms have a clear preference for two-earner households. This paper also demonstrates that the incentive to discriminate is related to the size of the firm and to the average wage of full-time employees for firms which employ fewer than fifty individuals. While this paper examines the specifics of President Clinton's original proposal, the conclusions reached here would apply to any form of employer-mandated coverage in which the premiums are a function of family status and the incidence of two-earner households.

  13. Separate valuation subsystems for delay and effort decision costs.

    PubMed

    Prévost, Charlotte; Pessiglione, Mathias; Météreau, Elise; Cléry-Melin, Marie-Laure; Dreher, Jean-Claude

    2010-10-20

    Decision making consists of choosing among available options on the basis of a valuation of their potential costs and benefits. Most theoretical models of decision making in behavioral economics, psychology, and computer science propose that the desirability of outcomes expected from alternative options can be quantified by utility functions. These utility functions allow a decision maker to assign subjective values to each option under consideration by weighting the likely benefits and costs resulting from an action, and to select the one with the highest subjective value. Here, we used model-based neuroimaging to test whether the human brain uses separate valuation systems for rewards (erotic stimuli) associated with different types of costs, namely, delay and effort. We show that humans devalue rewards associated with physical effort in a strikingly similar fashion to rewards associated with delays, and that a single computational model derived from economic theory can account for the behavior observed in both delay discounting and effort discounting. However, our neuroimaging data reveal that the human brain uses distinct valuation subsystems for different types of costs, reflecting in opposite fashion delayed rewards and future energetic expenses. The ventral striatum and the ventromedial prefrontal cortex represent the increasing subjective value of delayed rewards, whereas a distinct network, composed of the anterior cingulate cortex and the anterior insula, represents the decreasing value of the effortful option, coding the expected expense of energy. Together, these data demonstrate that the valuation processes underlying different types of costs can be fractionated at the cerebral level.
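
    The shared computational account can be sketched as hyperbolic discounting with a cost-type-specific constant; the k values and cost units below are illustrative, not the fitted parameters.

```python
import numpy as np

# Single-family discounting model: subjective value falls hyperbolically
# with cost, whether the cost is a delay or a physical effort, with a
# separate constant k per cost type. All numbers are made up.

def subjective_value(reward, cost, k):
    return reward / (1.0 + k * cost)

delays  = np.array([0, 5, 10, 20])     # seconds of waiting
efforts = np.array([0, 20, 40, 80])    # % of maximal grip force
print("delayed  :", np.round(subjective_value(10.0, delays,  k=0.10), 2))
print("effortful:", np.round(subjective_value(10.0, efforts, k=0.025), 2))
```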

  14. 75 FR 15435 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ... Collection: Business Proposal Forms for Quality Improvement Organizations (QIOs); Use: The submission of... request is used by CMS to negotiate QIO contracts. The revised business proposal forms will be useful in a... cost reports to the proposed costs noted on the business proposal forms. Subsequent contract and...

  15. Exploring the Pareto frontier using multisexual evolutionary algorithms: an application to a flexible manufacturing problem

    NASA Astrophysics Data System (ADS)

    Bonissone, Stefano R.; Subbu, Raj

    2002-12-01

    In multi-objective optimization (MOO) problems we need to optimize many, possibly conflicting, objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction and pass their genetic material to the next generation; weak solutions are removed from the population. The fitness function evaluates each solution and returns a related score. In MOO problems, this fitness function is vector-valued, i.e. it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (the one yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
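
    The non-dominated (Pareto) filter at the core of this setup is easy to state in code; the (cost, time) plans below are made-up data for illustration.

```python
# Pareto filter for minimization objectives: keep the solutions that no
# other solution beats on every objective.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

plans = [(100, 9), (120, 7), (90, 12), (125, 9), (115, 10), (130, 6)]  # (cost, time)
print(pareto_front(plans))  # (125, 9) and (115, 10) are dominated and dropped
```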

  16. An evaluation of the proposed organisation restructuring at Kadoma city, 2015.

    PubMed

    Muringazuva, A Caroline; Chirundu, Daniel; Shamu, Shepherd; Shambira, Gerald; Gombe, Notion; Juru, Tsitsi; Bangure, Donewell; Tshimanga, Mufuta

    2017-01-01

    Restructuring is the corporate management term for the act of reorganizing the legal, operational, or other structures of a company for the purpose of making it more profitable or better organized for its present needs. However, preparing an organization to accept and welcome any change is crucial. There is concern over poor service delivery, untimely payment of workers, a top management structure thought to be top-heavy, and employee costs consuming 58% of total expenditure. A descriptive cross sectional study was carried out. A cost-benefit analysis was used to assess the costs and benefits of the proposed retrenchment exercise, and a survey was conducted to assess the workers' perceptions of the proposed restructuring exercise. A pretested self-administered questionnaire was used for data collection, and data were analysed using EpiInfoTM (CDC 2012). Written informed consent was obtained from all study participants. Sixty-nine percent of the respondents were males. The median length of service with the organisation was 8 years (Q1=1; Q3=17). Expenditure surpassed total income by USD 11,000, and 52% of expenditure went towards employment costs. A midyear financial review showed that 1% was channelled towards capital expenditure, 2% to repairs and maintenance, and 58% of all incurred expenditure to employee costs. The current departmental salary budget amounted to USD 3.3 million; estimated salary costs for the proposed departmental structures amount to USD 3.8 million. Comparison of the current and proposed structures showed that the proposed structure costs USD 486,000 more. The proposed structure is projected to improve service delivery from 60% to 85%. Unlike managers, lower-level workers did not want the exercise to be carried out. The proposed structure has higher costs than the current structure but offers more benefits in terms of service delivery. In general, workers perceived restructuring negatively and did not want it done.

  17. An evaluation of the proposed organisation restructuring at Kadoma city, 2015

    PubMed Central

    Muringazuva, A Caroline; Chirundu, Daniel; Shamu, Shepherd; Shambira, Gerald; Gombe, Notion; Juru, Tsitsi; Bangure, Donewell; Tshimanga, Mufuta

    2017-01-01

    Introduction Restructuring is the corporate management term for the act of reorganizing the legal, operational, or other structures of a company for the purpose of making it more profitable or better organized for its present needs. However, preparing an organization to accept and welcome any change is crucial. There is concern over poor service delivery, untimely payment of workers, a top management structure thought to be top-heavy, and employee costs consuming 58% of total expenditure. Methods A descriptive cross sectional study was carried out. A cost-benefit analysis was used to assess the costs and benefits of the proposed retrenchment exercise, and a survey was conducted to assess the workers’ perceptions of the proposed restructuring exercise. A pretested self-administered questionnaire was used for data collection, and data were analysed using EpiInfoTM (CDC 2012). Written informed consent was obtained from all study participants. Results Sixty-nine percent of the respondents were males. The median length of service with the organisation was 8 years (Q1=1; Q3=17). Expenditure surpassed total income by USD 11,000, and 52% of expenditure went towards employment costs. A midyear financial review showed that 1% was channelled towards capital expenditure, 2% to repairs and maintenance, and 58% of all incurred expenditure to employee costs. The current departmental salary budget amounted to USD 3.3 million; estimated salary costs for the proposed departmental structures amount to USD 3.8 million. Comparison of the current and proposed structures showed that the proposed structure costs USD 486,000 more. The proposed structure is projected to improve service delivery from 60% to 85%. Unlike managers, lower-level workers did not want the exercise to be carried out. Conclusion The proposed structure has higher costs than the current structure but offers more benefits in terms of service delivery. In general, workers perceived restructuring negatively and did not want it done. PMID:28748021

  18. Cost Validation Using PRICE H

    NASA Technical Reports Server (NTRS)

    Jack, John; Kwan, Eric; Wood, Milana

    2011-01-01

    PRICE H was introduced into the JPL cost estimation tool set circa 2003. It became more widely available at JPL when IPAO funded the NASA-wide site license for all NASA centers. PRICE H was mainly used as one of the cost tools to validate proposal grassroots cost estimates. Program offices at JPL view PRICE H as an additional crosscheck to Team X (JPL Concurrent Engineering Design Center) estimates. PRICE H became widely accepted at JPL circa 2007, when the program offices moved away from grassroots cost estimation for Step 1 proposals. PRICE H is now one of the key cost tools used for cost validation, cost trades, and independent cost estimates.

  19. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañón, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  20. Siting and sizing of distributed generators based on improved simulated annealing particle swarm optimization.

    PubMed

    Su, Hongsheng

    2017-12-18

    Distributed power grids generally contain multiple diverse types of distributed generators (DGs). Traditional particle swarm optimization (PSO) and simulated annealing PSO (SA-PSO) algorithms have some deficiencies in the site selection and capacity determination of DGs, such as slow convergence speed and easily falling into local traps. In this paper, an improved SA-PSO (ISA-PSO) algorithm is proposed by introducing the crossover and mutation operators of the genetic algorithm (GA) into SA-PSO, so that the capabilities of the algorithm are well embodied in both global searching and local exploration. In addition, diverse types of DGs are made equivalent to four types of nodes in flow calculation by the backward or forward sweep method, and reactive power sharing principles and allocation theory are applied to determine initial reactive power values and execute subsequent corrections, thus providing the algorithm a better start to speed up convergence. Finally, a mathematical model of minimum economic cost is established for the siting and sizing of DGs under the location and capacity uncertainties of each single DG. Its objective function considers the investment and operation cost of DGs, grid loss cost, annual electricity purchase cost, and environmental pollution cost, and the constraints include power flow, bus voltage, conductor current, and DG capacity. Through applications to an IEEE 33-node distribution system, it is found that the proposed method can achieve desirable economic efficiency and safer voltage levels relative to traditional PSO and SA-PSO algorithms, and is a more effective planning method for the siting and sizing of DGs in distributed power grids.
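
    A compact sketch combining the three named ingredients (PSO velocity/position updates, a simulated-annealing acceptance test, and GA-style mutation) follows; the sphere function stands in for the paper's economic cost model, and all hyperparameters are illustrative.

```python
import math
import random

# Toy ISA-PSO-flavored optimizer. Not the paper's algorithm: the sphere
# function replaces the DG siting/sizing cost model, and the SA test is
# applied to personal bests as a simple illustration.

def cost(x):
    return sum(v * v for v in x)

dim, swarm, iters = 4, 20, 200
T, cooling = 1.0, 0.98
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
vel = [[0.0] * dim for _ in range(swarm)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)

for _ in range(iters):
    for i in range(swarm):
        for d in range(dim):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
            if random.random() < 0.02:                 # GA-style mutation
                pos[i][d] = random.uniform(-5, 5)
        dc = cost(pos[i]) - cost(pbest[i])
        if dc < 0 or random.random() < math.exp(-dc / T):  # SA acceptance
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)
    T *= cooling
print("best cost:", round(cost(gbest), 6))
```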

  1. Study of surface functionalization on IDE by using 3-aminopropyl triethoxysilane (APTES) for cervical cancer detection

    NASA Astrophysics Data System (ADS)

    Raqeema, S.; Hashim, U.; Azizah, N.

    2016-07-01

    This paper presented the study of surface functionalization on IDE by using 3-Aminopropyl triethoxysilane (APTES). The DNA nanochip based interdigitated (IDE) has been proposed to optimized the sensitivity of the device due to the cervical cancer detection. The DNA nanochip will be more efficient using surface modification of TiO2 nanoparticles with 3-Aminopropyl triethoxysilane (APTES). Furthermore, APTES gain the better functionalization of the adsorption mechanism on IDE. The combination of the DNA probe and the HPV target will produce more sensitivity and speed of the DNA nanochip due to their properties. The IDE has been characterized using current-voltage (IV) measurement. This functionalization of the surface would be applicable, sensitive, selective and low cost for cervical cancer detection.

  2. An Improved Graph Model for Conflict Resolution Based on Option Prioritization and Its Application

    PubMed Central

    Yin, Kedong; Li, Xuemei

    2017-01-01

    In order to quantitatively depict differences regarding the preferences of decision makers for different states, a score function is proposed. As a foundation, coalition motivation and real-coalition analysis are discussed when external circumstance or opportunity costs are considering. On the basis of a confidence-level function, we establish the score function using a “preference tree”. We not only measure the preference for each state, but we also build a collation improvement function to measure coalition motivation and to construct a coordinate system in which to analyze real-coalition stability. All of these developments enhance the applicability of the graph model for conflict resolution (GMCR). Finally, an improved GMCR is applied in the “Changzhou Conflict” to demonstrate how it can be conveniently utilized in practice. PMID:29077049

  3. An Improved Graph Model for Conflict Resolution Based on Option Prioritization and Its Application.

    PubMed

    Yin, Kedong; Yu, Li; Li, Xuemei

    2017-10-27

    In order to quantitatively depict differences regarding the preferences of decision makers for different states, a score function is proposed. As a foundation, coalition motivation and real-coalition analysis are discussed when external circumstance or opportunity costs are considering. On the basis of a confidence-level function, we establish the score function using a "preference tree". We not only measure the preference for each state, but we also build a collation improvement function to measure coalition motivation and to construct a coordinate system in which to analyze real-coalition stability. All of these developments enhance the applicability of the graph model for conflict resolution (GMCR). Finally, an improved GMCR is applied in the "Changzhou Conflict" to demonstrate how it can be conveniently utilized in practice.

  4. Approximately adaptive neural cooperative control for nonlinear multiagent systems with performance guarantee

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Yang, Tianyu; Staskevich, Gennady; Abbe, Brian

    2017-04-01

    This paper studies the cooperative control problem for a class of multiagent dynamical systems with partially unknown nonlinear system dynamics. In particular, the control objective is to solve the state consensus problem for multiagent systems based on the minimisation of certain cost functions for individual agents. Under the assumption that there exist admissible cooperative controls for such class of multiagent systems, the formulated problem is solved through finding the optimal cooperative control using the approximate dynamic programming and reinforcement learning approach. With the aid of neural network parameterisation and online adaptive learning, our method renders a practically implementable approximately adaptive neural cooperative control for multiagent systems. Specifically, based on the Bellman's principle of optimality, the Hamilton-Jacobi-Bellman (HJB) equation for multiagent systems is first derived. We then propose an approximately adaptive policy iteration algorithm for multiagent cooperative control based on neural network approximation of the value functions. The convergence of the proposed algorithm is rigorously proved using the contraction mapping method. The simulation results are included to validate the effectiveness of the proposed algorithm.

  5. Optimization in the design of a 12 gigahertz low cost ground receiving system for broadcast satellites. Volume 2: Antenna system and interference

    NASA Technical Reports Server (NTRS)

    Ohkubo, K.; Han, C. C.; Albernaz, J.; Janky, J. M.; Lusignan, B. B.

    1972-01-01

    The antenna characteristics are analyzed of a low cost mass-producible ground station to be used in broadcast satellite systems. It is found that a prime focus antenna is sufficient for a low-cost but not a low noise system. For the antenna feed waveguide systems are the best choice for the 12 GHz band, while printed-element systems are recommended for the 2.6 GHz band. Zoned reflectors are analyzed and appear to be attractive from the standpoint of cost. However, these reflectors suffer a gain reduction of about one db and a possible increase in sidelobe levels. The off-axis gain of a non-auto-tracking station can be optimized by establishing a special illumination function at the reflector aperture. A step-feed tracking system is proposed to provide automatic procedures for searching for peak signal from a geostationary satellite. This system uses integrated circuitry and therefore results in cost saving under mass production. It is estimated that a complete step-track system would cost only $512 for a production quantity of 1000 units per year.

  6. Simulation of value stream mapping and discrete optimization of energy consumption in modular construction

    NASA Astrophysics Data System (ADS)

    Chowdhury, Md Mukul

    With the increased practice of modularization and prefabrication, the construction industry gained the benefits of quality management, improved completion time, reduced site disruption and vehicular traffic, and improved overall safety and security. Whereas industrialized construction methods, such as modular and manufactured buildings, have evolved over decades, core techniques used in prefabrication plants vary only slightly from those employed in traditional site-built construction. With a focus on energy and cost efficient modular construction, this research presents the development of a simulation, measurement and optimization system for energy consumption in the manufacturing process of modular construction. The system is based on Lean Six Sigma principles and loosely coupled system operation to identify the non-value adding tasks and possible causes of low energy efficiency. The proposed system will also include visualization functions for demonstration of energy consumption in modular construction. The benefits of implementing this system include a reduction in the energy consumption in production cost, decrease of energy cost in the production of lean-modular construction, and increase profit. In addition, the visualization functions will provide detailed information about energy efficiency and operation flexibility in modular construction. A case study is presented to validate the reliability of the system.

  7. Application of Particle Swarm Optimization in Computer Aided Setup Planning

    NASA Astrophysics Data System (ADS)

    Kafashi, Sajad; Shakeri, Mohsen; Abedini, Vahid

    2011-01-01

    New researches are trying to integrate computer aided design (CAD) and computer aided manufacturing (CAM) environments. The role of process planning is to convert the design specification into manufacturing instructions. Setup planning has a basic role in computer aided process planning (CAPP) and significantly affects the overall cost and quality of machined part. This research focuses on the development for automatic generation of setups and finding the best setup plan in feasible condition. In order to computerize the setup planning process, three major steps are performed in the proposed system: a) Extraction of machining data of the part. b) Analyzing and generation of all possible setups c) Optimization to reach the best setup plan based on cost functions. Considering workshop resources such as machine tool, cutter and fixture, all feasible setups could be generated. Then the problem is adopted with technological constraints such as TAD (tool approach direction), tolerance relationship and feature precedence relationship to have a completely real and practical approach. The optimal setup plan is the result of applying the PSO (particle swarm optimization) algorithm into the system using cost functions. A real sample part is illustrated to demonstrate the performance and productivity of the system.

  8. 32 CFR 165.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... evaluation. That includes costs of any engineering change proposals initiated before the date of calculations... includes costs of any engineering change proposal started before the date of calculation of the NC... NONRECURRING COSTS (NCs) ON SALES OF U.S. ITEMS § 165.3 Definitions. The following definitions apply to this...

  9. 32 CFR 165.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... evaluation. That includes costs of any engineering change proposals initiated before the date of calculations... includes costs of any engineering change proposal started before the date of calculation of the NC... NONRECURRING COSTS (NCs) ON SALES OF U.S. ITEMS § 165.3 Definitions. The following definitions apply to this...

  10. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  11. Application of multi-objective optimization to pooled experiments of next generation sequencing for detection of rare mutations.

    PubMed

    Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario

    2014-01-01

    In this paper we propose some mathematical models to plan a Next Generation Sequencing experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage to decrease the overall costs. Finally, a multi-objective optimization formulation is proposed, where the trade-off between the probability to detect a mutation and overall costs is taken into account. The proposed solutions are devised in pursuance of the following advantages: (i) the solution guarantees mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show replicating pools can decrease overall experimental cost, thus making pooling an interesting option.

  12. Full cost accounting in the analysis of separated waste collection efficiency: A methodological proposal.

    PubMed

    D'Onza, Giuseppe; Greco, Giulio; Allegrini, Marco

    2016-02-01

    Recycling implies additional costs for separated municipal solid waste (MSW) collection. The aim of the present study is to propose and implement a management tool - the full cost accounting (FCA) method - to calculate the full collection costs of different types of waste. Our analysis aims for a better understanding of the difficulties of putting FCA into practice in the MSW sector. We propose a FCA methodology that uses standard cost and actual quantities to calculate the collection costs of separate and undifferentiated waste. Our methodology allows cost efficiency analysis and benchmarking, overcoming problems related to firm-specific accounting choices, earnings management policies and purchase policies. Our methodology allows benchmarking and variance analysis that can be used to identify the causes of off-standards performance and guide managers to deploy resources more efficiently. Our methodology can be implemented by companies lacking a sophisticated management accounting system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. A Complex Prime Numerical Representation of Amino Acids for Protein Function Comparison.

    PubMed

    Chen, Duo; Wang, Jiasong; Yan, Ming; Bao, Forrest Sheng

    2016-08-01

    Computationally assessing the functional similarity between proteins is an important task of bioinformatics research. It can help molecular biologists transfer knowledge on certain proteins to others and hence reduce the amount of tedious and costly benchwork. Representation of amino acids, the building blocks of proteins, plays an important role in achieving this goal. Compared with symbolic representation, representing amino acids numerically can expand our ability to analyze proteins, including comparing the functional similarity of them. Among the state-of-the-art methods, electro-ion interaction pseudopotential (EIIP) is widely adopted for the numerical representation of amino acids. However, it could suffer from degeneracy that two different amino acid sequences have the same numerical representation, due to the design of EIIP. In light of this challenge, we propose a complex prime numerical representation (CPNR) of amino acids, inspired by the similarity between a pattern among prime numbers and the number of codons of amino acids. To empirically assess the effectiveness of the proposed method, we compare CPNR against EIIP. Experimental results demonstrate that the proposed method CPNR always achieves better performance than EIIP. We also develop a framework to combine the advantages of CPNR and EIIP, which enables us to improve the performance and study the unique characteristics of different representations.

  14. Quantitative analysis of airway abnormalities in CT

    NASA Astrophysics Data System (ADS)

    Petersen, Jens; Lo, Pechin; Nielsen, Mads; Edula, Goutham; Ashraf, Haseem; Dirksen, Asger; de Bruijne, Marleen

    2010-03-01

    A coupled surface graph cut algorithm for airway wall segmentation from Computed Tomography (CT) images is presented. Using cost functions that highlight both inner and outer wall borders, the method combines the search for both borders into one graph cut. The proposed method is evaluated on 173 manually segmented images extracted from 15 different subjects and shown to give accurate results, with 37% less errors than the Full Width at Half Maximum (FWHM) algorithm and 62% less than a similar graph cut method without coupled surfaces. Common measures of airway wall thickness such as the Interior Area (IA) and Wall Area percentage (WA%) was measured by the proposed method on a total of 723 CT scans from a lung cancer screening study. These measures were significantly different for participants with Chronic Obstructive Pulmonary Disease (COPD) compared to asymptomatic participants. Furthermore, reproducibility was good as confirmed by repeat scans and the measures correlated well with the outcomes of pulmonary function tests, demonstrating the use of the algorithm as a COPD diagnostic tool. Additionally, a new measure of airway wall thickness is proposed, Normalized Wall Intensity Sum (NWIS). NWIS is shown to correlate better with lung function test values and to be more reproducible than previous measures IA, WA% and airway wall thickness at a lumen perimeter of 10 mm (PI10).

  15. A current-excited triple-time-voltage oversampling method for bio-impedance model for cost-efficient circuit system.

    PubMed

    Yan Hong; Yong Wang; Wang Ling Goh; Yuan Gao; Lei Yao

    2015-08-01

    This paper presents a mathematic method and a cost-efficient circuit to measure the value of each component of the bio-impedance model at electrode-electrolyte interface. The proposed current excited triple-time-voltage oversampling (TTVO) method deduces the component values by solving triple simultaneous electric equation (TSEE) at different time nodes during a current excitation, which are the voltage functions of time. The proposed triple simultaneous electric equations (TSEEs) allows random selections of the time nodes, hence numerous solutions can be obtained during a single current excitation. Following that, the oversampling approach is engaged by averaging all solutions of multiple TSEEs acquired after a single current excitation, which increases the practical measurement accuracy through the improvement of the signal-to-noise ratio (SNR). In addition, a print circuit board (PCB) that consists a switched current exciter and an analog-to-digital converter (ADC) is designed for signal acquisition. This presents a great cost reduction when compared against other instrument-based measurement data reported [1]. Through testing, the measured values of this work is proven to be in superb agreements on the true component values of the electrode-electrolyte interface model. This work is most suited and also useful for biological and biomedical applications, to perform tasks such as stimulations, recordings, impedance characterizations, etc.

  16. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms prove to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually the code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate one bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparative performance as the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.

  17. Spectral functions of strongly correlated extended systems via an exact quantum embedding

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Chan, Garnet Kin-Lic

    2015-04-01

    Density matrix embedding theory (DMET) [Phys. Rev. Lett. 109, 186404 (2012), 10.1103/PhysRevLett.109.186404], introduced an approach to quantum cluster embedding methods whereby the mapping of strongly correlated bulk problems to an impurity with finite set of bath states was rigorously formulated to exactly reproduce the entanglement of the ground state. The formalism provided similar physics to dynamical mean-field theory at a tiny fraction of the cost but was inherently limited by the construction of a bath designed to reproduce ground-state, static properties. Here, we generalize the concept of quantum embedding to dynamic properties and demonstrate accurate bulk spectral functions at similarly small computational cost. The proposed spectral DMET utilizes the Schmidt decomposition of a response vector, mapping the bulk dynamic correlation functions to that of a quantum impurity cluster coupled to a set of frequency-dependent bath states. The resultant spectral functions are obtained on the real-frequency axis, without bath discretization error, and allows for the construction of arbitrary dynamic correlation functions. We demonstrate the method on the one- (1D) and two-dimensional (2D) Hubbard model, where we obtain zero temperature and thermodynamic limit spectral functions, and show the trivial extension to two-particle Green's functions. This advance therefore extends the scope and applicability of DMET in condensed-matter problems as a computationally tractable route to correlated spectral functions of extended systems and provides a competitive alternative to dynamical mean-field theory for dynamic quantities.

  18. Galaxy Redshifts from Discrete Optimization of Correlation Functions

    NASA Astrophysics Data System (ADS)

    Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi

    2016-12-01

    We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.

  19. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper contributes to find the optimal PID controller parameters using particle swarm optimization (PSO), Genetic Algorithm (GA) and Simulated Annealing (SA) algorithm. The algorithms were developed through simulation of chemical process and electrical system and the PID controller is tuned. Here, two different fitness functions such as Integral Time Absolute Error and Time domain Specifications were chosen and applied on PSO, GA and SA while tuning the controller. The proposed Algorithms are implemented on two benchmark problems of coupled tank system and DC motor. Finally, comparative study has been done with different algorithms based on best cost, number of iterations and different objective functions. The closed loop process response for each set of tuned parameters is plotted for each system with each fitness function.

  20. 77 FR 40878 - Notice of Administrative Settlement Agreement for Recovery of Past Response Costs Pursuant to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-11

    ... recovery of past response costs (``Proposed Agreement'') associated with the Browning Lumber Company... Protection Agency, 1650 Arch Street, Philadelphia, PA 19103. Comments should reference the ``Browning Lumber... past response costs (``Proposed Agreement'') associated with the Browning Lumber Company Superfund Site...

  1. 48 CFR 552.270-13 - Proposals for Adjustment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operation to be performed); (3) Equipment costs; (4) Worker's compensation and public liability insurance... within the general scope of the lease, the Lessor shall submit, in a timely manner, an itemized cost proposal for the work to be accomplished or services to be performed when the cost exceeds $100,000. The...

  2. Intermodal transport and distribution patterns in ports relationship to hinterland

    NASA Astrophysics Data System (ADS)

    Dinu, O.; Dragu, V.; Ruscă, F.; Ilie, A.; Oprea, C.

    2017-08-01

    It is of great importance to examine all interactions between ports, terminals, intermodal transport and logistic actors of distribution channels, as their optimization can lead to operational improvement. Proposed paper starts with a brief overview of different goods types and allocation of their logistic costs, with emphasis on storage component. Present trend is to optimize storage costs by means of port storage area buffer function, by making the best use of free storage time available, most of the ports offer. As a research methodology, starting point is to consider the cost structure of a generic intermodal transport (storage, handling and transport costs) and to link this to intermodal distribution patterns most frequently cast-off in port relationship to hinterland. The next step is to evaluate storage costs impact on distribution pattern selection. For a given value of port free storage time, a corresponding value of total storage time in the distribution channel can be identified, in order to substantiate a distribution pattern shift. Different scenarios for transport and handling costs variation, recorded when distribution pattern shift, are integrated in order to establish the reaction of the actors involved in port related logistic and intermodal transport costs evolution is analysed in order to optimize distribution pattern selection.

  3. Medical costs of war in 2035: long-term care challenges for veterans of Iraq and Afghanistan.

    PubMed

    Geiling, James; Rosen, Joseph M; Edwards, Ryan D

    2012-11-01

    War-related medical costs for U.S. veterans of Iraq and Afghanistan may be enormous because of differences between these wars and previous conflicts: (1) Many veterans survive injuries that would have killed them in past wars, and (2) improvised explosive device attacks have caused "polytraumatic" injuries (multiple amputations; brain injury; severe facial trauma or blindness) that require decades of costly rehabilitation. In 2035, today's veterans will be middle-aged, with health issues like those seen in aging Vietnam veterans, complicated by comorbidities of posttraumatic stress disorder, traumatic brain injury, and polytrauma. This article cites emerging knowledge about best practices that have demonstrated cost-effectiveness in mitigating the medical costs of war. We propose that clinicians employ early interventions (trauma care, physical therapy, early post-traumatic stress disorder diagnosis) and preventive health programs (smoking cessation, alcohol-abuse counseling, weight control, stress reduction) to treat primary medical conditions now so that we can avoid treating costly secondary and tertiary complications in 2035. (We should help an amputee reduce his cholesterol and maintain his weight at age 30, rather than treating his heart disease or diabetes at age 50.) Appropriate early interventions for primary illness should preserve veterans' functional status, ensure quality clinical care, and reduce the potentially enormous cost burden of their future health care.

  4. Life cycle cost modeling of conceptual space vehicles

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles

    1993-01-01

    This paper documents progress to date by the University of Dayton on the development of a life cycle cost model for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years in which a reliability and maintainability model was developed to the initial development of a life cycle cost model. Cost categories are initially patterned after NASA's three axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. The focus will be on operations and maintenance costs and other recurring costs. Secondary tasks performed concurrent with the development of the life cycle costing model include continual support and upgrade of the R&M model. The primary result of the completed research will be a methodology and a computer implementation of the methodology to provide for timely cost analysis in support of the conceptual design activities. The major objectives of this research are: to obtain and to develop improved methods for estimating manpower, spares, software and hardware costs, facilities costs, and other cost categories as identified by NASA personnel; to construct a life cycle cost model of a space transportation system for budget exercises and performance-cost trade-off analysis during the conceptual and development stages; to continue to support modifications and enhancements to the R&M model; and to continue to assist in the development of a simulation model to provide an integrated view of the operations and support of the proposed system.

  5. The evolution of human phenotypic plasticity: age and nutritional status at maturity.

    PubMed

    Gage, Timothy B

    2003-08-01

    Several evolutionary optimal models of human plasticity in age and nutritional status at reproductive maturation are proposed and their dynamics examined. These models differ from previously published models because fertility is not assumed to be a function of body size or nutritional status. Further, the models are based on explicitly human demographic patterns, that is, model human life-tables, model human fertility tables, and, a nutrient flow-based model of maternal nutritional status. Infant survival (instead of fertility as in previous models) is assumed to be a function of maternal nutritional status. Two basic models are examined. In the first the cost of reproduction is assumed to be a constant proportion of total nutrient flow. In the second the cost of reproduction is constant for each birth. The constant proportion model predicts a negative slope of age and nutritional status at maturation. The constant cost per birth model predicts a positive slope of age and nutritional status at maturation. Either model can account for the secular decline in menarche observed over the last several centuries in Europe. A search of the growth literature failed to find definitive empirical documentation of human phenotypic plasticity in age and nutritional status at maturation. Most research strategies confound genetics with phenotypic plasticity. The one study that reports secular trends suggests a marginally insignificant, but positive slope. This view tends to support the constant cost per birth model.

  6. Model application for rapid detection of the exact location when calling an ambulance using OGC Open GeoSMS Standards

    NASA Astrophysics Data System (ADS)

    Sukic, Enes; Stoimenov, Leonid

    2016-02-01

    The web has penetrated just about every sphere of human interest and using information from the web has become ubiquitous among different categories of users. Medicine has long being using the benefits of modern technologies and without them it cannot function. This paper offers a proposal of use and mutual collaboration of several modern technologies within facilitating the location and communication between persons in need of emergency medical assistance and the emergency head offices, i.e., the ambulance. The main advantage of the proposed model is the technical possibility of implementation and use of these technologies in developing countries and low implementation cost.

  7. ONCHIT security in distributed environments: a proposed model for implantable devices.

    PubMed

    Lorence, Daniel; Lee, James; Richards, Michael

    2010-08-01

    Recent ONCHIT mandates call for increased individual health data collection efforts as well as heightened security measures. To date most healthcare organizations have been reluctant to exchange information, citing confidentiality concerns and unshared costs incurred by specific organizations. Implantable monitoring and treatment devices are rapidly emerging as data collection interface tools in response to such mandates. Proposed here is a translational, device-independent consumer-based solution, which focuses on information controlled by specific patients, and functions within a distributed (organization neutral) environment. While the conceptual applications employed in this technology set are provided by way of illustration, they may also serve as a transformative model for emerging EMR/EHR requirements.

  8. [How do we heal the Argentine health care system?].

    PubMed

    Tobar, Federico

    2002-04-01

    This article proposes a set of measures to reform the Argentine health care system and turn the country's current crisis into an opportunity for progressive, sustainable change. The proposal consists of a model for the intergovernmental division of health responsibilities. The national government would be responsible for strengthening its leadership role and for developing national insurance for low-prevalence high-cost diseases. With the provincial governments, the insurance role would be strengthened, with public health insurance making certain that there is universal coverage. Public hospitals would function as autonomous entities financed by social insurance, private insurance, and provincial public insurance. Municipalities would have an active role in disease prevention and health promotion, principally through primary care.

  9. Bio-inspired secure data mules for medical sensor network

    NASA Astrophysics Data System (ADS)

    Muraleedharan, Rajani; Gao, Weihua; Osadciw, Lisa A.

    2010-04-01

    Medical sensor network consist of heterogeneous nodes, wireless, mobile and wired with varied functionality. The resources at each sensor require to be exploited minimally while sensitive information is sensed and communicated to its access points using secure data mules. In this paper, we analyze the flat architecture, where different functionality and priority information require varied resources forms a non-deterministic polynomial-time hard problem. Hence, a bio-inspired data mule that helps to obtain dynamic multi-objective solution with minimal resource and secure path is applied. The performance of the proposed approach is based on reduced latency, data delivery rate and resource cost.

  10. Optimization model of vaccination strategy for dengue transmission

    NASA Astrophysics Data System (ADS)

    Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.

    2014-02-01

    Dengue fever is emerging tropical and subtropical disease caused by dengue virus infection. The vaccination should be done as a prevention of epidemic in population. The host-vector model are modified with consider a vaccination factor to prevent the occurrence of epidemic dengue in a population. An optimal vaccination strategy using non-linear objective function was proposed. The genetic algorithm programming techniques are combined with fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, the appropriate vaccination strategy by using the optimal minimum cost function which can reduce the number of epidemic was analyzed. The numerical simulation for some specific cases of vaccination strategy is shown.

  11. Research Costs Investigated: A Study Into the Budgets of Dutch Publicly Funded Drug-Related Research.

    PubMed

    van Asselt, Thea; Ramaekers, Bram; Corro Ramos, Isaac; Joore, Manuela; Al, Maiwenn; Lesman-Leegte, Ivonne; Postma, Maarten; Vemer, Pepijn; Feenstra, Talitha

    2018-01-01

    The costs of performing research are an important input in value of information (VOI) analyses but are difficult to assess. The aim of this study was to investigate the costs of research, serving two purposes: (1) estimating research costs for use in VOI analyses; and (2) developing a costing tool to support reviewers of grant proposals in assessing whether the proposed budget is realistic. For granted study proposals from the Netherlands Organization for Health Research and Development (ZonMw), type of study, potential cost drivers, proposed budget, and general characteristics were extracted. Regression analysis was conducted in an attempt to generate a 'predicted budget' for certain combinations of cost drivers, for implementation in the costing tool. Of 133 drug-related research grant proposals, 74 were included for complete data extraction. Because an association between cost drivers and budgets was not confirmed, we could not generate a predicted budget based on regression analysis, but only historic reference budgets given certain study characteristics. The costing tool was designed accordingly, i.e. with given selection criteria the tool returns the range of budgets in comparable studies. This range can be used in VOI analysis to estimate whether the expected net benefit of sampling will be positive to decide upon the net value of future research. The absence of association between study characteristics and budgets may indicate inconsistencies in the budgeting or granting process. Nonetheless, the tool generates useful information on historical budgets, and the option to formally relate VOI to budgets. To our knowledge, this is the first attempt at creating such a tool, which can be complemented with new studies being granted, enlarging the underlying database and keeping estimates up to date.

  12. Community Microgrid Scheduling Considering Network Operational Constraints and Building Thermal Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu

    Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF)more » model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model and significant saving in electricity cost could be achieved with network operational constraints satisfied.« less

  13. Community Microgrid Scheduling Considering Network Operational Constraints and Building Thermal Dynamics

    DOE PAGES

    Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu; ...

    2017-10-10

    Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF)more » model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model and significant saving in electricity cost could be achieved with network operational constraints satisfied.« less

  14. MFP scanner diagnostics using a self-printed target to measure the modulation transfer function

    NASA Astrophysics Data System (ADS)

    Wang, Weibao; Bauer, Peter; Wagner, Jerry; Allebach, Jan P.

    2014-01-01

    In the current market, reduction of warranty costs is an important avenue for improving profitability by manufacturers of printer products. Our goal is to develop an autonomous capability for diagnosis of printer and scanner caused defects with mid-range laser multifunction printers (MFPs), so as to reduce warranty costs. If the scanner unit of the MFP is not performing according to specification, this issue needs to be diagnosed. If there is a print quality issue, this can be diagnosed by printing a special test page that is resident in the firmware of the MFP unit, and then scanning it. However, the reliability of this process will be compromised if the scanner unit is defective. Thus, for both scanner and printer image quality issues, it is important to be able to properly evaluate the scanner performance. In this paper, we consider evaluation of the scanner performance by measuring its modulation transfer function (MTF). The MTF is a fundamental tool for assessing the performance of imaging systems. Several ways have been proposed to measure the MTF, all of which require a special target, for example a slanted-edge target. It is unacceptably expensive to ship every MFP with such a standard target, and to expect that the customer can keep track of it. To reduce this cost, in this paper, we develop new approach to this task. It is based on a self-printed slanted-edge target. Then, we propose algorithms to improve the results using a self-printed slanted-edge target. Finally, we present experimental results for MTF measurement using self-printed targets and compare them to the results obtained with standard targets.

  15. Methods for cost estimation in software project management

    NASA Astrophysics Data System (ADS)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed in which the processes used in software development field have changed makes it very difficult the task of forecasting the overall costs for a software project. By many researchers, this task has been considered unachievable, but there is a group of scientist for which this task can be solved using the already known mathematical methods (e.g. multiple linear regressions) and the new techniques as genetic programming and neural networks. The paper presents a solution for building a model for the cost estimation models in the software project management using genetic algorithms starting from the PROMISE datasets related COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating the overall project costs is presented together with the description of the existing software development process models. In the last part, a basic proposal of a mathematical model of a genetic programming is proposed including here the description of the chosen fitness function and chromosome representation. The perspective of model described it linked with the current reality of the software development considering as basis the software product life cycle and the current challenges and innovations in the software development area. Based on the author's experiences and the analysis of the existing models and product lifecycle it was concluded that estimation models should be adapted with the new technologies and emerging systems and they depend largely by the chosen software development method.

  16. Optimal Fault-Tolerant Control for Discrete-Time Nonlinear Strict-Feedback Systems Based on Adaptive Critic Design.

    PubMed

    Wang, Zhanshan; Liu, Lei; Wu, Yanming; Zhang, Huaguang

    2018-06-01

    This paper investigates the problem of optimal fault-tolerant control (FTC) for a class of unknown nonlinear discrete-time systems with actuator fault in the framework of adaptive critic design (ACD). A pivotal highlight is the adaptive auxiliary signal of the actuator fault, which is designed to offset the effect of the fault. The considered systems are in strict-feedback forms and involve unknown nonlinear functions, which will result in the causal problem. To solve this problem, the original nonlinear systems are transformed into a novel system by employing the diffeomorphism theory. Besides, the action neural networks (ANNs) are utilized to approximate a predefined unknown function in the backstepping design procedure. Combined the strategic utility function and the ACD technique, a reinforcement learning algorithm is proposed to set up an optimal FTC, in which the critic neural networks (CNNs) provide an approximate structure of the cost function. In this case, it not only guarantees the stability of the systems, but also achieves the optimal control performance as well. In the end, two simulation examples are used to show the effectiveness of the proposed optimal FTC strategy.

  17. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between conflicting objective functions which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of change in the grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed of grinding parameters on each value of the objective function.

  18. Function assertive community treatment (FACT) and psychiatric service use in patients diagnosed with severe mental illness.

    PubMed

    Drukker, M; van Os, J; Sytema, S; Driessen, G; Visser, E; Delespaul, P

    2011-09-01

    Previous work suggests that the Dutch variant of assertive community treatment (ACT), known as Function ACT (FACT), may be effective in increasing symptomatic remission rates when replacing a system of hospital-based care and separate community-based facilities. FACT guidelines propose a different pattern of psychiatric service consumption compared to traditional services, which should result in different costing parameters than care as usual (CAU). South-Limburg FACT patients, identified through the local psychiatric case register, were matched with patients from a non-FACT control region in the North of the Netherlands (NN). Matching was accomplished using propensity scoring including, among others, total and outpatient care consumption. Assessment, as an important ingredient of FACT, was the point of departure of the present analysis. FACT patients, compared to CAU, had five more outpatient contacts after the index date. Cost-effectiveness was difficult to assess. Implementation of FACT results in measurable changes in mental health care use.

  19. Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams

    NASA Astrophysics Data System (ADS)

    Willow, Soohaeng Yoo; Hirata, So

    2014-01-01

    A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 106 Monte Carlo steps.

  20. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  1. Multi-Party Privacy-Preserving Set Intersection with Quasi-Linear Complexity

    NASA Astrophysics Data System (ADS)

    Cheon, Jung Hee; Jarecki, Stanislaw; Seo, Jae Hong

    Secure computation of the set intersection functionality allows n parties to find the intersection between their datasets without revealing anything else about them. An efficient protocol for such a task could have multiple potential applications in commerce, health care, and security. However, all currently known secure set intersection protocols for n>2 parties have computational costs that are quadratic in the (maximum) number of entries in the dataset contributed by each party, making secure computation of the set intersection only practical for small datasets. In this paper, we describe the first multi-party protocol for securely computing the set intersection functionality with both the communication and the computation costs that are quasi-linear in the size of the datasets. For a fixed security parameter, our protocols require O(n2k) bits of communication and Õ(n2k) group multiplications per player in the malicious adversary setting, where k is the size of each dataset. Our protocol follows the basic idea of the protocol proposed by Kissner and Song, but we gain efficiency by using different representations of the polynomials associated with users' datasets and careful employment of algorithms that interpolate or evaluate polynomials on multiple points more efficiently. Moreover, the proposed protocol is robust. This means that the protocol outputs the desired result even if some corrupted players leave during the execution of the protocol.

  2. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Fields (MRF) model, while efficient linear programming is employed for reaching the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding the computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first one converging in some minutes and the second in some seconds. Regarding the registration accuracy the proposed MRF-based method significantly outperformed the descriptor-based one in all the performing experiments.

  3. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    NASA Astrophysics Data System (ADS)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring of images captured in the dynamic deformation of materials. De-blurring is achieved based on the dynamic-based approach, which is used to estimate the Point Spread Function (PSF) during the camera exposure window. The deconvolution process involving iterative matrix calculations of pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, it has the best result of the image restoration. The proposed method has been evaluated by using the Hopkinson bar loading system. In comparison to the blurry image, the proposed method has successfully restored the image. It is also demonstrated from image processing applications that the de-blurring method can improve the accuracy and the stability of the digital imaging correlation measurement.

  4. Optimal slew path planning for the Sino-French Space-based multiband astronomical Variable Objects Monitor mission

    NASA Astrophysics Data System (ADS)

    She, Yuchen; Li, Shuang

    2018-01-01

    The planning algorithm to calculate a satellite's optimal slew trajectory with a given keep-out constraint is proposed. An energy-optimal formulation is proposed for the Space-based multiband astronomical Variable Objects Monitor Mission Analysis and Planning (MAP) system. The innovative point of the proposed planning algorithm lies in that the satellite structure and control limitation are not considered as optimization constraints but are formulated into the cost function. This modification is able to relieve the burden of the optimizer and increases the optimization efficiency, which is the major challenge for designing the MAP system. Mathematical analysis is given to prove that there is a proportional mapping between the formulation and the satellite controller output. Simulations with different scenarios are given to demonstrate the efficiency of the developed algorithm.

  5. A hierarchical two-phase framework for selecting genes in cancer datasets with a neuro-fuzzy system.

    PubMed

    Lim, Jongwoo; Wang, Bohyun; Lim, Joon S

    2016-04-29

    Finding the minimum number of appropriate biomarkers for a specific target such as lung cancer has been a challenging issue in bioinformatics. We propose a hierarchical two-phase framework for selecting appropriate biomarkers: it first extracts candidate biomarkers from cancer microarray datasets and then selects the minimum number of appropriate biomarkers from the extracted candidates with a specific neuro-fuzzy algorithm, a neural network with weighted fuzzy membership functions (NEWFM). In the first phase, the framework extracts candidate biomarkers using the Bhattacharyya distance, which measures the similarity of two discrete probability distributions. As a result, the proposed framework reduces the cost of finding biomarkers, without requiring additional medical supplements, and improves the accuracy of the biomarkers on specific cancer target datasets.
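
    The Bhattacharyya distance named in the abstract is shown below for two discrete probability distributions (e.g., histograms of one gene's expression in tumor vs. normal samples); ranking genes by this score and keeping the top ones yields the candidate set passed on to NEWFM. The per-gene histogram names in the comment are hypothetical.

      import numpy as np

      def bhattacharyya_distance(p, q, eps=1e-12):
          p = np.asarray(p, float) / np.sum(p)   # normalize to distributions
          q = np.asarray(q, float) / np.sum(q)
          bc = np.sum(np.sqrt(p * q))            # Bhattacharyya coefficient in [0, 1]
          return -np.log(max(bc, eps))           # 0 iff p == q; larger = more separable

      # e.g.: scores = {g: bhattacharyya_distance(hist_tumor[g], hist_normal[g])
      #                 for g in genes}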

  6. Dynamic optimization approach for integrated supplier selection and tracking control of single product inventory system with product discount

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Heru Tjahjana, R.

    2017-01-01

    In this paper, we propose a mathematical model in the form of a dynamic/multi-stage optimization to solve an integrated supplier selection and tracking control problem for a single-product inventory system with product discounts. The product discount is stated as a piecewise-linear function. We use dynamic programming to solve the proposed optimization, determining the optimal supplier and the optimal product volume to purchase from that supplier in each time period so that the inventory level tracks a reference trajectory given by the decision maker at minimal total cost. A numerical experiment evaluates the proposed model: the optimal supplier is determined for each time period, and the inventory level follows the given reference well.
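
    A compact dynamic-programming sketch of the integrated problem follows: at each period choose a supplier and an order volume (with a piecewise-linear discount) so that inventory tracks the reference at minimal total cost. Demand, prices, break points, and weights are illustrative placeholders, not the paper's data.

      from functools import lru_cache

      suppliers = {"A": (10.0, 50, 8.5),   # (unit price, discount break, discounted price)
                   "B": (9.5, 80, 8.0)}
      demand = [40, 60, 55, 70]            # known demand per period
      reference = [30, 30, 30, 30]         # desired inventory trajectory
      TRACK_W, MAX_INV, MAX_ORDER = 2.0, 120, 100

      def purchase_cost(s, q):             # piecewise-linear discounted cost
          p1, brk, p2 = suppliers[s]
          return p1 * q if q <= brk else p1 * brk + p2 * (q - brk)

      @lru_cache(maxsize=None)
      def best(t, inv):
          if t == len(demand):
              return 0.0, ()
          options = []
          for s in suppliers:
              for q in range(0, MAX_ORDER + 1, 5):       # coarse volume grid
                  nxt = inv + q - demand[t]
                  if 0 <= nxt <= MAX_INV:
                      stage = purchase_cost(s, q) + TRACK_W * (nxt - reference[t]) ** 2
                      tail, plan = best(t + 1, nxt)
                      options.append((stage + tail, ((s, q),) + plan))
          return min(options)              # min on tuples compares cost first

      print(best(0, 30))                   # total cost and (supplier, volume) per period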

  7. Improvement of a Pneumatic Control Valve with Self-Holding Function

    NASA Astrophysics Data System (ADS)

    Dohta, Shujiro; Akagi, Tetsuya; Kobayashi, Wataru; Shimooka, So; Masago, Yusuke

    2017-10-01

    The purpose of this study is to develop a small-sized, lightweight, low-cost control valve with low energy consumption and to apply it to assistive systems. We have developed several control valves: a tiny on/off valve using a vibration motor, and an on/off valve with a self-holding function. We have also proposed and tested a digital servo valve with a self-holding function using permanent magnets and a small-sized servo motor. In this paper, in order to improve the valve, an analytical model of the digital servo valve is proposed, and results simulated with the analytical model and identified parameters are compared with experimental results. The improved digital servo valve was then designed based on the calculated results and tested. As a result, we realized a digital servo valve that can control the flow rate more precisely while maintaining the volume and weight of the previous valve. As an application of the improved valve, a position control system for a rubber artificial muscle was built, and position control was performed successfully.

  8. Linearized self-consistent GW approach satisfying the Ward identity

    NASA Astrophysics Data System (ADS)

    Kuwahara, Riichi; Ohno, Kaoru

    2014-09-01

    We propose a linearized self-consistent GW approach satisfying the Ward identity. The vertex function derived from the Ward-Takahashi identity in the limit q=0 and ω−ω′=0 is included in the self-energy and the polarization function as a consequence of the linearization of the quasiparticle equation. Due to the energy dependence of the self-energy, the Hamiltonian is a non-Hermitian operator and the quasiparticle states are nonorthonormal and linearly dependent. However, the linearized quasiparticle states recover orthonormality and fulfill the completeness condition. This approach is very efficient, and the resulting quasiparticle energies are greatly improved compared to the nonlinearized self-consistent GW approach, although the computational cost is not much increased. We show results for atoms and dimers of Li and Na compared with other approaches. We also propose convenient ways to calculate the Luttinger-Ward functional Φ based on a plasmon-pole model and to calculate the total energy for the ground state. We conclude that the linearization improves the overall behavior of the self-consistent GW approach.
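
    In generic notation (not necessarily the authors' exact formulation), the linearization referred to here is the standard first-order expansion of the quasiparticle equation, whose renormalization factor Z is exactly the inverse of the ω-limit Ward vertex:

      % linearized quasiparticle equation and Ward-identity vertex (generic form)
      \begin{align}
        E_i^{\mathrm{QP}} &\simeq \varepsilon_i
          + Z_i\,\mathrm{Re}\bigl[\Sigma(\varepsilon_i) - v_{\mathrm{xc}}\bigr]_{ii},
        \qquad
        Z_i = \Bigl[1 - \partial_\omega\,\mathrm{Re}\,\Sigma(\omega)\big|_{\omega=\varepsilon_i}\Bigr]^{-1},
        \\
        \Lambda(q \to 0,\; \omega-\omega' \to 0) &= 1 - \partial_\omega \Sigma = Z^{-1}.
      \end{align}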

  9. Economic Analysis of a Postulated Space Tourism Transportation System

    NASA Astrophysics Data System (ADS)

    Hill, Allan S.

    2002-01-01

    Design concepts and associated costs were defined for a family of launch vehicles supporting a space tourism endeavor requiring the weekly transport of space tourists to and from an Earth-orbiting facility. The stated business goal for the Space Tourist Transportation System (STTS) element of the proposed commercial space venture was to transport and return ~50 passengers a week to LEO at a cost of roughly $50K per seat commencing in 2005. This paper summarizes the economic analyses conducted within a broader systems engineering study of the postulated concept. Parametric costs were derived using TransCostSystems' (TCS) Cost Engineering Handbook, version 7, and developed as a function of critical system characteristics and selected business scenarios. Various economic strategies directed toward achieving a cost of ~$50K per seat were identified and examined. The study indicated that under a `nominal' business scenario, the initial cost of developing and producing a fully reusable, 2-stage STTS element for a baseline of 46 passengers was about $15.5B, assuming a plausible `commercialization factor' of 0.333; the associated per-seat ticket cost was ~$890K, more than an order of magnitude higher than desired. If the system is enlarged to 104 passengers for better efficiency, the STTS initial cost under the nominal business scenario increases to about $19.8B while the per-seat ticket cost falls to ~$530K. It was concluded that achieving the desired ticket cost of $50K per seat is not feasible unless the size of the STTS, and therefore of the entire system, is substantially increased; for the specified operational characteristics, a system capacity of thousands of passengers per week would be required. This implies an extremely high total system development cost, which is not realistic as a commercial venture, especially in the proposed time frame. These results suggest that ambitious commercial space ventures may have to rely on sizeable government subsidies for economic viability: in this study, a hypothesized government subsidy of half the STTS development cost reduced the per-seat ticket cost by about 35%. A number of other business scenarios were also investigated, including `expensing' the entire program initial cost. These analyses showed that even greater government participation, additional aggressive business strategies, and/or very low commercialization factors (in the range of 1/9 to 1/30) must be implemented to achieve the desired per-seat cost of $50K per passenger with reasonably sized vehicles.
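
    The subsidy arithmetic can be reproduced in back-of-the-envelope form. All numbers below are hypothetical except the ~$890K baseline ticket: if development amortization makes up a share f of the ticket price, subsidizing half of development cuts the price by f/2, so the reported ~35% reduction implies f of roughly 0.7.

      dev_share = 0.70     # hypothetical share of the ticket that repays development
      subsidy = 0.50       # government pays half the STTS development cost
      ticket = 890e3       # ~$890K baseline per-seat cost from the study
      reduced = ticket * (1 - dev_share * subsidy)
      print(f"subsidized ticket ~ ${reduced / 1e3:.0f}K "
            f"({dev_share * subsidy:.0%} lower)")   # ~$579K, about 35% lower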

  10. An economic order quantity model with nonlinear holding cost, partial backlogging and ramp-type demand

    NASA Astrophysics Data System (ADS)

    San-José, Luis A.; Sicilia, Joaquín; González-de-la-Rosa, Manuel; Febles-Acosta, Jaime

    2018-07-01

    In this article, a deterministic inventory model with ramp-type demand depending on price and time is developed. The cumulative holding cost is assumed to be a nonlinear function of time. Shortages are allowed and are partially backlogged; thus, the fraction of backlogged demand depends on the waiting time and on the stock-out period. The aim is to maximize the total profit per unit time. To do this, a procedure that determines the economic lot size, the optimal inventory cycle and the maximum profit is presented. The inventory system studied here extends diverse inventory models proposed in the literature. Finally, some numerical examples are provided to illustrate the theoretical results.
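
    A numeric sketch of the solution procedure is given below: maximize profit per unit time over the in-stock duration t1 and the cycle length T. The functional forms used here (power-of-time holding cost, exponential partial backlogging, constant demand rate) are illustrative stand-ins, not the article's ramp-type, price-dependent model.

      import numpy as np
      from scipy.optimize import minimize

      d, p, c, K = 100.0, 12.0, 7.0, 150.0   # demand rate, price, unit cost, order cost
      h0, beta = 0.4, 1.3                    # holding cost rate grows like t**beta
      pi_b, delta = 1.0, 0.8                 # backorder cost, backlogging decay rate

      def profit_rate(x):
          t1, T = x
          if not (0.01 < t1 < T <= 10.0):
              return -1e9
          t = np.linspace(0.0, t1, 400); dt = t[1] - t[0]
          holding = np.sum(h0 * t ** beta * d * (t1 - t)) * dt      # nonlinear-in-time holding
          ts = np.linspace(t1, T, 400); dts = ts[1] - ts[0]
          backlogged = np.sum(d * np.exp(-delta * (T - ts))) * dts  # partially backlogged demand
          sold = d * t1 + backlogged
          return (p * sold - (K + c * sold + holding + pi_b * backlogged)) / T

      res = minimize(lambda x: -profit_rate(x), x0=[1.0, 2.0], method="Nelder-Mead")
      print(res.x, profit_rate(res.x))       # optimal (t1, T) and max profit per unit time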

  11. Path integral learning of multidimensional movement trajectories

    NASA Astrophysics Data System (ADS)

    André, João; Santos, Cristina; Costa, Lino

    2013-10-01

    This paper explores the use of Path Integral methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm, for learning parametrized multidimensional movement policies. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods to learn optimal policy parameters according to different cost functions that inherently encode movement objectives. Additionally, we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and final iterative cost, which motivates its application to more complex robotic problems.
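
    The PIBB update at the heart of these variants is compact enough to sketch in full: perturb the policy parameters, roll out, and average the perturbations with exponentiated-cost weights. PI2-CMA and PIBB-CMA additionally adapt the exploration covariance from the same weights, which is omitted here; the rollout cost in the comment is a hypothetical user-supplied function.

      import numpy as np

      def pibb(cost_fn, theta0, sigma=0.1, K=20, iters=100, h=10.0, seed=0):
          rng = np.random.default_rng(seed)
          theta = np.array(theta0, dtype=float)
          for _ in range(iters):
              eps = rng.normal(0.0, sigma, size=(K, theta.size))  # exploration noise
              S = np.array([cost_fn(theta + e) for e in eps])     # one rollout per sample
              w = np.exp(-h * (S - S.min()) / max(S.max() - S.min(), 1e-12))
              theta = theta + (w / w.sum()) @ eps                 # cost-weighted averaging
          return theta

      # e.g., learning DMP basis weights against a rollout cost:
      # theta_opt = pibb(lambda th: rollout_cost(th), np.zeros(20))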

  12. Recent proposals to limit Medigap coverage and modify Medicare cost sharing.

    PubMed

    Linehan, Kathryn

    2012-02-24

    As policymakers look for savings from the Medicare program, some have proposed eliminating or discouraging "first-dollar coverage" available through privately purchased Medigap policies. Medigap coverage, which beneficiaries obtain to protect themselves from Medicare's cost-sharing requirements and its lack of a cap on out-of-pocket spending, may discourage the judicious use of medical services by reducing or eliminating beneficiary cost sharing. It is estimated that eliminating such coverage, which has been shown to be associated with higher Medicare spending, and requiring some cost sharing would encourage beneficiaries to reduce their service use and thus reduce program spending. However, eliminating first-dollar coverage could cause some beneficiaries to incur higher spending or forego necessary services. Some policy proposals to eliminate first-dollar coverage would also modify Medicare's cost sharing and add an out-of-pocket spending cap for fee-for-service Medicare. This paper discusses Medicare's current cost-sharing requirements, Medigap insurance, and proposals to modify Medicare's cost sharing and eliminate first-dollar coverage in Medigap plans. It reviews the evidence on the effects of first-dollar coverage on spending, some objections to eliminating first-dollar coverage, and results of research that has modeled the impact of eliminating first-dollar coverage, modifying Medicare's cost-sharing requirements, and adding an out-of-pocket limit on beneficiaries' spending.

  13. Cost function approach for estimating derived demand for composite wood products

    Treesearch

    T. C. Marcin

    1991-01-01

    A cost function approach using the duality between production and input factor demands was examined. A translog cost function was used to represent residential construction costs and to derive conditional factor demand equations. Alternative models were obtained from the translog cost function by imposing parameter restrictions.
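
    For reference, the translog specification is the second-order log-linear form below; by Shephard's lemma the conditional factor demands appear as cost shares. The "parameter restrictions" are typically the symmetry and homogeneity conditions (output-price interaction terms are omitted here for brevity).

      \ln C = \alpha_0 + \alpha_Y \ln Y + \sum_i \alpha_i \ln p_i
              + \tfrac{1}{2} \sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j ,
      \qquad
      s_i = \frac{\partial \ln C}{\partial \ln p_i}
          = \alpha_i + \sum_j \gamma_{ij} \ln p_j ,

    with \gamma_{ij} = \gamma_{ji} (symmetry) and \sum_i \alpha_i = 1, \sum_j \gamma_{ij} = 0 (linear homogeneity in input prices).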

  14. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

    Conventional engineering optimization under uncertainty is based on probabilistic models. However, a probabilistic model may be unavailable when there is not enough objective information to construct precise probability distributions for the uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed from expert opinions. The objective of robust design is to optimize the expectation and the variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and the expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and the objective function of PBRDO are triple-loop and double-loop nested problems, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby transforming the optimization problem into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.
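
    The surrogate idea behind the CFM/CIM steps can be sketched briefly: fit a low-order Chebyshev expansion to the expensive response over the interval of an uncertain parameter, then take response bounds from the cheap surrogate instead of the full structural-acoustic model. The response function and orders below are illustrative.

      import numpy as np

      def cheb_surrogate(f, lo, hi, degree=6):
          k = np.arange(degree + 1)
          nodes = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))  # Chebyshev nodes in [-1, 1]
          x = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)             # mapped to [lo, hi]
          coefs = np.polynomial.chebyshev.chebfit(nodes, [f(v) for v in x], degree)
          return lambda v: np.polynomial.chebyshev.chebval(
              (2.0 * v - (lo + hi)) / (hi - lo), coefs)

      f = lambda x: np.sin(3.0 * x) + 0.3 * x ** 2   # stand-in expensive response
      g = cheb_surrogate(f, 0.8, 1.2)
      grid = np.linspace(0.8, 1.2, 1001)
      print(g(grid).min(), g(grid).max())            # interval bounds of the response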

  15. 48 CFR 2052.215-75 - Proposal presentation and format.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Package/Offer. Two (2) original signed copies of this solicitation package/offer. All applicable sections... exception to submitting cost or pricing data shall be made in accordance with FAR 52.215-20(a). (iii) If the... data, the offeror's cost proposal shall conform with the requirements of FAR 52.215-20(b). Cost...

  16. 48 CFR 2052.215-75 - Proposal presentation and format.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Package/Offer. Two (2) original signed copies of this solicitation package/offer. All applicable sections... exception to submitting cost or pricing data shall be made in accordance with FAR 52.215-20(a). (iii) If the... data, the offeror's cost proposal shall conform with the requirements of FAR 52.215-20(b). Cost...

  17. 48 CFR 2052.215-75 - Proposal presentation and format.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Package/Offer. Two (2) original signed copies of this solicitation package/offer. All applicable sections... exception to submitting cost or pricing data shall be made in accordance with FAR 52.215-20(a). (iii) If the... data, the offeror's cost proposal shall conform with the requirements of FAR 52.215-20(b). Cost...

  18. 7 CFR 1.673 - How will the Forest Service analyze a proposed alternative and formulate its modified condition?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... evidence on the implementation costs or operational impacts for electricity production of the proposed... alternative: (1) Will, as compared to the Forest Service's preliminary condition: (i) Cost significantly less... alternative not adopted on: (1) Energy supply, distribution, cost, and use; (2) Flood control; (3) Navigation...

  19. Impacts of the driver's bounded rationality on the traffic running cost under the car-following model

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Luo, Xiao-Feng; Liu, Kai

    2016-09-01

    The driver's bounded rationality has significant influences on micro driving behavior, and researchers have proposed traffic flow models that account for it. However, little effort has been made to explore its effects on trip cost. In this paper, we use our recently proposed car-following model to study the effects of the driver's bounded rationality on each driver's running cost and the system's total cost under three definitions of traffic running cost. The numerical results show that considering the driver's bounded rationality increases each driver's running cost and the system's total cost under all three definitions.
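
    The paper's exact car-following model and cost definitions are not reproduced here, but the mechanics of accumulating a running cost along simulated trajectories can be illustrated with a generic full-velocity-difference loop and a simple acceleration-plus-speed-deviation proxy cost.

      import numpy as np

      def optimal_velocity(headway, v_max=30.0, hc=25.0):
          return 0.5 * v_max * (np.tanh(headway / hc - 1.0) + np.tanh(1.0))

      def running_costs(n=20, dt=0.1, steps=3000, kappa=0.41, lam=0.5):
          x = np.arange(n)[::-1] * 30.0            # platoon positions, leader first
          v = np.full(n, optimal_velocity(30.0))
          cost = np.zeros(n)
          for _ in range(steps):
              headway = np.empty(n); dv = np.empty(n)
              headway[1:] = x[:-1] - x[1:]; dv[1:] = v[:-1] - v[1:]
              headway[0] = 1e9; dv[0] = 0.0        # free-flowing leader
              a = kappa * (optimal_velocity(headway) - v) + lam * dv
              v = np.maximum(v + a * dt, 0.0)
              x = x + v * dt
              cost += (np.abs(a) + 0.05 * np.abs(v - 25.0)) * dt  # proxy running cost
          return cost, cost.sum()                  # per-driver and system total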

  20. Acoustic Sensor Planning for Gunshot Location in National Parks: A Pareto Front Approach

    PubMed Central

    González-Castaño, Francisco Javier; Alonso, Javier Vales; Costa-Montenegro, Enrique; López-Matencio, Pablo; Vicente-Carrasco, Francisco; Parrado-García, Francisco J.; Gil-Castiñeira, Felipe; Costas-Rodríguez, Sergio

    2009-01-01

    In this paper, we propose a solution for gunshot location in national parks. In Spain, agencies such as SEPRONA fight against poaching with considerable success. The DiANa project, endorsed by Cabaneros National Park and the SEPRONA service, proposes a system to automatically detect and locate gunshots; this work presents its technical aspects related to network design and planning. The system consists of a network of acoustic sensors that locate gunshots by hyperbolic multilateration. The differences in sound arrival times allow the computation of a low-error estimate of the gunshot location. The accuracy of this method depends on tight sensor clock synchronization, which an ad-hoc time synchronization protocol provides. On the other hand, since the areas under surveillance are wide and electric power is scarce, it is necessary to maximize detection coverage and minimize system cost at the same time. Sensor network planning therefore has two targets, coverage and cost, and we model it as an unconstrained problem with two objective functions. We determine a set of candidate solutions of interest by combining a derivative-free descent method we have recently proposed with a Pareto front approach. The results are clearly superior to random seeding in a realistic simulation scenario. PMID:22303135
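
    The hyperbolic multilateration step is sketched below: given sensor positions and time-difference-of-arrival (TDOA) measurements relative to a reference sensor, the shot position is recovered by nonlinear least squares. Sensor positions and timings here are synthetic.

      import numpy as np
      from scipy.optimize import least_squares

      C = 343.0  # nominal speed of sound, m/s

      def locate(sensors, tdoa, x0):
          """sensors: (M,2) positions; tdoa[i-1]: arrival time at sensor i minus
          arrival time at sensor 0; x0: initial guess for the shot position."""
          def residual(pos):
              dist = np.linalg.norm(sensors - pos, axis=1)
              return (dist[1:] - dist[0]) - C * np.asarray(tdoa)
          return least_squares(residual, x0).x

      sensors = np.array([[0, 0], [800, 0], [0, 900], [850, 880], [400, 450]], float)
      src = np.array([320.0, 610.0])                        # synthetic ground truth
      dist = np.linalg.norm(sensors - src, axis=1)
      tdoa = (dist[1:] - dist[0]) / C                       # noiseless TDOAs
      print(locate(sensors, tdoa, x0=np.array([400.0, 400.0])))  # ~ [320, 610]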
