Sample records for empirically optimizing large

  1. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six problem classes commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the problem encodings are described in detail for reproducibility.
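
    To make the comparison concrete, here is a minimal sketch (not from Baluja's report) contrasting a stochastic hill climber with a very small generational genetic algorithm on a toy 0/1 knapsack objective; the problem size, operators, and parameter values are illustrative assumptions only.

```python
# Minimal sketch: stochastic hill climber vs. a tiny generational GA
# on a toy 0/1 knapsack objective (all parameters are illustrative).
import random

random.seed(0)
N = 60
values  = [random.randint(1, 20) for _ in range(N)]
weights = [random.randint(1, 20) for _ in range(N)]
CAP = sum(weights) // 2

def fitness(x):
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    v = sum(vi for vi, xi in zip(values, x) if xi)
    return v if w <= CAP else 0          # infeasible solutions score zero

def hill_climb(evals=20000):
    x = [random.randint(0, 1) for _ in range(N)]
    best = fitness(x)
    for _ in range(evals):
        i = random.randrange(N)
        x[i] ^= 1                        # flip one bit
        f = fitness(x)
        if f >= best:
            best = f
        else:
            x[i] ^= 1                    # undo the move
    return best

def simple_ga(pop_size=50, gens=400, p_mut=1.0 / N):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]    # one-point crossover
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = children
    return max(fitness(x) for x in pop)

print("hill climber:", hill_climb())
print("simple GA   :", simple_ga())
```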

  2. DFT Performance Prediction in FFTW

    NASA Astrophysics Data System (ADS)

    Gu, Liang; Li, Xiaoming

    Fastest Fourier Transform in the West (FFTW) is an adaptive FFT library that generates highly efficient Discrete Fourier Transform (DFT) implementations. It is one of the fastest FFT libraries available, and it outperforms many adaptive or hand-tuned DFT libraries. Its success largely relies on the huge search space spanned by several FFT algorithms and a set of compiler-generated C code fragments (called codelets) for small-size DFTs. FFTW empirically finds the best algorithm by measuring the performance of different algorithm combinations. Although the empirical search works very well for FFTW, the search process does not explain why the best plan found performs best, and the search overhead grows polynomially as the DFT size increases. The opposite of empirical search is model-driven optimization. However, it is widely believed that model-driven optimization is inferior to empirical search and is particularly powerless for problems as complex as the optimization of the DFT.
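
    The planner idea can be illustrated with a toy Python sketch (not FFTW itself): several mathematically equivalent DFT strategies are timed and the fastest one is kept. The Cooley-Tukey split below is the textbook row-column decomposition; the sizes and the candidate list are arbitrary assumptions.

```python
# Toy "planner": time equivalent DFT strategies and keep the fastest.
import time
import numpy as np

def cooley_tukey(x, n1, n2):
    """DFT of length n1*n2 via an n1-by-n2 row-column decomposition."""
    a = x.reshape(n1, n2)
    b = np.fft.fft(a, axis=0)                          # n2 DFTs of length n1
    k1 = np.arange(n1)[:, None]
    m2 = np.arange(n2)[None, :]
    c = b * np.exp(-2j * np.pi * k1 * m2 / (n1 * n2))  # twiddle factors
    d = np.fft.fft(c, axis=1)                          # n1 DFTs of length n2
    return d.T.ravel()

N = 4096
x = np.random.default_rng(0).standard_normal(N) + 0j
plans = {"direct": lambda v: np.fft.fft(v),
         "64x64":  lambda v: cooley_tukey(v, 64, 64),
         "16x256": lambda v: cooley_tukey(v, 16, 256)}

best, best_t = None, float("inf")
for name, plan in plans.items():
    assert np.allclose(plan(x), np.fft.fft(x))         # all plans compute the same DFT
    t0 = time.perf_counter()
    for _ in range(200):
        plan(x)
    dt = time.perf_counter() - t0
    if dt < best_t:
        best, best_t = name, dt
    print(f"{name:7s} {dt * 5:.3f} ms/call")
print("selected plan:", best)
```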

  3. Vast Portfolio Selection with Gross-exposure Constraints*

    PubMed Central

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce large portfolio selection using gross-exposure constraints. We show that, with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices have performance similar to that of the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. Applications to portfolio selection, tracking, and improvement are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404
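
    A minimal sketch of the kind of optimization involved, under assumed data and with a general-purpose solver standing in for the authors' specialized algorithms: minimum-variance weights subject to the budget constraint and a gross-exposure bound ||w||_1 <= c, where c = 1 corresponds to the no-short-sale portfolio and larger c permits limited short positions.

```python
# Minimum-variance weights under a gross-exposure constraint ||w||_1 <= c.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
p, n = 30, 500                                # assets, observations (illustrative sizes)
returns = rng.standard_normal((n, p)) @ rng.standard_normal((p, p)) * 0.01
sigma = np.cov(returns, rowvar=False)         # estimated covariance matrix

def min_variance(c):
    w0 = np.full(p, 1.0 / p)
    cons = [{"type": "eq",   "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: c - np.abs(w).sum()}]
    res = minimize(lambda w: w @ sigma @ w, w0, constraints=cons, method="SLSQP")
    return res.x

for c in (1.0, 1.6, 2.0):
    w = min_variance(c)
    print(f"c={c:.1f}  variance={w @ sigma @ w:.6f}  gross exposure={np.abs(w).sum():.2f}")
```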

  4. Potential relative increment (PRI): a new method to empirically derive optimal tree diameter growth

    Treesearch

    Don C Bragg

    2001-01-01

    Potential relative increment (PRI) is a new method to derive optimal diameter growth equations using inventory information from a large public database. Optimal growth equations for 24 species were developed using plot and tree records from several states (Michigan, Minnesota, and Wisconsin) of the North Central US. Most species were represented by thousands of...

  5. An optimum organizational structure for a large earth-orbiting multidisciplinary space base. Ph.D. Thesis - Fla. State Univ., 1973

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1975-01-01

    An optimum hypothetical organizational structure was studied for a large earth-orbiting, multidisciplinary research and applications space base manned by a crew of technologists. Because such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than with the empirical testing of the model. The essential finding of this research was that a four-level, project-type 'total matrix' model will optimize the efficiency and effectiveness of space base technologists.

  6. An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1973-01-01

    The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.

  7. Effect of collision energy optimization on the measurement of peptides by selected reaction monitoring (SRM) mass spectrometry.

    PubMed

    Maclean, Brendan; Tomazela, Daniela M; Abbatiello, Susan E; Zhang, Shucha; Whiteaker, Jeffrey R; Paulovich, Amanda G; Carr, Steven A; Maccoss, Michael J

    2010-12-15

    Proteomics experiments based on Selected Reaction Monitoring (SRM, also referred to as Multiple Reaction Monitoring or MRM) are being used to target large numbers of protein candidates in complex mixtures. At present, instrument parameters are often optimized for each peptide, a time- and resource-intensive process. Large SRM experiments are greatly facilitated by the ability to predict MS instrument parameters that work well with the broad diversity of peptides they target. For this reason, we investigated the impact on peptide signal intensity of using simple linear equations to predict the collision energy (CE), and compared it with empirical optimization of the CE for each peptide and transition individually. Using optimized linear equations, the empirically derived CE values yielded an average gain of only 7.8% in total peak area over the predicted values. We also found that existing commonly used linear equations fall short of their potential and should be recalculated for each charge state and when introducing new instrument platforms. We provide a fully automated pipeline for calculating these equations and for individually optimizing the CE of each transition on SRM instruments from Agilent, Applied Biosystems, Thermo-Scientific and Waters in the open-source Skyline software tool ( http://proteome.gs.washington.edu/software/skyline ).
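
    The charge-state-specific linear equations described above have the form CE = slope * (precursor m/z) + intercept. A hypothetical sketch of fitting them from empirically optimized values follows; the numbers are invented for illustration and are not instrument data.

```python
# Fit a charge-state-specific linear CE equation from (made-up) optimized values.
import numpy as np

# (precursor m/z, empirically optimal CE in eV), grouped by precursor charge
optimized = {
    2: [(450.7, 16.1), (520.3, 18.0), (610.8, 20.9), (700.4, 23.2), (820.9, 26.8)],
    3: [(480.2, 13.5), (560.6, 15.9), (640.1, 18.2), (730.5, 20.4), (845.3, 23.7)],
}

equations = {}
for z, points in optimized.items():
    mz, ce = np.array(points).T
    slope, intercept = np.polyfit(mz, ce, 1)      # least-squares line per charge state
    equations[z] = (slope, intercept)
    print(f"charge {z}+: CE = {slope:.4f} * m/z + {intercept:.2f}")

# Predict a collision energy for a new precursor (assumed values)
slope, intercept = equations[2]
print("predicted CE for 655.3 m/z, 2+:", round(slope * 655.3 + intercept, 1), "eV")
```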

  8. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE PAGES

    Brito, Thiago V.; Morley, Steven K.

    2017-10-25

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.

  9. Improving Empirical Magnetic Field Models by Fitting to In Situ Data Using an Optimized Parameter Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brito, Thiago V.; Morley, Steven K.

    A method for comparing and optimizing the accuracy of empirical magnetic field models using in situ magnetic field measurements is presented in this paper. The optimization method minimizes a cost function—τ—that explicitly includes both a magnitude and an angular term. A time span of 21 days, including periods of mild and intense geomagnetic activity, was used for this analysis. A comparison between five magnetic field models (T96, T01S, T02, TS04, and TS07) widely used by the community demonstrated that the T02 model was, on average, the most accurate when driven by the standard model input parameters. The optimization procedure, performed in all models except TS07, generally improved the results when compared to unoptimized versions of the models. Additionally, using more satellites in the optimization procedure produces more accurate results. This procedure reduces the number of large errors in the model, that is, it reduces the number of outliers in the error distribution. The TS04 model shows the most accurate results after the optimization in terms of both the magnitude and direction, when using at least six satellites in the fitting. It gave a smaller error than its unoptimized counterpart 57.3% of the time and outperformed the best unoptimized model (T02) 56.2% of the time. Its median percentage error in |B| was reduced from 4.54% to 3.84%. Finally, the difference among the models analyzed, when compared in terms of the median of the error distributions, is not very large. However, the unoptimized models can have very large errors, which are much reduced after the optimization.

  10. Exploring the effect of power law social popularity on language evolution.

    PubMed

    Gong, Tao; Shuai, Lan

    2014-01-01

    We evaluate the effect of a power-law-distributed social popularity on the origin and change of language, based on three artificial life models meticulously tracing the evolution of linguistic conventions including lexical items, categories, and simple syntax. A cross-model analysis reveals an optimal social popularity, in which the λ value of the power law distribution is around 1.0. Under this scaling, linguistic conventions can efficiently emerge and widely diffuse among individuals, thus maintaining a useful level of mutual understandability even in a big population. From an evolutionary perspective, we regard this social optimality as a tradeoff among social scaling, mutual understandability, and population growth. Empirical evidence confirms that such optimal power laws exist in many large-scale social systems that are constructed primarily via language-related interactions. This study contributes to the empirical explorations and theoretical discussions of the evolutionary relations between ubiquitous power laws in social systems and relevant individual behaviors.

  11. Education and Work

    ERIC Educational Resources Information Center

    Trostel, Philip; Walker, Ian

    2006-01-01

    This paper examines the relationship between the incentives to work and to invest in human capital through education in a lifecycle optimizing model. These incentives are shown to be mutually reinforcing in a simple stylized model. This theoretical prediction is investigated empirically using three large micro datasets covering a broad range of…

  12. Emotion Matters: Exploring the Emotional Labor of Teaching

    ERIC Educational Resources Information Center

    Brown, Elizabeth Levine

    2011-01-01

    A large empirical body of literature suggests that teachers make a difference in the lives of students both academically (Pianta & Allen, 2008) and personally (McCaffrey, Lockwood, Koretz, & Hamilton, 2003). Teachers influence students through not only their delivery of content knowledge, but also their development of optimal learning conditions…

  13. Design optimization of large-size format edge-lit light guide units

    NASA Astrophysics Data System (ADS)

    Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.

    2016-04-01

    In this paper, we present an original method of dot pattern generation dedicated to the design optimization of large-size format light guide plates (LGPs), such as those used in photo-bioreactors, for which the number of dots greatly exceeds the maximum number of optical objects supported by most common ray-tracing software. In the proposed method, to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical resolution of ink dot printing. These sections are then replaced by equivalent cells with a continuous diffusing film. Next, we adjust the two-dimensional TIS (Total Integrated Scatter) distribution over the grid of equivalent cells using an iterative optimization procedure. Finally, the resulting optimal TIS distribution is converted into a dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. This significantly reduces the total time needed for dot pattern optimization.

  14. Wavelet-bounded empirical mode decomposition for measured time series analysis

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2018-01-01

    Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function; a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result with the objective of minimizing the bounding-function area, with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD, we apply the proposed method first to a stationary, two-component signal and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.

  15. Water velocity in commercial RAS culture tanks for Atlantic salmon smolt production

    USDA-ARS?s Scientific Manuscript database

    An optimal flow domain in culture tanks is vital for fish growth and welfare. This paper presents empirical data on rotational velocity and water quality in circular and octagonal tanks at two large commercial smolt production sites, with an approximate production rate of 1000 and 1300 ton smolt ann...

  16. DFT energy optimization of a large carbohydrate: cyclomaltohexaicosaose (CA-26)

    USDA-ARS?s Scientific Manuscript database

    CA-26 is the largest cyclodextrin (546 atoms) for which refined X-ray structural data is available. Because of its size, 26 D-glucose residues, it is beyond the scope of study of most ab initio or density functional methods, and to date has only been computationally examined using empirical force fi...

  17. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
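
    A toy sketch of the max-min idea (not the paper's framework): choose model parameters so that the smallest requirement-compliance margin is as large as possible. The model, data, and admissible error limits below are invented for illustration.

```python
# Max-min margin estimation for a toy exponential-decay model.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 5.0, 40)
rng = np.random.default_rng(3)
data = 2.0 * np.exp(-0.8 * t) + 0.05 * rng.standard_normal(t.size)
tol = np.full(t.size, 0.15)                 # admissible prediction error per sample

def margins(theta):
    a, b = theta
    pred = a * np.exp(-b * t)
    return tol - np.abs(pred - data)        # positive => requirement satisfied

# Maximize the smallest margin <=> minimize its negative (non-smooth, so Nelder-Mead)
res = minimize(lambda th: -margins(th).min(), x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print("estimate:", a_hat, b_hat, " worst margin:", margins(res.x).min())
```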

  18. Epidemic extinction paths in complex networks

    NASA Astrophysics Data System (ADS)

    Hindes, Jason; Schwartz, Ira B.

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.

  19. Epidemic extinction paths in complex networks.

    PubMed

    Hindes, Jason; Schwartz, Ira B

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.

  20. Integral criteria for large-scale multiple fingerprint solutions

    NASA Astrophysics Data System (ADS)

    Ushmaev, Oleg S.; Novikov, Sergey O.

    2004-08-01

    We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches are investigated for different systems and input devices; the empirical statistics were taken from real biometric tests. We then analyze the joint score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. Explicit and approximate relations for the optimal integral score, which provides the lowest FRR while the FAR is predefined, have been obtained. The results of a real multiple fingerprint test show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
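
    The decision rule can be sketched as follows: scan score thresholds and keep the one that minimizes the FRR while holding the FAR at or below a predefined level. The genuine and impostor score distributions below are simulated, not the paper's biometric test data.

```python
# Pick the score threshold minimizing FRR subject to FAR <= target.
import numpy as np

rng = np.random.default_rng(7)
genuine  = rng.normal(0.72, 0.10, 20000)      # scores of matching pairs (simulated)
impostor = rng.normal(0.35, 0.12, 20000)      # scores of non-matching pairs (simulated)
far_target = 1e-3

thresholds = np.linspace(0.0, 1.0, 2001)
far = np.array([(impostor >= t).mean() for t in thresholds])
frr = np.array([(genuine  <  t).mean() for t in thresholds])

feasible = np.where(far <= far_target)[0]     # thresholds meeting the FAR requirement
best = feasible[np.argmin(frr[feasible])]
print(f"threshold={thresholds[best]:.3f}  FAR={far[best]:.2e}  FRR={frr[best]:.4f}")
```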

  1. An Empirically-Derived Index of High School Academic Rigor. ACT Working Paper 2017-5

    ERIC Educational Resources Information Center

    Allen, Jeff; Ndum, Edwin; Mattern, Krista

    2017-01-01

    We derived an index of high school academic rigor by optimizing the prediction of first-year college GPA based on high school courses taken, grades, and indicators of advanced coursework. Using a large data set (n~108,000) and nominal parameterization of high school course outcomes, the high school academic rigor (HSAR) index capitalizes on…

  2. Parametrically Optimized Carbon Nanotube-Coated Cold Cathode Spindt Arrays

    PubMed Central

    Yuan, Xuesong; Cole, Matthew T.; Zhang, Yu; Wu, Jianqiang; Milne, William I.; Yan, Yang

    2017-01-01

    Here, we investigate, through parametrically optimized macroscale simulations, the field electron emission from arrays of carbon nanotube (CNT)-coated Spindts towards the development of an emerging class of novel vacuum electron devices. The present study builds on empirical data gleaned from our recent experimental findings on the room temperature electron emission from large area CNT electron sources. We determine the field emission current of the present microstructures directly using particle in cell (PIC) software and present a new CNT cold cathode array variant which has been geometrically optimized to provide maximal emission current density, with current densities of up to 11.5 A/cm2 at low operational electric fields of 5.0 V/μm. PMID:28336845

  3. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    PubMed

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.

  4. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.

  5. Mental workload prediction based on attentional resource allocation and information processing.

    PubMed

    Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin

    2015-01-01

    Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.

  6. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is therefore not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  7. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and non-negativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks a convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of the convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. Breaking constrained reconstruction down into an intersection of several feasible sets can lead to faster convergence and to quantification of reconstruction parameters in a physically meaningful way rather than by empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded TV constraint. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance in reconstruction speed, image quality and quantification. PMID:28253298
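
    A toy analogue of the alternating-projection idea (not the paper's FS-POCS code): alternate orthogonal projections onto two convex sets, an affine data-consistency set {x : Ax = b} and the non-negative orthant, until the iterate is close to their intersection. Problem sizes and data are invented.

```python
# POCS on two convex sets: {x : Ax = b} and {x : x >= 0}.
import numpy as np

rng = np.random.default_rng(5)
m, n = 40, 120                                 # underdetermined system
A = rng.standard_normal((m, n))
x_true = np.maximum(rng.standard_normal(n), 0) # a non-negative solution exists
b = A @ x_true

AAt_inv = np.linalg.inv(A @ A.T)
def project_data(x):                           # orthogonal projection onto {Ax = b}
    return x - A.T @ (AAt_inv @ (A @ x - b))

x = np.zeros(n)
for _ in range(500):
    x = project_data(x)
    x = np.maximum(x, 0.0)                     # projection onto the non-negativity set

print("residual ||Ax - b||:", np.linalg.norm(A @ x - b))
print("most negative entry:", x.min())
```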

  8. Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.

    PubMed

    King, Leandra; Wakeley, John

    2016-09-01

    We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is both asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.

  9. A multicenter phase 2 study of empirical low-dose liposomal amphotericin B in patients with refractory febrile neutropenia.

    PubMed

    Miyao, Kotaro; Sawa, Masashi; Kurata, Mio; Suzuki, Ritsuro; Sakemura, Reona; Sakai, Toshiyasu; Kato, Tomonori; Sahashi, Satomi; Tsushita, Natsuko; Ozawa, Yukiyasu; Tsuzuki, Motohiro; Kohno, Akio; Adachi, Tatsuya; Watanabe, Keisuke; Ohbayashi, Kaneyuki; Inagaki, Yuichiro; Atsuta, Yoshiko; Emi, Nobuhiko

    2017-01-01

    Invasive fungal infection (IFI) is a major life-threatening problem encountered by patients with hematological malignancies receiving intensive chemotherapy. Empirical antifungal agents are therefore important. Despite the availability of antifungal agents for such situations, the optimal agents and administration methods remain unclear. We conducted a prospective phase 2 study of empirical 1 mg/kg/day liposomal amphotericin B (L-AMB) in 80 patients receiving intensive chemotherapy for hematological malignancies. All enrolled patients were high-risk and had recurrent prolonged febrile neutropenia despite having received broad-spectrum antibacterial therapy for at least 72 hours. Fifty-three patients (66.3 %) achieved the primary endpoint of successful treatment, thus exceeding the predefined threshold success rate. No patients developed IFI. The treatment completion rate was 73.8 %, and only two cases ceased treatment because of adverse events. The most frequent events were reversible electrolyte abnormalities. We consider low-dose L-AMB to provide comparable efficacy and improved safety and cost-effectiveness when compared with other empirical antifungal therapies. Additional large-scale randomized studies are needed to determine the clinical usefulness of L-AMB relative to other empirical antifungal therapies.

  10. Mission Operations Planning with Preferences: An Empirical Study

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Khatib, Lina; McGann, Conor

    2006-01-01

    This paper presents an empirical study of some non-exhaustive approaches to optimizing preferences within the context of constraint-based, mixed-initiative planning for mission operations. This work is motivated by the experience of deploying and operating the MAPGEN (Mixed-initiative Activity Plan GENerator) system for the Mars Exploration Rover Mission. Responsiveness to the user is one of the important requirements for MAPGEN; hence, the additional computation time needed to optimize preferences must be kept within reasonable bounds. This was the primary motivation for studying non-exhaustive optimization approaches. The specific goals of the empirical study are to assess the impact on solution quality of two greedy heuristics used in MAPGEN and to assess the improvement gained by applying a linear programming optimization technique to the final solution.

  11. An empirical model for optimal highway durability in cold regions.

    DOT National Transportation Integrated Search

    2016-03-10

    We develop an empirical tool to estimate optimal highway durability in cold regions. To test the model, we assemble a data set : containing all highway construction and maintenance projects in Arizona and Washington State from 1990 to 2014. The data ...

  12. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully automated tuning to semi-automated development and to manual programmable control.

  13. Disturbance, life history, and optimal management for biodiversity

    USGS Publications Warehouse

    Guo, Q.

    2003-01-01

    Both the frequency and the intensity of disturbances in many ecosystems have been greatly enhanced by increasing human activities. As a consequence, short-lived plant species, including many exotics, may have increased dramatically in terms of both richness and abundance on our planet, while many long-lived species may have been lost. Such conclusions can be drawn from broadly observed successional cycles in both theoretical and empirical studies. This article discusses two major issues that have been largely overlooked in current ecosystem management policies and conservation efforts, i.e., life history constraints and future global warming trends. It also addresses the importance of these two factors in balancing disturbance frequency and intensity for optimal biodiversity maintenance and ecosystem management.

  14. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.

  15. Essays on empirical analysis of multi-unit auctions: Impacts of financial transmission rights on the restructured electricity industry

    NASA Astrophysics Data System (ADS)

    Zang, Hailing

    This dissertation uses recently developed empirical methodologies for the study of multi-unit auctions to test the impacts of Financial Transmission Rights (FTRs) on the competitiveness of restructured electricity markets. FTRs are a special type of financial option that hedges against volatility in the cost of transporting electricity over the grid. Policy makers seek to use the prices of FTRs as market signals to incentivize efficient investment and utilization of transmission capacity. However, prices will not send the correct signals if market participants use FTRs strategically. This dissertation uses data from the Texas electricity market to test whether the prices of FTRs are efficient enough to achieve such goals. The auctions studied are multi-unit, uniform-price, sealed-bid auctions. The first part of the dissertation studies the auctions on the spot market of the wholesale electricity industry. I derive structural empirical models to test theoretical predictions as to whether bidders fully internalize the effect of FTRs on profits into their bidding decisions. I find that bidders are learning how to optimally bid above marginal cost for their inframarginal capacities. The bidders also learned to include FTRs in their profit maximization problem during the course of the first year, but starting from the second year they deviated from optimal bidding that includes FTRs in the profit maximization problem. Counterfactual analysis shows that the primary effect of FTRs on market outcomes is changing the level of prices rather than production efficiency. Finally, I find that in most months the current allocations of FTRs are statistically equivalent to the optimal allocations. The second part of the dissertation studies bidding behavior in the FTR auctions. I find that the strategic impact of FTRs on FTR purchasing behavior is significant for large bidders, that is, firms exercising market power in the FTR auctions. Second, traders forecast future FTR credit very accurately, while large generators' forecasts of future FTR credit tend to be biased upward. Finally, the bid shading patterns are consistent with theoretical predictions and support the existence of common values.

  16. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
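
    A small sketch in the spirit of this approach: empirically time two equivalent implementations of a single tensor contraction and let the measured costs drive the call selection. The shapes and the candidate list are assumptions, and numpy stands in for the libraries targeted by the Tensor Contraction Engine.

```python
# Empirically measured costs drive the choice between equivalent contraction calls.
import time
import numpy as np

A_, B_, I_, J_ = 96, 96, 96, 96
T = np.random.default_rng(2).standard_normal((A_, B_, I_))
V = np.random.default_rng(3).standard_normal((I_, J_))

candidates = {
    "einsum":         lambda: np.einsum("abi,ij->abj", T, V),
    "reshape+matmul": lambda: (T.reshape(-1, I_) @ V).reshape(A_, B_, J_),
}
assert np.allclose(candidates["einsum"](), candidates["reshape+matmul"]())

costs = {}
for name, call in candidates.items():
    t0 = time.perf_counter()
    for _ in range(20):
        call()
    costs[name] = (time.perf_counter() - t0) / 20
    print(f"{name:16s} {costs[name] * 1e3:.2f} ms")
print("selected:", min(costs, key=costs.get))
```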

  17. PATENTS AND RESEARCH INVESTMENTS: ASSESSING THE EMPIRICAL EVIDENCE.

    PubMed

    Budish, Eric; Roin, Benjamin N; Williams, Heidi L

    2016-05-01

    A well-developed theoretical literature - dating back at least to Nordhaus (1969) - has analyzed optimal patent policy design. We re-present the core trade-off of the Nordhaus model and highlight an empirical question which emerges from the Nordhaus framework as a key input into optimal patent policy design: namely, what is the elasticity of R&D investment with respect to the patent term? We then review the - surprisingly small - body of empirical evidence that has been developed on this question over the nearly half century since the publication of Nordhaus's book.

  18. Generalized SMO algorithm for SVM-based multitask learning.

    PubMed

    Cai, Feng; Cherkassky, Vladimir

    2012-06-01

    Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as "learning with structured data" and its support vector machine (SVM) based optimization formulation called SVM+. Liang and Cherkassky showed the connection between SVM+ and multitask learning (MTL) approaches in machine learning, and proposed an SVM-based formulation for MTL called SVM+MTL for classification. Training the SVM+MTL classifier requires the solution of a large quadratic programming optimization problem which scales as O(n^3) with sample size n. So there is a need to develop computationally efficient algorithms for implementing SVM+MTL. This brief generalizes Platt's sequential minimal optimization (SMO) algorithm to the SVM+MTL setting. Empirical results show that, for typical SVM+MTL problems, the proposed generalized SMO achieves over 100 times speed-up, in comparison with general-purpose optimization routines.

  19. Optimization in optical systems revisited: Beyond genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gagnon, Denis; Dumont, Joey; Dubé, Louis

    2013-05-01

    Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).

  20. Integrated, Team-Based Chronic Pain Management: Bridges from Theory and Research to High Quality Patient Care.

    PubMed

    Driscoll, Mary A; Kerns, Robert D

    Chronic pain is a significant public health concern. For many, chronic pain is associated with declines in physical functioning and increases in emotional distress. Additionally, the socioeconomic burden associated with costs of care, lost wages and declines in productivity are significant. A large and growing body of research continues to support the biopsychosocial model as the predominant framework for conceptualizing the experience of chronic pain and its multiple negative impacts. The model also informs a widely accepted and empirically supported approach for the optimal management of chronic pain. This chapter briefly articulates the historical foundations of the biopsychosocial model of chronic pain followed by a relatively detailed discussion of an empirically informed, integrated, multimodal and interdisciplinary treatment approach. The role of mental health professionals, especially psychologists, in the management of chronic pain is particularly highlighted.

  1. Evaluation of the Optimum Composition of Low-Temperature Fuel Cell Electrocatalysts for Methanol Oxidation by Combinatorial Screening.

    PubMed

    Antolini, Ermete

    2017-02-13

    Combinatorial chemistry and high-throughput screening represent an innovative and rapid tool to prepare and evaluate a large number of new materials, saving time and expense for research and development. Considering that the activity and selectivity of catalysts depend on complex kinetic phenomena, making their development largely empirical in practice, they are prime candidates for combinatorial discovery and optimization. This review presents an overview of recent results of combinatorial screening of low-temperature fuel cell electrocatalysts for methanol oxidation. Optimum catalyst compositions obtained by combinatorial screening were compared with those of bulk catalysts, and the effect of the library geometry on the screening of catalyst composition is highlighted.

  2. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. Data on outpatient visits from January 2005 to December 2013 are retrieved and used as the original time series. The original time series is then decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Next, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is taken as the ultimate forecast value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
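
    A sketch of the decompose-forecast-superpose pipeline is given below. It assumes the PyEMD package (distributed as "EMD-signal" on PyPI) for the decomposition and substitutes a plain least-squares autoregressive forecaster for the paper's PSO-tuned back-propagation network; the monthly series is synthetic.

```python
# Decompose a monthly series into IMFs, forecast each, and superpose the forecasts.
import numpy as np
from PyEMD import EMD   # assumed dependency: the "EMD-signal" package

rng = np.random.default_rng(0)
months = np.arange(108)                                  # 9 years of monthly counts
series = 5000 + 20 * months + 800 * np.sin(2 * np.pi * months / 12) \
         + 100 * rng.standard_normal(months.size)

imfs = EMD().emd(series)                                 # intrinsic mode functions + residue

def ar_forecast(x, lags=12, steps=12):
    """One-shot AR(lags) forecast fitted by least squares (stand-in learner)."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    y = x[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    hist = list(x[-lags:])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, hist[-lags:]))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

forecast = sum(ar_forecast(c) for c in imfs)             # superpose component forecasts
print("next 12 months:", np.round(forecast, 0))
```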

  3. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    PubMed

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. Data on outpatient visits from January 2005 to December 2013 are retrieved and used as the original time series. The original time series is then decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Next, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is taken as the ultimate forecast value. Simulation indicates that the proposed method attains a better performance index than the other four methods.

  4. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
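
    Numerically, the optimization amounts to scanning candidate thresholds and keeping the one whose exceedance coverage correlates best with the area-average rain rate, as in the sketch below. The snapshots are simulated lognormal fields; the parameters are loosely GATE-like assumptions, not values from the paper.

```python
# Scan thresholds; keep the one maximizing corr(coverage above threshold, area mean).
import numpy as np

rng = np.random.default_rng(11)
n_snapshots, n_pixels = 200, 4000
mu = rng.normal(0.0, 0.6, n_snapshots)             # snapshot-to-snapshot variation
snapshots = [np.exp(rng.normal(m, 1.1, n_pixels)) for m in mu]   # rain rates, mm/h

area_mean = np.array([s.mean() for s in snapshots])
thresholds = np.linspace(0.5, 40.0, 80)
corr = []
for tau in thresholds:
    coverage = np.array([(s > tau).mean() for s in snapshots])
    corr.append(np.corrcoef(coverage, area_mean)[0, 1])

best = thresholds[int(np.argmax(corr))]
print(f"optimal threshold for the first moment: about {best:.1f} mm/h")
```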

  5. Five Skills Psychiatrists Should Have in Order to Provide Patients with Optimal Ethical Care

    PubMed Central

    2011-01-01

    Analyses of empirical research and ethical problems require different skills and approaches. This article presents five core skills psychiatrists need to be able to address ethical problems optimally. These include their being able to recognize ethical conflicts and distinguish them from empirical questions, apply all morally relevant values, and know good from bad ethical arguments. Clinical examples of each are provided. PMID:21487542

  6. C-tactile afferent stimulating touch carries a positive affective value.

    PubMed

    Pawling, Ralph; Cannon, Peter R; McGlone, Francis P; Walker, Susannah C

    2017-01-01

    The rewarding sensation of touch in affiliative interactions is hypothesized to be underpinned by a specialized system of nerve fibers called C-Tactile afferents (CTs), which respond optimally to slowly moving, gentle touch, typical of a caress. However, empirical evidence to support the theory that CTs encode socially relevant, rewarding tactile information in humans is currently limited. While in healthy participants, touch applied at CT optimal velocities (1-10cm/sec) is reliably rated as subjectively pleasant, neuronopathy patients lacking large myelinated afferents, but with intact C-fibres, report that the conscious sensation elicited by stimulation of CTs is rather vague. Given this weak perceptual impact the value of self-report measures for assessing the specific affective value of CT activating touch appears limited. Therefore, we combined subjective ratings of touch pleasantness with implicit measures of affective state (facial electromyography) and autonomic arousal (heart rate) to determine whether CT activation carries a positive affective value. We recorded the activity of two key emotion-relevant facial muscle sites (zygomaticus major-smile muscle, positive affect & corrugator supercilii-frown muscle, negative affect) while participants evaluated the pleasantness of experimenter administered stroking touch, delivered using a soft brush, at two velocities (CT optimal 3cm/sec & CT non-optimal 30cm/sec), on two skin sites (CT innervated forearm & non-CT innervated palm). On both sites, 3cm/sec stroking touch was rated as more pleasant and produced greater heart rate deceleration than 30cm/sec stimulation. However, neither self-report ratings nor heart rate responses discriminated stimulation on the CT innervated arm from stroking of the non-CT innervated palm. In contrast, significantly greater activation of the zygomaticus major (smiling muscle) was seen specifically to CT optimal, 3cm/sec, stroking on the forearm in comparison to all other stimuli. These results offer the first empirical evidence in humans that tactile stimulation that optimally activates CTs carries a positive affective valence that can be measured implicitly.

  7. A methodology for selecting optimum organizations for space communities

    NASA Technical Reports Server (NTRS)

    Ragusa, J. M.

    1978-01-01

    This paper suggests that a methodology exists for selecting optimum organizations for future space communities of various sizes and purposes. Results of an exploratory study to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists are presented. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The principal finding of this research was that a four-level project type 'total matrix' model will optimize the effectiveness of Space Base technologists. An overall conclusion which can be reached from the research is that application of this methodology, or portions of it, may provide planning insights for the formal organizations which will be needed during the Space Industrialization Age.

  8. Optimality and Conductivity for Water Flow: From Landscapes, to Unsaturated Soils, to Plant Leaves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H.H.

    2012-02-23

    Optimality principles have been widely used in many areas. Based on an optimality principle that any flow field will tend toward a minimum in the energy dissipation rate, this work shows that there exists a unified form of conductivity relationship for three different flow systems: landscapes, unsaturated soils and plant leaves. The conductivity, the ratio of water flux to energy gradient, is a power function of water flux, although the power value is system dependent. This relationship indicates that, to minimize the energy dissipation rate for a whole system, water flow has a small resistance (or a large conductivity) at a location of large water flux. Empirical evidence supports the validity of the relationship for landscapes and unsaturated soils (under gravity-dominated conditions). Numerical simulation results also show that the relationship can capture the key features of the hydraulic structure of a plant leaf, although more studies are needed to further confirm its validity. In particular, it is of interest that, according to this relationship, hydraulic conductivity for gravity-dominated unsaturated flow, unlike that defined in the classic theories, depends not only on capillary pressure (or saturation) but also on the water flux. Use of the optimality principle allows for determining useful results that are applicable to a broad range of areas involving highly nonlinear processes and that may not be obtainable from classic theories describing water flow processes.
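
    A hedged restatement of the conductivity relationship described above, in generic notation (the symbols q, E, K and the exponent α are illustrative placeholders, not taken from the paper):

      K \;=\; \frac{q}{-\,\partial E/\partial x} \;\propto\; q^{\alpha}
      \qquad\Longrightarrow\qquad
      \Phi \;=\; q\left(-\frac{\partial E}{\partial x}\right) \;\propto\; q^{\,2-\alpha}

    Here q is the water flux, -∂E/∂x the energy gradient, K their ratio (the conductivity) and Φ the local energy dissipation rate. For any α > 0 the dissipation grows sub-quadratically with flux, so large-flux locations indeed present a comparatively small resistance, consistent with the minimum-dissipation argument.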

  9. Optimizing integrated luminosity of future hadron colliders

    NASA Astrophysics Data System (ADS)

    Benedikt, Michael; Schulte, Daniel; Zimmermann, Frank

    2015-10-01

    The integrated luminosity, a key figure of merit for any particle-physics collider, is closely linked to the peak luminosity and to the beam lifetime. The instantaneous peak luminosity of a collider is constrained by a number of boundary conditions, such as the available beam current, the maximum beam-beam tune shift with acceptable beam stability and reasonable luminosity lifetime (i.e., the empirical "beam-beam limit"), or the event pileup in the physics detectors. The beam lifetime at high-luminosity hadron colliders is largely determined by particle burn off in the collisions. In future highest-energy circular colliders synchrotron radiation provides a natural damping mechanism, which can be exploited for maximizing the integrated luminosity. In this article, we derive analytical expressions describing the optimized integrated luminosity, the corresponding optimum store length, and the time evolution of relevant beam parameters, without or with radiation damping, while respecting a fixed maximum value for the total beam-beam tune shift or for the event pileup in the detector. Our results are illustrated by examples for the proton-proton luminosity of the existing Large Hadron Collider (LHC) at its design parameters, of the High-Luminosity Large Hadron Collider (HL-LHC), and of the Future Circular Collider (FCC-hh).
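
    As a toy illustration of the store-length trade-off discussed above (not the authors' derivation), the sketch below assumes pure burn-off decay, L(t) = L0/(1 + t/τ)², a fixed turnaround time, and made-up numbers, then finds the store length that maximizes the average luminosity:

      import numpy as np

      def integrated_lumi(T, L0, tau):
          """Integral of L(t) = L0 / (1 + t/tau)**2 over one store of length T."""
          return L0 * tau * T / (T + tau)

      def average_lumi(T, L0, tau, t_turnaround):
          """Integrated luminosity per cycle divided by store plus turnaround time."""
          return integrated_lumi(T, L0, tau) / (T + t_turnaround)

      # Hypothetical numbers: burn-off lifetime tau = 15 h, turnaround time 5 h.
      L0, tau, t_ta = 1.0, 15.0, 5.0
      T_grid = np.linspace(0.1, 40.0, 2000)          # candidate store lengths [h]
      avg = average_lumi(T_grid, L0, tau, t_ta)
      T_opt = T_grid[np.argmax(avg)]
      print(f"optimum store length ~ {T_opt:.1f} h, "
            f"average luminosity ~ {avg.max():.3f} (in units of L0)")

    For this simple decay law the optimum is also available in closed form, T_opt = sqrt(τ · t_turnaround), which the grid search above reproduces (about 8.7 h for the numbers chosen).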

  10. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is natural to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-consuming that, in practice, the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
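
    A minimal, hedged sketch of the G-criterion computation discussed above: for each candidate design it evaluates the simple-kriging variance on a prediction grid (a known Gaussian covariance is assumed here purely for illustration; the paper works with the empirical kriging variance, which also accounts for estimated covariance parameters) and keeps the candidate whose maximum variance is smallest:

      import numpy as np

      def gauss_cov(a, b, sill=1.0, length=0.25):
          """Gaussian covariance between point sets a (n, d) and b (m, d)."""
          d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
          return sill * np.exp(-d2 / (2.0 * length ** 2))

      def max_kriging_variance(design, grid, sill=1.0, length=0.25, nugget=1e-8):
          """Maximum simple-kriging variance of `design` over the prediction `grid`."""
          K = gauss_cov(design, design, sill, length) + nugget * np.eye(len(design))
          k0 = gauss_cov(design, grid, sill, length)          # (n, m)
          w = np.linalg.solve(K, k0)                          # kriging weights
          var = sill - np.sum(w * k0, axis=0)                 # sigma^2 - k0' K^-1 k0
          return var.max()

      rng = np.random.default_rng(0)
      grid = np.stack(np.meshgrid(np.linspace(0, 1, 40),
                                  np.linspace(0, 1, 40)), -1).reshape(-1, 2)
      candidates = [rng.uniform(size=(10, 2)) for _ in range(200)]   # random designs
      best = min(candidates, key=lambda d: max_kriging_variance(d, grid))
      print("approximately G-optimal candidate design:", best, sep="\n")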

  11. Three Essays on Macroeconomics

    NASA Astrophysics Data System (ADS)

    Doda, Lider Baran

    This dissertation consists of three independent essays in macroeconomics. The first essay studies the transition to a low carbon economy using an extension of the neoclassical growth model featuring endogenous energy efficiency, exhaustible energy and explicit climate-economy interaction. I derive the properties of the laissez faire equilibrium and compare them to the optimal allocations of a social planner who internalizes the climate change externality. Three main results emerge. First, the exhaustibility of energy generates strong market-based incentives to improve energy efficiency and reduce CO2 emissions without any government intervention. Second, the market and optimal allocations are substantially different, suggesting a role for the government. Third, high and persistent taxes are required to implement the optimal allocations as a competitive equilibrium with taxes. The second essay focuses on coal fired power plants (CFPP) - one of the largest sources of CO2 emissions globally - and their generation efficiency using a macroeconomic model with an embedded CFPP sector. A key feature of the model is the endogenous choice of production technologies which differ in their energy efficiency. After establishing four empirical facts about the CFPP sector, I analyze the long run quantitative effects of energy taxes. Using the calibrated model, I find that sector-specific coal taxes have large effects on generation efficiency by inducing the use of more efficient technologies. Moreover, such taxes achieve large CO2 emissions reductions with relatively small effects on consumption and output. The final essay studies the procyclicality of fiscal policy in developing countries, which is a well-documented empirical observation seemingly at odds with Neoclassical and Keynesian policy prescriptions. I examine this issue by solving the optimal fiscal policy problem of a small open economy government when the interest rates on external debt are endogenous. Given an incomplete asset market, endogeneity is achieved by removing the government's ability to commit to repaying its external obligations. When calibrated to Argentina, the model generates procyclical government spending and countercyclical labor income tax rates. Simultaneously, the model's implications for key business cycle moments align well with the data.

  12. Transonic airfoil design for helicopter rotor applications

    NASA Technical Reports Server (NTRS)

    Hassan, Ahmed A.; Jackson, B.

    1989-01-01

    Despite the fact that the flow over a rotor blade is strongly influenced by locally three-dimensional and unsteady effects, practical experience has always demonstrated that substantial improvements in the aerodynamic performance can be gained by improving the steady two-dimensional characteristics of the airfoil(s) employed. The two phenomena known to have great impact on the overall rotor performance are: (1) retreating blade stall with the associated large pressure drag, and (2) compressibility effects on the advancing blade leading to shock formation and the associated wave drag and boundary-layer separation losses. It was concluded that: optimization routines are a powerful tool for finding solutions to multiple design point problems; the optimization process must be guided by the judicious choice of geometric and aerodynamic constraints; optimization routines should be appropriately coupled to viscous, not inviscid, transonic flow solvers; hybrid design procedures in conjunction with optimization routines represent the most efficient approach for rotor airfoil design; unsteady effects resulting in the delay of lift and moment stall should be modeled using simple empirical relations; and in-flight optimization of aerodynamic loads (e.g., use of variable rate blowing, flaps, etc.) can satisfy any number of requirements at design and off-design conditions.

  13. Using "big data" to optimally model hydrology and water quality across expansive regions

    USGS Publications Warehouse

    Roehl, E.A.; Cook, J.B.; Conrads, P.A.

    2009-01-01

    This paper describes a new divide-and-conquer approach that leverages big environmental data, utilizing all available categorical and time-series data without subjectivity, to empirically model hydrologic and water-quality behaviors across expansive regions. The approach decomposes large, intractable problems into smaller ones that are optimally solved; decomposes complex signals into behavioral components that are easier to model with "sub-models"; and employs a sequence of numerically optimizing algorithms that include time-series clustering; nonlinear, multivariate sensitivity analysis; predictive modeling using multi-layer perceptron artificial neural networks; and classification for selecting the best sub-models to make predictions at new sites. This approach has many advantages over traditional modeling approaches, including being faster and less expensive, more comprehensive in its use of available data, and more accurate in representing a system's physical processes. This paper describes the application of the approach to model groundwater levels in Florida, stream temperatures across Western Oregon and Wisconsin, and water depths in the Florida Everglades. © 2009 ASCE.
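
    The following is a schematic, hedged sketch of the divide-and-conquer idea described above (cluster the records, fit one "sub-model" per cluster, and use the nearest cluster to pick the sub-model at new sites). scikit-learn is assumed to be available; the data, features and hyper-parameters are hypothetical.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      X = rng.normal(size=(3000, 6))                 # hypothetical site/time features
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=3000)

      # 1) decompose the large problem into smaller ones via clustering
      clusterer = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      labels = clusterer.labels_

      # 2) fit one multi-layer perceptron "sub-model" per cluster
      sub_models = {}
      for c in np.unique(labels):
          m = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
          sub_models[c] = m.fit(X[labels == c], y[labels == c])

      # 3) predict at new sites with the sub-model of the nearest cluster
      X_new = rng.normal(size=(5, 6))
      pred = [sub_models[c].predict(x[None, :])[0]
              for c, x in zip(clusterer.predict(X_new), X_new)]
      print(np.round(pred, 3))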

  14. Optimal temperature for malaria transmission is dramatically lower than previously predicted

    USGS Publications Warehouse

    Mordecai, Erin A.; Paaijmans, Krijn P.; Johnson, Leah R.; Balzer, Christian; Ben-Horin, Tal; de Moor, Emily; McNally, Amy; Pawar, Samraat; Ryan, Sadie J.; Smith, Thomas C.; Lafferty, Kevin D.

    2013-01-01

    The ecology of mosquito vectors and malaria parasites affect the incidence, seasonal transmission and geographical range of malaria. Most malaria models to date assume constant or linear responses of mosquito and parasite life-history traits to temperature, predicting optimal transmission at 31 °C. These models are at odds with field observations of transmission dating back nearly a century. We build a model with more realistic ecological assumptions about the thermal physiology of insects. Our model, which includes empirically derived nonlinear thermal responses, predicts optimal malaria transmission at 25 °C (6 °C lower than previous models). Moreover, the model predicts that transmission decreases dramatically at temperatures > 28 °C, altering predictions about how climate change will affect malaria. A large data set on malaria transmission risk in Africa validates both the 25 °C optimum and the decline above 28 °C. Using these more accurate nonlinear thermal-response models will aid in understanding the effects of current and future temperature regimes on disease transmission.

  15. Optimal temperature for malaria transmission is dramatically lower than previously predicted

    USGS Publications Warehouse

    Mordecai, Erin A.; Paaijmans, Krijn P.; Johnson, Leah R.; Balzer, Christian; Ben-Horin, Tal; de Moor, Emily; McNally, Amy; Pawar, Samraat; Ryan, Sadie J.; Smith, Thomas C.; Lafferty, Kevin D.

    2013-01-01

    The ecology of mosquito vectors and malaria parasites affect the incidence, seasonal transmission and geographical range of malaria. Most malaria models to date assume constant or linear responses of mosquito and parasite life-history traits to temperature, predicting optimal transmission at 31 °C. These models are at odds with field observations of transmission dating back nearly a century. We build a model with more realistic ecological assumptions about the thermal physiology of insects. Our model, which includes empirically derived nonlinear thermal responses, predicts optimal malaria transmission at 25 °C (6 °C lower than previous models). Moreover, the model predicts that transmission decreases dramatically at temperatures > 28 °C, altering predictions about how climate change will affect malaria. A large data set on malaria transmission risk in Africa validates both the 25 °C optimum and the decline above 28 °C. Using these more accurate nonlinear thermal-response models will aid in understanding the effects of current and future temperature regimes on disease transmission.
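
    As a purely illustrative companion to the two records above, the toy script below combines generic unimodal (Brière-type) thermal responses for a few traits and locates the temperature that maximizes a relative transmission index. Every functional form and parameter is made up; the printed optimum depends entirely on them and is not the paper's 25 °C result.

      import numpy as np

      def briere(T, c, T0, Tm):
          """Briere-type unimodal thermal response, zero outside (T0, Tm)."""
          T = np.asarray(T, dtype=float)
          out = c * T * (T - T0) * np.sqrt(np.clip(Tm - T, 0.0, None))
          return np.where((T > T0) & (T < Tm), out, 0.0)

      T = np.linspace(10.0, 40.0, 601)
      bite_rate   = briere(T, 2.0e-4, 13.0, 40.0)   # mosquito biting rate (toy)
      development = briere(T, 6.5e-5, 14.0, 35.0)   # parasite development rate (toy)
      survival    = np.clip(1.0 - ((T - 24.0) / 14.0) ** 2, 0.0, None)  # quadratic (toy)

      relative_R0 = bite_rate * development * survival
      T_opt = T[np.argmax(relative_R0)]
      print(f"optimum transmission temperature in this toy model: {T_opt:.1f} C")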

  16. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under different criteria is presented; it includes both the functions that have empirically proved best and new functions that may be worth trying.
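
    For concreteness, the snippet below implements classical linear fitness scaling, a commonly used baseline choice rather than one of the optimal functions derived in the paper; the scaling target c and the sample fitness values are arbitrary.

      import numpy as np

      def linear_scaling(fitness, c=2.0):
          """Linear scaling f' = a*f + b that preserves the mean fitness and maps
          the best individual to c * mean; falls back if that would go negative."""
          f = np.asarray(fitness, dtype=float)
          f_avg, f_max, f_min = f.mean(), f.max(), f.min()
          if np.isclose(f_max, f_avg):
              return np.full_like(f, f_avg)
          a = (c - 1.0) * f_avg / (f_max - f_avg)
          b = f_avg * (f_max - c * f_avg) / (f_max - f_avg)
          if a * f_min + b < 0.0:          # would produce negative scaled fitness
              a = f_avg / (f_avg - f_min)  # instead map the worst individual to 0
              b = -a * f_min
          return a * f + b

      print(linear_scaling([1.0, 2.0, 3.0, 10.0]))   # -> [2.0, 2.667, 3.333, 8.0]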

  17. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    PubMed

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to two of the observed effects: overweighting of low probabilities and underweighting of high probabilities. We then investigate two plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
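
    A toy, heavily simplified version of the shrinkage idea described above (not the paper's combined informative/ignorance-prior model): a stated probability p is treated as if it came from n pseudo-observations and combined with a symmetric Beta(a, a) prior, whose posterior mean overweights small and underweights large probabilities. The pseudo-count n and prior strength a are hypothetical.

      import numpy as np

      def weighted_probability(p, n=10.0, a=2.0):
          """Posterior mean after combining a stated probability p (treated as
          n*p successes in n pseudo-trials) with a symmetric Beta(a, a) prior."""
          p = np.asarray(p, dtype=float)
          return (n * p + a) / (n + 2.0 * a)

      for p in (0.01, 0.10, 0.50, 0.90, 0.99):
          print(f"stated p = {p:4.2f}  ->  effective weight = {weighted_probability(p):.3f}")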

  18. Empirical validation of a real options theory based method for optimizing evacuation decisions within chemical plants.

    PubMed

    Reniers, G L L; Audenaert, A; Pauwels, N; Soudan, K

    2011-02-15

    This article empirically assesses and validates a methodology for making evacuation decisions in case of major fire accidents in chemical clusters. In this paper, a number of empirical results are presented, processed and discussed with respect to the implications and management of evacuation decisions in chemical companies. It is shown that, in realistic industrial settings, suboptimal interventions may result if the prospect of obtaining additional information at later stages of the decision process is ignored. Empirical results also show that the implications of interventions, as well as the required time and workforce to complete particular shutdown activities, may be very different from one company to another. Therefore, to be optimal from an economic viewpoint, it is essential that precautionary evacuation decisions be tailor-made per company. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. φq-field theory for portfolio optimization: “fat tails” and nonlinear correlations

    NASA Astrophysics Data System (ADS)

    Sornette, D.; Simonetti, P.; Andersen, J. V.

    2000-08-01

    Physics and finance are both fundamentally based on the theory of random walks (and their generalizations to higher dimensions) and on the collective behavior of large numbers of correlated variables. The archetype exemplifying this situation in finance is the portfolio optimization problem, in which one desires to diversify over a set of possibly dependent assets to optimize the return and minimize the risks. The standard mean-variance solution introduced by Markowitz and its subsequent developments is basically a mean-field Gaussian solution. It has severe limitations for practical applications due to the strongly non-Gaussian structure of distributions and the nonlinear dependence between assets. Here, we present in detail a general analytical characterization of the distribution of returns for a portfolio constituted of assets whose returns are described by an arbitrary joint multivariate distribution. To this end, we introduce a nonlinear transformation that maps the returns onto Gaussian variables whose covariance matrix provides a new measure of dependence between the non-normal returns, generalizing the covariance matrix into a nonlinear covariance matrix. This nonlinear covariance matrix is chiseled to the specific fat-tail structure of the underlying marginal distributions, thus ensuring stability and good conditioning. The portfolio distribution is then obtained as the solution of a mapping to a so-called φq field theory in particle physics, of which we offer an extensive treatment using Feynman diagrammatic techniques and large deviation theory, illustrated in detail for multivariate Weibull distributions. The interaction (non-mean-field) structure in this field theory is a direct consequence of the non-Gaussian nature of the distribution of asset price returns. We find that minimizing the portfolio variance (i.e. the relatively “small” risks) may often increase the large risks, as measured by higher normalized cumulants. Extensive empirical tests are presented on the foreign exchange market that validate the theory satisfactorily. For “fat tail” distributions, we show that an adequate prediction of the risks of a portfolio relies much more on the correct description of the tail structure than on their correlations. For the case of asymmetric return distributions, our theory allows us to generalize the return-risk efficient frontier concept to incorporate the dimensions of large risks embedded in the tail of the asset distributions. We demonstrate that it is often possible to increase the portfolio return while decreasing the large risks as quantified by the fourth- and higher-order cumulants. Exact theoretical formulas are validated by empirical tests.
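
    One standard way to realize the kind of nonlinear mapping described above is a rank-based Gaussianization (a copula-style transform); the sketch below is offered as an illustration of that general idea, not as the authors' specific construction, and the fat-tailed returns are synthetic.

      import numpy as np
      from scipy.stats import norm, rankdata

      def gaussianize(returns):
          """Map each column of a (T, n) return matrix to standard normal variables
          via its empirical ranks (a strictly monotonic, nonlinear transform)."""
          T = returns.shape[0]
          u = rankdata(returns, axis=0) / (T + 1.0)   # uniform scores in (0, 1)
          return norm.ppf(u)

      def nonlinear_covariance(returns):
          """Covariance of the Gaussianized returns: a dependence measure that is
          insensitive to the fat-tailed shape of the marginal distributions."""
          return np.cov(gaussianize(returns), rowvar=False)

      rng = np.random.default_rng(2)
      fat_tailed = rng.standard_t(df=3, size=(1000, 4))       # hypothetical assets
      print(nonlinear_covariance(fat_tailed).round(2))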

  20. A study of transonic aerodynamic analysis methods for use with a hypersonic aircraft synthesis code

    NASA Technical Reports Server (NTRS)

    Sandlin, Doral R.; Davis, Paul Christopher

    1992-01-01

    A means of performing routine transonic lift, drag, and moment analyses on hypersonic all-body and wing-body configurations was studied. The analysis method is to be used in conjunction with the Hypersonic Vehicle Optimization Code (HAVOC). A review of existing techniques is presented, after which three methods, chosen to represent a spectrum of capabilities, are tested and the results are compared with experimental data. The three methods consist of a wave drag code, a full potential code, and a Navier-Stokes code. The wave drag code, representing the empirical approach, has very fast CPU times but very limited and sporadic results. The full potential code provides results which compare favorably to the wind tunnel data, but with a dramatic increase in computational time. Even more extreme is the Navier-Stokes code, which provides the most favorable and complete results, but with a very large turnaround time. The full potential code, TRANAIR, is used for additional analyses because of the superior results it can provide over empirical and semi-empirical methods, and because of its automated grid generation. TRANAIR analyses include an all-body hypersonic cruise configuration and an oblique flying wing supersonic transport.

  1. Predicting stomatal responses to the environment from the optimization of photosynthetic gain and hydraulic cost.

    PubMed

    Sperry, John S; Venturas, Martin D; Anderegg, William R L; Mencuccini, Maurizio; Mackay, D Scott; Wang, Yujie; Love, David M

    2017-06-01

    Stomatal regulation presumably evolved to optimize CO2 for H2O exchange in response to changing conditions. If the optimization criterion can be readily measured or calculated, then stomatal responses can be efficiently modelled without recourse to empirical models or underlying mechanism. Previous efforts have been challenged by the lack of a transparent index for the cost of losing water. Yet it is accepted that stomata control water loss to avoid excessive loss of hydraulic conductance from cavitation and soil drying. Proximity to hydraulic failure and desiccation can represent the cost of water loss. If at any given instant, the stomatal aperture adjusts to maximize the instantaneous difference between photosynthetic gain and hydraulic cost, then a model can predict the trajectory of stomatal responses to changes in environment across time. Results of this optimization model are consistent with the widely used Ball-Berry-Leuning empirical model (r2 > 0.99) across a wide range of vapour pressure deficits and ambient CO2 concentrations for wet soil. The advantage of the optimization approach is the absence of empirical coefficients, applicability to dry as well as wet soil and prediction of plant hydraulic status along with gas exchange. © 2016 John Wiley & Sons Ltd.
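
    A toy numerical illustration of the gain-minus-cost idea summarized above; the saturating gain curve, the quartic cost term, the critical transpiration rate and all numbers are assumptions for illustration, not the authors' functional forms.

      import numpy as np

      def net_profit(g, vpd=1.5, ca=400.0, e_crit=8.0):
          """Toy instantaneous profit: saturating photosynthetic gain minus a
          hydraulic cost that rises steeply as transpiration approaches e_crit."""
          gain = ca * g / (g + 0.3)                # saturating gain (arbitrary units)
          transpiration = g * vpd                  # E = g * VPD (toy)
          cost = ca * (transpiration / e_crit) ** 4
          return gain - cost

      g = np.linspace(1e-3, 4.0, 2000)             # candidate stomatal conductances
      profit = net_profit(g)
      g_opt = g[np.argmax(profit)]
      print(f"toy optimal stomatal conductance: {g_opt:.2f} (arbitrary units)")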

  2. Design conceptuel d'un avion blended wing body de 200 passagers

    NASA Astrophysics Data System (ADS)

    Ammar, Sami

    The Blended Wing Body (BWB) builds on the flying wing concept and promises performance improvements over conventional aircraft. However, most studies have focused on large aircraft, and it is not clear whether the gains are the same for smaller aircraft. The main objective is to carry out the conceptual design of a 200-passenger BWB and compare the performance obtained with that of a conventional aircraft of equivalent payload and range. The design of the Blended Wing Body was carried out in the CEASIOM environment. This design platform, intended for conventional aircraft, was modified and additional tools were integrated in order to perform the aerodynamic, performance and stability analyses of the blended wing-body aircraft. An aircraft model is obtained in the CEASIOM geometry module AcBuilder from the design variables of a wing. Mass estimates are made from semi-empirical formulas adapted to the BWB geometry, and center-of-gravity and inertia calculations are made possible through a BWB model developed in CATIA. Low-fidelity methods, such as TORNADO and semi-empirical formulas, are used to analyze the aerodynamic performance and stability of the aircraft. The aerodynamic results are validated with a high-fidelity analysis using the FLUENT CFD software. An optimization process is implemented in order to obtain improved performance while maintaining a feasible design: the planform of the blended wing-body aircraft is optimized for a passenger capacity equivalent to that of an A320. The maximum-range performance of the optimized blended wing aircraft is compared to that of an A320 that was also optimized, and significant gains were observed. An analysis of the longitudinal and lateral flight dynamics is carried out on the BWB optimized for lift-to-drag ratio and mass. This study identified the stable and unstable modes of the aircraft and highlighted the stability problems associated with the angle-of-attack (short-period) oscillation and with the Dutch roll, given the absence of stabilizers.

  3. Optimizing the fine lock performance of the Hubble Space Telescope fine guidance sensors

    NASA Technical Reports Server (NTRS)

    Eaton, David J.; Whittlesey, Richard; Abramowicz-Reed, Linda; Zarba, Robert

    1993-01-01

    This paper summarizes the on-orbit performance to date of the three Hubble Space Telescope Fine Guidance Sensors (FGS's) in Fine Lock mode, with respect to acquisition success rate, ability to maintain lock, and star brightness range. The process of optimizing Fine Lock performance, including the reasoning underlying the adjustment of uplink parameters, and the effects of optimization are described. The Fine Lock optimization process has combined theoretical and experimental approaches. Computer models of the FGS have improved understanding of the effects of uplink parameters and fine error averaging on the ability of the FGS to acquire stars and maintain lock. Empirical data have determined the variation of the interferometric error characteristics (so-called 's-curves') between FGS's and over each FGS field of view, identified binary stars, and quantified the systematic error in Coarse Track (the mode preceding Fine Lock). On the basis of these empirical data, the values of the uplink parameters can be selected more precisely. Since launch, optimization efforts have improved FGS Fine Lock performance, particularly acquisition, which now enjoys a nearly 100 percent success rate. More recent work has been directed towards improving FGS tolerance of two conditions that exceed its original design requirements. First, large amplitude spacecraft jitter is induced by solar panel vibrations following day/night transitions. This jitter is generally much greater than the FGS's were designed to track, and while the tracking ability of the FGS's has been shown to exceed design requirements, losses of Fine Lock after day/night transitions are frequent. Computer simulations have demonstrated a potential improvement in Fine Lock tracking of vehicle jitter near terminator crossings. Second, telescope spherical aberration degrades the interferometric error signal in Fine Lock, but use of the FGS two-thirds aperture stop restores the transfer function with a corresponding loss of throughput. This loss requires the minimum brightness of acquired stars to be about one magnitude brighter than originally planned.

  4. Selection biases in empirical p(z) methods for weak lensing

    DOE PAGES

    Gruen, D.; Brimioulle, F.

    2017-02-23

    To measure the mass of foreground objects with weak gravitational lensing, one needs to estimate the redshift distribution of lensed background sources. This is commonly done in an empirical fashion, i.e. with a reference sample of galaxies of known spectroscopic redshift, matched to the source population. In this paper, we develop a simple decision tree framework that, under the ideal conditions of a large, purely magnitude-limited reference sample, allows an unbiased recovery of the source redshift probability density function p(z), as a function of magnitude and colour. We use this framework to quantify biases in empirically estimated p(z) caused by selection effects present in realistic reference and weak lensing source catalogues, namely (1) complex selection of reference objects by the targeting strategy and success rate of existing spectroscopic surveys and (2) selection of background sources by the success of object detection and shape measurement at low signal to noise. For intermediate-to-high redshift clusters, and for depths and filter combinations appropriate for ongoing lensing surveys, we find that (1) spectroscopic selection can cause biases above the 10 per cent level, which can be reduced to ≈5 per cent by optimal lensing weighting, while (2) selection effects in the shape catalogue bias mass estimates at or below the 2 per cent level. Finally, this illustrates the importance of completeness of the reference catalogues for empirical redshift estimation.

  5. Cationic lipids: molecular structure/ transfection activity relationships and interactions with biomembranes.

    PubMed

    Koynova, Rumiana; Tenchov, Boris

    2010-01-01

    Synthetic cationic lipids, which form complexes (lipoplexes) with polyanionic DNA, are presently the most widely used constituents of nonviral gene carriers. A large number of cationic amphiphiles have been synthesized and tested in transfection studies. However, due to the complexity of the transfection pathway, no general schemes have emerged for correlating the cationic lipid chemistry with their transfection efficacy and the approaches for optimizing their molecular structures are still largely empirical. Here we summarize data on the relationships between transfection activity and cationic lipid molecular structure and demonstrate that the transfection activity depends in a systematic way on the lipid hydrocarbon chain structure. A number of examples, including a large series of cationic phosphatidylcholine derivatives, show that optimum transfection is displayed by lipids with chain length of approximately 14 carbon atoms and that the transfection efficiency strongly increases with increase of chain unsaturation, specifically upon replacement of saturated with monounsaturated chains.

  6. Maximum plant height and the biophysical factors that limit it.

    PubMed

    Niklas, Karl J

    2007-03-01

    Basic engineering theory and empirically determined allometric relationships for the biomass partitioning patterns of extant tree-sized plants show that the mechanical requirements for vertical growth do not impose intrinsic limits on the maximum heights that can be reached by species with woody, self-supporting stems. This implies that maximum tree height is constrained by other factors, among which hydraulic constraints are plausible. A review of the available information on scaling relationships observed for large tree-sized plants, nevertheless, indicates that mechanical and hydraulic requirements impose dual restraints on plant height and thus, may play equally (but differentially) important roles during the growth of arborescent, large-sized species. It may be the case that adaptations to mechanical and hydraulic phenomena have optimized growth, survival and reproductive success rather than longevity and mature size.

  7. Positivity in healthcare: relation of optimism to performance.

    PubMed

    Luthans, Kyle W; Lebsack, Sandra A; Lebsack, Richard R

    2008-01-01

    The purpose of this paper is to explore the linkage between nurses' levels of optimism and performance outcomes. The study sample consisted of 78 nurses in all areas of a large healthcare facility (hospital) in the Midwestern United States. The participants completed surveys to determine their current state of optimism. Supervisory performance appraisal data were gathered in order to measure performance outcomes. Spearman correlations and a one-way ANOVA were used to analyze the data. The results indicated a highly significant positive relationship between the nurses' measured state of optimism and their supervisors' ratings of their commitment to the mission of the hospital, a measure of contribution to increasing customer satisfaction, and an overall measure of work performance. This was an exploratory study. Larger sample sizes and longitudinal data would be beneficial because it is probable that state optimism levels will vary and that it might be more accurate to measure state optimism at several points over time in order to better predict performance outcomes. Finally, the study design does not imply causation. Suggestions for effectively developing and managing nurses' optimism to positively impact their performance are provided. To date, there has been very little empirical evidence assessing the impact that positive psychological capacities such as optimism of key healthcare professionals may have on performance. This paper was designed to help begin to fill this void by examining the relationship between nurses' self-reported optimism and their supervisors' evaluations of their performance.

  8. Low-resolution simulations of vesicle suspensions in 2D

    NASA Astrophysics Data System (ADS)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions and, correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.

  9. The disagreement between the ideal observer and human observers in hardware and software imaging system optimization: theoretical explanations and evidence

    NASA Astrophysics Data System (ADS)

    He, Xin

    2017-03-01

    The ideal observer is widely used in imaging system optimization. One practical question remains open: do the ideal and human observers have the same preference in system optimization and evaluation? Based on the ideal observer's mathematical properties proposed by Barrett et al. and the empirical properties of human observers investigated by Myers et al., I attempt to pursue the general rules regarding the applicability of the ideal observer in system optimization. Particularly, in software optimization, the ideal observer pursues data conservation while humans pursue data presentation or perception. In hardware optimization, the ideal observer pursues a system with the maximum total information, while humans pursue a system with the maximum selected (e.g., certain frequency bands) information. These different objectives may result in different system optimizations between human and the ideal observers. Thus, an ideal observer optimized system is not necessarily optimal for humans. I cite empirical evidence in search and detection tasks, in hardware and software evaluation, in X-ray CT, pinhole imaging, as well as emission computed tomography to corroborate the claims. (Disclaimer: the views expressed in this work do not necessarily represent those of the FDA)

  10. A Simple Principled Approach for Modeling and Understanding Uniform Color Metrics

    PubMed Central

    Smet, Kevin A.G.; Webster, Michael A.; Whitehead, Lorne A.

    2016-01-01

    An important goal in characterizing human color vision is to order color percepts in a way that captures their similarities and differences. This has resulted in the continuing evolution of “uniform color spaces,” in which the distances within the space represent the perceptual differences between the stimuli. While these metrics are now very successful in predicting how color percepts are scaled, they do so in largely empirical, ad hoc ways, with limited reference to actual mechanisms of color vision. In this article our aim is to instead begin with general and plausible assumptions about color coding, and then develop a model of color appearance that explicitly incorporates them. We show that many of the features of empirically-defined color order systems (such as those of Munsell, Pantone, NCS, and others) as well as many of the basic phenomena of color perception, emerge naturally from fairly simple principles of color information encoding in the visual system and how it can be optimized for the spectral characteristics of the environment. PMID:26974939

  11. Creating single-copy genetic circuits

    PubMed Central

    Lee, Jeong Wook; Gyorgy, Andras; Cameron, D. Ewen; Pyenson, Nora; Choi, Kyeong Rok; Way, Jeffrey C.; Silver, Pamela A.; Del Vecchio, Domitilla; Collins, James J.

    2017-01-01

    Synthetic biology is increasingly used to develop sophisticated living devices for basic and applied research. Many of these genetic devices are engineered using multi-copy plasmids, but as the field progresses from proof-of-principle demonstrations to practical applications, it is important to develop single-copy synthetic modules that minimize consumption of cellular resources and can be stably maintained as genomic integrants. Here we use empirical design, mathematical modeling and iterative construction and testing to build single-copy, bistable toggle switches with improved performance and reduced metabolic load that can be stably integrated into the host genome. Deterministic and stochastic models led us to focus on basal transcription to optimize circuit performance and helped to explain the resulting circuit robustness across a large range of component expression levels. The design parameters developed here provide important guidance for future efforts to convert functional multi-copy gene circuits into optimized single-copy circuits for practical, real-world use. PMID:27425413

  12. Exploring the patterns and evolution of self-organized urban street networks through modeling

    NASA Astrophysics Data System (ADS)

    Rui, Yikang; Ban, Yifang; Wang, Jiechen; Haas, Jan

    2013-03-01

    As one of the most important subsystems in cities, urban street networks have recently been well studied by using the approach of complex networks. This paper proposes a growing model for self-organized urban street networks. The model involves a competition among new centers with different values of attraction radius and a local optimal principle of both geometrical and topological factors. We find that with the model growth, the local optimization in the connection process and appropriate probability for the loop construction well reflect the evolution strategy in real-world cities. Moreover, different values of attraction radius in centers competition process lead to morphological change in patterns including urban network, polycentric and monocentric structures. The model succeeds in reproducing a large diversity of road network patterns by varying parameters. The similarity between the properties of our model and empirical results implies that a simple universal growth mechanism exists in self-organized cities.

  13. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the returns are multivariate normally distributed and the investor is risk averse. However, this model has not been extensively used in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative model, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed and is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi Absolute Deviation (M-LSAD) model, proposed by Speranza [3]. We compare these models to determine which gives the more appropriate solution for investors.
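
    A compact, hedged sketch of a MAD-style linear program in the spirit of the Konno-Yamazaki formulation cited above (no short sales, full investment, minimum expected return); the return data and target return are synthetic.

      import numpy as np
      from scipy.optimize import linprog

      def mad_portfolio(returns, target_return):
          """Minimize the mean absolute deviation of portfolio returns subject to
          a minimum expected return, full investment and no short sales."""
          T, n = returns.shape
          mu = returns.mean(axis=0)
          dev = returns - mu                                  # (T, n) deviations
          # decision variables: n weights x, then T auxiliary deviations d
          c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])
          # d_t >= +dev_t @ x  and  d_t >= -dev_t @ x, written as A_ub z <= 0
          A_ub = np.block([[ dev, -np.eye(T)],
                           [-dev, -np.eye(T)]])
          b_ub = np.zeros(2 * T)
          # expected-return constraint: -mu @ x <= -target_return
          A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
          b_ub = np.append(b_ub, -target_return)
          A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # sum(x) = 1
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                        bounds=[(0, None)] * (n + T), method="highs")
          return res.x[:n]

      rng = np.random.default_rng(3)
      r = 0.01 * rng.standard_t(df=4, size=(250, 5))          # hypothetical returns
      print(mad_portfolio(r, target_return=r.mean()).round(3))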

  14. Bridging process-based and empirical approaches to modeling tree growth

    Treesearch

    Harry T. Valentine; Annikki Makela; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  15. Reconciling long-term cultural diversity and short-term collective social behavior.

    PubMed

    Valori, Luca; Picciolo, Francesco; Allansdottir, Agnes; Garlaschelli, Diego

    2012-01-24

    An outstanding open problem is whether collective social phenomena occurring over short timescales can systematically reduce cultural heterogeneity in the long run, and whether offline and online human interactions contribute differently to the process. Theoretical models suggest that short-term collective behavior and long-term cultural diversity are mutually excluding, since they require very different levels of social influence. The latter jointly depends on two factors: the topology of the underlying social network and the overlap between individuals in multidimensional cultural space. However, while the empirical properties of social networks are intensively studied, little is known about the large-scale organization of real societies in cultural space, so that random input specifications are necessarily used in models. Here we use a large dataset to perform a high-dimensional analysis of the scientific beliefs of thousands of Europeans. We find that interopinion correlations determine a nontrivial ultrametric hierarchy of individuals in cultural space. When empirical data are used as inputs in models, ultrametricity has strong and counterintuitive effects. On short timescales, it facilitates a symmetry-breaking phase transition triggering coordinated social behavior. On long timescales, it suppresses cultural convergence by restricting it within disjoint groups. Moreover, ultrametricity implies that these results are surprisingly robust to modifications of the dynamical rules considered. Thus the empirical distribution of individuals in cultural space appears to systematically optimize the coexistence of short-term collective behavior and long-term cultural diversity, which can be realized simultaneously for the same moderate level of mutual influence in a diverse range of online and offline settings.

  16. Response surface methodology: A non-conventional statistical tool to maximize the throughput of Streptomyces species biomass and their bioactive metabolites.

    PubMed

    Latha, Selvanathan; Sivaranjani, Govindhan; Dhanasekaran, Dharumadurai

    2017-09-01

    Among diverse actinobacteria, Streptomyces is a renowned ongoing source for the production of a large number of secondary metabolites, furnishing immeasurable pharmacological and biological activities. Hence, to meet the demand of new lead compounds for human and animal use, research is constantly targeting the bioprospecting of Streptomyces. Optimization of media components and physicochemical parameters is a plausible approach for the exploration of intensified production of novel as well as existing bioactive metabolites from various microbes, which is usually achieved by a range of classical techniques including one factor at a time (OFAT). However, the major drawbacks of conventional optimization methods have directed the use of statistical optimization approaches in fermentation process development. Response surface methodology (RSM) is one of the empirical techniques extensively used for modeling, optimization and analysis of fermentation processes. To date, several researchers have implemented RSM in different bioprocess optimization accountable for the production of assorted natural substances from Streptomyces in which the results are very promising. This review summarizes some of the recent RSM adopted studies for the enhanced production of antibiotics, enzymes and probiotics using Streptomyces with the intention to highlight the significance of Streptomyces as well as RSM to the research community and industries.

  17. Idealized Experiments for Optimizing Model Parameters Using a 4D-Variational Method in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang

    2018-04-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having their initial condition optimized only, and having their initial condition plus this additional model parameter optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
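
    A toy twin-experiment sketch of the parameter-optimization idea described above: synthetic "observations" are generated with a known coupling parameter, and the parameter is recovered by minimizing the model-data misfit. The model below is a deliberately simple stand-in for the ICM, not the authors' system, and a direct scalar minimization replaces the adjoint-based 4D-Var machinery.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def toy_model(alpha, sl, n_steps=200, dt=0.1):
          """Stand-in coupled model: SST anomaly forced by alpha * F(sea level)."""
          sst = np.zeros(n_steps)
          for t in range(1, n_steps):
              te = alpha * np.tanh(sl[t - 1])          # entrained temperature ~ alpha * F(SL)
              sst[t] = sst[t - 1] + dt * (te - 0.2 * sst[t - 1])
          return sst

      rng = np.random.default_rng(4)
      sl = np.sin(np.linspace(0.0, 12.0, 200)) + 0.1 * rng.normal(size=200)

      alpha_true = 0.8
      obs = toy_model(alpha_true, sl) + 0.02 * rng.normal(size=200)   # synthetic SST "data"

      cost = lambda a: np.sum((toy_model(a, sl) - obs) ** 2)          # misfit functional
      alpha_hat = minimize_scalar(cost, bounds=(0.0, 2.0), method="bounded").x
      print(f"true alpha = {alpha_true}, recovered alpha = {alpha_hat:.3f}")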

  18. Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, C.; Zhang, R. H.

    2017-12-01

    Large biases exist in real-time ENSO prediction, which is attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te = αTe × FTe(SL). The introduced parameter, αTe, represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments having only their initial condition optimized, and having their initial condition plus this additional model parameter optimized, are compared. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.

  19. Phase space interrogation of the empirical response modes for seismically excited structures

    NASA Astrophysics Data System (ADS)

    Paul, Bibhas; George, Riya C.; Mishra, Sudib K.

    2017-07-01

    Conventional Phase Space Interrogation (PSI) for structural damage assessment relies on exciting the structure with a low-dimensional chaotic waveform, thereby significantly limiting its applicability to large structures. The PSI technique is here extended to structures subjected to seismic excitation. The high dimensionality of the phase space for seismic response(s) is overcome by Empirical Mode Decomposition (EMD), which decomposes the responses into a number of intrinsic low-dimensional oscillatory modes, referred to as Intrinsic Mode Functions (IMFs). Along with their low dimensionality, a few IMFs retain sufficient information about the system dynamics to reflect damage-induced changes. The mutually conflicting requirements of low dimensionality and sufficiency of dynamic information are handled by the optimal choice of the IMF(s), which is shown to be the third/fourth IMFs. The optimal IMF(s) are employed for the reconstruction of the phase-space attractor following Takens' embedding theorem. The widely used Changes in Phase Space Topology (CPST) feature is then applied to these phase portrait(s) to derive the damage-sensitive feature, referred to as the CPST of the IMFs (CPST-IMF). The legitimacy of CPST-IMF as a damage-sensitive feature is established by assessing its variation with a number of damage scenarios benchmarked in the IASC-ASCE building. The damage localization capability, remarkable tolerance to noise contamination and robustness under different seismic excitations of the feature are demonstrated.
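
    A minimal sketch of the phase-space reconstruction step mentioned above (time-delay embedding per Takens' theorem) applied to a stand-in for a selected IMF; the delay and embedding dimension are illustrative choices, not values from the paper.

      import numpy as np

      def delay_embed(x, dim=3, tau=8):
          """Reconstruct a phase-space trajectory from a scalar series x using
          time-delay embedding: rows are [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
          x = np.asarray(x, dtype=float)
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      t = np.linspace(0.0, 50.0, 5000)
      imf_like = np.sin(t) * np.exp(-0.01 * t)      # stand-in for a selected IMF
      attractor = delay_embed(imf_like, dim=3, tau=8)
      print(attractor.shape)                        # (n_points, 3) reconstructed states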

  20. Learning about new products: an empirical study of physicians' behavior.

    PubMed

    Ferreyra, Maria Marta; Kosenok, Grigory

    2011-01-01

    We develop and estimate a model of market demand for a new pharmaceutical, whose quality is learned through prescriptions by forward-looking physicians. We use a panel of antiulcer prescriptions from Italian physicians between 1990 and 1992 and focus on a new molecule available since 1990. We solve the model by calculating physicians' optimal decision rules as functions of their beliefs about the new pharmaceutical. According to our counterfactuals, physicians' initial pessimism and uncertainty can have large, negative effects on their propensity to prescribe the new drug and on expected health outcomes. In contrast, subsidizing the new good can mitigate informational losses.

  1. The efficacy of using inventory data to develop optimal diameter increment models

    Treesearch

    Don C. Bragg

    2002-01-01

    Most optimal tree diameter growth models have arisen through either the conceptualization of physiological processes or the adaptation of empirical increment models. However, surprisingly little effort has been invested in the melding of these approaches even though it is possible to develop theoretically sound, computationally efficient optimal tree growth models...

  2. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models, with a large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on the hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, the Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and the use of multiple satellite observations with this framework is an effective way to improve terrestrial biosphere models.

  3. Optimization of a middle atmosphere diagnostic scheme

    NASA Astrophysics Data System (ADS)

    Akmaev, Rashid A.

    1997-06-01

    A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above mentioned rms temperature deviation provides an objective measure of the `distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.

  4. Design of a secondary ionization target for direct production of a C- beam from CO2 pulses for online AMS.

    PubMed

    Salazar, Gary; Ognibene, Ted

    2013-01-01

    We designed and optimized a novel device ("target") that directs a CO2 gas pulse onto a Ti surface where a Cs+ beam generates C- from the CO2. This secondary ionization target enables an accelerator mass spectrometer to ionize pulses of CO2 in the negative mode to measure 14C/12C isotopic ratios in real time. The design of the target was based on computational flow dynamics, the ionization mechanism, and empirical optimization. As part of the ionization mechanism, the adsorption of CO2 on the Ti surface was fitted with the Jovanovic-Freundlich isotherm model using empirical and simulation data. The inferred adsorption constants were in good agreement with other works. The empirical optimization showed that the amount of injected carbon and the flow speed of the helium carrier gas improve the ionization efficiency and the amount of 12C- produced until a saturation point is reached. A linear dynamic range between 150 and 1000 ng of C and an optimum carrier gas flow speed of around 0.1 mL/min were observed. It was also shown that the ionization depends on the area of the Ti surface and the Cs+ beam cross-section. A range of ionization efficiencies of 1-2.5% was obtained by optimizing the described parameters.
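
    As a rough illustration of the isotherm-fitting step, the sketch below fits one common form of the Jovanovic-Freundlich isotherm, theta = 1 - exp(-(K p)^n), to synthetic coverage data; the paper's exact parameterization and data may differ.

```python
# Sketch of fitting an adsorption isotherm to synthetic coverage data.
import numpy as np
from scipy.optimize import curve_fit

def jovanovic_freundlich(p, K, n):
    """Fractional surface coverage as a function of (partial) pressure p."""
    return 1.0 - np.exp(-(K * p) ** n)

rng = np.random.default_rng(1)
pressure = np.linspace(0.05, 5.0, 30)                         # arbitrary units
coverage = jovanovic_freundlich(pressure, 0.8, 1.3) + rng.normal(0, 0.02, 30)

popt, _ = curve_fit(jovanovic_freundlich, pressure, coverage, p0=[1.0, 1.0])
print(f"fitted K = {popt[0]:.3f}, n = {popt[1]:.3f}")
```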

  5. Procedures for Empirical Determination of En-Route Criterion Levels.

    ERIC Educational Resources Information Center

    Moncrief, Michael H.

    En-route Criterion Levels (ECLs) are defined as decision rules for predicting pupil readiness to advance through an instructional sequence. This study investigated the validity of present ECLs in an individualized mathematics program and tested procedures for empirically determining optimal ECLs. Retest scores and subsequent progress were…

  6. Multiobjective hyper heuristic scheme for system design and optimization

    NASA Astrophysics Data System (ADS)

    Rafique, Amer Farhan

    2012-11-01

    As system design becomes more multifaceted, integrated, and complex, the traditional single-objective approach to optimal design is becoming less efficient and effective. Single-objective optimization methods yield a unique optimal solution, whereas multiobjective methods yield a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. Another objective is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the ability to study and analyze a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function that manages the low-level meta-heuristics to increase the likelihood of reaching a global optimum. A Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and adds population diversity, resulting in accomplishment of the pre-defined goals set in the proposed scheme.

  7. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize the AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
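
    The objective that such combination methods maximize is the empirical AUC of a linear score, which for two samples reduces to a Mann-Whitney-type pairwise comparison. The sketch below computes it for a naive equal-weight combination of many synthetic weak markers; it is not the paper's pairwise algorithm.

```python
# Sketch: empirical AUC of a linear biomarker combination (Mann-Whitney form).
import numpy as np

def empirical_auc(scores_cases, scores_controls):
    """P(case score > control score) plus half the probability of ties."""
    diff = scores_cases[:, None] - scores_controls[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(2)
p = 50                                            # many weak markers
x_cases = rng.normal(0.2, 1.0, size=(100, p))     # small shift per marker
x_controls = rng.normal(0.0, 1.0, size=(100, p))

w = np.ones(p) / p                                # naive equal-weight combination
auc_combo = empirical_auc(x_cases @ w, x_controls @ w)
auc_single = empirical_auc(x_cases[:, 0], x_controls[:, 0])
print(f"single weak marker AUC ~ {auc_single:.2f}, combined AUC ~ {auc_combo:.2f}")
```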

  8. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize the AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901

  9. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2x over the version that used default optimizations but no auto-tuning. We demonstrate that observations made from micro-benchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.

  10. Business owners' optimism and business performance after a natural disaster.

    PubMed

    Bronson, James W; Faircloth, James B; Valentine, Sean R

    2006-12-01

    Previous work indicates that individuals' optimism is related to superior performance in adverse situations. This study examined correlations between business owners' optimism scores and measures of business recovery after flooding but found only weak support (very small common variance) for a relation with sales recovery. Using traditional measures of recovery, there was little empirical evidence in this study that optimism would be of value in identifying businesses at risk after a natural disaster.

  11. Optimal monetary policy and oil price shocks

    NASA Astrophysics Data System (ADS)

    Kormilitsina, Anna

    This dissertation comprises two chapters. In the first chapter, I investigate the role of systematic U.S. monetary policy in the presence of oil price shocks. The second chapter is devoted to studying different approaches to modeling energy demand. In an influential paper, Bernanke, Gertler, and Watson (1997, 2004) argue that systematic monetary policy exacerbated the recessions the U.S. economy experienced in the aftermath of post-World War II oil price shocks. In the first chapter of this dissertation, I critically evaluate this claim in the context of an estimated medium-scale model of the U.S. business cycle. Specifically, I solve for the Ramsey optimal monetary policy in the medium-scale dynamic stochastic general equilibrium (DSGE) model of Schmitt-Grohe and Uribe (2005). To model the demand for oil, I use the approach of Finn (2000), in which the utilization of capital services requires oil usage. In the related literature on the macroeconomic effects of oil price shocks, it is common to calibrate the structural parameters of the model. In contrast to this literature, I estimate the parameters of my DSGE model. The estimation strategy involves matching the impulse responses from the theoretical model to responses predicted by an empirical model, using the Laplace-type estimator proposed by Chernozhukov and Hong (2003) as an alternative to classical extremum estimation. To obtain the empirical impulse responses, I identify an oil price shock in a structural VAR (SVAR) model of the U.S. business cycle. The SVAR model predicts that, in response to an oil price increase, GDP, investment, hours, capital utilization, and the real wage fall, while the nominal interest rate and inflation rise. These findings are economically intuitive and in line with the existing empirical evidence. Comparing the actual and the Ramsey optimal monetary policy responses to an oil price shock, I find that the optimal policy allows for more inflation, a larger drop in wages, and a rise in hours compared with those actually observed. The central finding of this chapter is that the optimal policy is associated with a smaller drop in GDP and other macroeconomic variables. The latter result therefore confirms the claim of Bernanke, Gertler, and Watson that monetary policy was to a large extent responsible for the recessions that followed the oil price shocks. However, under the optimal policy, interest rates are tightened even more than what is predicted by the empirical model. This result contrasts sharply with the claim of Bernanke, Gertler, and Watson that the Federal Reserve exacerbated recessions by excessively tightening interest rates in response to oil price increases. In contrast to related studies that focus on output stabilization, I find that eliminating the negative response of GDP to an oil price shock is not desirable. In the second chapter of this dissertation, I compare two approaches to modeling the energy sector. Because the share of energy in GDP is small, models of energy have been criticized for their inability to explain the sizeable effects of energy price increases on economic activity. I find that if the price of energy is an exogenous AR(1) process, then the two modeling approaches produce responses of GDP similar in size to those observed in most empirical studies, but fail to reproduce the timing and shape of the response. The DSGE framework can resolve the timing and shape of the impulse responses but fails to replicate their size. Thus, in DSGE frameworks, amplifying mechanisms for the effect of the energy price shock and estimation-based calibration of the model parameters are needed to reproduce the size of the GDP response to the energy price shock.

  12. Volatility in financial markets: stochastic models and empirical results

    NASA Astrophysics Data System (ADS)

    Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.

    2002-11-01

    We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model describes the pdf well in the region of low volatility values, whereas the Hull and White model better approximates the empirical pdf for large volatility values. Both models fail to describe the empirical pdf over a moderately large volatility range.

  13. Therapeutic Drug Monitoring Guides the Management of Crohn's Patients with Secondary Loss of Response to Adalimumab.

    PubMed

    Restellini, Sophie; Chao, Che-Yung; Lakatos, Peter L; Aruljothy, Achuthan; Aziz, Haya; Kherad, Omar; Bitton, Alain; Wild, Gary; Afif, Waqqas; Bessissow, Talat

    2018-04-13

    Managing loss of response (LOR) in Crohn's disease (CD) patients remains challenging. Compelling evidence supports therapeutic drug monitoring (TDM) to guide management in patients on infliximab, but data for other biologics are less robust. We aimed to assess whether empiric dose escalation led to improved clinical outcomes in addition to TDM-guided optimization in CD patients with LOR to adalimumab (ADA). A retrospective chart review of patients followed between 2014 and 2016 at the McGill IBD Center with index TDM for LOR to ADA was performed. Primary outcomes were composite remission at 3, 6, and 12 months in those with empiric adjustments versus TDM-guided optimization. There were 104 patients (54.8% men) included in the study. Of this group, 81 patients (77.9%) had a serum level (SL) ≥5 µg/ml at index TDM, with a median value of 12 µg/ml (IQR 6.1-16.5). There were 10 patients (9.6%) who had undetectable SL with high anti-ADA antibodies, and 48 (46.2%) received empiric escalation. TDM led to a change in treatment in 58 patients (55.8%). Among them, 28 (48.3%) discontinued ADA, 12 (21.7%) had addition of an immunomodulator or steroid, and 18 (31%) had ADA dose escalation. Empiric dose escalation before TDM-based optimization was not associated with improved outcomes at 3, 6, and 12 months, irrespective of SL levels. A clear SL cutoff associated with composite remission was not identified. Our data do not support empiric dose adjustment beyond that based on the result of the TDM in patients with LOR to ADA. TDM limits unnecessary dose escalation and provides an appropriate treatment strategy without compromising clinical outcomes.

  14. Optimal design of the first stage of the plate-fin heat exchanger for the EAST cryogenic system

    NASA Astrophysics Data System (ADS)

    Qingfeng, JIANG; Zhigang, ZHU; Qiyong, ZHANG; Ming, ZHUANG; Xiaofei, LU

    2018-03-01

    The size of the heat exchanger is an important factor determining the dimensions of the cold box in helium cryogenic systems. In this paper, a counter-flow multi-stream plate-fin heat exchanger is optimized by means of a spatial interpolation method coupled with a hybrid genetic algorithm. Compared with empirical correlations, this spatial interpolation algorithm based on a kriging model can more precisely predict the Colburn heat transfer factors and Fanning friction factors of offset-strip fins. Moreover, strict computational fluid dynamics simulations can be carried out to predict the heat transfer and friction performance in the absence of reliable experimental data. Within the constraints of heat exchange requirements, maximum allowable pressure drop, existing manufacturing techniques, and structural strength, a mathematical model of an optimized design with discrete and continuous variables based on a hybrid genetic algorithm is established in order to minimize the volume. The results show that for the first-stage heat exchanger in the EAST refrigerator, the structural size could be decreased from the original 2.200 m × 0.600 m × 0.627 m to the optimized 1.854 m × 0.420 m × 0.340 m, a large reduction in volume. The current work demonstrates that the proposed method could be a useful tool for optimization in an actual engineering project during the practical design process.

  15. The Alexandria library, a quantum-chemical database of molecular properties for force field development.

    PubMed

    Ghahremanpour, Mohammad M; van Maaren, Paul J; van der Spoel, David

    2018-04-10

    Data quality as well as library size are crucial issues for force field development. In order to predict molecular properties in a large chemical space, the foundation on which force fields are built needs to encompass a large variety of chemical compounds. The tabulated molecular physicochemical properties also need to be accurate. Due to the limited transparency of the data used to develop existing force fields, it is hard to establish data quality, and reusability is low. This paper presents the Alexandria library as an open and freely accessible database of optimized molecular geometries, frequencies, electrostatic moments up to the hexadecupole, electrostatic potential, polarizabilities, and thermochemistry, obtained from quantum chemistry calculations for 2704 compounds. Values are tabulated and, where available, compared to experimental data. This library can assist systematic development and training of empirical force fields for a broad range of molecules.

  16. The Alexandria library, a quantum-chemical database of molecular properties for force field development

    NASA Astrophysics Data System (ADS)

    Ghahremanpour, Mohammad M.; van Maaren, Paul J.; van der Spoel, David

    2018-04-01

    Data quality as well as library size are crucial issues for force field development. In order to predict molecular properties in a large chemical space, the foundation on which force fields are built needs to encompass a large variety of chemical compounds. The tabulated molecular physicochemical properties also need to be accurate. Due to the limited transparency of the data used to develop existing force fields, it is hard to establish data quality, and reusability is low. This paper presents the Alexandria library as an open and freely accessible database of optimized molecular geometries, frequencies, electrostatic moments up to the hexadecupole, electrostatic potential, polarizabilities, and thermochemistry, obtained from quantum chemistry calculations for 2704 compounds. Values are tabulated and, where available, compared to experimental data. This library can assist systematic development and training of empirical force fields for a broad range of molecules.

  17. Criticism of generally accepted fundamentals and methodologies of traffic and transportation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerner, Boris S.

    It is explained why the set of fundamental empirical features of traffic breakdown (a transition from free flow to congested traffic) should be the empirical basis for any traffic and transportation theory that can be reliably used for control and optimization in traffic networks. It is shown that the generally accepted fundamentals and methodologies of traffic and transportation theory are not consistent with the set of fundamental empirical features of traffic breakdown at a highway bottleneck. These fundamentals and methodologies include (i) Lighthill-Whitham-Richards (LWR) theory, (ii) the General Motors (GM) model class (for example, the Herman, Gazis et al. GM model, Gipps’s model, Payne’s model, Newell’s optimal velocity (OV) model, Wiedemann’s model, the Bando et al. OV model, Treiber’s IDM, and Krauß’s model), (iii) the understanding of highway capacity as a particular stochastic value, and (iv) principles for traffic and transportation network optimization and control (for example, Wardrop’s user equilibrium (UE) and system optimum (SO) principles). As an alternative to these generally accepted fundamentals and methodologies, we discuss three-phase traffic theory as the basis for traffic flow modeling and briefly consider the network breakdown minimization (BM) principle for the optimization of traffic and transportation networks with road bottlenecks.

  18. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    PubMed

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.
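
    The plug-in idea can be sketched in a few lines: threshold the sample correlations entrywise, then evaluate the functional on the thresholded matrix. The threshold rule and the Frobenius-type functional below are illustrative choices, not necessarily those analyzed in the paper.

```python
# Sketch of a thresholded plug-in estimator for a correlation-matrix functional
# (here the squared Frobenius norm of the off-diagonal entries).
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 50
X = rng.normal(size=(n, p))               # truly uncorrelated data

R = np.corrcoef(X, rowvar=False)
tau = 2.0 * np.sqrt(np.log(p) / n)        # a simple illustrative threshold level
R_thresh = np.where(np.abs(R) >= tau, R, 0.0)
np.fill_diagonal(R_thresh, 1.0)

off_diag = R_thresh - np.eye(p)
frobenius_sq = np.sum(off_diag ** 2)      # plug-in estimate of the functional
print("thresholded plug-in ||R - I||_F^2 =", frobenius_sq)
```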

  19. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES

    PubMed Central

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    2016-01-01

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics. PMID:26806986

  20. Optimizing empiric therapy for Gram-negative bloodstream infections in children.

    PubMed

    Chao, Y; Reuter, C; Kociolek, L K; Patel, R; Zheng, X; Patel, S J

    2018-06-01

    Antimicrobial stewardship can be challenging in children with bloodstream infections (BSIs) caused by Gram-negative bacilli (GNB). This retrospective cohort study explored how data elements in the electronic health record could potentially optimize empiric antibiotic therapy for BSIs caused by GNB, via the construction of customized antibiograms for categorical GNB infections and identification of opportunities to minimize organism-drug mismatch and decrease time to effective therapy. Our results suggest potential strategies that could be implemented at key decision points in prescribing at initiation, modification, and targeting of therapy. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  1. Rational design of gene-based vaccines.

    PubMed

    Barouch, Dan H

    2006-01-01

    Vaccine development has traditionally been an empirical discipline. Classical vaccine strategies include the development of attenuated organisms, whole killed organisms, and protein subunits, followed by empirical optimization and iterative improvements. While these strategies have been remarkably successful for a wide variety of viruses and bacteria, these approaches have proven more limited for pathogens that require cellular immune responses for their control. In this review, current strategies to develop and optimize gene-based vaccines are described, with an emphasis on novel approaches to improve plasmid DNA vaccines and recombinant adenovirus vector-based vaccines. Copyright 2006 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.

  2. All Might Have Won, But Not All Have the Prize: Optimal Treatment for Substance Abuse Among Adolescents with Conduct Problems

    PubMed Central

    Spas, Jayson; Ramsey, Susan; Paiva, Andrea L.; Stein, L.A.R.

    2012-01-01

    Considerable evidence from the literature on treatment outcomes indicates that substance abuse treatment among adolescents with conduct problems varies widely. Treatments commonly used among this population are cognitive-behavioral therapy (CBT), 12-step facilitation, multisystemic therapy (MST), psychoeducation (PE), and motivational interviewing (MI). This manuscript thoroughly and systematically reviews the available literature to determine which treatment is optimal for substance-abusing adolescents with conduct problems. Results suggest that although there are several evidence-based and empirically supported treatments, those that incorporate family-based intervention consistently provide the most positive treatment outcomes. In particular, this review further reveals that although many interventions have gained empirical support over the years, only one holds the prize as being the optimal treatment of choice for substance abuse treatment among adolescents with conduct problems. PMID:23170066

  3. Optimal Sequential Rules for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  4. Optimization in Bilingual Language Use

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.

    2013-01-01

    Pieter Muysken's keynote paper, "Language contact outcomes as a result of bilingual optimization strategies", undertakes an ambitious project to theoretically unify different empirical outcomes of language contact, for instance, SLA, pidgins and Creoles, and code-switching. Muysken has dedicated a life-time to researching, rather…

  5. Empirically Guided Coordination of Multiple Evidence-Based Treatments: An Illustration of Relevance Mapping in Children's Mental Health Services

    ERIC Educational Resources Information Center

    Chorpita, Bruce F.; Bernstein, Adam; Daleiden, Eric L.

    2011-01-01

    Objective: Despite substantial progress in the development and identification of psychosocial evidence-based treatments (EBTs) in mental health, there is minimal empirical guidance for selecting an optimal "set" of EBTs maximally applicable and generalizable to a chosen service sample. Relevance mapping is a proposed methodology that…

  6. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
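
    The Weibull comparison rests on a standard maximum likelihood fit to the normalized durations, as sketched below on synthetic data; the q-exponential fit and the real limit-order-book data are not reproduced here.

```python
# Sketch: maximum likelihood fit of a Weibull distribution to normalized
# intertrade durations (synthetic waiting times used as a stand-in).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
durations = rng.weibull(0.7, size=5000)          # synthetic waiting times
durations /= durations.mean()                    # normalize as in the paper

shape, loc, scale = stats.weibull_min.fit(durations, floc=0)
loglik = np.sum(stats.weibull_min.logpdf(durations, shape, loc, scale))
print(f"Weibull shape = {shape:.3f}, scale = {scale:.3f}, log-likelihood = {loglik:.1f}")
```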

  7. Novel functional hepatitis C virus glycoprotein isolates identified using an optimized viral pseudotype entry assay.

    PubMed

    Urbanowicz, Richard A; McClure, C Patrick; King, Barnabas; Mason, Christopher P; Ball, Jonathan K; Tarr, Alexander W

    2016-09-01

    Retrovirus pseudotypes are a highly tractable model used to study the entry pathways of enveloped viruses. This model has been extensively applied to the study of the hepatitis C virus (HCV) entry pathway, preclinical screening of antiviral antibodies and for assessing the phenotype of patient-derived viruses using HCV pseudoparticles (HCVpp) possessing the HCV E1 and E2 glycoproteins. However, not all patient-isolated clones produce particles that are infectious in this model. This study investigated factors that might limit phenotyping of patient-isolated HCV glycoproteins. Genetically related HCV glycoproteins from quasispecies in individual patients were discovered to behave very differently in this entry model. Empirical optimization of the ratio of packaging construct and glycoprotein-encoding plasmid was required for successful HCVpp genesis for different clones. The selection of retroviral packaging construct also influenced the function of HCV pseudoparticles. Some glycoprotein constructs tolerated a wide range of assay parameters, while others were much more sensitive to alterations. Furthermore, glycoproteins previously characterized as unable to mediate entry were found to be functional. These findings were validated using chimeric cell-cultured HCV bearing these glycoproteins. Using the same empirical approach we demonstrated that generation of infectious ebolavirus pseudoviruses (EBOVpv) was also sensitive to the amount and ratio of plasmids used, and that protocols for optimal production of these pseudoviruses are dependent on the exact virus glycoprotein construct. These findings demonstrate that it is crucial for studies utilizing pseudoviruses to conduct empirical optimization of pseudotype production for each specific glycoprotein sequence to achieve optimal titres and facilitate accurate phenotyping.

  8. Economic analysis of secondary and enhanced oil recovery techniques in Wyoming

    NASA Astrophysics Data System (ADS)

    Kara, Erdal

    This dissertation primarily aims to theoretically analyze a firm's optimization of enhanced oil recovery (EOR) and carbon dioxide sequestration under different social policies and empirically analyze the firm's optimization of enhanced oil recovery. The final part of the dissertation empirically analyzes how geological factors and water injection management influence oil recovery. The first chapter builds a theoretical model to analyze economic optimization of EOR and geological carbon sequestration under different social policies. Specifically, it analyzes how social policies on sequestration influence the extent of oil operations, optimal oil production and CO2 sequestration. The theoretical results show that the socially optimal policy is a subsidy on the net CO2 sequestration, assuming negative net emissions from EOR. Such a policy is expected to increase a firm's total carbon dioxide sequestration. The second chapter statistically estimates the theoretical oil production model and its different versions. Empirical results are not robust over different estimation techniques and not in line with the theoretical production model. The last part of the second chapter utilizes a simplified version of theoretical model and concludes that EOR via CO2 injection improves oil recovery. The final chapter analyzes how a contemporary oil recovery technology (water flooding of oil reservoirs) and various reservoir-specific geological factors influence oil recovery in Wyoming. The results show that there is a positive concave relationship between cumulative water injection and cumulative oil recovery and also show that certain geological factors affect the oil recovery. Moreover, the curvature of the concave functional relationship between cumulative water injection and oil recovery is reservoir-specific due to heterogeneities among different reservoirs.

  9. A Fast Optimization Method for General Binary Code Learning.

    PubMed

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.

  10. Bayesian accounts of covert selective attention: A tutorial review.

    PubMed

    Vincent, Benjamin T

    2015-05-01

    Decision making and optimal observer models offer an important theoretical approach to the study of covert selective attention. While their probabilistic formulation allows quantitative comparison to human performance, the models can be complex and their insights are not always immediately apparent. Part 1 establishes the theoretical appeal of the Bayesian approach, and introduces the way in which probabilistic approaches can be applied to covert search paradigms. Part 2 presents novel formulations of Bayesian models of 4 important covert attention paradigms, illustrating optimal observer predictions over a range of experimental manipulations. Graphical model notation is used to present models in an accessible way and Supplementary Code is provided to help bridge the gap between model theory and practical implementation. Part 3 reviews a large body of empirical and modelling evidence showing that many experimental phenomena in the domain of covert selective attention are a set of by-products. These effects emerge as the result of observers conducting Bayesian inference with noisy sensory observations, prior expectations, and knowledge of the generative structure of the stimulus environment.
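
    The core computation of such an optimal-observer model is Bayes' rule over possible target locations given noisy sensory samples. The sketch below shows this for a single display with illustrative signal and noise values; it is not the tutorial's Supplementary Code.

```python
# Sketch of a Bayesian observer for covert search: combine a spatial prior with
# noisy observations to obtain a posterior over target location.
import numpy as np

rng = np.random.default_rng(5)
n_loc = 8
prior = np.full(n_loc, 1.0 / n_loc)          # uniform prior over locations
target = 3
signal = 1.0                                  # target adds +1 at its location
noise_sd = 1.5

obs = rng.normal(0.0, noise_sd, n_loc)
obs[target] += signal

def loglik_given_target(i):
    """Log-likelihood of all observations assuming the target is at location i."""
    mu = np.zeros(n_loc)
    mu[i] = signal
    return -0.5 * np.sum(((obs - mu) / noise_sd) ** 2)

log_post = np.log(prior) + np.array([loglik_given_target(i) for i in range(n_loc)])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior over locations:", np.round(post, 3), "MAP location:", post.argmax())
```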

  11. Microreactor-based mixing strategy suppresses product inhibition to enhance sugar yields in enzymatic hydrolysis for cellulosic biofuel production.

    PubMed

    Chakraborty, Saikat; Singh, Prasun Kumar; Paramashetti, Pawan

    2017-08-01

    A novel microreactor-based, energy-efficient process of complete convective mixing in a macroreactor up to an optimal mixing time, followed by no mixing in 200-400 μl microreactors, enhances glucose and reducing sugar yields by up to 35% and 29%, respectively, while saving 72-90% of the energy incurred on reactor mixing in the enzymatic hydrolysis of cellulose. Empirical exponential relations are provided for determining the optimal mixing time, during which convective mixing in the macroreactor promotes mass transport of the cellulase enzyme to the solid Avicel substrate, while the latter phase of no mixing in the microreactor suppresses product inhibition by preventing the inhibitors (glucose and cellobiose) from homogenizing across the reactor. Sugar yield increases linearly with the liquid-to-solid height ratio (rh), irrespective of substrate loading and microreactor size, since a large rh allows the inhibitors to diffuse in the liquid away from the solids, thus reducing product inhibition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A project optimization for small watercourses restoration in the northern part of the Volga-Akhtuba floodplain by the geoinformation and hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Voronin, Alexander; Vasilchenko, Ann; Khoperskov, Alexander

    2018-03-01

    A project for restoring small watercourses in the northern part of the Volga-Akhtuba floodplain is considered, with the aim of increasing the watering of the territory during small and medium floods. The irregular topography, the complex structure of the floodplain valley consisting of a large number of small watercourses, and the presence of urbanized and agricultural areas require careful preliminary analysis of the hydrological safety and efficiency of geographically distributed project activities. Using digital terrain and watercourse-structure models of the floodplain together with a hydrodynamic flood model, the hydrological safety and efficiency of several project implementation strategies have been analyzed. The objective function values have been obtained from hydrodynamic calculations of the flooding of the floodplain territory for virtual digital terrain models simulating alternatives for the geographically distributed project activities. The comparative efficiency of several empirical strategies for the geographically distributed project activities, as well as a two-stage exact solution method for the optimization problem, has been studied.

  13. Efficiency in nonequilibrium molecular dynamics Monte Carlo simulations

    DOE PAGES

    Radak, Brian K.; Roux, Benoît

    2016-10-07

    Hybrid algorithms combining nonequilibrium molecular dynamics and Monte Carlo (neMD/MC) offer a powerful avenue for improving the sampling efficiency of computer simulations of complex systems. These neMD/MC algorithms are also increasingly finding use in applications where conventional approaches are impractical, such as constant-pH simulations with explicit solvent. However, selecting an optimal nonequilibrium protocol for maximum efficiency often represents a non-trivial challenge. This work evaluates the efficiency of a broad class of neMD/MC algorithms and protocols within the theoretical framework of linear response theory. The approximations are validated against constant-pH MD simulations and shown to provide accurate predictions of neMD/MC performance. An assessment of a large set of protocols confirms (both theoretically and empirically) that a linear work protocol gives the best neMD/MC performance. Lastly, a well-defined criterion for optimizing the time parameters of the protocol is proposed and demonstrated with an adaptive algorithm that improves the performance on-the-fly with minimal cost.

  14. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization uses Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical for optimization. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
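
    The selection step of Ordinal Optimization can be illustrated by ranking a large design set with a cheap, noisy evaluation and checking how often the selected subset contains truly good designs; all quantities below are synthetic.

```python
# Sketch of the ordinal-optimization idea: rank designs with a noisy crude
# model, keep a small selected subset, and measure overlap with the true best.
import numpy as np

rng = np.random.default_rng(6)
n_designs = 10_000
true_cost = rng.normal(size=n_designs)                    # "expensive" values
noisy_cost = true_cost + rng.normal(0, 1.0, n_designs)    # cheap noisy estimates

g, s = 100, 100                                 # "good enough" set, selected set
good = set(np.argsort(true_cost)[:g])           # truly best g designs
selected = set(np.argsort(noisy_cost)[:s])      # best s by the noisy ranking

overlap = len(good & selected)
print(f"selected set of {s} contains {overlap} of the true top {g} designs")
```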

  15. Drug-drug interaction predictions with PBPK models and optimal multiresponse sampling time designs: application to midazolam and a phase I compound. Part 1: comparison of uniresponse and multiresponse designs using PopDes.

    PubMed

    Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode

    2008-12-01

    To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of the apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound and potential CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data, using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate the PK profiles of both drugs when co-administered. Then, using the simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes, using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM, computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (nine different sampling times in total) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. For every design and compound, CL/F was well estimated (RSE < 20% for MDZ and < 25% for SX), and the expected RSEs from PopDes were in the same range as the empirical RSEs. Moreover, there was no bias in CL/F estimation. Since the MD required only five sampling times compared with nine for the two UDs, the D-optimal sampling times of the MD were included in a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. This global approach, including PBPK simulations, population PK modelling and multiresponse optimal design, allowed the design, without any in vivo data, of a clinical trial using sparse sampling capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
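
    The design criterion itself is generic: build the Fisher information matrix from model sensitivities at the candidate sampling times and maximize its determinant. The sketch below does this for a toy one-compartment model, not the study's population PK models or PopDes.

```python
# Sketch of D-optimal sampling-time selection: compare candidate designs by
# the determinant of the Fisher information matrix of a simple PK model.
import numpy as np
from itertools import combinations

dose, V, k, sigma = 100.0, 20.0, 0.2, 0.5      # assumed illustrative values

def conc(t, V, k):
    """Mono-exponential concentration-time model."""
    return (dose / V) * np.exp(-k * t)

def fisher_information(times):
    eps = 1e-5
    F = np.zeros((2, 2))
    for t in times:
        # Numerical sensitivities of the prediction w.r.t. (V, k).
        dV = (conc(t, V + eps, k) - conc(t, V - eps, k)) / (2 * eps)
        dk = (conc(t, V, k + eps) - conc(t, V, k - eps)) / (2 * eps)
        g = np.array([dV, dk])
        F += np.outer(g, g) / sigma ** 2
    return F

candidate_times = [0.5, 1, 2, 4, 8, 12, 24]
best = max(combinations(candidate_times, 4),
           key=lambda d: np.linalg.det(fisher_information(d)))
print("D-optimal 4-point design from the candidates:", best)
```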

  16. Optimal exploitation strategies for an animal population in a Markovian environment: A theory and an example

    USGS Publications Warehouse

    Anderson, D.R.

    1975-01-01

    Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least up to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models, and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component of the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest or harvest rate, or designed to maintain a constant breeding population size, is inefficient.
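
    A stochastic dynamic programming formulation of this kind can be sketched as value iteration on a coarse grid of population and environment states with a Markovian environment; the growth and harvest dynamics below are purely illustrative, not the Mallard model.

```python
# Sketch of harvest-policy optimization by stochastic dynamic programming on a
# coarse grid: population states, a serially correlated environment, and a toy
# growth-minus-harvest model.
import numpy as np

pop_states = np.linspace(1.0, 10.0, 10)        # breeding population (arbitrary units)
env_states = np.array([0.5, 1.0, 1.5])         # environment index (poor/average/good)
P_env = np.array([[0.6, 0.3, 0.1],             # Markov transition of the environment
                  [0.25, 0.5, 0.25],
                  [0.1, 0.3, 0.6]])
harvest_rates = np.linspace(0.0, 0.5, 11)
beta = 0.95                                     # discount factor

def next_pop(pop, env, h):
    grown = pop * (1.0 + 0.4 * env) * (1.0 - h)   # toy growth minus harvest
    return np.clip(grown, pop_states[0], pop_states[-1])

V = np.zeros((len(pop_states), len(env_states)))
for _ in range(300):                             # value iteration
    V_new = np.empty_like(V)
    for i, pop in enumerate(pop_states):
        for j, env in enumerate(env_states):
            best = -np.inf
            for h in harvest_rates:
                reward = h * pop                               # harvested animals
                k = np.abs(pop_states - next_pop(pop, env, h)).argmin()
                best = max(best, reward + beta * P_env[j] @ V[k])
            V_new[i, j] = best
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

print("value function at the average environment state:", np.round(V[:, 1], 2))
```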

  17. Factorial Based Response Surface Modeling with Confidence Intervals for Optimizing Thermal Optical Transmission Analysis of Atmospheric Black Carbon

    EPA Science Inventory

    We demonstrate how thermal-optical transmission analysis (TOT) for refractory light-absorbing carbon in atmospheric particulate matter was optimized with empirical response surface modeling. TOT employs pyrolysis to distinguish the mass of black carbon (BC) from organic carbon (...

  18. Optimal Admission to Higher Education

    ERIC Educational Resources Information Center

    Albaek, Karsten

    2017-01-01

    This paper analyses admission decisions when students from different high school tracks apply for admission to university programmes. I derive a criterion that is optimal in the sense that it maximizes the graduation rates of the university programmes. The paper contains an empirical analysis that documents the relevance of theory and illustrates…

  19. Optimization Techniques for College Financial Aid Managers

    ERIC Educational Resources Information Center

    Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.

    2010-01-01

    In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…

  20. Dealing with Multiple Solutions in Structural Vector Autoregressive Models.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2016-01-01

    Structural vector autoregressive models (VARs) hold great potential for psychological science, particularly for time series data analysis. They capture the magnitude, direction of influence, and temporal (lagged and contemporaneous) nature of relations among variables. Unified structural equation modeling (uSEM) is an optimal structural VAR instantiation, according to large-scale simulation studies, and it is implemented within an SEM framework. However, little is known about the uniqueness of uSEM results. Thus, the goal of this study was to investigate whether multiple solutions result from uSEM analysis and, if so, to demonstrate ways to select an optimal solution. This was accomplished with two simulated data sets, an empirical data set concerning children's dyadic play, and modifications to the group iterative multiple model estimation (GIMME) program, which implements uSEMs with group- and individual-level relations in a data-driven manner. Results revealed multiple solutions when there were large contemporaneous relations among variables. Results also verified several ways to select the correct solution when the complete solution set was generated, such as the use of cross-validation, maximum standardized residuals, and information criteria. This work has immediate and direct implications for the analysis of time series data and for the inferences drawn from those data concerning human behavior.

  1. Resource allocation to reproduction in animals.

    PubMed

    Kooijman, Sebastiaan A L M; Lika, Konstadia

    2014-11-01

    The standard Dynamic Energy Budget (DEB) model assumes that a fraction κ of mobilised reserve is allocated to somatic maintenance plus growth, while the rest is allocated to maturity maintenance plus maturation (in embryos and juveniles) or reproduction (in adults). All DEB parameters have been estimated for 276 animal species from most large phyla and all chordate classes. The goodness of fit is generally excellent. We compared the estimated values of κ with those that would maximise reproduction in fully grown adults with abundant food. Only 13% of these species show a reproduction rate close to the maximum possible (assuming that κ can be controlled), another 4% have κ lower than the optimal value, and 83% have κ higher than the optimal value. Strong empirical support hence exists for the conclusion that reproduction is generally not maximised. We also compared the parameters of the wild chicken with those of races selected for meat and egg production and found that the latter indeed maximise reproduction in terms of κ, while surface-specific assimilation was not affected by selection. We suggest that small values of κ relate to the down-regulation of maximum body size, and large values to the down-regulation of reproduction. We briefly discuss the ecological context for these findings. © 2014 The Authors. Biological Reviews © 2014 Cambridge Philosophical Society.

  2. Assessing pretreatment reactor scaling through empirical analysis

    DOE PAGES

    Lischeske, James J.; Crawford, Nathan C.; Kuhn, Erik; ...

    2016-10-10

    Pretreatment is a critical step in the biochemical conversion of lignocellulosic biomass to fuels and chemicals. Due to the complexity of the physicochemical transformations involved, predictively scaling up technology from bench- to pilot-scale is difficult. This study examines how pretreatment effectiveness under nominally similar reaction conditions is influenced by pretreatment reactor design and scale using four different pretreatment reaction systems ranging from a 3 g batch reactor to a 10 dry-ton/d continuous reactor. The reactor systems examined were an Automated Solvent Extractor (ASE), Steam Explosion Reactor (SER), ZipperClave(R) reactor (ZCR), and Large Continuous Horizontal-Screw Reactor (LHR). To our knowledge, this is the first such study performed on pretreatment reactors across a range of reaction conditions (time and temperature) and at different reactor scales. The comparative pretreatment performance results obtained for each reactor system were used to develop response surface models for total xylose yield after pretreatment and total sugar yield after pretreatment followed by enzymatic hydrolysis. Near- and very-near-optimal regions were defined as the sets of conditions that the model identified as producing yields within one and two standard deviations of the optimum yield. Optimal conditions identified in the smallest-scale system (the ASE) were within the near-optimal region of the largest-scale reactor system evaluated. A reaction severity factor modeling approach was shown to inadequately describe the optimal conditions in the ASE, incorrectly identifying a large set of sub-optimal conditions (as defined by the RSM) as optimal. The maximum total sugar yields for the ASE and LHR were 95%, while 89% was the optimum observed in the ZipperClave. The optimum condition identified using the automated and less costly to operate ASE system was within the very-near-optimal space for the total xylose yield of both the ZCR and the LHR, and within the near-optimal space for total sugar yield for the LHR. This indicates that the ASE is a good tool for cost-effectively finding near-optimal conditions for operating pilot-scale systems, which may be used as starting points for further optimization. Additionally, using a severity-factor approach to optimization was found to be inadequate compared to a multivariate optimization method. As a result, the ASE and the LHR were able to achieve significantly higher total sugar yields after enzymatic hydrolysis relative to the ZCR, despite having similar optimal conditions and total xylose yields. This underscores the importance of incorporating mechanical disruption into pretreatment reactor designs to achieve high enzymatic digestibilities.
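
    The response-surface step is essentially fitting a quadratic model of yield in time and temperature and locating its maximizer within the experimental bounds, as in the sketch below on synthetic data (not the study's measurements).

```python
# Sketch of a quadratic response-surface fit for yield vs. pretreatment time
# and temperature, followed by locating the in-bounds optimum.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
time = rng.uniform(5, 30, 40)          # min
temp = rng.uniform(150, 200, 40)       # deg C
yield_obs = (95 - 0.05 * (time - 15) ** 2 - 0.01 * (temp - 180) ** 2
             + rng.normal(0, 1.0, 40))

def design(t, T):
    # Full quadratic model: intercept, linear, interaction, and square terms.
    return np.column_stack([np.ones_like(t), t, T, t * T, t ** 2, T ** 2])

beta, *_ = np.linalg.lstsq(design(time, temp), yield_obs, rcond=None)

def predicted_yield(x):
    t, T = x
    return float(design(np.array([t]), np.array([T])) @ beta)

res = minimize(lambda x: -predicted_yield(x), x0=[15.0, 175.0],
               bounds=[(5, 30), (150, 200)])
print("estimated optimum (time, temp):", np.round(res.x, 1),
      "predicted yield:", round(-res.fun, 1))
```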

  3. Assessing pretreatment reactor scaling through empirical analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lischeske, James J.; Crawford, Nathan C.; Kuhn, Erik

    Pretreatment is a critical step in the biochemical conversion of lignocellulosic biomass to fuels and chemicals. Due to the complexity of the physicochemical transformations involved, predictively scaling up technology from bench- to pilot-scale is difficult. This study examines how pretreatment effectiveness under nominally similar reaction conditions is influenced by pretreatment reactor design and scale using four different pretreatment reaction systems ranging from a 3 g batch reactor to a 10 dry-ton/d continuous reactor. The reactor systems examined were an Automated Solvent Extractor (ASE), Steam Explosion Reactor (SER), ZipperClave(R) reactor (ZCR), and Large Continuous Horizontal-Screw Reactor (LHR). To our knowledge, this is the first such study performed on pretreatment reactors across a range of reaction conditions (time and temperature) and at different reactor scales. The comparative pretreatment performance results obtained for each reactor system were used to develop response surface models for total xylose yield after pretreatment and total sugar yield after pretreatment followed by enzymatic hydrolysis. Near- and very-near-optimal regions were defined as the sets of conditions that the model identified as producing yields within one and two standard deviations of the optimum yield. Optimal conditions identified in the smallest-scale system (the ASE) were within the near-optimal region of the largest-scale reactor system evaluated. A reaction severity factor modeling approach was shown to inadequately describe the optimal conditions in the ASE, incorrectly identifying a large set of sub-optimal conditions (as defined by the RSM) as optimal. The maximum total sugar yields for the ASE and LHR were 95%, while 89% was the optimum observed in the ZipperClave. The optimum condition identified using the automated and less costly to operate ASE system was within the very-near-optimal space for the total xylose yield of both the ZCR and the LHR, and within the near-optimal space for total sugar yield for the LHR. This indicates that the ASE is a good tool for cost-effectively finding near-optimal conditions for operating pilot-scale systems, which may be used as starting points for further optimization. Additionally, using a severity-factor approach to optimization was found to be inadequate compared to a multivariate optimization method. As a result, the ASE and the LHR were able to achieve significantly higher total sugar yields after enzymatic hydrolysis relative to the ZCR, despite having similar optimal conditions and total xylose yields. This underscores the importance of incorporating mechanical disruption into pretreatment reactor designs to achieve high enzymatic digestibilities.

  4. Neural Meta-Memes Framework for Combinatorial Optimization

    NASA Astrophysics Data System (ADS)

    Song, Li Qin; Lim, Meng Hiot; Ong, Yew Soon

    In this paper, we present a Neural Meta-Memes Framework (NMMF) for combinatorial optimization. NMMF is a framework which models basic optimization algorithms as memes and manages them dynamically when solving combinatorial problems. NMMF encompasses neural networks which serve as the overall planner/coordinator to balance the workload between memes. We show the efficacy of the proposed NMMF through an empirical study on a class of combinatorial problems, the quadratic assignment problem (QAP).

  5. Adopting epidemic model to optimize medication and surgical intervention of excess weight

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2017-01-01

    We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations, as well as the cost of the program, is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
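    The abstract does not give the model equations, so the following is only a toy sketch of the kind of three-compartment ODE system with an intervention participation rate u it describes; the compartment names, rate constants, and cost weighting are all hypothetical and do not reproduce Sun (2017).

```python
import numpy as np
from scipy.integrate import solve_ivp

def weight_dynamics(t, y, beta, gamma, delta, u):
    """Toy compartments: N (normal weight), S (overweight), O (obese).
    beta: progression N->S, gamma: progression S->O, delta: spontaneous
    remission O->S, u: participation rate in the medication/surgical
    intervention (here moving S->N and O->S). Illustrative only."""
    N, S, O = y
    dN = -beta * N + u * S
    dS = beta * N - gamma * S - u * S + (delta + u) * O
    dO = gamma * S - (delta + u) * O
    return [dN, dS, dO]

y0 = [0.4, 0.35, 0.25]                      # hypothetical initial weight distribution
params = dict(beta=0.05, gamma=0.04, delta=0.01)

for u in (0.0, 0.05, 0.15):                 # compare fixed participation rates
    sol = solve_ivp(weight_dynamics, (0, 50), y0, args=(*params.values(), u))
    N, S, O = sol.y[:, -1]
    cost = u * 50                           # stand-in for intervention cost over the horizon
    print(f"u={u:.2f}  overweight={S:.3f}  obese={O:.3f}  weighted objective={S + O + cost:.3f}")
```

    Consistent with the abstract's observation, moving people out of the obese compartment in such a structure tends to shrink O while temporarily inflating S, which is why the weighted objective rather than a single compartment is minimized.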

  6. Optimized retrievals of precipitable water from the VAS 'split window'

    NASA Technical Reports Server (NTRS)

    Chesters, Dennis; Robinson, Wayne D.; Uccellini, Louis W.

    1987-01-01

    Precipitable water fields have been retrieved from the VISSR Atmospheric Sounder (VAS) using a radiation transfer model for the differential water vapor absorption between the 11- and 12-micron 'split window' channels. Previous moisture retrievals using only the split window channels provided very good space-time continuity but poor absolute accuracy. This note describes how retrieval errors can be significantly reduced from ±0.9 to ±0.6 g/cm² by empirically optimizing the effective air temperature and absorption coefficients used in the two-channel model. The differential absorption between the VAS 11- and 12-micron channels, empirically estimated from 135 colocated VAS-RAOB observations, is found to be approximately 50 percent smaller than the theoretical estimates. Similar discrepancies have been noted previously between theoretical and empirical absorption coefficients applied to the retrieval of sea surface temperatures using radiances observed by VAS and polar-orbiting satellites. These discrepancies indicate that radiation transfer models for the 11-micron window appear to be less accurate than the satellite observations.

  7. Development of an Empirical Model for Optimization of Machining Parameters to Minimize Power Consumption

    NASA Astrophysics Data System (ADS)

    Kant Garg, Girish; Garg, Suman; Sangwan, K. S.

    2018-04-01

    The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of optimum machining parameters for machine tools is therefore significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption has been determined using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
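    As a rough illustration of the workflow described (fit a second-order response surface to turning experiments, then minimize it through a smaller-the-better desirability transform), here is a sketch with synthetic data; the coded factors, coefficients, and grid search stand in for the paper's actual experimental design and desirability settings.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)

# Synthetic turning experiments: columns = cutting speed, feed, depth of cut (coded units)
X = rng.uniform(-1, 1, size=(27, 3))
power = (1.2 + 0.8*X[:, 0] + 0.5*X[:, 1] + 0.3*X[:, 2]
         + 0.2*X[:, 0]*X[:, 1] + 0.1*X[:, 0]**2
         + rng.normal(0, 0.02, 27))                 # "measured" power, kW (made up)

def quadratic_design_matrix(X):
    """Full second-order RSM model: intercept, linear, interaction and squared terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), power, rcond=None)

def predicted_power(x):
    return quadratic_design_matrix(np.atleast_2d(x)) @ beta

# Smaller-the-better desirability, then a coarse grid search for the optimum
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)]*3)).reshape(3, -1).T
p = predicted_power(grid)
desirability = (p.max() - p) / (p.max() - p.min())
best = grid[np.argmax(desirability)]
print("coded machining parameters minimizing predicted power:", best)
```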

  8. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

  9. Consistent Chemical Mechanism from Collaborative Data Processing

    DOE PAGES

    Slavinskaya, Nadezda; Starcke, Jan-Hendrik; Abbasi, Mehdi; ...

    2016-04-01

    The numerical tool of the Process Informatics Model (PrIMe) is a mathematically rigorous and numerically efficient approach for the analysis and optimization of chemical systems. It handles heterogeneous data and is scalable to a large number of parameters. The Bound-to-Bound Data Collaboration module of the automated data-centric infrastructure of PrIMe was used for the systematic uncertainty and data consistency analyses of the H2/CO reaction model (73/17) and 94 experimental targets (ignition delay times). An empirical rule for evaluation of the shock tube experimental data is proposed. The initial results demonstrate clear benefits of the PrIMe methods for an evaluation of the kinetic data quality and data consistency and for developing predictive kinetic models.

  10. Addressing Climate Change in Long-Term Water Planning Using Robust Decisionmaking

    NASA Astrophysics Data System (ADS)

    Groves, D. G.; Lempert, R.

    2008-12-01

    Addressing climate change in long-term natural resource planning is difficult because future management conditions are deeply uncertain and the range of possible adaptation options is so extensive. These conditions pose challenges to standard optimization decision-support techniques. This talk will describe a methodology called Robust Decisionmaking (RDM) that can complement more traditional analytic approaches by utilizing screening-level water management models to evaluate large numbers of strategies against a wide range of plausible future scenarios. The presentation will describe a recent application of the methodology to evaluate climate adaptation strategies for the Inland Empire Utilities Agency in Southern California. This project found that RDM can provide a useful way of addressing climate change uncertainty and identifying robust adaptation strategies.

  11. Empirically Derived Optimal Growth Equations For Hardwoods and Softwoods in Arkansas

    Treesearch

    Don C. Bragg

    2002-01-01

    Accurate growth projections are critical to reliable forest models, and ecologically based simulators can improve silvicultural predictions because of their sensitivity to change and their capacity to produce long-term forecasts. Potential relative increment (PRI) optimal diameter growth equations for loblolly pine, shortleaf pine, sweetgum, and white oak were fit to...

  12. The Relationship among Principals' Technology Leadership, Teaching Innovation, and Students' Academic Optimism in Elementary Schools

    ERIC Educational Resources Information Center

    Hsieh, Chuan-Chung; Yen, Hung-Chin; Kuan, Liu-Yen

    2014-01-01

    This study empirically investigates the relationships among principals' technology leadership, teaching innovations, and students' academic optimism by surveying elementary school educators across Taiwan. Of the total 1,080 questionnaires distributed, 755 valid surveys were returned for a 69.90% return rate. Teachers were asked to indicate the…

  13. Optimal Placement of Dynamic Var Sources by Using Empirical Controllability Covariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Huang, Weihong; Sun, Kai

    In this paper, the empirical controllability covariance (ECC), which is calculated around the considered operating condition of a power system, is applied to quantify the degree of controllability of system voltages under specific dynamic var source locations. An optimal dynamic var source placement method addressing fault-induced delayed voltage recovery (FIDVR) issues is further formulated as an optimization problem that maximizes the determinant of ECC. The optimization problem is effectively solved by the NOMAD solver, which implements the mesh adaptive direct search algorithm. The proposed method is tested on an NPCC 140-bus system and the results show that the proposed method with fault-specified ECC can solve the FIDVR issue caused by the most severe contingency with fewer dynamic var sources than the voltage sensitivity index (VSI)-based method. The proposed method with fault-unspecified ECC does not depend on the settings of the contingency and can address more FIDVR issues than the VSI method when placing the same number of SVCs under different fault durations. It is also shown that the proposed method can help mitigate voltage collapse.
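    The ECC used here is closely related to the empirical controllability Gramian, built by perturbing each candidate input channel, simulating the response, and accumulating the covariance of the resulting state deviations. The sketch below applies a simplified version of that construction to a toy linear system and picks the var placement that maximizes det(ECC) by exhaustive search; the dynamics, perturbation sizes, and two-location budget are assumptions, and the paper itself uses the NOMAD mesh adaptive direct search on a full power-system model rather than brute force.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 6                                                 # toy state dimension (stand-in for bus voltages)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))    # stable toy dynamics
candidate_B = np.eye(n)                               # column j = injecting var support at "bus" j

def empirical_controllability_cov(A, B, dt=0.01, T=5.0, c=1.0):
    """Simplified empirical controllability covariance: kick each input channel
    by +/- c, integrate the state trajectory with explicit Euler, and accumulate
    the covariance of the deviations."""
    steps = int(T / dt)
    W = np.zeros((A.shape[0], A.shape[0]))
    for j in range(B.shape[1]):
        for sign in (+1.0, -1.0):
            x = sign * c * B[:, j]            # impulse-like initial deviation
            for _ in range(steps):
                W += np.outer(x, x) * dt
                x = x + dt * (A @ x)          # dx/dt = A x
    return W / (2 * B.shape[1] * c**2)

# Choose 2 of the 6 candidate locations maximizing det(ECC) by exhaustive search
best = max(combinations(range(n), 2),
           key=lambda idx: np.linalg.det(
               empirical_controllability_cov(A, candidate_B[:, list(idx)])))
print("best var placement (toy system):", best)
```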

  14. A globally optimal k-anonymity method for the de-identification of health data.

    PubMed

    El Emam, Khaled; Dankar, Fida Kamal; Issa, Romeo; Jonker, Elizabeth; Amyot, Daniel; Cogo, Elise; Corriveau, Jean-Pierre; Walker, Mark; Chowdhury, Sadrul; Vaillancourt, Regis; Roffey, Tyson; Bottomley, Jim

    2009-01-01

    Explicit patient consent requirements in privacy laws can have a negative impact on health research, leading to selection bias and reduced recruitment. Often legislative requirements to obtain consent are waived if the information collected or disclosed is de-identified. The authors developed and empirically evaluated a new globally optimal de-identification algorithm that satisfies the k-anonymity criterion and that is suitable for health datasets. The authors compared OLA (Optimal Lattice Anonymization) empirically to three existing k-anonymity algorithms, Datafly, Samarati, and Incognito, on six public, hospital, and registry datasets for different values of k and suppression limits. Three information loss metrics were used for the comparison: precision, discernability metric, and non-uniform entropy. Each algorithm's performance speed was also evaluated. The Datafly and Samarati algorithms had higher information loss than OLA and Incognito; OLA was consistently faster than Incognito in finding the globally optimal de-identification solution. For the de-identification of health datasets, OLA is an improvement on existing k-anonymity algorithms in terms of information loss and performance.

  15. Decision-support models for empiric antibiotic selection in Gram-negative bloodstream infections.

    PubMed

    MacFadden, D R; Coburn, B; Shah, N; Robicsek, A; Savage, R; Elligsen, M; Daneman, N

    2018-04-25

    Early empiric antibiotic therapy in patients can improve clinical outcomes in Gram-negative bacteraemia. However, the widespread prevalence of antibiotic-resistant pathogens compromises our ability to provide adequate therapy while minimizing use of broad antibiotics. We sought to determine whether readily available electronic medical record data could be used to develop predictive models for decision support in Gram-negative bacteraemia. We performed a multi-centre cohort study, in Canada and the USA, of hospitalized patients with Gram-negative bloodstream infection from April 2010 to March 2015. We analysed multivariable models for prediction of antibiotic susceptibility at two empiric windows: Gram-stain-guided and pathogen-guided treatment. Decision-support models for empiric antibiotic selection were developed based on three clinical decision thresholds of acceptable adequate coverage (80%, 90% and 95%). A total of 1832 patients with Gram-negative bacteraemia were evaluated. Multivariable models showed good discrimination across countries and at both Gram-stain-guided (12 models, areas under the curve (AUCs) 0.68-0.89, optimism-corrected AUCs 0.63-0.85) and pathogen-guided (12 models, AUCs 0.75-0.98, optimism-corrected AUCs 0.64-0.95) windows. Compared to antibiogram-guided therapy, decision-support models of antibiotic selection incorporating individual patient characteristics and prior culture results have the potential to increase use of narrower-spectrum antibiotics (in up to 78% of patients) while reducing inadequate therapy. Multivariable models using readily available epidemiologic factors can be used to predict antimicrobial susceptibility in infecting pathogens with reasonable discriminatory ability. Implementation of sequential predictive models for real-time individualized empiric antibiotic decision-making has the potential to both optimize adequate coverage for patients while minimizing overuse of broad-spectrum antibiotics, and therefore requires further prospective evaluation. Readily available epidemiologic risk factors can be used to predict susceptibility of Gram-negative organisms among patients with bacteraemia, using automated decision-making models. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  16. Phi Index: A New Metric to Test the Flush Early and Avoid the Rush Hypothesis

    PubMed Central

    Samia, Diogo S. M.; Blumstein, Daniel T.

    2014-01-01

    Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomenon codified as the “Flush Early and Avoid the Rush” (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD), and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate to analyze relationships constrained by an envelope (such as that in the AD-FID relationship) and are sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1∶1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and it controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in the AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight as stated by the FEAR hypothesis. PMID:25405872

  17. Phi index: a new metric to test the flush early and avoid the rush hypothesis.

    PubMed

    Samia, Diogo S M; Blumstein, Daniel T

    2014-01-01

    Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomenon codified as the "Flush Early and Avoid the Rush" (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD), and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate to analyze relationships constrained by an envelope (such as that in the AD-FID relationship) and are sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1:1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and it controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in the AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight as stated by the FEAR hypothesis.

  18. A Multi-Band Analytical Algorithm for Deriving Absorption and Backscattering Coefficients from Remote-Sensing Reflectance of Optically Deep Waters

    NASA Technical Reports Server (NTRS)

    Lee, Zhong-Ping; Carder, Kendall L.

    2001-01-01

    A multi-band analytical (MBA) algorithm is developed to retrieve absorption and backscattering coefficients for optically deep waters, which can be applied to data from past and current satellite sensors, as well as data from hyperspectral sensors. This MBA algorithm applies a remote-sensing reflectance model derived from the Radiative Transfer Equation, and values of absorption and backscattering coefficients are analytically calculated from values of remote-sensing reflectance. There are only limited empirical relationships involved in the algorithm, which implies that this MBA algorithm could be applied to a wide dynamic range of waters. Applying the algorithm to a simulated non-"Case 1" data set, which has no relation to the development of the algorithm, the percentage error for the total absorption coefficient at 440 nm, a(440), is approximately 12% for a range of 0.012 - 2.1 per meter (approximately 6% for a(440) less than approximately 0.3 per meter), while a traditional band-ratio approach returns a percentage error of approximately 30%. Applying it to a field data set ranging from 0.025 to 2.0 per meter, the result for a(440) is very close to that using a full spectrum optimization technique (9.6% difference). Compared to the optimization approach, the MBA algorithm cuts the computation time dramatically with only a small sacrifice in accuracy, making it suitable for processing large data sets such as satellite images. Significant improvements over empirical algorithms have also been achieved in retrieving the optical properties of optically deep waters.

  19. Bayesian just-so stories in psychology and neuroscience.

    PubMed

    Bowers, Jeffrey S; Davis, Colin J

    2012-05-01

    According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak. This weakness relates to the many arbitrary ways that priors, likelihoods, and utility functions can be altered in order to account for the data that are obtained, making the models unfalsifiable. It further relates to the fact that Bayesian theories are rarely better at predicting data compared with alternative (and simpler) non-Bayesian theories. Second, we show that the empirical evidence for Bayesian theories in neuroscience is weaker still. There are impressive mathematical analyses showing how populations of neurons could compute in a Bayesian manner but little or no evidence that they do. Third, we challenge the general scientific approach that characterizes Bayesian theorizing in cognitive science. A common premise is that theories in psychology should largely be constrained by a rational analysis of what the mind ought to do. We question this claim and argue that many of the important constraints come from biological, evolutionary, and processing (algorithmic) considerations that have no adaptive relevance to the problem per se. In our view, these factors have contributed to the development of many Bayesian "just so" stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal. 2012 APA, all rights reserved.

  20. A Large Deviations Analysis of Certain Qualitative Properties of Parallel Tempering and Infinite Swapping Algorithms

    DOE PAGES

    Doll, J.; Dupuis, P.; Nyquist, P.

    2017-02-08

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
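    For readers unfamiliar with the baseline method, the following is a minimal parallel tempering (replica exchange) sampler on a double-well potential, using the standard neighbour-swap acceptance rule; shrinking swap_every pushes toward the infinite-swapping limit analyzed in the paper. The potential, temperature ladder, and step sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

def potential(x):
    return (x**2 - 1.0)**2            # double-well landscape with minima at x = +/- 1

def parallel_tempering(temps, n_steps=20000, swap_every=10, step=0.5):
    """Plain replica-exchange Metropolis sampling: one walker per temperature,
    local moves at each temperature, and periodic neighbour swaps accepted with
    probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    betas = 1.0 / np.asarray(temps, dtype=float)
    x = rng.uniform(-2, 2, size=len(temps))
    cold_samples = []
    for t in range(n_steps):
        # Within-temperature Metropolis moves
        prop = x + step * rng.standard_normal(len(x))
        log_ratio = -betas * (potential(prop) - potential(x))
        accept = rng.random(len(x)) < np.exp(np.minimum(0.0, log_ratio))
        x = np.where(accept, prop, x)
        # Swap attempts between neighbouring temperatures
        if t % swap_every == 0:
            for i in range(len(temps) - 1):
                delta = (betas[i] - betas[i + 1]) * (potential(x[i]) - potential(x[i + 1]))
                if rng.random() < np.exp(min(0.0, delta)):
                    x[i], x[i + 1] = x[i + 1], x[i]
        cold_samples.append(x[0])                 # track the coldest replica
    return np.array(cold_samples)

samples = parallel_tempering(temps=[0.1, 0.3, 1.0, 3.0])
print("fraction of cold-replica samples in the right-hand well:", np.mean(samples > 0))
```

    With frequent swaps the cold replica visits both wells roughly equally, which is the kind of behaviour the temperature-assignment diagnostic in the paper is designed to monitor.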

  1. Improving the twilight model for polar cap absorption nowcasts

    NASA Astrophysics Data System (ADS)

    Rogers, N. C.; Kero, A.; Honary, F.; Verronen, P. T.; Warrington, E. M.; Danskin, D. W.

    2016-11-01

    During solar proton events (SPE), energetic protons ionize the polar mesosphere causing HF radio wave attenuation, more strongly on the dayside where the effective recombination coefficient, αeff, is low. Polar cap absorption models predict the 30 MHz cosmic noise absorption, A, measured by riometers, based on real-time measurements of the integrated proton flux-energy spectrum, J. However, empirical models in common use cannot account for regional and day-to-day variations in the daytime and nighttime profiles of αeff(z) or the related sensitivity parameter, m = A/√J. Large prediction errors occur during twilight when m changes rapidly, and due to errors locating the rigidity cutoff latitude. Modeling the twilight change in m as a linear or Gauss error-function transition over a range of solar-zenith angles (χl < χ < χu) provides a better fit to measurements than selecting day or night αeff profiles based on the Earth-shadow height. Optimal model parameters were determined for several polar cap riometers for large SPEs in 1998-2005. The optimal χl parameter was found to be most variable, with smaller values (as low as 60°) postsunrise compared with presunset and with positive correlation between riometers over a wide area. Day and night values of m exhibited higher correlation for closely spaced riometers. A nowcast simulation is presented in which rigidity boundary latitude and twilight model parameters are optimized by assimilating age-weighted measurements from 25 riometers. The technique reduces model bias, and root-mean-square errors are reduced by up to 30% compared with a model employing no riometer data assimilation.
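    A sketch of the nowcast structure the abstract describes: A = m·√J, with the sensitivity m ramping between its day and night values across the twilight zenith-angle window via an error-function transition. The m values and window bounds below are placeholders, not the fitted parameters from the study.

```python
import numpy as np
from scipy.special import erf

def absorption_nowcast(J, chi_deg, m_day=0.2, m_night=0.05, chi_l=80.0, chi_u=100.0):
    """Cosmic noise absorption A = m * sqrt(J), with m ramping between day and
    night values across chi_l < chi < chi_u using an error-function transition.
    All numerical values here are illustrative assumptions."""
    chi = np.asarray(chi_deg, dtype=float)
    mid, half = 0.5 * (chi_l + chi_u), 0.5 * (chi_u - chi_l)
    # Weight w: ~0 in full daylight (chi << chi_l), ~1 in full darkness (chi >> chi_u)
    w = 0.5 * (1.0 + erf((chi - mid) / (half / 2.0)))
    m = (1.0 - w) * m_day + w * m_night
    return m * np.sqrt(J)

# Example: 30 MHz absorption (dB) for a proton flux J = 100 across twilight
for chi in (60, 85, 90, 95, 110):
    print(chi, round(float(absorption_nowcast(100.0, chi)), 2))
```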

  2. Predator bioenergetics and the prey size spectrum: do foraging costs determine fish production?

    PubMed

    Giacomini, Henrique C; Shuter, Brian J; Lester, Nigel P

    2013-09-07

    Most models of fish growth and predation dynamics assume that food ingestion rate is the major component of the energy budget affected by prey availability, while active metabolism is invariant (here called constant activity hypothesis). However, increasing empirical evidence supports an opposing view: fish tend to adjust their foraging activity to maintain reasonably constant ingestion levels in the face of varying prey density and/or quality (the constant satiation hypothesis). In this paper, we use a simple but flexible model of fish bioenergetics to show that constant satiation is likely to occur in fish that optimize both net production rate and life history. The model includes swimming speed as an explicit measure of foraging activity leading to both energy gains (through prey ingestion) and losses (through active metabolism). The fish is assumed to be a particulate feeder that has to swim between consecutive individual prey captures, and that shifts its diet ontogenetically from smaller to larger prey. The prey community is represented by a negative power-law size spectrum. From these rules, we derive the net production of fish as a function of the size spectrum, and this in turn establishes a formal link between the optimal life history (i.e. maximum body size) and prey community structure. In most cases with realistic parameter values, optimization of life history ensures that: (i) a constantly satiated fish preying on a steep size spectrum will stop growing and invest all its surplus energy in reproduction before satiation becomes too costly; (ii) conversely, a fish preying on a shallow size spectrum will grow large enough for satiation to be present throughout most of its ontogeny. These results provide a mechanistic basis for previous empirical findings, and call for the inclusion of active metabolism as a major factor limiting growth potential and the numerical response of predators in theoretical studies of food webs. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Optimal growth entails risky localization in population dynamics

    NASA Astrophysics Data System (ADS)

    Gueudré, Thomas; Martin, David G.

    2018-03-01

    Essential to each other, growth and exploration are jointly observed in living and inanimate entities, such as animals, cells or goods. But how the environment's structural and temporal properties weigh in this balance remains elusive. We analyze a model of stochastic growth with time correlations and diffusive dynamics that sheds light on the way populations grow and spread over general networks. This model suggests natural explanations of empirical facts in econo-physics or ecology, such as the risk-return trade-off and the Zipf law. We conclude that optimal growth leads to a localized population distribution, but such a risky position can be mitigated through the space geometry. These results have broad applicability and are subsequently illustrated through an empirical study of financial data.

  4. An Empirical Bayes Approach to Item Banking. Project Psychometric Aspects of Item Banking No. 6. Research Report 86-6.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Eggen, Theo J. H. M.

    A procedure for the sequential optimization of the calibration of an item bank is given. The procedure is based on an empirical Bayes approach to a reformulation of the Rasch model as a model for paired comparisons between the difficulties of test items in which ties are allowed to occur. First, it is indicated how a paired-comparisons design…

  5. The Need for Large-Scale, Longitudinal Empirical Studies in Middle Level Education Research

    ERIC Educational Resources Information Center

    Mertens, Steven B.; Caskey, Micki M.; Flowers, Nancy

    2016-01-01

    This essay describes and discusses the ongoing need for large-scale, longitudinal, empirical research studies focused on middle grades education. After a statement of the problem and concerns, the essay describes and critiques several prior middle grades efforts and research studies. Recommendations for future research efforts to inform policy…

  6. Moving from Student to Professional: Industry Mentors and Academic Internship Coordinators Supporting Intern Learning in the Workplace

    ERIC Educational Resources Information Center

    Kramer-Simpson, Elisabeth

    2018-01-01

    This article offers empirical data to explore ways that both industry mentors and academic internship coordinators support student interns in ways that optimize the workplace experience. Rich description of qualitative data from case studies and interviews shows that to optimize the internship, both the industry mentor and the academic internship…

  7. Stochastic Price Models and Optimal Tree Cutting: Results for Loblolly Pine

    Treesearch

    Robert G. Haight; Thomas P. Holmes

    1991-01-01

    An empirical investigation of stumpage price models and optimal harvest policies is conducted for loblolly pine plantations in the southeastern United States. The stationarity of monthly and quarterly series of sawtimber prices is analyzed using a unit root test. The statistical evidence supports stationary autoregressive models for the monthly series and for the...

  8. Reward Rate Optimization in Two-Alternative Decision Making: Empirical Tests of Theoretical Predictions

    ERIC Educational Resources Information Center

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.

    2009-01-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…

  9. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells.

    PubMed

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-24

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.; Britt, J.; Birkmire, R.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by control and reliability improvements due to this PV Manufacturing R&D program.

  11. Evolution of viral virulence: empirical studies

    USGS Publications Warehouse

    Kurath, Gael; Wargo, Andrew R.

    2016-01-01

    The concept of virulence as a pathogen trait that can evolve in response to selection has led to a large body of virulence evolution theory developed in the 1980-1990s. Various aspects of this theory predict increased or decreased virulence in response to a complex array of selection pressures including mode of transmission, changes in host, mixed infection, vector-borne transmission, environmental changes, host vaccination, host resistance, and co-evolution of virus and host. A fundamental concept is prediction of trade-offs between the costs and benefits associated with higher virulence, leading to selection of optimal virulence levels. Through a combination of observational and experimental studies, including experimental evolution of viruses during serial passage, many of these predictions have now been explored in systems ranging from bacteriophage to viruses of plants, invertebrates, and vertebrate hosts. This chapter summarizes empirical studies of viral virulence evolution in numerous diverse systems, including the classic models myxomavirus in rabbits, Marek's disease virus in chickens, and HIV in humans. Collectively these studies support some aspects of virulence evolution theory, suggest modifications for other aspects, and show that predictions may apply in some virus:host interactions but not in others. Finally, we consider how virulence evolution theory applies to disease management in the field.

  12. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography

    PubMed Central

    Jørgensen, J. S.; Sidky, E. Y.

    2015-01-01

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620

  13. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.

    PubMed

    Jørgensen, J S; Sidky, E Y

    2015-06-13

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
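    The step-size-1 case of the procedure is the familiar EM iteration for a normal mixture; one common reading of the step-size generalization is to move a fraction ω of the way from the current parameters toward the EM update, with 0 < ω < 2 as the convergent range stated in the abstract. The sketch below implements that reading on a synthetic 1-D two-component mixture; it is illustrative and not the paper's exact iteration.

```python
import numpy as np

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 1.5, 600)])

def em_step(x, w, mu, sigma):
    """One standard EM update (the step-size-1 case) for a K-component 1-D normal mixture."""
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)          # E-step responsibilities
    nk = resp.sum(axis=0)
    w_new = nk / len(x)
    mu_new = (resp * x[:, None]).sum(axis=0) / nk
    sigma_new = np.sqrt((resp * (x[:, None] - mu_new) ** 2).sum(axis=0) / nk)
    return w_new, mu_new, sigma_new

def generalized_em(x, omega=1.0, iters=200):
    """Deflected-gradient-style iteration: step a fraction omega of the way from
    the current parameters toward the EM update. omega = 1 recovers ordinary EM;
    omega between 1 and 2 over-relaxes the step."""
    w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(iters):
        w_em, mu_em, sigma_em = em_step(x, w, mu, sigma)
        w = w + omega * (w_em - w)
        mu = mu + omega * (mu_em - mu)
        sigma = sigma + omega * (sigma_em - sigma)
        w = np.clip(w, 1e-6, None); w = w / w.sum()         # keep weights valid
        sigma = np.clip(sigma, 1e-3, None)                   # keep scales positive
    return w, mu, sigma

print(generalized_em(data, omega=1.0))   # ordinary EM
print(generalized_em(data, omega=1.5))   # over-relaxed step
```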

  15. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  16. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated for the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
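    The DEIM ingredient referenced above has a compact standard form: greedily pick interpolation indices from a POD basis of nonlinear-term snapshots, then reconstruct the full nonlinearity from those few sampled entries. A self-contained sketch on a toy 1-D problem follows (the snapshot function and mode count are arbitrary); the paper's nested DEIM bases for nonlocal nonlinearities are not reproduced here.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation-point selection for a POD basis U (n x m).
    Returns m row indices such that the oblique projector U (P^T U)^{-1} P^T
    approximates the nonlinear term from m sampled entries."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Residual of the next basis vector w.r.t. interpolation at current indices
        c = np.linalg.solve(U[np.ix_(idx, range(l))], U[idx, l])
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Toy example: snapshots of a nonlinear function on a 1-D grid
x = np.linspace(0, 1, 200)
snapshots = np.column_stack([np.exp(np.sin(2*np.pi*(x - t))) for t in np.linspace(0, 1, 40)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
m = 8
Um = U[:, :m]
P = deim_indices(Um)

# DEIM approximation of a new snapshot from only m sampled grid points
f_new = np.exp(np.sin(2*np.pi*(x - 0.123)))
f_deim = Um @ np.linalg.solve(Um[P, :], f_new[P])
print("relative DEIM error:", np.linalg.norm(f_deim - f_new) / np.linalg.norm(f_new))
```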

  17. Windscapes and olfactory foraging in a large carnivore

    PubMed Central

    Togunov, Ron R.; Derocher, Andrew E.; Lunn, Nicholas J.

    2017-01-01

    The theoretical optimal olfactory search strategy is to move cross-wind. Empirical evidence supporting wind-associated directionality among carnivores, however, is sparse. We examined satellite-linked telemetry movement data of adult female polar bears (Ursus maritimus) from Hudson Bay, Canada, in relation to modelled winds, in an effort to understand olfactory search for prey. In our results, the predicted cross-wind movement occurred most frequently at night during winter, the time when most hunting occurs, while downwind movement dominated during fast winds, which impede olfaction. Migration during sea ice freeze-up and break-up was also correlated with wind. A lack of orientation during summer, a period with few food resources, likely reflected reduced cross-wind search. Our findings represent the first quantitative description of anemotaxis, orientation to wind, for cross-wind search in a large carnivore. The methods are widely applicable to olfactory predators and their prey. We suggest windscapes be included as a habitat feature in habitat selection models for olfactory animals when evaluating what is considered available habitat. PMID:28402340

  18. Regulatory networks and connected components of the neutral space. A look at functional islands

    NASA Astrophysics Data System (ADS)

    Boldhaus, G.; Klemm, K.

    2010-09-01

    The functioning of a living cell is largely determined by the structure of its regulatory network, comprising non-linear interactions between regulatory genes. An important factor for the stability and evolvability of such regulatory systems is neutrality - typically a large number of alternative network structures give rise to the necessary dynamics. Here we study the discretized regulatory dynamics of the yeast cell cycle [Li et al., PNAS, 2004] and the set of networks capable of reproducing it, which we call functional. Among these, the empirical yeast wildtype network is close to optimal with respect to sparse wiring. Under point mutations, which establish or delete single interactions, the neutral space of functional networks is fragmented into ≈ 4.7 × 10⁸ components. One of the smaller ones contains the wildtype network. On average, functional networks reachable from the wildtype by mutations are sparser, have higher noise resilience and fewer fixed point attractors as compared with networks outside of this wildtype component.
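    The discretized dynamics referred to are threshold (Boolean) updates in the style of Li et al. (2004). The sketch below shows that update rule, fixed-point enumeration, and a single-interaction "point mutation", but on a small random toy network rather than the 11-node yeast wildtype wiring, and it checks only fixed points rather than reproduction of the full cell-cycle trajectory.

```python
import numpy as np
from itertools import product

def update(state, W):
    """Threshold dynamics in the style of Li et al. (2004): a node switches on if
    its weighted input is positive, off if negative, and keeps its previous value
    at exactly zero input."""
    inp = W @ state
    new = state.copy()
    new[inp > 0] = 1
    new[inp < 0] = 0
    return new

def fixed_points(W):
    n = W.shape[0]
    return [s for s in product((0, 1), repeat=n)
            if np.array_equal(update(np.array(s), W), np.array(s))]

# Toy 6-node regulatory network with +1 (activating) and -1 (inhibiting) links;
# the real yeast wildtype network has 11 nodes and a specific wiring.
rng = np.random.default_rng(4)
W = rng.choice([-1, 0, 0, 1], size=(6, 6))
print("number of interactions:", int(np.count_nonzero(W)))
print("fixed points:", fixed_points(W))

# A "point mutation" in the sense of the abstract adds or deletes one interaction:
i, j = np.argwhere(W != 0)[0]
W_mut = W.copy(); W_mut[i, j] = 0
print("fixed points after deleting one link:", fixed_points(W_mut))
```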

  19. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approach the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  20. Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers

    PubMed Central

    Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling

    2017-01-01

    Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the “all parallel” shielding coils with a 45° starting position have the best shielding performance, whereas the “separated loop” shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same. PMID:28587137

  1. Modelling and Optimization of Four-Segment Shielding Coils of Current Transformers.

    PubMed

    Gao, Yucheng; Zhao, Wei; Wang, Qing; Qu, Kaifeng; Li, He; Shao, Haiming; Huang, Songling

    2017-05-26

    Applying shielding coils is a practical way to protect current transformers (CTs) for large-capacity generators from the intensive magnetic interference produced by adjacent bus-bars. The aim of this study is to build a simple analytical model for the shielding coils, from which the optimization of the shielding coils can be calculated effectively. Based on an existing stray flux model, a new analytical model for the leakage flux of partial coils is presented, and finite element method-based simulations are carried out to develop empirical equations for the core-pickup factors of the models. Using the flux models, a model of the common four-segment shielding coils is derived. Furthermore, a theoretical analysis is carried out on the optimal performance of the four-segment shielding coils in a typical six-bus-bars scenario. It turns out that the "all parallel" shielding coils with a 45° starting position have the best shielding performance, whereas the "separated loop" shielding coils with a 0° starting position feature the lowest heating value. Physical experiments were performed, which verified all the models and the conclusions proposed in the paper. In addition, for shielding coils with other than the four-segment configuration, the analysis process will generally be the same.

  2. Optimizing adherence to antiretroviral therapy

    PubMed Central

    Sahay, Seema; Reddy, K. Srikanth; Dhayarkar, Sampada

    2011-01-01

    HIV has now become a manageable chronic disease. However, the treatment outcomes may get hampered by suboptimal adherence to ART. Adherence optimization is a concrete reality in the wake of ‘universal access’ and it is imperative to learn lessons from various studies and programmes. This review examines current literature on ART scale up, treatment outcomes of the large scale programmes and the role of adherence therein. Social, behavioural, biological and programme related factors arise in the context of ART adherence optimization. While emphasis is laid on adherence, retention of patients under the care umbrella emerges as a major challenge. An in-depth understanding of patients’ health seeking behaviour and health care delivery system may be useful in improving adherence and retention of patients in care continuum and programme. A theoretical framework to address the barriers and facilitators has been articulated to identify problematic areas in order to intervene with specific strategies. Empirically tested objective adherence measurement tools and approaches to assess adherence in clinical/ programme settings are required. Strengthening of ART programmes would include appropriate policies for manpower and task sharing, integrating traditional health sector, innovations in counselling and community support. Implications for the use of theoretical model to guide research, clinical practice, community involvement and policy as part of a human rights approach to HIV disease is suggested. PMID:22310817

  3. Single Neuron Optimization as a Basis for Accurate Biophysical Modeling: The Case of Cerebellar Granule Cells.

    PubMed

    Masoli, Stefano; Rizza, Martina F; Sgritta, Martina; Van Geit, Werner; Schürmann, Felix; D'Angelo, Egidio

    2017-01-01

    In realistic neuronal modeling, once the ionic channel complement has been defined, the maximum ionic conductance (Gi-max) values need to be tuned in order to match the firing pattern revealed by electrophysiological recordings. Recently, selection/mutation genetic algorithms have been proposed to efficiently and automatically tune these parameters. Nonetheless, since similar firing patterns can be achieved through different combinations of Gi-max values, it is not clear how well these algorithms approximate the corresponding properties of real cells. Here we have evaluated the issue by exploiting a unique opportunity offered by the cerebellar granule cell (GrC), which is electrotonically compact and has therefore allowed the direct experimental measurement of ionic currents. Previous models were constructed using empirical tuning of Gi-max values to match the original data set. Here, by using repetitive discharge patterns as a template, the optimization procedure yielded models that closely approximated the experimental Gi-max values. These models, in addition to repetitive firing, captured additional features, including inward rectification, near-threshold oscillations, and resonance, which were not used as optimization features. Thus, parameter optimization using genetic algorithms provided an efficient modeling strategy for reconstructing the biophysical properties of neurons and for the subsequent reconstruction of large-scale neuronal network models.
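    To make the selection/mutation loop concrete, here is a deliberately toy version: a genetic algorithm tunes two maximum-conductance-like parameters of a simple surrogate spiking unit so that its firing rate matches a target feature. The surrogate is not the cerebellar granule cell model, and real workflows score many electrophysiological features against full multi-channel models; everything below (the surrogate, parameter ranges, and GA settings) is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def firing_rate(g_na, g_k, I=1.5, tau=20.0, T=300.0, dt=0.1):
    """Toy surrogate neuron (not the granule-cell model): a leaky unit whose
    excitability scales with g_na and whose adaptation scales with g_k. It exists
    only to give the GA a firing-rate feature to match."""
    v, w, spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        drive = g_na * I - g_k * w
        v += (drive - v) / tau * dt
        w += (v - w) * 0.05 * dt
        if v > 1.0:                       # threshold crossing counts as a spike
            spikes += 1
            v = 0.0
    return 1000.0 * spikes / T            # spikes per second

def genetic_search(target_rate=40.0, pop_size=20, generations=30, sigma=0.15):
    """Selection/mutation GA over (g_na, g_k): keep the best half of each
    generation and refill by mutating the survivors. Fitness is the squared
    error of the firing-rate feature against the target."""
    pop = rng.uniform(0.5, 3.0, size=(pop_size, 2))
    for _ in range(generations):
        errs = np.array([(firing_rate(*ind) - target_rate) ** 2 for ind in pop])
        survivors = pop[np.argsort(errs)[: pop_size // 2]]
        children = np.abs(survivors + sigma * rng.standard_normal(survivors.shape))
        pop = np.vstack([survivors, children])
    errs = np.array([(firing_rate(*ind) - target_rate) ** 2 for ind in pop])
    best = pop[np.argmin(errs)]
    return best, firing_rate(*best)

params, rate = genetic_search()
print("tuned (g_na, g_k):", np.round(params, 3), " firing rate:", round(rate, 1), "Hz")
```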

  4. Deriving high-resolution protein backbone structure propensities from all crystal data using the information maximization device.

    PubMed

    Solis, Armando D

    2014-01-01

    The most informative probability distribution functions (PDFs) describing the Ramachandran phi-psi dihedral angle pair, a fundamental descriptor of backbone conformation of protein molecules, are derived from high-resolution X-ray crystal structures using an information-theoretic approach. The Information Maximization Device (IMD) is established, based on fundamental information-theoretic concepts, and then applied specifically to derive highly resolved phi-psi maps for all 20 single amino acid and all 8000 triplet sequences at an optimal resolution determined by the volume of current data. The paper shows that utilizing the latent information contained in all viable high-resolution crystal structures found in the Protein Data Bank (PDB), totaling more than 77,000 chains, permits the derivation of a large number of optimized sequence-dependent PDFs. This work demonstrates the effectiveness of the IMD and the superiority of the resulting PDFs by extensive fold recognition experiments and rigorous comparisons with previously published triplet PDFs. Because it automatically optimizes PDFs, IMD results in improved performance of knowledge-based potentials, which rely on such PDFs. Furthermore, it provides an easy computational recipe for empirically deriving other kinds of sequence-dependent structural PDFs with greater detail and precision. The high-resolution phi-psi maps derived in this work are available for download.
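    For orientation, a plain version of the underlying object is sketched below: a binned phi-psi PDF with pseudocounts, turned into a log-odds score against a uniform background, using synthetic angles. The paper's Information Maximization Device goes further by choosing the resolution (and sequence conditioning) that maximizes retained information given the available data; the fixed 10-degree bins and the synthetic clusters here are only placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic (phi, psi) pairs standing in for angles extracted from PDB chains;
# two clusters roughly mimic the alpha-helical and beta-sheet basins.
phi = np.concatenate([rng.normal(-63, 10, 5000), rng.normal(-120, 20, 3000)])
psi = np.concatenate([rng.normal(-43, 10, 5000), rng.normal(135, 20, 3000)])

def phi_psi_pdf(phi, psi, bins=36, pseudocount=1.0):
    """Binned Ramachandran PDF with a uniform pseudocount. The IMD of the paper
    instead selects the bin resolution from the amount of available data; here
    the 10-degree resolution is simply fixed."""
    H, _, _ = np.histogram2d(phi, psi, bins=bins, range=[[-180, 180], [-180, 180]])
    H = H + pseudocount
    return H / H.sum()

pdf = phi_psi_pdf(phi, psi)
background = np.full_like(pdf, 1.0 / pdf.size)          # uniform reference distribution

def score(phi_val, psi_val, pdf=pdf):
    """Knowledge-based score of a conformation: negative log odds vs. background."""
    i = min(int((phi_val + 180) // 10), pdf.shape[0] - 1)
    j = min(int((psi_val + 180) // 10), pdf.shape[1] - 1)
    return -np.log(pdf[i, j] / background[i, j])

print("helical-region score:", round(score(-60, -45), 2))     # favourable (negative)
print("disallowed-region score:", round(score(60, -120), 2))  # unfavourable (positive)
```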

  5. Application of response surface methodology to maximize the productivity of scalable automated human embryonic stem cell manufacture.

    PubMed

    Ratcliffe, Elizabeth; Hourd, Paul; Guijarro-Leach, Juan; Rayment, Erin; Williams, David J; Thomas, Robert J

    2013-01-01

    Commercial regenerative medicine will require large quantities of clinical-specification human cells. The cost and quality of manufacture is notoriously difficult to control due to highly complex processes with poorly defined tolerances. As a step to overcome this, we aimed to demonstrate the use of 'quality-by-design' tools to define the operating space for economic passage of a scalable human embryonic stem cell production method with minimal cell loss. Design of experiments response surface methodology was applied to generate empirical models to predict optimal operating conditions for a unit of manufacture of a previously developed automatable and scalable human embryonic stem cell production method. Two models were defined to predict cell yield and cell recovery rate postpassage, in terms of the predictor variables of media volume, cell seeding density, media exchange and length of passage. Predicted operating conditions for maximized productivity were successfully validated. Such 'quality-by-design' type approaches to process design and optimization will be essential to reduce the risk of product failure and patient harm, and to build regulatory confidence in cell therapy manufacturing processes.

  6. Shielded cables with optimal braided shields

    NASA Astrophysics Data System (ADS)

    Homann, E.

    1991-01-01

    Extensive tests were done in order to determine what factors govern the design of braids with good shielding effectiveness. The results are purely empirical and relate to the geometrical relationships between the braid parameters. The influence of various parameters on the shape of the transfer impedance versus frequency curve was investigated step by step. It was found that the optical coverage had been overestimated in the past. Good shielding effectiveness results not from high optical coverage as such, but from the proper type of coverage, which is a function of the braid angle and the element width. These dependences were measured for the ordinary range of braid angles (20 to 40 degrees). They apply to all plaiting machines and all gages of braid wire. The design rules are largely the same for bright, tinned, silver-plated and even lacquered copper wires. A new type of braid, which has marked advantages over the conventional design, was proposed. With the 'mixed-element' technique, an optimal braid design can be specified on any plaiting machine, for any possible cable diameter, and for any desired angle. This is not possible for the conventional type of braid.

  7. GreenVMAS: Virtual Organization Based Platform for Heating Greenhouses Using Waste Energy from Power Plants.

    PubMed

    González-Briones, Alfonso; Chamoso, Pablo; Yoe, Hyun; Corchado, Juan M

    2018-03-14

    The gradual depletion of energy resources makes it necessary to optimize their use and to reuse them. Although great advances have already been made in optimizing energy generation processes, many of these processes generate energy that inevitably gets wasted. Clear examples of this are nuclear, thermal and carbon power plants, which lose a large amount of energy that could otherwise be used for different purposes, such as heating greenhouses. The role of GreenVMAS is to maintain the required temperature level in greenhouses by using the waste energy generated by power plants. It incorporates a case-based reasoning system, virtual organizations and algorithms for data analysis and for efficient interaction with sensors and actuators. The system is context aware and scalable as it incorporates an artificial neural network; this means that it can operate correctly even if the number and characteristics of the greenhouses participating in the case study change. The architecture was evaluated empirically and the results show that the user's energy bill is greatly reduced with the implemented system.

  8. Rational design of small molecules as vaccine adjuvants.

    PubMed

    Wu, Tom Y-H; Singh, Manmohan; Miller, Andrew T; De Gregorio, Ennio; Doro, Francesco; D'Oro, Ugo; Skibinski, David A G; Mbow, M Lamine; Bufali, Simone; Herman, Ann E; Cortez, Alex; Li, Yongkai; Nayak, Bishnu P; Tritto, Elaine; Filippi, Christophe M; Otten, Gillis R; Brito, Luis A; Monaci, Elisabetta; Li, Chun; Aprea, Susanna; Valentini, Sara; Calabrό, Samuele; Laera, Donatello; Brunelli, Brunella; Caproni, Elena; Malyala, Padma; Panchal, Rekha G; Warren, Travis K; Bavari, Sina; O'Hagan, Derek T; Cooke, Michael P; Valiante, Nicholas M

    2014-11-19

    Adjuvants increase vaccine potency largely by activating innate immunity and promoting inflammation. Limiting the side effects of this inflammation is a major hurdle for adjuvant use in vaccines for humans. It has been difficult to improve on adjuvant safety because of a poor understanding of adjuvant mechanism and the empirical nature of adjuvant discovery and development historically. We describe new principles for the rational optimization of small-molecule immune potentiators (SMIPs) targeting Toll-like receptor 7 as adjuvants with a predicted increase in their therapeutic indices. Unlike traditional drugs, SMIP-based adjuvants need to have limited bioavailability and remain localized for optimal efficacy. These features also lead to temporally and spatially restricted inflammation that should decrease side effects. Through medicinal and formulation chemistry and extensive immunopharmacology, we show that in vivo potency can be increased with little to no systemic exposure, localized innate immune activation and short in vivo residence times of SMIP-based adjuvants. This work provides a systematic and generalizable approach to engineering small molecules for use as vaccine adjuvants.

  9. Scale-dependent feedbacks between patch size and plant reproduction in desert grassland

    USGS Publications Warehouse

    Svejcar, Lauren N.; Bestelmeyer, Brandon T.; Duniway, Michael C.; James, Darren K.

    2015-01-01

    Theoretical models suggest that scale-dependent feedbacks between plant reproductive success and plant patch size govern transitions from highly to sparsely vegetated states in drylands, yet there is scant empirical evidence for these mechanisms. Scale-dependent feedback models suggest that an optimal patch size exists for growth and reproduction of plants and that a threshold patch organization exists below which positive feedbacks between vegetation and resources can break down, leading to critical transitions. We examined the relationship between patch size and plant reproduction using an experiment in a Chihuahuan Desert grassland. We tested the hypothesis that reproductive effort and success of a dominant grass (Bouteloua eriopoda) would vary predictably with patch size. We found that focal plants in medium-sized patches featured higher rates of grass reproductive success than when plants occupied either large patch interiors or small patches. These patterns support the existence of scale-dependent feedbacks in Chihuahuan Desert grasslands and indicate an optimal patch size for reproductive effort and success in B. eriopoda. We discuss the implications of these results for detecting ecological thresholds in desert grasslands.

  10. GreenVMAS: Virtual Organization Based Platform for Heating Greenhouses Using Waste Energy from Power Plants

    PubMed Central

    Yoe, Hyun

    2018-01-01

    The gradual depletion of energy resources makes it necessary to optimize their use and to reuse them. Although great advances have already been made in optimizing energy generation processes, many of these processes generate energy that inevitably gets wasted. Clear examples of this are nuclear, thermal and carbon power plants, which lose a large amount of energy that could otherwise be used for different purposes, such as heating greenhouses. The role of GreenVMAS is to maintain the required temperature level in greenhouses by using the waste energy generated by power plants. It incorporates a case-based reasoning system, virtual organizations and algorithms for data analysis and for efficient interaction with sensors and actuators. The system is context aware and scalable as it incorporates an artificial neural network; this means that it can operate correctly even if the number and characteristics of the greenhouses participating in the case study change. The architecture was evaluated empirically and the results show that the user’s energy bill is greatly reduced with the implemented system. PMID:29538351

  11. Understanding Participation in E-Learning in Organizations: A Large-Scale Empirical Study of Employees

    ERIC Educational Resources Information Center

    Garavan, Thomas N.; Carbery, Ronan; O'Malley, Grace; O'Donnell, David

    2010-01-01

    Much remains unknown in the increasingly important field of e-learning in organizations. Drawing on a large-scale survey of employees (N = 557) who had opportunities to participate in voluntary e-learning activities, the factors influencing participation in e-learning are explored in this empirical paper. It is hypothesized that key variables…

  12. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    NASA Astrophysics Data System (ADS)

    Quick, J.; Dykes, K.; Graf, P.; Zahle, F.

    2016-09-01

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
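
    The role of the risk appetite can be illustrated with a toy optimization under uncertainty: sample the wind resource, then minimize a mean-plus-spread objective whose weight encodes risk aversion. The cost-of-energy function, the rated-power cap, and all numbers below are made-up assumptions, not the empirical model used in the study.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(3)

      # Fixed sample of the uncertain wind resource (common random numbers keep
      # the objective deterministic for the optimizer).
      winds = np.clip(rng.normal(8.0, 2.5, 5000), 3.0, None)

      # Made-up cost-of-energy model: power is capped at a rated value, so larger
      # rotors mainly help in low winds but carry a capital-cost penalty.
      def coe(rotor_diameter, v):
          power = np.minimum(5000.0, 0.3 * v ** 3 * rotor_diameter ** 2)
          capital = 1000.0 + 2.0 * rotor_diameter ** 2.6
          return capital / power

      def objective(rotor_diameter, risk_aversion):
          samples = coe(rotor_diameter, winds)
          # A risk-averse designer penalizes the spread of outcomes, not just the mean.
          return samples.mean() + risk_aversion * samples.std()

      for risk_aversion in (0.0, 1.0, 3.0):
          res = minimize_scalar(objective, bounds=(5.0, 25.0), method="bounded",
                                args=(risk_aversion,))
          print(f"risk aversion {risk_aversion}: optimal rotor diameter ~ {res.x:.1f} m")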

  13. Designing marine reserve networks for both conservation and fisheries management.

    PubMed

    Gaines, Steven D; White, Crow; Carr, Mark H; Palumbi, Stephen R

    2010-10-26

    Marine protected areas (MPAs) that exclude fishing have been shown repeatedly to enhance the abundance, size, and diversity of species. These benefits, however, mean little to most marine species, because individual protected areas typically are small. To meet the larger-scale conservation challenges facing ocean ecosystems, several nations are expanding the benefits of individual protected areas by building networks of protected areas. Doing so successfully requires a detailed understanding of the ecological and physical characteristics of ocean ecosystems and the responses of humans to spatial closures. There has been enormous scientific interest in these topics, and frameworks for the design of MPA networks for meeting conservation and fishery management goals are emerging. Persistent in the literature is the perception of an inherent tradeoff between achieving conservation and fishery goals. Through a synthetic analysis across these conservation and bioeconomic studies, we construct guidelines for MPA network design that reduce or eliminate this tradeoff. We present size, spacing, location, and configuration guidelines for designing networks that simultaneously can enhance biological conservation and reduce fishery costs or even increase fishery yields and profits. Indeed, in some settings, a well-designed MPA network is critical to the optimal harvest strategy. When reserves benefit fisheries, the optimal area in reserves is moderately large (mode ≈30%). Assessing network design principles is currently limited by the absence of empirical data from large-scale networks. Emerging networks will soon rectify this constraint.

  14. Weights and topology: a study of the effects of graph construction on 3D image segmentation.

    PubMed

    Grady, Leo; Jolly, Marie-Pierre

    2008-01-01

    Graph-based algorithms have become increasingly popular for medical image segmentation. The fundamental process for each of these algorithms is to use the image content to generate a set of weights for the graph and then set conditions for an optimal partition of the graph with respect to these weights. To date, the heuristics used for generating the weighted graphs from image intensities have largely been ignored, while the primary focus of attention has been on the details of providing the partitioning conditions. In this paper we empirically study the effects of graph connectivity and weighting function on the quality of the segmentation results. To control for algorithm-specific effects, we employ both the Graph Cuts and Random Walker algorithms in our experiments.

  15. Model of Fluidized Bed Containing Reacting Solids and Gases

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Lathouwers, Danny

    2003-01-01

    A mathematical model has been developed for describing the thermofluid dynamics of a dense, chemically reacting mixture of solid particles and gases. As used here, "dense" signifies having a large volume fraction of particles, as for example in a bubbling fluidized bed. The model is intended especially for application to fluidized beds that contain mixtures of carrier gases, biomass undergoing pyrolysis, and sand. So far, the design of fluidized beds and other gas/solid industrial processing equipment has been based on empirical correlations derived from laboratory- and pilot-scale units. The present mathematical model is a product of continuing efforts to develop a computational capability for optimizing the designs of fluidized beds and related equipment on the basis of first principles. Such a capability could eliminate the need for expensive, time-consuming predesign testing.

  16. Time optimal control of a jet engine using a quasi-Hermite interpolation model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Comiskey, J. G.

    1979-01-01

    This work made preliminary efforts to generate nonlinear numerical models of a two-spooled turbofan jet engine, and subject these models to a known method of generating global, nonlinear, time optimal control laws. The models were derived numerically, directly from empirical data, as a first step in developing an automatic modelling procedure.

  17. Risk Analysis for Resource Planning Optimization

    NASA Technical Reports Server (NTRS)

    Chueng, Kar-Ming

    2008-01-01

    The main purpose of this paper is to introduce a risk management approach that allows planners to quantify the risk and efficiency tradeoff in the presence of uncertainties, and to make forward-looking choices in the development and execution of the plan. The paper demonstrates a planning and risk analysis framework that tightly integrates mathematical optimization, empirical simulation, and theoretical analysis techniques to solve complex problems.

  18. Optimization of a Hybrid Magnetic Bearing for a Magnetically Levitated Blood Pump via 3-D FEA

    PubMed Central

    Cheng, Shanbao; Olles, Mark W.; Burger, Aaron F.; Day, Steven W.

    2011-01-01

    In order to improve the performance of a magnetically levitated (maglev) axial flow blood pump, three-dimensional (3-D) finite element analysis (FEA) was used to optimize the design of a hybrid magnetic bearing (HMB). Radial, axial, and current stiffness of multiple design variations of the HMB were calculated using a 3-D FEA package and verified by experimental results. As compared with the original design, the optimized HMB had twice the axial stiffness with the resulting increase of negative radial stiffness partially compensated for by increased current stiffness. Accordingly, the performance of the maglev axial flow blood pump with the optimized HMBs was improved: the maximum pump speed was increased from 6000 rpm to 9000 rpm (50%). The radial, axial and current stiffness of the HMB was found to be linear at nominal operational position from both 3-D FEA and empirical measurements. Stiffness values determined by FEA and empirical measurements agreed well with one another. The magnetic flux density distribution and flux loop of the HMB were also visualized via 3-D FEA which confirms the designers’ initial assumption about the function of this HMB. PMID:22065892

  19. Optimization of a Hybrid Magnetic Bearing for a Magnetically Levitated Blood Pump via 3-D FEA.

    PubMed

    Cheng, Shanbao; Olles, Mark W; Burger, Aaron F; Day, Steven W

    2011-10-01

    In order to improve the performance of a magnetically levitated (maglev) axial flow blood pump, three-dimensional (3-D) finite element analysis (FEA) was used to optimize the design of a hybrid magnetic bearing (HMB). Radial, axial, and current stiffness of multiple design variations of the HMB were calculated using a 3-D FEA package and verified by experimental results. As compared with the original design, the optimized HMB had twice the axial stiffness with the resulting increase of negative radial stiffness partially compensated for by increased current stiffness. Accordingly, the performance of the maglev axial flow blood pump with the optimized HMBs was improved: the maximum pump speed was increased from 6000 rpm to 9000 rpm (50%). The radial, axial and current stiffness of the HMB was found to be linear at nominal operational position from both 3-D FEA and empirical measurements. Stiffness values determined by FEA and empirical measurements agreed well with one another. The magnetic flux density distribution and flux loop of the HMB were also visualized via 3-D FEA which confirms the designers' initial assumption about the function of this HMB.

  20. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
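
    The decompose-predict-recombine pattern can be sketched as follows. CEEMD and GWO are not reimplemented here; a simple moving-average split and scikit-learn's SVR with fixed hyperparameters stand in for them, and the synthetic series is an assumption.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(4)

      # Synthetic daily PM2.5-like series: slow trend + oscillation + noise.
      t = np.arange(400)
      series = 60 + 20 * np.sin(2 * np.pi * t / 30) + 0.03 * t + rng.normal(0, 3, t.size)

      # Stand-in "decomposition": a smooth trend plus the residual detail.
      window = 15
      trend = np.convolve(series, np.ones(window) / window, mode="same")
      components = [trend, series - trend]

      def lagged(x, lags=7):
          # Rows are windows of `lags` past values; targets are the next value.
          X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
          return X, x[lags:]

      # Predict each component separately, then sum the one-step-ahead forecasts.
      split = 350
      forecast = 0.0
      for comp in components:
          X, y = lagged(comp)
          model = SVR(C=10.0, epsilon=0.1).fit(X[: split - 7], y[: split - 7])
          forecast += model.predict(X[split - 7].reshape(1, -1))[0]

      print("ensemble forecast:", forecast, "actual:", series[split])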

  1. Optimizing Discharge Capacity of Li-O 2 Batteries by Design of Air-Electrode Porous Structure: Multifidelity Modeling and Optimization

    DOE PAGES

    Pan, Wenxiao; Yang, Xiu; Bao, Jie; ...

    2017-01-01

    We develop a new mathematical framework to study the optimal design of air electrode microstructures for lithium-oxygen (Li-O2) batteries. It can effectively reduce the number of expensive experiments for testing different air-electrodes, thereby minimizing the cost in the design of Li-O2 batteries. The design parameters to characterize an air-electrode microstructure include the porosity, surface-to-volume ratio, and parameters associated with the pore-size distribution. A surrogate model (also known as response surface) for discharge capacity is first constructed as a function of these design parameters. The surrogate model is accurate and easy to evaluate such that an optimization can be performed based on it. In particular, a Gaussian process regression method, co-kriging, is employed due to its accuracy and efficiency in predicting high-dimensional responses from a combination of multifidelity data. Specifically, a small amount of data from high-fidelity simulations are combined with a large number of data obtained from computationally efficient low-fidelity simulations. The high-fidelity simulation is based on a multiscale modeling approach that couples the microscale (pore-scale) and macroscale (device-scale) models, whereas the low-fidelity simulation is based on an empirical macroscale model. The constructed response surface provides quantitative understanding and prediction about how air electrode microstructures affect the discharge performance of Li-O2 batteries. The succeeding sensitivity analysis via Sobol indices and optimization via genetic algorithm ultimately offer reliable guidance on the optimal design of air electrode microstructures. The proposed mathematical framework can be generalized to investigate other new energy storage techniques and materials.
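
    A single-fidelity simplification of the surrogate step is sketched below: the co-kriging combination of multifidelity data is omitted, scikit-learn's Gaussian process regressor stands in for the surrogate, and the toy "simulator" and the two design parameters are assumptions for illustration.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(5)

      # Toy stand-in for an expensive discharge-capacity simulation as a function
      # of two design parameters: porosity and surface-to-volume ratio (scaled 0..1).
      def simulate_capacity(x):
          porosity, s2v = x
          return 1200 * porosity * (1 - porosity) + 300 * np.sqrt(s2v) + rng.normal(0, 5)

      # A small number of "high-cost" samples train the surrogate.
      X_train = rng.uniform(0, 1, size=(25, 2))
      y_train = np.array([simulate_capacity(x) for x in X_train])

      kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

      # Cheap surrogate evaluations over a dense grid pick the promising design.
      grid = np.array([[p, s] for p in np.linspace(0.01, 0.99, 50)
                              for s in np.linspace(0.01, 0.99, 50)])
      mean, std = gp.predict(grid, return_std=True)
      best = grid[np.argmax(mean)]
      print("surrogate-optimal design (porosity, surface-to-volume):", best)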

  2. Optimizing Discharge Capacity of Li-O 2 Batteries by Design of Air-Electrode Porous Structure: Multifidelity Modeling and Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Yang, Xiu; Bao, Jie

    We develop a new mathematical framework to study the optimal design of air electrode microstructures for lithium-oxygen (Li-O2) batteries. It can effectively reduce the number of expensive experiments for testing different air-electrodes, thereby minimizing the cost in the design of Li-O2 batteries. The design parameters to characterize an air-electrode microstructure include the porosity, surface-to-volume ratio, and parameters associated with the pore-size distribution. A surrogate model (also known as response surface) for discharge capacity is first constructed as a function of these design parameters. The surrogate model is accurate and easy to evaluate such that an optimization can be performed based on it. In particular, a Gaussian process regression method, co-kriging, is employed due to its accuracy and efficiency in predicting high-dimensional responses from a combination of multifidelity data. Specifically, a small amount of data from high-fidelity simulations are combined with a large number of data obtained from computationally efficient low-fidelity simulations. The high-fidelity simulation is based on a multiscale modeling approach that couples the microscale (pore-scale) and macroscale (device-scale) models, whereas the low-fidelity simulation is based on an empirical macroscale model. The constructed response surface provides quantitative understanding and prediction about how air electrode microstructures affect the discharge performance of Li-O2 batteries. The succeeding sensitivity analysis via Sobol indices and optimization via genetic algorithm ultimately offer reliable guidance on the optimal design of air electrode microstructures. The proposed mathematical framework can be generalized to investigate other new energy storage techniques and materials.

  3. Effects of Active Learning Classrooms on Student Learning: A Two-Year Empirical Investigation on Student Perceptions and Academic Performance

    ERIC Educational Resources Information Center

    Chiu, Pit Ho Patrio; Cheng, Shuk Han

    2017-01-01

    Recent studies on active learning classrooms (ACLs) have demonstrated their positive influence on student learning. However, most of the research evidence is derived from a few subject-specific courses or limited student enrolment. Empirical studies on this topic involving large student populations are rare. The present work involved a large-scale…

  4. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to the empirical reconstruction of the evolution operator, in stochastic form, from space-distributed time series. The main problem in empirical modeling consists in choosing appropriate phase variables that can efficiently reduce the dimension of the model with minimal loss of information about the system's dynamics, which in turn leads to a more robust model and a better-quality reconstruction. For this purpose we incorporate two key steps into the model. The first step is a standard preliminary reduction of the dimension of the observed time series by decomposition in a suitable empirical basis (e.g. the empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is the construction of an evolution operator from the principal components (PCs), the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs so as to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANN). A Bayesian approach is used to learn the model and to find the optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. Results of applying the method to climate data (sea surface temperature, sea level pressure), together with a comparison against the same method based on a non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).
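
    The two reduction steps can be sketched as follows: an EOF/PCA decomposition of a space-time field followed by a delayed-PC embedding. The field is synthetic, and a plain linear regression replaces the Bayesian ANN evolution operator purely for brevity.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(6)

      # Synthetic space-time field: 600 time steps on a 20x20 grid, two oscillating
      # spatial patterns plus noise (a stand-in for SST or SLP anomalies).
      t = np.arange(600)
      grid = np.linspace(0, np.pi, 20)
      pattern1 = np.outer(np.sin(grid), np.cos(grid)).ravel()
      pattern2 = np.outer(np.cos(grid), np.sin(grid)).ravel()
      field = (np.outer(np.sin(2 * np.pi * t / 50), pattern1)
               + np.outer(np.cos(2 * np.pi * t / 80), pattern2)
               + 0.2 * rng.normal(size=(600, 400)))

      # Step 1: reduce the spatial dimension with an EOF/PCA decomposition.
      pca = PCA(n_components=4)
      pcs = pca.fit_transform(field)          # principal components, shape (600, 4)

      # Step 2: build a delayed-PC embedding and fit an evolution operator
      # (here simply linear; the paper uses a stochastic ANN mapping).
      delay, horizon = 3, 1
      X = np.column_stack([pcs[i:len(pcs) - delay - horizon + i + 1] for i in range(delay)])
      y = pcs[delay + horizon - 1:]
      model = LinearRegression().fit(X[:500], y[:500])
      print("one-step prediction R^2 on held-out PCs:", model.score(X[500:], y[500:]))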

  5. What is unrealistic optimism?

    PubMed

    Jefferson, Anneli; Bortolotti, Lisa; Kuzmanovic, Bojana

    2017-04-01

    Here we consider the nature of unrealistic optimism and other related positive illusions. We are interested in whether cognitive states that are unrealistically optimistic are belief states, whether they are false, and whether they are epistemically irrational. We also ask to what extent unrealistically optimistic cognitive states are fixed. Based on the classic and recent empirical literature on unrealistic optimism, we offer some preliminary answers to these questions, thereby laying the foundations for answering further questions about unrealistic optimism, such as whether it has biological, psychological, or epistemic benefits.

  6. Empirical optimization of DFT  +  U and HSE for the band structure of ZnO.

    PubMed

    Bashyal, Keshab; Pyles, Christopher K; Afroosheh, Sajjad; Lamichhane, Aneer; Zayak, Alexey T

    2018-02-14

    ZnO is a well-known wide band gap semiconductor with promising potential for applications in optoelectronics, transparent electronics, and spintronics. Computational simulations based on the density functional theory (DFT) play an important role in the research of ZnO, but the standard functionals, like Perdew-Burke-Ernzerhof, result in largely underestimated values of the band gap and the binding energies of the Zn 3d electrons. Methods like DFT  +  U and hybrid functionals are meant to remedy the weaknesses of plain DFT. However, both methods are not parameter-free. Direct comparison with experimental data is the best way to optimize the computational parameters. X-ray photoemission spectroscopy (XPS) is commonly considered as a benchmark for the computed electronic densities of states. In this work, both DFT  +  U and HSE methods were parametrized to fit almost exactly the binding energies of electrons in ZnO obtained by XPS. The optimized parameterizations of DFT  +  U and HSE lead to significantly worse results in reproducing the ion-clamped static dielectric tensor, compared to standard high-level calculations, including GW, which in turn yield a perfect match for the dielectric tensor. The failure of our XPS-based optimization reveals the fact that XPS does not report the ground state electronic structure for ZnO and should not be used for benchmarking ground state electronic structure calculations.

  7. Empirical optimization of DFT  +  U and HSE for the band structure of ZnO

    NASA Astrophysics Data System (ADS)

    Bashyal, Keshab; Pyles, Christopher K.; Afroosheh, Sajjad; Lamichhane, Aneer; Zayak, Alexey T.

    2018-02-01

    ZnO is a well-known wide band gap semiconductor with promising potential for applications in optoelectronics, transparent electronics, and spintronics. Computational simulations based on the density functional theory (DFT) play an important role in the research of ZnO, but the standard functionals, like Perdew-Burke-Ernzerhof, result in largely underestimated values of the band gap and the binding energies of the Zn 3d electrons. Methods like DFT  +  U and hybrid functionals are meant to remedy the weaknesses of plain DFT. However, both methods are not parameter-free. Direct comparison with experimental data is the best way to optimize the computational parameters. X-ray photoemission spectroscopy (XPS) is commonly considered as a benchmark for the computed electronic densities of states. In this work, both DFT  +  U and HSE methods were parametrized to fit almost exactly the binding energies of electrons in ZnO obtained by XPS. The optimized parameterizations of DFT  +  U and HSE lead to significantly worse results in reproducing the ion-clamped static dielectric tensor, compared to standard high-level calculations, including GW, which in turn yield a perfect match for the dielectric tensor. The failure of our XPS-based optimization reveals the fact that XPS does not report the ground state electronic structure for ZnO and should not be used for benchmarking ground state electronic structure calculations.

  8. WE-EF-BRA-07: High Performance Preclinical Irradiation Through Optimized Dual Focal Spot Dose Painting and Online Virtual Isocenter Radiation Field Targeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J; Princess Margaret Cancer Centre, University Health Network, Toronto, CA; Lindsay, P

    Purpose: Advances in radiotherapy practice facilitated by collimation systems to shape radiation fields and image guidance to target these conformal beams have motivated proposals for more complex dose patterns to improve the therapeutic ratio. Recent progress in small animal radiotherapy platforms has provided the foundation to validate the efficacy of such interventions, but robustly delivering heterogeneous dose distributions at the scale and accuracy demanded by preclinical studies remains challenging. This work proposes a dual focal spot optimization method to paint spatially heterogeneous dose regions and an online virtual isocenter targeting method to accurately target the dose distributions. Methods: Two-dimensional dose kernels were empirically measured for the 1 mm diameter circular collimator with radiochromic film in a solid water phantom for the small and large x-ray focal spots on the X-RAD 225Cx microirradiator. These kernels were used in an optimization framework which determined a set of animal stage positions, beam-on times, and focal spot settings to optimally deliver a given desired dose distribution. An online method was developed which defined a virtual treatment isocenter based on a single image projection of the collimated radiation field. The method was demonstrated by optimization of a 6 mm circular 2 Gy target adjoining a 4 mm semicircular avoidance region. Results: The dual focal spot technique improved the optimized dose distribution with the proportion of avoidance region receiving more than 0.5 Gy reduced by 40% compared to the large focal spot technique. Targeting tests performed by irradiating ball bearing targets on radiochromic film pieces revealed that the online targeting method improved the three-dimensional accuracy from 0.48 mm to 0.15 mm. Conclusion: The dual focal spot optimization and online virtual isocenter targeting framework is a robust option for delivering dose at the preclinical level and provides a new experimental option for unique radiobiological investigations. This work is supported, in part, by the Natural Sciences and Engineering Research Council of Canada and a Mitacs-Accelerate fellowship. P.E. Lindsay and D.A. Jaffray are listed as inventors of the system described herein. This system has been licensed to Precision X-Ray Inc. for commercial development.
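
    The kernel-based dose-painting step can be illustrated in one dimension: candidate beam positions contribute measured kernels, and nonnegative beam-on times are solved for by nonnegative least squares. The Gaussian kernels, positions, and target profile below are assumptions, not the measured X-RAD 225Cx kernels.

      import numpy as np
      from scipy.optimize import nnls

      # 1D positions across the target region (mm).
      x = np.linspace(-5, 5, 201)

      # Stand-ins for the empirically measured dose kernels of the two focal spots
      # (narrow "small spot" and broader "large spot" Gaussians, arbitrary units).
      def kernel(center, sigma):
          return np.exp(-0.5 * ((x - center) / sigma) ** 2)

      centers = np.linspace(-4, 4, 33)
      columns = [kernel(c, 0.5) for c in centers] + [kernel(c, 1.2) for c in centers]
      A = np.column_stack(columns)            # dose per unit beam-on time at each position

      # Desired heterogeneous dose: 2 Gy in the target half, near zero in avoidance.
      target = np.where(x < 0, 2.0, 0.1)

      # Beam-on times must be nonnegative, so solve with nonnegative least squares.
      times, residual = nnls(A, target)
      delivered = A @ times
      print("max dose in avoidance region:", delivered[x >= 1].max())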

  9. Structure and energy of non-canonical basepairs: comparison of various computational chemistry methods with crystallographic ensembles.

    PubMed

    Panigrahi, Swati; Pal, Rahul; Bhattacharyya, Dhananjay

    2011-12-01

    Different types of non-canonical basepairs, in addition to the Watson-Crick ones, are observed quite frequently in RNA. Their importance in the three dimensional structure is not fully understood, but their various roles have been proposed by different groups. We have analyzed the energetics and geometry of the 32 most frequently observed basepairs in the functional RNA crystal structures using different popular empirical, semi-empirical and ab initio quantum chemical methods and compared their optimized geometry with the crystal data. These basepairs are classified into three categories: polar, non-polar and sugar-mediated, depending on the types of atoms involved in hydrogen bonding. In the case of polar basepairs, most of the methods give rise to optimized structures close to their initial geometry. The interaction energies also follow similar trends, with the polar ones having more attractive interaction energies. Some of the C-H...O/N hydrogen bond mediated non-polar basepairs are also found to be significantly stable in terms of their interaction energy values. A few polar basepairs having amino or carboxyl groups not hydrogen bonded to anything, such as G:G H:W C, show large flexibility. Most of the non-polar basepairs, except A:G s:s T and A:G w:s C, are found to be stable, indicating that C-H...O/N interactions also play a prominent role in stabilizing the basepairs. The sugar mediated basepairs show variability in their structures, due to the involvement of the flexible ribose sugar. These results presumably indicate that most of the polar basepairs, along with a few non-polar ones, act as seeds for RNA folding, while a few may act as conformational switches in the RNA.

  10. Empirical investigation of optimal severance taxation in Alabama. Volume II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leathers, C.G.; Zumpano, L.V.

    1980-10-01

    The research develops a theoretical and empirical foundation for the analysis of severance taxation in Alabama. Primary emphasis was directed to delineating an optimal severance tax structure for the state of Alabama and, in the process, assessing the economic and fiscal consequences of current severance tax usage. The legal and economic basis and justification for severance taxation, the amounts and distribution of severance tax revenues currently generated, the administration of the tax, and severance tax practices prevailing in other states were compared in Volume I. These data, findings, and quantitative analyses were used to ascertain the fiscal and economic effects of changes in the structure and utilization of severance taxation in Alabama. The actual and potential productivity of severance taxation in Alabama is discussed. The analysis estimates the state's severance tax revenue capacity relative to the nation and to regional neighbors. The analysis is followed by an intrastate fiscal examination of the state and local tax system. In the process, the relative revenue contribution of severance taxes to state and local revenues is quantified, as well as comparing the revenue capacity and utilization of severance taxes to other state and local levies. An examination is made of the question of who actually pays the severance taxes by an analysis of the shifting and incidence characteristics of taxes on natural resources. Serious doubt is raised that states can, under normal economic circumstances, export a large portion of the severance tax burden to out-of-state users. According to the analytical results of the study, profit margins will be affected; therefore, higher severance taxes should only be imposed after rational assessment of the consequences on business incentives and employment in the extractive industries, especially coal.

  11. The hierarchical stability of the seven known large size ratio triple asteroids using the empirical stability parameters.

    PubMed

    Liu, Xiaodong; Baoyin, Hexi; Marchis, Franck

    In this study, the hierarchical stability of the seven known large size ratio triple asteroids is investigated. The effects of solar gravity and the primary's J2 are considered. The force function is expanded in terms of mass ratios based on Hill's approximation and the large size ratio property. The empirical stability parameters are used to examine the hierarchical stability of the triple asteroids. It is found that all the known large size ratio triple asteroid systems are hierarchically stable. This study provides useful information for the future evolution of the triple asteroids.

  12. Optimization Techniques for Clustering,Connectivity, and Flow Problems in Complex Networks

    DTIC Science & Technology

    2012-10-01

    discrete optimization and for analysis of performance of algorithm portfolios; introducing a metaheuristic framework of variable objective search that...The results of empirical evaluation of the proposed algorithm are also included. 1.3 Theoretical analysis of heuristics and designing new metaheuristic ...analysis of heuristics for inapproximable problems and designing new metaheuristic approaches for the problems of interest; (IV) Developing new models

  13. An Empirical Analysis of Economic and Racial Bias in the Distribution of Educational Resources in Nine Large American Cities.

    ERIC Educational Resources Information Center

    Owen, John D.

    Empirical evidence is presented consistent with the hypothesis that instructional expenditures are distributed unequally, and that less is spent on non-white and poor students than on others in large American cities. The most experienced teachers are generally to be found in schools attended by the less poor white children. More important, the…

  14. Telemanipulator design and optimization software

    NASA Astrophysics Data System (ADS)

    Cote, Jean; Pelletier, Michel

    1995-12-01

    For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only have to be found once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the choice of the initial position of the teleoperator is usually made empirically, which can be sufficient in the case of an easy or repetitive task. In the converse situation, the amount of time wasted to move the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better designed robot configuration could minimize these movements and save time. This paper will present two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment on which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.

  15. Optimal rates for phylogenetic inference and experimental design in the era of genome-scale datasets.

    PubMed

    Dornburg, Alex; Su, Zhuo; Townsend, Jeffrey P

    2018-06-25

    With the rise of genome-scale datasets there has been a call for increased data scrutiny and careful selection of loci appropriate for attempting the resolution of a phylogenetic problem. Such loci are desired to maximize phylogenetic information content while minimizing the risk of homoplasy. Theory posits the existence of characters that evolve under such an optimum rate, and efforts to determine optimal rates of inference have been a cornerstone of phylogenetic experimental design for over two decades. However, both theoretical and empirical investigations of optimal rates have varied dramatically in their conclusions: spanning no relationship to a tight relationship between the rate of change and phylogenetic utility. Here we synthesize these apparently contradictory views, demonstrating both empirical and theoretical conditions under which each is correct. We find that optimal rates of characters, not genes, are generally robust to most experimental design decisions. Moreover, consideration of site rate heterogeneity within a given locus is critical to accurate predictions of utility. Factors such as taxon sampling or the targeted number of characters providing support for a topology are additionally critical to the predictions of phylogenetic utility based on the rate of character change. Further, optimality of rates and predictions of phylogenetic utility are not equivalent, demonstrating the need for further development of a comprehensive theory of phylogenetic experimental design.

  16. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed by using the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal can be obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) having the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and meanwhile verify the fact that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
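
    The decomposition itself is not reimplemented here; the sketch below only illustrates a weighted kurtosis index of the kind used as the optimizer's objective, taken here as kurtosis times the absolute correlation with the raw signal (the paper's exact weighting may differ), evaluated on a simulated fault signal.

      import numpy as np
      from scipy.stats import kurtosis, pearsonr

      rng = np.random.default_rng(7)

      # Simulated vibration signal: periodic impulses (fault signature) plus noise.
      fs, n = 12000, 12000
      impulses = np.zeros(n)
      impulses[::600] = 1.0                                  # one impact every 0.05 s
      ringing = np.exp(-np.arange(200) / 20.0) * np.sin(2 * np.pi * 3000 * np.arange(200) / fs)
      fault = np.convolve(impulses, ringing, mode="full")[:n]
      signal = fault + 0.5 * rng.normal(size=n)

      def weighted_kurtosis_index(mode, raw):
          """Kurtosis of a candidate mode weighted by its correlation with the raw signal."""
          return kurtosis(mode, fisher=False) * abs(pearsonr(mode, raw)[0])

      # A GWO (or any optimizer) would tune the decomposition parameters to maximize
      # this index over the extracted modes; here we just score two candidates.
      print("fault-carrying mode :", weighted_kurtosis_index(fault, signal))
      print("pure-noise mode     :", weighted_kurtosis_index(rng.normal(size=n), signal))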

  17. Energetic Physiology Mediates Individual Optimization of Breeding Phenology in a Migratory Arctic Seabird.

    PubMed

    Hennin, Holly L; Bêty, Jöel; Legagneux, Pierre; Gilchrist, H Grant; Williams, Tony D; Love, Oliver P

    2016-10-01

    The influence of variation in individual state on key reproductive decisions impacting fitness is well appreciated in evolutionary ecology. Rowe et al. (1994) developed a condition-dependent individual optimization model predicting that three key factors impact the ability of migratory female birds to individually optimize breeding phenology to maximize fitness in seasonal environments: arrival condition, arrival date, and ability to gain in condition on the breeding grounds. While empirical studies have confirmed that greater arrival body mass and earlier arrival dates result in earlier laying, no study has assessed whether individual variation in energetic management of condition gain affects this key fitness-related decision. Using an 8-year data set from over 350 prebreeding female Arctic common eiders (Somateria mollissima), we tested this component of the model by examining whether individual variation in two physiological traits influencing energetic management (plasma triglycerides: physiological fattening rate; baseline corticosterone: energetic demand) predicted individual variation in breeding phenology after controlling for arrival date and body mass. As predicted by the optimization model, individuals with higher fattening rates and lower energetic demand had the earliest breeding phenology (shortest delays between arrival and laying; earliest laying dates). Our results are the first to empirically determine that individual flexibility in prebreeding energetic management influences key fitness-related reproductive decisions, suggesting that individuals have the capacity to optimally manage reproductive investment.

  18. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, J.; Dykes, K.; Graf, P.

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  19. Optimization under Uncertainty of Site-Specific Turbine Configurations: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, Julian; Dykes, Katherine; Graf, Peter

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  20. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    DOE PAGES

    Quick, J.; Dykes, K.; Graf, P.; ...

    2016-10-03

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  1. A Motivational Theory of Life-Span Development

    PubMed Central

    Heckhausen, Jutta; Wrosch, Carsten; Schulz, Richard

    2010-01-01

    This article had four goals. First, the authors identified a set of general challenges and questions that a life-span theory of development should address. Second, they presented a comprehensive account of their Motivational Theory of Life-Span Development. They integrated the model of optimization in primary and secondary control and the action-phase model of developmental regulation with their original life-span theory of control to present a comprehensive theory of development. Third, they reviewed the relevant empirical literature testing key propositions of the Motivational Theory of Life-Span Development. Finally, because the conceptual reach of their theory goes far beyond the current empirical base, they pointed out areas that deserve further and more focused empirical inquiry. PMID:20063963

  2. Ruling out Legionella in community-acquired pneumonia.

    PubMed

    Haubitz, Sebastian; Hitz, Fabienne; Graedel, Lena; Batschwaroff, Marcus; Wiemken, Timothy Lee; Peyrani, Paula; Ramirez, Julio A; Fux, Christoph Andreas; Mueller, Beat; Schuetz, Philipp

    2014-10-01

    Assessing the likelihood for Legionella sp. in community-acquired pneumonia is important because of differences in treatment regimens. Currently used antigen tests and culture have limited sensitivity with important time delays, making empirical broad-spectrum coverage necessary. Therefore, a score with 6 variables has recently been proposed. We sought to validate these parameters in an independent cohort. We analyzed adult patients with community-acquired pneumonia from a large multinational database (Community Acquired Pneumonia Organization) who were treated between 2001 and 2012 with more than 4 of the 6 prespecified clinical variables available. Association and discrimination were assessed using logistic regression analysis and area under the curve (AUC). Of 1939 included patients, the infectious cause was known in 594 (28.9%), including Streptococcus pneumoniae in 264 (13.6%) and Legionella sp. in 37 (1.9%). The proposed clinical predictors fever, cough, hyponatremia, lactate dehydrogenase, C-reactive protein, and platelet count were all associated or tended to be associated with Legionella cause. A logistic regression analysis including all these predictors showed excellent discrimination with an AUC of 0.91 (95% confidence interval, 0.87-0.94). The original dichotomized score showed good discrimination (AUC, 0.73; 95% confidence interval, 0.65-0.81) and a high negative predictive value of 99% for patients with less than 2 parameters present. With the use of a large independent patient sample from an international database, this analysis validates previously proposed clinical variables to accurately rule out Legionella sp., which may help to optimize initial empiric therapy.
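
    The validation step, fitting a logistic model on the six predictors and reporting discrimination by AUC, can be sketched as follows. The data frame below is random placeholder data; only the predictor names are taken from the abstract.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(8)

      # Placeholder data with the six predictors named in the abstract.
      n = 1939
      df = pd.DataFrame({
          "fever": rng.normal(38.0, 1.0, n),
          "cough": rng.integers(0, 2, n),
          "sodium": rng.normal(137.0, 4.0, n),        # hyponatremia
          "ldh": rng.normal(250.0, 80.0, n),
          "crp": rng.normal(100.0, 60.0, n),
          "platelets": rng.normal(250.0, 80.0, n),
      })
      # Synthetic outcome loosely tied to the predictors (illustrative only).
      logit = 0.8 * (df["sodium"] < 133) + 0.01 * (df["crp"] - 100) - 4.0
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).to_numpy()

      X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.3, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
      print(f"validation AUC: {auc:.2f}")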

  3. Fine structure of spectral properties for random correlation matrices: An application to financial markets

    NASA Astrophysics Data System (ADS)

    Livan, Giacomo; Alfarano, Simone; Scalas, Enrico

    2011-07-01

    We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by random matrix theory for such models.
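
    The comparison of an empirical correlation-matrix spectrum with the random-matrix prediction can be sketched as follows; the synthetic one-factor "returns" and the matrix dimensions are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(9)

      # Synthetic returns for N stocks over T days with a common market factor,
      # so that cross correlations (and hence large eigenvalues) appear.
      N, T = 200, 1000
      market = rng.normal(size=T)
      returns = 0.3 * market[None, :] + rng.normal(size=(N, T))

      # Empirical correlation matrix and its eigenvalue spectrum.
      corr = np.corrcoef(returns)
      eigvals = np.linalg.eigvalsh(corr)

      # Marchenko-Pastur bulk edges for a purely random matrix with q = N/T.
      q = N / T
      lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

      outside = eigvals[eigvals > lam_plus]
      print(f"MP bulk edge: {lam_plus:.2f}; eigenvalues above it: {outside.round(2)}")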

  4. Optimal stomatal behavior with competition for water and risk of hydraulic impairment.

    PubMed

    Wolf, Adam; Anderegg, William R L; Pacala, Stephen W

    2016-11-15

    For over 40 y the dominant theory of stomatal behavior has been that plants should open stomates until the carbon gained by an infinitesimal additional opening balances the additional water lost times a water price that is constant at least over short periods. This theory has persisted because of its remarkable success in explaining strongly supported simple empirical models of stomatal conductance, even though we have also known for over 40 y that the theory is not consistent with competition among plants for water. We develop an alternative theory in which plants maximize carbon gain without pricing water loss and also add two features to both this and the classical theory, which are strongly supported by empirical evidence: (i) water flow through xylem that is progressively impaired as xylem water potential drops and (ii) fitness or carbon costs associated with low water potentials caused by a variety of mechanisms, including xylem damage repair. We show that our alternative carbon-maximization optimization is consistent with plant competition because it yields an evolutionarily stable strategy (ESS): species with the ESS stomatal behavior will outcompete all others. We further show that, like the classical theory, the alternative theory also explains the functional forms of empirical stomatal models. We derive ways to test between the alternative optimization criteria by introducing a metric, the marginal xylem tension efficiency, which quantifies the amount of photosynthesis a plant will forego from opening stomata an infinitesimal amount more to avoid a drop in water potential.

  5. High Velocity Jet Noise Source Location and Reduction. Task 3 - Experimental Investigation of Suppression Principles. Volume I. Suppressor Concepts Optimization

    DTIC Science & Technology

    1978-12-01

    multinational corporation in the 1960’s placed extreme emphasis on the need for effective and efficient noise suppression devices. Phase I of work...through model and engine testing applicable to an afterburning turbojet engine. Suppressor designs were based primarily on empirical methods. Phase II...using "ray" acoustics. This method is in contrast to the purely empirical method which consists of the curve-fitting of normalized data. In order to

  6. Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.

    PubMed

    An, Yan; Zou, Zhihong; Zhao, Yanfei

    2015-03-01

    An optimized nonlinear grey Bernoulli model was proposed by using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations in the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and the particle swarm optimization technique is a good tool to solve parameter optimization problems. What's more, the optimized model with an initial condition that performs well in in-sample simulation may not do as well in out-of-sample forecasting.

  7. Optimal non-linear health insurance.

    PubMed

    Blomqvist, A

    1997-06-01

    Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.

  8. Ehrenfest model with large jumps in finance

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisanao

    2004-02-01

    Changes (returns) in stock index prices and exchange rates for currencies are argued, based on empirical data, to obey a stable distribution with characteristic exponent α<2 for short sampling intervals and a Gaussian distribution for long sampling intervals. In order to account for this phenomenon, an Ehrenfest model with large jumps (ELJ) is introduced to explain the empirical density function of price changes for both short and long sampling intervals.
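
    The idea can be illustrated with a toy simulation contrasting the classical single-ball Ehrenfest urn with a variant whose jump sizes are drawn from a heavy-tailed distribution; the jump-size law and all parameters are assumptions and may differ from the ELJ specification in the paper.

      import numpy as np

      rng = np.random.default_rng(10)

      def ehrenfest(steps, n_balls=1000, jump="single"):
          """Track per-step changes of urn A's occupancy in an Ehrenfest-type model."""
          k = n_balls // 2                       # balls currently in urn A
          changes = np.empty(steps)
          for i in range(steps):
              if jump == "single":
                  size = 1
              else:                              # heavy-tailed jump size (Pareto-like)
                  size = min(int(rng.pareto(1.5)) + 1, n_balls // 10)
              # Each moved ball is drawn at random; it sits in urn A with prob k / n_balls.
              leaving = rng.binomial(size, k / n_balls)
              changes[i] = (size - leaving) - leaving
              k = min(max(k + (size - leaving) - leaving, 0), n_balls)
          return changes

      classic = ehrenfest(20000, jump="single")
      large_jump = ehrenfest(20000, jump="large")
      print("tail weight (classic)    :", np.mean(classic ** 4) / np.mean(classic ** 2) ** 2)
      print("tail weight (large jumps):", np.mean(large_jump ** 4) / np.mean(large_jump ** 2) ** 2)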

  9. Empirical valence bond models for reactive potential energy surfaces: a parallel multilevel genetic program approach.

    PubMed

    Bellucci, Michael A; Coker, David F

    2011-07-28

    We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel, while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increases the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics

  10. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    PubMed

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).

  11. Intracellular Delivery of Proteins with Cell-Penetrating Peptides for Therapeutic Uses in Human Disease.

    PubMed

    Dinca, Ana; Chien, Wei-Ming; Chin, Michael T

    2016-02-22

    Protein therapy exhibits several advantages over small molecule drugs and is increasingly being developed for the treatment of disorders ranging from single enzyme deficiencies to cancer. Cell-penetrating peptides (CPPs), a group of small peptides capable of promoting transport of molecular cargo across the plasma membrane, have become important tools in promoting the cellular uptake of exogenously delivered proteins. Although the molecular mechanisms of uptake are not firmly established, CPPs have been empirically shown to promote uptake of various molecules, including large proteins over 100 kiloDaltons (kDa). Recombinant proteins that include a CPP tag to promote intracellular delivery show promise as therapeutic agents with encouraging success rates in both animal and human trials. This review highlights recent advances in protein-CPP therapy and discusses optimization strategies and potential detrimental effects.

  12. Pharmacogenomics in the preclinical development of vaccines: evaluation of efficacy and systemic toxicity in the mouse using array technology.

    PubMed

    Regnström, Karin J

    2008-01-01

    The development of vaccines, both conventional protein-based and nucleic acid-based, and of their delivery systems has been largely empirical and ineffective. This is partly due to a lack of methodology, since traditionally only a few markers are studied. By introducing gene expression analysis and bioinformatics into the design of vaccines and their delivery systems, vaccine development can be improved and accelerated considerably. Each vaccine antigen and delivery system combination is characterized by a unique genomic profile, a "fingerprint" that gives information not only on immunological and toxicological responses but also on other related cellular responses, e.g. cell cycle, apoptosis, and carcinogenic effects. The resulting unique genomic fingerprint facilitates the establishment of molecular structure-pharmacological activity relationships and therefore leads to optimization of vaccine development.

  13. Empirical simulations of materials

    NASA Astrophysics Data System (ADS)

    Jogireddy, Vasantha

    2011-12-01

    Molecular dynamics is a specialized discipline of molecular modelling based on computer simulation techniques. In this work, we first present simulation results from a study carried out on silicon nanowires. In the second part of the work, we present an electrostatically screened Coulomb potential developed for studying metal alloys and metal oxides. In particular, we have studied aluminum-copper alloys, aluminum oxides, and copper oxides. Parameter optimization for the potential is carried out using multiobjective optimization algorithms.
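
    A screened Coulomb interaction of the kind mentioned here is commonly written in Yukawa form; the sketch below evaluates that generic form with placeholder charges and screening length, not the parameters fitted in this work.

```python
# Minimal sketch of an electrostatically screened Coulomb (Yukawa-type)
# pair potential, V(r) = q_i * q_j * exp(-kappa * r) / r.  All parameter
# values are placeholders, not the fitted values from this work.
import numpy as np

def screened_coulomb(r, qi=1.0, qj=-2.0, kappa=2.0):
    return qi * qj * np.exp(-kappa * r) / r

r = np.linspace(0.5, 5.0, 10)
print(np.round(screened_coulomb(r), 4))
```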

  14. Optimal Averages for Nonlinear Signal Decompositions - Another Alternative for Empirical Mode Decomposition

    DTIC Science & Technology

    2014-10-01

    … nonlinear and non-stationary signals. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs), and each of the … function, optimization. 1. Introduction: It is well known that nonlinear and non-stationary signal analysis is important and difficult. Historically …
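
    The sifting procedure referred to above can be sketched in a few lines: interpolate upper and lower envelopes through the local extrema and subtract their mean. The example below performs a single sifting pass on assumed test data and ignores boundary handling and stopping criteria.

```python
# Rough sketch of one sifting pass of empirical mode decomposition (EMD):
# interpolate upper/lower envelopes through local extrema and subtract their
# mean.  Real EMD iterates this until a stopping criterion is met.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

def sift_once(x, t):
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - (upper + lower) / 2.0    # candidate intrinsic mode function

imf_candidate = sift_once(signal, t)
print("mean envelope magnitude removed:",
      float(np.abs(signal - imf_candidate).mean()))
```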

  15. Medium Optimization and Fermentation Kinetics for κ-Carrageenase Production by Thalassospira sp. Fjfst-332.

    PubMed

    Guo, Juanjuan; Zhang, Longtao; Lu, Xu; Zeng, Shaoxiao; Zhang, Yi; Xu, Hui; Zheng, Baodong

    2016-11-05

    Effective degradation of κ-carrageenan by the isolated strain Thalassospira sp. fjfst-332 is reported for the first time in this paper. The strain was identified by 16S rDNA sequencing and morphological observation using transmission electron microscopy (TEM). Based on a Plackett-Burman design for significant variables, a Box-Behnken experimental design and response surface methodology were used to optimize the culture conditions. Through statistical optimization, the optimum medium components were determined as follows: 2.0 g/L κ-carrageenan, 1.0 g/L yeast extract, 1.0 g/L FOS, 20.0 g/L NaCl, 2.0 g/L NaNO₃, 0.5 g/L MgSO₄·7H₂O, 0.1 g/L K₂HPO₄, and 0.1 g/L CaCl₂. The highest activity exhibited by Thalassospira sp. fjfst-332 was 267 U/mL, which makes it the most vigorous wild bacterium reported for κ-carrageenase production. To guide scaled-up production, two empirical models, the logistic equation and the Luedeking-Piret equation, were proposed to predict strain growth and enzyme production, respectively. Furthermore, we report the fermentation kinetics and the empirical equations for the coefficients (α, β, X₀, Xm, and μm) of the two models, which could be used to design and optimize industrial processes.
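
    For reference, the two kinetic models named here are usually written in the following standard forms (the fitted coefficient values from this study are not reproduced):

```latex
% Logistic growth and Luedeking-Piret product formation (standard forms,
% with the coefficients named in the abstract)
\frac{dX}{dt} = \mu_{m}\, X \left(1 - \frac{X}{X_{m}}\right)
\quad\Longrightarrow\quad
X(t) = \frac{X_{m}}{1 + \left(X_{m}/X_{0} - 1\right) e^{-\mu_{m} t}},
\qquad
\frac{dP}{dt} = \alpha\,\frac{dX}{dt} + \beta\, X .
```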

  16. Climate change mitigation: comparative assessment of Malaysian and ASEAN scenarios.

    PubMed

    Rasiah, Rajah; Ahmed, Adeel; Al-Amin, Abul Quasem; Chenayah, Santha

    2017-01-01

    This paper analyses empirically the optimal climate change mitigation policy of Malaysia with the business as usual scenario of ASEAN to compare their environmental and economic consequences over the period 2010-2110. A downscaling empirical dynamic model is constructed using a dual multidisciplinary framework combining economic, earth science, and ecological variables to analyse the long-run consequences. The model takes account of climatic variables, including carbon cycle, carbon emission, climatic damage, carbon control, carbon concentration, and temperature. The results indicate that without optimal climate policy and action, the cumulative cost of climate damage for Malaysia and ASEAN as a whole over the period 2010-2110 would be MYR40.1 trillion and MYR151.0 trillion, respectively. Under the optimal policy, the cumulative cost of climatic damage for Malaysia would fall to MYR5.3 trillion over the 100 years. Also, the additional economic output of Malaysia will rise from MYR2.1 billion in 2010 to MYR3.6 billion in 2050 and MYR5.5 billion in 2110 under the optimal climate change mitigation scenario. The additional economic output for ASEAN would fall from MYR8.1 billion in 2010 to MYR3.2 billion in 2050 before rising again slightly to MYR4.7 billion in 2110 in the business as usual ASEAN scenario.

  17. Analytical Computation of the Epidemic Threshold on Temporal Networks

    NASA Astrophysics Data System (ADS)

    Valdano, Eugenio; Ferreri, Luca; Poletto, Chiara; Colizza, Vittoria

    2015-04-01

    The time variation of contacts in a networked system may fundamentally alter the properties of spreading processes and affect the condition for large-scale propagation, as encoded in the epidemic threshold. Despite the great interest in the problem for the physics, applied mathematics, computer science, and epidemiology communities, a full theoretical understanding is still missing and currently limited to the cases where the time-scale separation holds between spreading and network dynamics or to specific temporal network models. We consider a Markov chain description of the susceptible-infectious-susceptible process on an arbitrary temporal network. By adopting a multilayer perspective, we develop a general analytical derivation of the epidemic threshold in terms of the spectral radius of a matrix that encodes both network structure and disease dynamics. The accuracy of the approach is confirmed on a set of temporal models and empirical networks and against numerical results. In addition, we explore how the threshold changes when varying the overall time of observation of the temporal network, so as to provide insights on the optimal time window for data collection of empirical temporal networked systems. Our framework is of both fundamental and practical interest, as it offers novel understanding of the interplay between temporal networks and spreading dynamics.
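
    A simplified rendering of the threshold computation is sketched below: an infection propagator is accumulated over the temporal snapshots, and the transmissibility is scanned until the propagator's spectral radius crosses one. The random snapshots and parameter values are placeholders, not data from the paper.

```python
# Simplified sketch of the multilayer/Markov-chain threshold idea: build an
# infection propagator over the observation window and scan the transmission
# probability lambda until its spectral radius reaches 1.  Random snapshots
# stand in for an empirical temporal network.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_steps, mu = 50, 20, 0.3          # mu: recovery probability
snapshots = [(rng.random((n_nodes, n_nodes)) < 0.05).astype(float)
             for _ in range(n_steps)]
snapshots = [np.triu(a, 1) + np.triu(a, 1).T for a in snapshots]  # undirected

def spectral_radius(lam):
    prop = np.eye(n_nodes)
    for a in snapshots:
        prop = prop @ ((1 - mu) * np.eye(n_nodes) + lam * a)
    return np.max(np.abs(np.linalg.eigvals(prop)))

for lam in np.linspace(0.05, 0.6, 12):
    print(f"lambda = {lam:.2f}  spectral radius = {spectral_radius(lam):.3f}")
```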

  18. A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.

    PubMed

    Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W

    2005-01-01

    We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
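
    As a generic illustration of wavelet shrinkage (not the frequency-adaptive, atlas-based scheme described above), the sketch below soft-thresholds the detail coefficients of a noisy 1D signal using a universal threshold.

```python
# Generic wavelet shrinkage sketch with PyWavelets: decompose, soft-threshold
# the detail coefficients, and reconstruct.  The signal is synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(3)
signal = np.repeat([0.0, 4.0, -2.0, 3.0], 256) + rng.normal(0, 1.0, 1024)

coeffs = pywt.wavedec(signal, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # MAD noise estimate
thresh = sigma * np.sqrt(2 * np.log(signal.size))     # universal threshold
shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                        for c in coeffs[1:]]
denoised = pywt.waverec(shrunk, "db4")
print("residual std after shrinkage:", float(np.std(signal - denoised)))
```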

  19. Phenomenological theory of collective decision-making

    NASA Astrophysics Data System (ADS)

    Zafeiris, Anna; Koman, Zsombor; Mones, Enys; Vicsek, Tamás

    2017-08-01

    An essential task of groups is to provide efficient solutions for the complex problems they face. Indeed, considerable efforts have been devoted to the question of collective decision-making related to problems involving a single dominant feature. Here we introduce a quantitative formalism for finding the optimal distribution of the group members' competences in the more typical case when the underlying problem is complex, i.e., multidimensional. Thus, we consider teams that are aiming at obtaining the best possible answer to a problem having a number of independent sub-problems. Our approach is based on a generic scheme for the process of evaluating the proposed solutions (i.e., negotiation). We demonstrate that the best performing groups have at least one specialist for each sub-problem - but a far less intuitive result is that finding the optimal solution by the interacting group members requires that the specialists also have some insight into the sub-problems beyond their unique field(s). We present empirical results obtained by using a large-scale database of citations being in good agreement with the above theory. The framework we have developed can easily be adapted to a variety of realistic situations since taking into account the weights of the sub-problems, the opinions or the relations of the group is straightforward. Consequently, our method can be used in several contexts, especially when the optimal composition of a group of decision-makers is designed.

  20. Optimal water depth management on river-fed National Wildlife Refuges in a changing climate

    USGS Publications Warehouse

    Nicol, Samuel; Griffith, Brad; Austin, Jane; Hunter, Christine M.

    2014-01-01

    The prairie pothole region (PPR) in the north-central United States and south-central Canada constitutes the most important waterfowl breeding area in North America. Projected long-term changes in precipitation and temperature may alter the drivers of waterfowl abundance: wetland availability and emergent vegetation cover. Previous studies have focused on isolated wetland dynamics, but the implications of changing precipitation on managed, river-fed wetlands have not been addressed. Using a structured decision making (SDM) approach, we derived optimal water management actions for 20 years at four river-fed National Wildlife Refuges (NWRs) in North and South Dakota under contrasting increasing/decreasing (+/−0.4 %/year) inflow scenarios derived from empirical trends. Refuge pool depth is manipulated by control structures. Optimal management involves setting control structure heights that have the highest probability of providing a desired mix of waterfowl habitat, given refuge capacities and inflows. We found optimal seasonal control structure heights for each refuge were essentially the same under increasing and decreasing inflow trends of 0.4 %/year over the next 20 years. Results suggest managed pools in the NWRs receive large inflows relative to their capacities. Hence, water availability does not constrain management; pool bathymetry and management tactics can be greater constraints on attaining management objectives than climate-mediated inflow. We present time-dependent optimal seasonal control structure heights for each refuge, which are resilient to the non-stationary precipitation scenarios we examined. Managers can use this information to provide a desired mixture of wildlife habitats, and to re-assess management objectives in reserves where pool bathymetry prevents attaining the currently stated objectives.

  1. Metaheuristic optimization approaches to predict shear-wave velocity from conventional well logs in sandstone and carbonate case studies

    NASA Astrophysics Data System (ADS)

    Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi

    2018-06-01

    Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP) is acquired using conventional acoustic logging tools in many drilled wells, but the shear-wave velocity (VS) is recorded using advanced logging tools only in a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming, so alternative methods are often used to estimate VS. Several empirical correlations have been proposed that predict VS from well logging measurements and petrophysical data such as VP, porosity, and density; however, these empirical relations can only be used in limited cases. The use of intelligent systems and optimization algorithms is an inexpensive, fast, and efficient approach for predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS: teaching–learning based optimization, imperialist competitive, and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms for predicting VS from conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the VS estimated by each of the employed metaheuristic approaches with the observed VS and also with values predicted by the Greenberg–Castagna relations. The results indicate that, for both the sandstone and carbonate case studies, all three implemented metaheuristic algorithms are more efficient and reliable than the empirical correlation for predicting VS. The results also demonstrate that, in both case studies, the performance of the artificial bee colony algorithm in VS prediction is slightly better than that of the two other employed approaches.
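
    For context, one widely quoted empirical VP-VS relation of the kind these metaheuristic models are compared against is the Castagna mudrock line; the coefficients below are quoted from the literature from memory and should be verified against the original reference before use.

```python
# Hedged example: the Castagna "mudrock line" as a simple empirical Vp-Vs
# relation (velocities in km/s).  Coefficients are quoted from memory and
# should be checked against the original reference before any real use.
import numpy as np

def vs_mudrock(vp_km_s):
    return 0.8621 * vp_km_s - 1.1724     # approximate mudrock-line coefficients

vp = np.array([2.5, 3.0, 3.5, 4.0])      # example compressional velocities
print(np.round(vs_mudrock(vp), 3))
```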

  2. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  3. High Speed Jet Noise Prediction Using Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Lele, Sanjiva K.

    2002-01-01

    Current methods for predicting the noise of high speed jets are largely empirical. These empirical methods are based on jet noise data gathered by varying primarily the jet flow speed and jet temperature for a fixed nozzle geometry. Efforts have been made to correlate the noise data of co-annular (multi-stream) jets and to account for the changes associated with forward flight within these empirical correlations. But ultimately these empirical methods fail to provide suitable guidance in the selection of new, low-noise nozzle designs. This motivates the development of a new class of prediction methods based on computational simulations, in an attempt to remove the empiricism of present day noise predictions.

  4. Multiobjective optimization and multivariable control of the beer fermentation process with the use of evolutionary algorithms.

    PubMed

    Andrés-Toro, B; Girón-Sierra, J M; Fernández-Blanco, P; López-Orozco, J A; Besada-Portas, E

    2004-04-01

    This paper describes empirical research on the modelling, optimization, and supervisory control of beer fermentation. Conditions in the laboratory were made as similar as possible to brewery industry conditions. Since mathematical models that consider realistic industrial conditions were not available, a new mathematical model involving industrial conditions was first developed. Batch fermentations are multiobjective dynamic processes that must be guided along optimal paths to obtain good results. The paper describes a direct way to apply a Pareto set approach with multiobjective evolutionary algorithms (MOEAs), and optimal ways to drive these processes were successfully found. Once obtained, the mathematical fermentation model was used to optimize the fermentation process by means of an intelligent control based on certain rules.

  5. Safe Onboard Guidance and Control Under Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James

    2011-01-01

    An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.

  6. Noisy covariance matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, S.; Kondor, I.

    2002-05-01

    According to recent findings [bouchaud, stanley], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In [bouchaud], e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in [bouchaud, stanley] lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
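
    The noise effect described here can be illustrated by comparing the eigenvalues of a sample covariance matrix of pure-noise returns with the Marchenko-Pastur upper edge; the sketch below uses synthetic Gaussian returns, not the empirical data of the paper.

```python
# Sketch of the random-matrix picture: eigenvalues of a sample covariance
# matrix of unit-variance noise returns versus the Marchenko-Pastur upper
# edge lambda_+ = (1 + sqrt(N/T))^2.
import numpy as np

rng = np.random.default_rng(4)
n_assets, n_obs = 100, 400                  # N assets, T observations
returns = rng.normal(size=(n_obs, n_assets))
cov = np.cov(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)           # ascending order

q = n_assets / n_obs
lambda_plus = (1 + np.sqrt(q)) ** 2
print(f"largest eigenvalue {eigvals[-1]:.2f} vs MP edge {lambda_plus:.2f}")
print("fraction of spectrum below the MP edge:",
      float((eigvals < lambda_plus).mean()))
```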

  7. Treatment Optimization Using Computed Tomography-Delineated Targets Should be Used for Supraclavicular Irradiation for Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liengsawangwong, Raweewan; Yu, T.-K.; Sun, T.-L.

    2007-11-01

    Background: The purpose of this study was to determine whether the use of optimized CT treatment planning offered better coverage of axillary level III (LIII)/supraclavicular (SC) targets than the empirically derived dose prescriptions that are commonly used. Materials/Methods: Thirty-two consecutive breast cancer patients who underwent CT treatment planning of a SC field were evaluated. Each patient was categorized according to body mass index (BMI) class: normal, overweight, or obese. The SC and LIII nodal beds were contoured, and four treatment plans for each patient were generated. Three of the plans used empiric dose prescriptions, and these were compared with a CT-optimized plan. Each plan was evaluated by two criteria: whether 98% of the target volume received >90% of the prescribed dose and whether <5% of the irradiated volume received 105% of the prescribed dose. Results: The mean depths of the SC and LIII targets were 3.2 cm (range, 1.4-6.7 cm) and 3.1 cm (range, 1.7-5.8 cm), and these depths varied across BMI classes (p = 0.01). Among the four sets of plans, the CT-optimized plans were the most successful at achieving both dosimetry objectives for every BMI class (normal BMI, p = .003; overweight BMI, p < .0001; obese BMI, p < .001). Conclusions: Across all BMI classes, routine radiation prescriptions did not optimally cover the intended targets for every patient. Optimized CT-based treatment planning generated the most successful plans; therefore, we recommend routine CT simulation and treatment planning of SC fields in breast cancer.

  8. Measuring Treasury Bond Portfolio Risk and Portfolio Optimization with a Non-Gaussian Multivariate Model

    NASA Astrophysics Data System (ADS)

    Dong, Yijun

    Research on measuring the risk of bond portfolios and on bond portfolio optimization was previously relatively rare, because the risk factors of bond portfolios were not very volatile. However, this condition has changed recently. The 2008 financial crisis brought high volatility to the risk factors and the related bond securities, even for highly rated U.S. Treasury bonds. Moreover, the risk factors of bond portfolios show fat-tailed and asymmetric properties like the risk factors of equity portfolios. Therefore, we need to use advanced techniques to measure and manage the risk of bond portfolios. In our paper, we first apply an autoregressive moving average generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) model with multivariate normal tempered stable (MNTS) distribution innovations to predict the risk factors of U.S. Treasury bonds, and we statistically demonstrate, based on goodness-of-fit tests, that the MNTS distribution has the ability to capture the properties of these risk factors. Then, based on empirical evidence, we find that the VaR and AVaR estimated by assuming a normal tempered stable distribution are more realistic and reliable than those estimated by assuming a normal distribution, especially for the financial crisis period. Finally, we use mean-risk portfolio optimization to minimize the portfolios' potential risks. The empirical study indicates that the optimized bond portfolios have better risk-adjusted performance than the benchmark portfolios for some periods. Moreover, the optimized bond portfolios obtained by assuming a normal tempered stable distribution show improved performance in comparison to those obtained by assuming a normal distribution.

  9. Near-Earth Object Interception Using Nuclear Thermal Rocket Propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    X-L. Zhang; E. Ball; L. Kochmanski

    Planetary defense has drawn wide study: despite the low probability of a large-scale impact, its consequences would be disastrous. The study presented here evaluates available protection strategies to identify bottlenecks limiting the scale of near-Earth object that could be deflected, using cutting-edge and near-future technologies. It discusses the use of a nuclear thermal rocket (NTR) as a propulsion device for delivery of thermonuclear payloads to deflect or destroy a long-period comet on a collision course with Earth. A ‘worst plausible scenario’ for the available warning time (10 months) and comet approach trajectory are determined, and empirical data are used to make an estimate of the payload necessary to deflect such a comet. Optimizing the tradeoff between early interception and large deflection payload establishes the ideal trajectory for an interception mission to follow. The study also examines the potential for multiple rocket launch dates. Comparison of propulsion technologies for this mission shows that NTR outperforms other options substantially. The discussion concludes with an estimate of the comet size (5 km) that could be deflected using NTR propulsion, given current launch capabilities.

  10. Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.

    PubMed

    Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A

    2017-01-01

    Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.

  11. Studies of the limit order book around large price changes

    NASA Astrophysics Data System (ADS)

    Tóth, B.; Kertész, J.; Farmer, J. D.

    2009-10-01

    We study the dynamics of the limit order book of liquid stocks after experiencing large intra-day price changes. In the data we find large variations in several microscopic measures, e.g., the volatility, the bid-ask spread, the bid-ask imbalance, the number of queuing limit orders, the activity (number and volume) of limit orders placed and canceled, etc. The relaxation of these quantities is generally very slow and can be described by a power law with exponent ≈ 0.4. We introduce a numerical model in order to understand the empirical results better. We find that with a zero-intelligence deposition model of the order flow the empirical results can be reproduced qualitatively. This suggests that the slow relaxations might not be the result of agents' strategic behaviour. Studying the difference between the exponents found empirically and numerically helps us to better identify the role of strategic behaviour in the phenomena.

  12. Evolution of optimal Lévy-flight strategies in human mental searches

    NASA Astrophysics Data System (ADS)

    Radicchi, Filippo; Baronchelli, Andrea

    2012-06-01

    Recent analysis of empirical data [Radicchi, Baronchelli, and Amaral, PLoS ONE 7, e29910 (2012), doi:10.1371/journal.pone.0029910] showed that humans adopt Lévy-flight strategies when exploring the bid space in online auctions. A game theoretical model proved that the observed Lévy exponents are nearly optimal, being close to the exponent value that guarantees the maximal economical return to players. Here, we rationalize these findings by adopting an evolutionary perspective. We show that a simple evolutionary process is able to account for the empirical measurements with the only assumption that the reproductive fitness of the players is proportional to their search ability. Contrary to previous modeling, our approach describes the emergence of the observed exponent without resorting to any strong assumptions on the initial searching strategies. Our results generalize earlier research, and open novel questions in cognitive, behavioral, and evolutionary sciences.

  13. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
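
    For reference, the two best-fitting families mentioned above are usually parameterized as follows; the parameter values in the sketch are arbitrary and for illustration only.

```python
# Common parametric forms of the two weighting functions named above:
# Prelec-2: w(p) = exp(-delta * (-ln p)^gamma)
# Linear in Log Odds: w(p) = delta*p^gamma / (delta*p^gamma + (1-p)^gamma)
import numpy as np

def prelec2(p, gamma=0.6, delta=1.0):
    return np.exp(-delta * (-np.log(p)) ** gamma)

def lin_log_odds(p, gamma=0.6, delta=0.8):
    return delta * p ** gamma / (delta * p ** gamma + (1 - p) ** gamma)

p = np.linspace(0.01, 0.99, 5)
print(np.round(prelec2(p), 3), np.round(lin_log_odds(p), 3))
```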

  14. Adaptive Optics Images of the Galactic Center: Using Empirical Noise-maps to Optimize Image Analysis

    NASA Astrophysics Data System (ADS)

    Albers, Saundra; Witzel, Gunther; Meyer, Leo; Sitarski, Breann; Boehle, Anna; Ghez, Andrea M.

    2015-01-01

    Adaptive Optics images are one of the most important tools in studying our Galactic Center. In-depth knowledge of the noise characteristics is crucial to optimally analyze this data. Empirical noise estimates - often represented by a constant value for the entire image - can be greatly improved by computing the local detector properties and photon noise contributions pixel by pixel. To comprehensively determine the noise, we create a noise model for each image using the three main contributors—photon noise of stellar sources, sky noise, and dark noise. We propagate the uncertainties through all reduction steps and analyze the resulting map using Starfinder. The estimation of local noise properties helps to eliminate fake detections while improving the detection limit of fainter sources. We predict that a rigorous understanding of noise allows a more robust investigation of the stellar dynamics in the center of our Galaxy.
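
    A minimal per-pixel noise-map sketch in this spirit is shown below, adding photon, sky, and dark-current contributions in quadrature; all inputs are synthetic placeholders rather than real detector calibration values.

```python
# Simple per-pixel noise-map sketch: photon, sky, and dark/read noise added
# in quadrature.  The image and noise levels are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
image = rng.poisson(200.0, size=(64, 64)).astype(float)   # counts (e-)
sky_level = 50.0          # e- per pixel, assumed
dark_std = 4.0            # e- dark/read noise, assumed

noise_map = np.sqrt(np.clip(image, 0, None) + sky_level + dark_std ** 2)
snr_map = image / noise_map
print("median per-pixel SNR:", float(np.median(snr_map)))
```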

  15. Risk and utility in portfolio optimization

    NASA Astrophysics Data System (ADS)

    Cohen, Morrel H.; Natoli, Vincent D.

    2003-06-01

    Modern portfolio theory (MPT) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets. In the original mean-variance approach of Markowitz, volatility is taken as a proxy for risk, conflating uncertainty with risk. There have been many subsequent attempts to alleviate that weakness which, typically, combine utility and risk. We present here a modification of MPT based on the inclusion of separate risk and utility criteria. We define risk as the probability of failure to meet a pre-established investment goal. We define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield. The emphasis throughout is on long investment horizons for which risk-free assets do not exist. Analytic results are presented for a Gaussian probability distribution. Risk-utility relations are explored via empirical stock-price data, and an illustrative portfolio is optimized using the empirical data.

  16. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    PubMed

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50,2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.

  17. Semi-empirical long-term cycle life model coupled with an electrolyte depletion function for large-format graphite/LiFePO4 lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min

    2017-10-01

    To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.

  18. A synthetic biology approach to the development of transcriptional regulatory models and custom enhancer design

    PubMed Central

    Martinez, Carlos A.; Barr, Kenneth; Kim, Ah-Ram; Reinitz, John

    2013-01-01

    Synthetic biology offers novel opportunities for elucidating transcriptional regulatory mechanisms and enhancer logic. Complex cis-regulatory sequences—like the ones driving expression of the Drosophila even-skipped gene—have proven difficult to design from existing knowledge, presumably due to the large number of protein-protein interactions needed to drive the correct expression patterns of genes in multicellular organisms. This work discusses two novel computational methods for the custom design of enhancers that employ a sophisticated, empirically validated transcriptional model, optimization algorithms, and synthetic biology. These synthetic elements have both utilitarian and academic value, from improving existing regulatory models to addressing evolutionary questions. The first method involves the use of simulated annealing to explore the sequence space for synthetic enhancers whose expression output fits a given search criterion. The second method uses a novel optimization algorithm to find functionally accessible pathways between two enhancer sequences. These paths describe a set of mutations wherein the predicted expression pattern does not significantly vary at any point along the path. Both methods rely on a predictive mathematical framework that maps the enhancer sequence space to functional output. PMID:23732772
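
    The first method is based on simulated annealing; a generic sketch of that search loop is given below, with a trivial placeholder scoring function standing in for the fitted transcriptional model and target expression pattern.

```python
# Generic simulated-annealing sketch for sequence search.  The scoring
# function is a trivial placeholder; in the paper the score would come from
# the transcriptional model's fit to a target expression pattern.
import math
import random

random.seed(0)
ALPHABET = "ACGT"
TARGET = "ACGTACGTACGTACGT"             # stand-in for a desired design goal

def score(seq):
    return sum(a == b for a, b in zip(seq, TARGET))   # placeholder objective

seq = "".join(random.choice(ALPHABET) for _ in TARGET)
temperature = 2.0
for _ in range(5000):
    pos = random.randrange(len(seq))
    candidate = seq[:pos] + random.choice(ALPHABET) + seq[pos + 1:]
    delta = score(candidate) - score(seq)
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        seq = candidate                 # accept improvements, sometimes worse moves
    temperature *= 0.999                # geometric cooling schedule
print(seq, score(seq), "/", len(TARGET))
```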

  19. Modern drug discovery technologies: opportunities and challenges in lead discovery.

    PubMed

    Guido, Rafael V C; Oliva, Glaucius; Andricopulo, Adriano D

    2011-12-01

    The identification of promising hits and the generation of high quality leads are crucial steps in the early stages of drug discovery projects. The definition and assessment of both chemical and biological space have revitalized the screening process model and emphasized the importance of exploring the intrinsic complementary nature of classical and modern methods in drug research. In this context, the widespread use of combinatorial chemistry and sophisticated screening methods for the discovery of lead compounds has created a large demand for small organic molecules that act on specific drug targets. Modern drug discovery involves the employment of a wide variety of technologies and expertise in multidisciplinary research teams. The synergistic effects between experimental and computational approaches on the selection and optimization of bioactive compounds emphasize the importance of the integration of advanced technologies in drug discovery programs. These technologies (VS, HTS, SBDD, LBDD, QSAR, and so on) are complementary in the sense that they have mutual goals, thereby the combination of both empirical and in silico efforts is feasible at many different levels of lead optimization and new chemical entity (NCE) discovery. This paper provides a brief perspective on the evolution and use of key drug design technologies, highlighting opportunities and challenges.

  20. A thermodynamic description for water, hydrogen fluoride and hydrogen dissolutions in cryolite-base molten salts.

    PubMed

    Wang, Kun; Chartrand, Patrice

    2018-06-15

    This paper presents a quantitative thermodynamic description for water, hydrogen fluoride and hydrogen dissolutions in cryolite-base molten salts, which is of technological importance to the Hall-Héroult electrolytic aluminum extraction cell. The Modified Quasichemical Model in the Quadruplet Approximation (MQMQA), as used to treat a large variety of molten salt systems, was adopted to thermodynamically describe the present liquid phase; all solid solutions were modeled using the Compound Energy Formalism (CEF); the gas phase was thermodynamically treated as an ideal mixture of all possible species. The model parameters were mainly obtained by critical evaluations and optimizations of thermodynamic and phase equilibrium data available from relative experimental measurements and theoretical predictions (first-principles calculations and empirical estimations) for the lower-order subsystems. These optimized model parameters were thereafter merged within the Kohler/Toop interpolation scheme, facilitating the prediction of gas solubility (H2O, HF and H2) in multicomponent cryolite-base molten salts using the FactSage thermochemical software. Several interesting diagrams were finally obtained in order to provide useful information for the industrial partners dedicated to the Hall-Héroult electrolytic aluminum production or other molten-salt technologies (the purification process and electroslag refining).

  1. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  2. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods : algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

  3. Inland empire logistics GIS mapping project.

    DOT National Transportation Integrated Search

    2009-01-01

    The Inland Empire has experienced exponential growth in the area of warehousing and distribution facilities within the last decade and it seems that it will continue way into the future. Where are these facilities located? How large are the facilitie...

  4. Boiler tuning using SPO at Detroit Edison's River Rouge plant for best economic performance while minimizing NOx emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haman, R.L.; Kerry, T.G.; Jarc, C.A.

    1996-12-31

    A technology provided by Ultramax Corporation and EPRI, based on sequential process optimization (SPO), is being used as a cost-effective tool to gain improvements prior to decisions for capital-intensive solutions. This empirical method of optimization, called the ULTRAMAX® Method, can determine the best boiler capabilities and help delay, or even avoid, expensive retrofits or repowering. SPO can serve as a least-cost way to attain the right degree of compliance with current and future phases of CAAA. Tuning ensures a staged strategy to stay ahead of emissions regulations, but not so far ahead as to cause regret for taking actions that ultimately are not mandated or warranted. One large utility investigating SPO as a tool to lower NOx emissions and to optimize boiler performance is Detroit Edison. The company has applied SPO to tune two coal-fired units at its River Rouge Power Plant to evaluate the technology for possible system-wide usage. Following the successful demonstration in reducing NOx from these units, SPO is being considered for use in other Detroit Edison fossil-fired plants. Tuning first will be used as a least-cost option to drive NOx to its lowest level with operating adjustment. In addition, optimization shows the true capability of the units and the margins available when the Phase 2 rules become effective in 2000. This paper includes a case study of the second tuning process and discusses the opportunities the technology affords.

  5. Microscopically based energy density functionals for nuclei using the density matrix expansion: Implementation and pre-optimization

    NASA Astrophysics Data System (ADS)

    Stoitsov, M.; Kortelainen, M.; Bogner, S. K.; Duguet, T.; Furnstahl, R. J.; Gebremariam, B.; Schunck, N.

    2010-11-01

    In a recent series of articles, Gebremariam, Bogner, and Duguet derived a microscopically based nuclear energy density functional by applying the density matrix expansion (DME) to the Hartree-Fock energy obtained from chiral effective field theory two- and three-nucleon interactions. Owing to the structure of the chiral interactions, each coupling in the DME functional is given as the sum of a coupling constant arising from zero-range contact interactions and a coupling function of the density arising from the finite-range pion exchanges. Because the contact contributions have essentially the same structure as those entering empirical Skyrme functionals, a microscopically guided Skyrme phenomenology has been suggested in which the contact terms in the DME functional are released for optimization to finite-density observables to capture short-range correlation energy contributions from beyond Hartree-Fock. The present article is the first attempt to assess the ability of the newly suggested DME functional, which has a much richer set of density dependencies than traditional Skyrme functionals, to generate sensible and stable results for nuclear applications. The results of the first proof-of-principle calculations are given, and numerous practical issues related to the implementation of the new functional in existing Skyrme codes are discussed. Using a restricted singular value decomposition optimization procedure, it is found that the new DME functional gives numerically stable results and exhibits a small but systematic reduction of our test χ2 function compared to standard Skyrme functionals, thus justifying its suitability for future global optimizations and large-scale calculations.

  6. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient-based and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, and they typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. An empirical Bayes approach is proposed to optimally train the GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on the forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF, and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
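
    The proxy-plus-acquisition idea can be sketched with a generic Gaussian-process Bayesian optimization loop; the code below uses a cheap stand-in for the reservoir-simulation misfit and is not the full GP-VARS workflow.

```python
# Generic GP-proxy Bayesian optimization sketch with expected improvement.
# The objective is a cheap placeholder for a simulated-vs-observed misfit.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def misfit(x):                       # placeholder history-matching misfit
    return (x - 0.37) ** 2

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(5, 1))   # initial design points
y = misfit(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    grid = np.linspace(0, 1, 500).reshape(-1, 1)
    mu, sd = gp.predict(grid, return_std=True)
    imp = y.min() - mu               # improvement over the current best
    ei = imp * norm.cdf(imp / (sd + 1e-12)) + sd * norm.pdf(imp / (sd + 1e-12))
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, misfit(x_next)[0])

print("best misfit found:", float(y.min()))
```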

  7. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
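
    A two-stage simplification of this allocation problem can be written down directly; the sketch below grid-searches the number of subjects and occasions that minimizes the variance of the exposure mean under a budget with power-function costs, using illustrative numbers rather than values from the study.

```python
# Two-stage simplification (the paper uses three stages): choose the number
# of subjects (ns) and occasions per subject (no) to minimize the variance of
# the exposure mean under a budget, with power-function costs.  All numbers
# are illustrative assumptions.
import numpy as np

var_between, var_within = 1.0, 2.0       # variance components (assumed)
c_subject, c_occasion = 100.0, 20.0      # unit costs (assumed)
a1, a2 = 1.0, 1.2                        # cost-function exponents (assumed)
budget = 5000.0

best = None
for ns in range(2, 200):
    for no in range(1, 20):
        cost = c_subject * ns ** a1 + c_occasion * (ns * no) ** a2
        if cost > budget:
            break                        # costs only grow with more occasions
        var_mean = var_between / ns + var_within / (ns * no)
        if best is None or var_mean < best[0]:
            best = (var_mean, ns, no, cost)

print("optimal allocation (variance, subjects, occasions, cost):", best)
```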

  8. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data are a vital concern in medical epidemiology and intervention studies. The present study addresses the need of the medical researcher to spend monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e. to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted of measuring on only one occasion for each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
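
    The allocation problem described in the two records above can be sketched numerically: choose the numbers of subjects, occasions per subject, and measurements per occasion that minimize the variance of the exposure mean under a budget, with power-function stage costs. The variance components, unit costs, exponents, and budget below are hypothetical, and the paper's exact cost-model parameterization may differ.

        # Minimal sketch of budget-constrained allocation in a three-stage
        # nested design; all numerical values are hypothetical.
        import numpy as np
        from itertools import product

        sb2, so2, se2 = 1.0, 0.5, 2.0          # between-subject, between-occasion, within-occasion variances
        c_subj, c_occ, c_meas = 50.0, 10.0, 2.0
        a_subj, a_occ, a_meas = 1.0, 0.9, 0.8  # power-function cost exponents
        budget = 2000.0

        def var_of_mean(k, n, m):
            # Variance of the grand mean: k subjects, n occasions/subject, m measurements/occasion.
            return sb2 / k + so2 / (k * n) + se2 / (k * n * m)

        def total_cost(k, n, m):
            return c_subj * k**a_subj + c_occ * (k * n)**a_occ + c_meas * (k * n * m)**a_meas

        best = None
        for k, n, m in product(range(1, 101), range(1, 11), range(1, 11)):
            if total_cost(k, n, m) <= budget:
                v = var_of_mean(k, n, m)
                if best is None or v < best[0]:
                    best = (v, k, n, m)

        print("lowest variance %.4f at k=%d subjects, n=%d occasions, m=%d measurements" % best)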

  9. Preliminary Work for Examining the Scalability of Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Clouse, Jeff

    1998-01-01

    Researchers began studying automated agents that learn to perform multiple-step tasks early in the history of artificial intelligence (Samuel, 1963; Samuel, 1967; Waterman, 1970; Fikes, Hart & Nilsson, 1972). Multiple-step tasks are tasks that can only be solved via a sequence of decisions, such as control problems, robotics problems, classic problem-solving, and game-playing. The objective of agents attempting to learn such tasks is to use the resources they have available in order to become more proficient at the tasks. In particular, each agent attempts to develop a good policy, a mapping from states to actions, that allows it to select actions that optimize a measure of its performance on the task; for example, reducing the number of steps necessary to complete the task successfully. Our study focuses on reinforcement learning, a set of learning techniques where the learner performs trial-and-error experiments in the task and adapts its policy based on the outcome of those experiments. Much of the work in reinforcement learning has focused on a particular, simple representation, where every problem state is represented explicitly in a table, and associated with each state are the actions that can be chosen in that state. A major advantage of this table-lookup representation is that one can prove that certain reinforcement learning techniques will develop an optimal policy for the current task. The drawback is that the representation limits the application of reinforcement learning to multiple-step tasks with relatively small state spaces. A small body of theoretical work proves that convergence to optimal solutions can be obtained when using generalization structures, but the structures considered are quite simple. The theory says little about complex structures, such as multi-layer, feedforward artificial neural networks (Rumelhart & McClelland, 1986), but empirical results indicate that the use of reinforcement learning with such structures is promising. These empirical results, however, make no theoretical claims, nor do they compare the policies produced to optimal policies. A goal of our work is to be able to make the comparison between an optimal policy and one stored in an artificial neural network. A difficulty of performing such a study is finding a multiple-step task that is small enough that one can find an optimal policy using table lookup, yet large enough that, for practical purposes, an artificial neural network is really required. We have identified a limited form of the game OTHELLO as satisfying these requirements. The work we report here is in the very preliminary stages of research, but this paper provides background for the problem being studied and a description of our initial approach to examining the problem. In the remainder of this paper, we first describe reinforcement learning in more detail. Next, we present the game OTHELLO. Finally, we argue that a restricted form of the game meets the requirements of our study, and describe our preliminary approach to finding an optimal solution to the problem.
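
    The table-lookup representation discussed above can be illustrated with a short tabular Q-learning sketch on a toy corridor task (not OTHELLO); the task, rewards, and learning parameters are hypothetical and serve only to show how a state-action table is updated by trial and error.

        # Minimal sketch of table-lookup Q-learning on a toy multiple-step task:
        # a short corridor the agent must traverse from state 0 to the goal.
        import numpy as np

        n_states, n_actions = 10, 2              # actions: 0 = left, 1 = right
        goal = n_states - 1
        alpha, gamma, eps = 0.1, 0.95, 0.1
        Q = np.zeros((n_states, n_actions))
        rng = np.random.default_rng(0)

        for episode in range(500):
            s = 0
            while s != goal:
                a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
                s_next = min(s + 1, goal) if a == 1 else max(s - 1, 0)
                r = 1.0 if s_next == goal else -0.01          # small step penalty
                Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
                s = s_next

        print(np.argmax(Q, axis=1))   # learned policy: move right in every state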

  10. An optimal repartitioning decision policy

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Reynolds, P. F., Jr.

    1986-01-01

    A central problem in parallel processing is determining an effective partitioning of workload across processors. The effectiveness of any given partition depends on the stochastic nature of the workload. We treat the problem of determining when and if the stochastic behavior of the workload has changed enough to warrant the calculation of a new partition. The problem is modeled as a Markov decision process, and an optimal decision policy is derived. Quantification of this policy is usually intractable. A heuristic policy which performs nearly optimally is investigated empirically. The results suggest that the detection of change is the predominant issue in this problem.

  11. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

    The shunting schedule of an electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of solution quality. PMID:27436998
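
    For orientation, a minimal sketch of the plain particle swarm update that EPSO builds on is shown below, applied to a generic continuous test function rather than the 0-1 shunting-schedule model; the inertia and acceleration coefficients are illustrative values, not those of the paper.

        # Minimal sketch of a standard particle swarm optimizer on a continuous
        # test function; the EPSO enhancements of the paper are not reproduced.
        import numpy as np

        rng = np.random.default_rng(1)
        n_particles, dim, iters = 30, 10, 200
        w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients

        def objective(x):                       # sphere function as a stand-in
            return np.sum(x**2, axis=-1)

        x = rng.uniform(-5, 5, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), objective(x)
        gbest = pbest[np.argmin(pbest_val)]

        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            val = objective(x)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]

        print("best value found:", pbest_val.min())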

  12. Multiobjective Optimization of Rocket Engine Pumps Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2001-01-01

    A design optimization method for turbopumps of cryogenic rocket engines has been developed. A Multiobjective Evolutionary Algorithm (MOEA) is used for multiobjective pump design optimization. The performance of design candidates is evaluated using the meanline pump flow modeling method based on the Euler turbine equation coupled with empirical correlations for rotor efficiency. To demonstrate the feasibility of the present approach, optimizations of a single-stage centrifugal pump design and a multistage pump design are presented. In both cases, the present method obtains very reasonable Pareto-optimal solutions that include some designs outperforming the original design in total head while reducing input power by one percent. Detailed observation of the design results also reveals some important design criteria for turbopumps in cryogenic rocket engines. These results demonstrate the feasibility of the EA-based design optimization method in this field.

  13. Estimated correlation matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2004-11-01

    Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematic testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices, which will inevitably contain a certain amount of noise due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. Since in our artificial world the only source of error is the finite length of the time series, and the “true” model, hence also the “true” correlation matrix, is precisely known, we can, in sharp contrast with empirical studies, precisely compare the performance of the various noise reduction techniques. One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
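
    The random-matrix-theory filtering that the authors single out can be sketched as eigenvalue clipping at the Marchenko-Pastur edge, as below; the return series, dimensions, and trace-preserving replacement of the noise eigenvalues are hypothetical illustration choices and do not reproduce the paper's simulation framework.

        # Minimal sketch of RMT eigenvalue clipping of an empirical correlation
        # matrix; returns and the N/T ratio are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 100, 400                                  # assets, observations
        returns = rng.standard_normal((T, N))            # stand-in return series
        C = np.corrcoef(returns, rowvar=False)

        q = N / T
        lambda_max = (1 + np.sqrt(q)) ** 2               # Marchenko-Pastur upper edge

        vals, vecs = np.linalg.eigh(C)
        noise = vals < lambda_max
        vals_filtered = vals.copy()
        vals_filtered[noise] = vals[noise].mean()        # preserve the trace on average
        C_filtered = vecs @ np.diag(vals_filtered) @ vecs.T
        np.fill_diagonal(C_filtered, 1.0)

        print("eigenvalues kept above the MP edge:", int((~noise).sum()))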

  14. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Summary: Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal at the start of optimization can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. The parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% better than AGA, within the same computation time (12.5 days). AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
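
    The basic idea of grafting a tabu list onto an elitist genetic algorithm can be sketched as below; the bit-string encoding, stand-in objective, and parameter values are hypothetical, and the actual AGCT/GC mechanisms (healing, TS thresholds, and adaptive run-time control of the TS parameters) are not reproduced.

        # Minimal sketch: elitist GA whose reproduction step rejects candidates
        # found on a fixed-length tabu list of recently generated solutions.
        import numpy as np
        from collections import deque

        rng = np.random.default_rng(0)
        n_bits, pop_size, gens, tabu_size = 32, 40, 100, 200

        def fitness(bits):                       # stand-in "pumping strategy" objective
            return int(bits.sum())

        pop = rng.integers(0, 2, (pop_size, n_bits))
        tabu = deque(maxlen=tabu_size)

        for _ in range(gens):
            fit = np.array([fitness(ind) for ind in pop], dtype=float)
            elite = pop[np.argmax(fit)].copy()                    # elitism
            parents = pop[rng.choice(pop_size, pop_size, p=fit / fit.sum())]
            children, attempts = [], 0
            while len(children) < pop_size - 1:
                attempts += 1
                a, b = parents[rng.integers(pop_size)], parents[rng.integers(pop_size)]
                cut = rng.integers(1, n_bits)                     # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child ^= (rng.random(n_bits) < 0.01)              # bit-flip mutation
                key = child.tobytes()
                if key in tabu and attempts < 20 * pop_size:      # tabu: skip repeats
                    continue
                tabu.append(key)
                children.append(child)
            pop = np.vstack([elite] + children)

        print("best fitness:", max(fitness(ind) for ind in pop))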

  15. Polarizable six-point water models from computational and empirical optimization.

    PubMed

    Tröster, Philipp; Lorenzen, Konstantin; Tavan, Paul

    2014-02-13

    Tröster et al. (J. Phys. Chem. B 2013, 117, 9486-9500) recently suggested a mixed computational and empirical approach to the optimization of polarizable molecular mechanics (PMM) water models. In the empirical part, the parameters of Buckingham potentials are optimized by PMM molecular dynamics (MD) simulations. The computational part applies hybrid calculations, which combine the quantum mechanical description of a H2O molecule by density functional theory (DFT) with a PMM model of its liquid phase environment generated by MD. While the static dipole moments and polarizabilities of the PMM water models are fixed at the experimental gas phase values, the DFT/PMM calculations are employed to optimize the remaining electrostatic properties. These properties cover the width of a Gaussian inducible dipole positioned at the oxygen and the locations of massless negative charge points within the molecule (the positive charges are attached to the hydrogens). The authors considered the cases of one and two negative charges, rendering the PMM four- and five-point models TL4P and TL5P. Here we extend their approach to three negative charges, thus suggesting the PMM six-point model TL6P. As compared to the predecessors and to other PMM models, which also exhibit partial charges at fixed positions, TL6P turned out to predict all studied properties of liquid water at p0 = 1 bar and T0 = 300 K with remarkable accuracy. These properties cover, for instance, the diffusion constant, viscosity, isobaric heat capacity, isothermal compressibility, dielectric constant, density, and the isobaric thermal expansion coefficient. This success concurrently provides a microscopic physical explanation of corresponding shortcomings of previous models. It uniquely assigns the failures of previous models to substantial inaccuracies in the description of the higher electrostatic multipole moments of liquid phase water molecules. Resulting favorable properties concerning the transferability to other temperatures and conditions, such as the melting of ice, are also discussed.

  16. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    NASA Astrophysics Data System (ADS)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. In particular, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR < 10^-4 M⊙/yr should host no Type II supernova events. In addition, a specific list of initial stellar masses can be useful in numerical simulations of stellar systems. For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided allowing the calculation of the IGIMF and OSGIMF as functions of the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126

  17. EFFECTIVE USE OF SEDIMENT QUALITY GUIDELINES: WHICH GUIDELINE IS RIGHT FOR ME?

    EPA Science Inventory

    A bewildering array of sediment quality guidelines have been developed, but fortunately they mostly fall into two families: empirically-derived and theoretically-derived. The empirically-derived guidelines use large data bases of concurrent sediment chemistry and biological effe...

  18. Optimisation Of a Magnetostrictive Wave Energy Converter

    NASA Astrophysics Data System (ADS)

    Mundon, T. R.; Nair, B.

    2014-12-01

    Oscilla Power, Inc. (OPI) is developing a patented magnetostrictive wave energy converter aimed at reducing the cost of grid-scale electricity from ocean waves. Designed to operate cost-effectively across a wide range of wave conditions, this will be the first use of reverse magnetostriction for large-scale energy production. The device architecture is a straightforward two-body, point-absorbing system that has been studied at length by various researchers. A large surface float is anchored to a submerged heave (reaction) plate by multiple taut tethers that are largely made up of discrete, robust power takeoff modules that house the magnetostrictive generators. The unique generators developed by OPI utilize the phenomenon of reverse magnetostriction, which, through the application of load to a specific low-cost alloy, can generate significant magnetic flux changes and thus create power through electromagnetic induction. Unlike traditional generators, the mode of operation is low-displacement, high-force, and high-damping, which in combination with the specific multi-tether configuration creates some unique effects and interesting optimization challenges. Using an empirical approach with a combination of numerical tools, such as ORCAFLEX, and physical models, we investigated the properties and sensitivities of this system arrangement, including various heave plate geometries, with the overall goal of identifying the mass and hydrodynamic parameters required for optimum performance. Furthermore, through a detailed physical model test program at the University of New Hampshire, we were able to study in more detail how the heave plate geometry affects the drag and added mass coefficients. In presenting this work we will discuss how alternate geometries could be used to optimize the hydrodynamic parameters of the heave plate, allowing maximum inertial forces in operational conditions while simultaneously minimizing the forces generated in extreme waves. This presentation will cover the significant findings from this research, including physical model results and identified sensitivity parameters. In addition, we will discuss some preliminary results from our large-scale ocean trial conducted in August & September of this year.

  19. The effect of phenotypic traits and external cues on natal dispersal movements.

    PubMed

    Delgado, María del Mar; Penteriani, Vincenzo; Revilla, Eloy; Nams, Vilis O

    2010-05-01

    1. Natal dispersal has the potential to affect most ecological and evolutionary processes. However, despite its importance, this complex ecological process still represents a significant gap in our understanding of animal ecology, due to both the lack of empirical data and the intrinsic complexity of dispersal dynamics. 2. By studying natal dispersal of 74 radiotagged juvenile eagle owls Bubo bubo (Linnaeus), in both the wandering and the settlement phases, we empirically addressed the complex interactions by which individual phenotypic traits and external cues jointly shape individual heterogeneity through the different phases of dispersal, at both nightly and weekly temporal scales. 3. Owls in poorer physical condition travelled shorter total distances during the wandering phase, describing straighter paths and moving more slowly, especially when crossing heterogeneous habitats. In general, the owls in worse condition started dispersal later and took longer to find settlement areas farther away. Net distances were also sex biased, with females settling at greater distances. Dispersing individuals did not seem to explore wandering and settlement areas by using a search image of their natal surroundings. Eagle owls showed a heterogeneous pattern of patch occupancy, where a few patches were highly visited by different owls whereas the majority were visited by just one individual. During dispersal, the routes followed by owls were an intermediate solution between optimized and randomized ones. Finally, dispersal showed a marked directionality, largely influenced by dominant winds. These results suggest an asymmetric and anisotropic dispersal pattern, where not only the number of patches but also their functions can affect population viability. 4. The combination of information coming from the relationships among a large set of factors, acting and integrating at different spatial and temporal scales and viewed from the perspective of heterogeneous life histories, provides fruitful ground for future understanding of natal dispersal.

  20. Transcranial direct current stimulation in obsessive-compulsive disorder: emerging clinical evidence and considerations for optimal montage of electrodes.

    PubMed

    Senço, Natasha M; Huang, Yu; D'Urso, Giordano; Parra, Lucas C; Bikson, Marom; Mantovani, Antonio; Shavitt, Roseli G; Hoexter, Marcelo Q; Miguel, Eurípedes C; Brunoni, André R

    2015-07-01

    Neuromodulation techniques for obsessive-compulsive disorder (OCD) treatment have expanded with greater understanding of the brain circuits involved. Transcranial direct current stimulation (tDCS) might be a potential new treatment for OCD, although the optimal montage is unclear. Our aim was to perform a systematic review of meta-analyses of repetitive transcranial magnetic stimulation (rTMS) and deep brain stimulation (DBS) trials for OCD, aiming to identify brain stimulation targets for future tDCS trials and to support the empirical evidence with computer head modeling analysis. Systematic reviews of rTMS and DBS trials on OCD were searched in PubMed/MEDLINE. For the tDCS computational analysis, we employed head models with the goal of optimally targeting current delivery to structures of interest. Only three references matched our eligibility criteria. We simulated four different electrode montages and analyzed current direction and intensity. Although DBS, rTMS and tDCS are not directly comparable, and our theoretical model, based on DBS and rTMS targets, needs empirical validation, we found that the tDCS montage with the cathode over the pre-supplementary motor area and an extra-cephalic anode seems to activate most of the areas related to OCD.

  1. Multilevel Hierarchical Kernel Spectral Clustering for Real-Life Large Scale Complex Networks

    PubMed Central

    Mall, Raghvendra; Langone, Rocco; Suykens, Johan A. K.

    2014-01-01

    Kernel spectral clustering corresponds to a weighted kernel principal component analysis problem in a constrained optimization framework. The primal formulation leads to an eigen-decomposition of a centered Laplacian matrix at the dual level. The dual formulation allows a model to be built on a representative subgraph of the large-scale network in the training phase, with the model parameters estimated in the validation stage. The KSC model has a powerful out-of-sample extension property which allows cluster affiliation to be assigned to the unseen nodes of the big data network. In this paper we exploit the structure of the projections in the eigenspace during the validation stage to automatically determine a set of increasing distance thresholds. We use these distance thresholds in the test phase to obtain multiple levels of hierarchy for the large-scale network. The hierarchical structure in the network is determined in a bottom-up fashion. We empirically showcase that real-world networks have a multilevel hierarchical organization which cannot be detected efficiently by several state-of-the-art large-scale hierarchical community detection techniques such as the Louvain, OSLOM and Infomap methods. We show that a major advantage of our proposed approach is the ability to locate good-quality clusters at both the finer and coarser levels of hierarchy using internal cluster quality metrics on 7 real-life networks. PMID:24949877

  2. Understanding levels of best practice: An empirical validation.

    PubMed

    Phan, Huy P; Ngu, Bing H; Wang, Hui-Wen; Shih, Jen-Hwa; Shi, Sheng-Ying; Lin, Ruey-Yih

    2018-01-01

    Recent research has explored the nature of the theoretical concept of optimal best practice, which emphasizes the importance of personal resolve, inner strength, and the maximization of a person's development, whether it is mental, cognitive, social, or physical. In the context of academia, the study of optimal functioning places emphasis on a student's effort expenditure, positive outlook, and determination to strive for educational success and enriched subjective well-being. One major inquiry closely associated with optimal functioning is the process of optimization. Optimization, in brief, delves into the enactment of different psychological variables that could improve a person's internal state of functioning (e.g., cognitive functioning). From a social sciences point of view, very little empirical evidence exists to affirm and explain a person's achievement of optimal best practice. Over the past five years, we have made extensive progress in the area of optimal best practice by developing different quantitative measures to assess and evaluate the importance of this theoretical concept. The present study, conducted in collaboration with colleagues in Taiwan, involved the use of structural equation modeling (SEM) to analyze a cohort of Taiwanese university students' (N = 1010) responses to a series of Likert-scale measures that focused on three major entities: (i) the importance of optimal best practice, (ii) three major psychological variables (i.e., effective functioning, personal resolve, and emotional functioning) that could optimize students' optimal best levels in academic learning, and (iii) three comparable educational outcomes (i.e., motivation towards academic learning, interest in academic learning, and academic liking experience) that could positively associate with optimal best practice and the three mentioned psychological variables. Findings that we obtained, overall, fully supported our initial a priori model. This evidence, in its totality, has made substantive practical, theoretical, and methodological contributions. Foremost, from our point of view, is clarity into the psychological process of optimal best practice in the context of schooling. For example, in relation to subjective well-being experiences, how can educators optimize students' positive emotions? More importantly, aside from practical relevance, our affirmed research inquiry has produced insightful information for further advancement. One distinction, in this case, entails consideration of a more complex methodological design that could measure, assess, and evaluate the impact of optimization.

  3. Understanding levels of best practice: An empirical validation

    PubMed Central

    Wang, Hui-Wen; Shih, Jen-Hwa; Shi, Sheng-Ying; Lin, Ruey-Yih

    2018-01-01

    Recent research has explored the nature of the theoretical concept of optimal best practice, which emphasizes the importance of personal resolve, inner strength, and the maximization of a person’s development, whether it is mental, cognitive, social, or physical. In the context of academia, the study of optimal functioning places emphasis on a student’s effort expenditure, positive outlook, and determination to strive for educational success and enriched subjective well-being. One major inquiry closely associated with optimal functioning is the process of optimization. Optimization, in brief, delves into the enactment of different psychological variables that could improve a person’s internal state of functioning (e.g., cognitive functioning). From a social sciences point of view, very little empirical evidence exists to affirm and explain a person’s achievement of optimal best practice. Over the past five years, we have made extensive progress in the area of optimal best practice by developing different quantitative measures to assess and evaluate the importance of this theoretical concept. The present study, conducted in collaboration with colleagues in Taiwan, involved the use of structural equation modeling (SEM) to analyze a cohort of Taiwanese university students’ (N = 1010) responses to a series of Likert-scale measures that focused on three major entities: (i) the importance of optimal best practice, (ii) three major psychological variables (i.e., effective functioning, personal resolve, and emotional functioning) that could optimize students’ optimal best levels in academic learning, and (iii) three comparable educational outcomes (i.e., motivation towards academic learning, interest in academic learning, and academic liking experience) that could positively associate with optimal best practice and the three mentioned psychological variables. Findings that we obtained, overall, fully supported our initial a priori model. This evidence, in its totality, has made substantive practical, theoretical, and methodological contributions. Foremost, from our point of view, is clarity into the psychological process of optimal best practice in the context of schooling. For example, in relation to subjective well-being experiences, how can educators optimize students’ positive emotions? More importantly, aside from practical relevance, our affirmed research inquiry has produced insightful information for further advancement. One distinction, in this case, entails consideration of a more complex methodological design that could measure, assess, and evaluate the impact of optimization. PMID:29902278

  4. Accelerating atomistic calculations of quantum energy eigenstates on graphic cards

    NASA Astrophysics Data System (ADS)

    Rodrigues, Walter; Pecchia, A.; Lopez, M.; Auf der Maur, M.; Di Carlo, A.

    2014-10-01

    Electronic properties of nanoscale materials require the calculation of eigenvalues and eigenvectors of large matrices. This bottleneck can be overcome by parallel computing techniques or the introduction of faster algorithms. In this paper we report a custom implementation of the Lanczos algorithm with simple restart, optimized for graphical processing units (GPUs). The whole algorithm has been developed using CUDA and runs entirely on the GPU, with a specialized implementation that spares memory and keeps host-to-device data transfers to a minimum. Furthermore, parallel distribution over several GPUs has been attained using the standard message passing interface (MPI). Benchmark calculations performed on a GaN/AlGaN wurtzite quantum dot with up to 600,000 atoms are presented. The empirical tight-binding (ETB) model with an sp3d5s∗+spin-orbit parametrization has been used to build the system Hamiltonian (H).
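
    For orientation, a plain NumPy/SciPy sketch of the underlying Lanczos iteration is given below; the CUDA kernels, restarting, and MPI distribution described in the paper are omitted, and the sparse random matrix is a hypothetical stand-in for the ETB Hamiltonian.

        # Minimal sketch of the Lanczos iteration for extremal eigenvalues of a
        # large sparse symmetric matrix (full reorthogonalization, no restart).
        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(0)
        n, k = 2000, 50                                  # matrix size, Krylov dimension
        A = sp.random(n, n, density=1e-3, random_state=0)
        H = (A + A.T) * 0.5                              # symmetric stand-in "Hamiltonian"

        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        V = np.zeros((n, k)); alpha = np.zeros(k); beta = np.zeros(k)
        V[:, 0] = v

        for j in range(k):
            w = H @ V[:, j]
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j > 0:
                w -= beta[j - 1] * V[:, j - 1]
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)     # full reorthogonalization
            if j + 1 < k:
                beta[j] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j]

        T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
        print("lowest Ritz values:", np.linalg.eigvalsh(T)[:5])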

  5. Human connectome module pattern detection using a new multi-graph MinMax cut model.

    PubMed

    De, Wang; Wang, Yang; Nie, Feiping; Yan, Jingwen; Cai, Weidong; Saykin, Andrew J; Shen, Li; Huang, Heng

    2014-01-01

    Many recent scientific efforts have been devoted to constructing the human connectome using Diffusion Tensor Imaging (DTI) data, for understanding the large-scale brain networks that underlie higher-level cognition in humans. However, suitable computational network analysis tools are still lacking in human connectome research. To address this problem, we propose a novel multi-graph min-max cut model to detect the consistent network modules from the brain connectivity networks of all studied subjects. A new multi-graph MinMax cut model is introduced to solve this challenging computational neuroscience problem, and an efficient optimization algorithm is derived. In the identified connectome module patterns, each network module shows similar connectivity patterns in all subjects, which potentially associate with specific brain functions shared by all subjects. We validate our method by analyzing the weighted fiber connectivity networks. The promising empirical results demonstrate the effectiveness of our method.

  6. Isomers and energy landscapes of micro-hydrated sulfite and chlorate clusters

    NASA Astrophysics Data System (ADS)

    Hey, John C.; Doyle, Emily J.; Chen, Yuting; Johnston, Roy L.

    2018-03-01

    We present putative global minima for the micro-hydrated sulfite SO3^2-(H2O)N and chlorate ClO3^-(H2O)N systems in the range 3 ≤ N ≤ 15, found using basin-hopping global structure optimization with an empirical potential. We present a structural analysis of the hydration of a large number of minimized structures for hydrated sulfite and chlorate clusters in the range 3 ≤ N ≤ 50. We show that sulfite is a significantly stronger net acceptor of hydrogen bonding within water clusters than chlorate, completely suppressing the appearance of hydroxyl groups pointing out from the cluster surface (dangling OH bonds) in low-energy clusters. We also present a qualitative analysis of a highly explored energy landscape in the region of the global minimum of the eight-water hydrated sulfite and chlorate systems. This article is part of the theme issue 'Modern theoretical chemistry'.

  7. Applying information theory to small groups assessment: emotions and well-being at work.

    PubMed

    García-Izquierdo, Antonio León; Moreno, Blanca; García-Izquierdo, Mariano

    2010-05-01

    This paper explores and analyzes the relations between emotions and well-being in a sample of aviation personnel, specifically passenger crew (flight attendants). There is increasing interest in studying the influence of emotions and their role as psychosocial factors in the work environment, since they are able to act as facilitators or shock absorbers. Testing theoretical models with traditional parametric techniques requires a large sample size for efficient estimation of the coefficients that quantify the relations between variables. Since the available sample is small, the most common size in European enterprises, we used the maximum entropy principle to explore the emotions that are involved in psychosocial risks. The analyses show that this method takes advantage of the limited information available and guarantees an optimal estimation, the results of which are coherent with theoretical models and numerous empirical studies on emotions and well-being.

  8. Is White Light the Best Illumination for Palmprint Recognition?

    NASA Astrophysics Data System (ADS)

    Guo, Zhenhua; Zhang, David; Zhang, Lei

    Palmprint as a new biometric has received great research attention in the past decades. It has many merits, such as robustness, low cost, user friendliness, and high accuracy. Most current palmprint recognition systems use an active light source to acquire clear palmprint images. Thus, the light source is a key component of the system for capturing enough discriminant information for palmprint recognition. To the best of our knowledge, white light is the most widely used light source. However, little work has been done on investigating whether white light is the best illumination for palmprint recognition. In this study, we empirically compared palmprint recognition accuracy using white light and six other different color lights. The experiments on a large database show that white light is not the optimal illumination for palmprint recognition. This finding will be useful to future palmprint recognition system design.

  9. Diagnosis and treatment of neuropathic pain.

    PubMed

    Chong, M Sam; Bajwa, Zahid H

    2003-05-01

    Currently, no consensus on the optimal management of neuropathic pain exists and practices vary greatly worldwide. Possible explanations for this include difficulties in developing agreed diagnostic protocols and the coexistence of neuropathic, nociceptive and, occasionally, idiopathic pain in the same patient. Also, neuropathic pain has historically been classified according to its etiology (e.g., painful diabetic neuropathy, trigeminal neuralgia, spinal cord injury) without regard for the presumed mechanism(s) underlying the specific symptoms. A combined etiologic/mechanistic classification might improve neuropathic pain management. The treatment of neuropathic pain is largely empirical, often relying heavily on data from small, generally poorly-designed clinical trials or anecdotal evidence. Consequently, diverse treatments are used, including non-invasive drug therapies (antidepressants, antiepileptic drugs and membrane stabilizing drugs), invasive therapies (nerve blocks, ablative surgery), and alternative therapies (e.g., acupuncture). This article reviews the current and historical practices in the diagnosis and treatment of neuropathic pain, and focuses on the USA, Europe and Japan.

  10. Network structure from rich but noisy data

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.

    2018-06-01

    Driven by growing interest across the sciences, a large number of empirical studies have been conducted in recent years of the structure of networks ranging from the Internet and the World Wide Web to biological networks and social networks. The data produced by these experiments are often rich and multimodal, yet at the same time they may contain substantial measurement error [1-7]. Accurate analysis and understanding of networked systems requires a way of estimating the true structure of networks from such rich but noisy data [8-15]. Here we describe a technique that allows us to make optimal estimates of network structure from complex data in arbitrary formats, including cases where there may be measurements of many different types, repeated observations, contradictory observations, annotations or metadata, or missing data. We give example applications to two different social networks, one derived from face-to-face interactions and one from self-reported friendships.

  11. Isomers and energy landscapes of micro-hydrated sulfite and chlorate clusters.

    PubMed

    Hey, John C; Doyle, Emily J; Chen, Yuting; Johnston, Roy L

    2018-03-13

    We present putative global minima for the micro-hydrated sulfite SO3^2-(H2O)N and chlorate ClO3^-(H2O)N systems in the range 3 ≤ N ≤ 15, found using basin-hopping global structure optimization with an empirical potential. We present a structural analysis of the hydration of a large number of minimized structures for hydrated sulfite and chlorate clusters in the range 3 ≤ N ≤ 50. We show that sulfite is a significantly stronger net acceptor of hydrogen bonding within water clusters than chlorate, completely suppressing the appearance of hydroxyl groups pointing out from the cluster surface (dangling OH bonds) in low-energy clusters. We also present a qualitative analysis of a highly explored energy landscape in the region of the global minimum of the eight-water hydrated sulfite and chlorate systems. This article is part of the theme issue 'Modern theoretical chemistry'. © 2018 The Authors.

  12. An optimized strategy to measure protein stability highlights differences between cold and hot unfolded states

    NASA Astrophysics Data System (ADS)

    Alfano, Caterina; Sanfelice, Domenico; Martin, Stephen R.; Pastore, Annalisa; Temussi, Piero Andrea

    2017-05-01

    Macromolecular crowding ought to stabilize folded forms of proteins, through an excluded volume effect. This explanation has been questioned and observed effects attributed to weak interactions with other cell components. Here we show conclusively that protein stability is affected by volume exclusion and that the effect is more pronounced when the crowder's size is closer to that of the protein under study. Accurate evaluation of the volume exclusion effect is made possible by the choice of yeast frataxin, a protein that undergoes cold denaturation above zero degrees, because the unfolded form at low temperature is more expanded than the corresponding one at high temperature. To achieve optimum sensitivity to changes in stability we introduce an empirical parameter derived from the stability curve. The large effect of PEG 20 on cold denaturation can be explained by a change in water activity, according to Privalov's interpretation of cold denaturation.

  13. Optimal treatment of laryngopharyngeal reflux disease

    PubMed Central

    Martinucci, Irene; Savarino, Edoardo; Nacci, Andrea; Romeo, Salvatore Osvaldo; Bellini, Massimo; Savarino, Vincenzo; Fattori, Bruno; Marchi, Santino

    2013-01-01

    Laryngopharyngeal reflux is defined as the reflux of gastric content into the larynx and pharynx. A large amount of data suggests a growing prevalence of laryngopharyngeal symptoms in patients with gastroesophageal reflux disease. However, laryngopharyngeal reflux is a multifactorial syndrome and gastroesophageal reflux disease is not the only cause involved in its pathogenesis. Current critical issues in diagnosing laryngopharyngeal reflux are the many nonspecific laryngeal symptoms and signs and the poor sensitivity and specificity of all currently available diagnostic tests. Although it is a pragmatic clinical strategy to start with empiric trials of proton pump inhibitors, many patients with suspected laryngopharyngeal reflux have persistent symptoms despite maximal acid suppression therapy. Overall, only scant and conflicting results are available for assessing the effect of reflux treatments (including dietary and lifestyle modification, medical treatment, and antireflux surgery) on laryngopharyngeal reflux. The present review is aimed at critically discussing the current treatment options in patients with laryngopharyngeal reflux, and provides a perspective on the development of new therapies. PMID:24179671

  14. Computer-aided design and experimental investigation of a hydrodynamic device: the microwire electrode

    PubMed

    Fulian; Gooch; Fisher; Stevens; Compton

    2000-08-01

    The development and application of a new electrochemical device using a computer-aided design strategy is reported. This novel design is based on the flow of electrolyte solution past a microwire electrode situated centrally within a large duct. In the design stage, finite element simulations were employed to evaluate feasible working geometries and mass transport rates. The computer-optimized designs were then exploited to construct experimental devices. Steady-state voltammetric measurements were performed for a reversible one-electron-transfer reaction to establish the experimental relationship between electrolysis current and solution velocity. The experimental results are compared to those predicted numerically, and good agreement is found. The numerical studies are also used to establish an empirical relationship between the mass transport limited current and the volume flow rate, providing a simple and quantitative alternative for workers who would prefer to exploit this device without the need to develop the numerical aspects.

  15. A test of reproductive power in snakes.

    PubMed

    Boback, Scott M; Guyer, Craig

    2008-05-01

    Reproductive power is a contentious concept among ecologists, and the model has been criticized on theoretical and empirical grounds. Despite these criticisms, the model has successfully predicted the modal (optimal) size in three large taxonomic groups and the shape of the body size distribution in two of these groups. We tested the reproductive power model on snakes, a group that differs markedly in physiology, foraging ecology, and body shape from the endothermic groups upon which the model was derived. Using detailed field data from the published literature, snake-specific constants associated with reproductive power were determined using allometric relationships of energy invested annually in egg production and population productivity. The resultant model accurately predicted the mode and left side of the size distribution for snakes but failed to predict the right side of that distribution. If the model correctly describes what is possible in snakes, observed size diversity is limited, especially in the largest size classes.

  16. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

    a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  17. Dust cyclone research in the 21st century

    USDA-ARS's Scientific Manuscript database

    Research to meet the demand for ever more efficient dust cyclones continues after some eighty years. Recent trends emphasize design optimization through computational fluid dynamics (CFD) and testing design subtleties not modeled by semi-empirical equations. Improvements to current best available ...

  18. Design and analysis of electricity markets

    NASA Astrophysics Data System (ADS)

    Sioshansi, Ramteen Mehr

    Restructured competitive electricity markets rely on designing market-based mechanisms which can efficiently coordinate the power system and minimize the exercise of market power. This dissertation is a series of essays which develop and analyze models of restructured electricity markets. Chapter 2 studies the incentive properties of a co-optimized market for energy and reserves that pays reserved generators their implied opportunity cost---which is the difference between their stated energy cost and the market-clearing price for energy. By analyzing the market as a competitive direct revelation mechanism we examine the properties of efficient equilibria and demonstrate that generators have incentives to shade their stated costs below actual costs. We further demonstrate that the expected energy payments of our mechanism are less than those in a disjoint market for energy only. Chapter 3 is an empirical validation of a supply function equilibrium (SFE) model. By comparing theoretically optimal supply functions and actual generation offers into the Texas spot balancing market, we show that the SFE fits the actual behavior of the largest generators in the market. This not only serves to validate the model, but also demonstrates the extent to which firms exercise market power. Chapters 4 and 5 examine equity, incentive, and efficiency issues in the design of non-convex commitment auctions. We demonstrate that different near-optimal solutions to a central unit commitment problem which have similar-sized optimality gaps will generally yield vastly different energy prices and payoffs to individual generators. Although solving the mixed integer program to optimality will overcome such issues, we show that this relies on achieving optimality of the commitment---which may not be tractable for large-scale problems within the allotted timeframe. We then simulate and compare a competitive benchmark for a market with centralized and self commitment in order to bound the efficiency losses stemming from coordination losses (cost of anarchy) in a decentralized market.

  19. Optimal exploitation strategies for an animal population in a stochastic serially correlated environment

    USGS Publications Warehouse

    Anderson, D.R.

    1974-01-01

    Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses, because relatively much is known concerning the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models, and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t, unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest or harvest rate, or designed to maintain a constant breeding population size, is inefficient.
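
    The stochastic dynamic programming formulation can be illustrated with a small value-iteration sketch that derives a feedback harvest policy on a discretized population grid; the growth factors, probabilities, reward, and grid below are hypothetical toy values, not the mallard model estimated in the study.

        # Minimal sketch: value iteration for a state-feedback harvest policy
        # under stochastic growth; all numbers are hypothetical.
        import numpy as np

        pop_states = np.arange(0, 101, 5)        # breeding-population grid
        harvests = np.arange(0, 51, 5)           # allowable harvest levels
        growth = np.array([0.9, 1.1, 1.3])       # stochastic growth factors
        probs = np.array([0.25, 0.5, 0.25])
        gamma = 0.95

        def next_state(pop, h, g):
            return np.clip((pop - h) * g, pop_states[0], pop_states[-1])

        def grid_index(x):
            return np.clip(np.searchsorted(pop_states, x), 0, len(pop_states) - 1)

        V = np.zeros(len(pop_states))
        for _ in range(300):                     # value iteration
            V_new = np.empty_like(V)
            for i, pop in enumerate(pop_states):
                feasible = harvests[harvests <= pop]
                V_new[i] = max(h + gamma * probs @ V[grid_index(next_state(pop, h, growth))]
                               for h in feasible)
            V = V_new

        # Greedy read-out: harvest as a function of breeding-population size.
        policy = {}
        for pop in pop_states:
            feasible = harvests[harvests <= pop]
            q = [h + gamma * probs @ V[grid_index(next_state(pop, h, growth))] for h in feasible]
            policy[int(pop)] = int(feasible[int(np.argmax(q))])
        print(policy)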

  20. Modeling stomatal conductance in the Earth system: linking leaf water-use efficiency and water transport along the soil-plant-atmosphere continuum

    NASA Astrophysics Data System (ADS)

    Bonan, G. B.; Williams, M.; Fisher, R. A.; Oleson, K. W.

    2014-05-01

    The empirical Ball-Berry stomatal conductance model is commonly used in Earth system models to simulate biotic regulation of evapotranspiration. However, the dependence of stomatal conductance (gs) on vapor pressure deficit (Ds) and soil moisture must both be empirically parameterized. We evaluated the Ball-Berry model used in the Community Land Model version 4.5 (CLM4.5) and an alternative stomatal conductance model that links leaf gas exchange, plant hydraulic constraints, and the soil-plant-atmosphere continuum (SPA) to numerically optimize photosynthetic carbon gain per unit water loss while preventing leaf water potential from dropping below a critical minimum level. We evaluated two alternative optimization algorithms: intrinsic water-use efficiency (ΔAn/Δgs, the marginal carbon gain of stomatal opening) and water-use efficiency (ΔAn/ΔEl, the marginal carbon gain of water loss). We implemented the stomatal models in a multi-layer plant canopy model to resolve profiles of gas exchange, leaf water potential, and plant hydraulics within the canopy, and evaluated the simulations using: (1) leaf analyses; (2) canopy net radiation, sensible heat flux, latent heat flux, and gross primary production at six AmeriFlux sites spanning 51 site-years; and (3) parameter sensitivity analyses. Without soil moisture stress, the performance of the SPA stomatal conductance model was generally comparable to or somewhat better than the Ball-Berry model in flux tower simulations, but was significantly better than the Ball-Berry model when there was soil moisture stress. Functional dependence of gs on soil moisture emerged from the physiological theory linking leaf water-use efficiency and water flow to and from the leaf along the soil-to-leaf pathway, rather than being imposed a priori as in the Ball-Berry model. Similar functional dependence of gs on Ds emerged from the water-use efficiency optimization. Sensitivity analyses showed that two parameters (stomatal efficiency and root hydraulic conductivity) minimized errors with the SPA stomatal conductance model. The critical stomatal efficiency for optimization (ι) was estimated from leaf trait datasets and is related to the slope parameter (g1) of the Ball-Berry model. The optimized parameter value was consistent with this estimate. Optimized root hydraulic conductivity was consistent with estimates from literature surveys. The two central concepts embodied in the stomatal model, that plants account for both water-use efficiency and for hydraulic safety in regulating stomatal conductance, imply a notion of optimal plant strategies and provide testable model hypotheses, rather than empirical descriptions of plant behavior.
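
    For reference, the empirical Ball-Berry relation evaluated in the study can be written as a one-line function, gs = g0 + g1 * An * hs / cs; the parameter values in the sketch below are illustrative only, and the SPA water-use-efficiency optimization is not reproduced here.

        # The Ball-Berry stomatal conductance relation; g0 and g1 values are
        # illustrative, not the CLM4.5 calibration.
        def ball_berry(an, hs, cs, g0=0.01, g1=9.0):
            """gs = g0 + g1 * An * hs / cs.

            an : net assimilation rate (umol CO2 m-2 s-1)
            hs : fractional relative humidity at the leaf surface (0-1)
            cs : CO2 concentration at the leaf surface (umol mol-1)
            g0 : residual conductance (mol m-2 s-1); g1 : dimensionless slope
            """
            return g0 + g1 * an * hs / cs

        print(ball_berry(an=12.0, hs=0.7, cs=380.0))   # stomatal conductance (mol m-2 s-1)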

  1. Sanov and central limit theorems for output statistics of quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horssen, Merlijn van, E-mail: merlijn.vanhorssen@nottingham.ac.uk; Guţă, Mădălin, E-mail: madalin.guta@nottingham.ac.uk

    2015-02-15

    In this paper, we consider the statistics of repeated measurements on the output of a quantum Markov chain. We establish a large deviations result analogous to Sanov’s theorem for the multi-site empirical measure associated to finite sequences of consecutive outcomes of a classical stochastic process. Our result relies on the construction of an extended quantum transition operator (which keeps track of previous outcomes) in terms of which we compute moment generating functions, and whose spectral radius is related to the large deviations rate function. As a corollary to this, we obtain a central limit theorem for the empirical measure. Such higher level statistics may be used to uncover critical behaviour such as dynamical phase transitions, which are not captured by lower level statistics such as the sample mean. As a step in this direction, we give an example of a finite system whose level-1 (empirical mean) rate function is independent of a model parameter while the level-2 (empirical measure) rate is not.

  2. On the use of topology optimization for improving heat transfer in molding process

    NASA Astrophysics Data System (ADS)

    Agazzi, A.; LeGoff, R.; Truc-Vu, C.

    2016-10-01

    In the plastics industry, one of the key factors is the control of heat transfer. One way to achieve that goal is to design an effective cooling system. But in some areas of the mold where it is not possible to place cooling channels, a highly conductive material, such as a copper pin, is often used instead. Most of the time, the location, size, and quantity of the copper pins are chosen from empirical considerations, without using optimization procedures. In this article, it is proposed to use topology optimization in order to improve transient conductive heat transfer in an injection/blowing mold. Two methodologies are applied and compared. Finally, the optimal distribution of copper pins in the mold is given.

  3. Efficient search of multiple types of targets

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-12-01

    Random searches often take place in fragmented landscapes. Also, in many instances like animal foraging, significant benefits to the searcher arise from visits to a large diversity of patches with a well-balanced distribution of targets found. To date, such aspects have been widely ignored in the usual single-objective analysis of search efficiency, in which one seeks to maximize just the number of targets found per distance traversed. Here we address the problem of determining the best strategies for the random search when these multiple-objective factors play a key role in the process. We consider a figure of merit (efficiency function), which properly "scores" the mentioned tasks. By considering random walk searchers with a power-law asymptotic Lévy distribution of step lengths, p(ℓ) ~ ℓ^(-μ), with 1 < μ ≤ 3, we show that the standard optimal strategy with μopt ≈ 2 no longer holds universally. Instead, optimal searches with enhanced superdiffusivity emerge, including values as low as μopt ≈ 1.3 (i.e., tending to the ballistic limit). For the general theory of random search optimization, our findings emphasize the necessity to correctly characterize the multitude of aims in any concrete metric used to compare candidate strategies for efficiency. In the context of animal foraging, our results might explain some empirical data pointing to stronger superdiffusion (μ < 2) in the search behavior of different animal species, conceivably associated with multiple goals to be achieved in fragmented landscapes.
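
    A minimal simulation of the single-objective ingredient of this problem (targets found per distance traveled for different Lévy exponents μ) is sketched below; the geometry, target density, and detection rule are illustrative assumptions, and the multi-objective scoring of the record is not reproduced.

```python
"""Minimal 2D Lévy-walk search simulation (illustrative assumptions throughout)."""
import numpy as np

rng = np.random.default_rng(0)

def levy_step(mu, l0=1.0, lmax=1e3):
    # inverse-transform sample from p(l) ~ l^-mu for l >= l0, truncated at lmax
    u = rng.random()
    return min(l0 * (1.0 - u) ** (-1.0 / (mu - 1.0)), lmax)

def search_efficiency(mu, n_targets=200, box=100.0, radius=1.0, budget=5000.0):
    targets = rng.random((n_targets, 2)) * box
    pos = rng.random(2) * box
    traveled, found = 0.0, 0
    while traveled < budget:
        step = levy_step(mu)
        theta = rng.random() * 2 * np.pi
        heading = np.array([np.cos(theta), np.sin(theta)])
        # walk the step in small increments, detecting any target within `radius`
        n_inc = max(1, int(step / (radius / 2)))
        for _ in range(n_inc):
            pos = (pos + heading * (step / n_inc)) % box          # periodic boundaries
            d = np.linalg.norm((targets - pos + box / 2) % box - box / 2, axis=1)
            hit = d < radius
            if hit.any():
                found += hit.sum()
                targets[hit] = rng.random((hit.sum(), 2)) * box   # replenish found targets elsewhere
        traveled += step
    return found / traveled          # targets found per distance traveled

for mu in (1.3, 1.6, 2.0, 2.5, 3.0):
    print(f"mu = {mu:.1f}  efficiency = {search_efficiency(mu):.4f}")
```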

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., with the assistance of NREL's PV Manufacturing R&D program, have continued the advancement of CIGS production technology through the development of trajectory-oriented predictive/control models, fault-tolerance control, control-platform development, in-situ sensors, and process improvements. Modeling activities to date include the development of physics-based and empirical models for CIGS and sputter-deposition processing, implementation of model-based control, and application of predictive models to the construction of new evaporation sources and for control. Model-based control is enabled through implementation of reduced or empirical models into a control platform. Reliability improvement activities include implementation of preventive maintenance schedules; detection of failed sensors/equipment and reconfiguration to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which, in turn, have been enabled by control and reliability improvements due to this PV Manufacturing R&D program. This has resulted in substantial improvements of flexible CIGS PV module performance and efficiency.

  5. Life at the Common Denominator: Mechanistic and Quantitative Biology for the Earth and Space Sciences

    NASA Technical Reports Server (NTRS)

    Hoehler, Tori M.

    2010-01-01

    The remarkable challenges and possibilities of the coming few decades will compel the biogeochemical and astrobiological sciences to characterize the interactions between biology and its environment in a fundamental, mechanistic, and quantitative fashion. The clear need for integrative and scalable biology-environment models is exemplified in the Earth sciences by the challenge of effectively addressing anthropogenic global change, and in the space sciences by the challenge of mounting a well-constrained yet sufficiently adaptive and inclusive search for life beyond Earth. Our understanding of the life-planet interaction is still, however, largely empirical. A variety of approaches seek to move from empirical to mechanistic descriptions. One approach focuses on the relationship between biology and energy, which is at once universal (all life requires energy), unique (life manages energy flow in a fashion not seen in abiotic systems), and amenable to characterization and quantification in thermodynamic terms. Simultaneously, a focus on energy flow addresses a critical point of interface between life and its geological, chemical, and physical environment. Characterizing and quantifying this relationship for life on Earth will support the development of integrative and predictive models for biology-environment dynamics. Understanding this relationship at its most fundamental level holds potential for developing concepts of habitability and biosignatures that can optimize astrobiological exploration strategies and are extensible to all life.

  6. Temporal Planning for Compilation of Quantum Approximate Optimization Algorithm Circuits

    NASA Technical Reports Server (NTRS)

    Venturelli, Davide; Do, Minh Binh; Rieffel, Eleanor Gilbert; Frank, Jeremy David

    2017-01-01

    We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus our initial experiments on Quantum Approximate Optimization Algorithm (QAOA) circuits that have few ordering constraints and allow highly parallel plans. We report on experiments using several temporal planners to compile circuits of various sizes to realistic hardware. This early empirical evaluation suggests that temporal planning is a viable approach to quantum circuit compilation.

  7. Airline Maintenance Manpower Optimization from the De Novo Perspective

    NASA Astrophysics Data System (ADS)

    Liou, James J. H.; Tzeng, Gwo-Hshiung

    Human resource management (HRM) is an important issue in today’s competitive airline market. In this paper, we discuss a multi-objective model designed from the De Novo perspective to help airlines optimize their maintenance manpower portfolio. The effectiveness of the model and solution algorithm is demonstrated in an empirical study of the optimization of the human resources needed for airline line maintenance. Both De Novo and traditional multiple objective programming (MOP) methods are analyzed. A comparison of the results with those of traditional MOP indicates that the proposed model and solution algorithm do provide better performance and an improved human resource portfolio.

  8. Using DFT methodology for more reliable predictive models: Design of inhibitors of Golgi α-Mannosidase II.

    PubMed

    Bobovská, Adela; Tvaroška, Igor; Kóňa, Juraj

    2016-05-01

    Human Golgi α-mannosidase II (GMII), a zinc ion co-factor dependent glycoside hydrolase (E.C.3.2.1.114), is a pharmaceutical target for the design of inhibitors with anti-cancer activity. The discovery of an effective inhibitor is complicated by the fact that all known potent inhibitors of GMII are involved in unwanted co-inhibition with lysosomal α-mannosidase (LMan, E.C.3.2.1.24), a relative of GMII. Routine empirical QSAR models for both GMII and LMan did not achieve the required accuracy. Therefore, we have developed a fast computational protocol to build predictive models combining interaction energy descriptors from an empirical docking scoring function (Glide-Schrödinger), the Linear Interaction Energy (LIE) method, and quantum mechanical density functional theory (QM-DFT) calculations. The QSAR models were built and validated with a library of structurally diverse GMII and LMan inhibitors and non-active compounds. A critical role of QM-DFT descriptors for the more accurate prediction abilities of the models is demonstrated. The predictive ability of the models was significantly improved when going from the empirical docking scoring function to mixed empirical-QM-DFT QSAR models (Q² = 0.78-0.86 when cross-validation procedures were carried out; and R² = 0.81-0.83 for a testing set). The average error for the predicted ΔG_bind decreased to 0.8-1.1 kcal/mol. Also, 76-80% of non-active compounds were successfully filtered out from GMII and LMan inhibitors. The QSAR models with the fragmented QM-DFT descriptors may find a useful application in structure-based drug design where pure empirical and force field methods have reached their limits and where quantum mechanics effects are critical for ligand-receptor interactions. The optimized models will be applied in lead optimization for GMII drug development. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Empirical Investigation of Critical Transitions in Paleoclimate

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Gavrilov, A.; Feigin, A.

    2016-12-01

    In this work we apply a new empirical method for the analysis of complex spatially distributed systems to the analysis of paleoclimate data. The method consists of two general parts: (i) revealing the optimal phase-space variables and (ii) constructing an empirical prognostic model from the observed time series. The construction of phase-space variables is based on the decomposition of the data into nonlinear dynamical modes, which was successfully applied to the global SST field and allowed the time scales to be clearly separated and a climate shift to be revealed in the observed data interval [1]. The second part, the Bayesian approach to optimal evolution operator reconstruction from time series, is based on representing the evolution operator as a nonlinear stochastic function modeled by artificial neural networks [2,3]. In this work we focus on the investigation of critical transitions - the abrupt changes in climate dynamics - in much longer time scale processes. It is well known that there were a number of critical transitions on different time scales in the past. Here we demonstrate the first results of applying our empirical methods to the analysis of paleoclimate variability. In particular, we discuss the possibility of detecting, identifying and predicting such critical transitions by means of nonlinear empirical modeling using paleoclimate record time series. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510 2. Molkov, Ya. I., Mukhin, D. N., Loskutov, E. M., & Feigin, A. M. (2012). Random dynamical models from time series. Phys. Rev. E, 85(3). 3. Mukhin, D., Kondrashov, D., Loskutov, E., Gavrilov, A., Feigin, A., & Ghil, M. (2015). Predicting Critical Transitions in ENSO Models. Part II: Spatially Dependent Models. Journal of Climate, 28(5), 1962-1976. http://doi.org/10.1175/JCLI-D-14-00240.1

  10. Transfer pricing in hospitals and efficiency of physicians: the case of anesthesia services.

    PubMed

    Kuntz, Ludwig; Vera, Antonio

    2005-01-01

    The objective is to investigate theoretically and empirically how the efficiency of the physicians involved in anesthesia and surgery can be optimized by the introduction of transfer pricing for anesthesia services. The anesthesiology data of approximately 57,000 operations carried out at the University Hospital Hamburg-Eppendorf (UKE) in Germany in the period from 2000 to 2002 are analyzed using parametric and non-parametric methods. The principal finding of the empirical analysis is that the efficiency of the physicians involved in anesthesia and surgery at the UKE improved after the introduction of transfer pricing.

  11. Construction Performance Optimization toward Green Building Premium Cost Based on Greenship Rating Tools Assessment with Value Engineering Method

    NASA Astrophysics Data System (ADS)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat

    2017-07-01

    The green building concept has become important in the current building life cycle as a way to mitigate environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the targeted green building rating while optimizing life cycle cost. The study therefore helps building stakeholders determine the building fixtures needed to achieve a green building certification target. Empirically, the paper collects data on green buildings in the Indonesian construction industry, such as green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. The value engineering method is then used to optimize green building fixtures based on building function and cost aspects. Findings indicate that construction performance optimization affected green building achievement by increasing energy and water efficiency and improving life cycle cost, especially for the chosen green building fixtures.

  12. Neuro-genetic system for optimization of GMI samples sensitivity.

    PubMed

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
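
    The neuro-genetic idea, an MLP surrogate whose predictions are searched by a genetic algorithm, can be sketched as follows. The training data, parameter names and ranges, and GA settings below are synthetic stand-ins, not GMI measurements or the system described in the record.

```python
"""Sketch of a neuro-genetic loop: MLP surrogate + simple real-coded GA (synthetic data)."""
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# hypothetical conditioning parameters: [sample length (cm), DC bias (mA), excitation frequency (MHz)]
lo = np.array([1.0, 20.0, 0.1])
hi = np.array([15.0, 100.0, 10.0])

def synthetic_sensitivity(x):
    # stand-in response surface with an interior optimum (purely illustrative)
    L, idc, f = x.T
    return np.sin(L / 5.0) * np.exp(-((idc - 60.0) / 25.0) ** 2) * f * np.exp(-f / 3.0)

X = lo + rng.random((600, 3)) * (hi - lo)
y = synthetic_sensitivity(X) + rng.normal(0, 0.01, len(X))
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0).fit(X, y)

# simple real-coded GA maximizing the MLP's predicted sensitivity
pop = lo + rng.random((60, 3)) * (hi - lo)
for gen in range(80):
    fitness = mlp.predict(pop)
    parents = pop[np.argsort(fitness)[-30:]]                  # keep the best half
    kids = parents[rng.integers(0, 30, 60)]                   # resample parents
    alpha = rng.random((60, 1))
    kids = alpha * kids + (1 - alpha) * parents[rng.integers(0, 30, 60)]  # blend crossover
    kids += rng.normal(0, 0.02, kids.shape) * (hi - lo)                   # Gaussian mutation
    pop = np.clip(kids, lo, hi)

best = pop[np.argmax(mlp.predict(pop))]
print("GA optimum (length, DC bias, frequency):", np.round(best, 2))
```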

  13. Death of the (traveling) salesman: primates do not show clear evidence of multi-step route planning.

    PubMed

    Janson, Charles

    2014-05-01

    Several comparative studies have linked larger brain size to a fruit-eating diet in primates and other animals. The general explanation for this correlation is that fruit is a complex resource base, consisting of many discrete patches of many species, each with distinct nutritional traits, the production of which changes predictably both within and between seasons. Using this information to devise optimal spatial foraging strategies is among the most difficult problems to solve in all of mathematics, a version of the famous Traveling Salesman Problem. Several authors have suggested that primates might use their large brains and complex cognition to plan foraging strategies that approximate optimal solutions to this problem. Three empirical studies have examined how captive primates move when confronted with the simplest version of the problem: a spatial array of equally valuable goals. These studies have all concluded that the subjects remember many food source locations and show very efficient travel paths; some authors also inferred that the subjects may plan their movements based on considering combinations of three or more future goals at a time. This analysis re-examines critically the claims of planned movement sequences from the evidence presented. The efficiency of observed travel paths is largely consistent with use of the simplest of foraging rules, such as visiting the nearest unused "known" resource. Detailed movement sequences by test subjects are most consistent with a rule that mentally sums spatial information from all unused resources in a given trial into a single "gravity" measure that guides movements to one destination at a time. © 2013 Wiley Periodicals, Inc.
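
    The two simple movement rules contrasted above (visit the nearest unused resource, or head toward a pooled "gravity" destination) can be compared in a few lines; the geometry and the inverse-square weighting below are illustrative assumptions, not the analysis in the record.

```python
"""Compare a nearest-neighbor rule with a 'gravity'-style rule on random resource arrays."""
import numpy as np

rng = np.random.default_rng(2)

def nearest_neighbor_path(points, start):
    pos, remaining, total = start, list(range(len(points))), 0.0
    while remaining:
        d = np.linalg.norm(points[remaining] - pos, axis=1)
        nxt = remaining[int(np.argmin(d))]       # always go to the nearest unused resource
        total += d.min()
        pos = points[nxt]
        remaining.remove(nxt)
    return total

def gravity_path(points, start, power=2.0):
    pos, remaining, total = start, list(range(len(points))), 0.0
    while remaining:
        d = np.linalg.norm(points[remaining] - pos, axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power   # nearer resources pull harder (assumed weighting)
        target = (w[:, None] * points[remaining]).sum(0) / w.sum()  # pooled "gravity" destination
        # move toward the pooled destination, arriving at the resource closest to it
        nxt = remaining[int(np.argmin(np.linalg.norm(points[remaining] - target, axis=1)))]
        total += np.linalg.norm(points[nxt] - pos)
        pos = points[nxt]
        remaining.remove(nxt)
    return total

pts = rng.random((12, 2))
start = np.array([0.5, 0.5])
print("nearest-neighbor path length:", round(nearest_neighbor_path(pts, start), 3))
print("gravity-rule path length:    ", round(gravity_path(pts, start), 3))
```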

  14. Lexical Link Analysis Application: Improving Web Service to Acquisition Visibility Portal

    DTIC Science & Technology

    2013-09-30

    during the Empire Challenge 2008 and 2009 (EC08/09) field experiments and for numerous other field experiments of new technologies during Trident Warrior...Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/ VLC -2000) (pp. 63–70). Retrieved from http://nlp.stanford.edu/manning

  15. Searching for a Common Ground--A Literature Review of Empirical Research on Scientific Inquiry Activities

    ERIC Educational Resources Information Center

    Rönnebeck, Silke; Bernholt, Sascha; Ropohl, Mathias

    2016-01-01

    Despite the importance of scientific inquiry in science education, researchers and educators disagree considerably regarding what features define this instructional approach. While a large body of literature addresses theoretical considerations, numerous empirical studies investigate scientific inquiry on quite different levels of detail and also…

  16. Cardiac surgery antibiotic prophylaxis and calculated empiric antibiotic therapy.

    PubMed

    Gorski, Armin; Hamouda, Khaled; Özkur, Mehmet; Leistner, Markus; Sommer, Sebastian-Patrick; Leyh, Rainer; Schimmer, Christoph

    2015-03-01

    Ongoing debate exists concerning the optimal choice and duration of antibiotic prophylaxis as well as the reasonable calculated empiric antibiotic therapy for hospital-acquired infections in critically ill cardiac surgery patients. A nationwide questionnaire was distributed to all German heart surgery centers concerning antibiotic prophylaxis and the calculated empiric antibiotic therapy. The response to the questionnaire was 87.3%. All clinics that responded use antibiotic prophylaxis, 79% perform it not longer than 24 h (single-shot: 23%; 2 doses: 29%; 3 doses: 27%; 4 doses: 13%; and >5 doses: 8%). Cephalosporin was used in 89% of clinics (46% second-generation, 43% first-generation cephalosporin). If sepsis is suspected, the following diagnostics are performed routinely: wound inspection 100%; white blood cell count 100%; radiography 99%; C-reactive protein 97%; microbiological testing of urine 91%, blood 81%, and bronchial secretion 81%; procalcitonin 74%; and echocardiography 75%. The calculated empiric antibiotic therapy (depending on the suspected focus) consists of a multidrug combination with broad-spectrum agents. This survey shows that existing national guidelines and recommendations concerning perioperative antibiotic prophylaxis and calculated empiric antibiotic therapy are well applied in almost all German heart centers. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  17. Optimal function explains forest responses to global change

    Treesearch

    Roderick Dewar; Oskar Franklin; Annikki Makela; Ross E. McMurtrie; Harry T. Valentine

    2009-01-01

    Plant responses to global changes in carbon dioxide (CO2), nitrogen, and water availability are critical to future atmospheric CO2 concentrations, hydrology, and hence climate. Our understanding of those responses is incomplete, however. Multiple-resource manipulation experiments and empirical observations have revealed a...

  18. Suppressing disease spreading by using information diffusion on multiplex networks.

    PubMed

    Wang, Wei; Liu, Quan-Hui; Cai, Shi-Min; Tang, Ming; Braunstein, Lidia A; Stanley, H Eugene

    2016-07-06

    Although there is always an interplay between the dynamics of information diffusion and disease spreading, the empirical research on the systemic coevolution mechanisms connecting these two spreading dynamics is still lacking. Here we investigate the coevolution mechanisms and dynamics between information and disease spreading by utilizing real data and a proposed spreading model on multiplex network. Our empirical analysis finds asymmetrical interactions between the information and disease spreading dynamics. Our results obtained from both the theoretical framework and extensive stochastic numerical simulations suggest that an information outbreak can be triggered in a communication network by its own spreading dynamics or by a disease outbreak on a contact network, but that the disease threshold is not affected by information spreading. Our key finding is that there is an optimal information transmission rate that markedly suppresses the disease spreading. We find that the time evolution of the dynamics in the proposed model qualitatively agrees with the real-world spreading processes at the optimal information transmission rate.

  19. Untangling complex networks: risk minimization in financial markets through accessible spin glass ground states

    PubMed Central

    Lisewski, Andreas Martin; Lichtarge, Olivier

    2010-01-01

    Recurrent international financial crises inflict significant damage on societies and stress the need for mechanisms or strategies to control risk and temper market uncertainties. Unfortunately, the complex network of market interactions often confounds rational approaches to optimize financial risks. Here we show that investors can overcome this complexity and globally minimize risk in portfolio models for any given expected return, provided the relative margin requirement remains below a critical, empirically measurable value. In practice, for markets with centrally regulated margin requirements, a rational stabilization strategy would be keeping margins small enough. This result follows from ground states of the random field spin glass Ising model that can be calculated exactly through convex optimization when relative spin coupling is limited by the norm of the network's Laplacian matrix. In that regime, this novel approach is robust to noise in empirical data and may also be broadly relevant to complex networks with frustrated interactions that are studied throughout scientific fields. PMID:20625477

  20. Untangling complex networks: Risk minimization in financial markets through accessible spin glass ground states

    NASA Astrophysics Data System (ADS)

    Lisewski, Andreas Martin; Lichtarge, Olivier

    2010-08-01

    Recurrent international financial crises inflict significant damage on societies and stress the need for mechanisms or strategies to control risk and temper market uncertainties. Unfortunately, the complex network of market interactions often confounds rational approaches to optimize financial risks. Here we show that investors can overcome this complexity and globally minimize risk in portfolio models for any given expected return, provided the margin requirement remains below a critical, empirically measurable value. In practice, for markets with centrally regulated margin requirements, a rational stabilization strategy would be keeping margins small enough. This result follows from ground states of the random field spin glass Ising model that can be calculated exactly through convex optimization when relative spin coupling is limited by the norm of the network’s Laplacian matrix. In that regime, this novel approach is robust to noise in empirical data and may also be broadly relevant to complex networks with frustrated interactions that are studied throughout scientific fields.

  1. Modeling, simulation, and estimation of optical turbulence

    NASA Astrophysics Data System (ADS)

    Formwalt, Byron Paul

    This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn^2 ≈ 6.01 · 10^-9 m^(-2/3), l0 ≈ 17.9 mm, and L0 ≈ 15.5 m.

  2. Detection of the ice assertion on aircraft using empirical mode decomposition enhanced by multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Bagherzadeh, Seyed Amin; Asadi, Davood

    2017-05-01

    In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in the icing condition, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified by some benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in the icing condition in order to detect the ice assertion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.

  3. Drift from the Use of Hand-Held Knapsack Pesticide Sprayers in Boyacá (Colombian Andes).

    PubMed

    García-Santos, Glenda; Feola, Giuseppe; Nuyttens, David; Diaz, Jaime

    2016-05-25

    Offsite pesticide losses in tropical mountainous regions have been little studied. One example is measuring pesticide drift soil deposition, which can support pesticide risk assessment for surface water, soil, bystanders, and off-target plants and fauna. This is considered a serious gap, given the evidence of pesticide-related poisoning in those regions. Empirical data of drift deposition of a pesticide surrogate, Uranine tracer, within one of the highest potato-producing regions in Colombia, characterized by small plots and mountain orography, is presented. High drift values encountered in this study reflect the actual spray conditions using hand-held knapsack sprayers. Comparison between measured and predicted drift values using three existing empirical equations showed important underestimation. However, after their optimization based on measured drift information, the equations showed a strong predictive power for this study area and the study conditions. The most suitable curve to assess mean relative drift was the IMAG calculator after optimization.
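
    The re-fitting step described above, adjusting an empirical drift equation to measured deposition, amounts to a curve fit; the sketch below uses hypothetical data points and an assumed power-law form rather than the IMAG calculator or the study's measurements.

```python
"""Fit an assumed power-law drift-deposition curve to hypothetical measurements."""
import numpy as np
from scipy.optimize import curve_fit

distance = np.array([1.0, 2.0, 3.0, 5.0, 7.5, 10.0, 15.0, 20.0])   # m downwind (hypothetical)
drift_pct = np.array([18.0, 9.5, 6.3, 3.6, 2.4, 1.8, 1.1, 0.8])    # % of applied dose (hypothetical)

def power_curve(x, a, b):
    # assumed functional form: drift(d) = a * d^-b
    return a * x ** (-b)

(a, b), cov = curve_fit(power_curve, distance, drift_pct, p0=(20.0, 1.0))
pred = power_curve(distance, a, b)
rmse = np.sqrt(np.mean((pred - drift_pct) ** 2))
print(f"fitted drift(d) = {a:.2f} * d^-{b:.2f},  RMSE = {rmse:.2f} % of applied dose")
```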

  4. Untangling complex networks: risk minimization in financial markets through accessible spin glass ground states.

    PubMed

    Lisewski, Andreas Martin; Lichtarge, Olivier

    2010-08-15

    Recurrent international financial crises inflict significant damage on societies and stress the need for mechanisms or strategies to control risk and temper market uncertainties. Unfortunately, the complex network of market interactions often confounds rational approaches to optimize financial risks. Here we show that investors can overcome this complexity and globally minimize risk in portfolio models for any given expected return, provided the relative margin requirement remains below a critical, empirically measurable value. In practice, for markets with centrally regulated margin requirements, a rational stabilization strategy would be keeping margins small enough. This result follows from ground states of the random field spin glass Ising model that can be calculated exactly through convex optimization when relative spin coupling is limited by the norm of the network's Laplacian matrix. In that regime, this novel approach is robust to noise in empirical data and may also be broadly relevant to complex networks with frustrated interactions that are studied throughout scientific fields.

  5. Linking ecosystem services with state-and-transition models to evaluate rangeland management decisions

    NASA Astrophysics Data System (ADS)

    Lohani, S.; Heilman, P.; deSteiguer, J. E.; Guertin, D. P.; Wissler, C.; McClaran, M. P.

    2014-12-01

    Quantifying ecosystem services is a crucial topic for land management decision making. However, market prices are usually not able to capture all the ecosystem services and disservices. Ecosystem services from rangelands, which cover 70% of the world's land area, are even less well understood since knowledge of rangelands is limited. This study generated a management framework for rangelands that uses remote sensing to generate state and transition models (STMs) for a large area and a linear programming (LP) model that uses ecosystem services to evaluate natural and/or management induced transitions as described in the STM. The LP optimization model determines the best management plan for a plot of semi-arid land in the Empire Ranch in southeastern Arizona. The model allocated land among management activities (do nothing, grazing, fire, and brush removal) to optimize net benefits and determined the impact of monetizing environmental services and disservices on net benefits, acreage allocation and production output. The ecosystem services under study were forage production (AUM/ac/yr), sediment (lbs/ac/yr), water runoff (inches/yr), soil loss (lbs/ac/yr) and recreation (thousands of visitors/ac/yr). The optimization model was run for three different scenarios - private rancher, public rancher including environmental services and excluding disservices, and public rancher including both services and disservices. The net benefit was the highest for the public rancher excluding the disservices. A result from the study is a constrained optimization model that incorporates ecosystem services to analyze investments in conservation and management activities. Rangeland managers can use this model to understand and explain, not prescribe, the tradeoffs of management investments.
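
    A toy version of such a land-allocation linear program is shown below; the activity names match the record, but every coefficient is a hypothetical placeholder rather than Empire Ranch data.

```python
"""Toy land-allocation LP: maximize net benefit subject to acreage and a sediment cap."""
import numpy as np
from scipy.optimize import linprog

activities = ["do nothing", "grazing", "fire", "brush removal"]
net_benefit = np.array([5.0, 40.0, 12.0, 18.0])     # $/ac/yr incl. monetized (dis)services, hypothetical
sediment    = np.array([10.0, 80.0, 120.0, 60.0])   # lbs/ac/yr, hypothetical
total_acres = 10_000.0
sediment_cap = 500_000.0                            # lbs/yr allowed, hypothetical

# linprog minimizes, so negate benefits; constraints: acreage balance and sediment cap
res = linprog(c=-net_benefit,
              A_ub=[sediment], b_ub=[sediment_cap],
              A_eq=[[1.0, 1.0, 1.0, 1.0]], b_eq=[total_acres],
              bounds=[(0, None)] * 4, method="highs")

for name, acres in zip(activities, res.x):
    print(f"{name:14s} {acres:10.1f} ac")
print("net benefit: $", round(-res.fun, 2))
```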

  6. Econometrics of exhaustible resource supply: a theory and an application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epple, D.

    1983-01-01

    This report takes a major step toward developing a fruitful approach to empirical analysis of resource supply. It is the first empirical application of resource theory that has successfully integrated the effects of depletion of nonrenewable resources with the effects of uncertainty about future costs and prices on supply behavior. Thus, the model is a major improvement over traditional engineering-optimization models that assume complete certainty, and over traditional econometrics models that are only implicitly related to the theory of resource supply. The model is used to test hypotheses about interdependence of oil and natural gas discoveries, depletion, ultimate recovery, and the role of price expectations. This paper demonstrates the feasibility of using exhaustible resource theory in the development of empirically testable models. 19 refs., 1 fig., 5 tabs.

  7. Experimental study on thrust and power of flapping-wing system based on rack-pinion mechanism.

    PubMed

    Nguyen, Tuan Anh; Vu Phan, Hoang; Au, Thi Kim Loan; Park, Hoon Cheol

    2016-06-20

    This experimental study investigates the effect of three parameters: wing aspect ratio (AR), wing offset, and flapping frequency, on thrust generation and power consumption of a flapping-wing system based on a rack-pinion mechanism. The new flapping-wing system is simple but robust, and is able to create a large flapping amplitude. The thrust measured by a load cell reveals that for a given power, the flapping-wing system using a higher wing AR produces larger thrust and higher flapping frequency at the wing offset of 0.15c̄ or 0.20c̄ (c̄ is the mean chord) than other wing offsets. Of the three parameters, the flapping frequency plays a more significant role in thrust generation than either the wing AR or the wing offset. Based on the measured thrusts, an empirical equation for thrust prediction is suggested, as a function of wing area, flapping frequency, flapping angle, and wing AR. The difference between the predicted and measured thrusts was less than 7%, which proved that the empirical equation for thrust prediction is reasonable. On average, the measured power consumption to flap the wings shows that 46.5% of the input power is spent to produce aerodynamic forces, 14.0% to overcome inertia force, 9.5% to drive the rack-pinion-based flapping mechanism, and 30.0% is wasted as the power loss of the installed motor. From the power analysis, it is found that the wing with an AR of 2.25 using a wing offset of 0.20c̄ showed the optimal power loading in the flapping-wing system. In addition, the flapping frequency of 25 Hz is recommended as the optimal frequency of the current flapping-wing system for high efficiency, which was 48.3%, using a wing with an AR of 2.25 and a wing offset of 0.20c̄ in the proposed design.

  8. On the origin of the Bangui magnetic anomaly, Central African Empire

    NASA Technical Reports Server (NTRS)

    Marsh, B. D.

    1977-01-01

    A large magnetic anomaly was recognized in satellite magnetometer data over the Central African Empire in central Africa. It was named the Bangui magnetic anomaly due to its location near the capital city of Bangui, C.A.E. Because large crustal magnetic anomalies are uncommon, the origin of this anomaly has provoked some interest. The area of the anomaly was visited to make ground magnetic measurements, geologic observations, and in-situ magnetic susceptibility measurements. Some rock samples were also collected and chemically analyzed. The results of these investigations are presented.

  9. Improving scanner wafer alignment performance by target optimization

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Jehoul, Christiane; Socha, Robert; Menchtchikov, Boris; Raghunathan, Sudhar; Kent, Eric; Schoonewelle, Hielke; Tinnemans, Patrick; Tuffy, Paul; Belen, Jun; Wise, Rich

    2016-03-01

    In the process nodes of 10nm and below, the patterning complexity along with the processing and materials required has resulted in a need to optimize alignment targets in order to achieve the required precision, accuracy and throughput performance. Recent industry publications on the metrology target optimization process have shown a move from the expensive and time consuming empirical methodologies, towards a faster computational approach. ASML's Design for Control (D4C) application, which is currently used to optimize YieldStar diffraction based overlay (DBO) metrology targets, has been extended to support the optimization of scanner wafer alignment targets. This allows the necessary process information and design methodology, used for DBO target designs, to be leveraged for the optimization of alignment targets. In this paper, we show how we applied this computational approach to wafer alignment target design. We verify the correlation between predictions and measurements for the key alignment performance metrics and finally show the potential alignment and overlay performance improvements that an optimized alignment target could achieve.

  10. Utilization of Titanium Particle Impact Location to Validate a 3D Multicomponent Model for Cold Spray Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Faizan-Ur-Rab, M.; Zahiri, S. H.; King, P. C.; Busch, C.; Masood, S. H.; Jahedi, M.; Nagarajah, R.; Gulizia, S.

    2017-12-01

    Cold spray is a solid-state rapid deposition technology in which metal powder is accelerated to supersonic speeds within a de Laval nozzle and then impacts onto the surface of a substrate. It is possible for cold spray to build thick structures, thus providing an opportunity for melt-less additive manufacturing. Image analysis of particle impact location and focused ion beam dissection of individual particles were utilized to validate a 3D multicomponent model of cold spray. Impact locations obtained using the 3D model were found to be in close agreement with the empirical data. Moreover, the 3D model revealed the particles' velocity and temperature just before impact—parameters which are paramount for developing a full understanding of the deposition process. Further, it was found that the temperature and velocity variations in large-size particles before impact were far less than for the small-size particles. Therefore, an optimal particle temperature and velocity were identified, which gave the highest deformation after impact. The trajectory of the particles from the injection point to the moment of deposition in relation to propellant gas is visualized. This detailed information is expected to assist with the optimization of the deposition process, contributing to improved mechanical properties for additively manufactured cold spray titanium parts.

  11. Adaptive Batch Mode Active Learning.

    PubMed

    Chakraborty, Shayok; Balasubramanian, Vineeth; Panchanathan, Sethuraman

    2015-08-01

    Active learning techniques have gained popularity to reduce human effort in labeling data instances for inducing a classifier. When faced with large amounts of unlabeled data, such algorithms automatically identify the exemplar and representative instances to be selected for manual annotation. More recently, there have been attempts toward a batch mode form of active learning, where a batch of data points is simultaneously selected from an unlabeled set. Real-world applications require adaptive approaches for batch selection in active learning, depending on the complexity of the data stream in question. However, the existing work in this field has primarily focused on static or heuristic batch size selection. In this paper, we propose two novel optimization-based frameworks for adaptive batch mode active learning (BMAL), where the batch size as well as the selection criteria are combined in a single formulation. We exploit gradient-descent-based optimization strategies as well as properties of submodular functions to derive the adaptive BMAL algorithms. The solution procedures have the same computational complexity as existing state-of-the-art static BMAL techniques. Our empirical results on the widely used VidTIMIT and the mobile biometric (MOBIO) data sets portray the efficacy of the proposed frameworks and also certify the potential of these approaches in being used for real-world biometric recognition applications.
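
    A generic batch-selection loop of this kind, trading off model uncertainty against batch diversity, can be sketched as follows; this is a plain greedy heuristic on synthetic data, not the adaptive optimization or submodular formulations proposed in the record.

```python
"""Greedy batch-mode active learning sketch on synthetic data (illustrative heuristic)."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
labeled = list(rng.choice(len(X), 20, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

def select_batch(model, batch_size=10, trade_off=0.5):
    proba = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)                 # least-confidence score
    chosen = []                                           # positions within `unlabeled`
    for _ in range(batch_size):
        if chosen:
            picked = X[[unlabeled[c] for c in chosen]]
            diversity = pairwise_distances(X[unlabeled], picked).min(axis=1)
        else:
            diversity = np.ones(len(unlabeled))
        score = trade_off * uncertainty + (1 - trade_off) * diversity / diversity.max()
        score[chosen] = -np.inf                           # never pick the same point twice
        chosen.append(int(np.argmax(score)))
    return [unlabeled[c] for c in chosen]

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    batch = select_batch(model)
    labeled += batch
    unlabeled = [i for i in unlabeled if i not in batch]
    # accuracy on the full pool is reported only for illustration
    print(f"round {round_}: {len(labeled)} labels, accuracy {model.score(X, y):.3f}")
```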

  12. A microhydrodynamic rationale for selection of bead size in preparation of drug nanosuspensions via wet stirred media milling.

    PubMed

    Li, Meng; Alvarez, Paulina; Bilgili, Ecevit

    2017-05-30

    Although wet stirred media milling has proven to be a robust process for producing nanoparticle suspensions of poorly water-soluble drugs and thereby enhancing their bioavailability, selection of bead size has been largely empirical, lacking fundamental rationale. This study aims to establish such rationale by investigating the impact of bead size at various stirrer speeds on the drug breakage kinetics via a microhydrodynamic model. To this end, stable suspensions of griseofulvin, a model BCS Class II drug, were prepared using hydroxypropyl cellulose and sodium dodecyl sulfate. The suspensions were milled at four different stirrer speeds (1000-4000rpm) using various sizes (50-1500μm) of zirconia beads. Laser diffraction, SEM, and XRPD were used for characterization. Our results suggest that there is an optimal bead size that achieves fastest breakage at each stirrer speed and that it shifts to a smaller size at higher speed. Calculated microhydrodynamic parameters reveal two counteracting effects of bead size: more bead-bead collisions with less energy/force upon a decrease in bead size. The optimal bead size exhibits a negative power-law correlation with either specific energy consumption or the microhydrodynamic parameters. Overall, this study rationalizes the use of smaller beads for more energetic wet media milling. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Exact Algorithms for Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.

    PubMed

    Kordi, Misagh; Bansal, Mukul S

    2017-06-01

    Duplication-Transfer-Loss (DTL) reconciliation is a powerful method for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation seeks to reconcile gene trees with species trees by postulating speciation, duplication, transfer, and loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. In practice, however, gene trees are often non-binary due to uncertainty in the gene tree topologies, and DTL reconciliation with non-binary gene trees is known to be NP-hard. In this paper, we present the first exact algorithms for DTL reconciliation with non-binary gene trees. Specifically, we (i) show that the DTL reconciliation problem for non-binary gene trees is fixed-parameter tractable in the maximum degree of the gene tree, (ii) present an exponential-time, but in-practice efficient, algorithm to track and enumerate all optimal binary resolutions of a non-binary input gene tree, and (iii) apply our algorithms to a large empirical data set of over 4700 gene trees from 100 species to study the impact of gene tree uncertainty on DTL-reconciliation and to demonstrate the applicability and utility of our algorithms. The new techniques and algorithms introduced in this paper will help biologists avoid incorrect evolutionary inferences caused by gene tree uncertainty.

  14. Robustness of norm-driven cooperation in the commons

    PubMed Central

    2016-01-01

    Sustainable use of common-pool resources such as fish, water or forests depends on the cooperation of resource users that restrain their individual extraction to socially optimal levels. Empirical evidence has shown that under certain social and biophysical conditions, self-organized cooperation in the commons can evolve. Global change, however, may drastically alter these conditions. We assess the robustness of cooperation to environmental variability in a stylized model of a community that harvests a shared resource. Community members follow a norm of socially optimal resource extraction, which is enforced through social sanctioning. Our results indicate that both resource abundance and a small increase in resource variability can lead to collapse of cooperation observed in the no-variability case, while either scarcity or large variability have the potential to stabilize it. The combined effects of changes in amount and variability can reinforce or counteract each other depending on their size and the initial level of cooperation in the community. If two socially separate groups are ecologically connected through resource leakage, cooperation in one can destabilize the other. These findings provide insights into possible effects of global change and spatial connectivity, indicating that there is no simple answer as to their effects on cooperation and sustainable resource use. PMID:26740611

  15. Ab initio conformational analysis of N-formyl L-alanine amide including electron correlation

    NASA Astrophysics Data System (ADS)

    Yu, Ching-Hsing; Norman, Mya A.; Schäfer, Lothar; Ramek, Michael; Peeters, Anik; van Alsenoy, Christian

    2001-06-01

    The conformational properties of N-formyl L-alanine amide (ALA) were investigated using RMP2/6-311G∗∗ ab initio gradient geometry optimization. One hundred forty-four structures of ALA were optimized at 30° grid points in its φ(N-C(α)), ψ(C(α)-C′) conformational space. Using cubic spline functions, the grid structures were then used to construct analytical representations of complete surfaces, in φ,ψ-space, of bond lengths, bond angles, torsional sensitivity and electrostatic atomic charges. Analyses show that, in agreement with previous studies, the right-handed helical conformation, αR, is not a local energy minimum of the potential energy surface of ALA. Comparisons with protein crystallographic data show that the characteristic differences between geometrical trends in dipeptides and proteins, previously found for ab initio dipeptide structures obtained without electron correlation, are also found in the electron-correlated geometries. In contrast to generally accepted features of force fields used in empirical molecular modeling, partial atomic charges obtained by the CHELPG method are found to be not constant, but to vary significantly throughout the φ,ψ-space. By comparing RHF and MP2 structures, the effects of dispersion forces on ALA were studied, revealing molecular contractions for those conformations in which small adjustments of torsional angles entail large changes in non-bonded distances.

  16. Minimizing Spatial Variability of Healthcare Spatial Accessibility-The Case of a Dengue Fever Outbreak.

    PubMed

    Chu, Hone-Jay; Lin, Bo-Cheng; Yu, Ming-Run; Chan, Ta-Chien

    2016-12-13

    Outbreaks of infectious diseases or multi-casualty incidents have the potential to generate a large number of patients. It is a challenge for the healthcare system when demand for care suddenly surges. Traditionally, valuation of health care spatial accessibility was based on static supply and demand information. In this study, we proposed an optimal model with the three-step floating catchment area (3SFCA) to account for the supply to minimize variability in spatial accessibility. We used empirical dengue fever outbreak data in Tainan City, Taiwan in 2015 to demonstrate the dynamic change in spatial accessibility based on the epidemic trend. The x and y coordinates of dengue-infected patients with precision loss were provided publicly by the Tainan City government, and were used as our model's demand. The spatial accessibility of health care during the dengue outbreak from August to October 2015 was analyzed spatially and temporally by producing accessibility maps, and conducting capacity change analysis. This study also utilized the particle swarm optimization (PSO) model to decrease the spatial variation in accessibility and shortage areas of healthcare resources as the epidemic went on.
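
    The reallocation idea can be illustrated with a generic particle swarm optimizer that evens out a simple supply/demand ratio across districts; the demand numbers below are hypothetical and the plain ratio stands in for the 3SFCA accessibility score.

```python
"""Toy PSO that reallocates a fixed health-care capacity to minimize accessibility variance."""
import numpy as np

rng = np.random.default_rng(4)
demand = np.array([120.0, 400.0, 80.0, 650.0, 250.0, 300.0])   # cases per district (hypothetical)
total_supply = 180.0                                            # physicians available (hypothetical)

def variability(shares):
    supply = total_supply * shares / shares.sum()   # normalize shares to the fixed budget
    return np.var(supply / demand)                  # spatial variability of accessibility

# standard global-best PSO over the (positive) share vector
n_particles, dim = 40, len(demand)
pos = rng.random((n_particles, dim)) + 0.01
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([variability(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for it in range(300):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.01, None)
    val = np.array([variability(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

alloc = total_supply * gbest / gbest.sum()
print("optimized allocation:", np.round(alloc, 1))
print("supply/demand ratio :", np.round(alloc / demand, 3))
```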

  17. High-throughput crystallization screening.

    PubMed

    Skarina, Tatiana; Xu, Xiaohui; Evdokimova, Elena; Savchenko, Alexei

    2014-01-01

    Protein structure determination by X-ray crystallography is dependent on obtaining a single protein crystal suitable for diffraction data collection. Due to this requirement, protein crystallization represents a key step in protein structure determination. The conditions for protein crystallization have to be determined empirically for each protein, making this step also a bottleneck in the structure determination process. Typical protein crystallization practice involves parallel setup and monitoring of a considerable number of individual protein crystallization experiments (also called crystallization trials). In these trials the aliquots of purified protein are mixed with a range of solutions composed of a precipitating agent, buffer, and sometimes an additive that have been previously successful in prompting protein crystallization. The individual chemical conditions in which a particular protein shows signs of crystallization are used as a starting point for further crystallization experiments. The goal is optimizing the formation of individual protein crystals of sufficient size and quality to make them suitable for diffraction data collection. Thus the composition of the primary crystallization screen is critical for successful crystallization.Systematic analysis of crystallization experiments carried out on several hundred proteins as part of large-scale structural genomics efforts allowed the optimization of the protein crystallization protocol and identification of a minimal set of 96 crystallization solutions (the "TRAP" screen) that, in our experience, led to crystallization of the maximum number of proteins.

  18. Prediction and Optimization of Phase Transformation Region After Spot Continual Induction Hardening Process Using Response Surface Method

    NASA Astrophysics Data System (ADS)

    Qin, Xunpeng; Gao, Kai; Zhu, Zhenhua; Chen, Xuliang; Wang, Zhou

    2017-09-01

    The spot continual induction hardening (SCIH) process, which is a modified induction hardening, can be assembled to a five-axis cooperating computer numerical control machine tool to strengthen more than one small area or relatively large area on complicated component surface. In this study, a response surface method was presented to optimize phase transformation region after the SCIH process. The effects of five process parameters including feed velocity, input power, gap, curvature and flow rate on temperature, microstructure, microhardness and phase transformation geometry were investigated. Central composition design, a second-order response surface design, was employed to systematically estimate the empirical models of temperature and phase transformation geometry. The analysis results indicated that feed velocity has a dominant effect on the uniformity of microstructure and microhardness, domain size, oxidized track width, phase transformation width and height in the SCIH process while curvature has the largest effect on center temperature in the design space. The optimum operating conditions with 0.817, 0.845 and 0.773 of desirability values are expected to be able to minimize ratio (tempering region) and maximize phase transformation width for concave, flat and convex surface workpieces, respectively. The verification result indicated that the process parameters obtained by the model were reliable.
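
    The response-surface step, fitting a second-order polynomial to designed experiments and locating its optimum, is sketched below for just two factors with synthetic responses, so the numbers bear no relation to the SCIH study.

```python
"""Reduced response-surface illustration: second-order fit to a small design, then optimize."""
import numpy as np
from itertools import product
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def true_response(v, p):
    # hidden toy process standing in for the experiments (purely illustrative)
    return 4.0 - 0.15 * (v - 6.0) ** 2 - 0.01 * (p - 30.0) ** 2 + 0.02 * v * p

# three-level design in coded units, scaled to assumed physical ranges
coded = np.array(list(product([-1, 0, 1], repeat=2)), dtype=float)
v = 6.0 + 2.0 * coded[:, 0]                    # feed velocity (mm/s), assumed range
p = 30.0 + 10.0 * coded[:, 1]                  # input power (kW), assumed range
y = true_response(v, p) + rng.normal(0, 0.05, len(v))

# design matrix for y = b0 + b1*v + b2*p + b3*v^2 + b4*p^2 + b5*v*p
X = np.column_stack([np.ones_like(v), v, p, v**2, p**2, v * p])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predicted(x):
    vv, pp = x
    return beta @ np.array([1.0, vv, pp, vv**2, pp**2, vv * pp])

opt = minimize(lambda x: -predicted(x), x0=[6.0, 30.0],
               bounds=[(4.0, 8.0), (20.0, 40.0)])
print("fitted optimum (velocity, power):", np.round(opt.x, 2),
      " predicted response:", round(float(predicted(opt.x)), 3))
```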

  19. Monopoly models with time-varying demand function

    NASA Astrophysics Data System (ADS)

    Cavalli, Fausto; Naimzada, Ahmad

    2018-05-01

    We study a family of monopoly models for markets characterized by time-varying demand functions, in which a boundedly rational agent chooses output levels on the basis of a gradient adjustment mechanism. After presenting the model for a generic framework, we analytically study the case of cyclically alternating demand functions. We show that both the perturbation size and the agent's reactivity to profitability variation signals can have counterintuitive roles on the resulting period-2 cycles and on their stability. In particular, increasing the perturbation size can have both a destabilizing and a stabilizing effect on the resulting dynamics. Moreover, in contrast with the case of time-constant demand functions, the agent's reactivity is not just destabilizing, but can improve stability, too. This means that a less cautious behavior can provide better performance, both with respect to stability and to achieved profits. We show that, even if the decision mechanism is very simple and is not able to always provide the optimal production decisions, achieved profits are very close to those optimal. Finally, we show that in agreement with the existing empirical literature, the price series obtained simulating the proposed model exhibit a significant deviation from normality and large volatility, in particular when underlying deterministic dynamics become unstable and complex.
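
    The gradient adjustment mechanism with a period-2 demand intercept can be written in a few lines; the linear inverse demand, constant marginal cost, and parameter values below are illustrative choices made so the toy dynamics settle on a cycle, not the record's calibration.

```python
"""Toy gradient-adjustment monopoly dynamics with a cyclically alternating demand intercept."""
import numpy as np

b, c, k = 1.0, 1.0, 0.25          # demand slope, marginal cost, agent reactivity (assumed)
a_cycle = (5.0, 7.0)              # period-2 alternating demand intercepts (assumed)

def simulate(T=300, q0=1.0):
    q, path = q0, []
    for t in range(T):
        a = a_cycle[t % 2]
        marginal_profit = a - 2 * b * q - c          # derivative of (a - b*q)*q - c*q
        q = max(q + k * q * marginal_profit, 1e-6)   # gradient adjustment rule
        path.append(q)
    return np.array(path)

path = simulate()
print("settled period-2 outputs :", np.round(path[-2:], 3))
print("myopic optima (a - c)/2b :", [(a - c) / (2 * b) for a in a_cycle])
```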

  20. Hybrid Power Management Program Evaluated Ultracapacitors for the Next Generation Launch Transportation Project

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.

    2005-01-01

    The NASA Glenn Research Center initiated baseline testing of ultracapacitors to obtain empirical data in determining the feasibility of using ultracapacitors for the Next Generation Launch Transportation (NGLT) Project. There are large transient loads associated with NGLT that require a very large primary energy source or an energy storage system. The primary power source used for this test was a proton-exchange-membrane (PEM) fuel cell. The energy storage system can consist of batteries, flywheels, or ultracapacitors. Ultracapacitors were used for these tests. NASA Glenn has a wealth of experience in ultracapacitor technology through the Hybrid Power Management (HPM) Program, which the Avionics, Power and Communications Branch of Glenn's Engineering Development Division initiated for the Technology Transfer and Partnership Office. HPM is the innovative integration of diverse, state-of-the-art power devices in optimal configurations for space and terrestrial applications. The appropriate application and control of the various advanced power devices (such as ultracapacitors and fuel cells) significantly improves overall system performance and efficiency. HPM has extremely wide potential. Applications include power generation, transportation systems, biotechnology systems, and space power systems. HPM has the potential to significantly alleviate global energy concerns, improve the environment, and stimulate the economy.

  1. Using spatiotemporal source separation to identify prominent features in multichannel data without sinusoidal filters.

    PubMed

    Cohen, Michael X

    2017-09-27

    The number of simultaneously recorded electrodes in neuroscience is steadily increasing, providing new opportunities for understanding brain function, but also new challenges for appropriately dealing with the increase in dimensionality. Multivariate source separation analysis methods have been particularly effective at improving signal-to-noise ratio while reducing the dimensionality of the data and are widely used for cleaning, classifying and source-localizing multichannel neural time series data. Most source separation methods produce a spatial component (that is, a weighted combination of channels to produce one time series); here, this is extended to apply source separation to a time series, with the idea of obtaining a weighted combination of successive time points, such that the weights are optimized to satisfy some criteria. This is achieved via a two-stage source separation procedure, in which an optimal spatial filter is first constructed and then its optimal temporal basis function is computed. This second stage is achieved with a time-delay-embedding matrix, in which additional rows of a matrix are created from time-delayed versions of existing rows. The optimal spatial and temporal weights can be obtained by solving a generalized eigendecomposition of covariance matrices. The method is demonstrated in simulated data and in an empirical electroencephalogram study on theta-band activity during response conflict. Spatiotemporal source separation has several advantages, including defining empirical filters without the need to apply sinusoidal narrowband filters. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
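
    The two-stage procedure, a spatial filter from a generalized eigendecomposition of covariance matrices followed by a temporal filter from a time-delay-embedded covariance, can be sketched on synthetic data as below; the simulated signals and window choices are assumptions, not the published analysis code.

```python
"""Two-stage spatiotemporal source separation sketch on synthetic multichannel data."""
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
fs, n_ch, n_t = 250, 16, 5000
t = np.arange(n_t) / fs

# synthetic data: a 6 Hz source active in the second half, mixed into 16 noisy channels
source = np.sin(2 * np.pi * 6 * t) * (t > t[n_t // 2])
mixing = rng.normal(size=n_ch)
data = np.outer(mixing, source) + rng.normal(scale=1.0, size=(n_ch, n_t))

def cov(x):
    x = x - x.mean(axis=1, keepdims=True)
    return x @ x.T / x.shape[1]

# stage 1: spatial filter from a generalized eigendecomposition of "signal" vs "reference" covariances
S = cov(data[:, n_t // 2:])          # window where the source is active
R = cov(data)                        # whole recording as reference
evals, evecs = eigh(S, R)
w_spatial = evecs[:, -1]             # eigenvector with the largest generalized eigenvalue
component = w_spatial @ data         # one spatially filtered time series

# stage 2: temporal filter from a time-delay embedding of that component
n_delay = 40
emb = np.array([component[i:n_t - n_delay + i] for i in range(n_delay)])
S2 = cov(emb[:, emb.shape[1] // 2:])
R2 = cov(emb)
evals2, evecs2 = eigh(S2, R2)
w_temporal = evecs2[:, -1]           # optimal weighting of successive time points

filtered = np.convolve(component, w_temporal[::-1], mode="same")  # temporally filtered component
pattern = R @ w_spatial              # forward-model pattern corresponding to the spatial filter
print("pattern vs true mixing |r| =", round(abs(np.corrcoef(pattern, mixing)[0, 1]), 3))
```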

  2. The U.S. Earthquake Prediction Program

    USGS Publications Warehouse

    Wesson, R.L.; Filson, J.R.

    1981-01-01

    There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.

  3. What if Learning Analytics Were Based on Learning Science?

    ERIC Educational Resources Information Center

    Marzouk, Zahia; Rakovic, Mladen; Liaqat, Amna; Vytasek, Jovita; Samadi, Donya; Stewart-Alonso, Jason; Ram, Ilana; Woloshen, Sonya; Winne, Philip H.; Nesbit, John C.

    2016-01-01

    Learning analytics are often formatted as visualisations developed from traced data collected as students study in online learning environments. Optimal analytics inform and motivate students' decisions about adaptations that improve their learning. We observe that designs for learning often neglect theories and empirical findings in learning…

  4. "Does Degree of Asymmetry Relate to Performance?" A Critical Review

    ERIC Educational Resources Information Center

    Boles, David B.; Barth, Joan M.

    2011-01-01

    In a recent paper, Chiarello, Welcome, Halderman, and Leonard (2009) reported positive correlations between word-related visual field asymmetries and reading performance. They argued that strong word processing lateralization represents a more optimal brain organization for reading acquisition. Their empirical results contrasted sharply with those…

  5. Economic analysis of a Japanese air pollution regulation : an optimal retirement problem under vehicle type regulation in the NOx-particulate matter law

    DOT National Transportation Integrated Search

    2008-06-01

    This paper empirically examines the vehicle type regulation that was introduced under the Automobile Nitrogen Oxides-Particulate Matter Law to mitigate air pollution problems in Japanese metropolitan areas. The vehicle type regulation effectively...

  6. Identification of potential compensatory muscle strategies in a breast cancer survivor population: A combined computational and experimental approach.

    PubMed

    Chopp-Hurley, Jaclyn N; Brookham, Rebecca L; Dickerson, Clark R

    2016-12-01

    Biomechanical models are often used to estimate the muscular demands of various activities. However, specific muscle dysfunctions typical of unique clinical populations are rarely considered. Due to iatrogenic tissue damage, pectoralis major capability is markedly reduced in breast cancer survivors, which could influence arm internal and external rotation muscular strategies. Accordingly, an optimization-based muscle force prediction model was systematically modified to emulate breast cancer survivors by adjusting pectoralis capability and enforcing an empirical muscular co-activation relationship. Model permutations were evaluated through comparisons between predicted muscle forces and empirically measured muscle activations in survivors. Similarities between empirical data and model outputs were influenced by muscle type, hand force, pectoralis major capability and co-activation constraints. Differences in magnitude were lower when the co-activation constraint was enforced (-18.4% [31.9]) than unenforced (-23.5% [27.6]) (p<0.0001). This research demonstrates that muscle dysfunction in breast cancer survivors can be reflected by including a capability constraint for pectoralis major. Further refinement of the co-activation constraint for survivors could improve its generalizability across this population and activities. Improving biomechanical models to more accurately represent clinical populations can provide novel information that can help in the development of optimal treatment programs for breast cancer survivors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can be different, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make this information public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of user's head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.

  8. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can be different, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make this information public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of user’s head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  9. Temperature impacts on economic growth warrant stringent mitigation policy

    NASA Astrophysics Data System (ADS)

    Moore, Frances C.; Diaz, Delavane B.

    2015-02-01

    Integrated assessment models compare the costs of greenhouse gas mitigation with damages from climate change to evaluate the social welfare implications of climate policy proposals and inform optimal emissions reduction trajectories. However, these models have been criticized for lacking a strong empirical basis for their damage functions, which do little to alter assumptions of sustained gross domestic product (GDP) growth, even under extreme temperature scenarios. We implement empirical estimates of temperature effects on GDP growth rates in the DICE model through two pathways, total factor productivity growth and capital depreciation. This damage specification, even under optimistic adaptation assumptions, substantially slows GDP growth in poor regions but has more modest effects in rich countries. Optimal climate policy in this model stabilizes global temperature change below 2 °C by eliminating emissions in the near future and implies a social cost of carbon several times larger than previous estimates. A sensitivity analysis shows that the magnitude of climate change impacts on economic growth, the rate of adaptation, and the dynamic interaction between damages and GDP are three critical uncertainties requiring further research. In particular, optimal mitigation rates are much lower if countries become less sensitive to climate change impacts as they develop, making this a major source of uncertainty and an important subject for future research.

  10. Comparison of multiobjective evolutionary algorithms: empirical results.

    PubMed

    Zitzler, E; Deb, K; Thiele, L

    2000-01-01

    In this paper, we provide a systematic comparison of various evolutionary approaches to multiobjective optimization using six carefully chosen test functions. Each test function involves a particular feature that is known to cause difficulty in the evolutionary optimization process, mainly in converging to the Pareto-optimal front (e.g., multimodality and deception). By investigating these different problem features separately, it is possible to predict the kind of problems to which a certain technique is or is not well suited. However, in contrast to what was suspected beforehand, the experimental results indicate a hierarchy of the algorithms under consideration. Furthermore, the emerging effects are evidence that the suggested test functions provide sufficient complexity to compare multiobjective optimizers. Finally, elitism is shown to be an important factor for improving evolutionary multiobjective search.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Zhenhong; Liu, Changzheng; Yin, Yafeng

    The objective of this study is to evaluate the opportunity for public charging for a subset of US cities by using available public parking lot data. The capacity of the parking lots weighted by the daily parking occupancy rate is used as a proxy for daily parking demand. The city's public charging opportunity is defined as the percentage of parking demand covered by chargers on the off-street parking network. We assess this opportunity under the scenario of optimal deployment of public chargers. We use the maximum coverage model to optimally locate those facilities on the public garage network. We compare the optimal results to the actual placement of chargers. These empirical findings are of great interest to policymakers as they showcase the potential of increasing opportunities for charging under optimal charging location planning.
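
    The maximum coverage model mentioned above is typically posed as an integer program; a standard greedy heuristic conveys the idea of selecting charger sites that cover the most weighted parking demand. The lot names, demands, and coverage sets below are invented placeholders, and the greedy rule is a generic approximation rather than the authors' solver.

      # Greedy sketch of the maximum coverage problem: pick k sites whose chargers cover
      # the most weighted parking demand. Lot names and weights are made up for illustration.
      def greedy_max_coverage(demand_by_lot, coverage, k):
          """demand_by_lot: {lot: weighted daily demand}; coverage: {site: set of lots}."""
          chosen, covered = [], set()
          for _ in range(k):
              site = max(coverage, key=lambda s: sum(demand_by_lot[l]
                                                     for l in coverage[s] - covered))
              chosen.append(site)
              covered |= coverage[site]
          return chosen, sum(demand_by_lot[l] for l in covered)

      demand = {"A": 120, "B": 80, "C": 200, "D": 60}
      cover = {"s1": {"A", "B"}, "s2": {"C"}, "s3": {"C", "D"}}
      print(greedy_max_coverage(demand, cover, k=2))   # -> (['s3', 's1'], 460)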

  12. Optimal landing of a helicopter in autorotation

    NASA Technical Reports Server (NTRS)

    Lee, A. Y. N.

    1985-01-01

    Gliding descent in autorotation is a maneuver used by helicopter pilots in case of engine failure. The landing of a helicopter in autorotation is formulated as a nonlinear optimal control problem. The OH-58A helicopter was used. Helicopter vertical and horizontal velocities, vertical and horizontal displacement, and the rotor angular speed were modeled. An empirical approximation for the induced velocity in the vortex-ring state was provided. The cost function of the optimal control problem is a weighted sum of the squared horizontal and vertical components of the helicopter velocity at touchdown. Optimal trajectories are calculated for entry conditions well within the horizontal-vertical restriction curve, with the helicopter initially in hover or forward flight. The resultant two-point boundary value problem with path equality constraints was successfully solved using the Sequential Gradient Restoration Technique.

  13. Comparing a Coevolutionary Genetic Algorithm for Multiobjective Optimization

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Kraus, William F.; Haith, Gary L.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present results from a study comparing a recently developed coevolutionary genetic algorithm (CGA) against a set of evolutionary algorithms using a suite of multiobjective optimization benchmarks. The CGA embodies competitive coevolution and employs a simple, straightforward target population representation and fitness calculation based on developmental theory of learning. Because of these properties, setting up the additional population is trivial making implementation no more difficult than using a standard GA. Empirical results using a suite of two-objective test functions indicate that this CGA performs well at finding solutions on convex, nonconvex, discrete, and deceptive Pareto-optimal fronts, while giving respectable results on a nonuniform optimization. On a multimodal Pareto front, the CGA finds a solution that dominates solutions produced by eight other algorithms, yet the CGA has poor coverage across the Pareto front.

  14. Explaining the Effects of Communities of Pastoral Care for Students

    ERIC Educational Resources Information Center

    Murphy, Joseph; Holste, Linda

    2016-01-01

    This article explains how communities of pastoral care work. It presents an empirically forged theory in action. We examined theoretical and empirical work across the targeted area of personalization for students. We also completed what Hallinger (2012) refers to as "exhaustive review" of the field of school improvement writ large. We…

  15. Systematic Design of a Learning Environment for Domain-Specific and Domain-General Critical Thinking Skills

    ERIC Educational Resources Information Center

    Tiruneh, Dawit Tibebu; Weldeslassie, Ataklti G.; Kassa, Abrham; Tefera, Zinaye; De Cock, Mieke; Elen, Jan

    2016-01-01

    Identifying effective instructional approaches that stimulate students' critical thinking (CT) has been the focus of a large body of empirical research. However, there is little agreement on the instructional principles and procedures that are theoretically sound and empirically valid to developing both domain-specific and domain-general CT…

  16. Effects of temperature on consumer-resource interactions.

    PubMed

    Amarasekare, Priyanga

    2015-05-01

    Understanding how temperature variation influences the negative (e.g. self-limitation) and positive (e.g. saturating functional responses) feedback processes that characterize consumer-resource interactions is an important research priority. Previous work on this topic has yielded conflicting outcomes with some studies predicting that warming should increase consumer-resource oscillations and others predicting that warming should decrease consumer-resource oscillations. Here, I develop a consumer-resource model that both synthesizes previous findings in a common framework and yields novel insights about temperature effects on consumer-resource dynamics. I report three key findings. First, when the resource species' birth rate exhibits a unimodal temperature response, as demonstrated by a large number of empirical studies, the temperature range over which the consumer-resource interaction can persist is determined by the lower and upper temperature limits to the resource species' reproduction. This contrasts with the predictions of previous studies, which assume that the birth rate exhibits a monotonic temperature response, that consumer extinction is determined by temperature effects on consumer species' traits, rather than the resource species' traits. Secondly, the comparative analysis I have conducted shows that whether warming leads to an increase or decrease in consumer-resource oscillations depends on the manner in which temperature affects intraspecific competition. When the strength of self-limitation increases monotonically with temperature, warming causes a decrease in consumer-resource oscillations. However, if self-limitation is strongest at temperatures physiologically optimal for reproduction, a scenario previously unanalysed by theory but amply substantiated by empirical data, warming can cause an increase in consumer-resource oscillations. Thirdly, the model yields testable comparative predictions about consumer-resource dynamics under alternative hypotheses for how temperature affects competitive and resource acquisition traits. Importantly, it does so through empirically quantifiable metrics for predicting temperature effects on consumer viability and consumer-resource oscillations, which obviates the need for parameterizing complex dynamical models. Tests of these metrics with empirical data on a host-parasitoid interaction yield realistic estimates of temperature limits for consumer persistence and the propensity for consumer-resource oscillations, highlighting their utility in predicting temperature effects, particularly warming, on consumer-resource interactions in both natural and agricultural settings. © 2014 The Author. Journal of Animal Ecology © 2014 British Ecological Society.
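
    A generic consumer-resource sketch in the spirit of the model described above, with a unimodal (Gaussian) temperature response for the resource birth rate, a temperature-dependent self-limitation term, and a saturating functional response; the functional forms and parameter values are illustrative assumptions, not the paper's parameterization.

      # Generic Rosenzweig-MacArthur-style consumer-resource sketch with a unimodal
      # temperature response for the resource birth rate and a temperature-dependent
      # self-limitation term (all forms and numbers are illustrative assumptions).
      import numpy as np
      from scipy.integrate import solve_ivp

      def rates(T_C, b_max=2.0, T_opt=25.0, sigma=5.0, q0=0.02, Ea=0.06):
          birth = b_max * np.exp(-((T_C - T_opt) ** 2) / (2 * sigma ** 2))  # unimodal
          self_lim = q0 * np.exp(Ea * (T_C - T_opt))                        # monotonic
          return birth, self_lim

      def model(t, y, T_C, a=0.3, e=0.5, m=0.4, h=0.5):
          R, C = y
          b, q = rates(T_C)
          f = a * R / (1 + a * h * R)                   # saturating functional response
          return [b * R - q * R ** 2 - f * C, e * f * C - m * C]

      sol = solve_ivp(model, (0, 300), [5.0, 1.0], args=(28.0,))
      print(sol.y[:, -1])                               # final resource/consumer densities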

  17. Strong motions observed by K-NET and KiK-net during the 2016 Kumamoto earthquake sequence

    NASA Astrophysics Data System (ADS)

    Suzuki, Wataru; Aoi, Shin; Kunugi, Takashi; Kubo, Hisahiko; Morikawa, Nobuyuki; Nakamura, Hiromitsu; Kimura, Takeshi; Fujiwara, Hiroyuki

    2017-01-01

    The nationwide strong-motion seismograph network of K-NET and KiK-net in Japan successfully recorded the strong ground motions of the 2016 Kumamoto earthquake sequence, which show several notable characteristics. For the first large earthquake with a JMA magnitude of 6.5 (21:26, April 14, 2016, JST), the large strong motions are concentrated near the epicenter and the strong-motion attenuations are well predicted by the empirical relation for crustal earthquakes with a moment magnitude of 6.1. For the largest earthquake of the sequence with a JMA magnitude of 7.3 (01:25, April 16, 2016, JST), the large peak ground accelerations and velocities extend from the epicentral area to the northeast direction. The attenuation feature of peak ground accelerations generally follows the empirical relation, whereas that for velocities deviates from the empirical relation for stations with the epicentral distance of greater than 200 km, which can be attributed to the large Love wave having a dominant period around 10 s. The large accelerations were observed at stations even in Oita region, more than 70 km northeast from the epicenter. They are attributed to the local induced earthquake in Oita region, whose moment magnitude is estimated to be 5.5 by matching the amplitudes of the corresponding phases with the empirical attenuation relation. The real-time strong-motion observation has a potential for contributing to the mitigation of the ongoing earthquake disasters. We test a methodology to forecast the regions to be exposed to the large shaking in real time, which has been developed based on the fact that the neighboring stations are already shaken, for the largest event of the Kumamoto earthquakes, and demonstrate that it is simple but effective for quickly issuing warnings. We also show that the interpolation of the strong motions in real time is feasible, which will be utilized for the real-time forecast of ground motions based on the observed shaking.

  18. Development of a Reparametrized Semi-Empirical Force Field to Compute the Rovibrational Structure of Large PAHs

    NASA Astrophysics Data System (ADS)

    Fortenberry, Ryan

    The Spitzer Space Telescope observation of spectra most likely attributable to diverse and abundant populations of polycyclic aromatic hydrocarbons (PAHs) in space has led to tremendous interest in these molecules as tracers of the physical conditions in different astrophysical regions. A major challenge in using PAHs as molecular tracers is the complexity of the spectral features in the 3-20 μm region. The large number and vibrational similarity of the putative PAHs responsible for these spectra necessitate determination of the most accurate basis spectra possible for comparison. It is essential that these spectra be established in order for the regions explored with the newest generation of observatories such as SOFIA and JWST to be understood. Current strategies to develop these spectra for individual PAHs involve either matrix-isolation IR measurements or quantum chemical calculations of harmonic vibrational frequencies. These strategies have been employed to develop the successful PAH IR spectral database as a repository of basis functions used to fit astronomically observed spectra, but they are limited in important ways. Both techniques provide an adequate description of the molecules in their electronic, vibrational, and rotational ground state, but these conditions do not represent energetically hot regions for PAHs near strong radiation fields of stars and are not direct representations of the gas phase. Some non-negligible matrix effects are known in condensed-phase studies, and the inclusion of anharmonicity in quantum chemical calculations is essential to generate physically-relevant results especially for hot bands. While scaling factors in either case can be useful, they are agnostic to the system studied and are not robustly predictive. One strategy that has emerged to calculate the molecular vibrational structure uses vibrational perturbation theory along with a quartic force field (QFF) to account for higher-order derivatives of the potential energy surface. QFFs can regularly predict the fundamental vibrational frequencies to within 5 cm-1 of experimentally measured values. This level of accuracy represents a reduction in discrepancies by an order of magnitude compared with harmonic frequencies calculated with density functional theory (DFT). The major limitation of the QFF strategy is that the level of electronic-structure theory required to develop a predictive force field is prohibitively time consuming for molecular systems larger than 5 atoms. Recent advances in QFF techniques utilizing informed DFT approaches have pushed the size of the systems studied up to 24 heavy atoms, but relevant PAHs can have up to hundreds of atoms. We have developed alternative electronic-structure methods that maintain the accuracy of the coupled-cluster calculations extrapolated to the complete basis set limit with relativistic and core correlation corrections applied: the CcCR QFF. These alternative methods are based on simplifications of Hartree-Fock theory in which the computationally intensive two-electron integrals are approximated using empirical parameters. These methods reduce computational time to orders of magnitude less than the CcCR calculations. We have derived a set of optimized empirical parameters to minimize these differences for molecular ions of astrochemical significance. We have shown that it is possible to derive a set of empirical parameters that will produce RMS energy differences of less than 2 cm-1 for our test systems.
We are proposing to adopt this reparameterization strategy and some of the lessons learned from the informed DFT studies to create a semi-empirical method whose tremendous speed will allow us to study the rovibrational structure of large PAHs with up to 100s of carbon atoms.

  19. A novel method for overlapping community detection using Multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa

    2018-09-01

    The problem of community detection as one of the most important applications of network science can be addressed effectively by multi-objective optimization. In this paper, we aim to present a novel efficient method based on this approach. Also, in this study the idea of using all Pareto fronts to detect overlapping communities is introduced. The proposed method has two main advantages compared to other multi-objective optimization based approaches. The first advantage is scalability, and the second is the ability to find overlapping communities. Despite most of the works, the proposed method is able to find overlapping communities effectively. The new algorithm works by extracting appropriate communities from all the Pareto optimal solutions, instead of choosing the one optimal solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks show that the proposed method performs better in comparison with other methods.

  20. Intraprocedural yttrium-90 positron emission tomography/CT for treatment optimization of yttrium-90 radioembolization.

    PubMed

    Bourgeois, Austin C; Chang, Ted T; Bradley, Yong C; Acuff, Shelley N; Pasciak, Alexander S

    2014-02-01

    Radioembolization with yttrium-90 ((90)Y) microspheres relies on delivery of appropriate treatment activity to ensure patient safety and optimize treatment efficacy. We report a case in which (90)Y positron emission tomography (PET)/computed tomography (CT) was performed to optimize treatment planning during a same-day, three-part treatment session. This treatment consisted of (i) an initial (90)Y infusion with a dosage determined using an empiric treatment planning model, (ii) quantitative (90)Y PET/CT imaging, and (iii) a secondary infusion with treatment planning based on quantitative imaging data with the goal of delivering a specific total tumor absorbed dose. © 2014 SIR Published by SIR All rights reserved.

  1. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
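
    The role of the regularization parameter can be illustrated with a generic linear Tikhonov problem: the sketch below scans a few parameter values and reports the residual and solution norms whose trade-off any automated selection rule must negotiate. This is a plain parameter scan for illustration, not the regularized MRM criterion proposed in the paper.

      # Generic Tikhonov sketch for a linear inverse problem y = A x + noise: scan the
      # regularization parameter and report residual and solution norms.
      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.standard_normal((60, 40)) @ np.diag(1.0 / np.arange(1, 41))  # ill-conditioned
      x_true = np.zeros(40); x_true[:5] = 1.0
      y = A @ x_true + 0.01 * rng.standard_normal(60)

      for lam in [1e-4, 1e-3, 1e-2, 1e-1]:
          x = np.linalg.solve(A.T @ A + lam * np.eye(40), A.T @ y)   # Tikhonov solution
          print(lam, np.linalg.norm(A @ x - y), np.linalg.norm(x))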

  2. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  3. The spectral basis of optimal error field correction on DIII-D

    DOE PAGES

    Paz-Soldan, Carlos A.; Buttery, Richard J.; Garofalo, Andrea M.; ...

    2014-04-28

    Here, experimental optimum error field correction (EFC) currents found in a wide breadth of dedicated experiments on DIII-D are shown to be consistent with the currents required to null the poloidal harmonics of the vacuum field which drive the kink mode near the plasma edge. This allows the identification of empirical metrics which predict optimal EFC currents with accuracy comparable to that of first-principles modeling which includes the ideal plasma response. While further metric refinements are desirable, this work suggests optimal EFC currents can be effectively fed-forward based purely on knowledge of the vacuum error field and basic equilibrium properties which are routinely calculated in real-time.

  4. Optimal Measurement Conditions for Spatiotemporal EEG/MEG Source Analysis.

    ERIC Educational Resources Information Center

    Huizenga, Hilde M.; Heslenfeld, Dirk J.; Molenaar, Peter C. M.

    2002-01-01

    Developed a method to determine the required number and position of sensors for human brain electromagnetic source analysis. Studied the method through a simulation study and an empirical study on visual evoked potentials in one adult male. Results indicate the method is fast and reliable and improves source precision. (SLD)

  5. Why IM Me? I'm Right Here!

    ERIC Educational Resources Information Center

    Marcus, Sara

    2007-01-01

    Although the relationship between styles of learning and reference service has been taken for granted within the profession, there has been little empirical research that directly links individual learning styles to optimal reference behaviors. This paper is a call for such research, and illustrates the importance of understanding the relationship…

  6. Teachers' Emotional Support Consistency Predicts Children's Achievement Gains and Social Skills

    ERIC Educational Resources Information Center

    Curby, Timothy W.; Brock, Laura L.; Hamre, Bridget K.

    2013-01-01

    Research Findings: It is widely acknowledged that consistent, high-quality teacher-student interactions promote optimal developmental outcomes for children. Previous research on the quality of teacher-student interactions provides empirical support for this premise. Little research has been conducted on the consistency of teacher-student…

  7. Manipulating the Gradient

    ERIC Educational Resources Information Center

    Gaze, Eric C.

    2005-01-01

    We introduce a cooperative learning, group lab for a Calculus III course to facilitate comprehension of the gradient vector and directional derivative concepts. The lab is a hands-on experience allowing students to manipulate a tangent plane and empirically measure the effect of partial derivatives on the direction of optimal ascent. (Contains 7…

  8. Another Fine MeSH: Clinical Medicine Meets Information Science.

    ERIC Educational Resources Information Center

    O'Rourke, Alan; Booth, Andrew; Ford, Nigel

    1999-01-01

    Discusses evidence-based medicine (EBM) and the need for systematic use of databases like MEDLINE with more sophisticated search strategies to optimize the retrieval of relevant papers. Describes an empirical study of hospital libraries that examined requests for information and search strategies using both structured and unstructured forms.…

  9. Creativity and Flow in Musical Composition: An Empirical Investigation

    ERIC Educational Resources Information Center

    MacDonald, Raymond; Byrne, Charles; Carlton, Lana

    2006-01-01

    Although an extensive literature exists on creativity and music, there is a lack of published research investigating possible links between musical creativity and Csikszentmihalyi's concept of flow or optimal experience. This article examines a group composition task to study the relationships between creativity, flow and the quality of the…

  10. An Investigation of Generalized Differential Evolution Metaheuristic for Multiobjective Optimal Crop-Mix Planning Decision

    PubMed Central

    Olugbara, Oludayo

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms—being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369

  11. An investigation of generalized differential evolution metaheuristic for multiobjective optimal crop-mix planning decision.

    PubMed

    Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms-being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.

  12. Enhancing predictive accuracy and reproducibility in clinical evaluation research: Commentary on the special section of the Journal of Evaluation in Clinical Practice.

    PubMed

    Bryant, Fred B

    2016-12-01

    This paper introduces a special section of the current issue of the Journal of Evaluation in Clinical Practice that includes a set of 6 empirical articles showcasing a versatile, new machine-learning statistical method, known as optimal data (or discriminant) analysis (ODA), specifically designed to produce statistical models that maximize predictive accuracy. As this set of papers clearly illustrates, ODA offers numerous important advantages over traditional statistical methods-advantages that enhance the validity and reproducibility of statistical conclusions in empirical research. This issue of the journal also includes a review of a recently published book that provides a comprehensive introduction to the logic, theory, and application of ODA in empirical research. It is argued that researchers have much to gain by using ODA to analyze their data. © 2016 John Wiley & Sons, Ltd.

  13. Large Devaluations and the Real Exchange Rate

    ERIC Educational Resources Information Center

    Burstein, Ariel; Eichenbaum, Martin; Rebelo, Sergio

    2005-01-01

    In this paper we argue that the primary force behind the large drop in real exchange rates that occurs after large devaluations is the slow adjustment in the prices of nontradable goods and services. Our empirical analysis uses data from five large devaluation episodes: Argentina (2002), Brazil (1999), Korea (1997), Mexico (1994), and Thailand…

  14. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasrabadi, M. N., E-mail: mnnasrabadi@ast.ui.ac.ir; Sepiani, M.

    2015-03-30

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for simulation of production of these radioisotopes in the TALYS, EMPIRE and LISE++ reaction codes; then, parameters and different models of nuclear level density, as one of the most important components in statistical reaction models, are adjusted for optimum production of the desired radioactive yields.

  15. Study of components and statistical reaction mechanism in simulation of nuclear process for optimized production of 64Cu and 67Ga medical radioisotopes using TALYS, EMPIRE and LISE++ nuclear reaction and evaporation codes

    NASA Astrophysics Data System (ADS)

    Nasrabadi, M. N.; Sepiani, M.

    2015-03-01

    Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for simulation of production of these radioisotopes in the TALYS, EMPIRE & LISE++ reaction codes; then, parameters and different models of nuclear level density, as one of the most important components in statistical reaction models, are adjusted for optimum production of the desired radioactive yields.

  16. On the Hilbert-Huang Transform Theoretical Foundation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Blank, Karin; Huang, Norden E.

    2004-01-01

    The Hilbert-Huang Transform [HHT] is a novel empirical method for spectrum analysis of non-linear and non-stationary signals. The HHT is a recent development and much remains to be done to establish the theoretical foundation of the HHT algorithms. This paper develops the theoretical foundation for the convergence of the HHT sifting algorithm and it proves that the finest spectrum scale will always be the first generated by the HHT Empirical Mode Decomposition (EMD) algorithm. The theoretical foundation for cutting a set of extrema data points into two parts is also developed. This then allows parallel signal processing for the computationally complex HHT sifting algorithm and its optimization in hardware.
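
    A simplified single sifting pass, the core step of the EMD algorithm discussed above: fit spline envelopes through the local maxima and minima and subtract their mean. Real EMD iterates this step with stopping criteria and end-effect handling; the snippet is only a sketch of the idea.

      # One simplified EMD sifting pass: cubic-spline envelopes through local maxima and
      # minima, then subtract their mean to obtain a candidate intrinsic mode function.
      import numpy as np
      from scipy.signal import argrelextrema
      from scipy.interpolate import CubicSpline

      def sift_once(t, x):
          imax = argrelextrema(x, np.greater)[0]
          imin = argrelextrema(x, np.less)[0]
          upper = CubicSpline(t[imax], x[imax])(t)     # upper envelope
          lower = CubicSpline(t[imin], x[imin])(t)     # lower envelope
          return x - 0.5 * (upper + lower)             # candidate IMF after one pass

      t = np.linspace(0, 1, 2000)
      x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
      h = sift_once(t, x)                              # the fast oscillation dominates h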

  17. Word lengths are optimized for efficient communication.

    PubMed

    Piantadosi, Steven T; Tily, Harry; Gibson, Edward

    2011-03-01

    We demonstrate a substantial improvement on one of the most celebrated empirical laws in the study of language, Zipf's 75-y-old theory that word length is primarily determined by frequency of use. In accord with rational theories of communication, we show across 10 languages that average information content is a much better predictor of word length than frequency. This indicates that human lexicons are efficiently structured for communication by taking into account interword statistical dependencies. Lexical systems result from an optimization of communicative pressures, coding meanings efficiently given the complex statistics of natural language use.
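
    A toy version of the comparison described above: for each word in a tiny corpus, compute its length, its negative log frequency, and its average in-context surprisal under a bigram model. The corpus is made up purely for illustration; the study itself uses large corpora across ten languages.

      # Correlate word length with log frequency and with average bigram surprisal
      # on a tiny invented corpus (illustration only).
      import math
      from collections import Counter, defaultdict

      corpus = ("the cat sat on the mat the dog sat on the log "
                "the cat saw the dog and the dog saw the cat").split()
      unigram = Counter(corpus)
      bigram = defaultdict(Counter)
      for w1, w2 in zip(corpus, corpus[1:]):
          bigram[w1][w2] += 1

      def avg_surprisal(word):
          """Mean -log2 P(word | previous word) over the word's occurrences."""
          vals = [-math.log2(bigram[w1][w2] / sum(bigram[w1].values()))
                  for w1, w2 in zip(corpus, corpus[1:]) if w2 == word]
          return sum(vals) / len(vals)

      for w in sorted(unigram, key=len):
          print(f"{w:4s} len={len(w)} -log2p={-math.log2(unigram[w] / len(corpus)):.2f} "
                f"surprisal={avg_surprisal(w):.2f}")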

  18. Functionality limit of classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2015-09-01

    By analyzing the system dynamics in the landscape paradigm, optimization function of classical simulated annealing is reviewed on the random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement that the algorithm maintains its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to the optimization algorithm research.
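
    A minimal simulated annealing loop on a random traveling salesman instance, illustrating the temperature-length choice discussed above; the schedule parameters are arbitrary illustrative values.

      # Simulated annealing on a random 30-city TSP with a geometric cooling schedule.
      import math, random

      random.seed(0)
      pts = [(random.random(), random.random()) for _ in range(30)]
      dist = lambda i, j: math.dist(pts[i], pts[j])
      tour_len = lambda t: sum(dist(t[i], t[(i + 1) % len(t)]) for i in range(len(t)))

      tour = list(range(len(pts)))
      T, alpha, steps_per_T = 1.0, 0.95, 200      # temperature schedule (empirical choice)
      while T > 1e-3:
          for _ in range(steps_per_T):
              i, j = sorted(random.sample(range(len(pts)), 2))
              cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt style reversal
              d = tour_len(cand) - tour_len(tour)
              if d < 0 or random.random() < math.exp(-d / T):
                  tour = cand
          T *= alpha
      print(round(tour_len(tour), 3))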

  19. Sampling errors in the estimation of empirical orthogonal functions. [for climatology studies

    NASA Technical Reports Server (NTRS)

    North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.

    1982-01-01

    Empirical Orthogonal Functions (EOF's), eigenvectors of the spatial cross-covariance matrix of a meteorological field, are reviewed with special attention given to the necessary weighting factors for gridded data and the sampling errors incurred when too small a sample is available. The geographical shape of an EOF shows large intersample variability when its associated eigenvalue is 'close' to a neighboring one. A rule of thumb indicating when an EOF is likely to be subject to large sampling fluctuations is presented. An explicit example, based on the statistics of the 500 mb geopotential height field, displays large intersample variability in the EOF's for sample sizes of a few hundred independent realizations, a size seldom exceeded by meteorological data sets.
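
    A compact sketch of the EOF computation and the sampling-error rule of thumb described above: area-weight a gridded anomaly field, obtain EOFs and eigenvalues via an SVD, and flag eigenvalues whose approximate sampling error, lambda*sqrt(2/N), overlaps the spacing to a neighbor. The synthetic field and grid are placeholders.

      import numpy as np

      rng = np.random.default_rng(2)
      N, nlat, nlon = 300, 20, 40                      # samples x grid (toy field)
      lats = np.linspace(-80, 80, nlat)
      w = np.sqrt(np.cos(np.deg2rad(lats)))            # latitude weights for grid cells
      X = rng.standard_normal((N, nlat, nlon)) * w[:, None]
      X = X.reshape(N, -1) - X.reshape(N, -1).mean(0)  # anomalies, samples x points

      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      eofs = Vt                                        # rows are EOF spatial patterns
      eigvals = s ** 2 / (N - 1)                       # eigenvalues of the covariance matrix
      err = eigvals * np.sqrt(2.0 / N)                 # rule-of-thumb sampling error
      for k in range(4):
          close = err[k] > (eigvals[k] - eigvals[k + 1])
          print(f"EOF {k + 1}: lambda={eigvals[k]:.3f} +/- {err[k]:.3f}  near-degenerate={close}")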

  20. Impact of Inadequate Empirical Therapy on the Mortality of Patients with Bloodstream Infections: a Propensity Score-Based Analysis

    PubMed Central

    Retamar, Pilar; Portillo, María M.; López-Prieto, María Dolores; Rodríguez-López, Fernando; de Cueto, Marina; García, María V.; Gómez, María J.; del Arco, Alfonso; Muñoz, Angel; Sánchez-Porto, Antonio; Torres-Tortosa, Manuel; Martín-Aspas, Andrés; Arroyo, Ascensión; García-Figueras, Carolina; Acosta, Federico; Corzo, Juan E.; León-Ruiz, Laura; Escobar-Lara, Trinidad

    2012-01-01

    The impact of the adequacy of empirical therapy on outcome for patients with bloodstream infections (BSI) is key for determining whether adequate empirical coverage should be prioritized over other, more conservative approaches. Recent systematic reviews outlined the need for new studies in the field, using improved methodologies. We assessed the impact of inadequate empirical treatment on the mortality of patients with BSI in the present-day context, incorporating recent methodological recommendations. A prospective multicenter cohort including all BSI episodes in adult patients was performed in 15 hospitals in Andalucía, Spain, over a 2-month period in 2006 to 2007. The main outcome variables were 14- and 30-day mortality. Adjusted analyses were performed by multivariate analysis and propensity score-based matching. Eight hundred one episodes were included. Inadequate empirical therapy was administered in 199 (24.8%) episodes; mortality at days 14 and 30 was 18.55% and 22.6%, respectively. After controlling for age, Charlson index, Pitt score, neutropenia, source, etiology, and presentation with severe sepsis or shock, inadequate empirical treatment was associated with increased mortality at days 14 and 30 (odds ratios [ORs], 2.12 and 1.56; 95% confidence intervals [95% CI], 1.34 to 3.34 and 1.01 to 2.40, respectively). The adjusted ORs after a propensity score-based matched analysis were 3.03 and 1.70 (95% CI, 1.60 to 5.74 and 0.98 to 2.98, respectively). In conclusion, inadequate empirical therapy is independently associated with increased mortality in patients with BSI. Programs to improve the quality of empirical therapy in patients with suspicion of BSI and optimization of definitive therapy should be implemented. PMID:22005999
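
    A toy sketch of the propensity-score step used in the adjusted analysis above: fit a logistic model for the probability of inadequate empirical therapy given covariates and match each exposed episode to the unexposed episode with the nearest score (here, with replacement). The simulated data and variable names are illustrative only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 800
      X = rng.standard_normal((n, 4))                     # stand-ins for age, Charlson, Pitt, source
      treated = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
      ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]   # propensity scores

      exposed, unexposed = np.where(treated == 1)[0], np.where(treated == 0)[0]
      matches = [unexposed[np.argmin(np.abs(ps[unexposed] - ps[i]))] for i in exposed]
      print(len(exposed), "exposed episodes matched to", len(set(matches)), "distinct controls")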

  1. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from the likely local optima and guide the swarm to perform more efficient search. During the iterative optimization process of EB-QPSO, when criteria met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The new generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm for further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively in all of the benchmark functions in terms of better global search capability and faster convergence rate.

  2. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.

  3. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  4. FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing

    2010-01-01

    Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
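
    A small sketch of the copula construction described above: rank-transform paired density values to uniform margins (the empirical copula) and generate a Gaussian copula with a matching rank correlation for comparison. The lognormal "density" pairs are synthetic stand-ins for the evolved dark matter field.

      import numpy as np
      from scipy.stats import rankdata, norm, spearmanr

      rng = np.random.default_rng(4)
      z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=5000)
      d1, d2 = np.exp(z[:, 0]), np.exp(z[:, 1])        # toy lognormal "density" pairs

      u = rankdata(d1) / (len(d1) + 1)                 # empirical copula coordinates
      v = rankdata(d2) / (len(d2) + 1)
      rho = spearmanr(d1, d2)[0]                       # rank correlation of the data
      r = 2 * np.sin(np.pi * rho / 6)                  # Pearson rho of the matching Gaussian copula
      g = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=5000)
      ug, vg = norm.cdf(g[:, 0]), norm.cdf(g[:, 1])    # Gaussian-copula sample for comparison
      print(np.corrcoef(u, v)[0, 1], np.corrcoef(ug, vg)[0, 1])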

  5. Designing, Building, and Connecting Networks to Support Distributed Collaborative Empirical Writing Research

    ERIC Educational Resources Information Center

    Brunk-Chavez, Beth; Pigg, Stacey; Moore, Jessie; Rosinski, Paula; Grabill, Jeffrey T.

    2018-01-01

    To speak to diverse audiences about how people learn to write and how writing works inside and outside the academy, we must conduct research across geographical, institutional, and cultural contexts as well as research that enables comparison when appropriate. Large-scale empirical research is useful for both of these moves; however, we must…

  6. From Camouflage to Classrooms: An Empirical Examination of Veterans at a Regional Midwestern University

    ERIC Educational Resources Information Center

    Oberweis, Trish; Bradford, Matthew

    2017-01-01

    Despite a large and growing population of former military service members entering colleges and universities, there is very little empirical work that examines veterans' perspectives on service needs or their perceptions and attitudes about their college experience. Much of what we know of student veterans' views is based on a different generation…

  7. An Examination of the Influences on "Green" Mobile Phone Purchases among Young Business Students: An Empirical Analysis

    ERIC Educational Resources Information Center

    Paladino, Angela; Ng, Serena

    2013-01-01

    This article examines the determinants of eco-friendly electronic good consumption among students at a large Australian university who have been exposed to a marketing campaign, Mobile Muster. Empirical research generally shows younger consumers to be less concerned about the environment. Similar studies demonstrate that peer pressure has a large…

  8. What is heartburn worth? A cost-utility analysis of management strategies.

    PubMed

    Heudebert, G R; Centor, R M; Klapow, J C; Marks, R; Johnson, L; Wilcox, C M

    2000-03-01

    To determine the best treatment strategy for the management of patients presenting with symptoms consistent with uncomplicated heartburn, we performed a cost-utility analysis of 4 alternatives: empirical proton pump inhibitor, empirical histamine2-receptor antagonist, and diagnostic strategies consisting of either esophagogastroduodenoscopy (EGD) or an upper gastrointestinal series before treatment. The time horizon of the model was 1 year. The base case analysis assumed a cohort of otherwise healthy 45-year-old individuals in a primary care practice. Empirical treatment with a proton pump inhibitor was projected to provide the greatest quality-adjusted survival for the cohort. Empirical treatment with a histamine2-receptor antagonist was projected to be the least costly of the alternatives. The marginal cost-effectiveness of using a proton pump inhibitor over a histamine2-receptor antagonist was approximately $10,400 per quality-adjusted life year (QALY) gained in the base case analysis and was less than $50,000 per QALY as long as the utility for heartburn was less than 0.95. Both diagnostic strategies were dominated by the proton pump inhibitor alternative. Empirical treatment seems to be the optimal initial management strategy for patients with heartburn, but the choice between a proton pump inhibitor or histamine2-receptor antagonist depends on the impact of heartburn on quality of life.
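
    The marginal cost-effectiveness quoted above is a simple ratio of cost and effectiveness differences; the numbers below are invented placeholders chosen only so that the arithmetic reproduces a figure of the same order as the reported ~$10,400 per QALY.

      # Incremental cost-effectiveness ratio: ICER = (cost_A - cost_B) / (QALY_A - QALY_B).
      # Inputs are hypothetical, not the study's data.
      def icer(cost_a, cost_b, qaly_a, qaly_b):
          return (cost_a - cost_b) / (qaly_a - qaly_b)

      print(round(icer(cost_a=850.0, cost_b=590.0, qaly_a=0.930, qaly_b=0.905)))  # -> 10400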

  9. What Is Heartburn Worth?

    PubMed Central

    Heudebert, Gustavo R; Centor, Robert M; Klapow, Joshua C; Marks, Robert; Johnson, Lawrence; Wilcox, C Mel

    2000-01-01

    OBJECTIVE: To determine the best treatment strategy for the management of patients presenting with symptoms consistent with uncomplicated heartburn. METHODS: We performed a cost-utility analysis of 4 alternatives: empirical proton pump inhibitor, empirical histamine2-receptor antagonist, and diagnostic strategies consisting of either esophagogastroduodenoscopy (EGD) or an upper gastrointestinal series before treatment. The time horizon of the model was 1 year. The base case analysis assumed a cohort of otherwise healthy 45-year-old individuals in a primary care practice. MAIN RESULTS: Empirical treatment with a proton pump inhibitor was projected to provide the greatest quality-adjusted survival for the cohort. Empirical treatment with a histamine2-receptor antagonist was projected to be the least costly of the alternatives. The marginal cost-effectiveness of using a proton pump inhibitor over a histamine2-receptor antagonist was approximately $10,400 per quality-adjusted life year (QALY) gained in the base case analysis and was less than $50,000 per QALY as long as the utility for heartburn was less than 0.95. Both diagnostic strategies were dominated by the proton pump inhibitor alternative. CONCLUSIONS: Empirical treatment seems to be the optimal initial management strategy for patients with heartburn, but the choice between a proton pump inhibitor or histamine2-receptor antagonist depends on the impact of heartburn on quality of life. PMID:10718898

  10. An analysis of the number of parking bays and checkout counters for a supermarket using SAS simulation studio

    NASA Astrophysics Data System (ADS)

    Kar, Leow Soo

    2014-07-01

    Two important factors that influence customer satisfaction in large supermarkets or hypermarkets are adequate parking facilities and short waiting times at the checkout counters. This paper describes the simulation analysis of a large supermarket to determine the optimal levels of these two factors. SAS Simulation Studio is used to model a large supermarket in a shopping mall with a car park facility. In order to make the simulation model more realistic, a number of complexities are introduced into the model. For example, arrival patterns of customers vary with the time of the day (morning, afternoon and evening) and with the day of the week (weekdays or weekends), the transport mode of arriving customers (by car or other means), the mode of payment (cash or credit card), customer shopping pattern (leisurely, normal, exact) or choice of checkout counters (normal or express). In this study, we focus on 2 important components of the simulation model, namely the parking area and the normal and express checkout counters. The parking area is modeled using a Resource Pool block where one resource unit represents one parking bay. A customer arriving by car seizes a unit of the resource from the Pool block (parks car) and only releases it when he exits the system. Cars arriving when the Resource Pool is empty (no more parking bays) leave without entering the system. The normal and express checkouts are represented by Server blocks with appropriate service time distributions. As a case study, a supermarket in a shopping mall with a limited number of parking bays in Bangsar was chosen for this research. Empirical data on arrival patterns, arrival modes, payment modes, shopping patterns, and service times of the checkout counters were collected and analyzed to validate the model. Sensitivity analysis was also performed with different simulation scenarios to identify the optimal number of parking bays and checkout counters.

  11. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    NASA Astrophysics Data System (ADS)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving efficiency and accuracy of the geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted at Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them on appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed to catch global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square errors over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning exist beyond the area of retrieval of geophysical parameters.

  12. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop; full analysis is then performed only periodically. Problem-dependent software can be separated from the generic code using a systems programming technique, and then embodies the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  13. A Near-Optimal Distributed QoS Constrained Routing Algorithm for Multichannel Wireless Sensor Networks

    PubMed Central

    Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen

    2013-01-01

    One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; such networks are known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource-constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs. How to meet the stringent delay QoS in resource-constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while achieving higher system throughput in stringently resource-constrained WVSNs.

  14. Delineating the joint hierarchical structure of clinical and personality disorders in an outpatient psychiatric sample.

    PubMed

    Forbes, Miriam K; Kotov, Roman; Ruggero, Camilo J; Watson, David; Zimmerman, Mark; Krueger, Robert F

    2017-11-01

    A large body of research has focused on identifying the optimal number of dimensions - or spectra - to model individual differences in psychopathology. Recently, it has become increasingly clear that ostensibly competing models with varying numbers of spectra can be synthesized in empirically derived hierarchical structures. We examined the convergence between top-down (bass-ackwards or sequential principal components analysis) and bottom-up (hierarchical agglomerative cluster analysis) statistical methods for elucidating hierarchies to explicate the joint hierarchical structure of clinical and personality disorders. Analyses examined 24 clinical and personality disorders based on semi-structured clinical interviews in an outpatient psychiatric sample (n = 2900). The two methods of hierarchical analysis converged on a three-tier joint hierarchy of psychopathology. At the lowest tier, there were seven spectra - disinhibition, antagonism, core thought disorder, detachment, core internalizing, somatoform, and compulsivity - that emerged in both methods. These spectra were nested under the same three higher-order superspectra in both methods: externalizing, broad thought dysfunction, and broad internalizing. In turn, these three superspectra were nested under a single general psychopathology spectrum, which represented the top tier of the hierarchical structure. The hierarchical structure mirrors and extends past research, with the inclusion of a novel compulsivity spectrum and the finding that psychopathology is organized in three superordinate domains. This hierarchy can thus be used as a flexible and integrative framework to facilitate psychopathology research with varying levels of specificity (i.e., focusing on the optimal level of detailed information, rather than the optimal number of factors). Copyright © 2017 Elsevier Inc. All rights reserved.
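
    As an illustration only, the two hierarchy-building strategies can be mimicked in a few lines of Python; the random matrix below stands in for the 24 interview-based disorder indicators, and the seven-cluster cut is an assumption taken from the abstract rather than a reproduction of the authors' analysis.

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        X = np.random.rand(2900, 24)        # placeholder for the disorder indicators

        # Top-down ("bass-ackwards"): extract 1, 2, ..., 7 components in turn;
        # relating scores across adjacent levels (omitted here) yields the hierarchy.
        levels = [PCA(n_components=k).fit_transform(X) for k in range(1, 8)]

        # Bottom-up: agglomerative clustering of disorders on a correlation-based distance.
        corr = np.corrcoef(X, rowvar=False)
        Z = linkage(squareform(1 - corr, checks=False), method='average')
        spectra = fcluster(Z, t=7, criterion='maxclust')   # cut the tree at seven clusters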

  15. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, and they can be applied to real-life problems where conventional techniques cannot be used. Grey Wolf Optimizer is one such technique, and it has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on five common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large scale problems, except on Rosenbrock, which is a unimodal function.
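
    For reference, a compact NumPy sketch of the standard Grey Wolf Optimizer update (alpha, beta and delta leaders guiding the pack, with the encircling coefficient decaying from 2 to 0) applied to the Sphere function is given below; the population size, iteration count and bounds are illustrative choices, not the paper's settings.

        import numpy as np

        def sphere(x):
            return np.sum(x ** 2, axis=-1)

        def gwo(obj, dim=50, n_wolves=30, iters=500, lb=-100.0, ub=100.0):
            X = np.random.uniform(lb, ub, (n_wolves, dim))
            for t in range(iters):
                order = np.argsort(obj(X))
                leaders = X[order[:3]].copy()              # alpha, beta, delta wolves
                a = 2 - 2 * t / iters                      # decays linearly from 2 to 0
                A = 2 * a * np.random.rand(3, n_wolves, dim) - a
                C = 2 * np.random.rand(3, n_wolves, dim)
                D = np.abs(C * leaders[:, None, :] - X)    # distance of each wolf to each leader
                X = np.clip(np.mean(leaders[:, None, :] - A * D, axis=0), lb, ub)
            return X[np.argmin(obj(X))]

        best = gwo(sphere, dim=1000)                       # dimensions from 50 to 1000 were tested
        print(sphere(best))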

  16. Empirical development of ground acceleration, velocity, and displacement for accidental explosions at J5 or the proposed large altitude rocket cell at Arnold Engineering Development Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, B.C.

    This study is an assessment of the ground shock which may be generated in the event of an accidental explosion at J5 or the Proposed Large Altitude Rocket Cell (LARC) at the Arnold Engineering Development Center (AEDC). The assessment is accomplished by reviewing existing empirical relationships for predicting ground motion from ground shock. These relationships are compared with data for surface explosions at sites with similar geology and with yields similar to expected conditions at AEDC. Empirical relationships are developed from these data and a judgment made whether to use existing empirical relationships or the relationships developed in this study. An existing relationship (Lipner et al.) is used to predict velocity; the empirical relationships developed in the course of this study are used to predict acceleration and displacement. The ground motions are presented in table form and as contour plots. Included also is a discussion of damage criteria from blast and earthquake studies. This report recommends using velocity rather than acceleration as an indicator of structural blast damage. It is recommended that v = 2 ips (v = 0.167 fps) be used as the damage threshold value (no major damage for v less than or equal to 2 ips). 13 references, 25 figures, 6 tables.

  17. Optimizing correlation techniques for improved earthquake location

    USGS Publications Warehouse

    Schaff, D.P.; Bokelmann, G.H.R.; Ellsworth, W.L.; Zanzerkia, E.; Waldhauser, F.; Beroza, G.C.

    2004-01-01

    Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as approximately 70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and that it recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
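
    The core measurement, a time-domain cross correlation with sub-sample refinement of the peak, can be sketched in a few lines of Python; the synthetic traces and the parabolic peak interpolation below are illustrative and do not reproduce the authors' processing chain.

        import numpy as np

        def relative_arrival(trace_a, trace_b, dt):
            # Returns the delay (s) of trace_b relative to trace_a and the peak CC value.
            a = (trace_a - trace_a.mean()) / trace_a.std()
            b = (trace_b - trace_b.mean()) / trace_b.std()
            cc = np.correlate(b, a, mode='full') / len(a)
            k = int(np.argmax(cc))
            frac = 0.0
            if 0 < k < len(cc) - 1:                        # parabolic interpolation around the peak
                y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
                frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
            lag = (k + frac - (len(a) - 1)) * dt
            return lag, cc[k]

        # Example: trace_b is trace_a delayed by 13 ms, sampled at 100 Hz.
        dt = 0.01
        t = np.arange(0, 2, dt)
        trace_a = np.exp(-((t - 1.000) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * t)
        trace_b = np.exp(-((t - 1.013) / 0.05) ** 2) * np.sin(2 * np.pi * 10 * (t - 0.013))
        print(relative_arrival(trace_a, trace_b, dt))      # lag should be close to 0.013 s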

  18. Dose Titration Algorithm Tuning (DTAT) should supersede 'the' Maximum Tolerated Dose (MTD) in oncology dose-finding trials.

    PubMed

    Norris, David C

    2017-01-01

    Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent 'confirmatory' Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of 'the' maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as 'dose-finding', but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug's population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting neutrophil nadir of 500 cells/mm^3 using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace 'the' MTD with an individualized concept of MTD_i. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents.
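
    A toy illustration of a Newton-type titration step on transformed (log) scales is sketched below; the log-linear toxicity response and all numeric values are hypothetical stand-ins, not the published docetaxel pharmacokinetic/pharmacodynamic model used in the paper.

        import numpy as np

        TARGET_NADIR = 500.0                      # target neutrophil nadir, cells/mm^3

        def observed_nadir(dose, sensitivity, baseline=4500.0):
            # Hypothetical patient response: nadir falls log-linearly with dose.
            return baseline * np.exp(-sensitivity * dose)

        def titrate(sensitivity, dose0=50.0, cycles=6):
            dose, history = dose0, []
            for _ in range(cycles):
                nadir = observed_nadir(dose, sensitivity)
                history.append((dose, nadir))
                # Finite-difference slope of log(nadir) vs dose, then a Newton step
                # toward the target nadir on the log scale.
                d2 = dose * 1.05
                slope = (np.log(observed_nadir(d2, sensitivity)) - np.log(nadir)) / (d2 - dose)
                dose = dose + (np.log(TARGET_NADIR) - np.log(nadir)) / slope
            return history

        for s in (0.01, 0.03, 0.06):              # three simulated subjects with different sensitivities
            print(titrate(s)[-1])                 # individualized doses differ widely, echoing MTD_i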

  19. Fisher's geometrical model emerges as a property of complex integrated phenotypic networks.

    PubMed

    Martin, Guillaume

    2014-05-01

    Models relating phenotype space to fitness (phenotype-fitness landscapes) have seen important developments recently. They can roughly be divided into mechanistic models (e.g., metabolic networks) and more heuristic models like Fisher's geometrical model. Each has its own drawbacks, but both yield testable predictions on how the context (genomic background or environment) affects the distribution of mutation effects on fitness and thus adaptation. Both have received some empirical validation. This article aims at bridging the gap between these approaches. A derivation of the Fisher model "from first principles" is proposed, where the basic assumptions emerge from a more general model, inspired by mechanistic networks. I start from a general phenotypic network relating unspecified phenotypic traits and fitness. A limited set of qualitative assumptions is then imposed, mostly corresponding to known features of phenotypic networks: a large set of traits is pleiotropically affected by mutations and determines a much smaller set of traits under optimizing selection. Otherwise, the model remains fairly general regarding the phenotypic processes involved or the distribution of mutation effects affecting the network. A statistical treatment and a local approximation close to a fitness optimum yield a landscape that is effectively the isotropic Fisher model or its extension with a single dominant phenotypic direction. The fit of the resulting alternative distributions is illustrated in an empirical data set. These results bear implications on the validity of Fisher's model's assumptions and on which features of mutation fitness effects may vary (or not) across genomic or environmental contexts.

  20. Combined magnetic and kinetic control of advanced tokamak steady state scenarios based on semi-empirical modelling

    NASA Astrophysics Data System (ADS)

    Moreau, D.; Artaud, J. F.; Ferron, J. R.; Holcomb, C. T.; Humphreys, D. A.; Liu, F.; Luce, T. C.; Park, J. M.; Prater, R.; Turco, F.; Walker, M. L.

    2015-06-01

    This paper shows that semi-empirical data-driven models based on a two-time-scale approximation for the magnetic and kinetic control of advanced tokamak (AT) scenarios can be advantageously identified from simulated rather than real data, and used for control design. The method is applied to the combined control of the safety factor profile, q(x), and normalized pressure parameter, βN, using DIII-D parameters and actuators (on-axis co-current neutral beam injection (NBI) power, off-axis co-current NBI power, electron cyclotron current drive power, and ohmic coil). The approximate plasma response model was identified from simulated open-loop data obtained using a rapidly converging plasma transport code, METIS, which includes an MHD equilibrium and current diffusion solver, and combines plasma transport nonlinearity with 0D scaling laws and 1.5D ordinary differential equations. The paper discusses the results of closed-loop METIS simulations, using the near-optimal ARTAEMIS control algorithm (Moreau D et al 2013 Nucl. Fusion 53 063020) for steady state AT operation. With feedforward plus feedback control, the steady state target q-profile and βN are satisfactorily tracked with a time scale of about 10 s, despite large disturbances applied to the feedforward powers and plasma parameters. The robustness of the control algorithm with respect to disturbances of the H&CD actuators and of plasma parameters such as the H-factor, plasma density and effective charge, is also shown.

  1. Optimization as a Dispositive in the Production of Differences in Denmark Schools

    ERIC Educational Resources Information Center

    Hamre, Bjørn

    2014-01-01

    The theoretical framework of this paper is inspired by governmentality studies in education. The key concepts are problematization, formatting technologies, and dispositive. The paper begins with an empirical study conducted in Denmark of forty-four files from educational psychologists and articles from journals concerning schools and education.…

  2. Optimizing L2 Curriculum for China Post-Secondary Education

    ERIC Educational Resources Information Center

    Guadagni, Donald

    2015-01-01

    This instructional paper examines the lack of L2 English skills demonstrated by Chinese post-secondary education students and the results of empiric testing to determine what key language functions were missing from a student's tool box when exiting their primary education phase. The identification of these skills and ability gaps allowed for…

  3. The Impact of Business School Students' Psychological Capital on Academic Performance

    ERIC Educational Resources Information Center

    Luthans, Brett Carl; Luthans, Kyle William; Jensen, Susan M.

    2012-01-01

    Psychological capital (PsyCap) consisting of the psychological resources of hope, efficacy, resiliency, and optimism has been empirically demonstrated in the published literature to be related to manager and employee positive organizational outcomes and to be open to development. However, to date, little attention has been devoted to the impact of…

  4. Early Experience and the Development of Cognitive Competence: Some Theoretical and Methodological Issues.

    ERIC Educational Resources Information Center

    Ulvund, Stein Erik

    1982-01-01

    Argues that in analyzing effects of early experience on development of cognitive competence, theoretical analyses as well as empirical investigations should be based on a transactional model of development. Shows optimal stimulation hypothesis, particularly the enhancement prediction, seems to represent a transactional approach to the study of…

  5. Exploring Instructors' Technology Readiness, Attitudes and Behavioral Intentions towards E-Learning Technologies in Egypt and United Arab Emirates

    ERIC Educational Resources Information Center

    El Alfy, Shahira; Gómez, Jorge Marx; Ivanov, Danail

    2017-01-01

    This paper explores the association between technology readiness, (a meta-construct consisting of optimism, innovativeness, discomfort, and insecurity), attitude, and behavioral intention towards e-learning technologies adoption within an education institution context. The empirical study data is collected at two private universities located in…

  6. Use of Missing Data Methods in Longitudinal Studies: The Persistence of Bad Practices in Developmental Psychology

    ERIC Educational Resources Information Center

    Jelicic, Helena; Phelps, Erin; Lerner, Richard M.

    2009-01-01

    Developmental science rests on describing, explaining, and optimizing intraindividual changes and, hence, empirically requires longitudinal research. Problems of missing data arise in most longitudinal studies, thus creating challenges for interpreting the substance and structure of intraindividual change. Using a sample of reports of longitudinal…

  7. Creative Self-Efficacy and Innovative Behavior in a Service Setting: Optimism as a Moderator

    ERIC Educational Resources Information Center

    Hsu, Michael L. A.; Hou, Sheng-Tsung; Fan, Hsueh-Liang

    2011-01-01

    Creativity research on the personality approach has focused on the relationship between individual attributes and innovative behavior. However, few studies have empirically examined the effects of positive psychological traits on innovative behavior in an organizational setting. This study examines the relationships among creative self-efficacy,…

  8. Supervision and Satisfaction among School Psychologists: An Empirical Study of Professionals in Victoria, Australia

    ERIC Educational Resources Information Center

    Thielking, Monica; Moore, Susan; Jimerson, Shane R.

    2006-01-01

    This study examined the supervision arrangements and job satisfaction among school psychologists in Victoria, Australia. Participation in professional supervision was explored in relation to the type of employment and job satisfaction. The results revealed that the frequency of participation in supervision activities was less than optimal, with…

  9. Boosting the Potency of Resistance: Combining the Motivational Forces of Inoculation and Psychological Reactance

    ERIC Educational Resources Information Center

    Miller, Claude H.; Ivanov, Bobi; Sims, Jeanetta; Compton, Josh; Harrison, Kylie J.; Parker, Kimberly A.; Parker, James L.; Averbeck, Joshua M.

    2013-01-01

    The efficacy of inoculation theory has been confirmed by decades of empirical research, yet optimizing its effectiveness remains a vibrant line of investigation. The present research turns to psychological reactance theory for a means of enhancing the core mechanisms of inoculation--threat and refutational preemption. Findings from a multisite…

  10. Art Therapy and Flow: A Review of the Literature and Applications

    ERIC Educational Resources Information Center

    Chilton, Gioia

    2013-01-01

    Flow is a construct developed by Mihaly Csikszentmihalyi that describes a psychological state of optimal attention and engagement. Creativity and improved well-being have been empirically linked to the flow experience; therefore, the study of flow has implications for art therapy research and practice. Art therapists may facilitate personal growth…

  11. An Empirical Approach to Determining Advertising Spending Level.

    ERIC Educational Resources Information Center

    Sunoo, D. H.; Lin, Lynn Y. S.

    To assess the relationship between advertising and consumer promotion and to determine the optimal short-term advertising spending level for a product, a research project was undertaken by a major food manufacturer. One thousand homes subscribing to a dual-system cable television service received either no advertising exposure to the product or…

  12. A Comparison of Flexible Prompt Fading and Constant Time Delay for Five Children with Autism

    ERIC Educational Resources Information Center

    Soluaga, Doris; Leaf, Justin B.; Taubman, Mitchell; McEachin, John; Leaf, Ron

    2008-01-01

    Given the increasing rates of autism, identifying prompting procedures that can assist in the development of more optimal learning opportunities for this population is critical. Extensive empirical research exists supporting the effectiveness of various prompting strategies. Constant time delay (CTD) is a highly implemented prompting procedure…

  13. Stringent Mitigation Policy Implied By Temperature Impacts on Economic Growth

    NASA Astrophysics Data System (ADS)

    Moore, F.; Turner, D.

    2014-12-01

    Integrated assessment models (IAMs) compare the costs of greenhouse gas mitigation with damages from climate change in order to evaluate the social welfare implications of climate policy proposals and inform optimal emissions reduction trajectories. However, these models have been criticized for lacking a strong empirical basis for their damage functions, which do little to alter assumptions of sustained GDP growth, even under extreme temperature scenarios. We implement empirical estimates of temperature effects on GDP growth-rates in the Dynamic Integrated Climate and Economy (DICE) model via two pathways, total factor productivity (TFP) growth and capital depreciation. Even under optimistic adaptation assumptions, this damage specification implies that optimal climate policy involves the elimination of emissions in the near future, the stabilization of global temperature change below 2°C, and a social cost of carbon (SCC) an order of magnitude larger than previous estimates. A sensitivity analysis shows that the magnitude of growth effects, the rate of adaptation, and the dynamic interaction between damages from warming and GDP are three critical uncertainties and an important focus for future research.

  14. High Performance Graphene Nano-ribbon Thermoelectric Devices by Incorporation and Dimensional Tuning of Nanopores

    PubMed Central

    Sharafat Hossain, Md; Al-Dirini, Feras; Hossain, Faruque M.; Skafidas, Efstratios

    2015-01-01

    Thermoelectric properties of Graphene nano-ribbons (GNRs) with nanopores (NPs) are explored for a range of pore dimensions in order to achieve a high performance two-dimensional nano-scale thermoelectric device. We reduce thermal conductivity of GNRs by introducing pores in them in order to enhance their thermoelectric performance. The electrical properties (Seebeck coefficient and conductivity) of the device usually degrade with pore inclusion; however, we tune the pore to its optimal dimension in order to minimize this degradation, enhancing the overall thermoelectric performance (high ZT value) of our device. We observe that the side channel width plays an important role to achieve optimal performance while the effect of pore length is less pronounced. This result is consistent with the fact that electronic conduction in GNRs is dominated along its edges. Ballistic transport regime is assumed and a semi-empirical method using Huckel basis set is used to obtain the electrical properties, while the phononic system is characterized by Tersoff empirical potential model. The proposed device structure has potential applications as a nanoscale local cooler and as a thermoelectric power generator. PMID:26083450

  15. Empirical scoring functions for advanced protein-ligand docking with PLANTS.

    PubMed

    Korb, Oliver; Stützle, Thomas; Exner, Thomas E

    2009-01-01

    In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, regarding their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing an excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean list(nc) (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 Å with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.

  16. Impact of Coverage-Dependent Marginal Costs on Optimal HPV Vaccination Strategies

    PubMed Central

    Ryser, Marc D.; McGoff, Kevin; Herzog, David P.; Sivakoff, David J.; Myers, Evan R.

    2015-01-01

    The effectiveness of vaccinating males against the human papillomavirus (HPV) remains a controversial subject. Many existing studies conclude that increasing female coverage is more effective than diverting resources into male vaccination. Recently, several empirical studies on HPV immunization have been published, providing evidence of the fact that marginal vaccination costs increase with coverage. In this study, we use a stochastic agent-based modeling framework to revisit the male vaccination debate in light of these new findings. Within this framework, we assess the impact of coverage-dependent marginal costs of vaccine distribution on optimal immunization strategies against HPV. Focusing on the two scenarios of ongoing and new vaccination programs, we analyze different resource allocation policies and their effects on overall disease burden. Our results suggest that if the costs associated with vaccinating males are relatively close to those associated with vaccinating females, then coverage-dependent, increasing marginal costs may favor vaccination strategies that entail immunization of both genders. In particular, this study emphasizes the necessity for further empirical research on the nature of coverage-dependent vaccination costs. PMID:25979280

  17. Combined Exact-Repeat and Geodetic Mission Altimetry for High-Resolution Empirical Tide Mapping

    NASA Astrophysics Data System (ADS)

    Zaron, E. D.

    2014-12-01

    The configuration of present and historical exact-repeat mission (ERM) altimeter ground tracks determines the maximum resolution of empirical tidal maps obtained with ERM data. Although the mode-1 baroclinic tide is resolvable at mid-latitudes in the open ocean, the ability to detect baroclinic and barotropic tides near islands and complex coastlines is limited, in part, by ERM track density. In order to obtain higher resolution maps, the possibility of combining ERM and geodetic mission (GM) altimetry is considered, using a combination of spatial thin-plate splines and temporal harmonic analysis. Given the present spatial and temporal distribution of GM missions, it is found that GM data can contribute to resolving tidal features smaller than 75 km, provided the signal amplitude is greater than about 1 cm. Uncertainties in the mean sea surface and environmental corrections are significant components of the GM error budget, and methods for data selection and along-track filtering are still being optimized. Application to two regions, Monterey Bay and Luzon Strait, finds evidence for complex tidal fields in agreement with independent observations and modeling studies.

  18. Multi Objective Optimization of Multi Wall Carbon Nanotube Based Nanogrinding Wheel Using Grey Relational and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Sethuramalingam, Prabhu; Vinayagam, Babu Kupusamy

    2016-07-01

    A carbon nanotube mixed grinding wheel is used in the grinding process to analyze the surface characteristics of AISI D2 tool steel material. To date, no work has been carried out using a carbon nanotube-based grinding wheel. A carbon nanotube-based grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. In the present study, the multi-response optimization of process parameters, namely the surface roughness and metal removal rate of the grinding process with single wall carbon nanotube (CNT) mixed cutting fluids, is undertaken using an orthogonal array with grey relational analysis. Experiments are performed under the grinding conditions designated by the L9 orthogonal array. Based on the results of the grey relational analysis, a set of optimum grinding parameters is obtained. The significant machining parameters are identified using the analysis of variance approach. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results are compared empirically for grinding with and without the CNT grinding wheel.
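
    A hedged sketch of the grey relational grade calculation for the two responses (surface roughness, smaller-the-better; metal removal rate, larger-the-better) is given below; the nine response values are invented placeholders, not the paper's L9 measurements, and equal response weights are assumed.

        import numpy as np

        # Columns: surface roughness Ra (smaller is better), metal removal rate (larger is better)
        responses = np.array([[0.82, 12.1], [0.75, 10.4], [0.91, 14.0],
                              [0.68, 11.2], [0.88, 13.5], [0.79,  9.8],
                              [0.72, 12.9], [0.85, 10.9], [0.70, 13.1]])

        def normalize(col, larger_better):
            lo, hi = col.min(), col.max()
            return (col - lo) / (hi - lo) if larger_better else (hi - col) / (hi - lo)

        norm = np.column_stack([normalize(responses[:, 0], False),
                                normalize(responses[:, 1], True)])
        delta = 1.0 - norm                              # deviation from the ideal sequence
        zeta = 0.5                                      # distinguishing coefficient
        grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        grade = grc.mean(axis=1)                        # grey relational grade, equal weights
        print('best experimental run:', int(np.argmax(grade)) + 1)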

  19. High Performance Graphene Nano-ribbon Thermoelectric Devices by Incorporation and Dimensional Tuning of Nanopores.

    PubMed

    Hossain, Md Sharafat; Al-Dirini, Feras; Hossain, Faruque M; Skafidas, Efstratios

    2015-06-17

    Thermoelectric properties of Graphene nano-ribbons (GNRs) with nanopores (NPs) are explored for a range of pore dimensions in order to achieve a high performance two-dimensional nano-scale thermoelectric device. We reduce thermal conductivity of GNRs by introducing pores in them in order to enhance their thermoelectric performance. The electrical properties (Seebeck coefficient and conductivity) of the device usually degrade with pore inclusion; however, we tune the pore to its optimal dimension in order to minimize this degradation, enhancing the overall thermoelectric performance (high ZT value) of our device. We observe that the side channel width plays an important role to achieve optimal performance while the effect of pore length is less pronounced. This result is consistent with the fact that electronic conduction in GNRs is dominated along its edges. Ballistic transport regime is assumed and a semi-empirical method using Huckel basis set is used to obtain the electrical properties, while the phononic system is characterized by Tersoff empirical potential model. The proposed device structure has potential applications as a nanoscale local cooler and as a thermoelectric power generator.

  20. Benchmarking test of empirical root water uptake models

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman

    2017-01-01

    Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot mimic properly the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better in predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
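
    For orientation, the piecewise-linear Feddes-type reduction function that these empirical RWU models share can be written down directly; the pressure-head thresholds below are placeholders (they are crop-specific in practice), and the snippet is not the physical-model implementation used in the study.

        import numpy as np

        def feddes_alpha(h, h1=-10.0, h2=-25.0, h3=-400.0, h4=-8000.0):
            # Dimensionless stress factor as a function of soil pressure head h (cm).
            h = np.asarray(h, dtype=float)
            alpha = np.zeros_like(h)
            wet = (h <= h1) & (h > h2)                  # too wet: rises linearly from 0 to 1
            opt = (h <= h2) & (h >= h3)                 # optimal range: no reduction
            dry = (h < h3) & (h > h4)                   # drying soil: falls linearly from 1 to 0
            alpha[wet] = (h1 - h[wet]) / (h1 - h2)
            alpha[opt] = 1.0
            alpha[dry] = (h[dry] - h4) / (h3 - h4)
            return alpha

        # Actual uptake per soil layer is the potential uptake scaled by alpha:
        # S(z) = alpha(h(z)) * S_potential(z)
        print(feddes_alpha([-5, -50, -1000, -9000]))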

  1. An Evaluation of Financial Institutions: Impact on Consumption and Investment Using Panel Data and the Theory of Risk-Bearing*

    PubMed Central

    Alem, Mauro; Townsend, Robert M.

    2013-01-01

    The theory of the optimal allocation of risk and the Townsend Thai panel data on financial transactions are used to assess the impact of the major formal and informal financial institutions of an emerging market economy. We link financial institution assessment to the actual impact on clients, rather than ratios and non-performing loans. We derive both consumption and investment equations from a common core theory with both risk and productive activities. The empirical specification follows closely from this theory and allows both OLS and IV estimation. We thus quantify the consumption and investment smoothing impact of financial institutions on households including those running farms and small businesses. A government development bank (BAAC) is shown to be particularly helpful in smoothing consumption and investment, in no small part through credit, consistent with its own operating system, which embeds an implicit insurance operation. Commercial banks are smoothing investment, largely through formal savings accounts. Other institutions seem ineffective by these metrics. PMID:25400319

  2. An optical to IR sky brightness model for the LSST

    NASA Astrophysics Data System (ADS)

    Yoachim, Peter; Coughlin, Michael; Angeli, George Z.; Claver, Charles F.; Connolly, Andrew J.; Cook, Kem; Daniel, Scott; Ivezić, Željko; Jones, R. Lynne; Petry, Catherine; Reuter, Michael; Stubbs, Christopher; Xin, Bo

    2016-07-01

    To optimize the observing strategy of a large survey such as the LSST, one needs an accurate model of the night sky emission spectrum across a range of atmospheric conditions and from the near-UV to the near-IR. We have used the ESO SkyCalc Sky Model Calculator [1, 2] to construct a library of template spectra for the Chilean night sky. The ESO model includes emission from the upper and lower atmosphere, scattered starlight, scattered moonlight, and zodiacal light. We have then extended the ESO templates with an empirical fit to the twilight sky emission as measured by a Canon all-sky camera installed at the LSST site. With the ESO templates and our twilight model we can quickly interpolate to any arbitrary sky position and date and return the full sky spectrum or surface brightness magnitudes in the LSST filter system. Comparing our model to all-sky observations, we find typical residual RMS values of +/-0.2-0.3 magnitudes per square arcsecond.

  3. Substituent and ring effects on enthalpies of formation: 2-methyl- and 2-ethylbenzimidazoles versus benzene- and imidazole-derivatives

    NASA Astrophysics Data System (ADS)

    Jiménez, Pilar; Roux, María Victoria; Dávalos, Juan Z.; Temprado, Manuel; Ribeiro da Silva, Manuel A. V.; Ribeiro da Silva, Maria Das Dores M. C.; Amaral, Luísa M. P. F.; Cabildo, Pilar; Claramunt, Rosa M.; Mó, Otilia; Yáñez, Manuel; Elguero, José

    The enthalpies of combustion, heat capacities, enthalpies of sublimation and enthalpies of formation of 2-methylbenzimidazole (2MeBIM) and 2-ethylbenzimidazole (2EtBIM) are reported and the results compared with those of benzimidazole itself (BIM). Theoretical estimates of the enthalpies of formation were obtained through the use of atom equivalent schemes. The necessary energies were obtained in single-point calculations at the B3LYP/6-311+G(d,p) level on B3LYP/6-31G* optimized geometries. The comparison of experimental and calculated values for benzenes, imidazoles and benzimidazoles bearing H (unsubstituted), methyl and ethyl groups shows remarkable homogeneity. The transferability of energetic group contributions does not strictly hold, but either by using it or by adding an empirical interaction term, it is possible to generate an enormous collection of reasonably accurate data for different substituted heterocycles (pyrazole derivatives, pyridine derivatives, etc.) from the large amount of values available for substituted benzenes and those of the parent (pyrazole, pyridine) heterocycles.

  4. Big data to smart data in Alzheimer's disease: The brain health modeling initiative to foster actionable knowledge.

    PubMed

    Geerts, Hugo; Dacks, Penny A; Devanarayan, Viswanath; Haas, Magali; Khachaturian, Zaven S; Gordon, Mark Forrest; Maudsley, Stuart; Romero, Klaus; Stephenson, Diane

    2016-09-01

    Massive investment and technological advances in the collection of extensive and longitudinal information on thousands of Alzheimer patients results in large amounts of data. These "big-data" databases can potentially advance CNS research and drug development. However, although necessary, they are not sufficient, and we posit that they must be matched with analytical methods that go beyond retrospective data-driven associations with various clinical phenotypes. Although these empirically derived associations can generate novel and useful hypotheses, they need to be organically integrated in a quantitative understanding of the pathology that can be actionable for drug discovery and development. We argue that mechanism-based modeling and simulation approaches, where existing domain knowledge is formally integrated using complexity science and quantitative systems pharmacology can be combined with data-driven analytics to generate predictive actionable knowledge for drug discovery programs, target validation, and optimization of clinical development. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Maintaining Situation Awareness with Autonomous Airborne Observation Platforms

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Fitzgerald, Will

    2005-01-01

    Unmanned Aerial Vehicles (UAVs) offer tremendous potential as intelligence, surveillance and reconnaissance (ISR) platforms for early detection of security threats and for acquisition and maintenance of situation awareness in crisis conditions. However, using their capabilities effectively requires addressing a range of practical and theoretical problems. The paper will describe progress by the "Autonomous Rotorcraft Project," a collaborative effort between NASA and the U.S. Army to develop a practical, flexible capability for UAV-based ISR. Important facets of the project include optimization methods for allocating scarce aircraft resources to observe numerous, distinct sites of interest; intelligent flight automation software that integrates high-level plan generation capabilities with executive control, failure response and flight control functions; a system architecture supporting reconfiguration of onboard sensors to address different kinds of threats; and an advanced prototype vehicle designed to allow large-scale production at low cost. The paper will also address human interaction issues, including an empirical method for determining how to allocate roles and responsibilities between flight automation and human operators.

  6. An Evaluation of Financial Institutions: Impact on Consumption and Investment Using Panel Data and the Theory of Risk-Bearing.

    PubMed

    Alem, Mauro; Townsend, Robert M

    2014-11-01

    The theory of the optimal allocation of risk and the Townsend Thai panel data on financial transactions are used to assess the impact of the major formal and informal financial institutions of an emerging market economy. We link financial institution assessment to the actual impact on clients, rather than ratios and non-performing loans. We derive both consumption and investment equations from a common core theory with both risk and productive activities. The empirical specification follows closely from this theory and allows both OLS and IV estimation. We thus quantify the consumption and investment smoothing impact of financial institutions on households including those running farms and small businesses. A government development bank (BAAC) is shown to be particularly helpful in smoothing consumption and investment, in no small part through credit, consistent with its own operating system, which embeds an implicit insurance operation. Commercial banks are smoothing investment, largely through formal savings accounts. Other institutions seem ineffective by these metrics.

  7. A compact D-band monolithic APDP-based sub-harmonic mixer

    NASA Astrophysics Data System (ADS)

    Zhang, Shengzhou; Sun, Lingling; Wang, Xiang; Wen, Jincai; Liu, Jun

    2017-11-01

    The paper presents a compact D-band monolithic sub-harmonic mixer (SHM) with 3 μm planar hyperabrupt Schottky-varactor diodes offered by a 70 nm GaAs mHEMT technology. Building on empirical equivalent-circuit models, a wide-band large-signal equivalent circuit model of the diode is proposed. Based on the extracted model, the mixer is implemented and optimized with a shunt-mounted anti-parallel diode pair (APDP) to fulfill the sub-harmonic mixing mechanism. Furthermore, a modified asymmetric three-transmission-line coupler is devised to achieve high-level coupling and minimize the chip size. The measured results show that the conversion gain varies between -13.9 dB and -17.5 dB from 110 GHz to 145 GHz, with a local oscillator (LO) power level of 14 dBm and an intermediate frequency (IF) of 1 GHz. The total chip size including probe GSG pads is 0.57 × 0.68 mm². In conclusion, the mixer exhibits outstanding figures of merit.

  8. Comparative analysis of used car price evaluation models

    NASA Astrophysics Data System (ADS)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles; however, little has been done to compare different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China to conduct a thorough empirical comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
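
    A minimal sketch of such a comparison with scikit-learn follows; the file name, feature columns and hyperparameters are hypothetical placeholders, not the paper's data or settings.

        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        df = pd.read_csv('used_cars.csv')                      # hypothetical dealing records
        X = pd.get_dummies(df[['make', 'series', 'age_years', 'mileage_km', 'region']])
        y = df['price']

        for name, model in [('linear regression', LinearRegression()),
                            ('random forest', RandomForestRegressor(n_estimators=200))]:
            r2 = cross_val_score(model, X, y, cv=5, scoring='r2').mean()
            print(f'{name}: mean cross-validated R^2 = {r2:.3f}')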

  9. The prevalence of terraced treescapes in analyses of phylogenetic data sets.

    PubMed

    Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J

    2018-04-04

    The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.

  10. Item Selection and Pre-equating with Empirical Item Characteristic Curves.

    ERIC Educational Resources Information Center

    Livingston, Samuel A.

    An empirical item characteristic curve shows the probability of a correct response as a function of the student's total test score. These curves can be estimated from large-scale pretest data. They enable test developers to select items that discriminate well in the score region where decisions are made. A similar set of curves can be used to…

  11. Complex dynamics and empirical evidence (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Delli Gatti, Domenico; Gaffeo, Edoardo; Giulioni, Gianfranco; Gallegati, Mauro; Kirman, Alan; Palestrini, Antonio; Russo, Alberto

    2005-05-01

    Standard macroeconomics, based on a reductionist approach centered on the representative agent, is badly equipped to explain the empirical evidence where heterogeneity and industrial dynamics are the rule. In this paper we show that a simple agent-based model of heterogeneous financially fragile agents is able to replicate a large number of scaling type stylized facts with a remarkable degree of statistical precision.

  12. The scale-dependent market trend: Empirical evidences using the lagged DFA method

    NASA Astrophysics Data System (ADS)

    Li, Daye; Kou, Zhun; Sun, Qiankun

    2015-09-01

    In this paper we carry out empirical research and test the efficiency of 44 important market indexes at multiple scales. A modified method based on the lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations from the non-zero lags and keep the margin of error small when measuring the local Hurst exponent. Our empirical result illustrates that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (with the local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with the economic cycles, it can be concluded that the economic cycles can cause anti-persistence at large time scales but there are also other factors at work. The empirical result supports the view that financial markets are multi-fractal and it indicates that deviations from efficiency and the type of model to describe the trend of market price are dependent on the forecasting horizon.
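
    For context, the unlagged core of detrended fluctuation analysis (DFA), from which the Hurst exponent is read off as the log-log slope of the fluctuation function, can be sketched as follows; the window sizes are arbitrary and the lag modification introduced in the paper is not reproduced.

        import numpy as np

        def dfa_hurst(returns, scales=(8, 16, 32, 64, 128)):
            profile = np.cumsum(returns - np.mean(returns))
            flucts = []
            for s in scales:
                n_seg = len(profile) // s
                rms = []
                for i in range(n_seg):
                    seg = profile[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)   # remove the local linear trend
                    rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
                flucts.append(np.mean(rms))
            # Hurst exponent = slope of log F(s) versus log s
            return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

        returns = np.random.normal(size=4000)       # uncorrelated noise: estimate should be near 0.5
        print(dfa_hurst(returns))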

  13. [Mes differ by positioning: empirical testing of decentralized dynamics of the self].

    PubMed

    Mizokami, Shinichi

    2013-10-01

    The present study empirically tested the conceptualization of the decentralized dynamics of the self proposed by Hermans & Kempen (1993), which they developed theoretically and from clinical cases, not from large samples of empirical data. They posited that worldviews and images of the self could vary by positioning even in the same individual, and denied that the ego was an omniscient entity that knew and controlled all aspects of the self (centralized ego). Study 1 tested their conceptualization empirically with 47 university students in an experimental group and 17 as a control group. The results showed that the scores on the Rosenberg's self-esteem scale and images of the Mes in the experimental group significantly varied by positioning, but those in the control group did not. Similar results were found in Study 2 with a sample of 120 university students. These results empirically supported the conceptualization of the decentralized dynamics of the self.

  14. Empirical relations between large wood transport and catchment characteristics

    NASA Astrophysics Data System (ADS)

    Steeb, Nicolas; Rickenmann, Dieter; Rickli, Christian; Badoux, Alexandre

    2017-04-01

    The transport of vast amounts of large wood (LW) in water courses can considerably aggravate hazardous situations during flood events, and often strongly affects the resulting flood damage. Large wood recruitment and transport are controlled by various factors that are difficult to assess, which makes the prediction of transported LW volumes difficult. Such information is, however, important for engineers and river managers to adequately dimension retention structures or to identify critical stream cross-sections. In this context, empirical formulas have been developed to estimate the volume of transported LW during a flood event (Rickenmann, 1997; Steeb et al., 2017). The data base of existing empirical wood load equations is, however, limited. The objective of the present study is to test and refine existing empirical equations, and to derive new relationships that reveal trends in wood loading. Data have been collected for flood events with LW occurrence in Swiss catchments of various sizes. This extended data set allows us to derive statistically more significant results. LW volumes were found to be related to catchment and transport characteristics, such as catchment size, forested area, forested stream length, water discharge, sediment load, or Melton ratio. Both the potential wood load and the fraction that is effectively mobilized during a flood event (effective wood load) are estimated. The difference between potential and effective wood load allows us to derive typical reduction coefficients that can be used to refine spatially explicit GIS models for potential LW recruitment.

  15. [Antibiotics in the critically ill].

    PubMed

    Kolak, Radmila R

    2010-01-01

    Antibiotics are one of the most common therapies administered in the intensive care unit setting. This review outlines the strategy for optimal use of antimicrobial agents in the critically ill. In severely ill patients, empirical antimicrobial therapy should be used when a suspected infection may impair the outcome. It is necessary to collect microbiological documentation before initiating empirical antimicrobial therapy. In addition to antimicrobial therapy, it is recommended to control the focus of infection and to modify factors that promote microbial growth or impair the host's antimicrobial defence. A judicious choice of antimicrobial therapy should be based on the host characteristics, the site of infection, the local ecology, and the pharmacokinetics/pharmacodynamics of antibiotics. This means treating empirically with broad-spectrum antimicrobials as soon as possible and narrowing the spectrum once the organism is identified (de-escalation), and limiting the duration of therapy to the minimum effective period. Despite theoretical advantages, a combined antibiotic therapy is not more effective than a monotherapy in curing infections in most clinical trials involving intensive care patients. Nevertheless, textbooks and guidelines recommend a combination for specific pathogens and for infections commonly caused by these pathogens. Avoiding unnecessary antibiotic use and optimizing the administration of antimicrobial agents will improve patient outcomes while minimizing risks for the development of bacterial resistance. It is important to note that each intensive care unit should have a program in place which monitors antibiotic utilisation and its effectiveness. Only in this way can the impact of interventions aimed at improving antibiotic use be evaluated at the local level.

  16. Scaling laws between population and facility densities.

    PubMed

    Um, Jaegon; Son, Seung-Woo; Lee, Sung-Ik; Jeong, Hawoong; Kim, Beom Jun

    2009-08-25

    When a new facility like a grocery store, a school, or a fire station is planned, its location should ideally be determined by the necessities of people who live nearby. Empirically, it has been found that there exists a positive correlation between facility and population densities. In the present work, we investigate the ideal relation between the population and the facility densities within the framework of an economic mechanism governing microdynamics. In previous studies based on the global optimization of facility positions in minimizing the overall travel distance between people and facilities, it was shown that the density of facility D and that of population ρ should follow a simple power law D ∼ ρ^(2/3). In our empirical analysis, on the other hand, the power-law exponent α in D ∼ ρ^α is not a fixed value but spreads in a broad range depending on facility types. To explain this discrepancy in α, we propose a model based on economic mechanisms that mimic the competitive balance between the profit of the facilities and the social opportunity cost for populations. Through our simple, microscopically driven model, we show that commercial facilities driven by the profit of the facilities have α = 1, whereas public facilities driven by the social opportunity cost have α = 2/3. We simulate this model to find the optimal positions of facilities on a real U.S. map and show that the results are consistent with the empirical data.
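
    The empirical exponent α can be estimated by ordinary least squares on log-transformed densities; the snippet below uses synthetic data (not the U.S. facility data of the paper) purely to illustrate the fit.

        import numpy as np

        rho = np.random.lognormal(mean=5.0, sigma=1.0, size=500)     # population densities (synthetic)
        alpha_true = 2.0 / 3.0
        noise = np.random.normal(0.0, 0.1, size=500)
        D = 0.01 * rho ** alpha_true * np.exp(noise)                 # facility densities (synthetic)

        alpha_hat, log_prefactor = np.polyfit(np.log(rho), np.log(D), 1)
        print(f'estimated alpha = {alpha_hat:.2f} (public-facility prediction is 2/3)')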

  17. Parameter Optimization of PAL-XFEL Injector

    NASA Astrophysics Data System (ADS)

    Lee, Jaehyun; Ko, In Soo; Han, Jang-Hui; Hong, Juho; Yang, Haeryong; Min, Chang Ki; Kang, Heung-Sik

    2018-05-01

    A photoinjector is used as the electron source to generate a high peak current and low emittance beam for an X-ray free electron laser (FEL). The beam emittance is one of the critical parameters to determine the FEL performance together with the slice energy spread and the peak current. The Pohang Accelerator Laboratory X-ray Free Electron Laser (PAL-XFEL) was constructed in 2015, and the beam commissioning was carried out in spring 2016. The injector is running routinely for PAL-XFEL user operation. The operational parameters of the injector have been optimized experimentally, and these are somewhat different from the originally designed ones. Therefore, we study numerically the injector parameters based on the empirically optimized parameters and review the present operating condition.

  18. Control strategy optimization of HVAC plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio

    In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve a higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performances as a function of their set-points and the environmental and occupied-space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable to real-time settings.
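
    The graph-based optimizer itself is not reproduced here, but the underlying idea of searching component set-points that meet the demand at minimum consumption can be sketched with a brute-force search over a discretized set-point grid. The component models and parameter values below are hypothetical placeholders, not the paper's semi-empirical models:

        # Exhaustive search over discretized set-points of two hypothetical components
        # (chiller supply temperature, pump flow fraction) for the feasible combination
        # with minimum electrical power. Illustrative only.
        import itertools

        def chiller_power(load_kw, setpoint_c):
            cop = 4.0 + 0.15 * (setpoint_c - 7.0)   # hypothetical efficiency curve
            return load_kw / max(cop, 1.0)

        def pump_power(flow_frac):
            return 5.0 * flow_frac ** 3             # affinity-law-like cubic relation

        demand_kw, best = 120.0, None
        for setpoint, flow in itertools.product([5.0, 6.0, 7.0, 8.0], [0.6, 0.8, 1.0]):
            if 150.0 * flow < demand_kw:            # deliverable cooling at this flow
                continue                            # infeasible: demand not met
            power = chiller_power(demand_kw, setpoint) + pump_power(flow)
            if best is None or power < best[0]:
                best = (power, setpoint, flow)
        print("best (kW, set-point, flow):", best)

    A graph formulation, as used in the paper, replaces this enumeration with a search over nodes and paths so that much larger systems remain tractable.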

  19. Stability of cosmetic emulsion containing different amount of hemp oil.

    PubMed

    Kowalska, M; Ziomek, M; Żbikowska, A

    2015-08-01

    The aim of the study was to determine the optimal conditions, that is, the content of hemp oil and the homogenization time, to obtain stable dispersion systems. For this purpose, six emulsions were prepared, their stability was examined empirically, and the most correctly formulated emulsion composition was determined using a computer simulation. Variable parameters (oil content and homogenization time) were indicated by the optimization software based on Kleeman's method. Physical properties of the synthesized emulsions were studied by numerous techniques involving particle size analysis, optical microscopy, the Turbiscan test and emulsion viscosity. The emulsion containing 50 g of oil and homogenized for 6 min had the highest stability. Empirically determined parameters proved to be consistent with the results obtained using the computer software. The computer simulation showed that the most stable emulsion should contain from 30 to 50 g of oil and should be homogenized for 2.5-6 min. The computer software based on Kleeman's method proved to be useful for quick optimization of the composition and production parameters of stable emulsion systems. Moreover, obtaining an emulsion system with proper stability justifies further research extended with sensory analysis, which will allow the application of such systems (containing hemp oil, beneficial for skin) in the cosmetic industry. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  20. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    PubMed

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit on three frequencies (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit to ambiguity resolution. One research area is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased as real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data, observed at eleven baselines of different lengths. An interpolation technique is also adopted to show how the AR performance changes as the baseline length increases. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.
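
    As an illustration of what a "combined signal" means here (not the paper's full selection procedure), the wavelength and noise amplification of an integer combination (i, j, k) of the L1, L2 and L5 carriers can be computed directly from the carrier frequencies:

        # Wavelength and range-domain noise amplification of a triple-frequency
        # carrier-phase combination (i, j, k), assuming equal, independent noise on
        # each carrier. Frequencies are the GPS L1/L2/L5 carrier frequencies.
        C = 299792458.0
        F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6

        def combination(i, j, k):
            f = i * F1 + j * F2 + k * F5             # combined frequency
            lam = C / f                              # combined wavelength (m)
            l1, l2, l5 = C / F1, C / F2, C / F5
            amp = ((lam / l1 * i) ** 2 + (lam / l2 * j) ** 2 + (lam / l5 * k) ** 2) ** 0.5
            return lam, amp

        for coeffs in [(0, 1, -1), (1, -1, 0), (1, 0, -1)]:   # wide-lane style examples
            lam, amp = combination(*coeffs)
            print(coeffs, f"wavelength = {lam:.3f} m, noise amplification = {amp:.1f}")

    Long-wavelength combinations ease ambiguity fixing but amplify noise, which is why a theoretical shortlist still needs empirical refinement with real data, as done in the paper.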

  1. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution

    PubMed Central

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M.; Bai, Ruibin

    2016-01-01

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit on three frequencies (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit to ambiguity resolution. One research area is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased as real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data, observed at eleven baselines of different lengths. An interpolation technique is also adopted to show how the AR performance changes as the baseline length increases. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition. PMID:27854324

  2. CRISM Multispectral and Hyperspectral Mapping Data - A Global Data Set for Hydrated Mineral Mapping

    NASA Astrophysics Data System (ADS)

    Seelos, F. P.; Hash, C. D.; Murchie, S. L.; Lim, H.

    2017-12-01

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) is a visible through short-wave infrared hyperspectral imaging spectrometer (VNIR S-detector: 364-1055 nm; IR L-detector: 1001-3936 nm; 6.55 nm sampling) that has been in operation on the Mars Reconnaissance Orbiter (MRO) since 2006. Over the course of the MRO mission, CRISM has acquired 290,000 individual mapping observation segments (mapping strips) with a variety of observing modes and data characteristics (VNIR/IR; 100/200 m/pxl; multi-/hyper-spectral band selection) over a wide range of observing conditions (atmospheric state, observation geometry, instrument state). CRISM mapping data coverage density varies primarily with latitude and secondarily due to seasonal and operational considerations. The aggregate global IR mapping data coverage currently stands at 85% (80% at the equator, with 40% repeat sampling), which is sufficient spatial sampling density to support the assembly of empirically optimized, radiometrically consistent mapping mosaic products. The CRISM project has defined a number of mapping mosaic data products (e.g. Multispectral Reduced Data Record (MRDR) map tiles) with varying degrees of observation-specific processing and correction applied prior to mosaic assembly. A commonality among the mosaic products is the presence of inter-observation radiometric discrepancies which are traceable to variable observation circumstances or associated atmospheric/photometric correction residuals. The empirical approach to radiometric reconciliation leverages inter-observation spatial overlaps and proximal relationships to construct a graph that encodes the mosaic structure and radiometric discrepancies. The graph theory abstraction allows the underlying structure of the mosaic to be evaluated and the corresponding optimization problem configured so that it is well posed. Linear and non-linear least squares optimization is then employed to derive a set of observation- and wavelength-specific model parameters for a series of transform functions that minimize the total radiometric discrepancy across the mosaic. This empirical approach to CRISM data radiometric reconciliation and the utility of the resulting mapping data mosaic products for hydrated mineral mapping will be presented.
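
    The reconciliation step can be pictured with a much-reduced sketch: treat each observation as a node, each spatial overlap as an edge carrying a measured radiometric discrepancy, and solve a least-squares problem for per-observation corrections. The example below uses a single additive offset per observation and synthetic discrepancies; the actual CRISM processing fits wavelength-specific transform functions of more general form:

        # Least-squares reconciliation of additive offsets on an overlap graph.
        # overlaps lists (i, j, d_ij), the measured mean difference obs_i - obs_j.
        import numpy as np

        n_obs = 4
        overlaps = [(0, 1, 0.02), (1, 2, -0.01), (2, 3, 0.03), (0, 3, 0.05), (1, 3, 0.02)]

        rows, rhs = [], []
        for i, j, d in overlaps:
            r = np.zeros(n_obs)
            r[i], r[j] = 1.0, -1.0        # want offset_i - offset_j ~= -d_ij
            rows.append(r)
            rhs.append(-d)
        rows.append(np.ones(n_obs))       # gauge constraint: offsets sum to zero
        rhs.append(0.0)

        offsets, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        print("per-observation offsets:", np.round(offsets, 4))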

  3. A computational approach to compare regression modelling strategies in prediction research.

    PubMed

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
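
    A stripped-down version of the idea of a priori strategy comparison can be sketched by scoring two candidate logistic-regression strategies with cross-validated log-likelihood before committing to one. The data set and the two shrinkage settings below are placeholders, not the authors' wrapper framework:

        # Compare two modelling strategies (mild vs. strong ridge shrinkage) by
        # cross-validated log-likelihood on synthetic data.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                                   random_state=1)
        strategies = {
            "mild shrinkage (C=10)": LogisticRegression(C=10.0, max_iter=1000),
            "strong shrinkage (C=0.1)": LogisticRegression(C=0.1, max_iter=1000),
        }
        for name, model in strategies.items():
            score = cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()
            print(f"{name}: mean CV log-likelihood per sample = {score:.4f}")

    As the abstract stresses, which strategy wins is data dependent, so such a comparison has to be rerun for each development data set.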

  4. The Stochastic Multi-strain Dengue Model: Analysis of the Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2011-09-01

    Dengue dynamics is well known to be particularly complex, with large fluctuations in disease incidence. An epidemic multi-strain model motivated by dengue fever epidemiology shows deterministic chaos in wide parameter regions. The addition of seasonal forcing, mimicking the vectorial dynamics, and of a low import of infected individuals, which is realistic for infectious disease epidemics, produces complex dynamics and qualitatively good agreement between empirical DHF monitoring data and the model simulations. The addition of noise can explain the fluctuations observed in the empirical data, and for large enough population size the stochastic system can be well described by the deterministic skeleton.
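
    The role of seasonal forcing and a small import term can be illustrated with a deliberately simplified, single-strain SIR model; the paper's model is multi-strain with richer structure, so the sketch below (with assumed parameter values) only shows the forcing mechanism, not the chaotic multi-strain dynamics:

        # Seasonally forced SIR model with a small import of infected individuals.
        # Parameters are illustrative; time is measured in years.
        import numpy as np
        from scipy.integrate import solve_ivp

        N, beta0, eta, gamma, mu, rho = 1e6, 2 * 52.0, 0.1, 52.0, 1.0 / 65.0, 1e-6

        def sir(t, y):
            S, I = y
            beta = beta0 * (1.0 + eta * np.cos(2.0 * np.pi * t))   # seasonal forcing
            inc = beta * S * (I + rho * N) / N                     # import term rho*N
            return [mu * (N - S) - inc, inc - (gamma + mu) * I]

        sol = solve_ivp(sir, (0, 50), [0.1 * N, 1e-3 * N], max_step=0.01)
        print("final infected fraction:", sol.y[1, -1] / N)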

  5. Characterizing the Response of Composite Panels to a Pyroshock Induced Environment Using Design of Experiments Methodology

    NASA Technical Reports Server (NTRS)

    Parsons, David S.; Ordway, David; Johnson, Kenneth

    2013-01-01

    This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.

  6. Characterizing the Response of Composite Panels to a Pyroshock Induced Environment using Design of Experiments Methodology

    NASA Technical Reports Server (NTRS)

    Parsons, David S.; Ordway, David O.; Johnson, Kenneth L.

    2013-01-01

    This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.

  7. Multi-Phase Equilibrium and Solubilities of Aromatic Compounds and Inorganic Compounds in Sub- and Supercritical Water: A Review.

    PubMed

    Liu, Qinli; Ding, Xin; Du, Bowen; Fang, Tao

    2017-11-02

    Supercritical water oxidation (SCWO), as a novel and efficient technology, has been applied to wastewater treatment processes. The use of phase equilibrium data to optimize process parameters can offer theoretical guidance for designing SCWO processes and reducing the equipment and operating costs. In this work, high-pressure phase equilibrium data for aromatic compound+water systems and inorganic compound+water systems are given. Moreover, thermodynamic models, equations of state (EOS), and empirical and semi-empirical approaches are summarized and evaluated. This paper also lists the existing problems of multi-phase equilibria and solubility studies on aromatic compounds and inorganic compounds in sub- and supercritical water.

  8. Prediction of light aircraft interior noise

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.; Morales, D. A.

    1976-01-01

    At the present time, predictions of aircraft interior noise depend heavily on empirical correction factors derived from previous flight measurements. However, to design for acceptable interior noise levels and to optimize acoustic treatments, analytical techniques which do not depend on empirical data are needed. This paper describes a computerized interior noise prediction method for light aircraft. An existing analytical program (developed for commercial jets by Cockburn and Jolly in 1968) forms the basis of some modal analysis work which is described. The accuracy of this modal analysis technique for predicting low-frequency coupled acoustic-structural natural frequencies is discussed along with trends indicating the effects of varying parameters such as fuselage length and diameter, structural stiffness, and interior acoustic absorption.

  9. Bacterial meningitis - principles of antimicrobial treatment.

    PubMed

    Jawień, Miroslaw; Garlicki, Aleksander M

    2013-01-01

    Bacterial meningitis is associated with significant morbidity and mortality despite the availability of effective antimicrobial therapy. The management approach to patients with suspected or proven bacterial meningitis includes emergent cerebrospinal fluid analysis and initiation of appropriate antimicrobial and adjunctive therapies. The choice of empirical antimicrobial therapy is based on the patient's age and underlying disease status; once the infecting pathogen is isolated, antimicrobial therapy can be modified for optimal treatment. Successful treatment of bacterial meningitis requires knowledge of epidemiology, including the prevalence of antimicrobial-resistant pathogens, the pathogenesis of meningitis, and the pharmacokinetics and pharmacodynamics of antimicrobial agents. The emergence of antibiotic-resistant bacterial strains in recent years has necessitated the development of new strategies for empirical antimicrobial therapy for bacterial meningitis.

  10. The Physics of Superconducting Microwave Resonators

    NASA Astrophysics Data System (ADS)

    Gao, Jiansong

    Over the past decade, low temperature detectors have brought astronomers revolutionary new observational capabilities and led to many great discoveries. Although a single low temperature detector has very impressive sensitivity, a large detector array would be much more powerful and is highly desirable for the study of more difficult and fundamental problems in astronomy. However, current detector technologies, such as transition edge sensors and superconducting tunnel junction detectors, are difficult to integrate into a large array. The microwave kinetic inductance detector (MKID) is a promising new detector technology invented at Caltech and JPL which provides both high sensitivity and an easy solution to the detector integration. It senses the change in the surface impedance of a superconductor as incoming photons break Cooper pairs, by using high-Q superconducting microwave resonators capacitively coupled to a common feedline. This architecture allows thousands of detectors to be easily integrated through passive frequency domain multiplexing. In this thesis, we explore the rich and interesting physics behind these superconducting microwave resonators. The first part of the thesis discusses the surface impedance of a superconductor, the kinetic inductance of a superconducting coplanar waveguide, and the circuit response of a resonator. These topics are related to the responsivity of MKIDs. The second part presents the study of the excess frequency noise that is universally observed in these resonators. The properties of the excess noise, including power, temperature, material, and geometry dependence, have been quantified. The noise source has been identified to be the two-level systems in the dielectric material on the surface of the resonator. A semi-empirical noise model has been developed to explain the power and geometry dependence of the noise, which is useful for predicting the noise of a specified resonator geometry. The detailed physical noise mechanism, however, is still not clear. With the theoretical results on the responsivity and the semi-empirical noise model established in this thesis, a prediction of the detector sensitivity (noise equivalent power) and an optimization of the detector design are now possible.

  11. Social support, acculturation, and optimism: understanding positive health practices in Asian American college students.

    PubMed

    Ayres, Cynthia G; Mahat, Ganga

    2012-07-01

    This study developed and tested a theory to better understand positive health practices (PHP) among Asian Americans aged 18 to 21 years. It tested theoretical relationships postulated between PHP and (a) social support (SS), (b) optimism, and (c) acculturation, and between SS and optimism and acculturation. Optimism and acculturation were also tested as possible mediators in the relationship between SS and PHP. A correlational study design was used. A convenience sample of 163 Asian college students in an urban setting completed four questionnaires assessing SS, PHP, optimism, and acculturation and one demographic questionnaire. There were statistically significant positive relationships between SS and optimism with PHP, between acculturation and PHP, and between optimism and SS. Optimism mediated the relationship between SS and PHP, whereas acculturation did not. Findings extend knowledge regarding these relationships to a defined population of Asian Americans aged 18 to 21 years. Findings contribute to a more comprehensive knowledge base regarding health practices among Asian Americans. The theoretical and empirical findings of this study provide the direction for future research as well. Further studies need to be conducted to identify and test other mediators in order to better understand the relationship between these two variables.

  12. Signatures of active and passive optimized Lévy searching in jellyfish

    PubMed Central

    Reynolds, Andy M.

    2014-01-01

    Some of the strongest empirical support for Lévy search theory has come from telemetry data for the dive patterns of marine predators (sharks, bony fishes, sea turtles and penguins). The dive patterns of the unusually large jellyfish Rhizostoma octopus do, however, sit outside of current Lévy search theory which predicts that a single search strategy is optimal. When searching the water column, the movement patterns of these jellyfish change over time. Movement bouts can be approximated by a variety of Lévy and Brownian (exponential) walks. The adaptive value of this variation is not known. On some occasions movement pattern data are consistent with the jellyfish prospecting away from a preferred depth, not finding an improvement in conditions elsewhere and so returning to their original depth. This ‘bounce’ behaviour also sits outside of current Lévy walk search theory. Here, it is shown that the jellyfish movement patterns are consistent with their using optimized ‘fast simulated annealing’—a novel kind of Lévy walk search pattern—to locate the maximum prey concentration in the water column and/or to locate the strongest of many olfactory trails emanating from more distant prey. Fast simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a large search space. This new finding shows that the notion of active optimized Lévy walk searching is not limited to the search for randomly and sparsely distributed resources, as previously thought, but can be extended to embrace other scenarios, including that of the jellyfish R. octopus. In the presence of convective currents, it could become energetically favourable to search the water column by riding the convective currents. Here, it is shown that these passive movements can be represented accurately by Lévy walks of the type occasionally seen in R. octopus. This result vividly illustrates that Lévy walks are not necessarily the result of selection pressures for advantageous searching behaviour but can instead arise freely and naturally from simple processes. It also shows that the family of Lévy walkers is vastly larger than previously thought and includes spores, pollens, seeds and minute wingless arthropods that on warm days disperse passively within the atmospheric boundary layer. PMID:25100323
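
    The elementary building block behind such analyses is the heavy-tailed distribution of move-step lengths. A generic sketch (not the jellyfish-specific fast-simulated-annealing scheme) draws power-law distributed steps by inverse-transform sampling and contrasts them with exponential steps typical of Brownian-like searching:

        # Draw Levy (power-law) step lengths, p(l) ~ l^(-mu) for l >= l_min,
        # and compare with exponentially distributed steps of the same mean.
        import numpy as np

        rng = np.random.default_rng(42)

        def levy_steps(n, mu=2.0, l_min=1.0):
            u = rng.random(n)
            return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))   # requires mu > 1

        levy = levy_steps(100000, mu=2.0)
        brownian = rng.exponential(scale=levy.mean(), size=100000)
        print("Levy steps:        mean %.1f, max %.1f" % (levy.mean(), levy.max()))
        print("Exponential steps: mean %.1f, max %.1f" % (brownian.mean(), brownian.max()))

    The occasional very long Lévy steps are what make such walks efficient for locating sparse targets; in a simulated-annealing reading they correspond to occasional large jumps out of poor local maxima.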

  13. Signatures of active and passive optimized Lévy searching in jellyfish.

    PubMed

    Reynolds, Andy M

    2014-10-06

    Some of the strongest empirical support for Lévy search theory has come from telemetry data for the dive patterns of marine predators (sharks, bony fishes, sea turtles and penguins). The dive patterns of the unusually large jellyfish Rhizostoma octopus do, however, sit outside of current Lévy search theory which predicts that a single search strategy is optimal. When searching the water column, the movement patterns of these jellyfish change over time. Movement bouts can be approximated by a variety of Lévy and Brownian (exponential) walks. The adaptive value of this variation is not known. On some occasions movement pattern data are consistent with the jellyfish prospecting away from a preferred depth, not finding an improvement in conditions elsewhere and so returning to their original depth. This 'bounce' behaviour also sits outside of current Lévy walk search theory. Here, it is shown that the jellyfish movement patterns are consistent with their using optimized 'fast simulated annealing'--a novel kind of Lévy walk search pattern--to locate the maximum prey concentration in the water column and/or to locate the strongest of many olfactory trails emanating from more distant prey. Fast simulated annealing is a powerful stochastic search algorithm for locating a global maximum that is hidden among many poorer local maxima in a large search space. This new finding shows that the notion of active optimized Lévy walk searching is not limited to the search for randomly and sparsely distributed resources, as previously thought, but can be extended to embrace other scenarios, including that of the jellyfish R. octopus. In the presence of convective currents, it could become energetically favourable to search the water column by riding the convective currents. Here, it is shown that these passive movements can be represented accurately by Lévy walks of the type occasionally seen in R. octopus. This result vividly illustrates that Lévy walks are not necessarily the result of selection pressures for advantageous searching behaviour but can instead arise freely and naturally from simple processes. It also shows that the family of Lévy walkers is vastly larger than previously thought and includes spores, pollens, seeds and minute wingless arthropods that on warm days disperse passively within the atmospheric boundary layer. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.

    PubMed

    Bratholm, Lars A; Jensen, Jan H

    2017-03-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force-field geometry-optimized X-ray structures as starting points: simulated annealing of the starting structure and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å difference for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches such as QM/MM or linear-scaling approaches, or the interpretation of protein structural dynamics from QM-derived chemical shifts.
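
    The probabilistic link between predicted shieldings and experimental shifts can be sketched as a Gaussian error model inside a Metropolis sampler. In the sketch below, predict_shifts() and the structural parameter vector are hypothetical stand-ins (random-walk moves on an abstract parameter vector), not ProCS15 or a real force field:

        # Metropolis MC driven by agreement between hypothetical predicted shifts and
        # "experimental" shifts through a Gaussian error model of width sigma.
        import numpy as np

        rng = np.random.default_rng(0)
        exp_shifts = rng.normal(120.0, 5.0, size=50)   # placeholder experimental shifts (ppm)
        sigma = 1.0                                    # assumed shift error width (ppm)

        def predict_shifts(x):
            return 120.0 + x                           # hypothetical predictor

        def log_likelihood(x):
            return -0.5 * np.sum((predict_shifts(x) - exp_shifts) ** 2) / sigma ** 2

        x = np.zeros(50)                               # stand-in structural degrees of freedom
        logL = log_likelihood(x)
        for _ in range(20000):
            trial = x + rng.normal(0.0, 0.2, size=x.shape)
            logL_trial = log_likelihood(trial)
            if np.log(rng.random()) < logL_trial - logL:   # Metropolis acceptance
                x, logL = trial, logL_trial

        rmsd = np.sqrt(np.mean((predict_shifts(x) - exp_shifts) ** 2))
        print(f"final shift RMSD: {rmsd:.2f} ppm")

    Simulated annealing, as used in the paper, additionally scales the acceptance criterion by a temperature that is gradually lowered.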

  15. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying a filtering method to the conditional correlation matrix (such as Random Matrix Theory based filtering). As empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
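
    Given a (conditional) covariance estimate C, the minimum-variance weights used in this kind of study have the standard closed form w = C^{-1} 1 / (1' C^{-1} 1). A small sketch with a random positive-definite matrix standing in for the CCC-GARCH conditional covariance:

        # Minimum-variance portfolio weights from a covariance estimate.
        import numpy as np

        rng = np.random.default_rng(3)
        A = rng.normal(size=(10, 10))
        C = A @ A.T + 10 * np.eye(10)     # placeholder covariance (positive definite)

        ones = np.ones(10)
        w = np.linalg.solve(C, ones)
        w /= ones @ w                     # normalize so the weights sum to one
        print("weights:", np.round(w, 3), "portfolio variance:", float(w @ C @ w))

    The paper's point is that plugging the conditional covariance (optionally filtered) into this formula yields noticeably better portfolios than using the unconditional estimate.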

  16. Insurance principles and the design of prospective payment systems.

    PubMed

    Ellis, R P; McGuire, T G

    1988-09-01

    This paper applies insurance principles to the issues of optimal outlier payments and designation of peer groups in Medicare's case-based prospective payment system for hospital care. Arrow's principle that full insurance after a deductible is optimal implies that, to minimize hospital risk, outlier payments should be based on hospital average loss per case rather than, as at present, based on individual case-level losses. The principle of experience rating implies defining more homogenous peer groups for the purpose of figuring average cost. The empirical significance of these results is examined using a sample of 470,568 discharges from 469 hospitals.

  17. Robust Portfolio Optimization Using Pseudodistances.

    PubMed

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.

  18. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  19. The optimization of total laboratory automation by simulation of a pull-strategy.

    PubMed

    Yang, Taho; Wang, Teng-Kuan; Li, Vincent C; Su, Chia-Lo

    2015-01-01

    Laboratory results are essential for physicians to diagnose medical conditions. Because of the critical role of medical laboratories, an increasing number of hospitals use total laboratory automation (TLA) to improve laboratory performance. Although the benefits of TLA are well documented, systems occasionally become congested, particularly when hospitals face peak demand. This study optimizes TLA operations. Firstly, value stream mapping (VSM) is used to identify the non-value-added time. Subsequently, batch processing control and parallel scheduling rules are devised, and a pull mechanism that comprises a constant work-in-process (CONWIP) is proposed. Simulation optimization is then used to optimize the design parameters and to ensure a small inventory and a shorter average cycle time (CT). For empirical illustration, this approach is applied to a real case. The proposed methodology significantly improves the efficiency of laboratory work and leads to a reduction in patient waiting times and an increased service level.

  20. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    NASA Astrophysics Data System (ADS)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with a considerably smaller number of observations than the existing tsunami observation networks.
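
    The EOF-based first stage can be sketched as follows: stack a waveform feature (e.g., peak amplitude) from many source scenarios into a scenarios-by-locations matrix, compute the EOF spatial modes by singular value decomposition, and take the extrema of the leading modes as candidate gauge locations. The data below are synthetic, and the mesh adaptive direct search refinement is omitted:

        # EOF spatial modes via SVD and extrema-based candidate selection.
        import numpy as np

        rng = np.random.default_rng(7)
        n_scenarios, n_locations = 200, 500
        X = rng.normal(size=(n_scenarios, n_locations))    # placeholder scenario amplitudes

        X = X - X.mean(axis=0)                             # remove the scenario mean
        U, S, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt are spatial modes

        candidates = set()
        for mode in Vt[:4]:                                # leading four spatial modes
            candidates.add(int(np.argmax(mode)))           # positive extremum
            candidates.add(int(np.argmin(mode)))           # negative extremum
        print("candidate observation locations (indices):", sorted(candidates))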

  1. More on the Best Evolutionary Rate for Phylogenetic Analysis

    PubMed Central

    Massingham, Tim; Goldman, Nick

    2017-01-01

    The accumulation of genome-scale molecular data sets for nonmodel taxa brings us ever closer to resolving the tree of life of all living organisms. However, despite the depth of data available, a number of studies that each used thousands of genes have reported conflicting results. The focus of phylogenomic projects must thus shift to more careful experimental design. Even though we still have a limited understanding of what are the best predictors of the phylogenetic informativeness of a gene, there is wide agreement that one key factor is its evolutionary rate, but there is no consensus as to whether the rates derived as optimal in various analytical, empirical, and simulation approaches have any general applicability. We here use simulations to infer optimal rates in a set of realistic phylogenetic scenarios with varying tree sizes, numbers of terminals, and tree shapes. Furthermore, we study the relationship between the optimal rate and rate variation among sites and among lineages. Finally, we examine how well the predictions made by a range of experimental design methods correlate with the observed performance in our simulations. We find that the optimal level of divergence is surprisingly robust to differences in taxon sampling and even to among-site and among-lineage rate variation as often encountered in empirical data sets. This finding encourages the use of methods that rely on a single optimal rate to predict a gene’s utility. Focusing on correct recovery either of the most basal node in the phylogeny or of the entire topology, the optimal rate is about 0.45 substitutions from root to tip in average Yule trees and about 0.2 in difficult trees with short basal and long apical branches, but all rates leading to divergence levels between about 0.1 and 0.5 perform reasonably well. Testing the performance of six methods that can be used to predict a gene’s utility against our simulation results, we find that the probability of resolution, signal-noise analysis, and Fisher information are good predictors of phylogenetic informativeness, but they require specification of at least part of a model tree. Likelihood quartet mapping also shows very good performance but only requires sequence alignments and is thus applicable without making assumptions about the phylogeny. Despite being the most commonly used methods for experimental design, geometric quartet mapping and the integration of phylogenetic informativeness curves perform rather poorly in our comparison. Instead of derived predictors of phylogenetic informativeness, we suggest that the number of sites in a gene that evolve at near-optimal rates (as inferred here) could be used directly to prioritize genes for phylogenetic inference. In combination with measures of model fit, especially with respect to compositional biases and among-site and among-lineage rate variation, such an approach has the potential to greatly improve marker choice and should be tested on empirical data. PMID:28595363

  2. IMPACT OF VENTILATION FREQUENCY AND PARENCHYMAL STIFFNESS ON FLOW AND PRESSURE DISTRIBUTION IN A CANINE LUNG MODEL

    PubMed Central

    Amini, Reza; Kaczka, David W.

    2013-01-01

    To determine the impact of ventilation frequency, lung volume, and parenchymal stiffness on ventilation distribution, we developed an anatomically based computational model of the canine lung. Each lobe of the model consists of an asymmetric branching airway network subtended by terminal, viscoelastic acinar units. The model allows for empirical dependencies of airway segment dimensions and parenchymal stiffness on transpulmonary pressure. We simulated the effects of lung volume and parenchymal recoil on global lung impedance and ventilation distribution from 0.1 to 100 Hz, with mean transpulmonary pressures from 5 to 25 cmH2O. With increasing lung volume, the distribution of acinar flows narrowed and became more synchronous for frequencies below resonance. At higher frequencies, large variations in acinar flow were observed. Maximum acinar flow occurred at the first antiresonance frequency, where lung impedance achieved a local maximum. The distribution of acinar pressures became very heterogeneous and amplified relative to tracheal pressure at the resonant frequency. These data demonstrate the important interaction between frequency and lung tissue stiffness in determining the distribution of acinar flows and pressures. These simulations provide useful information for the optimization of frequency, lung volume, and mean airway pressure during conventional ventilation or high frequency oscillation (HFOV). Moreover, our model indicates that an optimal HFOV bandwidth exists between the resonant and antiresonant frequencies, for which interregional gas mixing is maximized. PMID:23872936

  3. Hemodynamic and oxygen transport patterns for outcome prediction, therapeutic goals, and clinical algorithms to improve outcome. Feasibility of artificial intelligence to customize algorithms.

    PubMed

    Shoemaker, W C; Patil, R; Appel, P L; Kram, H B

    1992-11-01

    A generalized decision tree or clinical algorithm for treatment of high-risk elective surgical patients was developed from a physiologic model based on empirical data. First, a large data bank was used to do the following: (1) describe temporal hemodynamic and oxygen transport patterns that interrelate cardiac, pulmonary, and tissue perfusion functions in survivors and nonsurvivors; (2) define optimal therapeutic goals based on the supranormal oxygen transport values of high-risk postoperative survivors; (3) compare the relative effectiveness of alternative therapies in a wide variety of clinical and physiologic conditions; and (4) to develop criteria for titration of therapy to the endpoints of the supranormal optimal goals using cardiac index (CI), oxygen delivery (DO2), and oxygen consumption (VO2) as proxy outcome measures. Second, a general purpose algorithm was generated from these data and tested in preoperatively randomized clinical trials of high-risk surgical patients. Improved outcome was demonstrated with this generalized algorithm. The concept that the supranormal values represent compensations that have survival value has been corroborated by several other groups. We now propose a unique approach to refine the generalized algorithm to develop customized algorithms and individualized decision analysis for each patient's unique problems. The present article describes a preliminary evaluation of the feasibility of artificial intelligence techniques to accomplish individualized algorithms that may further improve patient care and outcome.

  4. On the acoustic wedge design and simulation of anechoic chamber

    NASA Astrophysics Data System (ADS)

    Jiang, Changyong; Zhang, Shangyu; Huang, Lixi

    2016-10-01

    This study proposes an alternative to the classic wedge design for anechoic chambers, which is the uniform-then-gradient, flat-wall (UGFW) structure. The working mechanisms of the proposed structure and the traditional wedge are analyzed. It is found that their absorption patterns are different. The parameters of both structures are optimized for achieving minimum absorber depth, under the condition of absorbing 99% of normal incident sound energy. It is found that, the UGFW structure achieves a smaller total depth for the cut-off frequencies ranging from 100 Hz to 250 Hz. This paper also proposes a modification for the complex source image (CSI) model for the empirical simulation of anechoic chambers, originally proposed by Bonfiglio et al. [J. Acoust. Soc. Am. 134 (1), 285-291 (2013)]. The modified CSI model considers the non-locally reactive effect of absorbers at oblique incidence, and the improvement is verified by a full, finite-element simulation of a small chamber. With the modified CSI model, the performance of both decorations with the optimized parameters in a large chamber is simulated. The simulation results are analyzed and checked against the tolerance of 1.5 dB deviation from the inverse square law, stipulated in the ISO standard 3745(2003). In terms of the total decoration depth and anechoic chamber performance, the UGFW structure is better than the classic wedge design.

  5. Body Bias usage in UTBB FDSOI designs: A parametric exploration approach

    NASA Astrophysics Data System (ADS)

    Puschini, Diego; Rodas, Jorge; Beigne, Edith; Altieri, Mauricio; Lesecq, Suzanne

    2016-03-01

    Some years ago, UTBB FDSOI appeared on the horizon for low-power circuit designers. With the 14 nm and 10 nm nodes in the roadmap, the industrialized 28 nm platform promises highly efficient designs with Ultra-Wide Voltage Range (UWVR) operation thanks to extended Body Bias properties. From the power management perspective, this new opportunity is considered a new degree of freedom in addition to classic Dynamic Voltage Scaling (DVS), increasing the complexity of the power optimization problem at design time. However, so far no formal or empirical tool allows early evaluation of the real need for a Dynamic Body Bias (DBB) mechanism in future designs. This paper presents a parametric exploration approach that analyzes the benefits of using Body Bias in 28 nm UTBB FDSOI circuits. The exploration is based on electrical simulations of a ring-oscillator structure. These experiments show that a Body Bias strategy is not always required, but they underline the large power reduction that can be achieved when it is. Results are summarized to help designers choose the best dynamic power management strategy for a given set of operating conditions in terms of temperature, circuit activity and process choice. This exploration contributes to the identification of conditions that make DBB more efficient than DVS, and vice versa, and of cases where both methods are required to optimize power consumption.

  6. The Analysis of RDF Semantic Data Storage Optimization in Large Data Era

    NASA Astrophysics Data System (ADS)

    He, Dandan; Wang, Lijuan; Wang, Can

    2018-03-01

    With the continuous development of information technology and network technology in China, the Internet has also ushered in the era of large data. To acquire information effectively in the era of large data, it is necessary to optimize existing RDF semantic data storage and enable efficient querying of various data. This paper discusses the storage optimization of RDF semantic data under large data.

  7. Virtuous States and Virtuous Traits: How the Empirical Evidence Regarding the Existence of Broad Traits Saves Virtue Ethics from the Situationist Critique

    ERIC Educational Resources Information Center

    Jayawickreme, Eranda; Meindl, Peter; Helzer, Erik G.; Furr, R. Michael; Fleeson, William

    2014-01-01

    A major objection to the study of virtue asserts that the empirical psychological evidence implies traits have little meaningful impact on behavior, as slight changes in situational characteristics appear to lead to large changes in virtuous behavior. We argue in response that the critical evidence is not these effects of situations observed in…

  8. A Bayesian Analysis of Scale-Invariant Processes

    DTIC Science & Technology

    2012-01-01

    Earth Grid (EASE-Grid). The NED raster elevation data of one arc-second resolution (30 m) over the continental US are derived from multiple satellites. ... empirical and ME distributions, yet ensuring computational efficiency. Instead of computing empirical histograms from large amounts of data, only some ...

  9. On Burst Detection and Prediction in Retweeting Sequence

    DTIC Science & Technology

    2015-05-22

    We conduct a comprehensive empirical analysis of a large microblogging dataset collected from Sina Weibo and report our observations of burst ... whether and how accurately we can predict bursts using classifiers based on the extracted features. Our empirical study of the Sina Weibo data shows the ... feasibility of burst prediction using appropriately extracted features and classic classifiers.

  10. Implications of alternative field-sampling designs on Landsat-based mapping of stand age and carbon stocks in Oregon forests

    Treesearch

    Maureen V. Duane; Warren B. Cohen; John L. Campbell; Tara Hudiburg; David P. Turner; Dale Weyermann

    2010-01-01

    Empirical models relating forest attributes to remotely sensed metrics are widespread in the literature and underpin many of our efforts to map forest structure across complex landscapes. In this study we compared empirical models relating Landsat reflectance to forest age across Oregon using two alternate sets of ground data: one from a large (n ~ 1500) systematic...

  11. Digital Voting Systems and Communication in Classroom Lectures--An Empirical Study Based around Physics Teaching at Bachelor Level at Two Danish Universities

    ERIC Educational Resources Information Center

    Mathiasen, Helle

    2015-01-01

    Studies on the use of digital voting systems in large group teaching situations have often focused on the "non-anonymity" and control and testing functions that the technology provides. There has also been some interest in how students might use their votes tactically to gain "credits". By focusing on an empirical study of…

  12. Waiting time distribution in public health care: empirics and theory.

    PubMed

    Dimakou, Sofia; Dimakou, Ourania; Basso, Henrique S

    2015-12-01

    Excessive waiting times for elective surgery have been a long-standing concern in many national healthcare systems in the OECD. How do the hospital admission patterns that generate waiting lists affect different patients? What are the hospital characteristics that determine waiting times? By developing a model of healthcare provision and empirically analysing the entire waiting time distribution, we attempt to shed some light on those issues. We first build a theoretical model that describes the optimal waiting time distribution for capacity-constrained hospitals. Secondly, employing duration analysis, we obtain empirical representations of that distribution across hospitals in the UK from 1997-2005. We observe important differences in the 'scale' and the 'shape' of admission rates. Scale refers to how quickly patients are treated and shape represents trade-offs across duration-treatment profiles. By fitting the theoretical to the empirical distributions we estimate the main structural parameters of the model and are able to closely identify the main drivers of these empirical differences. We find that the level of resources allocated to elective surgery (budget and physical capacity), which determines how constrained the hospital is, explains differences in scale. Changes in the benefit and cost structures of healthcare provision, which relate, respectively, to the desire to prioritise patients by duration and the reduction in costs due to delayed treatment, determine the shape, affecting short- and long-duration patients differently. JEL Classification I11; I18; H51.
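
    A toy version of the duration-analysis step is a parametric fit of the waiting-time distribution, whose scale and shape parameters loosely correspond to the two dimensions discussed above. The synthetic, uncensored data below are only illustrative; real waiting-list data require censoring-aware survival methods:

        # Fit a Weibull distribution to synthetic elective-surgery waiting times.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        waits = rng.weibull(1.5, size=2000) * 90.0     # synthetic waiting times (days)

        shape, loc, scale = stats.weibull_min.fit(waits, floc=0)
        print(f"fitted Weibull shape = {shape:.2f}, scale = {scale:.1f} days")
        print("median waiting time (days):", round(float(np.median(waits)), 1))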

  13. Multiplex networks in metropolitan areas: generic features and local effects.

    PubMed

    Strano, Emanuele; Shai, Saray; Dobson, Simon; Barthelemy, Marc

    2015-10-06

    Most large cities are spanned by more than one transportation system. These different modes of transport have usually been studied separately: it is however important to understand the impact on urban systems of coupling different modes, and we report in this paper an empirical analysis of the coupling between the street network and the subway for the two large metropolitan areas of London and New York. We observe a similar behaviour for network quantities related to quickest paths, suggesting the existence of generic mechanisms operating beyond the local peculiarities of the specific cities studied. An analysis of the betweenness centrality distribution shows that the introduction of underground networks operates as a decentralizing force, creating congestion in places located at the end of underground lines. Also, we find that increasing the speed of subways is not always beneficial and may lead to unwanted uneven spatial distributions of accessibility. In fact, for London—but not for New York—there is an optimal subway speed in terms of global congestion. These results show that it is crucial to consider the full, multimodal, multilayer network aspects of transportation systems in order to understand the behaviour of cities and to avoid possible negative side-effects of urban planning decisions. © 2015 The Author(s).
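
    The multilayer coupling can be illustrated on a toy graph: a slow street grid, a fast subway line, and interchange edges joining the two layers, with betweenness centrality computed on travel-time-weighted shortest paths. The graph below is synthetic, not the London or New York data:

        # Couple a street grid with a faster subway line and compute time-weighted
        # betweenness centrality on the combined multilayer graph.
        import networkx as nx

        G = nx.Graph()
        street = nx.grid_2d_graph(6, 6)                       # street layer: 6x6 grid
        for u, v in street.edges():
            G.add_edge(("street", u), ("street", v), time=1.0)

        line = [(0, 0), (2, 2), (4, 4), (5, 5)]               # subway layer: fast diagonal
        for a, b in zip(line, line[1:]):
            G.add_edge(("subway", a), ("subway", b), time=0.5)
        for s in line:                                        # interchange edges
            G.add_edge(("street", s), ("subway", s), time=0.2)

        bc = nx.betweenness_centrality(G, weight="time")
        print("most central nodes:", sorted(bc, key=bc.get, reverse=True)[:3])

    In a toy setup like this, nodes near the line's terminals tend to acquire high centrality, loosely analogous to the end-of-line congestion effect reported above.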

  14. Multi-Fault Detection of Rolling Element Bearings under Harsh Working Condition Using IMF-Based Adaptive Envelope Order Analysis

    PubMed Central

    Zhao, Ming; Lin, Jing; Xu, Xiaoqiang; Li, Xuejun

    2014-01-01

    When operating under harsh conditions (e.g., time-varying speed and load, large shocks), the vibration signals of rolling element bearings exhibit a low signal-to-noise ratio and non-stationary statistical parameters, which cause difficulties for current diagnostic methods. As such, an IMF-based adaptive envelope order analysis (IMF-AEOA) is proposed for bearing fault detection under such conditions. This approach is established by combining ensemble empirical mode decomposition (EEMD), envelope order tracking and fault sensitive analysis. In this scheme, EEMD provides an effective way to adaptively decompose the raw vibration signal into IMFs with different frequency bands. Envelope order tracking is further employed to transform the envelope of each IMF to the angular domain to eliminate the spectral smearing induced by speed variation, which makes the bearing characteristic frequencies clearer and more discernible in the envelope order spectrum. Finally, a fault sensitive matrix is established to select the optimal IMF containing the richest diagnostic information for final decision making. The effectiveness of IMF-AEOA is validated by simulated signals and experimental data from locomotive bearings. The results show that IMF-AEOA can accurately identify both single and multiple bearing faults even under time-varying rotating speed and large extraneous shocks. PMID:25353982
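
    The envelope-analysis step at the core of this scheme can be sketched in isolation: take a band-limited component (here a synthetic impulsive signal standing in for an IMF), extract its envelope with the Hilbert transform, and look for the bearing characteristic frequency in the envelope spectrum. EEMD, the resampling to the angular domain (order tracking) and the fault sensitive matrix are not reproduced:

        # Envelope extraction via the Hilbert transform and envelope spectrum peak.
        import numpy as np
        from scipy.signal import hilbert

        fs = 20000.0
        t = np.arange(0, 1.0, 1.0 / fs)
        fault_freq, carrier = 87.0, 3000.0       # synthetic fault and resonance frequencies
        impulses = (1.0 + np.sign(np.sin(2 * np.pi * fault_freq * t))) / 2.0
        signal = impulses * np.sin(2 * np.pi * carrier * t) + 0.2 * np.random.randn(t.size)

        envelope = np.abs(hilbert(signal))       # analytic-signal envelope
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(envelope.size, 1.0 / fs)
        print("dominant envelope frequency (Hz):", freqs[np.argmax(spectrum)])

    Under constant speed the envelope spectrum already exposes the fault frequency; order tracking extends the same idea to time-varying speed by working in the angular domain.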

  15. The Role of Diverse Strategies in Sustainable Knowledge Production

    PubMed Central

    Wu, Lingfei; Baggio, Jacopo A.; Janssen, Marco A.

    2016-01-01

    Online communities are becoming increasingly important as platforms for large-scale human cooperation. These communities allow users seeking and sharing professional skills to solve problems collaboratively. To investigate how users cooperate to complete a large number of knowledge-producing tasks, we analyze Stack Exchange, one of the largest question and answer systems in the world. We construct attention networks to model the growth of 110 communities in the Stack Exchange system and quantify individual answering strategies using the linking dynamics on attention networks. We identify two answering strategies. Strategy A aims at performing maintenance by doing simple tasks, whereas strategy B aims at investing time in doing challenging tasks. Both strategies are important: empirical evidence shows that strategy A decreases the median waiting time for answers and strategy B increases the acceptance rate of answers. In investigating the strategic persistence of users, we find that users tend to stick to the same strategy over time within a community, but switch from one strategy to the other across communities. This finding reveals the different sets of knowledge and skills between users. A balance between the populations of users taking strategies A and B that approximates 2:1 is found to be optimal for the sustainable growth of communities. PMID:26934733

  16. The Role of Diverse Strategies in Sustainable Knowledge Production.

    PubMed

    Wu, Lingfei; Baggio, Jacopo A; Janssen, Marco A

    2016-01-01

    Online communities are becoming increasingly important as platforms for large-scale human cooperation. These communities allow users seeking and sharing professional skills to solve problems collaboratively. To investigate how users cooperate to complete a large number of knowledge-producing tasks, we analyze Stack Exchange, one of the largest question and answer systems in the world. We construct attention networks to model the growth of 110 communities in the Stack Exchange system and quantify individual answering strategies using the linking dynamics on attention networks. We identify two answering strategies. Strategy A aims at performing maintenance by doing simple tasks, whereas strategy B aims at investing time in doing challenging tasks. Both strategies are important: empirical evidence shows that strategy A decreases the median waiting time for answers and strategy B increases the acceptance rate of answers. In investigating the strategic persistence of users, we find that users tend to stick to the same strategy over time within a community, but switch from one strategy to the other across communities. This finding reveals the different sets of knowledge and skills among users. A balance between the populations of users adopting strategies A and B that approximates 2:1 is found to be optimal for the sustainable growth of communities.

  17. Multiplex networks in metropolitan areas: generic features and local effects

    PubMed Central

    Strano, Emanuele; Shai, Saray; Dobson, Simon; Barthelemy, Marc

    2015-01-01

    Most large cities are spanned by more than one transportation system. These different modes of transport have usually been studied separately: it is, however, important to understand the impact on urban systems of coupling different modes, and we report in this paper an empirical analysis of the coupling between the street network and the subway for the two large metropolitan areas of London and New York. We observe a similar behaviour for network quantities related to quickest paths, suggesting the existence of generic mechanisms operating beyond the local peculiarities of the specific cities studied. An analysis of the betweenness centrality distribution shows that the introduction of underground networks operates as a decentralizing force, creating congestion in places located at the end of underground lines. Also, we find that increasing the speed of subways is not always beneficial and may lead to unwanted uneven spatial distributions of accessibility. In fact, for London—but not for New York—there is an optimal subway speed in terms of global congestion. These results show that it is crucial to consider the full, multimodal, multilayer network aspects of transportation systems in order to understand the behaviour of cities and to avoid possible negative side-effects of urban planning decisions. PMID:26400198
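
    The decentralizing effect described above can be reproduced qualitatively on a toy multilayer graph in which street and subway edges carry travel times and betweenness is computed over quickest paths. The sketch below uses networkx; the node names, speeds and transfer times are invented for illustration and are not taken from the paper.

      import networkx as nx

      street_speed, subway_speed = 1.0, 4.0            # assumed relative speeds
      G = nx.Graph()
      street = [f"s{i}" for i in range(4)]             # a street corridor s0-s1-s2-s3
      for a, b in zip(street, street[1:]):
          G.add_edge(a, b, time=1.0 / street_speed)
      G.add_edge("m0", "m3", time=3.0 / subway_speed)  # subway link spanning the corridor
      G.add_edge("s0", "m0", time=0.1)                 # station transfer at one end
      G.add_edge("s3", "m3", time=0.1)                 # station transfer at the other end

      bc = nx.betweenness_centrality(G, weight="time")
      print(sorted(bc.items(), key=lambda kv: -kv[1])) # end-of-line stations rank highest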

  18. Environmental Gerontology at the Beginning of the New Millennium: Reflections on Its Historical, Empirical, and Theoretical Development

    ERIC Educational Resources Information Center

    Wahl, Hans-Werner; Weisman, Gerald D.

    2003-01-01

    Over the past four decades the environmental context of aging has come to play an important role in gerontological theory, research, and practice. Environmental gerontology (EG)--focused on the description, explanation, and modification or optimization of the relation between elderly persons and their sociospatial surroundings--has emerged as a…

  19. Beliefs and Willingness to Act about Global Warming: Where to Focus Science Pedagogy?

    ERIC Educational Resources Information Center

    Skamp, Keith; Boyes, Eddie; Stanisstreet, Martin

    2013-01-01

    Science educators have a key role in empowering students to take action to reduce global warming. This involves assisting students to understand its causes as well as taking pedagogical decisions that have optimal probabilities of leading to students being motivated to take actions based on empirically based science beliefs. To this end New South…

  20. Bayesian Just-So Stories in Psychology and Neuroscience

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.

    2012-01-01

    According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak.…

  1. Do Postsecondary Internships Address the Four Learning Modes of Experiential Learning Theory? An Exploration through Document Analysis

    ERIC Educational Resources Information Center

    Stirling, Ashley; Kerr, Gretchen; MacPherson, Ellen; Banwell, Jenessa; Bandealy, Ahad; Battaglia, Anthony

    2017-01-01

    The educational benefits of embedding hands-on experience in higher education curriculum are widely recognized (Beard & Wilson, 2013). However, to optimize the learning from these opportunities, they need to be grounded in empirical learning theory. The purpose of this study was to examine the characteristics of internships in Ontario colleges…

  2. Providing Feedback on Computer-Based Algebra Homework in Middle-School Classrooms

    ERIC Educational Resources Information Center

    Fyfe, Emily R.

    2016-01-01

    Homework is transforming at a rapid rate with continuous advances in educational technology. Computer-based homework, in particular, is gaining popularity across a range of schools, with little empirical evidence on how to optimize student learning. The current aim was to test the effects of different types of feedback on computer-based homework.…

  3. Modeling Explosive Cladding of Metallic Liners to Gun Tubes

    DTIC Science & Technology

    2010-01-01

    a Jones Wilkins Lee (JWL) equation of state was parameterized using nonlinear optimization (ref. 8) and scaling the empirical v2E for other volume ... expansions based on TNT. The JWL equation of state (equation (2) of the report) has the standard form P = A(1 - \omega/(R_1 V^*)) e^{-R_1 V^*} + B(1 - \omega/(R_2 V^*)) e^{-R_2 V^*} + \omega E / V^*, where P is pressure, V* is ...

  4. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  5. Optimal design of high damping force engine mount featuring MR valve structure with both annular and radial flow paths

    NASA Astrophysics Data System (ADS)

    Nguyen, Q. H.; Choi, S. B.; Lee, Y. S.; Han, M. S.

    2013-11-01

    This paper focuses on the optimal design of a compact and high damping force engine mount featuring magnetorheological fluid (MRF). In the mount, a MR valve structure with both annular and radial flows is employed to generate a high damping force. First, the configuration and working principle of the proposed MR mount is introduced. The MRF flows in the mount are then analyzed and the governing equations of the MR mount are derived based on the Bingham plastic behavior of the MRF. An optimal design of the MR mount is then performed to find the optimal structure of the MR valve to generate a maximum damping force with certain design constraints. In addition, the gap size of MRF ducts is empirically chosen considering the ‘lockup’ problem of the mount at high frequency. Performance of the optimized MR mount is then evaluated based on finite element analysis and discussions on performance results of the optimized MR mount are given. The effectiveness of the proposed MR engine mount is demonstrated via computer simulation by presenting damping force and power consumption.
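
    The Bingham plastic behaviour mentioned above is the standard constitutive assumption for MRF in such valves. A generic statement (not the paper's exact governing equations) is

      \tau = \tau_y(H)\,\mathrm{sgn}(\dot{\gamma}) + \eta\,\dot{\gamma} \quad \text{for } |\tau| > \tau_y(H), \qquad \dot{\gamma} = 0 \text{ otherwise},

    where tau is the shear stress, tau_y(H) the field-dependent yield stress, eta the plastic viscosity and gamma-dot the shear rate. The field-controllable part of the pressure drop in an annular or radial duct roughly scales with tau_y(H) times the active duct length divided by the gap size, which is why the gap is treated as a key (here empirically chosen) design variable.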

  6. Modelling and multi objective optimization of WEDM of commercially Monel super alloy using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu

    2016-09-01

    In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and non-traditional particle swarm optimization techniques (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters such as pulse-on time (TON), pulse-off time (TOFF), peak current (IP), and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict the MRR and SR over a wide range of input parameters. The optimization of multiple responses was carried out to satisfy the priorities of multiple users using the Taguchi-desirability function method and the particle swarm optimization technique. Analysis of variance (ANOVA) was also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was verified.
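
    The desirability step referred to above folds the competing responses into a single score that a search technique such as PSO can maximize. The sketch below is a generic Derringer-style composite desirability, not the paper's fitted models; the MRR and SR ranges and weights are assumptions.

      import numpy as np

      def desirability_larger(y, low, high, weight=1.0):
          """Desirability for a larger-the-better response (e.g. MRR)."""
          return np.clip((y - low) / (high - low), 0.0, 1.0) ** weight

      def desirability_smaller(y, low, high, weight=1.0):
          """Desirability for a smaller-the-better response (e.g. surface roughness)."""
          return np.clip((high - y) / (high - low), 0.0, 1.0) ** weight

      def composite(mrr, sr, w_mrr=1.0, w_sr=1.0):
          """Geometric mean of the individual desirabilities: the scalar objective."""
          d1 = desirability_larger(mrr, low=2.0, high=20.0)   # assumed MRR range, mm^3/min
          d2 = desirability_smaller(sr, low=1.0, high=5.0)    # assumed Ra range, micrometres
          return (d1 ** w_mrr * d2 ** w_sr) ** (1.0 / (w_mrr + w_sr))

      print(composite(mrr=15.0, sr=2.2))   # single score for one candidate parameter set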

  7. Brief report: Assessing dispositional optimism in adolescence--factor structure and concurrent validity of the Life Orientation Test--Revised.

    PubMed

    Monzani, Dario; Steca, Patrizia; Greco, Andrea

    2014-02-01

    Dispositional optimism is an individual difference promoting psychosocial adjustment and well-being during adolescence. Dispositional optimism was originally defined as a one-dimensional construct; however, empirical evidence suggests two correlated factors in the Life Orientation Test - Revised (LOT-R). The main aim of the study was to evaluate the dimensionality of the LOT-R. This study is the first attempt to identify the best factor structure, comparing congeneric, two correlated-factor, and two orthogonal-factor models in a sample of adolescents. Concurrent validity was also assessed. The results demonstrated the superior fit of the two orthogonal-factor model thus reconciling the one-dimensional definition of dispositional optimism with the bi-dimensionality of the LOT-R. Moreover, the results of correlational analyses proved the concurrent validity of this self-report measure: optimism is moderately related to indices of psychosocial adjustment and well-being. Thus, the LOT-R is a useful, valid, and reliable self-report measure to properly assess optimism in adolescence. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  8. Optical dosimetry probes to validate Monte Carlo and empirical-method-based NIR dose planning in the brain.

    PubMed

    Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M

    2016-12-01

    Three-dimensional photon dosimetry in tissues is critical in designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements were on average 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, and 8.91% perpendicular to the beam axis for the WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured % errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R² = 0.99), with average % errors of 10.1%, 45.2%, and 22.1% relative to probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s versus 8 h for the GPU-based Monte Carlo for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.

  9. The methodological quality of guidelines for hospital-acquired pneumonia and ventilator-associated pneumonia: A systematic review.

    PubMed

    Ambaras Khan, R; Aziz, Z

    2018-05-02

    Clinical practice guidelines serve as a framework for physicians to make decisions and to support best practice for optimizing patient care. However, if the guidelines do not address all the important components of optimal care sufficiently, the quality and validity of the guidelines can be reduced. The objectives of this study were to systematically review current guidelines for hospital-acquired pneumonia (HAP) and ventilator-associated pneumonia (VAP), evaluate their methodological quality and highlight the similarities and differences in their recommendations for empirical antibiotic and antibiotic de-escalation strategies. This review is reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statement. Electronic databases including MEDLINE, CINAHL, PubMed and EMBASE were searched up to September 2017 for relevant guidelines. Other databases such as NICE, Scottish Intercollegiate Guidelines Network (SIGN) and the websites of professional societies were also searched for relevant guidelines. The quality and reporting of included guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation II (AGREE-II) instrument. Six guidelines were eligible for inclusion in our review. Among the 6 domains of AGREE-II, "clarity of presentation" scored the highest (80.6%), whereas "applicability" scored the lowest (11.8%). All the guidelines supported the antibiotic de-escalation strategy, whereas the majority of the guidelines (5 of 6) recommended that empirical antibiotic therapy should be implemented in accordance with local microbiological data. All the guidelines suggested that for early-onset HAP/VAP, therapy should start with a narrow-spectrum empirical antibiotic such as penicillin or a cephalosporin, whereas for late-onset HAP/VAP, the guidelines recommended the use of a broader-spectrum empirical antibiotic such as an extended-spectrum penicillin, a carbapenem, or a glycopeptide. Expert guidelines promote the judicious use of antibiotics and prevent antibiotic overuse. The quality and validity of available HAP/VAP guidelines would be enhanced by improving their adherence to accepted best practice for the management of HAP and VAP. © 2018 John Wiley & Sons Ltd.

  10. Large trucks involved in fatal crashes : the North Carolina data 1993-1997

    DOT National Transportation Integrated Search

    1999-03-01

    An analysis of large, truck-involved crash outcomes in North Carolina for the period 1993-1997 was conducted by the UNC Highway Safety Research Center (HSRC) for the purpose of establishing an empirical basis for subsequent Governor's Highway Safety ...

  11. A comprehensive formulation for volumetric modulated arc therapy planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Dan; Lyu, Qihui; Ruan, Dan

    2016-07-15

    Purpose: Volumetric modulated arc therapy (VMAT) is a widely employed radiation therapy technique, showing comparable dosimetry to static beam intensity modulated radiation therapy (IMRT) with reduced monitor units and treatment time. However, the current VMAT optimization has various greedy heuristics employed for an empirical solution, which jeopardizes plan consistency and quality. The authors introduce a novel direct aperture optimization method for VMAT to overcome these limitations. Methods: The comprehensive VMAT (comVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term to penalize the difference between the optimized dose and the prescribed dose, as well as an anisotropic total variation term to promote piecewise continuity in the fluence maps, preparing it for direct aperture optimization. A level set function was used to describe the aperture shapes, and the difference between aperture shapes at adjacent angles was penalized to control MLC motion range. A proximal-class optimization solver was adopted to solve the large scale optimization problem, and an alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc comVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme case, a lung (LNG) case, and two head and neck cases—one with three PTVs (H&N_3PTV) and one with four PTVs (H&N_4PTV)—to test the efficacy. The plans were optimized using an alternating optimization strategy. The plans were compared against the clinical VMAT (clnVMAT) plans utilizing two overlapping coplanar arcs for treatment. Results: The optimization of the comVMAT plans converged within 600 iterations of the block minimization algorithm. comVMAT plans were able to consistently reduce the dose to all organs-at-risk (OARs) as compared to the clnVMAT plans. On average, comVMAT plans reduced the max and mean OAR dose by 6.59% and 7.45%, respectively, of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N_3PTV case. PTV coverages measured by D95, D98, and D99 were within 0.25% of the prescription dose. By comprehensively optimizing all beams, the comVMAT optimizer gained the freedom to allow some selected beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel nongreedy VMAT approach simultaneously optimizes all beams in an arc and then directly generates deliverable apertures. The single arc VMAT approach thus fully utilizes the digital Linac's capability in dose rate and gantry rotation speed modulation. In practice, the new single-arc VMAT algorithm generates plans superior to existing VMAT algorithms utilizing two arcs.
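
    Read as a generic optimization problem, the formulation described above can be summarized as follows; the notation and weights are assumptions, not the authors' exact expressions:

      \min_{x \ge 0} \; \tfrac{1}{2}\,\lVert A x - d \rVert_2^2
        \;+\; \lambda \sum_{b=1}^{180} \lVert D x_b \rVert_1
        \;+\; \mu \sum_{b} \lVert \phi_{b+1} - \phi_b \rVert^2 ,

    where x is the stacked fluence of the 180 beams, A the dose-calculation matrix, d the prescribed dose, D a finite-difference operator whose L1 norm gives the anisotropic total variation of each fluence map x_b, and phi_b the level-set function describing the aperture at beam b; lambda and mu trade off fluence piecewise continuity against MLC travel between adjacent angles.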

  12. High-Lift Optimization Design Using Neural Networks on a Multi-Element Airfoil

    NASA Technical Reports Server (NTRS)

    Greenman, Roxana M.; Roth, Karlin R.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The high-lift performance of a multi-element airfoil was optimized by using neural-net predictions that were trained using a computational data set. The numerical data was generated using a two-dimensional, incompressible, Navier-Stokes algorithm with the Spalart-Allmaras turbulence model. Because it is difficult to predict maximum lift for high-lift systems, an empirically based maximum-lift criterion was used in this study to determine both the maximum lift and the angle at which it occurs. Multiple-input, single-output networks were trained using the NASA Ames variation of the Levenberg-Marquardt algorithm for each of the aerodynamic coefficients (lift, drag, and moment). The artificial neural networks were integrated with a gradient-based optimizer. Using independent numerical simulations and experimental data for this high-lift configuration, it was shown that this design process successfully optimized flap deflection, gap, overlap, and angle of attack to maximize lift. Once the neural networks were trained and integrated with the optimizer, minimal additional computer resources were required to perform optimization runs with different initial conditions and parameters. Applying the neural networks within the high-lift rigging optimization process reduced the amount of computational time and resources by 83% compared with traditional gradient-based optimization procedures for multiple optimization runs.
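
    The general pattern described above (train a cheap surrogate on precomputed CFD samples, then run a gradient-based optimizer on the surrogate) can be sketched as follows. The training data, network size and response function are hypothetical stand-ins; this is not the NASA Ames Levenberg-Marquardt implementation.

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.neural_network import MLPRegressor

      # Hypothetical samples: columns = (flap deflection, gap, overlap, angle of attack),
      # target = lift coefficient that would come from precomputed Navier-Stokes runs.
      rng = np.random.default_rng(0)
      X = rng.uniform([20.0, 0.01, -0.01, 0.0], [40.0, 0.04, 0.02, 12.0], size=(200, 4))
      cl = 1.5 + 0.02 * X[:, 0] - 30.0 * (X[:, 1] - 0.025) ** 2 + 0.08 * X[:, 3]  # toy response

      surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
      surrogate.fit(X, cl)

      # Gradient-based search on the cheap surrogate instead of the CFD solver.
      res = minimize(lambda p: -surrogate.predict(p.reshape(1, -1))[0],
                     x0=np.array([30.0, 0.02, 0.0, 6.0]),
                     bounds=[(20, 40), (0.01, 0.04), (-0.01, 0.02), (0, 12)])
      print(res.x)   # rigging that maximizes the predicted lift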

  13. How to resolve the SLOSS debate: lessons from species-diversity models.

    PubMed

    Tjørve, Even

    2010-05-21

    The SLOSS debate--whether a single large reserve will conserve more species than several small--of the 1970s and 1980s never came to a resolution. The first rule of reserve design states that one large reserve will conserve the most species, a rule which has been heavily contested. Empirical data seem to undermine the reliance on general rules, indicating that the best strategy varies from case to case. Modeling has also been deployed in this debate. We may divide the modeling approaches to the SLOSS enigma into dynamic and static approaches. Dynamic approaches, covered by the equilibrium theory of island biogeography and metapopulation theory, look at immigration, emigration, and extinction. Static approaches, such as the one in this paper, illustrate how several factors affect the number of reserves that will save the most species. This article approaches the effect of different factors by the application of species-diversity models. These models combine species-area curves for two or more reserves, correcting for the species overlap between them. Such models generate several predictions on how different factors affect the optimal number of reserves. The main predictions are: Fewer and larger reserves are favored by increased species overlap between reserves, by faster growth in the number of species with increasing reserve area, by higher minimum-area requirements, by spatial aggregation and by uneven species abundances. The effect of increased distance between smaller reserves depends on the two counteracting factors: decreased species density caused by isolation (which enhances minimum-area effect) and decreased overlap between isolates. The first decreases the optimal number of reserves; the second increases the optimal number. The effect of total reserve-system area depends both on the shape of the species-area curve and on whether overlap between reserves changes with scale. The approach to modeling presented here has several implications for conservational strategies. It illustrates well how the SLOSS enigma can be reduced to a question of the shape of the species-area curve that is expected or generated from reserves of different sizes and a question of overlap between isolates (or reserves). Copyright (c) 2010 Elsevier Ltd. All rights reserved.
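
    A tiny numerical illustration of the static argument is given below: power-law species-area curves S = c*A^z for reserve systems of equal total area, with a crude correction for the species shared between reserves. The constants c, z and the overlap fraction are arbitrary assumptions, not values from the paper.

      def richness(total_area, n, overlap=0.5, c=10.0, z=0.25):
          """Expected richness of n equal reserves summing to total_area, assuming
          each additional reserve contributes only its non-shared species."""
          s_each = c * (total_area / n) ** z             # species-area curve per reserve
          return s_each * (1 + (n - 1) * (1 - overlap))

      for n in (1, 2, 4):
          print(n, round(richness(100.0, n), 1))
      # Larger overlap or a steeper curve (larger z) shifts the optimum toward a
      # single large reserve; low overlap favours several small ones.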

  14. Understanding non-radiative recombination processes of the optoelectronic materials from first principles

    NASA Astrophysics Data System (ADS)

    Shu, Yinan

    The annual solar energy incident on the Earth is several times larger than the total energy consumption of the world. This huge energy source makes it appealing as an alternative to conventional fuels. Due to the problems arising from conventional fuels (for example, global warming and fossil fuel shortages), a tremendous amount of effort has been applied toward understanding and developing cost-effective optoelectrical devices in the past decades. These efforts have pushed the efficiency of optoelectrical devices such as solar cells from 0% to 46%, as reported up to 2015. All these facts indicate the significance of optoelectrical devices, not only for protecting our planet but also for a large potential market. Empirical experience from experiment has played a key role in the optimization of optoelectrical devices; however, a deeper understanding of the detailed electron-by-electron, atom-by-atom physical processes that occur when a material is excited is the key to gaining new insight into the field. It is also useful in developing the next generation of solar materials. Thanks to the advances in computer hardware, new algorithms, and methodologies developed in computational chemistry and physics in the past decades, we are now able to (1) model real-size materials, e.g. nanoparticles, to locate important geometries on the potential energy surfaces (PESs); (2) investigate the excited-state dynamics of cluster models that mimic the real systems; and (3) screen large numbers of possible candidates to be optimized toward certain properties, and so help in experimental design. In this thesis, I discuss the efforts of the past several years, especially in understanding the non-radiative decay process of silicon nanoparticles with oxygen defects using ab initio nonadiabatic molecular dynamics, as well as the accurate, efficient multireference electronic structure theories we have developed for this purpose. The new paradigm we have proposed for understanding nonradiative recombination mechanisms is also applied to other systems, like water-splitting catalysts. Besides gaining a deeper understanding of the mechanism, we applied an evolutionary algorithm to optimize promising candidates towards specific properties, for example, organic light-emitting diodes (OLEDs).

  15. Neural networks: What non-linearity to choose

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.; Quintana, Chris

    1991-01-01

    Neural networks are now one of the most successful learning formalisms. Neurons transform inputs (x_1, ..., x_n) into an output f(w_1 x_1 + ... + w_n x_n), where f is a non-linear function and the w_i are adjustable weights. What f to choose? Usually the logistic function is chosen, but sometimes the use of different functions improves the practical efficiency of the network. The problem of choosing f is formulated as a mathematical optimization problem and solved under different optimality criteria. As a result, a list of functions f that are optimal under these criteria is determined. This list includes both the functions that were empirically proved to be the best for some problems, and some new functions that may be worth trying.
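
    For concreteness, the neuron model above with three common choices of the non-linearity f can be written out directly; the weights and inputs below are arbitrary examples, not from the report.

      import numpy as np

      def logistic(z):
          return 1.0 / (1.0 + np.exp(-z))        # the usual default choice

      def tanh(z):
          return np.tanh(z)                      # a rescaled logistic alternative

      def relu(z):
          return np.maximum(0.0, z)              # a piecewise-linear alternative

      w, x = np.array([0.5, -1.2, 0.3]), np.array([1.0, 0.2, -0.7])
      z = w @ x                                  # the weighted sum w_1*x_1 + ... + w_n*x_n
      print(logistic(z), tanh(z), relu(z))       # the neuron's output under each f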

  16. The empirical status of the third-wave behaviour therapies for the treatment of eating disorders: A systematic review.

    PubMed

    Linardon, Jake; Fairburn, Christopher G; Fitzsimmons-Craft, Ellen E; Wilfley, Denise E; Brennan, Leah

    2017-12-01

    Although third-wave behaviour therapies are being increasingly used for the treatment of eating disorders, their efficacy is largely unknown. This systematic review and meta-analysis aimed to examine the empirical status of these therapies. Twenty-seven studies met full inclusion criteria. Only 13 randomized controlled trials (RCT) were identified, most on binge eating disorder (BED). Pooled within- (pre-post change) and between-groups effect sizes were calculated for the meta-analysis. Large pre-post symptom improvements were observed for all third-wave treatments, including dialectical behaviour therapy (DBT), schema therapy (ST), acceptance and commitment therapy (ACT), mindfulness-based interventions (MBI), and compassion-focused therapy (CFT). Third-wave therapies were not superior to active comparisons generally, or to cognitive-behaviour therapy (CBT) in RCTs. Based on our qualitative synthesis, none of the third-wave therapies meet established criteria for an empirically supported treatment for particular eating disorder subgroups. Until further RCTs demonstrate the efficacy of third-wave therapies for particular eating disorder subgroups, the available data suggest that CBT should retain its status as the recommended treatment approach for bulimia nervosa (BN) and BED, and the front running treatment for anorexia nervosa (AN) in adults, with interpersonal psychotherapy (IPT) considered a strong empirically-supported alternative. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Prior robust empirical Bayes inference for large-scale data by conditioning on rank with application to microarray data

    PubMed Central

    Liao, J. G.; Mcmurry, Timothy; Berg, Arthur

    2014-01-01

    Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology. PMID:23934072
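
    For orientation, the standard normal-normal empirical Bayes shrinkage that serves as the baseline (the "working prior" model whose misspecification the rank-conditioned method is designed to withstand) looks roughly like the sketch below; it is not the paper's rank-conditioned procedure, which is implemented in the rank.Shrinkage R package.

      import numpy as np

      def eb_shrink(x, sigma2):
          """Posterior means under a normal working prior with moment-matched variance."""
          tau2 = max(np.var(x) - sigma2, 0.0)        # estimated prior variance
          shrinkage = tau2 / (tau2 + sigma2)
          return x.mean() + shrinkage * (x - x.mean())

      rng = np.random.default_rng(1)
      theta = rng.normal(0.0, 1.0, size=5000)        # true gene effects
      x = theta + rng.normal(0.0, 1.0, size=5000)    # observed effects, sigma^2 = 1
      print(np.mean((eb_shrink(x, 1.0) - theta) ** 2), np.mean((x - theta) ** 2))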

  18. Development features in large-range nanoscale coordinate metrology

    NASA Astrophysics Data System (ADS)

    Gruhlke, Martin; Recknagel, Christian; Rothe, Hendrik

    2008-04-01

    The Nanometer-Coordinate-Measuring-Machine (NCMM) has the ability to scan large areas at nanometer resolution for the purpose of quality assurance of nanostructured products. The device combines a conventional atomic force microscope (AFM) with a precise positioning system. By locating the AFM at a fixed point and moving the sample with the positioning system, a scan range of 2.5 x 2.5 x 0.5 cm³ and a repeatability of 0.1 nm are achieved. Since all movements of the positioning system are measured via laser interferometers, the Abbe principle is kept in every dimension; together with the use of materials with a low thermal expansion coefficient (like Zerodur and FeNi36) and an overall coordinate system, the system provides unique measurement conditions (traceability to the meter definition; repeatable and fast scans of the region of interest). In the past, the NCMM was used to make the first large-area scan of a microelectronic sample. Our present work focuses on automating critical-dimension measurement through the use of a priori knowledge of the sample and optical navigation. A priori knowledge can be generated from CAD data of the sample or from scans with white-light interferometry. Another present objective is the optimization of the measurement parameters for specific sample topologies using simulation and also empirical methods like the Ziegler-Nichols method. The need for efficient data processing and handling is also part of our current research.
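
    The Ziegler-Nichols procedure mentioned above reduces, in its classic closed-loop form, to a small lookup from the ultimate gain and oscillation period; the numbers below are hypothetical, not measured values for the NCMM stage.

      def ziegler_nichols_pid(K_u, T_u):
          """Classic closed-loop Ziegler-Nichols rules: return (Kp, Ti, Td) for a PID
          controller from the ultimate gain K_u and the ultimate period T_u."""
          return 0.6 * K_u, 0.5 * T_u, 0.125 * T_u

      # Hypothetical values obtained by raising the proportional gain until the
      # positioning stage oscillates with a constant amplitude.
      K_u, T_u = 8.0, 0.02                  # gain, seconds
      Kp, Ti, Td = ziegler_nichols_pid(K_u, T_u)
      Ki, Kd = Kp / Ti, Kp * Td             # equivalent parallel-form gains
      print(Kp, Ki, Kd)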

  19. Automated 3D trajectory measuring of large numbers of moving particles.

    PubMed

    Wu, Hai Shan; Zhao, Qi; Zou, Danping; Chen, Yan Qiu

    2011-04-11

    The complex dynamics of natural particle systems, such as insect swarms, bird flocks, and fish schools, have attracted great attention from scientists for years. Measuring the 3D trajectory of each individual in a group is vital for the quantitative study of their dynamic properties, yet such empirical data are rare, mainly due to the challenges of maintaining the identities of large numbers of individuals with similar visual features and frequent occlusions. We here present an automatic and efficient algorithm to track 3D motion trajectories of large numbers of moving particles using two video cameras. Our method solves this problem by formulating it as three linear assignment problems (LAP). For each video sequence, the first LAP obtains 2D tracks of moving targets and is able to maintain target identities in the presence of occlusions; the second one matches the visually similar targets across two views via a novel technique named maximum epipolar co-motion length (MECL), which not only effectively reduces matching ambiguity but also further diminishes the influence of frequent occlusions; the last one links 3D track segments into complete trajectories via computing a globally optimal assignment based on temporal and kinematic cues. Experimental results on simulated particle swarms with various particle densities validated the accuracy and robustness of the proposed method. As a real-world case, our method successfully acquired the 3D flight paths of a fruit fly (Drosophila melanogaster) group comprising hundreds of freely flying individuals. © 2011 Optical Society of America
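
    Each of the three linking steps described above is a linear assignment problem (LAP). A minimal sketch of the first one (frame-to-frame 2D linking) using the Hungarian solver in SciPy is shown below; the gating distance and coordinates are invented, and the MECL cross-view matching and 3D segment linking are not reproduced.

      import numpy as np
      from scipy.optimize import linear_sum_assignment
      from scipy.spatial.distance import cdist

      def link_detections(prev_pts, curr_pts, max_dist=20.0):
          """One frame-to-frame linking step as a LAP.
          Returns pairs (i, j): track i continues at detection j."""
          cost = cdist(prev_pts, curr_pts)            # Euclidean distances
          cost[cost > max_dist] = 1e6                 # gate implausible links
          rows, cols = linear_sum_assignment(cost)    # Hungarian algorithm
          return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]

      prev_pts = np.array([[10.0, 10.0], [50.0, 40.0]])
      curr_pts = np.array([[52.0, 41.0], [11.0, 12.0], [200.0, 200.0]])
      print(link_detections(prev_pts, curr_pts))      # [(0, 1), (1, 0)]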

  20. Empirically constrained estimates of Alaskan regional Net Ecosystem Exchange of CO2, 2012-2014

    NASA Astrophysics Data System (ADS)

    Commane, R.; Lindaas, J.; Benmergui, J. S.; Luus, K. A.; Chang, R. Y. W.; Miller, S. M.; Henderson, J.; Karion, A.; Miller, J. B.; Sweeney, C.; Miller, C. E.; Lin, J. C.; Oechel, W. C.; Zona, D.; Euskirchen, E. S.; Iwata, H.; Ueyama, M.; Harazono, Y.; Veraverbeke, S.; Randerson, J. T.; Daube, B. C.; Pittman, J. V.; Wofsy, S. C.

    2015-12-01

    We present data-driven estimates of the regional net ecosystem exchange of CO2 across Alaska for three years (2012-2014) derived from CARVE (Carbon in the Arctic Reservoirs Vulnerability Experiment) aircraft measurements. Integrating optimized estimates of annual NEE, we find that the Alaskan region was a small sink of CO2 during 2012 and 2014, but a significant source of CO2 in 2013, even before including emissions from the large forest fire season during 2013. We investigate the drivers of this interannual variability, and the larger spring and fall emissions of CO2 in 2013. To determine the optimized fluxes, we couple the Polar Weather Research and Forecasting (PWRF) model with the Stochastic Time-Inverted Lagrangian Transport (STILT) model, to produce footprints of surface influence that we convolve with a remote-sensing driven model of NEE across Alaska, the Polar Vegetation Photosynthesis and Respiration Model (Polar-VPRM). For each month we calculate a spatially explicit additive flux (ΔF) by minimizing the difference between the measured profiles of the aircraft CO2 data and the modeled profiles, using a framework that combines a uniform correction at regional scales and a Bayesian inversion of residuals at smaller scales. A rigorous estimate of total uncertainty (including atmospheric transport, measurement error, etc.) was made with a combination of maximum likelihood estimation and Monte Carlo error propagation. Our optimized fluxes are consistent with other measurements on multiple spatial scales, including CO2 mixing ratios from the CARVE Tower near Fairbanks and eddy covariance flux towers in both boreal and tundra ecosystems across Alaska. For times outside the aircraft observations (Dec-April) we use the un-optimized polar-VPRM, which has shown good agreement with both tall towers and eddy flux data outside the growing season. This approach allows us to robustly estimate the annual CO2 budget for Alaska and investigate the drivers of both the seasonal cycle and the interannual variability of CO2 for the region.
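
    The "Bayesian inversion of residuals at smaller scales" mentioned above can be stated generically as the usual linear-Gaussian update; the exact covariance choices and the treatment of the uniform regional correction in the paper are not reproduced here:

      \hat{x} = x_{\mathrm{prior}} + B H^{\mathsf T}\left(H B H^{\mathsf T} + R\right)^{-1}\left(y - H x_{\mathrm{prior}}\right),

    where y are the aircraft CO2 observations, H the STILT footprint operator mapping surface fluxes into mixing-ratio space, x_prior the Polar-VPRM fluxes (after the regional correction), and B and R the prior-flux and model-data-mismatch error covariances. In this generic reading, the additive flux ΔF corresponds to x̂ − x_prior.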

  1. The Materials Genome Project

    NASA Astrophysics Data System (ADS)

    Aourag, H.

    2008-09-01

    In the past, the search for new and improved materials was characterized mostly by the use of empirical, trial- and-error methods. This picture of materials science has been changing as the knowledge and understanding of fundamental processes governing a material's properties and performance (namely, composition, structure, history, and environment) have increased. In a number of cases, it is now possible to predict a material's properties before it has even been manufactured thus greatly reducing the time spent on testing and development. The objective of modern materials science is to tailor a material (starting with its chemical composition, constituent phases, and microstructure) in order to obtain a desired set of properties suitable for a given application. In the short term, the traditional "empirical" methods for developing new materials will be complemented to a greater degree by theoretical predictions. In some areas, computer simulation is already used by industry to weed out costly or improbable synthesis routes. Can novel materials with optimized properties be designed by computers? Advances in modelling methods at the atomic level coupled with rapid increases in computer capabilities over the last decade have led scientists to answer this question with a resounding "yes'. The ability to design new materials from quantum mechanical principles with computers is currently one of the fastest growing and most exciting areas of theoretical research in the world. The methods allow scientists to evaluate and prescreen new materials "in silico" (in vitro), rather than through time consuming experimentation. The Materials Genome Project is to pursue the theory of large scale modeling as well as powerful methods to construct new materials, with optimized properties. Indeed, it is the intimate synergy between our ability to predict accurately from quantum theory how atoms can be assembled to form new materials and our capacity to synthesize novel materials atom-by-atom that gives to the Materials Genome Project its extraordinary intellectual vitality. Consequently, in designing new materials through computer simulation, our primary objective is to rapidly screen possible designs to find those few that will enhance the competitiveness of industries or have positive benefits to society. Examples include screening of cancer drugs, advances in catalysis for energy production, design of new alloys and multilayers and processing of semiconductors.

  2. Semi-Empirical Modeling of SLD Physics

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Potapczuk, Mark G.

    2004-01-01

    The effects of supercooled large droplets (SLD) in icing have been an area of much interest in recent years. As part of this effort, the assumptions used for ice accretion software have been reviewed. A literature search was performed to determine advances from other areas of research that could be readily incorporated. Experimental data in the SLD regime was also analyzed. A semi-empirical computational model is presented which incorporates first order physical effects of large droplet phenomena into icing software. This model has been added to the LEWICE software. Comparisons are then made to SLD experimental data that has been collected to date. Results will be presented for the comparison of water collection efficiency, ice shape and ice mass.

  3. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm

    PubMed Central

    Tamjidy, Mehran; Baharudin, B. T. Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-01-01

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi-objective algorithm based on biogeography-based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained, and the best optimal solution is selected using two different decision-making techniques: the technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy. PMID:28772893
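
    The TOPSIS step mentioned above selects a single compromise solution from the Pareto front by ranking alternatives by their closeness to an ideal point. The sketch below is the generic procedure with an invented decision matrix and weights, not the study's actual Pareto solutions.

      import numpy as np

      def topsis(matrix, weights, benefit):
          """Rank alternatives (rows) by relative closeness to the ideal solution;
          benefit[j] is True if criterion j is to be maximized."""
          norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))     # vector normalization
          v = norm * weights
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
          d_minus = np.sqrt(((v - worst) ** 2).sum(axis=1))
          return d_minus / (d_plus + d_minus)                    # higher = better

      # Hypothetical Pareto solutions: columns = (UTS in MPa, elongation in %, HAZ hardness in HV)
      pareto = np.array([[212.0, 7.1, 62.0],
                         [225.0, 5.8, 58.0],
                         [218.0, 6.5, 65.0]])
      scores = topsis(pareto, weights=np.array([0.5, 0.3, 0.2]),
                      benefit=np.array([True, True, True]))
      print(scores.argmax())    # index of the preferred compromise weld setting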

  4. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However these methods are greedy, concerning the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-funcitons, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique reduces considerably the number of searched variables, and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
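
    The parameterization idea described above (search over the amplitudes of a truncated series rather than node-by-node geometry) can be sketched with a global optimizer as below. The modal model here is a smooth placeholder, not a real finite-element computation, and the target partials and bounds are invented.

      import numpy as np
      from scipy.optimize import dual_annealing

      N_COEFF = 4
      x_grid = np.linspace(0.0, 1.0, 64)
      target = np.array([440.0, 1760.0, 3960.0])     # desired partials in Hz (toy values)

      def thickness(coeffs):
          """Bar thickness profile as a truncated Fourier cosine series; the searched
          variables are the series amplitudes."""
          prof = 1.0 + sum(c * np.cos((k + 1) * np.pi * x_grid)
                           for k, c in enumerate(coeffs))
          return np.clip(prof, 0.2, None)            # keep the profile physically positive

      def eigenfrequencies(coeffs):
          """Placeholder modal model (NOT a finite-element solver): toy frequencies
          that vary smoothly with simple statistics of the profile."""
          prof = thickness(coeffs)
          return 440.0 * np.sqrt(prof.mean()) * np.array([1.0, 4.2, 9.1]) * (1 + 0.1 * prof.std())

      def error(coeffs):
          return np.sum((eigenfrequencies(coeffs) - target) ** 2)

      res = dual_annealing(error, bounds=[(-0.5, 0.5)] * N_COEFF, maxiter=200)
      print(res.x, res.fun)     # best Fourier amplitudes and residual tuning error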

  5. Multi-Objective Optimization of Friction Stir Welding Process Parameters of AA6061-T6 and AA7075-T6 Using a Biogeography Based Optimization Algorithm.

    PubMed

    Tamjidy, Mehran; Baharudin, B T Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz

    2017-05-15

    The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy.

  6. EON: software for long time simulations of atomic scale systems

    NASA Astrophysics Data System (ADS)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

    The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.

  7. A review of the efficacy of transcranial magnetic stimulation (TMS) treatment for depression, and current and future strategies to optimize efficacy.

    PubMed

    Loo, Colleen K; Mitchell, Philip B

    2005-11-01

    There is a growing interest in extending the use of repetitive transcranial magnetic stimulation (rTMS) beyond research centres to the widespread clinical treatment of depression. Thus it is timely to critically review the evidence for the efficacy of rTMS as an antidepressant treatment. Factors relevant to the efficacy of rTMS are discussed along with the implications of these for the further optimization of rTMS. Clinical trials of the efficacy of rTMS in depressed subjects are summarized and reviewed, focusing mainly on sham-controlled studies and meta-analyses published to date. There is fairly consistent statistical evidence for the superiority of rTMS over a sham control, though the degree of clinical improvement is not large. However, this data is derived mainly from two-week comparisons of rTMS versus sham, and evidence suggests greater efficacy with longer treatment courses. Studies so far have also varied greatly in approaches to rTMS stimulation (with respect to stimulation site, stimulus parameters, etc.), with little empirical evidence to inform on the relative merits of these approaches. Only studies published in English were reviewed. Many of the studies in the literature had small sample sizes and different methodologies, making comparisons between studies difficult. Current published studies and meta-analyses have evaluated the efficacy of rTMS as given in treatment paradigms that are almost certainly suboptimal (e.g. of two weeks' duration). While the data nevertheless supports positive outcomes for rTMS, there is much scope for the further refinement and development of rTMS as an antidepressant treatment. Ongoing research is critical for optimizing the efficacy of rTMS.

  8. Automatic design of basin-specific drought indexes for highly regulated water systems

    NASA Astrophysics Data System (ADS)

    Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea Francesco; Pulido-Velazquez, Manuel

    2018-04-01

    Socio-economic costs of drought are progressively increasing worldwide due to ongoing alterations of hydro-meteorological regimes induced by climate change. Although drought management is largely studied in the literature, traditional drought indexes often fail at detecting critical events in highly regulated systems, where natural water availability is conditioned by the operation of water infrastructures such as dams, diversions, and pumping wells. Here, ad hoc index formulations are usually adopted based on empirical combinations of several, supposed-to-be significant, hydro-meteorological variables. These customized formulations, however, while effective in the design basin, can hardly be generalized and transferred to different contexts. In this study, we contribute FRIDA (FRamework for Index-based Drought Analysis), a novel framework for the automatic design of basin-customized drought indexes. In contrast to ad hoc empirical approaches, FRIDA is fully automated, generalizable, and portable across different basins. FRIDA builds an index representing a surrogate of the drought conditions of the basin, computed by combining all the relevant available information about the water circulating in the system, identified by means of a feature extraction algorithm. We used the Wrapper for Quasi-Equally Informative Subset Selection (W-QEISS), which features a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables, and optimizing relevance and redundancy of the subset. The preferred variable subset is selected among the efficient solutions and used to formulate the final index according to alternative model structures. We apply FRIDA to the case study of the Jucar river basin (Spain), a drought-prone and highly regulated Mediterranean water resource system, where an advanced drought management plan relying on the formulation of an ad hoc state index is used for triggering drought management measures. The state index was constructed empirically with a trial-and-error process begun in the 1980s and finalized in 2007, guided by the experts from the Confederación Hidrográfica del Júcar (CHJ). Our results show that the automated variable selection outcomes align with CHJ's 25-year-long empirical refinement. In addition, the resultant FRIDA index outperforms the official State Index in terms of accuracy in reproducing the target variable and the cardinality of the selected input set.

  9. Identifying ideal brow vector position: empirical analysis of three brow archetypes.

    PubMed

    Hamamoto, Ashley A; Liu, Tiffany W; Wong, Brian J

    2013-02-01

    Surgical browlifts counteract the effects of aging, correct ptosis, and optimize forehead aesthetics. While surgeons have control over brow shape, the metrics defining ideal brow shape are subjective. This study aims to empirically determine whether three expert brow design strategies are aesthetically equivalent by using expert focus group analysis and relating these findings to brow surgery. Comprehensive literature search identified three dominant brow design methods (Westmore, Lamas and Anastasia) that are heavily cited, referenced or internationally recognized in either medical literature or by the lay media. Using their respective guidelines, brow shape was modified for 10 synthetic female faces, yielding 30 images. A focus group of 50 professional makeup artists ranked the three images for each of the 10 faces to generate ordinal attractiveness scores. The contemporary methods employed by Anastasia and Lamas produce a brow arch more lateral than Westmore's classic method. Although the more laterally located brow arch is considered the current trend in facial aesthetics, this style was not empirically supported. No single method was consistently rated most or least attractive by the focus group, and no significant difference in attractiveness score for the different methods was observed (p = 0.2454). Although each method of brow placement has been promoted as the "best" approach, no single brow design method achieved statistical significance in optimizing attractiveness. Each can be used effectively as a guide in designing eyebrow shape during browlift procedures, making it possible to use the three methods interchangeably. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  10. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data

    PubMed Central

    Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel

    2014-01-01

    This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198

  11. Empirical modeling of high-intensity electron beam interaction with materials

    NASA Astrophysics Data System (ADS)

    Koleva, E.; Tsonevska, Ts; Mladenov, G.

    2018-03-01

    The paper proposes an empirical modeling approach for predicting, and subsequently optimizing, the exact cross-sectional shape of a welded seam obtained by electron beam welding. The approach takes into account the electron beam welding process parameters, namely, electron beam power, welding speed, and distances from the magnetic lens of the electron gun to the focus position of the beam and to the surface of the samples treated. The results are verified by comparison with experimental results for type 1H18NT stainless steel samples. The ranges considered for the beam power and the welding speed are 4.2-8.4 kW and 3.333-13.333 mm/s, respectively.

  12. Adaptive neural coding: from biological to behavioral decision-making

    PubMed Central

    Louie, Kenway; Glimcher, Paul W.; Webb, Ryan

    2015-01-01

    Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666
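
    The divisive normalization computation referred to above is usually written in a canonical form such as the one below; the parameters are generic, not values estimated in the work cited:

      R_i = \gamma \, \frac{x_i^{\,n}}{\sigma^{n} + \sum_{j} x_j^{\,n}},

    where x_i is the drive associated with option i (e.g. its value), the sum runs over the inputs in the normalization pool, sigma is a semi-saturation constant and gamma a response gain. Because R_i depends on the ratio of one option's value to the summed context, the code is inherently relative, which is the property linked above to context-dependent violations of normative choice theory.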

  13. A trade based view on casino taxation: market conditions.

    PubMed

    Li, Guoqiang; Gu, Xinhua; Wu, Jie

    2015-06-01

    This article presents a trade-based theory of casino taxation, together with empirical evidence from Macao as a typical tourism resort. We prove that there is a unique optimal gaming tax in a given market for casino gambling, argue that any change in this tax is driven by external demand shifts, and suggest that the economic rent from gambling legalization should be shared between the public and private sectors through such an optimal tax. Our work also studies the trade-off between the economic benefits and social costs arising from casino tourism, and provides policy recommendations for the sustainable development of gaming-led economies. The theoretical arguments in this article are consistent with empirical observations of Macao over the recent decade.

  14. Application of a semi-empirical model for the evaluation of transmission properties of barite mortar.

    PubMed

    Santos, Josilene C; Tomal, Alessandra; Mariano, Leandro; Costa, Paulo R

    2015-06-01

    The aim of this study was to estimate barite mortar attenuation curves using X-ray spectra weighted by a workload distribution. A semi-empirical model was used to evaluate the transmission properties of this material. Since the ambient dose equivalent, H*(10), is the radiation quantity adopted by the IAEA for dose assessment, the variation of H*(10) as a function of barite mortar thickness was calculated using primary experimental spectra measured with a CdTe detector. The resulting spectra were used to estimate the optimized thickness of the protective barrier needed to shield an area in an X-ray imaging facility.
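
    A minimal sketch of the kind of calculation implied here is shown below: a workload-weighted primary spectrum is attenuated through increasing barite-mortar thicknesses and the thickness reaching a target transmission is selected. The energy bins, spectral weights, attenuation coefficients and target value are invented placeholders, not measured values from the study.

      # Hedged sketch of a workload-weighted transmission curve vs. thickness.
      import numpy as np

      energy_keV = np.array([40.0, 60.0, 80.0, 100.0])
      weights = np.array([0.2, 0.4, 0.3, 0.1])      # assumed workload-weighted fluence fractions
      mu_per_cm = np.array([3.0, 1.5, 0.9, 0.6])    # assumed linear attenuation coefficients

      def transmission(thickness_cm):
          return np.sum(weights * np.exp(-mu_per_cm * thickness_cm))

      thicknesses = np.linspace(0.0, 10.0, 1001)
      target = 1e-3                                  # assumed required transmission factor
      curve = np.array([transmission(t) for t in thicknesses])
      needed = thicknesses[np.argmax(curve <= target)]
      print(f"thickness reaching transmission {target:g}: {needed:.2f} cm")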

  15. Fuzzy Adaptive Decentralized Optimal Control for Strict Feedback Nonlinear Large-Scale Systems.

    PubMed

    Sun, Kangkang; Sui, Shuai; Tong, Shaocheng

    2018-04-01

    This paper considers the optimal decentralized fuzzy adaptive control design problem for a class of interconnected large-scale nonlinear systems in strict feedback form with unknown nonlinear functions. Fuzzy logic systems are introduced to learn the unknown dynamics and cost functions, and a state estimator is developed. By applying the state estimator and the backstepping recursive design algorithm, a decentralized feedforward controller is established. This backstepping decentralized feedforward control scheme transforms the interconnected large-scale nonlinear system in strict feedback form into an equivalent affine large-scale nonlinear system, for which an optimal decentralized fuzzy adaptive control scheme is then constructed. The overall controller is composed of a decentralized feedforward control and an optimal decentralized control. It is proved that the developed optimal decentralized controller ensures that all variables of the control system are uniformly ultimately bounded and that the cost functions are minimized. Two simulation examples illustrate the validity of the developed optimal decentralized fuzzy adaptive control scheme.
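
    The generic building block named in the abstract, a fuzzy logic system used as a function approximator, can be sketched as a weighted sum of normalized Gaussian membership functions, y(x) = theta' * phi(x). The centers, widths and consequent weights below are arbitrary illustrative values, not the controller designed in the paper.

      # Minimal single-input fuzzy logic system approximator (illustrative only).
      import numpy as np

      centers = np.linspace(-2.0, 2.0, 5)            # assumed rule centers
      width = 0.8                                    # assumed shared membership width
      theta = np.array([0.5, -0.2, 0.1, 0.4, -0.3])  # adjustable consequent weights

      def fls_output(x):
          memberships = np.exp(-((x - centers) / width) ** 2)
          phi = memberships / memberships.sum()      # normalized fuzzy basis functions
          return float(theta @ phi)

      print(fls_output(0.3))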

  16. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify or predict a subject's disease status based on a variety of biomarker measurements. A commonly used classification criterion is the AUC, the area under the receiver operating characteristic curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but the existing methods have two limitations. First, most are designed to find only the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function and finds the best combination by a difference-of-convex-functions algorithm. We show that, as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data are generated from a semiparametric generalized linear model, just as the smoothed AUC method (SAUC) does. Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
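
    To make the ramp idea concrete, the sketch below compares the empirical AUC of a linear score with a ramp-type surrogate over pairwise score differences. It is only an illustration of the general idea under assumed synthetic data; the exact RAUC formulation and its difference-of-convex solver are described in the paper.

      # Hedged sketch: empirical AUC vs. a ramp surrogate for a linear biomarker score.
      import numpy as np

      def ramp(z):
          return np.clip(1.0 - z, 0.0, 1.0)

      rng = np.random.default_rng(3)
      X_pos = rng.normal(1.0, 1.0, (30, 2))   # cases, 2 biomarkers (synthetic)
      X_neg = rng.normal(0.0, 1.0, (40, 2))   # controls (synthetic)
      w = np.array([0.7, 0.7])                # a candidate linear combination

      s_pos, s_neg = X_pos @ w, X_neg @ w
      margins = s_pos[:, None] - s_neg[None, :]       # all case-control score differences
      print("empirical AUC:", (margins > 0).mean())
      print("ramp surrogate loss:", ramp(margins).mean())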

  17. Do Not Fear Your Opponent: Suboptimal Changes of a Prevention Strategy when Facing Stronger Opponents

    ERIC Educational Resources Information Center

    Slezak, Diego Fernandez; Sigman, Mariano

    2012-01-01

    The time spent making a decision and the quality of that decision define a widely studied trade-off. Some models suggest that the time spent is set to optimize reward, as verified empirically in simple decision-making experiments. However, in a more complex setting comprising components of regulatory focus, ambitions, fear, risk and social variables,…

  18. Validation of the Chinese Version of the Life Orientation Test with a Robust Weighted Least Squares Approach

    ERIC Educational Resources Information Center

    Li, Cheng-Hsien

    2012-01-01

    Of the several measures of optimism presently available in the literature, the Life Orientation Test (LOT; Scheier & Carver, 1985) has been the most widely used in empirical research. This article explores, confirms, and cross-validates the factor structure of the Chinese version of the LOT with ordinal data by using robust weighted least…

  19. When Clients' Morbid Avoidance and Chronic Anger Impede Their Response to Cognitive-Behavioral Therapy for Depression

    ERIC Educational Resources Information Center

    Newman, Cory F.

    2011-01-01

    In spite of the fact that cognitive-behavioral therapy (CBT) for major depressive disorder is an empirically supported treatment, some clients do not respond optimally or readily. The literature has provided a number of hypotheses regarding the factors that may play a role in these clients' difficulties in responding to CBT, with the current paper…

  20. Predictive and mechanistic multivariate linear regression models for reaction development

    PubMed Central

    Santiago, Celine B.; Guo, Jing-Yao

    2018-01-01

    Multivariate linear regression (MLR) models utilizing computationally derived and empirically derived physical organic molecular descriptors are described in this review. Several reports demonstrating the effectiveness of this methodological approach toward reaction optimization and mechanistic interrogation are discussed. A detailed protocol for building quantitative and predictive MLR models is provided as a guide for model development and parameter analysis. PMID:29719711
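
    The core workflow, regressing a reaction outcome on a handful of molecular descriptors and checking the fit, can be sketched in a few lines. The descriptor values, the response and the two-descriptor model below are invented placeholders, not data from the review.

      # Generic MLR sketch: outcome ~ intercept + steric descriptor + electronic descriptor.
      import numpy as np

      descriptors = np.array([[1.2, -0.3],
                              [0.8,  0.1],
                              [1.5, -0.6],
                              [0.4,  0.4],
                              [1.0,  0.0]])            # rows: reactions; columns: assumed descriptors
      ddG = np.array([1.1, 0.5, 1.6, 0.1, 0.8])         # fake measured selectivities

      X = np.column_stack([np.ones(len(ddG)), descriptors])
      coef, *_ = np.linalg.lstsq(X, ddG, rcond=None)
      pred = X @ coef
      r2 = 1 - np.sum((ddG - pred) ** 2) / np.sum((ddG - ddG.mean()) ** 2)
      print("intercept and coefficients:", coef, "R^2 =", round(r2, 3))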
